<p>Ubuntu 16.04 LTS, Docker 17.12.1, Kubernetes 1.10.0 </p> <p>Kubelet not starting:</p> <p><em>Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a</em></p> <p><em>Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Failed with result 'exit-code'.</em></p> <p>Note: No issue with v1.9.1</p> <p><strong>LOGS:</strong></p> <pre><code>Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.518085 20051 docker_service.go:249] Docker Info: &amp;{ID:WDJK:3BCI:BGCM:VNF3:SXGW:XO5G:KJ3Z:EKIH:XGP7:XJGG:LFBL:YWAJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:btrfs DriverStatus:[[Build Version Btrfs v4.15.1] [Library Vers Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.521232 20051 docker_service.go:262] Setting cgroupDriver to cgroupfs Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.532834 20051 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.533812 20051 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.05.0-ce, apiVersion: 1.37.0 Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.534071 20051 csi_plugin.go:61] kubernetes.io/csi: plugin initializing... Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.534846 20051 kubelet.go:903] Accelerators feature is deprecated and will be removed in v1.11. Please use device plugins instead. They can be enabled using the DevicePlugins feature gate. Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.535035 20051 kubelet.go:909] GPU manager init error: couldn't get a handle to the library: unable to open a handle to the library, GPU feature is disabled. Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535082 20051 server.go:129] Starting to listen on 0.0.0.0:10250 Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.535164 20051 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container / Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535189 20051 server.go:944] Started kubelet Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.535555 20051 event.go:209] Unable to write event: 'Post https://10.50.50.201:8001/api/v1/namespaces/default/events: dial tcp 10.50.50.201:8001: getsockopt: connection refused' (may retry after sleeping) Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535825 20051 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536202 20051 status_manager.go:140] Starting to sync pod status with apiserver Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536253 20051 kubelet.go:1782] Starting kubelet main sync loop. 
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536285 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s] Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536464 20051 volume_manager.go:247] Starting Kubelet Volume Manager Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536613 20051 desired_state_of_world_populator.go:129] Desired state populator starts to run Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.538574 20051 server.go:299] Adding debug handlers to kubelet server. Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.538664 20051 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.539199 20051 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.636465 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down] Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.636795 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.638630 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201 Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.638954 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.836686 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down] Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.839219 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.841028 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201 Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.841357 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.236826 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down] Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.241590 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.245081 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201 Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.245475 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.492206 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://10.50.50.201:8001/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connection refused Jun 22 06:45:56 dev-master hyperkube[20051]: 
E0622 06:45:56.493216 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.50.50.201:8001/api/v1/pods?fieldSelector=spec.nodeName%3D10.50.50.201&amp;limit=500&amp;resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: co Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.494240 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://10.50.50.201:8001/api/v1/nodes?fieldSelector=metadata.name%3D10.50.50.201&amp;limit=500&amp;resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connecti Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.036893 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down] Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.045705 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.047489 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201 Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.047787 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.413319 20051 event.go:209] Unable to write event: 'Post https://10.50.50.201:8001/api/v1/namespaces/default/events: dial tcp 10.50.50.201:8001: getsockopt: connection refused' (may retry after sleeping) Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.492781 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://10.50.50.201:8001/api/v1/services?limit=500&amp;resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connection refused Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.493560 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.50.50.201:8001/api/v1/pods?fieldSelector=spec.nodeName%3D10.50.50.201&amp;limit=500&amp;resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: co Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.494574 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://10.50.50.201:8001/api/v1/nodes?fieldSelector=metadata.name%3D10.50.50.201&amp;limit=500&amp;resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connecti Jun 22 06:45:57 dev-master hyperkube[20051]: W0622 06:45:57.549477 20051 manager.go:340] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.659932 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661447 20051 cpu_manager.go:155] [cpumanager] starting with none policy Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661459 20051 cpu_manager.go:156] [cpumanager] reconciling every 10s Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661468 20051 policy_none.go:42] [cpumanager] none policy: Start Jun 22 06:45:57 dev-master hyperkube[20051]: W0622 06:45:57.661523 20051 fs.go:539] stat failed on /dev/loop10 with error: no such file or directory Jun 22 06:45:57 dev-master hyperkube[20051]: F0622 06:45:57.661535 20051 
kubelet.go:1359] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 126 in cached partitions map Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Failed with result 'exit-code'. </code></pre>
<p>Run the following command on all your nodes. It worked for me.</p> <pre><code> swapoff -a </code></pre>
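<p>If disabling swap fixes the startup, you will probably also want the change to survive reboots. A minimal sketch, assuming swap is configured through <code>/etc/fstab</code> (adjust to your setup):</p> <pre><code># turn swap off right now
swapoff -a

# comment out swap entries so swap stays off after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab
</code></pre>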
<p>I have been trying to deploy Kafka with schema registry locally using Kubernetes. However, the logs of the schema registry pod show this error message:</p> <pre><code>ERROR Server died unexpectedly: (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51) org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata </code></pre> <p>What could be the reason for this behavior? In order to run Kubernetes locally, I use Minikube version v0.32.0 with Kubernetes version v1.13.0.</p> <p>My Kafka configuration:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: kafka-1 spec: ports: - name: client port: 9092 selector: app: kafka server-id: "1" --- apiVersion: apps/v1 kind: Deployment metadata: name: kafka-1 spec: selector: matchLabels: app: kafka server-id: "1" replicas: 1 template: metadata: labels: app: kafka server-id: "1" spec: volumes: - name: kafka-data emptyDir: {} containers: - name: server image: confluent/kafka:0.10.0.0-cp1 env: - name: KAFKA_ZOOKEEPER_CONNECT value: zookeeper-1:2181 - name: KAFKA_ADVERTISED_HOST_NAME value: kafka-1 - name: KAFKA_BROKER_ID value: "1" ports: - containerPort: 9092 volumeMounts: - mountPath: /var/lib/kafka name: kafka-data --- apiVersion: v1 kind: Service metadata: name: schema spec: ports: - name: client port: 8081 selector: app: kafka-schema-registry --- apiVersion: apps/v1 kind: Deployment metadata: name: kafka-schema-registry spec: replicas: 1 selector: matchLabels: app: kafka-schema-registry template: metadata: labels: app: kafka-schema-registry spec: containers: - name: kafka-schema-registry image: confluent/schema-registry:3.0.0 env: - name: SR_KAFKASTORE_CONNECTION_URL value: zookeeper-1:2181 - name: SR_KAFKASTORE_TOPIC value: "_schema_registry" - name: SR_LISTENERS value: "http://0.0.0.0:8081" ports: - containerPort: 8081 </code></pre> <p>Zookeeper configuration:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: zookeeper spec: ports: - name: client port: 2181 selector: app: zookeeper --- apiVersion: v1 kind: Service metadata: name: zookeeper-1 spec: ports: - name: client port: 2181 - name: followers port: 2888 - name: election port: 3888 selector: app: zookeeper server-id: "1" --- apiVersion: apps/v1 kind: Deployment metadata: name: zookeeper-1 spec: selector: matchLabels: app: zookeeper server-id: "1" replicas: 1 template: metadata: labels: app: zookeeper server-id: "1" spec: volumes: - name: data emptyDir: {} - name: wal emptyDir: medium: Memory containers: - name: server image: elevy/zookeeper:v3.4.7 env: - name: MYID value: "1" - name: SERVERS value: "zookeeper-1" - name: JVMFLAGS value: "-Xmx2G" ports: - containerPort: 2181 - containerPort: 2888 - containerPort: 3888 volumeMounts: - mountPath: /zookeeper/data name: data - mountPath: /zookeeper/wal name: wal </code></pre>
<pre><code>org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata </code></pre> <p>can happen when trying to connect to a broker that is expecting SSL connections while the client config did not specify:</p> <pre><code>security.protocol=SSL </code></pre>
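<p>For illustration, a Kafka client style configuration for an SSL-only listener would then need properties along these lines (standard Kafka client settings; the paths and password are placeholders):</p> <pre><code>security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit
</code></pre>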
<p>I'm trying to pull an image from my private Harbor registry. In Kubernetes I created a secret first as explained in this documentation:</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p> <p>Then I tried to implement that into my deployment.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-k8s-test9 namespace: k8s-test9 spec: replicas: 1 template: metadata: labels: app: nginx-k8s-test9 spec: containers: - name: nginx-k8s-test9 image: my-registry.com/nginx-test/nginx:1.14.2 imagePullSecrets: - name: harborcred imagePullPolicy: Always volumeMounts: - name: webcontent mountPath: usr/share/nginx/html ports: - containerPort: 80 volumes: - name: webcontent configMap: name: webcontent --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: webcontent namespace: k8s-test9 annotations: volume.alpha.kubernetes.io/storage-class: default spec: accessModes: [ReadWriteOnce] resources: requests: storage: 5Gi </code></pre> <p>When I try to create the deployment I get the following error message:</p> <pre><code>error: error validating "deployment.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "imagePullPolicy" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "ports" in io.k8s.api.core.v1.LocalObjectReference, ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): unknown field "volumeMounts" in io.k8s.api.core.v1.LocalObjectReference]; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>I guess it's a YAML issue somehow, but I don't know where it is. </p>
<p>And here is the solution:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-k8s-test9 namespace: k8s-test9 spec: replicas: 1 template: metadata: labels: app: nginx-k8s-test9 spec: containers: - name: nginx-k8s-test9 image: my-registry.com/nginx-test/nginx:1.14.2 volumeMounts: - name: webcontent mountPath: usr/share/nginx/html ports: - containerPort: 80 volumes: - name: webcontent configMap: name: webcontent imagePullSecrets: - name: harborcred-test --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: webcontent namespace: k8s-test9 annotations: volume.alpha.kubernetes.io/storage-class: default spec: accessModes: [ReadWriteOnce] resources: requests: storage: 5Gi </code></pre> <p>The imagePullSecrets section was not at the right place.</p>
<p>I just learned about ReplicaSets and Services in Kubernetes. I know the concept of desired state in ReplicaSets: it means that if one of our pods goes down, it will be rescheduled to keep the number of replicas as desired.</p> <p>But what happens if a Service goes down/fails? Is there any recovery for that?</p>
<blockquote> <p>A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector). <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a></p> </blockquote> <p>A k8s Service is an <strong>abstract idea</strong>: it does not consume CPU/memory the way a <strong>pod</strong> does, so there is no real sense in which it can "go down". If you delete it then it's deleted; there is no concrete running object behind it. </p> <p>One of its functions is to provide a static IP (cluster IP) and a DNS record so other pods can communicate with it, i.e. in-cluster communication. <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records" rel="nofollow noreferrer">DNS record</a> </p> <p>One of the main responsibilities of kube-proxy is to write the iptables rules which implement Services. All nodes have the same iptables rules, which are kept in sync by kube-proxy.</p>
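<p>To see this in practice, you can inspect what actually backs a Service. A quick sketch (the service name is just an example, and the iptables check assumes kube-proxy runs in iptables mode):</p> <pre><code># the Service object and its cluster IP
kubectl get service my-service

# the real pod IPs the Service currently routes to
kubectl get endpoints my-service

# on a node: the rules kube-proxy wrote for Services
sudo iptables-save | grep KUBE-SERVICES | head
</code></pre>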
<p>Let's say that I have two deployments which contain two instances of a backend application (instead of having one deployment with multiple replicas, as they need to be configured differently).</p> <p>How would you guys go about load balancing between the two? The classic approach would be to set up HAProxy with the two backends. Does this sound right in the context of Kubernetes? Is there a better way to expose two deployments on a single Ingress Controller resource?</p>
<p>You can define a single <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> whose endpoints are determined by <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="nofollow noreferrer">label selectors</a>. As long as the pods of both deployments carry the selected label, requests to the Service will be spread across both of them (the same applies when the Service is exposed through an Ingress).</p> <p>Example (both Deployments' pod templates need to carry the <code>app: my-deployments</code> label; see the sketch below):</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-deployments labels: app: my-deployments spec: ports: - port: 80 selector: app: my-deployments </code></pre>
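<p>For completeness, a sketch of how both Deployments could carry the shared label that the Service selects on (all names are illustrative):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployments
      variant: a
  template:
    metadata:
      labels:
        app: my-deployments   # matched by the Service selector
        variant: a            # distinguishes this configuration
    spec:
      containers:
      - name: backend
        image: example/backend:latest
---
# backend-b would be identical except for "variant: b" and its own configuration
</code></pre>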
<p>I want to create a Kubernetes Deployment instead of a Pod. Is there a way to do it in Apache Airflow? I know about the KubernetesPodOperator, but I want kind: Deployment and not kind: Pod.</p> <p>KubernetesPodOperator in Apache Airflow:</p> <pre><code>k = KubernetesPodOperator(namespace='default', image="ubuntu:16.04", cmds=["bash", "-cx"], arguments=["echo", "10"], labels={"foo": "bar"}, secrets=[secret_file, secret_env], volumes=[volume], volume_mounts=[volume_mount], name="test", task_id="task", affinity=affinity, is_delete_operator_pod=True, hostnetwork=False, tolerations=tolerations ) </code></pre> <p>Thanks</p>
<blockquote> <p>I want to create a Kubernetes Deployment instead of a Pod. Is there a way to do it in Apache Airflow?</p> </blockquote> <p>Not really. <a href="https://airflow.apache.org/" rel="nofollow noreferrer">Apache Airflow</a> is a tool to run batch tasks defined in a <a href="https://airflow.apache.org/concepts.html" rel="nofollow noreferrer">DAG</a>, which does not fit the pattern of running the workload in a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Kubernetes Deployment</a>. A Deployment is meant for long-running, service-type applications that don't terminate unless there's a crash or you explicitly terminate/delete them (i.e. generally stateless microservices).</p> <p>The controller in Kubernetes that fits better with what Apache Airflow does is the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a> controller, which also creates a Pod when it runs. On top of that, Airflow provides a richer feature set, like support for <a href="https://en.wikipedia.org/wiki/Directed_acyclic_graph" rel="nofollow noreferrer">DAGs</a>. </p> <p>In summary, the ability to create a Deployment from Airflow would be more of a nice-to-have, but it doesn't really fit the tool's main usage pattern. I suggest using a different tool/method to create your Deployments.</p>
<p>I have nginx with following configuration: </p> <pre><code> proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 1g; client_body_buffer_size 128k; proxy_connect_timeout 120; proxy_send_timeout 240; proxy_read_timeout 240; proxy_buffers 32 4k; proxy_hide_header Strict-Transport-Security; proxy_hide_header Content-Type; add_header Content-Type application/json; </code></pre> <p>I would like to translate my nginx config to kubernetes ingress-nginx (Ingress resource). Is there a way to implement this config using kubernetes Ingress resources? Reading ingress-nginx docs I haven't found how to map proxy_pass or multiple rewrites to Ingress resource. I would appreciate ref to some detailed doc or sample with similar config.</p>
<p>Yes, you can do it using Snippets and Custom Templates, as explained <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/configmap-and-annotations.md" rel="nofollow noreferrer">here</a> in the NGINX Ingress Controller documentation by nginxinc.</p> <p>Example of using Snippets via ConfigMap:</p> <pre><code>--- # Source: nginx-ingress/templates/controller-configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: nginx-config labels: app.kubernetes.io/name: nginx-ingress helm.sh/chart: nginx-ingress-0.3.4 app.kubernetes.io/managed-by: Tiller app.kubernetes.io/instance: RELEASE-NAME data: server-snippets: | location /helloworld { proxy_redirect off; proxy_http_version 1.1; } </code></pre>
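<p>If you are running the community <code>kubernetes/ingress-nginx</code> controller instead (the one the question refers to), similar settings can usually be applied per Ingress through annotations. A sketch built from the directives in the question; verify each annotation against your controller version:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "1g"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "240"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "240"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_hide_header Strict-Transport-Security;
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>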
<p>I'm trying to scale a deployment based on a custom metric coming from a custom metric server. I deployed my server and when I do </p> <p><code>kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/kubernetes/test-metric"</code> </p> <p>I get back this JSON</p> <pre><code>{ "kind": "MetricValueList", "apiVersion": "custom.metrics.k8s.io/v1beta1", "metadata": { "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/kubernetes/test-metric" }, "items": [ { "describedObject": { "kind": "Service", "namespace": "default", "name": "kubernetes", "apiVersion": "/v1" }, "metricName": "test-metric", "timestamp": "2019-01-26T02:36:19Z", "value": "300m", "selector": null } ] } </code></pre> <p>Then I created my <code>hpa.yml</code> using this</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: test-all-deployment namespace: default spec: maxReplicas: 10 minReplicas: 1 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: test-all-deployment metrics: - type: Object object: target: kind: Service name: kubernetes apiVersion: custom.metrics.k8s.io/v1beta1 metricName: test-metric targetValue: 200m </code></pre> <p>but it doesn't scale and I'm not sure what is wrong. running <code>get hpa</code> returns </p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE test-all-deployment Deployment/test-all-deployment &lt;unknown&gt;/200m 1 10 1 9m </code></pre> <p>The part I'm not sure about is the <code>target</code> object in the <code>metrics</code> collection in the hpa definition. Looking at the doc here <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a> </p> <p>It has</p> <pre><code> describedObject: apiVersion: extensions/v1beta1 kind: Ingress name: main-route target: kind: Value value: 10k </code></pre> <p>but that gives me a validation error for API <code>v2beta1</code>. and looking at the actual object here <a href="https://github.com/kubernetes/api/blob/master/autoscaling/v2beta1/types.go#L296" rel="nofollow noreferrer">https://github.com/kubernetes/api/blob/master/autoscaling/v2beta1/types.go#L296</a> it doesn't seem to match. I don't know how to specify that with the v2beta1 API. </p>
<p>It looks like there is a mistake in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">documentation</a>. In the same example two diffierent API version are used.</p> <p>autoscaling/v2beta1 notation:</p> <pre><code> - type: Pods pods: metric: name: packets-per-second targetAverageValue: 1k </code></pre> <p>autoscaling/v2beta2 notation:</p> <pre><code> - type: Resource resource: name: cpu target: type: AverageUtilization averageUtilization: 50 </code></pre> <p>There is a difference between autoscaling/v2beta1 and autoscaling/v2beta2 APIs:</p> <pre><code>kubectl get hpa.v2beta1.autoscaling -o yaml --export &gt; hpa2b1-export.yaml kubectl get hpa.v2beta2.autoscaling -o yaml --export &gt; hpa2b2-export.yaml diff -y hpa2b1-export.yaml hpa2b2-export.yaml #hpa.v2beta1.autoscaling hpa.v2beta2.autoscaling #----------------------------------------------------------------------------------- apiVersion: v1 apiVersion: v1 items: items: - apiVersion: autoscaling/v2beta1 | - apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler kind: HorizontalPodAutoscaler metadata: metadata: creationTimestamp: "2019-03-21T13:17:47Z" creationTimestamp: "2019-03-21T13:17:47Z" name: php-apache name: php-apache namespace: default namespace: default resourceVersion: "8441304" resourceVersion: "8441304" selfLink: /apis/autoscaling/v2beta1/namespaces/default/ho | selfLink: /apis/autoscaling/v2beta2/namespaces/default/ho uid: b8490a0a-4bdb-11e9-9043-42010a9c0003 uid: b8490a0a-4bdb-11e9-9043-42010a9c0003 spec: spec: maxReplicas: 10 maxReplicas: 10 metrics: metrics: - resource: - resource: name: cpu name: cpu targetAverageUtilization: 50 | target: &gt; averageUtilization: 50 &gt; type: Utilization type: Resource type: Resource minReplicas: 1 minReplicas: 1 scaleTargetRef: scaleTargetRef: apiVersion: extensions/v1beta1 apiVersion: extensions/v1beta1 kind: Deployment kind: Deployment name: php-apache name: php-apache status: status: conditions: conditions: - lastTransitionTime: "2019-03-21T13:18:02Z" - lastTransitionTime: "2019-03-21T13:18:02Z" message: recommended size matches current size message: recommended size matches current size reason: ReadyForNewScale reason: ReadyForNewScale status: "True" status: "True" type: AbleToScale type: AbleToScale - lastTransitionTime: "2019-03-21T13:18:47Z" - lastTransitionTime: "2019-03-21T13:18:47Z" message: the HPA was able to successfully calculate a r message: the HPA was able to successfully calculate a r resource utilization (percentage of request) resource utilization (percentage of request) reason: ValidMetricFound reason: ValidMetricFound status: "True" status: "True" type: ScalingActive type: ScalingActive - lastTransitionTime: "2019-03-21T13:23:13Z" - lastTransitionTime: "2019-03-21T13:23:13Z" message: the desired replica count is increasing faster message: the desired replica count is increasing faster rate rate reason: TooFewReplicas reason: TooFewReplicas status: "True" status: "True" type: ScalingLimited type: ScalingLimited currentMetrics: currentMetrics: - resource: - resource: currentAverageUtilization: 0 | current: currentAverageValue: 1m | averageUtilization: 0 &gt; averageValue: 1m name: cpu name: cpu type: Resource type: Resource currentReplicas: 1 currentReplicas: 1 desiredReplicas: 1 desiredReplicas: 1 kind: List kind: List metadata: metadata: resourceVersion: "" resourceVersion: "" selfLink: "" selfLink: "" </code></pre> <hr> <p>Here is how the object definition is supposed to look 
like:</p> <pre><code>#hpa.v2beta1.autoscaling hpa.v2beta2.autoscaling #----------------------------------------------------------------------------------- type: Object type: Object object: object: metric: metric: name: requests-per-second name: requests-per-second describedObject: describedObject: apiVersion: extensions/v1beta1 apiVersion: extensions/v1beta1 kind: Ingress kind: Ingress name: main-route name: main-route targetValue: 2k target: type: Value value: 2k </code></pre>
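<p>Applied to the HPA from the question, the <code>autoscaling/v2beta1</code> metrics block would look roughly like this. Note that <code>target</code> in v2beta1 references the described Kubernetes object itself, so its <code>apiVersion</code> is that of the Service rather than the custom metrics API; treat this as a sketch to validate against your cluster version:</p> <pre><code>  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: kubernetes
        apiVersion: v1
      metricName: test-metric
      targetValue: 200m
</code></pre>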
<p>We launched a Cloud Composer cluster and want to use it to move data from Cloud SQL (Postgres) to BQ. I followed the notes about doing this mentioned at these two resources:</p> <p><a href="https://stackoverflow.com/questions/50154306/google-cloud-composer-and-google-cloud-sql">Google Cloud Composer and Google Cloud SQL</a></p> <p><a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine</a></p> <p>We launch a pod running the cloud_sql_proxy and launch a service to expose the pod. The problem is that Cloud Composer cannot see the service stating the error when attempting to use an ad-hoc query to test:</p> <p><code>cloud not translate host name "sqlproxy-service" to address: Name or service not known"</code></p> <p>Trying by the service IP address results in the page timing out.</p> <p>The <code>-instances</code> passed to cloud_sql_proxy work when used in a local environment or cloud shell. The log files seem to indicate no connection is ever attempted</p> <pre><code>me@cloudshell:~ (my-proj)$ kubectl logs -l app=sqlproxy-service me@2018/11/15 13:32:59 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here. 2018/11/15 13:32:59 using credential file for authentication; email=my-service-account@service.iam.gserviceaccount.com 2018/11/15 13:32:59 Listening on 0.0.0.0:5432 for my-proj:my-ds:my-db 2018/11/15 13:32:59 Ready for new connections </code></pre> <p>I see a comment here <a href="https://stackoverflow.com/a/53307344/1181412">https://stackoverflow.com/a/53307344/1181412</a> that possibly this isn't even supported?</p> <p><strong>Airflow</strong></p> <p><a href="https://i.stack.imgur.com/TWWCZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TWWCZ.png" alt="enter image description here"></a></p> <p><strong>YAML</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: sqlproxy-service namespace: default labels: app: sqlproxy spec: ports: - port: 5432 protocol: TCP targetPort: 5432 selector: app: sqlproxy sessionAffinity: None type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: sqlproxy labels: app: sqlproxy spec: selector: matchLabels: app: sqlproxy template: metadata: labels: app: sqlproxy spec: containers: - name: cloudsql-proxy ports: - containerPort: 5432 protocol: TCP image: gcr.io/cloudsql-docker/gce-proxy:latest imagePullPolicy: Always command: ["/cloud_sql_proxy", "-instances=my-proj:my-region:my-db=tcp:0.0.0.0:5432", "-credential_file=/secrets/cloudsql/credentials.json"] securityContext: runAsUser: 2 # non-root user allowPrivilegeEscalation: false volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true volumes: - name: cloudsql-instance-credentials secret: secretName: cloudsql-instance-credentials </code></pre>
<p>The information you found in the answer you linked is correct - ad-hoc queries from the Airflow web server to cluster-internal services within the Composer environment are not supported. This is because the web server runs on App Engine flex using its own separate network (not connected to the GKE cluster), which you can see in the <a href="https://cloud.google.com/composer/docs/concepts/overview#architecture" rel="nofollow noreferrer">Composer architecture diagram</a>.</p> <p>Since that is the case, your SQL proxy must be exposed on a public IP address for the Composer Airflow web server to connect to it. For any services/endpoints listening on RFC1918 addresses within the GKE cluster (i.e. not exposed on a public IP), you will need additional network configuration to accept external connections.</p> <p>If this is a major blocker for you, consider running a <a href="https://cloud.google.com/composer/docs/how-to/managing/deploy-webserver" rel="nofollow noreferrer">self-managed Airflow web server</a>. Since this web server would run in the same cluster as the SQL proxy you set up, there would no longer be any issues with name resolution.</p>
<p>I have two microservices that communicate through HTTP. </p> <ul> <li><p>AC6K: a C# microservice that gets data from an Atlas Copco 6000 device.</p></li> <li><p>LocalWriter: a Python application that gets data from AC6K and stores the info in a database.</p></li> </ul> <p>I have tested it in Windows and Linux environments and it works fine. When I containerize each microservice and make the deployment, there is no communication. Below are the corresponding Docker &amp; YAML files used to containerize and deploy the application.</p> <p>AC6K Dockerfile:</p> <pre><code>FROM microsoft/aspnetcore-build EXPOSE 5010 WORKDIR /app COPY . . RUN dotnet restore ENTRYPOINT ["dotnet", "ac6kcore.dll"] - ac6kUp.yaml apiVersion: v1 kind: Service metadata: name: ac6kcore labels: run: ac6kcore spec: type: NodePort ports: - port: 5010 name: ac6kcore targetPort: 5010 nodePort: 32766 protocol: TCP selector: run: ac6kcore --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ac6kcore spec: selector: matchLabels: run: ac6kcore replicas: 1 template: metadata: labels: run: ac6kcore spec: hostNetwork: true containers: - image: afierro/ac6kcore:lw name: ac6kcore ports: - containerPort: 5010 restartPolicy: Always </code></pre> <p>LocalWriter Dockerfile:</p> <pre><code>FROM python:3.6 RUN mkdir -p /mongodbapp WORKDIR /mongodbapp COPY requirements.txt /mongodbapp RUN pip install --no-cache-dir -r requirements.txt ADD . /mongodbapp EXPOSE 9090 CMD ["python", "runapp.py"] - LocalWriter.yaml apiVersion: v1 kind: Service metadata: annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" name: localwriter labels: app: localwriter spec: type: NodePort ports: - port: 9090 name: localwriter targetPort: 9090 nodePort: 32756 protocol: TCP selector: app: localwriter --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: localwriter spec: replicas: 1 selector: matchLabels: app: localwriter template: metadata: labels: app: localwriter spec: containers: - name: flasknode image: afierro/localwriter:v1 imagePullPolicy: Always ports: - containerPort: 9090 </code></pre> <p>Thanks in advance</p>
<p>You use service to communicate a pod in one deployment to a pod in another deployment. </p> <p>Check services here: </p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p> <p>Make sure you have the right selector in the service. </p> <p>You can use the env vars that are inside the pod to call the other service. Kubernetes starts the pods with the service ip and port as env vars, to check this you can ssh in a pod and use the command printenv. </p>
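<p>With the manifests from the question, that means the Python app can reach the C# service through the Service's DNS name instead of a pod IP. A sketch (service names and ports are taken from the question; the URL path depends on your API):</p> <pre><code># from inside the localwriter pod
curl http://ac6kcore:5010/

# or inspect the env vars Kubernetes injected for the service
kubectl exec -it &lt;localwriter-pod-name&gt; -- printenv | grep AC6KCORE
</code></pre>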
<p>I’m working on an application that will use Google Cloud Storage to save files and I was wondering: what’s the best way to emulate it for development?</p> <p>The application will run on kubernetes, and I was planning to run a development environment on my machine using Minikube (or similar). I know I can set up a different storage for development purposes but I was wondering if there was a way to avoid charges and most important to be able to work offline.</p> <p>Thanks in advance</p>
<p>You could leverage the <a href="https://cloud.google.com/free/" rel="nofollow noreferrer">free trial for GCP</a>, which gives you $300 of credit to start with.</p> <p>Alternatively, you could use the <a href="https://github.com/googleapis/google-cloud-java/blob/master/TESTING.md#on-your-machine-2" rel="nofollow noreferrer">in-memory emulator</a>, though this is only available with the Java library.</p>
<p>I'm not that experienced in PHP, and I'm currently running a PHP app.</p> <p>The cluster uses an nginx ingress load balancer.</p> <p>The PHP container currently uses nginx (FROM this one: <a href="https://hub.docker.com/r/wyveo/nginx-php-fpm/" rel="nofollow noreferrer">https://hub.docker.com/r/wyveo/nginx-php-fpm/</a>), so the pod is exposed via nginx.</p> <p>I'm seeing some weird behaviours with this image, so I had in mind to give Apache a shot, in case it would provide a more stable result.</p> <p>That said, if it does not change the game, are there any other ways to run a PHP app in such a context? What would be the best way?</p>
<p>Have you checked the official images? </p> <p>For safety, it is better to use official images rather than other ones. </p> <p>Check the official PHP repo; they have a version with Apache installed. </p> <p>I am not a PHP expert either, but the configurations I have made usually use Apache to handle requests. </p> <p>The official repo: <a href="https://hub.docker.com/_/php" rel="nofollow noreferrer">https://hub.docker.com/_/php</a></p> <p>Check the tags that use Apache. There are versions based on stretch and alpine. </p>
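<p>A minimal sketch of what using the official Apache variant can look like (the tag and paths are examples; check the image documentation for the exact variants available):</p> <pre><code>FROM php:7.3-apache
# the Apache variant serves /var/www/html by default
COPY src/ /var/www/html/
EXPOSE 80
</code></pre>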
<p>We have a Kubernetes cluster with Istio 1.0 (with Envoy proxy) and some other stuff. We use Istio's Gateway to verify client certificates. We would like to pass client certificate's subject to the internal services.</p> <p><a href="https://www.envoyproxy.io/docs/envoy/latest/api-v1/network_filters/http_conn_man#config-http-conn-man-forward-client-cert" rel="nofollow noreferrer">Here</a> in Envoy's documentation I have found the following configuration option: <code>forward_client_cert</code> which enables passing the subject among other information in header <code>x-forwarded-client-cert</code>, although I could not find the way to enable it in Istio.</p> <p>Has anyone tried to do something similar and succeeded? Or Istio is not supporting that?</p>
<p>This is a late answer, but forwarding client cert details is supported in the <a href="https://github.com/istio/istio/issues/8263" rel="nofollow noreferrer">1.1.0 release</a>. This is the default behavior of an https gateway, however, you need to have mutual TLS enabled globally for this to work. To do so apply the following <code>MeshPolicy</code> object:</p> <pre><code>apiVersion: "authentication.istio.io/v1alpha1" kind: "MeshPolicy" metadata: name: "default" spec: peers: - mtls: {} </code></pre> <p>Once this is applied, https calls to ingress will forward an <code>X-Forwarded-Client-Cert</code> header to the server.</p> <p>Keep in mind however, once global mtls is enabled, service to service calls within the cluster must also use tls. This can be done by creating a <code>DestinationRule</code> for each service with the mode set to <code>ISTIO_MUTUAL</code> (or <code>MUTUAL</code> if you want to use your own client certificates instead of those generated by Citadel):</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: myApp namespace: default spec: host: myApp trafficPolicy: tls: mode: ISTIO_MUTUAL </code></pre>
<p>First off, let me start by saying I am fairly new to Docker and trying to understand the Dockerfile setup.</p> <p>We are currently trying to convert our existing WebApi services to support containerization and orchestration. The plan is to use Docker with Kubernetes. We currently utilize multiple publish profiles which then drive the WebConfig based on the selected publish profile.</p> <p>Looking through the Dockerfile, I see stuff such as:</p> <pre><code>RUN dotnet restore "Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj" COPY . . WORKDIR "/src/Aeros.Services.Kubernetes" RUN dotnet build "Aeros.Services.Kubernetes.csproj" -c Release -o /app FROM build AS publish RUN dotnet publish "Aeros.Services.Kubernetes.csproj" -c Release -o /app </code></pre> <p>Where the -c supplies the configuration. Is there any way to get it to run this command based on the publishing profile the user has selected?</p>
<p>You could use an <code>ARG</code> statement in your Dockerfile.</p> <pre><code>ARG publishingProfile RUN dotnet publish "Aeros.Services.Kubernetes.csproj" -c $publishingProfile -o /app </code></pre> <p>Use it like this from the command line:<br> <code>docker build --build-arg publishingProfile=Release .</code></p>
<p>I am trying to mount a shared read/write file in my Kubernetes pods. This file has to be identical in all pods. However, in the same directory I need to have a read-only file that is different for each pod. This is why I can't mount a PVC in the directory and call it a day.</p> <p>Any idea how I can put a sharable read/write file in the same directory as all other pods, but have another file in that directory that's different per pod?</p> <p>This idea in simple terms is a read/write configmap which I am aware is non-existent. Any ideas for a workaround?</p>
<p>You can consider using symlinks in the folder that contains the read-only and writable file. Have the symlinks point to well-known locations for the read-only and writable file respectively. Mount the PVC and configmap to the symlink targets and that should give you the desired behaviour. </p> <p>Hope this helps!</p>
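<p>A rough sketch of that idea; the volume names, paths and startup command are illustrative, not a drop-in spec:</p> <pre><code>spec:
  volumes:
  - name: shared-rw
    persistentVolumeClaim:
      claimName: shared-pvc          # same PVC mounted by every pod
  - name: per-pod-ro
    configMap:
      name: per-pod-config           # different per pod (read-only)
  containers:
  - name: app
    image: example/app:latest
    command: ["sh", "-c"]
    args:
    - mkdir -p /etc/myapp &amp;&amp;
      ln -s /mnt/rw/shared.conf /etc/myapp/shared.conf &amp;&amp;
      ln -s /mnt/ro/local.conf /etc/myapp/local.conf &amp;&amp;
      exec /app/run
    volumeMounts:
    - name: shared-rw
      mountPath: /mnt/rw
    - name: per-pod-ro
      mountPath: /mnt/ro
</code></pre>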
<p>I wonder if there is a way to have a Deployment stop recreating new pods when those have failed multiple times. In other words, given that we can't, for instance, set a restartPolicy of Never in the pod template of a Deployment, I am wondering how I can consider a service failed and leave it in a stopped state. </p> <p>We have a use case where we imperatively need Kubernetes to interrupt a deployment whose pods are all constantly failing. </p>
<p>Consider using a type "Job" instead of Deployment. According to the docs: </p> <blockquote> <p>Use a Job for Pods that are expected to terminate, for example, batch computations. Jobs are appropriate only for Pods with restartPolicy equal to OnFailure or Never.</p> </blockquote> <p>Hope this helps!</p>
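<p>For reference, a minimal Job sketch where Kubernetes gives up after a bounded number of retries (names and limits are examples):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-task
spec:
  backoffLimit: 3          # stop retrying after 3 failed attempts
  template:
    spec:
      restartPolicy: Never # allowed for Jobs, unlike Deployments
      containers:
      - name: my-task
        image: example/task:latest
</code></pre>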
<p>My k8s master node has a public network IP, and the worker node is deployed in a private network. The worker node can connect to the master, but the master cannot connect to the worker node.</p> <p>I have tested that I can deploy a pod with kubectl; the pod runs on the worker node and the master can watch the pod status. But when I deploy an ingress and access the ingress on the master node, traffic does not reach the worker node.</p> <p>I use the flannel network.</p> <p>I have tried using an SSH tunnel, but it is hard to manage.</p> <p>Any suggestions would be appreciated, thanks.</p>
<p>If you are deployed in a cloud environment, the most likely cause is incorrect firewall settings or route configurations. However, ingress configuration errors also may appear to look like infrastructure problems at times. </p> <p>The Ingress will redirect your requests to the different services that it is registered with. The endpoint health is also monitored and requests will only be sent to active and healthy endpoints. My troubleshooting flow is as follows: </p> <ol> <li><p>Hit an unregistered path on your url and check if you get the default backend response. If no, then your ingress controller may not be correctly set up (whether it be domain name, access rules, or just configuration). If yes, then your ingress controller should be correctly set up, and this is a problem with the Ingress definition or backend.</p></li> <li><p>Try hitting your registered path on your url. If you get a 504 gateway timeout, then your endpoint is accepting the request, but not responding correctly. You can follow the target pod logs to figure out whether it is behaving properly. </p></li> </ol> <p>If you get a 503 Service Unavailable, then your service might be down or deemed unhealthy by the ingress. In this case, you should definitely verify that your pods are running properly. </p> <ol start="3"> <li>Check your nginx-ingress-controller logs to see how the requests are being redirected and what the internal responses are. </li> </ol>
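<p>A few commands that help with each of those steps (namespace, controller and resource names are examples; adjust them to your cluster):</p> <pre><code># does the backend service have healthy endpoints?
kubectl get pods -o wide
kubectl get endpoints my-service

# how is the Ingress resolved?
kubectl describe ingress my-ingress

# what does the controller see when a request comes in?
kubectl logs -n ingress-nginx deploy/nginx-ingress-controller --tail=100
</code></pre>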
<p>I was trying to set up a traefik load balancer as an alternative LB for nginx-ingress. I used the helm chart from <a href="https://github.com/helm/charts/tree/master/stable/traefik" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/traefik</a> and installed on my GKE cluster with rbac enabled since I use Kubernetes v1.12:</p> <pre><code>helm install --name traefik-lb --namespace kube-system --set rbac.enabled=true stable/traefik </code></pre> <p>My test application's ingress.yaml points to the new ingress class now:</p> <pre><code>kubernetes.io/ingress.class: "traefik" </code></pre> <p>What I've seen in the logs is that Traefik reloads its config all the time. I would also like to know if Traefik definitely needs a TLS cert to "just" route traffic.</p>
<blockquote> <p>What I've seen in the logs is that traefik reloads its config all the time.</p> </blockquote> <p>It should reload every time you change the Ingress resources associated with it (The Traefik ingress controller). If it reloads all the time without any change to your cluster, there may be an issue with Traefik itself or the way your cluster is set up.</p> <blockquote> <p>I would also like to know if traefik definitely needs a TLS cert to "just" route traffic.</p> </blockquote> <p>No, it doesn't. This <a href="https://docs.traefik.io/providers/kubernetes-ingress/#enabling-and-using-the-provider" rel="nofollow noreferrer">basic example</a> from the documentation shows that you don't need TLS if you don't want to set it up.</p>
<p>I would like to implement functionality (or even better, reuse existing libraries/APIs!) that would intercept a kubectl command to create an object and perform some pre-creation validation tasks on it before allowing the kubectl command to proceed.</p> <p>E.g. check various values in the YAML against an external DB; for example, check that a label conforms to the internal naming convention, and so on.</p> <p>Is there an accepted pattern or existing tools, etc.? Any guidance appreciated.</p>
<p>The way to do this is by creating a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook" rel="nofollow noreferrer">ValidatingAdmissionWebhook</a>. It's not for the faint of heart, and even a brief example would be overkill as an SO answer. A few pointers to start:</p> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook</a></p> <p><a href="https://banzaicloud.com/blog/k8s-admission-webhooks/" rel="nofollow noreferrer">https://banzaicloud.com/blog/k8s-admission-webhooks/</a></p> <p><a href="https://container-solutions.com/a-gentle-intro-to-validation-admission-webhooks-in-kubernetes/" rel="nofollow noreferrer">https://container-solutions.com/a-gentle-intro-to-validation-admission-webhooks-in-kubernetes/</a></p> <p>I hope this helps :-)</p>
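<p>For orientation only, the registration object (separate from the webhook server you would still have to write and serve over TLS) looks roughly like this; the names, rules, path and CA bundle are all placeholders:</p> <pre><code>apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: label-policy
webhooks:
- name: label-policy.example.com
  failurePolicy: Fail
  rules:
  - apiGroups: ["apps", ""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["deployments", "pods"]
  clientConfig:
    service:
      namespace: default
      name: label-policy-svc
      path: /validate
    caBundle: &lt;base64-encoded-CA-certificate&gt;
</code></pre>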
<p>After uploading a parquet file to my kubernetes cluster for processing with Dask, I get a FileNotFoundError when trying to read </p> <pre class="lang-py prettyprint-override"><code>df=dd.read_parquet('home/jovyan/foo.parquet') df.head() </code></pre> <p>Here is the full error:</p> <pre class="lang-py prettyprint-override"><code>FileNotFoundError: [Errno 2] No such file or directory: '/home/jovyan/user_engagement_anon.parquet/part.0.parquet' </code></pre> <p>I can see that the file does indeed exist, and relative to the working directory of my jupyter notebook instance, it's in the expected location.</p> <p>I'm not sure if it matters, but to start the dask client on my kubernetes cluster, I used the following code:</p> <pre class="lang-py prettyprint-override"><code>from dask.distributed import Client, progress client=Client('dask-scheduler:8786', processes=False, threads_per_worker=4, n_workers=1, memory_limit='1GB') client </code></pre> <p>Furthermore, the same operation works fine on my local machine with the same parquet file</p>
<p>The problem was that I was installing Dask separately using a Helm release. Thus, the Dask workers did not share the same file system as the Jupyter notebook.</p> <p>To fix this, I used the dask-kubernetes Python library to create the workers, rather than a separate Helm release.</p>
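<p>A rough sketch of that approach; the exact dask-kubernetes API differs between versions, so treat the calls as illustrative:</p> <pre><code>from dask.distributed import Client
from dask_kubernetes import KubeCluster

# worker pods are created inside the same cluster; the worker pod spec
# should mount the same storage the notebook writes the parquet file to
cluster = KubeCluster.from_yaml('worker-spec.yaml')
cluster.scale(3)

client = Client(cluster)
</code></pre>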
<p>I installed my existing Kubernetes cluster (1.8) running in AWS using KOPS.</p> <p>I would like to add Windows containers to the existing cluster, but I cannot find the right solution! :(</p> <p>I thought of following the steps given in:</p> <p><a href="https://kubernetes.io/docs/getting-started-guides/windows/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/windows/</a></p> <p>I downloaded the node binaries and copied them to my Windows machine (kubelet, kube-dns, kube-proxy, kubectl), but I got a little confused about the multiple networking options.</p> <p>They have also given the kubeadm option to join the node to my master, which confuses me since I used Kops to create my cluster.</p> <p>Can someone advise or help me on how I can get my Windows node added?</p>
<p>KOPS is really good if the default architecture satisfies your requirements; if you need to make some changes it will give you some trouble. For example, I needed to add a GPU node: I was able to add it, but I was unable to make this process automatic, since I could not create an auto scaling group for it. </p> <p>Kops has a lot of pros, like creating the whole cluster in a transparent way. </p> <p>Do you really need a Windows node? </p> <p>If yes, try to launch a cluster using kubeadm, then join the Windows node to this cluster.</p> <p>Kops will take some time to add this Windows node feature. </p>
<p>My service needs a cfg file which needs to be changed before the containers start running, so it is not suitable to pack the cfg into the Docker image. I need to copy it from the cluster to the container, and then the service in the container starts and reads this cfg.</p> <p>How can I do this?</p>
<p>I think for your use case, <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">Init Containers</a> might be the best fit. Init containers are like small scripts that you can run before starting your own containers in a Kubernetes pod, and they must exit. You can have this config file prepared in a Persistent Volume shared between your init container and your main container (see the sketch below, which uses an emptyDir for simplicity). </p> <p>The following article gives a nice example of how this can be done:</p> <p><a href="https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519" rel="noreferrer">https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519</a></p> <p><strong>UPDATE:</strong></p> <p>I found another answer on Stack Overflow which might be related and give you a better approach to handling this:</p> <p><a href="https://stackoverflow.com/questions/50012601/can-i-use-a-configmap-created-from-an-init-container-in-the-pod">can i use a configmap created from an init container in the pod</a></p>
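<p>A bare-bones sketch of that pattern; the images, paths and the rendering command are placeholders for whatever transformation your cfg needs:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  volumes:
  - name: config
    emptyDir: {}
  - name: config-template
    configMap:
      name: service-config-template
  initContainers:
  - name: render-config
    image: busybox
    command: ["sh", "-c", "sed 's/PLACEHOLDER/real-value/' /template/app.cfg &gt; /config/app.cfg"]
    volumeMounts:
    - name: config
      mountPath: /config
    - name: config-template
      mountPath: /template
  containers:
  - name: service
    image: example/service:latest
    volumeMounts:
    - name: config
      mountPath: /etc/service
</code></pre>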
<p><a href="https://i.stack.imgur.com/IOowW.png" rel="nofollow noreferrer">looks like this</a> using windows version 10, docker for windows(docker verion) : 18.09.2</p> <p>how to resolve this issue ?</p>
<p>Kubernetes should be running.</p> <p>But check your cluster-info:</p> <pre><code>&gt; kubectl cluster-info Kubernetes master is running at http://localhost:8080 To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it. </code></pre> <p>That is reported both in <a href="https://github.com/docker/machine/issues/1094#issuecomment-460817882" rel="nofollow noreferrer">docker/machine</a> and <a href="https://github.com/docker/for-win/issues/2099" rel="nofollow noreferrer">docker/for-win</a> or <a href="https://github.com/kubernetes/minikube/issues/3562#issuecomment-457142656" rel="nofollow noreferrer">kubernetes/minikube</a>.</p> <p>While the issue is pending, and if no firewall/proxy is involved, I have seen the error caused because <a href="https://stackoverflow.com/a/52313787/6309">the port is already taken</a>.</p> <p>See also <a href="https://www.ntweekly.com/2018/05/08/kubernetes-windows-error-unable-connect-server-dial-tcp-16445-connectex-no-connection-made-target-machine-actively-refused/" rel="nofollow noreferrer">this article</a>:</p> <blockquote> <h2>Issue</h2> <p>The reason you are getting the error message is that Kuberentes is not looking into the correct configuration folder because the configuration path is not configured on the Windows 10 machine.</p> <h2>Solution</h2> <p>To fix the problem, I will run the command below that will tell Kubernetes where to find the configuration file on the machine.</p> <pre><code>Powershell [Environment]::SetEnvironmentVariable(&quot;KUBECONFIG&quot;, $HOME + &quot;\.kube\config&quot;, [EnvironmentVariableTarget]::Machine) </code></pre> </blockquote>
<p>I'm using standard procedure for enabling HTTPS termination for my application that is running on Kubernetes using: - Ingress nginx - AWS ELB classic - Cert Manager for Let's encrypt</p> <p>I've used procedure described here: <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p> <p>I've been <strong>able to make Ingress work with HTTP</strong> before but I'm having problem with HTTPS where I'm getting following error when I try to cURL app URL:</p> <pre><code>$ curl -iv https://&lt;SERVER&gt; * Rebuilt URL to: &lt;SERVER&gt; * Trying &lt;IP_ADDR&gt;... * Connected to &lt;SERVER&gt; (&lt;IP_ADDR&gt;) port 443 (#0) * found 148 certificates in /etc/ssl/certs/ca-certificates.crt * found 592 certificates in /etc/ssl/certs * ALPN, offering http/1.1 * gnutls_handshake() failed: An unexpected TLS packet was received. * Closing connection 0 curl: (35) gnutls_handshake() failed: An unexpected TLS packet was received. </code></pre> <p>This is what I currently have:</p> <p><strong>Cert manager running in kube-system namespace:</strong></p> <pre><code>$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE cert-manager-5f8db6f6c4-c4t4k 1/1 Running 0 2d1h cert-manager-webhook-85dd96d87-rxc7p 1/1 Running 0 2d1h cert-manager-webhook-ca-sync-pgq6b 0/1 Completed 2 2d1h </code></pre> <p><strong>Ingress setup in ingress-nginx namespace:</strong></p> <pre><code>$ kubectl get all -n ingress-nginx NAME READY STATUS RESTARTS AGE pod/default-http-backend-587b7d64b5-ftws2 1/1 Running 0 2d1h pod/nginx-ingress-controller-68bb4bfd98-zsz8d 1/1 Running 0 12h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/default-http-backend ClusterIP &lt;IP_ADDR_1&gt; &lt;none&gt; 80/TCP 2d1h service/ingress-nginx NodePort &lt;IP_ADDR_2&gt; &lt;none&gt; 80:32327/TCP,443:30313/TCP 2d1h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/default-http-backend 1 1 1 1 2d1h deployment.apps/nginx-ingress-controller 1 1 1 1 12h </code></pre> <p><strong>Application and ingress in app namespace:</strong></p> <pre><code>$ kubectl get all -n app NAME READY STATUS RESTARTS AGE pod/appserver-0 1/1 Running 0 2d1h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/appserver ClusterIP &lt;IP_ADDR&gt; &lt;none&gt; 22/TCP,80/TCP 2d1h NAME DESIRED CURRENT AGE statefulset.apps/appserver 1 1 2d1h $ kubectl describe ingress -n app Name: appserver Namespace: app Address: Default backend: default-http-backend:80 (&lt;none&gt;) TLS: letsencrypt-prod terminates &lt;SERVER&gt; Rules: Host Path Backends ---- ---- -------- &lt;SERVER&gt; / appserver:80 (&lt;none&gt;) Annotations: certmanager.k8s.io/cluster-issuer: letsencrypt-prod kubernetes.io/ingress.class: nginx Events: &lt;none&gt; </code></pre> <p>This is how ingress resource looks like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod name: appserver namespace: app spec: tls: - hosts: - &lt;SERVER&gt; secretName: letsencrypt-prod rules: - host: &lt;SERVER&gt; http: paths: - backend: serviceName: appserver servicePort: 80 path: / </code></pre> <p><strong>Additional checks that I've done:</strong></p> <p>Checked that certificates have been generated correctly:</p> <pre><code>$ kubectl describe cert -n app Name: 
letsencrypt-prod
...
Status:
  Conditions:
    Last Transition Time:  2019-03-20T14:23:07Z
    Message:  Certificate is up to date and has not expired
    Reason:  Ready
    Status:  True
    Type:  Ready
</code></pre> <p>Checked logs from cert-manager:</p> <pre><code>$ kubectl logs -f cert-manager-5f8db6f6c4-c4t4k -n kube-system
...
I0320 14:23:08.368872 1 sync.go:177] Certificate "letsencrypt-prod" for ingress "appserver" already exists
I0320 14:23:08.368889 1 sync.go:180] Certificate "letsencrypt-prod" for ingress "appserver" is up to date
I0320 14:23:08.368894 1 controller.go:179] ingress-shim controller: Finished processing work item "app/appserver"
I0320 14:23:12.548963 1 controller.go:183] orders controller: syncing item 'app/letsencrypt-prod-1237734172'
I0320 14:23:12.549608 1 controller.go:189] orders controller: Finished processing work item "app/letsencrypt-prod-1237734172"
</code></pre> <p>I'm not really sure at this point what else might be worth checking.</p>
<p>It appears that you are using legacy versions of the nginx-ingress with cert-manager, which could be the culprit behind your error.</p> <p>I suggest you try to deploy the nginx-ingress with Helm using the latest version (ensure that the image tag is v0.23.0 for the nginx-ingress-controller). Also make sure you are deploying your cert-manager with the latest version (--version v0.7.0) during your helm deployment. You should see three pods by default: cert-manager, cert-manager-cainjector, and cert-manager-webhook.</p> <p>I had to disable the webhook because it was preventing the certificate from being issued despite providing it with the correct parameters, so you may have to do the same.</p> <p>Hope this helps!</p>
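<p>For reference, a minimal sketch of the Helm commands implied here might look like the following. The chart locations and resource names are assumptions based on the stable/jetstack repositories of that time, so double-check them against the chart documentation you actually use (cert-manager v0.7 also expects its CRDs to be applied before the chart is installed; see its release notes for the exact manifest):</p> <pre><code># nginx-ingress from the stable repo (verify the controller image tag afterwards)
helm install stable/nginx-ingress --name nginx-ingress --namespace ingress-nginx
kubectl -n ingress-nginx get deploy nginx-ingress-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# cert-manager v0.7.0 from the jetstack repo
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install jetstack/cert-manager --name cert-manager --namespace cert-manager --version v0.7.0
</code></pre>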
<p>In my Mongo Helm chart, I am using a PVC for the persistent volume. I am using the chart to install Mongo. When I delete the chart my PV gets deleted. So I found the following patch to apply:</p> <pre><code>kubectl patch pv &lt;your-pv-name&gt; -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' </code></pre> <p>After this my PV no longer gets deleted; it just moves to the <strong>Released</strong> status:</p> <pre><code>pvc-fc29a491-499a-11e9-a426-42010a800ff9 8Gi RWO Retain Released default/myapp-mongodb standard 3d </code></pre> <p>How can I bind this PV to my new Helm chart installation so that my data remains persistent even after deleting my Helm chart?</p>
<p>This is still an unresolved issue in Helm:</p> <ul> <li><a href="https://github.com/helm/charts/issues/1472" rel="nofollow noreferrer">https://github.com/helm/charts/issues/1472</a></li> <li><a href="https://github.com/helm/helm/issues/1933" rel="nofollow noreferrer">https://github.com/helm/helm/issues/1933</a></li> </ul> <p>The 'hack' to deal with it can be found here:</p> <p><a href="https://groups.google.com/forum/#!topic/kubernetes-sig-apps/sLL2pCJ5Ab8" rel="nofollow noreferrer">https://groups.google.com/forum/#!topic/kubernetes-sig-apps/sLL2pCJ5Ab8</a></p>
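<p>Until that is solved upstream, the usual workaround for a PV stuck in <strong>Released</strong> is to clear its <code>claimRef</code> so it becomes <strong>Available</strong> again, and then point the new release's PVC at it (for example via an <code>existingClaim</code> value or a matching <code>spec.volumeName</code> on the PVC). A rough sketch, using the PV name from the question:</p> <pre><code># make the released PV claimable again
kubectl patch pv pvc-fc29a491-499a-11e9-a426-42010a800ff9 -p '{"spec":{"claimRef": null}}'
</code></pre> <p>Note that this only makes the volume bindable again; whether the new chart release actually picks it up depends on how its PVC is named and configured, so treat this as a sketch rather than a guaranteed recipe.</p>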
<p>Below is the container image section of the Kubernetes deployment YAML file:</p> <pre><code>image: https://registry.ng.bluemix.net/****/test-service:test-branch-67
imagePullPolicy: Always
</code></pre> <p>Below is the error message after deploying:</p> <blockquote> <p>ubuntu@ip-xxxx:~$ kubectl logs test-deployment-69c6d8xxx -n test</p> <p>Error from server (BadRequest): container "test-deployment" in pod "test-deployment-ccccxxx" is waiting to start: InvalidImageName</p> </blockquote> <p>Another error log:</p> <blockquote> <p>Failed to apply default image tag "<a href="https://registry.ng.bluemix.net/test/test-service:test-branch-66" rel="noreferrer">https://registry.ng.bluemix.net/test/test-service:test-branch-66</a>": couldn't parse image reference "<a href="https://registry.ng.bluemix.net/test/test-service:test-branch-66" rel="noreferrer">https://registry.ng.bluemix.net/test/test-service:test-branch-66</a>": invalid reference format</p> </blockquote> <p>Any idea why the pod is not coming up?</p>
<p>Remove the <code>https://</code> from the image name, and if you are using a private registry, make sure to use <code>imagePullSecrets</code>.</p>
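<p>In other words, the image reference must not contain a URL scheme. A corrected fragment could look roughly like this (the secret name is a placeholder for one created beforehand with <code>kubectl create secret docker-registry</code>):</p> <pre><code>spec:
  containers:
  - name: test-deployment
    image: registry.ng.bluemix.net/****/test-service:test-branch-67
    imagePullPolicy: Always
  imagePullSecrets:
  - name: bluemix-registry-secret   # placeholder name
</code></pre>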
<p>I'm using standard procedure for enabling HTTPS termination for my application that is running on Kubernetes using: - Ingress nginx - AWS ELB classic - Cert Manager for Let's encrypt</p> <p>I've used procedure described here: <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p> <p>I've been <strong>able to make Ingress work with HTTP</strong> before but I'm having problem with HTTPS where I'm getting following error when I try to cURL app URL:</p> <pre><code>$ curl -iv https://&lt;SERVER&gt; * Rebuilt URL to: &lt;SERVER&gt; * Trying &lt;IP_ADDR&gt;... * Connected to &lt;SERVER&gt; (&lt;IP_ADDR&gt;) port 443 (#0) * found 148 certificates in /etc/ssl/certs/ca-certificates.crt * found 592 certificates in /etc/ssl/certs * ALPN, offering http/1.1 * gnutls_handshake() failed: An unexpected TLS packet was received. * Closing connection 0 curl: (35) gnutls_handshake() failed: An unexpected TLS packet was received. </code></pre> <p>This is what I currently have:</p> <p><strong>Cert manager running in kube-system namespace:</strong></p> <pre><code>$ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE cert-manager-5f8db6f6c4-c4t4k 1/1 Running 0 2d1h cert-manager-webhook-85dd96d87-rxc7p 1/1 Running 0 2d1h cert-manager-webhook-ca-sync-pgq6b 0/1 Completed 2 2d1h </code></pre> <p><strong>Ingress setup in ingress-nginx namespace:</strong></p> <pre><code>$ kubectl get all -n ingress-nginx NAME READY STATUS RESTARTS AGE pod/default-http-backend-587b7d64b5-ftws2 1/1 Running 0 2d1h pod/nginx-ingress-controller-68bb4bfd98-zsz8d 1/1 Running 0 12h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/default-http-backend ClusterIP &lt;IP_ADDR_1&gt; &lt;none&gt; 80/TCP 2d1h service/ingress-nginx NodePort &lt;IP_ADDR_2&gt; &lt;none&gt; 80:32327/TCP,443:30313/TCP 2d1h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/default-http-backend 1 1 1 1 2d1h deployment.apps/nginx-ingress-controller 1 1 1 1 12h </code></pre> <p><strong>Application and ingress in app namespace:</strong></p> <pre><code>$ kubectl get all -n app NAME READY STATUS RESTARTS AGE pod/appserver-0 1/1 Running 0 2d1h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/appserver ClusterIP &lt;IP_ADDR&gt; &lt;none&gt; 22/TCP,80/TCP 2d1h NAME DESIRED CURRENT AGE statefulset.apps/appserver 1 1 2d1h $ kubectl describe ingress -n app Name: appserver Namespace: app Address: Default backend: default-http-backend:80 (&lt;none&gt;) TLS: letsencrypt-prod terminates &lt;SERVER&gt; Rules: Host Path Backends ---- ---- -------- &lt;SERVER&gt; / appserver:80 (&lt;none&gt;) Annotations: certmanager.k8s.io/cluster-issuer: letsencrypt-prod kubernetes.io/ingress.class: nginx Events: &lt;none&gt; </code></pre> <p>This is how ingress resource looks like:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod name: appserver namespace: app spec: tls: - hosts: - &lt;SERVER&gt; secretName: letsencrypt-prod rules: - host: &lt;SERVER&gt; http: paths: - backend: serviceName: appserver servicePort: 80 path: / </code></pre> <p><strong>Additional checks that I've done:</strong></p> <p>Checked that certificates have been generated correctly:</p> <pre><code>$ kubectl describe cert -n app Name: 
letsencrypt-prod
...
Status:
  Conditions:
    Last Transition Time:  2019-03-20T14:23:07Z
    Message:  Certificate is up to date and has not expired
    Reason:  Ready
    Status:  True
    Type:  Ready
</code></pre> <p>Checked logs from cert-manager:</p> <pre><code>$ kubectl logs -f cert-manager-5f8db6f6c4-c4t4k -n kube-system
...
I0320 14:23:08.368872 1 sync.go:177] Certificate "letsencrypt-prod" for ingress "appserver" already exists
I0320 14:23:08.368889 1 sync.go:180] Certificate "letsencrypt-prod" for ingress "appserver" is up to date
I0320 14:23:08.368894 1 controller.go:179] ingress-shim controller: Finished processing work item "app/appserver"
I0320 14:23:12.548963 1 controller.go:183] orders controller: syncing item 'app/letsencrypt-prod-1237734172'
I0320 14:23:12.549608 1 controller.go:189] orders controller: Finished processing work item "app/letsencrypt-prod-1237734172"
</code></pre> <p>I'm not really sure at this point what else might be worth checking.</p>
<p>In the end, it seems the problem was related to how I had set up the listeners on the classic ELB on AWS.</p> <p>I had done the following:</p> <pre><code>HTTP 80 -&gt; HTTP &lt;INGRESS_SVC_NODE_PORT_1&gt;
HTTP 443 -&gt; HTTP &lt;INGRESS_SVC_NODE_PORT_2&gt;
</code></pre> <p>The first mistake was that I had used HTTP instead of HTTPS for port 443. When I tried to use HTTPS, I had to enter SSL certificates, which didn't make sense to me since I'm doing SSL termination at the Ingress level with Let's Encrypt.</p> <p>Therefore, the following listener configuration worked:</p> <pre><code>TCP 80 -&gt; TCP &lt;INGRESS_SVC_NODE_PORT_1&gt;
TCP 443 -&gt; TCP &lt;INGRESS_SVC_NODE_PORT_2&gt;
</code></pre>
<p>I have put neo4j community edition (v3.2.2) into Kubernetes. But the community edition does not support hot backups, so I have to shut it down to back up/restore data.</p> <p>I tried:</p> <pre><code>kubectl exec neo4j-0 /var/lib/neo4j/bin/neo4j stop </code></pre> <p>but it shows:</p> <pre><code>Neo4j not running </code></pre> <p>I also tried:</p> <pre><code>kubectl exec -it neo4j-0 bash
/var/lib/neo4j/bin/neo4j stop
</code></pre> <p>but I still can't stop the neo4j process in the container:</p> <pre><code>Neo4j not running </code></pre> <p>Does anybody have a solution?</p>
<p>You cannot stop the main process inside a container; otherwise it will be considered dead and Kubernetes will terminate this pod and schedule a new healthy one.</p> <p>Also, Kubernetes does not support suspending pods. It's cheaper to stop/start pods.</p> <p>So, in your case, I'd recommend downscaling your deployment to zero replicas during the backup</p> <pre><code>kubectl scale --replicas=0 deployment/neo4j </code></pre> <p>and scaling back up to the required number of replicas once the backup is completed</p> <pre><code>kubectl scale --replicas=1 deployment/neo4j </code></pre>
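<p>One caveat: the pod name <code>neo4j-0</code> suggests the workload is actually a StatefulSet rather than a Deployment, in which case the same idea applies but the command targets the StatefulSet (the resource name below is assumed from the pod name):</p> <pre><code># stop neo4j for the backup
kubectl scale --replicas=0 statefulset/neo4j

# run the backup/restore against the persistent volume, then bring it back
kubectl scale --replicas=1 statefulset/neo4j
</code></pre> <p>Check the actual resource name first with <code>kubectl get statefulsets,deployments</code>.</p>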
<p>My docker build is failing due to the following error:</p> <blockquote> <p>COPY failed: CreateFile \?\C:\ProgramData\Docker\tmp\docker-builder117584470\Aeros.Services.Kubernetes\Aeros.Services.Kubernetes.csproj: The system cannot find the path specified.</p> </blockquote> <p>I am fairly new to Docker and have gone with the basic project template that is set up when you create a Kubernetes container project, so I figured it would work out of the box, but I was mistaken.</p> <p>I'm having problems trying to figure out what it's attempting to do in the temp directory structure and the reason it is failing. Can anyone offer some assistance? I've done some searching and others have said the default Docker template was incorrect in Visual Studio, but I'm not seeing any of the files being copied over to the temp directory to begin with, so figuring out what is going on has been rather problematic so far.</p> <p>Here is the Dockerfile; the only thing I've added is a publishingProfile arg so I can tell it which profile to use in the build and publish steps:</p> <pre><code>ARG publishingProfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj", "Aeros.Services.Kubernetes/"]
RUN dotnet restore "Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj"
COPY . ./
WORKDIR "/src/Aeros.Services.Kubernetes"
RUN dotnet build "Aeros.Services.Kubernetes.csproj" -c $publishingProfile -o /app

FROM build AS publish
RUN dotnet publish "Aeros.Services.Kubernetes.csproj" -c $publishingProfile -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Aeros.Services.Kubernetes.dll"]
</code></pre> <p>I haven't touched the yaml file, but if you need that I can provide it as well. Again, all I've done with this is add a few NuGet packages to the project reference. The build in Visual Studio runs fine, but the docker command:</p> <pre><code>docker build . --build-arg publishingProfile=Release </code></pre> <p>is failing with the error mentioned above.</p> <p>Can someone be so kind as to offer some enlightenment? Thanks!</p> <p>Edit 1: I am executing this from the project's folder via a PowerShell command line.</p>
<p>Leandro's comments helped come across the solution.</p> <p>So first a rundown of that COPY command, it takes two parameters, source and destination. Within the template for the Dockerfile for Visual Studio, it includes the folder location of the .csproj file it is attempting to copy. In my case, the command read as follows:</p> <pre><code>COPY ["Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj", "Aeros.Services.Kubernetes/"] </code></pre> <p>So it is looking for my Aeros.Services.Kubernetes.csproj file in the Aeros.Services.Kubernetes project folder and copying it to the Aeros.Services.Kubernetes folder in the src folder of Docker.</p> <p>The problem with this is that if you use the default setup, your dockerfile is included inside the project folder. If you are executing the docker build from within the project folder, the syntax for the COPY command is actually looking in the wrong file location. For instance, if your project is TestApp.csproj located in the TestApp project folder, and you are executing the Docker build command for the dockerfile within the same folder, the syntax for that COPY command:</p> <pre><code>COPY ["TestApp/TestApp.csproj", "TestApp/"] </code></pre> <p>is actually looking for: TestApp/TestApp/TestApp.csproj.</p> <p>The correct syntax for the COPY command in this situation should be:</p> <pre><code>COPY ["TestApp.csproj", "TestApp/"] </code></pre> <p>since you are already within the TestApp project folder.</p> <p>Another problem with the default template that may trouble some is that it doesn't copy the web files for the project either, so once you get past the COPY and dotnet restore steps, you will fail during the BUILD with a:</p> <blockquote> <p>CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point</p> </blockquote> <p>This is resolved by adding:</p> <pre><code>COPY . ./ </code></pre> <p>following your RUN dotnet restore command to copy your files.</p> <p>Once these pieces have been addressed in the default template provided, everything should be functioning as expected.</p> <p>Thanks for the help!</p>
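<p>As an alternative to editing the COPY paths, you can also keep the Visual Studio template as-is and run the build from the solution directory (one level up), pointing <code>-f</code> at the Dockerfile inside the project folder. A sketch, assuming the default solution layout:</p> <pre><code># run from the solution folder, not the project folder
docker build -f Aeros.Services.Kubernetes/Dockerfile . --build-arg publishingProfile=Release
</code></pre> <p>With the solution folder as the build context, the template's <code>COPY ["Aeros.Services.Kubernetes/...", ...]</code> paths resolve as originally intended.</p>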
<p>I'm setting up an Airflow environment on Google Cloud Composer for testing. I've added some secrets to my namespace, and they show up fine:</p> <pre><code>$ kubectl describe secrets/eric-env-vars Name: eric-env-vars Namespace: eric-dev Labels: &lt;none&gt; Annotations: &lt;none&gt; Type: Opaque Data ==== VERSION_NUMBER: 6 bytes </code></pre> <p>I've referenced this secret in my DAG definition file (leaving out some code for brevity):</p> <pre><code>env_var_secret = Secret( deploy_type='env', deploy_target='VERSION_NUMBER', secret='eric-env-vars', key='VERSION_NUMBER', ) dag = DAG('env_test', schedule_interval=None, start_date=start_date) operator = KubernetesPodOperator( name='k8s-env-var-test', task_id='k8s-env-var-test', dag=dag, image='ubuntu:16.04', cmds=['bash', '-cx'], arguments=['env'], config_file=os.environ['KUBECONFIG'], namespace='eric-dev', secrets=[env_var_secret], ) </code></pre> <p>But when I run this DAG, the <code>VERSION_NUMBER</code> env var isn't printed out. It doesn't look like it's being properly linked to the pod either (apologies for imprecise language, I am new to both Kubernetes and Airflow). This is from the Airflow task log of the pod creation response (also formatted for brevity/readability):</p> <pre><code>'env': [ { 'name': 'VERSION_NUMBER', 'value': None, 'value_from': { 'config_map_key_ref': None, 'field_ref': None, 'resource_field_ref': None, 'secret_key_ref': { 'key': 'VERSION_NUMBER', 'name': 'eric-env-vars', 'optional': None} } } ] </code></pre> <p>I'm assuming that we're somehow calling the constructor for the <code>Secret</code> wrong, but I am not entirely sure. Guidance appreciated!</p>
<p>Turns out this was a misunderstanding of the logs!</p> <p>When providing an environment variable to a Kubernetes pod via a Secret, that <code>value</code> key in the API response is <code>None</code> because the value comes from the <code>secret_key_ref</code>. </p>
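<p>For anyone verifying the same thing: the resolved value only shows up inside the running container, not in the pod creation response. For a long-running pod that mounts the same secret you can check it with something like (the pod name is a placeholder):</p> <pre><code>kubectl exec -n eric-dev &lt;pod-name&gt; -- printenv VERSION_NUMBER
</code></pre> <p>For the short-lived <code>KubernetesPodOperator</code> pod above, the <code>env</code> output captured in the Airflow task log serves the same purpose.</p>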
<p>I have two microservices that communicate through http protocol. </p> <ul> <li><p>AC6K: C# microservice that gets data from an Atlas Copco 6000 device.</p></li> <li><p>LocalWriter: python aplication that gets data from AC6K and stores the info in a data base</p></li> </ul> <p>I have tested it in Windows and Linux environments and it works fine. When I contenerize each microservice and make the deployment, there is no communication. Please find enclosed herewith the corresponding docker &amp; yaml files used to containerize and deploy the application</p> <p>ac6k docker file:</p> <pre><code>FROM microsoft/aspnetcore-build EXPOSE 5010 WORKDIR /app COPY . . RUN dotnet restore ENTRYPOINT ["dotnet", "ac6kcore.dll"] - ac6kUp.yaml apiVersion: v1 kind: Service metadata: name: ac6kcore labels: run: ac6kcore spec: type: NodePort ports: - port: 5010 name: ac6kcore targetPort: 5010 nodePort: 32766 protocol: TCP selector: run: ac6kcore --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ac6kcore spec: selector: matchLabels: run: ac6kcore replicas: 1 template: metadata: labels: run: ac6kcore spec: hostNetwork: true containers: - image: afierro/ac6kcore:lw name: ac6kcore ports: - containerPort: 5010 restartPolicy: Always </code></pre> <p>local writer docker file:</p> <pre><code>FROM python:3.6 RUN mkdir -p /mongodbapp WORKDIR /mongodbapp COPY requirements.txt /mongodbapp RUN pip install --no-cache-dir -r requirements.txt ADD . /mongodbapp EXPOSE 9090 CMD ["python", "runapp.py"] - LocalWriter.yaml apiVersion: v1 kind: Service metadata: annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" name: localwriter labels: app: localwriter spec: type: NodePort ports: - port: 9090 name: localwriter targetPort: 9090 nodePort: 32756 protocol: TCP selector: app: localwriter --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: localwriter spec: replicas: 1 selector: matchLabels: app: localwriter template: metadata: labels: app: localwriter spec: containers: - name: flasknode image: afierro/localwriter:v1 imagePullPolicy: Always ports: - containerPort: 9090 </code></pre> <p>Thanks in advance</p>
<p>According to Leandro Donizetti Soares's recommendation, I have entered the following commands to see the environment variables, and the results seem to me redundant.</p> <pre><code>kubectl get pods NAME READY STATUS RESTARTS AGE ac6kcore-77bc4c4987-dxn29 1/1 Running 0 14m localwriter-55467c5495-px8m4 1/1 Running 0 14m mongo 1/1 Running 0 14m kubectl exec ac6kcore-77bc4c4987-dxn29 -- printenv | grep SERVICE AC6KCORE_SERVICE_PORT_AC6KCORE=5010 KUBERNETES_SERVICE_HOST=10.96.0.1 KUBERNETES_SERVICE_PORT_HTTPS=443 AC6KCORE_SERVICE_PORT=5010 AC6KCORE_SERVICE_HOST=10.107.208.212 LOCALWRITER_SERVICE_HOST=10.100.103.114 LOCALWRITER_SERVICE_PORT=9090 LOCALWRITER_SERVICE_PORT_LOCALWRITER=9090 KUBERNETES_SERVICE_PORT=443 kubectl exec localwriter-55467c5495-px8m4 -- printenv | grep SERVICE KUBERNETES_SERVICE_HOST=10.96.0.1 LOCALWRITER_SERVICE_PORT_LOCALWRITER=9090 AC6KCORE_SERVICE_PORT_AC6KCORE=5010 KUBERNETES_SERVICE_PORT=443 AC6KCORE_SERVICE_HOST=10.107.208.212 LOCALWRITER_SERVICE_PORT=9090 KUBERNETES_SERVICE_PORT_HTTPS=443 AC6KCORE_SERVICE_PORT=5010 LOCALWRITER_SERVICE_HOST=10.100.103.114 </code></pre>
<p>I'm trying to write a little shell script that is checking the log output of a long running Kubernetes Pod when the Pod is done.</p> <p>The script shall wait for status &quot;Completed&quot; but the following command does not exit when the status is switching from &quot;Running&quot; to &quot;Completed&quot;:</p> <blockquote> <p>$ kubectl wait --for=condition=Completed --timeout=24h pod/longrunningpodname</p> <p>^C</p> <p>$ kubectl get pods</p> <p>NAME READY STATUS RESTARTS AGE</p> <p>longrunningpodname 0/1 Completed 0 18h</p> </blockquote> <p>I would also expect the command to return immediately if the Pod is already in the status. But that doesn't happen.</p> <p>Is kubectl wait not the command I'm looking for?</p>
<p>The use of bare pods is not the best approach to run commands that must finish. Consider using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">Job Controller</a>:</p> <blockquote> <p>A Job creates one or more Pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions.</p> </blockquote> <p>Then, you can wait for the job condition:<br> <code>kubectl wait --for=condition=complete --timeout=24h job/longrunningjobname</code></p>
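<p>A minimal Job wrapping the long-running command could look roughly like this (name, image and command are placeholders):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: longrunningjobname
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: your-image:tag
        command: ["/bin/sh", "-c", "run-my-long-task"]
</code></pre> <p>Once the command finishes, <code>kubectl wait --for=condition=complete --timeout=24h job/longrunningjobname</code> returns (immediately if the Job is already complete), which is the behavior you were expecting from the bare pod.</p>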
<p>New to Kubernetes. </p> <p>I have a private dockerhub image deployed on a Kubernetes instance. When I exec into the pod I can run the following so I know my docker image is running:</p> <pre class="lang-sh prettyprint-override"><code>root@private-reg:/# curl 127.0.0.1:8085 Hello world!root@private-reg:/# </code></pre> <p>From the dashboard I can see my service has an external endpoint which ends with port 8085. When I try to load this I get 404. My service YAML is as below:</p> <pre><code>{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "test", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/test", "uid": "a1a2ae23-339b-11e9-a3db-ae0f8069b739", "resourceVersion": "3297377", "creationTimestamp": "2019-02-18T16:38:33Z", "labels": { "k8s-app": "test" } }, "spec": { "ports": [ { "name": "tcp-8085-8085-7vzsb", "protocol": "TCP", "port": 8085, "targetPort": 8085, "nodePort": 31859 } ], "selector": { "k8s-app": "test" }, "clusterIP": "******", "type": "LoadBalancer", "sessionAffinity": "None", "externalTrafficPolicy": "Cluster" }, "status": { "loadBalancer": { "ingress": [ { "ip": "******" } ] } } } </code></pre> <p>Can anyone point me in the right direction.</p>
<p>You didn't mention what type of load balancer or cloud provider you are using, but if your load balancer was provisioned correctly (which you should be able to see in your <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">kube-controller-manager</a> logs), then you should be able to access your service with what you see here:</p> <pre><code>"status": { "loadBalancer": { "ingress": [ { "ip": "******" } ] } </code></pre> <p>Then you could check by running:</p> <pre><code>$ curl &lt;ip&gt;:&lt;whatever external port your lb is fronting&gt; </code></pre> <p>It's likely that the load balancer didn't provision correctly if, as described in other answers, this works:</p> <pre><code>$ curl &lt;clusterIP for svc&gt;:8085 </code></pre> <p>and</p> <pre><code>$ curl &lt;NodeIP&gt;:31859 # NodePort </code></pre>
<p>Is it a good idea to host Java jar files in a ConfigMap when we want to extend a JVM classpath?</p> <p>Normally the application itself is baked into the Docker image, but extending it (via a plugin jar etc.) requires attaching a volume, copying the jar file to the volume, and restarting the pod, so you need a volume to do that. The other option is to put the jar directly into a ConfigMap as a binary object and restart the pod. The latter seems easier and faster, and it should work for <a href="https://github.com/kubernetes/kubernetes/issues/19781#issuecomment-172553264" rel="noreferrer">small size</a> jar files.</p>
<p>Putting a jar into a ConfigMap looks like a workaround; try another solution.</p> <p>Why can you not pack this dependency into the Docker image?</p> <p>You can use an init container, as pointed out by @reegnz.</p> <p>The docs for this feature are here:</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/</a></p>
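<p>A rough sketch of the init-container approach (the URL, image and paths are made up for illustration): the init container downloads the plugin jar into a shared <code>emptyDir</code>, and the JVM container mounts the same directory and adds it to its classpath.</p> <pre><code>spec:
  volumes:
  - name: plugins
    emptyDir: {}
  initContainers:
  - name: fetch-plugin
    image: busybox
    command: ["sh", "-c", "wget -O /plugins/my-plugin.jar https://example.com/my-plugin.jar"]
    volumeMounts:
    - name: plugins
      mountPath: /plugins
  containers:
  - name: app
    image: my-java-app:latest   # e.g. started with: java -cp "app.jar:/plugins/*" ...
    volumeMounts:
    - name: plugins
      mountPath: /plugins
</code></pre>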
<p>New to Kubernetes. </p> <p>I have a private dockerhub image deployed on a Kubernetes instance. When I exec into the pod I can run the following so I know my docker image is running:</p> <pre class="lang-sh prettyprint-override"><code>root@private-reg:/# curl 127.0.0.1:8085 Hello world!root@private-reg:/# </code></pre> <p>From the dashboard I can see my service has an external endpoint which ends with port 8085. When I try to load this I get 404. My service YAML is as below:</p> <pre><code>{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "test", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/test", "uid": "a1a2ae23-339b-11e9-a3db-ae0f8069b739", "resourceVersion": "3297377", "creationTimestamp": "2019-02-18T16:38:33Z", "labels": { "k8s-app": "test" } }, "spec": { "ports": [ { "name": "tcp-8085-8085-7vzsb", "protocol": "TCP", "port": 8085, "targetPort": 8085, "nodePort": 31859 } ], "selector": { "k8s-app": "test" }, "clusterIP": "******", "type": "LoadBalancer", "sessionAffinity": "None", "externalTrafficPolicy": "Cluster" }, "status": { "loadBalancer": { "ingress": [ { "ip": "******" } ] } } } </code></pre> <p>Can anyone point me in the right direction.</p>
<p>What is the output of the below command?</p> <p>curl clusterIP:8085</p> <p>If you get the "Hello world" message then it means that the service is routing the traffic correctly to the backend pod.</p> <p>curl HostIP:NODEPORT should also work.</p> <p>Most likely the service is not bound to the backend pod. Did you define the below label on the pod?</p> <pre><code>labels: { "k8s-app": "test" } </code></pre>
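<p>You can also verify whether the selector actually matches the pod by looking at the service's endpoints:</p> <pre><code>kubectl get endpoints test -n default
</code></pre> <p>If the ENDPOINTS column is empty, the <code>k8s-app: test</code> label is missing on the pod and the service has nothing to route to.</p>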
<p>My workload needs network connectivity to start properly and I want to use a <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">postStart lifecycle hook</a> that waits until it is ready and then does something. However, lifecycle hooks seem to block CNI; the following workload will never be assigned an IP:</p> <pre><code>kubectl apply -f &lt;(cat &lt;&lt;EOF apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 lifecycle: postStart: exec: command: - "/bin/sh" - "-c" - | while true; do sleep done EOF ) kubectl get pods -o wide </code></pre> <p>This means my workload never starts (hanging when trying to connect out) and my lifecycle hook loops forever. Is there a way to work around this?</p> <p>EDIT: I used a sidecar instead of a lifecycle hook to achieve the same thing - still unsure why lifecycle hook doesn't work though, executing CNI is part of container creation IMO so I'd expect lifecycle hooks to fire after networking had been configured</p>
<p>This is an interesting one :-) It's not much of an answer but I did some investigation and I thought I share it - perhaps it is of some use.</p> <p>I started from the yaml posted in the question. Then I logged into the machine running this pod and located the container.</p> <pre><code>$ kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE nginx-8f59d655b-ds7x2 0/1 ContainerCreating 0 3m &lt;none&gt; node-x $ ssh node-x node-x$ docker ps | grep nginx-8f59d655b-ds7x2 2064320d1562 881bd08c0b08 "nginx -g 'daemon off" 3 minutes ago Up 3 minutes k8s_nginx_nginx-8f59d655b-ds7x2_default_14d1e071-4cd4-11e9-8104-42010af00004_0 2f09063ed20b k8s.gcr.io/pause-amd64:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_nginx-8f59d655b-ds7x2_default_14d1e071-4cd4-11e9-8104-42010af00004_0 </code></pre> <p>The second container running <code>/pause</code> is the infrastructure container. The other one is Pod's nginx container. Note that normally this information would be available trough <code>kubectl get pod</code> as well, but in this case it is not. Strange.</p> <p>In the container I'd expect that the networking is set up and nginx is running. Let's verify that:</p> <pre><code>node-x$ docker exec -it 2064320d1562 bash root@nginx-8f59d655b-ds7x2:/# apt update &amp;&amp; apt install -y iproute2 procps ...installs correctly... root@nginx-8f59d655b-ds7x2:/# ip a s eth0 3: eth0@if2136: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1410 qdisc noqueue state UP group default link/ether 0a:58:0a:f4:00:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.244.0.169/24 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::da:d3ff:feda:1cbe/64 scope link valid_lft forever preferred_lft forever </code></pre> <p>So networking is set up, routes are in place and the IP address on eth0 is actually on the overlay network as it is supposed to be. Looking at the process list now:</p> <pre><code>root@nginx-8f59d655b-ds7x2:/# ps auwx USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.1 32652 4900 ? Ss 18:56 0:00 nginx: master process nginx -g daemon off; root 5 5.9 0.0 4276 1332 ? Ss 18:56 0:46 /bin/sh -c while true; do sleep done nginx 94 0.0 0.0 33108 2520 ? S 18:56 0:00 nginx: worker process root 13154 0.0 0.0 36632 2824 ? R+ 19:09 0:00 ps auwx root 24399 0.0 0.0 18176 3212 ? Ss 19:02 0:00 bash </code></pre> <p>Hah, so nginx is running and so is the preStop command. Notice however to large PIDs. There is a typo in the deployment file and it is executing <code>sleep</code> with no parameters - which is an error.</p> <pre><code>root@nginx-8f59d655b-ds7x2:/# sleep sleep: missing operand Try 'sleep --help' for more information. </code></pre> <p>This is running from a loop, hence the loads of forking leading to large PIDs.</p> <p>As another test, from a node I also try to curl the server:</p> <pre><code>node-x$ curl http://10.244.0.169 ... &lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt; ... </code></pre> <p>Which is very much expected. So finally I'd like to force the preStop command to finish so from inside the container I kill the containing shell:</p> <pre><code>root@nginx-8f59d655b-ds7x2:/# kill -9 5 ...container is terminated in a second, result of the preStop hook failure... $ kubectl get pod NAME READY STATUS RESTARTS AGE nginx-8f59d655b-ds7x2 0/1 PostStartHookError: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (53423560 vs. 16777216) 0 21m </code></pre> <p>Hm, so I imagine the 50MB (!) 
worth of messages were the failures from the missing parameter to sleep. Actually, what is even more spooky is that the Deployment is not recovering from this failure. This Pod keeps hanging around forever, instead of what you'd expect (spawn another Pod and retry).</p> <p>At this point I deleted the deployment and recreated it with the sleep fixed in the preStop hook (<code>sleep 1</code>). The results are much the same, and the Deployment won't spawn another Pod in that case either (so it was not that just that it choked on the logs).</p> <p>Now I did say at the top that this is not really an answer. But perhaps some takeaway: the lifecycle hooks need some work before they can considered useful and safe.</p>
<p>I was doing some self-learning with Kubernetes and I have these containers that will not permanently shut down:</p> <pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 8e08ecdf12c2 fadcc5d2b066 "/usr/local/bin/kube…" About a minute ago Up About a minute k8s_kube-proxy_kube-proxy-mtksn_kube-system_08f1149a-4ac6-11e9-bea5-080027db2e61_0 744282ae4605 40a817357014 "kube-controller-man…" About a minute ago Up About a minute k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_17eea6fd9342634d7d40a04d577641fd_0 0473a3e3fedb f59dcacceff4 "/coredns -conf /etc…" About a minute ago Up About a minute k8s_coredns_coredns-86c58d9df4-l6mdf_kube-system_08f82a2f-4ac6-11e9-bea5-080027db2e61_0 6e9a0a03dff1 4689081edb10 "/storage-provisioner" About a minute ago Up About a minute k8s_storage-provisioner_storage-provisioner_kube-system_0a7e1c9d-4ac6-11e9-bea5-080027db2e61_0 4bb4356e57e7 dd862b749309 "kube-scheduler --ad…" About a minute ago Up About a minute k8s_kube-scheduler_kube-scheduler-minikube_kube-system_4b52d75cab61380f07c0c5a69fb371d4_0 973e42e849c8 f59dcacceff4 "/coredns -conf /etc…" About a minute ago Up About a minute k8s_coredns_coredns-86c58d9df4-l6hqj_kube-system_08fd4db1-4ac6-11e9-bea5-080027db2e61_1 338b58983301 9c16409588eb "/opt/kube-addons.sh" About a minute ago Up About a minute k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_5c72fb06dcdda608211b70d63c0ca488_4 3600083cbb01 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-addon-manager-minikube_kube-system_5c72fb06dcdda608211b70d63c0ca488_3 97dffefb7a4b ldco2016/multi-client "nginx -g 'daemon of…" About a minute ago Up About a minute k8s_client_client-deployment-6d89489556-mgznt_default_1f1f77f2-4c5d-11e9-bea5-080027db2e61_1 55224d847c72 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-proxy-mtksn_kube-system_08f1149a-4ac6-11e9-bea5-080027db2e61_3 9a66d39da906 3cab8e1b9802 "etcd --advertise-cl…" About a minute ago Up About a minute k8s_etcd_etcd-minikube_kube-system_8490cea1bf6294c73e0c454f26bdf714_6 e75a57524b41 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_etcd-minikube_kube-system_8490cea1bf6294c73e0c454f26bdf714_5 5a1c02eeea6a fc3801f0fc54 "kube-apiserver --au…" About a minute ago Up About a minute k8s_kube-apiserver_kube-apiserver-minikube_kube-system_d1fc269f154a136c6c9cb809b65b6899_3 2320ac2ab58d k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-apiserver-minikube_kube-system_d1fc269f154a136c6c9cb809b65b6899_3 0195bb0f048c k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-scheduler-minikube_kube-system_4b52d75cab61380f07c0c5a69fb371d4_3 0664e62bf425 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_coredns-86c58d9df4-l6mdf_kube-system_08f82a2f-4ac6-11e9-bea5-080027db2e61_4 546c4195391e k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-controller-manager-minikube_kube-system_17eea6fd9342634d7d40a04d577641fd_4 9211bc0ce3f8 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_client-deployment-6d89489556-mgznt_default_1f1f77f2-4c5d-11e9-bea5-080027db2e61_3 c22e7c931f46 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_coredns-86c58d9df4-l6hqj_kube-system_08fd4db1-4ac6-11e9-bea5-080027db2e61_3 e5b9a76b8d68 k8s.gcr.io/pause:3.1 "/pause" About a minute ago Up About a minute k8s_POD_storage-provisioner_kube-system_0a7e1c9d </code></pre> 
<p>What is the most efficient way to shut them all down in one go and stop them from restarting?</p> <p>I ran a <code>minikube stop</code> and that took care of it, but I am unclear as to whether that was the proper way to do it.</p>
<p>This looks like the output of <code>docker ps</code>. When using Kubernetes, you should generally not worry about things at the Docker level, and what containers Docker is running. Some of the containers that are running are part of the Kubernetes API itself, so you should only shut these down if you plan to shut down Kubernetes itself. If you plan to shut down Kubernetes itself, the right way to shut it down depends on how you started it (minikube, GKE, etc.). If you don't plan on shutting down Kubernetes itself, but want to shut down any extra containers that Kubernetes is running on your behalf (as opposed to containers that are running as part of the Kubernetes system itself), you could run <code>kubectl get pods --all-namespaces</code> to see all "user-land" pods that are running. "Pod" is the level of abstraction that you primarily interact with when using Kubernetes, and the specific Docker processes that are running are not something you should need to worry about.</p> <p>EDIT: I see you updated your question to say that you ran <code>minikube stop</code>. Yes, that is the correct way to do it, nice!</p>
<p>I am working on Azure Kubernetes Service (AKS). I can create AKS through the portal successfully, but I need to do it through ARM templates.</p> <p>How do I create AKS with the help of ARM templates?</p> <p>For this, I followed this <a href="https://github.com/neumanndaniel/armtemplates/blob/master/container/aks.json" rel="nofollow noreferrer">link</a>.</p> <p>But I am receiving an error like this:</p> <blockquote> <p>Code : InvalidTemplate</p> <p>Message : Deployment template validation failed: 'The template resource 'AKSsubnet/Microsoft.Authorization/36985XXX-XXXX-XXXX-XXXX-5fb6b7ebXXXX' for type 'Microsoft.Network/virtualNetworks/subnets/providers/roleAssignments' at line '53' and column '9' has incorrect segment lengths. A nested resource type must have identical number of segments as its resource name. A root resource type must have segment length one greater than its resource name. Please see <a href="https://aka.ms/arm-template/#resources" rel="nofollow noreferrer">https://aka.ms/arm-template/#resources</a> for usage details.'.</p> </blockquote>
<p>Old thread but here is why the AKS Advanced Networking ARM Template is not working for you.</p> <p>One of the steps in the deployment assigns the SP as a contributor to the newly created AKS subnets so that the SP can work its advanced networking magic.</p> <p>In order to assign a role in a RG one needs to have Owner permissions on that RG. </p>
<p>I have a deployment for which the env variables for the pods are set via a config map.</p> <pre><code>envFrom:
- configMapRef:
    name: map
</code></pre> <p>My config map looks like this:</p> <pre><code>apiVersion: v1
data:
  HI: HELLO
  PASSWORD: PWD
  USERNAME: USER
kind: ConfigMap
metadata:
  name: map
</code></pre> <p>All the pods have these env variables set from the map. Now if I change the config map file and apply it - <code>kubectl apply -f map.yaml</code> - I get the confirmation that <code>map is configured</code>. However, it does not trigger creation of new pods with the updated env variables.</p> <p>Interestingly, this one works:</p> <p><code>kubectl set env deploy/mydeploy PASSWORD=NEWPWD</code></p> <p>But not this one:</p> <pre><code>kubectl set env deploy/mydeploy --from=cm/map </code></pre> <p>But I am looking for a way to get new pods created with updated env variables via the config map!</p>
<blockquote> <p>Interestingly this one works</p> <p>kubectl set env deploy/mydeploy PASSWORD=NEWPWD</p> <p>But not this one</p> <p>kubectl set env deploy/mydeploy --from=cm/map</p> </blockquote> <p>This is expected behavior. Your pod manifest hasn't changed in the second command (when you use the <code>cm</code>), which is why Kubernetes is not recreating the pods.</p> <p>There are several ways to deal with that. Basically, what you can do is artificially change the Pod manifest every time the ConfigMap changes, e.g. by adding an annotation to the Pod with the sha256sum of the ConfigMap content. This is actually what Helm suggests you do. If you are using Helm, it can be done as:</p> <pre><code>kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
</code></pre> <p>From here: <a href="https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change</a></p> <p>Just make sure you add the annotation to the Pod (template) object, not the Deployment itself.</p>
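<p>If you are not using Helm, you can get the same effect manually by patching a checksum annotation onto the pod template whenever the ConfigMap changes; any change to the template triggers a rolling update. A sketch (the annotation key is arbitrary):</p> <pre><code>CHECKSUM=$(kubectl get cm map -o yaml | sha256sum | cut -d' ' -f1)
kubectl patch deployment mydeploy -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$CHECKSUM\"}}}}}"
</code></pre>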
<p>I created a GKE cluster with a node pool, but I forgot to label the nodes... In the Google Cloud Platform UI I can't edit or add Kubernetes labels for the existing node pool... How can I do it without recreating the whole node pool?</p> <p><a href="https://i.stack.imgur.com/lEA9b.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/lEA9b.jpg" alt="The label field is unchangeable"></a></p>
<p>It isn't possible to edit the labels without recreating nodes, so GKE does not support updating labels on node pools. </p> <p>In GKE, the Kubernetes labels are applied to nodes by the <code>kubelet</code> binary which receives them as flags passed in via the node startup script. As it is just as disruptive (or more disruptive) to recreate all nodes in a node pool as to create a new node pool, updating the labels isn't a supported operation for updating a node pool. </p>
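<p>The practical workaround is therefore to create a new node pool with the labels you want and migrate workloads off the old one, for example (pool name, cluster name and labels below are placeholders):</p> <pre><code>gcloud container node-pools create labelled-pool \
  --cluster=your-cluster \
  --node-labels=environment=dev,team=payments \
  --num-nodes=3
</code></pre> <p>You can also <code>kubectl label</code> an existing node directly, but in GKE such a label does not survive node recreation (upgrades, autoscaling, repairs), which is why the node-pool-level labels matter.</p>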
<p>We have a Python uWSGI REST API server which handles a lot of calls. When the API calls peak through an external resource, the queue is immediately filled, because the uWSGI queue size is set to 100 by default. After some digging we found that this follows the net.core.somaxconn setting of the server, and in the case of Kubernetes, the setting of the node.</p> <p>We found this documentation to use sysctl to change net.core.somaxconn. <a href="https://kubernetes.io/docs/concepts/cluster-administration/sysctl-cluster/" rel="noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/sysctl-cluster/</a> But that's not working on GKE as it requires docker 1.12 or newer. </p> <p>We also found this snippet but that seems really hacky. <a href="https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml" rel="noreferrer">https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml</a> Wouldn't a DaemonSet be better than a companion container?</p> <p>What would be the best practice to set net.core.somaxconn higher than the default on all nodes of a nodepool?</p>
<p>A good approach can be to use a privileged DaemonSet, due to the fact that it will run on all existing and new nodes. Just use the provided startup-script container, such as:</p> <p><a href="https://github.com/kubernetes/contrib/blob/master/startup-script/startup-script.yml" rel="nofollow noreferrer">https://github.com/kubernetes/contrib/blob/master/startup-script/startup-script.yml</a></p> <p>For your case:</p> <pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: startup
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    spec:
      hostPID: true
      containers:
        - name: system-tweak
          image: gcr.io/google-containers/startup-script:v1
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: STARTUP_SCRIPT
              value: |
                #! /bin/bash
                echo 32768 &gt; /proc/sys/net/core/somaxconn
</code></pre>
<p>I am trying to programmatically get, in Go, the namespace of the current context from ~/.kube/config.</p> <p>So far, what I have tried uses these modules:</p> <pre><code> "k8s.io/client-go/tools/clientcmd"
 "k8s.io/client-go/kubernetes"

kubeconfig := filepath.Join(
    os.Getenv("HOME"), ".kube", "config",
)
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
    log.Fatal(err)
}
fmt.Printf("Namespace: %s\n", config.Namespace())

clientset, err := kubernetes.NewForConfig(config)
if err != nil {
    log.Fatal(err)
}
</code></pre> <p>But I still have no clue whether clientset can give me the namespace I am looking for. From this thread: <a href="https://stackoverflow.com/questions/53283347/how-to-get-current-namespace-of-an-in-cluster-go-kubernetes-client">How to get current namespace of an in-cluster go Kubernetes client</a></p> <p>It suggests something like this should be possible: kubeconfig.Namespace()</p>
<p>I found a solution using <code>NewDefaultClientConfigLoadingRules</code> and then loading the rules. This works if your config is loadable with the default client config loading rules.</p> <p>Example:</p> <pre class="lang-golang prettyprint-override"><code>package main import ( &quot;github.com/davecgh/go-spew/spew&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; ) func main() { clientCfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load() spew.Dump(clientCfg, err) } </code></pre> <p>Gives you a <a href="https://godoc.org/k8s.io/client-go/tools/clientcmd/api#Config" rel="nofollow noreferrer">https://godoc.org/k8s.io/client-go/tools/clientcmd/api#Config</a> which contains the current context including its namespace.</p> <pre><code>Contexts: (map[string]*api.Context) (len=1) { (string) (len=17) &quot;xxx.xxxxx.xxx&quot;: (*api.Context)(0xc0001b2b40)({ LocationOfOrigin: (string) (len=30) &quot;/path/to/.kube/config&quot;, Cluster: (string) (len=17) &quot;xxx.xxxxx.xxx&quot;, AuthInfo: (string) (len=29) &quot;xxxx@xxxx.com&quot;, Namespace: (string) (len=7) &quot;default&quot;, Extensions: (map[string]runtime.Object) { } }) }, CurrentContext: (string) (len=17) &quot;xxx.xxxxx.xxx&quot;, </code></pre> <p>For your information, <code>ClientConfigLoadingRules</code> is a structure with different properties to tell the client where to load the config from. The default one will use the path in your <code>KUBECONFIG</code> environment variable in the <code>Precedence</code> field.</p> <pre><code>(*clientcmd.ClientConfigLoadingRules)(0xc0000a31d0)({ ExplicitPath: (string) &quot;&quot;, Precedence: ([]string) (len=1 cap=1) { (string) (len=30) &quot;/path/to/.kube/config&quot; }, MigrationRules: (map[string]string) (len=1) { (string) (len=30) &quot;/path/to/.kube/config&quot;: (string) (len=35) &quot;/path/to/.kube/.kubeconfig&quot; }, DoNotResolvePaths: (bool) false, DefaultClientConfig: (clientcmd.ClientConfig) &lt;nil&gt; }) </code></pre>
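<p>If you mainly need the namespace (with overrides and defaulting handled for you), the deferred-loading client config can return it directly. A small sketch using the same default loading rules:</p> <pre><code>package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// load ~/.kube/config (or whatever KUBECONFIG points at) with the default rules
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	kubeConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, &amp;clientcmd.ConfigOverrides{})

	// namespace of the current context ("default" if none is set)
	namespace, _, err := kubeConfig.Namespace()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Namespace:", namespace)
}
</code></pre>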
<p>Using the EFK Stack on Kubernetes (Minikube). Have an asp.net core app using Serilog to write to console as Json. Logs DO ship to Elasticsearch, <strong>but they arrive unparsed strings</strong>, into the "log" field, this is the problem.</p> <p>This is the console output:</p> <pre><code>{ "@timestamp": "2019-03-22T22:08:24.6499272+01:00", "level": "Fatal", "messageTemplate": "Text: {Message}", "message": "Text: \"aaaa\"", "exception": { "Depth": 0, "ClassName": "", "Message": "Boom!", "Source": null, "StackTraceString": null, "RemoteStackTraceString": "", "RemoteStackIndex": -1, "HResult": -2146232832, "HelpURL": null }, "fields": { "Message": "aaaa", "SourceContext": "frontend.values.web.Controllers.HomeController", "ActionId": "0a0967e8-be30-4658-8663-2a1fd7d9eb53", "ActionName": "frontend.values.web.Controllers.HomeController.WriteTrace (frontend.values.web)", "RequestId": "0HLLF1A02IS16:00000005", "RequestPath": "/Home/WriteTrace", "CorrelationId": null, "ConnectionId": "0HLLF1A02IS16", "ExceptionDetail": { "HResult": -2146232832, "Message": "Boom!", "Source": null, "Type": "System.ApplicationException" } } } </code></pre> <p>This is the Program.cs, part of Serilog config (ExceptionAsObjectJsonFormatter inherit from ElasticsearchJsonFormatter):</p> <pre class="lang-cs prettyprint-override"><code>.UseSerilog((ctx, config) =&gt; { var shouldFormatElastic = ctx.Configuration.GetValue&lt;bool&gt;("LOG_ELASTICFORMAT", false); config .ReadFrom.Configuration(ctx.Configuration) // Read from appsettings and env, cmdline .Enrich.FromLogContext() .Enrich.WithExceptionDetails(); var logFormatter = new ExceptionAsObjectJsonFormatter(renderMessage: true); var logMessageTemplate = "[{Timestamp:HH:mm:ss} {Level:u3}] {Message:lj}{NewLine}{Exception}"; if (shouldFormatElastic) config.WriteTo.Console(logFormatter, standardErrorFromLevel: LogEventLevel.Error); else config.WriteTo.Console(standardErrorFromLevel: LogEventLevel.Error, outputTemplate: logMessageTemplate); }) </code></pre> <p>Using these nuget pkgs:</p> <ul> <li>Serilog.AspNetCore</li> <li>Serilog.Exceptions</li> <li>Serilog.Formatting.Elasticsearch</li> <li>Serilog.Settings.Configuration</li> <li>Serilog.Sinks.Console</li> </ul> <p>This is how it looks like in Kibana <a href="https://i.stack.imgur.com/HyEac.png" rel="noreferrer"><img src="https://i.stack.imgur.com/HyEac.png" alt="Here"></a></p> <p>And this is configmap for fluent-bit:</p> <pre><code>fluent-bit-filter.conf: [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Merge_Log On K8S-Logging.Parser On K8S-Logging.Exclude On fluent-bit-input.conf: [INPUT] Name tail Path /var/log/containers/*.log Parser docker Tag kube.* Refresh_Interval 5 Mem_Buf_Limit 5MB Skip_Long_Lines On fluent-bit-output.conf: [OUTPUT] Name es Match * Host elasticsearch Port 9200 Logstash_Format On Retry_Limit False Type flb_type Time_Key @timestamp Replace_Dots On Logstash_Prefix kubernetes_cluster fluent-bit-service.conf: [SERVICE] Flush 1 Daemon Off Log_Level info Parsers_File parsers.conf fluent-bit.conf: @INCLUDE fluent-bit-service.conf @INCLUDE fluent-bit-input.conf @INCLUDE fluent-bit-filter.conf @INCLUDE fluent-bit-output.conf parsers.conf: </code></pre> <p><strong>But</strong> I also tried <a href="https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml" 
rel="noreferrer">https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml</a> with my modifications.</p> <p>I used Helm to install fluentbit with <code>helm install stable/fluent-bit --name=fluent-bit --namespace=logging --set backend.type=es --set backend.es.host=elasticsearch --set on_minikube=true</code></p> <p>I also get alot of the following errors:</p> <pre><code>log:{"took":0,"errors":true,"items":[{"index":{"_index":"kubernetes_cluster-2019.03.22","_type":"flb_type","_id":"YWCOp2kB4wEngjaDvxNB","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"json_parse_exception","reason":"Duplicate field '@timestamp' at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@432f75a7; line: 1, column: 1248]"}}}}]} </code></pre> <p>and</p> <pre><code>log:[2019/03/22 22:38:57] [error] [out_es] could not pack/validate JSON response stream:stderr </code></pre> <p>as I can see in Kibana.</p>
<p>Problem was bad fluentbit configmap. This works:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: fluent-bit-config namespace: logging labels: k8s-app: fluent-bit data: # Configuration files: server, input, filters and output # ====================================================== fluent-bit.conf: | [SERVICE] Flush 1 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE input-kubernetes.conf @INCLUDE filter-kubernetes.conf @INCLUDE output-elasticsearch.conf input-kubernetes.conf: | [INPUT] Name tail Tag kube.* Path /var/log/containers/*.log Parser docker DB /var/log/flb_kube.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 filter-kubernetes.conf: | [FILTER] Name kubernetes Match kube.* Kube_URL https://kubernetes.default.svc:443 # These two may fix some duplicate field exception Merge_Log On Merge_JSON_Key k8s K8S-Logging.Parser On K8S-Logging.exclude True output-elasticsearch.conf: | [OUTPUT] Name es Match * Host ${FLUENT_ELASTICSEARCH_HOST} Port ${FLUENT_ELASTICSEARCH_PORT} Logstash_Format On # This fixes errors where kubernetes.apps.name must object Replace_Dots On Retry_Limit False Type flb_type # This may fix some duplicate field exception Time_Key @timestamp_es # The Index Prefix: Logstash_Prefix logstash_07 parsers.conf: | [PARSER] Name apache Format regex Regex ^(?&lt;host&gt;[^ ]*) [^ ]* (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] "(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\"]*?)(?: +\S*)?)?" (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: "(?&lt;referer&gt;[^\"]*)" "(?&lt;agent&gt;[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?&lt;host&gt;[^ ]*) [^ ]* (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] "(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^ ]*) +\S*)?" (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: "(?&lt;referer&gt;[^\"]*)" "(?&lt;agent&gt;[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?&lt;time&gt;[^\]]*)\] \[(?&lt;level&gt;[^\]]*)\](?: \[pid (?&lt;pid&gt;[^\]]*)\])?( \[client (?&lt;client&gt;[^\]]*)\])? (?&lt;message&gt;.*)$ [PARSER] Name nginx Format regex Regex ^(?&lt;remote&gt;[^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] "(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\"]*?)(?: +\S*)?)?" (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: "(?&lt;referer&gt;[^\"]*)" "(?&lt;agent&gt;[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json #Time_Key time Time_Key @timestamp Time_Format %Y-%m-%dT%H:%M:%S.%L Time_Keep Off # on # See: https://fluentbit.io/documentation/0.14/parser/decoder.html # Command | Decoder | Field | Optional Action # =============|==================|================= # Decode_Field_As escaped log # Decode_Field_As escaped log do_next # Decode_Field_As json log [PARSER] Name syslog Format regex Regex ^\&lt;(?&lt;pri&gt;[0-9]+)\&gt;(?&lt;time&gt;[^ ]* {1,2}[^ ]* [^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;ident&gt;[a-zA-Z0-9_\/\.\-]*)(?:\[(?&lt;pid&gt;[0-9]+)\])?(?:[^\:]*\:)? *(?&lt;message&gt;.*)$ Time_Key time Time_Format %b %d %H:%M:%S </code></pre>
<p>My deployment's pods are doing work that should not be interrupted. Is it possible for K8s to poll an endpoint about update readiness, or to inform my pod that it is about to go down, so it can get its affairs in order and then declare itself ready for an update?</p> <p>Ideal process:</p> <ol> <li>An updated pod is ready to replace an old one</li> <li>A request is sent to the old pod by k8s, telling it that it is about to be updated</li> <li>Old pod gets polled about update readiness</li> <li>Old pod gets its affairs in order (e.g. stops receiving new tasks, finishes existing tasks)</li> <li>Old pod says it is ready</li> <li>Old pod gets replaced</li> </ol>
<p>You could perhaps look into using <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noreferrer">container lifecycle hooks</a> - specifically prestop in this case. </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: your-pod spec: containers: - name: your-awesome-image image: image-name lifecycle: postStart: exec: command: ["/bin/sh", "my-app", "-start"] preStop: exec: # specifically by adding the cmd you want your image to run here command: ["/bin/sh","my-app","-stop"] </code></pre>
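<p>One thing to keep in mind: the <code>preStop</code> hook only gets as much time as the pod's termination grace period, so if finishing the in-flight work can take a while, you will also want to raise <code>terminationGracePeriodSeconds</code> accordingly (the value below is just an example):</p> <pre><code>spec:
  terminationGracePeriodSeconds: 600   # give the preStop handler up to 10 minutes
  containers:
  - name: your-awesome-image
    image: image-name
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "my-app", "-stop"]
</code></pre>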
<p>I've setup Istio using the helm charts, and I'm trying to expose services to the istio-ingressgateway.</p> <p>Here's the config I've decided to go with:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: grafana-gateway namespace: istio-system spec: selector: istio: ingressgateway servers: - port: number: 31400 name: http protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: kiali-gateway namespace: istio-system spec: selector: istio: ingressgateway servers: - port: number: 15029 name: http protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: prometheus-gateway namespace: istio-system spec: selector: istio: ingressgateway servers: - port: number: 15030 name: http protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: grafana-vts namespace: istio-system spec: hosts: - "*" gateways: - grafana-gateway http: - match: - uri: prefix: / route: - destination: host: grafana.istio-system.svc.cluster.local port: number: 3000 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: kiali-vts namespace: istio-system spec: hosts: - "*" gateways: - kiali-gateway http: - match: - uri: prefix: / route: - destination: host: kiali.istio-system.svc.cluster.local port: number: 20001 --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: prometheus-vts namespace: istio-system spec: hosts: - "*" gateways: - prometheus-gateway http: - match: - uri: prefix: / route: - destination: host: prometheus.istio-system.svc.cluster.local port: number: 9090 </code></pre> <p>However - this only routes grafana through ports 31400, 15029 and 15030, while it's supposed to do so just for 31400.</p> <p>If I'm using just one Gateway and rewrite the uri, it throws up a 404 error/tells me the reverse-proxy isn't setup properly</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: all-gateway namespace: istio-system spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: grafana-vts namespace: istio-system spec: hosts: - "*" gateways: - all-gateway http: - match: - uri: prefix: "/grafana" rewrite: uri: / route: - destination: host: grafana.istio-system.svc.cluster.local port: number: 3000 and etc... </code></pre> <p>I'm a bit new to istio, and the examples I've browsed through don't exactly talk about these. If you've got an idea, it'd be swell - is it because of how I've wildcarded the hosts?</p>
<p>Your Gateways and VirtualServices are mixed up since the same hosts (<code>*</code>) are used in all of them, so their behavior is undefined in Istio. I would allocate fake hostnames, for example, <code>my-grafana.com</code> and <code>my-kiali.com</code>, and use them in the Gateway and VirtualService definitions. I would add these fake hostnames to the <code>/etc/hosts</code> file and use them to access Grafana and Kiali from my computer.</p>
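<p>A minimal sketch of what that looks like for Grafana (the hostname <code>my-grafana.com</code> is a made-up placeholder; any name that you also resolve via <code>/etc/hosts</code> will do):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my-grafana.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vts
  namespace: istio-system
spec:
  hosts:
  - "my-grafana.com"
  gateways:
  - grafana-gateway
  http:
  - route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000
</code></pre>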
<p>We have two clusters, named:</p>
<ol>
<li>MyCluster (created by me)</li>
<li>OtherCluster (not created by me)</li>
</ol>
<p>Where "me" is my own AWS IAM user.</p>
<p>I am able to manage the cluster I created, using kubectl:</p>
<pre><code>&gt;&gt;&gt; aws eks update-kubeconfig --name MyCluster --profile MyUser
&gt;&gt;&gt; kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   &lt;none&gt;        443/TCP   59d
</code></pre>
<p>But, I cannot manage the “OtherCluster” cluster (that was not created by me):</p>
<pre><code>&gt;&gt;&gt; aws eks update-kubeconfig --name OtherCluster --profile MyUser
&gt;&gt;&gt; kubectl get svc
NAME   TYPE   CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
error: the server doesn't have a resource type "svc"
</code></pre>
<p>After reading the feedback of some people experiencing the same issue in this <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/157" rel="nofollow noreferrer">github issue</a>, I tried doing this under the context of the user who originally created the "OtherCluster".</p>
<p>I accomplished this by editing “~/.kube/config”, adding an “AWS_PROFILE” value at “users.user.env”. The profile represents the user who created the cluster.</p>
<p>~/.kube/config:</p>
<pre><code>…
users:
- name: OtherCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - OtherCluster
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: OTHER_USER_PROFILE
…
</code></pre>
<p>This worked:</p>
<pre><code># ~/.kube/config is currently pointing to OtherCluster
&gt;&gt;&gt; kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   &lt;none&gt;        443/TCP   1d
</code></pre>
<p>It is obviously not ideal for me to impersonate another person when I am managing the cluster. I would prefer to grant my own user access to manage the cluster via kubectl. Is there any way I can grant permission to manage the cluster to a user other than the original creator? This seems overly restrictive.</p>
<p>When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. <strong>Initially</strong>, only that IAM user can make calls to the Kubernetes API server using kubectl.</p>
<p>To grant additional AWS users the ability to interact with your cluster, you must edit the <code>aws-auth</code> ConfigMap within Kubernetes, adding a new <code>mapUsers</code> entry to the ConfigMap. <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">This EKS doc</a> covers the whole process.</p>
<blockquote>
<p>To add an IAM user: add the user details to the mapUsers section of the ConfigMap, under data. Add this section if it does not already exist in the file. Each entry supports the following parameters:</p>
<ul>
<li>userarn: The ARN of the IAM user to add.</li>
<li>username: The user name within Kubernetes to map to the IAM user. By default, the user name is the ARN of the IAM user.</li>
<li>groups: A list of groups within Kubernetes to which the user is mapped. For more information, see Default Roles and Role Bindings in the Kubernetes documentation.</li>
</ul>
</blockquote>
<p>Example:</p>
<pre><code>apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/my-new-admin-user
      username: my-new-admin-user
      groups:
        - system:masters
</code></pre>
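<p>Assuming you can still authenticate as the cluster creator, the edit itself is done with the command from that doc:</p>
<pre><code>kubectl edit -n kube-system configmap/aws-auth
</code></pre>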
<p>I would like to know what resources are available in the entire K8s cluster that I am using. </p> <p>To be clear, I am not talking about the <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Resource Quotas</a>, because those only define resources per namespace. I would like to know what the capabilities of the entire cluster are (memory, cpu,...). Please note that the sum of all resource quotas is not equal to the capabilities of the cluster. The sum can be greater (creates race condition for resources between namespaces) or smaller (cluster not used to its fullest potential) than the resources of the cluster. </p> <p>Can I use kubectl to answer this query? </p>
<p>You can use the <code>kubectl top</code> command to check the memory and CPU consumption of all the nodes or pods.</p>
<pre><code>kubectl top nodes
</code></pre>
<p>For more information you can run:</p>
<pre><code>kubectl top -h
</code></pre>
<p>For the <code>kubectl top</code> command to work, you need to install <code>metrics-server</code> in the Kubernetes cluster to fetch the CPU and memory metrics.</p>
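<p>Since the question is about total cluster capacity rather than current consumption, you can also read each node's capacity and allocatable resources straight from the API and sum them yourself. A sketch:</p>
<pre><code>kubectl describe nodes | grep -A 5 Capacity
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'
</code></pre>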
<p>I'd like to keep the number of cores in my GKE cluster below 3. This becomes much more feasible if the CPU limits of the K8s replication controllers and pods are reduced from 100m to at most 50m. Otherwise, the K8s pods alone take 70% of one core.</p> <p>I decided against increasing the CPU power of a node. This would be conceptually wrong in my opinion because the CPU limit is defined to be measured in cores. Instead, I did the following:</p> <ul> <li>replacing limitranges/limits with a version with "50m" as default CPU limit (not necessary, but in my opinion cleaner)</li> <li>patching all replication controller in the kube-system namespace to use 50m for all containers</li> <li>deleting their pods</li> <li>replacing all non-rc pods in the kube-system namespace with versions that use 50m for all containers</li> </ul> <p>This is a lot of work and probably fragile. Any further changes in upcoming versions of K8s, or changes in the GKE configuration, may break it.</p> <p>So, is there a better way?</p>
<p>I have found one of the best ways to reduce the system resource requests on a GKE cluster is to use a <a href="https://github.com/kubernetes/autoscaler/commits/master/vertical-pod-autoscaler" rel="noreferrer">vertical autoscaler</a>.</p>
<p>Here are the VPA definitions I have used:</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: kube-dns-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: kube-dns
  updatePolicy:
    updateMode: "Auto"
---
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: heapster-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: heapster-v1.6.0-beta.1
  updatePolicy:
    updateMode: "Initial"
---
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: metadata-agent-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: metadata-agent
  updatePolicy:
    updateMode: "Initial"
---
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: metrics-server-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: metrics-server-v0.3.1
  updatePolicy:
    updateMode: "Initial"
---
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: fluentd-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: fluentd-gcp-v3.1.1
  updatePolicy:
    updateMode: "Initial"
---
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: kube-proxy-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: kube-proxy
  updatePolicy:
    updateMode: "Initial"
</code></pre>
<p><a href="https://i.stack.imgur.com/I65kb.png" rel="noreferrer">Here is a screenshot of what it does to a <code>kube-dns</code> deployment.</a></p>
<p>I am looking for a service proxy (or load balancer) with URL-based affinity.</p>
<p>This is for use in Kubernetes, inside the cluster: I am looking for an "internal" load balancer, I don't need to expose the service outside.</p>
<p>By default, a Service in Kubernetes uses a "round robin" algorithm.</p>
<p>I would like some affinity based on a part of the HTTP URL: a first request would go to a random pod, and subsequent requests that use the same URL would (preferably) go to the same pod.</p>
<p>I have read some docs about affinity based on source IP; does this exist based on URLs?</p>
<p>I have quickly read about Envoy; maybe using the "Ring hash" load-balancing algorithm would do, but I don't know if it's possible to hash based on the URL.</p>
<p>Maybe using the "ipvs" proxy mode of kube-proxy (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs</a>) would do, but I see only "destination hashing" and "source hashing" as load-balancing algorithms, and I don't know how to configure it either.</p>
<p>As you have already mentioned, the <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy/ipvs" rel="nofollow noreferrer">IPVS</a> proxy algorithm hashes source and destination IP addresses in order to generate a unique hash key for load balancing. However, it operates at the L4 transport layer, intercepting network traffic for TCP or UDP services. Therefore it cannot easily inspect an HTTP request and make a routing decision based on the URL path.</p>
<p><a href="https://www.envoyproxy.io/docs/envoy/v1.5.0/" rel="nofollow noreferrer">Envoy</a> proxy supports consistent hashing on <a href="https://www.envoyproxy.io/docs/envoy/v1.5.0/api-v1/route_config/route#config-http-conn-man-route-table-hash-policy" rel="nofollow noreferrer">HTTP header</a> values, specified inside the <a href="https://www.envoyproxy.io/docs/envoy/v1.5.0/intro/arch_overview/http_routing#arch-overview-http-routing" rel="nofollow noreferrer">HTTP router filter</a> along with the <a href="https://www.envoyproxy.io/docs/envoy/v1.5.0/intro/arch_overview/load_balancing.html#ring-hash" rel="nofollow noreferrer">Ring hash</a> load-balancing policy. Therefore, you can specify the appropriate header name in the hash policy, and it will be used to obtain the hash key for load balancing.</p>
<pre><code>hash_policy:
  header:
    header_name: "x-url"
</code></pre>
<p>Alternatively, you can consider using <a href="https://istio.io" rel="nofollow noreferrer">Istio</a> as an intermediate proxy, which uses an extended version of Envoy. Kubernetes services are enrolled into the service mesh by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices. Istio can also be used for <a href="https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/#LoadBalancerSettings-ConsistentHashLB" rel="nofollow noreferrer">Hash</a> consistent load balancing with session affinity based on HTTP headers via the <a href="https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/" rel="nofollow noreferrer">DestinationRule</a> resource.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: example
spec:
  host: my-service.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-url
</code></pre>
<p>I have a private Docker registry inside a Kubernetes 1.13 cluster, exposed by a Kubernetes NGINX ingress controller 0.23.0 and the following <code>Ingress</code> object:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: my-name
  namespace: my-namespace
spec:
  rules:
  - host: my-domain
    http:
      paths:
      - backend:
          serviceName: registry
          servicePort: 5000
        path: /?(.*)
  tls:
  - hosts:
    - my-domain
    secretName: my-secret
</code></pre>
<p>This allows me to pull an existing image as follows:</p>
<pre><code>docker image pull my-cluster/my-image
</code></pre>
<p>I would now like to change the setup such that the registry is exposed at <code>my-cluster/registry</code>. How can this be done, and is it even possible?</p>
<p>Changing <code>path</code> to <code>/registry/?(.*)</code> did not do the trick. <code>docker login my-cluster/registry</code> now produces the following error message, and <code>docker image pull my-cluster/registry/my-image</code> also does not work. My current guess as to the root cause is that the registry must also use the prefix <code>registry</code> (as configured in the <code>Ingress</code>) for its internal redirects. If applicable, how can this be configured (preferably in the <code>Ingress</code> too)?</p>
<pre><code>Error response from daemon: login attempt to https://my-cluster/v2/ failed with status: 404 Not Found
</code></pre>
<p>I have now concluded that referring to a Docker registry by a URL-like location (such as <code>my-cluster/registry</code>) as opposed to a host-like location (such as <code>my-cluster</code> with an optional port) is not possible.</p> <p>So I'll reserve a separate IP address and certificate for <code>registry.my-cluster</code> and proceed with that.</p>
<p>I am trying to play with init containers. I want to use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init container</a> to create a file, and the default container to check whether the file exists and sleep for a while.</p>
<p>my yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -e /workdir/test.txt ]; then sleep 99999; fi']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo&gt;/workdir/test.txt']
</code></pre>
<p>When I am trying to debug from the alpine image, I use this command to create a shell:</p>
<blockquote>
<p>kubectl run alpine --rm -ti --image=alpine /bin/sh</p>
</blockquote>
<pre><code>If you don't see a command prompt, try pressing enter.
/ # if [ -e /workdir/test.txt ]; then sleep 3; fi
/ # mkdir /workdir; echo&gt;/workdir/test.txt
/ # if [ -e /workdir/test.txt ]; then sleep 3; fi
    (here the shell sleeps for 3 seconds)
/ #
</code></pre>
<p>And it seems like the commands work as expected.</p>
<p>But on my real k8s cluster I get only CrashLoopBackOff for the main container.</p>
<blockquote>
<p>kubectl describe pod init-test-pod</p>
</blockquote>
<p>shows me only this error:</p>
<pre><code>Containers:
  myapp-container:
    Container ID:  docker://xxx
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:xxx
    Port:          &lt;none&gt;
    Host Port:     &lt;none&gt;
    Command:
      sh
      -c
      if [ -e /workdir/test.txt ]; then sleep 99999; fi
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
    Ready:          False
    Restart Count:  3
    Environment:    &lt;none&gt;
</code></pre>
<p>The problem here is that your main container is not finding the folder you created. When your init container finishes, the folder gets wiped with it. You will need to use a Persistent Volume to be able to share the folder between the two containers:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: mypvc
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -f /workdir/test.txt ]; then sleep 99999; fi']
    volumeMounts:
    - name: mypvc
      mountPath: /workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo&gt;/workdir/test.txt']
    volumeMounts:
    - name: mypvc
      mountPath: /workdir
</code></pre>
<p>You can as well look at <code>emptyDir</code>, so you won't need the PVC:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: init-test-pod
spec:
  volumes:
  - name: mydir
    emptyDir: {}
  containers:
  - name: myapp-container
    image: alpine
    command: ['sh', '-c', 'if [ -f /workdir/test.txt ]; then sleep 99999; fi']
    volumeMounts:
    - name: mydir
      mountPath: /workdir
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'mkdir /workdir; echo&gt;/workdir/test.txt']
    volumeMounts:
    - name: mydir
      mountPath: /workdir
</code></pre>
<p>I've created a custom helm chart with <code>elastic-stack</code> as a subchart, with the following configurations.</p>
<pre><code># requirements.yaml
dependencies:
  - name: elastic-stack
    version: 1.5.0
    repository: '@stable'
</code></pre>
<pre><code># values.yaml
elastic-stack:
  kibana:
    # at this level enabled is not recognized (does not work)
    # enabled: true
    # configs like env, only work at this level
    env:
      ELASTICSEARCH_URL: http://foo-elasticsearch-client.default.svc.cluster.local:9200
    service:
      externalPort: 80
# enabled only works at root level
elasticsearch:
  enabled: true
kibana:
  enabled: true
logstash:
  enabled: false
</code></pre>
<p>What I don't get is why I have to define the <code>enabled</code> tags outside the <code>elastic-stack:</code> block and all other configurations inside it.</p>
<p>Is this normal helm behavior or some misconfiguration in the elastic-stack chart?</p>
<p><a href="https://helm.sh/docs/developing_charts/#tags-and-condition-fields-in-requirements-yaml" rel="noreferrer">Helm conditions</a> are evaluated in the top parent's values:</p> <blockquote> <p>Condition - The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value</p> </blockquote> <p>Take a look at the conditions in <a href="https://github.com/helm/charts/blob/dbfe5e6b55f67ddb513e41a4417160a1ab8c7b6d/stable/elastic-stack/requirements.yaml" rel="noreferrer">requirements.yaml</a> from stable/elastic-stack:</p> <pre><code>- name: elasticsearch version: ^1.17.0 repository: https://kubernetes-charts.storage.googleapis.com/ condition: elasticsearch.enabled - name: kibana version: ^1.1.0 repository: https://kubernetes-charts.storage.googleapis.com/ condition: kibana.enabled - name: logstash version: ^1.2.1 repository: https://kubernetes-charts.storage.googleapis.com/ condition: logstash.enabled </code></pre> <p>The conditions paths are <code>elasticsearch.enabled</code>, <code>kibana.enabled</code> and <code>logstash.enabled</code>, so you need to use them in your parent chart values.</p>
<p>I am trying to size my AKS clusters. From what I understood and read, the number of microservices and their replica counts are the primary parameters. The resource usage of each microservice, and the predicted growth of that usage over the coming years, also needs to be considered. But all this information seems too scattered to arrive at a number for AKS sizing. By sizing I mean: how many nodes should be assigned, what configuration those nodes should have, how many pods should be planned for, how many IP addresses should be reserved based on the number of pods, and so on.</p>
<blockquote>
<p>Is there any standard matrix here, or a practical way of calculating AKS cluster sizing, based on anyone's experience?</p>
</blockquote>
<p>No, I'm pretty sure there is none (and how could there be?). Just take your pod CPU/memory usage and sum it up; you'll get an estimate of the resources needed to run your stuff. Add the k8s system services on top of that.</p>
<p>Also, like Peter mentions in his comment, you can always scale your cluster later, so such up-front planning seems a bit unreasonable.</p>
<p>I'm deploying locally in docker-for-desktop, so that I can migrate to a Kubernetes cluster in the future.</p>
<p>However I face a problem: directories in the Docker container/pod are overwritten when persistent volumes are used.</p>
<p>I'm pulling the latest SonarQube image. A lot of plugins and quality profiles are pre-installed, which is exactly what I want. If I don't use persistent volumes, everything works as expected. When I use a PV, all the data in the image is overwritten. I use helm.</p>
<p>In my deployment.yaml I use this:</p>
<pre><code>{{- if (eq .Values.volumes.usePersistent "true") }}
volumeMounts:
  - mountPath: "/opt/sonarqube/data"
    name: sonarqube-data
  - mountPath: "/opt/sonarqube/extensions"
    name: sonarqube-extensions
volumes:
  - name: sonarqube-data
    persistentVolumeClaim:
      claimName: sonarqube-data-pv-claim
  - name: sonarqube-extensions
    persistentVolumeClaim:
      claimName: sonarqube-extensions-pv-claim
{{- end }}
</code></pre>
<p>In my storage.yaml I use this:</p>
<pre><code>{{- if (eq .Values.volumes.usePersistent "true") }}
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sonarqube-data-pv-volume
  labels:
    type: local
    app: sonarqube-data
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/tmp/toolbox/sonarqube/data"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sonarqube-extensions-pv-volume
  labels:
    type: local
    app: sonarqube-extensions
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/tmp/toolbox/sonarqube/extensions"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarqube-data-pv-claim
  labels:
    app: sonarqube-data
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarqube-extensions-pv-claim
  labels:
    app: sonarqube-extensions
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
{{- end }}
</code></pre>
<p>The PVCs are bound and working. All the data that I need is in the 'data' and 'extensions' folders in the container, coming from the image. For example, in the extensions folder:</p>
<pre><code>sonarqube@sonarqube-deployment-6b8bdfb766-klnwh:/opt/sonarqube/extensions/plugins$ ls
README.txt                           sonar-java-plugin-5.11.0.17289.jar       sonar-scala-plugin-1.5.0.315.jar
sonar-csharp-plugin-7.11.0.8083.jar  sonar-javascript-plugin-5.1.1.7506.jar   sonar-scm-git-plugin-1.8.0.1574.jar
sonar-css-plugin-1.0.3.724.jar       sonar-kotlin-plugin-1.5.0.315.jar        sonar-scm-svn-plugin-1.9.0.1295.jar
sonar-flex-plugin-2.4.0.1222.jar     sonar-ldap-plugin-2.2.0.608.jar          sonar-typescript-plugin-1.9.0.3766.jar
sonar-go-plugin-1.1.0.1612.jar       sonar-php-plugin-3.0.0.4537.jar          sonar-vbnet-plugin-7.11.0.8083.jar
sonar-html-plugin-3.1.0.1615.jar     sonar-python-plugin-1.13.0.2922.jar      sonar-xml-plugin-2.0.1.2020.jar
sonar-jacoco-plugin-1.0.1.143.jar    sonar-ruby-plugin-1.5.0.315.jar
</code></pre>
<p>I have made the following directories in my /tmp folder:</p>
<pre><code>- data
- extensions
- downloads
- jdbc-driver
- plugins
</code></pre>
<p>I know I must specify the same folders in my PV as in my container. I checked: all the folders are there in my /tmp folder, but they are empty. The plugins folder is empty; all the plugin .jar files are gone.</p>
<p>BTW, I did not include this in the initial post, but I'm also using a PostgresDB with a PVC. pg-deploy.yaml:</p>
<pre><code>{{- if (eq .Values.volumes.usePersistent "true") }}
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: sonarqubedb
volumes:
  - name: sonarqubedb
    persistentVolumeClaim:
      claimName: postgres-sq-pv-claim
{{- end }}
</code></pre>
<p>storage.yaml:</p>
<pre><code>{{- if (eq .Values.volumes.usePersistent "true") }}
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-sq-pv-volume
  labels:
    type: local
    app: postgres-sonarqube
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/tmp/toolbox/postgres-sonarqube"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-sq-pv-claim
  labels:
    app: postgres-sonarqube
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
{{- end }}
</code></pre>
<p>To avoid overwriting the existing files/content inside the same directory, you can use <strong>subPath</strong> to mount the data and extensions directories (in the example below) into the existing container file system. For further detail, see <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">sub-path</a>.</p>
<pre><code>volumeMounts:
  - mountPath: "/opt/sonarqube/data"
    name: sonarqube-data
    subPath: data
  - mountPath: "/opt/sonarqube/extensions"
    name: sonarqube-extensions
    subPath: extensions
</code></pre>
<p>This works. However, it didn't work until I did the same for the database that SonarQube is using:</p>
<pre><code>volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: sonarqubedb
    subPath: data
</code></pre>
<p>I have a Kubernetes cluster (3 VMs on a VMware server) working with a non-routed Flannel network (10.0.0.1/24) and a "public" private IP with an Nginx reverse proxy... 10.10.0.1/24. So all domains point to 10.10.0.10 and I do an internal redirect to the exposed service in 10.0.0.1/24.</p>
<p>The problem is that I have 2 DMZs... For security reasons, I don't want to have 2 interfaces (eth0, eth1) with one DMZ each. If an attacker hacks my kubemaster, they can jump from one DMZ to the other.</p>
<p>I want to manage this like the VMware server does... passing a trunk with a native VLAN to a single port. Is there some way to configure a single interface (eth0) with a trunk and native VLAN, and use Contiv to expose Kubernetes services on different VLANs directly?</p>
<p>Honestly, I don't want to have one cluster for each VLAN of services...</p>
<p>Thanks in advance!</p>
<p>This can be accomplished by configuring your Kubernetes nodes to be BGP neighbors of your router and then installing MetalLB and configuring it in BGP mode.</p> <p><a href="https://metallb.universe.tf" rel="nofollow noreferrer">https://metallb.universe.tf</a></p>
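<p>A rough sketch of what the MetalLB side of that looks like in BGP mode (the ASNs, peer address, and address ranges below are placeholders; you would define one address pool per DMZ/VLAN and let your router propagate the routes):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1     # placeholder: your router
      peer-asn: 64501            # placeholder: router ASN
      my-asn: 64500              # placeholder: cluster ASN
    address-pools:
    - name: dmz1
      protocol: bgp
      addresses:
      - 192.168.10.0/24          # placeholder: pool for the first DMZ
    - name: dmz2
      protocol: bgp
      addresses:
      - 192.168.20.0/24          # placeholder: pool for the second DMZ
</code></pre>
<p>Services can then pick a pool via the <code>metallb.universe.tf/address-pool</code> annotation.</p>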
<p>I'm having trouble passing the result of a shell script as an argument to a Kubernetes CronJob on a regular basis.</p>
<p>Is there any good way to set a value that is refreshed every day?</p>
<p>I use a Kubernetes CronJob in order to perform a daily task.</p>
<p>With the CronJob, a Rust application is launched and executes a batch process.</p>
<p>As one of the arguments for the Rust app, I pass a target date (a yyyy-MM-dd formatted string) as a command-line argument.</p>
<p>Therefore, I tried to pass the date value into the definition yaml file for the CronJob as follows.</p>
<p>And I try setting the ${TARGET_DATE} value with the following pipeline. In sample.sh, the value for TARGET_DATE is exported.</p>
<pre class="lang-sh prettyprint-override"><code>cat sample.yml | envsubst | kubectl apply -f -
</code></pre>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"]
          restartPolicy: Never
</code></pre>
<p>I expected that this would create a fresh TARGET_DATE value every day, but it does not change from the date I set the first time.</p>
<p>Is there any good way to regularly set the result of a shell script into the args of a CronJob yaml?</p>
<p>Thanks.</p>
<p>You can use init containers for that: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<p>The idea is the following: you run the script that sets up this value inside an init container and write the value into a shared emptyDir volume. Then you read the value from the main container. Here is an example:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
          - name: init-script
            image: my-init-image
            volumeMounts:
            - name: date
              mountPath: /date
            command:
            - sh
            - -c
            - "/my-script &gt; /date/target-date.txt"
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"] # adjust this part to read from file
            volumeMounts:
            - name: date
              mountPath: /date
          restartPolicy: Never
          volumes:
          - name: date
            emptyDir: {}
</code></pre>
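<p>For the "adjust this part to read from file" comment, one way (a sketch; it assumes <code>./run</code> takes the date as its first argument) is to wrap the command in a shell so the file written by the init container is read at container startup:</p>
<pre><code>          containers:
          - name: some-container
            image: sample/some-image
            # read the date produced by the init container
            command: ["sh", "-c", "./run \"$(cat /date/target-date.txt)\""]
            volumeMounts:
            - name: date
              mountPath: /date
</code></pre>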
<p>I'm seeing the following error when running kubectl commands against my local cluster (Minikube and through Docker's new Kubernetes functionality):</p>
<pre><code>Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p>All commands work (create deployment, check pods etc.) until this error seemingly randomly rears its head. After the first occurrence, it won't stop occurring for any and all kubectl commands, rendering kubectl useless. The only way to stop it is to completely destroy the local cluster and start again... until it happens again 5 minutes later!</p>
<p>Can anyone shed some light on this please? Please note a lot of people are asking about this sort of issue in regards to AWS, GCE etc. and they have different resolutions based on the platform they are running on; I am yet to see a solution for when it occurs locally.</p>
<p>Solved by increasing the memory available to Docker from 2gb up to 8gb.</p> <p>To do this, click on the docker icon -> Preferences -> Advanced, then use the slider for "Memory" to increase the available memory to the docker process as you wish.</p> <p>@Koshmaar I hope that helps you as well!</p>
<p>Headless VirtualBox successfully runs inside a Docker container:</p>
<pre><code>docker run --device=/dev/vboxdrv:/dev/vboxdrv my-vb
</code></pre>
<p>I need to run this image on Kubernetes and I get:</p>
<pre><code>VBoxHeadless: Error -1909 in suplibOsInit!
VBoxHeadless: Kernel driver not accessible
</code></pre>
<p>Kubernetes object:</p>
<pre><code>metadata:
  name: vbox
  labels:
    app: vbox
spec:
  selector:
    matchLabels:
      app: vbox
  template:
    metadata:
      labels:
        app: vbox
    spec:
      securityContext:
        runAsUser: 0
      containers:
      - name: vbox-vm
        image: my-vb
        imagePullPolicy: 'Always'
        ports:
        - containerPort: 6666
        volumeMounts:
        - mountPath: /root/img.vdi
          name: img-vdi
        - mountPath: /dev/vboxdrv
          name: vboxdrv
      volumes:
      - name: img-vdi
        hostPath:
          path: /root/img.vdi
          type: File
      - name: vboxdrv
        hostPath:
          path: /dev/vboxdrv
          type: CharDevice
</code></pre>
<p>This image runs in Docker, so the problem must be in the Kubernetes configuration.</p>
<p>A slight modification to the configuration is required for this to work:</p>
<pre><code>metadata:
  name: vbox
  labels:
    app: vbox
spec:
  selector:
    matchLabels:
      app: vbox
  template:
    metadata:
      labels:
        app: vbox
    spec:
      securityContext:
        runAsUser: 0
      containers:
      - name: vbox-vm
        image: my-vb
        imagePullPolicy: 'Always'
        securityContext:    # &lt;&lt; added
          privileged: true  # &lt;&lt; added
        ports:
        - containerPort: 6666
        volumeMounts:
        - mountPath: /root/img.vdi
          name: img-vdi
        - mountPath: /dev/vboxdrv
          name: vboxdrv
      volumes:
      - name: img-vdi
        hostPath:
          path: /root/img.vdi
          type: File
      - name: vboxdrv
        hostPath:
          path: /dev/vboxdrv
          type: CharDevice
</code></pre>
<p>To be able to run privileged containers you'll need to have:</p>
<ul>
<li>kube-apiserver running with --allow-privileged</li>
<li>kubelet (on all hosts that might run this container) running with --allow-privileged=true</li>
</ul>
<p>See more at <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers</a></p>
<p>Once it works, do it properly via a <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">PodSecurityPolicy</a></p>
<p>I understand that this question has been asked dozens of times, but nothing I found through internet searching has helped me.</p>
<p>My setup:</p>
<pre><code>CentOS Linux release 7.5.1804 (Core)
Docker Version: 18.06.1-ce
Kubernetes: v1.12.3
</code></pre>
<p>Installed following the official guide and this one: <a href="https://www.techrepublic.com/article/how-to-install-a-kubernetes-cluster-on-centos-7/" rel="noreferrer">https://www.techrepublic.com/article/how-to-install-a-kubernetes-cluster-on-centos-7/</a></p>
<p>The CoreDNS pods are in Error/CrashLoopBackOff state:</p>
<pre><code>kube-system   coredns-576cbf47c7-8phwt   0/1   CrashLoopBackOff   8   31m
kube-system   coredns-576cbf47c7-rn2qc   0/1   CrashLoopBackOff   8   31m
</code></pre>
<p>My /etc/resolv.conf:</p>
<pre><code>nameserver 8.8.8.8
</code></pre>
<p>Also tried with my local DNS resolver (router):</p>
<pre><code>nameserver 10.10.10.1
</code></pre>
<p>Setup and init:</p>
<pre><code>kubeadm init --apiserver-advertise-address=10.10.10.3 --pod-network-cidr=192.168.1.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
<p>I tried to solve this by editing the CoreDNS ConfigMap (<code>kubectl edit cm coredns -n kube-system</code>) and changing</p>
<pre><code>proxy . /etc/resolv.conf
</code></pre>
<p>directly to</p>
<pre><code>proxy . 10.10.10.1
</code></pre>
<p>or <code>proxy . 8.8.8.8</code>.</p>
<p>I also tried:</p>
<pre><code>kubectl -n kube-system get deployment coredns -o yaml | sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | kubectl apply -f -
</code></pre>
<p>And still nothing helps me.</p>
<p>Error from the logs:</p>
<pre><code>plugin/loop: Seen "HINFO IN 7847735572277573283.2952120668710018229." more than twice, loop detected
</code></pre>
<p>The other thread - <a href="https://stackoverflow.com/questions/53075796/coredns-pods-have-crashloopbackoff-or-error-state">coredns pods have CrashLoopBackOff or Error state</a> - didn't help at all, because I haven't hit any of the solutions that were described there. Nothing helped.</p>
<p><strong>I got the same error and managed to get it working with the steps below.</strong></p>
<p><strong>However, you missed 8.8.4.4.</strong></p>
<p>sudo nano /etc/resolv.conf</p>
<pre><code>nameserver 8.8.8.8
nameserver 8.8.4.4
</code></pre>
<p>Run the following commands to restart the daemon and the docker service:</p>
<pre><code>sudo systemctl daemon-reload
sudo systemctl restart docker
</code></pre>
<p>If you are using kubeadm, make sure you delete the entire cluster from the master and provision the cluster again:</p>
<pre><code>kubectl drain &lt;node_name&gt; --delete-local-data --force --ignore-daemonsets
kubectl delete node &lt;node_name&gt;
kubeadm reset
</code></pre>
<p>Once you provision the new cluster:</p>
<pre><code>kubectl get pods --all-namespaces
</code></pre>
<p>It should give the expected result below:</p>
<pre><code>NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   calico-node-gldlr          2/2     Running   0          24s
kube-system   coredns-86c58d9df4-lpnj6   1/1     Running   0          40s
kube-system   coredns-86c58d9df4-xnb5r   1/1     Running   0          40s
kube-system   kube-proxy-kkb7b           1/1     Running   0          40s
kube-system   kube-scheduler-osboxes     1/1     Running   0          10s
</code></pre>
<p>I am trying to get heapster eventer to work on a cluster with RBAC enabled. Using the same roles that work for the /heapster command does not seem to be sufficient.</p>
<p>On running the pod, the logs fill up with entries like this:</p>
<pre><code>Failed to load events: events is forbidden: User "system:serviceaccount:kube-system:heapster" cannot list events at the cluster scope
</code></pre>
<p>Does anyone know the proper authorization for my heapster service account, short of admin rights?</p>
<p>Eventer deployment doc:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: eventer
  name: eventer
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: eventer
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: eventer
    spec:
      serviceAccountName: heapster
      containers:
      - name: eventer
        image: k8s.gcr.io/heapster-amd64:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /eventer
        - --source=kubernetes:https://kubernetes.default
        - --sink=log
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
</code></pre>
<p>RBAC:</p>
<pre><code># Original: https://brookbach.com/2018/10/29/Heapster-on-Kubernetes-1.11.3.html
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heapster
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - namespaces
  - events
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
</code></pre>
<p>Cluster role binding:</p>
<pre><code># Original: https://github.com/kubernetes-retired/heapster/blob/master/deploy/kube-config/rbac/heapster-rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
</code></pre>
<p>Related question: <a href="https://stackoverflow.com/questions/39496955/how-to-propagate-kubernetes-events-from-a-gke-cluster-to-google-cloud-log">How to propagate kubernetes events from a GKE cluster to google cloud log</a></p>
<p>All of the above objects seem correct to me.</p>
<p>It's just a hunch, but perhaps you created the Deployment first and then the ClusterRole and/or ClusterRoleBinding and/or the ServiceAccount itself. Make sure you have these 3 first, then delete the current heapster Pods (or the Deployment, and wait for the Pod to terminate before recreating the Deployment).</p>
<p>(Create the ServiceAccount by <code>kubectl create sa heapster -n kube-system</code>)</p>
<p>Also, you can test if the ServiceAccount can list the events by:</p>
<pre><code>kubectl get ev --all-namespaces --as system:serviceaccount:kube-system:heapster
</code></pre>
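<p>Another quick check, assuming RBAC is the active authorizer, is to ask the API server directly whether the ServiceAccount would be allowed:</p>
<pre><code>kubectl auth can-i list events --as=system:serviceaccount:kube-system:heapster
</code></pre>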
<p>I'm trying to redirect an ingress for a service deployed in Azure Kubernetes to https. Whatever I try doesn't work. I tried configuring the Ingress and Traefik itself (via ConfigMap) with no effect.</p>
<p>The config for Traefik looks as follows:</p>
<pre><code>---
# Traefik_config.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
# traefik.toml
data:
  traefik.toml: |
    defaultEntryPoints = ["http","https"]

    [entryPoints]
      [entryPoints.http]
        address = ":80"
        [entryPoints.http.redirect]
          entryPoint = "https"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]

    [frontends]
      [frontends.frontend2]
        backend = "backend1"
        passHostHeader = true
        # overrides default entry points
        entrypoints = ["http", "https"]

    [backends]
      [backends.backend1]
        [backends.backend1.servers.server1]
          url = "http://auth.mywebsite.com"
</code></pre>
<p>The subject of the redirection is a containerized IdentityServer API website with no TLS encryption. There are a couple of questions on the matter:</p>
<ul>
<li>What's the best way to redirect the frontend app in Azure Kubernetes with Traefik?</li>
<li>In the config the frontend is numbered, i.e. "frontend2". I assume this is a sequential number of the app on Traefik's dashboard. The problem is, the dashboard only shows the total sum of apps. If there are many of them, how do I figure out what the number is?</li>
<li>When I apply annotations to the Ingress, like "traefik.ingress.kubernetes.io/redirect-permanent: true", the respective labels are not showing up in Traefik's dashboard for the respective app. Is there any reason for that?</li>
</ul>
<p>Your configuration for redirecting http to https looks good. If you have followed the official Traefik docs to deploy on Kubernetes, the Traefik ingress controller Service will not have port 443. Make sure you have port 443 opened on the Service with service type <code>LoadBalancer</code>. Once we open a port in the Service, Azure opens the same port on the Azure load balancer. The Service yaml is here:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: LoadBalancer
</code></pre>
<p>If you want to redirect all http to https in your cluster, you can go for the redirection in the configuration file. If you want to redirect only some of the services, then add annotations to the Ingress to achieve redirection for specific services:</p>
<pre><code>traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
</code></pre>
<p>After setting up the redirection, the Traefik dashboard reflects it, as shown below. You can also set up a permanent redirection using <code>traefik.ingress.kubernetes.io/redirect-permanent: "true"</code>.</p>
<p><a href="https://i.stack.imgur.com/VTxED.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VTxED.png" alt="enter image description here"></a></p>
<p>I am trying to configure Prometheus, which is included in the GitLab Helm chart according to <a href="https://gitlab.com/charts/gitlab/blob/master/requirements.yaml" rel="noreferrer">https://gitlab.com/charts/gitlab/blob/master/requirements.yaml</a></p>
<p>My main issue is how to configure Prometheus, as the following <code>values.yaml</code> seems to be ignored:</p>
<pre><code>global:
  registry:
    enabled: false
  # Disabling minio still requires to disable gitlab.minio or it will complain about "A valid backups.objectStorage.config.secret is needed"
  minio:
    enabled: false
  ingress:
    configureCertmanager: false
    class: "nginx"
  ...

prometheus:
  install: true
  rbac:
    create: true
  #kubeStateMetrics:
  #  enabled: true
  nodeExporter:
    enabled: true
  #pushgateway:
  #  enabled: true
  server:
    configMapOverrideName: prometheus-config
    configPath: /etc/prometheus/conf/prometheus.yml
    persistentVolume:
      enabled: true
      accessModes:
        - ReadWriteMany
      mountPath: /etc/prometheus/conf
      # Increase afterwards, this is for my tests
      size: 2Gi
  alertmanager:
    enabled: true
    # Overriding the default configuration with the existing one
    configMapOverrideName: "alertmanager"
    configFileName: config.yml
    persistentVolume:
      enabled: true
      accessModes:
        - ReadWriteMany
      mountPath: /prometheus
      # Increase afterwards, this is for my tests
      size: 2Gi
</code></pre>
<p>Checked the link you provided, and it seems you are trying to add values into the values.yaml of your parent chart, where Prometheus is a dependent sub-chart.</p>
<p>Specifying values in the parent values.yaml file is done exactly in the way you provided above.</p>
<p>Values for a sub-chart should go into a property named exactly like the sub-chart.</p>
<pre><code>parentProp1: value
parentProp2: value

global:
  globalProp1: value
  globalProp2: value

subchart1:
  subchartProp1: value
  subchartProp2: value
</code></pre>
<p>Now in the above set of values, let's assume there is a <code>parentchart</code> and it has a sub-chart named <code>subchart1</code>. You need to understand the following points:</p>
<ul>
<li><code>parentProp1</code> and <code>parentProp2</code> can only be accessed in <code>parentchart</code> and not in <code>subchart1</code>, as <code>Values.parentProp1</code> and <code>Values.parentProp2</code></li>
<li>global properties can be accessed from both parent and subchart1 as <code>Values.global.globalProp1</code></li>
<li><code>subchartProp1</code> and <code>subchartProp2</code> can be accessed as <code>Values.subchart1.subchartProp1</code> and <code>Values.subchart1.subchartProp2</code> in <code>parentchart</code></li>
<li><code>subchartProp1</code> and <code>subchartProp2</code> can be accessed as <code>Values.subchartProp1</code> and <code>Values.subchartProp2</code> in <code>subchart1</code></li>
</ul>
<p>Also please don't forget to use the proper syntax of double curly braces with a leading dot: <code>{{ .Values.xyz }}</code></p>
<p>I hope it helps. :)</p>
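<p>For example (a minimal sketch of the scoping rules listed above), in a template of the parent chart:</p>
<pre><code>{{ .Values.parentProp1 }}
{{ .Values.global.globalProp1 }}
{{ .Values.subchart1.subchartProp1 }}
</code></pre>
<p>while inside <code>subchart1</code>'s own templates the same values are reached as <code>{{ .Values.global.globalProp1 }}</code> and <code>{{ .Values.subchartProp1 }}</code>.</p>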
<p><strong>I'm setting up a multi-node Cassandra cluster in Kubernetes (Azure AKS). Since this is a headless service with StatefulSet pods that have no external IP, how can I connect my Spark application to Cassandra, which is in the Kubernetes cluster?</strong></p>
<p><b>We have tried with the cluster IP and the ingress IP as well, but only a single pod comes up; the rest keep failing.</b></p>
<p>I have 3 manifests:</p>
<ol>
<li>Service</li>
<li>PersistentVolumeClaim</li>
<li>StatefulSet</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myvolume-disk-claim
spec:
  storageClassName: default
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: "apps/v1"
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        env:
        - name: CASSANDRA_SEEDS
          value: cassandra-0.cassandra.default.svc.cluster.local
        - name: MAX_HEAP_SIZE
          value: 256M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_CLUSTER_NAME
          value: "Cassandra"
        - name: CASSANDRA_DC
          value: "DC1"
        - name: CASSANDRA_RACK
          value: "Rack1"
        - name: CASSANDRA_ENDPOINT_SNITCH
          value: GossipingPropertyFileSnitch
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /var/lib/cassandra/data
          name: myvolume-disk-claim
  volumeClaimTemplates:
  - metadata:
      name: myvolume-disk-claim
    spec:
      storageClassName: default
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>
<p>Expected result (public IP as external IP):</p>
<pre><code>dspg@Digiteds28:$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP     PORT(S)    AGE
kubernetes   ClusterIP   10.0.0.1     &lt;none&gt;          443/TCP    1h
cassandra    ClusterIP   None         154.167.90.98   9042/TCP   1h

dspg@Digiteds28:$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
cassandra-0   1/1     Running   0          59m
cassandra-1   1/1     Running   0          58m
cassandra-2   1/1     Running   0          56m
</code></pre>
<p>Actual output:</p>
<pre><code>dspg@Digiteds28:$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.0.0.1     &lt;none&gt;        443/TCP    1h
cassandra    ClusterIP   None         &lt;none&gt;        9042/TCP   1h

dspg@Digiteds28:$ kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
cassandra-0   1/1     Running   0          59m
cassandra-1   1/1     Running   0          58m
cassandra-2   1/1     Running   0          56m
</code></pre>
<p>Now this does not include an external IP to connect the application to.</p>
<p>It depends on what exactly you are trying to do. If you need an external IP then in general you'd need to create an additional Service object (probably <code>type: LoadBalancer</code>) like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra-ext
spec:
  type: LoadBalancer
  ports:
  - port: 9042
  selector:
    app: cassandra
</code></pre>
<p>If you need to reach it from within the cluster then use the DNS name <code>cassandra-0.cassandra.default</code> from the other pod (if the StatefulSet was deployed in the default namespace).</p>
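<p>With such an external Service in place, a Spark application outside the cluster can point at the load balancer IP on port 9042. From inside the cluster, using the spark-cassandra-connector, it would look roughly like this (a sketch; the connector property names are the connector's standard ones, the host is the headless-service DNS name from the manifests above):</p>
<pre><code>spark-submit \
  --conf spark.cassandra.connection.host=cassandra-0.cassandra.default.svc.cluster.local \
  --conf spark.cassandra.connection.port=9042 \
  ...
</code></pre>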
<p>I have a Deployment with three replicas, each one started on a different node, behind an ingress. For tests and troubleshooting, I want to see which pod/node served my request. How is this possible?</p>
<p>The only way I know is to open the logs on all of the pods, make my request and search for the pod that has my request in its access log. But this is complicated and error prone, especially on production apps with requests from other users.</p>
<p>I'm looking for something like an HTTP response header like this:</p>
<pre><code>X-Kubernetes-Pod: mypod-abcdef-23874
X-Kubernetes-Node: kubw02
</code></pre>
<p>AFAIK, there is no feature like that out of the box.</p>
<p>The easiest way I can think of is adding this information as headers yourself from your API.</p>
<p>You technically have to <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Expose Pod Information to Containers Through Environment Variables</a> and read it from code to add the headers to the response.</p>
<p>It would be something like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
          printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
          sleep 10;
        done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never
</code></pre>
<p>And in your API you read these environment variables and insert them into the response headers.</p>
<p>I'm trying to build a web app where each user gets their own instance of the app, running in its own container. I'm new to kubernetes so I'm probably not understanding something correctly.</p> <p>I will have a few physical servers to use, which in kubernetes as I understand are called nodes. For each node, there is a limitation of 100 pods. So if I am building the app so that each user gets their own pod, will I be limited to 100 users per physical server? (If I have 10 servers, I can only have 500 users?) I suppose I could run multiple VMs that act as nodes on each physical server but doesn't that defeat the purpose of containerization?</p>
<p>The main issue with having too many pods on a node is that it degrades node performance and makes it slower (and sometimes unreliable) to manage the containers. Each pod is managed individually, so increasing the count takes more time and more resources.</p>
<p>When you create a pod, the runtime needs to keep constant track of it: running probes (readiness and liveness), monitoring, routing rules and many other small bits that add up to the load on the node.</p>
<p>Containers also require processor time to run properly; even though you can allocate fractions of a CPU, adding too many containers/pods will increase context switching and degrade performance once the pods are consuming their quota.</p>
<p>Each platform provider also sets their own limits to provide a good quality of service and SLAs. Overloading nodes is also a risk, because a node is a single point of failure, and any fault in a high-density node might have a huge impact on the cluster and applications.</p>
<p>You should consider either:</p>
<ul>
<li>Smaller nodes, adding more nodes to the cluster, or</li>
<li>Using actors instead, where each client is one actor, and many actors run in a single container. To keep it balanced around the cluster, you partition the actors into multiple container instances.</li>
</ul>
<p>Regarding the limits, <a href="https://github.com/kubernetes/kubernetes/issues/23349" rel="nofollow noreferrer">this thread</a> has a good discussion about the concerns.</p>
<p>I'm working on a Kubernetes cluster where I am directing traffic from GCloud Ingress to my Services. One of the service endpoints fails the health check as HTTP but passes it as TCP.</p>
<p>When I change the health check options inside GCloud to be TCP, the health checks pass and my endpoint works, but after a few minutes the health check on GCloud resets for that port back to HTTP and the health checks fail again, giving me a 502 response on my endpoint.</p>
<p>I don't know if it's a bug inside Google Cloud or something I'm doing wrong in Kubernetes. I have pasted my YAML configuration here:</p>
<p><strong>namespace</strong></p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: parity
  labels:
    name: parity
</code></pre>
<p><strong>storageclass</strong></p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: classic-ssd
  namespace: parity
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zones: us-central1-a
reclaimPolicy: Retain
</code></pre>
<p><strong>secret</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: ingress-nginx
data:
  tls.crt: ./config/redacted.crt
  tls.key: ./config/redacted.key
</code></pre>
<p><strong>statefulset</strong></p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: parity
  namespace: parity
  labels:
    app: parity
spec:
  replicas: 3
  selector:
    matchLabels:
      app: parity
  serviceName: parity
  template:
    metadata:
      name: parity
      labels:
        app: parity
    spec:
      containers:
      - name: parity
        image: "etccoop/parity:latest"
        imagePullPolicy: Always
        args:
        - "--chain=classic"
        - "--jsonrpc-port=8545"
        - "--jsonrpc-interface=0.0.0.0"
        - "--jsonrpc-apis=web3,eth,net"
        - "--jsonrpc-hosts=all"
        ports:
        - containerPort: 8545
          protocol: TCP
          name: rpc-port
        - containerPort: 443
          protocol: TCP
          name: https
        readinessProbe:
          tcpSocket:
            port: 8545
          initialDelaySeconds: 650
        livenessProbe:
          tcpSocket:
            port: 8545
          initialDelaySeconds: 650
        volumeMounts:
        - name: parity-config
          mountPath: /parity-config
          readOnly: true
        - name: parity-data
          mountPath: /parity-data
      volumes:
      - name: parity-config
        secret:
          secretName: parity-config
  volumeClaimTemplates:
  - metadata:
      name: parity-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "classic-ssd"
      resources:
        requests:
          storage: 50Gi
</code></pre>
<p><strong>service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: parity
  name: parity
  namespace: parity
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  selector:
    app: parity
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 80
  - name: rpc-endpoint
    port: 8545
    protocol: TCP
    targetPort: 8545
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  type: LoadBalancer
</code></pre>
<p><strong>ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-parity
  namespace: parity
  annotations:
    #nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.global-static-ip-name: cluster-1
spec:
  tls:
  - secretName: tls-classic
    hosts:
    - www.redacted.com
  rules:
  - host: www.redacted.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 8080
      - path: /rpc
        backend:
          serviceName: parity
          servicePort: 8545
</code></pre>
I've also run a hello-app container from this documentation here for debugging: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a></p> <p>Which is what the endpoint for ingress on <code>/</code> points to on port 8080 for the <code>hello-app</code> service. That works fine and isn't the issue, but just mentioned here for clarification.</p> <p>So, the issue here is that, after creating my cluster with GKE and my ingress LoadBalancer on Google Cloud (the <code>cluster-1</code> global static ip name in the Ingress file), and then creating the Kubernetes configuration in the files above, the Health-Check fails for the <code>/rpc</code> endpoint on Google Cloud when I go to Google Compute Engine -> Health Check -> Specific Health-Check for the <code>/rpc</code> endpoint.</p> <p>When I edit that Health-Check to not use HTTP Protocol and instead use TCP Protocol, health-checks pass for the <code>/rpc</code> endpoint and I can curl it just fine after and it returns me the correct response.</p> <p>The issue is that a few minutes after that, the same Health-Check goes back to HTTP protocol even though I edited it to be TCP, and then the health-checks fail and I get a 502 response when I curl it again.</p> <p>I am not sure if there's a way to attach the Google Cloud Health Check configuration to my Kubernetes Ingress prior to creating the Ingress in kubernetes. Also not sure why it's being reset, can't tell if it's a bug on Google Cloud or something I'm doing wrong in Kubernetes. If you notice on my <code>statefulset</code> deployment, I have specified <code>livenessProbe</code> and <code>readinessProbe</code> to use TCP to check the port 8545.</p> <p>The delay of 650 seconds was due to this ticket issue here which was solved by increasing the delay to greater than 600 seconds (to avoid mentioned race conditions): <a href="https://github.com/kubernetes/ingress-gce/issues/34" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/issues/34</a></p> <p>I really am not sure why the Google Cloud health-check is resetting back to HTTP after I've specified it to be TCP. Any help would be appreciated.</p>
<p>I found a solution: I added a small health-check container to my StatefulSet that serves a <code>/healthz</code> endpoint, and configured the ingress health check to hit that endpoint on port 8080 as an HTTP-type health check. That made it work.</p> <p>It's still not obvious to me why the check resets when it is set to TCP.</p>
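<p>For reference, a minimal sketch of that sidecar approach (the container name, image, and exact port are assumptions, not the exact setup from the answer): a tiny HTTP server added to the StatefulSet pod plus an HTTP <code>readinessProbe</code>, so the GCE ingress derives an HTTP health check on <code>/healthz</code> instead of probing <code>/</code> on the RPC port.</p> <pre><code># Added alongside the existing parity container in the StatefulSet spec
- name: healthz
  # any image that answers 200 on /healthz works; this particular one is an assumption
  image: gcr.io/google_containers/defaultbackend:1.4
  ports:
  - containerPort: 8080
    name: healthz
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 5
</code></pre>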
<p>I am trying to configure Kubernetes RBAC in the least-permissive way possible, and I want to scope my roles to specific resources and subresources. I've dug through the docs and can't find a concise list of resources and their subresources.</p> <p>I'm particularly interested in the subresource that governs a part of a Deployment's spec: the container image.</p>
<p>Using <code>kubectl api-resources -o wide</code> shows all the <strong>resources</strong>, <strong>verbs</strong> and associated <strong>API-group</strong>.</p> <pre><code>$ kubectl api-resources -o wide NAME SHORTNAMES APIGROUP NAMESPACED KIND VERBS bindings true Binding [create] componentstatuses cs false ComponentStatus [get list] configmaps cm true ConfigMap [create delete deletecollection get list patch update watch] endpoints ep true Endpoints [create delete deletecollection get list patch update watch] events ev true Event [create delete deletecollection get list patch update watch] limitranges limits true LimitRange [create delete deletecollection get list patch update watch] namespaces ns false Namespace [create delete get list patch update watch] nodes no false Node [create delete deletecollection get list patch update watch] persistentvolumeclaims pvc true PersistentVolumeClaim [create delete deletecollection get list patch update watch] persistentvolumes pv false PersistentVolume [create delete deletecollection get list patch update watch] pods po true Pod [create delete deletecollection get list patch update watch] statefulsets sts apps true StatefulSet [create delete deletecollection get list patch update watch] meshpolicies authentication.istio.io false MeshPolicy [delete deletecollection get list patch create update watch] policies authentication.istio.io true Policy [delete deletecollection get list patch create update watch] ... ... </code></pre> <p>I guess you can use this to create the list of resources needed in your RBAC config</p>
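<p>As a hedged illustration of how that output feeds into an RBAC rule (the role name and namespace are made up), a Role granting read access to Deployments plus one subresource could look roughly like this. Note that subresources are written in <code>resource/subresource</code> form, and that the container image is part of the Deployment spec itself rather than a separate subresource:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-reader      # hypothetical name
  namespace: default
rules:
- apiGroups: ["apps"]          # APIGROUP column from `kubectl api-resources -o wide`
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]      # example of the resource/subresource form
  verbs: ["get"]
</code></pre>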
<p>I am setting up a Kubernetes cluster on Google Cloud using Google Kubernetes Engine. I have created the cluster with auto-scaling enabled on my node pool. <a href="https://i.stack.imgur.com/g2Tu9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/g2Tu9.png" alt="nodepool_setup"></a></p> <p>As far as I understand, this should be enough for the cluster to spin up extra nodes when needed.</p> <p>But when I put some load on the cluster, the HPA kicks in and wants to spin up extra replicas, which can't be scheduled due to 'insufficient cpu'. At this point I expected the cluster auto-scaling to kick into action, but it doesn't seem to scale up. I did, however, see this: <a href="https://i.stack.imgur.com/1mZzM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1mZzM.png" alt="error"></a> So the node that the auto-scaler is trying to create can't be created, with the following message: <strong>Quota 'IN_USE_ADDRESSES' exceeded. Limit: 8.0 in region europe-west1.</strong></p> <p>I also didn't touch the auto-scaling on the instance group itself, so when running <strong>gcloud compute instance-groups managed list</strong> it shows 'autoscaled: no'.</p> <p>Any help getting this auto-scaling to work would be appreciated.</p> <p>TL;DR: I think the reason it isn't working is Quota 'IN_USE_ADDRESSES' exceeded. Limit: 8.0 in region europe-west1, but I don't know how to fix it.</p>
<p>You have really debugged it yourself already. You need to edit the <a href="https://console.cloud.google.com/iam-admin/quotas?usage=USED" rel="noreferrer">Quotas on the GCP Console</a>; make sure you select the correct project. Increase everything that is running low: most likely in-use addresses and CPUs in that region. This process is only semi-automated, so you might need to wait a bit and possibly pay a deposit.</p>
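<p>If you prefer the command line, a quick way to see which regional quotas (including IN_USE_ADDRESSES) are close to their limit is a plain describe of the region; this is a general gcloud command, not something specific to this cluster:</p> <pre><code># Prints the region's quotas with current usage and limits
gcloud compute regions describe europe-west1
</code></pre>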
<h3>The Problem</h3> <p>I need to expose a Kubernetes NodePort service externally over https.</p> <h3>The Setup</h3> <ul> <li>I've deployed Kubernetes on bare-metal and have deployed <a href="https://github.com/polyaxon/polyaxon" rel="noreferrer">Polyaxon</a> on the cluster via Helm</li> <li>I need to access Polyaxon's dashboard via the browser, using a virtual machine that's external to the cluster</li> <li>The dashboard is exposed as a NodePort service, and I'm able to connect to it over http. I am <strong>not</strong> able to connect over https, which is a hard requirement in my case.</li> <li>Following an initial "buildout" period, both the cluster and the virtual machine will not have access to the broader internet. They will connect to one another and that's it.</li> </ul> <p>Polyaxon supposedly supports SSL/TLS through its own configs, but there's very little documentation on this. I've made my best attempts to solve the issue that way and also bumped <a href="https://github.com/polyaxon/polyaxon/issues/288" rel="noreferrer">an issue</a> on their github, but haven't had any luck so far.</p> <p>So I'm now wondering if there might be a more general Kubernetes hack that could help me here. </p> <h3>The Solutions</h3> <p>I'm looking for the simplest solution, rather than the most elegant or scalable. There are also some things that might make my situation simpler than the average user who would want https, namely:</p> <ul> <li>It would be OK to support https on just one node, rather than every node</li> <li>I don't need (or really want) a domain name; connecting at <code>https://&lt;ip_address&gt;:&lt;port&gt;</code> is not just OK but preferred</li> <li>A self-signed certificate is also OK</li> </ul> <p>So I'm hoping there's some way to manipulate the NodePort service directly such that https will work on the virtual machine. If that's not possible, other solutions I've considered are using an Ingress Controller or some sort of proxy, but those solutions are both a little half-baked in my mind. I'm a novice with both Kubernetes and networking ideas in general, so if you're going to propose something more complex please speak very slowly :) </p> <p>Thanks a ton for your help!</p>
<p>An ingress controller is the standard way to expose an HTTP backend over a TLS connection from the cluster to the client.</p> <p>Your existing <code>NodePort</code> service already has a <code>ClusterIP</code>, which can be used as the backend for an Ingress. A ClusterIP-type service is enough, so you can later change the service type to prevent plain HTTP access via <code>nodeIP:nodePort</code>. The ingress controller can either terminate the TLS connection or pass TLS traffic through to the backend.</p> <p>You can use a self-signed certificate, or use cert-manager with the <a href="https://itnext.io/automated-tls-with-cert-manager-and-letsencrypt-for-kubernetes-7daaa5e0cae4" rel="nofollow noreferrer">Let's Encrypt</a> service.</p> <p>Note that starting with version 0.22.0 the Nginx-ingress rewrite syntax <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite" rel="nofollow noreferrer">has changed</a>, so some examples in older articles may be outdated.</p> <p>Check the links:</p> <ul> <li><a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/tls-termination" rel="nofollow noreferrer">TLS termination</a></li> <li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="nofollow noreferrer">TLS/HTTPS</a></li> <li><a href="https://stackoverflow.com/questions/49545732/how-to-get-kubernetes-ingress-to-terminate-ssl-and-proxy-to-service">How to get Kubernetes Ingress to terminate SSL and proxy to service?</a></li> <li><a href="https://blogs.technet.microsoft.com/livedevopsinjapan/2017/02/28/configure-nginx-ingress-controller-for-tls-termination-on-kubernetes-on-azure-2/" rel="nofollow noreferrer">Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure</a></li> </ul>
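<p>To make that concrete, here is a minimal, hedged sketch assuming an nginx ingress controller is already installed and the dashboard service is called <code>polyaxon-dashboard</code> on port 80 (the service name, secret name, and port are placeholders). First generate a self-signed certificate and store it in a TLS secret:</p> <pre><code># The CN is arbitrary since no domain name is required
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=polyaxon.local"
kubectl create secret tls polyaxon-tls --key tls.key --cert tls.crt
</code></pre> <p>Then point an Ingress with TLS at the existing service; clients reach it at <code>https://&lt;ingress_node_ip&gt;</code> and accept the self-signed certificate:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: polyaxon-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: polyaxon-tls
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: polyaxon-dashboard   # placeholder: the existing dashboard service
          servicePort: 80
</code></pre>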
<p>I have Gitlab (11.8.1) (self-hosted) connected to self-hosted K8s Cluster (1.13.4). There're 3 projects in gitlab name <code>shipment</code>, <code>authentication_service</code> and <code>shipment_mobile_service</code>.</p> <p>All projects add the same K8s configuration exception project namespace.</p> <p>The first project is successful when install Helm Tiller and Gitlab Runner in Gitlab UI.</p> <p>The second and third projects only install Helm Tiller success, Gitlab Runner error with log in install runner pod:</p> <pre><code> Client: &amp;version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"} Error: cannot connect to Tiller + sleep 1s + echo 'Retrying (30)...' + helm repo add runner https://charts.gitlab.io Retrying (30)... "runner" has been added to your repositories + helm repo update Hang tight while we grab the latest from your chart repositories... ...Skip local chart repository ...Successfully got an update from the "runner" chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ⎈ Happy Helming!⎈ + helm upgrade runner runner/gitlab-runner --install --reset-values --tls --tls-ca-cert /data/helm/runner/config/ca.pem --tls-cert /data/helm/runner/config/cert.pem --tls-key /data/helm/runner/config/key.pem --version 0.2.0 --set 'rbac.create=true,rbac.enabled=true' --namespace gitlab-managed-apps -f /data/helm/runner/config/values.yaml Error: UPGRADE FAILED: remote error: tls: bad certificate </code></pre> <p>I don't config gitlab-ci with K8s cluster on first project, only setup for the second and third. The weird thing is with the same <code>helm-data</code> (only different by name), the second run success but the third is not.</p> <p>And because there only one gitlab runner available (from the first project), I assign both 2nd and 3rd project to this runner.</p> <p>I use this gitlab-ci.yml for both 2 projects with only different name in helm upgrade command.</p> <pre><code>stages: - test - build - deploy variables: CONTAINER_IMAGE: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:${CI_PIPELINE_ID} CONTAINER_IMAGE_LATEST: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:latest CI_REGISTRY: dockerhub.linhnh.vn DOCKER_DRIVER: overlay2 DOCKER_HOST: tcp://localhost:2375 # required when use dind # test phase and build phase using docker:dind success deploy_beta: stage: deploy image: alpine/helm script: - echo "Deploy test start ..." - helm init --upgrade - helm upgrade --install --force shipment-mobile-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data - echo "Deploy test completed!" environment: name: staging tags: ["kubernetes_beta"] only: - master </code></pre> <p>The helm-data is very simple so I think don't really need to paste here. Here is the log when second project deploy success:</p> <pre><code>Running with gitlab-runner 11.7.0 (8bb608ff) on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2 Using Kubernetes namespace: gitlab-managed-apps Using Kubernetes executor with image linkyard/docker-helm ... Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending Running on runner-xrmajzy2-project-15-concurrent-0x2bms via runner-gitlab-runner-6c8555c86b-gjt9f... Cloning into '/root/authentication_service'... Cloning repository... Checking out 5068bf1f as master... Skipping Git submodules setup $ echo "Deploy start ...." 
Deploy start .... $ helm init --upgrade --dry-run --debug --- apiVersion: extensions/v1beta1 kind: Deployment metadata: creationTimestamp: null labels: app: helm name: tiller name: tiller-deploy namespace: kube-system spec: replicas: 1 strategy: {} template: metadata: creationTimestamp: null labels: app: helm name: tiller spec: automountServiceAccountToken: true containers: - env: - name: TILLER_NAMESPACE value: kube-system - name: TILLER_HISTORY_MAX value: "0" image: gcr.io/kubernetes-helm/tiller:v2.13.0 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /liveness port: 44135 initialDelaySeconds: 1 timeoutSeconds: 1 name: tiller ports: - containerPort: 44134 name: tiller - containerPort: 44135 name: http readinessProbe: httpGet: path: /readiness port: 44135 initialDelaySeconds: 1 timeoutSeconds: 1 resources: {} status: {} --- apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: app: helm name: tiller name: tiller-deploy namespace: kube-system spec: ports: - name: tiller port: 44134 targetPort: tiller selector: app: helm name: tiller type: ClusterIP status: loadBalancer: {} ... $ helm upgrade --install --force authentication-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data WARNING: Namespace "gitlab-managed-apps" doesn't match with previous. Release will be deployed to default Release "authentication-service" has been upgraded. Happy Helming! LAST DEPLOYED: Tue Mar 26 05:27:51 2019 NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==&gt; v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE authentication-service 1/1 1 1 17d ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE authentication-service-966c997c4-mglrb 0/1 Pending 0 0s authentication-service-966c997c4-wzrkj 1/1 Terminating 0 49m ==&gt; v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE authentication-service NodePort 10.108.64.133 &lt;none&gt; 80:31340/TCP 17d NOTES: 1. Get the application URL by running these commands: export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services authentication-service) echo http://$NODE_IP:$NODE_PORT $ echo "Deploy completed" Deploy completed Job succeeded </code></pre> <p>And the third project fail:</p> <pre><code>Running with gitlab-runner 11.7.0 (8bb608ff) on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2 Using Kubernetes namespace: gitlab-managed-apps Using Kubernetes executor with image alpine/helm ... Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending Running on runner-xrmajzy2-project-18-concurrent-0bv4bx via runner-gitlab-runner-6c8555c86b-gjt9f... Cloning repository... Cloning into '/canhnv5/shipmentmobile'... Checking out 278cbd3d as master... Skipping Git submodules setup $ echo "Deploy test start ..." Deploy test start ... 
$ helm init --upgrade Creating /root/.helm Creating /root/.helm/repository Creating /root/.helm/repository/cache Creating /root/.helm/repository/local Creating /root/.helm/plugins Creating /root/.helm/starters Creating /root/.helm/cache/archive Creating /root/.helm/repository/repositories.yaml Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com Adding local repo with URL: http://127.0.0.1:8879/charts $HELM_HOME has been configured at /root/.helm. Error: error installing: deployments.extensions is forbidden: User "system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account" cannot create resource "deployments" in API group "extensions" in the namespace "kube-system" ERROR: Job failed: command terminated with exit code 1 </code></pre> <p>I could see they use the same runner <code>XrmajZY2</code> that I install in the first project, same k8s namespace <code>gitlab-managed-apps</code>.</p> <p>I think they use privilege mode but don't know why the second can get the right permission, and the third can not? Should I create user <code>system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account</code> and assign to cluster-admin?</p> <p>Thanks to @cookiedough's instruction. I do these steps:</p> <ul> <li><p>Fork the <code>canhv5/shipment-mobile-service</code> into my root account <code>root/shipment-mobile-service</code>.</p></li> <li><p>Delete <code>gitlab-managed-apps</code> namespace without anything inside, run <code>kubectl delete -f gitlab-admin-service-account.yaml</code>.</p></li> <li><p>Apply this file then get the token as @cookiedough guide.</p></li> <li><p>Back to <code>root/shipment-mobile-service</code> in Gitlab, Remove previous Cluster. Add Cluster back with new token. Install Helm Tiller then Gitlab Runner in Gitlab UI.</p></li> <li><p>Re run the job then the magic happens. But I still unclear why <code>canhv5/shipment-mobile-service</code> still get the same error.</p></li> </ul>
<p>Before you do the following, delete the <code>gitlab-managed-apps</code> namespace:</p> <pre><code>kubectl delete namespace gitlab-managed-apps </code></pre> <p>Reciting from the <a href="https://docs.gitlab.com/ee/user/project/clusters/" rel="nofollow noreferrer">GitLab tutorial</a> you will need to create a <code>serviceaccount</code> and <code>clusterrolebinding</code> got GitLab, and you will need the secret created as a result to connect your project to your cluster as a result.</p> <blockquote> <p>Create a file called gitlab-admin-service-account.yaml with contents:</p> </blockquote> <pre><code> apiVersion: v1 kind: ServiceAccount metadata: name: gitlab-admin namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: gitlab-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: gitlab-admin namespace: kube-system </code></pre> <blockquote> <p>Apply the service account and cluster role binding to your cluster:</p> </blockquote> <pre><code>kubectl apply -f gitlab-admin-service-account.yaml </code></pre> <blockquote> <p>Output:</p> </blockquote> <pre><code> serviceaccount "gitlab-admin" created clusterrolebinding "gitlab-admin" created </code></pre> <blockquote> <p>Retrieve the token for the gitlab-admin service account:</p> </blockquote> <pre><code> kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}') </code></pre> <p><strong>Copy the <code>&lt;authentication_token&gt;</code> value from the output:</strong></p> <pre><code>Name: gitlab-admin-token-b5zv4 Namespace: kube-system Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name=gitlab-admin kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1025 bytes namespace: 11 bytes token: &lt;authentication_token&gt; </code></pre> <p>Follow this tutorial to connect your cluster to the project, otherwise you will have to stitch up the same thing along the way with a lot more pain!</p>
<p>I have been trying different things around k8s these days. I am wondering about the nodeSelector field in the pod specification. As I understand it, we assign labels to nodes, and those labels can then be used in the nodeSelector field of the pod spec.</p> <p>Assigning pods to nodes based on nodeSelector works fine. But after I create the pod, I want to update/overwrite the nodeSelector field so that the pod is rescheduled onto a new node matching the updated label.</p> <p>I am thinking of this in the same way it is done for normal labels with the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#label" rel="noreferrer">kubectl label</a> command.</p> <p>Are there any hacks to achieve this?</p> <p>If this is not possible in the current versions of Kubernetes, why shouldn't we consider it?</p> <p>Thanks.</p>
<p>While editing deployment manually as cookiedough suggested is one of the options, I believe using <code>kubctl patch</code> would be a better solution. </p> <p>You can either patch by using <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="noreferrer">yaml</a> file or <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/#use-a-json-merge-patch-to-update-a-deployment" rel="noreferrer">JSON</a> string, which makes it easier to integrate thing into scripts. Here is a complete <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#patch" rel="noreferrer">reference</a>.</p> <hr> <h1>Example</h1> <p>Here's a simple deployment of nginx I used, which will be created on <code>node-1</code>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 nodeSelector: kubernetes.io/hostname: node-1 </code></pre> <h3>JSON patch</h3> <p>You can patch the deployment to change the desired node as follows:<br/><code>kubectl patch deployments nginx-deployment -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "node-2"}}}}}'</code></p> <h3>YAML patch</h3> <p>By running <code>kubectl patch deployment nginx-deployment --patch "$(cat patch.yaml)"</code>, where patch.yaml is prepared as follows:</p> <pre><code>spec: template: spec: nodeSelector: kubernetes.io/hostname: node-2 </code></pre> <p>Both will result in scheduler scheduling new pod on requested node, and terminating the old one as soon as the new one is ready.</p>
<p>I have set up my master nodes using <code>kubeadm</code>.</p> <p>Now I want to run the <code>join</code> command on my worker nodes so that the latter join the cluster.</p> <p>All I have to do is run</p> <pre><code>kubeadm join --token &lt;token&gt; --discovery-token-ca-cert-hash &lt;sha256&gt; </code></pre> <p>where <code>&lt;token&gt;</code> and <code>&lt;sha256&gt;</code> are values previously returned by the command below:</p> <pre><code>kubeadm init </code></pre> <p>I am also trying to script the above process, and parsing the actual token and hash out of that command's output is rather awkward.</p> <p>So I was wondering whether there is a way to explicitly specify the <code>&lt;token&gt;</code> and the <code>&lt;sha256&gt;</code> during cluster initialization, to avoid having to perform hacky parsing of the <code>init</code> output.</p>
<p>I was trying to make a script for this as well.</p> <p>In order to get the values needed, I am using these commands:</p> <pre><code>TOKEN=$(sshpass -p $PASSWORD ssh -o StrictHostKeyChecking=no root@$MASTER_IP sudo kubeadm token list | tail -1 | cut -f 1 -d " ")
HASH=$(sshpass -p $PASSWORD ssh -o StrictHostKeyChecking=no root@$MASTER_IP openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2&gt;/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' )
</code></pre> <p>Basically I use these commands to SSH onto the master and fetch those values.</p> <p>I have not found an easier way to achieve this.</p>
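<p>Once those two variables are set, the join command for each worker can be assembled directly (a sketch; port 6443 is the kubeadm default API server port and may differ in your setup):</p> <pre><code>kubeadm join ${MASTER_IP}:6443 --token ${TOKEN} --discovery-token-ca-cert-hash sha256:${HASH}
</code></pre>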
<p>Using helm for deploying chart on my Kubernetes cluster, since one day, I can't deploy a new one or upgrading one existed.</p> <p>Indeed, each time I am using helm I have an error message telling me that it is not possible to install or upgrade ressources.</p> <p>If I run <code>helm install --name foo . -f values.yaml --namespace foo-namespace</code> I have this output:</p> <blockquote> <p>Error: release foo failed: the server could not find the requested resource</p> </blockquote> <p>If I run <code>helm upgrade --install foo . -f values.yaml --namespace foo-namespace</code> or <code>helm upgrade foo . -f values.yaml --namespace foo-namespace</code> I have this error:</p> <blockquote> <p>Error: UPGRADE FAILED: "foo" has no deployed releases</p> </blockquote> <p>I don't really understand why.</p> <p>This is my helm version:</p> <pre><code>Client: &amp;version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"} </code></pre> <p>On my kubernetes cluster I have tiller deployed with the same version, when I run <code>kubectl describe pods tiller-deploy-84b... -n kube-system</code>:</p> <pre><code>Name: tiller-deploy-84b8... Namespace: kube-system Priority: 0 PriorityClassName: &lt;none&gt; Node: k8s-worker-1/167.114.249.216 Start Time: Tue, 26 Feb 2019 10:50:21 +0100 Labels: app=helm name=tiller pod-template-hash=84b... Annotations: &lt;none&gt; Status: Running IP: &lt;IP_NUMBER&gt; Controlled By: ReplicaSet/tiller-deploy-84b8... Containers: tiller: Container ID: docker://0302f9957d5d83db22... Image: gcr.io/kubernetes-helm/tiller:v2.12.3 Image ID: docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:cab750b402d24d... Ports: 44134/TCP, 44135/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Tue, 26 Feb 2019 10:50:28 +0100 Ready: True Restart Count: 0 Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3 Environment: TILLER_NAMESPACE: kube-system TILLER_HISTORY_MAX: 0 Mounts: /var/run/secrets/kubernetes.io/serviceaccount from helm-token-... (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: helm-token-...: Type: Secret (a volume populated by a Secret) SecretName: helm-token-... 
Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  26m   default-scheduler      Successfully assigned kube-system/tiller-deploy-84b86cbc59-kxjqv to worker-1
  Normal  Pulling    26m   kubelet, k8s-worker-1  pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Pulled     26m   kubelet, k8s-worker-1  Successfully pulled image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Created    26m   kubelet, k8s-worker-1  Created container
  Normal  Started    26m   kubelet, k8s-worker-1  Started container
</code></pre> <p>Has anyone faced the same issue?</p> <hr> <p>Update:</p> <p>This is the folder structure of my chart, named foo:</p> <pre><code>&gt; templates/
&gt; deployment.yaml
&gt; ingress.yaml
&gt; service.yaml
&gt; .helmignore
&gt; Chart.yaml
&gt; values.yaml
</code></pre> <p>I have already tried to delete the failed release using <code>helm del --purge foo</code>, but the same errors occurred.</p> <p>To be more precise, the chart foo is in fact a custom chart using my own private registry. The imagePullSecrets are set up as usual.</p> <p>I have run both <code>helm upgrade foo . -f values.yaml --namespace foo-namespace --force</code> and <code>helm upgrade --install foo . -f values.yaml --namespace foo-namespace --force</code> and I still get an error:</p> <pre><code>UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: the server could not find the requested resource
Error: UPGRADE FAILED: failed to create resource: the server could not find the requested resource
</code></pre> <p>Note that foo-namespace already exists, so the error doesn't come from the namespace name or the namespace itself. Indeed, if I run <code>helm list</code>, I can see that the <strong>foo</strong> chart is in a <code>FAILED</code> status.</p>
<p>Tiller stores all releases as ConfigMaps in Tiller's namespace (<code>kube-system</code> in your case). Try to find the broken release and delete its ConfigMap, using commands like:</p> <pre><code>$ kubectl get cm --all-namespaces -l OWNER=TILLER
NAMESPACE     NAME               DATA   AGE
kube-system   nginx-ingress.v1   1      22h

$ kubectl delete cm nginx-ingress.v1 -n kube-system
</code></pre> <p>Next, delete all of the release's objects (deployments, services, ingresses, etc.) manually and reinstall the release with helm again.</p> <p>If that doesn't help, you may try downloading a newer <a href="https://github.com/helm/helm/releases" rel="noreferrer">release</a> of Helm (v2.14.3 at the moment) and updating/reinstalling Tiller.</p>
<p>I have been trying to deploy Kafka using <a href="https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html" rel="nofollow noreferrer">Helm charts</a>. So I defined NodePort service for Kafka pods. I checked console Kafka producer and consumer with the same hosts and ports - they work properly. However, when I create Spark application as data consumer and Kafka as producer they are not able to connect to the Kafka service0. I used minikube ip (instead of node ip) for the host and service NodePort port. Although, in Spark logs, I saw that NodePort service resolves endpoints and brokers are discovered as pods addressed and ports:</p> <pre><code>INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Discovered group coordinator 172.17.0.20:9092 (id: 2147483645 rack: null) INFO ConsumerCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Revoking previously assigned partitions [] INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] (Re-)joining group WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 2147483645 (/172.17.0.20:9092) could not be established. Broker may not be available. INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Group coordinator 172.17.0.20:9092 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 2 (/172.17.0.20:9092) could not be established. Broker may not be available. WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 0 (/172.17.0.12:9092) could not be established. Broker may not be available. </code></pre> <p>How this behavior can be changed?</p> <p>NodePort service definition looks like this:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: kafka-service spec: selector: app: cp-kafka release: my-confluent-oss ports: - protocol: TCP targetPort: 9092 port: 32400 nodePort: 32400 type: NodePort </code></pre> <p>Spark consumer configuration:</p> <pre><code>def kafkaParams() = Map[String, Object]( "bootstrap.servers" -&gt; "192.168.99.100:32400", "schema.registry.url" -&gt; "http://192.168.99.100:8081", "key.deserializer" -&gt; classOf[StringDeserializer], "value.deserializer" -&gt; classOf[KafkaAvroDeserializer], "group.id" -&gt; "avro_data", "auto.offset.reset" -&gt; "earliest", "enable.auto.commit" -&gt; (false: java.lang.Boolean) ) </code></pre> <p>Kafka producer configuration:</p> <pre><code> props.put("bootstrap.servers", "192.168.99.100:32400") props.put("client.id", "avro_data") props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer") props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer") props.put("schema.registry.url", "http://192.168.99.100:32500") </code></pre> <p>All the K8s services for Kafka:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kafka-service NodePort 10.99.113.234 &lt;none&gt; 32400:32400/TCP 6m34s kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 27d my-confluent-oss-cp-kafka ClusterIP 10.100.156.108 &lt;none&gt; 9092/TCP 102m my-confluent-oss-cp-kafka-connect ClusterIP 10.99.78.89 &lt;none&gt; 8083/TCP 102m my-confluent-oss-cp-kafka-headless ClusterIP None &lt;none&gt; 9092/TCP 102m my-confluent-oss-cp-kafka-rest ClusterIP 10.100.152.109 &lt;none&gt; 8082/TCP 102m my-confluent-oss-cp-ksql-server ClusterIP 10.96.249.202 &lt;none&gt; 8088/TCP 102m 
my-confluent-oss-cp-schema-registry ClusterIP 10.109.27.45 &lt;none&gt; 8081/TCP 102m my-confluent-oss-cp-zookeeper ClusterIP 10.102.182.90 &lt;none&gt; 2181/TCP 102m my-confluent-oss-cp-zookeeper-headless ClusterIP None &lt;none&gt; 2888/TCP,3888/TCP 102m schema-registry-service NodePort 10.103.100.64 &lt;none&gt; 32500:32500/TCP 33m zookeeper-np NodePort 10.98.180.130 &lt;none&gt; 32181:32181/TCP 53m </code></pre>
<p>I had a similar issue when I was trying to access a Kafka broker (<a href="https://github.com/confluentinc/cp-helm-charts" rel="nofollow noreferrer">cp-helm-chart</a>) running on minikube from the outside.</p> <p>Here is how I resolved it, before installing with <code>helm install</code> from the local repository (see the sketch after this list):</p> <ol> <li>Edit this file: <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/values.yaml" rel="nofollow noreferrer">https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/values.yaml</a></li> <li>Search for <code>nodeport:</code> and change its <code>enabled</code> field to true.<br/> nodeport:<br/> enabled: true</li> <li>Uncomment these two lines by removing the #:<br/> "advertised.listeners": |-<br/> EXTERNAL://${HOST_IP}:$((31090 + ${KAFKA_BROKER_ID}))</li> <li>Replace ${HOST_IP} with your minikube IP (run <code>minikube ip</code> to retrieve your k8s host IP, e.g. 196.169.99.100)</li> <li>Replace ${KAFKA_BROKER_ID} with the broker id (if only one broker is running, it will be 0 by default)</li> <li>Finally it would look something like this:<br/> "advertised.listeners": |-<br/> EXTERNAL://196.169.99.100:31090</li> </ol> <p>Now you can access the Kafka broker running inside the k8s cluster from outside by pointing bootstrap.servers to 196.169.99.100:31090</p>
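<p>For reference, this is roughly what the edited section of <code>cp-kafka/values.yaml</code> ends up looking like. This is a hedged sketch: the key names follow the chart linked above, but double-check them against your chart version, and the IP is just the example minikube address:</p> <pre><code>nodeport:
  enabled: true

configurationOverrides:
  "advertised.listeners": |-
    EXTERNAL://196.169.99.100:31090
</code></pre>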
<p>While using the go-client API after I use the <code>api.PersistentVolumeClaims(namespace).Create(createOpts)</code> call the PersistentVolumeClaim appears as a resource but stays in the Pending state. I do not see any events when using <code>kubectl describe pvc</code>, I also don't see any Volumes being created etc.</p> <pre><code>$ kubectl describe pvc --namespace=test -R Name: 93007732-9d8c-406e-be99-f48faed3a061 Namespace: test StorageClass: microk8s-hostpath Status: Pending Volume: 93007732-9d8c-406e-be99-f48faed3a061 Labels: &lt;none&gt; Annotations: &lt;none&gt; Finalizers: [kubernetes.io/pvc-protection] Capacity: 0 Access Modes: VolumeMode: Filesystem Events: &lt;none&gt; Mounted By: &lt;none&gt; </code></pre> <p>The code I am using is as follows:</p> <pre><code> volume, errGo := uuid.NewRandom() if errGo != nil { job.failed = kv.Wrap(errGo).With("stack", stack.Trace().TrimRuntime()) return job.failed } job.volume = volume.String() fs := v1.PersistentVolumeFilesystem createOpts := &amp;v1.PersistentVolumeClaim{ ObjectMeta: metav1.ObjectMeta{ Name: job.volume, Namespace: job.namespace, UID: types.UID(job.volume), }, Spec: v1.PersistentVolumeClaimSpec{ AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce}, Resources: v1.ResourceRequirements{ Requests: v1.ResourceList{ v1.ResourceName(v1.ResourceStorage): resource.MustParse("10Gi"), }, }, VolumeName: job.volume, VolumeMode: &amp;fs, }, Status: v1.PersistentVolumeClaimStatus{ Phase: v1.ClaimBound, AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce}, Capacity: v1.ResourceList{ v1.ResourceName(v1.ResourceStorage): resource.MustParse("10Gi"), }, }, } api := Client().CoreV1() if _, errGo = api.PersistentVolumeClaims(namespace).Create(createOpts); errGo != nil { job.failed = kv.Wrap(errGo).With("stack", stack.Trace().TrimRuntime()) return job.failed } </code></pre> <p>I have tried to find good examples for using the Create API with persistent volumes but most examples appear to be for watchers etc and so I spent quite sometime trying to reverse engineer code leading me explicitly setting the <code>Status</code> but this appears to have had zero impact. I also tried defaulting the VolumeMode within the Spec which did not help.</p> <p>The examples I have read come from:</p> <p><a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/persistentvolume/framework_test.go" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/persistentvolume/framework_test.go</a> <br> <a href="https://godoc.org/k8s.io/api/core/v1#PersistentVolumeSpec" rel="nofollow noreferrer">https://godoc.org/k8s.io/api/core/v1#PersistentVolumeSpec</a> <br> <a href="https://github.com/vladimirvivien/k8s-client-examples/tree/master/go/pvcwatch" rel="nofollow noreferrer">https://github.com/vladimirvivien/k8s-client-examples/tree/master/go/pvcwatch</a> <br> <a href="https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-4-using-go-b1d0e3c1c899" rel="nofollow noreferrer">https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-4-using-go-b1d0e3c1c899</a> <br></p> <p>Does anyone know of actually example code for these APIs that goes beyond unit testing within the _test.go files, or can anyone provide any hints as to how to get the creation process actually rolling within the cluster ? 
I had assumed that the downstream resources needed, for example the volume itself, are automatically provisioned when I create the claim resource.</p> <p>Many thanks for taking a look if you got this far...</p>
<p>What you are doing in the code looks correct. However, it looks like that your PVC can't find a matching PV to bind together. </p> <p>It looks like you are using a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a> PV (with a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">storage class</a>) that <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#raw-block-volume-support" rel="nofollow noreferrer">doesn't support dynamic provisioning</a>. Also, documented <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="nofollow noreferrer">here</a>.</p> <p>So most likely you will have to create a hostPath PV so that your PVC can bind to it. The volume has to be equal or greater in size as what you are requesting in your PVC.</p> <p>Another option is to use a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">Local</a> volume that supports dynamic provisioning which is different from hostPath.</p> <p>You can debug the dynamic provisioning and binding of the PVC/PV by looking at the kube-controller-manager logs on your K8s control plane leader.</p>
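<p>As a hedged illustration of the first option (the PV name and host path are placeholders), a hostPath PV that the PVC above could bind to might look like this; the capacity and access mode match the claim, and <code>storageClassName</code> matches the class shown in the <code>kubectl describe</code> output:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: job-data-pv            # placeholder name
spec:
  capacity:
    storage: 10Gi              # must be equal to or larger than the PVC request
  accessModes:
    - ReadWriteOnce
  storageClassName: microk8s-hostpath
  hostPath:
    path: /var/data/job-data   # placeholder path on the node
</code></pre>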
<p>I had deployed ravendb on a 3 node cluster on google cloud. However it's not accessible from the browser. Here is the procedure and the configuration I had followed. Could you please help me troubleshoot the service and deployment. When I run the get pods and get svc commands the pods and services are running but the db isn't accessible from the browser.</p> <p><strong>Procedure followed:</strong></p> <pre><code>I suggest you first run the setup wizard on your local dev machine and get the Let's Encrypt certificate. Just use 127.0.0.1:8080 as the IP, it's not important at the moment. (Even better will be to get your own domain + certificate for production use) You need to convert both the pfx file and the license.json file to base64, In c# for example: Convert.ToBase64String(File.ReadAllBytes(@"C:\work\certs\cluster.server.certificate.iftah.pfx")) Convert.ToBase64String(File.ReadAllBytes(@"C:\work\license.json")) 1. Create a GKE standard cluster with 3 nodes, no special settings. Let's call it raven-cluster 2. Install gcloud and kubectl (follow the getting started guide: https://cloud.google.com/kubernetes-engine/docs/quickstart) run: 3. &gt; gcloud container clusters get-credentials raven-cluster 4. &gt; kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account) Now you're ready to deploy. Edit the cluster.yaml file to include the base64 certificate (name: raven-ssl) Edit the license.secret.yaml file to include the base64 license (name: ravendb-license) 4a) kubectl label node role=ingress-controller --all 5. kubectl create -f license.secret.yaml 6. kubectl create -f haproxy.yaml 7. kubectl create -f cluster.yaml 9. kubectl get pod 8. kubectl get svc </code></pre> <p><strong>ravendb spec YAML:</strong></p> <pre><code>apiVersion: v1 items: - apiVersion: v1 data: raven-0: "{\r\n \"Setup.Mode\": \"None\",\r\n \"DataDir\": \"/data/RavenData\",\r\n \ \"Security.Certificate.Path\": \"/ssl/ssl\",\r\n \"ServerUrl\": \"https://0.0.0.0\",\r\n \ \"ServerUrl.Tcp\": \"tcp://0.0.0.0:38888\",\r\n \"PublicServerUrl\": \"https://a.tej-test001.ravendb.community\",\r\n \ \"PublicServerUrl.Tcp\": \"tcp://tcp-a.tej-test001.ravendb.community:443\",\r\n \ \"License.Path\": \"/license/license.json\",\r\n \"License.Eula.Accepted\": \"true\",\r\n \"License.CanActivate\": \"false\",\r\n \"License.CanForceUpdate\": \"false\",\r\n \"Server.AllowedDestinations\": \"Azure\",\r\n}" raven-1: "{\r\n \"Setup.Mode\": \"None\",\r\n \"DataDir\": \"/data/RavenData\",\r\n \ \"Security.Certificate.Path\": \"/ssl/ssl\",\r\n \"ServerUrl\": \"https://0.0.0.0\",\r\n \ \"ServerUrl.Tcp\": \"tcp://0.0.0.0:38888\",\r\n \"PublicServerUrl\": \"https://b.tej-test001.ravendb.community\",\r\n \ \"PublicServerUrl.Tcp\": \"tcp://tcp-b.tej-test001.ravendb.community:443\",\r\n \ \"License.Path\": \"/license/license.json\",\r\n \"License.Eula.Accepted\": \"true\",\r\n \"License.CanActivate\": \"false\",\r\n \"License.CanForceUpdate\": \"false\",\r\n \"Server.AllowedDestinations\": \"Azure\",\r\n}" raven-2: "{\r\n \"Setup.Mode\": \"None\",\r\n \"DataDir\": \"/data/RavenData\",\r\n \ \"Security.Certificate.Path\": \"/ssl/ssl\",\r\n \"ServerUrl\": \"https://0.0.0.0\",\r\n \ \"ServerUrl.Tcp\": \"tcp://0.0.0.0:38888\",\r\n \"PublicServerUrl\": \"https://c.tej-test001.ravendb.community\",\r\n \ \"PublicServerUrl.Tcp\": \"tcp://tcp-c.tej-test001.ravendb.community:443\",\r\n \ \"License.Path\": \"/license/license.json\",\r\n \"License.Eula.Accepted\": \"true\",\r\n 
\"License.CanActivate\": \"false\",\r\n \"License.CanForceUpdate\": \"false\",\r\n \"Server.AllowedDestinations\": \"Azure\",\r\n}" kind: ConfigMap metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 name: raven-settings namespace: default - apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 name: raven namespace: default spec: podManagementPolicy: OrderedReady replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 serviceName: raven template: metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cluster operator: In values: - ee632d20-0a5f-40e4-a84a-5294da32d6d5 topologyKey: kubernetes.io/hostname containers: - command: - /bin/sh - -c - /opt/RavenDB/Server/Raven.Server --config-path /config/$HOSTNAME image: ravendb/ravendb:latest imagePullPolicy: Always name: ravendb ports: - containerPort: 443 name: http-api protocol: TCP - containerPort: 38888 name: tcp-server protocol: TCP - containerPort: 161 name: snmp protocol: TCP resources: limits: cpu: 256m memory: 1900Mi requests: cpu: 256m memory: 1900Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /data name: data - mountPath: /ssl name: ssl - mountPath: /license name: license - mountPath: /config name: config dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 120 volumes: - name: ssl secret: defaultMode: 420 secretName: raven-ssl - configMap: defaultMode: 420 name: raven-settings name: config - name: license secret: defaultMode: 420 secretName: ravendb-license updateStrategy: rollingUpdate: partition: 0 type: RollingUpdate volumeClaimTemplates: - metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi - apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: ingress.kubernetes.io/ssl-passthrough: "true" kubernetes.io/ingress.class: "haproxy" labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 name: raven namespace: default spec: rules: - host: a.tej-test001.ravendb.community http: paths: - backend: serviceName: raven-0 servicePort: 443 path: / - host: tcp-a.tej-test001.ravendb.community http: paths: - backend: serviceName: raven-0 servicePort: 38888 path: / - host: b.tej-test001.ravendb.community http: paths: - backend: serviceName: raven-1 servicePort: 443 path: / - host: tcp-b.tej-test001.ravendb.community http: paths: - backend: serviceName: raven-1 servicePort: 38888 path: / - host: c.tej-test001.ravendb.community http: paths: - backend: serviceName: raven-2 servicePort: 443 path: / - host: tcp-c.tej-test001.ravendb.community http: paths: - backend: serviceName: raven-2 servicePort: 38888 path: / - apiVersion: v1 data: ssl: dfdjfdkljfdkjdkjd;kfjdkfjdklfj kind: Secret metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 name: raven-ssl namespace: default type: Opaque - apiVersion: v1 kind: Service metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 name: raven namespace: default spec: clusterIP: None ports: - name: http-api port: 443 protocol: TCP targetPort: 443 - name: tcp-server port: 38888 protocol: TCP targetPort: 38888 - name: snmp 
port: 161 protocol: TCP targetPort: 161 selector: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 sessionAffinity: None type: ClusterIP status: loadBalancer: {} - apiVersion: v1 kind: Service metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 node: "0" name: raven-0 namespace: default spec: ports: - name: http-api port: 443 protocol: TCP targetPort: 443 - name: tcp-server port: 38888 protocol: TCP targetPort: 38888 - name: snmp port: 161 protocol: TCP targetPort: 161 selector: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 statefulset.kubernetes.io/pod-name: raven-0 sessionAffinity: None type: ClusterIP status: loadBalancer: {} - apiVersion: v1 kind: Service metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 node: "1" name: raven-1 namespace: default spec: ports: - name: http-api port: 443 protocol: TCP targetPort: 443 - name: tcp-server port: 38888 protocol: TCP targetPort: 38888 - name: snmp port: 161 protocol: TCP targetPort: 161 selector: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 statefulset.kubernetes.io/pod-name: raven-1 sessionAffinity: None type: ClusterIP status: loadBalancer: {} - apiVersion: v1 kind: Service metadata: labels: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 node: "2" name: raven-2 namespace: default spec: ports: - name: http-api port: 443 protocol: TCP targetPort: 443 - name: tcp-server port: 38888 protocol: TCP targetPort: 38888 - name: snmp port: 161 protocol: TCP targetPort: 161 selector: app: ravendb cluster: ee632d20-0a5f-40e4-a84a-5294da32d6d5 statefulset.kubernetes.io/pod-name: raven-2 sessionAffinity: None type: ClusterIP status: loadBalancer: {} kind: List </code></pre> <p><strong>haproxy spec yaml:</strong></p> <pre><code>--- apiVersion: v1 kind: ServiceAccount metadata: name: ingress-controller namespace: default --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: ingress-controller rules: - apiGroups: - "" resources: - configmaps - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - "" resources: - nodes verbs: - get - apiGroups: - "" resources: - services verbs: - get - list - watch - apiGroups: - "extensions" resources: - ingresses verbs: - get - list - watch - apiGroups: - "" resources: - events verbs: - create - patch - apiGroups: - "extensions" resources: - ingresses/status verbs: - update --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata: name: ingress-controller namespace: default rules: - apiGroups: - "" resources: - configmaps - pods - secrets - namespaces verbs: - get - apiGroups: - "" resources: - configmaps verbs: - get - update - apiGroups: - "" resources: - configmaps verbs: - create - apiGroups: - "" resources: - endpoints verbs: - get - create - update --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: ingress-controller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ingress-controller subjects: - kind: ServiceAccount name: ingress-controller namespace: default - apiGroup: rbac.authorization.k8s.io kind: User name: ingress-controller --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: ingress-controller namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: ingress-controller subjects: - kind: ServiceAccount name: ingress-controller namespace: default - apiGroup: rbac.authorization.k8s.io kind: User name: ingress-controller --- apiVersion: 
extensions/v1beta1 kind: Deployment metadata: labels: run: ingress-default-backend name: ingress-default-backend namespace: default spec: selector: matchLabels: run: ingress-default-backend template: metadata: labels: run: ingress-default-backend spec: containers: - name: ingress-default-backend image: gcr.io/google_containers/defaultbackend:1.0 ports: - containerPort: 8080 resources: limits: cpu: 10m memory: 20Mi --- apiVersion: v1 kind: Service metadata: name: ingress-default-backend namespace: default spec: ports: - port: 8080 selector: run: ingress-default-backend --- apiVersion: v1 data: backend-server-slots-increment: "4" dynamic-scaling: "true" kind: ConfigMap metadata: name: haproxy-ingress namespace: default --- apiVersion: apps/v1beta2 kind: Deployment metadata: labels: run: haproxy-ingress name: haproxy-ingress spec: selector: matchLabels: run: haproxy-ingress template: metadata: labels: run: haproxy-ingress spec: serviceAccountName: ingress-controller containers: - name: haproxy-ingress image: quay.io/jcmoraisjr/haproxy-ingress args: - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend - --configmap=$(POD_NAMESPACE)/haproxy-ingress - --reload-strategy=reusesocket ports: - name: https containerPort: 443 - name: stat containerPort: 1936 livenessProbe: httpGet: path: /healthz port: 10253 env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace --- apiVersion: v1 kind: Service metadata: labels: run: haproxy-ingress name: haproxy-ingress namespace: default spec: type: LoadBalancer ports: - name: https port: 443 - name: stat port: 1936 selector: run: haproxy-ingress </code></pre>
<p>The instructions you have are partial. You didn't update the DNS records.</p> <ol start="8"> <li>kubectl get pod</li> <li>kubectl get svc</li> <li>Write down the EXTERNAL-IP field of the haproxy-ingress service. It can take a couple of minutes until the IP is allocated. You will need to use it to update the DNS record of your domain. Go to customers.ravendb.net and edit the DNS record to the new external IP you got. (or if you have your own domain, do this with your domain provider)</li> <li>When all the raven pods are ready, you can go to the browser and access the cluster.</li> </ol>
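<p>If you want to script the "write down the EXTERNAL-IP" step, the address can be read directly from the service once the load balancer has been provisioned; the service name and namespace below come from the haproxy spec in the question:</p> <pre><code>kubectl get svc haproxy-ingress -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
</code></pre>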
<p>I installed <code>kubernetes</code> on 4 <code>CentOS</code> nodes: 1 master and three workers. After that I want to <code>autoscale</code> my pods. Since <code>heapster</code> is deprecated, I installed the metrics server:</p> <pre><code>git clone https://github.com/kubernetes-incubator/metrics-server.git
[root@ip-10-0-1-91 1.8+]# kubectl apply -f aggregated-metrics-reader.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f auth-reader.yaml
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f auth-delegator.yaml
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f metrics-apiservice.yaml
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f resource-reader.yaml
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f metrics-server-deployment.yaml
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
[root@ip-10-0-1-91 1.8+]# kubectl apply -f metrics-server-service.yaml
service/metrics-server created
</code></pre> <p>But when I check the logs of metrics-server I get this:</p> <pre><code>kubectl -n kube-system logs metrics-server-55d46868d4-9649g
I0327 07:53:03.900200 1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0327 07:53:34.327373 1 authentication.go:245] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLE_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 10.96.0.1:443: i/o timeout
Usage: [flags]
</code></pre> <p>and the kube-apiserver has this log:</p> <pre><code>http: TLS handshake error from 192.168.122.29:41366: remote error: tls: bad certificate </code></pre>
<p>The issue with your setup is it can't connect to kubelet. You need to change the <code>metrics-server-deployment.yaml</code> file in <code>deploy/1.8+</code> folder</p> <p>In containers section, you need to allow <code>insecure-tls</code>. Please add the following section:</p> <pre><code>containers: - command: - /metrics-server - --metric-resolution=30s - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP </code></pre> <p>Following is full metrics-server-deployment.yaml file, you can replace the full file with following:</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: metrics-server namespace: kube-system --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: metrics-server namespace: kube-system labels: k8s-app: metrics-server spec: selector: matchLabels: k8s-app: metrics-server template: metadata: name: metrics-server labels: k8s-app: metrics-server spec: serviceAccountName: metrics-server volumes: # mount in tmp so we can safely use from-scratch images and/or read-only containers - name: tmp-dir emptyDir: {} containers: - command: - /metrics-server - --metric-resolution=30s - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP name: metrics-server image: k8s.gcr.io/metrics-server-amd64:v0.3.1 imagePullPolicy: Always volumeMounts: - name: tmp-dir mountPath: /tmp </code></pre> <p>Now create the metrics-server-deployment again with the new file and it should work.</p>
<p>I have Tomcat as a Docker image, and I have 3 XML/property files needed to bring up the WAR in that Tomcat. I need to write an init container which will:</p> <ol> <li>have a script [shell or python]</li> <li>create a volume and mount it into the main container</li> <li>copy the property files from my local system to the mounted volume</li> <li>finish, after which the app container starts.</li> </ol> <p>For example, on my local machine I have the following:</p> <pre><code>/work-dir tree
├── bootstrap.properties
├── index.html
├── indexing_configuration.xml
├── repository.xml
└── wrapper.sh
</code></pre> <p>The init container should run the script wrapper.sh to copy these files into the volume mounted in the app container, which is <code>/usr/share/jack-configs/</code>.</p>
<p>You have to create a volume and mount it in both containers. In the init container you run the script that copies the files into the mounted volume.</p> <p>Instead of using a local file, I would suggest using blob storage to copy your files over; that will make it much simpler.</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="nofollow noreferrer">This</a> doc shows how to do what you want.</p> <p>An example YAML is the following:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
</code></pre> <p>To accomplish what you want, change the <code>command</code> in the init container to execute your script; I'll leave that bit for you to try (a hedged sketch adapted to your file layout follows below).</p> <p>PS: If you really want to copy from a local (node) filesystem, you need to mount another volume into the init container and copy from one volume to the other.</p>
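<p>A minimal sketch of that adaptation for the Tomcat case above. The image names, the hostPath location, and the copy command are assumptions; it presumes the files from <code>/work-dir</code> have been placed on the node (blob storage would replace the hostPath volume):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tomcat-with-configs
spec:
  containers:
  - name: tomcat
    image: tomcat:9            # assumed image
    volumeMounts:
    - name: configs
      mountPath: /usr/share/jack-configs
  initContainers:
  - name: copy-configs
    image: busybox
    # copies the property/XML files into the shared volume
    # (a simplified stand-in for running wrapper.sh)
    command: ["sh", "-c", "cp /work-dir/*.properties /work-dir/*.xml /configs/"]
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
    - name: configs
      mountPath: /configs
  volumes:
  - name: workdir
    hostPath:                  # assumption: source files live on the node
      path: /work-dir
  - name: configs
    emptyDir: {}
</code></pre>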
<p>We have a scenario where we want to stop the cluster (worker nodes) at night when it is not being used, and start it again in the morning when people start using the application running on AWS EKS. Any suggestions would be helpful.</p>
<p>I think you can achieve it by changing the <strong>desired capacity</strong> of the Auto Scaling group using the AWS CLI. You can run it as a cron job:</p> <p><code>aws autoscaling update-auto-scaling-group --auto-scaling-group-name &lt;my-auto-scaling-group&gt; --desired-capacity 0 --min-size 0</code></p>
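<p>For example, a crontab along these lines would stop the workers at night and bring them back in the morning (the group name, node count, and times are assumptions to adapt):</p> <pre><code># scale worker nodes down to zero at 20:00
0 20 * * * aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-eks-workers --desired-capacity 0 --min-size 0
# scale them back up at 07:00
0 7 * * * aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-eks-workers --desired-capacity 3 --min-size 3
</code></pre>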
<p>I have the following questions regarding request/limit quotas for namespaces:</p> <p>Consider the following namespace resource setup:</p> <ul> <li>request: 1 core / 1 GiB</li> <li>limit: 2 cores / 2 GiB</li> </ul> <ol> <li><p>Does it mean a namespace is guaranteed to have 1 core / 1 GiB? How is this achieved physically on the cluster nodes? Does it mean k8s somehow strictly reserves these values for a namespace (at the time it's created)? At which point in time does the reservation take place?</p></li> <li><p>The limit of 2 cores / 2 GiB: does it mean it's not guaranteed for a namespace and depends on the current cluster state? For example, if the cluster currently has only 100 MiB of free RAM available, but at runtime a pod needs 200 MiB above its resource request, will the pod be restarted? Where does k8s take this resource from if a pod needs to go above its request?</p></li> <li><p>Regarding <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-organizing-with-namespaces" rel="nofollow noreferrer">namespace granularity</a> and k8s horizontal autoscaling: consider we have 2 applications and 2 namespaces, 1 namespace per app. We set both namespace quotas such that there's some free buffer for 2 extra pods, and horizontal autoscaling up to 2 pods with a certain CPU threshold. So, is there really a point in doing such a setup? My concern is that if a namespace reserves its resources and no other namespace can utilize them, we could just create 2 extra pods in each namespace's replica set with no autoscaling, using these pods constantly. I can see a point in using autoscaling if we have more than 1 application in 1 namespace, so that these apps could share the same resource buffer for scaling. Is this assumption correct?</p></li> <li><p>Do you think it is good practice to have 1 namespace per app? Why?</p></li> </ol> <p>P.S. I know what resource requests/limits are and the difference between them. Most sources give only a very high-level explanation of the concept.</p> <p>Thanks in advance.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">docs</a> clearly states the following:</p> <blockquote> <p>In the case where the total capacity of the cluster is less than the sum of the quotas of the namespaces, there may be contention for resources. This is handled on a first-come-first-served basis.</p> </blockquote> <p>and</p> <blockquote> <p>ResourceQuotas are independent of the cluster capacity. They are expressed in absolute units. So, if you add nodes to your cluster, this does not automatically give each namespace the ability to consume more resources.</p> </blockquote> <p>and</p> <blockquote> <p>resource quota divides up aggregate cluster resources, but it creates no restrictions around nodes: pods from several namespaces may run on the same node</p> </blockquote> <p><strong><em>ResourceQuotas</em></strong> is a constraint set in the namespace and does not reserve capacity, it just set a limit of resources that can be consumed by each namespace.</p> <p>To effectively "reserve" the capacity, you have to set the restrictions to all namespaces, so that other namespaces does not use more resources than you cluster can provide. This way you can have more guarantees that a namespace will have available capacity to run their load.</p> <p>The docs suggests:</p> <ul> <li>Proportionally divide total cluster resources among several teams(namespaces).</li> <li>Allow each team to grow resource usage as needed, but have a generous limit to prevent accidental resource exhaustion.</li> <li>Detect demand from one namespace, add nodes, and increase quota.</li> </ul> <p>Given that, the answer for your questions are:</p> <ol> <li><p>it is not a reserved capacity, the reservation happens on resource(pod) creation. </p></li> <li><p>Running resources are not affected after reservation. New resources are rejected if the resource creation will over commit the quotas(limits)</p></li> <li><p>As stated in the docs, if the limit are higher than the capacity, the reservation will happen in a first-come-first-served basis.</p></li> <li><p>This question can make to it's own question in SO, in simple terms, for resource isolation and management.</p></li> </ol>
<p>I'm writing the network policies of a Kubernetes cluster. How can I specify a single IP address that I want to authorize in my egress policy, instead of authorizing a whole range of IP addresses?</p>
<p>An example based on the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource" rel="nofollow noreferrer">official docs</a>:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy namespace: default spec: podSelector: matchLabels: role: db policyTypes: - Egress egress: - to: - ipBlock: cidr: 10.11.12.13/32 ports: - protocol: TCP port: 5978 </code></pre> <p>It's essential to use <code>/32</code> subnet prefix length which indicates that you're limiting the scope of the rule just to this one IP address.</p>
<p>How do I debug code written in Python in a container using Azure Dev Spaces for Kubernetes?</p>
<p>Debugging should be similar to what we have in .NET Core. In .NET, we used to debug like this:</p> <p><strong>Setting and using breakpoints for debugging</strong></p> <p>If Visual Studio 2017 is still connected to your dev space, click the stop button. Open Controllers/HomeController.cs and click somewhere on line 20 to put your cursor there. To set a breakpoint hit F9, or click Debug then Toggle Breakpoint. To start your service in debugging mode in your dev space, hit F5 or click Debug then Start Debugging.</p> <p>Open your service in a browser and notice no message is displayed. Return to Visual Studio 2017 and observe that line 20 is highlighted. The breakpoint you set has paused the service at line 20. To resume the service, hit F5 or click Debug then Continue. Return to your browser and notice the message is now displayed.</p> <p>While running your service in Kubernetes with a debugger attached, you have full access to debug information such as the call stack, local variables, and exception information.</p> <p>Remove the breakpoint by putting your cursor on line 20 in Controllers/HomeController.cs and hitting F9.</p> <p>Try something like this and see if it works.</p> <p>Here is an article which explains debugging Python code in Visual Studio 2017:</p> <p><a href="https://learn.microsoft.com/en-us/visualstudio/python/tutorial-working-with-python-in-visual-studio-step-04-debugging?view=vs-2017" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/visualstudio/python/tutorial-working-with-python-in-visual-studio-step-04-debugging?view=vs-2017</a></p> <p>Hope it helps.</p>
<p>I have a namespace in k8s with the setting: <code>scheduler.alpha.kubernetes.io/defaultTolerations: '[{"key": "role_va", "operator": "Exists"}]'</code></p> <p>If I am not mistaken, all pods that are created in this namespace must get this toleration, but the pods don't get it. I read <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podtolerationrestriction" rel="nofollow noreferrer">this</a> and understood that I must enable the PodTolerationRestriction admission controller. How can I do this on Google Cloud?</p>
<p>In order to enable <code>PodTolerationRestriction</code> you are required to set the <code>--enable-admission-plugins</code> flag in the <code>kube-apiserver</code> configuration. This is according to the official <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">documentation</a>, as this plugin is not included in the default admission controller plugin list.</p> <p>However, in <a href="https://cloud.google.com/kubernetes-engine/" rel="nofollow noreferrer">GKE</a> there is no way to adjust flags in the API server's run-time configuration, because the cluster's core control-plane components are not exposed to user actions (related Stack Overflow <a href="https://stackoverflow.com/questions/49451780/denyescalatingexec-when-under-gke">thread</a>).</p> <p>Given that, you can consider using <a href="https://cloud.google.com/compute/" rel="nofollow noreferrer">GCE</a> and bootstrapping a cluster on GCE VMs with any of the cluster-building <a href="https://kubernetes.io/docs/setup/pick-right-solution/#local-machine-solutions" rel="nofollow noreferrer">solutions</a>, depending on your preference.</p>
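<p>For reference, on a self-managed control plane enabling the plugin would look roughly like this; the rest of the plugin list is an assumption, so append <code>PodTolerationRestriction</code> to whatever your apiserver already enables:</p> <pre><code>kube-apiserver \
  --enable-admission-plugins=NodeRestriction,PodTolerationRestriction
</code></pre>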
<p>Is it possible to get a list of node instances with Prometheus? I have a node exporter, but I don't see metrics like that.</p> <p>Should we add a new operator?</p>
<p>You can use <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> for this purpose.</p> <blockquote> <p>kube-state-metrics is about generating metrics from Kubernetes API objects without modification. This ensures, that features provided by kube-state-metrics have the same grade of stability as the Kubernetes API objects themselves. In turn this means, that kube-state-metrics in certain situations may not show the exact same values as kubectl, as kubectl applies certain heuristics to display comprehensible messages. kube-state-metrics exposes raw data unmodified from the Kubernetes API, this way users have all the data they require and perform heuristics as they see</p> </blockquote> <p>You can find node metrics <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/node-metrics.md" rel="nofollow noreferrer">here</a>. For example:</p> <pre><code>Metric name: kube_node_info node=&lt;node-address&gt; kernel_version=&lt;kernel-version&gt; os_image=&lt;os-image-name&gt; container_runtime_version=&lt;container-runtime-and-version-combination&gt; kubelet_version=&lt;kubelet-version&gt; kubeproxy_version=&lt;kubeproxy-version&gt; provider_id=&lt;provider-id </code></pre>
<p>I am getting an error in my kubernetes cluster while upgrading my install of <code>kamus</code>:</p> <pre><code>$ helm --debug upgrade --install soluto/kamus
[debug] Created tunnel using local port: '64252'

[debug] SERVER: &quot;127.0.0.1:64252&quot;

Error: This command needs 2 arguments: release name, chart path
</code></pre> <p>Using helm version 2.13.1</p> <p>This error is also known to be caused by not using <code>--set</code> correctly or as intended.</p> <p>As an example, when upgrading my ingress-nginx/ingress-nginx install as such:</p> <pre><code> --set &quot;controller.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path&quot;=/healthz,&quot;controller.service.annotations.service\.beta\.kubernetes\.io/azure-dns-label-name&quot;=$DNS_LABEL
</code></pre> <p>This caused the same error as listed above.</p> <p>When I removed the quotations it worked as intended.</p> <pre><code> --set controller.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path=/healthz,controller.service.annotations.service\.beta\.kubernetes\.io/azure-dns-label-name=$DNS_LABEL
</code></pre> <p>The error in this case had nothing to do with not correctly setting a release name and/or chart. More explanation of <code>--set</code> issues and solutions is below.</p>
<p><a href="https://github.com/helm/helm/blob/master/docs/helm/helm_upgrade.md" rel="noreferrer">Helm upgrade</a> command requires release name and chart path. In your case, you missed release name.</p> <blockquote> <p>helm upgrade [RELEASE] [CHART] [flags]</p> </blockquote> <p><code>helm --debug upgrade --install kamus soluto/kamus</code> should work.</p>
<p>I am getting an error mounting a config file; can anyone assist?</p> <p>With subPath on volumeMounts I get the error:</p> <pre><code>Error: stat /var/config/openhim-console.json: no such file or directory.
</code></pre> <p>I can read this file.</p> <p>Without subPath on volumeMounts I get this error:</p> <pre><code>Warning  Failed   13s  kubelet, ip-10-0-65-230.eu-central-1.compute.internal  Error: failed to start container "openhim-console": Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\\"/var/config/openhim-console.json\\\" to rootfs \\\"/var/lib/docker/overlay2/7408e2aa7e93b3c42ca4c2320681f61ae4bd4b02208364eee8da5f51d587ed21/merged\\\" at \\\"/var/lib/docker/overlay2/7408e2aa7e93b3c42ca4c2320681f61ae4bd4b02208364eee8da5f51d587ed21/merged/usr/share/nginx/html/config/default.json\\\" caused \\\"not a directory\\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning  BackOff  2s   kubelet, ip-10-0-65-230.eu-central-1.compute.internal  Back-off restarting failed container
</code></pre> <p>Here is the deployment.yaml</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-console-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-console
  template:
    metadata:
      labels:
        component: openhim-console
    spec:
      volumes:
      - name: console-config
        hostPath:
          path: /var/config/openhim-console.json
      containers:
      - name: openhim-console
        image: jembi/openhim-console:1.13.rc
        ports:
        - containerPort: 80
        volumeMounts:
        - name: console-config
          mountPath: /usr/share/nginx/html/config/default.json
          subPath: default.json
        env:
        - name: NODE_ENV
          value: development
</code></pre>
<p>Probably <code>hostPath</code> should hold a directory <code>path</code> rather than your file path <code>/var/config/openhim-console.json</code>, as by default you're mounting a directory, not a file.</p> <p>If you do want to mount a single file, its <code>type</code> should be specified as <code>File</code>; a sketch follows below.</p> <p>See also <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">docs#hostpath</a></p>
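<p>A minimal sketch of the relevant spec parts for mounting that single JSON file over <code>default.json</code>, assuming the file exists at that path on the node:</p> <pre><code>volumes:
- name: console-config
  hostPath:
    path: /var/config/openhim-console.json
    type: File          # fail fast if the node path is not an existing file
containers:
- name: openhim-console
  image: jembi/openhim-console:1.13.rc
  volumeMounts:
  - name: console-config
    mountPath: /usr/share/nginx/html/config/default.json
</code></pre>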
<p>I have a Kubernetes cluster in which there are some MySQL databases.</p> <p>I want to have a replication slave for each database in a different Kubernetes cluster in a different datacenter.</p> <p>I'm using Calico as the CNI plugin.</p> <p>To make the replication process work, the slaves must be able to connect to port 3306 of the master servers. And I would prefer to keep these connections as isolated as possible.</p> <p>I'm wondering about the best approach to manage this.</p> <p><a href="https://i.stack.imgur.com/VcqXP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcqXP.jpg" alt="enter image description here"></a></p>
<p>One of the ways to implement your idea is to use a new tool called <a href="https://submariner.io/" rel="nofollow noreferrer">Submariner</a>.</p> <p>Submariner enables direct networking between pods in different Kubernetes clusters, on premises or in the cloud.</p> <p>This new solution overcomes barriers to connectivity between Kubernetes clusters and allows for a host of new multi-cluster implementations, <strong>such as database replication within Kubernetes across geographic regions and deploying service mesh across clusters.</strong></p> <p>Key features of Submariner include:</p> <ul> <li>Compatibility and connectivity with existing clusters: users can deploy Submariner into existing Kubernetes clusters, with the addition of Layer-3 network connectivity between pods in different clusters.</li> <li>Secure paths: encrypted network connectivity is implemented using IPsec tunnels.</li> <li>Various connectivity mechanisms: while IPsec is the default connectivity mechanism out of the box, Rancher will enable different inter-connectivity plugins in the near future.</li> <li>Centralized broker: users can register and maintain a set of healthy gateway nodes.</li> <li>Flexible service discovery: Submariner provides service discovery across multiple Kubernetes clusters.</li> <li><strong>CNI compatibility</strong>: works with popular CNI drivers such as Flannel and <strong>Calico</strong>.</li> </ul> <p><a href="https://github.com/rancher/submariner#prerequisites" rel="nofollow noreferrer">Prerequisites</a> to use it:</p> <ul> <li>At least 3 Kubernetes clusters, one of which is designated to serve as the central broker that is accessible by all of your connected clusters; this can be one of your connected clusters, but comes with the limitation that the cluster is required to be up in order to facilitate inter-connectivity/negotiation.</li> <li>Different cluster/service CIDRs (as well as different Kubernetes DNS suffixes) between clusters, to prevent traffic selector/policy/routing conflicts.</li> <li>Direct IP connectivity between instances through the internet (or on the same network if not running Submariner over the internet). Submariner supports 1:1 NAT setups, but has a few caveats/provider-specific configuration instructions in this configuration.</li> <li>Knowledge of each cluster's network configuration.</li> <li>A Helm version that supports the crd-install hook (v2.12.1+).</li> </ul> <p>You can find more info, with installation steps, on <a href="https://github.com/rancher/submariner" rel="nofollow noreferrer">submariner github</a>. Also, you may find the <a href="https://www.infoq.com/news/2019/03/rancher-submariner-multicluster" rel="nofollow noreferrer">rancher submariner multi-cluster article</a> interesting and useful.</p> <p>Good luck.</p>
<p>I am trying to use <a href="https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html" rel="nofollow noreferrer">CNI Custom Networking on EKS</a> to make sure that pod IPs are allocated from alternative subnets (to prevent IP starvation in the subnets my cluster nodes are running in). To do this I need to create some ENIConfigs and annotate each node.</p> <p>How can I ensure that each node is annotated before any pods are scheduled to it, to ensure no pod IPs are allocated from the subnets my nodes are running in?</p> <p>EDIT: The only solution I can think of so far is:</p> <ul> <li>Add a NoSchedule taint to all nodes by default</li> <li>Deploy a custom controller that tolerates the taint</li> <li>Get the controller to annotate all nodes as required and remove the taint</li> </ul> <p>However, if the above is the only workaround, that is a lot of effort for a managed service.</p>
<p>How about:</p> <ul> <li>Add an <code>ENIConfigComplete: false</code> taint to all nodes by default</li> <li>Deploy a DaemonSet that tolerates <code>ENIConfigComplete: false</code></li> <li>The DaemonSet creates a pod on each new node, which</li> <li>creates some ENIConfigs on the node (bash script??)</li> <li>annotates each node with <code>ENIConfigComplete: true</code></li> <li>The DaemonSet no longer tolerates the node, so</li> <li>the pod is removed from the node.</li> </ul> <p>The DaemonSet would ensure that every new node was properly set up; a rough sketch follows below.</p> <p>Salesforce talks about this technique for provisioning the disks on their new nodes:</p> <ul> <li><a href="https://engineering.salesforce.com/provisioning-kubernetes-local-persistent-volumes-61a82d1d06b0" rel="nofollow noreferrer">https://engineering.salesforce.com/provisioning-kubernetes-local-persistent-volumes-61a82d1d06b0</a></li> </ul> <p>It would avoid having a long-running controller process.</p>
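<p>A rough sketch of such a DaemonSet; the taint key, image, and setup script are all assumptions:</p> <pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: eniconfig-setup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: eniconfig-setup
  template:
    metadata:
      labels:
        app: eniconfig-setup
    spec:
      serviceAccountName: eniconfig-setup      # needs RBAC to annotate/untaint nodes
      tolerations:
      - key: ENIConfigComplete                 # tolerate only the bootstrap taint
        operator: Equal
        value: "false"
        effect: NoSchedule
      containers:
      - name: setup
        image: example/eniconfig-setup:latest  # hypothetical image with kubectl + script
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        # the script would apply the ENIConfig annotation, then flip the taint
        # to ENIConfigComplete=true so this pod gets evicted from the node
        command: ["/bin/sh", "-c", "/setup-eniconfig.sh $NODE_NAME"]
</code></pre>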
<p>I work with four teams that are using exactly the same environments, set up in Kubernetes namespaces. I have created helm charts to install those environments. Everything works fine, but I have to create ingresses by hand because of the following hostname format:</p> <pre><code>&lt;namespace&gt;.&lt;app&gt;.&lt;k8sdomain&gt;
</code></pre> <p>The thing is, I would like to just change context with kubectl and then run those charts, instead of editing every single values.yaml to change the namespace variable.</p> <p>Is it possible to use some predefined or dynamic variable that would add the correct namespace to the host in the ingress?</p> <p>Or is there any other solution that would help me solve this problem?</p> <p>Thanks.</p>
<p>The namespace value can be derived from the <code>--namespace</code> parameter, i.e. the namespace the helm chart is deployed to. In the chart templates it can then be accessed as <code>{{ .Release.Namespace }}</code>. Alternatively, you can set the namespaces using <code>--set</code> when deploying the helm chart with <code>helm upgrade</code>. If there are a few environments, you can define them as aliases in <code>values.yaml</code> and then set namespace values for them like this:</p> <pre><code>helm upgrade \
  &lt;chart_name&gt; \
  &lt;path_to_the_chart&gt; \
  --set &lt;environment_one&gt;.namespace=namespace1 \
  --set &lt;environment_two&gt;.namespace=namespace2 \
  ...
</code></pre>
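<p>For the hostname pattern in the question, the ingress template could then build the host from the release namespace; a sketch, assuming <code>app</code> and <code>k8sdomain</code> are defined in <code>values.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
  # resolves to &lt;namespace&gt;.&lt;app&gt;.&lt;k8sdomain&gt; at install time
  - host: {{ .Release.Namespace }}.{{ .Values.app }}.{{ .Values.k8sdomain }}
    http:
      paths:
      - backend:
          serviceName: {{ .Release.Name }}
          servicePort: 80
</code></pre>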
<p>I have a base YAML and an overlay YAML, and using "kustomize" I want to merge these two YAMLs. What happens is that running <code>kustomize build</code> produces output, but not the expected one. Why? Because instead of filling in the custom information from the overlay YAML, kustomize replaces the whole heading tag of the base with the overlay's ("containers" in my case). The intended behaviour I need is that it should somehow fill in the missing information using the overlay YAML instead of replacing it.</p> <p>base yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
  - name: temp
    image: temp
    imagePullPolicy: Always
    command: temp
    args: temp
    envFrom:
    - configMapRef:
        name: temp
    volumeMounts:
  volumes:
</code></pre> <p>overlay yaml:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
    volumeMounts:
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath
</code></pre> <p>Expected result after kustomize build:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
  - name: temp
    image: temp
    imagePullPolicy: Always
    command: temp
    args: ["sleep 9000"]
    envFrom:
    - configMapRef:
        name: temp
    volumeMounts:
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath
</code></pre> <p>what i'm getting:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    tier: temp
  name: temp
spec:
  containers:
    volumeMounts:
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath
</code></pre>
<p>In your <code>base.yaml</code> the value for the key <code>containers</code> is a sequence (node). In your <code>overlay.yaml</code> the value for the key <code>containers</code> is a mapping. Of course those two cannot be merged.</p> <p>Not knowing kustomize at all, it seems logical that because those cannot be merged, the overlay replaces that whole sequence node with the mapping node. Your expectation that the mapping of the overlay is merged with a mapping that happens to be an item (in this case the only item) in the sequence of the base seems completely arbitrary. Which item would need to be taken if there had been multiple items? The first? The last? The last one before item five that is a mapping?</p> <p>If your <code>overlay.yaml</code> looked like:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: temp
  labels:
    tier: temp
spec:
  containers:
  - volumeMounts:  # &lt; created a sequence item here by inserting an item indicator
    - name: temppathname
      mountPath: /temppath
  volumes:
  - name: temppathname
    hostPath:
      type: temp
      path: temppath
</code></pre> <p>then I could understand your expectation (and maybe the above change can be applied to make it work, I don't have a way to test).</p>
<p>I have a Kubernetes cluster: one master and one worker. I installed metrics-server for autoscaling and then ran a stress test:</p> <pre><code>$ kubectl run autoscale-test --image=ubuntu:16.04 --requests=cpu=1000m --command sleep 1800
deployment "autoscale-test" created

$ kubectl autoscale deployment autoscale-test --cpu-percent=25 --min=1 --max=5
deployment "autoscale-test" autoscaled

$ kubectl get hpa
NAME             REFERENCE                   TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
autoscale-test   Deployment/autoscale-test   0% / 25%   1         5         1          1m

$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
autoscale-test-59d66dcbf7-9fqr8   1/1     Running   0          9m

kubectl exec autoscale-test-59d66dcbf7-9fqr8 -- apt-get update
kubectl exec autoscale-test-59d66dcbf7-9fqr8 -- apt-get install stress

$ kubectl exec autoscale-test-59d66dcbf7-9fqr8 -- stress --cpu 2 --timeout 600s &amp;
stress: info: [227] dispatching hogs: 2 cpu, 0 io, 0 vm, 0 hdd
</code></pre> <p>Everything works fine and the deployment was autoscaled, but afterwards the pods that were created by the autoscaler keep running; they do not terminate after the stress test. The HPA shows that 0% of the CPU is in use, but the 5 autoscaled pods are still running:</p> <pre><code>#kubectl get hpa
NAME             REFERENCE                   TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
autoscale-test   Deployment/autoscale-test   0%/25%    1         5         5          74m

#kubectl get pods --all-namespaces
NAMESPACE   NAME                             READY   STATUS    RESTARTS   AGE
default     autoscale-test-8f4d84bbf-7ddjw   1/1     Running   0          61m
default     autoscale-test-8f4d84bbf-bmr59   1/1     Running   0          61m
default     autoscale-test-8f4d84bbf-cxt26   1/1     Running   0          61m
default     autoscale-test-8f4d84bbf-x9jws   1/1     Running   0          61m
default     autoscale-test-8f4d84bbf-zbhvk   1/1     Running   0          71m
</code></pre> <p>I waited for an hour but nothing happened.</p>
<p>From the documentation:</p> <p><strong>--horizontal-pod-autoscaler-downscale-delay:</strong> The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).</p> <blockquote> <p>Note: When tuning these parameter values, a cluster operator should be aware of the possible consequences. If the delay (cooldown) value is set too long, there could be complaints that the Horizontal Pod Autoscaler is not responsive to workload changes. However, if the delay value is set too short, the scale of the replicas set may keep thrashing as usual.</p> <p>Finally, just before HPA scales the target, the scale recommendation is recorded. The controller considers all recommendations within a configurable window choosing the highest recommendation from within that window. This value can be configured using the --horizontal-pod-autoscaler-downscale-stabilization-window flag, which defaults to 5 minutes. This means that scaledowns will occur gradually, smoothing out the impact of rapidly fluctuating metric values.</p> </blockquote>
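<p>Both settings are controller-manager flags, so on a kubeadm-style cluster you would adjust them in the static pod manifest; a sketch (the exact flag name can vary slightly by Kubernetes version, and the value here is illustrative):</p> <pre><code># /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-downscale-stabilization=5m0s
</code></pre>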
<p>I have a pod in a state of <code>CrashLoopBackOff</code>; the logs I'm seeing from <code>kubectl logs &lt;pod-name&gt; -p</code> present only a partial picture. Other logs are found in other files (e.g. <code>/var/log/something/something.log</code>).</p> <p>Since this pod has crashed, I can't <code>kubectl exec</code> into a shell there and look at the files.</p> <p>How can I look at the log files produced by a container that is no longer running?</p> <p>To be more specific, I'm looking for log files under <code>$HOME/logs/es.log</code> (in the container that failed).</p>
<p>I was so frustrated at finding no solution to this seemingly common problem that I built a Docker image that tails log files and sends them to stdout, to be used as a sidecar container.</p> <hr> <p>Here's what I did:</p> <ol> <li>I added a volume with <code>emptyDir{}</code> to the pod</li> <li>I mounted that volume to my main container, with the <code>mountPath</code> being the directory to which it writes the logs</li> <li>I added another container to the pod, called "logger", with the image being the log tracker I wrote (<code>lutraman/logger-sidecar:v2</code>), and mounted the same volume to <code>/logs</code> (I programmed the script to read the logs from this directory)</li> </ol> <p>Then, all the logs written to that directory can be accessed by <code>kubectl logs &lt;pod-name&gt; -c logger</code></p> <hr> <p>Here is an example yaml:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy
  labels:
    app: dummy
spec:
  selector:
    matchLabels:
      app: dummy
  template:
    metadata:
      labels:
        app: dummy
    spec:
      volumes:
      - name: logs
        emptyDir: {}
      containers:
      - name: dummy-app # the app that writes logs to files
        image: lutraman/dummy:v2
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        env:
        - name: MESSAGE
          value: 'hello-test'
        - name: LOG_FILE
          value: '/var/log/app.log'
        volumeMounts:
        - name: logs
          mountPath: /var/log
      - name: logger # the sidecar container tracking logs and sending them to stdout
        image: lutraman/logger-sidecar:v2
        volumeMounts:
        - name: logs
          mountPath: /logs
</code></pre> <hr> <p>For anyone who is interested, here is how I made the sidecar container:</p> <p>Dockerfile:</p> <pre><code>FROM alpine:3.9

RUN apk add bash --no-cache
COPY addTail /addTail
COPY logtrack.sh /logtrack.sh

CMD ["./logtrack.sh"]
</code></pre> <p>addTail:</p> <pre class="lang-sh prettyprint-override"><code>#!/bin/sh
# tail the given file (argument 3, as passed by inotifyd: event, dir, file),
# prefix each line with the file name, and record the background pid for cleanup
(exec tail -F logs/$3 | sed "s/^/$3: /" ) &amp;
echo $! &gt;&gt; /tmp/pids
</code></pre> <p>logtrack.sh:</p> <pre class="lang-sh prettyprint-override"><code>#!/bin/bash
trap cleanup INT

# kill all the tail processes recorded in /tmp/pids
function cleanup() {
  while read pid; do kill $pid; echo killed $pid; done &lt; /tmp/pids
}

: &gt; /tmp/pids

# start tailing the log files that already exist...
for log in $(ls logs); do
  ./addTail n logs $log
done

# ...and watch the directory so newly created files get tailed too
inotifyd ./addTail `pwd`/logs:n
</code></pre>
<p>I am trying to migrate a SQLite dump to PostgreSQL deployed on Kubernetes, but I am having issues connecting the dots.</p> <p>I created a SQLite dump using the following command:</p> <p><code>sqlite3 sqlite.db .dump &gt; dump.sql</code></p> <p>Now I want to migrate it to my Kubernetes deployment. I have connected to it using:</p> <p><code>psql -h &lt;external IP of node&gt; -U postgresadmin --password -p &lt;port that i got using kubectl get svc postgres&gt;</code></p> <p>Everything seems to work so far; it is now that I run into problems, which resulted in me having two questions!</p> <ol> <li>I have seen some questions regarding migrating SQLite to PostgreSQL, and it seems this command will do that: <code>psql /path/to/psql -d database -U username -W &lt; /the/path/to/dump.sql</code>. Will I have to create the same tables (empty but with the same columns etc.) in PostgreSQL if the dump contains several tables, or will this command do this automatically? I believe I do, based on the output I share in question 2, but will it also move on automatically to the next table?</li> <li>How do I combine the command in question 1 to migrate it to the Kubernetes deployment? I tried adding <code>&lt; dump.sql</code> to the command I used to connect to it, but got the following output:</li> </ol> <pre><code>CREATE TABLE
INSERT 0 1
ERROR:  relation "first_report" does not exist
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
</code></pre> <p>So, obviously I am doing something wrong... The dump contains three tables: <code>first_report</code>, <code>second_report</code> and <code>relation_table</code>.</p>
<ol> <li><p>You definitely don't need to create empty tables in PostgreSQL; your dump file should contain <code>create table</code> statements. I'd recommend testing with a dump of the schema only, without data: <code>sqlite3 sqlite.db .schema &gt; dump.sql</code></p></li> <li><p>In order to restore the db from the dump inside a container, just run</p> <p><code>kubectl exec -it postgres-pod -- psql --username admin -d testdb &lt; dump.sql</code></p></li> </ol> <p>If that doesn't help, please share your <strong>dump.sql</strong> file (schema only).</p>
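<p>To verify the restore afterwards, you could list the tables; the pod, user, and database names below are the same placeholders as above:</p> <pre><code>kubectl exec -it postgres-pod -- psql --username admin -d testdb -c '\dt'
</code></pre>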
<p>I had a t2.micro server running where I had deployed Minikube; however, due to a memory issue I scaled up the server size, for which I had to stop and start the instance.</p> <p>But now, after restarting, when I try kubectl commands I get the below error:</p> <pre><code>root@ip-172-31-23-231:~# kubectl get nodes
The connection to the server 172.31.23.231:6443 was refused - did you specify the right host or port?
</code></pre> <p>So how can I bring my earlier kube cluster up once I restart my AWS instance?</p>
<p>I had the same error. In my case minikube was not running. I started it with</p> <pre><code>minikube start </code></pre>
<p>I am using Minikube/Kubernetes and want to <em>add</em> a new user. Therefore I need to sign the certificate request for this new user. Where is the root certificate of Minikube located?</p>
<p>You can find your Minikube CA certificate(s) in your <code>~/.minikube</code> directory. </p>
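<p>Specifically, <code>ca.crt</code> and <code>ca.key</code> sit at the top of that directory, so signing a new user's CSR could look roughly like this (the CSR and output file names are assumptions):</p> <pre><code># sign user.csr with the Minikube cluster CA, producing user.crt
openssl x509 -req -in user.csr \
  -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key -CAcreateserial \
  -out user.crt -days 365
</code></pre>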
<p>I'd like to move an instance of Azure Kubernetes Service to another subnet in the same virtual network. Is it possible or the only way to do this is to recreate the AKS instance? </p>
<p>No, it is not possible; you need to redeploy AKS.</p> <p>Edit 08.02.2023: it's actually possible to some extent now: <a href="https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni-dynamic-ip-allocation#configure-networking-with-dynamic-allocation-of-ips-and-enhanced-subnet-support---azure-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni-dynamic-ip-allocation#configure-networking-with-dynamic-allocation-of-ips-and-enhanced-subnet-support---azure-cli</a></p> <p>I'm not sure it can be updated on an existing cluster without recreating it (or the node pool).</p>
<p>I am trying to deploy an Apache Ignite cluster in Kubernetes. The documentation suggests using TcpDiscoveryKubernetesIpFinder to facilitate Ignite node discovery in a Kubernetes environment. However, I could not find this class in Apache Ignite for .NET. Is it migrated to .NET at all? If not, how can I use it in my .NET application? I am not very familiar with Java.</p> <p>If it is not possible, is there an alternative approach to implement node discovery in the Kubernetes environment without using TcpDiscoveryKubernetesIpFinder? Multicast is not available in Azure Virtual Network.</p> <p>The range of available IPs in my Kubernetes subnet is 1000+ addresses, so using TcpDiscoveryStaticIpFinder would not be very efficient. I tried reducing FailureDetectionTimeout to 1 sec on my local PC to make it more efficient, but Ignite generates a bunch of "critical thread blocked" exceptions, allegedly each time an endpoint is found unavailable. So I had to get rid of the FailureDetectionTimeout setting.</p> <p>I am using Azure Kubernetes Service and Apache Ignite 2.7 for .NET. Thank you in advance.</p>
<p>You can combine Java-based (Spring XML) configuration with .NET configuration.</p> <ol> <li><p>Configure <code>TcpDiscoveryKubernetesIpFinder</code> in a Spring XML file (see <a href="https://apacheignite.readme.io/docs/kubernetes-ip-finder" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/kubernetes-ip-finder</a>)</p></li> <li><p>In .NET, set <code>IgniteConfiguration.SpringConfigUrl</code> to point to that file</p></li> </ol> <p>The way it works is that Ignite loads the Spring XML first, then applies any custom config properties that are specified on the .NET side. A sketch of such an XML file follows below.</p>
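<p>A minimal sketch of the Spring XML file; the <code>namespace</code> and <code>serviceName</code> values are assumptions to adapt to your deployment:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd"&gt;
  &lt;bean class="org.apache.ignite.configuration.IgniteConfiguration"&gt;
    &lt;property name="discoverySpi"&gt;
      &lt;bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"&gt;
        &lt;property name="ipFinder"&gt;
          &lt;bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"&gt;
            &lt;!-- Kubernetes namespace and headless service used for node discovery --&gt;
            &lt;property name="namespace" value="default"/&gt;
            &lt;property name="serviceName" value="ignite"/&gt;
          &lt;/bean&gt;
        &lt;/property&gt;
      &lt;/bean&gt;
    &lt;/property&gt;
  &lt;/bean&gt;
&lt;/beans&gt;
</code></pre>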
<p>I'm trying to create a pod using my local Docker image as follows.</p> <p>1. First I ran this command in the terminal:</p> <pre><code>eval $(minikube docker-env)
</code></pre> <p>2. I created a Docker image as follows:</p> <pre><code>sudo docker image build -t my-first-image:3.0.0 .
</code></pre> <p>3. I created the pod.yml as shown below and ran this command:</p> <pre><code>kubectl create -f pod.yml
</code></pre> <p>4. Then I tried to run this command:</p> <pre><code>kubectl get pods
</code></pre> <p>but it shows the following error:</p> <pre><code>NAME                                  READY   STATUS             RESTARTS   AGE
multiplication-6b6d99554-d62kk        0/1     CrashLoopBackOff   9          22m
multiplication2019-5b4555bcf4-nsgkm   0/1     CrashLoopBackOff   8          17m
my-first-pod                          0/1     CrashLoopBackOff   4          2m51
</code></pre> <p>5. I got the pod's events with:</p> <pre><code>kubectl describe pod my-first-pod
</code></pre> <pre><code>Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m22s                  default-scheduler  Successfully assigned default/my-first-pod to minikube
Normal   Pulled     5m20s (x4 over 6m17s)  kubelet, minikube  Successfully pulled image "docker77nira/myfirstimage:latest"
Normal   Created    5m20s (x4 over 6m17s)  kubelet, minikube  Created container
Normal   Started    5m20s (x4 over 6m17s)  kubelet, minikube  Started container
Normal   Pulling    4m39s (x5 over 6m21s)  kubelet, minikube  pulling image "docker77nira/myfirstimage:latest"
Warning  BackOff    71s (x26 over 6m12s)   kubelet, minikube  Back-off restarting failed container
</code></pre> <pre><code>Dockerfile

FROM node:carbon
WORKDIR /app
COPY . .
CMD [ "node", "index.js" ]
</code></pre> <pre><code>pods.yml

kind: Pod
apiVersion: v1
metadata:
  name: my-first-pod
spec:
  containers:
  - name: my-first-container
    image: my-first-image:3.0.0
</code></pre> <pre><code>index.js

var http = require('http');

var server = http.createServer(function(request, response) {
  response.statusCode = 200;
  response.setHeader('Content-Type', 'text/plain');
  response.end('Welcome to the Golden Guide to Kubernetes Application Development!');
});

server.listen(3000, function() {
  console.log('Server running on port 3000');
});
</code></pre>
<p>Try checking the logs with the command <code>kubectl logs -f my-first-pod</code></p>
<p>When I'm connecting to the remote-debug port of a pod (OpenShift) using IntelliJ, how can I prevent the pod from crashing and the debug session from stopping, and keep the debug session alive (like in Eclipse)?</p> <p>This is a pod running under the OpenShift platform. When connecting to remote debug with the same configuration and the same port using Eclipse, the debug session does not terminate and the pod does not crash.</p> <p>The command line arguments: <code>-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=45288</code></p> <p>Debugger mode: Attach to remote VM</p>
<p>You can try using <em>Suspend policy: Thread</em> when debugging. As per <a href="https://www.jetbrains.com/help/idea/using-breakpoints.html#breakpoint-properties" rel="nofollow noreferrer">Breakpoint properties</a> docs:</p> <blockquote> <p>Thread: only the thread containing this breakpoint will be suspended. </p> </blockquote> <p>This approach is far from ideal. If you are debugging a multi-threaded application other threads will continue to run and potentially interfere with your debugging session. However it might allow the pod to pass the liveness test.</p>
<p>In brief, these are the steps I have done:</p> <ol> <li><p>Launched <strong>2</strong> new <code>t3.small</code> instances in AWS, pre-tagged with the key <code>kubernetes.io/cluster/&lt;cluster-name&gt;</code> and value <code>member</code>.</p></li> <li><p>Tagged the new security group with the same tag and opened all ports mentioned here - <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports</a></p></li> <li><p>Changed the <code>hostname</code> to the output of <code>curl http://169.254.169.254/latest/meta-data/local-hostname</code> and verified it with <code>hostnamectl</code></p></li> <li><p>Rebooted</p></li> <li><p>Configured the AWS CLI per <code>https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html</code></p></li> <li><p>Created an <code>IAM role</code> with full (<code>"*"</code>) permissions and assigned it to the EC2 instances.</p></li> <li><p>Installed <code>kubelet kubeadm kubectl</code> using <code>apt-get</code></p></li> <li><p>Created <code>/etc/default/kubelet</code> with the content <code>KUBELET_EXTRA_ARGS=--cloud-provider=aws</code></p></li> <li><p>Ran <code>kubeadm init --pod-network-cidr=10.244.0.0/16</code> on one instance and used the output to <code>kubeadm join ...</code> the other node.</p></li> <li><p>Installed <a href="https://www.digitalocean.com/community/tutorials/how-to-install-software-on-kubernetes-clusters-with-the-helm-package-manager" rel="nofollow noreferrer">Helm</a>.</p></li> <li><p>Installed an <a href="https://akomljen.com/aws-cost-savings-by-utilizing-kubernetes-ingress-with-classic-elb/" rel="nofollow noreferrer">ingress controller</a> with the default backend.</p></li> </ol> <p>Previously I tried the above steps, but installed the ingress from the instructions on <a href="https://kubernetes.github.io/ingress-nginx/deploy/#aws" rel="nofollow noreferrer">kubernetes.github.io</a>.
Both ended up with the same status: <code>EXTERNAL-IP</code> as <code>&lt;pending&gt;</code>.</p> <hr> <p>Current status is:</p> <p><code>kubectl get pods --all-namespaces -o wide</code></p> <pre><code>NAMESPACE     NAME                                                                    IP              NODE
ingress       ingress-nginx-ingress-controller-77d989fb4d-qz4f5                       10.244.1.13     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
ingress       ingress-nginx-ingress-default-backend-7f7bf55777-dhj75                  10.244.1.12     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   coredns-86c58d9df4-bklt8                                                10.244.1.14     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   coredns-86c58d9df4-ftn8q                                                10.244.1.16     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   etcd-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal                       172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-apiserver-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal             172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-controller-manager-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal    172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-flannel-ds-amd64-87k8p                                             172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-flannel-ds-amd64-f4wft                                             172.31.3.106    ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   kube-proxy-79cp2                                                        172.31.3.106    ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   kube-proxy-sv7md                                                        172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-scheduler-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal             172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   tiller-deploy-5b7c66d59c-fgwcp                                          10.244.1.15     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
</code></pre> <p><code>kubectl get svc --all-namespaces -o wide</code></p> <pre><code>NAMESPACE     NAME                                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
default       kubernetes                               ClusterIP      10.96.0.1        &lt;none&gt;        443/TCP                      73m   &lt;none&gt;
ingress       ingress-nginx-ingress-controller         LoadBalancer   10.97.167.197    &lt;pending&gt;     80:32722/TCP,443:30374/TCP   59m   app=nginx-ingress,component=controller,release=ingress
ingress       ingress-nginx-ingress-default-backend    ClusterIP      10.109.198.179   &lt;none&gt;        80/TCP                       59m   app=nginx-ingress,component=default-backend,release=ingress
kube-system   kube-dns                                 ClusterIP      10.96.0.10       &lt;none&gt;        53/UDP,53/TCP                73m   k8s-app=kube-dns
kube-system   tiller-deploy                            ClusterIP      10.96.216.119    &lt;none&gt;        44134/TCP                    67m   app=helm,name=tiller
</code></pre> <hr> <pre><code>kubectl describe service -n ingress ingress-nginx-ingress-controller

Name:                     ingress-nginx-ingress-controller
Namespace:                ingress
Labels:                   app=nginx-ingress
                          chart=nginx-ingress-1.4.0
                          component=controller
                          heritage=Tiller
                          release=ingress
Annotations:              service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
Selector:                 app=nginx-ingress,component=controller,release=ingress
Type:                     LoadBalancer
IP:                       10.104.55.18
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32318/TCP
Endpoints:                10.244.1.20:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32560/TCP
Endpoints:                10.244.1.20:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &lt;none&gt;
</code></pre> <p>Attached IAM role inline policy:</p> <pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}
</code></pre> <hr> <p>kubectl get nodes -o wide</p> <pre><code>NAME                                           STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
ip-172-31-12-119.ap-south-1.compute.internal   Ready    master   6d19h   v1.13.4   172.31.12.119   XX.XXX.XXX.XX   Ubuntu 16.04.5 LTS   4.4.0-1077-aws   docker://18.6.3
ip-172-31-3-106.ap-south-1.compute.internal    Ready    &lt;none&gt;   6d19h   v1.13.4   172.31.3.106    XX.XXX.XX.XXX   Ubuntu 16.04.5 LTS   4.4.0-1077-aws   docker://18.6.3
</code></pre> <hr> <p>Could someone please point out what I am missing here, as everywhere on the internet it says a <code>Classic ELB</code> will be deployed automatically?</p>
<p>For AWS ELB (type Classic) you have to:</p> <ol> <li><p>Explicitly specify <code>--cloud-provider=aws</code> in the kube service manifests located in <code>/etc/kubernetes/manifests</code> on the master node:</p> <p><code>kube-controller-manager.yaml kube-apiserver.yaml</code></p></li> <li><p>Restart the services:</p> <p><code>sudo systemctl daemon-reload</code></p> <p><code>sudo systemctl restart kubelet</code></p></li> </ol> <hr> <p>Add the flag along with the other command arguments, at the bottom or top as you wish. The result should be similar to:</p> <p>in <em>kube-controller-manager.yaml</em></p> <pre><code>spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws
</code></pre> <p>in <em>kube-apiserver.yaml</em></p> <pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=aws
</code></pre>