<p>I am following the <a href="https://www.kubeflow.org/docs/gke/gcp-e2e/" rel="nofollow noreferrer">tutorial</a> for building kubeflow on GCP.</p> <p>At the last step, I got stuck at "Check the permissions for your training component".</p> <p>After setting the secretName and secretMountPath:</p> <pre><code>kustomize edit add configmap mnist-map-training --from-literal=secretName=user-gcp-sa
kustomize edit add configmap mnist-map-training --from-literal=secretMountPath=/var/secrets
</code></pre> <p>and running</p> <pre><code>kustomize build . |kubectl apply -f -
</code></pre> <p>I got the error:</p> <blockquote> <p>Error: field specified in var '{GOOGLE_APPLICATION_CREDENTIALS ~G_v1_ConfigMap {data.GOOGLE_APPLICATION_CREDENTIALS}}' not found in corresponding resource error: no objects passed to apply</p> </blockquote> <p>I cannot find GOOGLE_APPLICATION_CREDENTIALS at /var/secrets on my local machine, but I think kubeflow will automatically create it for me based on this <a href="https://kubernetes.io/docs/concepts/configuration/secret/#built-in-secrets" rel="nofollow noreferrer">document</a>.</p> <p>Or is it maybe because I use "Authenticating with username and password" to authenticate to kubeflow?</p>
<p>I found the solution <a href="https://github.com/kubeflow/examples/tree/master/mnist" rel="nofollow noreferrer">here</a>.</p> <pre><code>kustomize edit add configmap mnist-map-monitoring --from-literal=GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/user-gcp-sa.json
</code></pre> <p>The original <a href="https://www.kubeflow.org/docs/gke/gcp-e2e/" rel="nofollow noreferrer">tutorial</a> is missing this step.</p>
<p>In my k8s cluster on DigitalOcean the ingress does not work. It does not get an external IP and so it is not reachable. Locally there seems to be no problem.</p> <p>I already searched a lot and tried some tutorials, e.g. <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a>. But it seems to be for an older version, and the solution (and even the links) does not work anymore.</p> <p>The nginx-ingress should call the service of a website backend which is on port 8080.</p> <p>I stripped down my ingress code to the following:</p> <pre><code>kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: website
          servicePort: 8080
</code></pre> <p>With <code>kubectl get ing</code> I see the ingress, but it has no address. It looks like this:</p> <pre><code>NAME           HOSTS   ADDRESS   PORTS   AGE
test-ingress   *                 80      50s
</code></pre> <p>Can anyone help me out and tell me what I have to do to get my k8s cluster running?</p> <p>Thanks, Peter</p>
<p>Firstly, if you are using the Nginx Ingress Controller, the ingress resource itself does not need to show an address.</p> <p>When you install the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">Nginx Ingress Controller</a> in your k8s cluster, it creates a Load Balancer to handle all incoming requests. Make sure that the part below was completed as explained in <strong>Step 2</strong> of the <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="noreferrer">guide</a> you posted and that you are able to see the LoadBalancer external IP address:</p> <pre><code>$ kubectl get svc --namespace=ingress-nginx
</code></pre> <p>You should see an external IP address, corresponding to the IP address of the DigitalOcean Load Balancer:</p> <pre><code>Output
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.245.247.67   203.0.113.0   80:32486/TCP,443:32096/TCP   20h
</code></pre> <p>In the above case, after deploying your ingress resource, hitting <code>http://203.0.113.0</code> will reach your <code>website:8080</code> backend service.</p> <p>Hope it helps!</p>
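<p>For reference, the stripped-down ingress from the question written out as a complete manifest might look like the following. This is a sketch assuming the <code>extensions/v1beta1</code> API current for clusters of that era; the service name and port are taken from the question:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: website   # the backend service from the question
          servicePort: 8080
```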
<p>What are the practical differences between the two? When should I choose one over the other?</p> <p>For example, if I'd like to give a developer in my project access to just view the logs of a pod, it seems that either a service account or a context could be assigned these permissions via a RoleBinding.</p>
<p><strong>What is a Service Account?</strong></p> <p>From the <a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="nofollow noreferrer">Docs</a>:</p> <blockquote> <p>User accounts are for humans. Service accounts are for processes, which run in pods.</p> <p>User accounts are intended to be global...Service accounts are namespaced.</p> </blockquote> <p><strong>Context</strong></p> <p>A <code>context</code> relates to the <code>kubeconfig</code> file (<code>~/.kube/config</code>). As you know, the <code>kubeconfig</code> file is a YAML file; its <code>context</code> section holds your <code>user/token</code> and <code>cluster</code> references. <code>context</code> is really useful when you have multiple clusters: you can define all your <code>cluster</code>s and <code>user</code>s in a single <code>kubeconfig</code> file, then switch between them with the help of a context (example: <code>kubectl config --kubeconfig=config-demo use-context dev-frontend</code>).</p> <p>From the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">Docs</a>:</p> <pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
- cluster:
    insecure-skip-tls-verify: true
    server: https://5.6.7.8
  name: scratch
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
current-context: ""
kind: Config
preferences: {}
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
- name: experimenter
  user:
    password: some-password
    username: exp
</code></pre> <p>As you can see above, there are 3 contexts, each holding references to a <code>cluster</code> and a <code>user</code>.</p> <blockquote> <p>..if I'd like to give a 
developer in my project access to just view the logs of a pod. It seems both a service account or a context could be assigned these permissions via a RoleBinding.</p> </blockquote> <p>That's correct: you need to create a <code>service account</code>, a <code>Role</code> (or <code>ClusterRole</code>), a <code>RoleBinding</code> (or <code>ClusterRoleBinding</code>), and generate a <code>kubeconfig</code> file that contains the service account <code>token</code>, then give it to your developer.</p> <p>I have a <a href="https://github.com/veerendra2/my-k8s-applications/blob/master/spinnaker/kubeconfig_generator.sh" rel="nofollow noreferrer">script to generate the <code>kubeconfig</code> file</a>; it takes the service account name as an argument. Feel free to check it out.</p> <p><strong>UPDATE:</strong></p> <p>If you want to create the <code>Role</code> and <code>RoleBinding</code>, <a href="https://github.com/veerendra2/my-k8s-applications/blob/master/spinnaker/spinnaker_sa.yaml" rel="nofollow noreferrer">this might help</a>.</p>
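<p>For the concrete case in the question (read-only access to pod logs), the <code>Role</code> and <code>RoleBinding</code> could look roughly like this. This is a sketch: the service account name <code>log-viewer</code> and the <code>default</code> namespace are assumptions, not something from the question:</p>

```yaml
# Role granting read-only access to pods and their logs (namespace-scoped)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-log-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
---
# Bind the role to the developer's service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pod-logs
  namespace: default
subjects:
- kind: ServiceAccount
  name: log-viewer
  namespace: default
roleRef:
  kind: Role
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io
```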
<p>I've installed Kubernetes with kops on AWS, and basically every function is fine so far, except for the Dashboard.</p> <p>I've installed it by following this URL, and received no error: <a href="https://github.com/kubernetes/kops/blob/master/docs/addons.md#installing-kubernetes-addons" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/addons.md#installing-kubernetes-addons</a></p> <p>However, the browser (Chrome, Firefox, Safari) just shows me JSON text like below. The URL is like '<a href="https://api.clustername.xxxx.com/ui" rel="nofollow noreferrer">https://api.clustername.xxxx.com/ui</a>'</p> <blockquote> <p>"paths": [ "/apis", "/apis/", "/apis/apiextensions.k8s.io", "/apis/apiextensions.k8s.io/v1beta1", "/healthz", "/healthz/etcd", "/healthz/ping", "/healthz/poststarthook/generic-apiserver-start-informers", "/healthz/poststarthook/start-apiextensions-controllers", "/healthz/poststarthook/start-apiextensions-informers", "/metrics", "/openapi/v2", "/swagger-2.0.0.json", "/swagger-2.0.0.pb-v1", "/swagger-2.0.0.pb-v1.gz", "/swagger.json", "/swaggerapi", "/version" ]</p> </blockquote> <p>I would like to see the real dashboard... What shall I do?</p>
<p>Run:</p> <blockquote> <p>kubectl cluster-info</p> </blockquote> <p>and take the Kubernetes master URL it prints. Append to that URL:</p> <pre><code>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</code></pre> <p>A username and password will be prompted in the browser. Run:</p> <blockquote> <p>kubectl config view</p> </blockquote> <p>to get the username (<code>admin</code> by default) and password.</p> <p>Next we need a token, so create a user with admin access following the steps here: <a href="https://github.com/kubernetes/dashboard/wiki/Creating-sample-user" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard/wiki/Creating-sample-user</a></p> <p>Once the user is created, get its token:</p> <blockquote> <ul> <li>kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')</li> </ul> </blockquote>
<p>I have a Kubernetes 1.9.11 cluster on bare-metal machines running CoreOS 1576.5.0.</p> <p>Recently I deployed a GlusterFS 4.1.7 cluster, managed by Heketi 8, and created a lot of PVCs to be used by some statefulset applications. The problem is, I can't get metrics about these PVCs through Kubelet's 10250 port:</p> <pre><code>curl -k https://aa05:10250/metrics 2&gt;/dev/null | grep kubelet_volume_stats | wc -l
0
</code></pre> <p>So, how can I get these metrics?</p> <p>Any hints will be appreciated.</p>
<p>It looks like kubelet implements metrics for volumes in <a href="https://github.com/kubernetes/kubernetes/blob/release-1.9/pkg/kubelet/metrics/metrics.go" rel="nofollow noreferrer">release-1.9.x</a>, but their naming is quite different from what you are grepping for. Try grepping for <code>volume_stats</code> instead.</p>
<p>I'm trying to write an application where all pods are connected to each other. I have read in the <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>The Pods’ ordinals, hostnames, SRV records, and A record names have not changed, but the IP addresses associated with the Pods may have changed. In the cluster used for this tutorial, they have. This is why it is important not to configure other applications to connect to Pods in a StatefulSet by IP address.</p> <p>If you need to find and connect to the active members of a StatefulSet, you should query the CNAME of the Headless Service (<code>nginx.default.svc.cluster.local</code>). The SRV records associated with the CNAME will contain only the Pods in the StatefulSet that are Running and Ready.</p> <p>If your application already implements connection logic that tests for liveness and readiness, you can use the SRV records of the Pods ( <code>web-0.nginx.default.svc.cluster.local</code>, <code>web-1.nginx.default.svc.cluster.local</code>), as they are stable, and your application will be able to discover the Pods’ addresses when they transition to Running and Ready.</p> </blockquote> <p>I thought I could do it in the following way:</p> <ul> <li>Look up the SRV records of the service to check which pods are ready</li> <li>Connect to all ready pods</li> <li>Open the port which signifies readiness</li> </ul> <p>However when I started implementing it on minikube it seemed to be racy. When I query the A/SRV records:</p> <ul> <li>On the first node I get a "no records found" error (sounds OK) before I open the port</li> <li>On the second node I sometimes get no records found and sometimes one record</li> <li>On the third node I sometimes get one record and sometimes two records</li> </ul> <p>It seems to me that there is a race between the updating of DNS records and statefulset startup. 
I'm not entirely sure what I'm doing wrong or how I misunderstood the documentation.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: hello-world-lb
  labels:
    app: hello-world-lb
spec:
  ports:
  - port: 8080
    name: web
  type: LoadBalancer
  selector:
    app: hello-world
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
  - port: 8080
    name: web
  - port: 3080
    name: hello-world
  clusterIP: None
  selector:
    app: hello-world
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  serviceName: "hello-world"
  replicas: 3
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: hello-world
        image: hello-world
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
          name: web
        - containerPort: 3080
          name: hello-world
        livenessProbe:
          tcpSocket:
            port: 8080
</code></pre> <p><strong>EDIT</strong> Currently the code does the following:</p> <ul> <li>Query the A/SRV records of <code>hello-world.default.svc.cluster.local.</code>/<code>_hello-world._tcp.hello-world.default.svc.cluster.local.</code> and print them for debugging</li> <li>Bind to port 3080 and start listening (connecting logic not implemented)</li> <li>Open port 8080</li> </ul> <p>I expected that the A/SRV records for <code>hello-world-0</code> would be empty, for <code>hello-world-1</code> would contain hello-world-0, and for <code>hello-world-N+1</code> would contain <code>hello-world-0</code> to <code>hello-world-N</code>. During a rolling update the A/SRV records would contain all other peers.</p> <p>However it seems that the DNS records are updated asynchronously, so even when the liveness of pod <code>n</code> is detected and pod <code>n + 1</code> is started, it is not guaranteed that pod <code>n + 1</code> will see the address of pod <code>n</code> in DNS.</p>
<p>Add the annotation below to the headless service definition. It makes the endpoints controller publish pod addresses even before the pods report ready, so their DNS records exist as soon as the pods are created:</p> <pre><code> annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
</code></pre>
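<p>Applied to the headless service from the question this looks as follows. Note that on newer clusters the annotation is superseded by the <code>spec.publishNotReadyAddresses</code> field; this is a sketch, not tested on the poster's minikube:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
  annotations:
    # publish endpoints before pods report ready (older annotation form)
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None
  # field-based equivalent on newer API versions
  publishNotReadyAddresses: true
  selector:
    app: hello-world
  ports:
  - port: 8080
    name: web
  - port: 3080
    name: hello-world
```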
<p>I'm currently using Kubernetes to schedule a DaemonSet on both master and worker nodes.</p> <p>The DaemonSet definition is the same for both node types (same image, same volumes, etc); the only difference is that when the entrypoint is executed, it needs to write a different configuration file (which is generated in Python with some dynamic values) depending on whether the node is a master or a worker.</p> <p>Currently, to overcome this I'm using two different DaemonSet definitions with an env value which tells whether the node is a master. Here's the yaml file (only relevant parts):</p> <pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: worker-ds
  namespace: kube-system
  labels:
    k8s-app: worker
spec:
  ...
    spec:
      hostNetwork: true
      containers:
      - name: my-image
        ...
        env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: IS_MASTER
          value: "false"
        ...
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: master-ds
  namespace: kube-system
  labels:
    k8s-app: master
spec:
  ...
    spec:
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: my-image
        ...
        env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: IS_MASTER
          value: "true"
        ...
</code></pre> <p>However, since the only difference is the <strong>IS_MASTER</strong> value, I want to collapse both definitions into a single one that programmatically understands whether the node the pod is being scheduled on is a master or a worker.</p> <p>Is there any way to know this about the node programmatically (even by reading a configuration file on the node [for example something that only the master has, or vice versa] or something like that)?</p> <p>Thanks in advance.</p>
<p>Unfortunately, there is no convenient way to access node information from inside a pod.</p> <p>If you only want a single <code>DaemonSet</code> definition, you can add a <code>sidecar</code> container to your pod; the <code>sidecar</code> container can <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">access the k8s api</a>, and your main container can then get something useful from the <code>sidecar</code>.</p> <p>By the way, I think your current solution is fine as it is :)</p>
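<p>One lighter-weight sketch (an assumption on my part, not something guaranteed by the question's setup): inject the scheduled node's name into the pod with the downward API, and let the entrypoint ask the API server whether that node carries the master role label:</p>

```yaml
# Fragment of the container spec: expose the scheduled node's name to the entrypoint
env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
```

<p>The entrypoint (or the sidecar suggested above) can then <code>GET /api/v1/nodes/$NODE_NAME</code> and check for the <code>node-role.kubernetes.io/master</code> label to decide which configuration file to write.</p>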
<p>I am using the Kubernetes Java client API <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">https://github.com/kubernetes-client/java</a> to fetch all namespaces present. I am getting this error:</p> <pre><code>io.kubernetes.client.ApiException: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:443
    at io.kubernetes.client.ApiClient.execute(ApiClient.java:801)
    at io.kubernetes.client.apis.CoreV1Api.listNamespaceWithHttpInfo(CoreV1Api.java:15939)
    at io.kubernetes.client.apis.CoreV1Api.listNamespace(CoreV1Api.java:15917)
    at com.cloud.kubernetes.KubernetesNamespacesAPI.fetchAllNamespaces(KubernetesNamespacesAPI.java:25)
    at com.cloud.spark.sharedvariable.ClouzerConfigurations.setKubernetesEnvironment(ClouzerConfigurations.java:45)
</code></pre> <p>I tried creating a cluster role binding and giving permission to the user.</p> <p>Here is my code snippet:</p> <pre><code>public static List&lt;String&gt; fetchAllNamespaces(){
    try {
        return COREV1_API.listNamespace(null, &quot;true&quot;, null, null, null, 0, null, Integer.MAX_VALUE, Boolean.FALSE)
                .getItems().stream().map(v1Namespace -&gt; v1Namespace.getMetadata().getName())
                .collect(Collectors.toList());
    } catch(Exception e) {
        e.printStackTrace();
        return new ArrayList&lt;&gt;();
    }
}
</code></pre> <p>Please let me know if I am missing anything. Thanks in advance.</p>
<p>I'm facing the same exception too. After some digging through the client lib's source code, I think you need to make sure of two things:</p> <ul> <li>first of all, can you access your api-server?</li> <li>secondly, you need to check your ApiClient bootstrap order.</li> </ul> <p><strong>Which way do you use to configure your connection?</strong></p> <p>The first point may not apply to your case or the lib. The api client lib supports three ways of configuration, to communicate with the K8S apiserver from either inside a pod or outside the cluster:</p> <ul> <li>read the env var KUBECONFIG</li> <li>read ${home}/.kube/config</li> <li>read the service account that resides under /var/run/secrets/kubernetes.io/serviceaccount/ca.crt</li> </ul> <p>If you are using the lib inside a Pod, normally it will try the third way.</p> <p><strong>How you bootstrap your client</strong></p> <p>You must remember to invoke</p> <pre class="lang-java prettyprint-override"><code>Configuration.setDefaultApiClient(apiClient);
</code></pre> <p>before you initialize a CoreV1Api or your CRD api. The reason is quite simple: look under any of the Api classes, for example io.kubernetes.client.api.CoreV1Api:</p> <pre class="lang-java prettyprint-override"><code>public class CoreV1Api {
    private ApiClient apiClient;

    public CoreV1Api() {
        this(Configuration.getDefaultApiClient());
    }
    ...
}
</code></pre> <p>If you haven't set the Configuration's defaultApiClient, it will use an all-default config, whose basePath is <strong>localhost:443</strong>, and you will face this error.</p> <p>Under the example package, the client repo has already created lots of examples and use cases. 
The full configuration logic may be as below:</p> <pre class="lang-java prettyprint-override"><code>public class Example {
    public static void main(String[] args) throws IOException, ApiException {
        ApiClient client = Config.defaultClient();
        Configuration.setDefaultApiClient(client);

        // now you are safe to construct a CoreV1Api.
        CoreV1Api api = new CoreV1Api();
        V1PodList list = api.listPodForAllNamespaces(null, null, null, null, null, null, null, null, null);
        for (V1Pod item : list.getItems()) {
            System.out.println(item.getMetadata().getName());
        }
    }
}
</code></pre> <p>Just keep in mind that order is important if you are using the default constructor to init a XXXApi.</p>
<p>I have an application which is GPU intensive. I have tried deploying it on a GPU-enabled GKE cluster and that went well. Now I want to run my application as a Cloud Run service on GKE, but I did not find any option for specifying GPUs while creating a Cloud Run service. Can anyone please help me out? TIA</p> <p>I was following this article: <a href="https://cloud.google.com/run/docs/gke/setup" rel="nofollow noreferrer">https://cloud.google.com/run/docs/gke/setup</a></p>
<p>Cloud Run on Kubernetes does not support GPUs.</p>
<p>My kubernetes PKI expired (the API server certificate, to be exact) and I can't find a way to renew it. The error I get is:</p> <pre><code>May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922595    8751 server.go:417] Version: v1.14.2
May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922784    8751 plugins.go:103] No cloud provider specified.
May 27 08:43:51 node1 kubelet[8751]: I0527 08:43:51.922800    8751 server.go:754] Client rotation is on, will bootstrap in background
May 27 08:43:51 node1 kubelet[8751]: E0527 08:43:51.925859    8751 bootstrap.go:264] Part of the existing bootstrap client certificate is expired: 2019-05-24 13:24:42 +0000 UTC
May 27 08:43:51 node1 kubelet[8751]: F0527 08:43:51.925894    8751 server.go:265] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
</code></pre> <p>The documentation on <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/</a> describes how to renew the certificates, but it only works if the API server certificate is not already expired. I have tried to do a</p> <pre><code>kubeadm alpha cert renew all
</code></pre> <p>and a reboot, but that just made the entire cluster fail, so I did a rollback to a snapshot (my cluster is running on VMware).</p> <p>The cluster is running and all containers seem to work, but I can't access it via kubectl, so I can't really deploy or query.</p> <p>My kubernetes version is 1.14.2.</p>
<p>So the solution was to (after first taking a backup):</p> <pre><code>$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} ~/
$ kubeadm init phase certs all --apiserver-advertise-address &lt;IP&gt;
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} ~/
$ kubeadm init phase kubeconfig all
$ reboot
</code></pre> <p>then</p> <pre><code>$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
</code></pre> <p>That did the job for me. Thanks for your hints :)</p>
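<p>Before regenerating anything, it can help to confirm which certificate actually expired; openssl prints a certificate's expiry date. This is a generic sketch: on a real control-plane node you would point at <code>/etc/kubernetes/pki/apiserver.crt</code>, while here a throwaway self-signed certificate stands in so the commands run anywhere:</p>

```shell
# Generate a throwaway self-signed certificate (stand-in for apiserver.crt)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -nodes -subj "/CN=demo" 2>/dev/null
# Print its expiry date; an expired cert shows a notAfter date in the past
openssl x509 -enddate -noout -in /tmp/demo.crt
```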
<p>I'm currently reading up on RBAC and am using Docker for Desktop with the local Kubernetes cluster enabled.</p> <p>If I run <code>kubectl auth can-i get pods</code>, which user, group or serviceaccount is used by default?</p> <p>Is it the same as the call:</p> <p><code>kubectl auth can-i get pods --as docker-for-desktop --as-group system:serviceaccounts</code> ?</p> <p><code>kubectl config view</code> shows:</p> <pre><code>contexts:
- context:
    cluster: docker-for-desktop-cluster
    namespace: default
    user: docker-for-desktop
  name: docker-for-desktop
...
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
</code></pre> <p>But simply calling <code>kubectl auth can-i get pods --as docker-for-desktop</code> returns NO.</p> <p>Thanks, Kim</p>
<p>To answer your question:</p> <blockquote> <p>If I run <code>kubectl auth can-i get pods</code> which user or group or serviceaccount is used by default?</p> </blockquote> <p>As you can read in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Configure Service Accounts for Pods</a>:</p> <blockquote> <p>When you (a human) access the cluster (for example, using <code>kubectl</code>), you are authenticated by the apiserver as a particular User Account (currently this is usually <code>admin</code>, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, <code>default</code>).</p> </blockquote> <p>You can use <code>kubectl get serviceaccount</code> to see which <code>serviceaccounts</code> are set up in the cluster. Try checking which contexts you have available and switching to whichever one you need:</p> <p><code>kubectl config get-contexts</code></p> <p><code>kubectl config use-context docker-for-desktop</code></p> <p>If you are experiencing an issue with a missing <code>Role</code>, please check <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources" rel="nofollow noreferrer">Referring to Resources</a> to set them up correctly for <code>docker-for-desktop</code>.</p>
<p>Sorry if the question is a bit abstract, but I am currently getting into GCP.</p> <p>First of all my goal - I want to set up automatic creation of a kubernetes cluster with a number of pods.</p> <p>What I have so far - I have spent the past few days looking through the GCP and Kubernetes documentation as well as some examples. I have two working bits:</p> <ol> <li><p>I have created a cluster config with yaml and jinja files and I can use the deployment manager to set them up:</p> <pre><code>gcloud deployment-manager deployments create my-config --config my-config.yaml
</code></pre></li> <li><p>I have created another configuration yaml file which uses a docker image stored in the GCP Container Registry to start some pods on the cluster above (which again works fine):</p> <pre><code>kubectl apply -f image-config.yaml --record
</code></pre></li> </ol> <p>My question is: is it possible to somehow combine the above into a single config file and start everything up with a single command? Or can you point me in the direction of an appropriate example?</p>
<p>The <a href="https://agones.dev" rel="nofollow noreferrer">Agones</a> project accomplishes this task using a combination of <a href="https://www.terraform.io" rel="nofollow noreferrer">Terraform</a> and <a href="https://helm.sh" rel="nofollow noreferrer">Helm</a> as described in the <a href="https://agones.dev/site/docs/installation/terraform/" rel="nofollow noreferrer">Install with Terraform documentation</a>. There is a single command which builds a GKE cluster and also installs an application into the cluster (e.g. runs some pods). </p> <p>If you don't want to use Helm and build a full installer for your pods, you can also look at using the <a href="https://www.terraform.io/docs/providers/kubernetes/index.html" rel="nofollow noreferrer">Kubernetes Provider</a> to run some simple applications once the cluster has been deployed (check out the <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html" rel="nofollow noreferrer">google_container_cluster</a> configuration in the GCP provider). </p>
<p>I'm looking for a definitive answer on how k8s responds to a job being updated - specifically, if I update the container spec (image / args):</p> <p>If the containers are starting up, will it stop &amp; restart them?</p> <p>If the job's pods are all running, will it stop &amp; restart them?</p> <p>If it's Completed, will it run again with the new setup?</p> <p>If it failed, will it run again with the new setup?</p> <p>I've not been able to find documentation on this point, but if there is some I'd be very happy to get some signposting.</p>
<p>The <code>.spec.template</code> field cannot be updated in a Job; the field is <a href="https://github.com/kubernetes/kubernetes/blob/57c8024036938e40a48010299129b57b818c2a34/pkg/apis/batch/validation/validation.go#L158" rel="noreferrer">immutable</a>. The Job would need to be deleted and recreated, which covers all of your questions.</p> <p>The reasoning behind the change isn't spelled out in the github <a href="https://github.com/kubernetes/kubernetes/commit/9d1838fb647b2d9fcf35f1f20bf2dd9c4b35239f" rel="noreferrer">commit</a> or <a href="https://github.com/kubernetes/kubernetes/pull/14142" rel="noreferrer">pr</a>, but these changes came soon after Jobs were originally added. Your stated questions are likely part of that reasoning, as making the field immutable removes the ambiguity.</p>
<p>I am following <a href="https://www.youtube.com/watch?v=bIdMveCe75c" rel="nofollow noreferrer">this</a> tutorial. I'm trying to create a Jenkins X app locally in <code>minikube</code> and set it up with GitHub.</p> <p>But when I do <code>jx create quickstart</code> and follow the steps I get <strong><code>error: secrets "jenkins" not found</code></strong> as the error.</p> <p>Also, I found out that there is no secret named <code>jenkins</code>:</p> <pre><code>root@Unix:/home/dadart/Downloads# kubectl get secret -n jx jenkins
Error from server (NotFound): secrets "jenkins" not found
</code></pre> <p>Could someone please point out what I'm doing wrong?</p>
<p>Please follow this post on GitHub about setting up the env settings <a href="https://github.com/jenkins-x/jx/issues/1554" rel="nofollow noreferrer">before installation</a>.</p> <p>You can also find, in the "Common problems" section, <a href="https://jenkins-x.io/faq/issues/#how-do-i-get-the-password-and-username-for-jenkins" rel="nofollow noreferrer">"How do I get the Password and Username for Jenkins?"</a>.</p> <p>As per the documentation, it seems you missed some part during installation:</p> <blockquote> <p><strong>What happens during installation</strong></p> <p>Jenkins X generates an administration password for Monocular/Nexus/Jenkins and saves it in secrets. It then retrieves git secrets for the helm install (so they can be used in the pipelines).</p> </blockquote> <p>This "jenkins image" <a href="https://github.com/jenkins-x/jx/issues/3047" rel="nofollow noreferrer">issue</a> can also be helpful.</p> <p>In case you still hit problems with the Jenkins installation, please open an issue <a href="https://github.com/jenkins-x/jx" rel="nofollow noreferrer">here</a> and share your findings.</p>
<p>I am trying to run the basic example of <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#submitting-applications-to-kubernetes" rel="noreferrer">submitting a spark application to a k8s cluster</a>.</p> <p>I created my docker images using the script from the spark folder:</p> <pre><code>sudo ./bin/docker-image-tool.sh -mt spark-docker build
sudo docker image ls
REPOSITORY    TAG            IMAGE ID       CREATED          SIZE
spark-r       spark-docker   793527583e00   17 minutes ago   740MB
spark-py      spark-docker   c984e15fe747   18 minutes ago   446MB
spark         spark-docker   71950de529b3   18 minutes ago   355MB
openjdk       8-alpine       88d1c219f815   15 hours ago     105MB
hello-world   latest         fce289e99eb9   3 months ago     1.84kB
</code></pre> <p>And then tried to submit the SparkPi example (as in the official documentation):</p> <pre><code>./bin/spark-submit \
    --master k8s://[MY_IP]:8443 \
    --deploy-mode cluster \
    --name spark-pi --class org.apache.spark.examples.SparkPi \
    --driver-memory 1g \
    --executor-memory 3g \
    --conf spark.executor.instances=2 \
    --conf spark.kubernetes.container.image=spark:spark-docker \
    local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar
</code></pre> <p>But the run fails with the following exception:</p> <pre><code>io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-1554304245069-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-1554304245069-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default". 
</code></pre> <p>Here are the full logs of the pod from the Kubernetes Dashboard : </p> <pre><code>2019-04-03 15:10:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@49096b06{/executors/threadDump,null,AVAILABLE,@Spark} 2019-04-03 15:10:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4a183d02{/executors/threadDump/json,null,AVAILABLE,@Spark} 2019-04-03 15:10:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5d05ef57{/static,null,AVAILABLE,@Spark} 2019-04-03 15:10:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@34237b90{/,null,AVAILABLE,@Spark} 2019-04-03 15:10:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1d01dfa5{/api,null,AVAILABLE,@Spark} 2019-04-03 15:10:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@31ff1390{/jobs/job/kill,null,AVAILABLE,@Spark} 2019-04-03 15:10:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@759d81f3{/stages/stage/kill,null,AVAILABLE,@Spark} 2019-04-03 15:10:50 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://spark-pi-1554304245069-driver-svc.default.svc:4040 2019-04-03 15:10:50 INFO SparkContext:54 - Added JAR file:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar at spark://spark-pi-1554304245069-driver-svc.default.svc:7078/jars/spark-examples_2.11-2.4.0.jar with timestamp 1554304250157 2019-04-03 15:10:51 ERROR SparkContext:91 - Error initializing SparkContext. 
org.apache.spark.SparkException: External scheduler cannot be instantiated at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2794) at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:493) at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935) at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926) at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31) at org.apache.spark.examples.SparkPi.main(SparkPi.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-1554304245069-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. 
pods "spark-pi-1554304245069-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default". at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:470) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:407) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312) </code></pre> <p>Notes : </p> <ul> <li>Spark 2.4</li> <li>Kubernetes 1.14.0</li> <li>I use Minikube for my k8s cluster</li> </ul>
<p>Hello, I had the same issue. I then found this GitHub issue: <a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/issues/113" rel="noreferrer">https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/issues/113</a></p> <p>That pointed me to the problem. I solved it by following the Spark guide for RBAC clusters: <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#rbac" rel="noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html#rbac</a></p> <p>Create a service account:</p> <pre><code>kubectl create serviceaccount spark </code></pre> <p>Give the service account the edit role on the cluster:</p> <pre><code>kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default </code></pre> <p>Run spark-submit with the following flag, in order to run it with the just-created service account:</p> <pre><code>--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark </code></pre> <p>Hope it helps!</p>
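<p>For reference, a declarative equivalent of the two <code>kubectl create</code> commands above would be a manifest like the following sketch (the resource names simply mirror the commands; apply it with <code>kubectl apply -f</code>):</p>

```yaml
# ServiceAccount for the Spark driver
# (same effect as `kubectl create serviceaccount spark`)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: default
---
# Bind the built-in "edit" ClusterRole to the spark service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: spark
  namespace: default
```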
<p>I am trying to setup K8S to work with two Windows Nodes (2019). Everything seems to be working well and the containers are working and accessible using k8s service. But, once I introduce configuration for readiness (or liveness) probes - all fails. The exact error is:</p> <blockquote> <p>Readiness probe failed: Get <a href="http://10.244.1.28:80/test.txt" rel="nofollow noreferrer">http://10.244.1.28:80/test.txt</a>: dial tcp 10.244.1.28:80: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.</p> </blockquote> <p>When I try the url from k8s <strong>master</strong>, it works well and I get 200. However I read that the kubelet is the one executing the probe and indeed when trying from the Windows Node - it cannot be reached (which seems weird because the container is running on that same node). Therefore I assume that the problem is related to some network configuration.</p> <p>I have a HyperV with External network Virtual Switch configured. 
K8S is configured to use flannel overlay (vxlan) as instructed here: <a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/network-topologies" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/network-topologies</a>.</p> <p>Any idea how to troubleshoot and fix this?</p> <p><strong>UPDATE</strong>: providing the yaml:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: dummywebapplication labels: app: dummywebapplication spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: dummywebapplication type: NodePort --- apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: dummywebapplication name: dummywebapplication spec: replicas: 2 template: metadata: labels: app: dummywebapplication name: dummywebapplication spec: containers: - name: dummywebapplication image: &lt;my image&gt; readinessProbe: httpGet: path: /test.txt port: 80 initialDelaySeconds: 15 periodSeconds: 30 timeoutSeconds: 60 nodeSelector: beta.kubernetes.io/os: windows </code></pre> <p>And one more update. In this doc (<a href="https://kubernetes.io/docs/setup/windows/intro-windows-in-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/windows/intro-windows-in-kubernetes/</a>) it is written:</p> <blockquote> <p><strong>My Windows node cannot access NodePort service</strong></p> <p>Local NodePort access from the node itself fails. This is a known limitation. NodePort access works from other nodes or external clients.</p> </blockquote> <p>I don't know if this is related or not as I could not connect to the container from a different node as stated above. I also tried a service of LoadBalancer type but it didn't provide a different result.</p>
<p>The network configuration assumption was correct. It seems that for 'overlay', by default, the kubelet on the node cannot reach the IP of the container, so it keeps returning timeouts and connection refused messages.</p> <p>Possible workarounds:</p> <ol> <li>Insert an 'exception' into the 'OutBoundNAT' ExceptionList of C:\k\cni\config on the nodes. This is somewhat tricky if you start the node with start.ps1, because it overwrites this file every time. I had to tweak the 'Update-CNIConfig' function in c:\k\helper.psm1 to re-insert the exception, similar to the 'l2bridge' handling in that file.</li> <li>Use the '<a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/network-topologies#flannel-in-host-gateway-mode" rel="nofollow noreferrer">l2bridge</a>' configuration. It seems that '<a href="https://kubernetes.io/docs/setup/windows/#networking" rel="nofollow noreferrer">overlay</a>' runs with more secure isolation, while l2bridge does not.</li> </ol>
<p>How can I configure <code>nginx.ingress.kubernetes.io/rewrite-target</code> and <code>spec.rules.http.paths.path</code> to satisfy the following URI patterns?</p> <pre><code>/aa/bb-aa/coolapp /aa/bb-aa/coolapp/cc </code></pre> <p><strong>Legend</strong>:</p> <ul> <li><strong>a</strong> = Any letter between a-z. Lowercase. Exactly 2 letters - no more, no less.</li> <li><strong>b</strong> = Any letter between a-z. Lowercase. Exactly 2 letters - no more, no less.</li> <li><strong>c</strong> = any valid URI character. Lowercase. Of variable length - think slug.</li> </ul> <p><strong>Example URI:s that should match the above pattern</strong>:</p> <pre><code>/us/en-us/coolapp /us/en-us/coolapp/faq /us/en-us/coolapp/privacy-policy </code></pre> <p><strong>Attention</strong></p> <p>Starting in Version 0.22.0, ingress definitions using the annotation <code>nginx.ingress.kubernetes.io/rewrite-target</code> are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.</p> <p><strong>Note</strong></p> <p>Captured groups are saved in numbered placeholders, chronologically, in the form <code>$1</code>, <code>$2</code> ... <code>$n</code>. These placeholders can be used as parameters in the rewrite-target annotation.</p> <p><strong>References</strong>:</p> <ol> <li><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></li> <li><a href="https://github.com/kubernetes/ingress-nginx/pull/3174" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/pull/3174</a></li> </ol>
<p>The <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="noreferrer"><code>nginx.ingress.kubernetes.io/rewrite-target</code></a> annotation is used to indicate the target URI where the traffic must be redirected. As I understand your question, you only want to match the URI patterns that you specified, without redirecting the traffic. In order to achieve this, you can set the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#use-regex" rel="noreferrer"><code>nginx.ingress.kubernetes.io/use-regex</code></a> annotation to <code>true</code>, thus enabling regular expressions in the <code>spec.rules.http.paths.path</code> field.</p> <p>Let's now take a look at the regex that you will need to match your URI patterns. First of all, the regex engine used by ingress-nginx <a href="https://stackoverflow.com/questions/23968992/how-to-match-a-regex-with-backreference-in-go">doesn't support backreferences</a>, therefore a regex like <a href="https://regex101.com/r/M48Do1/1" rel="noreferrer">this one</a> will not work. This is not a problem, as you can match the <code>/aa/bb-aa</code> part without forcing the two <code>aa</code>s to be equal, since you will presumably still have to check the correctness of the URI later on in your service (e.g. <code>/us/en-us</code> may be accepted whereas <code>/ab/cd-ab</code> may not).</p> <p>You can use <a href="https://regex101.com/r/M48Do1/2" rel="noreferrer">this regex</a> to match the specified URI patterns:</p> <pre><code>/[a-z]{2}/[a-z]{2}-[a-z]{2}/coolapp(/.*)? </code></pre> <p>If you want to only match URL slugs (lowercase alphanumeric segments separated by hyphens, like <code>privacy-policy</code>) in the <code>cc</code> part of the pattern you specified, you can use this regex instead:</p> <pre><code>/[a-z]{2}/[a-z]{2}-[a-z]{2}/coolapp(/[a-z0-9]+(-[a-z0-9]+)*)?
</code></pre> <p>Lastly, as the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#use-regex" rel="noreferrer"><code>nginx.ingress.kubernetes.io/use-regex</code></a> enforces case insensitive regex, using <code>[A-Z]</code> instead of <code>[a-z]</code> would lead to the same result.</p> <hr> <p>Follows an ingress example definition using the <code>use-regex</code> annotation:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-regex annotations: nginx.ingress.kubernetes.io/use-regex: "true" spec: rules: - host: test.com http: paths: - path: /[a-z]{2}/[a-z]{2}-[a-z]{2}/coolapp(/.*)? backend: serviceName: test servicePort: 80 </code></pre> <p>You can find more information about Ingress Path Matching in the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="noreferrer">official user guide</a>.</p>
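<p>Ignoring the nginx-specific details (the ingress applies the pattern case-insensitively, and the example URIs here are all lowercase), the first regex can be sanity-checked locally with Python's <code>re</code> module, whose syntax is compatible with the constructs used above:</p>

```python
import re

# First pattern from the answer: /aa/bb-aa/coolapp with an optional trailing path
pattern = re.compile(r"/[a-z]{2}/[a-z]{2}-[a-z]{2}/coolapp(/.*)?")

should_match = [
    "/us/en-us/coolapp",
    "/us/en-us/coolapp/faq",
    "/us/en-us/coolapp/privacy-policy",
]
should_not_match = [
    "/usa/en-us/coolapp",   # first segment has 3 letters, not 2
    "/us/en-us/otherapp",   # wrong app name
]

for uri in should_match:
    assert pattern.fullmatch(uri), uri
for uri in should_not_match:
    assert not pattern.fullmatch(uri), uri

print("all URI checks passed")
```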
<p>I'd like to get the current GKE project id from within one of its clusters via the Java client or the GCloud API itself.</p> <ul> <li>I'm running java containers in a GKE cluster of a specific Google Cloud project</li> <li>I initialize the <code>ClusterManagerClient</code> with the appropriate <code>ClusterManagerSettings</code></li> </ul> <p>-> Is it possible to fetch this specific project id with this client?</p> <p>(I'm expecting that there would be a global context within each GKE cluster where we could know the current project we're running on).</p> <p>Thank you</p>
<p>As John Hanley mentioned in his comment above, you can use the instance metadata on the node in your cluster to determine the project that the node is a part of. The easiest way to see it is to use curl from a shell (either on the node or in a container).</p> <p>If you want the project name, it can be seen at:</p> <pre><code>curl "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google" </code></pre> <p>And if you want the project number, it can be seen at:</p> <pre><code>curl "http://metadata.google.internal/computeMetadata/v1/project/numeric-project-id" -H "Metadata-Flavor: Google" </code></pre> <p>This isn't part of the container API surface, so the <code>ClusterManagerClient</code> isn't the right API client to use. You need to create a client to fetch the instance metadata, which I would expect might be part of the compute client libraries, or you can just make a local HTTP request if you add the right headers (as shown above) since you don't need any special client authentication / authorization to access the local metadata. </p>
<p>I am trying to set up the bookinfo sample application for Istio and Kubernetes on a small cluster. The cluster consists of two machines, a master and a worker, running on Ubuntu 18.04 on two Amazon AWS EC2 instances. Each of the instances has an external IP address assigned. </p> <p>What I'm unable to do is figure out how to expose the bookinfo service to the outside world.</p> <p>I am confused as to whether I need to expose the Istio ingress gateway or each one of the bookinfo services separately. </p> <p>When listing the ingress gateway, the external IP field just says pending. Also, when describing the worker node, there's no mention of an external IP address in the output. </p> <p>I've gone through google but can't really find a proper solution. Describing the ingress gateway only gives internal (i.e. 10.x.x.x) addresses. </p> <p>Output from get and describe commands:</p> <pre><code>kubectl get svc istio-ingressgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-ingressgateway LoadBalancer 10.96.39.4 &lt;pending&gt; 15020:31451/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31075/TCP,15030:32093/TCP,15031:31560/TCP,15032:30526/TCP,15443:31526/TCP 68m kubectl describe svc istio-ingressgateway -n istio-system Name: istio-ingressgateway Namespace: istio-system Labels: app=istio-ingressgateway chart=gateways heritage=Tiller istio=ingressgateway release=istio Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"istio-ingressgateway","chart":"gateways","heritage":"Til... 
Selector: app=istio-ingressgateway,istio=ingressgateway,release=istio Type: LoadBalancer IP: 10.96.39.4 Port: status-port 15020/TCP TargetPort: 15020/TCP NodePort: status-port 31451/TCP Endpoints: 10.244.1.6:15020 Port: http2 80/TCP TargetPort: 80/TCP NodePort: http2 31380/TCP Endpoints: 10.244.1.6:80 Port: https 443/TCP TargetPort: 443/TCP NodePort: https 31390/TCP Endpoints: 10.244.1.6:443 Port: tcp 31400/TCP TargetPort: 31400/TCP NodePort: tcp 31400/TCP Endpoints: 10.244.1.6:31400 Port: https-kiali 15029/TCP TargetPort: 15029/TCP NodePort: https-kiali 31075/TCP Endpoints: 10.244.1.6:15029 Port: https-prometheus 15030/TCP TargetPort: 15030/TCP NodePort: https-prometheus 32093/TCP Endpoints: 10.244.1.6:15030 Port: https-grafana 15031/TCP TargetPort: 15031/TCP NodePort: https-grafana 31560/TCP Endpoints: 10.244.1.6:15031 Port: https-tracing 15032/TCP TargetPort: 15032/TCP NodePort: https-tracing 30526/TCP Endpoints: 10.244.1.6:15032 Port: tls 15443/TCP TargetPort: 15443/TCP NodePort: tls 31526/TCP Endpoints: 10.244.1.6:15443 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>Any help appreciated.</p>
<p>Quoting Istio's <a href="https://istio.io/docs/setup/kubernetes/install/kubernetes/#verifying-the-installation" rel="nofollow noreferrer">official</a> documentation:</p> <blockquote> <p>If your cluster is running in an environment that does not support an external load balancer (e.g., minikube), the EXTERNAL-IP of istio-ingressgateway will say -pending-. To access the gateway, use the service’s NodePort, or use port-forwarding instead.</p> </blockquote> <p>Your cluster seems to fall into the 'Custom (cloud)' way of <a href="https://kubernetes.io/docs/setup/" rel="nofollow noreferrer">setting up</a> Kubernetes, which by default does not support the LoadBalancer service type.</p> <p><strong>Solution for you:</strong></p> <ul> <li>You must allow inbound traffic to the AWS EC2 instance serving as your worker node<br> (in other words, you have to open the NodePort of the istio-ingressgateway service in your firewall/security group; see below how to get this port number)</li> <li>Get the NodePort of istio-ingressgateway:</li> </ul> <p>with command:</p> <pre><code>export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}') </code></pre> <ul> <li>Get the EXTERNAL_IP of your worker node</li> </ul> <p>with command:</p> <pre><code>export INGRESS_HOST=$(kubectl get nodes --selector='!node-role.kubernetes.io/master' -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}') </code></pre> <p>and follow the remaining part of the bookinfo sample without any changes.</p>
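<p>Once both variables are set, you can combine them into a gateway URL and check access to bookinfo from outside the cluster. A small sketch (it assumes the bookinfo Gateway resource is already applied and the NodePort is open in the EC2 security group):</p>

```shell
# Build the gateway URL from the worker node's external IP and the NodePort
GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

# Should print 200 once the ingress port is reachable
curl -s -o /dev/null -w "%{http_code}\n" "http://${GATEWAY_URL}/productpage"
```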
<p>Can you even define the scale-down-utilization-threshold on GKE? I can't figure out where I would define this if it is actually possible.</p>
<p>I'm not sure if you mean scaling down an <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">HPA</a>, a <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">VPA</a>, cluster nodes based on the <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">cluster autoscaler</a>, or GCP <a href="https://cloud.google.com/compute/docs/autoscaler/" rel="nofollow noreferrer">instance groups</a> or AWS <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html" rel="nofollow noreferrer">ASGs</a>.</p> <ul> <li><p>A Kubernetes HPA supports it based on a metric (typically CPU utilization). It will scale down to a minimum number of replicas defined in the HPA Kubernetes resource.</p></li> <li><p>A Kubernetes VPA <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#is-cluster-autoscaler-compatible-with-cpu-usage-based-node-autoscalers" rel="nofollow noreferrer">supports it</a> based on <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">requests/limits</a> (CPU, Memory). Keep in mind that it's not supported if used together with an HPA that is also using CPU and memory as scale metrics.</p></li> <li><p>The Kubernetes cluster autoscaler also supports downscaling based on the <code>--scale-down-utilization-threshold</code> and <code>--scan-interval</code> <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work" rel="nofollow noreferrer">options</a>. <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#is-cluster-autoscaler-compatible-with-cpu-usage-based-node-autoscalers" rel="nofollow noreferrer">It's not recommended</a> to use this together with a GCP instance group or an AWS ASG.</p></li> <li><p>Scale down of GCP instance groups or AWS ASGs is supported on the nodes, provided that you are not using the cluster autoscaler and are using CPU or other metrics specific to the cloud provider's VM instances.</p></li> </ul> <p>Hope it helps</p>
<p>I'm running a Kubernetes cluster on bare-metal servers, and my cluster nodes keep getting added and removed regularly. But when a node is removed, Kubernetes does not remove it automatically from the node list, and <strong>kubectl get nodes</strong> keeps showing NotReady nodes. Is there any automated way to achieve this? I want similar behavior for nodes as Kubernetes provides for pods. </p>
<p>To remove a node, follow the steps below (run them on the master):</p> <pre><code>kubectl cordon &lt;node-name&gt; kubectl drain &lt;node-name&gt; --force --ignore-daemonsets --delete-emptydir-data kubectl delete node &lt;node-name&gt; </code></pre> <p>Note that <code>kubectl drain</code> cordons the node automatically, so the separate <code>cordon</code> step is optional.</p>
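<p>Kubernetes does not delete NotReady nodes automatically, but the steps above can be approximated with a small script run periodically (for example from a cron job). This is only a sketch; verify the status-column check matches your environment before using it, since a node may be NotReady only temporarily:</p>

```shell
#!/bin/sh
# Delete every node whose STATUS column reports NotReady.
# `kubectl get nodes --no-headers` prints: NAME STATUS ROLES AGE VERSION
kubectl get nodes --no-headers | awk '$2 ~ /NotReady/ {print $1}' |
while read -r node; do
  echo "removing $node"
  kubectl delete node "$node"
done
```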
<p><strong><a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/ingress/annotation.md#target-type" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/ingress/annotation.md#target-type</a></strong></p> <p>In the above link it is mentioned that "instance mode" will route traffic to all EC2 instances within the cluster on the NodePort opened for your service. So how does kube-proxy make sure that a request is served only once when multiple replicas of a pod are running on different instances, and how does it make sure that requests are served evenly by all pods?</p>
<p>As per the documentation:</p> <p>Amazon Elastic Load Balancing Application Load Balancer (ALB) is a popular AWS service that load balances incoming traffic at the application layer (layer 7) across multiple targets, such as Amazon EC2 instances.</p> <p>The AWS ALB Ingress controller is a controller that triggers the creation of an ALB and the necessary supporting AWS resources whenever a Kubernetes user declares an Ingress resource on the cluster. The Ingress resource uses the ALB to route HTTP[S] traffic to different endpoints within the cluster.</p> <blockquote> <ol> <li><p>With <strong>instance mode</strong>, ingress traffic <strong>starts at the ALB and reaches the NodePort opened for your service</strong>. Traffic is then routed to the container POD within the cluster. Moreover, <strong>target-type: "instance"</strong> is the <strong>default setting</strong> in the AWS ALB ingress controller, and the <strong>service must be of type "NodePort" or "LoadBalancer"</strong> to use this mode.</p></li> <li><p>Managing ALBs is automatic, and you only need to define your ingress resources as you would typically do. The ALB ingress controller POD, which runs inside the Kubernetes cluster, communicates with the Kubernetes API and does all the work. However, this POD is only a control plane; it doesn't do any proxying.</p></li> </ol> </blockquote> <p>To answer the kube-proxy part: when the ALB forwards a request to the NodePort on any instance, kube-proxy picks exactly one pod endpoint of the service to handle it (in the default iptables mode, backends are chosen randomly with equal probability), so every request is served exactly once and, over many requests, the load spreads roughly evenly across all replicas, including pods running on other nodes.</p> <p>Your <strong>Application Load Balancer</strong> periodically sends requests to its registered targets <strong>to test their status</strong>. These tests are called health checks. The alb-ingress-controller performs health checks for target groups. Health checks on target groups can be controlled using annotations.</p> <p>You can find more information about ALB ingress and NodePort <a href="https://akomljen.com/aws-alb-ingress-controller-for-kubernetes/" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">here</a>.</p> <p>Hope this helps.</p>
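<p>For illustration, health checks for the target group can be tuned through annotations on the Ingress resource. A minimal sketch, where the resource names, path and values are placeholders and the annotations come from the ALB ingress controller's annotation guide linked in the question:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: instance
    # Path and interval used by the ALB health checks against the NodePort
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: example-service
          servicePort: 80
```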
<p>Say I have two namespaces, k8s-app1 and k8s-app2.</p> <p>I can list all pods from a specific namespace using the below command:</p> <pre><code>kubectl get pods -n &lt;namespace&gt; </code></pre> <p>We need to append the namespace to all commands to list objects from the respective namespaces. Is there a way to set a specific namespace and list objects without including the namespace explicitly?</p>
<p>I like my answers short, to the point and with references to official documentation:</p> <p><strong>Answer</strong>:</p> <pre><code>kubectl config set-context --current --namespace=my-namespace </code></pre> <p><strong>From</strong>:</p> <p><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a></p> <pre><code># permanently save the namespace for all subsequent kubectl commands in that context. kubectl config set-context --current --namespace=ggckad-s2 </code></pre>
<p>I'm trying to set up a few micro services in Kubernetes. Everything is working as expected, except the connection from one micro service to RabbitMQ.</p> <p>Problem flow:</p> <ul> <li>.NET Core app --> rabbitmq-kubernetes-service.yml --> RabbitMQ</li> </ul> <p><br/> In the .NET Core app the rabbit connection factory config looks like this:</p> <pre><code>"RabbitMQ": { "Host": "rabbitmq-service", "Port": 7000, "UserName": "guest", "Password": "guest" } </code></pre> <p>The kubernetes rabbit service looks like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: rabbitmq-service spec: selector: app: rabbitmq ports: - port: 7000 targetPort: 5672 </code></pre> <p>As well as the rabbit deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: rabbitmq labels: app: rabbitmq spec: replicas: 1 selector: matchLabels: app: rabbitmq template: metadata: labels: app: rabbitmq spec: containers: - name: rabbitmq image: &lt;private ACR with vanilla cfg - the image is: rabbitmq:3.7.9-management-alpine&gt; imagePullPolicy: Always resources: limits: cpu: "1" memory: 512Mi requests: cpu: "0.5" ports: - containerPort: 5672 </code></pre> <p>So this setup is currently <strong>not</strong> working in k8s. Locally it works like a charm with a basic docker-compose.</p> <p><em>However</em>, what I can do in k8s is to go from a LoadBalancer --> to the running rabbit pod and access the management GUI with these config settings.</p> <pre> apiVersion: v1 kind: Service metadata: name: rabbitmqmanagement-loadbalancer spec: type: LoadBalancer selector: app: rabbitmq ports: - port: 80 targetPort: 15672 </pre> <p>Where am I going wrong?</p>
<p>I'm assuming you are running the <code>.NET Core app</code> outside the Kubernetes cluster. If this is indeed the case, then you need to use <code>type: LoadBalancer</code>.</p> <p><strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a></strong> is used to expose a service to the internet.</p> <p><strong>ClusterIP</strong> exposes the service on a cluster-internal IP, so the <code>Service</code> will only be accessible from within the cluster; this is also the default <code>ServiceType</code>.</p> <p><strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a></strong> exposes the service on each Node's IP at a static port.</p> <p>For more details regarding Services please check the <a href="https://kubernetes.io/docs/concepts/services-networking/service" rel="nofollow noreferrer">Kubernetes docs</a>.</p> <p>You can check if the connection is working using a Python script:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python import pika connection = pika.BlockingConnection( pika.ConnectionParameters(host='RABBITMQ_SERVER_IP')) channel = connection.channel() channel.queue_declare(queue='hello') channel.basic_publish(exchange='', routing_key='hello', body='Hello World!') print(" [x] Sent 'Hello World!'") connection.close() </code></pre> <p>This script will try to connect to <code>RABBITMQ_SERVER_IP</code> using port <code>5672</code>.</p> <p>The script requires the <a href="https://pika.readthedocs.io/en/stable/" rel="nofollow noreferrer"><code>pika</code></a> library, which can be installed using <code>pip install pika</code>.</p>
<p>I've recently upgraded with kubeadm, which I expect to rotate all certificates, and for good measure, I also ran <code>kubeadm init phase certs all</code>, but I'm not sure what steps are required to verify that the certs are all properly in place and not about to expire.</p> <p>I've seen a <a href="https://stackoverflow.com/a/56334732/238322">SO answer reference</a> <code>kubeadm init phase kubeconfig all</code> is required in addition, but cannot find in the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init-phase/" rel="nofollow noreferrer">kubernetes kubeadm documentation</a> telling me that it needs to be used in conjunction with phase certs. </p> <p>What do I need to do to make sure that the cluster will not encounter expired certificates?</p> <p>I've tried verifying by connecting to the secure local port: <code>echo -n | openssl s_client -connect localhost:10250 2&gt;&amp;1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not</code>, which gives me expirations next month.</p> <p>While <code>openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text</code> and <code>openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -text</code> yield dates over a year in advance.</p> <p>These conflicting dates certainly have me concerned that I will find myself like many others with expired certificates. How do I get in front of that?</p> <p>Thank you for any guidance.</p>
<p>In essence, <code>kubeadm init phase certs all</code> regenerates all your certificates including your <code>ca.crt</code> (Certificate Authority), and Kubernetes components (kubelet, kube-scheduler, kube-controller-manager) use certificate-based authentication to connect to the kube-apiserver, so you will also have to regenerate pretty much all of those configs by running <code>kubeadm init phase kubeconfig all</code>.</p> <p>Keep in mind that you will have to regenerate the <code>kubelet.conf</code> on all your nodes, since it also needs to be updated to connect to the kube-apiserver with the new <code>ca.crt</code>. Also, make sure you <a href="https://stackoverflow.com/a/53074575/2989261">add all your hostnames/IP addresses</a> that your kube-apiserver is going to serve on to the <code>kubeadm init phase certs all</code> command (<code>--apiserver-cert-extra-sans</code>).</p> <p>Most likely, the reason you are not seeing the updated certs when connecting through <code>openssl</code> is that you haven't restarted the Kubernetes components, in particular the kube-apiserver. So you will have to restart your kube-apiserver, kube-scheduler, kube-controller-manager, etc. (or kube-apiservers, kube-schedulers, etc. if you are running a multi-master control plane). You will also have to restart the kubelets on all your nodes.</p>
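<p>To get in front of expirations, you can list the expiry date of every certificate kubeadm manages with a short loop like the one below (newer kubeadm versions also offer <code>kubeadm alpha certs check-expiration</code> for the same purpose):</p>

```shell
# Print the notAfter date for each certificate kubeadm manages
for crt in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
  printf '%s: ' "$crt"
  openssl x509 -enddate -noout -in "$crt"
done
```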
<p>I want to get a Secret object from a k8s cluster using the go-client API.</p> <p>I have a function that looks like this:</p> <pre><code>func GetSecret(version string) (retVal interface{}, err error) {
    clientset := GetClientOutOfCluster()
    labelSelector := metav1.LabelSelector{MatchLabels: map[string]string{"version": version}}
    listOptions := metav1.ListOptions{
        LabelSelector: labelSelector.String(),
        Limit:         100,
    }
    secretList, err := clientset.CoreV1().Secrets("namespace").List(listOptions)
    retVal = secretList.Items[0]
    return retVal, err
}
</code></pre> <p>GetClientOutOfCluster basically retrieves the configuration from the cluster or from the local ~/.kube/config.</p> <p>I used metav1.LabelSelector just like I do when I generate a new Deployment object, so I thought I was fine. But ListOptions.LabelSelector is a string, and when I run my function it fails:</p> <pre><code>unable to parse requirement: invalid label key "&amp;LabelSelector{MatchLabels:map[string]string{version:": name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName',  or 'my.name',  or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')
</code></pre> <p>I cannot find an example of the usage of this function anywhere. The documentation assumes that you know what a LabelSelector is.</p> <p>What is the format of LabelSelector for ListOptions?</p> <p>Thanks</p>
<p>You can use the k8s provided function to do the toString operation:</p> <pre><code>import "k8s.io/apimachinery/pkg/labels"

...

func GetSecret(version string) (retVal interface{}, err error) {
    clientset := GetClientOutOfCluster()
    labelSelector := metav1.LabelSelector{MatchLabels: map[string]string{"version": version}}
    listOptions := metav1.ListOptions{
        LabelSelector: labels.Set(labelSelector.MatchLabels).String(),
        Limit:         100,
    }
    secretList, err := clientset.CoreV1().Secrets("namespace").List(listOptions)
    retVal = secretList.Items[0]
    return retVal, err
}
</code></pre>
<p>values.yaml</p> <pre><code>replicas: { test: 1, stage: 2, prod: 3 }
</code></pre> <p>Here I am trying to use Helm templates to define number of replicas per namespace but am unsure of the proper syntax and pattern:</p> <p>deployment.yaml</p> <pre><code>replicas: {{ .Values.replicas.{{ .Release.Namespace }} }}
</code></pre> <p>So if this were deployed to <code>--namespace=prod</code>, I would expect the template to return:</p> <pre><code># .Values.replicas.prod
replicas: 3
</code></pre>
<p>All of the template functions provided by the standard Go <a href="https://godoc.org/text/template" rel="noreferrer">text/template</a> library are available. In particular, that includes an <code>index</code> function which can do dynamic lookup in an array or map object.</p> <pre><code>replicas: {{ index .Values.replicas .Release.Namespace }} </code></pre>
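<p>If the chart might be installed into a namespace that has no entry under <code>replicas</code>, you can guard the lookup with <code>default</code> (the fallback value of 1 here is an arbitrary choice):</p>

```
replicas: {{ index .Values.replicas .Release.Namespace | default 1 }}
```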
<p>Currently I am trying to deploy a sample microservice developed using Spring Boot, using Jenkins and Kubernetes on my on-premise server. For that I have already created my Kubernetes resources as a Helm chart.</p> <p>I tested the Helm chart deployment by logging in to the remote machine, creating the chart in my home directory, and deploying it into the Kubernetes cluster with the terminal command <code>helm install</code>. The endpoint is working successfully.</p> <p><strong>My Confusion</strong></p> <p>So far this is only tested from the terminal. Now I am trying to add the <code>helm install</code> command to my Jenkins pipeline job. Where do I need to keep this Helm chart? Do I need to copy it to the <code>/var/lib/jenkins</code> directory (Jenkins home directory), or do I only need to give the full path in the command?</p> <p>What is the best practice for storing a Helm chart for Jenkins deployment? I am confused about the standard way of implementing this. I am new to CI/CD pipelines.</p>
<p>The Helm chart(s) should almost definitely be source controlled.</p> <p>One reasonable approach is to keep a Helm chart in the same repository as your service. Then when Jenkins builds your project, it will also have the chart available, and can directly run <code>helm install</code>. (Possibly it can pass credentials it owns to <code>helm install --set</code> options to set values during deployment.) This scales reasonably well, since it also means developers can make local changes to charts as part of their development work.</p> <p>You can also <a href="https://helm.sh/docs/developing_charts/#the-chart-repository-guide" rel="nofollow noreferrer">set up a "repository" of charts</a>. In your Jenkins setup one path is just to keep a second source control repository with charts, and check that out during deployment. Some tools like Artifactory also support keeping Helm charts that can be directly deployed without an additional checkout. The corresponding downside here is that if something like a command line or environment variable changes, you need coordinated changes in two places to make it work.</p>
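<p>If the chart lives in the same repository as the service, the deploy stage of a Jenkins declarative pipeline stays very small. This is only a sketch: the credential id <code>kubeconfig</code>, the chart path <code>./chart</code>, and the release name <code>my-service</code> are hypothetical placeholders for your own setup:</p>

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Expose a kubeconfig stored as a Jenkins "secret file" credential,
                // then install or upgrade the chart checked out next to the code.
                withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG')]) {
                    sh 'helm upgrade --install my-service ./chart --set image.tag=${BUILD_NUMBER}'
                }
            }
        }
    }
}
```

<p>Because the chart is versioned with the code, a change to the service and the matching chart change land in one commit.</p>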
<p>I have two applications, both deployed in the same cluster.</p> <p>From the web app there is an ajax request to get data from the api, but it always returns <code>502 Connection refused</code>.</p> <p>Here is my jQuery code (web):</p> <pre><code>$.get("http://10.43.244.118/api/users", function (data) {
    console.log(data);
    $('#table').bootstrapTable({
        data: data
    });
});
</code></pre> <p>Note: when I change the service type from <code>ClusterIP</code> to <code>LoadBalancer</code>, it works fine.</p>
<p>ClusterIP services (usually) only work within the cluster. You can technically make your CNI address space available externally, but that is rare and you probably shouldn't count on it. The correct way to expose something outside the cluster is either a NodePort service or a LoadBalancer (which is a NodePort plus a cloud load balancer).</p>
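<p>For example, a NodePort version of the api's Service would look roughly like this (the name, selector labels, and the <code>nodePort</code> value are placeholders; omit <code>nodePort</code> to have one allocated from the default 30000-32767 range):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 80          # port inside the cluster
    targetPort: 8080  # container port
    nodePort: 30080   # port exposed on every node
```

<p>The browser's ajax call can then reach the api at <code>http://&lt;any-node-ip&gt;:30080</code>.</p>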
<p>Assuming a pod has an environmental variable set both in its spec, as e.g. below</p> <pre><code>spec:
  containers:
  - name: env-print-demo
    image: bash
    env:
    - name: FOO
      value: "BAR"
</code></pre> <p>as also injected to it via a <code>ConfigMap</code> (but with a different value) which is the one that will be taken into account?</p>
<p>Per the API documentation for <code>envFrom</code>: when a key exists in multiple <code>envFrom</code> sources, <strong>the value associated with the last source will take precedence</strong>. A value defined directly under <code>env</code> with a duplicate key takes precedence over anything injected via <code>envFrom</code>, so in your example the container sees <code>FOO=BAR</code>, not the ConfigMap value.</p> <p><a href="https://stackoverflow.com/questions/54398272/override-env-values-defined-in-container-spec">Override env values defined in container spec</a></p>
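<p>A minimal illustration, with hypothetical names: <code>FOO</code> is defined both in a ConfigMap consumed through <code>envFrom</code> and directly under <code>env</code>, and the container ends up with the <code>env</code> value.</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  FOO: "from-configmap"
---
apiVersion: v1
kind: Pod
metadata:
  name: env-print-demo
spec:
  containers:
  - name: env-print-demo
    image: bash
    command: ["env"]
    envFrom:
    - configMapRef:
        name: demo-config
    env:
    - name: FOO            # duplicate key: this value wins over envFrom
      value: "BAR"
```

<p>Running <code>env</code> in this pod prints <code>FOO=BAR</code>, not <code>FOO=from-configmap</code>.</p>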
<p>I am trying to apply ISTIO rate limiting using Redis Handler using <a href="https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/policy/mixer-rule-productpage-redis-quota-rolling-window.yaml" rel="nofollow noreferrer">Redis Handler ISTIO</a></p> <p>But mixer is not able to find the redis handler. Below from mixer log:</p> <blockquote> <p>2019-05-27T11:59:23.910183Z warn Unable to find a handler for action. rule[action]='quota.rule.istio-system[0]', handler='redishandler.istio-system'</p> <ul> <li>redisquota: could not create a connection to redis server: NOAUTH Authentication required.''. Also, how to provide Redis credential for the template?</li> </ul> </blockquote>
<p>The <code>redisquota</code> adapter does not support Redis authentication. When it creates the connection it only sets the server address (and optionally the pool size), never a password, which is why a password-protected Redis rejects it with <code>NOAUTH Authentication required</code>:</p> <pre class="lang-golang prettyprint-override"><code>// test redis connection
option := redis.Options{
    Addr: b.adapterConfig.RedisServerUrl,
}

if b.adapterConfig.ConnectionPoolSize &gt; 0 {
    option.PoolSize = int(b.adapterConfig.ConnectionPoolSize)
}
</code></pre> <p>So you would need to point the handler at a Redis instance that does not require a password.</p> <p><a href="https://github.com/istio/istio/blob/36eaeeec4b0fbb0bdf421d0aae81019f68c4840d/mixer/adapter/redisquota/redisquota.go" rel="nofollow noreferrer">https://github.com/istio/istio/redisquota.go</a></p>
<p>I am trying to write a utility which accepts the following parameters -</p> <ol> <li>kubernetes service name</li> <li>spring boot actuator endpoint name (e.g. /actuator/loggers)</li> </ol> <p>This utility should invoke the endpoint on all the pods of this service.</p> <p>Currently I obtain the names of all the pods through the service name, iterate over all of them, and meet the requirement by running -</p> <pre><code>kubectl exec $pod -- curl http://localhost:8081/actuator/loggers
</code></pre> <p>Though it works, I am looking for a solution where I don't have to do "exec" on the pod, as I am not sure what permissions the user who runs this utility may have. Is there any way to make an http call to individual pods?</p>
<p>I would run this utility inside the kubernetes cluster, and expose this utility to the developers that need the data. That way you only have to expose this utility, rather than exposing all the pods to allow http calls. I think it's much simpler this way.</p> <p>There are different ways to expose a Kubernetes Pod to outside the cluster, but I'd recommend <a href="https://medium.com/microservices-for-net-developers/exposing-your-application-to-the-public-ingress-c628c922e78b" rel="nofollow noreferrer">using Ingress</a>, which uses a nginx proxy to route traffic coming from outside to your pod.</p>
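<p>A sketch of such an Ingress, with hypothetical names (it assumes an nginx ingress controller is installed and a Service named <code>actuator-utility</code> fronts the utility on port 8080):</p>

```yaml
apiVersion: extensions/v1beta1   # networking.k8s.io/v1 on current clusters
kind: Ingress
metadata:
  name: actuator-utility
spec:
  rules:
  - http:
      paths:
      - path: /actuator-utility
        backend:
          serviceName: actuator-utility
          servicePort: 8080
```

<p>Developers then call the utility through the ingress address, and only the utility itself needs the RBAC permissions to list pods and reach them.</p>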
<p>I have a secret of type <code>kubernetes.io/dockerconfigjson</code>:</p> <pre><code>$ kubectl describe secrets dockerjson
Name:         dockerjson
Namespace:    my-prd
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  1335 bytes
</code></pre> <p>When I try to mount this secret into a container - I cannot find a <code>config.json</code>:</p> <pre><code>- name: dump
  image: kaniko-executor:debug
  imagePullPolicy: Always
  command: ["/busybox/find", "/", "-name", "config.json"]
  volumeMounts:
  - name: docker-config
    mountPath: /foobar
volumes:
- name: docker-config
  secret:
    secretName: dockerjson
    defaultMode: 256
</code></pre> <p>which only prints:</p> <pre><code>/kaniko/.docker/config.json
</code></pre> <p>Is this supported at all or am I doing something wrong?</p> <p>Am using OpenShift 3.9 - which should be Kubernetes 1.9.</p>
<p>Yes, it is supported: mount the secret with an <code>items</code> mapping so that the <code>.dockerconfigjson</code> key is projected to a file named <code>config.json</code> at the path kaniko expects (<code>/kaniko/.docker/config.json</code>):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug-v0.9.0
    command:
    - /busybox/cat
    resources:
      limits:
        cpu: 2
        memory: 2Gi
      requests:
        cpu: 0.5
        memory: 500Mi
    tty: true
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker/
  volumes:
  - name: docker-config
    secret:
      secretName: dockerjson
      items:
      - key: .dockerconfigjson
        path: config.json
</code></pre>
<p>Can anyone point out how to connect to the mongo db instance using the mongo client, either from the command line or from .net core programs with connection strings?</p> <p>We have created a sample cluster in DigitalOcean with a namespace, let's say <em>mongodatabase</em>.</p> <p>We installed the mongo statefulset with 3 replicas. We are able to successfully connect with the below command</p> <pre><code>kubectl --kubeconfig=configfile.yaml -n mongodatabase exec -ti mongo-0 mongo
</code></pre> <p>But when we connect from a different namespace or from the default namespace with the pod names in the below format, it doesn't work.</p> <pre><code>kubectl --kubeconfig=configfile.yaml exec -ti mongo-0.mongo.mongodatabase.cluster.svc.local mongo
</code></pre> <p>where <em>mongo-0.mongo.mongodatabase.cluster.svc.local</em> is in the <em>pod-0.service_name.namespace.cluster.svc.local</em> form (also tried pod-0.statefulset_name.namespace.cluster.svc.local and pod-0.service_name.statefulsetname.namespace.cluster.svc.local, etc.)</p> <p>Can anyone help with the correct dns name/connection string to be used while connecting with the mongo client on the command line and also from programs like java/.net core etc.?</p> <p>Also, should we use a kubernetes deployment instead of statefulsets here?</p>
<p>You need to reference the mongo service by namespaced dns. So if your mongo service is <code>mymongoapp</code> and it is deployed in <code>mymongonamespace</code>, you should be able to access it as <code>mymongoapp.mymongonamespace</code>.</p> <p>To test, I used the <code>bitnami/mongodb</code> docker client. As follows:</p> <p>From within <code>mymongonamespace</code>, this command works</p> <pre><code>$ kubectl config set-context --current --namespace=mymongonamespace
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp
</code></pre> <p>But when I switched to namespace default it didn't work</p> <pre><code>$ kubectl config set-context --current --namespace=default
$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp
</code></pre> <p>Qualifying the host with the namespace then works</p> <pre><code>$ kubectl run mongodbclient --rm --tty -i --image bitnami/mongodb --command -- mongo --host mymongoapp.mymongonamespace
</code></pre>
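<p>For drivers that take a connection string, the same namespaced name works; the fully qualified form appends <code>svc.cluster.local</code> (note the order: the <code>cluster.svc.local</code> suffix tried in the question will not resolve). With the hypothetical names above:</p>

```
mongodb://mymongoapp.mymongonamespace.svc.cluster.local:27017
```

<p>Individual StatefulSet pods governed by a headless service are addressable as <code>pod.service.namespace.svc.cluster.local</code>, which for the setup in the question would be, e.g., <code>mongo-0.mongo.mongodatabase.svc.cluster.local</code>.</p>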
<p><strong>Requirement</strong></p> <p>Kubernetes in Azure uses <em>Availability Sets</em> as the default availability strategy.</p> <p>I can specify the kubernetes <code>nodeSelector</code> attribute to select a specific node.</p> <pre><code>kind: Pod
...
spec:
  ...
  nodeSelector:
    ???
</code></pre> <p><strong>Question</strong></p> <p>Can I specify the <code>nodeSelector</code> rule to use a node in a specific <em>Availability Set</em>?</p> <p>I could label the pods manually after creation. But is there an automatic solution?</p>
<p>First of all, with availability sets you can only have one (supported) node pool in AKS (and your tags mention AKS), so there is nothing to select between. That said, nodes do have labels that look like this:</p> <pre><code>agentpool=pool_name
</code></pre> <p>so your node selector would look like this:</p> <pre><code>nodeSelector:
  agentpool: pool_name
</code></pre>
<p>I am following the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#connect_get_namespaced_pod_exec" rel="nofollow noreferrer">official documentation</a> and to my surprise, the request just won't work.</p> <p>I tried reading multiple questions and answers but in vain. I had set <code>stream</code> api to use but the error did not go away.</p> <p>My code is:</p> <pre><code>from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint
from kubernetes import client, config, stream

stream = stream.stream

# Configure API key authorization: BearerToken
configuration = kubernetes.client.Configuration()
configuration.host = "https://my_aws_server.amazonaws.com"
configuration.verify_ssl = False
configuration.api_key['authorization'] = "some_token"
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
configuration.api_key_prefix['authorization'] = 'Bearer'
configuration.debug = True

api_instance = kubernetes.client.CoreV1Api(
    kubernetes.client.ApiClient(configuration))
name = 'jupyter-ankita'  # str | name of the PodExecOptions
namespace = 'jhub'  # str | object name and auth scope, such as for teams and projects
# str | Command is the remote command to execute. argv array. Not executed within a shell. (optional)
command = 'echo "hail aviral"'
# bool | Redirect the standard error stream of the pod for this call. Defaults to true. (optional)
stderr = True
# bool | Redirect the standard input stream of the pod for this call. Defaults to false. (optional)
stdin = True
# bool | Redirect the standard output stream of the pod for this call. Defaults to true. (optional)
stdout = True
# bool | TTY if true indicates that a tty will be allocated for the exec call. Defaults to false. (optional)
tty = True

try:
    api_response = stream(api_instance.connect_post_namespaced_pod_exec(
        name, namespace, command=command, stderr=stderr, stdin=stdin, stdout=stdout)
    )
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api-&gt;connect_post_namespaced_pod_exec: %s\n" % e)
</code></pre> <p>I want the command to run but instead, I am facing an error:</p> <pre><code>Exception when calling CoreV1Api-&gt;connect_post_namespaced_pod_exec: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '88df8863-61b1-4fe7-9d39-d0e6059ea993', 'Content-Type': 'application/json', 'Date': 'Tue, 28 May 2019 14:04:38 GMT', 'Content-Length': '139'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Upgrade request required","reason":"BadRequest","code":400}
</code></pre> <p>My SSL client is updated, I am using Python3.7 from <code>brew</code> on a MacOS.</p> <p>Also used on Ubuntu, same error.</p> <p>The versions are:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"some_no", GitTreeState:"clean", BuildDate:"2018-11-27T01:14:37Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.6-eks-d69f1b", GitCommit:"some_no", GitTreeState:"clean", BuildDate:"2019-02-28T20:26:10Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre> <ul> <li>UPDATE: I changed my function call to:</li> </ul> <pre><code>api_response = stream(
    api_instance.connect_post_namespaced_pod_exec,
    name,
    namespace,
    command=command,
    stderr=stderr,
    stdin=stdin,
    stdout=stdout
)
</code></pre> <p>and it was able to exec. 
But, now I am facing the following error:</p> <pre><code>('rpc error: code = 2 desc = oci runtime error: exec failed: '
 'container_linux.go:262: starting container process caused "exec: \\"echo '
 '\\\\\\"hail aviral\\\\\\"\\": executable file not found in $PATH"\n'
 '\r\n')
</code></pre> <p>It is saying that the <code>exec</code> failed.</p>
<p>The remote command is executed directly as an argv array, not through a shell, so the single string <code>'echo "hail aviral"'</code> is looked up as one executable name, which is why exec fails with <code>executable file not found in $PATH</code>. Provide a shell as the entry point for the remote command execution. Try this one:</p> <pre><code>command = ['/bin/sh', '-c', 'echo hail aviral']
</code></pre>
<p>I'm trying to get visitor IP on my Laravel application that uses Nginx on Google Cloud Kubernetes Engine, under load balancer.</p> <p>I have set up TrustProxies.php like this:</p> <pre><code>&lt;?php

namespace App\Http\Middleware;

use Illuminate\Http\Request;
use Fideloper\Proxy\TrustProxies as Middleware;

class TrustProxies extends Middleware
{
    /**
     * The trusted proxies for this application.
     *
     * @var array
     */
    protected $proxies = '*';

    /**
     * The headers that should be used to detect proxies.
     *
     * @var int
     */
    protected $headers = Request::HEADER_X_FORWARDED_ALL;
}
</code></pre> <p>I have also tried</p> <pre><code>protected $proxies = '**';
</code></pre> <p>And</p> <pre><code>protected $proxies = ['loadbalancer_ip_here'];
</code></pre> <p>No matter what I have tried, it will always return the load balancer ip.</p> <p>Might this be caused by Nginx configuration? Help appreciated.</p>
<p>You have to set traffic policy in your nginx service </p> <pre><code>externalTrafficPolicy: "Local" </code></pre> <p>and also </p> <pre><code>healthCheckNodePort: "numeric port number for the service" </code></pre> <p>More details in <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">Preserving the client source IP</a> doc</p>
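<p>For example (the names are placeholders; with <code>type: LoadBalancer</code>, Kubernetes allocates <code>healthCheckNodePort</code> automatically if you do not set it):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: nginx-ingress
  ports:
  - name: http
    port: 80
    targetPort: 80
```

<p>With <code>Local</code>, the load balancer only sends traffic to nodes that actually run an ingress pod, and the pod sees the real client IP instead of a node IP, so the <code>X-Forwarded-For</code> header passed on to Laravel contains the visitor address.</p>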
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: xi-{{instanceId}}
  name: deployment-creation
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch", "extensions"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre> <p>In the above example, I permit various operations on pods and jobs. For pods, the apiGroup is blank. For jobs, the apiGroup may be batch or extensions. Where can I find all the possible resources, and which apiGroup I should use with each resource?</p>
<p><code>kubectl api-resources</code> will list all the supported resource types along with the API group each one belongs to. There is also a reference table of <a href="https://kubernetes.io/docs/reference/kubectl/overview/#resource-types" rel="noreferrer">resource-types</a>.</p>
<p>I created an Kubernetes Cluster in Google Cloud, I'm using my macbook to create PODs, and I'm using <code>gcloud</code> to connect to cluster from my computer:</p> <p><a href="https://i.stack.imgur.com/DjKb5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DjKb5.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/TgqTe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TgqTe.png" alt="enter image description here"></a></p> <p>When I run <code>gcloud container clusters get-credentials gcloud-cluster-dev --zone europe-west1-d --project ***********</code> in my computer, <code>gcloud</code> configures automatically <code>~/.kube/config</code> file.</p> <p>But now I want to connect to kubectl from a Docker container (this one: <code>dtzar/helm-kubectl:2.14.0</code>), and I don't want to use <code>gcloud</code>, I only want to use <code>kubectl</code>.</p> <p>When I run <code>docker run -it dtzar/helm-kubectl:2.14.0 sh</code>, I already have <code>kubectl</code> installed, but not configurated to connect to cluster.</p> <p>I'm trying to connect <code>kubectl</code> to cluster without installing <code>gcloud</code>.</p> <p>I tried basic authentication <a href="https://blog.christianposta.com/kubernetes/logging-into-a-kubernetes-cluster-with-kubectl/" rel="nofollow noreferrer">https://blog.christianposta.com/kubernetes/logging-into-a-kubernetes-cluster-with-kubectl/</a> without success. 
Returns an error:</p> <pre><code># kubectl get pods
error: You must be logged in to the server (Unauthorized)

# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
</code></pre> <p>I also tried this: <a href="https://codefarm.me/2019/02/01/access-kubernetes-api-with-client-certificates/" rel="nofollow noreferrer">https://codefarm.me/2019/02/01/access-kubernetes-api-with-client-certificates/</a>, but I can't find where the <code>ca.crt</code> and <code>ca.key</code> used in this line are: <code>(...) -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key (...)</code></p> <p>I only see this: <a href="https://i.stack.imgur.com/LhM20.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LhM20.png" alt="enter image description here"></a></p> <p>Can I use this CA? How?</p> <p>Can anyone help me? Thanks.</p> <p><strong>EDIT:</strong> I can't mount my kubectl config in the docker image, because I created this config with gcloud, and the Docker image doesn't have gcloud. 
I want to use kubectl directly, without gcloud.</p> <pre class="lang-sh prettyprint-override"><code>$ docker run -v ~/.kube:/root/.kube -it dtzar/helm-kubectl:2.14.0 sh

# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: error executing access token command "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=fork/exec /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud: no such file or directory output= stderr=
</code></pre>
<p>The easiest approach is to mount your <code>~/.kube/config</code> into your container. Like:</p> <pre><code>docker run -v ~/.kube:/root/.kube &lt;your container image:tag&gt;
</code></pre> <p><strong>EDIT:</strong> If this is not enough, you can also mount your gcloud SDK folder (kinda hackish):</p> <pre><code>docker run -v ~/.kube:/root/.kube -v /usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk:/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk -it dtzar/helm-kubectl:2.14.0 sh
</code></pre>
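<p>An alternative that avoids gcloud entirely is a static kubeconfig that authenticates with a Kubernetes service-account bearer token instead of the gcloud auth provider. A sketch only: the server address, CA data, and token are placeholders to be filled in from your own cluster and service account:</p>

```yaml
apiVersion: v1
kind: Config
clusters:
- name: gcloud-cluster-dev
  cluster:
    server: https://CLUSTER_ENDPOINT
    certificate-authority-data: BASE64_ENCODED_CLUSTER_CA
contexts:
- name: deployer@gcloud-cluster-dev
  context:
    cluster: gcloud-cluster-dev
    user: deployer
current-context: deployer@gcloud-cluster-dev
users:
- name: deployer
  user:
    token: SERVICE_ACCOUNT_BEARER_TOKEN
```

<p>Mounted at <code>/root/.kube/config</code> in the container, this works with plain <code>kubectl</code>; no gcloud binary is required.</p>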
<p>I’ve got the following application, which I'm able to run in K8S successfully using a service of type LoadBalancer. It is a very simple app with two routes:</p> <ol> <li><code>/</code> - you should see 'hello application'</li> <li><code>/api/books</code> should provide a list of books in json format</li> </ol> <p>This is the <code>service</code></p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: go-ms
</code></pre> <p>This is the <strong>deployment</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
</code></pre> <p>After applying both yamls and calling the URL:</p> <p><code>http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books</code></p> <p><strong>I was able to see the data in the browser as expected, and also the root app using just the external ip.</strong></p> <p>Now I want to use <code>istio</code>, so I followed the guide and installed it successfully via <code>helm</code> using <a href="https://istio.io/docs/setup/kubernetes/install/helm/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/install/helm/</a> and verified that all 53 crds are there, and also the <code>istio-system</code> components (such as <code>istio-ingressgateway</code>, <code>istio-pilot</code>, etc.; all 8 deployments are up and running).</p> <p>I’ve changed the service above from <code>LoadBalancer</code> to <code>NodePort</code></p> <p>and created the following <code>istio</code> config according to the istio docs</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        exact: "/api/books"
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
</code></pre> <p>in addition I’ve added the following</p> <p><code>kubectl label namespace books istio-injection=enabled</code> where the application is deployed.</p> <p>Now to get the external IP I've used the command</p> <p><code>kubectl get svc -n istio-system -l istio=ingressgateway</code></p> <p>and got this in the <code>external-ip</code></p> <p><code>b0751-1302075110.eu-central-1.elb.amazonaws.com</code>. When trying to access the URL</p> <p><code>http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books</code></p> <p>I got an error:</p> <p>This site can’t be reached</p> <p><code>ERR_CONNECTION_TIMED_OUT</code></p> <p>If I run the docker image <code>rayndockder/http:0.0.2</code> via <code>docker run -it -p 8080:8080 httpv2</code>, all paths work correctly!</p> <p>Any idea/hint what could be the issue?</p> <p>Is there a way to <strong>trace</strong> the <code>istio</code> configs to see whether something is missing, or whether we have some collision with a port or network policy maybe?</p> <p>btw, the deployment and service can run on any cluster for testing, if someone could help...</p> <p>If I <strong>change</strong> everything to port <code>80</code> (in all yaml files, the application, and the docker image) I was able to get the data for the root path, but not for "api/books".</p>
<p>I tried your config with the modification of the gateway port to 80 from 8080 in my local minikube setup of kubernetes and istio. This is the command I used:</p> <pre><code>kubectl apply -f - &lt;&lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
  - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /
    - uri:
        exact: /api/books
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
EOF
</code></pre> <p>The reason that I changed the gateway port to 80 is that the istio ingress gateway by default opens up a few ports such as 80, 443 and a few others. In my case, as minikube doesn't have an external load balancer, I used node ports, which is 31380 in my case.</p> <p>I was able to access the app with the url http://$(minikube ip):31380.</p> <p>There is no point in changing the port of the services and deployments, since these are application specific.</p> <p>Maybe <a href="https://stackoverflow.com/questions/53994034/istio-what-for-all-these-ports-are-opened-on-loadbalancer">this</a> question explains the ports opened by the istio ingress gateway.</p>
<p>Our requirement is that we need to do batch processing every 3 hours, but a single process cannot handle the workload. We have to run multiple pods for the same cron job. Is there any way to do that?</p> <p>Thank you.</p>
<p>You can provide <code>parallelism: &lt;num_of_pods&gt;</code> under <code>cronjob.spec.jobTemplate.spec</code> and it will run that many pods (3 in the example below) at the same time.</p> <p>Following is an example of a cronjob which runs 3 nginx pods every minute.</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  labels:
    run: cron1
  name: cron1
spec:
  schedule: '*/1 * * * *'
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      parallelism: 3
      template:
        metadata:
          labels:
            run: cron1
        spec:
          containers:
          - image: nginx
            name: cron1
          restartPolicy: OnFailure
</code></pre>
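<p>If each run should instead process a fixed number of work items, <code>parallelism</code> can be combined with <code>completions</code> inside the same <code>jobTemplate.spec</code>:</p>

```yaml
spec:
  jobTemplate:
    spec:
      parallelism: 3   # at most 3 pods running at a time
      completions: 9   # 9 successful pods per scheduled run
```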
<p>Here is my overall goal:</p> <ul> <li><p>Have a MongoDB running</p></li> <li><p>Persist the data through pod failures / updates etc</p></li> </ul> <p>The approach I’ve taken:</p> <ul> <li><p>K8S Provider: Digital Ocean</p></li> <li><p>Nodes: 3</p></li> <li><p>Create a PVC</p></li> <li><p>Create a headless Service</p></li> <li><p>Create a StatefulSet</p></li> </ul> <p>Here’s a dumbed down version of the config:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: v1
kind: Service
metadata:
  name: some-headless-service
  labels:
    app: my-app
spec:
  ports:
  - port: 27017
    name: my-app-database
  clusterIP: None
  selector:
    app: my-app
    tier: database
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app-database
  labels:
    app: my-app
    tier: database
spec:
  serviceName: some-headless-service
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      tier: database
  template:
    metadata:
      labels:
        app: my-app
        tier: database
    spec:
      containers:
      - name: my-app-database
        image: mongo:latest
        volumeMounts:
        - name: some-volume
          mountPath: /data
        ports:
        - containerPort: 27017
          name: my-app-database
      volumes:
      - name: some-volume
        persistentVolumeClaim:
          claimName: some-pvc
</code></pre> <p>This is working as expected. 
I can spin down the replicas to 0:</p> <p><code>kubectl scale --replicas=0 statefulset/my-app-database</code></p> <p>Spin it back up:</p> <p><code>kubectl scale --replicas=1 statefulset/my-app-database</code></p> <p>And the data will persist..</p> <p>But one time, as I was messing around by scaling the statefulset up and down, I was met with this error:</p> <pre><code>Volume is already exclusively attached to one node and can't be attached to another
</code></pre> <p>Being new to k8s, I deleted the PVC and “recreated” the same one:</p> <pre><code>kubectl delete pvc some-pvc
kubectl apply -f persistent-volume-claims/
</code></pre> <p>The <code>statefulset</code> spun back up with a new PV and the old PV was deleted, as the <code>persistentVolumeReclaimPolicy</code> was set to <code>Delete</code> by default.</p> <p>I set this new PV's <code>persistentVolumeReclaimPolicy</code> to <code>Retain</code> to ensure that the data would not be automatically removed.. and I realized: I’m not sure how I’d reclaim that PV. Earlier, to get through the “volume attachment” error, I deleted the PVC, which will just create another new PV with the setup I have, and now I’m left with my data in that <code>Released</code> PV.</p> <p>My main questions are:</p> <ul> <li><p>Does this overall sound like the right approach for my goal?</p></li> <li><p>Should I look into adding a <code>claimRef</code> to the dynamically created PV and then recreating a new PVC with that claimRef, as mentioned here: <a href="https://stackoverflow.com/questions/34282704/can-a-pvc-be-bound-to-a-specific-pv/34323691#34323691">Can a PVC be bound to a specific PV?</a></p></li> <li><p>Should I be trying to get that fresh <code>statefulset</code> PVC to actually use that old PV?</p></li> <li><p>Would it make sense to try to reattach the old PV to the correct node, and how would I do that?</p></li> </ul>
<p>If you want to use <code>StatefulSet</code> with scalability, your storage should also support this. There are two ways to handle it:</p> <ul> <li><p>If the <code>do-block-storage</code> storage class supports <code>ReadWriteMany</code>, put all pods' data in a single volume. </p></li> <li><p>Have each pod use a different volume: add a <code>volumeClaimTemplate</code> to your <code>StatefulSet.spec</code>, then k8s will automatically create PVCs named like <code>some-pvc-{statefulset_name}-{idx}</code>:</p></li> </ul> <pre><code>spec: volumeClaimTemplates: - metadata: name: some-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi storageClassName: do-block-storage </code></pre> <p><strong>Update:</strong></p> <p><code>StatefulSet</code> replicas <strong>must</strong> be deployed with <a href="https://docs.mongodb.com/manual/reference/replica-configuration/" rel="nofollow noreferrer">MongoDB replication</a> so that each pod in the <code>StatefulSet</code> has the same data. </p> <p>So when the container runs the <code>mongod</code> command, you must add the option <code>--replSet={name}</code>. When all pods are up, execute the command <code>rs.initiate()</code> to tell MongoDB how to handle data replication. When you scale the <code>StatefulSet</code> up or down, execute <code>rs.add()</code> or <code>rs.remove()</code> to tell MongoDB that the members have changed.</p>
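<p>As a sketch of the update above, the <code>mongod</code> options would sit on the container spec of the question's <code>StatefulSet</code> (the replica set name <code>rs0</code> is an arbitrary example, not something from the question):</p>

```yaml
# Sketch: container fragment for the StatefulSet above, with MongoDB
# replication enabled; "rs0" is an example replica set name
containers:
  - name: my-app-database
    image: mongo:latest
    command: ["mongod"]
    args: ["--replSet=rs0", "--bind_ip_all"]
    volumeMounts:
      - name: some-pvc   # matches the volumeClaimTemplate name
        mountPath: /data
    ports:
      - containerPort: 27017
```

<p>Once the pods are up, <code>kubectl exec</code> into one of them and run <code>rs.initiate()</code> in the mongo shell.</p>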
<p>With the support of Windows Server 2019 in Kubernetes 1.14 it seems possible to have nodes with different OSes, for example an Ubuntu 18.04 node, an RHEL 7 node, and a Windows Server node within one cluster.</p> <p>In my use case I would like to have a pre-configured queue system with a queue per OS type. The nodes would feed off their specific queues, processing the jobs.</p> <p>With the above in mind, is it possible to configure a Job to go to a specific queue and in turn a specific OS node?</p>
<p>Kubernetes nodes come populated with a standard set of labels; this includes <code>kubernetes.io/os</code>.</p> <p><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">Pods can then be assigned</a> to certain places via a <code>nodeSelector</code>, <code>podAffinity</code> and <code>podAntiAffinity</code>.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: anapp spec: containers: - image: docker.io/me/anapp name: anapp ports: - containerPort: 8080 nodeSelector: kubernetes.io/os: linux </code></pre> <p>If you need finer-grained control (for example choosing between Ubuntu/RHEL) you will need to add custom labels in your kubernetes node deployment to select from. This level of selection is rare, as container runtimes try to hide most of the differences from you, but if you have a particular case then add extra label metadata to the nodes.</p> <p>I would recommend using the <code>ID</code> and <code>VERSION_ID</code> fields from <code>cat /etc/*release*</code> as most Linux distros populate this information in some form. </p> <pre><code>kubectl label node thenode softey.com/release-id=debian kubectl label node thenode softey.com/release-version-id=9 </code></pre>
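<p>With those custom labels applied, a Job's pod template could then target, say, the Debian 9 nodes with a matching <code>nodeSelector</code>. A sketch reusing the example label keys from the commands above:</p>

```yaml
# Sketch: pod spec fragment selecting the nodes labelled above
nodeSelector:
  kubernetes.io/os: linux
  softey.com/release-id: debian
  softey.com/release-version-id: "9"   # label values are strings
```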
<p>I am new to Kubernetes and am attempting to get <code>Ingress-Nginx</code> to work on my local k8s cluster.</p> <p>I have it installed and running:</p> <pre><code>$ kubectl get pods --namespace=ingress-nginx NAME READY STATUS RESTARTS AGE nginx-ingress-controller-76f97b74b-bbb6h 1/1 Running 0 13h </code></pre> <p>Then I created two nginx services (I wanted to test name-based routing): </p> <pre><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 20h nginx NodePort 10.102.188.253 &lt;none&gt; 80:32025/TCP 36m nginx2 NodePort 10.109.43.89 &lt;none&gt; 80:32458/TCP 35m </code></pre> <p>And I created my ingress:</p> <pre><code>$ kubectl get ingress -o yaml apiVersion: v1 items: - apiVersion: extensions/v1beta1 kind: Ingress metadata: creationTimestamp: 2018-10-25T12:27:44Z generation: 1 name: test namespace: default resourceVersion: "98114" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/test uid: 5fbc2e9e-d851-11e8-995f-2ae1e5e17bac spec: rules: - host: test-ingress.chbresser.com http: paths: - backend: serviceName: nginx servicePort: 80 - host: test-ingress2.chbresser.com http: paths: - backend: serviceName: nginx2 servicePort: 80 status: loadBalancer: {} kind: List metadata: resourceVersion: "" selfLink: "" </code></pre> <p>From my understanding that is all I was supposed to do, but it's not getting an IP address:</p> <pre><code>$ kubectl get ingress NAME HOSTS ADDRESS PORTS AGE test test-ingress.chbresser.com,test-ingress2.chbresser.com 80 33m </code></pre> <p>What step did I miss? 
Why am I not getting an IP?</p> <p>Edit: </p> <p><code>kubectl logs nginx-ingress-controller-76f97b74b-bbb6h --namespace=ingress-nginx:</code></p> <pre><code>W1025 05:44:10.003587 9 queue.go:130] requeuing &amp;ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:&lt;nil&gt;,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found </code></pre>
<p>I use a bare-metal cluster and get the IP address by using the <code>--report-node-internal-ip-address=true</code> flag on the <code>ingress-controller</code>.</p> <p>Here is my <code>ingress-controller</code> yaml:</p> <pre class="lang-html prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-ingress-controller spec: replicas: 1 revisionHistoryLimit: 3 template: metadata: labels: k8s-app: ingress-nginx spec: containers: - args: - /nginx-ingress-controller - "--default-backend-service=default/my-app" - "--report-node-internal-ip-address=true" env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0" name: nginx-ingress-controller ports: - containerPort: 80 hostPort: 80 name: http protocol: TCP </code></pre> <p>My <code>ingress</code> yaml:</p> <pre class="lang-html prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-app annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - host: my-app.example.com http: paths: - path: / backend: serviceName: my-app servicePort: 80 </code></pre> <p>And the result:</p> <pre class="lang-html prettyprint-override"><code>$ kubectl get ing NAME HOSTS ADDRESS PORTS AGE my-app * 192.168.1.78 80 40h $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 8d my-app ClusterIP 10.110.152.216 &lt;none&gt; 80/TCP,443/TCP 8d $ kubectl get po NAME READY STATUS RESTARTS AGE my-app-rc-d7lw9 1/1 Running 0 8d my-app-rc-spvjb 1/1 Running 0 8d my-app-rc-tvkrw 1/1 Running 0 8d nginx-ingress-controller-6c6c899467-d9sg6 1/1 Running 0 16m </code></pre> <p>For more detail, see <a href="https://github.com/kubernetes/ingress-nginx/issues/1750" rel="nofollow noreferrer">this issue</a>.</p>
<p>I have one docker image and I am using the following command to run it: </p> <pre><code>docker run -it -p 1976:1976 --name demo demo.docker.cloud.com/demo/runtime:latest </code></pre> <p>I want to run the same in Kubernetes. This is my current yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: demo-deployment labels: app: demo spec: replicas: 1 selector: matchLabels: app: demo template: metadata: labels: app: demo spec: containers: - name: demo image: demo.docker.cloud.com/demo/runtime:latest ports: - containerPort: 1976 imagePullPolicy: Never</code></pre> <p>This yaml file covers everything except the flag "-it". I am not able to find its Kubernetes equivalent. Please help me out with this. Thanks</p>
<p>Looking at the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#container-v1-core" rel="nofollow noreferrer">Container definition in the API reference</a>, the equivalent options are <code>stdin: true</code> and <code>tty: true</code>.</p> <p>(None of the applications I work on have ever needed this; the documentation for <code>stdin:</code> talks about "reads from stdin in the container" and the typical sort of server-type processes you'd run in a Deployment don't read from stdin at all.)</p>
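<p>A sketch of how those two fields would sit on the container in the question's Deployment:</p>

```yaml
# Sketch: container spec fragment with the equivalents of `docker run -it`
containers:
  - name: demo
    image: demo.docker.cloud.com/demo/runtime:latest
    ports:
      - containerPort: 1976
    stdin: true   # the -i part
    tty: true     # the -t part
```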
<p>We are trying to evaluate the best ways to scale our J2EE web application and use hosting services with AWS. Are there reasons why we would use the Lambda service over Kubernetes (EKS)? Although it seems that Lambda can scale functional units, I'm not clear why anyone would use that as a substitute for Kubernetes, given Kubernetes can replicate containers based on performance metrics.</p>
<p>They serve different purposes. If you want to have horizontal scalability on an &quot;ec2/pod/container&quot; level and handle the availability yourself (through k8s of course), go for Kubernetes.</p> <p>If you have a straightforward function doing a particular thing and you don't want to bother yourself with the operating costs of having to manage a cluster or package it, then you can let Lambda administer it for you (at the time of writing, you would pay 20 US cents per million calls). It is just another layer of abstraction on top of a system that is probably similar to Kubernetes, scaling your function as needed.</p> <p>The goal of these technologies is to remove as much overhead as possible between you and the code; managing infrastructure can be painful. To summarize, serverless is to Kubernetes what Kubernetes is to containers.</p> <p>To make a conscious decision, take the following into account:</p> <ul> <li><p>Does your budget cover the operation and maintenance of infrastructure</p> </li> <li><p>Do you have the expertise in Kubernetes</p> </li> <li><p>How much would it cost to redesign your J2EE app into serverless-ready code</p> </li> <li><p>Your timeline (of course...)</p> </li> <li><p>Based on the AWS resources you will use, how much do you save or not by implementing a k8s cluster (database service?, EBS, EC2s, etc.)</p> </li> </ul>
<p>I tried whitelisting IP address/es in my kubernetes cluster's incoming traffic using <a href="https://istio.io/docs/tasks/policy-enforcement/denial-and-list/#ip-based-whitelists-or-blacklists" rel="nofollow noreferrer">this</a> example:</p> <p>Although this works as expected, I wanted to go a step further and try whether I can use <code>istio</code> gateways or virtual services when I set up the Istio rule, instead of the LoadBalancer (ingressgateway).</p> <pre><code>apiVersion: config.istio.io/v1alpha2 kind: rule metadata: name: checkip namespace: my-namespace spec: match: source.labels["app"] == "my-app" actions: - handler: whitelistip.listchecker instances: - sourceip.listentry --- </code></pre> <p>Where <code>my-app</code> is of <code>kind: Gateway</code> with a certain host and port, and labelled <code>app=my-app</code>.</p> <p>I am using Istio version 1.1.1. Also, my cluster has all of istio-system running, with Envoy sidecars on almost all service pods.</p>
<p>One point of confusion: in the rule above, <code>match: source.labels["app"] == "my-app"</code> does not refer to any resource's labels (such as your Gateway's), but to the source pod's labels.</p> <p>From the <a href="https://istio.io/docs/reference/config/policy-and-telemetry/templates/kubernetes/#OutputTemplate" rel="nofollow noreferrer">OutputTemplate Documentation</a>:</p> <blockquote> <p>sourceLabels | Refers to source pod labels. attributebindings can refer to this field using $out.sourcelabels</p> </blockquote> <p>You can verify this by looking for resources with the "app=istio-ingressgateway" label via:</p> <pre><code>kubectl get pods,svc -n istio-system -l "app=istio-ingressgateway" --show-labels </code></pre> <p>You can check this <a href="https://istio.io/blog/2017/adapter-model/" rel="nofollow noreferrer">blog</a> from Istio about the Mixer adapter model to understand the complete Mixer model, its handlers, instances and rules.</p> <p>Hope it helps!</p>
<p>I have a folder in my project, which contains 1 properties file and 1 jar file(db-driver) file.</p> <p>I need to copy both of these files to /usr/local/tomcat/lib directory on my pod. I am not sure how to achieve this in kubernetes yaml file. Below is my yaml file where I am trying to achieve this using configMap, but pod creation fails with error &quot;configmap references non-existent config key: app.properties&quot;</p> <p>Target <code>/usr/local/tomcat/lib</code> already has other jar files so I am trying to use configMap to not override entire directory and just add 2 files which are specific to my application.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: tomcatdeployment labels: app: tomcat spec: replicas: 1 selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: containers: - name: tomcat image: tomcat:latest imagePullPolicy: IfNotPresent volumeMounts: - name: appvolume mountPath: /usr/local/data - name: config mountPath: /usr/local/tomcat/lib subPath: ./configuration ports: - name: http containerPort: 8080 protocol: TCP volumes: - name: appvolume - name: config configMap: name: config-map items: - key: app.properties path: app.properties --- apiVersion: v1 kind: ConfigMap metadata: name: config-map data: key: app.properties </code></pre> <p>Current Directory structure...</p> <pre><code>. ├── configuration │   ├── app.properties │   └── mysql-connector-java-5.1.21.jar ├── deployment.yaml └── service.yaml </code></pre> <p>Please share your valuable feedback on how to achieve this.</p> <p>Regards.</p>
<p>Please try this:</p> <p><strong>kubectl create configmap config-map --from-file=app.properties --from-file=mysql-connector-java-5.1.21.jar</strong></p> <pre><code> apiVersion: apps/v1 kind: Deployment metadata: name: tomcatdeployment labels: app: tomcat spec: replicas: 1 selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: containers: - name: tomcat image: tomcat:latest imagePullPolicy: IfNotPresent volumeMounts: - name: config mountPath: /usr/local/tomcat/lib/conf ports: - name: http containerPort: 8080 protocol: TCP volumes: - name: config configMap: name: config-map </code></pre> <p>or</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: tomcatdeployment labels: app: tomcat spec: replicas: 1 selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: containers: - name: tomcat3 image: tomcat:latest imagePullPolicy: IfNotPresent volumeMounts: - name: config mountPath: /usr/local/tomcat/lib/app.properties subPath: app.properties - name: config mountPath: /usr/local/tomcat/lib/mysql-connector-java-5.1.21.jar subPath: mysql-connector-java-5.1.21.jar ports: - name: http containerPort: 8080 protocol: TCP volumes: - name: config configMap: name: config-map items: - key: app.properties path: app.properties - key: mysql-connector-java-5.1.21.jar path: mysql-connector-java-5.1.21.jar </code></pre>
<p>After applying the following <code>ResourceQuota</code> <code>compute-resources</code> to my GKE Cluster</p> <pre><code>apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: limits.cpu: "1" limits.memory: 1Gi </code></pre> <p>and updating a <code>Deployment</code> to</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-service spec: selector: matchLabels: app: my-service tier: backend track: stable replicas: 2 strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 50% template: metadata: labels: app: my-service tier: backend track: stable spec: containers: - name: my-service image: registry/namespace/my-service:latest ports: - name: http containerPort: 8080 resources: requests: memory: "128Mi" cpu: "125m" limits: memory: "256Mi" cpu: "125m" </code></pre> <p>the scheduling fails 100% of tries due to <code>pods "my-service-5bc4c68df6-4z8wp" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory</code>. Since <code>limits</code> and <code>requests</code> are specified and they fulfill the limit, I don't see a reason why the pods should be forbidden.</p> <p><a href="https://stackoverflow.com/questions/32034827/how-pod-limits-resource-on-kubernetes-enforced-when-the-pod-exceed-limits-after">How pod limits resource on kubernetes enforced when the pod exceed limits after pods is created ?</a> is a different question.</p> <p>I upgraded my cluster to 1.13.6-gke.0.</p>
<p>I was about to suggest testing within a separate namespace, but I see that you already tried that.</p> <p>As another workaround, try to set up default limits by enabling the LimitRanger admission controller and configuring it, e.g.:</p> <pre><code>apiVersion: v1 kind: LimitRange metadata: name: cpu-limit-range spec: limits: - default: memory: 256Mi cpu: 125m defaultRequest: cpu: 125m memory: 128Mi type: Container </code></pre> <p>Now if a Container is created in the default namespace, and the Container does not specify its own values for CPU request and CPU limit, the Container is given a default CPU limit of 125m and a default memory limit of 256Mi.</p> <p>Also, after setting up the LimitRange, make sure you remove your deployment and that there are no pods stuck in a failed state. </p>
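<p>To confirm the quota state and the injected defaults before re-creating the deployment, something like this should help (resource names taken from the manifests above):</p>

```shell
# Show the default request/limit values the LimitRange will inject
kubectl describe limitrange cpu-limit-range

# Show current usage against the hard quota limits
kubectl describe resourcequota compute-resources
```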
<p>I want to deploy gRPC + HTTP servers on GKE with HTTP/2 and mutual TLS. My deployment has both a readiness probe and a liveness probe with custom paths. I expose both the gRPC and HTTP servers via an Ingress.</p> <p>The deployment's probes and exposed ports:</p> <pre><code> livenessProbe: failureThreshold: 3 httpGet: path: /_ah/health port: 8443 scheme: HTTPS periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 readinessProbe: failureThreshold: 3 httpGet: path: /_ah/health port: 8443 scheme: HTTPS name: grpc-gke ports: - containerPort: 8443 protocol: TCP - containerPort: 50052 protocol: TCP </code></pre> <p>NodePort service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: grpc-gke-nodeport labels: app: grpc-gke annotations: cloud.google.com/app-protocols: '{"grpc":"HTTP2","http":"HTTP2"}' service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2", "http": "HTTP2"}' spec: type: NodePort ports: - name: grpc port: 50052 protocol: TCP targetPort: 50052 - name: http port: 443 protocol: TCP targetPort: 8443 selector: app: grpc-gke </code></pre> <p>Ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: grpc-gke-ingress annotations: kubernetes.io/ingress.allow-http: "false" #kubernetes.io/ingress.global-static-ip-name: "grpc-gke-ip" labels: app: grpc-gke spec: rules: - http: paths: - path: /_ah/* backend: serviceName: grpc-gke-nodeport servicePort: 443 backend: serviceName: grpc-gke-nodeport servicePort: 50052 </code></pre> <p>The pod does exist, and has a "green" status, before creating the liveness and readiness probes. I see regular logs on my server that both the <code>/_ah/live</code> and <code>/_ah/ready</code> endpoints are called by the kube-probe and the server responds with the <code>200</code> response.</p> <p>I use a Google managed TLS certificate on the load balancer (LB). 
My HTTP server creates a self-signed certificate -- inspired by <a href="https://benguild.com/2018/11/11/quickstart-golang-kubernetes-grpc-tls-lets-encrypt/" rel="nofollow noreferrer">this blog</a>.</p> <p>I create the Ingress after I start seeing the probes' logs. After that it creates an LB with two backends, one for the HTTP and one for the gRPC. The HTTP backend's health checks are OK and the HTTP server is accessible from the Internet. The gRPC backend's health check fails, thus the LB does not route the gRPC protocol and I receive the <code>502</code> error response.</p> <p>This is with GKE master 1.12.7-gke.10. I also tried newer 1.13 and older 1.11 masters. The cluster has HTTP load balancing enabled and VPC-native enabled. There are firewall rules to allow access from the LB to my pods (I even tried to allow all ports from all IP addresses). Delaying the probes does not help either.</p> <p>The funny thing is that I deployed nearly the same setup, just with a different server Docker image, a couple of months ago and it is running without any issues. I can even deploy new Docker images of the server and everything is great. I cannot find any difference between these two.</p> <p>There is one other issue: the Ingress is stuck in the "Creating Ingress" state for days. It never finishes and never sees the LB. The Ingress' LB never has a front-end and I always have to manually add an HTTP/2 front-end with a static IP and a Google managed TLS certificate. This should be happening only for clusters which were created without "HTTP load balancing", but it happens in my case every time for all my "HTTP load balancing enabled" clusters. 
The working deployment has been in this state for months already.</p> <p>Any ideas why the gRPC backend's health check could be failing even though I see logs that the readiness and liveness endpoints are called by kube-probe?</p> <p><strong>EDIT:</strong></p> <p><code>describe svc grpc-gke-nodeport</code></p> <pre><code>Name: grpc-gke-nodeport Namespace: default Labels: app=grpc-gke Annotations: cloud.google.com/app-protocols: {"grpc":"HTTP2","http":"HTTP2"} kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"cloud.google.com/app-protocols":"{\"grpc\":\"HTTP2\",\"http\":\"HTTP2\"}",... service.alpha.kubernetes.io/app-protocols: {"grpc":"HTTP2", "http": "HTTP2"} Selector: app=grpc-gke Type: NodePort IP: 10.4.8.188 Port: grpc 50052/TCP TargetPort: 50052/TCP NodePort: grpc 32148/TCP Endpoints: 10.0.0.25:50052 Port: http 443/TCP TargetPort: 8443/TCP NodePort: http 30863/TCP Endpoints: 10.0.0.25:8443 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>The health check for the gRPC backend is an HTTP/2 GET using path <code>/</code> on port <code>32148</code>. Its description is "Default kubernetes L7 Loadbalancing health check.", whereas the description of the HTTP back-end's health check is "Kubernetes L7 health check generated with readiness probe settings.". Thus the health check for the gRPC back-end is not created from the readiness probe.</p> <p>Editing the health check to point to port <code>30863</code> and changing the path to the readiness probe fixes the issue.</p>
<p>Editing the health check to point to the readiness probe's path and changing the port to the one of the HTTP back-end fixed this issue (look for the port in the HTTP back-end's health check; it is the NodePort's). It now runs without any issues.</p> <p>Using the same health check for the gRPC back-end as for the HTTP back-end did not work; it was reset back to its own health check. Even deleting the gRPC back-end's health check did not help; it was recreated. Only editing it to use a different port and path helped.</p>
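<p>If you prefer scripting the edit over using the Cloud Console, something along these lines should work; this is a sketch, and <code>HEALTH_CHECK_NAME</code> is a placeholder you would take from the listing:</p>

```shell
# Find the health check backing the gRPC back-end
gcloud compute health-checks list

# Point it at the readiness probe's path and the HTTP back-end's NodePort
gcloud compute health-checks update http2 HEALTH_CHECK_NAME \
  --port=30863 \
  --request-path=/_ah/health
```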
<p>I have one docker image and I am using the following command to run it: </p> <pre><code>docker run -it -p 1976:1976 --name demo demo.docker.cloud.com/demo/runtime:latest </code></pre> <p>I want to run the same in Kubernetes. This is my current yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: demo-deployment labels: app: demo spec: replicas: 1 selector: matchLabels: app: demo template: metadata: labels: app: demo spec: containers: - name: demo image: demo.docker.cloud.com/demo/runtime:latest ports: - containerPort: 1976 imagePullPolicy: Never</code></pre> <p>This yaml file covers everything except the flag "-it". I am not able to find its Kubernetes equivalent. Please help me out with this. Thanks</p>
<p>I assume you are trying to connect a shell to your running container. Following the guide at <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/</a>, you would need the following commands to <code>apply</code> your above configuration:</p> <p>Create the pod: <code>kubectl apply -f ./demo-deployment.yaml</code></p> <p>Verify the Container is running: <code>kubectl get pod demo-deployment</code></p> <p>Get a shell to the running Container: <code>kubectl exec -it demo-deployment -- /bin/bash</code></p>
<p>Today randomly minikube seems to be taking very long to respond to command via <code>kubectl</code>.</p> <p>And occasionally even:</p> <pre><code>kubectl get pods Unable to connect to the server: net/http: TLS handshake timeout </code></pre> <p>How can I diagnose this?</p> <p>Some logs from <code>minikube logs</code>:</p> <pre><code>==&gt; kube-scheduler &lt;== I0527 14:16:55.809859 1 serving.go:319] Generated self-signed cert in-memory W0527 14:16:56.256478 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory W0527 14:16:56.256856 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work. W0527 14:16:56.257077 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work. W0527 14:16:56.257189 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory W0527 14:16:56.257307 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work. 
I0527 14:16:56.264875 1 server.go:142] Version: v1.14.1 I0527 14:16:56.265228 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory W0527 14:16:56.286959 1 authorization.go:47] Authorization is disabled W0527 14:16:56.286982 1 authentication.go:55] Authentication is disabled I0527 14:16:56.286995 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251 I0527 14:16:56.287397 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259 I0527 14:16:57.417028 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller I0527 14:16:57.524378 1 controller_utils.go:1034] Caches are synced for scheduler controller I0527 14:16:57.827438 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler... E0527 14:17:10.865448 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers) E0527 14:17:43.418910 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) I0527 14:18:01.447065 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler I0527 14:18:29.044544 1 leaderelection.go:263] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded E0527 14:18:38.999295 1 server.go:252] lost master E0527 14:18:39.204637 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-scheduler: Get https://localhost:8443/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers) lost lease </code></pre> <p><em>Update: To work around 
this issue I just did a <code>minikube delete</code> and <code>minikube start</code>, and the performance issue was resolved.</em></p>
<p>As a solution has been found, I am posting this as Community Wiki for future users.</p> <p><strong>1)</strong> Debug issues with minikube by adding the <code>-v</code> flag and setting the debug level (0, 1, 2, 3, 7).</p> <p>For example: <code>minikube start --v=1</code> to set the output to INFO level.<br/> More detailed information <a href="https://github.com/kubernetes/minikube/blob/master/docs/debugging.md" rel="noreferrer">here</a>.</p> <p><strong>2)</strong> Use the logs command <code>minikube logs</code>.</p> <p><strong>3)</strong> Because Minikube runs in a virtual machine, sometimes it is better to delete minikube and start it again (it helped in this case).</p> <pre><code>minikube delete minikube start </code></pre> <p><strong>4)</strong> It might get slow due to a lack of resources.</p> <p>Minikube by default uses 2048MB of memory and 2 CPUs. More details about this can be found <a href="https://github.com/kubernetes/minikube/blob/232080ae0cbcf9cb9a388eb76cc11cf6884e19c0/pkg/minikube/constants/constants.go#L97" rel="noreferrer">here</a>. In addition, you can make Minikube use more with the command:</p> <pre><code>minikube start --cpus 4 --memory 8192 </code></pre>
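<p>To make the larger resource allocation stick across restarts, the values can also be persisted so that a plain <code>minikube start</code> picks them up:</p>

```shell
# Persist defaults; they apply to newly created minikube VMs
minikube config set cpus 4
minikube config set memory 8192
minikube delete
minikube start
```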
<p>I am trying to deploy an image from my private registry (harbor) to my Kubernetes environment. The registry was set up successfully and already contains my image. </p> <p>To give context, this is my deployment file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: name: sps-app name: sps-app spec: replicas: 1 template: metadata: labels: name: sps-app spec: containers: - image: repo-harbor.test.com/sps_project/spsapp:23 env: - name: MONGODB_URL value: "mongodb://mongo.default.svc.cluster.local:27017/user" name: sps-app ports: - containerPort: 4000 name: sps-app imagePullSecrets: - name: harbor </code></pre> <p>I had already created my harbor secret using the below command:</p> <pre><code>kubectl create secret docker-registry harbor \ --docker-server=https://repo-harbor.test.com \ --docker-username=admin \ --docker-password='xxxxxx!' </code></pre> <p>However, when I do a <code>kubectl apply -f</code> of my deployment, it always goes into an image pull backoff. </p> <p>Upon further investigation, I checked the logs of the pod and it states there is an x509 certificate error. 
</p> <p>Kubernetes events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 3m default-scheduler Successfully assigned default/private-image-test-1 to df56bd02-5e0e-4644-a565-c233ac2404fe Normal Pulling 2m (x3 over 3m) kubelet, df56bd02-5e0e-4644-a565-c233ac2404fe pulling image "jur01-harbor.acepod.com/sps_project/spsapp:2" Warning Failed 2m (x3 over 3m) kubelet, df56bd02-5e0e-4644-a565-c233ac2404fe Failed to pull image "jur01-harbor.acepod.com/sps_project/spsapp:2": rpc error: code = Unknown desc = Error response from daemon: Get https://jur01-harbor.acepod.com/v2/: x509: certificate signed by unknown authority Warning Failed 2m (x3 over 3m) kubelet, df56bd02-5e0e-4644-a565-c233ac2404fe Error: ErrImagePull Warning Failed 2m (x4 over 3m) kubelet, df56bd02-5e0e-4644-a565-c233ac2404fe Error: ImagePullBackOff Normal SandboxChanged 2m (x7 over 3m) kubelet, df56bd02-5e0e-4644-a565-c233ac2404fe Pod sandbox changed, it will be killed and re-created. Normal BackOff 2m (x5 over 3m) kubelet, df56bd02-5e0e-4644-a565-c233ac2404fe Back-off pulling image "jur01-harbor.acepod.com/sps_project/spsapp:2" </code></pre> <p>At this point, I'm not sure how to resolve this. Would anyone how to resolve this?</p>
<h3>Root cause</h3> <p>The image registry at <code>jur01-harbor.acepod.com</code> uses a self-signed certificate, which Docker does not trust.</p> <h3>Solution</h3> <p>Copy the custom CA certificate presented by that image registry to all your Kubernetes nodes, into a directory called <code>/etc/docker/certs.d/jur01-harbor.acepod.com/</code>.</p> <p>Reference: <a href="https://docs.docker.com/registry/insecure/#use-self-signed-certificates" rel="nofollow noreferrer">Docker docs / Test an insecure registry</a>.</p>
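<p>A sketch of that copy, assuming the CA certificate is available locally as <code>ca.crt</code> and you can run commands on each node (the file name is a placeholder):</p>

```shell
# Run on every Kubernetes node that pulls from the registry
sudo mkdir -p /etc/docker/certs.d/jur01-harbor.acepod.com
sudo cp ca.crt /etc/docker/certs.d/jur01-harbor.acepod.com/ca.crt
```

<p>Docker reads this directory on each pull, so no daemon restart should be required.</p>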
<p>Why am I getting an error when I try to change the <code>apiVersion</code> of a deployment via <code>kubectl edit deployment example</code> ? </p> <p>Do I have to delete and recreate the object?</p>
<p>You're getting this because there are only certain attributes of a resource that you may change once it's created. ApiVersion, Kind, and Name are some of the prime identifiers of a resource so they can't be changed without deleting/recreating them.</p>
<p>For one of my containers inside a pod I set some of the environment variables using <code>envFrom</code>:</p> <pre><code>envFrom: - configMapRef: name: my-config-map </code></pre> <p>Experimenting, it seems that updating the ConfigMap does not change the value of the corresponding environment value in the container. </p> <p>Is there some way to force the update of the environment variable when setting them using <code>configMapRef</code>? If not, where is this limitation explicitly documented?</p>
<p>The environment variables are set when the container starts, so there is no way to update them in a running container. You will need to restart the Pod so that the ConfigMap values are read again and the environment is set for the newly created container.</p> <p>You can do this automatically with some tools out there, like <a href="https://github.com/stakater/Reloader" rel="noreferrer">Reloader</a>, which will</p> <blockquote> <p>watch changes in ConfigMap and Secrets and then restart pods for Deployment</p> </blockquote>
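<p>With Reloader installed, opting a Deployment in is a single annotation (a sketch — <code>my-app</code> is a made-up Deployment name, while <code>my-config-map</code> is the ConfigMap from the question):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Reloader watches the ConfigMaps/Secrets this Deployment references
    # (e.g. my-config-map via envFrom) and rolls the pods when they change.
    reloader.stakater.com/auto: "true"
```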
<p>I have multiple public and private applications running in my Kubernetes cluster. I want to separate out traffic for each type by running multiple istio-gateway deployments. Is there any straightforward method to implement this with Istio?</p> <p>For both types of application I am using a custom CA and importing certificates as secrets manually. Do I need to do anything on the cert-manager side to achieve my use case?</p>
<p>cert-manager is not required to achieve this configuration.</p> <p>To install a custom istio-ingress-gateway (for your private domain) next to the default one (for your public domain), you can take the '<a href="https://github.com/istio/istio/tree/master/install/kubernetes/helm/istio/example-values" rel="nofollow noreferrer">example-values/values-istio-gateways.yaml</a>' values file (part of the official Istio GitHub project) as an example and use it with Helm to generate all the manifest files necessary to upgrade/extend your current Istio installation. </p> <p>To generate the manifest files, use the following command:</p> <pre><code>helm template install/kubernetes/helm/istio --set gateways.custom-gateway.namespace=nepomucen-custom -f install/kubernetes/istio-demo.yaml -f install/kubernetes/helm/istio/example-values/values-istio-gateways.yaml </code></pre> <p>Note: this creates the custom gateway in a non-default namespace.</p>
<p>Flux syncs git state with cluster state, but if I want to delete a resource from Kubernetes in a scenario where all resources are managed through git, what is the best way to delete a resource (deployment, service, ingress, etc.) with Flux?</p>
<p>Flux has an experimental (but quite mature) garbage collection feature that can be enabled by setting the <code>--sync-garbage-collection</code> flag as an argument on the Flux daemon deployment (or by setting <code>syncGarbageCollection.enabled</code> to <code>true</code> in your <code>values.yaml</code> in case you are deploying Flux using the Helm chart).</p> <p>With the garbage collection feature enabled, Flux will remove resources from Kubernetes when they are removed from git.</p> <p>You can find in depth information about the garbage collection feature in the <a href="https://github.com/weaveworks/flux/blob/1.12.3/site/garbagecollection.md" rel="nofollow noreferrer">documentation</a>.</p>
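<p>For illustration, this is where the flag goes when Flux is deployed as a plain Deployment (a hedged sketch — the git URL is a placeholder, and only the garbage-collection flag is the point here):</p>

```yaml
# Fragment of the flux daemon Deployment's container spec (not a full manifest)
spec:
  containers:
    - name: flux
      args:
        - --git-url=git@example.com:org/cluster-config   # placeholder repo URL
        - --sync-garbage-collection   # remove cluster resources deleted from git
```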
<p>We have a Kubernetes cluster made up of 5 nodes. 2 of the nodes are only used for KIAM and the other 3 are for container deployments.</p> <p>I have Prometheus and Grafana deployed and configured, and I need to configure monitoring for CPU, memory and pod usage. However, I want to totally exclude the nodes hosting KIAM from any stats or alert thresholds.</p> <p>The only thing I can see being returned by Prometheus that can identify the nodes I need is <code>label_workload="gp"</code> from the <code>kube_node_labels</code> metric. What I don't know how to do is get Grafana to only use these nodes in its calculations.</p> <p>Perhaps it's possible to have some sort of query join or subselect to identify the node names to include?</p> <p>I'd appreciate any help on this!</p>
<p>I believe <code>node_uname_info</code> is a better metric for identifying all your nodes, so I will explain using that one.</p> <p>You have two options:</p> <p>Option 1: You hard-code the node names into your Grafana dashboard. Your query should then look something like <code>node_uname_info{nodename=~"node1|node2|node3"}</code>, where <code>node1-3</code> are the nodes you want the metrics for.</p> <p>Option 2: You create a variable and allow the user to select the nodes. Let's say the variable name is <code>$nodes</code>; its query should be <code>label_values(node_uname_info, nodename)</code>, with multiple values allowed. In your panel queries you can then use <code>node_uname_info{nodename=~"$nodes"}</code> to only show metrics for the selected nodes.</p>
<p>I have set up one Kubernetes master node and 2 workers node. I deployed two web applications as a pod using <code>kubectl</code>. I deployed <code>nginx-ingress-controller</code> ( image: <code>gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.6</code>) and created a service for the same with nodeport option. How do I access the error logs of the <code>nginx-ingress-controller</code>? I'm able to see <code>error.log</code> under <code>/var/log/nginx/</code>, but it is link to <code>/dev/stderr</code>. </p>
<p>TL;DR</p> <pre><code>kubectl logs -n &lt;&lt;ingress_namespace&gt;&gt; &lt;&lt;ingress_pod_name&gt;&gt; </code></pre> <p>Check the namespace under which the Ingress controller is currently running.</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE helloworld-deployment-7dc448d6b-5zvr8 1/1 Running 1 3d20h helloworld-deployment-7dc448d6b-82dnt 1/1 Running 1 3d20h $ kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE kube-apiserver-minikube 1/1 Running 1 3d20h nginx-ingress-controller-586cdc477c-rhj9z 1/1 Running 1 3d21h </code></pre> <p>For me, it happens to be the <code>kube-system</code> namespace on the pod <code>nginx-ingress-controller-586cdc477c-rhj9z</code>.</p> <p>Now, get the logs just like you would for any pod using</p> <pre><code> kubectl logs -n kube-system nginx-ingress-controller-586cdc477c-rhj9z </code></pre>
<p>So I have this unhealthy cluster partially working in the datacenter. This is probably the 10th time I have rebuilt from the instructions at: <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/high-availability/</a></p> <p>I can apply some pods to this cluster and it seems to work but eventually it starts slowing down and crashing as you can see below. Here is the scheduler manifest:</p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: null labels: component: kube-scheduler tier: control-plane name: kube-scheduler namespace: kube-system spec: containers: - command: - kube-scheduler - --bind-address=127.0.0.1 - --kubeconfig=/etc/kubernetes/scheduler.conf - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf - --leader-elect=true image: k8s.gcr.io/kube-scheduler:v1.14.2 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 8 httpGet: host: 127.0.0.1 path: /healthz port: 10251 scheme: HTTP initialDelaySeconds: 15 timeoutSeconds: 15 name: kube-scheduler resources: requests: cpu: 100m volumeMounts: - mountPath: /etc/kubernetes/scheduler.conf name: kubeconfig readOnly: true hostNetwork: true priorityClassName: system-cluster-critical volumes: - hostPath: path: /etc/kubernetes/scheduler.conf type: FileOrCreate name: kubeconfig status: {} </code></pre> <p>$ kubectl -n kube-system get pods</p> <pre><code>NAME READY STATUS RESTARTS AGE coredns-fb8b8dccf-42psn 1/1 Running 9 88m coredns-fb8b8dccf-x9mlt 1/1 Running 11 88m docker-registry-dqvzb 1/1 Running 1 2d6h kube-apiserver-kube-apiserver-1 1/1 Running 44 2d8h kube-apiserver-kube-apiserver-2 1/1 Running 34 2d7h kube-controller-manager-kube-apiserver-1 1/1 Running 198 2d2h kube-controller-manager-kube-apiserver-2 0/1 CrashLoopBackOff 170 2d7h kube-flannel-ds-amd64-4mbfk 1/1 Running 1 2d7h kube-flannel-ds-amd64-55hc7 1/1 Running 1 2d8h 
kube-flannel-ds-amd64-fvwmf 1/1 Running 1 2d7h kube-flannel-ds-amd64-ht5wm 1/1 Running 3 2d7h kube-flannel-ds-amd64-rjt9l 1/1 Running 4 2d8h kube-flannel-ds-amd64-wpmkj 1/1 Running 1 2d7h kube-proxy-2n64d 1/1 Running 3 2d7h kube-proxy-2pq2g 1/1 Running 1 2d7h kube-proxy-5fbms 1/1 Running 2 2d8h kube-proxy-g8gmn 1/1 Running 1 2d7h kube-proxy-wrdrj 1/1 Running 1 2d8h kube-proxy-wz6gv 1/1 Running 1 2d7h kube-scheduler-kube-apiserver-1 0/1 CrashLoopBackOff 198 2d2h kube-scheduler-kube-apiserver-2 1/1 Running 5 18m nginx-ingress-controller-dz8fm 1/1 Running 3 2d4h nginx-ingress-controller-sdsgg 1/1 Running 3 2d4h nginx-ingress-controller-sfrgb 1/1 Running 1 2d4h </code></pre> <p>$ kubectl -n kube-system describe pod kube-scheduler-kube-apiserver-1 </p> <pre><code>Containers: kube-scheduler: Container ID: docker://c04f3c9061cafef8749b2018cd66e6865d102f67c4d13bdd250d0b4656d5f220 Image: k8s.gcr.io/kube-scheduler:v1.14.2 Image ID: docker-pullable://k8s.gcr.io/kube-scheduler@sha256:052e0322b8a2b22819ab0385089f202555c4099493d1bd33205a34753494d2c2 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Tue, 28 May 2019 23:16:50 -0400 Finished: Tue, 28 May 2019 23:19:56 -0400 Ready: False Restart Count: 195 Requests: cpu: 100m Liveness: http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8 Environment: &lt;none&gt; Mounts: /etc/kubernetes/scheduler.conf from kubeconfig (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kubeconfig: Type: HostPath (bare host directory volume) Path: /etc/kubernetes/scheduler.conf HostPathType: FileOrCreate QoS Class: Burstable Node-Selectors: 
&lt;none&gt; Tolerations: :NoExecute Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Created 4h56m (x104 over 37h) kubelet, kube-apiserver-1 Created container kube-scheduler Normal Started 4h56m (x104 over 37h) kubelet, kube-apiserver-1 Started container kube-scheduler Warning Unhealthy 137m (x71 over 34h) kubelet, kube-apiserver-1 Liveness probe failed: Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused Normal Pulled 132m (x129 over 37h) kubelet, kube-apiserver-1 Container image "k8s.gcr.io/kube-scheduler:v1.14.2" already present on machine Warning BackOff 128m (x1129 over 34h) kubelet, kube-apiserver-1 Back-off restarting failed container Normal SandboxChanged 80m kubelet, kube-apiserver-1 Pod sandbox changed, it will be killed and re-created. Warning Failed 76m kubelet, kube-apiserver-1 Error: context deadline exceeded Normal Pulled 36m (x7 over 78m) kubelet, kube-apiserver-1 Container image "k8s.gcr.io/kube-scheduler:v1.14.2" already present on machine Normal Started 36m (x6 over 74m) kubelet, kube-apiserver-1 Started container kube-scheduler Normal Created 32m (x7 over 74m) kubelet, kube-apiserver-1 Created container kube-scheduler Warning Unhealthy 20m (x9 over 40m) kubelet, kube-apiserver-1 Liveness probe failed: Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused Warning BackOff 2m56s (x85 over 69m) kubelet, kube-apiserver-1 Back-off restarting failed container </code></pre> <p>I feel like I am overlooking a simple option or configuration but I can't find it and after days of dealing with this problem and reading documentation I am at my wits end.</p> <p>The load balancer is a TCP load balancer and seems to be working as expected as I can query the cluster from my desktop.</p> <p>Any suggestions or troubleshooting tips are definitely welcome at this time.</p> <p>Thank you.</p>
<p>The problem with our configuration was that a well-intentioned technician decided to remove one of the rules on the Kubernetes master firewall, which prevented the master from looping back to the ports it needed to probe. This caused all kinds of weird issues and misdiagnosed problems, which sent us in the wrong direction. After we allowed all ports on the servers, Kubernetes was back to its normal behavior.</p>
<p>In azure devops portal, my release pipeline keeps failing at kubectl apply.It says the server could not find the requested resource.</p> <p>NOTE: I haven't created a kubernetes clusture in azure portal yet.so when I went to the portal to create one, it is asking for paid subscription.</p> <p>logs of the release pipeline</p> <pre><code>2019-05-30T06:07:09.1230513Z ##[section]Starting: kubectl apply 2019-05-30T06:07:09.1348192Z ============================================================================== 2019-05-30T06:07:09.1348303Z Task : Deploy to Kubernetes 2019-05-30T06:07:09.1348381Z Description : Deploy, configure, update your Kubernetes cluster in Azure Container Service by running kubectl commands. 2019-05-30T06:07:09.1348441Z Version : 0.151.2 2019-05-30T06:07:09.1348510Z Author : Microsoft Corporation 2019-05-30T06:07:09.1348566Z Help : [More Information](https://go.microsoft.com/fwlink/?linkid=851275) 2019-05-30T06:07:09.1348638Z ============================================================================== 2019-05-30T06:07:12.7827969Z [command]d:\a\_temp\kubectlTask\1559196429507\kubectl.exe --kubeconfig d:\a\_temp\kubectlTask\1559196429507\config apply -f d:\a\r1\a\_devops-sample-CI\drop\Tomcat.yaml 2019-05-30T06:07:15.1191531Z deployment "tomcat-deployment" configured 2019-05-30T06:07:15.1300152Z error: error validating "d:\\a\\r1\\a\\_devops-sample-CI\\drop\\Tomcat.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false 2019-05-30T06:07:15.1454497Z ##[error]d:\a\_temp\kubectlTask\1559196429507\kubectl.exe failed with return code: 1 2019-05-30T06:07:15.1634357Z ##[section]Finishing: kubectl apply </code></pre> <p>Tomcat.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: tomcat-deployment labels: app: tomcat spec: replicas: 1 selector: matchLabels: app: tomcat template: metadata: labels: app: tomcat spec: containers: - name: 
tomcat image: suji165475/devops-sample:113 ports: - containerPort: 80 --- kind: Service apiVersion: v1 metadata: name: tomcat-service spec: type: LoadBalancer selector: app: tomcat ports: - protocol: TCP port: 80 targetPort: 80 </code></pre> <p>why is the server saying that it couldn't find the requested resource even though I have made sure that I created the build artifact(containing tomcat.yaml in drop folder) properly from the build ci pipeline?? Could this be due to the fact that I havent created the kubernetes clusture yet or is this due to some other reason??</p> <p>also would using nodeport instead of LoadBalancer work on azure devops??</p>
<p>I'm fairly certain that, given this error, if you are using a local Kubernetes cluster the issue is that Azure DevOps can't reach your cluster. You should make sure your cluster is exposed on a reachable IP and that the required ports are not blocked.</p>
<p>Are there any hooks available for Pod lifecycle events? Specifically, I want to run a command to upload logs on pod restart. </p>
<p><strong>Edit: the PreStop hook doesn't work for container restart - please see the rest of the answer below</strong> </p> <p>As stated in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">documentation</a>, there are <code>PreStop</code> and <code>PostStart</code> events and you can attach to them.</p> <p>Example from the docs:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: lifecycle-demo spec: containers: - name: lifecycle-demo-container image: nginx lifecycle: postStart: exec: command: ["/bin/sh", "-c", "echo Hello from the postStart handler &gt; /usr/share/message"] preStop: exec: command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"] </code></pre> <p>Edit: So I checked with the following PoC whether the preStop hook is executed on a container crash, and the conclusion is: <strong>NOT</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: lifecycle-demo spec: containers: - name: lifecycle-demo-container volumeMounts: - mountPath: /data name: test-volume image: nginx command: ["/bin/sh"] args: ["-c", "sleep 5; exit 1"] lifecycle: postStart: exec: command: ["/bin/sh", "-c", "echo Hello from the postStart handler &gt; /data/postStart"] preStop: exec: command: ["/bin/sh","-c","echo preStop handler! &gt; /data/preStop"] volumes: - name: test-volume hostPath: path: /data type: Directory </code></pre> <p>As a solution, I would recommend overriding the command section of your container this way:</p> <pre><code>command: ["/bin/sh"] args: ["-c", "your-application-executable; your-logs-upload"] </code></pre> <p>so the your-logs-upload executable will be executed after your-application-executable crashes/ends.</p>
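<p>To illustrate why the command override works: with <code>;</code> (unlike <code>&amp;&amp;</code>), the second command runs even when the first one fails, so a log-upload step placed after the application command still executes on a crash. A runnable sketch, where <code>/bin/false</code> stands in for a crashing application and the <code>echo</code> for the upload step:</p>

```shell
# With `;`, the second command runs regardless of the first one's exit status,
# so the log upload still happens when the app crashes (non-zero exit).
sh -c '/bin/false; echo "app exited with status $?, uploading logs"'
```

<p>Using <code>&amp;&amp;</code> instead would skip the upload whenever the application exits non-zero, which is exactly the case you care about.</p>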
<p>Am using <code>minikube</code> to test out the deployment and was going through <a href="https://itnext.io/running-asp-net-core-on-minikube-ad69472c4c95?gi=c0651899e562" rel="noreferrer">this</a> link</p> <p>And my manifest file for deployment is like</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: webapp spec: replicas: 1 template: metadata: labels: app: webapp spec: containers: - name: webapp imagePullPolicy: Never # &lt;-- here we go! image: sams ports: - containerPort: 80 </code></pre> <p>and after this when I tried to execute the below commands got output</p> <pre><code>user@usesr:~/Downloads$ kubectl create -f mydeployment.yaml --validate=false deployment &quot;webapp&quot; created user@user:~/Downloads$ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE ---- -------- ------- ---------- --------- ---- webapp 1 1 1 0 9s user@user:~/Downloads$ kubectl get pods NAME READY STATUS RESTARTS AGE ---- -------- ------- ---------- --------- ---- webapp-5bf5bd94d-2xgs8 0/1 ErrImageNeverPull 0 21s </code></pre> <p>I tried to pull images even from <code>Docker-Hub</code> by removing line <code>imagePullPolicy: Never </code> from the <code>deployment.yml</code> But getting the same error. 
Can anyone help me here to identify where and what's going wrong?</p> <h2>Updated the question as per the comment</h2> <pre><code>kubectl describe pod $POD_NAME Name: webapp-5bf5bd94d-2xgs8 Namespace: default Node: minikube/10.0.2.15 Start Time: Fri, 31 May 2019 14:25:41 +0530 Labels: app=webapp pod-template-hash=5bf5bd94d Annotations: &lt;none&gt; Status: Pending IP: 172.17.0.4 Controlled By: ReplicaSet/webapp-5bf5bd94d Containers: webapp: Container ID: Image: sams Image ID: Port: 80/TCP State: Waiting Reason: ErrImageNeverPull Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-wf82w (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-wf82w: Type: Secret (a volume populated by a Secret) SecretName: default-token-wf82w Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 18m default-scheduler Successfully assigned default/webapp-5bf5bd94d-2xgs8 to minikube Warning ErrImageNeverPull 8m (x50 over 18m) kubelet, minikube Container image &quot;sams&quot; is not present with pull policy of Never Warning Failed 3m (x73 over 18m) kubelet, minikube Error: ErrImageNeverPull docker images: REPOSITORY TAG IMAGE ID CREATED SIZE ---------- --- -------- ------- ---- &lt;none&gt; &lt;none&gt; 723ce2b3d962 3 hours ago 1.91GB bean_ben501/sams latest c7c4a04713f4 4 hours ago 278MB sams latest c7c4a04713f4 4 hours ago 278MB sams v1 c7c4a04713f4 4 hours ago 278MB &lt;none&gt; &lt;none&gt; b222da630bc3 4 hours ago 1.91GB mcr.microsoft.com/dotnet/core/sdk 2.2-stretch e4747ec2aaff 9 days ago 1.74GB mcr.microsoft.com/dotnet/core/aspnet 2.2-stretch-slim f6d51449c477 9 days ago 260MB </code></pre> <hr />
<blockquote> <p>When using a single VM for Kubernetes, it’s useful to reuse Minikube’s built-in Docker daemon. Reusing the built-in daemon means you don’t have to build a Docker registry on your host machine and push the image into it. Instead, you can build inside the same Docker daemon as Minikube, which speeds up local experiments.</p> </blockquote> <p>The following command does the magic: <code>eval $(minikube docker-env)</code>. <strong>Then you have to rebuild your image.</strong></p> <p>For <code>imagePullPolicy: Never</code>, the images need to be present on the minikube node.</p> <p><a href="https://stackoverflow.com/questions/52310599/what-does-minikube-docker-env-mean">This answer provides details</a></p> <p><a href="https://kubernetes.io/docs/setup/minikube/#use-local-images-by-re-using-the-docker-daemon" rel="noreferrer">local-images-in minikube docker environment</a></p>
<p>My goal is to monitor services with Prometheus, so I was following a guide located at:</p> <p><a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md</a></p> <p>I am relatively new to all of this, so please forgive my naiveness. I tried looking into the error, but all the answers were convoluted. I have no idea where to start on the debug process (perhaps look into the YAMLs?)</p> <p>I wanted to monitor a custom Service. So, I deployed a service.yaml of the following into a custom namespace (t): </p> <pre><code>kind: Service apiVersion: v1 metadata: namespace: t name: example-service-test labels: app: example-service-test spec: selector: app: example-service-test type: NodePort ports: - name: http nodePort: 30901 port: 8080 protocol: TCP targetPort: http --- apiVersion: v1 kind: Pod metadata: name: example-service-test namespace: t labels: app: example-service-test spec: containers: - name: example-service-test image: python:2.7 imagePullPolicy: IfNotPresent command: ["/bin/bash"] args: ["-c", "echo \"&lt;p&gt;This is POD1 $(hostname)&lt;/p&gt;\" &gt; index.html; python -m SimpleHTTPServer 8080"] ports: - name: http containerPort: 8080 </code></pre> <p>And deployed a service monitor into the namespace:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: example-service-test labels: team: frontendtest1 namespace: t spec: selector: matchLabels: app: example-service-test endpoints: - port: http </code></pre> <p>So far, the service monitor is detecting the service, as shown: <a href="https://i.stack.imgur.com/CZsK2.png" rel="nofollow noreferrer">Prometheus Service Discovery</a>. However, there is an error with obtaining the metrics from the service: <a href="https://i.stack.imgur.com/BoNzI.png" rel="nofollow noreferrer">Prometheus Targets</a>. 
</p> <p>From what I know, Prometheus isn't able to access /metrics on the sample service - in that case, do I need to expose the metrics? If so, could I get a step-by-step guide on how to expose them? If not, what route should I take? </p>
<p>I'm afraid you may have missed the key thing from the tutorial you're following on the CoreOS website, about how metrics from an app get to Prometheus:</p> <blockquote> <p>First, deploy three instances of a simple example application, <strong>which listens and exposes metrics on port 8080</strong></p> </blockquote> <p>Yes, your application (website) listens on port 8080, but it does not expose any metrics on the '/metrics' endpoint in a format known to Prometheus.</p> <p>You can see what kind of metrics I'm talking about by hitting the endpoint from inside the Pod/Container where it's hosted.</p> <pre><code>kubectl exec -it $(kubectl get po -l app=example-app -o jsonpath='{.items[0].metadata.name}') -c example-app -- curl localhost:8080/metrics </code></pre> <p>You should see output similar to this:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code># HELP codelab_api_http_requests_in_progress The current number of API HTTP requests in progress. # TYPE codelab_api_http_requests_in_progress gauge codelab_api_http_requests_in_progress 1 # HELP codelab_api_request_duration_seconds A histogram of the API HTTP request durations in seconds. 
# TYPE codelab_api_request_duration_seconds histogram codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.0001"} 0 codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00015000000000000001"} 0 codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00022500000000000002"} 0 codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.0003375"} 0 codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00050625"} 0 codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.000759375"} 0</code></pre> </div> </div> </p> <p>Please read more <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/exposing-metrics.md" rel="nofollow noreferrer">here</a> on ways of exposing metrics. </p>
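<p>To illustrate what "exposing metrics" means for the Python sample from the question: the pod needs an HTTP handler that serves Prometheus text-format lines at <code>/metrics</code>. In practice you would use the official <code>prometheus_client</code> library; the sketch below hand-rolls the text format with only the standard library so the shape of the endpoint is visible (the metric name and the use of an ephemeral port are made up for the example):</p>

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUESTS = 0  # toy counter; prometheus_client would manage this for you

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUESTS
        REQUESTS += 1
        if self.path == "/metrics":
            # Prometheus text exposition format: HELP/TYPE comments + samples
            body = (
                "# HELP demo_http_requests_total Total HTTP requests seen.\n"
                "# TYPE demo_http_requests_total counter\n"
                f"demo_http_requests_total {REQUESTS}\n"
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# scrape the endpoint once, the way Prometheus would
url = f"http://127.0.0.1:{server.server_port}/metrics"
scraped = urllib.request.urlopen(url).read().decode()
print(scraped)
server.shutdown()
```

<p>Anything serving lines in that format on a port named in the ServiceMonitor's endpoint is scrapeable; the codelab image from the tutorial just does this for you.</p>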
<p>I have a Java pod application which connect to CosmosDB using the default connection string. The pod runs perfectly on my local minikube. But it has connection exception when I deploy it to my AKS.</p> <p>The pod keep crashing with this MongoDB error.</p> <blockquote> <p>Timed out after 30000 ms while waiting for server that matches... Client view of cluster state is ... xxx.documents.azure.com:10255</p> </blockquote> <p>Looks like it cannot reach the CosmosDB. At first I think it is because the default Network security rules block the outgoing port 10255. Then I add a NSG to that resource group. Add the outgoing rule on port 10255. It does not solve the problem.</p> <p>Then I stumble upon this article. <a href="https://docs.bitnami.com/kubernetes/how-to/deploy-mean-cosmosdb-osba-aks/" rel="nofollow noreferrer">CosmosDB on AKS using OSBA</a> Is it the only way? do I have to use OSBA to access a public CosmosDB?</p> <p>Connection string copy from Azure portal</p> <blockquote> <p>mongodb://mycompany:some_base64_encrypted_stuff@mycompany.documents.azure.com:10255/?ssl=true&amp;replicaSet=globaldb</p> </blockquote> <p><a href="https://gist.github.com/maxiwu/3a151580801b6b2d7877bc77c15b405f" rel="nofollow noreferrer">error log of my pod</a></p> <p><strong>UPDATE:</strong><br> turns out the spring-boot-starter 2.1.0 is using mongodb-driver 3.8.2. the mongodb-driver appends :27017 to my connection string. I have updated it to 3.10.2. Now that the connection string is correct. My program running in Kubernetes is giving me UnknownHostException mydoc.documents.azure.com. I am guessing there could be a problem caused by build docker image in windows and then run it on alpine.</p> <p><strong>UPDATE:</strong><br> I think I am getting very close to the answer. The problem is from the kubernetes cluster. My cluster contains two nodes. I create another cluster with single node. Deploy my program to it and it connects to CosmosDB without error. 
But I do not know how to debug the Kubernetes cluster.</p>
<p>If you are using network policies, you can use the sample network policy below to allow all egress traffic:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all spec: podSelector: {} egress: - {} policyTypes: - Egress </code></pre> <p>If you are not using network policies, the connection should just work; you don't need OSBA. You can fine-tune the network policy later to make it more restrictive.</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a></p> <p>P.S. If you are using Istio with certain settings, your outbound requests will also be blocked; you'd need to account for that as well.</p>
<p>I have a service account <code>monitoring:prometheus-operator-operator</code> with a clusterrolebinding to to this clusterrole:</p> <pre><code>Name: prometheus-operator-operator Labels: app=prometheus-operator-operator chart=prometheus-operator-5.7.0 heritage=Tiller release=prometheus-operator Annotations: &lt;none&gt; PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- configmaps [] [] [*] secrets [] [] [*] customresourcedefinitions.apiextensions.k8s.io [] [] [*] statefulsets.apps [] [] [*] alertmanagers.monitoring.coreos.com/finalizers [] [] [*] alertmanagers.monitoring.coreos.com [] [] [*] prometheuses.monitoring.coreos.com/finalizers [] [] [*] prometheuses.monitoring.coreos.com [] [] [*] prometheusrules.monitoring.coreos.com [] [] [*] servicemonitors.monitoring.coreos.com [] [] [*] endpoints [] [] [get create update] services [] [] [get create update] namespaces [] [] [get list watch] pods [] [] [list delete] nodes [] [] [list watch] </code></pre> <p>Now, I am attempting to run this </p> <pre><code>curl -ik -X DELETE \ -H "Authorization: Bearer &lt;SERVICE_ACCOUNT_TOKEN&gt;" \ https://kubernetes.default.svc/apis/monitoring.coreos.com/v1/monitoring/prometheusrules/zalenium </code></pre> <p>from <strong>within</strong> a pod in the cluster to delete a <code>PrometheusRule</code>.</p> <p>My request however is not successful and being rejected with a 403:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": {}, "status": "Failure", "message": "monitoring.monitoring.coreos.com \"prometheusrules\" is forbidden: User \"system:serviceaccount:monitoring:prometheus-operator-operator\" cannot delete resource \"monitoring/zalenium\" in API group \"monitoring.coreos.com\" at the cluster scope", "reason": "Forbidden", "details": { "name": "prometheusrules", "group": "monitoring.coreos.com", "kind": "monitoring" }, "code": 403 } </code></pre> <p>Am I wrong believing that the service account in my 
<code>monitoring</code> namespace should be able to delete <code>PrometheusRule</code> on the cluster level?</p> <p>To me everything looks correct and I don't understand why I get a <code>Forbidden</code> response.</p>
<p>You forgot to put the namespace in the URI:</p> <pre><code> curl -ik -X DELETE \ -H "Authorization: Bearer &lt;SERVICE_ACCOUNT_TOKEN&gt;" \ https://kubernetes.default.svc/apis/monitoring.coreos.com/v1/namespaces/monitoring/prometheusrules/zalenium </code></pre> <p>With the following command you can verify whether you are allowed to take action X on resource Y:</p> <p><code>kubectl auth can-i delete prometheusrules --as system:serviceaccount:monitoring:prometheus-operator-operator -n monitoring</code></p> <p>With the <strong>-v flag</strong> you can increase the verbosity of the request, which also shows the request in curl form. </p>
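<p>The general pattern for a namespaced custom resource URI is <code>/apis/&lt;group&gt;/&lt;version&gt;/namespaces/&lt;namespace&gt;/&lt;plural&gt;/&lt;name&gt;</code>. A small sketch assembling the URL from the values in the question (the API server host is the in-cluster default):</p>

```shell
API_SERVER="https://kubernetes.default.svc"
GROUP="monitoring.coreos.com"
VERSION="v1"
NAMESPACE="monitoring"
PLURAL="prometheusrules"   # the resource's plural name, not its Kind
NAME="zalenium"

URL="$API_SERVER/apis/$GROUP/$VERSION/namespaces/$NAMESPACE/$PLURAL/$NAME"
echo "$URL"
```

<p>Leaving out the <code>namespaces/&lt;namespace&gt;</code> segment, as in the original request, makes the server parse the path as a different (cluster-scoped) resource, hence the misleading 403.</p>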
<p>I can run <code>kubectl get pod nginx -o=jsonpath={'.status'}</code> to get just the status in json for my pod.</p> <p>How can I do the same filtering but have the result returned in yaml instead of json?</p> <p>So I would like to get this kind of output by the command:</p> <pre><code>status: conditions: - lastProbeTime: null lastTransitionTime: "2019-05-31T14:58:57Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-05-31T14:59:02Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-05-31T14:58:57Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://5eb07d9c8c4de3b1ba454616ef7b258d9ce5548a46d4d5521a0ec5bc2d36b716 image: nginx:1.15.12 imageID: docker-pullable://nginx@sha256:1d0dfe527f801c596818da756e01fa0e7af4649b15edc3eb245e8da92c8381f8 lastState: {} name: nginx ready: true restartCount: 0 state: running: startedAt: "2019-05-31T14:59:01Z" </code></pre>
<p>You cannot do that with kubectl; there is no such output option for it.</p> <p>However, it is easy to extract those lines with <code>awk</code>:</p> <pre><code>kubectl get pod nginx -o yaml | awk '/^status:/{flag=1}flag' </code></pre> <p>This starts the output at the top-level line <code>status:</code>. In this case that is exactly what you want.</p>
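<p>Alternatively, you can pipe kubectl's JSON output through a tiny converter instead of text-matching. The Ruby snippet below shows just the conversion step — the JSON string is a stub standing in for the output of <code>kubectl get pod nginx -o json</code>:</p>

```ruby
require 'json'
require 'yaml'

# Stub for what `kubectl get pod nginx -o json` would print
pod_json = '{"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}'

# Parse the JSON, pick the status subtree, and re-emit it as YAML
status_yaml = JSON.parse(pod_json)["status"].to_yaml
puts status_yaml
```

<p>In practice you would run it as a one-liner: <code>kubectl get pod nginx -o json | ruby -rjson -ryaml -e 'puts JSON.parse(STDIN.read)["status"].to_yaml'</code>.</p>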
<p>I have a docker container that runs a GUI application, I can successfully run it, using this command</p> <pre><code>sudo docker run --net=host --env="DISPLAY" --volume="$HOME/.Xauthority:/root/.Xauthority:rw" -it test </code></pre> <p>I have already tried to create this deployment file:</p> <pre><code>{ "kind": "Deployment", "apiVersion": "extensions/v1beta1", "metadata": { "name": "test", "namespace": "default", "selfLink": "/apis/extensions/v1beta1/namespaces/default/deployments/gazebo", "uid": "249a12a9-83b8-11e9-8ec2-32ccf6441134", "resourceVersion": "6165060", "generation": 1, "creationTimestamp": "2019-05-31T15:24:12Z", "labels": { "k8s-app": "test" }, "annotations": { "deployment.kubernetes.io/revision": "1" } }, "spec": { "replicas": 1, "selector": { "matchLabels": { "k8s-app": "test" } }, "template": { "metadata": { "name": "test", "creationTimestamp": null, "labels": { "k8s-app": "test" } }, "spec": { "volumes": [ { "name": "test", "hostPath": { "path": "$HOME/.Xauthority:/root/.Xauthority:rw", "type": "" } } ], "containers": [ { "name": "test", "image": "test:1.0.12", "env": [ { "name": "DISPLAY", "value": ":0" } ], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "imagePullPolicy": "IfNotPresent", "securityContext": { "privileged": false, "procMount": "Default" }, "stdin": true } ], "restartPolicy": "Always", "terminationGracePeriodSeconds": 30, "dnsPolicy": "ClusterFirst", "nodeSelector": { "component": "test" }, "hostNetwork": true, "securityContext": {}, "schedulerName": "default-scheduler" } }, "strategy": { "type": "RollingUpdate", "rollingUpdate": { "maxUnavailable": "25%", "maxSurge": "25%" } }, "revisionHistoryLimit": 10, "progressDeadlineSeconds": 600 }, "status": { "observedGeneration": 1, "replicas": 1, "updatedReplicas": 1, "unavailableReplicas": 1, "conditions": [ { "type": "Progressing", "status": "True", "lastUpdateTime": "2019-05-31T15:24:14Z", "lastTransitionTime": 
"2019-05-31T15:24:12Z", "reason": "NewReplicaSetAvailable", "message": "ReplicaSet \"test-dbfdb6467\" has successfully progressed." }, { "type": "Available", "status": "False", "lastUpdateTime": "2019-05-31T15:40:21Z", "lastTransitionTime": "2019-05-31T15:40:21Z", "reason": "MinimumReplicasUnavailable", "message": "Deployment does not have minimum availability." } ] } } </code></pre> <p>I have tried to inspect the resulting containers and I have obtained these results:</p> <p>For the one deployed with kubernetes the result is</p> <pre><code>[ { "Id": "114e1d307b8260eaa02bfcf214031cf34ae522cf55258731a8a6dca535527995", "Created": "2019-05-31T15:24:13.320599267Z", "Path": "/home/startup.sh", "Args": [], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 10908, "ExitCode": 0, "Error": "", "StartedAt": "2019-05-31T15:24:13.627542552Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:2b02610511b1d09925b5b0b2471efea087819942c36c0c0bf490d6a28709f54e", "ResolvConfPath": "/home/docker/containers/8537a1c6038f5d6b4186d0b56a6b839207477825d0ad99d8610d36f66967495e/resolv.conf", "HostnamePath": "/home/docker/containers/8537a1c6038f5d6b4186d0b56a6b839207477825d0ad99d8610d36f66967495e/hostname", "HostsPath": "/var/lib/kubelet/pods/249ff7a4-83b8-11e9-8ec2-32ccf6441134/etc-hosts", "LogPath": "/home/docker/containers/114e1d307b8260eaa02bfcf214031cf34ae522cf55258731a8a6dca535527995/114e1d307b8260eaa02bfcf214031cf34ae522cf55258731a8a6dca535527995-json.log", "Name": "/k8s_gazebo_gazebo-dbfdb6467-fd448_default_249ff7a4-83b8-11e9-8ec2-32ccf6441134_0", "RestartCount": 0, "Driver": "aufs", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "docker-default", "ExecIDs": null, "HostConfig": { "Binds": [ "/var/lib/kubelet/pods/249ff7a4-83b8-11e9-8ec2-32ccf6441134/volumes/kubernetes.io~secret/default-token-4ncqs:/var/run/secrets/kubernetes.io/serviceaccount:ro", 
"/var/lib/kubelet/pods/249ff7a4-83b8-11e9-8ec2-32ccf6441134/etc-hosts:/etc/hosts", "/var/lib/kubelet/pods/249ff7a4-83b8-11e9-8ec2-32ccf6441134/containers/gazebo/f4a9bfa2:/dev/termination-log" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "container:8537a1c6038f5d6b4186d0b56a6b839207477825d0ad99d8610d36f66967495e", "PortBindings": null, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Dns": null, "DnsOptions": null, "DnsSearch": null, "ExtraHosts": null, "GroupAdd": null, "IpcMode": "container:8537a1c6038f5d6b4186d0b56a6b839207477825d0ad99d8610d36f66967495e", "Cgroup": "", "Links": null, "OomScoreAdj": 1000, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "seccomp=unconfined" ], "UTSMode": "host", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 2, "Memory": 0, "NanoCpus": 0, "CgroupParent": "/kubepods/besteffort/pod249ff7a4-83b8-11e9-8ec2-32ccf6441134", "BlkioWeight": 0, "BlkioWeightDevice": null, "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 100000, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": [ "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], "ReadonlyPaths": [ "/proc/asound", "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, 
"GraphDriver": { "Data": null, "Name": "aufs" }, "Mounts": [ { "Type": "bind", "Source": "/var/lib/kubelet/pods/249ff7a4-83b8-11e9-8ec2-32ccf6441134/volumes/kubernetes.io~secret/default-token-4ncqs", "Destination": "/var/run/secrets/kubernetes.io/serviceaccount", "Mode": "ro", "RW": false, "Propagation": "rprivate" }, { "Type": "bind", "Source": "/var/lib/kubelet/pods/249ff7a4-83b8-11e9-8ec2-32ccf6441134/etc-hosts", "Destination": "/etc/hosts", "Mode": "", "RW": true, "Propagation": "rprivate" }, { "Type": "bind", "Source": "/var/lib/kubelet/pods/249ff7a4-83b8-11e9-8ec2-32ccf6441134/containers/gazebo/f4a9bfa2", "Destination": "/dev/termination-log", "Mode": "", "RW": true, "Propagation": "rprivate" } ], "Config": { "Hostname": "davidePC", "Domainname": "", "User": "0", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": true, "StdinOnce": false, "Env": [ "DISPLAY=:0", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT_443_TCP_PORT=443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "LANG=C.UTF-8", "LC_ALL=C.UTF-8", "ROS_DISTRO=kinetic", "ROS_MASTER_URI=http://localhost:11311", "ROS_PACKAGE_PATH=/opt/ros/kinetic/share", "DEBIAN_FRONTEND=noninteractive", "TURTLEBOT_3D_SENSOR=no3d", "TURTLEBOT_TOP_PLATE_DEVICE=rplidar", "JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/" ], "Cmd": [], "Healthcheck": { "Test": [ "NONE" ] }, "ArgsEscaped": true, "Image": "sha256:2b02610511b1d09925b5b0b2471efea087819942c36c0c0bf490d6a28709f54e", "Volumes": null, "WorkingDir": "/home", "Entrypoint": [ "/home/startup.sh" ], "OnBuild": null, "Labels": { "annotation.io.kubernetes.container.hash": "86f118b", "annotation.io.kubernetes.container.restartCount": "0", 
"annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "annotation.io.kubernetes.container.terminationMessagePolicy": "File", "annotation.io.kubernetes.pod.terminationGracePeriod": "30", "io.kubernetes.container.logpath": "/var/log/pods/default_gazebo-dbfdb6467-fd448_249ff7a4-83b8-11e9-8ec2-32ccf6441134/gazebo/0.log", "io.kubernetes.container.name": "gazebo", "io.kubernetes.docker.type": "container", "io.kubernetes.pod.name": "gazebo-dbfdb6467-fd448", "io.kubernetes.pod.namespace": "default", "io.kubernetes.pod.uid": "249ff7a4-83b8-11e9-8ec2-32ccf6441134", "io.kubernetes.sandbox.id": "8537a1c6038f5d6b4186d0b56a6b839207477825d0ad99d8610d36f66967495e" } }, "NetworkSettings": { "Bridge": "", "SandboxID": "", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": {} } } ] </code></pre> <p>Instead for the one run from CLI, the result is:</p> <pre><code>[ { "Id": "ed428815132ac62020e36b1d50421ec7402c3b433907575a4858617d56322366", "Created": "2019-05-31T15:13:51.991700539Z", "Path": "/home/startup.sh", "Args": [], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 8377, "ExitCode": 0, "Error": "", "StartedAt": "2019-05-31T15:13:52.469581933Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:2b02610511b1d09925b5b0b2471efea087819942c36c0c0bf490d6a28709f54e", "ResolvConfPath": "/home/docker/containers/ed428815132ac62020e36b1d50421ec7402c3b433907575a4858617d56322366/resolv.conf", "HostnamePath": "/home/docker/containers/ed428815132ac62020e36b1d50421ec7402c3b433907575a4858617d56322366/hostname", "HostsPath": 
"/home/docker/containers/ed428815132ac62020e36b1d50421ec7402c3b433907575a4858617d56322366/hosts", "LogPath": "/home/docker/containers/ed428815132ac62020e36b1d50421ec7402c3b433907575a4858617d56322366/ed428815132ac62020e36b1d50421ec7402c3b433907575a4858617d56322366-json.log", "Name": "/youthful_benz", "RestartCount": 0, "Driver": "aufs", "Platform": "linux", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "docker-default", "ExecIDs": null, "HostConfig": { "Binds": [ "/home/davide/.Xauthority:/root/.Xauthority:rw" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "host", "PortBindings": {}, "RestartPolicy": { "Name": "no", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Dns": [], "DnsOptions": [], "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "shareable", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": [], "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": null, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0, "MaskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware" ], 
"ReadonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ] }, "GraphDriver": { "Data": null, "Name": "aufs" }, "Mounts": [ { "Type": "bind", "Source": "/home/davide/.Xauthority", "Destination": "/root/.Xauthority", "Mode": "rw", "RW": true, "Propagation": "rprivate" } ], "Config": { "Hostname": "davidePC", "Domainname": "", "User": "", "AttachStdin": true, "AttachStdout": true, "AttachStderr": true, "Tty": true, "OpenStdin": true, "StdinOnce": true, "Env": [ "DISPLAY=:0", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "LANG=C.UTF-8", "LC_ALL=C.UTF-8", "ROS_DISTRO=kinetic", "ROS_MASTER_URI=http://localhost:11311", "ROS_PACKAGE_PATH=/opt/ros/kinetic/share", "DEBIAN_FRONTEND=noninteractive", "TURTLEBOT_3D_SENSOR=no3d", "TURTLEBOT_TOP_PLATE_DEVICE=rplidar", "JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/" ], "Cmd": [], "ArgsEscaped": true, "Image": "cpswarm/gazebo-em-ex:1.0.12", "Volumes": null, "WorkingDir": "/home", "Entrypoint": [ "/home/startup.sh" ], "OnBuild": null, "Labels": {} }, "NetworkSettings": { "Bridge": "", "SandboxID": "70d4cff7200ece8610fdcc04ffed7b2248caea6e61658c4622368a25d0864660", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/default", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "host": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "81ea4770bb3b91fca3383336b35ebd9c6a00c7728180873b99fb663fcbb3ef4f", "EndpointID": "1aa92639019060c8a229a06b86a79b0e6607437ab6a9c5de45f83d797b8e5b9f", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "DriverOpts": null } } } } ] </code></pre> <p>The result that I obtain is that running from 
CLI, I can see the GUI of the application running in the docker container. Instead with the container deployed with Kubernetes I see the application running but I cannot see the GUI.</p> <p>Is there something that I can change in the deployment file to make the deployed container working?</p>
<p>To run a GUI application on a headless server, you need <a href="https://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml" rel="noreferrer">Xvfb</a> to create a virtual framebuffer acting as the X server, and let the application connect to that X server via the <code>DISPLAY</code> environment variable.</p> <p>Example yaml using the <a href="https://hub.docker.com/r/comiq/xvfb" rel="noreferrer">xvfb docker image</a> (note that env values must be quoted strings):</p> <pre><code>... spec: containers: - name: test image: test:1.0.12 env: - name: DISPLAY value: "localhost:1.0" - name: xvfb image: comiq/xvfb:latest env: - name: DISPLAY value: "1" - name: SCREEN value: "0" ports: - name: xserver containerPort: 6001 </code></pre>
<p>I've come across two types of syntax for creating ConfigMaps from files in Kubernetes.</p> <p><strong>First one</strong>:</p> <hr> <pre><code>apiVersion: v1 data: file1.yaml: |+ parameter1=value1 kind: ConfigMap metadata: name: my-configmap </code></pre> <p><strong>Second one</strong>:</p> <pre><code>apiVersion: v1 data: file1.yaml: |- parameter1=value1 kind: ConfigMap metadata: name: my-configmap </code></pre> <p>What is the difference between |+ and |- ?</p>
<p>This is the <a href="https://yaml-multiline.info/" rel="noreferrer">block chomping indicator</a>.</p> <p>Directly quoting from the link:</p> <blockquote> <p>The chomping indicator controls what should happen with newlines at the end of the string. The default, clip, puts a single newline at the end of the string. To remove all newlines, strip them by putting a minus sign (-) after the style indicator. Both clip and strip ignore how many newlines are actually at the end of the block; to keep them all put a plus sign (+) after the style indicator.</p> </blockquote> <p>This means that for:</p> <pre><code>apiVersion: v1 data: file1.yaml: |- parameter1=value1 kind: ConfigMap metadata: name: my-configmap </code></pre> <p>file1.yaml will have the value:</p> <pre><code>parameter1=value1 </code></pre> <p>For:</p> <pre><code>apiVersion: v1 data: file1.yaml: |+ parameter1=value1 kind: ConfigMap metadata: name: my-configmap </code></pre> <p>file1.yaml will have the value:</p> <pre><code>parameter1=value1 # line break # line break # line break </code></pre>
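<p>You can see the difference with any YAML parser. Here is a quick check using Ruby's standard library — the keys are made up for illustration, and each block scalar is followed by two blank lines:</p>

```ruby
require 'yaml'

# One block scalar per chomping style, each followed by two blank lines
doc = "keep: |+\n  line\n\n\nclip: |\n  line\n\n\nstrip: |-\n  line\n"

h = YAML.safe_load(doc)
h["keep"]   # keeps every trailing newline: "line\n\n\n"
h["clip"]   # clips to a single newline:    "line\n"
h["strip"]  # strips trailing newlines:     "line"
```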
<p>We're experimenting with Kubernetes and thinking about migrating our webapp, currently hosted in a classic public cloud with VPSs and load balancing.</p> <p>I'd like to know how to manage an upload folder (with write/read privileges) inside our dockerized app in which files uploaded by our users are stored.</p> <p>As we use a lot of file uploading in our apps, would it be possible to have persistent data storage into which the webapp can upload files and manage them? Do I have to change the code of my app and save files to external object storage, or is there a faster way to do it via the local filesystem?</p> <p>Thank you !</p>
<p>I'm guessing that if you are running more than one pod, you'd want some sort of shared filesystem, so all uploaded documents are available in one location instead of scattered around several persistent volumes? If that's the case, you want some sort of distributed file storage like Amazon S3. Even better, you could use an NFS volume in Kubernetes, so your code thinks it is using the local filesystem but it is backed by a distributed file store.</p> <p>Take a look at this example: <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs</a>. The NFS server can be backed by a persistent volume from GKE, Azure, or AWS.</p>
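<p>The key property for the upload-folder use case is the <code>ReadWriteMany</code> access mode, so several webapp pods can mount and write the same volume. A sketch of the claim your app would mount — the name and size here are made up:</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads
spec:
  accessModes:
    - ReadWriteMany   # multiple pods may read and write concurrently
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>An NFS-backed PersistentVolume (like the one in the linked example) supports <code>ReadWriteMany</code>; most cloud block-disk volume types only support <code>ReadWriteOnce</code>.</p>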
<p>When using a Kuberenetes service on Azure the nodes are by default built with an Ubuntu image. </p> <p>I have a use case of wanting to add more nodes but on the az CLI the os-type is only Linux (ubuntu) or Windows. </p> <p>Is there a way of adding a Node to an existing Kubernetes cluster on Azure that is of different Linux types like CentOS?</p>
<p>AKS only supports Ubuntu at this time, so this is not yet possible.</p>
<p>Our goal is to run kubernetes in AWS and Azure with minimal customization (setting up a kubernetes managed env), support and maintenance. We need portability of containers across cloud providers. </p> <p>Our preferred cloud provider is AWS. We are planning on running containers in EKS. We wanted to understand the customization effort required to run these containers in AKS. </p> <p>Would you recommend choosing a container management platform like Pivotal Cloud Foundry or Red Hat OpenShift, or running the containers on AWS EKS or AKS, where less customization is needed to run containers across different cloud providers?</p>
<p>You need to define a common set of storage classes that map to similar volume types on each provider. If you are using some kind of provider-based Ingress controller, those can vary, so I would recommend using an internal one like nginx or traefik. If you are using customization annotations for things like networking, those can vary too, but using those is pretty rare. Otherwise, k8s is k8s.</p>
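<p>For example, you can give each cluster a storage class with the same name, backed by that provider's native disk type, so workloads can reference <code>storageClassName: standard</code> unchanged on both clouds. The class name and parameter values below are illustrative:</p> <pre><code># On EKS: "standard" maps to an EBS gp2 volume
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
# On AKS: the same class name backed by Azure managed disks
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
</code></pre>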
<p>I have an API written in <code>Express</code>, that connects to a <code>Mongo</code> container. When I spin these up locally, I can connect to the mongo instance using something like <code>mongodb://${DB_HOST}:${DB_PORT}/${DB_NAME}</code> and setting <code>.env</code> variables.</p> <p>What I am struggling to understand, is once this deployed to GKE, how will my API connect to the mongo container / pod?</p> <p>It won't be running on localhost I assume, so perhaps I will need to use the internal IP created? </p> <p>Should it actually be connecting via a <code>service</code>? What would that service look like?</p> <p>I am struggling to find docs on exactly where I am stuck so I am thinking I am missing something really obvious.</p> <p>I'm free new to <code>GKE</code> so any examples would help massively.</p>
<p>Create a mongodb deployment and a mongodb service of type ClusterIP, which basically means that your API will be able to connect to the db internally. If you want to connect to your db from outside, create a service of type LoadBalancer or another service type (see <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">here</a>).</p> <p>With a service of type ClusterIP, let's say you give it a <code>name</code> of <code>mongodbservice</code> under the <code>metadata</code> key. Then your API can connect to it at <code>mongodb://mongodbservice:${DB_PORT}/${DB_NAME}</code>.</p>
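<p>A minimal sketch of such a ClusterIP service — the name, labels, and port are assumptions, so match them to your own mongodb deployment:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongodbservice
spec:
  type: ClusterIP       # the default; reachable only inside the cluster
  selector:
    app: mongodb        # must match the labels on your mongodb pods
  ports:
    - port: 27017       # port the service exposes
      targetPort: 27017 # port the mongo container listens on
</code></pre> <p>Your API would then connect with <code>mongodb://mongodbservice:27017/${DB_NAME}</code>.</p>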
<p>I have created a k8 cluster in GKE. </p> <p>I have a docker registry created in Artifactory, this artifactory is hosted on AWS. I have a route53 entry for <em>docker-repo.aws.abc.com</em> in <em>aws.abc.com</em> Hosted zone in AWS</p> <p>Now, I need to configure my cluster so that the docker images are pulled from artifactory. </p> <p>I went through documentation and understand I will have to add <em>stubDomain</em> in my <em>kube-dns</em> configmaps. </p> <pre><code>kubectl edit cm kube-dns -n kube-system apiVersion: v1 data: stubDomains: | {"aws.abc.com" : ["XX.XX.XX.XX"]} kind: ConfigMap metadata: creationTimestamp: 2019-05-21T14:30:15Z labels: addonmanager.kubernetes.io/mode: EnsureExists name: kube-dns namespace: kube-system resourceVersion: "7669" selfLink: /api/v1/namespaces/kube-system/configmaps/kube-dns uid: f378aa5f-7bd4-11e9-9df2-42010aa93d03 </code></pre> <p>However, still docker pull command fails. </p> <p><code>docker pull docker-repo.aws.abc.com/abc-sampleapp-java/abc-service:V-57bc9c9-201</code></p> <p><code>Error response from daemon: Get https://docker-repo.aws.abc.com/v2/: dial tcp: lookup docker-dev-repo.aws.abc.com on 169.254.169.254:53: no such host</code></p> <p>Note: When I make an entry in <em>/etc/hosts</em> file on worker nodes, docker pull works fine. </p>
<p>Pulling an image from a registry on pod start uses different <em>DNS settings</em> than DNS lookups made from pods inside the cluster.</p> <p>When Kubernetes starts a new pod, it schedules it to a node, and then the agent on that node, named <em>kubelet</em>, calls the container engine (Docker) to download the image and run it with the desired configuration.</p> <p>Docker uses the system DNS to resolve the address of a registry, because it runs directly on the <em>host system</em>, not inside Kubernetes; that is why cluster DNS settings will not affect name resolution during the image download stage. <a href="https://github.com/kubernetes/kubernetes/issues/8735" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/8735</a> is a discussion about this on GitHub.</p> <p>If we want to override the registry IP used during the image download stage, we should set it on the host system. We need to modify the DNS settings on all nodes in the cluster. The simplest way to do it is to use the <em>/etc/hosts</em> file and add a record with your custom IP, e.g. <code>192.168.1.124 example.com</code>.</p> <p>After that modification, Docker on the nodes will use the record from <em>/etc/hosts</em> for your registry instead of the global DNS records, because that file has higher priority, and you will be able to run pods with your image.</p> <p>To update the hosts file,
you can use a DaemonSet with a privileged security context, as shown below:</p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: name: hosts-fix-script namespace: kube-system labels: app: hosts-fix-script spec: selector: matchLabels: name: hosts-fix template: metadata: labels: name: hosts-fix spec: hostPID: true containers: - name: hosts-fix-script image: gcr.io/google-containers/startup-script:v1 imagePullPolicy: Always securityContext: privileged: true env: - name: STARTUP_SCRIPT value: | #!/bin/bash echo "10.0.0.11 onprem.registry" &gt;&gt; /etc/hosts echo 'Done' </code></pre> <p>You then need to run <code>kubectl apply -f</code> with this manifest to deploy it.</p>
<p>We're experimenting with Kubernetes and thinking about migrating our webapp, currently hosted in a classic public cloud with VPSs and load balancing.</p> <p>I'd like to know how to manage an upload folder (with write/read privileges) inside our dockerized app in which files uploaded by our users are stored.</p> <p>As we use a lot of file uploading in our apps, would it be possible to have persistent data storage into which the webapp can upload files and manage them? Do I have to change the code of my app and save files to external object storage, or is there a faster way to do it via the local filesystem?</p> <p>Thank you !</p>
<p>A persistent volume is the right way to go. You can read more about it in the link provided by Veerendra. For ease, you can use a managed Kubernetes engine like GKE/EKS/AKS, as the PV part is simplified there. You can find a good guide from GCP <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk" rel="nofollow noreferrer">here</a>.</p> <p>As for rewriting the application, there will probably be some small changes to make, because you will need to use a Pod-level database. It will also be a more suitable solution, as you get an easy way to scale your app up and down based on load, plus a couple of other Kubernetes features.</p> <p>I would not worry about the LoadBalancer here. First you dockerize your app, then put it inside a Pod, which will be part of a Deployment, and then abstract the storage as a PV. To get a grasp of how this all works, you can start with this simple lab: <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/" rel="nofollow noreferrer">Deploying PHP Guestbook application with Redis</a>. You can find another useful resource about using PVs with Deployments and StatefulSets <a href="https://akomljen.com/kubernetes-persistent-volumes-with-deployment-and-statefulset/" rel="nofollow noreferrer">here</a>.</p>
<p>I'm sorry if this is a very ignorant question but is it possible for Ambassador to truly handle CORS headers and pre-flight OPTION responses? </p> <p>The docs (<a href="https://www.getambassador.io/reference/cors" rel="noreferrer">https://www.getambassador.io/reference/cors</a>) seem kind of ambiguous to me, if there are just hooks to prevent requests, or if it can truly respond on behalf of the services.</p> <p>Here's my situation: I've got Ambassador in front of all the http requests to some microservices. For [reasons] we now need a separate domain to make requests into the same Ambassador. </p> <p>I have an AuthService configured, and according to the docs "When you use external authorization, each incoming request is authenticated before routing to its destination, including pre-flight OPTIONS requests." Which makes perfect sense, and that's what I'm seeing. My AuthService is configured to allow things correctly and that seems to be working. The AuthService responds with the appropriate headers, but Ambassador seems to just ignore that and only cares if the AuthService responds with a 200 or not. (Which seems totally reasonable.)</p> <p>I have this annotated on my ambassador module:</p> <pre><code>getambassador.io/config: | --- apiVersion: ambassador/v1 kind: Module name: ambassador config: service_port: 8080 cors: origins: [my domain here] credentials: true </code></pre> <p>And that doesn't seem to do what I'd expect, which is handle the CORS headers and pre-flight... instead it forwards it on to the service to handle all the CORS stuff. </p>
<p>Turns out, by specifying <code>headers: "Content-Type"</code> in the <code>cors</code> configuration, things just started to work. Apparently that's not as optional as I thought.</p> <p>So this is now my module:</p> <pre><code>getambassador.io/config: | --- apiVersion: ambassador/v1 kind: Module name: ambassador config: service_port: 8080 cors: origins: [my domain here] headers: "Content-Type" credentials: true </code></pre>
<p>I have an ASP.Net Core Web API 2.2 project that runs normally on my local Docker Desktop. I'm trying to run it on Azure's AKS, but it won't run there, and I can't understand why.<br> Below is my PowerShell script that I use to publish my project into a <code>app</code> directory that will be later inserted into the container:</p> <pre><code>Remove-Item ..\..\..\..\projects\MyProject.Selenium.Commom\src\Selenium.API\bin\Release\* -Force -Recurse dotnet publish ..\..\..\..\projects\MyProject.Selenium.Commom\src\Selenium.Comum.sln -c Release -r linux-musl-x64 $path = (Get-Location).ToString() + "\app" if (Test-Path($path)) { Remove-Item -Path $path -Force -Recurse } New-Item -ItemType Directory -Force app Get-ChildItem ..\..\..\..\projects\MyProject.Selenium.Commom\src\Selenium.API\bin\Release\netcoreapp2.2\linux-musl-x64\publish\* | Copy-Item -Destination .\app -Recurse </code></pre> <p>Here is my <code>Dockerfile</code></p> <pre><code># Build runtime image FROM mcr.microsoft.com/dotnet/core/runtime:2.2-alpine3.9 WORKDIR /app /app WORKDIR /app ENTRYPOINT ["dotnet", "Selenium.API.dll"] </code></pre> <p>Below is my Docker build command:</p> <pre><code>docker build -t mylocaldocker/selenium-web-app:latest -t mylocaldocker/selenium-web-app:v0.0.2 . </code></pre> <p>And my Docker run command</p> <pre><code>docker run --name selweb --detach --rm -p 85:80 mylocaldocker/selenium-web-app:latest </code></pre> <p>Everything spins up nice and smooth, and I'm able to send requests locally on port 85 without an issue (port 80 is being used by IIS)<br> However, doing similar procedures on Azure's AKS, the container won't start. I use the identical PowerShell script to publish my application, and the dockerfile is identical as well. My build command changes so that I can push to Azure's Docker Registry:</p> <pre><code>docker build -t myproject.azurecr.io/selenium-web-app:latest -t myproject.azurecr.io/selenium-web-app:v0.0.1 . 
</code></pre> <p>I login to the Azure Docker Registry, and push the image to it:</p> <pre><code>docker push myproject.azurecr.io/selenium-web-app:latest </code></pre> <p>I've already created my AKS cluster and gave permission to pull images from my registry. I try to run the image on AKS using the command:</p> <pre><code>kubectl run seleniumweb --image myproject.azurecr.io/selenium-web-app:latest --port 80 </code></pre> <p>And I get the response </p> <pre><code>deployment.apps "seleniumweb" created </code></pre> <p>However, when I get the running pods:</p> <pre><code>kubectl get pods </code></pre> <p>I get an error Status on my pod</p> <pre><code>NAME READY STATUS RESTARTS AGE seleniumweb-7b5f645698-9g7f6 0/1 Error 4 1m </code></pre> <p>When I get the logs from the pod:</p> <pre><code>kubectl logs seleniumweb-7b5f645698-9g7f6 </code></pre> <p>I get this back:</p> <pre><code>Did you mean to run dotnet SDK commands? Please install dotnet SDK from: https://go.microsoft.com/fwlink/?LinkID=798306&amp;clcid=0x409 </code></pre> <p>Below is the result of kubectl describe for the pod:</p> <pre><code>kubectl describe pods Name: seleniumweb-7b5f645698-9g7f6 Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: aks-agentpool-41564776-0/10.240.0.4 Start Time: Sun, 02 Jun 2019 11:40:47 -0300 Labels: pod-template-hash=7b5f645698 run=seleniumweb Annotations: &lt;none&gt; Status: Running IP: 10.240.0.25 Controlled By: ReplicaSet/seleniumweb-7b5f645698 Containers: seleniumweb: Container ID: docker://1d548f4934632efb0b7c5a59dd0ac2bd173f2ee8fa5196b45d480fb10e88a536 Image: myproject.azurecr.io/selenium-web-app:latest Image ID: docker-pullable://myproject.azurecr.io/selenium-web-app@sha256:97e2915a8b43aa8e726799b76274bb9b5b852cb6c78a8630005997e310cfd41a Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 145 Started: Sun, 02 Jun 2019 11:43:39 -0300 Finished: Sun, 02 Jun 2019 11:43:39 -0300 Ready: False 
    Restart Count:  5
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  myprojectus-dns-54302b78.hcp.eastus2.azmk8s.io
      KUBERNETES_PORT:               tcp://myprojectus-dns-54302b78.hcp.eastus2.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://myprojectus-dns-54302b78.hcp.eastus2.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       myprojectus-dns-54302b78.hcp.eastus2.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mhvfv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-mhvfv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mhvfv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From                               Message
  ----     ------     ----               ----                               -------
  Normal   Scheduled  5m                 default-scheduler                  Successfully assigned default/seleniumweb-7b5f645698-9g7f6 to aks-agentpool-41564776-0
  Normal   Created    4m (x4 over 5m)    kubelet, aks-agentpool-41564776-0  Created container
  Normal   Started    4m (x4 over 5m)    kubelet, aks-agentpool-41564776-0  Started container
  Normal   Pulling    3m (x5 over 5m)    kubelet, aks-agentpool-41564776-0  pulling image "myproject.azurecr.io/selenium-web-app:latest"
  Normal   Pulled     3m (x5 over 5m)    kubelet, aks-agentpool-41564776-0  Successfully pulled image "myproject.azurecr.io/selenium-web-app:latest"
  Warning  BackOff    20s (x24 over 5m)  kubelet, aks-agentpool-41564776-0  Back-off restarting failed container
</code></pre> <p>And I don't understand why, since everything runs fine on my local Docker. Any help would be greatly appreciated. Thanks</p>
<p>That Dockerfile looks funny. It doesn't do anything. WORKDIR just "sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile" (from <a href="https://docs.docker.com/engine/reference/builder/#workdir" rel="nofollow noreferrer">docs.docker.com</a>). So you're setting the working directory twice, then nothing else. And the entrypoint would then point to a nonexistent .dll since you never copied it over. I think you want to delete the first WORKDIR command and add this after the remaining WORKDIR command:</p> <pre><code>COPY . ./ </code></pre> <p>Even better, use a <a href="https://docs.docker.com/engine/examples/dotnetcore/" rel="nofollow noreferrer">two stage build</a> so it builds on docker, then copies the build to a runtime image that is published.</p> <p>I don't know why docker run is working locally for you. Is it picking up an old image somehow? Based on your Dockerfile, it shouldn't run.</p>
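<p>To make that concrete, a two-stage sketch for this project might look like the following. Note this is assumption-heavy: the SDK image tag and the <code>Selenium.API/Selenium.API.csproj</code> path are guesses based on the question and will need adjusting to the real solution layout:</p> <pre><code># Build stage: needs the SDK image, not the runtime image
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-alpine AS build
WORKDIR /src
COPY . .
RUN dotnet publish Selenium.API/Selenium.API.csproj -c Release -o /app/publish

# Runtime stage: copy only the published output
FROM mcr.microsoft.com/dotnet/core/runtime:2.2-alpine3.9
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Selenium.API.dll"]
</code></pre> <p>The first stage compiles inside the SDK image; the runtime stage only receives the published output, which keeps the final image small and guarantees the <code>Selenium.API.dll</code> the entrypoint expects is actually present.</p>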
<p>I see <a href="https://github.com/dgkanatsios/CKAD-exercises/blob/master/d.configuration.md#create-and-display-a-configmap-from-a-file-giving-the-key-special" rel="nofollow noreferrer">here</a> a syntax like this:</p> <pre><code>kubectl create cm configmap4 --from-file=special=config4.txt
</code></pre> <p>I could not find a description of what the repeated <code>=</code> and the <strong>special</strong> keyword mean here. The Kubernetes documentation <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">here</a> only shows a single <strong>=</strong> after <strong>--from-file</strong> when creating configmaps with kubectl.</p>
<p>Generating the YAML shows that this middle argument is the key under which the file's contents get nested (the <strong>special</strong> keyword in the question's example).</p> <p>It appears like this:</p> <pre><code>apiVersion: v1
data:
  special: |
    var3=val3
    var4=val4
kind: ConfigMap
metadata:
  creationTimestamp: "2019-06-01T08:20:15Z"
  name: configmap4
  namespace: default
  resourceVersion: "123320"
  selfLink: /api/v1/namespaces/default/configmaps/configmap4
  uid: 1582b155-8446-11e9-87b7-0800277f619d
</code></pre>
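<p>For comparison (a sketch, assuming the same <code>config4.txt</code> contents as above): without the extra key, the file name itself becomes the data key, so <code>kubectl create cm configmap4 --from-file=config4.txt</code> would instead produce something like:</p> <pre><code>apiVersion: v1
data:
  config4.txt: |
    var3=val3
    var4=val4
kind: ConfigMap
metadata:
  name: configmap4
  namespace: default
</code></pre> <p>The <code>special=</code> prefix is therefore just a way to choose the key name instead of inheriting the file name.</p>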
<p>I am trying to use the Terraform Helm provider (<a href="https://www.terraform.io/docs/providers/helm/index.html" rel="noreferrer">https://www.terraform.io/docs/providers/helm/index.html</a>) to deploy a workload to a GKE cluster.</p> <p>I am more or less following Google's example - <a href="https://github.com/GoogleCloudPlatform/terraform-google-examples/blob/master/example-gke-k8s-helm/helm.tf" rel="noreferrer">https://github.com/GoogleCloudPlatform/terraform-google-examples/blob/master/example-gke-k8s-helm/helm.tf</a>, but I do want to use RBAC by creating the service account manually.</p> <p>My helm.tf looks like this:</p> <pre><code>variable "helm_version" {
  default = "v2.13.1"
}

data "google_client_config" "current" {}

provider "helm" {
  tiller_image   = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
  install_tiller = false # Temporary

  kubernetes {
    host                   = "${google_container_cluster.data-dome-cluster.endpoint}"
    token                  = "${data.google_client_config.current.access_token}"
    client_certificate     = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_certificate)}"
    client_key             = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_key)}"
    cluster_ca_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.cluster_ca_certificate)}"
  }
}

resource "helm_release" "nginx-ingress" {
  name  = "ingress"
  chart = "stable/nginx-ingress"

  values = [&lt;&lt;EOF
rbac:
  create: false
controller:
  stats:
    enabled: true
  metrics:
    enabled: true
  service:
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
    externalTrafficPolicy: "Local"
EOF
  ]

  depends_on = [
    "google_container_cluster.data-dome-cluster",
  ]
}
</code></pre> <p>I am getting the following error:</p> <pre><code>Error: Error applying plan:

1 error(s) occurred:

* module.data-dome-cluster.helm_release.nginx-ingress: 1 error(s) occurred:

* helm_release.nginx-ingress: error creating tunnel: "pods is forbidden: User \"client\" cannot list pods in the
namespace \"kube-system\""

Terraform does not automatically rollback in the face of errors. Instead, your Terraform state file has been partially updated with any resources that successfully completed. Please address the error above and apply again to incrementally change your infrastructure.
</code></pre> <p>This happens after I manually created the Helm RBAC resources and installed Tiller.</p> <p>I also tried setting "install_tiller=true" before, with exactly the same error once Tiller was installed.</p> <p>"kubectl get pods" works without any problems.</p> <p>What is this user "client" and why is it forbidden from accessing the cluster?</p> <p>Thanks</p>
<p>Creating resources for the service account and cluster role binding explicitly works for me:</p> <pre><code>resource "kubernetes_service_account" "helm_account" {
  depends_on = [
    "google_container_cluster.data-dome-cluster",
  ]

  metadata {
    name      = "${var.helm_account_name}"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "helm_role_binding" {
  metadata {
    name = "${kubernetes_service_account.helm_account.metadata.0.name}"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    api_group = ""
    kind      = "ServiceAccount"
    name      = "${kubernetes_service_account.helm_account.metadata.0.name}"
    namespace = "kube-system"
  }

  provisioner "local-exec" {
    command = "sleep 15"
  }
}

provider "helm" {
  service_account = "${kubernetes_service_account.helm_account.metadata.0.name}"
  tiller_image    = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
  #install_tiller = false # Temporary

  kubernetes {
    host                   = "${google_container_cluster.data-dome-cluster.endpoint}"
    token                  = "${data.google_client_config.current.access_token}"
    client_certificate     = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_certificate)}"
    client_key             = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_key)}"
    cluster_ca_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.cluster_ca_certificate)}"
  }
}
</code></pre>
<p>So I've been struggling with how to deploy a dockerized application. The app consists of a react frontend and an express API. My <code>docker-compose.yml</code> for the development environment looks like the following:</p> <pre><code>version: '3'
services:
  # Express Container
  backend:
    build: ./backend
    expose:
      - ${BACKEND_PORT}
    env_file:
      - ./.env
    environment:
      - PORT=${BACKEND_PORT}
    ports:
      - ${BACKEND_PORT}:${BACKEND_PORT}
    volumes:
      - ./backend:/backend
    command: npm run devstart
    links:
      - mongo

  # React Container
  frontend:
    build: './frontend'
    expose:
      - ${REACT_APP_PORT}
    env_file:
      - ./.env
    environment:
      - REACT_APP_BACKEND_PORT=${BACKEND_PORT}
    ports:
      - ${REACT_APP_PORT}:${REACT_APP_PORT}
    volumes:
      - ./frontend/src:/frontend/src
      - ./frontend/public:/frontend/public
    links:
      - backend
    command: npm start

  mongo:
    image: mongo
    ports:
      - "27017:27017"
</code></pre> <p>But I've been struggling with how to structure it for production.</p> <p>I've seen that there are basically 3 options:</p> <ol> <li>Deploy the frontend and backend separately to different servers. In this case, the react frontend would be on some web hosting service, and the express backend would be hosted on kubernetes</li> <li>Have the express application serve out the react application</li> <li>Have the applications separate but use NGINX to proxy API requests to the express app</li> </ol> <p>I was thinking I would go with option 3, because it would keep the development and production environments quite similar. (Please tell me if this is bad structure; this application is expected to receive a lot of traffic.)</p> <p>Should I maybe forget docker-compose and create a multistage dockerfile that uses multistage builds to copy over frontend and backend code? That way I can deploy a single Docker container?</p> <p>My folder structure looks like the following:</p> <pre><code>app/
  .env
  docker-compose.yml
  docker-compose.prod.yml
  .gitignore
  frontend/
    Dockerfile
    ... react stuff
  backend
    Dockerfile
    ..
    express stuff
</code></pre> <p>Am I going about this all wrong? How have you deployed your applications with docker-compose to production (preferably on kubernetes)?</p> <p>I can find tons of stuff about how to get this stuff running in development, but I'm lost when it comes to direction for deploying this type of stack.</p>
<p>You might start with reading through the <a href="https://kubernetes.io/docs/home/" rel="nofollow noreferrer">Kubernetes documentation</a> and understanding what's straightforward and what's not. You're most interested in Deployments and Services, and possibly Ingress. The MongoDB setup with associated persistent state will be more complicated, and you might look at a prepackaged solution like <a href="https://github.com/helm/charts/tree/master/stable/mongodb" rel="nofollow noreferrer">the stable/mongodb Helm chart</a> or <a href="https://github.com/mongodb/mongodb-enterprise-kubernetes" rel="nofollow noreferrer">MongoDB's official operator</a>.</p> <p>Note that an important part of the Kubernetes setup is that there will almost always be multiple Nodes, and you don't get a whole lot of control over which Node a Pod will be placed on. In particular that means that the Docker Compose <code>volumes:</code> you show won't work well in a Kubernetes environment – in addition to doing all the normal Kubernetes deployment work, you'd also need to replicate the application source code to every node. That's twice the work for the same deployment. Usually you will want all of the application code to be contained in the Docker image, with a typical Node-based Dockerfile looking something like</p> <pre><code>FROM node:10
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY ./ ./
RUN yarn build
EXPOSE 3000
CMD yarn start
</code></pre> <p>Just within the <code>docker-compose.yml</code> file you show:</p> <ul> <li><p>The <code>volumes:</code> make your containers substantially different from what you might run in production; delete them.</p></li> <li><p>Don't bother making the container-internal ports configurable. In plain Docker, Docker Compose, and Kubernetes you can remap the container-internal port to an arbitrary externally-accessible port at deployment time.
You can pick fixed numbers here and it's fine.</p></li> <li><p>Several of the details you show, like the ports the container <code>expose:</code> and the default <code>command:</code> to run, are properly parts of the image (every time you run the image they will be identical), so move these into the Dockerfile.</p></li> <li><p><code>links:</code> are redundant these days, and you can just delete them. In Docker Compose you can always reach another service by the name of its service block.</p></li> <li><p>The names of the other related services will be different in different environments. For example, MongoDB might be on <code>localhost</code> when you're actually developing your application outside of Docker, <code>mongo</code> in the configuration you show, <code>mongo.myapp.svc.cluster.local</code> in Kubernetes, or you might choose to run it outside of Docker entirely. You'll generally want these to be configurable, usually with environment variables.</p></li> </ul> <p>This gives you a <code>docker-compose.yml</code> file a little more like:</p> <pre><code>version: '3'
services:
  backend:
    build: ./backend
    environment:
      MONGO_URL: 'mongodb://mongo'
    ports:
      - 3000:3000
  frontend:
    build: './frontend'
    environment:
      BACKEND_URL: 'http://backend'
    ports:
      - 8000:8000
  mongo:
    image: mongo
    ports:
      - "27017:27017"
</code></pre> <p>As @frankd hinted in their answer, it's also very common to use a tool like Webpack to precompile a React application down into a set of static files. Depending on how you're actually deploying this, it could make sense to run this compilation step ahead of time, push those compiled JavaScript and CSS files out to some other static-hosting service, and take them out of Docker/Kubernetes entirely.</p>
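<p>To make the Kubernetes side more concrete, a Deployment plus Service for the backend might look roughly like this. It is a sketch only: the image name, replica count, and the <code>MONGO_URL</code> value are assumptions you would adapt to your own registry and cluster:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: myregistry/backend:1.0.0   # assumed image name and tag
        ports:
        - containerPort: 3000
        env:
        - name: MONGO_URL
          value: mongodb://mongo           # assumes a Service named "mongo"
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 3000
</code></pre> <p>The Service gives the frontend a stable in-cluster name (<code>http://backend</code>), which plays the same role the Compose service name did in development.</p>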
<p>I try to patch a deployment with the following command:</p> <pre><code>kubectl patch deployment spin-clouddriver -n spinnaker --type='json' -p='[{"op": "add", "path": "/spec/spec/containers/0/volumeMounts", "value": {"mountPath": "/etc/ssl/certs/java/cacerts", "subPath": "cacerts", "name": "cacerts"}}]' </code></pre> <p>which results in</p> <pre><code>The "" is invalid </code></pre> <p>I don't see where is the error nor do I see how the message helps me to find the problem. Any hints?</p>
<p>The correct path is <code>"path": "/spec/template/spec/containers/0/volumeMounts"</code>. The <code>template</code> key was missing from the pointer.</p>
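<p>For reference, the corrected command from the question would then look like the sketch below. Two caveats, both assumptions beyond what the question shows: since <code>volumeMounts</code> is a list, the <code>add</code> value should be a list (or target <code>.../volumeMounts/-</code> to append), and the pod spec also needs a matching <code>cacerts</code> entry under <code>/spec/template/spec/volumes</code> for the mount to actually work:</p> <pre><code>kubectl patch deployment spin-clouddriver -n spinnaker --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts", "value": [{"mountPath": "/etc/ssl/certs/java/cacerts", "subPath": "cacerts", "name": "cacerts"}]}]'
</code></pre>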
<p>I am trying to deploy PostgreSQL on AKS from the following Bitnami chart: <a href="https://github.com/bitnami/charts/tree/master/upstreamed/postgresql/#installing-the-chart" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/upstreamed/postgresql/#installing-the-chart</a></p> <p>During deployment, I have been invited to use the following command (once installed) to make sure I will be able to access postgres outside the cluster (e.g. from some local DBMS)</p> <pre><code>kubectl port-forward --namespace default svc/dozing-coral-postgresql 5432:5432 &amp;
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -p 5432
</code></pre> <p>This results in a syntax error:</p> <pre><code>At line:1 char:80
+ ... d --namespace default svc/dozing-coral-postgresql 5432:5432 &amp; PGPASSW ...
+                                                                 ~
The ampersand (&amp;) character is not allowed. The &amp; operator is reserved for future use; wrap an ampersand in double quotation marks ("&amp;") to pass it as part of a string.
    + CategoryInfo          : ParserError: (:) [], ParentContainsErrorRecordException
    + FullyQualifiedErrorId : AmpersandNotAllowed
</code></pre> <p>What would be the correct command to unlock the ports?</p>
<p>The output you should see is something like</p> <pre class="lang-sh prettyprint-override"><code>To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/jolly-raccoon-postgresql 5432:5432 &amp;
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -p 5432
</code></pre> <p>There are two different commands. The first one is to forward the ports, and the final <code>&amp;</code> is to send this command to the background so you are able to continue using the shell.</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl port-forward --namespace default svc/jolly-raccoon-postgresql 5432:5432 &amp;
[1] 62447
Forwarding from 127.0.0.1:5432 -&gt; 5432
</code></pre> <p>The second command allows you to connect to the database using the forwarded port from another host where you have installed the <code>psql</code> client</p> <pre><code>$ PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -p 5432
psql (11.3)
Type "help" for help.

postgres=#
</code></pre>
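<p>Worth adding for anyone hitting the same parser error: the question is being run in PowerShell, where a trailing <code>&amp;</code> is not a background operator (hence <code>AmpersandNotAllowed</code>). Either run the two commands in two separate terminals, or run them in a POSIX shell where <code>&amp;</code> behaves as the chart notes intend. A tiny runnable illustration of the Bash/sh behaviour, with <code>sleep</code> standing in for the long-running <code>kubectl port-forward</code> so the snippet works anywhere:</p>

```shell
# A trailing '&' starts the first command in the background;
# '$!' then holds its process ID.
sleep 1 &
pf_pid=$!
echo "port-forward stand-in running as PID $pf_pid"
wait "$pf_pid"
echo "tunnel closed"
```

<p>In PowerShell, a rough equivalent would be starting the port-forward with <code>Start-Job</code> and then running <code>psql</code> separately.</p>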
<p>I have defined a controller (Operator) for handling some Custom Resources in my K8S namespace. Each custom resource has a finalizer so the controller can handle it before it is deleted:</p> <p>e.g.</p> <pre><code>kind: MyCustom
metadata:
  finalizers:
  - MyCustom.finalizers.com
  name: mycustomResourceInstance
</code></pre> <p>This works well, until I delete the namespace (<code>kubectl delete ns</code>). If k8s garbage-collects the controller pod first, "mycustomResourceInstance" remains stuck in a deleting state and prevents successful namespace removal.</p> <p>The workaround is to edit mycustomResourceInstance and remove the finalizer.</p> <p>Is there any way to make sure the controller does not get deleted while any instances of the custom resource exist in the namespace?</p>
<p>You have to look into owner references and foreground cascading deletion (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/</a>) and implement them in your controller, so that the garbage collector deletes your objects in order.</p>
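<p>As a generic sketch of what that mechanism looks like on an object (the names, API group, and <code>uid</code> here are purely illustrative; the <code>uid</code> must be the actual UID of the owner object): an owner reference is set on the <em>dependent</em> and points at its owner, and with foreground cascading deletion the owner is not removed until dependents marked <code>blockOwnerDeletion: true</code> are gone:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: dependent-object
  ownerReferences:
  - apiVersion: example.com/v1        # illustrative API group
    kind: MyCustom
    name: mycustomResourceInstance
    uid: d9607e19-f88f-11e6-a518-42010a800195   # must be the owner's real UID
    controller: true
    blockOwnerDeletion: true
</code></pre>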
<p><strong>kubernetes</strong> <em>version</em>: <code>1.10.4</code></p> <p>In my project, I have an <code>initContainer</code> image and a common container image, and I want to update the <code>initContainer</code>'s image with zero downtime.</p> <p>But the <code>kubectl set image xxx</code> command does not work on an <code>initContainer</code>.</p> <p>I have read the documentation about rolling updates of container images, but found no information about <code>initContainer</code> images.</p> <p>Has anyone encountered this situation?</p>
<p>If you want to make a manual change I'd start with</p> <pre><code>kubectl edit deployment xxx
</code></pre> <p>For non-interactive operations it's probably easiest to use <code>kubectl patch</code>, like</p> <pre><code>kubectl patch deployment/xxx -p '{"spec": {"template": {"spec": {"initContainers":[{"name":"cinit", "image":"alpine:3.6"}]}}}}'
</code></pre>
<p>In my kubernetes cluster I have some running pods and a bunch more pods in "Completed" state. I use a query such as <code>kube_pod_container_resource_requests_cpu_cores{namespace="default"}</code> to get the cpu request of the pods in the default namespace. This gives me the cpu request of ALL pods. However, what I want is ONLY the cpu request of the pods in "Running" state. Any idea how to achieve this? Thanks</p>
<p>Please try with the following query:</p> <pre><code>kube_pod_container_resource_requests_cpu_cores{job=&quot;kube-state-metrics&quot;}
  * on (endpoint, instance, job, namespace, pod, service) group_left(phase)
    (kube_pod_status_phase{phase=~&quot;^(Pending|Running)$&quot;} == 1)
</code></pre> <p>which makes use of the <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/#vector-matching" rel="nofollow noreferrer">Prometheus matching operator</a> to select labels with a regex-match - here only Pods in Running or Pending state.</p>
<p>I am trying to autoscale my StatefulSet on Kubernetes. In order to do so, I need to get the current number of pods.</p> <p>When dealing with deployments:</p> <pre><code>kubectl describe deployments [deployment-name] | grep desired | awk '{print $2}' | head -n1 </code></pre> <p>This outputs a number, which is the amount of current deployments.</p> <p>However, when you run </p> <pre><code>kubectl describe statefulsets </code></pre> <p>We don't get back as much information. Any idea how I can get the current number of replicas of a stateful set?</p>
<pre><code> kubectl get sts web -n default -o=jsonpath='{.status.replicas}' </code></pre> <p>This also works with .status.readyReplicas and .status.currentReplicas</p> <p>From <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/apps/types.go" rel="noreferrer">github.com/kubernetes</a>:</p> <blockquote> <p>// replicas is the number of Pods created by the StatefulSet controller.<br> Replicas int32</p> <p>// readyReplicas is the number of Pods created by the StatefulSet controller that have a Ready Condition.<br> ReadyReplicas int32</p> <p>// currentReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version<br> // indicated by currentRevision.<br> CurrentReplicas int32</p> </blockquote>
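<p>As an illustration of how this can feed an autoscaling script (a hypothetical sketch: <code>kubectl</code> is stubbed with a shell function so the decision logic runs without a cluster, and the scale-up is only printed, not executed; drop the stub and the <code>echo</code> to use it for real):</p>

```shell
# Stub standing in for the real CLI: pretend .status.replicas is 3.
kubectl() { echo 3; }

current=$(kubectl get sts web -n default -o=jsonpath='{.status.replicas}')
max=5
if [ "$current" -lt "$max" ]; then
  desired=$((current + 1))
  echo "scaling up: kubectl scale sts web --replicas=$desired"
else
  echo "already at max ($max) replicas"
fi
```

<p>The same pattern works with <code>.status.readyReplicas</code> if you want to react to pods actually passing their readiness checks rather than just being created.</p>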
<p>On the aks-engine github there is an example for a custom image for a node as follows:</p> <pre><code>"agentPoolProfiles": [
  {
    "name": "agentpool1",
    "count": 3,
    "imageReference": {
      "name": "stretch",
      "resourceGroup": "debian"
    },
    "vmSize": "Standard_D2_v2",
    "availabilityProfile": "AvailabilitySet"
  }
]
</code></pre> <p>When I use this in my aks-engine generated ARM template, it can't find the resource group because I have not created it and have not uploaded a Debian VHD as an Image in Azure.</p> <p>Is there a way of using pre-made images on Azure instead of having to upload our own?</p>
<p>If by using pre-made images you mean creating a custom image without uploading your own VHD file to Azure, then the answer is yes. You can create the image from the base images that Azure provides in the marketplace, either through the <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/build-image-with-packer" rel="nofollow noreferrer">Packer</a> tool or by <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/linux/capture-image" rel="nofollow noreferrer">capturing the image from an Azure VM</a>.</p> <p>As you can see in the aks-engine custom image example, you need to create the custom image in the resource group first, and then reference both the resource group name and the image to create the agent pool. The description is in <a href="https://github.com/Azure/aks-engine/blob/master/docs/topics/clusterdefinitions.md#agentpoolprofiles" rel="nofollow noreferrer">agentPoolProfiles of aks-engine</a>.</p>
<p>Is there a way to do active/passive load balancing between 2 pods of a micro-service? Say I have 2 instances (pods) of a micro-service running, exposed using a K8s Service object. Is there a way to configure the load balancing in such a way that one pod will always get the requests, and when that pod is down, the other pod starts receiving them?</p> <p>I have an ingress object on top of that service as well.</p>
<p>This is what the Kubernetes Service object does, which you already mentioned you are using. Make sure you set up a readiness probe in your pod template so that the system can tell when your app is healthy.</p>
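<p>A readiness probe is declared on the container in the pod template. A sketch (the port, health path, and image are assumptions; point the probe at a real health endpoint of your service):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
      - name: app
        image: myrepo/my-microservice:latest   # assumed image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz        # assumed health endpoint
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
</code></pre> <p>While the probe fails, the pod is removed from the Service's endpoints, so traffic only reaches the healthy pod, which gives you the failover behaviour described in the question (though the Service load-balances across all ready pods rather than designating one as strictly "passive").</p>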
<p>I'm using <a href="http://manpages.courier-mta.org/htmlman5/slapd-mdb.5.html" rel="nofollow noreferrer">slapd-mdb</a> and I'm looking for a way to programmatically update "maxsize" parameter using ldapmodify or similar: <a href="http://manpages.courier-mta.org/htmlman5/slapd-mdb.5.html" rel="nofollow noreferrer">http://manpages.courier-mta.org/htmlman5/slapd-mdb.5.html</a></p> <p>My main problem is that I have a huge dataset and I need more space.</p> <p>Any suggestions on how to update OpenLDAP configuration programmatically would be appreciated.</p> <p>My environment is Kubernetes and I deployed OpenLDAP as a container.</p>
<p>The <a href="http://www.openldap.org/doc/admin24/quickstart.html" rel="nofollow noreferrer">"Quickstart"</a> section of the OpenLDAP documentation includes a mdb sample configuration:</p> <pre><code>dn: olcDatabase=mdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcMdbConfig
olcDatabase: mdb
OlcDbMaxSize: 1073741824
olcSuffix: dc=&lt;MY-DOMAIN&gt;,dc=&lt;COM&gt;
olcRootDN: cn=Manager,dc=&lt;MY-DOMAIN&gt;,dc=&lt;COM&gt;
olcRootPW: secret
olcDbDirectory: /usr/local/var/openldap-data
olcDbIndex: objectClass eq
</code></pre> <ul> <li><p>Replace the placeholders in <code>olcSuffix</code>, <code>olcRootDN</code> and <code>olcRootPW</code> with your values, and change the <code>OlcDbMaxSize</code> value to suit your requirement.</p></li> <li><p>Import your configuration database:</p></li> </ul> <pre><code>su root -c /usr/local/sbin/slapadd -n 0 -F /usr/local/etc/slapd.d -l /usr/local/etc/openldap/slapd.ldif
</code></pre> <ul> <li>Start SLAPD:</li> </ul> <pre><code>su root -c /usr/local/libexec/slapd -F /usr/local/etc/slapd.d
</code></pre>
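<p>To change the value programmatically on a running server, which is what the question asks for, you can also modify <code>cn=config</code> online with <code>ldapmodify</code> instead of re-importing. A sketch; the database RDN (<code>{1}mdb</code> here) and the use of the <code>EXTERNAL</code> SASL mechanism over the local <code>ldapi:///</code> socket are assumptions that depend on your setup:</p> <pre><code>dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcDbMaxSize
olcDbMaxSize: 10737418240
</code></pre> <p>applied with something like <code>ldapmodify -Y EXTERNAL -H ldapi:/// -f maxsize.ldif</code>. LMDB's map size can generally be raised on a live database, which suits the "huge dataset, need more space" case; shrinking it below the current data size is not supported.</p>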
<p>The application has multiple pipelines, each of which builds a docker image: multiple APIs and a react web application.</p> <p>The current helm chart is set up to have multiple services and deployments and one ingress controller.</p> <p>Doing it this way means that the whole product is in a single helm chart, which is good. However, it means that when it comes to CI/CD, if the tag changes for one of the APIs, we need to figure out which tag to use for the other images.</p> <p>I've thought about creating a Helm chart for each application, but then how would the ingress controller work? Would you have an ingress controller for each chart and let Kubernetes figure out which one to use based on the regex rules?</p> <p>There has to be a better structure for this sort of thing and I'm stuck. I've heard the term "Umbrella chart" but I'm not really sure what that means.</p>
<p>This really depends on how you want it to be: you can create a single chart for everything, or you can create a chart per application.</p> <p>If you create a chart per application, you just create a single ingress for each application and it merges them into a single ingress definition (kinda). This is how it looks for me:</p> <pre><code>metadata:
  name: service1
spec:
  rules:
  - host: service1.domain.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
        path: /
</code></pre> <p>and then for the second service:</p> <pre><code>metadata:
  name: service2
spec:
  rules:
  - host: service2.domain.com
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
        path: /
</code></pre> <p>and they would work together without colliding. If you are using path-based routing, you'd have to make sure the paths don't collide.</p>
<p>I am running a pod that has three containers, and I need to update the image of one of the containers without doing a rolling upgrade.</p> <p>How do I get the container image updated without touching/restarting the other two containers?</p>
<p>If you are asking yourself this question, maybe you should reconsider some things.</p> <p>As stated in the other comments/answers, a pod, once created, is one unit, whatever is inside it.</p> <p>If you ever need to scale some part of the pod and not the rest, or to update just a part without restarting the rest (a caching system, for example), you should look at taking that container out of your deployment and creating another, independent one.</p>
<p>So I've been struggling with this all afternoon. I can't at all get my NodeJS application running on kubernetes to connect to my MongoDB Atlas database.</p> <p>In my application I've tried running</p> <pre><code>mongoose.connect('mongodb+srv://admin:&lt;password&gt;@&lt;project&gt;.gcp.mongodb.net/project_prod?retryWrites=true&amp;w=majority', { useNewUrlParser: true })
</code></pre> <p>but I simply get the error</p> <pre><code>UnhandledPromiseRejectionWarning: Error: querySrv ETIMEOUT _mongodb._tcp.&lt;project&gt;.gcp.mongodb.net
    at QueryReqWrap.onresolve [as oncomplete] (dns.js:196:19)
(node:32) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 17)
</code></pre> <p>I've tried setting up an ExternalName service too, but using either the URL or the ExternalName results in me not being able to connect to the database.</p> <p>I've whitelisted my IP on MongoDB Atlas, so I know that isn't the issue.</p> <p>It also seems to work on my local machine, but not in the kubernetes pod. What am I doing wrong?</p>
<p>I figured out the issue, my pod DNS was not configured to allow external connections, so I set <code>dnsPolicy: Default</code> in my YML, because oddly enough <code>Default</code> is not actually the default value</p>
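<p>For anyone landing here, the relevant bit of the pod template looks like this (a sketch; the image is a placeholder). <code>ClusterFirst</code> is the actual default policy, while <code>Default</code> makes the pod inherit the node's DNS configuration, which is what allowed the external SRV lookup (<code>mongodb+srv://</code>) to resolve:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      dnsPolicy: Default   # inherit the node's resolv.conf
      containers:
      - name: node-app
        image: myrepo/node-app:latest   # assumed image
</code></pre> <p>Note the trade-off: with <code>dnsPolicy: Default</code> the pod no longer resolves in-cluster service names through the cluster DNS, so only use it for pods that don't need to reach other Services by name.</p>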
<p>I am using Traefik as the ingress controller for my Kubernetes setup. I decided to run some performance tests for my application, but I noticed a huge difference when I sent the requests through Traefik.</p> <p>The test consists of sending 10K requests in parallel; the application returns the compiled result, and based on its logs it needs around 5 milliseconds to process one request. The results of the performance test are below:</p> <ul> <li>Native application: Execution time in milliseconds: 61062</li> <li>Application on Kubernetes (without going through Traefik and just using its IP): Execution time in milliseconds: 62337</li> <li>Application on Kubernetes and using Traefik: Execution time in milliseconds: 159499</li> </ul> <p>My question is why this huge difference exists and whether there is a way to reduce it (other than adding more replicas).</p> <p>I am using these yaml files for setting up Traefik:</p> <pre><code>https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-rbac.yaml
https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-ds.yaml
</code></pre>
<p>I tried Ambassador as my API gateway in kubernetes and its result was much better than Traefik and very close to using the IP of the container (63394 milliseconds). Obviously, Traefik is not as good as people think.</p>
<p>I'm setting up minikube using VirtualBox as the VM driver, using a <strong>NAT adapter</strong> and a <strong>host-only adapter</strong>. After the VM is created, I run a few pods, one of which is Kafka (the messaging queue). I have issues using Kafka properly because the VM creates 2 network interfaces: <code>eth0</code>, which points to <code>10.0.2.15</code>, and <code>eth1</code>, which points to <code>192.168.99.100</code>, the IP the host-only adapter is set up with.</p> <p>I'm running this on a Mac, so I've tried using HyperKit instead, which seems to work differently. When using HyperKit I get one interface, <code>eth0</code>, which points to 192.168.99.100, and everything works just fine.</p> <p><strong>Why does VirtualBox create 2 interfaces, i.e. <code>eth0</code> and <code>eth1</code>?</strong></p> <pre><code>|           Host           |    |          VM           |
|--------------------------|    |-----------------------|
| vboxnet0 (192.168.99.100)|    | eth0 (10.0.2.15)      | &lt;--- why is this created?
| ...                      |    | eth1 (192.168.99.100) |
</code></pre> <p>As a side note, Kafka is using <code>PLAINTEXT://:9092</code> in the listeners setting, which makes it start the server using <code>eth0</code> and, as a result, <code>10.0.2.15</code>. This IP is later advertised to any consumer connecting to it. The IP seems to be accessible only within the VM, which makes it impossible to connect from the outside, e.g. from the host. To be precise, a consumer connects to Kafka, then Kafka sends the advertised listeners, i.e. <code>10.0.2.15</code>, and then the consumer cannot send any messages, because it tries to connect to <code>10.0.2.15</code>.</p>
<blockquote> <p>| eth0 (10.0.2.15) | &lt;--- why is this created?</p> </blockquote> <p>Basically, VirtualBox needs 2 interfaces because it uses the <code>eth0</code> interface to talk to the outside world using NAT, and the other interface, <code>eth1</code> (attached to the host's <code>vboxnet0</code> host-only network), on the <code>192.168.99.x</code> net so that your machine can talk to the VM. <code>192.168.99.100</code> is used by <code>minikube ssh</code>. You can try it directly by running <code>ssh -i private_key docker@192.168.99.100</code>, getting the private key from <code>minikube ssh-key</code>.</p> <p><a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">HyperKit</a> doesn't need two interfaces because, as you noticed, it has <code>192.168.99.100</code>, which minikube also uses to connect through ssh. HyperKit VMs don't necessarily need an interface to connect to the outside. For that, it generally uses a different mechanism based on <a href="https://github.com/moby/vpnkit" rel="nofollow noreferrer">VPNKit</a>.</p> <p>My suggestion is to use the <a href="http://kafka.apache.org/090/documentation.html#brokerconfigs" rel="nofollow noreferrer"><code>advertised.host.name</code></a> option in your Kafka configs. Another alternative is to upstream an option in <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">minikube</a> that allows you to change the order of the ethernet interfaces on VirtualBox, but that would be more work.</p>
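<p>Regarding the Kafka side note: the usual fix is to bind on all interfaces but advertise the reachable address, e.g. something like this in the broker's <code>server.properties</code> (a sketch; the host-only IP and port are taken from the question and may differ in your setup):</p> <pre><code>listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.99.100:9092
</code></pre> <p>With this, the broker accepts connections on every interface, while consumers are told to connect back on the host-only address instead of <code>10.0.2.15</code>.</p>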
<p>I'm using minikube for kubernetes deployment and I am getting this error:</p> <blockquote> <p>Failed to pull image "libsynadmin/libsynmp:core-api-kubernetes-0.9.8.1": rpc error: code = Unknown desc = Error response from daemon: Get <a href="https://registry-1.docker.io/v2/" rel="nofollow noreferrer">https://registry-1.docker.io/v2/</a>: net/http: TLS handshake timeout ?</p> </blockquote>
<p>I think you need to create a Docker registry cache to pull images from, since the error indicates a slow internet connection.</p> <p>Read more at <a href="https://docs.docker.com/registry/recipes/mirror/" rel="nofollow noreferrer">Docker</a>.</p>
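<p>A sketch of how that can look with minikube (the mirror URL is a placeholder — point it at a registry mirror you run or trust; the flag only takes effect when the VM is created):</p>

```shell
# Recreate the minikube VM so its Docker daemon pulls through a mirror
minikube delete
minikube start --registry-mirror=https://your-mirror.example.com
```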
<p>I have installed a Kubernetes cluster using <a href="https://medium.com/@Alibaba_Cloud/how-to-install-and-deploy-kubernetes-on-ubuntu-16-04-6769fd1646db" rel="nofollow noreferrer">this tutorial on Ubuntu 16</a>. Everything works, but I need to change the proxy mode (to ipvs) and I don't know how I can change the kube-proxy mode using kubectl or something else.</p>
<p>kubectl is more for managing Kubernetes workloads. You need to modify the control plane itself. Since you created your cluster with kubeadm, you can use that to enable ipvs. You'd add this to your config file for <code>kubeadm init</code>:</p> <pre><code>... kubeProxy: config: featureGates: SupportIPVSProxyMode: true mode: ipvs ... </code></pre> <p>Here's an article from <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md#cluster-created-by-kubeadm" rel="nofollow noreferrer">github.com/kubernetes</a> with more detailed instructions. Depending on your Kubernetes version, you can pass it as a flag to <code>kubeadm init</code> instead of using the above configuration.</p> <p>Edit: Here's a link on how to use kubeadm to edit an existing cluster: <a href="https://stackoverflow.com/questions/49810966/how-to-use-kubeadm-upgrade-to-change-some-features-in-kubeadm-config">How to use kubeadm upgrade to change some features in kubeadm-config</a></p>
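<p>If the cluster is already running, a common approach is to change the mode in the kube-proxy ConfigMap instead (a sketch — this assumes the kubeadm defaults, i.e. kube-proxy running as a DaemonSet in <code>kube-system</code>, and requires the IPVS kernel modules on every node):</p>

```shell
# 1. Edit the kube-proxy ConfigMap and set `mode: "ipvs"`
kubectl -n kube-system edit configmap kube-proxy

# 2. Restart the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pods -l k8s-app=kube-proxy

# 3. Check the logs to confirm IPVS is in use
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
```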
<p><a href="https://i.stack.imgur.com/NgtfV.png" rel="nofollow noreferrer">Click here to get error screen</a></p> <p>I am deploying MongoDb to Azure AKS with Azure File Share as Volume (using persistent volume &amp; persistent volume claim). If I increase replicas to more than one, CrashLoopBackOff occurs. Only one Pod gets created; the others fail.</p> <p><strong>My Dockerfile to create the MongoDb image.</strong></p> <pre><code>FROM ubuntu:16.04 RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list RUN apt-get update &amp;&amp; apt-get install -y mongodb-org EXPOSE 27017 ENTRYPOINT ["/usr/bin/mongod"] </code></pre> <p><strong>YAML file for Deployment</strong> </p> <pre><code>kind: Deployment apiVersion: extensions/v1beta1 metadata: name: mongo labels: name: mongo spec: replicas: 3 template: metadata: labels: app: mongo spec: containers: - name: mongo image: &lt;my image of mongodb&gt; ports: - containerPort: 27017 protocol: TCP name: mongo volumeMounts: - mountPath: /data/db name: az-files-mongo-storage volumes: - name: az-files-mongo-storage persistentVolumeClaim: claimName: mong-pvc --- apiVersion: v1 kind: Service metadata: name: mongo spec: ports: - port: 27017 targetPort: 27017 selector: app: mongo </code></pre>
<p>For your issue, you can take a look at another <a href="https://stackoverflow.com/questions/44497009/mongod-error-98-unable-to-lock-file-data-db-mongod-lock-resource-temporarily/44498179">issue</a> for the same error. It seems you cannot initialize the same volume for mongo when another Pod has already done it. From the error, I suggest you use the volume only to store the data; you can do the initialization in the Dockerfile when creating the image. Or you can create volumes for every pod through <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#statefulsets" rel="nofollow noreferrer">StatefulSets</a>, which is the recommended approach.</p> <p><strong>Update:</strong></p> <p>The YAML file below will work for you:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mongo spec: ports: - port: 27017 targetPort: 27017 selector: app: mongo --- apiVersion: apps/v1 kind: StatefulSet metadata: name: mongo spec: selector: matchLabels: app: mongo serviceName: mongo replicas: 3 template: metadata: labels: app: mongo spec: terminationGracePeriodSeconds: 10 containers: - name: mongo image: charlesacr.azurecr.io/mongodb:v1 ports: - containerPort: 27017 name: mongo volumeMounts: - name: az-files-mongo-storage mountPath: /data/db volumeClaimTemplates: - metadata: name: az-files-mongo-storage spec: accessModes: - ReadWriteOnce storageClassName: az-files-mongo-storage resources: requests: storage: 5Gi </code></pre> <p>And you need to create the StorageClass before you create the StatefulSet. 
The YAML file below:</p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: az-files-mongo-storage provisioner: kubernetes.io/azure-file mountOptions: - dir_mode=0777 - file_mode=0777 - uid=1000 - gid=1000 parameters: skuName: Standard_LRS </code></pre> <p>Then the pods work well, as the screenshot below shows:</p> <p><a href="https://i.stack.imgur.com/794s8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/794s8.png" alt="enter image description here"></a></p>
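<p>After applying it all, you can check that the StatefulSet created one claim per replica (a sketch — claim names follow the <code>&lt;template&gt;-&lt;statefulset&gt;-&lt;ordinal&gt;</code> pattern):</p>

```shell
# Expect one bound claim per replica:
# az-files-mongo-storage-mongo-0, -1 and -2
kubectl get pvc
# And all three pods running:
kubectl get pods -l app=mongo
```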
<p>When Kubernetes creates secrets, do they encrypt the given user name and password with certificate?</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: username: YWRtaW4= password: MWYyZDFlMmU2N2Rm </code></pre>
<p>It depends, but yes - they can be encrypted at rest. Secrets are stored in etcd (the database used to store all Kubernetes objects), and you can enable a Key Management System that will be used to encrypt them. You can find all the relevant details in the <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="nofollow noreferrer">documentation</a>.</p> <p>Please note that this does not protect the manifest files, which are not encrypted. Secrets are only encrypted in etcd; when getting them with kubectl or the API you will get them decrypted.</p> <p>If you also wish to encrypt the manifest files, there are multiple good solutions for that, like Sealed Secrets, Helm Secrets or Kamus. You can read more about them on my <a href="https://blog.solutotlv.com/can-kubernetes-keep-a-secret/" rel="nofollow noreferrer">blog post</a>.</p>
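<p>Worth stressing for the manifest in the question: the base64 values under <code>data</code> are an encoding, not encryption, and anyone who can read the manifest can reverse them:</p>

```shell
# A Secret's data fields are only base64-encoded, not encrypted
echo 'YWRtaW4=' | base64 -d          # prints: admin
echo 'MWYyZDFlMmU2N2Rm' | base64 -d  # prints: 1f2d1e2e67df
```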
<p>Here's my environment, I have k8s cluster and some physical machines outside k8s cluster. Now I create a pod in k8s, and this pod will act like a master to create some processes in these physical machines outside k8s cluster. And I need to establish rpc connection between the k8s pod and these external processes. I don't want to use k8s service here. So what kind of other approach I can use to connect a pod in k8s from external world. </p>
<p>You would need to set up your CNI networking in such a way that pod IPs are routable from outside the cluster. How you do this depends on your CNI plugin and your existing network design. You could also use a VPN into the cluster network in some cases.</p>
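<p>One way this can look in practice (a sketch only — the pod CIDR, node IP and whether static routes work at all are assumptions that depend entirely on your CNI plugin and network design):</p>

```shell
# On an external machine, send traffic for the cluster's pod CIDR
# through a node that can route it (example values, not yours):
ip route add 10.244.0.0/16 via 192.168.1.10
# After that, the external process can dial a pod IP directly for RPC,
# e.g. a gRPC endpoint at 10.244.1.23:50051
```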
<p>I'm trying to install Redis cluster (StatefulSet) out of GKE and when getting pvc I've got </p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 10s persistentvolume-controller Failed to provision volume with StorageClass "slow": Failed to get GCE GCECloudProvider with error &lt;nil&gt; </code></pre> <p>Already added "--cloud-provider=gce" on files /etc/kubernetes/manifests/kube-controller-manager.yaml and sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml. Restarted but still the same. Can anyone help me please? What's the trick for making k8s work on GCP?</p> <p>My manifest taken from <a href="https://stackoverflow.com/questions/50100219/kubernetes-failed-to-get-gce-gcecloudprovider-with-error-nil">here</a>:</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: redis-cluster labels: app: redis-cluster data: fix-ip.sh: | #!/bin/sh CLUSTER_CONFIG="/data/nodes.conf" if [ -f ${CLUSTER_CONFIG} ]; then if [ -z "${POD_IP}" ]; then echo "Unable to determine Pod IP address!" 
exit 1 fi echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}" sed -i.bak -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG} fi exec "$@" redis.conf: |+ cluster-enabled yes cluster-require-full-coverage no cluster-node-timeout 15000 cluster-config-file /data/nodes.conf cluster-migration-barrier 1 appendonly yes protected-mode no --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/gce-pd parameters: type: pd-standard replication-type: none zone: "us-west2-a" reclaimPolicy: Retain --- apiVersion: v1 kind: Service metadata: name: redis-cluster labels: app: redis-cluster spec: ports: - port: 6379 targetPort: 6379 name: client - port: 16379 targetPort: 16379 name: gossip clusterIP: None selector: app: redis-cluster --- apiVersion: apps/v1 kind: StatefulSet metadata: name: redis-cluster labels: app: redis-cluster spec: serviceName: redis-cluster replicas: 5 selector: matchLabels: app: redis-cluster template: metadata: labels: app: redis-cluster spec: containers: - name: redis image: redis:5.0-rc ports: - containerPort: 6379 name: client - containerPort: 16379 name: gossip command: ["/conf/fix-ip.sh", "redis-server", "/conf/redis.conf"] args: - --cluster-announce-ip - "$(POD_IP)" readinessProbe: exec: command: - sh - -c - "redis-cli -h $(hostname) ping" initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: exec: command: - sh - -c - "redis-cli -h $(hostname) ping" initialDelaySeconds: 20 periodSeconds: 3 env: - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP volumeMounts: - name: conf mountPath: /conf readOnly: false - name: data mountPath: /data readOnly: false volumes: - name: conf configMap: name: redis-cluster defaultMode: 0755 volumeClaimTemplates: - metadata: name: data labels: name: redis-cluster spec: accessModes: [ "ReadWriteOnce" ] storageClassName: slow resources: requests: storage: 5Gi </code></pre>
<p>Please verify your "StorageClass: slow"; it seems there is an indentation problem (starting with reclaimPolicy). </p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/gce-pd parameters: type: pd-standard replication-type: none zone: "us-west2-a" reclaimPolicy: Retain </code></pre> <p><strong>Update:</strong></p> <ol> <li><p>Please add <code>--cloud-provider=gce</code> into: <a href="https://stackoverflow.com/questions/50100219/kubernetes-failed-to-get-gce-gcecloudprovider-with-error-nil"><strong>kube-apiserver.yaml, kube-controller-manager.yaml, KUBELET_KUBECONFIG_ARGS</strong></a>. You can also enable <strong>enable-admission-plugins=<a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">DefaultStorageClass</a></strong>.</p></li> <li><p>Verify the <strong>"Cloud API access scopes"</strong> permissions in your <strong>"VM instance details"</strong>.</p></li> <li><p>Verify that your StorageClass, PV and PVC are working properly.</p></li> </ol> <hr> <pre><code> kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: slow annotations: storageclass.beta.kubernetes.io/is-default-class: "true" provisioner: kubernetes.io/gce-pd parameters: type: pd-standard --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-test spec: accessModes: - ReadOnlyMany storageClassName: slow resources: requests: storage: 1Gi </code></pre>
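<p>For point 3, a quick way to check (a sketch, assuming the manifests above were applied):</p>

```shell
# Is the class there, and did the claim bind?
kubectl get storageclass slow
kubectl get pvc pvc-test
# Provisioning errors (like the GCECloudProvider one) show up in events:
kubectl describe pvc pvc-test
```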