<p>I'm refactoring a helm chart, and wanted to put some values from <code>deployment.yaml</code> to <code>values.yaml</code> and that value is</p> <pre><code>hosts: - {{ include &quot;myApp.externalHostName&quot; . | quote }} </code></pre> <p>but it gives me the error</p> <pre><code>[ERROR] values.yaml: unable to parse YAML: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{&quot;toJson include \&quot;myApp.externalHostName\&quot; . | quote&quot;:interface {}(nil)} [ERROR] templates/: cannot load values.yaml: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{&quot;toJson include \&quot;myApp.externalHostName\&quot; . | quote&quot;:interface {}(nil)} </code></pre> <p>it would work if I just used</p> <pre><code>hosts: - myExternalHostname.something </code></pre> <p>but is it possible to run include in values.yaml?</p>
<p>The <code>values.yaml</code> files are not subject to golang interpolation. If you need dynamic content, you'll need to update files inside the <code>templates</code> directory (which are subject to golang interpolation), or generate the <code>values.yaml</code> content using another mechanism</p> <p>In this specific case, you may find yaml anchors to be helpful:</p> <pre class="lang-yaml prettyprint-override"><code>myApp: externalHostName: &amp;externalHostName myapp.example.com theIngressOrWhatever: hosts: - *externalHostName </code></pre>
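If the value really must come from the named template, another hedged workaround (field name `externalHostNameTpl` is made up for illustration) is to store the `include` call as a plain string in `values.yaml` and render it inside a template with Helm's `tpl` function:

```yaml
# values.yaml -- stored as a quoted string, so YAML parsing succeeds:
#   externalHostNameTpl: '{{ include "myApp.externalHostName" . }}'

# templates/ingress.yaml -- tpl renders that string with the chart context:
hosts:
  - {{ tpl .Values.externalHostNameTpl . | quote }}
```

This keeps `values.yaml` valid YAML while still letting one template decide the final value.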
<p>According to <a href="https://developers.redhat.com/blog/2020/05/20/getting-started-with-the-fabric8-kubernetes-java-client" rel="nofollow noreferrer">this</a>, I can specify the system property <em>KUBECONFIG</em> to tell DefaultKubernetesClient to use the specified kubeconfig file instead of the default <code>~/.kube/config</code> file. <br>How do I do that when I'm using fabric8io? <br>Thanks in advance.</p>
<p>It looks like there is a typo in the blog post. If I look at the Fabric8 code in <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Config.java#L113" rel="nofollow noreferrer">Config.java</a>, the property name is <code>kubeconfig</code>, not <code>KUBECONFIG</code>.</p> <p>I moved my <code>.kube/config</code> file to <code>/tmp/mykubeconfig</code> and then tested with the following code on <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">minikube</a>; it seemed to work okay:</p> <pre class="lang-java prettyprint-override"><code>System.setProperty(&quot;kubeconfig&quot;, &quot;/tmp/mykubeconfig&quot;); try (KubernetesClient kubernetesClient = new DefaultKubernetesClient()) { kubernetesClient.pods().inAnyNamespace().list().getItems().stream() .map(Pod::getMetadata) .map(ObjectMeta::getName) .forEach(System.out::println); } </code></pre>
<p>I have this command in a bash script</p> <pre><code>kubectl get svc --selector='app.kubernetes.io/component=sentinel' --all-namespaces -o json | jq -r ' .items | map( {label:.metadata.name, sentinels: [{host:(.metadata.name + &quot;.&quot; + .metadata.namespace + &quot;.&quot; + &quot;svc&quot; + &quot;.&quot; + &quot;cluster&quot; + &quot;.&quot; + &quot;local&quot;),port: .spec.ports[0].port}], sentinelName:&quot;mymaster&quot;, sentinelPassword: &quot;&quot;, dbIndex: 0 }) | {connections: . }' &gt; local.json </code></pre> <p>which produces something like this output</p> <pre class="lang-json prettyprint-override"><code>{ &quot;connections&quot;: [ { &quot;label&quot;: &quot;Redis1&quot;, &quot;sentinels&quot;: [ { &quot;host&quot;: &quot;Redis1.default.svc.cluster.local&quot;, &quot;port&quot;: 26379 } ], &quot;sentinelName&quot;: &quot;mymaster&quot;, &quot;sentinelPassword&quot;: &quot;&quot;, &quot;dbIndex&quot;: 0 }, { &quot;label&quot;: &quot;Redis2&quot;, &quot;sentinels&quot;: [ { &quot;host&quot;: &quot;Redis2.development.svc.cluster.local&quot;, &quot;port&quot;: 26379 } ], &quot;sentinelName&quot;: &quot;mymaster&quot;, &quot;sentinelPassword&quot;: &quot;&quot;, &quot;dbIndex&quot;: 0 } ] } </code></pre> <p>This config file is injected into the container via an init-container so Redis-Commander fetches the redis instances without the user having to manually input any connection config data. This works fine; however, one of the instances requires a <code>sentinelPassword</code> value.</p> <p>I can fetch the password using <code>kubectl get secret</code> but I'm trying to figure out how to insert that password into the config file for the particular instance that requires it.</p> <p>I've been trying something along the lines of this but getting my jq syntax wrong. 
Any help or alternative ways of going around would be appreciated.</p> <pre class="lang-sh prettyprint-override"><code>#store output in var JSON=$(kubectl get svc --selector='app.kubernetes.io/component=sentinel' --all-namespaces -o json | jq -r ' .items | map( {label:.metadata.name, sentinels: [{host:(.metadata.name + &quot;.&quot; + .metadata.namespace + &quot;.&quot; + &quot;svc&quot; + &quot;.&quot; + &quot;cluster&quot; + &quot;.&quot; + &quot;local&quot;),port: .spec.ports[0].port}], sentinelName:&quot;mymaster&quot;, sentinelPassword: &quot;&quot;, dbIndex: 0 }) | {connections: . }') # Find instance by their host value which is unique. (Can't figure out how to do this bit) if $JSON host name contains &quot;Redis2.development.svc.cluster.local&quot; #then do something like this &quot;$JSON&quot; | jq '.[]'| .sentinelPassword = &quot;$password&quot; #var stored from kubectl get secret cmd #save output to file &quot;$JSON&quot;&gt;/local.json </code></pre>
<p>Assuming for the moment that you want to invoke jq on the sample JSON (local.json) as shown, you could run:</p> <pre><code>password=mypassword &lt; local.json jq --arg password &quot;$password&quot; ' .connections[] |= if any(.sentinels[].host; index(&quot;Redis2.development.svc.cluster.local&quot;)) then .sentinelPassword = $password else . end' </code></pre> <p>However, if possible, it would probably be better to invoke jq just once.</p>
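The same selective update can be prototyped outside of jq. Below is a minimal Python sketch of the identical logic (sample data and the password value are assumptions taken from the question), which can help verify the expected result before wiring the jq filter into the script:

```python
import json

def set_sentinel_password(config, target_host, password):
    """Set sentinelPassword on every connection whose sentinel list
    contains target_host; other connections are left untouched."""
    for conn in config["connections"]:
        if any(s["host"] == target_host for s in conn["sentinels"]):
            conn["sentinelPassword"] = password
    return config

# Sample structure matching the local.json shown in the question
config = {
    "connections": [
        {"label": "Redis1",
         "sentinels": [{"host": "Redis1.default.svc.cluster.local", "port": 26379}],
         "sentinelName": "mymaster", "sentinelPassword": "", "dbIndex": 0},
        {"label": "Redis2",
         "sentinels": [{"host": "Redis2.development.svc.cluster.local", "port": 26379}],
         "sentinelName": "mymaster", "sentinelPassword": "", "dbIndex": 0},
    ]
}

result = set_sentinel_password(
    config, "Redis2.development.svc.cluster.local", "s3cret")
print(json.dumps(result, indent=2))
```

Only the `Redis2` connection ends up with the password set, mirroring the `if any(...) then ... else . end` branch of the jq filter.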
<p>I have a Kubernetes cluster and I need to collect the various pod and node event timestamps.</p> <p>I do that by building a go service that communicates with my Kubernetes cluster via client-go library. The timestamp I get for the subscribed pod and node object shows time only till seconds precision.</p> <p>Is there a way one can get time in milliseconds precision level? I found a <a href="https://github.com/kubernetes/kubernetes/issues/81026" rel="nofollow noreferrer">similar issue raised</a> but there is no resolution of that.</p> <p>Could someone help me in this?</p>
<p>Welcome to the community @shresthi-garg</p> <p>First of all, as you correctly found, it's not possible to get timestamps with millisecond precision from the kubernetes components themselves. And <a href="https://github.com/kubernetes/kubernetes/issues/81026#issuecomment-832301082" rel="nofollow noreferrer">this github issue</a> is closed for now.</p> <p>However it's still possible to find some exact timings for containers and other events. Below is an example related to a container.</p> <p><strong>Option 1</strong> - kubelet by default writes a significant amount of logs to syslog. It's possible to view them using <code>journalctl</code> (note: this approach works on <code>systemd</code> systems. For other systems please refer to the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs" rel="nofollow noreferrer">official kubernetes documentation</a>). Example of the command:</p> <p><code>journalctl -u kubelet -o short-precise</code></p> <p>-u - filter by unit</p> <p>-o - output options</p> <p>The line from the output we're looking for will be:</p> <pre><code>May 18 21:00:30.221950 control-plane kubelet[8576]: I0518 21:00:30.221566 8576 scope.go:111] &quot;RemoveContainer&quot; containerID=&quot;d7d0403807684ddd4d2597d32b90b1e27d31f082d22cededde26f6da8281cd92&quot; </code></pre> <p><strong>Option 2</strong> - get this information from the containerisation engine. In the example below I used Docker for this. 
I run this command:</p> <p><code>docker inspect container_id/container_name</code></p> <p>Output will be like:</p> <pre><code>{ &quot;Id&quot;: &quot;d7d0403807684ddd4d2597d32b90b1e27d31f082d22cededde26f6da8281cd92&quot;, &quot;Created&quot;: &quot;2021-05-18T21:00:07.388569335Z&quot;, &quot;Path&quot;: &quot;/docker-entrypoint.sh&quot;, &quot;Args&quot;: [ &quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot; ], &quot;State&quot;: { &quot;Status&quot;: &quot;running&quot;, &quot;Running&quot;: true, &quot;Paused&quot;: false, &quot;Restarting&quot;: false, &quot;OOMKilled&quot;: false, &quot;Dead&quot;: false, &quot;Pid&quot;: 8478, &quot;ExitCode&quot;: 0, &quot;Error&quot;: &quot;&quot;, &quot;StartedAt&quot;: &quot;2021-05-18T21:00:07.593216613Z&quot;, &quot;FinishedAt&quot;: &quot;0001-01-01T00:00:00Z&quot; } </code></pre>
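For Option 2, if only the timestamps are needed, `docker inspect` can extract them directly with its `--format` flag (a Go template); the container name here is a placeholder:

```shell
docker inspect --format '{{ .Created }} / {{ .State.StartedAt }}' container_id_or_name
```

This avoids scanning the full JSON output by hand.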
<p>Here is the output I am getting:</p> <pre><code> [root@ip-10-0-3-103 ec2-user]# kubectl get pod --namespace=migration NAME READY STATUS RESTARTS AGE clear-nginx-deployment-cc77649fb-j8mzj 0/1 Pending 0 118m clear-nginx-deployment-temp-cc77649fb-hxst2 0/1 Pending 0 41s </code></pre> <p>Could not understand the message shown in json:</p> <pre><code>*&quot;status&quot;: { &quot;conditions&quot;: [ { &quot;message&quot;: &quot;0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.&quot;, &quot;reason&quot;: &quot;Unschedulable&quot;, &quot;status&quot;: &quot;False&quot;, &quot;type&quot;: &quot;PodScheduled&quot; } ], &quot;phase&quot;: &quot;Pending&quot;, &quot;qosClass&quot;: &quot;BestEffort&quot; }* </code></pre> <p>If you could please help to get through this. The earlier question on stackoverflow doesn't answer my query as my message output is different.</p>
<p>This is due to the fact that your Pods have been instructed to claim storage, however, in your case there is no PersistentVolume available to satisfy that claim. Check your Pods with <code>kubectl get pods &lt;pod-name&gt; -o yaml</code> and look at the exact yaml that has been applied to the cluster. In there you should be able to see that the Pod is trying to claim a PersistentVolume (PV).</p> <p>To quickly create a PV backed by a <code>hostPath</code> apply the following yaml:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: stackoverflow-hostpath namespace: migration spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/mnt/data&quot; </code></pre> <p>Kubernetes will retry scheduling the Pod with exponential backoff; to speed things up delete one of your pods (<code>kubectl delete pods &lt;pod-name&gt;</code>) to reschedule it immediately.</p>
<p>Argo CD shows two items from linkerd (installed by Helm) are being out of sync. The warnings are caused by the optional <code>preserveUnknownFields: false</code> in the <code>spec</code> section:</p> <p>trafficsplits.split.smi-spec.io <a href="https://i.stack.imgur.com/nkS1p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nkS1p.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/BpqRD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BpqRD.png" alt="enter image description here" /></a></p> <p>serviceprofiles.linkerd.io</p> <p><a href="https://i.stack.imgur.com/uxkcL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uxkcL.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/0M2fG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0M2fG.png" alt="enter image description here" /></a></p> <p>But I'm not able to figure out how to ignore the difference using <code>ignoreDifferences</code> in the <code>Application</code> manifest. The <code>/spec/preserveUnknownFields</code> json path isn't working. Is it because the field preserveUnknownFields is not present in the left version?</p> <pre><code> apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: linkerd namespace: argocd spec: destination: namespace: linkerd server: https://kubernetes.default.svc project: default source: chart: linkerd2 repoURL: https://helm.linkerd.io/stable targetRevision: 2.10.1 syncPolicy: automated: {} ignoreDifferences: - group: apiextensions.k8s.io/v1 name: trafficsplits.split.smi-spec.io kind: CustomResourceDefinition jsonPointers: - /spec/preserveUnknownFields - group: apiextensions.k8s.io/v1 name: trafficsplits.split.smi-spec.io kind: CustomResourceDefinition jsonPointers: - /spec/preserveUnknownFields </code></pre>
<p>As per the <a href="https://argoproj.github.io/argo-cd/user-guide/diffing/#application-level-configuration" rel="noreferrer">documentation</a>, the <code>group</code> field expects just the API group, not a group/version pair, so I think you have to use <code>apiextensions.k8s.io</code>, not <code>apiextensions.k8s.io/v1</code>.</p>
<p>I am a newbie to Kubernetes and trying to set up a cluster that runs Cassandra in my local machine. I have used kind to create a cluster that was successful. After that when I try to run <em><code>kubectl cluster-info</code></em>, I am getting the below error:</p> <p><em><code>Unable to connect to the server: dial tcp 127.0.0.1:45451: connectex: No connection could be made because the target machine actively refused it.</code></em></p> <p>On <em><code>docker container ls</code></em>, I could see the control-plane running on the container using the port as below:</p> <pre class="lang-sh prettyprint-override"><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1c4b6101b8ff kindest/node:v1.18.2 &quot;/usr/local/bin/entr&quot; 3 hours ago Up 2 hours 127.0.0.1:45451-&gt;6443/tcp kind-cassandra-control-plane 625fee22e0e6 kindest/node:v1.18.2 &quot;/usr/local/bin/entr&quot; 3 hours ago Up 2 hours kind-cassandra-worker </code></pre> <p>Am able to view the config file by executing <em><code>kubectl config view</code></em> as below, which confirms the kubectl is able to read the correct config file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 clusters: - cluster: certificate-authority-data: DATA+OMITTED server: https://127.0.0.1:45451 name: kind-kind-cassandra contexts: - context: cluster: kind-kind-cassandra user: kind-kind-cassandra name: kind-kind-cassandra current-context: kind-kind-cassandra kind: Config preferences: {} users: - name: kind-kind-cassandra user: client-certificate-data: REDACTED client-key-data: REDACTED </code></pre> <p><strong>UPDATE:</strong></p> <p>When I run <em><code>netstat</code></em>, I could see below as the active connections on 127.0.0.1</p> <pre><code> TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 
127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:2869 TCP 127.0.0.1:5354 TCP 127.0.0.1:5354 TCP 127.0.0.1:27015 TCP 127.0.0.1:49157 TCP 127.0.0.1:49158 TCP 127.0.0.1:49174 </code></pre> <p>Any help is really appreciated. TIA</p>
<p>Try using this command</p> <pre><code>minikube start --driver=docker </code></pre>
<p>Currently in my <code>kubernetes-nodes</code> job in Prometheus, The endpoint <code>/api/v1/nodes/gk3-&lt;cluster name&gt;-default-pool-&lt;something arbitrary&gt;/proxy/metrics</code> is being scraped</p> <p>But the thing is I'm getting a 403 error which says <code>GKEAutopilot authz: cluster scoped resource &quot;nodes/proxy&quot; is managed and access is denied</code> when I try it manually on postman</p> <p>How do I get around this on GKE Autopilot?</p>
<p>While the Autopilot docs don't mention the node proxy API specifically, this is in the limitations section:</p> <blockquote> <p>Most external monitoring tools require access that is restricted. Solutions from several Google Cloud partners are available for use on Autopilot, however not all are supported, and custom monitoring tools cannot be installed on Autopilot clusters.</p> </blockquote> <p>Given that port-forward and all other node-level access is restricted, it seems likely this is not available. It's not clear that Autopilot even uses Kubelet at all, and they probably aren't going to tell you.</p> <p><em>End of year update:</em></p> <p>This mostly works now. Autopilot has added support for things like cluster-scoped objects and webhooks. You do need to reconfigure any install manifests to not touch the <code>kube-system</code> namespace as that is still locked down, but you can get most of this working if you hammer on it a bit.</p>
<p>I'm having issues when accessing a service present in another namespace.</p> <p>I have 2 namespaces (in the same cluster), airflow-dev and dask-dev.</p> <p><img src="https://i.stack.imgur.com/UrA7u.jpg" alt="enter image description here" /></p> <p>In the dask-dev namespace, I have a dask cluster (dask scheduler and workers) deployed. I also created a ClusterIP service for the dask-scheduler pod. I'm able to access the dask-scheduler pod from chrome using the 'kubectl port-forward' command.</p> <p><code>kubectl port-forward --namespace dask-dev svc/dask-dev-scheduler 5002:80</code></p> <p>However, I am not able to access the service (or dask-scheduler pod) from a pod (airflow-scheduler) present in the airflow-dev namespace. I get a '<strong>Host or service not found</strong>' error when trying to access it using the below</p> <p><code>dask-dev-scheduler.dask-dev.svc.cluster.local:8786</code></p> <p>Below is the service that I have created for dask-dev-scheduler. Could you please let me know how to access the service from the airflow-dev namespace.</p> <pre><code>apiVersion: v1 metadata: name: dask-dev-scheduler namespace: dask-dev labels: app: dask-dev app.kubernetes.io/managed-by: Helm chart: dask-dev-4.5.7 component: scheduler heritage: Helm release: dask-dev annotations: meta.helm.sh/release-name: dask-dev meta.helm.sh/release-namespace: dask-dev spec: ports: - name: dask-dev-scheduler protocol: TCP port: 8786 targetPort: 8786 - name: dask-dev-webui protocol: TCP port: 80 targetPort: 8787 selector: app: dask-dev component: scheduler release: dask-dev clusterIP: 10.0.249.111 type: ClusterIP sessionAffinity: None status: loadBalancer: {} </code></pre>
<p>You can use a local service to reference an external service (a service in a different namespace) using the service <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">externalName Type</a>.</p> <p><code>ExternalName</code> services do not have selectors, or any defined ports or endpoints, therefore, you can use an ExternalName service to direct traffic to an external service.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: service-b namespace: namespace-b spec: selector: app: my-app-b ports: - protocol: TCP port: 3000 targetPort: 3000 </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: service-b-ref namespace: namespace-a spec: type: ExternalName externalName: service-b.namespace-b.svc.cluster.local </code></pre> <p>Any traffic in <code>namespace-a</code> that connects to <code>service-b-ref:&lt;port&gt;</code> will be routed to <code>service-b</code> in <code>namespace-b</code> (<code>service-b.namespace-b.svc.cluster.local</code>) Therefore, a call to <code>service-b-ref:3000</code> will route to our service-b.</p> <hr /> <p>In your example, you'd just need to create a service in <code>airflow-dev</code> that will route traffic to the <code>dask-dev-scheduler</code> in the <code>dask-dev</code> namespace:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: dask-dev-svc namespace: airflow-dev spec: type: ExternalName externalName: dask-dev-scheduler.dask-dev.svc.cluster.local </code></pre> <p>Therefore, all <code>airflow-dev</code> resources that need to connect to the <code>dask-dev-scheduler</code> would call: <code>dask-dev-svc:8786</code></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 metadata: name: dask-dev-scheduler namespace: dask-dev spec: ports: - name: dask-dev-scheduler protocol: TCP port: 8786 targetPort: 8786 # 
... selector: app: dask-dev </code></pre>
<p>I deployed a EKS cluster and I'd like to add more IAM users to the role. I read this doc <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html</a> and it mentioned how to map IAM users or roles to k8s but it doesn't say how to map IAM group. Is it not supported? or is there a way to do that? I don't want to map many users one by one. When a new user join the team, I just move them to the IAM group without changing anything in EKS.</p>
<p>You can't. You can only map roles and users. Directly from the <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">documentation</a> you linked:</p> <blockquote> <ol start="3"> <li>Add your IAM users, roles, or AWS accounts to the configMap. You cannot add IAM groups to the configMap.</li> </ol> </blockquote> <p>The easiest workaround would be to have a different IAM role for each group and only grant that group the ability to assume that role.</p>
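A hedged sketch of what that workaround could look like in the `aws-auth` ConfigMap; the account ID, role name, username, and group name below are all placeholders, and each IAM group would separately be granted `sts:AssumeRole` on its team role:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # assumed via sts:AssumeRole by members of the 'team-a' IAM group
    - rolearn: arn:aws:iam::111122223333:role/team-a-eks-access
      username: team-a-user
      groups:
        - team-a-kubernetes-group   # bind your RBAC RoleBindings to this group
```

New team members then only need to be added to the IAM group; the EKS side stays unchanged.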
<p>I'm writing a controller that watches kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> objects, and creates <a href="https://linkerd.io/2/features/traffic-split/" rel="nofollow noreferrer">trafficsplits</a> if they contain a certain label.</p> <p>Since the native kubernetes go client does not support the trafficsplit object, I had to find a way to extend the client so it would recognize the custom resource. I found this <a href="https://www.martin-helmich.de/en/blog/kubernetes-crd-client.html" rel="nofollow noreferrer">guide</a> which was helpful and allowed me to tackle the issue like so -</p> <pre><code>import ( splitClientV1alpha1 &quot;github.com/servicemeshinterface/smi-sdk-go/pkg/gen/client/split/clientset/versioned/typed/split/v1alpha1&quot; &quot;k8s.io/client-go/kubernetes&quot; ... ) // getting ./kube/config from file kubehome := filepath.Join(homedir.HomeDir(), &quot;.kube&quot;, &quot;config&quot;) // Building the config from file kubeConfig, err = clientcmd.BuildConfigFromFlags(&quot;&quot;, kubehome) if err != nil { return fmt.Errorf(&quot;error loading kubernetes configuration: %w&quot;, err) } // Creating the native client object kubeClient, err := kubernetes.NewForConfig(kubeConfig) if err != nil { return fmt.Errorf(&quot;error creating kubernetes client: %w&quot;, err) } // Creating another clientset exclusively for the custom resource splitClient, err := splitClientV1alpha1.NewForConfig(kubeConfig) if err != nil { return fmt.Errorf(&quot;error creating split client: %s&quot;, err) } </code></pre> <p>I feel like there must be a way to extend the kubeClient object with the trafficsplit schema, instead of creating a separate client like I did. Is there any way to achieve this?</p>
<p>This is definitely possible! You want to use Go's struct embedding :)</p> <p>Basically, we create a struct that embeds both <code>kubernetes.Clientset</code> and <code>splitClientV1alpha1.SplitV1alpha1Client</code> and initialize it using code very similar to yours above. We can then use methods from either client on that struct.</p> <pre class="lang-go prettyprint-override"><code>import ( &quot;fmt&quot; splitClientV1alpha1 &quot;github.com/servicemeshinterface/smi-sdk-go/pkg/gen/client/split/clientset/versioned/typed/split/v1alpha1&quot; &quot;k8s.io/client-go/kubernetes&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; &quot;k8s.io/client-go/util/homedir&quot; &quot;path/filepath&quot; ) type MyKubeClient struct { kubernetes.Clientset splitClientV1alpha1.SplitV1alpha1Client } func getClient() (*MyKubeClient, error) { // getting ./kube/config from file kubehome := filepath.Join(homedir.HomeDir(), &quot;.kube&quot;, &quot;config&quot;) // Building the config from file kubeConfig, err := clientcmd.BuildConfigFromFlags(&quot;&quot;, kubehome) if err != nil { return nil, fmt.Errorf(&quot;error loading kubernetes configuration: %w&quot;, err) } // Creating the native client object kubeClient, err := kubernetes.NewForConfig(kubeConfig) if err != nil { return nil, fmt.Errorf(&quot;error creating kubernetes client: %w&quot;, err) } // Creating another clientset exclusively for the custom resource splitClient, err := splitClientV1alpha1.NewForConfig(kubeConfig) if err != nil { return nil, fmt.Errorf(&quot;error creating split client: %s&quot;, err) } return &amp;MyKubeClient{ Clientset: *kubeClient, SplitV1alpha1Client: *splitClient, }, nil } func doSomething() error { client, err := getClient() if err != nil { return err } client.CoreV1().Pods(&quot;&quot;).Create(...) client.TrafficSplits(...) 
} </code></pre> <p>If you need to pass your custom client to a function that expects only the original <code>kubernetes.Clientset</code>, you can do that with:</p> <pre class="lang-go prettyprint-override"><code> func doSomething() error { client, err := getClient() if err != nil { return err } useOriginalKubeClientSet(&amp;client.Clientset) return nil } func useOriginalKubeClientSet(clientSet *kubernetes.Clientset) { // ... do things } </code></pre>
<p>I'm currently creating a Minikube cluster for the developers; they will each have their own Minikube cluster on their local machine for testing. Assuming the developers don't know anything about Kubernetes, is creating a bash script to handle all the installations and the setup of the pod the recommended way? Is it possible to do it through Terraform instead? Or is there an easier way to do this? Thanks!</p>
<p>Depending on what your requirements are, choosing <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">Minikube</a> may or may not be the best way to go. Just to give you some other options, you might want to take a look at the following tools when it comes to local environments for developers (depending on their needs):</p> <ul> <li><a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer">kind</a></li> <li><a href="https://github.com/galexrt/k8s-vagrant-multi-node" rel="nofollow noreferrer">k8s-vagrant-multi-node</a></li> </ul> <p>Since you do not seem to care about Windows or other users (at least they weren't mentioned), a bash script <em>may</em> be the simplest way to go. However, usually that's where tools like <a href="https://github.com/ansible/ansible" rel="nofollow noreferrer">Ansible</a> come into play. They help you with automating things in a clear fashion <strong>and</strong> allow for proper testing. Some tools (like Ansible) even have support for certain Windows features that may be useful.</p> <p><strong>TL;DR</strong></p> <p>A Bash script is not the recommended way as it has lots of pain points that come with it, however, it may be the fastest approach depending on your skillset. If you want to do it properly use tools like Ansible, Chef, Puppet, etc.</p>
<p>I deployed a helm chart using <code>helm install</code> and after this I want to see if the pods/services/cms related to just this deployment have come up or failed. Is there a way to see this?</p> <p>Using <code>kubectl get pods</code> and grepping for the name works, but it does not show the services and other resources that got deployed when this helm chart is deployed.</p>
<pre><code>helm get manifest RELEASE_NAME helm get all RELEASE_NAME </code></pre> <p><a href="https://helm.sh/docs/helm/helm_get_manifest/" rel="noreferrer">https://helm.sh/docs/helm/helm_get_manifest/</a></p>
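To go from the rendered manifest to the live status of those same objects in one step, one approach (assuming the release exists in the current namespace) is to feed the manifest back into kubectl:

```shell
helm get manifest RELEASE_NAME | kubectl get -f -
```

`kubectl get -f -` reads the resource definitions from stdin and reports the current state of exactly the objects the chart created.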
<p>I found specifying like <code>kubectl --context dev --namespace default {other commands}</code> before kubectl client in many examples. Can I get a clear difference between them in a k8's environment?</p>
<p>The kubernetes concept (and term) <strong>context</strong> only applies in the kubernetes client-side, i.e. the place where you run the kubectl command, e.g. your command prompt. The kubernetes server-side doesn't recognise this term 'context'.</p> <p>As an example, in the command prompt, i.e. as the client:</p> <ul> <li>when calling <code>kubectl get pods -n dev</code>, you're retrieving the list of the pods located under the namespace 'dev'.</li> <li>when calling <code>kubectl get deployments -n dev</code>, you're retrieving the list of the deployments located under the namespace 'dev'.</li> </ul> <p>If you know that you're basically targeting only the 'dev' namespace at the moment, then instead of adding &quot;-n dev&quot; all the time in each of your kubectl commands, you can just:</p> <ol> <li>Create a context named 'context-dev'.</li> <li>Specify the namespace='dev' for this context.</li> <li>Set the current-context='context-dev'.</li> </ol> <p>This way, your commands above will be simplified as follows:</p> <ul> <li><code>kubectl get pods</code></li> <li><code>kubectl get deployments</code></li> </ul> <p>You can set different contexts, such as 'context-dev', 'context-staging', etc., whereby each of them targets a different namespace. BTW it's not obligatory to prefix the name with 'context'. You can just set the name to 'dev', 'staging', etc.</p> <p>As an analogy, imagine a group of people talking about films. Somewhere within the conversation the word 'Rocky' is used. Since they're talking about films, it's clear and there's no ambiguity that 'Rocky' here refers to the boxing film 'Rocky' and not to &quot;bumpy, stony&quot; terrain. It's redundant and unnecessary to mention 'the movie Rocky' each time. Just one word, 'Rocky', is enough. The context is obviously film.</p> <p>The same thing applies to Kubernetes and the example above. 
If the context is already set to a certain cluster and namespace, it's redundant and unnecessary to set and / or mention these parameters in each of your commands.</p> <p>My explanation here is just revolving around namespace, but this is just an example. Other than specifying the namespace, within the context you will actually also specify which cluster you're targeting and the user info used to access the cluster. You can have a look inside the ~/.kube/config file to see what information other than the namespace is associated to each context.</p> <p>In the sample command in your question above, both the namespace and the context are specified. In this case, <code>kubectl</code> will use whatever configuration values set within the 'dev' context, but the <code>namespace</code> value specified within this context (if it exists) will be ignored as it will be overriden by the value explicitly set in the command, i.e. 'default'.</p> <p>Meanwhile, the <strong>namespace</strong> concept is used in both sides: server and client sides. It's a logical grouping of Kubernetes objects. Just like how we group files inside different folders in Operating Systems.</p>
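The three numbered steps above can be sketched with kubectl's `config` subcommands; the cluster, user, and context names here are placeholders that should match entries in your own kubeconfig:

```shell
# 1 + 2: create a context named 'context-dev' with the namespace fixed to 'dev'
kubectl config set-context context-dev \
  --cluster=my-cluster --user=my-user --namespace=dev

# 3: make it the current context
kubectl config use-context context-dev

# From now on, plain commands implicitly target the 'dev' namespace:
kubectl get pods
kubectl get deployments
```

`kubectl config get-contexts` lists all defined contexts and marks the current one.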
<p>So I've a cron job like this:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: my-cron-job spec: schedule: &quot;0 0 31 2 *&quot; failedJobsHistoryLimit: 3 successfulJobsHistoryLimit: 1 concurrencyPolicy: &quot;Forbid&quot; startingDeadlineSeconds: 30 jobTemplate: spec: backoffLimit: 0 activeDeadlineSeconds: 120 ... </code></pre> <p>Then i trigger the job manually like so:</p> <pre><code>kubectl create job my-job --namespace precompile --from=cronjob/my-cron-job </code></pre> <p>But it seams like I can trigger the job as often as I want and the <code>concurrencyPolicy: &quot;Forbid&quot;</code> is ignored.</p> <p>Is there a way so that manually triggered jobs will respect this or do I have to check this manually?</p>
<blockquote> <p>Note that concurrency policy only applies to the jobs created by the same cron job.</p> </blockquote> <p>The <code>concurrencyPolicy</code> field only applies to jobs created by the same cron job, as stated in the documentation: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy</a></p> <p>When executing <code>$ kubectl create job my-job --namespace precompile --from=cronjob/my-cron-job</code> you are essentially creating a one-time job on its own that uses the <code>spec.jobTemplate</code> field as a reference to create it. Since <code>concurrencyPolicy</code> is a cronjob field, it is not even being evaluated.</p> <p><strong>TL;DR</strong></p> <p>This actually is the expected behavior. Manually created jobs are not affected by <code>concurrencyPolicy</code>. There is no flag you could pass to change this behavior.</p>
<p>I have a cluster in GKE and it is working, everything seems to be working. If I forward the ports I am able to see that the containers are working.</p> <p>I am not able to setup a domain I own from namecheap.</p> <p>These are the steps I followed</p> <ol> <li>In Namecheap I setup a custom dns for the domain</li> </ol> <pre><code>ns-cloud-c1.googledomains.com. ns-cloud-c2.googledomains.com. ns-cloud-c3.googledomains.com. ns-cloud-c3.googledomains.com. </code></pre> <p>I used the letter <code>c</code> because the cluster is in a <code>c</code> zone (I am not sure if this is right)</p> <ol start="2"> <li>Because I am trying to setup as secure website I installed nginx ingress controller</li> </ol> <pre><code>kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole cluster-admin \ --user $(gcloud config get-value account) </code></pre> <p>and</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.46.0/deploy/static/provider/cloud/deploy.yaml </code></pre> <ol start="3"> <li>I applied the <code>issuer.yml</code></li> </ol> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt-prod namespace: cert-manager spec: acme: # The ACME server URL server: https://acme-v02.api.letsencrypt.org/directory # Email address used for ACME registration email: example@email.com # Name of a secret used to store the ACME account private key privateKeySecretRef: name: letsencrypt-prod # Enable the HTTP-01 challenge provider solvers: - http01: ingress: class: nginx </code></pre> <ol start="4"> <li>I applied ingress</li> </ol> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: namespace: staging name: ingress annotations: cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot; spec: tls: - hosts: - www.stagingmyappsrl.com - api.stagingmyappsrl.com secretName: stagingmyappsrl-tls rules: - host: wwwstaging.myappsrl.com http: paths: - backend: serviceName: 
myappcatalogo-svc servicePort: 80 - host: apistaging.stagingmyappsrl.com http: paths: - backend: serviceName: myappnodeapi-svc servicePort: 80 </code></pre> <p>It seems that everything is created and working if I check in the GKE website, but when I try to access I get <code>DNS_PROBE_FINISHED_NXDOMAIN</code></p> <p>I am not sure if I am missing a step or if I am setting up something wrong</p>
<p>GKE should have created a cloud load balancer for your ingress service. Depending on your config, the LB can be internal or external. You can get your LB information by looking at the services:</p> <pre><code>kubectl get svc -n ingress-nginx </code></pre> <p>Create a CNAME record in your DNS (namecheap) with the LB address and that should do it. Alternatively, if you have an IP address of the LB, create an A record in your DNS.</p> <p>Cert-manager will create an ingress resource to resolve <code>HTTPS01</code> challenges. Make sure your ingresses are reachable over the Internet for the <code>HTTPS01</code> challenges to work. Alternatively, you could explore other solvers.</p>
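<p>For example, to get the address to point your DNS record at (the exact service name may differ depending on the version of the ingress-nginx manifest you applied):</p> <pre><code># External IP (or hostname) of the ingress controller's load balancer
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
</code></pre>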
<p><code>kubectl get pods --all-namespaces</code> provides the list of all pods. The column <code>RESTARTS</code> shows the number of restarts that a pod has had. How to get the list of all the pods that have had at least one restart? Thanks</p>
<pre><code>kubectl get pods --all-namespaces | awk '$5&gt;0' </code></pre> <p>or simply just</p> <pre><code>kubectl get po -A | awk '$5&gt;0' </code></pre> <p>Use awk to print if column 5 (RESTARTS) &gt; 0</p> <p>or with the use of an alias</p> <pre><code>alias k='kubectl' k get po -A | awk '$5&gt;0' </code></pre>
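<p>If you additionally want the list ordered by how often the pods have restarted, <code>kubectl</code> can sort the output itself. Note that this sketch sorts by the restart count of the first container only:</p> <pre><code>kubectl get pods -A --sort-by='.status.containerStatuses[0].restartCount' | awk '$5&gt;0'
</code></pre>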
<p>I am very new to Traefik and Kubernetes. I installed Traefik through helm (repo: <a href="https://traefik.github.io/traefik-helm-chart/" rel="noreferrer">https://traefik.github.io/traefik-helm-chart/</a>, helm version 3.5.2, chart traefik-9.19.1). Then I wanted to get prometheus metrics from it.</p> <p>Here is an extract of my values.yaml file:</p> <pre><code>ports: metrics: expose: true port: 3333 exposedPort: 3333 protocol: TCP additionalArguments: - &quot;--metrics.prometheus=true&quot; - &quot;--metrics.prometheus.buckets=0.100000, 0.300000, 1.200000, 5.000000&quot; - &quot;--metrics.prometheus.addEntryPointsLabels=true&quot; - &quot;--metrics.prometheus.addServicesLabels=true&quot; - &quot;--entrypoints.metrics.address=:3333/tcp&quot; - &quot;--metrics.prometheus.entryPoint=metrics&quot; </code></pre> <p>My problem is: this configuration exposes the TCP port 3333 to the Internet. For security reasons, I would prefer to avoid this.</p> <p>Is there a way to expose port 3333 only to my cluster?</p>
<p>Try this:</p> <pre><code>ports: metrics: expose: true port: 3333 exposedPort: 3333 protocol: TCP env: - name: POD_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP additionalArguments: - &quot;--metrics.prometheus=true&quot; - &quot;--metrics.prometheus.buckets=0.100000, 0.300000, 1.200000, 5.000000&quot; - &quot;--metrics.prometheus.addEntryPointsLabels=true&quot; - &quot;--metrics.prometheus.addServicesLabels=true&quot; - &quot;--entrypoints.metrics.address=$(POD_IP):3333/tcp&quot; - &quot;--metrics.prometheus.entryPoint=metrics&quot; </code></pre> <p>Traefik will then expose metrics only on the POD_IP network interface.</p> <p>Additionally, I'd propose updating the firewall settings on your workers (iptables, etc.).</p>
<p>I exported docker images in .tar format.<br /> Then importing those images using K3S and ctr showed no results :</p> <pre><code>$ sudo k3s ctr i import myimage.tar $ </code></pre> <p>No output from <code>import</code> cmd ?</p> <pre><code>$ sudo k3s ctr i ls $ </code></pre> <p>Nothing there....</p>
<p>Adding <code>--digests=true</code> worked for me:</p> <pre><code>$ sudo k3s ctr i import myimage.tar --digests=true unpacking import-2021-05-19@sha256:f9952729292daa8d099c7bc3275f483fdb11ffa9bb1fc394bc06b44c101047e2 (sha256:f9952729292daa8d099c7bc3275f483fdb11ffa9bb1fc394bc06b44c101047e2)...done </code></pre> <p>And listing images also confirms that the import has worked:</p> <pre><code>$ sudo k3s ctr i ls ... import-2021-05-19@sha256:f99527292fa9bb1fc394bc06b44c101047e2 application/vnd.docker.distribution.manifest.v2+json sha256:f9952729292dac06b44c101047e2 939.9 MiB linux/amd64 io.cri-containerd.image=managed </code></pre>
<p>I'm migrating Confluence from a VM to an instance of Confluence Server 7.11.1 using the official Atlassian docker image in Kubernetes. I successfully got the application to come up, and was able to get through the set-up screens to start an empty Confluence.</p> <p>To customize my instance I want to copy over my old server's confluence home directory to the default confluence home location and also change the database connection URL. According to documentation <a href="https://confluence.atlassian.com/conf59/migrating-confluence-between-servers-792499892.html" rel="nofollow noreferrer">here</a>, I need to stop confluence from running while doing this. I tried to run the stop-confluence.sh script from the confluence user, but I can't stop the container because there is no catalina.pid file in the docker version of k8s.</p> <p>I tried the alternative of killing the java process that runs Confluence, but the entire container shuts down when I do this.</p> <p>How do I stop Confluence in Kubernetes so that I can copy files and modify configuration in the container? Is the docker image version of the Confluence application not meant to be stopped and everything needs to be provided as env variables? 
Notes on the official atlassian docker image configuration is <a href="https://hub.docker.com/r/atlassian/confluence-server/" rel="nofollow noreferrer">here</a>.</p> <p>Error message:</p> <pre><code>confluence@usmai-www-confluence-0:~$ /opt/atlassian/confluence/bin/stop-confluence.sh -fg executing as current user If you encounter issues starting up Confluence, please see the Installation guide at http://confluence.atlassian.com/display/DOC/Confluence+Installation+Guide Server startup logs are located in /opt/atlassian/confluence/logs/catalina.out --------------------------------------------------------------------------- Using Java: /opt/java/openjdk/bin/java 2021-05-19 23:29:48,363 INFO [main] [atlassian.confluence.bootstrap.SynchronyProxyWatchdog] A Context element for ${confluence.context.path}/synchrony-proxy is found in /opt/atlassian/confluence/conf/server.xml. No further action is required --------------------------------------------------------------------------- Using CATALINA_BASE: /opt/atlassian/confluence Using CATALINA_HOME: /opt/atlassian/confluence Using CATALINA_TMPDIR: /opt/atlassian/confluence/temp Using JRE_HOME: /opt/java/openjdk Using CLASSPATH: /opt/atlassian/confluence/bin/bootstrap.jar:/opt/atlassian/confluence/bin/tomcat-juli.jar Using CATALINA_OPTS: -XX:ReservedCodeCacheSize=256m -XX:+UseCodeCacheFlushing -Djdk.tls.server.protocols=TLSv1.1,TLSv1.2 -Djdk.tls.client.protocols=TLSv1.1,TLSv1.2 -Dconfluence.context.path=/portal -Djava.locale.providers=JRE,SPI,CLDR -Dorg.apache.tomcat.websocket.DEFAULT_BUFFER_SIZE=32768 -Dsynchrony.enable.xhr.fallback=true -Xms1024m -Xmx1024m -Dconfluence.home= -XX:+UseG1GC -Datlassian.plugins.enable.wait=300 -Djava.awt.headless=true -XX:G1ReservePercent=20 -Xloggc:/opt/atlassian/confluence/logs/gc-2021-05-19_23-29-48.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2M -Xlog:gc+age=debug:file=/opt/atlassian/confluence/logs/gc-2021-05-19_23-29-48.log::filecount=5,filesize=2M 
-XX:-PrintGCDetails -XX:+PrintGCDateStamps -XX:-PrintTenuringDistribution -XX:+IgnoreUnrecognizedVMOptions -DConfluenceHomeLogAppender.disabled= Using CATALINA_PID: /opt/atlassian/confluence/work/catalina.pid $CATALINA_PID was set but the specified file does not exist. Is Tomcat running? Stop aborted. </code></pre>
<p>Apparently, the confluence java process is the &quot;ENTRYPOINT&quot; for the docker container, so when you kill the java process, it kills the container as well.</p> <p>I would suggest that you create a persistent volume with the readWriteMany attribute set and mount it to a temporary pod, say, one running an ubuntu image.</p> <p>You then use &quot;kubectl cp&quot; to copy the existing home directory to the persistent volume mounted on the ubuntu pod.</p> <p>You can then make any modifications to the files as you please. Once done, just kill the ubuntu pod.</p> <p>Your files will still be present on the persistent volume.</p> <p>Now mount this persistent volume to /opt/atlassian/confluence in your confluence pod and it should just work.</p>
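<p>A sketch of the copy step (the pod name, namespace, and mount path below are hypothetical):</p> <pre><code># Copy the old Confluence home into the volume mounted on the helper pod
kubectl cp ./confluence-home my-namespace/ubuntu-helper:/mnt/data

# When done, delete the helper pod; the data stays on the persistent volume
kubectl delete pod ubuntu-helper -n my-namespace
</code></pre>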
<p>Kubernetes provides us two deployment strategies. One is <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a></strong> and another one is <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate</a></strong>. I should use Rolling Update when I don't want go off air. But when should I be using Recreate?</p>
<p>There are basically two reasons why one would want/need to use <code>Recreate</code>:</p> <ul> <li>Resource issues. Some clusters simply do not have enough resources to be able to schedule additional Pods, which then results in them being stuck and the update procedure with it. This happens especially for local development clusters and/or applications that consume large amounts of resources.</li> <li>Bad applications. Some applications (especially legacy or monolithic setups) simply cannot handle it when new Pods - that do the exact same thing as they do - spin up. There are too many reasons as to why this may happen to cover all of them here but essentially it means that an application is not suitable for scaling.</li> </ul>
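<p>For reference, switching strategies is a one-field change in the Deployment spec. A minimal sketch (names and image are placeholders):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: Recreate   # all old Pods are terminated before any new Pod is created
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:2.0.0
</code></pre>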
<p>I want to mount an Azure Shared Disk to the multiple deployments/nodes based on this: <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/disks-shared" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/virtual-machines/disks-shared</a></p> <p>So, I created a shared disk in Azure Portal and when trying to mount it to deployments in Kubernetes I got an error:</p> <blockquote> <p>&quot;Multi-Attach error for volume &quot;azuredisk&quot; Volume is already used by pod(s)...&quot;</p> </blockquote> <p>Is it possible to use Shared Disk in Kubernetes? If so how? Thanks for tips.</p>
<p><strong><a href="https://azure.microsoft.com/en-us/blog/announcing-the-general-availability-of-azure-shared-disks-and-new-azure-disk-storage-enhancements/" rel="noreferrer">Yes, you can</a></strong>, and the capability is GA.</p> <p>An Azure Shared Disk can be mounted as ReadWriteMany, which means you can mount it to multiple nodes and pods. It requires the <a href="https://learn.microsoft.com/en-us/azure/aks/azure-disk-csi#shared-disk" rel="noreferrer">Azure Disk CSI driver</a>, and the caveat is that currently only Raw Block volumes are supported, thus the application is responsible for managing the control of writes, reads, locks, caches, mounts, and fencing on the shared disk, which is exposed as a raw block device. This means that you mount the raw block device (disk) to a pod container as a <code>volumeDevice</code> rather than a <code>volumeMount</code>.</p> <p><a href="https://github.com/kubernetes-sigs/azuredisk-csi-driver/tree/master/deploy/example/sharedisk" rel="noreferrer">The documentation examples</a> mostly point to how to create a Storage Class to dynamically provision an Azure Shared Disk, but I have also created one statically and mounted it to multiple pods on different nodes.</p> <h3>Dynamically Provision Shared Azure Disk</h3> <ol> <li>Create Storage Class and PVC</li> </ol> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: managed-csi provisioner: disk.csi.azure.com parameters: skuname: Premium_LRS # Currently shared disk only available with premium SSD maxShares: &quot;2&quot; cachingMode: None # ReadOnly cache is not available for premium SSD with maxShares&gt;1 reclaimPolicy: Delete --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc-azuredisk spec: accessModes: - ReadWriteMany resources: requests: storage: 256Gi # minimum size of shared disk is 256GB (P15) volumeMode: Block storageClassName: managed-csi </code></pre> <ol start="2"> <li>Create a 
deployment with 2 replicas and specify volumeDevices, devicePath in Spec</li> </ol> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: deployment-azuredisk spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx name: deployment-azuredisk spec: containers: - name: deployment-azuredisk image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine volumeDevices: - name: azuredisk devicePath: /dev/sdx volumes: - name: azuredisk persistentVolumeClaim: claimName: pvc-azuredisk </code></pre> <h3>Use a Statically Provisioned Azure Shared Disk</h3> <p>Using an Azure Shared Disk that has been provisioned through ARM, Azure Portal, or through the Azure CLI.</p> <ol> <li>Define a PersistentVolume (PV) that references the DiskURI and DiskName:</li> </ol> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: azuredisk-shared-block spec: capacity: storage: &quot;256Gi&quot; # 256 is the minimum size allowed for shared disk volumeMode: Block # PV and PVC volumeMode must be 'Block' accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain azureDisk: kind: Managed diskURI: /subscriptions/&lt;subscription&gt;/resourcegroups/&lt;group&gt;/providers/Microsoft.Compute/disks/&lt;disk-name&gt; diskName: &lt;disk-name&gt; cachingMode: None # Caching mode must be 'None' --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-azuredisk-managed spec: resources: requests: storage: 256Gi volumeMode: Block accessModes: - ReadWriteMany volumeName: azuredisk-shared-block # The name of the PV (above) </code></pre> <p>Mounting this PVC is the same for both dynamically and statically provisioned shared disks. Reference the deployment above.</p>
<p>I've configured Let's Encrypt using cert-manager in my cluster and it works just fine for most of my use cases. However I have an application which is installed multiple times on the same hostname but with a different path.</p> <p>My ingress is defined as below</p> <pre><code>{{- if .Values.ingress.enabled -}} {{- $fullName := include &quot;whoami-go.fullname&quot; . -}} {{- $svcPort := .Values.service.port -}} {{- $tls := hasKey .Values.ingress &quot;certIssuer&quot; -}} apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: {{ $fullName }} labels: {{- include &quot;whoami-go.labels&quot; . | nindent 4 }} annotations: {{- if $tls }} cert-manager.io/cluster-issuer: {{ .Values.ingress.certIssuer | quote }} ingress.kubernetes.io/ssl-redirect: &quot;true&quot; {{- end }} spec: {{- if $tls }} tls: - secretName: {{ $fullName }}-tls hosts: - {{ .Values.ingress.hostname | quote }} {{- end }} rules: - host: {{ .Values.ingress.hostname | quote }} http: paths: - path: {{ .Values.ingress.path }} pathType: Prefix backend: service: name: {{ $fullName }} port: number: {{ $svcPort }} {{- end }} </code></pre> <p>And it's instantiated with values like below</p> <pre><code>ingress: enabled: true hostname: whoami-go.c.dhis2.org path: /something certIssuer: letsencrypt-prod </code></pre> <p>Where <code>path</code> is changed for each installation.</p> <p>The problem...</p> <pre><code>E0520 03:13:49.242770 1 sync.go:210] cert-manager/controller/orders &quot;msg&quot;=&quot;failed to create Order resource due to bad request, marking Order as failed&quot; &quot;error&quot;=&quot;429 urn:ietf:params:acme:error:rateLimited: Error creating new order :: too many certificates already issued for exact set of domains: whoami-go.c.dhis2.org: see https://letsencrypt.org/docs/rate-limits/&quot; &quot;resource_kind&quot;=&quot;Order&quot; &quot;resource_name&quot;=&quot;finland-whoami-go-tls-tzvk6-4169341110&quot; &quot;resource_namespace&quot;=&quot;whoami&quot; 
&quot;resource_version&quot;=&quot;v1&quot; </code></pre> <p>Since only the path is updated I hoped that cert-manager would reuse the certificate but that's obviously not the case. Can I somehow configure my application to use the same certificate for the same hostname across multiple installations of the same chart?</p>
<p>Meaning of the error:</p> <pre><code>urn:ietf:params:acme:error:rateLimited: Error creating new order :: too many certificates already issued for exact set of domains: whoami-go.c.dhis2.org: </code></pre> <p>Let's Encrypt only allows a certain number of certificates for the exact same set of domains per week (currently 5 duplicate certificates per week). Because every installation of your chart requests its own certificate for the same hostname, you hit that rate limit.</p> <p>Read more at: <a href="https://letsencrypt.org/docs/rate-limits/" rel="nofollow noreferrer">https://letsencrypt.org/docs/rate-limits/</a></p> <p>You are using <strong>certIssuer: letsencrypt-prod</strong> (a cluster issuer), which stores the issued certificate in a <strong>Kubernetes secret</strong>. Instead of requesting a new certificate for each path, keep the issuer annotation on only one ingress; that ingress will obtain the certificate and auto-renew it into the secret. The other ingresses for the same hostname then simply reference that same secret in their <code>tls</code> section, and they will work over HTTPS.</p> <p>Here is a simple ingress with the <strong>SSL/TLS</strong> cert stored in a secret:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: sls-dev nginx.ingress.kubernetes.io/proxy-read-timeout: &quot;1800&quot; nginx.ingress.kubernetes.io/proxy-send-timeout: &quot;1800&quot; nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/proxy-body-size: &quot;15m&quot; name: sls-function-ingress spec: rules: - host: app.dev.example.io http: paths: - path: /api/v1/ backend: serviceName: test-service servicePort: 80 tls: - hosts: - app.dev.example.io secretName: sls-secret </code></pre> <p>Keep <code>cert-manager.io/cluster-issuer: sls-dev</code> on one ingress only; the other ingresses just need to reference the <strong>secret</strong>.</p>
<p>I'm working with Kubeflow pipelines. I would like to access the &quot;Run name&quot; from inside the a task component. For example in the below image the run name is &quot;My first XGBoost run&quot; - as seen in the title.</p> <p><a href="https://i.stack.imgur.com/RELO4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RELO4.png" alt="enter image description here" /></a></p> <p>I know for example it's possible to <a href="https://stackoverflow.com/a/57047661/1110328">obtain the workflow ID</a> by passing the parameter <code>{{workflow.uid}}</code> as a command line argument. I have also tried the <a href="https://github.com/argoproj/argo/blob/master/docs/variables.md#global" rel="nofollow noreferrer">Argo variable</a> <code>{{ workflow.name }}</code> but this doesn't give the correct string.</p>
<p>You can use <code>{{workflow.annotations.pipelines.kubeflow.org/run_name}}</code> argo variable to get the run_name</p> <p>For example,</p> <pre class="lang-py prettyprint-override"><code>@func_to_container_op def dummy(run_id, run_name) -&gt; str: return run_id, run_name @dsl.pipeline( name='test_pipeline', ) def test_pipeline(): dummy('{{workflow.labels.pipeline/runid}}', '{{workflow.annotations.pipelines.kubeflow.org/run_name}}') </code></pre> <p>You will find that the placeholders will be replaced with the correct run_id and run_name.</p>
<p>I'm migrating my cluster to GKE using Autopilot mode, and I'm trying to apply fluentbit for logging (to be sent to Elasticsearch and then Kibana to be alerted on a slack channel).</p> <p>But it seems that GKE Autopilot doesn't want me to do anything on the <code>hostPath</code> other than reading files inside <code>/var/log</code> according to this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#host_options_restrictions" rel="nofollow noreferrer">documentation</a>. However Fluentbit needs to access <code>/var/lib/docker/containers</code>, which is different from <code>/var/log</code>, and also needs write access inside <code>/var/log</code>.</p> <p>Is there a way to get around this, or how do you usually log in GKE Autopilot with alerts? Experience sharing is also welcome.</p>
<p>Citing the official documentation:</p> <blockquote> <h3>External monitoring tools</h3> <p>Most external monitoring tools require access that is restricted. Solutions from several Google Cloud partners are available for use on Autopilot, however not all are supported, and <strong>custom monitoring tools cannot be installed on Autopilot clusters.</strong></p> <p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#external_monitoring_tools" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Autopilot overview: External monitoring tools </a></em></p> <hr /> <h3>Host options restrictions</h3> <p>HostPort and hostNetwork are not permitted because node management is handled by GKE. Using <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volumes in write mode is prohibited, <strong>while using hostPath volumes in read mode is allowed only for <code>/var/log/</code> path prefixes</strong>. Using <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer">host namespaces</a> in workloads is prohibited.</p> <p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#host_options_restrictions" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Autopilot overview: Host options restrictions</a></em></p> </blockquote> <p>As you've already found the access to the <code>/var/lib/docker/containers</code> directory is not possible with the <code>GKE</code> in <code>Autopilot</code> mode.</p> <p>As a <strong>workaround</strong> you could try to <strong>either</strong>:</p> <ul> <li>Use <code>GKE</code> cluster in <code>standard</code> mode.</li> <li>Use <code>Cloud Operations</code> with its Slack notification channel. 
You can read more about this topic by following: <ul> <li><em><a href="https://cloud.google.com/monitoring/alerts" rel="nofollow noreferrer">Cloud.google.com: Monitoring: Alerts</a></em></li> <li><em><a href="https://cloud.google.com/monitoring/support/notification-options#slack" rel="nofollow noreferrer">Cloud.google.com: Monitoring: Support: Notification options: Slack</a></em></li> </ul> </li> </ul> <p>I'd reckon you could also consider checking the guide for exporting logs to <code>Elasticsearch</code> from <code>Cloud Logging</code>:</p> <ul> <li><em><a href="https://cloud.google.com/architecture/exporting-stackdriver-logging-elasticsearch" rel="nofollow noreferrer">Cloud.google.com: Architecture: Scenarios for exporting Cloud Logging: Elasticsearch</a></em></li> </ul> <hr /> <p>Additional resources:</p> <ul> <li><em><a href="https://stackoverflow.com/a/67599710/12257134">Stackoverflow.com: Answer: Prometheus on GKE Autopilot? </a></em></li> </ul>
<p>Hope someone can help me. To describe the situation in short, I have a self managed k8s cluster, running on 3 machines (1 master, 2 worker nodes). In order to make it HA, I attempted to add a second master to the cluster. After some failed attempts, I found out that I needed to add <code>controlPlaneEndpoint</code> configuration to kubeadm-config config map. So I did, with <code>masternodeHostname:6443</code>.</p> <p>I generated the certificate and join command for the second master, and after running it on the second master machine, it failed with</p> <pre><code>error execution phase control-plane-join/etcd: error creating local etcd static pod manifest file: timeout waiting for etcd cluster to be available </code></pre> <p>Checking the first master now, I get connection refused for the IP on port 6443. So I cannot run any kubectl commands.<br /> Tried recreating the .kube folder, with all the config copied there, no luck.<br /> Restarted kubelet, docker.<br /> The containers running on the cluster seem ok, but I am locked out of any cluster configuration (dashboard is down, kubectl commands not working).<br /> Is there any way I make it work again? Not losing any of the configuration or the deployments already present?<br /> Thanks! Sorry if it’s a noob question.</p> <p>Cluster information:</p> <pre><code>Kubernetes version: 1.15.3 Cloud being used: (put bare-metal if not on a public cloud) bare-metal Installation method: kubeadm Host OS: RHEL 7 CNI and version: weave 0.3.0 CRI and version: containerd 1.2.6 </code></pre>
<p>This is an old, known problem with Kubernetes <code>1.15</code> <sup>[<a href="https://github.com/kubernetes/kubeadm/issues/1712" rel="nofollow noreferrer">1</a>,<a href="https://github.com/kubernetes/website/issues/15637" rel="nofollow noreferrer">2</a>]</sup>.<br /> It is caused by short etcd timeout period. As far as I'm aware it is a hard coded value in <a href="https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/util/etcd/etcd.go#L41-L43" rel="nofollow noreferrer">source</a>, and cannot be changed (feature request to make it configurable is open for version <a href="https://github.com/kubernetes/kubeadm/issues/2463" rel="nofollow noreferrer"><code>1.22</code></a>).<br /> Your best bet would be to upgrade to a newer version, and recreate your cluster.</p>
<p>We are migrating our Spark workloads from Cloudera to Kubernetes.</p> <p>For demo purposes, we wish to run one of our spark jobs within a minikube cluster using spark-submit in cluster mode.</p> <p>I would like to pass a typesafe config file to my executors using the spark.file conf (I tried --files as well). The configuration file has been copied to the spark docker image at build time at the /opt/spark/conf directory.</p> <p>Yet when I submit my job, I have a <strong>java.io.FileNotFoundException: File file:/opt/spark/conf/application.conf does not exist</strong>.</p> <p>My understanding is that spark.files copies the files from driver to executors' working directory.</p> <p>Am I missing something ? Thank you for your help.</p> <p>Here is my spark-submit command</p> <pre><code>spark-submit \ --master k8s://https://192.168.49.2:8443 \ --driver-memory ${SPARK_DRIVER_MEMORY} --executor-memory ${SPARK_EXECUTOR_MEMORY} \ --deploy-mode cluster \ --class &quot;${MAIN_CLASS}&quot; \ --conf spark.driver.defaultJavaOptions=&quot;-Dconfig.file=local://${POD_CONFIG_DIR}/application.conf $JAVA_ARGS&quot; \ --conf spark.files=&quot;file:///${POD_CONFIG_DIR}/application.conf,file:///${POD_CONFIG_DIR}/tlereg.properties&quot; \ --conf spark.executor.defaultJavaOptions=&quot;-Dconfig.file=local://./application.conf&quot; \ --conf spark.executor.instances=5 \ --conf spark.kubernetes.container.image=$SPARK_CONTAINER_IMAGE \ --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \ --conf spark.kryoserializer.buffer.max=512M \ --conf spark.driver.maxResultSize=8192M \ --conf spark.kubernetes.authenticate.caCertFile=$HOME/.minikube/ca.crt \ --conf spark.executor.extraClassPath=&quot;./&quot; \ local:///path/to/uber/jar.jar \ &quot;${PROG_ARGS[@]}&quot; &gt; $LOG_FILE 2&gt;&amp;1 </code></pre>
<p>I've figured it out. <code>spark-submit</code> sends a request to the kubernetes master's api-server to create a driver pod. A configmap volume is mounted into the driver's pod at <code>mountPath: /opt/spark/conf</code>, which overrides my config files located at that path in the docker container. Workaround: change /opt/spark/conf to /opt/spark/config in the Dockerfile so that my configuration files are copied to (and read from) the latter.</p>
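<p>In other words, the fix boils down to copying the files to a path that the driver's configmap mount does not shadow. A sketch of the relevant Dockerfile line (file names as in the question; the target directory is the author's choice, any path other than /opt/spark/conf would do):</p> <pre><code># Dockerfile: copy the config to a directory that is NOT overridden by the
# configmap volume the driver pod mounts at /opt/spark/conf
COPY application.conf tlereg.properties /opt/spark/config/
</code></pre> <p>Then point <code>-Dconfig.file</code> and <code>spark.files</code> at <code>/opt/spark/config/application.conf</code> in the spark-submit options.</p>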
<p>Kubernetes provides us two deployment strategies. One is <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a></strong> and another one is <strong><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate</a></strong>. I should use Rolling Update when I don't want go off air. But when should I be using Recreate?</p>
<p>+1 to <a href="https://stackoverflow.com/a/67606389/11714114">F1ko's answer</a>, however let me also add a few more details and some real world examples to what was already said.</p> <p>In a perfect world, where every application could be easily updated with <strong>no downtime</strong> we would be fully satisfied having only <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a> strategy.</p> <p>But as the world isn't a perfect place and all things don't go always so smoothly as we could wish, in certain situations there is also a need for using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate</a> strategy.</p> <p>Suppose we have a <strong>stateful application</strong>, running in a <strong>cluster</strong>, where individual instances need to comunicate with each other. Imagine our aplication has recently undergone a <strong>major refactoring</strong> and this new version can't talk any more to instances running the old version. Moreover, we may not even want them to be able to form a cluster together as we can expect that it may cause some unpredicteble mess and in consequence neither old instances nor new ones will work properly when they become available at the same time. So sometimes it's in our best interest to be able to first shutdown every old replica and only once we make sure none of them is runnig, spawn a replica that runs a new version.</p> <p>It can be the case when there is a major migration, let's say a major change in database structure etc. 
and we want to make sure that no pod running the old version of our app is able to write any new data to the DB during the migration.</p> <p>So I would say that, in the majority of cases, it is a very application-specific, individual scenario involving major migrations, legacy applications etc. which would require accepting a certain downtime and using <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Recreate</a> to replace all the pods at once, rather than updating them one-by-one as in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update</a> strategy.</p> <p>Another example comes to my mind. Let's say you have an extremely old version of MongoDB running in a replica set consisting of 3 members and you need to migrate it to a modern, currently supported version. As far as I remember, individual members of the replica set can form a cluster only if there is at most 1 major version difference between them. So, if the difference is 2 or more major versions, old and new instances won't be able to keep running in the same cluster anyway. Imagine that you have enough resources to run only 4 replicas at the same time. So a rolling update won't help you much in such a case. To have a quorum, so that a master can be elected, you need at least 2 members out of the 3 available. If the new one won't be able to form a cluster with the old replicas, it's much better to schedule a maintenance window, shut down all the old replicas and have enough resources to start 3 replicas with the new version once the old ones are removed.</p>
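<p>For reference, opting into this behaviour is a one-line change in the Deployment spec. A minimal sketch (name and image are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-stateful-app           # placeholder name
spec:
  replicas: 3
  strategy:
    type: Recreate                # terminate ALL old pods before any new pod starts
  selector:
    matchLabels:
      app: my-stateful-app
  template:
    metadata:
      labels:
        app: my-stateful-app
    spec:
      containers:
      - name: app
        image: my-stateful-app:2.0.0   # placeholder image
</code></pre>
<p>The default is <code>type: RollingUpdate</code>, so <code>Recreate</code> always has to be requested explicitly.</p>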
<p>I have something in a secret template like this:</p> <pre><code>apiVersion: v1 kind: Secret metadata: # not relevant type: Opaque data: password: {{ randAlphaNum 32 | b64enc | quote }} </code></pre> <p>Now, when doing <code>helm upgrade</code>, the secret is recreated, but the pods using this aren't (they also shouldn't, this is OK).</p> <p>This causes the pods to fail when they are restarted or upgraded as the new password now doesn't match the old one.</p> <p>Is it possible to skip re-creation of the secret when it exists, like, a <code>{{- if not(exists theSecret) }}</code> and how to do it?</p>
<p>You can use the <strong>lookup function</strong> in Helm to check if the secret already exists or not:</p> <p><a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function" rel="noreferrer">https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function</a></p> <p>A helper function in the Helm chart looks like this: <a href="https://github.com/sankalp-r/helm-charts-examples/blob/1081ab5a5af3a1c7924c826c5a2bed4c19889daf/sample_chart/templates/_helpers.tpl#L67" rel="noreferrer">https://github.com/sankalp-r/helm-charts-examples/blob/1081ab5a5af3a1c7924c826c5a2bed4c19889daf/sample_chart/templates/_helpers.tpl#L67</a></p> <pre><code>{{/* Example for function */}} {{- define &quot;gen.secret&quot; -}} {{- $secret := lookup &quot;v1&quot; &quot;Secret&quot; .Release.Namespace &quot;test-secret&quot; -}} {{- if $secret -}} {{/* Reusing value of secret if exist */}} password: {{ $secret.data.password }} {{- else -}} {{/* add new data */}} password: {{ randAlphaNum 32 | b64enc | quote }} {{- end -}} {{- end -}} </code></pre> <p>The Secret creation will then look something like this:</p> <p>Example file: <a href="https://github.com/sankalp-r/helm-charts-examples/blob/main/sample_chart/templates/secret.yaml" rel="noreferrer">https://github.com/sankalp-r/helm-charts-examples/blob/main/sample_chart/templates/secret.yaml</a></p> <pre><code>apiVersion: v1 kind: Secret metadata: name: &quot;test-secret&quot; type: Opaque data: {{- ( include &quot;gen.secret&quot; . ) | indent 2 -}} </code></pre> <p>Chart example: <a href="https://github.com/sankalp-r/helm-charts-examples" rel="noreferrer">https://github.com/sankalp-r/helm-charts-examples</a></p> <pre><code>{{- $secret := lookup &quot;v1&quot; &quot;Secret&quot; .Release.Namespace &quot;test-secret&quot; -}} apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque # 2. If the secret exists, write it back {{ if $secret -}} data: password: {{ $secret.data.password }} # 3. 
If it doesn't exist ... create new {{ else -}} data: password: {{ randAlphaNum 32 | b64enc | quote }} {{ end }} </code></pre>
<p>We are running a Kubernetes cluster for building Jenkins jobs. For the pods we are using the <strong>odavid/jenkins-jnlp-slave</strong> JNLP docker image. I mounted /var/run/docker.sock into the pod container and added the jenkins (uid=1000) user to the docker group on the host systems.</p> <p>When running a shell script job in Jenkins with e.g. <code>docker ps</code> it fails with the error <code>docker: not found</code>.</p> <pre><code>$ /bin/sh -xe /tmp/jenkins6501091583256440803.sh + id uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins) + docker ps /tmp/jenkins2079497433467634278.sh: 8: /tmp/jenkins2079497433467634278.sh: docker: not found Build step 'Execute shell' marked build as failure Finished: FAILURE </code></pre> <p>The interesting thing is that when connecting into the pod manually and executing docker commands directly in the container as the jenkins user, it works:</p> <pre><code>kubectl exec -it jenkins-worker-XXX -- /bin/bash ~$ su - jenkins ~$ id uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),1000(jenkins) ~$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS </code></pre> <p>What is Jenkins doing differently in its job? Same user, same container; only <code>groups=1000(jenkins),1000(jenkins)</code> lists 1000(jenkins) as a group twice when connecting manually. What am I missing?</p>
<p>/var/run/docker.sock is just the host socket that allows the docker client to run docker commands from the container.</p> <p><strong>What you are missing is the docker client in your container.</strong></p> <p>Download the docker client manually, place it on a persistent volume, and ensure that the docker client is in the system path. Also, ensure that the docker client is executable.</p> <p>This command will do it for you. You may have to get the right version of the docker client for your environment.</p> <pre><code>curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.03.1-ce.tgz &amp;&amp; tar --strip-components=1 -xvzf docker-17.03.1-ce.tgz -C /usr/local/bin </code></pre> <p>You may even be able to install docker using the package manager for your image.</p>
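<p>If you'd rather not bake the client into the image, you can also mount the host's docker binary into the pod alongside the socket. A hedged sketch (the host path for the binary varies by distro, and the client must either be a static binary or have its library dependencies available in the container):</p>
<pre><code>spec:
  containers:
  - name: jnlp
    image: odavid/jenkins-jnlp-slave
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
    - name: docker-bin
      mountPath: /usr/local/bin/docker   # must end up on the PATH
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: docker-bin
    hostPath:
      path: /usr/bin/docker              # wherever the docker client lives on the host
</code></pre>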
<p>I have a configmap <code>my-config</code> in a namespace and need to make a copy (part of some temporary experimentation) but with another name so I end up with :</p> <pre><code>my-config my-config-copy </code></pre> <p>I can do this with:</p> <pre><code>kubectl get cm my-config -o yaml &gt; my-config-copy.yaml </code></pre> <p>edit the name manually followed by:</p> <pre><code>kubectl create -f my-config-copy.yaml </code></pre> <p>But is there a way to do it automatically in one line?</p> <p>I can get some of the way with:</p> <pre><code>kubectl get cm my-config --export -o yaml | kubectl apply -f - </code></pre> <p>but I am missing the part with the new name (since names are immutable I know this is not standard behavior).</p> <p>Also preferably without using export since:</p> <p><em>Flag --export has been deprecated, This flag is deprecated and will be removed in future.</em></p> <p>Any suggestions?</p>
<p>You can achieve this by combining kubectl's <code>patch</code> and <code>apply</code> functions.</p> <pre><code> kubectl patch cm source-cm -p '{&quot;metadata&quot;:{ &quot;name&quot;:&quot;target-cm&quot;}}' --dry-run=client -o yaml | kubectl apply -f - </code></pre> <p><code>source-cm</code> and <code>target-cm</code> are the config map names</p>
<p>We have a docker image that is processing some files on a samba share.</p> <p>For this we created a cifs share which is mounted to /mnt/dfs and files can be accessed in the container with:</p> <pre><code>docker run -v /mnt/dfs/project1:/workspace image </code></pre> <p>Now what I was asked to do is get the container into k8s, and to access a cifs share from a pod a cifs volume driver using FlexVolume can be used. That's where some questions pop up.</p> <p>I installed this repo as a daemonset</p> <p><a href="https://k8scifsvol.juliohm.com.br/" rel="nofollow noreferrer">https://k8scifsvol.juliohm.com.br/</a></p> <p>and it's up and running.</p> <pre><code>apiVersion: apps/v1 kind: DaemonSet metadata: name: cifs-volumedriver-installer spec: selector: matchLabels: app: cifs-volumedriver-installer template: metadata: name: cifs-volumedriver-installer labels: app: cifs-volumedriver-installer spec: containers: - image: juliohm/kubernetes-cifs-volumedriver-installer:2.4 name: flex-deploy imagePullPolicy: Always securityContext: privileged: true volumeMounts: - mountPath: /flexmnt name: flexvolume-mount volumes: - name: flexvolume-mount hostPath: path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ </code></pre> <p>Next thing to do is add a PersistentVolume, but that needs a capacity, 1Gi in the example. Does this mean that we lose all data on the smb server? Why should there be a capacity for an already existing server?</p> <p>Also, how can we access a subdirectory of the mount /mnt/dfs from within the pod? So how to access data from /mnt/dfs/project1 in the pod?</p> <p>Do we even need a PV? Could the pod just read from the host's mounted share?</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mycifspv spec: capacity: storage: 1Gi flexVolume: driver: juliohm/cifs options: opts: sec=ntlm,uid=1000 server: my-cifs-host share: /MySharedDirectory secretRef: name: my-secret accessModes: - ReadWriteMany </code></pre>
<p>No, that field has no effect on the FlexVol plugin you linked. It doesn't even bother parsing out the size you pass in :)</p>
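<p>As for the subdirectory question: with this driver the path that gets mounted is whatever you put in the <code>share</code> option, so you can point a dedicated PersistentVolume at the subdirectory itself. An untested sketch, reusing the option names from the example in the question:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: project1-pv
spec:
  capacity:
    storage: 1Gi             # required by the API schema, but ignored by the plugin
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: sec=ntlm,uid=1000
      server: my-cifs-host
      share: /MySharedDirectory/project1   # mount the subdirectory directly
    secretRef:
      name: my-secret
  accessModes:
  - ReadWriteMany
</code></pre>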
<p>I'm working on migrating our services from an eks1.14 to an eks1.18 cluster. I see lots of errors on some of our deployments.</p> <p>Can someone please let me know how I can solve this error?</p> <pre><code>May 19th 2021, 10:56:30.297 io.fabric8.kubernetes.client.KubernetesClientException: too old resource version: 13899376 (13911551) at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onMessage(WatchConnectionManager.java:259) at okhttp3.internal.ws.RealWebSocket.onReadMessage(RealWebSocket.java:323) at okhttp3.internal.ws.WebSocketReader.readMessageFrame(WebSocketReader.java:219) at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:105) at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:274) at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:214) </code></pre>
<p>This is a standard behaviour of Kubernetes. When you ask to watch changes from a <code>resourceVersion</code> that is too old, i.e. one the API server can no longer report changes since (too many things have changed in the meantime), you get this error. So, you should avoid upgrading several versions at once. Try to update your cluster from 1.14 to 1.15, then from 1.15 to 1.16 and so on. You can also read more about a very similar problem <a href="https://stackoverflow.com/questions/61409596/kubernetes-too-old-resource-version">here</a>. There you can find another solution to your problem.</p> <p>Additionally, in the <a href="https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html" rel="nofollow noreferrer">documentation</a> of Amazon EKS, we can find:</p> <blockquote> <p>Updating a cluster from 1.16 to 1.17 will fail if you have any AWS Fargate pods that have a <code>kubelet</code> minor version earlier than 1.16. Before updating your cluster from 1.16 to 1.17, you need to recycle your Fargate pods so that their <code>kubelet</code> is 1.16 before attempting to update the cluster to 1.17.</p> </blockquote> <p>Based on this example and the huge amount of dependencies, it is a good idea to upgrade the cluster version by version.</p>
<p>I have a Spring Boot application, and I created a ConfigMap from my application.properties. I have some passwords and other sensitive data in this file and I would like to override some of the info there. How can I, for example, override the elasticsearch.url? I would like to override it via the command line, in the same way as overriding values.yaml. Is it possible to do this?</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: application data: application.properties: |- server.port = 8080 elasticsearch.baseuri = url </code></pre>
<p>If this configMap is part of your chart, you can just put your secret stuff inside {{ .Values.secretStuff }} and then override it with your helm install command.</p> <p>Example configMap:</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: application data: application.properties: |- server.port = 8080 elasticsearch.baseuri = {{ .Values.elasticSearch.baseUri }} </code></pre> <p>And your helm install command will be</p> <p><code>helm install chart ./mychart --set elasticSearch.baseUri=some-super-secret-stuff</code></p>
<p>How do you change the IP address of the master or any worker node?</p> <p>I have experimented with:</p> <pre><code>kubeadm init --control-plane-endpoint=cluster-endpoint --apiserver-advertise-address=&lt;x.x.x.x&gt; </code></pre> <p>And then I guess I need the new config with the right certificate:</p> <pre><code>sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config </code></pre> <p>I tried the following, suggested by Hajed.Kh:</p> <p>Changed the IP address in:</p> <pre><code>etcd.yaml (contained ip) kube-apiserver.yaml (contained ip) kube-controller-manager.yaml (not this one?) kube-scheduler.yaml (not this one?) </code></pre> <p>But I still get the same IP address in:</p> <pre><code>sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config </code></pre>
<p>The <code>apiserver-advertise-address</code> flag is located in the api-server manifest file, and all Kubernetes component manifests are located in <code>/etc/kubernetes/manifests/</code>. These static pod manifests are watched by the kubelet in real time, so change and save a file and the component will be redeployed instantly:</p> <pre><code>etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml </code></pre> <p>For the worker I think it will pick up the change automatically, as long as the kubelet is connected to the api-server.</p>
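<p>For illustration, the address appears as a command-line flag inside the static pod manifest. An abbreviated sketch of what <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> typically contains (most flags trimmed, IPs are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --advertise-address=10.0.0.10    # change this to the node's new IP
    - --secure-port=6443
    # ...remaining flags unchanged...
</code></pre>
<p>Keep in mind that the serving certificates and the kubeconfig files under <code>/etc/kubernetes/</code> also embed the old address, so they may need to be regenerated as well.</p>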
<p>I am trying to get kubernetes api server from a pod.</p> <p>Here is my pod configuration</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: test spec: schedule: &quot;*/5 * * * *&quot; jobTemplate: spec: template: spec: containers: - name: test image: test:v5 env: imagePullPolicy: IfNotPresent command: ['python3'] args: ['test.py'] restartPolicy: OnFailure </code></pre> <p>And here is my kubernetes-client python code inside test.py</p> <pre><code>from kubernetes import client, config # Configs can be set in Configuration class directly or using helper utility config.load_kube_config() v1 = client.CoreV1Api() print(&quot;Listing pods with their IPs:&quot;) ret = v1.list_pod_for_all_namespaces(watch=False) for i in ret.items: print(&quot;%s\t%s\t%s&quot; % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) </code></pre> <p>But i am getting this error:</p> <pre><code>kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found. </code></pre>
<p>When operating in-cluster you want <code>config.load_incluster_config()</code> instead. <code>load_kube_config()</code> looks for a kubeconfig file (<code>~/.kube/config</code> by default), which doesn't exist inside a pod. <code>load_incluster_config()</code> uses the service account token and CA certificate that Kubernetes mounts into every pod under <code>/var/run/secrets/kubernetes.io/serviceaccount/</code>. Note that the pod's service account also needs RBAC permission (e.g. a ClusterRole allowing <code>list</code> on <code>pods</code>) for the call in your script to succeed.</p>
<p>I'm new to kubernetes.</p> <p>Currently I'm trying to deploy a laravel app on kubernetes. I have set up 1 deployment yaml containing 2 containers (nginx and php-fpm) and a shared volume.</p> <p>Here's the full yaml:</p> <pre><code> apiVersion: v1 kind: Service metadata: name: operation-service labels: app: operation-service spec: type: NodePort selector: app: operation ports: - port: 80 targetPort: 80 protocol: TCP name: http - port: 443 targetPort: 443 protocol: TCP name: https - port: 9000 targetPort: 9000 protocol: TCP name: fastcgi --- # Create a pod containing the PHP-FPM application (my-php-app) # and nginx, each mounting the `shared-files` volume to their # respective /var/www/ directories. apiVersion: apps/v1 kind: Deployment metadata: name: operation spec: selector: matchLabels: app: operation replicas: 1 template: metadata: labels: app: operation spec: volumes: # Create the shared files volume to be used in both pods - name: shared-files emptyDir: {} # Secret containing - name: secret-volume secret: secretName: nginxsecret # Add the ConfigMap we declared for the conf.d in nginx - name: configmap-volume configMap: name: nginxconfigmap containers: # Our PHP-FPM application - image: asia.gcr.io/operations name: app volumeMounts: - name: shared-files mountPath: /var/www ports: - containerPort: 9000 lifecycle: postStart: exec: command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;cp -r /app/. 
/var/www&quot;] - image: nginx:latest name: nginx ports: - containerPort: 443 - containerPort: 80 volumeMounts: - name: shared-files mountPath: /var/www - mountPath: /etc/nginx/ssl name: secret-volume - mountPath: /etc/nginx/conf.d name: configmap-volume --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress annotations: nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/from-to-www-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/proxy-body-size: 100m spec: rules: - host: testing.com http: paths: - path: / backend: serviceName: operation-service servicePort: 443 </code></pre> <p>Here's my working <code>nginxconf</code>:</p> <pre><code>server { listen 80; listen [::]:80; # For https listen 443 ssl; listen [::]:443 ssl ipv6only=on; ssl_certificate /etc/nginx/ssl/tls.crt; ssl_certificate_key /etc/nginx/ssl/tls.key; server_name testing.com; root /var/www/public; index index.php index.html index.htm; location / { try_files $uri $uri/ /index.php$is_args$args; } location ~ \.php$ { try_files $uri /index.php =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; #fixes timeouts fastcgi_read_timeout 600; include fastcgi_params; } location ~ /\.ht { deny all; } location /.well-known/acme-challenge/ { root /var/www/letsencrypt/; log_not_found off; } error_log /var/log/nginx/laravel_error.log; access_log /var/log/nginx/laravel_access.log; } </code></pre> <p>After deploying, my app won't load on the web. 
Turns out the nginx log is returning:</p> <pre><code>[error] 19#19: *64 FastCGI sent in stderr: &quot;PHP message: PHP Fatal error: Uncaught ErrorException: file_put_contents(/app/storage/framework/views/ba2564046cc89e436fb993df6f661f314e4d2efb.php): failed to open stream: Permission denied in /var/www/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:185 </code></pre> <p>I know how to set up the volume correctly in local docker; how do I set the shared volume permissions in kubernetes correctly?</p>
<p>For anyone who is looking for an answer: I managed to set up kubernetes for our production server with php-fpm and nginx.</p> <p>It requires 2 images: one contains <code>php-fpm</code> and our code, the other is an <code>nginx</code> image with our conf in it.</p> <p>Also we have to set up a shared volume between those 2 images so both can access the files. What I was missing was the <code>postStart</code> command to do <code>chmod</code> and <code>php artisan optimize</code> to make sure I cleared the cache.</p> <p>For future reference, please do <code>kubectl logs &lt;pod-name&gt;</code> and <code>kubectl describe pods &lt;pod-name&gt;</code> to easily debug and see what happens in every pod.</p> <p>Here's the final working config, hope it helps someone in the future:</p> <pre><code> apiVersion: v1 kind: Service metadata: name: operation-service labels: app: operation-service spec: type: NodePort selector: app: operation ports: - port: 80 targetPort: 80 protocol: TCP name: http --- # Create a pod containing the PHP-FPM application (my-php-app) # and nginx, each mounting the `shared-files` volume to their # respective /var/www/ directories. apiVersion: apps/v1 kind: Deployment metadata: name: operation spec: selector: matchLabels: app: operation replicas: {{ .Values.replicaCount }} strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 50% type: RollingUpdate template: metadata: labels: app: operation spec: volumes: # Create the shared files volume to be used in both pods - name: shared-files emptyDir: {} securityContext: fsGroup: 82 containers: # Our PHP-FPM application - image: asia.gcr.io/3/operations:{{ .Values.version }} name: app envFrom: - configMapRef: name: prod - secretRef: name: prod volumeMounts: - name: shared-files mountPath: /var/www # Important! After this container has started, the PHP files # in our Docker image aren't in the shared volume. We need to # get them into the shared volume. 
If we tried to write directly # to this volume from our Docker image the files wouldn't appear # in the nginx container. # # So, after the container has started, copy the PHP files from this # container's local filesystem (/app -- added via the Docker image) # to the shared volume, which is mounted at /var/www. ports: - containerPort: 9000 name: fastcgi lifecycle: postStart: exec: command: - &quot;/bin/sh&quot; - &quot;-c&quot; - &gt; cp -r /app/. /var/www &amp;&amp; cd /var/www &amp;&amp; php artisan optimize &amp;&amp; php artisan migrate --force &amp;&amp; chgrp -R www-data /var/www/* &amp;&amp; chmod -R 775 /var/www/* # Our nginx container, which uses the configuration declared above, # along with the files shared with the PHP-FPM app. - image: asia.gcr.io/3/nginx:1.0 name: nginx ports: - containerPort: 80 volumeMounts: - name: shared-files mountPath: /var/www # We don't need this anymore as we're not using fastcgi straightaway # --- # apiVersion: v1 # kind: ConfigMap # metadata: # name: ingress-cm # data: # SCRIPT_FILENAME: &quot;/var/www/public/index.php$is_args$args&quot; --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: operation-ingress labels: app: operation-ingress annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/from-to-www-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/proxy-body-size: &quot;0&quot; cert-manager.io/cluster-issuer: letsencrypt-prod spec: tls: - hosts: - myservice.com.au secretName: kubernetes-tls rules: - host: myservice.com.au http: paths: - backend: serviceName: operation-service servicePort: 80 </code></pre>
<p>We are trying to create a namespace tied to a specific node pool. How can we achieve that on Azure Kubernetes?</p> <pre><code>error: Unable to create namespace with specific node pool. Ex: namespace for user nodepool1 </code></pre>
<p>Posting this as a community wiki, feel free to edit and expand it.</p> <p>As <a href="https://stackoverflow.com/users/5951680/luca-ghersi">Luca Ghersi</a> mentioned in comments, it's possible to have namespaces assigned to specific nodes. For this matter there's an admission controller <code>PodNodeSelector</code> (you can read about it in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podnodeselector" rel="nofollow noreferrer">kubernetes official documentation</a>).</p> <p>In short words:</p> <blockquote> <p>This admission controller defaults and limits what node selectors may be used within a namespace by reading a namespace annotation and a global configuration.</p> </blockquote> <p>Based on the <a href="https://learn.microsoft.com/en-us/azure/aks/faq#what-kubernetes-admission-controllers-does-aks-support-can-admission-controllers-be-added-or-removed" rel="nofollow noreferrer">Azure FAQ</a>, Azure AKS has this admission controller enabled by default.</p> <pre><code>AKS supports the following admission controllers: - NamespaceLifecycle - LimitRanger - ServiceAccount - DefaultStorageClass - DefaultTolerationSeconds - MutatingAdmissionWebhook - ValidatingAdmissionWebhook - ResourceQuota - PodNodeSelector - PodTolerationRestriction - ExtendedResourceToleration Currently, you can't modify the list of admission controllers in AKS. </code></pre>
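<p>In practice, with <code>PodNodeSelector</code> enabled you annotate the namespace, and every pod created in it inherits that node selector. A sketch (the label follows AKS's convention of labelling agent pools with <code>agentpool=&lt;pool-name&gt;</code>; adjust to your actual pool name):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: user-namespace           # placeholder name
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: agentpool=nodepool1
</code></pre>
<p>Pods scheduled in this namespace will then only land on nodes carrying the <code>agentpool=nodepool1</code> label.</p>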
<p>I have a create-react-app with the default configuration. I have some PORT and API values inside the <strong>.env file</strong>, configured with</p> <pre><code>REACT_APP_PORT=3000 </code></pre> <p>and used inside the app with process.env.REACT_APP_PORT.</p> <p>I have my server deployed on Kubernetes. Can someone explain how to configure my create-react-app to use environment variables provided by pods/containers?</p> <p>I want to access the cluster IP via the name given by <code>kubectl get svc</code>.</p> <p><strong>Update 1:</strong></p> <p>I have the opposite scenario: I don't want my frontend env variables to be configured in the Kubernetes pod container, but want to use the pod's env variables,</p> <p>e.g. CLUSTER_IP and CLUSTER_PORT, with their name defined by the pod's env variable, inside my react app.</p> <p>For example:</p> <pre><code>NAME TYPE CLUSTER-IP XYZ ClusterIP x.y.z.a </code></pre> <p>and I want to access XYZ in the react app to point to the Cluster IP (x.y.z.a).</p>
<p>Use Pod fields as values for environment variables:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: dapi-envars-fieldref spec: containers: - name: test-container image: k8s.gcr.io/busybox command: [ &quot;sh&quot;, &quot;-c&quot;] args: - while true; do echo -en '\n'; printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE; printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT; sleep 10; done; env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName restartPolicy: Never </code></pre> <p><a href="https://kubernetes.io/docs/tasks/inject-data-application/_print/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/_print/</a> Maybe the above example will help you.</p>
<p>I'm trying to find all the allowed formats used when specifying quotas, but all I can find are examples that may not cover all the cases. For instance:</p> <ul> <li>CPU limits are expressed in milliCPU (&quot;200m&quot;), but it seems plain integers are also acceptable and maybe even decimal numbers.</li> <li>Storage size examples are in &quot;Gi&quot;, but there could be other units?</li> </ul>
<p>The <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes" rel="nofollow noreferrer">Resource units in Kubernetes</a> section of the doc describes the meaning and the allowed values for the CPU and memory resources.</p> <p>To quote:</p> <p><strong>Meaning of CPU:</strong></p> <blockquote> <p>Limits and requests for CPU resources are measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers and 1 hyperthread on bare-metal Intel processors.</p> </blockquote> <p><strong>Allowed values:</strong></p> <blockquote> <p>Fractional requests are allowed. A Container with spec.containers[].resources.requests.cpu of 0.5 is guaranteed half as much CPU as one that asks for 1 CPU. The expression 0.1 is equivalent to the expression 100m, which can be read as &quot;one hundred millicpu&quot;. Some people say &quot;one hundred millicores&quot;, and this is understood to mean the same thing. A request with a decimal point, like 0.1, is converted to 100m by the API, and precision finer than 1m is not allowed. For this reason, the form 100m might be preferred.</p> <p>CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.</p> </blockquote> <p><strong>Meaning of memory and allowed values:</strong></p> <blockquote> <p>Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point number using one of these suffixes: E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same values: <code>128974848, 129e6, 129M, 123Mi</code></p> </blockquote>
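<p>Putting those formats together, all of the following are valid quantity notations in a container spec:</p>
<pre><code>resources:
  requests:
    cpu: 250m        # a quarter of a CPU, equivalent to 0.25
    memory: 64Mi     # power-of-two suffix (mebibytes)
  limits:
    cpu: &quot;0.5&quot;       # decimal form; converted to 500m by the API
    memory: 128M     # decimal suffix (megabytes) also works
</code></pre>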
<p>I am <a href="https://piotrminkowski.com/2021/02/18/blue-green-deployment-with-a-database-on-kubernetes/" rel="nofollow noreferrer">reading about blue green deployment with database changes on Kubernetes.</a> It explains very clearly and in detail how the process works:</p> <ol> <li>deploy new containers with the new versions while still directing traffic to the old containers</li> <li>migrate database changes and have the services point to the new database</li> <li>redirect traffic to the new containers and remove the old containers when there are no issues</li> </ol> <p>I have some questions particularly about the moment we switch from the old database to the new one.</p> <p>In step 3 of the article, we have <code>person-v1</code> and <code>person-v2</code> services that both still point to the unmodified version of the database (postgres v1):</p> <p><a href="https://i.stack.imgur.com/pCvg1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pCvg1.png" alt="before database migration" /></a></p> <p>From this picture, having <code>person-v2</code> point to the database is probably needed to establish a TCP connection, but it would likely fail due to incompatibility between the code and DB schema. But since all incoming traffic is still directed to <code>person-v1</code> this is not a problem.</p> <p>We now modify the database (to postgres v2) and switch the traffic to <code>person-v2</code> (step 4 in the article). <strong>I assume that both the DB migration and traffic switch happen at the same time?</strong> That means it is impossible for <code>person-v1</code> to communicate with postgres v2 or <code>person-v2</code> to communicate with postgres v1 at any point during this transition? Because this can obviously cause errors (i.e. 
inserting data in a column that doesn't exist yet/anymore).</p> <p><a href="https://i.stack.imgur.com/PcJO5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PcJO5.png" alt="after database migration" /></a></p> <p>If the above assumption is correct, then <strong>what happens if during the DB migration new data is inserted in postgres v1</strong>? Is it possible for data to become lost with unlucky timing? Just because the traffic switch happens at the same time as the DB switch, does not mean that any ongoing processes in <code>person-v1</code> can not still execute DB statements. It would seem to me that any new inserts/deletes/updates would need to propagate to postgres v2 as well for as long as the migration is still in progress.</p>
<p>Even when doing blue-green for the application servers, you still have to follow normal rules of DB schema compatibility. All schema changes need to be backwards compatible for whatever you consider one full release cycle to be. Both services talk to the same DB during the switchover time but thanks to careful planning each can understand the data from the other and all is well.</p>
<p>I am working on an application that, as far as I can see, is doing multiple health checks:</p> <ol> <li>DB readiness probe</li> <li>Another API dependency readiness probe</li> </ol> <p>When I look at cluster logs, I realize that my service, when it fails a DB check, just throws 500 and goes down. What I am failing to understand here is that if the DB was down or another API was down, and IF I do not have a readiness probe, then my container goes down anyway. Also, I will see that my application threw some 500s because the DB or another service was off.</p> <p>What is the benefit of the readiness probe if my container was going down anyway? Another question I have: is a health check something that I should consider only if I am deploying my service to a cluster? If it was not a cluster microservice environment, would that increase/decrease the benefits of performing health checks?</p>
<p>There are three types of probes that Kubernetes uses to check the health of a <code>Pod</code>:</p> <ul> <li><code>Liveness</code>: Tells Kubernetes that something went wrong inside the container, and it's better to restart it to see if Kubernetes can resolve the error.</li> <li><code>Readiness</code>: Tells Kubernetes that the <code>Pod</code> is ready to receive traffic. Sometimes something happens that doesn't wholly incapacitate the <code>Pod</code> but makes it impossible to fulfill the client's request. For example: losing connection to a database or a failure on a third party service. In this case, we don't want Kubernetes to reset the <code>Pod</code>, but we also don't wish for it to send it traffic that it can't fulfill. When a <code>Readiness</code> probe fails, Kubernetes removes the <code>Pod</code> from the service and stops communication with the <code>Pod</code>. Once the error is resolved, Kubernetes can add it back.</li> <li><code>Startup</code>: Tells Kubernetes when a <code>Pod</code> has started and is ready to receive traffic. These probes are especially useful on applications that take a while to begin. While the <code>Pod</code> initiates, Kubernetes doesn't send <code>Liveness</code> or <code>Readiness</code> probes. If it did, they might interfere with the app startup.</li> </ul> <p>You can get more information about how probes work on this link: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</a></p>
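<p>For illustration, a readiness probe is declared on the container spec. A minimal sketch (the <code>/healthz</code> path and port 8080 are assumptions; use whatever health endpoint your app actually exposes):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest   # placeholder image
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz     # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
</code></pre> <p>While this probe fails, the <code>Pod</code> keeps running but is removed from the endpoints of any <code>Service</code> selecting it, so it receives no traffic until the probe succeeds again.</p>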
<p>I have a monolithic application that is being broken down into domains that are microservices. The microservices live inside a kubernetes cluster using the istio service mesh. I'd like to start replacing the service components of the monolith little by little. Given the UI code is also running inside the cluster, microservices are inside the cluster, but the older web api is outside the cluster, is it possible to use a VirtualService to handle paths I specify to a service within the cluster, but then to forward or proxy the rest of the calls outside the cluster?</p> <p><a href="https://i.stack.imgur.com/MF03C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MF03C.png" alt="enter image description here" /></a></p>
<p>You will have to define a ServiceEntry so Istio will be aware of your external service. That ServiceEntry can be used as a destination in a VirtualService. <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#Destination" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/virtual-service/#Destination</a></p>
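<p>A minimal sketch of the two resources together (the hostnames, port, and service names are placeholders, not values from your setup):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: legacy-api
spec:
  hosts:
  - legacy-api.example.com      # placeholder: the old web API outside the cluster
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 80
    name: http
    protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api-routing
spec:
  hosts:
  - api.example.com             # placeholder: the host your UI calls
  http:
  - match:
    - uri:
        prefix: /new-domain     # paths already migrated to a microservice
    route:
    - destination:
        host: new-microservice  # in-cluster service
  - route:                      # everything else falls through to the monolith
    - destination:
        host: legacy-api.example.com
</code></pre> <p>As you migrate more domains into the cluster, you add more <code>match</code> blocks in front of the catch-all route.</p>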
<p>After applying these manifests <a href="https://github.com/prometheus-operator/kube-prometheus/blob/main/kustomization.yaml" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus/blob/main/kustomization.yaml</a> I want to create <code>AlertManager</code> webhook:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: custom spec: route: receiver: custom groupBy: ['job'] groupWait: 30s groupInterval: 5m repeatInterval: 12h receivers: - name: custom webhook_configs: - send_resolved: true url: https://example.com </code></pre> <p>getting an error:</p> <p><code>error: error validating &quot;alertmanagerconfig.yaml&quot;: error validating data: ValidationError(AlertmanagerConfig.spec.receivers[0]): unknown field &quot;webhook_configs&quot; in com.coreos.monitoring.v1alpha1.AlertmanagerConfig.spec.receivers; if you choose to ignore these errors, turn validation off with --validate=false</code></p> <p>How to fix that?</p>
<p>The problem is that you are using the configuration reference for the actual AlertManager application and not the one for the Kubernetes Custom Resource provided by the Prometheus Operator.</p> <p>The Prometheus Operator takes the configuration provided in the form of custom resources like AlertManagerConfig and converts them into actual AlertManager config and updates the configuration file the application uses. That is a part of the reason why you use an operator in the first place. It makes these things convenient for you.</p> <p>So the actual configuration reference you should be using here is <a href="https://docs.openshift.com/container-platform/4.7/rest_api/monitoring_apis/alertmanagerconfig-monitoring-coreos-com-v1alpha1.html" rel="nofollow noreferrer">this</a>. This <a href="https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions" rel="nofollow noreferrer">part</a> of the Prometheus Operator's github readme lists down the available custom resources you can use with it.</p>
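<p>Applied to the manifest from the question, the receiver fields become camelCase as defined by the CRD (<code>webhookConfigs</code> instead of <code>webhook_configs</code>, <code>sendResolved</code> instead of <code>send_resolved</code>):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: custom
spec:
  route:
    receiver: custom
    groupBy: ['job']
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 12h
  receivers:
  - name: custom
    webhookConfigs:        # camelCase, unlike the plain Alertmanager config file
    - sendResolved: true
      url: 'https://example.com'
</code></pre>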
<p>For a StatefulSet, I can access its pods via a headless service internally.</p> <p>I think it would make sense to also have an easy way to expose single pods externally (since the pods usually have state, and therefore load balancing over them makes no sense).</p> <p>So far, I have found no straightforward way to do that. Even doing <code>kubectl expose pod pod-1 --type NodePort</code> gives me a service which balances over all pods. Is there a reason why this is like this, or is there a nice way to access single pods?</p>
<p>You can expose a specific <code>Pod</code> in a <code>StatefulSet</code> externally by matching on the <code>statefulset.kubernetes.io/pod-name</code> label.</p> <p>For example, if your <code>StatefulSet</code> is named <code>app</code> and you want to expose port 80 from the first <code>Pod</code> as a <code>Service</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: app-0
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: app-0
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
</code></pre>
<p>Installing grafana using helm <a href="https://github.com/grafana/helm-charts" rel="nofollow noreferrer">charts</a>, the deployment goes well and the grafana ui is up, needed to add an existence persistence volume, ran the below cmd:</p> <pre><code>helm install grafana grafana/grafana -n prometheus --set persistence.enabled=true --set persistence.existingClaim=grafana-pvc </code></pre> <p>The init container crashes, with the below logs:</p> <pre><code>kubectl logs grafana-847b88556f-gjr8b -n prometheus -c init-chown-data chown: /var/lib/grafana: Operation not permitted chown: /var/lib/grafana: Operation not permitted </code></pre> <p>On checking the deployment yaml found this section:</p> <pre><code>initContainers: - command: - chown - -R - 472:472 - /var/lib/grafana image: busybox:1.31.1 imagePullPolicy: IfNotPresent name: init-chown-data resources: {} securityContext: runAsNonRoot: false runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/grafana name: storage restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 472 runAsGroup: 472 runAsUser: 472 serviceAccount: grafana serviceAccountName: grafana </code></pre> <p>Why is the operation failing though its running with <code>runAsUser: 0</code> ? and the pvc is having <code>access:ReadWriteMany</code>, any workaround ? Or am I missing something</p> <p>Thanks !!</p>
<p>NFS turns on <code>root_squash</code> mode by default which functionally disables uid 0 on clients as a superuser (maps those requests to some other UID/GID, usually 65534). You can disable this in your mount options, or use something other than NFS. I would recommend the latter, NFS is bad.</p>
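<p>For reference, squashing is controlled per export on the NFS server side. A sketch of an <code>/etc/exports</code> entry with it disabled (the path and client network are placeholders):</p> <pre><code>/srv/nfs/grafana  10.0.0.0/8(rw,sync,no_subtree_check,no_root_squash)
</code></pre> <p>After editing, re-export with <code>exportfs -ra</code>. Be aware that <code>no_root_squash</code> lets root on any client act as root on the share, which is a real security trade-off.</p>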
<p>We have two clusters. cluster1 has namespace- test1 and a service running as clusterip we have to call that service from another cluster(cluster2) from namespace dev1.</p> <p>I have defined externalname service in cluster2 pointing to another externalname service in cluster1. And externalname service in cluster1 points to the original service running as clusterip.</p> <p>In cluster2:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: service namespace: dev1 labels: app: service spec: selector: app: service type: ExternalName sessionAffinity: None externalName: service2.test.svc.cluster.local status: loadBalancer: {} </code></pre> <p>In cluster1:Externalname service</p> <pre><code>kind: Service apiVersion: v1 metadata: name: service2 namespace: test1 labels: app: service spec: selector: app: service type: ExternalName sessionAffinity: None externalName: service1.test1.svc.cluster.local status: loadBalancer: {} </code></pre> <p>in cluster1 clusterip service:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: service1 namespace: test1 labels: app: service1 spec: ports: - name: http protocol: TCP port: 9099 targetPort: 9099 selector: app: service1 clusterIP: 102.11.20.100 type: ClusterIP sessionAffinity: None status: loadBalancer: {} </code></pre> <p>But, there is no hit to the service in cluster1. I tried to add spec:port:9099 in externalname services as well, still it does not work.</p> <p>What could be the reason. Nothing specific in logs too</p>
<p>This is not what <code>ExternalName</code> services are for.</p> <p><code>ExternalName</code> services are used to have a cluster-internal service name that forwards traffic to another (internal or external) DNS name. In practice what an <code>ExternalName</code> does is create a CNAME record that maps the external DNS name to a cluster-local name. It does not expose anything out of your cluster. See the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">documentation</a>.</p> <p>What you need to do is expose your services outside of your Kubernetes clusters, and they will become usable from the other cluster as well.</p> <p>There are different ways of doing this. For example:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort service</code></a>: when using a NodePort, your service will be exposed on each node in the cluster on a random high port (by default in the 30000-32767 range). If your firewall allows traffic to that port, you can reach your service using it.</li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer service</code></a>: if you are running Kubernetes in an environment that supports load balancer allocation, you can expose your service to the internet using a load balancer.</li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>Ingress</code></a>: if you have an ingress controller running in your cluster, you can expose your workload using an <code>Ingress</code>.</li> </ul> <p>From the other cluster, you can then simply reach the exposed service.</p>
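<p>For example, the <code>service1</code> workload from the question could be exposed with a <code>NodePort</code> service like this (the <code>nodePort</code> value is an arbitrary pick from the default 30000-32767 range):</p> <pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
  name: service1-external
  namespace: test1
spec:
  type: NodePort
  selector:
    app: service1
  ports:
  - name: http
    protocol: TCP
    port: 9099
    targetPort: 9099
    nodePort: 30099   # arbitrary port in the default NodePort range
</code></pre> <p>Workloads in cluster2 can then reach <code>&lt;any-cluster1-node-ip&gt;:30099</code>, assuming the network between the clusters allows it.</p>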
<p>I delete and re-submit a job with the same name, and I often get a 409 HTTP error with a message that says that the object is being deleted -- my submit comes before the job object is removed.</p> <p>My current solution is to spin-try until I am able to submit a job. I don't like it. This looks quite ugly and I wonder if there's a way to call deletion routine in a way that waits till the object is completely deleted. According to <a href="https://stackoverflow.com/a/52900505/18775">this</a> kubectl waits till the object is actually deleted before returning from delete command. I wonder if there's an option for the Python client.</p> <p>Here's my spin-submit code (not runnable, sorry):</p> <pre><code># Set up client config.load_kube_config(context=context) configuration = client.Configuration() api_client = client.ApiClient(configuration) batch_api = client.BatchV1Api(api_client) job = create_job_definition(...) batch_api.delete_namespaced_job(job.metadata.name, &quot;my-namespace&quot;) for _ in range(50): try: return batch_api.create_namespaced_job(self.namespace, job) except kubernetes.client.rest.ApiException as e: body = json.loads(e.body) job_is_being_deleted = body[&quot;message&quot;].startswith(&quot;object is being deleted&quot;) if not job_is_being_deleted: raise time.sleep(0.05) </code></pre> <p>I wish it was</p> <pre><code>batch_api.delete_namespaced_job(job.metadata.name, &quot;my-namespace&quot;, wait=True) batch_api.create_namespaced_job(self.namespace, job) </code></pre> <p>I have found a similar question, and <a href="https://stackoverflow.com/a/65939132/18775">the answer suggests to use watch</a>, which means I need to start a watch in a separate thread, issue delete command, join the thread that waits till the deletion is confirmed by the watch -- seems like a lot of code for such a thing.</p>
<p>As you have already mentioned, <code>kubectl delete</code> has the <code>--wait</code> flag that does this exact job and is <code>true</code> by default.</p> <p>Let's have a look at the code and see how kubectl implements this. <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/cmd/delete/delete.go#L368-L378" rel="nofollow noreferrer">Source</a>.</p> <pre><code>waitOptions := cmdwait.WaitOptions{
    ResourceFinder: genericclioptions.ResourceFinderForResult(resource.InfoListVisitor(deletedInfos)),
    UIDMap:         uidMap,
    DynamicClient:  o.DynamicClient,
    Timeout:        effectiveTimeout,

    Printer:     printers.NewDiscardingPrinter(),
    ConditionFn: cmdwait.IsDeleted,
    IOStreams:   o.IOStreams,
}
err = waitOptions.RunWait()
</code></pre> <p>Additionally, here are the <a href="https://github.com/kubernetes/kubernetes/blob/d8f9e4587ac1265efd723bce74ae6a39576f2d58/staging/src/k8s.io/kubectl/pkg/cmd/wait/wait.go#L233-L265" rel="nofollow noreferrer">RunWait()</a> and <a href="https://github.com/kubernetes/kubernetes/blob/d8f9e4587ac1265efd723bce74ae6a39576f2d58/staging/src/k8s.io/kubectl/pkg/cmd/wait/wait.go#L268-L333" rel="nofollow noreferrer">IsDeleted()</a> function definitions.</p> <p>Now answering your question:</p> <blockquote> <p>[...] which means I need to start a watch in a separate thread, issue delete command, join the thread that waits till the deletion is confirmed by the watch -- seems like a lot of code for such a thing</p> </blockquote> <p>It does look like so - it's a lot of code, but I don't see any alternative. If you want to wait for the deletion to finish, you need to do it manually. There does not seem to be any other way around it.</p>
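<p>If you'd rather avoid the watch machinery, a lighter-weight option is to poll the read endpoint until the API returns 404, and only then re-create the job. A sketch, assuming the official Python client (whose <code>ApiException</code> exposes the HTTP status as a <code>status</code> attribute):</p>

```python
import time


def wait_for_job_deletion(batch_api, name, namespace, timeout=30.0, interval=0.2):
    """Block until the job is fully gone (the API returns 404) or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            batch_api.read_namespaced_job(name, namespace)
        except Exception as e:
            # kubernetes.client.rest.ApiException carries the HTTP status here
            if getattr(e, "status", None) == 404:
                return True  # object is fully deleted
            raise
        time.sleep(interval)
    return False
```

<p>The submit path then becomes: <code>delete_namespaced_job(...)</code>, <code>wait_for_job_deletion(...)</code>, <code>create_namespaced_job(...)</code>, with no create/except retry loop.</p>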
<p>I'm using newest springdoc library to create one common endpoint with all Swagger configurations in one place. There're a bunch of microservices deployed in Kubernetes, so having documentation in one place would be convenient. The easiest way to do that is by using sth like this (<a href="https://springdoc.org/faq.html#how-can-i-define-groups-using-applicationyml" rel="nofollow noreferrer">https://springdoc.org/faq.html#how-can-i-define-groups-using-applicationyml</a>):</p> <pre><code>springdoc: api-docs: enabled: true swagger-ui: disable-swagger-default-url: true urls: - name: one-service url: 'http://one.server/v3/api-docs' - name: second-service url: 'http://second.server/v3/api-docs' </code></pre> <p>and it works great, I can choose from list in upper right corner. The problem is that it doesn't work through proxy. According to documentation I need to set some headers (<a href="https://springdoc.org/faq.html#how-can-i-deploy-springdoc-openapi-ui-behind-a-reverse-proxy" rel="nofollow noreferrer">https://springdoc.org/faq.html#how-can-i-deploy-springdoc-openapi-ui-behind-a-reverse-proxy</a>) and it works for single service called directly. But when i try grouping described above, headers are not passed to one-service or second-service, and they generates documentation pointing to localhost.</p> <p>I suspect I need to use grouping (<a href="https://springdoc.org/faq.html#how-can-i-define-multiple-openapi-definitions-in-one-spring-boot-project" rel="nofollow noreferrer">https://springdoc.org/faq.html#how-can-i-define-multiple-openapi-definitions-in-one-spring-boot-project</a>) but I miss good example, how to achive similar effect (grouping documentation from different microservices). Examples shows only one external address, or grouping local endpoints. I hope, that using this approach, I'll be able to pass headers.</p>
<p>The properties <code>springdoc.swagger-ui.urls.*</code> are meant to reference external <code>/v3/api-docs</code> URLs, for example when you want to aggregate the endpoints of several other services inside one single application.</p> <p>They will not inherit your proxy configuration: the Swagger UI uses the server URLs advertised by each service itself at <a href="http://one.server/v3/api-docs" rel="nofollow noreferrer">http://one.server/v3/api-docs</a> and <a href="http://second.server/v3/api-docs" rel="nofollow noreferrer">http://second.server/v3/api-docs</a>.</p> <p>If you put a proxy in front of your services, it is up to each service to advertise the correct server URL you want to expose.</p> <p>If you want this to work out of the box, you can use a solution that handles proxying and routing, such as <a href="https://piotrminkowski.com/2020/02/20/microservices-api-documentation-with-springdoc-openapi/" rel="nofollow noreferrer">spring-cloud-gateway</a>.</p>
<p>I'm trying to install Gitlab Runner inside my cluster in Azure Kubernetes Service (AKS), but I have 2 errors:</p> <ol> <li><p>Helm Tiller doesn't appear in the application list of Gitlab CI: Most of tutorials tell that it has to be installed, but today it is not even proposed as you can see here: <a href="https://i.stack.imgur.com/FJRcb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FJRcb.png" alt="enter image description here" /></a></p> </li> <li><p>When I install gitlab-runner from this list, I have a message error like &quot;Something went wrong while installing Gitlab Runner Operation failed. Check pod logs for install-runner for more details&quot; So when I check the logs, I have this: <a href="https://i.stack.imgur.com/wbI0K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wbI0K.png" alt="enter image description here" /></a></p> </li> </ol> <p>The 2 last lines there is an error, some answers tell that I need to change the repo with Helm command, so I do that from the Azure CLI bash in the portal, but I still have the same error, I execute the code like this :</p> <pre><code>helm repo rm stable helm repo add stable https://charts.helm.sh/stable </code></pre> <p>And then I update, do I need to give more arguments in commands?</p>
<p>GitLab 13.12 (May 2021) does clearly mention:</p> <blockquote> <h2><a href="https://about.gitlab.com/releases/2021/05/22/gitlab-13-12-released/#helm-v2-support" rel="nofollow noreferrer">Helm v2 support</a></h2> </blockquote> <blockquote> <p>Helm v2 was <a href="https://helm.sh/blog/helm-v2-deprecation-timeline/" rel="nofollow noreferrer">officially deprecated</a> in November of 2020, with the <code>stable</code> repository being <a href="https://about.gitlab.com/blog/2020/11/09/ensure-auto-devops-work-after-helm-stable-repo/" rel="nofollow noreferrer">de-listed from the Helm Hub shortly thereafter</a>.</p> <p><strong>With the release of GitLab 14.0 (June 2021), which will include the 5.0 release of the <a href="https://docs.gitlab.com/charts/" rel="nofollow noreferrer">GitLab Helm chart</a>, Helm v2 will no longer be supported.</strong></p> <p>Users of the chart should <a href="https://helm.sh/docs/topics/v2_v3_migration/" rel="nofollow noreferrer">upgrade to Helm v3</a> to deploy GitLab 14.0 and above.</p> </blockquote> <p>So that is why Helm Tiller doesn't appear in the application list of Gitlab CI.</p>
<p>I am looking to spin up a specific number of pods that are independent and not load balanced. (The intent is to use these to send and receive certain traffic to and from some external endpoint.) The way I am planning to do this is to create the pods explicitly (yaml snippet as below)</p> <pre><code> apiVersion: v1 kind: Pod metadata: name: generator-agent-pod-1 labels: app: generator-agent version: v1 spec: containers: ... </code></pre> <p>(In this, the name will be auto-generated as <code>generator-agent-pod-1, generator-agent-pod-2</code>, etc.)</p> <p>I am then looking to create one service per pod: so essentially, there'll be a <code>generator-agent-service-1, generator-agent-service-2</code>, etc., and so I can then use the service to be able to reach the pod from outside.</p> <p>I now have two questions: 1. In the service, how do I select a specific pod by name (instead of by labels)? something equivalent to:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: generator-agent-service-1 labels: app: agent-service spec: type: NodePort ports: - port: 8085 protocol: TCP selector: metadata.name: generator-agent-pod-1 </code></pre> <p>(This service does not get any endpoints, so the selector is incorrect, I suppose.)</p> <ol start="2"> <li>Is there a better way to approach this problem (Kubernetes or otherwise)?</li> </ol> <p>Thanks!</p>
<p>I think you are using StatefulSet for controlling Pods. If so, you can use the label <code>statefulset.kubernetes.io/pod-name</code> to select pods in a service.</p> <p>For illustration:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: generator-agent-service-1
  labels:
    app: agent-service
spec:
  type: NodePort
  ports:
  - port: 8085
    protocol: TCP
  selector:
    statefulset.kubernetes.io/pod-name: generator-agent-pod-1
</code></pre>
<p>Is there a way to find the history of commands applied to the kubernetes cluster by kubectl? For example, I want to know the last applied command was</p> <pre><code>kubectl apply -f x.yaml </code></pre> <p>or</p> <pre><code>kubectl apply -f y.yaml </code></pre>
<p>You can use the <code>kubectl apply view-last-applied</code> command to find the last applied configuration:</p> <pre><code>➜  ~ kubectl apply view-last-applied --help
View the latest last-applied-configuration annotations by type/name or file.

The default output will be printed to stdout in YAML format. One can use -o option to change output format.

Examples:
  # View the last-applied-configuration annotations by type/name in YAML.
  kubectl apply view-last-applied deployment/nginx

  # View the last-applied-configuration annotations by file in JSON
  kubectl apply view-last-applied -f deploy.yaml -o json
[...]
</code></pre> <p>To get the full history from the beginning of the cluster's creation, you should use <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">audit logs</a>, as already mentioned in the comments by @Jonas.</p> <p>Additionally, if you adopt GitOps you can keep all your cluster state under version control, which lets you trace back all the changes made to your cluster.</p>
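<p>If you go the audit-log route on a self-managed cluster, the API server needs an audit policy file (plus the <code>--audit-policy-file</code> and <code>--audit-log-path</code> flags). A minimal policy sketch that records full request bodies for writes, so you can see who applied what:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# record full request/response bodies for writes
- level: RequestResponse
  verbs: ['create', 'update', 'patch', 'delete']
# keep reads cheap
- level: Metadata
  verbs: ['get', 'list', 'watch']
</code></pre> <p>Note that on managed offerings you usually cannot set API server flags yourself and have to enable the provider's audit-logging feature instead.</p>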
<p>I am new to the writing a custom controllers for the kubernetes and trying to understand this. I have started referring the sample-controller <a href="https://github.com/kubernetes/sample-controller" rel="nofollow noreferrer">https://github.com/kubernetes/sample-controller</a>.</p> <p>I want to extend the sample-controller to operate VM resource in cloud using kubernetes. It could create a Vm if new VM kind resource is detected. Update the sub resources or delete if user want.</p> <p>Schema should be like the below:</p> <pre><code>apiVersion: samplecontroller.k8s.io/v1alpha1 kind: VM metadata: name: sample spec: vmname: test-1 status: vmId: 1234-567-8910 cpuUtilization: 50 </code></pre> <p>Any suggestions or help is highly appreciable :)</p>
<p>Start from <a href="https://book.kubebuilder.io/" rel="nofollow noreferrer">https://book.kubebuilder.io/</a> instead. It's a much better jumping off point than sample-controller.</p>
<p>Am new to trino. Tried installing trino on kubernetes using the helm chart available under trinodb/charts.</p> <p>Coordinator and worker pods come up fine, but am unable to find catalog location. Checked the helm chart and it seems to not have it defined anywhere either.</p> <p>How did others who used the helm chart define new connectors and use.</p> <p>Any pointers ?</p>
<p>Catalogs have since been added to the chart; have a look at this pull request: <a href="https://github.com/trinodb/charts/pull/1" rel="nofollow noreferrer">https://github.com/trinodb/charts/pull/1</a></p>
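<p>For illustration, recent versions of the chart expose an <code>additionalCatalogs</code> value mapping a catalog name to the content of its properties file. Verify the exact key against the <code>values.yaml</code> of your chart version, since it has changed over time; the connection details below are placeholders:</p> <pre class="lang-yaml prettyprint-override"><code>additionalCatalogs:
  postgresql: |
    connector.name=postgresql
    connection-url=jdbc:postgresql://postgres.example:5432/mydb
    connection-user=trino
    connection-password=secret
</code></pre> <p>After a <code>helm upgrade</code> with these values, the new catalog should appear under <code>SHOW CATALOGS</code> once the coordinator and workers restart.</p>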
<p>I just want to list pods with their <code>.status.podIP</code> as an extra column. It seems that as soon as I specify <code>-o=custom-colums=</code> the default columns <code>NAME, READY, STATUS, RESTARTS, AGE</code> will disappear.</p> <p>The closest I was able to get is</p> <pre><code>kubectl get pod -o wide -o=custom-columns=&quot;NAME:.metadata.name,STATUS:.status.phase,RESTARTS:.status.containerStatuses[0].restartCount,PODIP:.status.podIP&quot; </code></pre> <p>but that is not really equivalent to the the default columns in the following way:</p> <ul> <li>READY: I don't know how to get the default output (which looks like <code>2/2</code> or <code>0/1</code> by using custom columns</li> <li>STATUS: In the default behaviour STATUS, can be Running, Failed, <strong>Evicted</strong>, but <code>.status.phase</code> will never be <code>Evicted</code>. It seems that the default STATUS is a combination of <code>.status.phase</code> and <code>.status.reason</code>. <strong>Is there a way to say show <code>.status.phase</code> if it's Running but if not show <code>.status.reason</code>?</strong></li> <li>RESTARTS: This only shows the restarts of the first container in the pod (I guess the sum of all containers would be the correct one)</li> <li>AGE: Again I don't know how to get the age of the pod using custom-columns</li> </ul> <p>Does anybody know the definitions of the default columns in custom-columns syntax?</p>
<p>I checked the differences in API requests between <code>kubectl get pods</code> and <code>kubectl get pods -o custom-columns</code>:</p> <ul> <li>With aggregation:</li> </ul> <pre><code>curl -k -v -XGET -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json' -H 'User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a' 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500'
</code></pre> <ul> <li>Without aggregation:</li> </ul> <pre><code>curl -k -v -XGET -H 'Accept: application/json' -H 'User-Agent: kubectl/v1.18.8 (linux/amd64) kubernetes/9f2892a' 'http://127.0.0.1:8001/api/v1/namespaces/default/pods?limit=500'
</code></pre> <p>So you will notice that when <code>-o custom-columns</code> is used, kubectl gets a <code>PodList</code> instead of a <code>Table</code> in the response body. A <code>PodList</code> does not carry that aggregated data, so to my understanding it is not possible to get the same output from <code>kubectl get pods</code> using <code>custom-columns</code>.</p> <p>Here's the <a href="https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/printers/internalversion/printers.go#L828" rel="nofollow noreferrer">code</a> snippet responsible for the output that you desire. A possible solution would be to fork the client and customize it to your own needs, since as you may have noticed this output requires some custom logic. Another possible solution would be to use one of the Kubernetes <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">API client libraries</a>. Lastly, you may want to try to <a href="https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/" rel="nofollow noreferrer">extend kubectl</a>'s functionality with <a href="https://github.com/ishantanu/awesome-kubectl-plugins" rel="nofollow noreferrer">kubectl plugins</a>.</p>
<p>We have setup Prometheus in a kubernetes cluster using PrometheusOperator. We are trying to configure AlertManager using the AlertManagerConfig custom resource. We tried creating an alert route which maps to a webhook receiver and then triggering a test alert. The alert seems to be caught by AlertManager but it is not being forwarded to the webhook endpoint. AlertManager pod logs are also not printing any logs regarding notifications being send to the receivers for an alert. Sharing the test config below:</p> <pre><code>apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: discord-config spec: receivers: - name: discord webhookConfigs: - url: '&lt;webhook-url&gt;' sendResolved: true route: groupBy: ['job'] groupWait: 15s groupInterval: 15s repeatInterval: 15s receiver: 'discord' --- apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: test-rules spec: groups: - name: test-rule-group rules: - alert: TestAlert expr: vector(1) labels: severity: medium annotations: description: &quot;This is a reciever test for webhook alert&quot; summary: &quot;This is a dummy summary&quot; </code></pre> <p>Is there anything else that needs to be taken care of for the receivers to start receiving alerts?</p>
<p>I was able to find the root cause of the issue. Actually, root causes: there were two problems.</p> <ol> <li><p>I was using a webhook to integrate with a Discord channel, which I later learned is not straightforward. A middle layer is required to parse webhook alerts and forward them to Discord in a compatible template. A good solution is already mentioned in the Prometheus <a href="https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver" rel="nofollow noreferrer">documentation</a>, which points to the <a href="https://github.com/benjojo/alertmanager-discord" rel="nofollow noreferrer">alertmanager-discord</a> application. I used its Docker image to create a deployment and a service which bridged Alertmanager to Discord.</p> </li> <li><p>The operator was adding an additional <code>namespace</code> label matcher in the topmost alert route, so I added the same label to the alerts I created. I used this <a href="https://prometheus.io/webtools/alerting/routing-tree-editor/" rel="nofollow noreferrer">Routing Tree editor</a> to visualize the routes and make sure a given set of labels matches a route.</p> </li> </ol> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: alertmanager-discord
spec:
  selector:
    matchLabels:
      app: alertmanager-discord
  replicas: 1
  template:
    metadata:
      labels:
        app: alertmanager-discord
    spec:
      containers:
      - name: alertmanager-discord
        image: benjojo/alertmanager-discord
        resources:
          limits:
            memory: &quot;128Mi&quot;
            cpu: &quot;500m&quot;
        ports:
        - containerPort: 9094
        env:
        - name: DISCORD_WEBHOOK
          value: {{ .Values.webhookURL }}
---
apiVersion: v1
kind: Service
metadata:
  name: alertmanager-discord
spec:
  selector:
    app: alertmanager-discord
  ports:
  - port: 9094
    targetPort: 9094
  type: ClusterIP
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager
spec:
  receivers:
  - name: discord
    webhookConfigs:
    - url: 'http://alertmanager-discord:9094'
      sendResolved: true
.
.
.
</code></pre>
<p>When using k3sup to set up k3s with Raspbian Buster on a Raspberry Pi 4B, it works (armv7 architecture; reference below). With an exactly similar setup procedure for agent nodes on Pi Zeros, and after running raspi-config, it errors with the following failures:</p> <pre><code>- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: missing (fail)
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
</code></pre> <p>A possible explanation may be that Zeros use the armv6 architecture, for which some reports mention it may not be supported. There are also conflicting reports that it has been made possible to run on Pi Zeros.</p> <pre><code>~excerpt from : https://groups.google.com/g/clusterhat/c/iUcfVqJ1aL0

pi@cnat:~ $ kubectl get node -o wide
NAME   STATUS   ROLES    AGE     VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
p4     Ready    &lt;none&gt;   20m     v1.17.2+k3s1   172.19.181.4    &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.97+         containerd://1.3.3-k3s1
p2     Ready    &lt;none&gt;   5m46s   v1.17.2+k3s1   172.19.181.2    &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.97+         containerd://1.3.3-k3s1
p1     Ready    &lt;none&gt;   12m     v1.17.2+k3s1   172.19.181.1    &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.97+         containerd://1.3.3-k3s1
cnat   Ready    master   31m     v1.17.2+k3s1   192.168.5.234   &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.97-v7+      containerd://1.3.3-k3s1
p3     Ready    &lt;none&gt;   114s    v1.17.2+k3s1   172.19.181.3    &lt;none&gt;        Raspbian GNU/Linux 10 (buster)   4.19.97+         containerd://1.3.3-k3s1
</code></pre> <p>Is there any configuration that will enable the k3s agent to successfully operate on a Raspberry Pi Zero (W)? If so, what is the node OS/version and k3s setup for this?
Any help appreciated as this has been some uphill battle.</p> <p>(following reference : <a href="https://alexellisuk.medium.com/walk-through-install-kubernetes-to-your-raspberry-pi-in-15-minutes-84a8492dc95a" rel="nofollow noreferrer">https://alexellisuk.medium.com/walk-through-install-kubernetes-to-your-raspberry-pi-in-15-minutes-84a8492dc95a</a> )</p>
<p>Unfortunately, k3s can't run on the Pi Zero, because the Pi Zero is based on ARMv6 and k3s only supports ARM starting from ARMv7 (see this GitHub issue: <a href="https://github.com/k3s-io/k3s/issues/2699" rel="nofollow noreferrer">https://github.com/k3s-io/k3s/issues/2699</a>).</p> <p>If you want a Raspberry Pi to run k3s, use a Raspberry Pi with an ARMv7 (or newer) CPU, e.g. a Raspberry Pi 4.</p>
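<p>For anyone debugging a similar setup, a quick way to check what a given board reports before attempting an install (a sketch; <code>armv6l</code> is what Raspbian reports on a Pi Zero / Zero W) is:</p>

```shell
# Print the CPU architecture reported by the kernel.
# k3s needs ARMv7 or newer, so "armv6l" means the install will fail.
arch=$(uname -m)
echo "detected architecture: $arch"
case "$arch" in
  armv6l) echo "unsupported: k3s does not run on ARMv6" ;;
  *)      echo "architecture is not ARMv6; k3s may be supported" ;;
esac
```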
<p>I'm trying to create a PostgreSQL database in a Kubernetes cluster on DigitalOcean. To do so, I've created a <code>StatefulSet</code> and a <code>Service</code>. To set up a volume to persist data, I took a look at the <a href="https://docs.digitalocean.com/products/kubernetes/how-to/add-volumes/" rel="nofollow noreferrer">Add Block Storage Volumes tutorial</a>. My k8s configurations for the <code>StatefulSet</code> and <code>Service</code> are down below.</p> <p>I simply used <code>volumeClaimTemplates</code>. The storage class <code>do-block-storage</code> exists in the cluster (<code>volumeBindingMode</code> is set to <code>Immediate</code>). The <code>pv</code> and the <code>pvc</code> are successfully created.</p> <blockquote> <p>A volumeClaimTemplates that is responsible for locating the block storage volume by name csi-pvc. If a volume by that name does not exist, one will be created.</p> </blockquote> <p>But my pod falls into a <strong>CrashLoopBackOff</strong>. I'm getting: <code>0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. Back-off restarting failed container</code></p> <p>It is also worth saying that my cluster only has one node.</p> <p>Can anyone please help me understand why? 
Thanks</p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: postgres-db spec: serviceName: postgres-db selector: matchLabels: role: db app: my-app replicas: 1 template: metadata: labels: role: db app: my-app spec: containers: - name: postgres image: postgres:13 imagePullPolicy: &quot;IfNotPresent&quot; ports: - containerPort: 5432 volumeMounts: - mountPath: &quot;/data&quot; name: csi-pvc - mountPath: &quot;/config&quot; name: postgres-config-map volumes: - name: postgres-config-map configMap: name: postgres-config volumeClaimTemplates: - metadata: name: csi-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi storageClassName: do-block-storage --- apiVersion: v1 kind: Service metadata: name: postgres-db labels: role: db app: my-app spec: selector: role: db app: my-app ports: - port: 5432 targetPort: 5432 type: ClusterIP </code></pre> <p><a href="https://i.stack.imgur.com/oYZIE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oYZIE.png" alt="enter image description here" /></a></p>
<p>I managed to fix my problem by creating the PVC first, as a separate object, instead of using <code>volumeClaimTemplates</code>.</p>
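<p>For reference, a sketch of that approach (resource names mirror the question and are illustrative): define the <code>PersistentVolumeClaim</code> as its own object, then reference it from the pod template with a regular <code>volumes</code> entry instead of <code>volumeClaimTemplates</code>:</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
# In the StatefulSet pod spec, drop volumeClaimTemplates and add:
#   volumes:
#     - name: csi-pvc
#       persistentVolumeClaim:
#         claimName: csi-pvc
```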
<p>I'm trying to remove a key/value pair from an existing deployment's <code>spec.selector.matchLabels</code> config. For example, I'm trying to remove the <code>some.old.label: blah</code> label from <code>spec.selector.matchLabels</code> and <code>spec.template.metadata.labels</code>. So this is an excerpt of what I'm sending to <code>kubectl apply -f</code>:</p> <pre class="lang-yaml prettyprint-override"><code> spec: selector: matchLabels: app: my-app template: metadata: labels: app: my-app </code></pre> <p>but that gives me the following error:</p> <blockquote> <p><code>selector</code> does not match template <code>labels</code></p> </blockquote> <p>I also tried <code>kubectl replace</code>, which gives me this error:</p> <blockquote> <p>v1.LabelSelector{MatchLabels:map[string]string{“app”: &quot;my-app&quot;}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable</p> </blockquote> <p>which makes sense once I checked the deployment's config in prod:</p> <pre class="lang-yaml prettyprint-override"><code>metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | # my config is trying to mutate the matchLabels here: {&quot;apiVersion&quot;:&quot;apps/v1&quot;, ... &quot;selector&quot;:{&quot;matchLabels&quot;:{&quot;app&quot;:&quot;my-app&quot;} ... } # etc... spec: selector: matchLabels: app: my-app some.old.label: blah # how do I remove this label from both the metadata.labels and matchLabels? template: metadata: labels: app: my-app some.old.label: blah # I want to remove this label </code></pre> <p>Notice how the <code>some.old.label: blah</code> key/value is set under <code>selector.matchLabels</code> and <code>template.metadata.labels</code>.</p> <p>Will I have to delete-then-recreate my deployment? 
Or perhaps call <code>kubectl replace --force</code>?</p> <h3>Notes</h3> <p>I came across this section in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates" rel="nofollow noreferrer">Kubernetes Deployment docs</a>:</p> <blockquote> <p>Selector removals removes an existing key from the Deployment selector -- do not require any changes in the Pod template labels. Existing ReplicaSets are not orphaned, and a new ReplicaSet is not created, but note that the removed label still exists in any existing Pods and ReplicaSets.</p> </blockquote> <p>as well as <a href="https://github.com/kubernetes/website/pull/3877" rel="nofollow noreferrer">this PR</a> and <a href="https://github.com/kubernetes/kubernetes/issues/50808" rel="nofollow noreferrer">this GitHub issue</a> which speak about the reasoning behind the problem, but I can't figure out <em>how</em> I can safely update my deployment to remove this selector.</p>
<p>When the error message says &quot;field is immutable&quot;, it means you can't change it once it's been set. You need to delete and recreate the Deployment with the label selector you want (which will also temporarily delete all of the matching Pods).</p> <pre class="lang-sh prettyprint-override"><code>kubectl delete deployment my-app kubectl apply -f ./deployment.yaml </code></pre>
<h2>Question</h2> <p>I am trying to learn Istio and I am setting up my Istio Ingress-Gateway. When I set that up, there are the following port options (<a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#ServicePort" rel="nofollow noreferrer">as indicated here</a>):</p> <ul> <li>Port</li> <li>NodePort</li> <li>TargetPort</li> </ul> <p><code>NodePort</code> makes sense to me. That is the port that the Ingress-Gateway will listen to on each worker node in the Kubernetes cluster. Requests that hit there are going to route into the Kubernetes cluster using the Ingress Gateway CRDs.</p> <p>In the examples, <code>Port</code> is usually set to the common port for its matching traffic (80 for http, and 443 for https, etc). I don't understand what Istio needs this port for, as I don't see any traffic using anything but the NodePort.</p> <p><code>TargetPort</code> is a mystery to me. I have seen some documentation on it for normal Istio Gateways (that says it is <a href="https://istio.io/latest/docs/reference/config/networking/gateway/#Port" rel="nofollow noreferrer">only applicable when using ServiceEntries</a>), but nothing that makes sense for an Ingress-Gateway.</p> <p><strong>My question is this, in relation to an Ingress-Gateway (not a normal Gateway) what is a <code>TargetPort</code>?</strong></p> <h2>More Details</h2> <p>In the end, I am trying to debug why my ingress traffic is getting a &quot;connection refused&quot; response.</p> <p>I setup my Istio Operator <a href="https://istio.io/latest/docs/setup/install/operator/" rel="nofollow noreferrer">following this tutorial</a> with this configuration:</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: name: istio-controlplane namespace: istio-system spec: components: ingressGateways: - enabled: true k8s: service: ports: - name: http2 port: 80 nodePort: 30980 hpaSpec: minReplicas: 2 name: istio-ingressgateway pilot: enabled: true k8s: hpaSpec: minReplicas: 2 
profile: default </code></pre> <p>I omitted the <code>TargetPort</code> from my config because I found <a href="https://istio.io/latest/news/releases/1.7.x/announcing-1.7/upgrade-notes/#gateways-run-as-non-root" rel="nofollow noreferrer">this release notes</a> that said that Istio will pick safe defaults.</p> <p>With that I tried to follow the steps found in <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">this tutorial</a>.</p> <p>I tried the curl command indicated in that tutorial:</p> <pre><code>curl -s -I -H Host:httpbin.example.com &quot;http://10.20.30.40:30980/status/200&quot; </code></pre> <p>I got the response of <code>Failed to connect to 10.20.30.40 port 30980: Connection refused</code></p> <p>But I can ping <code>10.20.30.40</code> fine, and the command to get the NodePort returns <code>30980</code>.</p> <p>So I got to thinking that maybe this is an issue with the <code>TargetPort</code> setting that I don't understand.</p> <p>A check of the <code>istiod</code> logs hinted that I may be on the right track. I ran:</p> <pre><code>kubectl logs -n istio-system -l app=istiod </code></pre> <p>and among the logs I found:</p> <pre><code>warn buildGatewayListeners: skipping privileged gateway port 80 for node istio-ingressgateway-dc748bc9-q44j7.istio-system as it is an unprivileged pod warn gateway has zero listeners for node istio-ingressgateway-dc748bc9-q44j7.istio-system </code></pre> <p>So, if you got this far, then WOW! I thank you for reading it all. If you have any suggestions on what I need to set TargetPort to, or if I am missing something else, I would love to hear it.</p>
<p>Port, NodePort, and TargetPort are not Istio concepts but Kubernetes ones, more specifically concepts of Kubernetes Services, which is why there is no detailed description of them in the Istio Operator API.</p>

<p>The Istio Operator API exposes the options to configure the (Kubernetes) Service of the Ingress Gateway.</p>

<p>For a description of those concepts, see the documentation for <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Service</a>.</p>

<p>See also <a href="https://stackoverflow.com/questions/49981601/difference-between-targetport-and-port-in-kubernetes-service-definition">Difference between targetPort and port in Kubernetes Service definition</a></p>

<p>So the target port is where the containers of the Ingress Gateway's Pod receive their traffic.</p>

<p>Therefore I think that the configuration of ports and target ports is application-specific, and the mapping 80-&gt;8080 is more or less arbitrary, i.e. a &quot;decision&quot; of the application.</p>

<p>Additional details:</p>

<p>The Istio Operator describes the Ingress Gateway, which itself consists of a Kubernetes Service and a Kubernetes Deployment. Usually it is deployed in istio-system. You can inspect the Kubernetes Service of istio-ingressgateway and it will match the specification of that YAML.</p>

<p>Therefore the Istio Ingress Gateway is actually talking to its containers. However, this is mostly an implementation detail of the Istio Ingress Gateway and is not related to the Service and VirtualService which you define for your apps.</p>

<p>The ingress gateway is itself a Service: it receives traffic on the port you define (e.g. 80) and forwards it to 8080 on its containers. It then processes the traffic according to the rules configured by Gateways and VirtualServices and sends it to the Service of the application.</p>
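<p>As a generic (non-Istio-specific) sketch of how the three fields relate in a Kubernetes Service: traffic hits <code>nodePort</code> on any node, is represented by the Service's own <code>port</code>, and is delivered to <code>targetPort</code> on the selected pods' containers. The values here are illustrative:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-gateway
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - name: http2
      port: 80          # port the Service itself exposes
      targetPort: 8080  # container port the traffic is forwarded to
      nodePort: 30980   # port opened on every cluster node
```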
<p>To use storage inside Kubernetes PODs I can use <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">volumes</a> and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">persistent volumes</a>. While the volumes like <code>emptyDir</code> are ephemeral, I could use <code>hostPath</code> and many other cloud based volume plugins which would provide a persistent solution in volumes itself.</p> <p>In that case why should I be using Persistent Volume then?</p>
<p>It is very important to understand the main differences between <code>Volumes</code> and <code>PersistentVolumes</code>. Both <code>Volumes</code> and <code>PersistentVolumes</code> are Kubernetes resources which provide an abstraction of a data storage facility.</p>

<ul>
<li><p><code>Volumes</code>: let your pod write to a filesystem that exists as long as the pod exists. They also let you share data between containers in the same pod, but data in that volume will be destroyed when the pod is restarted. A <code>Volume</code> decouples the storage from the Container. Its lifecycle is coupled to a pod.</p>
</li>
<li><p><code>PersistentVolumes</code>: serve as long-term storage in your Kubernetes cluster. They exist beyond containers, pods, and nodes. A pod uses a persistent volume claim to get read and write access to the persistent volume. A <code>PersistentVolume</code> decouples the storage from the Pod. Its lifecycle is independent. It enables safe pod restarts and sharing data between pods.</p>
</li>
</ul>

<p>When it comes to <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">hostPath</a>:</p>

<blockquote>
<p>A <code>hostPath</code> volume mounts a file or directory from the host node's filesystem into your Pod.</p>
</blockquote>

<p><code>hostPath</code> has its usage scenarios, but in general it is not recommended, for several reasons:</p>

<ul>
<li><p>Pods with identical configuration (such as created from a <code>PodTemplate</code>) may behave differently on different nodes due to different files on the nodes</p>
</li>
<li><p>The files or directories created on the underlying hosts are only writable by root. 
You either need to run your process as root in a privileged Container or modify the file permissions on the host to be able to write to a <code>hostPath</code> volume</p> </li> <li><p>You don't always directly control which node your pods will run on, so you're not guaranteed that the pod will actually be scheduled on the node that has the data volume.</p> </li> <li><p>If a node goes down you need the pod to be scheduled on other node where your locally provisioned volume will not be available.</p> </li> </ul> <p>The <code>hostPath</code> would be good if for example you would like to use it for log collector running in a <code>DaemonSet</code>.</p> <p>I recommend the <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html" rel="noreferrer">Kubernetes Volumes Guide</a> as a nice supplement to this topic.</p>
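<p>To make the distinction concrete, here is a minimal sketch (resource names and sizes are illustrative): the pod below mounts both a <code>PersistentVolumeClaim</code>-backed volume, whose data survives pod restarts, and an <code>emptyDir</code> volume, whose data is lost with the pod:</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: persistent-data   # outlives the pod
          mountPath: /data
        - name: scratch           # deleted with the pod
          mountPath: /scratch
  volumes:
    - name: persistent-data
      persistentVolumeClaim:
        claimName: data-pvc
    - name: scratch
      emptyDir: {}
```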
<ol> <li>How many CRs of a certain CRD can a k8s cluster handle?</li> <li>How many CRs can a certain controller (or Operator) reconcile?</li> </ol> <p>Thanks!</p>
<p><strong>1. How many CRs of a certain CRD can a k8s cluster handle?</strong></p>

<p>As many as your API server's <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#storage" rel="nofollow noreferrer">storage space allows</a>:</p>

<blockquote>
<p>Custom resources consume storage space in the same way that ConfigMaps do. Creating too many custom resources may overload your API server's storage space.</p>

<p>Aggregated API servers may use the same storage as the main API server, in which case the same warning applies.</p>
</blockquote>

<p><strong>2. How many CRs can a certain controller (or Operator) reconcile?</strong></p>

<p>A controller in Kubernetes keeps track of <strong>at least one resource type</strong>. There are many built-in controllers in Kubernetes: the replication controller, namespace controller, service account controller, etc. A Custom Resource Definition along with a custom controller makes up the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">Operator pattern</a>. It's hard to tell the maximum number of CRs that a given Operator could handle, as it depends on different aspects, such as the implementation of the Operator itself. 
For example, <a href="https://01.org/kubernetes/blogs/2020/using-kubernetes-custom-controller-manage-two-custom-resources-designing-akraino-icn-bpa" rel="nofollow noreferrer">this guide</a> shows that the Binary Provisioning Agent (BPA) Custom Controller can handle two or more custom resources.</p> <p>You can find some useful sources below:</p> <ul> <li><p><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#storage" rel="nofollow noreferrer">Custom Resources</a></p> </li> <li><p><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">Operator pattern</a></p> </li> <li><p><a href="https://01.org/kubernetes/blogs/2020/using-kubernetes-custom-controller-manage-two-custom-resources-designing-akraino-icn-bpa" rel="nofollow noreferrer">Using a Kubernetes custom controller to manage two custom resources</a></p> </li> <li><p><a href="https://admiralty.io/blog/2018/06/27/kubernetes-custom-resource-controller-and-operator-development-tools/" rel="nofollow noreferrer">Kubernetes Custom Resource, Controller and Operator Development Tools</a></p> </li> </ul>
<p>I deployed a Kubernetes cluster on Google Cloud using VMs and <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/setting-up-your-first-cluster.md" rel="nofollow noreferrer">Kubespray</a>.</p>

<p>Right now, I am looking to expose a simple Node app on an external IP using a <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#gcloud" rel="nofollow noreferrer">load balancer</a>, but assigning my external IP from gcloud to the service does not work. It stays in the pending state when I query <code>kubectl get services</code>.</p>

<p>According to <a href="https://stackoverflow.com/questions/44110876/kubernetes-service-external-ip-pending">this</a>, Kubespray does not have any load balancer mechanism included/integrated by default. How should I progress?</p>
<p>Let me start off by summarizing the problem we are trying to solve here.</p>

<p>The problem is that you have a self-hosted Kubernetes cluster and you want to be able to create a service of type=LoadBalancer, and you want k8s to create an LB for you with an external IP in a fully automated way, just like it would if you used GKE (a Kubernetes-as-a-service solution).</p>

<p>Additionally, I have to mention that I don't know much about kubespray, so I will only describe all the steps that need to be done to make it work, and leave the rest to you. So if you want to make changes in the kubespray code, it's on you. All the tests I did with a kubeadm cluster, but it should not be very difficult to apply this to kubespray.</p>

<hr />

<p>I will start off by summarizing all that has to be done into 4 steps:</p>

<ol>
<li>tagging the instances</li>
<li>enabling cloud-provider functionality</li>
<li>IAM and service accounts</li>
<li>additional info</li>
</ol>

<hr />

<p><strong>Tagging the instances</strong> All worker node instances on GCP have to be tagged with a unique tag that is the name of the instance; these tags are later used to create firewall rules and target lists for the LB. So let's say that you have an instance called <strong>worker-0</strong>; you need to tag that instance with the tag <code>worker-0</code>.</p>

<p>Otherwise it will result in an error (that can be found in the controller-manager logs):</p>

<pre><code>Error syncing load balancer: failed to ensure load balancer: no node tags supplied and also failed to parse the given lists of hosts for tags. 
Abort creating firewall rule
</code></pre>

<hr />

<p><strong>Enabling cloud-provider functionality</strong> K8s has to be informed that it is running in a cloud and which cloud provider that is, so that it knows how to talk to the API.</p>

<p>Otherwise, the controller manager logs will inform you that it won't create an LB:</p>

<pre><code>WARNING: no cloud provider provided, services of type LoadBalancer will fail
</code></pre>

<p>Controller Manager is responsible for the creation of a LoadBalancer. It can be passed the flag <code>--cloud-provider</code>. You can manually add this flag to the controller manager pod manifest file; or, as in your case since you are running kubespray, you can add this flag somewhere in the kubespray code (maybe it's already automated and just requires you to set some env var or something, but you need to find that out yourself).</p>

<p>Here is how this file looks with the flag:</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    ...
    - --cloud-provider=gce # &lt;----- HERE
</code></pre>

<p>As you can see, the value in our case is <code>gce</code>, which stands for Google Compute Engine. It informs k8s that it's running on GCE/GCP.</p>

<hr />

<p><strong>IAM and service accounts</strong> Now that you have your provider enabled and tags covered, I will talk about IAM and permissions.</p>

<p>For k8s to be able to create an LB in GCE, it needs to be allowed to do so. Every GCE instance has a default service account assigned. 
Controller Manager uses the instance's service account, stored within the <a href="https://cloud.google.com/compute/docs/storing-retrieving-metadata" rel="nofollow noreferrer">instance metadata</a>, to access the GCP API.</p>

<p>For this to happen you need to set the access scopes for the GCE instance (the master node; the one where controller manager is running) so it can use the Compute Engine API.</p>

<blockquote>
<p>Access scopes -&gt; Set access for each API -&gt; compute engine=Read Write</p>
</blockquote>

<p>To do this the instance has to be stopped, so stop it now. It's better to set these scopes during instance creation so that you don't need to take any unnecessary steps.</p>

<p>You also need to go to the IAM &amp; Admin page in the GCP Console and add permissions so that the master instance's service account has the <code>Kubernetes Engine Service Agent</code> role assigned. This is a predefined role that has many more permissions than you probably need, but I have found that everything works with this role, so I decided to use it for demonstration purposes; you probably want to follow the <em>least privilege rule</em> instead.</p>

<hr />

<p><strong>Additional info</strong> There is one more thing I need to mention. It does not impact you, but while testing I found out an interesting thing.</p>

<p>At first I created a one-node cluster (a single master node). Even though this is allowed from the k8s point of view, controller manager would not allow me to create an LB and point it to a master node where my application was running. This leads to the conclusion that one cannot use an LB with only a master node; at least one worker node has to be created.</p>

<hr />

<p>PS: I had to figure this out the hard way, by looking at logs, changing things, and looking at logs again to see if the issue got solved. I didn't find a single article/documentation page where it is all documented in one place. If you manage to solve it for yourself, write the answer for others. Thank you.</p>
<h1>What I wanna accomplish</h1> <p>I'm trying to connect an external HTTPS (L7) load balancer with an NGINX Ingress exposed as a zonal Network Endpoint Group (NEG). My Kubernetes cluster (in GKE) contains a couple of web application deployments that I've exposed as a ClusterIP service.</p> <p>I know that the NGINX Ingress object can be directly exposed as a TCP load balancer. But, this is not what I want. Instead in my architecture, I want to load balance the HTTPS requests with an external HTTPS load balancer. I want this external load balancer to provide SSL/TLS termination and forward HTTP requests to my Ingress resource.</p> <p>The ideal architecture would look like this:</p> <p>HTTPS requests --&gt; external HTTPS load balancer --&gt; HTTP request --&gt; NGINX Ingress zonal NEG --&gt; appropriate web application</p> <p>I'd like to add the zonal NEGs from the NGINX Ingress as the backends for the HTTPS load balancer. This is where things fall apart.</p> <h1>What I've done</h1> <p><strong>NGINX Ingress config</strong></p> <p>I'm using the default NGINX Ingress config from the official kubernetes/ingress-nginx project. Specifically, this YAML file <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/cloud/deploy.yaml</a>. 
Note that, I've changed the NGINX-controller Service section as follows:</p> <ul> <li><p>Added NEG annotation</p> </li> <li><p>Changed the Service type from <code>LoadBalancer</code> to <code>ClusterIP</code>.</p> </li> </ul> <pre><code># Source: ingress-nginx/templates/controller-service.yaml apiVersion: v1 kind: Service metadata: annotations: # added NEG annotation cloud.google.com/neg: '{&quot;exposed_ports&quot;: {&quot;80&quot;:{&quot;name&quot;: &quot;NGINX_NEG&quot;}}}' labels: helm.sh/chart: ingress-nginx-3.30.0 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.46.0 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx spec: type: ClusterIP ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/component: controller --- </code></pre> <p><strong>NGINX Ingress routing</strong></p> <p>I've tested the path based routing rules for the NGINX Ingress to my web applications independently. This works when the NGINX Ingress is configured with a TCP Load Balancer. I've set up my application Deployment and Service configs the usual way.</p> <p><strong>External HTTPS Load Balancer</strong></p> <p>I created an external HTTPS load balancer with the following settings:</p> <ul> <li>Backend: added the zonal NEGs named <code>NGINX_NEG</code> as the backends. The backend is configured to accept HTTP requests on port 80. I configured the health check on the serving port via the TCP protocol. 
I added the firewall rules to allow incoming traffic from <code>130.211.0.0/22</code> and <code>35.191.0.0/16</code> as mentioned here <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#traffic_does_not_reach_the_endpoints" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#traffic_does_not_reach_the_endpoints</a></li> </ul> <h1>What's not working</h1> <p>Soon after the external load balancer is set up, I can see that GCP creates a new endpoint under one of the zonal NEGs. But this shows as &quot;Unhealthy&quot;. Requests to the external HTTPS load balancer return a 502 error.</p> <ul> <li><p>I'm not sure where to start debugging this configuration in GCP logging. I've enabled logging for the health check but nothing shows up in the logs.</p> </li> <li><p>I configured the health check on the <code>/healthz</code> path of the NGINX Ingress controller. That didn't seem to work either.</p> </li> </ul> <p>Any tips on how to get this to work will be much appreciated. 
Thanks!</p> <p>Edit 1: As requested, I ran the <code>kubectl get svcneg -o yaml --namespace=&lt;namespace&gt;</code>, here's the output</p> <pre><code>apiVersion: networking.gke.io/v1beta1 kind: ServiceNetworkEndpointGroup metadata: creationTimestamp: &quot;2021-05-07T19:04:01Z&quot; finalizers: - networking.gke.io/neg-finalizer generation: 418 labels: networking.gke.io/managed-by: neg-controller networking.gke.io/service-name: ingress-nginx-controller networking.gke.io/service-port: &quot;80&quot; name: NGINX_NEG namespace: ingress-nginx ownerReferences: - apiVersion: v1 blockOwnerDeletion: false controller: true kind: Service name: ingress-nginx-controller uid: &lt;unique ID&gt; resourceVersion: &quot;2922506&quot; selfLink: /apis/networking.gke.io/v1beta1/namespaces/ingress-nginx/servicenetworkendpointgroups/NGINX_NEG uid: &lt;unique ID&gt; spec: {} status: conditions: - lastTransitionTime: &quot;2021-05-07T19:04:08Z&quot; message: &quot;&quot; reason: NegInitializationSuccessful status: &quot;True&quot; type: Initialized - lastTransitionTime: &quot;2021-05-07T19:04:10Z&quot; message: &quot;&quot; reason: NegSyncSuccessful status: &quot;True&quot; type: Synced lastSyncTime: &quot;2021-05-10T15:02:06Z&quot; networkEndpointGroups: - id: &lt;id1&gt; networkEndpointType: GCE_VM_IP_PORT selfLink: https://www.googleapis.com/compute/v1/projects/&lt;project&gt;/zones/us-central1-a/networkEndpointGroups/NGINX_NEG - id: &lt;id2&gt; networkEndpointType: GCE_VM_IP_PORT selfLink: https://www.googleapis.com/compute/v1/projects/&lt;project&gt;/zones/us-central1-b/networkEndpointGroups/NGINX_NEG - id: &lt;id3&gt; networkEndpointType: GCE_VM_IP_PORT selfLink: https://www.googleapis.com/compute/v1/projects/&lt;project&gt;/zones/us-central1-f/networkEndpointGroups/NGINX_NEG </code></pre>
<p>As per my understanding, your issue is: when an external load balancer is set up, GCP creates a new endpoint under one of the zonal NEGs, the endpoint shows &quot;Unhealthy&quot;, and requests to the external HTTPS load balancer return a 502 error.</p>

<p>Essentially, the Service's annotation, <code>cloud.google.com/neg: '{&quot;ingress&quot;: true}'</code>, enables container-native load balancing. After creating the Ingress, an HTTP(S) load balancer is created in the project, and NEGs are created in each zone in which the cluster runs. The endpoints in the NEG and the endpoints of the Service are kept in sync. Refer to link [1].</p>

<p>New endpoints generally become reachable after attaching them to the load balancer, provided that they respond to health checks. You might encounter 502 errors or rejected connections if traffic cannot reach the endpoints.</p>

<p>One of your endpoints in a zonal NEG is showing unhealthy, so please confirm the status of the other endpoints and how many endpoints are spread across the zones in the backend. If all backends are unhealthy, then your firewall, Ingress, or Service might be misconfigured.</p>

<p>You can run the following command to check whether your endpoints are healthy (see link [2]): <code>gcloud compute network-endpoint-groups list-network-endpoints NAME --zone=ZONE</code></p>

<p>To troubleshoot traffic that is not reaching the endpoints, verify that the health check firewall rules allow incoming TCP traffic to your endpoints in the 130.211.0.0/22 and 35.191.0.0/16 ranges. But as you mentioned, you have already configured this rule correctly. Please refer to link [3] for health check configuration.</p>

<p>Run a curl command against your LB IP to check for responses:<br /> <code>curl [LB IP]</code></p>

<p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb</a></p>

<p>[2] <a href="https://cloud.google.com/load-balancing/docs/negs/zonal-neg-concepts#troubleshooting" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/negs/zonal-neg-concepts#troubleshooting</a></p>

<p>[3] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks</a></p>
<p>I have a MySQL instance running inside a DigitalOcean droplet. Originally, we also had a Laravel application running inside that droplet with the MySQL instance, but now we want to move our application to Kubernetes.</p>

<p>The application has been deployed to Kubernetes and we are trying to connect the Laravel application to the MySQL instance inside that droplet for the purpose of testing, but we keep getting the error:</p>

<p>Host '46.101.81.14' is not allowed to connect to this MySQL server</p>

<p>That is not the IP address I specified as the host, and it is not the IP address of my Kubernetes load balancer either.</p>

<p>These are the steps I took to enable remote access to the database:</p>

<ul>
<li>set bind address for MySQL to 0.0.0.0</li>
<li>CREATE USER 'someuser'@'localhost' IDENTIFIED BY 'password';</li>
<li>GRANT ALL ON databasename.* TO remoteuser@'ipaddressofk8s_lb' IDENTIFIED BY 'password';</li>
<li>sudo ufw allow from ipaddress_of_k8s_lb to any port 3306</li>
</ul>

<p>Please, what could I be missing?</p>
<p>You need to allow access from the address your connections actually arrive from. Traffic leaving a Kubernetes pod is NATed through the worker node it runs on, not through the load balancer (the load balancer only handles inbound traffic), so <code>46.101.81.14</code> is most likely the public IP of one of your cluster nodes. Grant the MySQL user access from your node IPs and open the <code>ufw</code> firewall for those addresses as well. This guide may help: <a href="https://www.digitalocean.com/community/tutorials/how-to-allow-remote-access-to-mysql" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-allow-remote-access-to-mysql</a></p>
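<p>For example, a grant matching the host shown in your error message would look something like this (the IP is just the one from the error — substitute the actual addresses your connections arrive from):</p>

```sql
CREATE USER 'remoteuser'@'46.101.81.14' IDENTIFIED BY 'password';
GRANT ALL ON databasename.* TO 'remoteuser'@'46.101.81.14';
FLUSH PRIVILEGES;
```

<p>Note that node IPs can change as the cluster scales or recycles nodes, so granting from a subnet range, or using <code>'%'</code> combined with a strong password and TLS, may be more practical.</p>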
<p>I have an application that needs to know its assigned NodePort. Unfortunately it is not possible to write a file to a mountable volume and then read that file in the main container. Therefore, I'm looking for a way to either have the initContainer set an environment variable that gets passed to the main container, or to modify the main pod's launch command to add an additional CLI argument. I can't seem to find any examples or documentation that would lead me to this answer. TIA.</p>
<p>There's no direct way so you have to get creative. For example you can make a shared emptyDir mount that both containers can access, have the initContainer write <code>export FOO=bar</code> to that file, and then change the main container command to something like <code>[bash, -c, &quot;source /thatfile &amp;&amp; exec originalcommand&quot;]</code></p>
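<p>A minimal sketch of that pattern (the image names, file path, and final command are placeholders):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-from-init
spec:
  volumes:
  - name: shared-env
    emptyDir: {}
  initContainers:
  - name: resolve-env
    image: busybox   # placeholder
    # Write the variable(s) the main container needs into the shared volume
    command: [sh, -c, "echo 'export FOO=bar' > /shared/envfile"]
    volumeMounts:
    - name: shared-env
      mountPath: /shared
  containers:
  - name: main
    image: myapp:latest   # placeholder
    # Source the file, then exec the original entrypoint so signal handling is preserved
    command: [bash, -c, "source /shared/envfile && exec originalcommand"]
    volumeMounts:
    - name: shared-env
      mountPath: /shared
```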
<p>I have three services that I need to expose via the Istio ingress gateway. I have set up those services' DNS records to point to the ingress gateway load balancer, but I have not succeeded in making it work.</p> <p>The gateway and virtual service config file:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - &quot;*.mywebsite.io&quot;
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
  - &quot;*.mywebsite.io&quot;
  gateways:
  - test-gateway
  http:
  - name: &quot;api-gateway&quot;
    match:
    - uri:
        exact: &quot;gateway.mywebsite.io&quot;
    route:
    - destination:
        host: gateway.default.svc.cluster.local
        port:
          number: 8080
  - name: &quot;visitor-service&quot;
    match:
    - uri:
        exact: &quot;visitor-service.mywebsite.io&quot;
    route:
    - destination:
        host: visitor-service.default.svc.cluster.local
        port:
          number: 8000
  - name: &quot;auth-service&quot;
    match:
    - uri:
        exact: &quot;auth-service.mywebsite.io&quot;
    route:
    - destination:
        host: auth-service.default.svc.cluster.local
        port:
          number: 3004
</code></pre>
<p>I guess the URI part of the <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest" rel="nofollow noreferrer">HTTPMatchRequest</a> does not work that way. Try adding a VirtualService for each subdomain, i.e. something like:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gateway-virtualservice
spec:
  hosts:
  - &quot;gateway.mywebsite.io&quot;
  gateways:
  - test-gateway
  http:
  - name: &quot;api-gateway&quot;
    match:
    - uri:
        exact: &quot;/&quot; # or prefix
    route:
    - destination:
        host: gateway.default.svc.cluster.local
        port:
          number: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: visitor-virtualservice
spec:
  hosts:
  - &quot;visitor-service.mywebsite.io&quot;
  gateways:
  - test-gateway
  http:
  - name: &quot;visitor-service&quot;
    match:
    - uri:
        exact: &quot;/&quot;
    route:
    - destination:
        host: visitor-service.default.svc.cluster.local
        port:
          number: 8000
</code></pre>
<p>I tried with this <code>Gateway</code> and <code>VirtualService</code>; it didn't work.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: stomp
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: stomp
      protocol: TCP
    hosts:
    - rmq-stomp.mycompany.com
</code></pre> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rmq-stomp
spec:
  hosts:
  - rmq-stomp.mycompany.com
  gateways:
  - stomp
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 61613
        host: rabbitmq.default.svc.cluster.local
</code></pre> <p>There's no problem with the service, because when I tried to connect from another pod, it connected.</p>
<p>Use <code>tcp.match</code>, not <code>http.match</code>. Here is the example I found in the <a href="https://istio.io/latest/docs/reference/config/networking/gateway/" rel="nofollow noreferrer">istio gateway docs</a> and in the <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#TCPRoute" rel="nofollow noreferrer">istio virtualservice docs</a>:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo-mongo
  namespace: bookinfo-namespace
spec:
  hosts:
  - mongosvr.prod.svc.cluster.local # name of internal Mongo service
  gateways:
  - some-config-namespace/my-gateway # can omit the namespace if gateway is in same namespace as virtual service.
  tcp:
  - match:
    - port: 27017
    route:
    - destination:
        host: mongo.prod.svc.cluster.local
        port:
          number: 5555
</code></pre> <p>So yours would look something like:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rmq-stomp
spec:
  hosts:
  - rmq-stomp.mycompany.com
  gateways:
  - stomp
  tcp:
  - match:
    - port: 80
    route:
    - destination:
        host: rabbitmq.default.svc.cluster.local
        port:
          number: 61613
</code></pre> <hr /> <p>Here is a similar question answered: <a href="https://stackoverflow.com/questions/54492068/how-to-configure-istios-virtualservice-for-a-service-which-exposes-multiple-por">how-to-configure-istios-virtualservice-for-a-service-which-exposes-multiple-por</a></p>
<p>I have a 3-node K3s cluster with Linkerd and the NGINX Ingress Controller. I installed Linkerd with a default configuration:</p> <pre><code>linkerd install | kubectl apply -f -
</code></pre> <p>Then, to install the NGINX Ingress Controller, I used helm with a default configuration as well:</p> <pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
</code></pre> <p>I can access the Linkerd dashboard by calling <code>linkerd viz dashboard</code>, but I'd like to expose the dashboard with an Ingress definition. I modified the example YAML file from Linkerd's website located <a href="https://linkerd.io/2.10/tasks/exposing-dashboard/#nginx" rel="nofollow noreferrer">here</a>, so that I could use a prefix path. In the end, my YAML file looked like this:</p> <pre><code>apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: web-ingress-auth
  namespace: linkerd-viz
data:
  auth: YWRtaW46JGFwcjEkbjdDdTZnSGwkRTQ3b2dmN0NPOE5SWWpFakJPa1dNLgoK
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: linkerd-viz
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/upstream-vhost: $service_name.$namespace.svc.cluster.local:8084
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Origin &quot;&quot;;
      proxy_hide_header l5d-remote-ip;
      proxy_hide_header l5d-server-id;
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: web-ingress-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
spec:
  rules:
  - http:
      paths:
      - path: /linkerd
        pathType: Prefix
        backend:
          serviceName: web
          servicePort: 8084
</code></pre> <p>For any of my other custom microservices, I can simply access them via the public IP address of my nginx ingress service. I can get this IP like this:</p> <pre><code>kubectl describe svc ingress-nginx-controller | grep &quot;LoadBalancer Ingress&quot; | awk '{ print $3 }'
</code></pre> <p>When I try accessing the dashboard at http://EXPOSED_IP/linkerd, I am prompted to enter my username and password (both <strong>admin</strong> by default), but then I get a 404 not found error.</p> <p>Does anybody know what could be the issue? Thank you very much!</p>
<p>The dashboard is never going to work under a sub-path; it needs to be served at the root of the URL. So change the path to '/' and it should work fine. I'm happy to try it out locally if that doesn't work.</p>
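<p>Concretely, the <code>rules</code> section of the Ingress from the question would become something like:</p>

```yaml
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: web
          servicePort: 8084
```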
<p>I am getting an issue while terminating a namespace in the cluster; it is stuck showing the following in the namespace JSON. I followed this link <a href="https://medium.com/@craignewtondev/how-to-fix-kubernetes-namespace-deleting-stuck-in-terminating-state-5ed75792647e" rel="noreferrer">https://medium.com/@craignewtondev/how-to-fix-kubernetes-namespace-deleting-stuck-in-terminating-state-5ed75792647e</a></p> <pre><code>&quot;spec&quot;: {},
&quot;status&quot;: {
  &quot;conditions&quot;: [
    {
      &quot;lastTransitionTime&quot;: &quot;2021-01-11T08:41:48Z&quot;,
      &quot;message&quot;: &quot;All resources successfully discovered&quot;,
      &quot;reason&quot;: &quot;ResourcesDiscovered&quot;,
      &quot;status&quot;: &quot;False&quot;,
      &quot;type&quot;: &quot;NamespaceDeletionDiscoveryFailure&quot;
    },
    {
      &quot;lastTransitionTime&quot;: &quot;2021-01-11T08:41:48Z&quot;,
      &quot;message&quot;: &quot;All legacy kube types successfully parsed&quot;,
      &quot;reason&quot;: &quot;ParsedGroupVersions&quot;,
      &quot;status&quot;: &quot;False&quot;,
      &quot;type&quot;: &quot;NamespaceDeletionGroupVersionParsingFailure&quot;
    },
    {
      &quot;lastTransitionTime&quot;: &quot;2021-01-11T08:41:48Z&quot;,
      &quot;message&quot;: &quot;All content successfully deleted, may be waiting on finalization&quot;,
      &quot;reason&quot;: &quot;ContentDeleted&quot;,
      &quot;status&quot;: &quot;False&quot;,
      &quot;type&quot;: &quot;NamespaceDeletionContentFailure&quot;
    },
    {
      &quot;lastTransitionTime&quot;: &quot;2021-01-11T08:42:09Z&quot;,
      &quot;message&quot;: &quot;All content successfully removed&quot;,
      &quot;reason&quot;: &quot;ContentRemoved&quot;,
      &quot;status&quot;: &quot;False&quot;,
      &quot;type&quot;: &quot;NamespaceContentRemaining&quot;
    },
    {
      &quot;lastTransitionTime&quot;: &quot;2021-01-11T08:41:48Z&quot;,
      &quot;message&quot;: &quot;All content-preserving finalizers finished&quot;,
      &quot;reason&quot;: &quot;ContentHasNoFinalizers&quot;,
      &quot;status&quot;: &quot;False&quot;,
      &quot;type&quot;: &quot;NamespaceFinalizersRemaining&quot;
    }
  ],
  &quot;phase&quot;: &quot;Terminating&quot;
}
}
</code></pre>
<p>I have found the answer to terminate the stuck namespaces.</p> <pre class="lang-sh prettyprint-override"><code>for ns in $(kubectl get ns --field-selector status.phase=Terminating -o jsonpath='{.items[*].metadata.name}')
do
  kubectl get ns $ns -ojson | jq '.spec.finalizers = []' | kubectl replace --raw &quot;/api/v1/namespaces/$ns/finalize&quot; -f -
done

for ns in $(kubectl get ns --field-selector status.phase=Terminating -o jsonpath='{.items[*].metadata.name}')
do
  kubectl get ns $ns -ojson | jq '.metadata.finalizers = []' | kubectl replace --raw &quot;/api/v1/namespaces/$ns/finalize&quot; -f -
done
</code></pre>
<p>I am <a href="https://piotrminkowski.com/2021/02/18/blue-green-deployment-with-a-database-on-kubernetes/" rel="nofollow noreferrer">reading about blue green deployment with database changes on Kubernetes.</a> It explains very clearly and in detail how the process works:</p> <ol> <li>deploy new containers with the new versions while still directing traffic to the old containers</li> <li>migrate database changes and have the services point to the new database</li> <li>redirect traffic to the new containers and remove the old containers when there are no issues</li> </ol> <p>I have some questions particularly about the moment we switch from the old database to the new one.</p> <p>In step 3 of the article, we have <code>person-v1</code> and <code>person-v2</code> services that both still point to the unmodified version of the database (postgres v1):</p> <p><a href="https://i.stack.imgur.com/pCvg1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pCvg1.png" alt="before database migration" /></a></p> <p>From this picture, having <code>person-v2</code> point to the database is probably needed to establish a TCP connection, but it would likely fail due to incompatibility between the code and DB schema. But since all incoming traffic is still directed to <code>person-v1</code> this is not a problem.</p> <p>We now modify the database (to postgres v2) and switch the traffic to <code>person-v2</code> (step 4 in the article). <strong>I assume that both the DB migration and traffic switch happen at the same time?</strong> That means it is impossible for <code>person-v1</code> to communicate with postgres v2 or <code>person-v2</code> to communicate with postgres v1 at any point during this transition? Because this can obviously cause errors (i.e. 
inserting data in a column that doesn't exist yet/anymore).</p> <p><a href="https://i.stack.imgur.com/PcJO5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PcJO5.png" alt="after database migration" /></a></p> <p>If the above assumption is correct, then <strong>what happens if during the DB migration new data is inserted in postgres v1</strong>? Is it possible for data to become lost with unlucky timing? Just because the traffic switch happens at the same time as the DB switch, does not mean that any ongoing processes in <code>person-v1</code> can not still execute DB statements. It would seem to me that any new inserts/deletes/updates would need to propagate to postgres v2 as well for as long as the migration is still in progress.</p>
<blockquote> <p>I am reading about blue green deployment with database changes on Kubernetes. It explains very clearly and in detail how the process works</p> </blockquote> <p>It's an interesting article, but I would not do database migration as described there. Blue-green deployment does not make this much easier: you cannot atomically swap the traffic, since replicas may still be processing requests on the old version, and you don't want to cut ongoing requests.</p> <p>The DB change must be done in a way that does not break the first version of the code. This may have to be done in multiple steps.</p> <p>Considering the same example, there are multiple possible solutions. E.g. first add a <em>view</em> with the new column names, then deploy a version of the code that uses the view, then change the column names, and finally deploy a newer version of the code that uses the new column names. Alternatively you can <strong>add</strong> columns with the new column names <em>alongside</em> the old column names, let the old version of the code still use the old column names and the new version of the code use the new column names, and finally remove the old column names when there is <em>no running replica of the old code</em>.</p> <p>As described above, both rolling upgrades and blue-green deployments can be practiced.</p>
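<p>In SQL terms, the add-columns-alongside approach might look like this (the table and column names are made up purely for illustration):</p>

```sql
-- Expand: add the new column while the old code keeps using the old one
ALTER TABLE person ADD COLUMN family_name VARCHAR(255);
UPDATE person SET family_name = surname;

-- ...deploy the new code, which reads and writes family_name...

-- Contract: only after no replica of the old code is running
ALTER TABLE person DROP COLUMN surname;
```

<p>The key property is that each step leaves the schema compatible with the version of the code that is still running while the step is applied.</p>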
<p>When I run <code>kubectl top nodes</code> I get:</p> <pre><code>NAME                                CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
gsdjsgfhdsgfz-12345665-jisj000000   934m         24%    10439Mi         82%
gsdjsgfhdsgfz-12345665-jisj000001   717m         18%    9132Mi          72%
gsdjsgfhdsgfz-12345665-jisj000002   1099m        28%    7614Mi          60%
</code></pre> <p>How can I get the CPU% and MEMORY% values using the Java io.fabric8 kubernetes-client library?</p> <pre><code>try (KubernetesClient k8s = new DefaultKubernetesClient()) {
    NodeMetricsList nodeMetricsList = k8s.top().nodes().metrics();
    for (NodeMetrics nodeMetrics : nodeMetricsList.getItems()) {
        logger.info(&quot;{} {} {}&quot;, nodeMetrics.getMetadata().getName(),
                nodeMetrics.getUsage().get(&quot;cpu&quot;),
                nodeMetrics.getUsage().get(&quot;memory&quot;));
    }
}
</code></pre> <p>The output I am getting is:</p> <pre><code>node name
cpu: 1094942089n
memory: 7830672Ki
</code></pre> <p>How can I calculate the percentage values from these?</p>
<p>I had to implement this same feature recently; unfortunately I didn't find a way to get the percentages just by using the <code>top()</code> API alone. I had to perform two calls: one to <code>nodes()</code> in order to retrieve total capacity and another one to <code>top()</code> to retrieve used capacity. Then it was just a matter of calculating the percentage.</p> <p>A snippet of the working code:</p> <pre><code>public static void main(String[] args) {
    KubernetesClient kubernetesClient = new DefaultKubernetesClient();
    Map&lt;String, Node&gt; nodeMap = kubernetesClient.nodes().list().getItems()
            .stream()
            .collect(Collectors.toMap(node -&gt; node.getMetadata().getName(), Function.identity()));
    List&lt;NodeUsage&gt; usageList = kubernetesClient.top().nodes().metrics().getItems()
            .stream()
            .map(metric -&gt; new NodeUsage(nodeMap.get(metric.getMetadata().getName()), metric.getUsage()))
            .collect(Collectors.toList());
    System.out.println(usageList);
}

private static class NodeUsage {
    private final Node node;
    private final BigDecimal cpuPercentage;
    private final BigDecimal memoryPercentage;

    private NodeUsage(Node node, Map&lt;String, Quantity&gt; used) {
        this.node = node;
        cpuPercentage = calculateUsage(used.get(&quot;cpu&quot;), node.getStatus().getAllocatable().get(&quot;cpu&quot;));
        memoryPercentage = calculateUsage(used.get(&quot;memory&quot;), node.getStatus().getAllocatable().get(&quot;memory&quot;));
    }

    private static BigDecimal calculateUsage(Quantity used, Quantity total) {
        return Quantity.getAmountInBytes(used)
                .divide(Quantity.getAmountInBytes(total), 2, RoundingMode.FLOOR)
                .multiply(BigDecimal.valueOf(100));
    }

    public Node getNode() {
        return node;
    }

    public BigDecimal getCpuPercentage() {
        return cpuPercentage;
    }

    public BigDecimal getMemoryPercentage() {
        return memoryPercentage;
    }
}
</code></pre>
<p>I'm building an Istio/K8s-based platform for controlling traffic routing with NodeJS. I need to be able to programmatically modify Custom Resources and I'd like to use the <a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">@kubernetes/node-client</a> for that. I wasn't able to find the right API for accessing Custom Resources in the docs and the repo. Am I missing something? Thanks in advance.</p> <p>EDIT: When using the CustomObjectApi.patchNamespacedCustomObject function, I'm getting the following error back from the K8s API:</p> <p><code>message: 'the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml', reason: 'UnsupportedMediaType', code: 415</code></p> <p>My Code:</p> <pre class="lang-js prettyprint-override"><code>const k8sYamls = `${path.resolve(path.dirname(__filename), '..')}/k8sYamls`
const vServiceSpec = read(`${k8sYamls}/${service}/virtual-service.yaml`)

const kc = new k8s.KubeConfig()
kc.loadFromDefault()
const client = kc.makeApiClient(k8s.CustomObjectsApi)

const result = await client.patchNamespacedCustomObject(
  vServiceSpec.apiVersion.split('/')[0],
  vServiceSpec.apiVersion.split('/')[1],
  namespace,
  'virtualservices',
  vServiceSpec.metadata.name,
  vServiceSpec
)
</code></pre> <p>virtual-service.yaml:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: message-service
spec:
  hosts:
  - message-service
  http:
  - name: 'production'
    route:
    - destination:
        host: message-service
      weight: 100
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx
</code></pre>
<p>I was using the wrong type for the <code>body</code> object in that method. I got it to work following <a href="https://github.com/kubernetes-client/javascript/blob/master/examples/patch-example.js" rel="nofollow noreferrer">this example</a>.</p> <pre class="lang-js prettyprint-override"><code>const patch = [
  {
    &quot;op&quot;: &quot;replace&quot;,
    &quot;path&quot;: &quot;/metadata/labels&quot;,
    &quot;value&quot;: {
      &quot;foo&quot;: &quot;bar&quot;
    }
  }
];

const options = { &quot;headers&quot;: { &quot;Content-type&quot;: k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH } };

k8sApi.patchNamespacedPod(res.body.items[0].metadata.name, 'default', patch, undefined, undefined, undefined, undefined, options)
  .then(() =&gt; { console.log(&quot;Patched.&quot;); })
  .catch((err) =&gt; { console.log(&quot;Error: &quot;); console.log(err); });
</code></pre>
<p>According to <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">the documentation</a>:</p> <blockquote> <p>A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned ... It is a resource in the cluster just like a node is a cluster resource...</p> </blockquote> <p>So I was reading about all currently available <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">plugins</a> for PVs and I understand that for 3rd-party / out-of-cluster storage this doesn't matter (e.g. storing data in EBS, Azure or GCE disks) because there are no or very little implications when adding or removing nodes from a cluster. However, there are different ones such as (ignoring <code>hostPath</code> as that works only for single-node clusters):</p> <ul> <li>csi</li> <li>local</li> </ul> <p>which (at least from what I've read in the docs) don't require 3rd-party vendors/software.</p> <p>But <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">also</a>:</p> <blockquote> <p>... local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume becomes inaccessible by the pod. The pod using this volume is unable to run. 
Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.</p> </blockquote> <blockquote> <p>The local PersistentVolume requires manual cleanup and deletion by the user if the external static provisioner is not used to manage the volume lifecycle.</p> </blockquote> <p><strong>Use-case</strong></p> <p>Let's say I have a single-node cluster with a single <code>local</code> PV and I want to add a new node to the cluster, so I have 2-node cluster (small numbers for simplicity).</p> <p>Will the data from an already existing <code>local</code> PV be 1:1 replicated into the new node as in having one PV with 2 nodes of redundancy or is it strictly bound to the existing node only?</p> <p>If the already existing PV can't be adjusted from 1 to 2 nodes, can a <strong>new PV</strong> (created from scratch) be created so it's 1:1 replicated between 2+ nodes on the cluster?</p> <p>Alternatively if not, what would be the correct approach without using a 3rd-party out-of-cluster solution? Will using <code>csi</code> cause any change to the overall approach or is it the same with redundancy, just different &quot;engine&quot; under the hood?</p>
<blockquote> <p>Can a new PV be created so it's 1:1 replicated between 2+ nodes on the cluster?</p> </blockquote> <p>None of the standard volume types are replicated at all. If you can use a volume type that supports <code>ReadWriteMany</code> access (most readily NFS) then multiple pods can use it simultaneously, but you would have to run the matching NFS server.</p> <p>Of the volume types you reference:</p> <ul> <li><p><code>hostPath</code> is a directory on the node the pod happens to be running on. It's not a directory on any specific node, so if the pod gets recreated on a different node, it will refer to the same directory but on the new node, presumably with different content. Aside from basic test scenarios I'm not sure when a <code>hostPath</code> PersistentVolume would be useful.</p> </li> <li><p><code>local</code> is a directory on a specific node, or at least following a node-affinity constraint. Kubernetes knows that not all storage can be mounted on every node, so this automatically constrains the pod to run on the node that has the directory (assuming the node still exists).</p> </li> <li><p><code>csi</code> is an extremely generic extension mechanism, so that you can run storage drivers that aren't on the list you link to. There are some features that might be better supported by the CSI version of a storage backend than the in-tree version. 
(I'm familiar with AWS: the <a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver" rel="nofollow noreferrer">EBS CSI driver</a> supports snapshots and resizing; the <a href="https://github.com/kubernetes-sigs/aws-efs-csi-driver" rel="nofollow noreferrer">EFS CSI driver</a> can dynamically provision NFS directories.)</p> </li> </ul> <p>In the specific case of a local test cluster (say, using <a href="https://kind.sigs.k8s.io" rel="nofollow noreferrer">kind</a>) using a <code>local</code> volume will constrain pods to run on the node that has the data, which is more robust than using a <code>hostPath</code> volume. It won't replicate the data, though, so if the node with the data is deleted, the data goes away with it.</p>
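<p>For reference, a <code>local</code> PersistentVolume expresses that node constraint explicitly; a minimal sketch looks roughly like this (the name, capacity, path, and hostname are placeholders):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1        # directory or disk on that one node
  nodeAffinity:                  # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1               # the only node that holds the data
```

<p>The <code>nodeAffinity</code> block is what pins any pod using this volume to the node that actually holds the data; nothing in it replicates that data anywhere else.</p>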
<p>I'm using Docker for Windows on my local laptop, and I'm trying to mimic a dev installation of Kubernetes by using the &quot;run Kubernetes&quot; setting on the same laptop. One thing that's awkward is the Docker registry. I have a Docker registry container running in-cluster that I can push to with no problem from the laptop, but when the Docker-for-Windows Kubernetes controller needs to 'pull' an image, I'm not sure how to reference the registry. I've tried referencing the registry using the laptop's NetBIOS name with various DNS suffixes, but it doesn't seem to work.</p> <p>Is there a way I can accomplish this?</p>
<p>You would use the internal cluster DNS, as managed by the Service object you probably created for the registry. All Services are available inside the cluster via <code>$name.$namespace.svc.cluster.local</code> (technically <code>cluster.local</code> is the cluster domain however this is the default and most common value by far).</p>
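<p>For example, if the registry's Service happened to be named <code>registry</code> in the <code>default</code> namespace and listened on port 5000 (all assumptions — substitute your actual Service name, namespace, and port), a pod spec could reference an image like this:</p>

```yaml
spec:
  containers:
  - name: my-app
    # <service>.<namespace>.svc.cluster.local:<port>/<image>:<tag>
    image: registry.default.svc.cluster.local:5000/my-app:latest
```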
<p>I was investigating certain things about <code>cert-manager</code>.</p> <p><code>TLS certificates</code> are automatically recreated by cert-manager.</p> <p>I need to somehow <strong>deregister a domain / certificate</strong> from being regenerated. I guess I would need to tell cert-manager not to take care of a given domain anymore.</p> <p>I do not have any clue how to do that right now. Can someone help?</p>
<p><code>cert-manager</code> is an application implemented using the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">operator pattern</a>.</p> <p>In one sentence, it watches for a <code>Custom Resource</code> (<code>CR</code> for short) named <code>Certificate</code> in the Kubernetes API and it creates and updates <code>Secrets</code> resources to store certificate data.</p> <p>If you delete the <code>Secret</code> resource but don't delete the <code>Certificate</code> CR, <code>cert-manager</code> will recreate the secret for you.</p> <p>The right way of &quot;deregister a domain&quot; or to better say it &quot;make cert-manager not generate a certificate for a domain any more&quot; is to delete the <code>Certificate</code> CR related to your domain.</p> <p>To get a list of all the <code>Certificate</code> CRs in your cluster you can use <code>kubectl</code></p> <pre><code>kubectl get certificate -A </code></pre> <p>When you found the <code>Certificate</code> related to the domain you want to delete, simply delete it</p> <pre><code>kubectl -n &lt;namespace&gt; delete certificate &lt;certificate name&gt; </code></pre> <p>Once you deleted the certificate CR, you might also want to delete the <code>Secret</code> containing the TLS cert one more time. This time <code>cert-manager</code> will not recreate it.</p>
<p>I am trying to set up a Knative eventing pipeline, where exists a container that accepts external gRPC requests and fires events into a broker for further processing.</p> <p>In my toy example, I am failing to use SinkBinding to inject <code>K_SINK</code> environment variable. This is the relevant section of my configuration:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Namespace metadata: name: bora-namespace labels: eventing.knative.dev/injection: enabled --- apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: my-broker namespace: bora-namespace --- apiVersion: apps/v1 kind: Deployment metadata: name: ease-pipeline-server namespace: bora-namespace spec: replicas: 1 selector: matchLabels: app: ease-pipeline-server template: metadata: labels: app: ease-pipeline-server spec: containers: - name: ease-pipeline-server image: docker.io/boramalper/ease-pipeline-server:latest imagePullPolicy: Always --- apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-ease-pipeline-server namespace: bora-namespace spec: subject: apiVersion: apps/v1 kind: Deployment selector: matchLabels: app: ease-pipeline-server sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: my-broker --- kind: Service apiVersion: v1 metadata: name: ease-pipeline-server namespace: bora-namespace spec: type: NodePort selector: app: ease-pipeline-server ports: - protocol: TCP port: 80 targetPort: 8080 nodePort: 30002 </code></pre> <p>My container gets stuck in an infinite crash loop due to missing environment variable.</p> <p>The SinkBinding object seems to have no issues:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl --namespace bora-namespace get sinkbinding NAME SINK AGE READY REASON bind-ease-pipeline-server http://broker-ingress.knative-eventing.svc.cluster.local/bora-namespace/my-broker 22m True </code></pre> <p>System information:</p> <pre class="lang-sh prettyprint-override"><code>$ kn version Version: 
v20210526-local-0c6ef82 Build Date: 2021-05-26 06:34:50 Git Revision: 0c6ef82 Supported APIs: * Serving - serving.knative.dev/v1 (knative-serving v0.23.0) * Eventing - sources.knative.dev/v1 (knative-eventing v0.23.0) - eventing.knative.dev/v1 (knative-eventing v0.23.0) $ kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.6&quot;, GitCommit:&quot;8a62859e515889f07e3e3be6a1080413f17cf2c3&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-04-15T03:28:42Z&quot;, GoVersion:&quot;go1.15.10&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.7&quot;, GitCommit:&quot;132a687512d7fb058d0f5890f07d4121b3f0a2e2&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-05-12T12:32:49Z&quot;, GoVersion:&quot;go1.15.12&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.1 LTS Release: 18.04 Codename: bionic $ uname -a Linux REDACTED 4.15.0-137-generic #141-Ubuntu SMP Fri Feb 19 13:46:27 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux </code></pre>
<p>The <code>SinkBinding</code> object has a subject configured using a label selector:</p> <pre class="lang-yaml prettyprint-override"><code>subject:
  apiVersion: apps/v1
  kind: Deployment
  selector:
    matchLabels:
      app: ease-pipeline-server
</code></pre> <p>However, there is no such label set on the <code>Deployment</code> object:</p> <pre class="lang-yaml prettyprint-override"><code>metadata:
  name: ease-pipeline-server
  # no labels
</code></pre> <p>The solution here would be to either:</p> <ul> <li><p>add the corresponding label(s) to the Deployment's <code>metadata</code></p> <pre class="lang-yaml prettyprint-override"><code>metadata:
  name: ease-pipeline-server
  labels:
    app: ease-pipeline-server
</code></pre> </li> <li><p>use a subject matching on the Deployment's <code>name</code> (<a href="https://knative.dev/docs/reference/api/eventing/#sources.knative.dev/v1.SinkBinding" rel="nofollow noreferrer">API documentation</a>)</p> <pre class="lang-yaml prettyprint-override"><code>subject:
  apiVersion: apps/v1
  kind: Deployment
  name: ease-pipeline-server
</code></pre> </li> </ul>
<p>I have a Kafka wrapper library that uses transactions on the produce side only. The library does not cover the consumer. The producer publishes to multiple topics. The goal is to achieve transactionality. So the produce should either succeed which means there should be exactly once copy of the message written in each topic, or fail which means message was not written to any topics. The users of the library are applications that run on Kubernetes pods. Hence, the pods could fail, or restart frequently. Also, the partition is not going to be explicitly set upon sending the message.</p> <p>My question is, how should I choose the transactional.id for producers? My first idea is to simply choose UUID upon object initiation, as well as setting a transaction.timeout.ms to some reasonable time (a few seconds). That way, if a producer gets terminated due to pod restart, the consumers don't get locked on the transaction forever.</p> <p>Are there any flaws with this strategy? Is there a smarter way to do this? Also, I cannot ask the library user for some kind of id.</p>
<p>A UUID can be used in your library to generate a transaction id for your producers. I am not really sure what you mean by: <em>That way, if a producer gets terminated due to pod restart, the consumers don't get locked on the transaction forever</em>.</p> <p>The consumer is never really &quot;stuck&quot;. Say the producer goes down after writing a message to one topic (and hence the transaction is not yet committed); the consumer will then behave in one of the following ways:</p> <ul> <li>If <code>isolation.level</code> is set to <code>read_committed</code>, the consumer will never process the message (since the message is not committed). It will still read the next committed message that comes along.</li> <li>If <code>isolation.level</code> is set to <code>read_uncommitted</code>, the message will be read and processed (defeating the purpose of the transaction in the first place).</li> </ul>
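<p>As a concrete sketch, the strategy from the question maps onto the standard Java client configuration roughly like the fragment below. The property names are the real ones from the Java client; the transactional id value is a hypothetical placeholder that the wrapper library would fill with a UUID at startup:</p> <pre><code># generated once per producer instance by the wrapper library
transactional.id=&lt;uuid-generated-at-startup&gt;
# keep the timeout short so an abandoned transaction (e.g. after a pod
# restart) is aborted quickly and read_committed consumers are not delayed
transaction.timeout.ms=5000
# transactional producers must also be idempotent
enable.idempotence=true
</code></pre>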
<p>I have an Nginx ingress controller running in one k8s namespace, and in another k8s namespace I defined a pod, a service, and an ingress resource. This is the ingress resource definition:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: general-internal
  rules:
  - host: example.com
    http:
      paths:
      - path: &quot;/my-app(/|$)(.*)&quot;
        backend:
          serviceName: my-app-svc
          servicePort: 443
</code></pre> <p>Now, when I access this link:</p> <pre><code>http://example.com/my-app/some-path/ </code></pre> <p>then everything is OK because the &quot;my-app-svc&quot; knows the path &quot;/some-path/&quot; and returns a 200 response (the forwarding is to <code>http://my-app-svc/some-path</code>, and that's great because my-app-svc doesn't and shouldn't know or care about the <code>/my-app</code> prefix, which exists only so the nginx ingress controller will know to forward that request to &quot;my-app-svc&quot; internally).</p> <p>But when I access this link (notice no &quot;/&quot; at the end):</p> <pre><code>http://example.com/my-app/some-path </code></pre> <p>I get a redirection response from the &quot;my-app-svc&quot; service, and the &quot;Location&quot; header contains &quot;/some-path/&quot;, so the redirection is to:</p> <pre><code>http://example.com/some-path/ </code></pre> <p>which does not lead to the &quot;my-app-svc&quot; service because it doesn't have the &quot;/my-app&quot; prefix.</p> <p>If the Location header were &quot;/my-app/some-path/&quot; instead of &quot;/some-path/&quot;, then everything would be OK because the redirection would be to:</p> <pre><code>http://example.com/my-app/some-path/ </code></pre> <p>which would give me the 200 response.</p> <p>The question is: how can I make the ingress controller add a &quot;my-app&quot; prefix to the Location header when it returns the redirection response to the client?</p>
<p>Thanks</p>
<p>Thanks to my co-worker, we found a solution for the problem:</p> <p>The solution is to add these annotations to the ingress resource:</p> <pre><code>nginx.ingress.kubernetes.io/proxy-redirect-from: /
nginx.ingress.kubernetes.io/proxy-redirect-to: /my-app/
</code></pre> <p>meaning:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/proxy-redirect-from: /
    nginx.ingress.kubernetes.io/proxy-redirect-to: /my-app/
spec:
  ingressClassName: general-internal
  rules:
  - host: example.com
    http:
      paths:
      - path: &quot;/my-app(/|$)(.*)&quot;
        backend:
          serviceName: my-app-svc
          servicePort: 443
</code></pre> <p>It seems that the annotations above check the &quot;Location&quot; header in the redirection response and replace the first <code>/</code> with <code>/my-app/</code>; only after this change is the redirection response sent to the client.</p>
<p>I need to update a file in a container running in k8s using my local editor and save back the updates to the original file in the container <em>without</em> restarting/redeploying the container.</p> <p>Right now I do:</p> <pre><code>$ kubectl exec tmp-shell -- cat /root/motd &gt; motd &amp;&amp; vi motd &amp;&amp; kubectl cp motd tmp-shell:/root/motd </code></pre> <p>Is there some better way to do this?</p> <p>I have looked at:</p> <p><a href="https://github.com/ksync/ksync" rel="nofollow noreferrer">https://github.com/ksync/ksync</a></p> <p>but seems heavyweight for something this &quot;simple&quot;.</p> <p><strong>Notice:</strong></p> <p>I don't want to use the editor that might or might <em>not</em> be available inside the container - since an editor is not guaranteed to be available.</p>
<p>One option that might be available is <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container" rel="nofollow noreferrer">ephemeral debug containers</a>; however, they are an alpha feature, so probably not enabled for you at the time of writing. Barring that, yeah, what you said is an option. It probably goes without saying, but this is a very bad idea and might not work at all if the target file isn't writable (which it shouldn't be in most cases), either because of file permissions or because the container is running in immutable mode. Also, this only matters if the thing using the file detects the change without reloading.</p> <p>A better medium-term plan would be to store the content in a ConfigMap and mount it into place. That would let you update it whenever you want.</p>
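<p>For the ConfigMap approach, a minimal sketch (the names here are hypothetical) could look like this — mounting the ConfigMap as a directory rather than via <code>subPath</code>, since directory mounts are refreshed by the kubelet when the ConfigMap changes, while <code>subPath</code> mounts are not:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: motd-config
data:
  motd: |
    Welcome to the pod!
---
apiVersion: v1
kind: Pod
metadata:
  name: tmp-shell
spec:
  containers:
  - name: shell
    image: busybox
    command: [&quot;sleep&quot;, &quot;infinity&quot;]
    volumeMounts:
    - name: motd
      mountPath: /etc/motd-dir
  volumes:
  - name: motd
    configMap:
      name: motd-config
</code></pre> <p>The file then appears at <code>/etc/motd-dir/motd</code>, and <code>kubectl edit configmap motd-config</code> updates it in place (after a short propagation delay) without restarting the pod.</p>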
<p>I am still learning about microservices architecture and can't get a clear idea of how to tackle a given problem.</p> <p>Let's say I have a k8s cluster with some BE microservices, an FE app, or any other services deployed into it. I also have an ingress controller that consumes traffic and routes it to services.</p> <p>Somewhere inside that setup I would like to put an API gateway, which would do some additional (customizable) stuff on the requests, like authentication, logging, adding headers, etc. We could wrap up this functionality as a reverse-proxy for the whole cluster. I have found three possible solutions:</p> <p>AGW outside cluster in front of Ingress:</p> <p><a href="https://i.stack.imgur.com/A3QhGt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A3QhGt.png" alt="Approach 1" /></a></p> <p>AGW as an Ingress Controller:</p> <p><a href="https://i.stack.imgur.com/N3R7Et.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N3R7Et.png" alt="Approach 2" /></a></p> <p>AGW inside cluster after Ingress:</p> <p><a href="https://i.stack.imgur.com/2zg33t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2zg33t.png" alt="Approach 3" /></a></p> <p>In the first approach, the AGW routes all traffic to the ingress, but then we need a &quot;private domain&quot; that the ingress will be exposed under. The AGW will be exposed under the public domain and route traffic to this private domain. That is not great, because the Ingress is not exposed under the public domain. E.g. we faced a problem when the Ingress was doing some redirects and was using its private domain in the Location header instead of the public one.</p> <p>In the second approach, the AGW functionality is embedded in the Ingress Controller; however, I haven't found a suitable controller for our need. Building a custom Ingress Controller doesn't sound too good.</p> <p>In the third, the AGW is inside the cluster, but then the Ingress is not doing any routing. 
All routing (and load balancing) will happen in the AGW.</p> <p>Are there any other solutions? If not, which of them would you recommend? Or maybe there is a different approach that provides the mentioned reverse-proxy functionality?</p>
<p>The second and third solutions usually work best. The Ingress doesn't load balance; it's the Service objects that configure the load-balancing rules.</p> <p>If you can manage to use an API Gateway as Ingress Controller, like Ambassador or Istio (those are the ones I know about), that would be even better: you will remove an additional network hop.<br /> Because Ambassador and Istio are Kubernetes-minded, using Kubernetes resource objects, you can control whatever you like using those Ingress Controllers.</p>
<p>I'm running an EKS cluster and have installed the k8s dashboard, etc. All works fine; I can log in to the UI at</p> <pre><code>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login </code></pre> <p>Is there a way for me to pass the token via the URL so I won't need a human to do this? Thanks!</p>
<p>Based on <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#access-control" rel="nofollow noreferrer">official documentation</a> it is impossible to put your authentication token in URL.</p> <blockquote> <p>As of release 1.7 Dashboard supports user authentication based on:</p> <ul> <li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#authorization-header" rel="nofollow noreferrer"><code>Authorization: Bearer &lt;token&gt;</code></a> header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, login view will not be shown.</li> <li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#bearer-token" rel="nofollow noreferrer">Bearer Token</a> that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="nofollow noreferrer">login view</a>.</li> <li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#basic" rel="nofollow noreferrer">Username/password</a> that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="nofollow noreferrer">login view</a>.</li> <li><a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#kubeconfig" rel="nofollow noreferrer">Kubeconfig</a> file that can be used on Dashboard <a href="https://github.com/kubernetes/dashboard/blob/v2.0.0/docs/user/access-control/README.md#login-view" rel="nofollow noreferrer">login view</a>.</li> </ul> </blockquote> <p>As you can see, only the first option bypasses the Dashboard login view. 
So, what is Bearer Authentication?</p> <blockquote> <p><strong>Bearer authentication</strong> (also called <strong>token authentication</strong>) is an <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication" rel="nofollow noreferrer">HTTP authentication scheme</a> that involves security tokens called bearer tokens. The name “Bearer authentication” can be understood as “give access to the bearer of this token.” The bearer token is a cryptic string, usually generated by the server in response to a login request. The client must send this token in the <code>Authorization</code> header when making requests to protected resources:</p> </blockquote> <p>You can find more information about Bearer Authentication <a href="https://swagger.io/docs/specification/authentication/bearer-authentication/" rel="nofollow noreferrer">here</a>.</p> <p>The question now is how you can include the authentication header in your request. There are many ways to achieve this:</p> <ul> <li><code>curl</code> command - example:</li> </ul> <pre><code>curl -H &quot;Authorization: Bearer &lt;TOKEN_VALUE&gt;&quot; &lt;https://address-your-dashboard&gt; </code></pre> <ul> <li>Postman application - <a href="https://stackoverflow.com/questions/40539609/how-to-add-authorization-header-in-postman-environment">here</a> is a good answer on setting up the authorization header, with screenshots.</li> <li>reverse proxy - you can achieve this, e.g., by configuring a reverse proxy in front of the Dashboard. The proxy will be responsible for authentication with the identity provider and will pass the generated token in the request header to the Dashboard. Note that the Kubernetes API server needs to be configured properly to accept these tokens. You can read more about it <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/README.md#authorization-header" rel="nofollow noreferrer">here</a>. 
You should know that this method is potentially insecure due to <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack" rel="nofollow noreferrer">man-in-the-middle attacks</a> when you are using HTTP.</li> </ul> <p>You can also read the very good answers to the question <a href="https://stackoverflow.com/questions/46664104/how-to-sign-in-kubernetes-dashboard">how to sign in kubernetes dashboard</a>.</p>
<p>I am trying to program the patching/rolling upgrade of k8s apps by taking deployment snippets as input. I use <code>patch()</code> method to apply the snippet onto an existing deployment as part of rollingupdate using <a href="https://github.com/fabric8io/kubernetes-client" rel="nofollow noreferrer">fabric8io's k8s client APIS</a>.. Fabric8.io <code>kubernetes-client</code> version <code>4.10.1</code> I'm also using some loadYaml helper methods from <code>kubernetes-api 3.0.12.</code></p> <p>Here is my sample snippet - adminpatch.yaml file:</p> <pre><code> kind: Deployment spec: strategy: type: RollingUpdate rollingUpdate: maxSurge: 1 maxUnavailable: 0 template: spec: containers: - name: ${PATCH_IMAGE_NAME} image: ${PATCH_IMAGE_URL} imagePullPolicy: Always </code></pre> <p>I'm sending the above file content (with all the placeholders replaced) to patchDeployment() method as string. Here is my call to fabric8 patch() method:</p> <pre><code> public static String patchDeployment(String deploymentName, String namespace, String deploymentYaml) { try { Deployment deploymentSnippet = (Deployment) getK8sObject(deploymentYaml); if(deploymentSnippet instanceof Deployment) { logger.debug("Valid deployment object."); Deployment deployment = getK8sClient().apps().deployments().inNamespace(namespace).withName(deploymentName) .rolling().patch(deploymentSnippet); System.out.println(deployment.toString()); return getLastConfig(deployment.getMetadata(), deployment); } } catch (Exception Ex) { Ex.printStackTrace(); } return "Failed"; } </code></pre> <p>It throws the below exception: </p> <pre><code>&gt; io.fabric8.kubernetes.client.KubernetesClientException: Failure &gt; executing: PATCH at: &gt; https://10.44.4.126:6443/apis/apps/v1/namespaces/default/deployments/patch-demo. 
&gt; Message: Deployment.apps "patch-demo" is invalid: spec.selector: &gt; Invalid value: &gt; v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx", &gt; "deployment":"3470574ffdbd6e88d426a77dd951ed45"}, &gt; MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is &gt; immutable. Received status: Status(apiVersion=v1, code=422, &gt; details=StatusDetails(causes=[StatusCause(field=spec.selector, &gt; message=Invalid value: &gt; v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx", &gt; "deployment":"3470574ffdbd6e88d426a77dd951ed45"}, &gt; MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is &gt; immutable, reason=FieldValueInvalid, additionalProperties={})], &gt; group=apps, kind=Deployment, name=patch-demo, retryAfterSeconds=null, &gt; uid=null, additionalProperties={}), kind=Status, &gt; message=Deployment.apps "patch-demo" is invalid: spec.selector: &gt; Invalid value: &gt; v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx", &gt; "deployment":"3470574ffdbd6e88d426a77dd951ed45"}, &gt; MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is &gt; immutable, metadata=ListMeta(_continue=null, remainingItemCount=null, &gt; resourceVersion=null, selfLink=null, additionalProperties={}), &gt; reason=Invalid, status=Failure, additionalProperties={}). </code></pre> <p>I also tried the original snippet(with labels and selectors) with <code>kubectl patch deployment &lt;DEPLOYMENT_NAME&gt; -n &lt;MY_NAMESPACE&gt; --patch "$(cat adminpatch.yaml)</code> and this applies the same snippet fine. </p> <p>I could not get much documentation on fabric8io k8s client patch() java API. Any help will be appreciated. </p>
<p>With the latest improvements in the Fabric8 Kubernetes Client, you can do it via both the <code>patch()</code> and <code>rolling()</code> APIs, apart from using <code>createOrReplace()</code>, which is mentioned in the older answer.</p> <h2>Patching JSON/Yaml String using <code>patch()</code> call:</h2> <p>As of the latest release, v5.4.0, the Fabric8 Kubernetes Client supports patching via a raw string. It can be either YAML or JSON; see <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-client/src/test/java/io/fabric8/kubernetes/client/PatchTest.java#L107" rel="nofollow noreferrer">PatchTest.java</a>. Here is an example using a raw JSON string to update the image of a Deployment:</p> <pre class="lang-java prettyprint-override"><code>try (KubernetesClient kubernetesClient = new DefaultKubernetesClient()) {
  kubernetesClient.apps().deployments()
      .inNamespace(deployment.getMetadata().getNamespace())
      .withName(deployment.getMetadata().getName())
      .patch(&quot;{\&quot;spec\&quot;:{\&quot;template\&quot;:{\&quot;spec\&quot;:{\&quot;containers\&quot;:[{\&quot;name\&quot;:\&quot;patch-demo-ctr-2\&quot;,\&quot;image\&quot;:\&quot;redis\&quot;}]}}}}&quot;);
}
</code></pre> <h2>Rolling Update to change container image:</h2> <p>However, if you just want to do a rolling update, you might want to use the <code>rolling()</code> API instead. Here is how it would look for updating the image of an existing Deployment:</p> <pre class="lang-java prettyprint-override"><code>try (KubernetesClient client = new DefaultKubernetesClient()) {
  // ... Create Deployment

  // Update Deployment for a single container Deployment
  client.apps().deployments()
      .inNamespace(namespace)
      .withName(deployment.getMetadata().getName())
      .rolling()
      .updateImage(&quot;gcr.io/google-samples/hello-app:2.0&quot;);
}
</code></pre> <h2>Rolling Update to change multiple images in multi-container Deployment:</h2> <p>What if you want to update a Deployment with multiple containers? 
You would need to use the <code>updateImage(Map&lt;String, String&gt;)</code> method instead. Here is an example of its usage:</p> <pre class="lang-java prettyprint-override"><code>try (KubernetesClient client = new DefaultKubernetesClient()) {
  Map&lt;String, String&gt; containerToImageMap = new HashMap&lt;&gt;();
  containerToImageMap.put(&quot;nginx&quot;, &quot;stable-perl&quot;);
  containerToImageMap.put(&quot;hello&quot;, &quot;hello-world:linux&quot;);
  client.apps().deployments()
      .inNamespace(namespace)
      .withName(&quot;multi-container-deploy&quot;)
      .rolling()
      .updateImage(containerToImageMap);
}
</code></pre> <h2>Rolling Update Restart an existing Deployment</h2> <p>If you need to restart your existing Deployment, you can just use the <code>rolling().restart()</code> DSL method like this:</p> <pre class="lang-java prettyprint-override"><code>try (KubernetesClient client = new DefaultKubernetesClient()) {
  client.apps().deployments()
      .inNamespace(namespace)
      .withName(deployment.getMetadata().getName())
      .rolling()
      .restart();
}
</code></pre>
<p>Is it possible to avoid specifying a namespace if I'd like to run e.g. <code>kubectl describe deployment &lt;deployment_name&gt;</code> for a deployment that is unique? Or is the namespace argument always mandatory?</p>
<p>To list such resources, you can use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">Field Selector</a> on name:</p> <p><code>kubectl get deployments --all-namespaces --field-selector='metadata.name=&lt;deployment_name&gt;'</code></p> <p>To describe:</p> <p><code>kubectl get deployments --all-namespaces --field-selector='metadata.name=&lt;deployment_name&gt;' -oyaml | kubectl describe -f -</code></p>
<p>How can I set a Node label as a Pod environment variable? I need to know the value of the label <code>topology.kubernetes.io/zone</code> inside the pod.</p>
<p>The <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Downward API</a> currently does not support exposing node labels to pods/containers. There is an <a href="https://github.com/kubernetes/kubernetes/issues/40610" rel="nofollow noreferrer">open issue</a> about that on GitHub, but it is unclear when it will be implemented, if at all.</p> <p>That leaves the only option: getting node labels from the Kubernetes API, just as <code>kubectl</code> does. It is not easy to implement, especially if you want labels as environment variables. I'll give you an example of how it can be done with an <code>initContainer</code>, <code>curl</code>, and <code>jq</code>, but if possible, I suggest you rather implement this in your application, for it will be easier and cleaner.</p> <p>To make a request for labels you need permissions to do that. Therefore, the example below creates a service account with permissions to <code>get</code> (describe) nodes. Then, the script in the <code>initContainer</code> uses the service account to make a request and extract labels from <code>json</code>. 
The <code>test</code> container reads environment variables from the file and <code>echo</code>es one.</p> <p>Example:</p> <pre class="lang-yaml prettyprint-override"><code># Create a service account apiVersion: v1 kind: ServiceAccount metadata: name: describe-nodes namespace: &lt;insert-namespace-name-where-the-app-is&gt; --- # Create a cluster role that allowed to perform describe (&quot;get&quot;) over [&quot;nodes&quot;] apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: describe-nodes rules: - apiGroups: [&quot;&quot;] resources: [&quot;nodes&quot;] verbs: [&quot;get&quot;] --- # Associate the cluster role with the service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: describe-nodes roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: describe-nodes subjects: - kind: ServiceAccount name: describe-nodes namespace: &lt;insert-namespace-name-where-the-app-is&gt; --- # Proof of concept pod apiVersion: v1 kind: Pod metadata: name: get-node-labels spec: # Service account to get node labels from Kubernetes API serviceAccountName: describe-nodes # A volume to keep the extracted labels volumes: - name: node-info emptyDir: {} initContainers: # The container that extracts the labels - name: get-node-labels # The image needs 'curl' and 'jq' apps in it # I used curl image and run it as root to install 'jq' # during runtime # THIS IS A BAD PRACTICE UNSUITABLE FOR PRODUCTION # Make an image where both present. 
image: curlimages/curl # Remove securityContext if you have an image with both curl and jq securityContext: runAsUser: 0 # It'll put labels here volumeMounts: - mountPath: /node name: node-info env: # pass node name to the environment - name: NODENAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: APISERVER value: https://kubernetes.default.svc - name: SERVICEACCOUNT value: /var/run/secrets/kubernetes.io/serviceaccount - name: SCRIPT value: | set -eo pipefail # install jq; you don't need this line if the image has it apk add jq TOKEN=$(cat ${SERVICEACCOUNT}/token) CACERT=${SERVICEACCOUNT}/ca.crt # Get node labels into a json curl --cacert ${CACERT} \ --header &quot;Authorization: Bearer ${TOKEN}&quot; \ -X GET ${APISERVER}/api/v1/nodes/${NODENAME} | jq .metadata.labels &gt; /node/labels.json # Extract 'topology.kubernetes.io/zone' from json NODE_ZONE=$(jq '.&quot;topology.kubernetes.io/zone&quot;' -r /node/labels.json) # and save it into a file in the format suitable for sourcing echo &quot;export NODE_ZONE=${NODE_ZONE}&quot; &gt; /node/zone command: [&quot;/bin/ash&quot;, &quot;-c&quot;] args: - 'echo &quot;$$SCRIPT&quot; &gt; /tmp/script &amp;&amp; ash /tmp/script' containers: # A container that needs the label value - name: test image: debian:buster command: [&quot;/bin/bash&quot;, &quot;-c&quot;] # source ENV variable from file, echo NODE_ZONE, and keep running doing nothing args: [&quot;source /node/zone &amp;&amp; echo $$NODE_ZONE &amp;&amp; cat /dev/stdout&quot;] volumeMounts: - mountPath: /node name: node-info </code></pre>
<p>I'm running into a missing resources issue when submitting a <code>Workflow</code>. The Kubernetes namespace <code>my-namespace</code> has a quota enabled, and for whatever reason the pods being created after submitting the workflow are failing with:</p> <pre><code>pods &quot;hello&quot; is forbidden: failed quota: team: must specify limits.cpu,limits.memory,requests.cpu,requests.memory </code></pre> <p>I'm submitting the following <code>Workflow</code>,</p> <pre><code>apiVersion: &quot;argoproj.io/v1alpha1&quot; kind: &quot;Workflow&quot; metadata: name: &quot;hello&quot; namespace: &quot;my-namespace&quot; spec: entrypoint: &quot;main&quot; templates: - name: &quot;main&quot; container: image: &quot;docker/whalesay&quot; resources: requests: memory: 0 cpu: 0 limits: memory: &quot;128Mi&quot; cpu: &quot;250m&quot; </code></pre> <p>Argo is running on Kubernetes 1.19.6 and was deployed with the <a href="https://github.com/argoproj/argo-helm" rel="nofollow noreferrer">official Helm chart</a> version 0.16.10. Here are my Helm values:</p> <pre><code>controller: workflowNamespaces: - &quot;my-namespace&quot; resources: requests: memory: 0 cpu: 0 limits: memory: 500Mi cpu: 0.5 pdb: enabled: true # See https://argoproj.github.io/argo-workflows/workflow-executors/ # docker container runtime is not present in the TKGI clusters containerRuntimeExecutor: &quot;k8sapi&quot; workflow: namespace: &quot;my-namespace&quot; serviceAccount: create: true rbac: create: true server: replicas: 2 secure: false resources: requests: memory: 0 cpu: 0 limits: memory: 500Mi cpu: 0.5 pdb: enabled: true executer: resources: requests: memory: 0 cpu: 0 limits: memory: 500Mi cpu: 0.5 </code></pre> <p>Any ideas on what I may be missing? Thanks, Weldon</p> <p>Update 1: I tried another namespace without quotas enabled and got past the missing resources issue. However I now see: <code>Failed to establish pod watch: timed out waiting for the condition</code>. 
Here's what the <code>spec</code> looks like for this pod. You can see the <code>wait</code> container is missing <code>resources</code>. This is the container causing the issue reported by this question.</p> <pre><code>spec: containers: - command: - argoexec - wait env: - name: ARGO_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: ARGO_CONTAINER_RUNTIME_EXECUTOR value: k8sapi image: argoproj/argoexec:v2.12.5 imagePullPolicy: IfNotPresent name: wait resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /argo/podmetadata name: podmetadata - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-v4jlb readOnly: true - image: docker/whalesay imagePullPolicy: Always name: main resources: limits: cpu: 250m memory: 128Mi requests: cpu: &quot;0&quot; memory: &quot;0&quot; </code></pre>
<p>Try deploying the workflow in another namespace if you can, and verify whether it works there.</p> <p>If it does, try removing the quota for the respective namespace.</p> <p>Instead of a quota, you can also use a <code>LimitRange</code>:</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
      cpu: 250m
    defaultRequest:
      cpu: 50m
      memory: 64Mi
    type: Container
</code></pre> <p>With this in place, any container that has no resource request or limit specified will get this default config: a request of 50m CPU &amp; 64Mi memory, plus the stated default limits.</p> <p><a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/policy/limit-range/</a></p>
<p>What is a recommended way to have the <code>gcloud</code> available from <strong>within</strong> a running App Engine web application?</p> <p><strong>Background:</strong><br /> The <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client</a> is using <code>subprocess</code> to execute <code>gcloud</code> (taken from <code>cmd-path</code> configured in <code>~/.kube/config</code>) to refresh access tokens. Because the web application is using the <code>kubernetes</code> python library to interact with a cluster, the <code>gcloud</code> command has to be available within the App Engine service. So this is <strong>not</strong> about running <code>gcloud</code> during a cloudbuild or other CI steps, but having access to <code>gcloud</code> inside the App Engine service.</p> <p><strong>Possible solution:</strong><br /> During Cloud Build it is of course possible to execute the <a href="https://cloud.google.com/sdk/docs/install#linux" rel="nofollow noreferrer">gcloud install instructions for Linux</a> to make the tool available within the directory of the app, but are there better solutions?</p> <p>Thanks!</p>
<p>IIUC the Python client for Kubernetes requires a Kubernetes config, and you're using <code>gcloud container clusters get-credentials</code> to automatically create the config. The Python client for Kubernetes does not require <code>gcloud</code>.</p> <p>I recommend a different approach that uses Google's API Client Library for GKE (Container) to programmatically create a Kubernetes Config that can be consumed by the Kubernetes Python Client from within App Engine. You'll need to ensure that the Service Account being used by your App Engine app has sufficient permissions.</p> <p>Unfortunately, I've not done this using the Kubernetes Python client, but I am doing this using the Kubernetes Golang client.</p> <p>The approach is to use Google's Container API to get the GKE cluster's details.</p> <p>APIs Explorer: <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters/get" rel="nofollow noreferrer"><code>clusters.get</code></a></p> <p>Python API Client Library: <a href="https://googleapis.github.io/google-api-python-client/docs/dyn/container_v1.projects.locations.clusters.html#get" rel="nofollow noreferrer"><code>cluster.get</code></a></p> <p>From the response (<a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters" rel="nofollow noreferrer">Cluster</a>), you can create everything you need to create a Kubernetes config that's acceptable to the Kubernetes client.</p> <p>Here's a summary of the Golang code:</p> <pre><code>ctx := context.Background()
containerService, _ := container.NewService(ctx)

name := fmt.Sprintf(
	&quot;projects/%s/locations/%s/clusters/%s&quot;,
	clusterProject,
	clusterLocation,
	clusterName,
)

rqst := containerService.Projects.Locations.Clusters.Get(name)
resp, _ := rqst.Do()

cert, _ := base64.StdEncoding.DecodeString(resp.MasterAuth.ClusterCaCertificate)
server := fmt.Sprintf(&quot;https://%s&quot;, resp.Endpoint)

apiConfig := api.Config{
	APIVersion: &quot;v1&quot;,
	Kind:       &quot;Config&quot;,
	Clusters: map[string]*api.Cluster{
		clusterName: {
			CertificateAuthorityData: cert,
			Server:                   server,
		},
	},
	Contexts: map[string]*api.Context{
		clusterName: {
			Cluster:  clusterName,
			AuthInfo: clusterName,
		},
	},
	AuthInfos: map[string]*api.AuthInfo{
		clusterName: {
			AuthProvider: &amp;api.AuthProviderConfig{
				Name: &quot;gcp&quot;,
				Config: map[string]string{
					&quot;scopes&quot;: &quot;https://www.googleapis.com/auth/cloud-platform&quot;,
				},
			},
		},
	},
}
</code></pre>
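<p>For the Python side, I've not tested this, but the config-building step might translate to something like the sketch below. It only builds the kubeconfig-shaped dict from fields already fetched via <code>clusters.get</code> (the API call itself is omitted, and the field values in any usage are made up); the result could then be fed to the Kubernetes Python client, e.g. via <code>kubernetes.config.load_kube_config_from_dict</code>:</p>

```python
def build_kubeconfig(cluster_name: str, endpoint: str, ca_cert_b64: str) -> dict:
    """Build a kubeconfig-style dict from GKE clusters.get response fields.

    `endpoint` corresponds to resp["endpoint"] and `ca_cert_b64` to
    resp["masterAuth"]["clusterCaCertificate"] (already base64-encoded).
    """
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": cluster_name,
            "cluster": {
                "certificate-authority-data": ca_cert_b64,
                "server": "https://" + endpoint,
            },
        }],
        "contexts": [{
            "name": cluster_name,
            "context": {"cluster": cluster_name, "user": cluster_name},
        }],
        "current-context": cluster_name,
        "users": [{
            "name": cluster_name,
            "user": {
                # mirrors the AuthProviderConfig in the Golang snippet above
                "auth-provider": {
                    "name": "gcp",
                    "config": {"scopes": "https://www.googleapis.com/auth/cloud-platform"},
                },
            },
        }],
    }
```

<p>(The <code>gcp</code> auth-provider entry mirrors the Golang config above and relies on the App Engine app's default service account credentials.)</p>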
<p>I'm not sure if I should ask this question here, but I need some clarification. I have a Kubernetes cluster, and since the frontend runs in the client's web browser, I am wondering: am I able to expose the API only internally and still make HTTP requests to it from the client, or am I only able to expose the service using a NodePort, Ingress, or load balancer, which exposes it to the internet?</p> <p>Thanks in advance for the feedback</p>
<p>You can expose it to the frontend via an Ingress and also (at the same time) internally for other services/pods/containers you have running inside the cluster; it all depends on how you configure it.</p> <p>Assuming you only want it to run internally, all you have to do is not create an Ingress. If you want to expose it, then do create the Ingress. In both cases you should always create the Service, as that's what exposes your pod's code to the cluster (inside, and outside via an Ingress).</p> <p>Service: <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p> <p>Ingress: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p> <p>Hope that clarifies it!</p>
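<p>As a sketch (names and hosts here are hypothetical), an internal-only API needs just a Service — pods in the cluster can then reach it at <code>http://api-svc.&lt;namespace&gt;.svc.cluster.local</code> — while exposing it to the browser additionally requires an Ingress pointing at that same Service:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
---
# only needed if the API must be reachable from outside the cluster,
# e.g. by a frontend running in the user's browser
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
</code></pre>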
<p>I was using fabric8io to write a service function; the relevant code is below:</p> <pre><code>KubernetesClient fabricClient = new DefaultKubernetesClient();
fabricClient.pods().inNamespace(&quot;xxxnamespace&quot;).withLabel(&quot;somekey&quot;, somevalue).list().getItems()
</code></pre> <p>It was working fine when I ran unit tests. But when I deployed the whole application and triggered the service function, it threw the error below:</p> <pre><code>java.lang.RuntimeException: Error processing bean at org.springframework.statemachine.processor.MethodInvokingStateMachineRuntimeProcessor.process(MethodInvokingStateMachineRuntimeProcessor.java:70) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.processor.StateMachineHandler.handle(StateMachineHandler.java:135) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.processor.StateMachineHandlerCallHelper.getStateMachineHandlerResults(StateMachineHandlerCallHelper.java:438) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.processor.StateMachineHandlerCallHelper.callOnTransition(StateMachineHandlerCallHelper.java:237) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.StateMachineObjectSupport.notifyTransition(StateMachineObjectSupport.java:225) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.AbstractStateMachine$3.transit(AbstractStateMachine.java:329) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.DefaultStateMachineExecutor.handleTriggerTrans(DefaultStateMachineExecutor.java:287) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.DefaultStateMachineExecutor.handleTriggerTrans(DefaultStateMachineExecutor.java:210)
[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.DefaultStateMachineExecutor.processTriggerQueue(DefaultStateMachineExecutor.java:450) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.DefaultStateMachineExecutor.access$200(DefaultStateMachineExecutor.java:64) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.DefaultStateMachineExecutor$1.run(DefaultStateMachineExecutor.java:330) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) [spring-core-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.statemachine.support.DefaultStateMachineExecutor.scheduleEventQueueProcessing(DefaultStateMachineExecutor.java:353) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.DefaultStateMachineExecutor.access$500(DefaultStateMachineExecutor.java:64) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.support.DefaultStateMachineExecutor$2.triggered(DefaultStateMachineExecutor.java:540) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.trigger.CompositeTriggerListener.triggered(CompositeTriggerListener.java:34) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.trigger.TimerTrigger.notifyTriggered(TimerTrigger.java:123) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.trigger.TimerTrigger.access$000(TimerTrigger.java:33) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.trigger.TimerTrigger$1.run(TimerTrigger.java:117) [spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at 
org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) [spring-context-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93) [spring-context-5.1.3.RELEASE.jar:5.1.3.RELEASE] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_181] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181] Caused by: java.lang.RuntimeException: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [list] for kind: [Pod] with name: [null] in namespace: [biztech-bos] failed. at com.xxxx.saveBuildLog(BosBuildLogAop.java:56) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] at com.xxxx.BosBuildLogAop.logAround(BosBuildLogAop.java:39) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] at sun.reflect.GeneratedMethodAccessor689.invoke(Unknown Source) ~[?:?] 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at com.xxxx.statemachine.TransitionActionConfig$$EnhancerBySpringCGLIB$$165e28b8.checkingIncrementDtsDeploy(&lt;generated&gt;) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] at sun.reflect.GeneratedMethodAccessor823.invoke(Unknown Source) ~[?:?] 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:130) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:111) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.MethodReference.access$000(MethodReference.java:54) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.MethodReference$MethodValueRef.getValue(MethodReference.java:390) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:90) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:116) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:365) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.statemachine.support.AbstractExpressionEvaluator.evaluateExpression(AbstractExpressionEvaluator.java:126) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.processor.StateMachineMethodInvokerHelper.processInternal(StateMachineMethodInvokerHelper.java:243) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.processor.StateMachineMethodInvokerHelper.process(StateMachineMethodInvokerHelper.java:119) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.processor.MethodInvokingStateMachineRuntimeProcessor.process(MethodInvokingStateMachineRuntimeProcessor.java:68) 
~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] ... 27 more Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [list] for kind: [Pod] with name: [null] in namespace: [biztech-bos] failed. at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64) ~[kubernetes-client-5.2.1.jar:?] at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:72) ~[kubernetes-client-5.2.1.jar:?] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:168) ~[kubernetes-client-5.2.1.jar:?] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:664) ~[kubernetes-client-5.2.1.jar:?] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:86) ~[kubernetes-client-5.2.1.jar:?] at com.xxxx.service.impl.DTSServiceImpl.checkDeployIncrementDts(DTSServiceImpl.java:217) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] at com.xxxx.configuration.statemachine.TransitionActionConfig.checkingIncrementDtsDeploy(TransitionActionConfig.java:96) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] at com.xxxx.configuration.statemachine.TransitionActionConfig$$FastClassBySpringCGLIB$$f83ad12a.invoke(&lt;generated&gt;) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] 
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) ~[spring-core-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at com.xxxx.server.aop.BosBuildLogAop.saveBuildLog(BosBuildLogAop.java:52) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] at com.xxxx.server.aop.BosBuildLogAop.logAround(BosBuildLogAop.java:39) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] at sun.reflect.GeneratedMethodAccessor689.invoke(Unknown Source) ~[?:?] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:644) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:633) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:93) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) 
~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688) ~[spring-aop-5.1.3.RELEASE.jar:5.1.3.RELEASE] at com.xxxx.server.configuration.statemachine.TransitionActionConfig$$EnhancerBySpringCGLIB$$165e28b8.checkingIncrementDtsDeploy(&lt;generated&gt;) ~[bos-coordinator-server-1.30.0-SNAPSHOT.jar:?] at sun.reflect.GeneratedMethodAccessor823.invoke(Unknown Source) ~[?:?] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_181] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_181] at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:130) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:111) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.MethodReference.access$000(MethodReference.java:54) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.MethodReference$MethodValueRef.getValue(MethodReference.java:390) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:90) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:116) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:365) ~[spring-expression-5.1.3.RELEASE.jar:5.1.3.RELEASE] at org.springframework.statemachine.support.AbstractExpressionEvaluator.evaluateExpression(AbstractExpressionEvaluator.java:126) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at 
org.springframework.statemachine.processor.StateMachineMethodInvokerHelper.processInternal(StateMachineMethodInvokerHelper.java:243) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.processor.StateMachineMethodInvokerHelper.process(StateMachineMethodInvokerHelper.java:119) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] at org.springframework.statemachine.processor.MethodInvokingStateMachineRuntimeProcessor.process(MethodInvokingStateMachineRuntimeProcessor.java:68) ~[spring-statemachine-core-2.2.0.RELEASE.jar:2.2.0.RELEASE] ... 27 more Caused by: java.net.UnknownHostException: kubernetes.default.svc at java.net.InetAddress.getAllByName0(InetAddress.java:1280) ~[?:1.8.0_181] at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_181] at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_181] ... </code></pre> <p>in which <code>DTSServiceImpl</code> is my service implementation class.<br> On the deployment machine, I put the kubeconfig file at ~/.kube/config, and the <code>kubectl</code> command line works fine. <br> I have no clue how to handle this problem. Any idea how I should find out the reason for the exception?<br> Thanks in advance!</p>
<p>Fabric8 uses the Kubernetes REST API to perform its operations; by default its HTTP client assumes it is running inside a Kubernetes cluster, which is why it tries to reach <code>kubernetes.default.svc</code>.</p> <p>Since you're calling it from outside the cluster, you must tell Fabric8 the address of your cluster. This can be done by specifying the host when creating the client:</p> <pre><code>new DefaultKubernetesClient(&quot;https://my-cluster&quot;);
</code></pre> <p>If you're able to SSH to the server running your application and reach the cluster address from there, Fabric8 will most likely work.</p>
<p>I am new to Ceph and am using Rook to install Ceph in a k8s cluster. I see that the pod rook-ceph-osd-prepare has been in Running status forever, stuck on the line below:</p> <pre><code>2020-06-15 20:09:02.260379 D | exec: Running command: ceph auth get-or-create-key client.bootstrap-osd mon allow profile bootstrap-osd --connect-timeout=15 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --name=client.admin --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/180401029
</code></pre> <p>When I logged into the container and ran the same command, I saw that it was stuck too; after pressing ^C it showed this:</p> <pre><code>Traceback (most recent call last):
  File &quot;/usr/bin/ceph&quot;, line 1266, in &lt;module&gt;
    retval = main()
  File &quot;/usr/bin/ceph&quot;, line 1197, in main
    verbose)
  File &quot;/usr/bin/ceph&quot;, line 622, in new_style_command
    ret, outbuf, outs = do_command(parsed_args, target, cmdargs, sigdict, inbuf, verbose)
  File &quot;/usr/bin/ceph&quot;, line 596, in do_command
    return ret, '', ''
</code></pre> <p>Below are all my pods:</p> <pre><code>rook-ceph   csi-cephfsplugin-9k9z2                                 3/3   Running     0   9h
rook-ceph   csi-cephfsplugin-mjsbk                                 3/3   Running     0   9h
rook-ceph   csi-cephfsplugin-mrqz5                                 3/3   Running     0   9h
rook-ceph   csi-cephfsplugin-provisioner-5ffbdf7856-59cf7          5/5   Running     0   9h
rook-ceph   csi-cephfsplugin-provisioner-5ffbdf7856-m4bhr          5/5   Running     0   9h
rook-ceph   csi-cephfsplugin-xgvz4                                 3/3   Running     0   9h
rook-ceph   csi-rbdplugin-6k4dk                                    3/3   Running     0   9h
rook-ceph   csi-rbdplugin-klrwp                                    3/3   Running     0   9h
rook-ceph   csi-rbdplugin-provisioner-68d449986d-2z9gr             6/6   Running     0   9h
rook-ceph   csi-rbdplugin-provisioner-68d449986d-mzh9d             6/6   Running     0   9h
rook-ceph   csi-rbdplugin-qcmrj                                    3/3   Running     0   9h
rook-ceph   csi-rbdplugin-zdg8z                                    3/3   Running     0   9h
rook-ceph   rook-ceph-crashcollector-k8snode001-76ffd57d58-slg5q   1/1   Running     0   9h
rook-ceph   rook-ceph-crashcollector-k8snode002-85b6d9d699-s8m8z   1/1   Running     0   9h
rook-ceph   rook-ceph-crashcollector-k8snode004-847bdb4fc5-kk6bd   1/1   Running     0   9h
rook-ceph   rook-ceph-mgr-a-5497fcbb7d-lq6tf                       1/1   Running     0   9h
rook-ceph   rook-ceph-mon-a-6966d857d9-s4wch                       1/1   Running     0   9h
rook-ceph   rook-ceph-mon-b-649c6845f4-z46br                       1/1   Running     0   9h
rook-ceph   rook-ceph-mon-c-67869b76c7-4v6zn                       1/1   Running     0   9h
rook-ceph   rook-ceph-operator-5968d8f7b9-hsfld                    1/1   Running     0   9h
rook-ceph   rook-ceph-osd-prepare-k8snode001-j25xv                 1/1   Running     0   7h48m
rook-ceph   rook-ceph-osd-prepare-k8snode002-6fvlx                 0/1   Completed   0   9h
rook-ceph   rook-ceph-osd-prepare-k8snode003-cqc4g                 0/1   Completed   0   9h
rook-ceph   rook-ceph-osd-prepare-k8snode004-jxxtl                 0/1   Completed   0   9h
rook-ceph   rook-discover-28xj4                                    1/1   Running     0   9h
rook-ceph   rook-discover-4ss66                                    1/1   Running     0   9h
rook-ceph   rook-discover-bt8rd                                    1/1   Running     0   9h
rook-ceph   rook-discover-q8f4x                                    1/1   Running     0   9h
</code></pre> <p>Please let me know if anyone has any hints to resolve or troubleshoot this?</p>
<p>In my case, one of my nodes' system clocks was not synchronized with the hardware clock, so there was a time gap between the nodes (the Ceph monitors are sensitive to clock skew).</p> <p>Maybe you should check the output of the <code>timedatectl</code> command on each node.</p>
<p>I'm running Flink 1.11 on a k8s cluster and getting the following error when trying to update the log4j-console.properties file:</p> <pre><code>Starting Task Manager
Enabling required built-in plugins
Linking flink-s3-fs-hadoop-1.11.1.jar to plugin directory
Successfully enabled flink-s3-fs-hadoop-1.11.1.jar
sed: couldn't open temporary file /opt/flink/conf/sedl2dH0X: Read-only file system
sed: couldn't open temporary file /opt/flink/conf/sedPLYAzY: Read-only file system
/docker-entrypoint.sh: 72: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml: Permission denied
sed: couldn't open temporary file /opt/flink/conf/sede0G5LW: Read-only file system
/docker-entrypoint.sh: 120: /docker-entrypoint.sh: cannot create /opt/flink/conf/flink-conf.yaml.tmp: Read-only file system
Starting taskexecutor as a console application on host flink-taskmanager-c765c947c-qx68t.
Exception in thread &quot;main&quot; java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/ser/FilterProvider
	at org.apache.logging.log4j.core.layout.JsonLayout.&lt;init&gt;(JsonLayout.java:158)
	at org.apache.logging.log4j.core.layout.JsonLayout.&lt;init&gt;(JsonLayout.java:69)
	at org.apache.logging.log4j.core.layout.JsonLayout$Builder.build(JsonLayout.java:102)
	at org.apache.logging.log4j.core.layout.JsonLayout$Builder.build(JsonLayout.java:77)
	at org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:122)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:1002)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:942)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:934)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:552)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.initialize(AbstractConfiguration.java:241)
	at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:288)
	at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:579)
	at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:651)
	at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:668)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:253)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:153)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:45)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:194)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:138)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:45)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:48)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:30)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:329)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:349)
	at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.&lt;clinit&gt;(TaskManagerRunner.java:89)
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.databind.ser.FilterProvider
	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(Unknown Source)
	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(Unknown Source)
	at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
</code></pre> <p>my log4j-console.properties:</p> <pre><code>rootLogger.level = INFO
#rootLogger.appenderRef.console.ref = ConsoleAppender
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n

appender.kafka.type = Kafka
appender.kafka.name = Kafka
appender.kafka.topic = test
appender.kafka.layout.type = JsonLayout
appender.kafka.layout.complete = false
appender.kafka.additional1.type = KeyValuePair
appender.kafka.additional1.key=app
appender.kafka.additional1.value=TEST
appender.kafka.additional2.type = KeyValuePair
appender.kafka.additional2.key=subsystem
appender.kafka.additional2.value=TEST
appender.kafka.additional3.type = KeyValuePair
appender.kafka.additional3.key=deployment
appender.kafka.additional3.value=TEST
appender.kafka.property.bootstrap.servers=***

rootLogger.appenderRef.console.ref = STDOUT
rootLogger.appenderRef.kafka.ref = Kafka
</code></pre> <p>I'm using the &quot;flink:1.11.1-scala_2.11-java11&quot; Docker image and have validated that all log4j2 dependencies are on the classpath.</p> <p>I have also tried to create a new Docker image from the above base image and add the missing dependency to it, yet nothing changed.</p>
<p>I too suffered from this bug. The issue here is that when the task manager and job manager start, they run with a constructed classpath, not the JAR that you've built via your build system.</p> <p>See <code>constructFlinkClassPath</code> in the <a href="https://github.com/apache/flink/blob/af681c63821c027bc7a233560f7765b686d0d244/flink-dist/src/main/flink-bin/bin/config.sh#L20" rel="nofollow noreferrer">Flink source code</a>. To prove this out, revert the JSON logging pattern and check the classpath printed in the tm/jm logs on startup. You'll notice that your JAR isn't on the classpath.</p> <p>To fix this issue you need to provide the dependencies (in this case <code>jackson-core</code>, <code>jackson-annotations</code>, and <code>jackson-databind</code>) in the <code>lib</code> folder on the tm/jm nodes (the <code>lib</code> folder is included by default in the Flink classpath).</p> <p>If you are using Docker, you can do this when you build the container (<code>RUN wget...</code>).</p>
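<p>For example, a Dockerfile along these lines could do it; the Jackson version here is only an illustration, so match it to whatever your Log4j2 release expects:</p>

```dockerfile
FROM flink:1.11.1-scala_2.11-java11

# Log4j2's JsonLayout needs Jackson on the classpath; Flink's lib/ directory
# is always part of the tm/jm classpath, so drop the jars there.
RUN wget -P /opt/flink/lib \
      https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-core/2.10.1/jackson-core-2.10.1.jar \
      https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-annotations/2.10.1/jackson-annotations-2.10.1.jar \
      https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.10.1/jackson-databind-2.10.1.jar
```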
<p>I'm currently running a fresh &quot;all in one VM&quot; (stacked master/worker approach) Kubernetes <code>v1.21.1-00</code> on Ubuntu Server 20 LTS, using</p> <ul> <li>cri-o as container runtime interface</li> <li>calico for networking/security</li> </ul> <p>I also installed the kubernetes-dashboard (but I guess that's not important for my issue 😉). Following this guide for installing Ambassador: <a href="https://www.getambassador.io/docs/edge-stack/latest/topics/install/yaml-install/" rel="nofollow noreferrer">https://www.getambassador.io/docs/edge-stack/latest/topics/install/yaml-install/</a> I ran into the issue that the service is stuck in status &quot;pending&quot;.</p> <p><code>kubectl get svc -n ambassador</code> prints the following:</p> <pre><code>NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ambassador         LoadBalancer   10.97.117.249    &lt;pending&gt;     80:30925/TCP,443:32259/TCP   5h
ambassador-admin   ClusterIP      10.101.161.169   &lt;none&gt;        8877/TCP,8005/TCP            5h
ambassador-redis   ClusterIP      10.110.32.231    &lt;none&gt;        6379/TCP                     5h
quote              ClusterIP      10.104.150.137   &lt;none&gt;        80/TCP                       5h
</code></pre> <p>Changing the <code>type</code> from <code>LoadBalancer</code> to <code>NodePort</code> in the service makes it come up correctly, but I'm not sure of the implications. Again, I want to use Ambassador as an ingress component here; with my setup (only one machine), &quot;real&quot; load balancing might not be necessary.</p> <p>To cover all the subdomain stuff, I set up a wildcard record pointing to my machine, i.e. a CNAME for <code>*.k8s.my-domain.com</code> which points to this host.
I don't know if this approach was that smart for setting up an ingress.</p> <p>Edit: List of events, as requested below:</p> <pre><code>Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  116s  default-scheduler  Successfully assigned ambassador/ambassador-redis-584cd89b45-js5nw to dev-bvpl-099
  Normal  Pulled     116s  kubelet            Container image &quot;redis:5.0.1&quot; already present on machine
  Normal  Created    116s  kubelet            Created container redis
  Normal  Started    116s  kubelet            Started container redis
</code></pre> <p>Additionally, here's the pending service in YAML presentation (exported via <code>kubectl get svc -n ambassador -o yaml ambassador</code>):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    a8r.io/bugs: https://github.com/datawire/ambassador/issues
    a8r.io/chat: http://a8r.io/Slack
    a8r.io/dependencies: ambassador-redis.ambassador
    a8r.io/description: The Ambassador Edge Stack goes beyond traditional API Gateways and Ingress Controllers with the advanced edge features needed to support developer self-service and full-cycle development.
    a8r.io/documentation: https://www.getambassador.io/docs/edge-stack/latest/
    a8r.io/owner: Ambassador Labs
    a8r.io/repository: github.com/datawire/ambassador
    a8r.io/support: https://www.getambassador.io/about-us/support/
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;Service&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;a8r.io/bugs&quot;:&quot;https://github.com/datawire/ambassador/issues&quot;,&quot;a8r.io/chat&quot;:&quot;http://a8r.io/Slack&quot;,&quot;a8r.io/dependencies&quot;:&quot;ambassador-redis.ambassador&quot;,&quot;a8r.io/description&quot;:&quot;The Ambassador Edge Stack goes beyond traditional API Gateways and Ingress Controllers with the advanced edge features needed to support developer self-service and full-cycle development.&quot;,&quot;a8r.io/documentation&quot;:&quot;https://www.getambassador.io/docs/edge-stack/latest/&quot;,&quot;a8r.io/owner&quot;:&quot;Ambassador Labs&quot;,&quot;a8r.io/repository&quot;:&quot;github.com/datawire/ambassador&quot;,&quot;a8r.io/support&quot;:&quot;https://www.getambassador.io/about-us/support/&quot;},&quot;labels&quot;:{&quot;app.kubernetes.io/component&quot;:&quot;ambassador-service&quot;,&quot;product&quot;:&quot;aes&quot;},&quot;name&quot;:&quot;ambassador&quot;,&quot;namespace&quot;:&quot;ambassador&quot;},&quot;spec&quot;:{&quot;ports&quot;:[{&quot;name&quot;:&quot;http&quot;,&quot;port&quot;:80,&quot;targetPort&quot;:8080},{&quot;name&quot;:&quot;https&quot;,&quot;port&quot;:443,&quot;targetPort&quot;:8443}],&quot;selector&quot;:{&quot;service&quot;:&quot;ambassador&quot;},&quot;type&quot;:&quot;LoadBalancer&quot;}}
  creationTimestamp: &quot;2021-05-22T07:18:23Z&quot;
  labels:
    app.kubernetes.io/component: ambassador-service
    product: aes
  name: ambassador
  namespace: ambassador
  resourceVersion: &quot;4986406&quot;
  uid: 68e4582c-be6d-460c-909e-dfc0ad84ae7a
spec:
  clusterIP: 10.107.194.191
  clusterIPs:
  - 10.107.194.191
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 32542
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32420
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    service: ambassador
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
</code></pre> <p>EDIT#2: I wonder if <a href="https://stackoverflow.com/a/44112285/667183">https://stackoverflow.com/a/44112285/667183</a> applies to my process as well?</p>
<p>The answer is pretty much here: <a href="https://serverfault.com/questions/1064313/ambassador-service-stays-pending">https://serverfault.com/questions/1064313/ambassador-service-stays-pending</a>. After installing a <code>load balancer</code> the whole setup worked. I decided to go with <code>metallb</code> (see <a href="https://metallb.universe.tf/installation/#installation-by-manifest" rel="nofollow noreferrer">https://metallb.universe.tf/installation/#installation-by-manifest</a> for installation) and used the following configuration for a single-node Kubernetes cluster:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.16.0.99-10.16.0.99
</code></pre> <p>After a few seconds the load balancer is detected and everything works fine.</p>
<p>First I deployed Cloud SQL with PostgreSQL. Then I deployed GKE following these documents: <a href="https://cloud.google.com/support-hub#section-2" rel="nofollow noreferrer">https://cloud.google.com/support-hub#section-2</a> using the gcloud tool. I created the GKE cluster in Autopilot mode, created a deployment with an autoscaler, registered my Docker image, and then exposed it with a LoadBalancer. When I build my Docker image and execute it locally, it runs well, but it does not run well on the GKE server: suddenly it cannot connect to Cloud SQL. So I registered the GKE external IP in the Cloud SQL connection IPs, but it doesn't work... I want to connect to Cloud SQL from Google Kubernetes Engine. Please help me...</p>
<p>For accessing a Cloud SQL instance from an application running in Google Kubernetes Engine, you can use either the Cloud SQL Auth proxy (with public or private IP), or connect directly using a private IP address. The Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL, even when using private IP.</p> <p>Reference links:</p> <p>[1] <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="noreferrer">https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine</a><br> [2] <a href="https://medium.com/google-cloud/connecting-cloud-sql-kubernetes-sidecar-46e016e07bb4" rel="noreferrer">https://medium.com/google-cloud/connecting-cloud-sql-kubernetes-sidecar-46e016e07bb4</a><br> [3] <a href="https://www.jhipster.tech/tips/018_tip_kubernetes_and_google_cloud_sql.html" rel="noreferrer">https://www.jhipster.tech/tips/018_tip_kubernetes_and_google_cloud_sql.html</a></p>
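<p>A minimal sketch of the sidecar pattern described in link [1]; the instance connection name, port, and image tag below are placeholders you must replace with your own values:</p>

```yaml
# Added to your Deployment's pod spec, next to the application container.
# The app then connects to 127.0.0.1:5432 instead of the instance's IP.
- name: cloud-sql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.19.1
  command:
    - /cloud_sql_proxy
    - -instances=my-project:my-region:my-instance=tcp:5432
```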
<p>I have been asked to modify a Helm template to accommodate a few changes to check if a value is empty or not as in the code snippet below. I need to check <code>$var.alias</code> inside the <code>printf</code> in the code snippet and write custom logic to print a custom value. Any pointers around the same would be great.</p> <pre><code>{{- range $key, $value := .Values.testsConfig.keyVaults -}} {{- range $secret, $var := $value.secrets -}} {{- if nil $var.alias}} {{- end -}} {{ $args = append $args (printf &quot;%s=/mnt/secrets/%s/%s&quot; $var.alias $key $var.name | quote) }} {{- end -}} {{- end -}} </code></pre>
<p>I decided to test what madniel wrote in his comment. Here are my files:</p> <p>values.yaml</p> <pre><code>someString: abcdef emptyString: &quot;&quot; # nilString: </code></pre> <p>templates/test.yaml</p> <pre><code>{{ printf &quot;someEmptyString=%q)&quot; .Values.someString }} {{ printf &quot;emptyString=%q)&quot; .Values.emptyString }} {{ printf &quot;nilString=%q)&quot; .Values.nilString }} {{- if .Values.someString }} {{ printf &quot;someString evaluates to true&quot; }} {{- end -}} {{- if .Values.emptyString }} {{ printf &quot;emptyString evaluates to true&quot; }} {{- end -}} {{- if .Values.nilString }} {{ printf &quot;nilString evaluates to true&quot; }} {{- end -}} {{- if not .Values.emptyString }} {{ printf &quot;not emptyString evaluates to true&quot; }} {{- end -}} {{- if not .Values.nilString }} {{ printf &quot;not nilString evaluates to true&quot; }} {{- end -}} </code></pre> <p>Helm template output:</p> <pre><code>➜ helm template . --debug install.go:173: [debug] Original chart version: &quot;&quot; install.go:190: [debug] CHART PATH: &lt;REDACTED&gt; --- # Source: asd/templates/test.yaml someEmptyString=&quot;abcdef&quot;) emptyString=&quot;&quot;) nilString=%!q(&lt;nil&gt;)) someString evaluates to true not emptyString evaluates to true not nilString evaluates to true </code></pre> <p>So yes, it should work if you use <code>{{ if $var.alias }}</code></p>
<p>I created a kubernetes cluster on my debian 9 machine using <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">kind</a>.<br> Which apparently works because I can run <code>kubectl cluster-info</code> with valid output.</p> <p>Now I wanted to fool around with the tutorial on <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/#kubernetes-basics-modules" rel="nofollow noreferrer">Learn Kubernetes Basics</a> site.</p> <p>I have already deployed the app </p> <pre><code>kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 </code></pre> <p>and started the <code>kubectl proxy</code>. </p> <p>Output of <code>kubectl get deployments</code></p> <pre><code>NAME READY UP-TO-DATE AVAILABLE AGE kubernetes-bootcamp 1/1 1 1 17m </code></pre> <p>My problem now is: when I try to see the output of the application using <code>curl</code> I get</p> <blockquote> <p>Error trying to reach service: 'dial tcp 10.244.0.5:80: connect: connection refused'</p> </blockquote> <p>My commands</p> <pre><code>export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/ </code></pre> <p>For the sake of completeness I can run <code>curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/</code> and I get valid output.</p>
<p>Try adding <code>:8080</code> after <strong>$POD_NAME</strong> — the proxy needs the port the container is actually listening on, and the bootcamp image listens on 8080, not 80:</p> <pre><code> curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/ </code></pre>
<p>I wanted to know what's the difference of the <code>reclaimPolicy</code> in <code>StorageClass</code> vs. <code>PersistentVolume</code>.</p> <p>Currently we created multiple <code>PersistentVolume</code> with a <code>StorageClass</code> that has a <code>reclaimPolicy</code> of <code>Delete</code>, however we changed the <code>PersistentVolume</code>'s <code>reclaimPolicy</code> manually via <code>kubectl patch pv PV_NAME -p '{&quot;spec&quot;:{&quot;persistentVolumeReclaimPolicy&quot;:&quot;Retain&quot;}}'</code> to have a <code>Retain</code> value.</p> <p>what will happen now if I try to delete the <code>StorageClass</code> or even the cluster or accidentily remove <code>pvc</code>' s does the value of the <code>StorageClass</code> has any effect on the <code>pv</code> even after creating?</p>
<blockquote> <p>I wanted to know what's the difference of the reclaimPolicy in StorageClass vs. PersistentVolume.</p> </blockquote> <p>They mean the same thing, the difference is that the one in <code>StorageClass</code> is used for <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="noreferrer">dynamic provisioning of volumes</a>, for manually created persistent volumes they use the <code>reclaimPolicy</code> they were assigned during creation.</p> <blockquote> <p>what will happen now if I try to delete the StorageClass or even the cluster or accidentily remove pvc' s does the value of the StorageClass has any effect on the pv even after creating?</p> </blockquote> <p>I don't think anything will happen if you delete <code>StorageClass</code>, no pv or pvc should be deleted. If you delete a PVC then the <code>reclaimPolicy</code> of the <code>PersistentVolume</code> will be used.</p>
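<p>For illustration, the <code>reclaimPolicy</code> in a <code>StorageClass</code> is a top-level field that only becomes the policy of <em>dynamically provisioned</em> PVs (the class name and provisioner below are made-up placeholders):</p>

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-ssd                # hypothetical name
provisioner: kubernetes.io/gce-pd   # hypothetical provisioner
# Copied onto each PV this class provisions at creation time
reclaimPolicy: Retain
```

<p>Once a PV exists, it carries its own <code>persistentVolumeReclaimPolicy</code>, which is why your <code>kubectl patch</code> sticks regardless of what the class says afterwards.</p>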
<p>I can connect to the kube API with kubectl and a config file (the library is @kubernetes/client-node). I want to run my application in a container without kubectl. How can I connect to the kube API with a service account, without a config file?</p>
<p>If you are running your code inside the container, you can use the default config directly:</p>

<pre><code>const kc = new k8s.KubeConfig(); kc.loadFromDefault(); </code></pre>

<p>With this method there is no need to pass a kubeconfig; inside the container the client adjusts the config automatically and you can use the API directly.</p>

<p>For service account access you can check this document: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p>

<p>How to generate the service account &amp; secret token that will call the API server: <a href="https://cloudhedge.io/setting-up-kubernetes-api-access-using-service-account/" rel="nofollow noreferrer">https://cloudhedge.io/setting-up-kubernetes-api-access-using-service-account/</a></p>
<p>Basically I'm trying to create a pipeline on my local Jenkins, create an image and then send to Docker Hub. Then I'll deploy this image on our local Kubernetes(Server IP:10.10.10.4).</p> <p>So Jenkins pipeline script is below;</p> <pre><code>docker build -t test123/demo:$BUILD_NUMBER . docker login --username=testuser --password=testpass docker push test123/demo:$BUILD_NUMBER ssh -tt root@10.10.10.4 'kubectl apply -f hybris-deployment.yaml' </code></pre> <p>So the problem is; I can tag succesfully images with $BUILD_NUMBER and push to Docker hub. Then I have to use this $BUILD_NUMBER in the Kubernetes server's YAML file and deploy it.</p> <p>But I can't pass this $BUILD_NUMBER to the Kubernetes server. Somehow I should send this build number with the ssh command and use it in the YAML file as a tag.</p> <p>Any idea how can I do that? Thanks!</p>
<p>You can create your pipeline something like this.</p>

<p>If your <strong>deployment.yaml</strong> file looks like:</p>

<pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: test-image labels: app: test-image spec: selector: matchLabels: app: test-image tier: frontend strategy: type: RollingUpdate template: metadata: labels: app: test-image tier: frontend spec: containers: - image: TEST_IMAGE_NAME name: test-image ports: - containerPort: 8080 name: http - containerPort: 443 name: https </code></pre>

<p>then in the pipeline you can change the image by replacing this placeholder in <strong>deployment.yaml</strong>:</p>

<pre><code>sed -i &quot;s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA,&quot; deployment.yaml </code></pre>

<p>This command replaces that line in <strong>deployment.yaml</strong>, so the image URL is updated before you apply the YAML file.</p>

<p>You can see an example Jenkinsfile here: <a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/sample-app/Jenkinsfile" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/sample-app/Jenkinsfile</a></p>

<p>Whole project link: <a href="https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes</a></p>
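<p>Concretely, the same substitution can be tried out locally before wiring it into the pipeline (the file path, image name and build number below are placeholders, not values from your setup):</p>

```shell
# Simulate Jenkins' $BUILD_NUMBER and render the manifest locally
BUILD_NUMBER=42
cat > /tmp/deployment.yaml <<'EOF'
    spec:
      containers:
      - image: TEST_IMAGE_NAME
EOF
# Replace the placeholder with the tagged image, in place
sed -i "s,TEST_IMAGE_NAME,test123/demo:${BUILD_NUMBER}," /tmp/deployment.yaml
grep 'image:' /tmp/deployment.yaml
```

<p>Once the manifest is rendered on the Jenkins side, you can ship the finished file instead of trying to expand the variable on the remote host, e.g. <code>scp /tmp/deployment.yaml root@10.10.10.4:/tmp/ &amp;&amp; ssh root@10.10.10.4 'kubectl apply -f /tmp/deployment.yaml'</code>, so no variable has to survive the ssh quoting.</p>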
<p>I have a Kubernetes cluster and do all my deployments in a declarative manner, by executing the &quot;apply&quot; command in CI/CD pipeline.</p> <p>&quot;Apply&quot; works in such a way that it merges the state of an object with the manifest that came with the command. As the result, you may make manual changes - for example, add a new key-value to a ConfigMap and &quot;apply&quot; will leave it intact, even though this key doesn't exist in source code.</p> <p>So I wonder, how can I detect such issues? Doing &quot;delete&quot; and &quot;create&quot; is not an option since it disrupts the availability. I don't want to change deployment from &quot;apply&quot; since it's production. I just to find manual modifications in a namespace.</p>
<p><strong><a href="https://github.com/weaveworks/kubediff" rel="nofollow noreferrer">kubediff</a></strong> is what you are looking for.</p> <p>You can run it as a command-line tool e.g.:</p> <pre><code>$ ./kubediff k8s Checking ReplicationController 'kubediff' *** .spec.template.spec.containers[0].args[0]: '-repo=https://github.com/weaveworks/kubediff' != '-repo=https://github.com/&lt;your github repo&gt;' Checking Secret 'kubediff-secret' Checking Service 'kubediff' </code></pre> <p>or as a service inside K8s cluster. This mode also gives you a simple UI showing the output.</p>
<p>I am looking to find out if there is a way I can assign a specific Deployment to a specific node pool.</p> <p>I am planning to deploy a big-size application using kubernetes. I am wondering if there is a way we can assign deployments to specific node pools. In other words, we have 3 types of services:</p> <ul> <li>General services, low performance and low replica count</li> <li>Monitor services, high I/O and high performance servers needed</li> <li>Module services, most demanding services, we are aiming to allocate the biggest part of our budget for this.</li> </ul> <p>So obviously we would like to best allocate nodes to specific deployments so no resources go wasted, for example low tier servers node pool X would be only utilized by General service deployments, high tier servers node pool Y would be only utilized by the monitor services, and the highest tier servers would only be utilized by the Module services.</p> <p>I understand that there is a huge number of articles that talks about pod affinity and other related things, but what I seem to not be able to find anything that matches the following: <code>How to assign Deployment to specific node pool</code></p> <p>Thanks in advance!</p>
<p>Another way (in addition to what Yayotrón proposed) would be to work with node affinity and anti-affinity. For more information check the official documentation here: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p> <p>Taints and tolerations are very strict, and scheduling on other nodes would not be possible at all. With affinity and anti-affinity you can specify whether you want a hard requirement (<code>requiredDuringSchedulingIgnoredDuringExecution</code>) or a soft preference (<code>preferredDuringSchedulingIgnoredDuringExecution</code>).</p>
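<p>For example, assuming your node pool's nodes carry a label such as <code>cloud.google.com/gke-nodepool: module-pool</code> (the label key depends on your cloud provider, and the pool name here is made up), a hard node-affinity rule in the Deployment's pod template could look like this:</p>

```yaml
# Fragment of a Deployment's pod template spec; pods matching this rule
# are only scheduled onto nodes labeled as part of "module-pool".
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cloud.google.com/gke-nodepool
            operator: In
            values:
            - module-pool
```

<p>Using <code>preferredDuringSchedulingIgnoredDuringExecution</code> instead would let the scheduler fall back to other pools when the preferred one is full.</p>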