| prompt | response |
|---|---|
<p>I want to run 2 containers in a pod (sidecar pattern), of which one container must run with restricted access. In K8s, the service account is common to all containers in the pod. Is it possible to run a sidecar with a different service account, or with no service account?</p>
| <p>I just asked the same question for an init container in the Slack group. The answer was no.</p>
<p>However, one user had the idea of disabling automounting of the service account token and manually mounting a token into the init container (or the sidecar, in your case).</p>
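<p>A rough sketch of that workaround: turn off automounting on the pod, then project a token volume only into the container that should have API credentials. Image names and the volume name here are illustrative; note the sidecar still runs under the pod's service account identity, it just has no token mounted:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  serviceAccountName: app-sa
  automountServiceAccountToken: false   # no container gets the token by default
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600
  containers:
    - name: main
      image: my-app:latest              # illustrative image name
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          readOnly: true
    - name: sidecar
      image: my-sidecar:latest          # no token volume, so no API credentials
</code></pre>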
|
<p>I have a PV/PVC in my kubernetes cluster.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
nfs:
path: /tmp
server: 172.17.0.2
</code></pre>
<p>I want to externally add <code>mountOptions</code> to all PVs like below.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /tmp
server: 172.17.0.2
</code></pre>
<p>Is there any way I can achieve this using <code>kubectl</code> cli like we add annotations to ingress rules and pods?</p>
| <p>You can use <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer"><code>kubectl patch</code> command</a> to add <code>mountOptions</code> to existing PV in the cluster:</p>
<pre><code>kubectl patch pv pv0003 --patch '{"spec": {"mountOptions": ["hard","nfsvers=4.1"]}}'
</code></pre>
<p>If you want to add <code>mountOptions</code> to every PV in the cluster you can use simple bash <code>for</code> loop and <code>kubectl patch</code> command:</p>
<pre><code>for pv in $(kubectl get pv --no-headers -o custom-columns=":metadata.name"); do kubectl patch pv $pv --patch '{"spec": {"mountOptions": ["hard","nfsvers=4.1"]}}'; done
</code></pre>
|
<p>I've encountered rather strange behavior of my kubernetes cluster (1.18.20, calico 3.14.2): when I attempt to upload a two-megabyte JSON file to a pod via curl through a <code>NodePort</code> service, the transmission is interrupted with <code>Recv failure: Connection reset by peer</code>.
Traffic capture shows that both client and server receive RST packets from the network, but neither of them sent any.
A binary file of the same size uploads successfully, but JSON is rejected regardless of the <code>Content-Type</code> specified.
File transfer between pods (using similar commands and the same file) proceeds smoothly.
Upload through ingress (also configured using <code>NodePort</code>) fails too.
The size of the received fragment is always the same, approximately 850K.</p>
<p>I've used <code>nc -l 80*</code> instead of real service with the same outcome.</p>
<p>Apparently, <code>kube-proxy</code> doesn't like big JSON files.</p>
<p>Is it possible to send big JSON files to a pod from external clients, or is such a limit hardcoded?</p>
<p><strong>UPD1</strong></p>
<p>Same behavior for fresh cluster (1.22.0, calico 3.20.0).</p>
<p><strong>UPD2</strong></p>
<p>The system rejects not every big JSON payload, but only a few percent of user uploads.
The payload is specially crafted by the client application: the first part of a multi-volume Zip archive is base64-encoded and encapsulated as a JSON file ('{ "data": "..." }').
The size of the fragment causing the connection break is about 640K.</p>
<p>It looks like an error in a filtering procedure inside <code>kube-proxy</code>.</p>
| <p>Unfortunately, the source of the problem was a misconfiguration of the IDS/IPS.</p>
<p>Nothing to do with <strong>kube-proxy</strong>.</p>
|
<p>In helm, there are Math functions <a href="https://helm.sh/docs/chart_template_guide/function_list/#math-functions" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/function_list/#math-functions</a> and they have a <code>div</code> function which divides integer! but is there any way to get a decimal result out of division?</p>
<p>I need to get a unit interval out of the percentage.</p>
<p>So if the value is 90 I need to get 0.9, if it is 2 I need 0.02.</p>
<p>Thanks</p>
| <p>Unfortunately there are no functions available in Helm templates which support floating point division.</p>
<p>Easiest would be to register a custom function which does the conversion and division.</p>
<p>If you can't register your custom functions, there is still a way to do this with what's available in Helm templates.</p>
<p>You may construct a string that contains the floating point representation of the expected result, and use the <code>float64</code> Helm template function to parse that string into a <code>float64</code> value.</p>
<p>Here's how it may look like:</p>
<pre><code>{{ printf "%d.%02d" (div . 100) (mod . 100) | float64 }}
</code></pre>
<p>If the percent is <code>100</code>, the first digit will be <code>1</code>, <code>0</code> otherwise. Then follows a dot and the 2-digit value of the percent, so using this format string, you'll get the floating point representation. Passing this to the <code>float64</code> function, it'll parse it and return it as a <code>float64</code> value.</p>
<p>Testing it:</p>
<pre><code>package main

import (
	"os"
	"strconv"
	"text/template"
)

func main() {
	t := template.Must(template.New("").Funcs(template.FuncMap{
		"div":     func(a, b int) int { return a / b },
		"mod":     func(a, b int) int { return a % b },
		"float64": func(s string) (float64, error) { return strconv.ParseFloat(s, 64) },
	}).Parse(`{{range .}}{{ printf "%d.%02d" (div . 100) (mod . 100) | float64 }}
{{end}}`))
	if err := t.Execute(os.Stdout, []int{0, 2, 90, 100}); err != nil {
		panic(err)
	}
}
</code></pre>
<p>Output (try it on the Go Playground):</p>
<pre><code>0
0.02
0.9
1
</code></pre>
|
<p>I have a baremetal cluster deployed using Kubespray with kubernetes 1.22.2, MetalLB, and ingress-nginx enabled. I am getting <code>404 Not found</code> when trying to access any service deployed via helm when setting <code>ingressClassName: nginx</code>. However, everything works fine if I don't use <code>ingressClassName: nginx</code> but <code>kubernetes.io/ingress.class: nginx</code> instead in the helm chart values.yaml. How can I get it to work using <code>ingressClassName</code>?</p>
<p>These are my kubespray settings for <code>inventory/mycluster/group_vars/k8s_cluster/addons.yml</code></p>
<pre><code># Nginx ingress controller deployment
ingress_nginx_enabled: true
ingress_nginx_host_network: false
ingress_publish_status_address: ""
ingress_nginx_nodeselector:
  kubernetes.io/os: "linux"
ingress_nginx_tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Equal"
    value: ""
    effect: "NoSchedule"
ingress_nginx_namespace: "ingress-nginx"
ingress_nginx_insecure_port: 80
ingress_nginx_secure_port: 443
ingress_nginx_configmap:
  map-hash-bucket-size: "128"
  ssl-protocols: "TLSv1.2 TLSv1.3"
ingress_nginx_configmap_tcp_services:
  9000: "default/example-go:8080"
ingress_nginx_configmap_udp_services:
  53: "kube-system/coredns:53"
ingress_nginx_extra_args:
  - --default-ssl-certificate=default/mywildcard-tls
ingress_nginx_class: "nginx"
</code></pre>
<p>grafana helm values.yaml</p>
<pre><code>ingress:
  enabled: true
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  ingressClassName: nginx
  # Values can be templated
  annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  labels: {}
  path: /
  # pathType is only for k8s >= 1.18
  pathType: Prefix
  hosts:
    - grafana.mycluster.org
  tls:
    - secretName: mywildcard-tls
      hosts:
        - grafana.mycluster.org
</code></pre>
<p><code>kubectl describe pod grafana-679bbfd94-p2dd7</code></p>
<pre><code>...
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  25m                default-scheduler  Successfully assigned default/grafana-679bbfd94-p2dd7 to node1
  Normal   Pulled     25m                kubelet            Container image "grafana/grafana:8.2.2" already present on machine
  Normal   Created    25m                kubelet            Created container grafana
  Normal   Started    25m                kubelet            Started container grafana
  Warning  Unhealthy  24m (x3 over 25m)  kubelet            Readiness probe failed: Get "http://10.233.90.33:3000/api/health": dial tcp 10.233.90.33:3000: connect: connection refused
<p><code>kubectl get svc</code></p>
<pre><code>NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
grafana      LoadBalancer   10.233.14.90   10.10.30.52   80:30285/TCP   55m
kubernetes   ClusterIP      10.233.0.1     <none>        443/TCP        9d
</code></pre>
<p><code>kubectl get ing</code> (no node address assigned)</p>
<pre><code>NAME      CLASS   HOSTS                   ADDRESS   PORTS     AGE
grafana   nginx   grafana.mycluster.org             80, 443   25m
</code></pre>
<p><code>kubectl describe ing grafana</code> (no node address assigned)</p>
<pre><code>Name:             grafana
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  mywildcard-tls terminates grafana.mycluster.org
Rules:
  Host                   Path  Backends
  ----                   ----  --------
  grafana.mycluster.org
                         /     grafana:80 (10.233.90.33:3000)
Annotations:             meta.helm.sh/release-name: grafana
                         meta.helm.sh/release-namespace: default
Events:                  <none>
</code></pre>
<p><code>kubectl get all --all-namespaces</code></p>
<pre><code>NAMESPACE        NAME                                                              READY   STATUS    RESTARTS   AGE
default          pod/grafana-b988b9b6-pxccw                                        1/1     Running   0          2m53s
default          pod/nfs-client-nfs-subdir-external-provisioner-68f44cd9f4-wjlpv   1/1     Running   0          17h
ingress-nginx    pod/ingress-nginx-controller-6m2vt                                1/1     Running   0          17h
ingress-nginx    pod/ingress-nginx-controller-xkgxl                                1/1     Running   0          17h
kube-system      pod/calico-kube-controllers-684bcfdc59-kmsst                      1/1     Running   0          17h
kube-system      pod/calico-node-dhlnt                                             1/1     Running   0          17h
kube-system      pod/calico-node-r8ktz                                             1/1     Running   0          17h
kube-system      pod/coredns-8474476ff8-9sbwh                                      1/1     Running   0          17h
kube-system      pod/coredns-8474476ff8-fdgcb                                      1/1     Running   0          17h
kube-system      pod/dns-autoscaler-5ffdc7f89d-vskvq                               1/1     Running   0          17h
kube-system      pod/kube-apiserver-node1                                          1/1     Running   0          17h
kube-system      pod/kube-controller-manager-node1                                 1/1     Running   1          17h
kube-system      pod/kube-proxy-hbjz6                                              1/1     Running   0          16h
kube-system      pod/kube-proxy-lfqzt                                              1/1     Running   0          16h
kube-system      pod/kube-scheduler-node1                                          1/1     Running   1          17h
kube-system      pod/kubernetes-dashboard-548847967d-qqngw                         1/1     Running   0          17h
kube-system      pod/kubernetes-metrics-scraper-6d49f96c97-2h7hc                   1/1     Running   0          17h
kube-system      pod/nginx-proxy-node2                                             1/1     Running   0          17h
kube-system      pod/nodelocaldns-64cqs                                            1/1     Running   0          17h
kube-system      pod/nodelocaldns-t5vv6                                            1/1     Running   0          17h
kube-system      pod/registry-proxy-kljvw                                          1/1     Running   0          17h
kube-system      pod/registry-proxy-nz4qk                                          1/1     Running   0          17h
kube-system      pod/registry-xzh9d                                                1/1     Running   0          17h
metallb-system   pod/controller-77c44876d-c92lb                                    1/1     Running   0          17h
metallb-system   pod/speaker-fkjqp                                                 1/1     Running   0          17h
metallb-system   pod/speaker-pqjgt                                                 1/1     Running   0          17h

NAMESPACE     NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/grafana                     LoadBalancer   10.233.1.104    10.10.30.52   80:31116/TCP             2m53s
default       service/kubernetes                  ClusterIP      10.233.0.1      <none>        443/TCP                  17h
kube-system   service/coredns                     ClusterIP      10.233.0.3      <none>        53/UDP,53/TCP,9153/TCP   17h
kube-system   service/dashboard-metrics-scraper   ClusterIP      10.233.35.124   <none>        8000/TCP                 17h
kube-system   service/kubernetes-dashboard        ClusterIP      10.233.32.133   <none>        443/TCP                  17h
kube-system   service/registry                    ClusterIP      10.233.30.221   <none>        5000/TCP                 17h

NAMESPACE        NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
ingress-nginx    daemonset.apps/ingress-nginx-controller   2         2         2       2            2           kubernetes.io/os=linux   17h
kube-system      daemonset.apps/calico-node                2         2         2       2            2           kubernetes.io/os=linux   17h
kube-system      daemonset.apps/kube-proxy                 2         2         2       2            2           kubernetes.io/os=linux   17h
kube-system      daemonset.apps/nodelocaldns               2         2         2       2            2           kubernetes.io/os=linux   17h
kube-system      daemonset.apps/registry-proxy             2         2         2       2            2           <none>                   17h
metallb-system   daemonset.apps/speaker                    2         2         2       2            2           kubernetes.io/os=linux   17h

NAMESPACE        NAME                                                         READY   UP-TO-DATE   AVAILABLE   AGE
default          deployment.apps/grafana                                      1/1     1            1           2m53s
default          deployment.apps/nfs-client-nfs-subdir-external-provisioner   1/1     1            1           17h
kube-system      deployment.apps/calico-kube-controllers                      1/1     1            1           17h
kube-system      deployment.apps/coredns                                      2/2     2            2           17h
kube-system      deployment.apps/dns-autoscaler                               1/1     1            1           17h
kube-system      deployment.apps/kubernetes-dashboard                         1/1     1            1           17h
kube-system      deployment.apps/kubernetes-metrics-scraper                   1/1     1            1           17h
metallb-system   deployment.apps/controller                                   1/1     1            1           17h

NAMESPACE        NAME                                                                    DESIRED   CURRENT   READY   AGE
default          replicaset.apps/grafana-b988b9b6                                        1         1         1       2m53s
default          replicaset.apps/nfs-client-nfs-subdir-external-provisioner-68f44cd9f4   1         1         1       17h
kube-system      replicaset.apps/calico-kube-controllers-684bcfdc59                      1         1         1       17h
kube-system      replicaset.apps/coredns-8474476ff8                                      2         2         2       17h
kube-system      replicaset.apps/dns-autoscaler-5ffdc7f89d                               1         1         1       17h
kube-system      replicaset.apps/kubernetes-dashboard-548847967d                         1         1         1       17h
kube-system      replicaset.apps/kubernetes-metrics-scraper-6d49f96c97                   1         1         1       17h
kube-system      replicaset.apps/registry                                                1         1         1       17h
metallb-system   replicaset.apps/controller-77c44876d                                    1         1         1       17h
</code></pre>
<p><code>kubectl get ing grafana -o yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    meta.helm.sh/release-name: grafana
    meta.helm.sh/release-namespace: default
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  creationTimestamp: "2021-11-11T07:16:12Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: grafana
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 8.2.2
    helm.sh/chart: grafana-6.17.5
  name: grafana
  namespace: default
  resourceVersion: "3137"
  uid: 6c34d3bd-9ab6-42fe-ac1b-7620a9566f62
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.mycluster.org
    http:
      paths:
      - backend:
          service:
            name: ssl-redirect
            port:
              name: use-annotation
        path: /*
        pathType: Prefix
      - backend:
          service:
            name: grafana
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
</code></pre>
| <blockquote>
<p>Running <code>kubectl get ingressclass</code> returned 'No resources found'.</p>
</blockquote>
<p>That's the main reason for your issue.</p>
<p><em>Why?</em></p>
<p>When you are specifying <code>ingressClassName: nginx</code> in your Grafana <code>values.yaml</code> file you are <a href="https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress" rel="noreferrer">setting your Ingress resource to use</a> <code>nginx</code> Ingress class which does not exist.</p>
<p>I replicated your issue using <a href="https://minikube.sigs.k8s.io/docs/start/" rel="noreferrer">minikube</a>, <a href="https://metallb.universe.tf/" rel="noreferrer">MetalLB</a> and <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">NGINX Ingress</a> installed via modified <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml" rel="noreferrer">deploy.yaml file</a> with commented <code>IngressClass</code> resource + set NGINX Ingress controller name to <code>nginx</code> as in your example. The result was exactly the same - <code>ingressClassName: nginx</code> didn't work (no address), but annotation <code>kubernetes.io/ingress.class: nginx</code> worked.</p>
<hr />
<p>(For the below solution I'm using the controller pod name <code>ingress-nginx-controller-86c865f5c4-qwl2b</code>, but in your case it will be different - check it using the <code>kubectl get pods -n ingress-nginx</code> command. Also keep in mind it's kind of a workaround - usually the <code>IngressClass</code> resource is installed automatically with a whole installation of NGINX Ingress. I'm presenting this solution to explain <em>why</em> it didn't work for you before, and <em>why</em> it works with NGINX Ingress installed using helm.)</p>
<p>In the logs of the Ingress NGINX controller I found (<code>kubectl logs ingress-nginx-controller-86c865f5c4-qwl2b -n ingress-nginx</code>):</p>
<pre><code>"Ignoring ingress because of error while validating ingress class" ingress="default/minimal-ingress" error="no object matching key \"nginx\" in local store"
</code></pre>
<p>So it clearly shows that there is no object matching the <code>nginx</code> controller class - because there is no <code>IngressClass</code> resource, which is the "link" between the NGINX Ingress controller and the Ingress resource.</p>
<p>You can verify which controller class name is bound to the controller by running <code>kubectl get pod ingress-nginx-controller-86c865f5c4-qwl2b -n ingress-nginx -o yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>...
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
    - --election-id=ingress-controller-leader
    - --controller-class=k8s.io/nginx
...
</code></pre>
<p>Now I will create and apply the following <code>IngressClass</code> resource:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/nginx
</code></pre>
<p>Now in the logs I can see that it's properly configured:</p>
<pre><code>I1115 12:13:42.410384 7 main.go:101] "successfully validated configuration, accepting" ingress="minimal-ingress/default"
I1115 12:13:42.420408 7 store.go:371] "Found valid IngressClass" ingress="default/minimal-ingress" ingressclass="nginx"
I1115 12:13:42.421487 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"minimal-ingress", UID:"c708a672-a8dd-45d3-a2ec-f2e2881623ea", APIVersion:"networking.k8s.io/v1", ResourceVersion:"454362", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
</code></pre>
<p>After re-applying the Ingress resource definition, the Ingress resource got an IP address.</p>
<hr />
<p>As I said before, instead of using this workaround, I'd suggest installing NGINX Ingress using a solution that automatically installs the <code>IngressClass</code> as well. As you have chosen a helm chart, it includes an <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-ingressclass.yaml" rel="noreferrer">Ingress Class</a> resource, so the problem is gone. Other possible ways to install <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">are here</a>.</p>
|
<p>I have an application which stores logs in a file at a configurable location.
Let's say <code>/abc/pqr/application.log</code></p>
<p>The application is being migrated to Kubernetes, where it will run in a single pod. If I run <code>kubectl logs <pod-name></code>, I get anything that gets printed on <code>stdout</code>, which I can redirect to a file. I want to do the other way around: I have a file containing logs at the above location, and I want <code>kubectl logs <pod-name></code> to print logs from that file.</p>
<p>For example, if I run <code>kubectl logs kafka</code> for a kafka pod deployed using <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">bitnami/kafka</a>, I get logs from <code>/opt/bitnami/kafka/logs/server.log</code>. I want to mimic this behavior.</p>
| <p>The <code>kubectl logs</code> command shows everything the container writes to <code>stdout</code> and <code>stderr</code>, so you need to supply your logs there.</p>
<p>It's a common practice when containerised applications write their logs to <code>stdout</code> and <code>stderr</code>.</p>
<p>This way there are two main options:</p>
<ol>
<li><p>Adjust the application so it writes logs to <code>stdout</code> and file as well.
E.g. using shell it can be done with <code>tee</code> command.</p>
<p>Please find a <a href="https://stackoverflow.com/questions/418896/how-to-redirect-output-to-a-file-and-stdout">good answers</a> with description of the command.</p>
</li>
<li><p>Use a sidecar container which will be getting logs from file and translating them into its own <code>stdout</code>.
Please find <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent" rel="nofollow noreferrer">Using a sidecar container with the logging agent</a></p>
</li>
</ol>
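<p>A minimal sketch of the sidecar approach (option 2), assuming the log path from the question; the application and sidecar share an <code>emptyDir</code> volume, and the sidecar just tails the file to its own stdout (image names are illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: app
      image: my-app:latest            # illustrative image name
      volumeMounts:
        - name: logs
          mountPath: /abc/pqr         # the application writes application.log here
    - name: log-tailer
      image: busybox:1.36
      args: [/bin/sh, -c, 'tail -n+1 -F /abc/pqr/application.log']
      volumeMounts:
        - name: logs
          mountPath: /abc/pqr
          readOnly: true
</code></pre>
<p><code>kubectl logs app-with-log-sidecar -c log-tailer</code> then prints the file's contents.</p>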
<p>Useful link about kubernetes logging (including containers):</p>
<ul>
<li><a href="https://sematext.com/guides/kubernetes-logging/#how-does-logging-in-kubernetes-work" rel="nofollow noreferrer">The Complete Guide to Kubernetes Logging</a></li>
</ul>
|
<p>Having trouble deploying Hashicorp Vault on kubernetes/helm. Can't get vault to work at all. I've really tried changing almost all the parameters I could and still can't get it to work and I don't know where the issue lies exactly.</p>
<p>The error I get is mainly based on <code>Error Checking Seal status/Client sent an HTTP request to an HTTPS server.</code></p>
<p>If I set <code>tls_disable=true</code> inside the .Values.ha.config then I get an error that vault is sealed but I still can't view the UI... I feel like deploying vault has been bipolar and it sometimes works and sometimes doesn't. Then I can't replicate where the bug lied either. This has been a headache.</p>
<p>Here is my values.yaml file:</p>
<pre><code>server:
  enabled: true
  ingress:
    enabled: true
    annotations:
      cert.<issuer>.cloud/issuer: <intermediate-hostname>
      cert.<issuer>.cloud/secretname: vault-server-tls
      cert.<issuer>.cloud/purpose: managed
      dns.<issuer>.cloud/class: <class>
      dns.<issuer>.cloud/dnsnames: "<hostname>"
      dns.<issuer>.cloud/ttl: "600"
    hosts:
      - host: "vault.<hostname>"
        paths: []
    tls:
      - secretName: vault-server-tls
        hosts:
          - vault.<hostname>
  extraVolumes:
    - type: secret
      name: vault-server-tls
  service:
    enabled: true
    port: 8200
    targetPort: 443
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = false
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/vault-server-tls/tls.crt"
          tls_key_file = "/vault/userconfig/vault-server-tls/tls.key"
          tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
        }
        storage "raft" {
          path = "/vault/data"
        }
    config: |
      ui = true
      listener "tcp" {
        tls_disable = false
        address = "[::]:443"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/tls.crt"
        tls_key_file = "/vault/userconfig/vault-server-tls/tls.key"
        tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
        tls_require_and_verify_client_cert = false
      }
      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }
      disable_mlock = true
ui:
  enabled: true
  serviceType: LoadBalancer
  externalPort: 443
  targetPort: 8200
<p>EDIT: I'm now able to view the UI from the LoadBalancer but not from the hostname set in <code>dns.<issuer>.cloud/dnsnames: "<hostname>"</code> under the ingress.annotations</p>
<p>Still get the error but can view the UI via the LoadBalancer: <code>Readiness probe failed. Error unsealing: Error making API request. URL: PUT http://127.0.0.1:8200/v1/sys/unsealCode: 400. Raw Message: Client sent an HTTP request to an HTTPS server.</code></p>
| <p>As you mentioned, you faced the issue of <code>Error Checking Seal status/Client sent an HTTP request to an HTTPS server</code> and <code>vault is sealed</code>.</p>
<p>Once you have deployed Vault using the helm chart, you have to unseal the Vault using the CLI the first time; after that, the UI will be available to use.</p>
<p>Reference document : <a href="https://learn.hashicorp.com/tutorials/vault/kubernetes-raft-deployment-guide?in=vault/kubernetes#initialize-and-unseal-vault" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/vault/kubernetes-raft-deployment-guide?in=vault/kubernetes#initialize-and-unseal-vault</a></p>
<p>Get the list of <strong>pods</strong></p>
<pre><code>kubectl get pods --selector='app.kubernetes.io/name=vault' --namespace='vault'
</code></pre>
<p><strong>Exec</strong> into the pods</p>
<pre><code>kubectl exec --stdin=true --tty=true vault-0 -- vault operator init
kubectl exec --stdin=true --tty=true vault-0 -- vault operator unseal
</code></pre>
<p>Once you unseal the Vault, the pod status will change to <strong>1/1</strong> in <strong>Ready</strong> instead of <strong>0/1</strong>.</p>
|
| <p>I am trying to copy and paste a few commands inside the labs.play-with-k8s free instance provided by Kubernetes. Neither <code>Ctrl+C</code> nor <code>Ctrl+V</code> works.</p>
<p>I found a few options like <code>Ctrl+Shift+Insert</code> to copy, but that also doesn't seem to work for me.</p>
<p>Any suggestion please.</p>
| <p>Use <code>Ctrl + Insert</code> to copy and <code>Shift + Insert</code> to paste.</p>
|
<p>In Kubernetes we can request resources using different API versions:</p>
<pre><code>kubectl get roles.v1.rbac.authorization.k8s.io foo -n bar -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: foo
namespace: bar
rules:
- apiGroups:
- ""
resources:
- endpoints
- secrets
verbs:
- create
- get
- watch
- list
- update
</code></pre>
<pre><code>kubectl get roles.v1beta1.rbac.authorization.k8s.io foo -n bar -oyaml
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: foo
namespace: bar
rules:
- apiGroups:
- ""
resources:
- endpoints
- secrets
verbs:
- create
- get
- watch
- list
- update
</code></pre>
<ul>
<li>Would the API version used to create a resource have an impact on the resource stored in ETCD?</li>
<li>If a resource was stored when the newer API version (v1) did not exist yet, would this be a problem when the older API version (v1beta1) is removed?</li>
<li>Would upgrading to Kubernetes v1.22, which removes <code>rbac.authorization.k8s.io/v1beta1</code>, break already created/stored resources?</li>
<li>How are resource transformations between different API versions handled?</li>
</ul>
| <blockquote>
<p>If a resource was stored when the newer API version (v1) did not exist
yet, would this be a problem when the older API version (v1beta1) is
removed?</p>
</blockquote>
<p>Kubernetes has a well-defined, flexible deprecation system that allows API versions to be introduced, maintained, and migrated between over time. However (jumping to your next question), you sometimes have to manually migrate resources to up-to-date API versions.</p>
<p>You can check <a href="https://kubernetes.io/docs/reference/using-api/deprecation-policy/" rel="nofollow noreferrer">Kubernetes Deprecation Policy guide</a>, that is very important part of keeping cluster in work condition.</p>
<p>Main rules:</p>
<ul>
<li>Rule #1: API elements may only be removed by incrementing the version of the API group.</li>
<li>Rule #2: API objects must be able to round-trip between API versions in a given release without information loss, with the exception of whole REST resources that do not exist in some versions.</li>
<li>Rule #3: An API version in a given track may not be deprecated until a new API version at least as stable is released.</li>
<li>Rule #4a: Other than the most recent API versions in each track, older API versions must be supported after their announced deprecation for a certain duration.</li>
<li>Rule #4b: The "preferred" API version and the "storage version" for a given group may not advance until after a release has been made that supports both the new version and the previous version</li>
</ul>
<p>You can also check the table that describes which API versions are supported in a series of subsequent releases.</p>
<hr />
<blockquote>
<p>Would upgrading to Kubernetes v1.22, which removes
rbac.authorization.k8s.io/v1beta1, break already created/stored
resources?</p>
</blockquote>
<p>I think yes, and you have to take some actions according to the <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#rbac-resources-v122" rel="nofollow noreferrer">1.22 RBAC deprecated resources guide</a>.</p>
<p><a href="https://i.stack.imgur.com/cJTAq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cJTAq.png" alt="enter image description here" /></a></p>
<hr />
<blockquote>
<p>How are resource transformations between different API versions
handled?</p>
</blockquote>
<p>Check <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#what-to-do" rel="nofollow noreferrer">What to do</a></p>
<ul>
<li><p><a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#test-with-deprecated-apis-disabled" rel="nofollow noreferrer">Test with deprecated APIs disabled</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#locate-use-of-deprecated-apis" rel="nofollow noreferrer">Locate use of deprecated APIs</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#migrate-to-non-deprecated-apis" rel="nofollow noreferrer">Migrate to non-deprecated APIs</a></p>
</li>
</ul>
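<p>As a sketch of the migration step (assuming cluster-admin access and a release that still serves both API versions): reading each object at the current API version and writing it back makes the API server re-serialize it at the up-to-date storage version, so nothing depends on <code>v1beta1</code> once it is removed:</p>
<pre><code># List Roles through the deprecated API version to spot remaining users of it
kubectl get roles.v1beta1.rbac.authorization.k8s.io --all-namespaces

# Read every Role at the current (v1) API version and write it back,
# so etcd stores the objects at the up-to-date storage version
kubectl get roles.v1.rbac.authorization.k8s.io --all-namespaces -o yaml \
  | kubectl replace -f -
</code></pre>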
|
<p>Since I added some additional initContainers to my Airflow helm chart <a href="https://github.com/apache/airflow/tree/main/chart" rel="nofollow noreferrer">(link)</a>, I was trying to set a default <code>initContainerResources</code> in my helm values and deployment yaml:</p>
<ul>
<li>values.yaml</li>
</ul>
<pre><code># Airflow scheduler settings
scheduler:
  initContainerResources:
    resources:
      limits:
        cpu: 200m
        memory: 255Mi
      requests:
        cpu: 100m
        memory: 128Mi
<ul>
<li>.. and deployment.yaml</li>
</ul>
<pre><code>...
      - name: scheduler-add-init1
        securityContext:
          allowPrivilegeEscalation: False
        resources:
{{ toYaml .Values.scheduler.initContainerResources.resources | indent 12 }}
...
</code></pre>
<p>However, when I try to render the files with helm template, I get:</p>
<blockquote>
<p>Error: values don't meet the specifications of the schema(s) in the
following chart(s): airflow:</p>
<ul>
<li>scheduler: Additional property initContainerResources is not allowed</li>
</ul>
</blockquote>
<p>My goal was to define the init container resources together but independent from the scheduler container. What's wrong with my setup?</p>
| <p>It turned out the schema validation process was blocking me from adding additional parameters. Adding the desired key to the schema made it work:</p>
<p>values.schema.json</p>
<pre><code>"initContainerResources": {
  "description": "Add default resources to all init containers of scheduler.",
  "type": "object",
  "default": "See values.yaml"
},
</code></pre>
|
<p>I am looking for a way to get all object's metadata within a k8s cluster and send it out to an external server.</p>
<p>By metadata, I refer to objects <code>Name</code>, <code>Kind</code>, <code>Labels</code>, <code>Annotations</code>, etc.
The intention is to build an offline inventory of a cluster.</p>
<p>What would be the best approach to build it? Is there any tool that already does something similar?</p>
<p>Thanks</p>
| <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>There are different ways to achieve it.</p>
<ol>
<li><p>From this <a href="https://github.com/kubernetes/kubectl/issues/151#issuecomment-402003022" rel="nofollow noreferrer">GitHub issue comment</a> it's possible to iterate through all resources to get all available objects.</p>
<p>in <strong>yaml</strong>:</p>
<pre><code>kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -o yaml
</code></pre>
<p>in <strong>json</strong>:</p>
<pre><code>kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -o json
</code></pre>
<p>And then parse the output.</p>
</li>
<li><p>Use kubernetes clients.</p>
<p>There are already developed <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">kubernetes clients</a> (available for different languages) which can be used to get required information and work with it later.</p>
</li>
<li><p>Use kubectl plugin - <code>ketall</code> (didn't test it)</p>
<p>There's a developed plugin for kubectl which returns <strong>all</strong> cluster resources. Please find <a href="https://github.com/corneliusweig/ketall" rel="nofollow noreferrer">github repo - ketall</a>. Again, after the cluster objects are retrieved, you will need to parse/work with them.</p>
</li>
</ol>
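<p>Whichever approach you pick, each returned object carries the same <code>metadata</code> block, so flattening it into an inventory is straightforward. A minimal Python sketch (the field access mirrors the JSON shape that <code>kubectl get ... -o json</code> returns; adapt as needed):</p>

```python
import json

def inventory(items):
    """Flatten parsed Kubernetes objects into inventory rows
    containing Kind, Name, Namespace, Labels and Annotations."""
    rows = []
    for obj in items:
        meta = obj.get("metadata", {})
        rows.append({
            "kind": obj.get("kind"),
            "name": meta.get("name"),
            "namespace": meta.get("namespace"),
            "labels": meta.get("labels", {}),
            "annotations": meta.get("annotations", {}),
        })
    return rows

# Usage: feed it the "items" list of, e.g.,
#   kubectl get pods -A -o json
# data = json.loads(raw_output)
# rows = inventory(data["items"])
```

<p>The resulting rows can then be serialized and shipped to the external server to build the offline inventory.</p>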
|
<p>I have one docker image and I am using the following command to run it:</p>
<pre><code>docker run -it -p 1976:1976 --name demo demo.docker.cloud.com/demo/runtime:latest
</code></pre>
<p>I want to run the same in Kubernetes. This is my current yaml file.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: v1
kind: Deployment
metadata:
name: demo-deployment
labels:
app: demo
spec:
replicas: 1
selector:
matchLabels:
app: demo
template:
metadata:
labels:
app: demo
spec:
containers:
- name: demo
image: demo.docker.cloud.com/demo/runtime:latest
ports:
- containerPort: 1976
imagePullPolicy: Never</code></pre>
</div>
</div>
</p>
<p>This yaml file covers everything except flag "-it". I am not able to find its Kubernetes equivalent. Please help me out with this. Thanks</p>
| <p><code>kubectl run</code> is the closest match to <code>docker run</code> for the requested scenario.</p>
<p>Some examples from the Kubernetes documentation and their purpose:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -i --tty busybox --image=busybox -- sh # Run pod as interactive shell
kubectl run nginx --image=nginx -n
mynamespace # Run pod nginx in a specific namespace
kubectl run nginx --image=nginx # Run pod nginx and write its spec into a file called pod.yaml
--dry-run=client -o yaml > pod.yaml
</code></pre>
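<p>Note that for a pod that already exists (such as one created by the Deployment in the question), the closest equivalent of docker's <code>-it</code> flags is <code>kubectl exec</code> (pod name below is a placeholder):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec -it <pod-name> -- sh # Open an interactive shell in a running container
</code></pre>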
|
<p>I've updated the <code>kubectl client version</code> to the latest, but I am not able to update the <code>kubectl server version</code>, so the client and server versions differ. The problem is: how can I update the <code>kubectl server version</code>?</p>
<p>P.S. I'm running minikube in Docker locally.</p>
<pre><code>Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.18) exceeds the supported minor version skew of +/-1
</code></pre>
<p>The commands I used to update the kubectl client version are from this reference (for a Mac with an Intel chip): <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/</a></p>
| <p>Posted community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p>You need to upgrade your minikube version in order to upgrade the kubectl server version.</p>
<p>Based on <a href="https://stackoverflow.com/questions/45002364/how-to-upgrade-minikube/64362273#64362273">this answer</a> to upgrade minikube on macOS you just need to run these commands (since 2020):</p>
<pre><code>brew update
brew upgrade minikube
</code></pre>
<p>If you encounter any problems, I'd suggest fully deleting minikube from your system (based on <a href="https://gist.github.com/rahulkumar-aws/65e6fbe16cc71012cef997957a1530a3" rel="nofollow noreferrer">this GitHub page</a>):</p>
<pre><code>minikube stop; minikube delete &&
docker stop $(docker ps -aq) &&
rm -rf ~/.kube ~/.minikube &&
sudo rm -rf /usr/local/bin/localkube /usr/local/bin/minikube &&
launchctl stop '*kubelet*.mount' &&
launchctl stop localkube.service &&
launchctl disable localkube.service &&
sudo rm -rf /etc/kubernetes/ &&
docker system prune -af --volumes
</code></pre>
<p>Then you can install minikube with the newest version from scratch by <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">using brew</a>:</p>
<pre><code>brew install minikube
</code></pre>
<p>or by <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">downloading and installing a binary file</a>:</p>
<pre><code>curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
</code></pre>
|
<p>My kubectl cp isnt working due to tar not being in my bin folder of my container. I can't seem to find how to get it installed.</p>
<p>I get an error when trying:</p>
<pre><code>kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
</code></pre>
<p><strong>Error:</strong></p>
<blockquote>
<p>rpc error: code = 2 desc = oci runtime error: exec failed:
container_linux.go:235: starting container process caused "exec:
"tar": executable file not found in $PATH"</p>
</blockquote>
| <p>It looks like <code>tar</code> is not installed in your container image.</p>
<p>I would suggest installing tar first, at the <strong>Dockerfile</strong> level.</p>
<p>You can add tar to the image using:</p>
<p><strong>Dockerfile</strong>: <code>RUN yum install -y tar</code> (or the equivalent for your base image's package manager)</p>
|
<p>Basically I'm building a system on Google Cloud. Most services run on a k8s cluster, but some code does not: Lambda, Composer operators, and Dataflow jobs are examples. (Composer is also k8s, but a different cluster.)</p>
<p>I picked Kafka as the event channel to interconnect the services and I have to decide the proper place for the Kafka brokers: k8s pods or a VM. I prefer k8s pods, but I worry about the communication between brokers and services, especially with services outside of the k8s cluster.</p>
<blockquote>
<p>Consumers address brokers with a "bootstrap server" setting, a list of static, unique broker addresses. I suppose if the brokers are installed inside k8s, their addresses will not be static or unique from outside. Can brokers be connected to from services outside of k8s? If possible, which string must be provided to the bootstrap server config?</p>
</blockquote>
<p>A conventional virtual machine is the obvious solution, but I want to put more and more things into k8s.</p>
| <p>There are different solutions to your problem.</p>
<p>You can deploy Kafka on the K8s cluster and use a service mesh to interconnect both clusters, so the brokers and services can connect with each other without any worry.</p>
<p>If you are on GCP you can use the MCS service, Traffic Director, or another service mesh.</p>
<p>You can also set up Kafka on a <strong>VM</strong> and expose it over an IP, which the services will then use to connect.</p>
<blockquote>
<p>Can brokers be connected from services outside of k8s?</p>
</blockquote>
<p>Yes, you can expose your Kafka broker using a Service of type LoadBalancer or NodePort. <a href="https://banzaicloud.com/blog/kafka-external-access/#:%7E:text=You%20can%20expose%20the%20Kafka,in%20the%20KafkaCluster%20custom%20resource.&text=Above%2C%20externalListeners%20creates%20two%20external,are%20registered%20in%20the%20advertized." rel="nofollow noreferrer">Reference doc</a></p>
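<p>As a rough sketch (the name, label, and port here are illustrative assumptions, not taken from your setup), such a Service could look like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-external
spec:
  type: LoadBalancer  # or NodePort
  selector:
    app: kafka
  ports:
    - name: broker
      port: 9094
      targetPort: 9094
</code></pre>
<p>The resulting external IP (or node address) is what outside services would put in their bootstrap server config, provided the broker advertises a matching listener.</p>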
<blockquote>
<p>I suppose if brokers are installed inside k8s, their addresses will not be static or unique from outside.</p>
</blockquote>
<p>You don't need to bind Kafka to any specific hostname for the interface; Kafka will listen on all interfaces, and you can expose it using a <strong>K8s Service</strong> if running on K8s.</p>
|
<p>I'm trying to setup TLS for a service that's available outside a Kubernetes cluster (AWS EKS). With cert-manager, I've successfully issued a certificate and configured ingress, but I'm still getting error <code>NET::ERR_CERT_AUTHORITY_INVALID</code>. Here's what I have:</p>
<ol>
<li><p><strong>namespace</strong> <code>tests</code> and <code>hello-kubernetes</code> in it (both <strong>deployment and service</strong> have name <code>hello-kubernetes-first</code>, service is ClusterIP with <code>port</code> 80 and <code>targetPort</code> 8080, deployment is based on <code>paulbouwer/hello-kubernetes:1.8</code>, see details in my <a href="https://stackoverflow.com/q/69888157/3995261">previous question</a>)</p>
</li>
<li><p><strong>DNS and ingress</strong> configured to show the service:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-kubernetes-ingress
namespace: tests
spec:
ingressClassName: nginx
rules:
- host: test3.projectname.org
http:
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: hello-kubernetes-first
port:
number: 80
</code></pre>
<p>Without configuring TLS, I can access test3.projectname.org via http and see the service (well, it tries to redirect me to https, I see <code>NET::ERR_CERT_AUTHORITY_INVALID</code>, I go to insecure anyway and see the hello-kubernetes page).</p>
<ul>
<li><p>note: I have nginx-ingress <strong>ingress controller</strong>; it was installed before me via the following chart:</p>
<pre><code>apiVersion: v2
name: nginx
description: A Helm chart for Kubernetes
type: application
version: 4.0.6
appVersion: "1.0.4"
dependencies:
- name: ingress-nginx
version: 4.0.6
repository: https://kubernetes.github.io/ingress-nginx
</code></pre>
<p>and the values overrides applied with the chart differ from the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml" rel="nofollow noreferrer">original ones</a> mostly in <code>extraArgs</code>: <code>default-ssl-certificate: "nginx-ingress/dragon-family-com"</code> is uncommented</p>
</li>
</ul>
</li>
<li><p><strong>cert-manager</strong> installed via <code>kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml</code></p>
</li>
<li><p><strong>ClusterIssuer</strong> created with the following config:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-backoffice
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
# use https://acme-v02.api.letsencrypt.org/directory after everything is fixed and works
privateKeySecretRef: # this secret is created in the namespace of cert-manager
name: letsencrypt-backoffice-private-key
# email: <will be used for urgent alerts about expiration etc>
solvers:
# TODO: add for each domain/second-level domain/*.projectname.org
- selector:
dnsZones:
- test.projectname.org
- test2.projectname.org
- test3.projectname.org
http01:
ingress:
class: nginx
</code></pre>
</li>
<li><p><strong>certificate</strong> in the <code>tests</code> namespace. Its config is:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: letsencrypt-certificate-31
namespace: tests
spec:
secretName: tls-secret-31
issuerRef:
kind: ClusterIssuer
name: letsencrypt-backoffice
commonName: test3.projectname.org
dnsNames:
- test3.projectname.org
</code></pre>
</li>
</ol>
<p>Now, certificate is ready (<code>kubectl get certificates -n tests</code> tells that) and to apply it, I add this to ingress's spec:</p>
<pre><code> tls:
- hosts:
- test3.projectname.org
secretName: tls-secret-31
</code></pre>
<p>However, when I try to open test3.projectname.org via https, it still shows me the <code>NET::ERR_CERT_AUTHORITY_INVALID</code> error. <strong>What am I doing wrong? How to debug this?</strong> I've checked up <code>openssl s_client -connect test3.projectname.org:443 -prexit</code><a href="https://stackoverflow.com/a/22814663/3995261">*</a> and it shows the following chain:</p>
<pre><code> 0 s:CN = test3.projectname.org
i:C = US, O = (STAGING) Let's Encrypt, CN = (STAGING) Artificial Apricot R3
1 s:C = US, O = (STAGING) Let's Encrypt, CN = (STAGING) Artificial Apricot R3
i:C = US, O = (STAGING) Internet Security Research Group, CN = (STAGING) Pretend Pear X1
2 s:C = US, O = (STAGING) Internet Security Research Group, CN = (STAGING) Pretend Pear X1
i:C = US, O = (STAGING) Internet Security Research Group, CN = (STAGING) Doctored Durian Root CA X3
</code></pre>
<p>and tells, among other output</p>
<blockquote>
<p>Verification error: unable to get local issuer certificate</p>
</blockquote>
<p>Unfortunately, I haven't found anything useful to try further, so any help is appreciated.</p>
| <p>Your ClusterIssuer refers to the Let's Encrypt staging issuer. Replace that server URL with the production endpoint, as pointed out in the comments: <a href="https://acme-v02.api.letsencrypt.org/directory" rel="nofollow noreferrer">https://acme-v02.api.letsencrypt.org/directory</a></p>
<p>Deleting the previously generated secrets or switching to new secrets should ensure your certificates are re-generated using the right issuer.</p>
<p>The staging issuer can be useful for testing the Let's Encrypt integration; it shouldn't be used otherwise.</p>
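<p>Concretely, only the <code>server</code> line in the ClusterIssuer from the question needs to change (the rest is shown as in the original):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-backoffice
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory  # production endpoint
    privateKeySecretRef:
      name: letsencrypt-backoffice-private-key
    solvers:
    - selector:
        dnsZones:
        - test.projectname.org
        - test2.projectname.org
        - test3.projectname.org
      http01:
        ingress:
          class: nginx
</code></pre>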
|
<p>At work we use Kubernetes hosted in GCP. I also have a side project hosted in my <em>personal</em> GCP account using Google App Engine (deploy using <code>gcloud app deploy</code>).</p>
<p>Often when I try to run a command such as <code>kubectl logs -f service-name</code>, I get an error like "Error from server (Forbidden): pods is forbidden: User "my_personal_email@gmail.com" cannot list resource "pods" in API group "" in the namespace "WORK_NAMESPACE": Required "container.pods.list" permission." and then I have to fight with kubectl for hours trying to get it to work.</p>
<p>Can somebody please break it down for a slow person like me, how <code>gcloud</code> and <code>kubectl</code> work together, and how I can easily switch accounts so I can use <code>gcloud</code> commands for my personal projects and <code>kubectl</code> commands for my work projects? I'm happy to nuke my whole config and start from scratch if that's what it takes. I've found various <code>kubectl</code> and <code>gcloud</code> documentation but it doesn't make much sense or talks in circles.</p>
<p>Edit: this is on Linux. </p>
| <p>Had the same problem and doing all of the:</p>
<pre><code>gcloud auth login
gcloud auth list
gcloud config set account
gcloud projects list
</code></pre>
<p>didn't help. I knew <code>gcloud</code> switched fine, as I was able to list other resources with it directly.
But it seems <code>kubectl</code> can't pick those changes up automatically, as the <code>kubectl</code>/<code>gcloud</code> integration relies on a pre-generated key, which has a 1h expiration (not sure if that's the default, but it's what it is on my machine right now).
So, on top of setting the right user/project/account with <code>gcloud</code>, you should re-generate the credentials:</p>
<p><code>gcloud container clusters get-credentials <my-cluster> --zone <clusters-zone></code></p>
|
<p>Authentication using Keycloak isn't working as expected; the setup uses Istio with Keycloak.
Istio components configured: Gateway, VirtualService, AuthorizationPolicy, RequestAuthentication</p>
<p>Using a valid token, I get: <strong>401 Jwt issuer is not configured</strong></p>
<p><a href="https://i.stack.imgur.com/zYMaM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zYMaM.png" alt="enter image description here" /></a></p>
<p>ISTIO CONFIGURATION FOR SECURITY:</p>
<pre><code>---
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: "jwt-example"
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
jwtRules:
- issuer: "http://localhost:30080/auth/realms/master"
jwksUri: "http://localhost:30080/auth/realms/master/protocol/openid-connect/certs"
forwardOriginalToken: true
outputPayloadToHeader: x-jwt-payload
EOF
---
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "frontend-ingress"
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
action: DENY
rules:
- from:
- source:
notRequestPrincipals: ["*"]
principalBinding: USE_ORIGIN
EOF
---
</code></pre>
<p>When there is no Authorization Bearer header:</p>
<p><a href="https://i.stack.imgur.com/sWgyM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sWgyM.png" alt="enter image description here" /></a></p>
<p>To double-check, I used Istio's example and it worked:</p>
<pre><code> kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: "jwt-example"
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
jwtRules:
- issuer: "testing@secure.istio.io"
jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json"
EOF
kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "frontend-ingress"
namespace: istio-system
spec:
selector:
matchLabels:
istio: ingressgateway
action: DENY
rules:
- from:
- source:
notRequestPrincipals: ["*"]
EOF
</code></pre>
<p>ISTIO GTW and VS :</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: keycloak-gateway
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: enterprise-vs
spec:
hosts:
- '*'
gateways:
- default/keycloak-gateway
http:
- match:
- uri:
prefix: '/enterprise/'
rewrite:
uri: /
fault:
delay:
fixedDelay: 1s
route:
- destination:
host: enterprise
port:
number: 8080
subset: enterprise-s1
weight: 90
- destination:
host: enterprise
port:
number: 8080
subset: enterprise-s2
weight: 10
</code></pre>
| <p>I encountered a similar issue.</p>
<p>The JWT token had the following value for the issuer:
<code>"iss": "http://localhost:8080/auth/realms/dev"</code></p>
<p>I matched the same value in my jwtRules, i.e. localhost.
However, I changed jwksUri to the cluster IP address of Keycloak.
This seems to have worked.</p>
<p>jwtRules:</p>
<pre><code>- issuer: 'http://localhost:8080/auth/realms/dev'
jwksUri: 'http://10.105.250.41:8080/auth/realms/dev/protocol/openid-connect/certs'
</code></pre>
|
<p>I have a Django project deployed in Kubernetes and I am trying to deploy Prometheus as a monitoring tool. I have successfully done all the steps needed to include <code>django_prometheus</code> in the project and locally I can go go <code>localhost:9090</code> and play around with querying the metrics.</p>
<p>I have also deployed Prometheus to my Kubernetes cluster and upon running a <code>kubectl port-forward ...</code> on the Prometheus pod I can see some metrics of my Kubernetes resources.</p>
<p>Where I am a bit confused is how to make the deployed Django app metrics available on the Prometheus dashboard just like the others.
I deployed my app in the <code>default</code> namespace and Prometheus in a dedicated <code>monitoring</code> namespace. I am wondering what I am missing here. Do I need to expose ports 8000 to 8005 on the service and deployment, according to the number of workers, or something like that?</p>
<p>My Django app runs with gunicorn using <code>supervisord</code> like so:</p>
<pre><code>[program:gunicorn]
command=gunicorn --reload --timeout 200000 --workers=5 --limit-request-line 0 --limit-request-fields 32768 --limit-request-field_size 0 --chdir /code/ my_app.wsgi
</code></pre>
<ul>
<li><code>my_app</code> service:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: my_app
namespace: default
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
selector:
app: my-app
sessionAffinity: None
type: ClusterIP
</code></pre>
<ul>
<li>Trimmed version of the <code>deployment.yaml</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: my-app
name: my-app-deployment
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: my-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: my-app
spec:
containers:
- image: ...
imagePullPolicy: IfNotPresent
name: my-app
ports:
- containerPort: 80
name: http
protocol: TCP
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: regcred
restartPolicy: Always
schedulerName: default-scheduler
terminationGracePeriodSeconds: 30
</code></pre>
<ul>
<li><code>prometheus configmap</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
prometheus.rules: |-
... some rules
prometheus.yml: |-
global:
scrape_interval: 5s
evaluation_interval: 5s
rule_files:
- /etc/prometheus/prometheus.rules
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
- job_name: my-app
metrics_path: /metrics
static_configs:
- targets:
- localhost:8000
- job_name: 'node-exporter'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_endpoints_name]
regex: 'node-exporter'
action: keep
kind: ConfigMap
metadata:
labels:
name: prometheus-config
name: prometheus-config
namespace: monitoring
</code></pre>
| <p>You do not have to expose the services if Prometheus is installed on the same cluster as your app. You can communicate between apps in different namespaces by using Kubernetes DNS resolution, following the rule:</p>
<pre><code>SERVICENAME.NAMESPACE.svc.cluster.local
</code></pre>
<p>So one way is to change your Prometheus job target to something like this:</p>
<pre class="lang-yaml prettyprint-override"><code> - job_name: speedtest-ookla
metrics_path: /metrics
static_configs:
- targets:
- 'my_app.default.svc.cluster.local:9000'
</code></pre>
<p>And this is the "manual" way. A better approach will be to use prometheus <code>kubernetes_sd_config</code>. It will autodiscover your services and try to scrape them.</p>
<p>Reference: <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config</a></p>
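<p>The DNS rule can be captured in a tiny helper (illustrative only; the service and port below are just examples):</p>

```python
def cluster_dns(service, namespace, port=None):
    """Build a cluster-local DNS name: SERVICENAME.NAMESPACE.svc.cluster.local[:PORT]."""
    host = "{}.{}.svc.cluster.local".format(service, namespace)
    return "{}:{}".format(host, port) if port is not None else host

# e.g. a Prometheus target for an app in the `default` namespace:
# cluster_dns("my-app", "default", 9000) -> "my-app.default.svc.cluster.local:9000"
```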
|
<p>I am trying to do the following, where MYVALUE in host needs to change to include the release name. I can't figure out how to do this, as you can't use template expressions like <code>{{ .Release.Name }}</code> directly in a values.yaml file.</p>
<p>I did do a <code>fullnameOverride</code> and put <code>fullnameOverride: myrelease-mysql</code> for the mysql pod and then jasper has <code>host: myrelease-mysql</code> that works but wanted to know if there was a clever way to put release name into a values.yaml file.</p>
<p>I assumed I would need to use a ConfigMap, as I can use <code>.Release.Name</code> there, and then embed that config key into values.yaml.</p>
<p><strong>Values.yaml</strong></p>
<pre><code>jasperreports:
mariadb:
enabled: false
externalDatabase:
host: MYVALUE // Also tried $MVALUE
user: sqluser
database: jasper
jasperreportsUsername: jasper
env:
- name: MYVALUE
valueFrom:
configMapKeyRef:
name: mysql-jasper
key: mysql_releasename
</code></pre>
<p><strong>ConfigMap</strong></p>
<pre><code>kind: ConfigMap
metadata:
name: mysql-jasper
data:
mysql_releasename: {{ .Release.Name }}-"mysql"
</code></pre>
| <p>It seems that helm does not support any template rendering capabilities in a <code>values.yaml</code> file - there are multiple topics on the helm GitHub:</p>
<ul>
<li><a href="https://github.com/helm/helm/issues/9754" rel="nofollow noreferrer">Canonical way of using dynamic object names within values.yaml</a></li>
<li><a href="https://github.com/helm/helm/pull/6876" rel="nofollow noreferrer">Adding values templates in order to customize values with go-template, for the chart and its dependencies</a></li>
<li><a href="https://github.com/helm/helm/issues/2492" rel="nofollow noreferrer">Proposal: Allow templating in values.yaml</a></li>
</ul>
<p>For now this feature is not implemented, so you need to find a workaround. The suggestion from David Maze seems to be a good direction, but if you want to follow your approach you can use the workaround below: <a href="https://helm.sh/docs/helm/helm_install/#helm-install" rel="nofollow noreferrer">using the <code>--set</code> flag in the <code>helm install</code> command</a>, or using the <code>sed</code> command and piping to the <code>helm install</code> command.</p>
<p>First solution with <code>--set</code> flag.</p>
<p>My <code>values.yaml</code> file is a little bit different from yours:</p>
<pre class="lang-yaml prettyprint-override"><code>mariadb:
enabled: false
externalDatabase:
user: sqluser
database: jasper
jasperreportsUsername: jasper
</code></pre>
<p>That's because when I was using your <code>values.yaml</code> I couldn't manage to apply these values to the <code>bitnami/jasperreports</code> chart; the <code>helm install</code> command was using the default values <a href="https://github.com/bitnami/charts/blob/master/bitnami/jasperreports/values.yaml" rel="nofollow noreferrer">from here</a>.</p>
<p>I'm setting a shell variable <code>RELEASE_NAME</code> which I will use both for setting chart name and <code>externalDatabase.host</code> value.</p>
<pre><code>RELEASE_NAME=my-test-release
helm install $RELEASE_NAME bitnami/jasperreports -f values.yaml --set externalDatabase.host=$RELEASE_NAME-mysql
</code></pre>
<p>The above <code>helm install</code> command will override default values both by setting values from the <code>values.yaml</code> file + setting <code>externalDatabase.host</code> value.</p>
<p>Before applying you can check if this solution works as expected by using <code>helm template</code> command:</p>
<pre><code>RELEASE_NAME=my-test-release
helm template $RELEASE_NAME bitnami/jasperreports -f values.yaml --set externalDatabase.host=$RELEASE_NAME-mysql
...
- name: MARIADB_HOST
value: "my-test-release-mariadb"
...
</code></pre>
<p>Another approach is to set a bash variable <code>RELEASE_NAME</code> which will be used in the <code>sed</code> command to output a modified <code>values.yaml</code> file (I'm not editing the <code>values.yaml</code> file itself). This output will be piped into a <code>helm install</code> command (where I also used the <code>RELEASE_NAME</code> variable).</p>
<p><code>values.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>mariadb:
enabled: false
externalDatabase:
host: MYHOST
user: sqluser
database: jasper
jasperreportsUsername: jasper
</code></pre>
<pre><code>RELEASE_NAME=my-test-release
sed "s/MYHOST/$RELEASE_NAME-mysql/g" values.yaml | helm install $RELEASE_NAME bitnami/jasperreports -f -
</code></pre>
<p>This approach sets the chart configuration the same way as the first approach.</p>
|
<p>I have a local website.
The website was created by docker-compose and is listening on localhost port 3000.</p>
<p>When I try:</p>
<pre><code>curl 127.0.0.1:3000
</code></pre>
<p>I can see the response.</p>
<p>What I did:</p>
<p>From my domain provider I edited the DNS to point to my server, then I changed nginx-ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: virtual-host-ingress
namespace: ingress-basic
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/use-regex: "true"
cert-manager.io/cluster-issuer: "letsencrypt-pp"
spec:
tls:
- hosts:
- nextformulainvesting.com
secretName: ***
rules:
- host: "nextformulainvesting.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: e-frontend-saleor
port:
number: 80
</code></pre>
<p>and I created the service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: e-frontend-saleor
spec:
ports:
- protocol: TCP
port: 80
targetPort: 3000
</code></pre>
<p>But with the service or without the service I receive the error <code>503 Service Temporarily Unavailable</code>.</p>
<p>How can I use nginx-ingress to point to my local TCP service?</p>
| <p>To clarify the issue I am posting a community wiki answer.</p>
<p>The answer that helped to resolve this issue is available at <a href="https://stackoverflow.com/questions/57764237/kubernetes-ingress-to-external-service">this link</a>. Based on that, the key is to manually create Service and Endpoints objects for the external server.</p>
<p>After that, one can create an Ingress object that points to the <code>external-ip</code> Service with the appropriate port.</p>
<p>Here are the examples of objects provided in the <a href="https://stackoverflow.com/questions/57764237/kubernetes-ingress-to-external-service">similar question</a>.</p>
<ul>
<li>Service and an Endpoint objects:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: external-ip
spec:
ports:
- name: app
port: 80
protocol: TCP
targetPort: 5678
clusterIP: None
type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
name: external-ip
subsets:
- addresses:
- ip: 10.0.40.1
ports:
- name: app
port: 5678
protocol: TCP
</code></pre>
<ul>
<li>Ingress object:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: external-service
spec:
rules:
- host: service.example.com
http:
paths:
- backend:
serviceName: external-ip
servicePort: 80
path: /
</code></pre>
<p>See also <a href="https://github.com/kubernetes/kubernetes/issues/8631#issuecomment-104404768" rel="nofollow noreferrer">this reference</a>.</p>
|
<p>I have a Django project deployed in Kubernetes and I am trying to deploy Prometheus as a monitoring tool. I have successfully done all the steps needed to include <code>django_prometheus</code> in the project and locally I can go go <code>localhost:9090</code> and play around with querying the metrics.</p>
<p>I have also deployed Prometheus to my Kubernetes cluster and upon running a <code>kubectl port-forward ...</code> on the Prometheus pod I can see some metrics of my Kubernetes resources.</p>
<p>Where I am a bit confused is how to make the deployed Django app metrics available on the Prometheus dashboard just like the others.
I deployed my app in the <code>default</code> namespace and Prometheus in a dedicated <code>monitoring</code> namespace. I am wondering what I am missing here. Do I need to expose ports 8000 to 8005 on the service and deployment, according to the number of workers, or something like that?</p>
<p>My Django app runs with gunicorn using <code>supervisord</code> like so:</p>
<pre><code>[program:gunicorn]
command=gunicorn --reload --timeout 200000 --workers=5 --limit-request-line 0 --limit-request-fields 32768 --limit-request-field_size 0 --chdir /code/ my_app.wsgi
</code></pre>
<ul>
<li><code>my_app</code> service:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: my_app
namespace: default
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
selector:
app: my-app
sessionAffinity: None
type: ClusterIP
</code></pre>
<ul>
<li>Trimmed version of the <code>deployment.yaml</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: my-app
name: my-app-deployment
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: my-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: my-app
spec:
containers:
- image: ...
imagePullPolicy: IfNotPresent
name: my-app
ports:
- containerPort: 80
name: http
protocol: TCP
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: regcred
restartPolicy: Always
schedulerName: default-scheduler
terminationGracePeriodSeconds: 30
</code></pre>
<ul>
<li><code>prometheus configmap</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
prometheus.rules: |-
... some rules
prometheus.yml: |-
global:
scrape_interval: 5s
evaluation_interval: 5s
rule_files:
- /etc/prometheus/prometheus.rules
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
- job_name: my-app
metrics_path: /metrics
static_configs:
- targets:
- localhost:8000
- job_name: 'node-exporter'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_endpoints_name]
regex: 'node-exporter'
action: keep
kind: ConfigMap
metadata:
labels:
name: prometheus-config
name: prometheus-config
namespace: monitoring
</code></pre>
| <p>There is no need to expose the application outside the cluster.</p>
<p>Leverage Kubernetes service discovery by adding jobs that scrape Services, Pods, or both:</p>
<pre><code>- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
regex: (.+)
- regex: __meta_kubernetes_service_label_(.+)
action: labelmap
- regex: 'app_kubernetes_io_(.+)'
action: labeldrop
- regex: 'helm_sh_(.+)'
action: labeldrop
</code></pre>
<pre><code>- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
regex: (.+)
- source_labels: [__meta_kubernetes_pod_node_name]
action: replace
target_label: host
regex: (.+)
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod
regex: (.+)
- regex: __meta_kubernetes_pod_label_(.+)
action: labelmap
- regex: 'app_kubernetes_io_(.+)'
action: labeldrop
- regex: 'helm_sh_(.+)'
action: labeldrop
</code></pre>
<p>Then, annotate the Service with:</p>
<pre><code>metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "80"
prometheus.io/path: "/metrics"
</code></pre>
<p>and the Deployment with:</p>
<pre><code>spec:
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "80"
prometheus.io/path: "/metrics"
</code></pre>
|
<p>I'm trying to connect an external webflow page to our kubernetes cluster ingress on GCP GKE. Specifically, I want everything at <a href="http://www.domain.com" rel="nofollow noreferrer">www.domain.com</a> to go to the external webflow service, and everything at <a href="http://www.domain.com/app" rel="nofollow noreferrer">www.domain.com/app</a> to go to our local service in the cluster.</p>
<p>I've seen this question <a href="https://stackoverflow.com/questions/65919773/ingress-nginx-proxy-to-outside-website-webflow-hosted">Ingress Nginx Proxy to Outside Website (Webflow hosted)</a> and followed it, but I couldn't get it working. I keep getting an error
<code>Translation failed: invalid ingress spec: service "default/external-service" is type "ExternalName", expected "NodePort" or "LoadBalancer"; service "default/external-service" is type "ExternalName", expected "NodePort" or "LoadBalancer"</code></p>
<p>Here's my setup</p>
<p>External Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: external-service
namespace: default
spec:
externalName: participant-homepage-9f8712.webflow.io
ports:
- port: 443
protocol: TCP
targetPort: 443
sessionAffinity: None
type: ExternalName
status:
loadBalancer: {}
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.gcp.kubernetes.io/pre-shared-cert: _,
ingress.kubernetes.io/backends: _,
ingress.kubernetes.io/forwarding-rule: _,
ingress.kubernetes.io/https-forwarding-rule: _,
ingress.kubernetes.io/https-target-proxy: _,
ingress.kubernetes.io/preserve-host: "false"
ingress.kubernetes.io/secure-backends: "true"
ingress.kubernetes.io/ssl-cert: _,
ingress.kubernetes.io/static-ip: _,
ingress.kubernetes.io/target-proxy: _,
ingress.kubernetes.io/url-map: _,
networking.gke.io/managed-certificates: _,
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/server-snippet: |
proxy_ssl_name participant-homepage-9f8712.webflow.io;
proxy_ssl_server_name on;
nginx.ingress.kubernetes.io/upstream-vhost: participant-homepage-9f8712.webflow.io
name: my-ingress
namespace: default
spec:
backend:
serviceName: external-service
servicePort: 443
rules:
- host: www.honeybeehub.xyz
http:
paths:
- backend:
serviceName: app-service
servicePort: 80
path: /app/*
pathType: ImplementationSpecific
- backend:
serviceName: external-service
servicePort: 443
path: /*
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- ip: _._._._
</code></pre>
<p>Any help would be greatly appreciated. Thank you!</p>
| <p>The reason the steps from the question you quoted, <a href="https://stackoverflow.com/questions/65919773/ingress-nginx-proxy-to-outside-website-webflow-hosted">Ingress Nginx Proxy to Outside Website (Webflow hosted)</a>, are not working is that that question focuses on EKS (Amazon Elastic Kubernetes Service). ExternalName Services are not supported by GCE Ingress, as you can see in the following <a href="https://stackoverflow.com/questions/53107348/error-creating-ingress-path-with-gce-externalname">question</a>.
What I can recommend is to post this as a feature request on <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">Google's Issue Tracker</a>.</p>
|
<p>My secret file looks like:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: secret
type: Opaque
stringData:
"user.name": "user"
"user.password": "password"
</code></pre>
<p>And I am trying to get a value with the next code:</p>
<pre><code>{{- $secret := lookup "v1" "Secret" .Release.Namespace "secret" -}}
{{- if $secret -}}
{{- print $secret.data.user.password}}
</code></pre>
<p>The problem is "user.password" key contains a dot and I haven't found how to fix it.
Thanks for any help!</p>
| <p>You can use the <code>index</code> function to access map values with dots in them like:</p>
<pre><code>{{- print (index $secret.data "user.password")}}
</code></pre>
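<p>Note that when the Secret is read back with <code>lookup</code>, the values under <code>.data</code> come back base64-encoded (even though the manifest used <code>stringData</code>), so you will likely also want to pipe the value through <code>b64dec</code>:</p>
<pre><code>{{- print (index $secret.data "user.password" | b64dec) }}
</code></pre>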
|
<p>I have the following architecture for the PostgreSQL cluster:</p>
<p><a href="https://i.stack.imgur.com/TlP1l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TlP1l.png" alt="enter image description here" /></a></p>
<p>Here, there are multiple clients that interact with PostgreSQL pods via pgpool. The issue is, when a pod (could be a <code>pgpool</code> or <code>PostgreSQL</code> pod) terminates (for multiple reasons), the client gets impacted and has to recreate the connection. For example, in this diagram, if the <code>postgresql-1</code> pod terminates then <code>client-0</code> will have to recreate the connection with the cluster.</p>
<p>Is there a way in kubernetes to handle it so that connections to the <code>pgpool k8s service</code> are load balanced/recreated to other pods, so that the clients do not see the switchover <strong>and are not impacted</strong>?</p>
<p>Please note these are TCP connections and not HTTP connections (which are stateless). Also, all the PostgreSQL pods are <a href="https://www.postgresql.org/docs/10/runtime-config-wal.html#SYNCHRONOUS-COMMIT-MATRIX" rel="nofollow noreferrer">always in sync with remote_apply</a>.</p>
| <p>Without substantial custom code to allow TCP connection transfers between hosts, you can't. When a process shuts down, all TCP streams it has open are closed; that's how normal Linux networking functions. If you poke around on your search engine of choice for things like "TCP connection migration" you'll find a lot of research efforts on this, but little actual code. More often you just terminate the TCP connection at some long-lived edge proxy, and if <em>that</em> has to restart you eat the reconnects.</p>
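<p>In practice that means the reconnect logic lives in the client. Below is a minimal sketch of client-side retry; the function name and retry parameters are hypothetical, not from the question's setup:</p>

```python
import socket
import time

def connect_with_retry(host, port, retries=5, backoff=0.5):
    """Open a TCP connection, retrying with exponential backoff.

    When a pgpool pod terminates, the existing stream is simply
    closed; a fresh connect() through the Service will be routed
    to a surviving pod.
    """
    for attempt in range(retries):
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)
```

<p>A real PostgreSQL client would wrap its driver's connect call the same way; many drivers and connection pools already expose such retry hooks.</p>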
|
<p>I have the following architecture for the PostgreSQL cluster:</p>
<p><a href="https://i.stack.imgur.com/TlP1l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TlP1l.png" alt="enter image description here" /></a></p>
<p>Here, there are multiple clients that interact with PostgreSQL pods via pgpool. The issue is, when a pod (could be a <code>pgpool</code> or <code>PostgreSQL</code> pod) terminates (for multiple reasons), the client gets impacted and has to recreate the connection. For example, in this diagram, if the <code>postgresql-1</code> pod terminates then <code>client-0</code> will have to recreate the connection with the cluster.</p>
<p>Is there a way in kubernetes to handle it so that connections to the <code>pgpool k8s service</code> are load balanced/recreated to other pods, so that the clients do not see the switchover <strong>and are not impacted</strong>?</p>
<p>Please note these are TCP connections and not HTTP connections (which are stateless). Also, all the PostgreSQL pods are <a href="https://www.postgresql.org/docs/10/runtime-config-wal.html#SYNCHRONOUS-COMMIT-MATRIX" rel="nofollow noreferrer">always in sync with remote_apply</a>.</p>
| <blockquote>
<p>Is there a way in kubernetes to handle it so that connections to pgpool k8s service are load balanced/ recreated to other pods...</p>
</blockquote>
<p>Connections to the pgpool k8s Service are load balanced by kube-proxy. The Endpoints (pgpool pods) that back the Service are automatically updated whenever there's a change (e.g. scaling) in the pod population.</p>
<blockquote>
<p>...so that the clients do not see the switch over and are not impacted?</p>
</blockquote>
<p>Should the pgpool pod the client connected to get terminated, the client's TCP state becomes invalid (e.g. a now-void remote IP). There's no way to keep such a connection alive; the client must re-connect to the pgpool <strong>service</strong>, where kube-proxy will route it to the next available pgpool pod. The actual connection to the backend database is managed by pgpool, including database failover. With pgpool as the proxy, you do not need to worry about database switching.</p>
|
<p>Hi i have installed minikube but when i run minikube start i get this error :</p>
<pre><code>😄 minikube v1.17.1 on Ubuntu 20.04
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🏃 Updating the running docker "minikube" container ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.0 ...
🤦 Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
[
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💣 Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
❌ Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.8.0-40-generic
DOCKER_VERSION: 20.10.0
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
💡 Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿 Related issue: https://github.com/kubernetes/minikube/issues/4172
</code></pre>
<p>I can't understand what the problem is here. It was working before; then I got a similar error that says:
🐳 Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...| ❌ Unable to load cached images: loading cached images: stat /home/feiz-nouri/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4: no such file or directory</p>
<p>I uninstalled it and then reinstalled it, but I still got the error.</p>
<p>Can someone help me fix this?</p>
| <p>You can use <code>minikube delete</code> to delete the old cluster and its cached state. After that, start minikube again with <code>minikube start</code>. If the kubelet still fails to come up, try the suggestion printed at the end of your log: <code>minikube start --extra-config=kubelet.cgroup-driver=systemd</code>.</p>
|
<p>Running Spark on Kubernetes, with each of 3 Spark workers given 8 cores and 8G ram, results in</p>
<pre><code>Executor app-xxx-xx/0 finished with state KILLED exitStatus 143
</code></pre>
<p>seemingly no matter how simple the computation or what flags I pass to <code>spark-submit</code>.</p>
<p>For instance,</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -n redacted spark-client --rm -it --restart='Never' \
--image docker.io/bitnami/spark:3.2.0-debian-10-r2 \
-- run-example \
--name my-pi-calc-example-2 \
--master spark://spark-master-svc:7077 \
--deploy-mode cluster \
--driver-memory 4g \
--executor-memory 1g \
--driver-cores 4 \
--executor-cores 4 \
--verbose \
SparkPi 3
</code></pre>
<p>gives me the following logs on <code>spark-worker-0</code>:</p>
<pre><code>21/11/15 22:07:42 INFO DriverRunner: Launch Command: "/opt/bitnami/java/bin/java" "-cp" "/opt/bitnami/spark/conf/:/opt/bitnami/spark/jars/*" "-Xmx4096M" "-Dspark.master=spark://spark-master-svc:7077" "-Dspark.driver.cores=4" "-Dspark.driver.supervise=false" "-Dspark.submit.deployMode=cluster" "-Dspark.driver.memory=4g" "-Dspark.executor.memory=4g" "-Dspark.submit.pyFiles=" "-Dspark.jars=file:///opt/bitnami/spark/examples/jars/scopt_2.12-3.7.1.jar,file:///opt/bitnami/spark/examples/jars/spark-examples_2.12-3.2.0.jar,file:/opt/bitnami/spark/examples/jars/spark-examples_2.12-3.2.0.jar" "-Dspark.rpc.askTimeout=10s" "-Dspark.app.name=my-pi-calc-example-2" "-Dspark.executor.cores=4" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@xx.xx.19.190:34637" "/opt/bitnami/spark/work/driver-20211115220742-0006/spark-examples_2.12-3.2.0.jar" "org.apache.spark.examples.SparkPi" "3" "--verbose"
21/11/15 22:07:44 INFO Worker: Asked to launch executor app-20211115220744-0006/4 for Spark Pi
21/11/15 22:07:44 INFO SecurityManager: Changing view acls to: spark
21/11/15 22:07:44 INFO SecurityManager: Changing modify acls to: spark
21/11/15 22:07:44 INFO SecurityManager: Changing view acls groups to:
21/11/15 22:07:44 INFO SecurityManager: Changing modify acls groups to:
21/11/15 22:07:44 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); groups with view permissions: Set(); users with modify permissions: Set(spark); groups with modify permissions: Set()
21/11/15 22:07:44 INFO ExecutorRunner: Launch command: "/opt/bitnami/java/bin/java" "-cp" "/opt/bitnami/spark/conf/:/opt/bitnami/spark/jars/*" "-Xmx4096M" "-Dspark.driver.port=42013" "-Dspark.rpc.askTimeout=10s" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@spark-worker-0.spark-headless.redacted.svc.cluster.local:42013" "--executor-id" "4" "--hostname" "xx.xx.19.190" "--cores" "4" "--app-id" "app-20211115220744-0006" "--worker-url" "spark://Worker@xx.xx.19.190:34637"
21/11/15 22:07:48 INFO Worker: Asked to kill executor app-20211115220744-0006/4
21/11/15 22:07:48 INFO ExecutorRunner: Runner thread for executor app-20211115220744-0006/4 interrupted
21/11/15 22:07:48 INFO ExecutorRunner: Killing process!
21/11/15 22:07:48 INFO Worker: Executor app-20211115220744-0006/4 finished with state KILLED exitStatus 143
21/11/15 22:07:48 INFO ExternalShuffleBlockResolver: Clean up non-shuffle and non-RDD files associated with the finished executor 4
21/11/15 22:07:48 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20211115220744-0006, execId=4)
21/11/15 22:07:48 INFO ExternalShuffleBlockResolver: Application app-20211115220744-0006 removed, cleanupLocalDirs = true
21/11/15 22:07:48 INFO Worker: Cleaning up local directories for application app-20211115220744-0006
21/11/15 22:07:48 INFO Worker: Driver driver-20211115220742-0006 exited successfully
</code></pre>
<p>I can drop, change, or modify the <code>run-example</code> or <code>spark-submit</code> flags. It seemingly has no effect, even for something as simple as <code>SparkPi 3</code>; the executors are killed and exit with code 143, with very little information about why they were actually killed.</p>
<p>Resource constraints should not be an issue whatsoever here. This is a Kubernetes cluster of 3 AWS m5.4xlarge worker nodes, 16 vCPU and 64 GiB RAM each, with very little else actually deployed on them. I have not set Kubernetes <code>spec.resources</code> on <code>limits</code> or <code>requests</code>. The Spark cluster is deployed as follows:</p>
<pre class="lang-sh prettyprint-override"><code>argocd app create spark \
--repo https://charts.bitnami.com/bitnami \
--helm-chart spark \
--dest-server https://kubernetes.default.svc \
--insecure \
--helm-set 'worker.replicaCount=3' \
--dest-namespace redacted \
--revision '5.7.9' \
--helm-set worker.coreLimit=8 \
--helm-set worker.memoryLimit=8G \
--helm-set worker.daemonMemoryLimit=4G \
--helm-set master.daemonMemoryLimit=4G
argocd app sync spark
</code></pre>
<p>This uses the <a href="https://github.com/bitnami/charts/tree/master/bitnami/spark" rel="nofollow noreferrer">Spark Bitnami Helm chart</a> and ArgoCD/Helm to deploy.</p>
<p>The cluster deploys just fine; I can see, for example, <code>Starting Spark worker xxx.xx.xx.xx:46105 with 8 cores, 8.0 GiB RAM</code> and that all 3 workers have joined.</p>
<p>What am I missing here? How can I better debug this and figure out what the resource constraint is?</p>
<hr />
<p>Funny enough, I can even run SparkPi locally. If I e.g. <code>kubectl exec -it spark-worker-0 -- bash</code>:</p>
<pre class="lang-sh prettyprint-override"><code>$ ./bin/run-example SparkPi 3
...
21/11/15 22:22:09 INFO SparkContext: Running Spark version 3.2.0
...
21/11/15 22:22:11 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 0.634538 s
Pi is roughly 3.1437838126127087
</code></pre>
<p>Then I can add two arguments to run in cluster mode, and boom, executors get killed:</p>
<pre><code>$ ./bin/run-example \
--master spark://spark-master-svc:7077 \
--deploy-mode cluster SparkPi
# Executor app-20211115222530-0008/2 finished with state KILLED exitStatus 143
</code></pre>
| <p>I learned a couple of things here. The first is that 143 KILLED does not actually indicate failure, but rather that the executors received a signal to shut down once the job finished. So it looks draconian when found in logs but is not.</p>
<p>What was confusing me was that I wasn't seeing any "Pi is roughly 3.1475357376786883" text on stdout/stderr. This led me to believe the computation never got that far, which was incorrect.</p>
<p>The issue here is that I was using <code>--deploy-mode cluster</code> when <code>--deploy-mode client</code> actually made a lot more sense in this situation. That is because I was running an ad-hoc container through <code>kubectl run</code> which was not part of the existing deployment. This fits the definition of <a href="https://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit" rel="nofollow noreferrer">client mode</a> better, since the submission does not come from an existing Spark worker. When running in <code>--deploy-mode=cluster</code>, you'll never actually see stdout since input/output of the application are not attached to the console.</p>
<p>Once I changed <code>--deploy-mode</code> to <code>client</code>, I also needed to add <code>--conf spark.driver.host</code> as documented <a href="https://github.com/bitnami/charts/issues/2883" rel="nofollow noreferrer">here</a> and <a href="https://github.com/bitnami/charts/issues/2946" rel="nofollow noreferrer">here</a>, for the pods to be able to resolve back to the invoking host.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -n redacted spark-client --rm -it --restart='Never' \
--image docker.io/bitnami/spark:3.2.0-debian-10-r2 \
-- /bin/bash -c '
run-example \
--name my-pi-calc-example \
--master spark://spark-master-svc:7077 \
--deploy-mode client \
--conf spark.driver.host=$(hostname -i) \
SparkPi 10'
</code></pre>
<p>Output:</p>
<pre><code>21/11/15 23:22:16 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
21/11/15 23:22:16 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 2.961188 s
Pi is roughly 3.140959140959141
21/11/15 23:22:16 INFO SparkUI: Stopped Spark web UI at http://xx.xx.xx.xx:4040
21/11/15 23:22:16 INFO StandaloneSchedulerBackend: Shutting down all executors
21/11/15 23:22:16 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
</code></pre>
<p>Interestingly enough this is <code>app-20211115232213-0024</code>, which in the Spark Master UI still shows each worker as KILLED 143 - reinforcing the conclusion that that is a 'normal' shutdown signal.</p>
|
<p>This is sort of strange behavior in our K8 cluster.</p>
<p>When we try to deploy a new version of our applications we get:</p>
<pre><code>Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "<container-id>" network for pod "application-6647b7cbdb-4tp2v": networkPlugin cni failed to set up pod "application-6647b7cbdb-4tp2v_default" network: Get "https://[10.233.0.1]:443/api/v1/namespaces/default": dial tcp 10.233.0.1:443: connect: connection refused
</code></pre>
<p>I used <code>kubectl get cs</code> and found <code>controller</code> and <code>scheduler</code> in <code>Unhealthy</code> state.</p>
<p>As described <a href="https://github.com/kubernetes/kubernetes/issues/93472" rel="nofollow noreferrer">here</a>, I updated <code>/etc/kubernetes/manifests/kube-scheduler.yaml</code> and
<code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code> by commenting out <code>--port=0</code>.</p>
<p>When I checked <code>systemctl status kubelet</code> it was working.</p>
<pre><code>Active: active (running) since Mon 2020-10-26 13:18:46 +0530; 1 years 0 months ago
</code></pre>
<p>I restarted the kubelet service, and <code>controller</code> and <code>scheduler</code> were shown as healthy.</p>
<p>But <code>systemctl status kubelet</code> now shows the following (soon after the restart it briefly showed a running state):</p>
<pre><code>Active: activating (auto-restart) (Result: exit-code) since Thu 2021-11-11 10:50:49 +0530; 3s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 21234 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET
</code></pre>
<p>Tried adding <code>Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"</code> to <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> as suggested elsewhere, but it is still not working properly.</p>
<p>I also tried removing the <code>--port=0</code> comment in the above-mentioned manifests (restoring the flag) and restarting; still the same result.</p>
<p><strong>Edit:</strong> This issue was due to <code>kubelet</code> certificate expired and fixed following <a href="https://github.com/kubernetes/kubeadm/issues/2054#issuecomment-606916146" rel="nofollow noreferrer">these</a> steps. If someone faces this issue, make sure <code>/var/lib/kubelet/pki/kubelet-client-current.pem</code> certificate and key values are base64 encoded when placing on <code>/etc/kubernetes/kubelet.conf</code></p>
<p>Many others suggested running <code>kubeadm init</code> again, but this cluster was created using <code>kubespray</code>, with no manually added nodes.</p>
<p>We have bare-metal Kubernetes (v1.18.8) running on Ubuntu 18.04.</p>
<p>We would like to know any debugging and fixing suggestions.</p>
<p>PS:<br>
When we try to <code>telnet 10.233.0.1 443</code> from any node, the first attempt fails and the second succeeds.</p>
<p>Edit: Found this in <code>kubelet</code> service logs</p>
<pre><code>Nov 10 17:35:05 node1 kubelet[1951]: W1110 17:35:05.380982 1951 docker_sandbox.go:402] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "app-7b54557dd4-bzjd9_default": unexpected command output nsenter: cannot open /proc/12311/ns/net: No such file or directory
</code></pre>
| <p>Posting comment as the community wiki answer for better visibility</p>
<hr />
<p>This issue was due to <code>kubelet</code> certificate expired and fixed following <a href="https://github.com/kubernetes/kubeadm/issues/2054#issuecomment-606916146" rel="nofollow noreferrer">these steps</a>. If someone faces this issue, make sure <code>/var/lib/kubelet/pki/kubelet-client-current.pem</code> certificate and key values are <code>base64</code> encoded when placing on <code>/etc/kubernetes/kubelet.conf</code></p>
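<p>For context on that last point: <code>/etc/kubernetes/kubelet.conf</code> is an ordinary kubeconfig file, so the embedded credentials have to appear as <code>client-certificate-data</code>/<code>client-key-data</code> fields holding the base64 of the PEM blocks (e.g. produced with <code>base64 -w0</code>). A hypothetical, truncated excerpt of what the <code>users</code> section should look like after the fix:</p>

```yaml
# Hypothetical excerpt of /etc/kubernetes/kubelet.conf (values truncated).
# The *-data fields are base64-encoded PEM blocks taken from
# /var/lib/kubelet/pki/kubelet-client-current.pem
users:
- name: default-auth
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVkt...
```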
|
<p>I'm trying to setup and expose a service (ArgoCD) to outside a cluster. Note: I'm fairly new to Kubernetes, so quite probably I have some misconceptions. If you can see one, please help me get rid of it. If more information is needed to diagnose what's happening, please let me know, I'll add it.</p>
<p>I have nginx-ingress ingress controller installed in the cluster in the namespace <code>nginx</code>. I have installed ArgoCD via helm into <code>argocd</code> namespace*. <code>kubectl get service -n argocd</code> shows (omitting AGE column):</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
projectname-argocd-application-controller ClusterIP 10.100.249.133 <none> 8082/TCP
projectname-argocd-dex-server ClusterIP 10.100.80.187 <none> 5556/TCP,5557/TCP
projectname-argocd-redis ClusterIP 10.100.230.170 <none> 6379/TCP
projectname-argocd-repo-server ClusterIP 10.100.221.87 <none> 8081/TCP
projectname-argocd-server ClusterIP 10.100.22.26 <none> 80/TCP,443/TCP
</code></pre>
<p>As far as I understand, service <code>projectname-argocd-server</code> is the one I should expose to get ArgoCD WebUI. Trying to do so, I've created an ingress (based on <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#resource-backend" rel="nofollow noreferrer">docs</a>):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-routing
spec:
rules:
- host: test2.projectname.org
http:
paths:
- path: /
pathType: Prefix # presumably may comment this out
backend:
service:
name: projectname-argocd-server
port:
number: 80
# this was added later while trying to figure the problem out
defaultBackend:
service:
name: projectname-argocd-server
port:
number: 80
ingressClassName: nginx
</code></pre>
<p>and applied it via <code>kubectl apply -f routing.yaml -n argocd</code>. Now I can see the ingress is created along with the one created by deployment of ArgoCD, and the output of <code>kubectl get ing -A</code> is (omitting AGE, and PORTS that are 80; <code><url></code> is url of LoadBalancer shown in AWS console):</p>
<pre><code>NAMESPACE NAME CLASS HOSTS ADDRESS
argocd projectname-argocd-server nginx test.projectname.org <url>
argocd ingress-routing nginx test2.projectname.org <url>
</code></pre>
<p>By the way, <code>kubectl get svc -n nginx</code> shows that <code>nginx-ingress-ingress-nginx-controller</code> is LoadBalancer with url <code><url></code> (<code>80:30538/TCP</code>).</p>
<p><code>kubectl describe ingress -n argocd</code> shows that ingress <code>ingress-routing</code> is ok, with correct address, default backend and rules; for ingress <code>projectname-argocd-server</code> it shows ok address and rules (path <code>/</code>), although <code>Default backend</code> is shown as <code>default-http-backend:80 (<error: endpoints "default-http-backend" not found>)</code>.</p>
<p>Now let me also show the DNS settings to complete the picture:</p>
<ul>
<li>I've created a hosted zone for projectname.org (in Route 53), put its DNS servers to NS-entries of domain register</li>
<li>I've created a CNAME entry in the hosted zone, pointing <code>test.projectname.org</code> to <code><url></code></li>
<li>I've created an A entry for <code>test2.projectname.org</code>, selected the load balancer from the list and so it points to <code>dualstack.<url></code></li>
</ul>
<p><strong>I expected</strong> to see ArgoCD interface at least at one of <code>http://test.projectname.org/</code> and <code>http://test2.projectname.org/</code>. What <strong>actually happens</strong> is:</p>
<ol>
<li><p>when I open <code>http://test.projectname.org/</code>, it redirects me to https url and shows <code>NET::ERR_CERT_AUTHORITY_INVALID</code>. If I insist on visiting, browser shows <code>ERR_TOO_MANY_REDIRECTS</code>.</p>
</li>
<li><p>Before I added ingress class and moved <code>ingress-routing</code> from <code>nginx</code> namespace to <code>argocd</code> namespace, <code>http://test2.projectname.org/</code> gave me 404; now it also redirects to https and then gives <code>ERR_TOO_MANY_REDIRECTS</code></p>
</li>
<li><p>I've also checked the <code>/healthz</code> addresses but they give the same result as the <code>/</code> ones. (in contrast, <code>http://<url>/healthz</code> gives an empty page)</p>
</li>
</ol>
<p><strong>My question is: what else am I missing, why I don't get the UI?</strong></p>
<p>Is it impossible to expose a service before setting some SSL certificate? Can 2 ingresses conflict when trying to expose the same thing on different subdomains (test.projectname.org and test2.projectname.org)? <strong>Can I see at least one service (ArgoCD) without using projectname.org to check if it is configured and deployed properly?</strong> (to separate if it's an ingress/routing/dns issue or a configuration issue)</p>
<p>(*) Here's the chart that I used to install ArgoCD:</p>
<pre><code>apiVersion: v2
name: argo-cd
appVersion: v2.1.5
description: A declarative, GitOps continuous delivery tool for Kubernetes
version: 3.26.3
dependencies:
- name: argo-cd
version: 3.26.3
repository: https://argoproj.github.io/argo-helm
</code></pre>
<p>and values-overwrite.yaml that I've used is just <a href="https://github.com/argoproj/argo-helm/blob/master/charts/argo-cd/values.yaml" rel="nofollow noreferrer">default values</a> wrapped into <code>argo-cd:</code> thing since these should be applied to the dependency. Notably, those have <code>enabled: false</code> in <code>ingress:</code>, so the fact that ingress <code>projectname-argocd-server</code> is created is somewhat unexpected.</p>
<p>PS the <code>nginx</code> IngressClass was generated, not created manually, so it may be useful to see it as well (I've substituted ids and timestamps with "..."), as shown by <code>kubectl get IngressClass nginx -o yaml</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
annotations:
meta.helm.sh/release-name: nginx-ingress
meta.helm.sh/release-namespace: nginx
creationTimestamp: ...
generation: 1
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: nginx-ingress
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.3
helm.sh/chart: ingress-nginx-4.0.5
name: nginx
resourceVersion: "5750"
uid: ...
spec:
controller: k8s.io/ingress-nginx
</code></pre>
| <p>Right, the issue was somewhat complicated, but I've figured it out. Basically, it consists of 2 problems:</p>
<ol>
<li>https configuration and</li>
<li>ingress configuration</li>
</ol>
<p>The main problem with the HTTPS configuration was solved in a separate <a href="https://stackoverflow.com/q/69964611/3995261">question</a> and comes down to switching the ACME server from staging to production. I've provided more details in my answer there.</p>
<p>Now, the ingress configuration is somewhat tricky since ArgoCD has some redirections, <s>inner TLS requirements</s>, and also serves more than one protocol at :443. Fortunately, I've found <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#kubernetesingress-nginx" rel="nofollow noreferrer">this tutorial</a> which shows ssl-passthrough settings and more ingress annotations, including <code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"</code>, which fixes the <code>ERR_TOO_MANY_REDIRECTS</code> error. Here's my ingress config which works fine with https set up (note also changes in port and tls secret):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-argocd-routing
namespace: argocd
annotations:
cert-manager.io/cluster-issuer: <cluster issuer name>
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
ingressClassName: nginx
tls:
- hosts:
- test2.projectname.org # switched to argocd. later
secretName: argocd-secret # do not change, this is provided by Argo CD
rules:
- host: test2.projectname.org # switched to argocd. later
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: projectname-argocd-server
port:
number: 443
</code></pre>
|
<p>I wrote a service to retrieve some information from the Kubernetes cluster. Below is a snippet from the <code>kubernetes_service.py</code> file that works perfectly when I run it on my local machine.</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes.client.rest import ApiException
from kubernetes import client, config
from exceptions.logs_not_found_exceptions import LogsNotFound
import logging
log = logging.getLogger("services/kubernetes_service.py")
class KubernetesService:
def __init__(self):
super().__init__()
config.load_kube_config()
self.api_instance = client.CoreV1Api()
def get_pods(self, body):
try:
api_response = self.api_instance.list_namespaced_pod(namespace=body['namespace'])
dict_response = api_response.to_dict()
pods = []
for item in dict_response['items']:
pods.append(item['metadata']['name'])
log.info(f"Retrieved the pods: {pods}")
return pods
except ApiException as e:
raise ApiException(e)
def get_logs(self, body):
try:
api_response = self.api_instance.read_namespaced_pod_log(name=body['pod_name'], namespace=body['namespace'])
tail_logs = api_response[len(api_response)-16000:]
log.info(f"Retrieved the logs: {tail_logs}")
return tail_logs
except ApiException:
raise LogsNotFound(body['namespace'], body['pod_name'])
</code></pre>
<p>When creating the Docker image, the Dockerfile also installs kubectl. Below is my Dockerfile.</p>
<pre><code>FROM python:3.8-alpine
RUN mkdir /app
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt && rm requirements.txt
RUN apk add curl openssl bash --no-cache
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
COPY . .
EXPOSE 8087
ENTRYPOINT [ "python", "bot.py"]
</code></pre>
<p>To grant the container permissions to run the command <code>kubectl get pods</code> I added the role in the deployment.yml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: pyhelper
spec:
selector:
app: pyhelper
ports:
- protocol: "TCP"
port: 8087
targetPort: 8087
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pyhelper
spec:
selector:
matchLabels:
app: pyhelper
replicas: 1
template:
metadata:
labels:
app: pyhelper
spec:
serviceAccountName: k8s-101-role
containers:
- name: pyhelper
image: **********
imagePullPolicy: Always
ports:
- containerPort: 8087
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: k8s-101-role
subjects:
- kind: ServiceAccount
name: k8s-101-role
namespace: ind-iv
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8s-101-role
</code></pre>
<p>At the start up of the container it returns the error <code>kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found</code> at the line <code>config.load_kube_config()</code> in the <code>kubernetes_service.py</code> file. I checked the config file by running the command <code>kubectl config view</code> and the file is indeed empty. What am I doing wrong here?
Empty config file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
</code></pre>
<p>Also tried to run the command <code>kubectl get pods</code> in the shell of the container and it successfully returned the pods.</p>
| <p>I believe you'll want <a href="https://github.com/kubernetes-client/python-base/blob/3aa8b4c94282707a20482f71e86624f3b39a2cc6/config/__init__.py#L24" rel="nofollow noreferrer"><code>kubernetes.config.load_config</code></a> which differs from the <code>load_kube_config</code> you're currently using in that the package-level one looks for any <code>$HOME/.kube/config</code> as you expected, but <em>then</em> falls back to the in-cluster config as the <code>ServiceAccount</code> usage expects</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes.config import load_config
class KubernetesService:
def __init__(self):
super().__init__()
load_config()
</code></pre>
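<p>For illustration (the helper below is hypothetical, not part of the client library), the fallback decision that <code>load_config()</code> makes can be sketched roughly like this: prefer a kubeconfig file on disk, otherwise fall back to the in-cluster <code>ServiceAccount</code> credentials mounted into every pod:</p>

```python
import os

def choose_config_strategy(
    kubeconfig_path: str = os.path.expanduser("~/.kube/config"),
    sa_token: str = "/var/run/secrets/kubernetes.io/serviceaccount/token",
) -> str:
    """Rough sketch of the fallback done by kubernetes.config.load_config():
    use a kubeconfig file if one exists, else the in-cluster ServiceAccount."""
    if os.path.isfile(kubeconfig_path):
        return "kubeconfig"
    if os.path.isfile(sa_token):
        return "in-cluster"
    raise RuntimeError("no usable Kubernetes configuration found")
```

<p>Inside your pod the first check fails (there is no <code>~/.kube/config</code>), so the in-cluster path is taken, which is exactly why <code>load_config()</code> works where <code>load_kube_config()</code> raised <code>ConfigException</code>.</p>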
|
<p>I have a configMap file:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
labels:
owner: testdb
name: testdb-configmap
data:
host: postgres
port: "5432"
</code></pre>
<p>and a secret file:</p>
<pre><code>apiVersion: v1
kind: Secret
type: Opaque
metadata:
labels:
owner: testdb
name: testdb-secret
namespace: test
data:
user: dGVzdA==
pwd: dGVzdA==
</code></pre>
<p>and I want to build an environment variable <code>CONNECTION_STRING</code> as below:</p>
<pre><code>env:
- name: CONNECTION_STRING
value: "Host=<host-from-configmap>;Username=<user-from-secret>;Password=<password-from-secret>;Port=<port-from-configmap>;Pooling=False;"
</code></pre>
<p>I want to know if this is possible and if yes, then how? I have also looked at using <code>.tpl</code> (named templates) but couldn't figure out a way.</p>
<hr />
<p><strong>NOTE</strong></p>
<p>Since I don't have access to the image which requires <code>CONNECTION_STRING</code> I have to build it this way. These configmap and secret files are also going to remain like this.</p>
<hr />
| <p>Kubernetes can <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config" rel="nofollow noreferrer">set environment variables based on other environment variables</a>. This is a core Kubernetes Pod capability, and doesn't depend on anything from Helm.</p>
<p>Your value uses four components, two from the ConfigMap and two from the Secret. You need to declare each of these as separate environment variables, and then declare a main environment variable that concatenates them together.</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: TESTDB_HOST
valueFrom:
configMapKeyRef:
name: testdb-configmap # {{ include "chart.name" . }}
key: host
- name: TESTDB_PORT
valueFrom:
configMapKeyRef:
name: testdb-configmap
key: port
- name: TESTDB_USER
valueFrom:
secretKeyRef:
name: testdb-secret
key: user
- name: TESTDB_PASSWORD
valueFrom:
secretKeyRef:
name: testdb-secret
key: password
- name: CONNECTION_STRING
value: Host=$(TESTDB_HOST);Username=$(TESTDB_USER);Password=$(TESTDB_PASSWORD);PORT=$(TESTDB_PORT);Pooling=False;
</code></pre>
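<p>To see what the kubelet produces: <code>$(VAR)</code> references in dependent env values are substituted from the previously declared variables, and unknown references are left as-is. A rough, hypothetical sketch of that expansion (not the actual kubelet code):</p>

```python
import re

def expand_dependent_env(value: str, env: dict) -> str:
    """Rough sketch of Kubernetes dependent env var expansion: replace each
    $(NAME) with env[NAME]; unknown references are left untouched."""
    return re.sub(
        r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
        lambda m: env.get(m.group(1), m.group(0)),
        value,
    )

env = {"TESTDB_HOST": "postgres", "TESTDB_USER": "test",
       "TESTDB_PASSWORD": "test", "TESTDB_PORT": "5432"}
conn = expand_dependent_env(
    "Host=$(TESTDB_HOST);Username=$(TESTDB_USER);"
    "Password=$(TESTDB_PASSWORD);PORT=$(TESTDB_PORT);Pooling=False;", env)
# conn == "Host=postgres;Username=test;Password=test;PORT=5432;Pooling=False;"
```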
|
<p>I'd like to know if <code>kubectl</code> offers an easy way to list all the secrets that a certain pod/deployment/statefulset is using, or if there is some way to cleanly retrieve this info. When doing a <code>kubectl describe</code> for a pod, I see I can get a list of mounted volumes which include the ones that come from secrets that I could extract using <code>jq</code> and the like, but this way feels a bit clumsy. I have been searching a bit to no avail. Do you know if there is anything like that around? Perhaps using the API directly?</p>
| <p>To <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">List all Secrets</a> currently in use by a pod use:</p>
<pre><code>kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq
</code></pre>
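<p>Note that env vars are only one of the ways a pod can consume a Secret: <code>envFrom</code> and volume mounts count too. As a self-contained sketch (the helper below is hypothetical), the same walk can be done client-side over the JSON from <code>kubectl get pods -o json</code>, covering all three reference kinds:</p>

```python
def secrets_in_pod(pod: dict) -> set:
    """Collect Secret names referenced by a pod spec: env secretKeyRef,
    envFrom secretRef, and secret volumes."""
    spec = pod["spec"]
    names = set()
    for c in spec.get("containers", []) + spec.get("initContainers", []):
        for e in c.get("env", []) or []:
            ref = (e.get("valueFrom") or {}).get("secretKeyRef")
            if ref:
                names.add(ref["name"])
        for ef in c.get("envFrom", []) or []:
            if "secretRef" in ef:
                names.add(ef["secretRef"]["name"])
    for v in spec.get("volumes", []) or []:
        if "secret" in v:
            names.add(v["secret"]["secretName"])
    return names
```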
<p>On the other hand, if you want to access secrets stored in the API:</p>
<blockquote>
<p>Kubernetes Secrets are, by default, stored unencrypted in the API
server's underlying data store (etcd). Anyone with API access can
retrieve or modify a Secret, and so can anyone with access to etcd.
Additionally, anyone who is authorized to create a Pod in a namespace
can use that access to read any Secret in that namespace; this includes
indirect access such as the ability to create a Deployment. In order to
safely use Secrets, take at least the following steps:</p>
<ul>
<li>Enable Encryption at Rest for Secrets.</li>
<li>Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means).</li>
<li>Where appropriate, also use mechanisms such as RBAC to limit which principals are allowed to create new Secrets or replace existing ones.</li>
</ul>
</blockquote>
<p>If you want more information about secrets in kubernetes, <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">follow this link.</a></p>
|
| <p>Is there any way we can run a pod based on an alert fired from Prometheus? We have a scenario where we need to execute a pod when a disk-pressure threshold is crossed.
I am able to create the alert, but I need to execute a pod. How can I achieve that?</p>
<pre><code>groups:
- name: node_memory_MemAvailable_percent
rules:
- alert: node_memory_MemAvailable_percent_alert
annotations:
description: Memory on node {{ $labels.instance }} currently at {{ $value }}%
is under pressure
summary: Memory usage is under pressure, system may become unstable.
expr: |
100 - ((node_memory_MemAvailable_bytes{job="node-exporter"} * 100) / node_memory_MemTotal_bytes{job="node-exporter"}) > 80
for: 2m
labels:
severity: warning
</code></pre>
| <p>I think the Alertmanager can help you, using the <code>webhook</code> receiver (<a href="https://prometheus.io/docs/alerting/latest/configuration/#webhook_config" rel="nofollow noreferrer">documentation</a>).</p>
<p>In this way, when the alert is triggered, Prometheus sends it to the Alertmanager, then the Alertmanager does a POST to a custom webhook.</p>
<p>Of course, you need to implement a service that handles the alert and runs your action.</p>
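<p>To make that service concrete: Alertmanager POSTs a JSON payload describing the firing alerts, and the handler's job is to turn that into a Job (or Pod) and submit it via the Kubernetes API (e.g. <code>BatchV1Api.create_namespaced_job</code> in the Python client). A minimal, hypothetical sketch of the payload-to-manifest step; the image and command are placeholders for whatever you actually need to run:</p>

```python
def job_from_alert(payload: dict) -> dict:
    """Turn an Alertmanager webhook payload into a Kubernetes Job manifest.

    The container image/command are placeholders; submit the returned dict
    with your API client of choice (e.g. BatchV1Api.create_namespaced_job).
    """
    alert = payload["alerts"][0]
    # Derive a DNS-friendly name prefix from the alert name.
    name = alert["labels"].get("alertname", "alert").lower().replace("_", "-")
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"generateName": f"{name}-"},
        "spec": {
            "backoffLimit": 0,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "alert-handler",
                        "image": "busybox",      # placeholder image
                        "command": ["sh", "-c",  # placeholder action
                                    "echo handling disk pressure"],
                    }],
                }
            },
        },
    }
```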
|
| <p>Today I am facing a problem where Redis in the Kubernetes cluster shows an error like this:</p>
<pre><code>2021-11-16T05:32:18 [INFO] - increment delete failed...,MobcError(RedisCMDError(An error was signalled by the server: You can't write against a read only replica.))
</code></pre>
<p>I checked my Redis config and found that the connection URL looks like this:</p>
<pre><code>redisConnectionStr="redis://default:password@cruise-redis-headless.reddwarf-cache.svc.cluster.local:6379/3"
</code></pre>
<p>I checked the service and found it mapped to the Redis cluster master and replicas. I was wondering what situation the headless service is used for. Why not only keep the master service and the replica service? For read operations, using any of the services would be fine; for writes, only the master service. When should I use the headless service?</p>
| <blockquote>
<p>why not only keep the master service and replica service?</p>
</blockquote>
<p>You can do that if you implement it yourself, but the Redis Helm chart may not use specific labels to distinguish read and write replicas, so it is hard to divert traffic and create different services for <strong>read & write</strong>.</p>
<p>If you deploy <strong>Redis</strong> using Helm, it will create two services in the <strong>K8s</strong> cluster:</p>
<p>one is a <strong>headless</strong> service and the other a normal <strong>service</strong> with a <strong>ClusterIP</strong>.</p>
<p>On the application side, we should be using the <strong>normal</strong> service.</p>
<p>The idea is that the headless service returns the replica IPs, which the application can use to manage the cluster if you are running Sentinel or Redis Cluster.</p>
<p>If you use the normal service, traffic is routed automatically to the <strong>read/write</strong> replicas as appropriate.</p>
<p>If you are using Sentinel, you can query the Sentinel service and it will return the master and read replica IPs.</p>
<p>In your code you can then use whichever you need, since Sentinel gives you both the write and read IPs.</p>
<p>Inside the <strong>helm</strong> chart you can see this configuration:</p>
<pre><code>sentinel:
enabled: true
</code></pre>
<p>In the bitnami <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis#master-slave-with-sentinel" rel="noreferrer">docs</a> they explicitly mention: "This command will return the address of the current master, which can be accessed from inside the cluster."</p>
<p>Read more at : <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis#master-replicas-with-sentinel" rel="noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/redis#master-replicas-with-sentinel</a></p>
|
| <p>I have to get the existing microservices running. They are given as Docker images.
They talk to each other by configured hostnames and ports.
I started to use Istio to view and configure the outgoing calls of each microservice.
Now I am at the point that I need to rewrite / redirect the host and the port of a request that goes out of one container.
How can I do that with Istio?</p>
<p>I will try to give a minimum example.
There are two services, service-a and service-b.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: service-b
spec:
selector:
matchLabels:
run: service-b
replicas: 1
template:
metadata:
labels:
run: service-b
spec:
containers:
- name: service-b
image: nginx
ports:
- containerPort: 80
name: web
---
apiVersion: v1
kind: Service
metadata:
name: service-b
labels:
run: service-b
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 80
name: service-b
selector:
run: service-b
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service-a
spec:
selector:
matchLabels:
run: service-a
replicas: 1
template:
metadata:
labels:
run: service-a
spec:
containers:
- name: service-a
image: nginx
ports:
- containerPort: 80
name: web
---
apiVersion: v1
kind: Service
metadata:
name: service-a
labels:
run: service-a
spec:
ports:
- port: 8081
protocol: TCP
targetPort: 80
name: service-a
selector:
run: service-a
</code></pre>
<p>I can docker exec into service-a and successfully execute:</p>
<pre><code>root@service-a-d44f55d8c-8cp8m:/# curl -v service-b:8080
< HTTP/1.1 200 OK
< server: envoy
</code></pre>
<p>Now, to simulate my problem, I want to reach service-b by using another hostname and port. I want to configure Istio the way that this call will also work:</p>
<pre><code>root@service-a-d44f55d8c-8cp8m:/# curl -v service-x:7777
</code></pre>
<p>Best regards,
Christian</p>
| <p>There are two solutions, and which one to use depends on whether <code>istio</code> features are needed.</p>
<p>If no <code>istio</code> features are needed, it can be solved using native kubernetes. If some <code>istio</code> features are intended to be used, it can be solved using an <code>istio virtual service</code>. Below are the two options:</p>
<hr />
<p><strong>1. Native kubernetes</strong></p>
<p><code>Service-x</code> should be pointed at the backend of the <code>service-b</code> deployment. Below is the <code>selector</code> which points to <code>deployment: service-b</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-x
labels:
run: service-x
spec:
ports:
- port: 7777
protocol: TCP
targetPort: 80
name: service-x
selector:
run: service-b
</code></pre>
<p>This way request will go through <code>istio</code> anyway because sidecar containers are injected.</p>
<pre><code># curl -vI service-b:8080
* Trying xx.xx.xx.xx:8080...
* Connected to service-b (xx.xx.xx.xx) port 8080 (#0)
> Host: service-b:8080
< HTTP/1.1 200 OK
< server: envoy
</code></pre>
<p>and</p>
<pre><code># curl -vI service-x:7777
* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 200 OK
< server: envoy
</code></pre>
<hr />
<p><strong>2. Istio virtual service</strong></p>
<p>In this example <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service" rel="nofollow noreferrer">virtual service</a> is used. Service <code>service-x</code> still needs to be created, but now we don't specify any selectors:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-x
labels:
run: service-x
spec:
ports:
- port: 7777
protocol: TCP
targetPort: 80
name: service-x
</code></pre>
<p>Test it from another pod:</p>
<pre><code># curl -vI service-x:7777
* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 503 Service Unavailable
< server: envoy
</code></pre>
<p>A <code>503</code> error, which is expected. Now create the <code>virtual service</code> which will route requests to <code>service-b</code> on <code>port: 8080</code>:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: service-x-to-b
spec:
hosts:
- service-x
http:
- route:
- destination:
host: service-b
port:
number: 8080
</code></pre>
<p>Testing from the pod:</p>
<pre><code># curl -vI service-x:7777
* Trying yy.yy.yy.yy:7777...
* Connected to service-x (yy.yy.yy.yy) port 7777 (#0)
> Host: service-x:7777
< HTTP/1.1 200 OK
< server: envoy
</code></pre>
<p>See it works as expected.</p>
<hr />
<p>Useful links:</p>
<ul>
<li><a href="https://istio.io/latest/docs/reference/config/networking/virtual-service" rel="nofollow noreferrer">Istio virtual service</a></li>
<li><a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#Destination" rel="nofollow noreferrer">virtual service - destination</a></li>
</ul>
|
<p>I have a VM with Kubernetes installed using kubeadm (NOT minikube). The VM acts as the single node of the cluster, with taints removed to allow it to act as both Master and Worker node (as shown in the Kubernetes documentation).
I have saved, transferred and loaded my <em>app:test</em> image into it. I can easily run a container from it using docker run.
It shows up when I run <em>sudo docker images</em>.</p>
<p>When I create a deployment/pod that uses this image and specify <em>imagePullPolicy: IfNotPresent</em> or <em>Never</em>, I still get the <em>ImagePullBackOff</em> error. The describe command shows me it tries to pull the image from Docker Hub...</p>
<p>Note that when I try to use a local image that was <em>pulled as the result of creating another pod</em>, the ImagePullPolicies seem to work, no problem. Although the <em>image doesn't appear when I run sudo docker images --all</em>.</p>
<p>How can I use a local image for pods in kubernetes? Is there a way to do it without using a private repository?</p>
| <blockquote>
<p>image doesn't appear when I run <code>sudo docker images --all</code></p>
</blockquote>
<p>Based on your comment, you are using K8s v1.22, which means it is likely your cluster is using <code>containerd</code> container runtime instead of <code>docker</code> (you can check with <code>kubectl get nodes -o wide</code>, and see the last column).</p>
<p>Try listing your images with <code>crictl images</code> and pulling with <code>crictl pull <image_name></code> to preload the images on the node.</p>
|
<p>I have an API that recently started receiving more traffic, about 1.5x. That also lead to a doubling in the latency:</p>
<p><a href="https://i.stack.imgur.com/clJmx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/clJmx.png" alt="latency" /></a></p>
<p>This surprised me since I had setup autoscaling of both nodes and pods as well as GKE internal loadbalancing.</p>
<p>My external API passes the request to an internal server which uses a lot of CPU. And looking at my VM instances it seems like all of the traffic got sent to one of my two VM instances (a.k.a. Kubernetes nodes):</p>
<p><a href="https://i.stack.imgur.com/B1BhR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B1BhR.png" alt="CPU utilization per node" /></a></p>
<p>With loadbalancing I would have expected the CPU usage to be more evenly divided between the nodes.</p>
<p>Looking at my deployment there is one pod on the first node:</p>
<p><a href="https://i.stack.imgur.com/S0KYc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S0KYc.png" alt="First pod on first node" /></a></p>
<p>And two pods on the second node:</p>
<p><a href="https://i.stack.imgur.com/DBSUR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DBSUR.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/ZVKg6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZVKg6.png" alt="enter image description here" /></a></p>
<p>My service config:</p>
<pre><code>$ kubectl describe service model-service
Name: model-service
Namespace: default
Labels: app=model-server
Annotations: networking.gke.io/load-balancer-type: Internal
Selector: app=model-server
Type: LoadBalancer
IP Families: <none>
IP: 10.3.249.180
IPs: 10.3.249.180
LoadBalancer Ingress: 10.128.0.18
Port: rest-api 8501/TCP
TargetPort: 8501/TCP
NodePort: rest-api 30406/TCP
Endpoints: 10.0.0.145:8501,10.0.0.152:8501,10.0.1.135:8501
Port: grpc-api 8500/TCP
TargetPort: 8500/TCP
NodePort: grpc-api 31336/TCP
Endpoints: 10.0.0.145:8500,10.0.0.152:8500,10.0.1.135:8500
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UpdatedLoadBalancer 6m30s (x2 over 28m) service-controller Updated load balancer with new hosts
</code></pre>
<p>The fact that Kubernetes started a new pod seems like a clue that Kubernetes autoscaling is working. But the pods on the second VM do not receive any traffic. How can I make GKE balance the load more evenly?</p>
<h2>Update Nov 2:</h2>
<p>Goli's answer leads me to think that it has something to do with the setup of the model service. The service exposes both a REST API and a GRPC API but the GRPC API is the one that receives traffic.</p>
<p>There is a corresponding forwarding rule for my service:</p>
<pre><code>$ gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
aab8065908ed4474fb1212c7bd01d1c1 us-central1 10.128.0.18 TCP us-central1/backendServices/aab8065908ed4474fb1212c7bd01d1c1
</code></pre>
<p>Which points to a backend service:</p>
<pre><code>$ gcloud compute backend-services describe aab8065908ed4474fb1212c7bd01d1c1
backends:
- balancingMode: CONNECTION
group: https://www.googleapis.com/compute/v1/projects/questions-279902/zones/us-central1-a/instanceGroups/k8s-ig--42ce3e0a56e1558c
connectionDraining:
drainingTimeoutSec: 0
creationTimestamp: '2021-02-21T20:45:33.505-08:00'
description: '{"kubernetes.io/service-name":"default/model-service"}'
fingerprint: lA2-fz1kYug=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/questions-279902/global/healthChecks/k8s-42ce3e0a56e1558c-node
id: '2651722917806508034'
kind: compute#backendService
loadBalancingScheme: INTERNAL
name: aab8065908ed4474fb1212c7bd01d1c1
protocol: TCP
region: https://www.googleapis.com/compute/v1/projects/questions-279902/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/questions-279902/regions/us-central1/backendServices/aab8065908ed4474fb1212c7bd01d1c1
sessionAffinity: NONE
timeoutSec: 30
</code></pre>
<p>Which has a health check:</p>
<pre><code>$ gcloud compute health-checks describe k8s-42ce3e0a56e1558c-node
checkIntervalSec: 8
creationTimestamp: '2021-02-21T20:45:18.913-08:00'
description: ''
healthyThreshold: 1
httpHealthCheck:
host: ''
port: 10256
proxyHeader: NONE
requestPath: /healthz
id: '7949377052344223793'
kind: compute#healthCheck
logConfig:
enable: true
name: k8s-42ce3e0a56e1558c-node
selfLink: https://www.googleapis.com/compute/v1/projects/questions-279902/global/healthChecks/k8s-42ce3e0a56e1558c-node
timeoutSec: 1
type: HTTP
unhealthyThreshold: 3
</code></pre>
<p>List of my pods:</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
api-server-deployment-6747f9c484-6srjb 2/2 Running 3 3d22h
label-server-deployment-6f8494cb6f-79g9w 2/2 Running 4 38d
model-server-deployment-55c947cf5f-nvcpw 0/1 Evicted 0 22d
model-server-deployment-55c947cf5f-q8tl7 0/1 Evicted 0 18d
model-server-deployment-766946bc4f-8q298 1/1 Running 0 4d5h
model-server-deployment-766946bc4f-hvwc9 0/1 Evicted 0 6d15h
model-server-deployment-766946bc4f-k4ktk 1/1 Running 0 7h3m
model-server-deployment-766946bc4f-kk7hs 1/1 Running 0 9h
model-server-deployment-766946bc4f-tw2wn 0/1 Evicted 0 7d15h
model-server-deployment-7f579d459d-52j5f 0/1 Evicted 0 35d
model-server-deployment-7f579d459d-bpk77 0/1 Evicted 0 29d
model-server-deployment-7f579d459d-cs8rg 0/1 Evicted 0 37d
</code></pre>
<p>How do I A) confirm that this health check is in fact showing 2/3 backends as unhealthy? And B) configure the health check to send traffic to all of my backends?</p>
<h2>Update Nov 5:</h2>
<p>After finding that several pods had gotten evicted in the past because of too little RAM, I migrated the pods to a new nodepool. The old nodepool VMs had 4 CPU and 4GB memory, the new ones have 2 CPU and 8GB memory. That seems to have resolved the eviction/memory issues, but the loadbalancer still only sends traffic to one pod at a time.</p>
<p>Pod 1 on node 1:</p>
<p><a href="https://i.stack.imgur.com/rPQJ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rPQJ0.png" alt="Pod 1 on node 1" /></a></p>
<p>Pod 2 on node 2:</p>
<p><a href="https://i.stack.imgur.com/dE4B5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dE4B5.png" alt="enter image description here" /></a></p>
<p>It seems like the loadbalancer is not splitting the traffic at all but just randomly picking one of the GRPC modelservers and sending 100% of traffic there. Is there some configuration that I missed which caused this behavior? Is this related to me using GRPC?</p>
| <p>Turns out the answer is that you <strong>cannot loadbalance gRPC requests using a GKE loadbalancer</strong>.</p>
<p>A GKE loadbalancer (as well as Kubernetes' default loadbalancer) operates at the connection level: it picks a backend every time a new TCP connection is formed. For regular HTTP/1.1 traffic, connections are short-lived, so requests end up spread across backends and the loadbalancer works fine. For gRPC (which is based on HTTP/2), a single long-lived TCP connection is set up once and all requests are multiplexed over it, so they all land on the same backend.</p>
<p>More details in this <a href="https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/" rel="nofollow noreferrer">blog post</a>.</p>
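<p>For completeness: that post also describes a mesh-free alternative, client-side balancing via a headless Service. DNS then returns every pod IP, and a gRPC client configured for <code>round_robin</code> can open one connection per backend. A sketch reusing the labels and ports from the question (this assumes your gRPC client supports DNS re-resolution and round-robin; it does not give you an internal loadbalancer IP):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: model-service-headless
spec:
  clusterIP: None        # headless: no virtual IP, DNS returns all pod IPs
  selector:
    app: model-server
  ports:
  - name: grpc-api
    port: 8500
    targetPort: 8500
</code></pre>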
<p>To enable gRPC loadbalancing I had to:</p>
<ol>
<li>Install Linkerd</li>
</ol>
<pre><code>curl -fsL https://run.linkerd.io/install | sh
linkerd install | kubectl apply -f -
</code></pre>
<ol start="2">
<li>Inject the Linkerd proxy in <strong>both</strong> the receiving and sending pods:</li>
</ol>
<p><a href="https://i.stack.imgur.com/LO0Ii.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LO0Ii.png" alt="enter image description here" /></a></p>
<pre><code>kubectl apply -f api_server_deployment.yaml
kubectl apply -f model_server_deployment.yaml
</code></pre>
<ol start="3">
<li>After realizing that Linkerd would not work together with the GKE loadbalancer, I exposed the receiving deployment as a ClusterIP service instead.</li>
</ol>
<pre><code>kubectl expose deployment/model-server-deployment
</code></pre>
<ol start="4">
<li>Pointed the gRPC client to the ClusterIP service IP address I just created, and redeployed the client.
<a href="https://i.stack.imgur.com/AMFag.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AMFag.png" alt="enter image description here" /></a></li>
</ol>
<pre><code>kubectl apply -f api_server_deployment.yaml
</code></pre>
|
<p>We have a AWS EKS running (setup using Pulumi), where we installed Tekton as described in the <a href="https://buildpacks.io/docs/tools/tekton/" rel="nofollow noreferrer">Cloud Native Buildpacks Tekton docs</a>. The <a href="https://github.com/jonashackt/tekton-argocd-eks" rel="nofollow noreferrer">example project is available</a>.</p>
<p><a href="https://github.com/jonashackt/tekton-argocd-eks/blob/main/pipelines/pipeline.yml" rel="nofollow noreferrer">Our Tekton pipeline</a> is configured like this (which is derived <a href="https://buildpacks.io/docs/tools/tekton/#43-pipeline" rel="nofollow noreferrer">from the Cloud Native Buildpacks Tekton docs</a> also):</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: buildpacks-test-pipeline
spec:
params:
- name: IMAGE
type: string
description: image URL to push
- name: SOURCE_URL
type: string
description: A git repo url where the source code resides.
- name: SOURCE_REVISION
description: The branch, tag or SHA to checkout.
default: ""
workspaces:
- name: source-workspace # Directory where application source is located. (REQUIRED)
- name: cache-workspace # Directory where cache is stored (OPTIONAL)
tasks:
- name: fetch-repository # This task fetches a repository from github, using the `git-clone` task you installed
taskRef:
name: git-clone
workspaces:
- name: output
workspace: source-workspace
params:
- name: url
value: "$(params.SOURCE_URL)"
- name: revision
value: "$(params.SOURCE_REVISION)"
- name: subdirectory
value: ""
- name: deleteExisting
value: "true"
- name: buildpacks # This task uses the `buildpacks` task to build the application
taskRef:
name: buildpacks
runAfter:
- fetch-repository
workspaces:
- name: source
workspace: source-workspace
- name: cache
workspace: cache-workspace
params:
- name: APP_IMAGE
value: "$(params.IMAGE)"
- name: BUILDER_IMAGE
value: paketobuildpacks/builder:base # This is the builder we want the task to use (REQUIRED)
</code></pre>
<p>We added <code>SOURCE_URL</code> and <code>SOURCE_REVISION</code> as parameters already.</p>
<p>The question is: <strong>How can we trigger a Tekton <code>PipelineRun</code> from GitLab CI</strong> (inside our <code>.gitlab-ci.yml</code>) adhering to the following requirements:</p>
<ul>
<li>simplest possible approach</li>
<li>Do not use the extra complexity introduced by <a href="https://github.com/tektoncd/triggers" rel="nofollow noreferrer">Tekton Triggers</a> (incl. <a href="https://github.com/tektoncd/experimental/tree/main/commit-status-tracker" rel="nofollow noreferrer">commit-status-tracker</a>) but still keep GitLab as the source of truth (e.g. see green/red pipeline runs on commits etc.)</li>
<li>report successfully run Tekton Pipelines as green GitLab CI Pipelines & failed Tekton Pipelines as red GitLab CI Pipelines</li>
<li>preserve/stream the Tekton Pipeline logs into GitLab CI Pipeline logs - both in case of errors or success inside the Tekton Pipelines</li>
<li>use <a href="https://docs.gitlab.com/ee/ci/variables/predefined_variables.html" rel="nofollow noreferrer">GitLab CI Predefined Variables</a> for a generic approach</li>
</ul>
| <p><strong>TLDR;</strong></p>
<p>I created a fully comprehensible example project showing all necessary steps and running pipelines here: <a href="https://gitlab.com/jonashackt/microservice-api-spring-boot/" rel="noreferrer">https://gitlab.com/jonashackt/microservice-api-spring-boot/</a> with the full <code>.gitlab-ci.yml</code> to directly trigger a Tekton Pipeline:</p>
<pre><code>image: registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0
variables:
AWS_DEFAULT_REGION: 'eu-central-1'
before_script:
- mkdir ~/.kube
- echo "$EKSKUBECONFIG" > ~/.kube/config
- echo "--- Testdrive connection to cluster"
- kubectl get nodes
stages:
- build
build-image:
stage: build
script:
- echo "--- Create parameterized Tekton PipelineRun yaml"
- tkn pipeline start buildpacks-test-pipeline
--serviceaccount buildpacks-service-account-gitlab
--workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
--workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
--param IMAGE=$CI_REGISTRY_IMAGE
--param SOURCE_URL=$CI_PROJECT_URL
--param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
--dry-run
--output yaml > pipelinerun.yml
- echo "--- Trigger PipelineRun in Tekton / K8s"
- PIPELINE_RUN_NAME=$(kubectl create -f pipelinerun.yml --output=jsonpath='{.metadata.name}')
- echo "--- Show Tekton PipelineRun logs"
- tkn pipelinerun logs $PIPELINE_RUN_NAME --follow
- echo "--- Check if Tekton PipelineRun Failed & exit GitLab Pipeline accordingly"
- kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}' | grep Failed && exit 1 || exit 0
</code></pre>
<p>Here are the brief steps you need to do:</p>
<p><strong>1. Choose a base image for your <code>.gitlab-ci.yml</code> providing <code>aws</code> CLI, <code>kubectl</code> and Tekton CLI (<code>tkn</code>)</strong></p>
<p>This is entirely up to you. I created an example project <a href="https://gitlab.com/jonashackt/aws-kubectl-tkn" rel="noreferrer">https://gitlab.com/jonashackt/aws-kubectl-tkn</a> which provides an image based on the official <a href="https://hub.docker.com/r/amazon/aws-cli" rel="noreferrer">https://hub.docker.com/r/amazon/aws-cli</a> image, accessible via <code>registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0</code>.</p>
<p><strong>2. CI/CD Variables for aws CLI & Kubernetes cluster access</strong></p>
<p>Inside your GitLab CI project (or better: inside the group your GitLab CI project resides in) you need to create <code>AWS_ACCESS_KEY_ID</code>, <code>AWS_SECRET_ACCESS_KEY</code> as CI/CD Variables holding the aws cli credentials (be sure to <code>mask</code> them while creating them in order to prevent them being printed into the GitLab CI logs). Depending on your EKS clusters (or other K8s clusters) config, you need to provide a <code>kubeconfig</code> to access your cluster. One way is to create a GitLab CI/CD variable like <code>EKSKUBECONFIG</code> providing the necessary file (e.g. in the example project this is provided by Pulumi with <code>pulumi stack output kubeconfig > kubeconfig</code>). In this setup using Pulumi there are no secret credentials inside the <code>kubeconfig</code>, so the variable doesn't need to be masked. But be aware of possible credentials here and protect them accordingly if needed.</p>
<p><a href="https://i.stack.imgur.com/YF4CO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YF4CO.png" alt="enter image description here" /></a></p>
<p>Also define <code>AWS_DEFAULT_REGION</code> containing your EKS cluster's region:</p>
<pre><code># As we need kubectl, aws & tkn CLI we use https://gitlab.com/jonashackt/aws-kubectl-tkn
image: registry.gitlab.com/jonashackt/aws-kubectl-tkn:0.21.0
variables:
AWS_DEFAULT_REGION: 'eu-central-1'
</code></pre>
<p><strong>3. Use <code>kubeconfig</code> and testdrive cluster connection in <code>before_script</code> section</strong></p>
<p>Preparing things we need later inside other steps could be done inside the <code>before_script</code> section. So let's create the directory <code>~/.kube</code> there and create the file <code>~/.kube/config</code> from the contents of the variable <code>EKSKUBECONFIG</code>. Finally fire a <code>kubectl get nodes</code> to check if the cluster connection is working. Our <code>before_script</code> section now looks like this:</p>
<pre><code>before_script:
- mkdir ~/.kube
- echo "$EKSKUBECONFIG" > ~/.kube/config
- echo "--- Testdrive connection to cluster"
- kubectl get nodes
</code></pre>
<p><strong>4. Pass parameters to Tekton PipelineRun</strong></p>
<p>Passing parameters via <code>kubectl</code> isn't trivial - or even needs to be done using a templating engine like Helm. But luckily the Tekton CLI has something for us: <a href="https://stackoverflow.com/questions/67405826/tekton-running-pipeline-via-passing-parameter"><code>tkn pipeline start</code> accepts parameters</a>. So we can transform the <a href="https://buildpacks.io/docs/tools/tekton/#5-create--apply-pipelinerun" rel="noreferrer">Cloud Native Buildpacks Tekton PipelineRun Yaml file</a> into a <code>tkn</code> CLI command like this:</p>
<pre><code>tkn pipeline start buildpacks-test-pipeline \
--serviceaccount buildpacks-service-account-gitlab \
--workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc \
--workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc \
--param IMAGE=registry.gitlab.com/jonashackt/microservice-api-spring-boot \
--param SOURCE_URL=https://gitlab.com/jonashackt/microservice-api-spring-boot \
--param SOURCE_REVISION=main \
--timeout 240s \
--showlog
</code></pre>
<p>Now here are some points to consider. First, the name <code>buildpacks-test-pipeline</code> right after the <code>tkn pipeline start</code> works as an equivalent to the yaml file's <code>spec: pipelineRef: name: buildpacks-test-pipeline</code> definition.</p>
<p>It will also work as a reference to the <code>Pipeline</code> object defined inside the file <a href="https://github.com/jonashackt/pulumi-eks-tekton/blob/trigger-build-from-gitlab/tekton-ci-config/pipeline.yml" rel="noreferrer">pipeline.yml</a> which starts with <code>metadata: name: buildpacks-test-pipeline</code> like:</p>
<p>apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: buildpacks-test-pipeline
...</p>
<p>Second, defining workspaces isn't trivial. Luckily <a href="https://issueexplorer.com/issue/tektoncd/cli/1368" rel="noreferrer">there's help</a>. We can define a workspace in the <code>tkn</code> CLI like this: <code>--workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc</code>.</p>
<p>Third, using the parameters as intended now becomes easy. Simply use <code>--param</code> accordingly. We also use <code>--showlog</code> to directly stream the Tekton logs into the commandline (or GitLab CI!) together with <code>--timeout</code>.</p>
<p>Finally, using <a href="https://docs.gitlab.com/ee/ci/variables/predefined_variables.html" rel="noreferrer">GitLab CI Predefined variables</a>, our <code>.gitlab-ci.yml</code>'s build stage looks like this:</p>
<pre><code>build-image:
stage: build
script:
- echo "--- Run Tekton Pipeline"
- tkn pipeline start buildpacks-test-pipeline
--serviceaccount buildpacks-service-account-gitlab
--workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
--workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
--param IMAGE=$CI_REGISTRY_IMAGE
--param SOURCE_URL=$CI_PROJECT_URL
--param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
--timeout 240s
--showlog
</code></pre>
<p><strong>5. Solve the every GitLab CI Pipeline is green problem</strong></p>
<p>This could have been everything we need to do. But: right now every GitLab CI Pipeline is green, regardless of the Tekton Pipeline's status.</p>
<p>Therefore we remove <code>--showlog</code> and <code>--timeout</code> again, but add a <code>--dry-run</code> together with the <code>--output yaml</code> flags. Without the <code>--dry-run</code>, the <code>tkn pipeline start</code> command would already create a <code>PipelineRun</code> object, which we then couldn't create again using <code>kubectl</code>:</p>
<pre><code>build-image:
stage: build
script:
- echo "--- Create parameterized Tekton PipelineRun yaml"
- tkn pipeline start buildpacks-test-pipeline
--serviceaccount buildpacks-service-account-gitlab
--workspace name=source-workspace,subPath=source,claimName=buildpacks-source-pvc
--workspace name=cache-workspace,subPath=cache,claimName=buildpacks-source-pvc
--param IMAGE=$CI_REGISTRY_IMAGE
--param SOURCE_URL=$CI_PROJECT_URL
--param SOURCE_REVISION=$CI_COMMIT_REF_SLUG
--dry-run
--output yaml > pipelinerun.yml
</code></pre>
<p>Now that we removed <code>--showlog</code> and don't start an actual Tekton pipeline using <code>tkn</code> CLI, we need to create the pipeline run using:</p>
<pre><code>- PIPELINE_RUN_NAME=$(kubectl create -f pipelinerun.yml --output=jsonpath='{.metadata.name}')
</code></pre>
<p>Having the temporary variable <code>PIPELINE_RUN_NAME</code> available containing the exact pipeline run id, we can stream the Tekton pipeline logs into our GitLab CI log again:</p>
<pre><code>- tkn pipelinerun logs $PIPELINE_RUN_NAME --follow
</code></pre>
<p>Finally we need to check the Tekton pipeline run's status and exit our GitLab CI pipeline accordingly in order to prevent red Tekton pipelines resulting in green GitLab CI pipelines. Therefore let's check the status of the Tekton pipeline run first. This can be achieved <a href="https://stackoverflow.com/a/55077916/4964553">using <code>--output=jsonpath='{.status.conditions[*].reason}'</code></a> together with a <code>kubectl get pipelineruns</code>:</p>
<pre><code>kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}'
</code></pre>
<p>Then we pipe the result into <a href="https://stackoverflow.com/a/11612379/4964553">a <code>grep</code> which checks if <code>Failed</code> is inside</a> the <code>status.conditions.reason</code> field.</p>
<p>Finally we use a bash one-liner (which is <code><expression to check true or false> && command when true || command when false</code>) to issue the suitable <code>exit</code> command (see <a href="https://askubuntu.com/a/892605">https://askubuntu.com/a/892605</a>):</p>
<pre><code>- kubectl get pipelineruns $PIPELINE_RUN_NAME --output=jsonpath='{.status.conditions[*].reason}' | grep Failed && exit 1 || exit 0
</code></pre>
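<p>The chaining itself is plain shell short-circuiting, independent of <code>kubectl</code>. A self-contained illustration of the pattern:</p>

```shell
# Mimics the check above: "red" when the status text contains "Failed",
# "green" otherwise. grep's exit code selects which branch runs.
status_of() {
  echo "$1" | grep -q Failed && echo "red" || echo "green"
}

status_of "Succeeded"   # prints: green
status_of "Failed"      # prints: red
```

<p>One caveat of this pattern: if the command after <code>&&</code> itself fails, the <code>||</code> branch runs as well. That is fine here, since the branches are simple <code>exit</code>/<code>echo</code> commands.</p>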
<p>Now every GitLab CI Pipeline becomes green, when the Tekton Pipeline succeeded - and gets red when the Tekton Pipeline failed. <a href="https://gitlab.com/jonashackt/microservice-api-spring-boot/-/pipelines" rel="noreferrer">The example project has some logs</a> if you're interested. It's pretty cool to see the Tekton logs inside the GitLab CI logs:</p>
<p><a href="https://i.stack.imgur.com/XJnP7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XJnP7.png" alt="enter image description here" /></a></p>
|
<p>I am using the Kubernetes plugin in Jenkins. I want to run an init script before the slave container is provisioned on Kubernetes. The Kubernetes plugin pod template won't allow me to, or I cannot find a way to do it. Can anybody please help me with this? I need to run a certain set of commands on the Kubernetes slave container before it is provisioned. This is how my config looks:
<a href="https://i.stack.imgur.com/Eq5Ui.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Eq5Ui.png" alt="enter image description here" /></a></p>
| <p>1 - You don't need to use the slave image; Jenkins will have an agent container regardless, so you should only specify the containers you want it to run additionally.</p>
<p>2 - On the containers you can specify the entrypoints or pre-provision whatever you want beforehand and just worry about the execution. That means you can get a container ready to go and assume the code will be there; if you need to run any extra commands on the code, you can just add an extra script step.</p>
<p>3 - In order for your step to be executed in a container, you have to be explicit in your pipeline, otherwise it will run on the master.</p>
<p>I can't really guide you to using the UI because I use the Jenkinsfile inside projects I want to build.</p>
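<p>For reference, a minimal scripted-pipeline sketch of points 2 and 3 (the container, image and script names here are hypothetical, so adapt them to your pod template): any "init" commands can simply be the first <code>sh</code> steps inside the <code>container(...)</code> block, which is what makes them run in the slave container instead of on the master:</p>
<pre><code>podTemplate(containers: [
    // pre-provisioned build image; "cat" + tty keeps the container alive for steps
    containerTemplate(name: 'builder', image: 'maven:3.6.3-jdk-11', command: 'cat', ttyEnabled: true)
]) {
    node(POD_LABEL) {
        stage('Init and build') {
            container('builder') {            // explicit: run the following steps in 'builder'
                sh './init-script.sh'         // hypothetical init commands, run first
                sh 'mvn -B package'           // the actual work
            }
        }
    }
}
</code></pre>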
|
<p>I've created a Jenkins pipeline and I am running it on GKE (using Kubernetes plugin). In one of the steps I am running java application in one of the containers. Application is writing logs to stdout. I would like to see the logs from this application in Stackdriver. How can I achieve it? Currently I am not able to see these logs in Stackdriver nor in Jenkins logs.</p>
<p>My pipeline:</p>
<pre><code>podTemplate(cloud: 'kubernetes',
yaml: """
apiVersion: v1
kind: Pod
spec:
securityContext:
runAsUser: 0
containers:
- name: maven
image: maven:3.6.3-jdk-11
resources:
requests:
cpu: "1"
memory: "1Gi"
ephemeral-storage: "20Mi"
limits:
cpu: "2"
memory: "4Gi"
ephemeral-storage: "1Gi"
command:
- cat
tty: true
"""
) {
node(POD_LABEL) {
stage('Run app') {
container('maven') {
sh "java -jar my-app.jar &"
}
}
}
}
</code></pre>
<p>=== EDIT ===</p>
<p>My master Jenkins is installed on one of the GKE pods, and each PR validation triggers another pod with the mentioned container running the Java application. It looks like the Jenkins slave agent is gathering the logs (but not attaching them to the build log, as the Java app is run in the background).</p>
| <p>If you are already running Jenkins on a GCP instance, it is quite easy to get the logs directly into Stackdriver.</p>
<p><a href="https://faun.pub/jenkins-on-google-compute-engine-611bd86e295b" rel="nofollow noreferrer">https://faun.pub/jenkins-on-google-compute-engine-611bd86e295b</a></p>
<p>You can also integrate the logging agent with Java:</p>
<p>Reference : <a href="https://medium.com/google-cloud/java-logging-on-cloud-run-with-stackdriver-9786d6fdbe17" rel="nofollow noreferrer">https://medium.com/google-cloud/java-logging-on-cloud-run-with-stackdriver-9786d6fdbe17</a></p>
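<p>On GKE, anything a container writes to stdout as one JSON object per line is picked up by the logging agent and turned into structured Cloud Logging entries (the <code>severity</code> and <code>message</code> fields are mapped automatically). A minimal sketch without any client library (the class name and log contents are illustrative):</p>

```java
public class StructuredLog {

    // Escape backslashes and quotes so the emitted line stays valid JSON.
    static String jsonEscape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    // One JSON object per line on stdout; the GKE logging agent parses
    // "severity" and "message" into structured log fields.
    static void log(String severity, String message) {
        System.out.println(
            "{\"severity\":\"" + severity + "\",\"message\":\"" + jsonEscape(message) + "\"}");
    }

    public static void main(String[] args) {
        log("INFO", "pipeline step started");
        log("ERROR", "build step failed");
    }
}
```

<p>Run from the pipeline's container, these lines show up as INFO/ERROR entries in Stackdriver instead of unparsed text.</p>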
<p>You should also check out <a href="https://bluemedora.com/integration/jenkins-logs-in-google-stackdriver/" rel="nofollow noreferrer">Bluemedora</a>.</p>
|
<p>I have a VM with Kubernetes installed using kubeadm (NOT minikube). The VM acts as the single node of the cluster, with taints removed to allow it to act as both Master and Worker node (as shown in the Kubernetes documentation).
I have saved, transferred and loaded my <em>app:test</em> image into it. I can easily run a container from it using docker run.
It shows up when I run <em>sudo docker images</em>.</p>
<p>When I create a deployment/pod that uses this image and specify <em>imagePullPolicy: IfNotPresent</em> or <em>Never</em>, I still get the <em>ImagePullBackOff</em> error. The describe command shows me it tries to pull the image from Docker Hub...</p>
<p>Note that when I try to use a local image that was <em>pulled as the result of creating another pod</em>, the ImagePullPolicies seem to work, no problem. Although the <em>image doesn't appear when I run sudo docker images --all</em>.</p>
<p>How can I use a local image for pods in kubernetes? Is there a way to do it without using a private repository?</p>
| <h2>One can do so with a combination of crictl and ctr, if using containerd.</h2>
<p><strong>TLDR:</strong> <a href="https://blog.scottlowe.org/2020/01/25/manually-loading-container-images-with-containerd/" rel="nofollow noreferrer">these steps</a>, which are also described in the crictl <a href="https://github.com/containerd/cri/blob/master/docs/crictl.md#directly-load-a-container-image" rel="nofollow noreferrer">github documentation</a>:</p>
<p>1- Once you get the image on the node (in my case, a VM), make sure it is in an archive (.tar). You can do that with the <code>docker save</code> or <code>ctr image export</code> commands.</p>
<p>2- Use <code>sudo ctr -n=k8s.io images import myimage.tar</code> while in the same directory as the archived image to add it to containerd in the namespace that kubernetes uses to track its images. It should now appear when you run <code>sudo crictl images</code>.</p>
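<p>Assuming the image is tagged <code>app:test</code>, the whole sequence on the node looks like this:</p>
<pre><code>docker save app:test -o app-test.tar              # 1- archive the image (or: ctr image export)
sudo ctr -n=k8s.io images import app-test.tar     # 2- import it into the k8s.io namespace
sudo crictl images | grep app                     # the image should now be listed
</code></pre>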
<hr />
<p>As suggested, I tried listing images with crictl and my app:test did not appear. However, trying to import my local image through crictl didn't seem to work either. I used <em>crictl pull app:test</em> and it showed the following error message:</p>
<blockquote>
<p>FATA[0000] pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/app:test": failed to resolve reference "docker.io/library/app:test": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed.</p>
</blockquote>
<p>However, when following <a href="https://blog.scottlowe.org/2020/01/25/manually-loading-container-images-with-containerd/" rel="nofollow noreferrer">these steps</a>, my image is finally recognized as an existing local image in kubernetes. They are actually the same as suggested in the crictl <a href="https://github.com/containerd/cri/blob/master/docs/crictl.md#directly-load-a-container-image" rel="nofollow noreferrer">github documentation</a></p>
<p>How does one explain this? How do images get "registered" in the kubernetes cluster? Why couldn't crictl import the image? I might post another issue to ask that...</p>
|
<p>I am trying to utilize Rancher Terraform provider to create a new RKE cluster and then use the Kubernetes and Helm Terraform providers to create/deploy resources to the created cluster. I'm using this <a href="https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster_v2#kube_config" rel="noreferrer">https://registry.terraform.io/providers/rancher/rancher2/latest/docs/resources/cluster_v2#kube_config</a> attribute to create a local file with the new cluster's kube config.
The Helm and Kubernetes providers need the kube config in the provider configuration: <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs" rel="noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs</a>. Is there any way I can get the provider configuration to wait for the local file to be created?</p>
| <p>Generally speaking, Terraform always needs to evaluate provider configurations during the planning step because providers are allowed to rely on those settings in order to create the plan, and so it typically isn't possible to have a provider configuration refer to something created only during the apply step.</p>
<p>As a way to support bootstrapping in a situation like this though, this is one situation where it can be reasonable to use the <code>-target=...</code> option to <code>terraform apply</code>, to plan and apply only sufficient actions to create the Rancher cluster first, and then follow up with a normal plan and apply to complete everything else:</p>
<pre><code>terraform apply -target=rancher2_cluster_v2.example
terraform apply
</code></pre>
<p>This two-step process is needed only for situations where the <code>kube_config</code> attribute isn't known yet. As long as this resource type has convergent behavior, you should be able to use just <code>terraform apply</code> as normal unless you in future make a change that requires replacing the cluster.</p>
<p>(This is a general answer about provider configurations referring to resource attributes. I'm not familiar with Rancher in particular and so there might be some specifics about that particular resource type which I'm not mentioning here.)</p>
|
<p>I am using <code>MobaXterm_21.2</code> installed version.
When I run <code>kubectl version</code>, it's working as expected:</p>
<pre><code> kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", G
oVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>But it is not able to read the <code>.kube/config</code> file or to pick up the config file given through the ENV variable <code>KUBECONFIG</code> or <code>--kubeconfig</code>. See the response below:</p>
<pre><code> export KUBECONFIG=/drives/path/to/config/file.config
✔
kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
</code></pre>
<p>Not working either:</p>
<pre><code> kubectl config --kubeconfig=/drives/path/to/config/file.config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
</code></pre>
<p><strong>This is a blocking issue for me. Can anyone guide me on how to make <code>kubectl</code> work in mobaxterm? Any help will be highly appreciated.</strong></p>
<p><strong>Edit</strong> - as @mikolaj-s pointed out, from <code>powershell/cmd/gitbash</code> I am able to access the k8s cluster without any problem. I have been accessing the cluster using powershell for several months now and it reads the <code>.kube/config</code> file or <code>KUBECONFIG</code> env var as expected.<br />
I want to shift to mobaxterm for its multi-tab feature. If there is another tool that provides a multi-tab feature I might be ok with it too.</p>
<p>In Mobaxterm -</p>
<pre><code> kubectl cluster-info dump
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
kubectl config get-contexts --kubeconfig /path/to/config/file
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
</code></pre>
<p>The kubeconfig files I am using are tested and definitely have no issues, as they work from powershell.</p>
<p><strong>Edit 2 -</strong> Many thanks to @mikolaj-s.<br />
With a lot of hope I tried using powershell in mobaxterm as suggested by mikolaj - <a href="https://codetryout.com/mobaxterm-windows-command-prompt/" rel="nofollow noreferrer">mobaxterm-windows-command-prompt</a> and <strong>it worked.</strong></p>
| <p>The solution is to use PowerShell directly in the MobaXterm - steps how to configure that can be <a href="https://codetryout.com/mobaxterm-windows-command-prompt/" rel="nofollow noreferrer">found here</a> (instead of <code>CMD</code> choose <code>Powershell</code>):</p>
<blockquote>
<p>MobaXterm comes with various client tools such as SSH, telnet, WSL, CMD, and so on. It can well handle a Windows command line as well, here is how,</p>
</blockquote>
<blockquote>
<p>How to open Windows command prompt using MobaXterm?</p>
<ul>
<li>Open your MobaXterm</li>
<li>From the top menu, click on Sessions</li>
<li>From the Session settings window, click on the Shell button</li>
<li>Under the Basic Shell settings tab, select Terminal shell CMD</li>
<li>Also select a startup directory of your choice, which the CMD prompt will start it as you startup folder.</li>
<li>Now, Click the OK button to open a windows command window!</li>
</ul>
</blockquote>
<blockquote>
<p>With this, you should be able to use multiple Windows command lines in a tabbed view, or along with your other sessions.</p>
</blockquote>
|
<p>I have a number of <strong>restful services</strong> within our system</p>
<ul>
<li>Some are <strong>within</strong> the kubernetes <strong>cluster</strong></li>
<li><strong>Others</strong> are on <strong>legacy</strong> infrastructure and are <strong>hosted on VMs</strong></li>
</ul>
<p>Many of our <strong>restful services</strong> make <strong>synchronous calls</strong> to each other (so not asynchronously using message queues)</p>
<p>We also have a number of UI's (fat clients or web apps) that make use of these services</p>
<p>We might define a simple k8s manifest file like this</p>
<ol>
<li>Pod</li>
<li>Service</li>
<li>Ingress</li>
</ol>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: "orderManager"
spec:
containers:
- name: "orderManager"
image: "gitlab-prem.com:5050/image-repo/orderManager:orderManager_1.10.22"
---
apiVersion: v1
kind: Service
metadata:
name: "orderManager-service"
spec:
type: NodePort
selector:
app: "orderManager"
ports:
- protocol: TCP
port: 50588
targetPort: 50588
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: orderManager-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /orders
pathType: Prefix
backend:
service:
name: "orderManager-service"
port:
number: 50588
</code></pre>
<p>I am really not sure what the best way is for <strong>restful services</strong> on the cluster to talk to each other.</p>
<ul>
<li>It seems like there is only one good route for callers outside the cluster, which is to use the URL built by the ingress rule</li>
<li>There are two options within the cluster</li>
</ul>
<p>This might illustrate it further with an example</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Caller</th>
<th style="text-align: center;">Receiver</th>
<th style="text-align: center;">Example Url</th>
<th style="text-align: right;"></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">UI</td>
<td style="text-align: center;">On Cluster</td>
<td style="text-align: center;">http://clusterip/orders</td>
<td style="text-align: right;">The UI would use the cluster ip and the ingress rule to reach the order manager</td>
</tr>
<tr>
<td style="text-align: left;">Service off cluster</td>
<td style="text-align: center;">On Cluster</td>
<td style="text-align: center;">http://clusterip/orders</td>
<td style="text-align: right;">Just like the UI</td>
</tr>
<tr>
<td style="text-align: left;">On Cluster</td>
<td style="text-align: center;">On Cluster</td>
<td style="text-align: center;">http://clusterip/orders</td>
<td style="text-align: right;">Could use ingress rule like the above approach</td>
</tr>
<tr>
<td style="text-align: left;">On Cluster</td>
<td style="text-align: center;">On Cluster</td>
<td style="text-align: center;">http://orderManager-service:50588/</td>
<td style="text-align: right;">Could use the service name and port directly</td>
</tr>
</tbody>
</table>
</div>
<p><em>I write <strong>cluster ip</strong> a few times above but in real life we put something on top so there is a friendly name like http://mycluster/orders</em></p>
<p>So when the <strong>caller</strong> and <strong>receiver</strong> are <strong>both on cluster</strong>, is it either</p>
<ul>
<li><strong>Use</strong> the <strong>ingress rule</strong> which is also used by services and apps outside the cluster</li>
<li><strong>Use</strong> the <strong>nodeport service name</strong> which is used in the ingress rule</li>
<li>Or perhaps something else!</li>
</ul>
<p><strong>One benefit</strong> of using <strong>nodeport service name</strong> is that you do not have to change your base URL.</p>
<ul>
<li>The <strong>ingress</strong> rule <strong>appends</strong> an <strong>extra element</strong> to the route (in the above case <em>orders</em>)</li>
<li>When I move a restful service from legacy infrastructure to the k8s cluster, it will increase the complexity</li>
</ul>
| <p>It depends on whether you want requests to be routed through your ingress controller or not.</p>
<p>Requests sent to the full URL configured in your Ingress resource will be processed by your ingress controller. The controller itself — NGINX in this case — will proxy the request to the Service. The request will then be routed to a Pod.</p>
<p>Sending the request directly to the Service’s URL simply skips your ingress controller. The request is directly routed to a Pod.</p>
<p>The trade offs between the two options depend on your setup.</p>
<p>Sending requests through your ingress controller will increase request latency and resource consumption. If your ingress controller does nothing other than route requests, I would recommend sending requests directly to the Service.</p>
<p>However, if you use your ingress controller for other purposes, like authentication, monitoring, logging, or tracing, then you may prefer that the controller process internal requests.</p>
<p>For example, on some of my clusters I use the NGINX ingress controller to measure request latency and track HTTP response statuses. I route requests between apps running in the same cluster through the ingress controller in order to have that information available. I pay the cost of increased latency and resource usage in order to have improved observability.</p>
<p>Whether the trade-offs are worth it in your case depends on you. If your ingress controller does nothing more than basic routing, then my recommendation is to skip it entirely. If it does more, then you need to weigh the pros and cons of routing requests through it.</p>
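<p>As an aside, when a request skips the ingress controller, the Service is addressed by its cluster DNS name, <code>&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local</code>. A minimal sketch of building that URL from the question's manifest (the <code>default</code> namespace is an assumption; substitute your own):</p>

```shell
# Cluster-internal URL for a Service: <service>.<namespace>.svc.cluster.local:<port>
service="orderManager-service"   # name from the question's Service manifest
namespace="default"              # assumption: adjust to the namespace you deploy into
port=50588
url="http://${service}.${namespace}.svc.cluster.local:${port}/"
echo "$url"
```

<p>A pod in the cluster could then <code>curl "$url"</code> directly, with no ingress controller in the request path.</p>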
|
<p>I'm using <strong>Pulumi-Typescript</strong> to deploy infrastructure in GCP.</p>
<p>I have 1 Google Kubernetes Engine resource already deployed in my project.
To use that existing resource in Pulumi operations, I found the documentation on pulumi import and explored how it actually works.
Following that document, I added the import configuration as follows:</p>
<pre><code>import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";
import * as infraJson from "../parameters/infrastructure-parameters.json"
export const gke = new gcp.container.Cluster(infraJson.clusterConfigs.CLUSTER_NAME,
{
name: infraJson.clusterConfigs.CLUSTER_NAME,
initialNodeCount: infraJson.clusterConfigs.CLUSTER_DEFAULT_NODEPOOL_NODE_COUNT,
removeDefaultNodePool: false,
location: infraJson.clusterConfigs.RESOURCE_LOCATION,
nodeConfig: {
preemptible: false,
machineType: infraJson.clusterConfigs.CLUSTER_MACHINE_TYPE,
oauthScopes: [
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/monitoring.write",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/trace.append"
]
}
}, {
import: "projects/myproject/locations/us-central1-c/clusters/test-cluster"
});
</code></pre>
<p>But when I run the command "<strong>pulumi up</strong>" it shows me a warning:</p>
<pre><code>gcp:container:Cluster (test-cluster):
warning: previously-imported resources that still specify an ID may not be replaced; please remove the `import` declaration from your program.
</code></pre>
<p>I searched everywhere for how to do this in GCP, but found nothing.</p>
<p>If anyone knows how to do it, please help me with this issue!
Thank you!</p>
| <p>The only place in the Pulumi code base containing that warning is here:</p>
<p><a href="https://github.com/pulumi/pulumi/blob/272c4643b20d40113689475fd2f9fe3134e6d431/pkg/resource/deploy/step_generator.go#L540-L551" rel="nofollow noreferrer">https://github.com/pulumi/pulumi/blob/272c4643b20d40113689475fd2f9fe3134e6d431/pkg/resource/deploy/step_generator.go#L540-L551</a></p>
<p>What I understand from this code is this: you specified the ID of an existing cluster to import, as part of Pulumi code describing the "to be" situation of your cluster. But your existing cluster configuration differs from the description of your cluster config in the code. This leads to Pulumi wanting to replace your cluster, but it is prevented by your <code>import</code> option, hence the warning.</p>
<p>Please run <code>pulumi preview --diff</code> and see where the config in your code differs from the actual state. Change your code to match the actual state. At that moment, <code>pulumi preview</code> should tell you there are no changes. Once you have that, you should be able to run <code>pulumi up</code> without a problem.</p>
|
<p>I am trying to deploy the jpetstore application (<a href="https://github.com/IBM-Cloud/jpetstore-kubernetes" rel="nofollow noreferrer">https://github.com/IBM-Cloud/jpetstore-kubernetes</a>) into kubernetes in EKS and i am having trouble interpreting this command as it is used for IBM cloud:</p>
<pre><code>ibmcloud ks cluster get --cluster CLUSTER_NAME
</code></pre>
<p>What is the EKS equivalent if i were to use <code>kubectl</code> ?</p>
<p>I have EKS setup using the terraform below:</p>
<pre><code>provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
}
</code></pre>
<p>Could this command be retrieving the endpoint?</p>
| <p><strong>Cluster Name:</strong></p>
<pre><code>$ kubectl config get-clusters
NAME
kubernetes
</code></pre>
<p><strong>Cluster end point info:</strong></p>
<pre><code>$ kubectl cluster-info
Kubernetes master is running at https://172.17.0.109:6443
dash-kubernetes-dashboard is running at https://172.17.0.109:6443/api/v1/namespaces/kube-system/services/https:dash-kubernetes-dashboard:https/proxy
KubeDNS is running at https://172.17.0.109:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
|
<p>I am taking a course on Udemy and I am new to the world of <strong>Kubernetes</strong>. I am trying to configure the <strong>ingress nginx controller</strong> in Kubernetes, but it returns <strong>404 Not Found</strong> when I send a request to the specified URL. I have been trying to fix this for 10 days; I've looked at similar questions but none of their answers work for me. I am also using <strong>Skaffold</strong> to build/deploy the image to Docker Hub automatically when I change something in the files.</p>
<p><a href="https://i.stack.imgur.com/U0WyJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U0WyJ.png" alt="404 not found screenshot" /></a></p>
<p>My <strong>express app</strong> server:</p>
<pre><code>app.get('/api/users/currentuser', (req, res) => {
res.send('Hi there');
});
app.listen(3000, () => {
console.log('[Auth] - Listening on port 3000');
});
</code></pre>
<p><strong>ingress-srv.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: ticketing.com
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
</code></pre>
<p><strong>auth-depl.yaml</strong> (Auth deployment & srv)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: myusername/auth:latest
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
type: ClusterIP
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p><strong>skaffold.yaml</strong> file:</p>
<pre><code>apiVersion: skaffold/v2beta25
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: username/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
</code></pre>
<p><strong>Dockerfile</strong>:</p>
<pre><code>FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
<p>I also executed command from <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">NGINX Ingress Controller docs</a>:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>I also changed the hosts file in the system:
<code>127.0.0.1 ticketing.com</code></p>
<p>Logs:</p>
<p><strong>kubectl get pods</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
auth-depl-5f89899d9f-wtc94 1/1 Running 0 6h33m
</code></pre>
<p><strong>kubectl get svc</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-srv ClusterIP 10.96.23.71 <none> 3000/TCP 23h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
</code></pre>
<p><strong>kubectl get pods --namespace=ingress-nginx</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-7fm56 0/1 Completed 0 23h
ingress-nginx-admission-patch-5vflr 0/1 Completed 1 23h
ingress-nginx-controller-5c8d66c76d-89zhp 1/1 Running 0 23h
</code></pre>
<p><strong>kubectl get ing</strong></p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-srv <none> ticketing.com localhost 80 23h
</code></pre>
<p><strong>kubectl describe ing ingress-srv</strong></p>
<pre><code>Name: ingress-srv
Namespace: default
Address: localhost
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
ticketing.com
/api/users/?(.*) auth-srv:3000 (10.1.0.10:3000)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 22m (x18 over 23h) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>Could there be a problem with the Windows IIS web server, since I previously configured something for another project? In the screenshot above I see:</p>
<pre><code>Requested URL http://ticketing.com:80/api/users/currentuser
Physical Path C:\inetpub\wwwroot\api\users\currentuser
</code></pre>
<p>Also, the screenshot shows port <strong>:80</strong> in the requested URL, even though my server listens on port 3000. And when I request over <strong>https</strong> it returns:</p>
<pre><code>502 Bad Gateway
nginx
</code></pre>
<p>The path C:\inetpub\wwwroot is also strange to me.</p>
<p>Any ideas would help me a lot with continuing the course.</p>
<p>After a few days of research I finally solved the problem. It was caused by the IIS web server, which I had enabled when I was working on an ASP.NET Core project. I uninstalled it and the problem was solved.</p>
<p>How to uninstall IIS from Windows 10:</p>
<ul>
<li>Go to Control Panel > Programs and Features</li>
<li>Click Turn Windows features on or off</li>
<li>Scroll down to Internet Information Services</li>
<li>Click on the square next to Internet Information Services so it becomes empty</li>
<li>Click OK and restart the PC (required).</li>
</ul>
|
<p>I want to deploy a microservices application with multiple microservices and two operators for external applications.</p>
<p>The microservices are consolidated under the helm chart. The external stateful applications like Cassandra DB and Kafka have an operator.</p>
<p>How do I combine and manage the lifecycle of the entire stack? What's the recommended way to do this deployment?</p>
<p>Thanks
Guru</p>
| <p>You can use a helm-based operator which can be scaffolded out using an existing helm chart. <a href="https://sdk.operatorframework.io/docs/building-operators/helm/" rel="nofollow noreferrer">https://sdk.operatorframework.io/docs/building-operators/helm/</a></p>
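<p>In a helm-based operator, the mapping from a custom resource to the chart it reconciles lives in a <code>watches.yaml</code> file. A hedged sketch (the group, kind, and chart path below are illustrative placeholders, not names from the source):</p>

```yaml
# watches.yaml: tells the helm operator which custom resource maps to which chart
# (group/version/kind and chart path are illustrative placeholders)
- group: apps.example.com
  version: v1alpha1
  kind: MicroservicesApp
  chart: helm-charts/microservices-app
```

<p>The Cassandra and Kafka operators keep managing their own resources; the lifecycle of the whole stack is then driven by creating and updating the custom resources.</p>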
|
<p>I'm struggling to expose a service in an AWS cluster to outside and access it via a browser. Since my <a href="https://stackoverflow.com/q/69801758/3995261">previous question</a> haven't drawn any answers, I decided to simplify the issue in several aspects.</p>
<p>First, I've created a deployment which should work without any configuration. Based on <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-on-digitalocean-kubernetes-using-helm" rel="nofollow noreferrer">this article</a>, I did</p>
<ol>
<li><p><code>kubectl create namespace tests</code></p>
</li>
<li><p>created file <code>probe-service.yaml</code> based on <code>paulbouwer/hello-kubernetes:1.8</code> and deployed it <code>kubectl create -f probe-service.yaml -n tests</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-kubernetes-first
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-kubernetes-first
spec:
replicas: 3
selector:
matchLabels:
app: hello-kubernetes-first
template:
metadata:
labels:
app: hello-kubernetes-first
spec:
containers:
- name: hello-kubernetes
image: paulbouwer/hello-kubernetes:1.8
ports:
- containerPort: 8080
env:
- name: MESSAGE
value: Hello from the first deployment!
</code></pre>
</li>
<li><p>created <code>ingress.yaml</code> and applied it (<code>kubectl apply -f .\probes\ingress.yaml -n tests</code>)</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-kubernetes-ingress
spec:
rules:
- host: test.projectname.org
http:
paths:
- pathType: Prefix
path: "/test"
backend:
service:
name: hello-kubernetes-first
port:
number: 80
- host: test2.projectname.org
http:
paths:
- pathType: Prefix
path: "/test2"
backend:
service:
name: hello-kubernetes-first
port:
number: 80
ingressClassName: nginx
</code></pre>
</li>
</ol>
<p>Second, I can see that DNS actually point to the cluster and ingress rules are applied:</p>
<ul>
<li>if I open <code>http://test.projectname.org/test</code> or any irrelevant path (<code>http://test.projectname.org/test3</code>), I'm shown <code>NET::ERR_CERT_AUTHORITY_INVALID</code>, but</li>
<li>if I use "open anyway" in browser, irrelevant paths give <code>ERR_TOO_MANY_REDIRECTS</code> while <code>http://test.projectname.org/test</code> gives <code>Cannot GET /test</code></li>
</ul>
<p>Now, TLS issues aside (those deserve a separate question), why do I get <code>Cannot GET /test</code>? It looks like the ingress controller (ingress-nginx) got the rules (otherwise it wouldn't discriminate between paths; that's why I don't show DNS settings, although they are described in the previous question), but instead of showing the simple hello-kubernetes page at <code>/test</code> it returns this simple 404 message. <strong>Why is that?</strong> What could possibly go wrong? <strong>How to debug this?</strong></p>
<p>Some debug info:</p>
<ul>
<li><p><code>kubectl version --short</code> tells Kubernetes Client Version is v1.21.5 and Server Version is v1.20.7-eks-d88609</p>
</li>
<li><p><code>kubectl get ingress -n tests</code> shows that <code>hello-kubernetes-ingress</code> exists indeed, with <code>nginx</code> class, 2 expected hosts, address equal to that shown for load balancer in AWS console</p>
</li>
<li><p><code>kubectl get all -n tests</code> shows</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/hello-kubernetes-first-6f77d8ff99-gjw5d 1/1 Running 0 5h4m
pod/hello-kubernetes-first-6f77d8ff99-ptwsn 1/1 Running 0 5h4m
pod/hello-kubernetes-first-6f77d8ff99-x8w87 1/1 Running 0 5h4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-kubernetes-first ClusterIP 10.100.18.189 <none> 80/TCP 5h4m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hello-kubernetes-first 3/3 3 3 5h4m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hello-kubernetes-first-6f77d8ff99 3 3 3 5h4m
</code></pre>
</li>
<li><p><code>ingress-nginx</code> was installed before me via the following chart:</p>
<pre><code>apiVersion: v2
name: nginx
description: A Helm chart for Kubernetes
type: application
version: 4.0.6
appVersion: "1.0.4"
dependencies:
- name: ingress-nginx
version: 4.0.6
repository: https://kubernetes.github.io/ingress-nginx
</code></pre>
<p>and the values overrides applied with the chart differ from the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml" rel="nofollow noreferrer">original ones</a> (well, those got updated since the installation) mostly in <code>extraArgs</code>: <code>default-ssl-certificate: "nginx-ingress/dragon-family-com"</code> is uncommented</p>
</li>
</ul>
<p>PS To answer Andrew, I indeed tried to set up HTTPS but it seemingly didn't help, so I haven't included what I tried in the initial question. Yet, here's what I did:</p>
<ol>
<li><p>installed cert-manager, currently without a custom chart: <code>kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml</code></p>
</li>
<li><p>based on <a href="https://cert-manager.io/docs/configuration/acme/dns01/route53/#creating-an-issuer-or-clusterissuer" rel="nofollow noreferrer">cert-manager's tutorial</a> and <a href="https://stackoverflow.com/q/58553510/3995261">SO question</a> created a ClusterIssuer with the following config:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-backoffice
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
# use https://acme-v02.api.letsencrypt.org/directory after everything is fixed and works
privateKeySecretRef: # this secret will be created in the namespace of cert-manager
name: letsencrypt-backoffice-private-key
# email: <will be used for urgent alerts about expiration etc>
solvers:
# TODO: add for each domain/second-level domain/*.projectname.org
- selector:
dnsZones:
- test.projectname.org
- test2.projectname.org
# haven't made it to work yet, so switched to the simpler to configure http01 challenge
# dns01:
# route53:
# region: ... # that of load balancer (but we also have ...)
# accessKeyID: <of IAM user with access to Route53>
# secretAccessKeySecretRef: # created that
# name: route53-credentials-secret
# key: secret-access-key
# role: arn:aws:iam::645730347045:role/cert-manager
http01:
ingress:
class: nginx
</code></pre>
<p>and applied it via <code>kubectl apply -f issuer.yaml</code></p>
</li>
<li><p>created 2 certificates in the same file and applied it again:</p>
<pre><code>---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: letsencrypt-certificate
spec:
secretName: tls-secret
issuerRef:
kind: ClusterIssuer
name: letsencrypt-backoffice
commonName: test.projectname.org
dnsNames:
- test.projectname.org
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: letsencrypt-certificate-2
spec:
secretName: tls-secret-2
issuerRef:
kind: ClusterIssuer
name: letsencrypt-backoffice
commonName: test2.projectname.org
dnsNames:
- test2.projectname.org
</code></pre>
</li>
<li><p>made sure that the certificates are issued correctly (skipping the pain part, the result is: <code>kubectl get certificates</code> shows that both certificates have <code>READY</code> = <code>true</code> and both tls secrets are created)</p>
</li>
<li><p>figured that my ingress is in another namespace and tls secrets in the ingress spec <a href="https://github.com/kubernetes/ingress-nginx/issues/2170" rel="nofollow noreferrer">can only</a> be referenced in the same namespace (haven't tried the wildcard certificate and <code>--default-ssl-certificate</code> option yet), so I copied each one to the <code>tests</code> namespace:</p>
<ol>
<li>opened existing secret, like <code>kubectl edit secret tls-secret-2</code>, copied data and annotations</li>
<li>created an empty (Opaque) secret in tests: <code>kubectl create secret generic tls-secret-2-copy -n tests</code></li>
<li>opened it (<code>kubectl edit secret tls-secret-2-copy -n tests</code>) and inserted data an annotations</li>
</ol>
</li>
<li><p>in ingress <code>spec</code>, added the tls bit:</p>
<pre><code>tls:
- hosts:
- test.projectname.org
secretName: tls-secret-copy
- hosts:
- test2.projectname.org
secretName: tls-secret-2-copy
</code></pre>
</li>
<li><p>I hoped that this would help, but actually it made no difference (I get <code>ERR_TOO_MANY_REDIRECTS</code> for irrelevant paths, a redirect from http to https, <code>NET::ERR_CERT_AUTHORITY_INVALID</code> at https, and <code>Cannot GET /test</code> if I insist on getting to the page)</p>
</li>
</ol>
| <p>Since you've used your own answer to complement the question, I'll kind of answer all the things you asked, while providing a divide and conquer strategy to troubleshooting kubernetes networking.</p>
<p>At the end I'll give you some nginx and IP answers.</p>
<p>This is correct:</p>
<pre><code>- host: test3.projectname.org
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: hello-kubernetes-first
port:
number: 80
</code></pre>
<hr />
<p>Breaking down troubleshooting with Ingress</p>
<ol>
<li>DNS</li>
<li>Ingress</li>
<li>Service</li>
<li>Pod</li>
<li>Certificate</li>
</ol>
<hr />
<p>1.DNS</p>
<p>You can use the command <code>dig</code> to query the DNS:</p>
<p><code>dig google.com</code>
<a href="https://i.stack.imgur.com/hBwrC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hBwrC.png" alt="dig result" /></a></p>
<ol start="2">
<li>Ingress</li>
</ol>
<p>the ingress controller doesn't look for the IP, it just looks for the headers</p>
<p>you can force a host using any tool that lets you change the headers, like curl</p>
<p><code>curl --header 'Host: test3.projectname.com' http://123.123.123.123</code> (your public IP)</p>
<ol start="3">
<li>Service</li>
</ol>
<p>you can be sure that your service is working by creating an ubuntu/centos pod, using <code>kubectl exec -it podname -- bash</code>, and trying to curl your service from within the cluster</p>
<ol start="4">
<li>Pod</li>
</ol>
<p>You're getting this</p>
<pre><code>192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
</code></pre>
<p>This part <code>GET /test2</code> means that the request got the address from the DNS, went all the way from the internet, found your clusters, found your ingress controller, got through the service and reached your pod. Congratz! Your ingress is working!</p>
<p>But why is it returning 404?</p>
<p>The path that was passed to the service and from the service to the pod is /test2</p>
<p>Do you have a file called test2 that nginx can serve? Do you have an upstream config in nginx that has a test2 prefix?</p>
<p>That's why, you're getting a 404 from nginx, not from the ingress controller.</p>
<p>Those IPs are internal, remember, the internet traffic ended at the cluster border, now you're in an internal network. Here's a rough sketch of what's happening</p>
<p>Let's say that you're accessing it from your laptop. Your laptop has the IP 192.168.123.123, but your home has the address 7.8.9.1, so when your request hits the cluster, the cluster sees 7.8.9.1 requesting test3.projectname.org.</p>
<p>The cluster looks for the ingress controller, which finds a suitable configuration and passes the request down to the service, which passes the request down to the pod.</p>
<p>So,</p>
<pre><code>your router can see your private IP (192.168.123.123)
Your cluster(ingress) can see your router's IP (7.8.9.1)
Your service can see the ingress's IP (192.168.?.?)
Your pod can see the service's IP (192.168.14.57)
</code></pre>
<p>It's a game of pass-around.
If you want to see the public IP in your nginx logs, you need to customize nginx to log the X-Real-IP header, which is usually where load balancers/ingresses/ambassadors/proxies put the actual requester's public IP.</p>
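<p>If you go that route, a sketch of what the nginx config inside the pod could look like — the format name and the exact fields are illustrative, not a drop-in config:</p>
<pre><code># Hypothetical log_format that records the X-Real-IP header set by the proxy
log_format realip '$http_x_real_ip - [$time_local] "$request" $status';

server {
    listen 80;
    access_log /var/log/nginx/access.log realip;
}
</code></pre>
<p>nginx exposes any request header via the <code>$http_*</code> variables, so <code>$http_x_real_ip</code> picks up the header without extra modules.</p>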
|
<p>I have the following situation (this is my motivation to ask the question, not the question itself):</p>
<ul>
<li>I've got a web application that accepts uploads from users.
<ul>
<li>The users access the application through an Ingress,</li>
<li>then a Service,</li>
<li>Then a Deployment with two Pods.</li>
<li>Application contained in each Pod.</li>
</ul>
</li>
<li>Sometimes the upload fails:
<ul>
<li>I can see in the logs from the Pod that the upload went all right.</li>
<li>I can even see the data uploaded by the user.</li>
<li>There are nothing but normal logs in the Pod.</li>
<li>But the ingress reports a HTTP 500 error.</li>
<li>And the users sees a HTTP 500 error - connection reset by peer.</li>
</ul>
</li>
</ul>
<p>If the Pod seems all right, but ingress complains, then I should check the middle man, the Service. Then I realized that there is no easy way to obtain logs from the service.</p>
<p>So this is my question:</p>
<ul>
<li>How can I read logs from the Service object ? I mean activity logs, not the deployment events.</li>
<li>Do they exist?</li>
</ul>
| <p>The only resources in K8s that produce logs are Pods! Pods lead to the creation of containers, which for their part lead to the creation of Linux processes on the K8s nodes. Those processes write logs that are "reaped" by the container runtime and made available to K8s, e.g. when you run <code>kubectl logs</code>.</p>
<p><strong>Consequently, only K8s resources that are backed by Pods produce logs, e.g. Deployment, Daemonsets, StatefulSets and Jobs.</strong></p>
<p>Services are merely logical resources that configure how network traffic can be routed to Pods. So, in a way they have underlying Pods, but they do not produce any additional log output. The only tangible outcome of a Service resource is a set of iptables rules on the K8s nodes that define how traffic has to be routed from the Service IP to the IPs of the underlying Pods.</p>
<p>To resolve Ingress related problems, you might get further insights from the logs of your <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">ingress controller</a> which is typically deployed as a deployment and therefore backed by Pods.</p>
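<p>For example, with a typical ingress-nginx installation (the namespace and deployment name vary with how the controller was installed — treat the ones below as assumptions):</p>
<pre><code># Locate the controller pods, then tail their logs
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx logs deployment/ingress-nginx-controller --tail=100 -f
</code></pre>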
|
<p>I'm trying to create a validating webhook</p>
<pre class="lang-sh prettyprint-override"><code>kubebuilder create webhook batch \
--version v1 \
--kind Webhook \
--defaulting \
--programmatic-validation
</code></pre>
<p>but it always gives me an error.</p>
<pre><code>failed to create webhook: unable to inject the resource to.
"base.go.kubebuilder.io/v3": kubebuilder create webhook requires.
a previously created API
</code></pre>
<p>and I'm not sure what extra I need to add to the kubebuilder command.
Any help is appreciated.</p>
| <p>I just ran into the same problem. It seems kubebuilder looks at a file called <code>PROJECT</code> at the root of your project to validate whether the API has been created. So make sure you create the API before you create the webhook. I'm having a hard time explaining this, but I think some examples will make it clear</p>
<p>so at the root of your project, if you run
<code>$ cat PROJECT</code>
it will look something like this</p>
<pre><code>domain: example.org
layout:
- go.kubebuilder.io/v3
projectName: khello
repo: example.com/khello
version: "3"
</code></pre>
<p>now if we run your command</p>
<pre><code>kubebuilder create webhook batch \
--version v1 \
--kind Webhook \
--defaulting \
--programmatic-validation
</code></pre>
<p>it will complain and say something like</p>
<pre><code>....
2021/11/17 13:15:03 failed to create webhook: unable to inject the resource to "base.go.kubebuilder.io/v3": kubebuilder create webhook requires a previously created API
</code></pre>
<p>ok cool, now we're at the same state, now what?</p>
<p>now, if you haven't created the API yet, make one with</p>
<pre><code>kubebuilder create api --version v1 --kind Webhook
</code></pre>
<p>now if you look at the file named <code>PROJECT</code> at the root of your project dir, it will say something like</p>
<pre><code>domain: example.org
layout:
- go.kubebuilder.io/v3
projectName: khello
repo: example.com/khello
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: example.org
kind: Webhook
path: example.com/khello/api/v1
version: v1
version: "3"
</code></pre>
<p>now that we have created the api, we can run your command</p>
<pre><code>kubebuilder create webhook batch \
--version v1 \
--kind Webhook \
--defaulting \
--programmatic-validation
</code></pre>
<p>voila, it works now</p>
<p>and the <code>PROJECT</code> file will become something like</p>
<pre><code>domain: example.org
layout:
- go.kubebuilder.io/v3
projectName: khello
repo: example.com/khello
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: example.org
kind: Webhook
path: example.com/khello/api/v1
version: v1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
version: "3"
</code></pre>
<p>with that being said, I'm not sure how kubebuilder works under the hood, but from what I understand it checks whether something has been created in that <code>PROJECT</code> file whenever a <code>create</code> command runs. So my suggestion would be: check your <code>PROJECT</code> file, make sure the API is created, and if it is, make sure you pass the right parameters to your <code>kubebuilder create webhook</code> command to match the contents of that <code>PROJECT</code> file.</p>
|
<p>I want to create a cluster which is very similar to this one where they are using azure.
<a href="https://medium.com/devops-dudes/how-to-setup-completely-private-azure-kubernetes-service-aks-clusters-with-azure-private-links-b800a5a6776f" rel="nofollow noreferrer">Link to the tutorial</a></p>
<p>All the tutorials I have gone through for AWS EKS block access in both directions. But I need a bastion host and don't want the application to be inaccessible via www.</p>
<p>Is there a possible solution for this problem.</p>
| <p>The AKS tutorial you posted aims to create a completely private Azure Kubernetes Service (AKS) cluster.</p>
<p>Anyway, in either case you can use <a href="https://eksctl.io/introduction/" rel="nofollow noreferrer">eksctl</a> to easily create one. Here's a quick example where public access to the control plane is disabled and the node group is allowed to use NAT for Internet access. You can replace <> with your own values:</p>
<pre><code>cat << EOF | eksctl create cluster -f -
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: <my cluster name>
region: <my region name>
vpc:
clusterEndpoints:
privateAccess: true
publicAccess: false
nodeGroups:
- name: <my self-managed node group name>
instanceType: <t3a.medium>
desiredCapacity: 1
EOF
</code></pre>
|
<p>I am researching the K8s architecture, focusing on the flow of pod spinning in the system.</p>
<p>I was wondering <em><strong>how</strong></em> (that is, who is initiating the communication) and <em><strong>when</strong></em> the different components of the control plane communicate with each other.</p>
<p>I have followed the excellent talk of <strong>Jérôme Petazzoni</strong> at LISA 2019 (<a href="https://www.youtube.com/watch?v=3KtEAa7_duA" rel="nofollow noreferrer">here</a>) to understand the architecture of the control plane, and read the concepts on <a href="https://kubernetes.io/docs/concepts/architecture" rel="nofollow noreferrer">kubernetes.io</a>.</p>
<p>However, I still haven't found the answers to the following questions:</p>
<ol>
<li>Who initiates the resource check of each node, in the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-capacity" rel="nofollow noreferrer">documentation</a> it is written:</li>
</ol>
<blockquote>
<p>Node objects track information about the Node's resource capacity: for example, the amount of memory available and the number of CPUs. Nodes that self-register report their capacity during registration. If you manually add a Node, then you need to set the node's capacity information when you add it.</p>
</blockquote>
<p>However, there is no specification on when does it update at <code>etcd</code>, and who initiates the regular update (other than the <em>heartbeat</em> that updates the status of the node).</p>
<p>Also, when does the cache of the scheduler update?</p>
<ol start="2">
<li>Who informs the different components about new pending requests? That is, how is the <code>controller-manager</code>/<code>scheduler</code> <em>"knows"</em> when it suppose to do its job? Each request is written as a manifest in <code>etcd</code> by the <code>kube-api-server</code>, but these components aren't connected to <code>etcd</code> directly.</li>
</ol>
<p>Does that mean the API-Server needs to inform each component about each new request?</p>
<p>I have many possible answers, but not a concrete confirmation of the real process in current K8s architecture.</p>
| <p>Answering your questions:</p>
<p><strong>Who initiates the resource check of each node?</strong></p>
<p>The component responsible for that is the <em>"Node Status Manager"</em>, a sub-control loop of the <em>"SyncLoop"</em>, which is a <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet agent</a> component.</p>
<p>The more detailed answer is in this article: <a href="https://aws.plainenglish.io/kubernetes-deep-dive-kubelet-e4527ed56f4c" rel="nofollow noreferrer">Kubernetes Deep Dive: Kubelet</a>:</p>
<blockquote>
<p>As you can see, the core of <code>kubelet</code>’s work is a control loop, namely: <strong>SyncLoop</strong>.</p>
</blockquote>
<blockquote>
<p>For example, the <strong>Node Status Manager</strong> is responsible for responding to changes in the status of the <code>Node</code>, and then collecting the status of the <code>Node</code> and reporting it to the <code>APIServer</code> through Heartbeat.</p>
</blockquote>
<p>There is also a good diagram:</p>
<p><img src="https://pbs.twimg.com/media/DncWSekUYAAlOMY?format=jpg&name=large" alt="" /></p>
<p>Answering second part:</p>
<p><strong>Who informs the different components about new pending requests? That is, how is the <code>controller-manager</code>/<code>scheduler</code> <em>"knows"</em> when it suppose to do its job?</strong></p>
<p>The components responsible for that are <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">Kubernetes' controllers</a> and <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="nofollow noreferrer">Scheduler</a> itself. Good examples and explanations are in this article: <a href="https://github.com/jamiehannaford/what-happens-when-k8s" rel="nofollow noreferrer">What happens when ... Kubernetes edition!</a>,</p>
<p>Basically, after Kubernetes verifies the request (authentication, authorization, admission control), it is saved to the datastore (<code>etcd</code>) and then taken by <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#initializers" rel="nofollow noreferrer">initializers</a>, which may perform some additional logic on the resource (not always); after that it's visible via the kube-apiserver. The main part that may interest you is <a href="https://github.com/jamiehannaford/what-happens-when-k8s#control-loops" rel="nofollow noreferrer">Control loops</a>. They are constantly checking whether a new request exists in the datastore, and if so, they proceed. Example - when you are deploying a new deployment:</p>
<ul>
<li><a href="https://github.com/jamiehannaford/what-happens-when-k8s#deployments-controller" rel="nofollow noreferrer">Deployments controller</a> is taking a request - it will realise that there is no ReplicaSet record associated, it will roll-out new one</li>
<li><a href="https://github.com/jamiehannaford/what-happens-when-k8s#replicasets-controller" rel="nofollow noreferrer">ReplicaSets controller</a>, like deployments controller, it will take a request and roll-out new pods</li>
<li>Pods are ready but they are in pending state - now <a href="https://github.com/jamiehannaford/what-happens-when-k8s#scheduler" rel="nofollow noreferrer">Scheduler</a> (which is like previous controllers, listening constantly for new requests from the data store - it's de facto answer for your question) will find a suitable node and schedule a pod to a node. Now, kubelet agent on the node will create a new pod.</li>
</ul>
<p>For more details I'd strongly suggest reading the earlier mentioned article - <a href="https://github.com/jamiehannaford/what-happens-when-k8s" rel="nofollow noreferrer">What happens when ... Kubernetes edition!</a>.</p>
<p><strong>Does that mean the API-Server needs to inform each component about each new request?</strong></p>
<p>It works in a different way - the kube-apiserver makes new requests visible, and the controllers, which run as loops, detect those new requests and start processing them.</p>
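<p>You can observe the same watch mechanism the controllers rely on from the command line — a sketch:</p>
<pre><code># Stream changes as they happen instead of polling, like a controller does
kubectl get pods --watch

# Or hit the kube-apiserver's watch endpoint directly
kubectl get --raw '/api/v1/namespaces/default/pods?watch=true'
</code></pre>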
|
<p>I have a script that creates users with specific access.</p>
<p>the script runs the following commands:</p>
<pre><code>kubectl create serviceaccount username
kubectl create clusterrole readonlyuser --non-resource-url=* --verb=get,list,watch --resource=pods,pods/attach,pods/exec,pods/port-forward,pods/proxy,services/proxy
kubectl create clusterrolebinding username --serviceaccount=default:username --clusterrole=readonlyuser
kubectl create rolebinding username --serviceaccount=default:username --clusterrole=readonlyuser --namespace=namespace
</code></pre>
<p>When I try to exec into a pod in that specific namespace I get the following:</p>
<pre><code>Error from server (Forbidden): pods "<podname>" is forbidden: User "system:serviceaccount:default:username" cannot create resource "pods/exec" in API group "" in the namespace "namespace"
</code></pre>
<p>Any idea how to fix this?</p>
| <p>The error message state the following:</p>
<pre><code> cannot create resource "pods/exec" in API group "" in the namespace "namespace"
</code></pre>
<p>So created another clusterrole and rolebinding as follow:</p>
<pre><code>kubectl create clusterrole exec-readonlyuser --verb create --resource pods/exec
kubectl create rolebinding exec-readonlyuser --clusterrole=exec-readonlyuser --serviceaccount=default:username -n namespace
</code></pre>
<p>now tried to exec and it is working.</p>
<pre><code> kubectl exec -it nginx --as system:serviceaccount:namespace:username -- sh
#
#exit
</code></pre>
<p>If you are using <code>yaml</code> then this could be added as another rule in same clusterrole.</p>
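<p>For completeness, a sketch of the same rules expressed as a single ClusterRole manifest — the first rule mirrors the read-only role from the question, the second is the extra rule that fixes the exec error:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: readonlyuser
rules:
- apiGroups: [""]
  resources: ["pods", "pods/attach", "pods/exec", "pods/port-forward", "pods/proxy", "services/proxy"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]            # extra rule: allow opening exec sessions
  resources: ["pods/exec"]
  verbs: ["create"]
</code></pre>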
|
<p>I am currently studying distributed systems and have seen that many businesses relies on side-car proxy pattern for their services. For example, I know a company that uses an nginx proxy for authentication of their services and roles and permissions instead of including this business logic within their services.</p>
<p>Another one makes use of a cloud-sql-proxy on GKE to use the Cloud SQL offering that comes on google cloud. So on top of deploying their services in a container which runs in a pod, there is a proxy just for communicating with the database.</p>
<p>There is also istio which is a service mesh solution which can be deployed as a side-car proxy in a pod.</p>
<p><strong>I am pretty sure there are other commonly know use-cases where this pattern is used but at some point how much is too much side-car proxy? How heavy is it on the pod running it and what are the complexity that comes with using 2, 3, or even 4 side car proxys on top of your service container?</strong></p>
| <p>I recommend you define what you really need and continue your research based on that, since this topic is too broad and doesn't have one correct answer.</p>
<p>Due to this, I decided to post a community wiki answer. Feel free to expand it.</p>
<p>There can be various reasons for running some containers in one pod. According to <a href="https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods" rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p>A Pod can encapsulate an application composed of multiple co-located
containers that are tightly coupled and need to share resources. These
co-located containers form a single cohesive unit of service—for
example, one container serving data stored in a shared volume to the
public, while a separate <em>sidecar</em> container refreshes or updates
those files. The Pod wraps these containers, storage resources, and an
ephemeral network identity together as a single unit.</p>
</blockquote>
<p>In its simplest form, a sidecar container can be used to add functionality to a primary application that might otherwise be difficult to improve.</p>
<p><strong>Advantages of using sidecar containers</strong></p>
<ul>
<li>sidecar container is independent from its primary application in terms of runtime environment and programming language;</li>
<li>no significant latency during communication between primary application and sidecar container;</li>
<li>the sidecar pattern entails designing modular containers. The modular container can be plugged in more than one place with minimal modifications, since you don't need to write configuration code inside each application.</li>
</ul>
<p><strong>Notes regarding usage of sidecar containers</strong></p>
<ul>
<li><p>consider making a small sidecar container that doesn't consume many resources. The strong point of sidecar containers lies in their ability to be small and pluggable. If the sidecar container's logic gets more complex and/or becomes more tightly coupled with the main application container, it may be better integrated into the main application’s code instead.</p>
</li>
<li><p>to ensure that any number of sidecar containers can work successfully with the main application, it's necessary to sum up all the resource requests/limits when defining resource limits for the pod, because all the containers run in parallel. The whole functionality works only if both types of containers are running successfully, and most of the time these sidecar containers are simple and small, consuming fewer resources than the main container.</p>
</li>
</ul>
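<p>To illustrate the resource point, a minimal sketch of a pod with a main container plus one sidecar — all names and numbers are placeholders; the scheduler will consider the sum of the requests (300m CPU, 320Mi memory here):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: main-app
    image: example.com/my-app:1.0        # placeholder image
    resources:
      requests: {cpu: "250m", memory: "256Mi"}
      limits: {cpu: "500m", memory: "512Mi"}
  - name: log-shipper                    # small, pluggable sidecar
    image: example.com/log-shipper:1.0   # placeholder image
    resources:
      requests: {cpu: "50m", memory: "64Mi"}
      limits: {cpu: "100m", memory: "128Mi"}
</code></pre>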
|
<p>To make a persistent namespace with <code>unshare</code> you use the syntax:</p>
<pre><code>touch /root/mynetns1
unshare --net=/root/mynetns1
</code></pre>
<p>To make a persistent namespace with <code>ip</code> you use the syntax:</p>
<pre><code>ip netns add mynetns2
</code></pre>
<p>The <code>ip</code> command cannot list or access the namespace made by <code>unshare</code>, and vice versa.</p>
<p>The <code>ip</code> command is much better at customizing a network namespace, but the <code>unshare</code> command allows you to launch a program with multiple different namespaces. Ideally I would create a namespace with <code>ip</code>, but launch the command with other namespaces with <code>unshare</code>. To do this <code>unshare</code> would need to be able to reference the namespace created by <code>ip</code>, how can this be done?</p>
<p>Right now I am using <code>ip netns exec mynetns1 unshare [other namespaces] ...</code> as a hack, but id prefer launch more cleanly with <code>unshare</code>.</p>
<p>I'd also like to be able to have my program possible interact with network namespaces they individually create, so information regarding how to list and access their network namespaces would also be helpful.</p>
| <p>Both <code>unshare --net=/somefile</code> and <code>ip netns add somename</code> create a new network namespace and bind-mount it to somewhere. The only difference is that <code>unshare</code> bind-mounts it to whatever file you specify, and <code>ip</code> bind-mounts it to a new file in <code>/var/run/netns/</code>. In other words, if you used <code>/var/run/netns/mynetns1</code> in place of <code>/root/mynetns1</code>, then you could later interact with it with <code>ip</code>.</p>
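<p>Concretely, a sketch of that interop (requires root; using <code>nsenter</code> to combine the existing netns with other freshly unshared namespace types is my assumption here, since <code>unshare --net=FILE</code> always creates a new namespace rather than entering an existing one):</p>
<pre><code># Create a persistent netns with unshare, but bind-mount it where ip looks
mkdir -p /var/run/netns
touch /var/run/netns/mynetns1
unshare --net=/var/run/netns/mynetns1 true

# ip now sees it and can configure it
ip netns list                                 # lists mynetns1
ip netns exec mynetns1 ip link set lo up

# Enter the existing netns while unsharing mount and PID namespaces
nsenter --net=/var/run/netns/mynetns1 unshare --mount --pid --fork bash
</code></pre>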
|
<p>In minikube, API server fails to connect to my audit log webhook, I see the following error in the api-server logs</p>
<pre><code>E0308 08:30:26.457841 1 metrics.go:109] Error in audit plugin 'webhook' affecting 400 audit events: Post "http://ca-audit.armo-system:8888/": dial tcp: lookup ca-audit.armo-system on 10.42.4.254:53: no such host
</code></pre>
<p>I dont know why the api-server is connecting to <code>10.42.4.254:53</code> since my service ip is different:</p>
<pre><code>$ kubectl -n armo-system get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ca-audit ClusterIP 10.109.132.114 <none> 8888/TCP 8m33s
</code></pre>
<p>I do not understand what I'm doing wrong, any suggestions?</p>
<hr />
<p>This is how I configured my audit policy, webhook and minikube-</p>
<p>I pre-configured my minikube as following:</p>
<pre><code># Create the webhook config and audit policy
export C_CONTEXT=$(kubectl config current-context)
export C_NAME=$(kubectl config get-contexts ${C_CONTEXT} --no-headers | awk '{print $2}')
export C_CLUSTER=$(kubectl config get-contexts ${C_CONTEXT} --no-headers | awk '{print $3}')
export C_USER=$(kubectl config get-contexts ${C_CONTEXT} --no-headers | awk '{print $4}')
export ARMO_NAMESPACE="armo-system"
export ARMO_AUDIT_SERVICE="ca-audit"
export ARMO_AUDIT_PORT=8888
mkdir -p ~/.minikube/files/etc/ssl/certs
cat <<EOF > ~/.minikube/files/etc/ssl/certs/audit-webhook.yaml
{
"apiVersion": "v1",
"clusters": [
{
"cluster": {
"server": "http://${ARMO_AUDIT_SERVICE}.${ARMO_NAMESPACE}:${ARMO_AUDIT_PORT}/"
},
"name": "${C_NAME}"
}
],
"contexts": [
{
"context": {
"cluster": "${C_CLUSTER}",
"user": "${C_USER}"
},
"name": "${C_NAME}"
}
],
"current-context": "${C_CONTEXT}",
"kind": "Config",
"preferences": {},
"users": []
}
EOF
cat <<EOF > ~/.minikube/files/etc/ssl/certs/audit-policy.yaml
{
"apiVersion": "audit.k8s.io/v1",
"kind": "Policy",
"rules": [
{
"level": "Metadata"
}
]
}
EOF
# Copy the audit policy to `/etc/ssl/certs/.`
sudo cp ~/.minikube/files/etc/ssl/certs/audit-policy.yaml ~/.minikube/files/etc/ssl/certs/audit-webhook.yaml /etc/ssl/certs/.
# Start the minikube, add the flags `--extra-config=apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml`, `--extra-config=apiserver.audit-webhook-config-file=/etc/ssl/certs/audit-webhook.yaml`
sudo -E minikube start --vm-driver=none --extra-config=apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml --extra-config=apiserver.audit-webhook-config-file=/etc/ssl/certs/audit-webhook.yaml
</code></pre>
<p>Now that my minikube is up and running, I created the namespace, service and webhook deployment:</p>
<pre><code>cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
name: ${ARMO_NAMESPACE}
---
kind: Service
apiVersion: v1
metadata:
labels:
app: ${ARMO_AUDIT_SERVICE}
name: ${ARMO_AUDIT_SERVICE}
namespace: ${ARMO_NAMESPACE}
spec:
ports:
- port: ${ARMO_AUDIT_PORT}
targetPort: ${ARMO_AUDIT_PORT}
protocol: TCP
selector:
app: ${ARMO_AUDIT_SERVICE}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ${ARMO_AUDIT_SERVICE}
namespace: ${ARMO_NAMESPACE}
labels:
app: ${ARMO_AUDIT_SERVICE}
spec:
selector:
matchLabels:
app: ${ARMO_AUDIT_SERVICE}
replicas: 1
template:
metadata:
labels:
app: ${ARMO_AUDIT_SERVICE}
spec:
containers:
- name: ${ARMO_AUDIT_SERVICE}
image: quay.io/armosec/k8s-ca-auditlog-ubi:dummy
imagePullPolicy: Always
env:
- name: ARMO_AUDIT_PORT
value: "${ARMO_AUDIT_PORT}"
ports:
- containerPort: ${ARMO_AUDIT_PORT}
name: ${ARMO_AUDIT_SERVICE}
EOF
</code></pre>
<p>The webhook image code (<code>quay.io/armosec/k8s-ca-auditlog-ubi:dummy</code>) is as following:</p>
<pre><code>package main
import (
"encoding/json"
"flag"
"fmt"
"net/http"
"os"
"k8s.io/apiserver/pkg/apis/audit"
"github.com/golang/glog"
)
func main() {
flag.Parse()
flag.Set("alsologtostderr", "1") // display logs in stdout
InitServer()
}
// InitServer - Initialize webhook listener
func InitServer() {
port, ok := os.LookupEnv("ARMO_AUDIT_PORT")
if !ok {
port = "8888"
}
glog.Infof("Webhook listening on port: %s, path: %s", port, "/")
http.HandleFunc("/", HandleRequest)
glog.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}
//HandleRequest -
func HandleRequest(w http.ResponseWriter, req *http.Request) {
eventList := audit.EventList{}
err := json.NewDecoder(req.Body).Decode(&eventList)
if err != nil {
e := fmt.Errorf("failed parsing api-server request, reason: %s", err.Error())
glog.Errorf(e.Error())
http.Error(w, e.Error(), http.StatusBadRequest)
return
}
glog.Infof("webhook received audit list, len: %d", len(eventList.Items))
for _, event := range eventList.Items {
bEvent, _ := json.Marshal(event)
glog.Infof("Received event: %s", string(bEvent))
}
w.WriteHeader(http.StatusOK)
}
</code></pre>
| <p>Actually I don't know minikube well, but I have used audit for k8s, so my answer might not help in a minikube environment.</p>
<p><em>First</em>, the k8s DNS name format is incorrect. It should be <strong>{service-name}.{namespace}.svc or {service-name}.{namespace}.svc.cluster.local</strong>. Of course, http:// or https:// can also be prepended.</p>
<p><em>Second</em>, you should add the dnsPolicy option to <strong>kube-apiserver.yaml</strong>.
To make kube-apiserver resolve DNS names inside the cluster, 'dnsPolicy' must be set.</p>
<pre><code>sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
</code></pre>
<p>Open kube-apiserver.yaml and add <strong>dnsPolicy: ClusterFirstWithHostNet</strong> to it.
The dnsPolicy field goes directly under .spec, at the same level as containers or volumes</p>
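<p>As a sketch, the relevant fragment of the static pod manifest would end up looking like this (everything else in the file stays unchanged):</p>
<pre><code>spec:
  dnsPolicy: ClusterFirstWithHostNet   # added line, same level as containers/volumes
  containers:
  - name: kube-apiserver
    # existing container spec stays as it was
</code></pre>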
|
<p>In Kubernetes, I have the following section in deployment.yaml. I am using ConfigMap and I want to set the path dynamically based on the pod metadata or label or env variable in pod. Does ConfigMap support setting path dynamically?</p>
<pre><code>spec:
volumes:
- name: configmap
configMap:
name: devconfig
items:
- key: config
path: $(ENVIRONMENT)
defaultMode: 420
</code></pre>
| <p>This is called substitution, which kubectl does not support out of the box. However, you can easily achieve what you want by using the <a href="https://stackoverflow.com/questions/14155596/how-to-substitute-shell-variables-in-complex-text-files">envsubst</a> command, which will substitute <code>$ENVIRONMENT</code> in your yaml with the environment variable set in your current shell.</p>
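<p>A minimal sketch of that workflow (assumes <code>envsubst</code> from GNU gettext is installed). Note that envsubst recognizes <code>$ENVIRONMENT</code> or <code>${ENVIRONMENT}</code>, not the <code>$(ENVIRONMENT)</code> form, so the manifest would need to use one of the former:</p>
<pre><code>export ENVIRONMENT=dev
# Replace ${ENVIRONMENT} in the template, then feed the result to kubectl
envsubst < deployment.yaml | kubectl apply -f -
</code></pre>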
|
<p>We are running a Kubernetes cluster on AKS. The cluster runs on multiple node pools. Autoscaling is enabled to make sure nodes are added or removed when necessary.</p>
<p>I can see the current used amount of nodes by navigating to AKS -> Settings -> Node pools on the Azure Portal. However, I'm not able to get this information historically.</p>
<p>A question I want to find an answer for:
How many nodes were active for node pool 'x' last night?</p>
<p>I couldn't find any metrics for the AKS and Virtual Machine Scale Set resources to answer my question. What are my options?</p>
| <p>AFAIK there is no such metric. A small workaround could be:</p>
<p>In the Portal go to your AKS -> Monitoring -> Metrics. In the Scope select your AKS, Scope Namespace is <code>Container service</code> and then you have the following metrics:</p>
<ul>
<li><p><code>Cluster Health</code> - determines if the autoscaler will take action on the cluster</p>
</li>
<li><p><code>Unneeded Nodes</code> - nodes the autoscaler has marked for deletion</p>
</li>
</ul>
<p>There you can see at least whether scaling took place and how many nodes were deleted afterwards, so you can calculate the number of nodes.</p>
|
<p>I'm working on a POC for getting a Spark cluster set up to use Kubernetes for resource management using AKS (Azure Kubernetes Service). I'm using spark-submit to submit pyspark applications to k8s in cluster mode and I've been successful in getting applications to run fine.</p>
<p>I got Azure file share set up to store application scripts and Persistent Volume and a Persistent Volume Claim pointing to this file share to allow Spark to access the scripts from Kubernetes. This works fine for applications that don't write any output, like the pi.py example given in the spark source code, but writing any kind of outputs fails in this setup. I tried running a script to get wordcounts and the line</p>
<pre><code>wordCounts.saveAsTextFile(f"./output/counts")
</code></pre>
<p>causes an exception where wordCounts is an rdd.</p>
<pre><code>Traceback (most recent call last):
File "/opt/spark/work-dir/wordcount2.py", line 14, in <module>
wordCounts.saveAsTextFile(f"./output/counts")
File "/opt/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1570, in saveAsTextFile
File "/opt/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/opt/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o65.saveAsTextFile.
: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/spark/work-dir/output/counts': Operation not permitted
</code></pre>
<p>The directory "counts" has been created from the spark application just fine, so it seems like it has required permissions, but this subsequent <code>chmod</code> that spark tries to perform internally fails. I haven't been able to figure out the cause and what exact configuration I'm missing in my commands that's causing this. Any help would be greatly appreciated.</p>
<p>The <code>kubectl</code> version I'm using is</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"881d4a5a3c0f4036c714cfb601b377c4c72de543", GitTreeState:"clean", BuildDate:"2021-10-21T05:13:01Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>The spark version is 2.4.5 and the command I'm using is</p>
<pre><code><SPARK_PATH>/bin/spark-submit --master k8s://<HOST>:443 \
--deploy-mode cluster \
--name spark-pi3 \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.container.image=docker.io/datamechanics/spark:2.4.5-hadoop-3.1.0-java-8-scala-2.11-python-3.7-dm14 \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.azure-fileshare-pvc.options.claimName=azure-fileshare-pvc \
--conf spark.kubernetes.driver.volumes.persistentVolumeClaim.azure-fileshare-pvc.mount.path=/opt/spark/work-dir \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.azure-fileshare-pvc.options.claimName=azure-fileshare-pvc \
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.azure-fileshare-pvc.mount.path=/opt/spark/work-dir \
--verbose /opt/spark/work-dir/wordcount2.py
</code></pre>
<p>The PV and PVC are pretty basic. The PV yml is:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: azure-fileshare-pv
labels:
usage: azure-fileshare-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
azureFile:
secretName: azure-storage-secret
shareName: dssparktestfs
readOnly: false
secretNamespace: spark-operator
</code></pre>
<p>The PVC yml is:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: azure-fileshare-pvc
# Set this annotation to NOT let Kubernetes automatically create
# a persistent volume for this volume claim.
annotations:
volume.beta.kubernetes.io/storage-class: ""
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
selector:
# To make sure we match the claim with the exact volume, match the label
matchLabels:
usage: azure-fileshare-pv
</code></pre>
<p>Let me know if more info is needed.</p>
| <blockquote>
<p>The owner and user are root.</p>
</blockquote>
<p>It looks like you've mounted your volume as root. Your problem:</p>
<pre><code>chmod: changing permissions of '/opt/spark/work-dir/output/counts': Operation not permitted
</code></pre>
<p>is due to the fact that you are trying to change the permissions of a file that you are not the owner of. So you need to change the owner of the file first.</p>
<p>The easiest solution is to run <code>chown</code> on the resource you want to access. However, this is often not feasible, as it can lead to privilege escalation, and the image itself may block this possibility. In this case you can create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">security context</a>.</p>
<blockquote>
<p>A security context defines privilege and access control settings for a Pod or Container. Security context settings include, but are not limited to:</p>
<ul>
<li><p>Discretionary Access Control: Permission to access an object, like a file, is based on <a href="https://wiki.archlinux.org/index.php/users_and_groups" rel="nofollow noreferrer">user ID (UID) and group ID (GID)</a>.</p>
</li>
<li><p><a href="https://en.wikipedia.org/wiki/Security-Enhanced_Linux" rel="nofollow noreferrer">Security Enhanced Linux (SELinux)</a>: Objects are assigned security labels.</p>
</li>
<li><p>Running as privileged or unprivileged.</p>
</li>
<li><p><a href="https://linux-audit.com/linux-capabilities-hardening-linux-binaries-by-removing-setuid/" rel="nofollow noreferrer">Linux Capabilities</a>: Give a process some privileges, but not all the privileges of the root user.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tutorials/clusters/apparmor/" rel="nofollow noreferrer">AppArmor</a>: Use program profiles to restrict the capabilities of individual programs.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tutorials/clusters/seccomp/" rel="nofollow noreferrer">Seccomp</a>: Filter a process's system calls.</p>
</li>
<li><p>AllowPrivilegeEscalation: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the <a href="https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt" rel="nofollow noreferrer"><code>no_new_privs</code></a> flag gets set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged OR 2) has <code>CAP_SYS_ADMIN</code>.</p>
</li>
<li><p>readOnlyRootFilesystem: Mounts the container's root filesystem as read-only.</p>
</li>
</ul>
<p>The above bullets are not a complete set of security context settings -- please see <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#securitycontext-v1-core" rel="nofollow noreferrer">SecurityContext</a> for a comprehensive list.</p>
<p>For more information about security mechanisms in Linux, see <a href="https://www.linux.com/learn/overview-linux-kernel-security-features" rel="nofollow noreferrer">Overview of Linux Kernel Security Features</a></p>
</blockquote>
<p>You can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">Configure volume permission and ownership change policy for Pods</a>.</p>
<blockquote>
<p>By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the <code>fsGroup</code> specified in a Pod's <code>securityContext</code> when that volume is mounted. For large volumes, checking and changing ownership and permissions can take a lot of time, slowing Pod startup. You can use the <code>fsGroupChangePolicy</code> field inside a <code>securityContext</code> to control the way that Kubernetes checks and manages ownership and permissions for a volume.</p>
</blockquote>
<p>Here is an example:</p>
<pre class="lang-yaml prettyprint-override"><code>securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
fsGroupChangePolicy: "OnRootMismatch"
</code></pre>
<p>See also <a href="https://stackoverflow.com/questions/51200115/chown-changing-ownership-of-data-db-operation-not-permitted/51203031#51203031">this similar question</a>.</p>
|
<p>I am deploying application with helm chart.
I am facing an issue with StorageClass rook-ceph whenever I deploy a chart my pods are in pending state because pvc are not getting created. Logs for pvc are</p>
<pre><code>Warning ProvisioningFailed 96s (x13 over 20m) rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-775dcbbc86-nt8tr_170456b2-6876-4a49-9077-05cd2395cfed failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-f65acb47-f145-449e-ba1c-e8a61efa67b0 already exists
</code></pre>
<p>Try restarting the MDS service. If that does not help, copying the data of the PVC to a new one should also work.</p>
|
<p>I want to create alerts in Grafana for my Kubernetes clusters.
I have configured Prometheus, Node Exporter, Kube-Metrics and Alertmanager in my K8s cluster.
I want to set up alerting on unschedulable or failed pods.</p>
<ol>
<li>Cause of unschedulable or failed pods</li>
<li>Generating an alert after a while</li>
<li>Creating another alert to notify us when pods fail.</li>
</ol>
<p>Can you guide me on how to achieve this?</p>
| <p>Based on the comment from <a href="https://stackoverflow.com/users/8803619/suresh-vishnoi" title="14,017 reputation">Suresh Vishnoi</a>:</p>
<blockquote>
<p>it might be helpful <a href="https://awesome-prometheus-alerts.grep.to/rules.html#kubernetes" rel="noreferrer">awesome-prometheus-alerts.grep.to/rules.html#kubernetes</a></p>
</blockquote>
<p>yes, this could be very helpful. On this site you can find templates for <a href="https://awesome-prometheus-alerts.grep.to/rules.html#rule-kubernetes-1-17" rel="noreferrer">failed pods (not healthy)</a>:</p>
<blockquote>
<p>Pod has been in a non-ready state for longer than 15 minutes.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code> - alert: KubernetesPodNotHealthy
expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
for: 0m
labels:
severity: critical
annotations:
summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
    description: "Pod has been in a non-ready state for longer than 15 minutes.\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
</code></pre>
<p>or for <a href="https://awesome-prometheus-alerts.grep.to/rules.html#rule-kubernetes-1-18" rel="noreferrer">crash looping</a>:</p>
<blockquote>
<p>Pod {{ $labels.pod }} is crash looping</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code> - alert: KubernetesPodCrashLooping
expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
for: 2m
labels:
severity: warning
annotations:
summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
description: "Pod {{ $labels.pod }} is crash looping\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"
</code></pre>
<p>See also <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus/" rel="noreferrer">this good guide about monitoring kubernetes cluster with Prometheus</a>:</p>
<blockquote>
<p>The Kubernetes API and the <a href="https://github.com/kubernetes/kube-state-metrics" rel="noreferrer">kube-state-metrics</a> (which natively uses prometheus metrics) <strong>solve part of this problem</strong> by exposing Kubernetes internal data, such as the number of desired / running replicas in a deployment, unschedulable nodes, etc.</p>
<p>Prometheus is a good fit for microservices because you just need to <strong>expose a metrics port</strong>, and don’t need to add too much complexity or run additional services. Often, the service itself is already presenting a HTTP interface, and the developer just needs to add an additional path like <code>/metrics</code>.</p>
</blockquote>
<p>If it comes to unschedulable nodes, you can use the metric <code>kube_node_spec_unschedulable</code>. It is described <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/node-metrics.md" rel="noreferrer">here</a> or <a href="https://www.sumologic.com/blog/kubernetes-monitoring/" rel="noreferrer">here</a>:
<code>kube_node_spec_unschedulable</code> - Whether a node can schedule new pods or not.</p>
<p>Look also at <a href="https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics" rel="noreferrer">this guide</a>.
Basically, you need to find the metric you want to monitor and set it appropriately in Prometheus. Alternatively, you can use templates, as I showed at the beginning of the answer.</p>
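<p>As a sketch, an alert on the <code>kube_node_spec_unschedulable</code> metric, written in the same style as the templates above (the threshold and duration are just examples), could look like:</p>

```yaml
  - alert: KubernetesNodeUnschedulable
    expr: kube_node_spec_unschedulable > 0
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: Kubernetes node unschedulable (instance {{ $labels.instance }})
      description: "Node {{ $labels.node }} cannot schedule new pods.\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
```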
|
<p>Trying to following the use-case examples given in the official Kubernetes documentation for using secrets as environment variables (referenced <a href="https://kubernetes.io/docs/concepts/configuration/secret/#use-case-as-container-environment-variables" rel="nofollow noreferrer">here</a> ), I made both my secret and my deployment yaml (which includes a pod spec) as follows:</p>
<p>Secret yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
LOCAL_UID: dsdgvreRBRBBsdd=
LOCAL_PWD: MmSDkfKDODbOU4NCg==
</code></pre>
<p>which is written to the namespace by doing:
<code>kubectl apply -f my-secret.yaml</code></p>
<p>Likewise, here is the deployment Yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
terminationGracePeriodSeconds: 30
containers:
- name: my-app
env:
- name: REPO_APP_URL
value: https://repo.myco.com/project.tar.gz
envFrom:
- secretRef:
name: my-secret
image: repo.myco.com/images/node-alpine:0.1.6
imagePullPolicy: Always
ports:
- containerPort: 8080
readinessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 15
periodSeconds: 15
securityContext:
runAsUser: 1000
imagePullSecrets:
- name: regcredepg
</code></pre>
<ul>
<li>note shown above (but is in the deployment yaml) are the Service and Ingress specs.</li>
</ul>
<p>This is run by doing the following</p>
<p><code>kubectl apply -f my-app.yaml</code></p>
<p>This actually works great given:</p>
<ol>
<li><p>the <code>imagePullSecrets</code> directive is included in the deployment YAML.</p>
</li>
<li><p>that the <code>name</code> value given in the <code>imagePullSecrets</code> section is <strong>not</strong> the actual secret used in the <code>envFrom: - secretRef:</code> section.</p>
</li>
</ol>
<p>If I try to set the name of the <code>imagePullSecrets</code> name field to <code>my-secret</code>, the pod fails to load (saying <code>Error from server (BadRequest): container "my-app" in pod "my-app-597bb6c9b4-lh8rg" is waiting to start: image can't be pulled</code>).</p>
<p>Also, it won't allow me to simply remove the <code>imagePullSecrets</code> section of the YAML in the pod spec, even though the documentation claims it its optional.</p>
<p>So, the only way this will work is if I include the <code>imagePullSecrets</code> reference to a valid secrets that I am <strong>not</strong> using in my <code>envFrom: - secretRef:</code> section. I am sure I am missing some logical obvious issue here. Can anyone shed light on this??</p>
| <p><code>imagePullSecrets</code> has a different format than just the ID and password. You need to specify the registry FQDN and the username and password. You can find more information in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">Pull an Image from a Private Registry</a>.</p>
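<p>For reference, a pull secret such as <code>regcredepg</code> from the question is typically created with <code>kubectl create secret docker-registry</code> and ends up with the type <code>kubernetes.io/dockerconfigjson</code> rather than <code>Opaque</code>; a sketch of its shape (the value is a placeholder):</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcredepg
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config.json containing the registry FQDN,
  # username and password; placeholder below
  .dockerconfigjson: PLACEHOLDER_BASE64
```

<p>An <code>Opaque</code> secret like <code>my-secret</code> can still be referenced from <code>envFrom</code>, but it cannot double as an image pull secret because it lacks this structure.</p>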
|
<p>Assume there is a backend application with a private key stored in a .env file.</p>
<p>For the project file structure:</p>
<pre><code>|-App files
|-Dockerfile
|-.env
</code></pre>
<p>If I run the Docker image locally, the application can be reached normally by using a valid public key during the API request. However, if I deploy the container into an AKS cluster using the same Docker image, the application fails.</p>
<p>I am wondering how a container in an AKS cluster handles the .env file. What should I do to solve this problem?</p>
| <p>Moving this out of comments for better visibility.</p>
<hr />
<p>First and most important: Docker is not the same as Kubernetes, and what works on Docker won't necessarily work directly on Kubernetes. Docker is a container runtime, while Kubernetes is a container orchestration tool which sits on top of a runtime (not always Docker these days; <code>containerd</code> is used as well).</p>
<p>There are many resources on the internet which describe the key difference. For example this one is from <a href="https://azure.microsoft.com/en-us/topic/kubernetes-vs-docker/" rel="nofollow noreferrer">microsoft docs</a></p>
<hr />
<p>First <code>configmaps</code> and <code>secrets</code> should be created:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap" rel="nofollow noreferrer">Creating and managing configmaps</a> and <a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret" rel="nofollow noreferrer">creating and managing secrets</a></p>
<p>There are different types of <a href="https://kubernetes.io/docs/concepts/configuration/secret/#secret-types" rel="nofollow noreferrer">secrets</a> which can be created.</p>
<hr />
<ol>
<li>Use configmaps/secrets as environment variables.</li>
</ol>
<p>Further referring to <code>configMaps</code> and <code>secrets</code> as <code>environment variables</code> looks like (configmaps and secrets have the same structure):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod-example
spec:
containers:
- ...
env:
      - name: ADMIN_PASS
        valueFrom:
          secretKeyRef: # secretKeyRef is used here because the data is sensitive
            key: admin
            name: admin-password
      - name: MYSQL_DB_STRING
        valueFrom:
          configMapKeyRef: # this is not sensitive data, so a configmap can be used
            key: db_config
            name: connection_string
...
</code></pre>
<ol start="2">
<li>Use configmaps/secrets as volumes (it will be presented as file).</li>
</ol>
<p>Below the example of using secrets as files mounted in a specific directory:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
containers:
- ...
volumeMounts:
- name: secrets-files
mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
subPath: secret.file1
volumes:
- name: secrets-files
secret:
secretName: my-secret # name of the Secret
</code></pre>
<p>There's a <a href="https://medium.com/avmconsulting-blog/secrets-management-in-kubernetes-378cbf8171d0" rel="nofollow noreferrer">good article</a> which explains and shows use cases of secrets as well as its limitations e.g. size is limited to 1Mb.</p>
|
<p>In Kubernetes, I have the following section in deployment.yaml. I am using ConfigMap and I want to set the path dynamically based on the pod metadata or label or env variable in pod. Does ConfigMap support setting path dynamically?</p>
<pre><code>spec:
volumes:
- name: configmap
configMap:
name: devconfig
items:
- key: config
path: $(ENVIRONMENT)
defaultMode: 420
</code></pre>
<p>As an alternative to <a href="https://stackoverflow.com/questions/14155596/how-to-substitute-shell-variables-in-complex-text-files">envsubst</a>, which was correctly answered by @gohm's, you may want to try using a Job that checks your ConfigMap and passes the proper values to your <code>path</code>.</p>
<p>Take a look: <a href="https://stackoverflow.com/questions/65091430/kubernetes-use-environment-variable-configmap-in-persistentvolume-host-path">Kubernetes: use environment variable/ConfigMap in PersistentVolume host path</a></p>
|
<p>I have 3 configs in service:</p>
<pre class="lang-py prettyprint-override"><code>1 - appsettings.development.json
2 - appsettings.testing.json
3 - appsettings.production.json
</code></pre>
<p>And i have 3 launch settings for them as well in <code>launchsettings.json</code>:</p>
<pre class="lang-json prettyprint-override"><code>{
"profiles": {
    "DevelopmentDockerProfile": {
"commandName": "Docker",
"environmentVariables": {
"DOTNET_ENVIRONMENT": "development"
}
},
    "ProductionDockerProfile": {
"commandName": "Docker",
"environmentVariables": {
"DOTNET_ENVIRONMENT": "production"
}
},
    "TestingDockerProfile": {
"commandName": "Docker",
"environmentVariables": {
"DOTNET_ENVIRONMENT": "testing"
}
}
}
}
</code></pre>
<p>And I can run the service normally, using the correct configuration, only on my local machine.</p>
<p>I cannot push the <code>DOTNET_ENVIRONMENT</code> variable with the service when publishing.</p>
<p>Is there any way to bind an environment variable to <code>publishProfile.pubxml</code>, or whatever else, so that the environment variable <code>DOTNET_ENVIRONMENT</code> will magically appear in the Docker service on the host machine?</p>
| <h2>Short answer</h2>
<p>No, there is no such option.</p>
<h2>Longer one</h2>
<p>First of all, you can easily set environment variables using <code>Environment.SetEnvironmentVariable("name", "value")</code>. But that's probably not what you want.</p>
<p>What you might want is to configure environment variables in configs for container/orchestration tool you use, either set in docker command line (<code>docker run -e VARIABLE_NAME=value</code>) or in <code>docker-compose.yml</code> for Docker Compose or in service YAML config for k8s.</p>
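<p>For example, with Docker Compose (service and image names below are placeholders), the variable would be set in <code>docker-compose.yml</code>:</p>

```yaml
services:
  myservice:
    image: myregistry/myservice:latest # placeholder image
    environment:
      - DOTNET_ENVIRONMENT=production # read by the .NET host on startup
```

<p>The equivalent for plain Docker is <code>docker run -e DOTNET_ENVIRONMENT=production ...</code>, and for Kubernetes an <code>env</code> entry in the container spec.</p>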
|
<p>Say I have several services in kubernetes. And I have one entry point to the cluster, it's a public facing service that is meant to validate the JWT token (from AWS cognito).</p>
<p>The entry point routes the request to an internal service, and that in turn usually makes more requests to other internal services.</p>
<p>My question is: is it enough to validate the JWT only once and make other communications without any form of authentication, just passing the user id (or any other data needed)? Or do I need to have some form of authentication when making http requests between services? if so, which? should I validate the JWT again? should I have server certificates or something like that?</p>
| <p>Posted a community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p>As David Szalai’s comment mentioned, it depends on your security and project requirements:</p>
<blockquote>
<p>If you go with a zero-trust model inside k8s, you can use mTLS with a service mesh between services. Passing JWTs is also good if you need to propagate user-auth info to different services.</p>
</blockquote>
<blockquote>
<p>In the current (project) we’ll use mTLS with a service mesh, and send JWTs along with requests where the receiver needs info about user, and parse/validate it there again.</p>
</blockquote>
<p>If you apps do not have built-in authentication / authorization mechanisms you may try Istio - check these articles:</p>
<ul>
<li><a href="https://istio.io/latest/docs/concepts/security/" rel="nofollow noreferrer">Istio documentation - Security</a></li>
<li><a href="https://medium.com/intelligentmachines/istio-jwt-step-by-step-guide-for-micro-services-authentication-690b170348fc" rel="nofollow noreferrer">Istio & JWT: Step by Step Guide for Micro-Services Authentication</a>.</li>
</ul>
<p>Also check these articles about authentication in Kubernetes:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Kubernetes Docs - Authenticating</a></li>
<li><a href="https://learnk8s.io/microservices-authentication-kubernetes" rel="nofollow noreferrer">Authentication between microservices using Kubernetes identities</a></li>
</ul>
<p><strong>EDIT:</strong></p>
<p><em>Why?</em></p>
<p>In security, <a href="https://wso2.com/blogs/thesource/securing-microservices-in-a-zero-trust-environment/" rel="nofollow noreferrer">we say</a>:</p>
<blockquote>
<p>It’s a common principle in security that the strength of a given system is only as strong as the strength of its weakest link.</p>
</blockquote>
<p>This article - <a href="https://cloud.google.com/blog/products/networking/the-service-mesh-era-securing-your-environment-with-istio" rel="nofollow noreferrer">The service mesh era: Securing your environment with Istio</a> mentions some possible attacks that can be done in insecure system, like man-in-the middle attacks and replayed attacks:</p>
<blockquote>
<p>An approach to mitigate this risk is to ensure that peers are only authenticated using non-portable identities. <a href="https://cloud.google.com/istio/docs/istio-on-gke/installing#choose_a_security_option" rel="nofollow noreferrer">Mutual TLS authentication</a> (mTLS) ensures that peer identities are bound to the TLS channel and cannot be replayed. It also ensures that all communication is encrypted in transit, and mitigates the risk of man-in-the middle attacks and replay attacks by the destination service. While mutual TLS helps strongly identify the network peer, end user identities (or identity of origin) can still be propagated using bearer tokens like JWT.</p>
</blockquote>
<blockquote>
<p>Given the proliferation of threats within the production network and the increased points of privileged access, it is increasingly necessary to adopt a zero-trust network security approach for microservices architectures. This approach requires that all accesses are strongly authenticated, authorized based on context, logged, and monitored … and the controls must be optimized for dynamic production environments.</p>
</blockquote>
<p>Without adding additional security layers (like mTLS and service mesh in a cluster), we are assuming that communication between microservices in the cluster is done in fully trusted network, <a href="https://wso2.com/blogs/thesource/securing-microservices-in-a-zero-trust-environment/" rel="nofollow noreferrer">so they can give an attacker possibility to exploit business assets via network</a>:</p>
<blockquote>
<p>Many microservices deployments today, mostly worry about the edge security by exposing the microservices via APIs and protecting those with an API gateway at the edge. Once a request passes the API gateway, the communications among microservices assume a trusted network, and expose endless possibilities to an attacker gaining access to the network to exploit all valuable business assets exposed by the microservices.</p>
</blockquote>
|
<p>I've created a cronjob in kubernetes to do scheduled backups of my postgresql db (assuming that I have PGPASS Secret already created):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: postgres-backup-dev
spec:
# Backup the database every day at 2AM
schedule: "0 2 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres-backup-dev
image: postgres:10.4
command: ["/bin/sh"]
args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
env:
- name: PGPASS
valueFrom:
secretKeyRef:
name: dev-pgpass
key: pgpass
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
restartPolicy: Never
volumes:
- name: postgres-backup-storage
hostPath:
path: /var/volumes/postgres-backups
type: DirectoryOrCreate
</code></pre>
<p>This works, but it's dumping the backup file into the container of the Job that is created by the CronJob.
My goal is rather to dump this file into the container of my original database. My original db deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:10.4
imagePullPolicy: Always
resources:
requests:
cpu: "1"
memory: "1Gi"
limits:
cpu: "2"
memory: "2Gi"
ports:
- containerPort: 5432
protocol: TCP
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
</code></pre>
<p>Is there any way to achieve this?</p>
<p>Thanks</p>
| <p>You need a shared volume.</p>
<p>If the pods are on the same host, you can use a <code>hostPath</code> volume.</p>
<p>If the pods are on a cluster, you need a different storage backend, like NFS, EFS (on AWS), CephFS or GlusterFS.</p>
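<p>As a sketch based on the manifests in the question (single-node case), the same <code>hostPath</code> volume used by the CronJob can be added to the database Deployment so both pods see the same directory:</p>

```yaml
# fragment of the postgres Deployment's pod template (spec.template.spec)
containers:
- name: postgres
  # ... existing settings ...
  volumeMounts:
  - mountPath: /var/backups
    name: postgres-backup-storage
volumes:
- name: postgres-backup-storage
  hostPath:
    path: /var/volumes/postgres-backups
    type: DirectoryOrCreate
```

<p>With that in place, the dump written to <code>/var/backups</code> by the CronJob becomes visible at the same path inside the database container, as long as both pods are scheduled on the same node.</p>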
|
<p>What is the purpose of <code>kubectl logs deploy/my-deployment</code> shown at <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-deployments-and-services" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-deployments-and-services</a>?</p>
<p>I would think it will show me logs from all the pods deployed as part of the <code>my-deployment</code> object. However, even though I have 2 pods in my deployment, that command shows logs from only one of them.</p>
| <p>If your deployment has multiple pod replicas, then <code>kubectl logs deployment/...</code> will just pick one on its own.</p>
<p>Here is an example:</p>
<pre><code>kubectl get pods -n kube-system | grep coredns
coredns-78fcd69978-dqf95 1/1 Running 0 42h
coredns-78fcd69978-vgvf2 1/1 Running 0 42h
</code></pre>
<pre><code>kubectl logs deployment/coredns -n kube-system
Found 2 pods, using pod/coredns-78fcd69978-vgvf2
</code></pre>
|
<p>I am just exploring and want to deploy my k8dash with Helm, but got a weird error, even though I have been able to deploy it on AWS EKS.</p>
<p>I am running them on my Minikube <code>V1.23.2</code></p>
<p>My helm version is <code>v3.6.2</code></p>
<p>Kubernetes kubectl version is <code>v1.22.3</code></p>
<p>Basically if I do helm template, the <code>VirtualServer</code> would be like this:</p>
<pre><code>apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
name: k8dash
namespace: k8dash
spec:
host: namahost.com
routes:
- action:
pass: RELEASE-NAME
path: /
upstreams:
- name: RELEASE-NAME
port: 80
service: RELEASE-NAME
</code></pre>
<p>and I got this error:</p>
<pre><code>Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "VirtualServer" in version "k8s.nginx.org/v1"
</code></pre>
<p>It's weird: deploying this on AWS EKS works just fine, but locally I got this error and I could not find any clue while Googling. Does it have something to do with my tool versions?</p>
<p>You have to install additional CRDs, as both <code>VirtualServer</code> and <code>VirtualServerRoute</code> are not available out of the box but are NGINX custom resources.</p>
<p><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions" rel="nofollow noreferrer">CustomResourceDefinitions</a>:</p>
<blockquote>
<p>The CustomResourceDefinition API resource allows you to define custom
resources. Defining a CRD object creates a new custom resource with a
name and schema that you specify. The Kubernetes API serves and
handles the storage of your custom resource. The name of a CRD object
must be a valid DNS subdomain name.</p>
<p>This frees you from writing your own API server to handle the custom
resource, but the generic nature of the implementation means you have
less flexibility than with API server aggregation.</p>
</blockquote>
<hr />
<p><a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/#create-custom-resources" rel="nofollow noreferrer">Nginx Create Custom Resources</a></p>
<blockquote>
<p>Note: <strong>By default, it is required to create custom resource definitions
for VirtualServer, VirtualServerRoute, TransportServer and Policy.</strong>
Otherwise, the Ingress Controller pods will not become Ready. If you’d
like to disable that requirement, configure -enable-custom-resources
command-line argument to false and skip this section.</p>
</blockquote>
<p>Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources.</p>
<p>You can find crds under <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds</a>:</p>
<pre><code>$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v2.0.3 (or latest, as you wish)
$ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
</code></pre>
<p>After successful applying you will be able to create both <code>VirtualServer</code> and <code>VirtualServerRoute</code></p>
|
<p>Giving the port forward object</p>
<pre><code>from kubernetes.stream import stream, portforward
pf = portforward(
k8s_client_v1.connect_get_namespaced_pod_portforward,
pod_name,
name_space,
ports=port
)
pf.socket(port)
</code></pre>
<p><code>pf</code> is a socket object from the <code>AF_UNIX</code> family.</p>
<p>I need a better understanding of whether another <code>AF_INET</code> family socket has to be instantiated in order to achieve the same functionality as the <code>kubectl</code> client has, e.g.: <code>kubectl port-forward $pod_name $port</code></p>
<p>I appreciate in advance if someone could share any snippets of this type of implementation.</p>
<p>At this moment, creating the port forward object was inspired by the example <a href="https://github.com/kubernetes-client/python/blob/master/examples/pod_portforward.py#L176" rel="nofollow noreferrer">here</a></p>
| <p>I am not an expert on sockets, but I managed to wrap the socket from the port forwarding with a <code>http.client.HTTPConnection</code>. I wanted to patch <code>requests</code>, but I could not make it work.</p>
<p>Anyway, this way the http requests can be implemented more conveniently in a high level fashion. Not sure if you were looking for that, but I was. To query pod names, ports and more you can check the <a href="https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17" rel="nofollow noreferrer">api reference</a> for kubernetes and look up the corresponding method in the python package.</p>
<pre><code>from http.client import HTTPConnection
from kubernetes import client, config
from kubernetes.stream import portforward
from kubernetes.stream.ws_client import PortForward
class ForwardedKubernetesHTTPConnection(HTTPConnection):
def __init__(self, forwarding: PortForward, port: int):
super().__init__("127.0.0.1", port)
self.sock = forwarding.socket(port)
def connect(self) -> None:
pass
def close(self) -> None:
pass
if __name__ == "__main__":
config.load_kube_config()
client_v1 = client.CoreV1Api()
name_space = "my-namespace" # or get from api somehow
pod_name = "my-pod" # or get from api somehow
port = 80
pf = portforward(
client_v1.connect_get_namespaced_pod_portforward,
pod_name,
name_space,
ports=str(port)
)
conn = ForwardedKubernetesHTTPConnection(pf, port)
conn.request("GET", "/my/url") # will fail for other methods
resp = conn.getresponse()
print(resp.status)
print(resp.headers)
print(resp.read())
conn.request("GET", "/my/other/url")
# ...
</code></pre>
|
<p><strong>EDIT:</strong></p>
<p>I deleted minikube, enabled kubernetes in Docker desktop for Windows and installed <code>ingress-nginx</code> manually.</p>
<pre><code>$helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Release "ingress-nginx" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ServiceAccount "ingress-nginx" in namespace "ingress-nginx" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ingress-nginx"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ingress-nginx"
</code></pre>
<p>It gave me an error, but I think that's because I had already installed it before:</p>
<pre><code>$kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.106.222.233 localhost 80:30199/TCP,443:31093/TCP 11m
ingress-nginx-controller-admission ClusterIP 10.106.52.106 <none> 443/TCP 11m
</code></pre>
<p>Then I applied all my YAML files again, but this time the ingress is not getting an address:</p>
<pre><code>$kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress <none> myapp.com 80 10m
</code></pre>
<hr />
<p>I am using docker desktop (windows) and installed nginx-ingress controller via minikube addons enable command:</p>
<pre><code>$kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create--1-lp4md 0/1 Completed 0 67m
ingress-nginx-admission-patch--1-jdkn7 0/1 Completed 1 67m
ingress-nginx-controller-5f66978484-6mpfh 1/1 Running 0 67m
</code></pre>
<p>And applied all my yaml files:</p>
<pre><code>$kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default event-service-svc ClusterIP 10.108.251.79 <none> 80/TCP 16m app=event-service-app
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16m <none>
default mssql-clusterip-srv ClusterIP 10.98.10.22 <none> 1433/TCP 16m app=mssql
default mssql-loadbalancer LoadBalancer 10.109.106.174 <pending> 1433:31430/TCP 16m app=mssql
default user-service-svc ClusterIP 10.111.128.73 <none> 80/TCP 16m app=user-service-app
ingress-nginx ingress-nginx-controller NodePort 10.101.112.245 <none> 80:31583/TCP,443:30735/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.105.169.167 <none> 443/TCP 68m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 72m k8s-app=kube-dns
</code></pre>
<p>All pods and services seem to be running properly. I checked the pod logs; all migrations etc. have worked and the app is up and running. But when I try to send an HTTP request, I get a socket hang up error. I've checked all the logs for all pods but couldn't find anything useful.</p>
<pre><code>$kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-ingress nginx myapp.com localhost 80 74s
</code></pre>
<p>This one is also a bit weird: I was expecting ADDRESS to be set to an IP, not to localhost. So adding a 127.0.0.1 entry for myapp.com in /etc/hosts also didn't seem right.</p>
<p>My question here is: what might I be doing wrong? And how can I trace where my requests are being forwarded to?</p>
<p>ingress-svc.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myapp-ingress
spec:
rules:
- host: myapp.com
http:
paths:
- path: /api/Users
pathType: Prefix
backend:
service:
name: user-service-svc
port:
number: 80
- path: /api/Events
pathType: Prefix
backend:
service:
name: event-service-svc
port:
number: 80
</code></pre>
<p>events-depl.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: event-service-app
labels:
app: event-service-app
spec:
replicas: 1
selector:
matchLabels:
app: event-service-app
template:
metadata:
labels:
app: event-service-app
spec:
containers:
- name: event-service-app
image: ghcr.io/myapp/event-service:master
imagePullPolicy: Always
ports:
- containerPort: 80
imagePullSecrets:
- name: myapp
---
apiVersion: v1
kind: Service
metadata:
name: event-service-svc
spec:
selector:
app: event-service-app
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
| <h2>Reproduction</h2>
<p>I reproduced the case using minikube <code>v1.24.0</code>, Docker desktop <code>4.2.0</code>, engine <code>20.10.10</code></p>
<p>First, the <code>localhost</code> in the ingress output appears by design: it doesn't really matter what IP address is behind the domain in <code>/etc/hosts</code>; I added a different one for testing and it still showed localhost. Only <code>metallb</code> will provide an IP address from a configured network.</p>
<h2>What happens</h2>
<p>When the minikube driver is <code>docker</code>, minikube creates a big container (VM) where the kubernetes components run. This can be checked by running the <code>docker ps</code> command on the host system:</p>
<pre><code>$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f087dc669944 gcr.io/k8s-minikube/kicbase:v0.0.28 "/usr/local/bin/entr…" 16 minutes ago Up 16 minutes 127.0.0.1:59762->22/tcp, 127.0.0.1:59758->2376/tcp, 127.0.0.1:59760->5000/tcp, 127.0.0.1:59761->8443/tcp, 127.0.0.1:59759->32443/tcp minikube
</code></pre>
<p>And then <code>minikube ssh</code> to get inside this container and run <code>docker ps</code> to see all kubernetes containers.</p>
<p>Moving forward. Before introducing <code>ingress</code>, it's already clear that even <code>NodePort</code> doesn't work as intended. Let's check it.</p>
<p>There are two ways to get the <code>minikube VM IP</code>:</p>
<ol>
<li>run <code>minikube ip</code></li>
<li><code>kubectl get nodes -o wide</code> and find the node's IP</li>
</ol>
<p>With <code>NodePort</code>, requests should go to <code>minikube_IP:NodePort</code>, yet this doesn't work. It happens because the docker containers inside the minikube VM are not exposed outside of it, and the minikube VM is itself just another docker container.</p>
<p>On <code>minikube</code>, to access services within the cluster, there is a special command - <a href="https://minikube.sigs.k8s.io/docs/commands/service/" rel="nofollow noreferrer"><code>minikube service %service_name%</code></a> - which creates a direct tunnel to the service inside the <code>minikube VM</code> (you can see that its output contains the <code>service URL</code> with the <code>NodePort</code> which is supposed to be working):</p>
<pre><code>$ minikube service echo
|-----------|------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|---------------------------|
| default | echo | 8080 | http://192.168.49.2:32034 |
|-----------|------|-------------|---------------------------|
* Starting tunnel for service echo.
|-----------|------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|------|-------------|------------------------|
| default | echo | | http://127.0.0.1:61991 |
|-----------|------|-------------|------------------------|
* Opening service default/echo in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it
</code></pre>
<p>And now it's available on host machine:</p>
<pre><code>$ curl http://127.0.0.1:61991/
StatusCode : 200
StatusDescription : OK
</code></pre>
<h2>Adding ingress</h2>
<p>Moving forward and adding ingress.</p>
<pre><code>$ minikube addons enable ingress
$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default echo NodePort 10.111.57.237 <none> 8080:32034/TCP 25m
ingress-nginx ingress-nginx-controller NodePort 10.104.52.175 <none> 80:31041/TCP,443:31275/TCP 2m12s
</code></pre>
<p>Trying to get any response from <code>ingress</code> by hitting <code>minikube_IP:NodePort</code> with no luck:</p>
<pre><code>$ curl 192.168.49.2:31041
curl : Unable to connect to the remote server
At line:1 char:1
+ curl 192.168.49.2:31041
</code></pre>
<p>Trying to create a tunnel with <code>minikube service</code> command:</p>
<pre><code>$ minikube service ingress-nginx-controller -n ingress-nginx
|---------------|--------------------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|---------------------------|
| ingress-nginx | ingress-nginx-controller | http/80 | http://192.168.49.2:31041 |
| | | https/443 | http://192.168.49.2:31275 |
|---------------|--------------------------|-------------|---------------------------|
* Starting tunnel for service ingress-nginx-controller.
|---------------|--------------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|---------------|--------------------------|-------------|------------------------|
| ingress-nginx | ingress-nginx-controller | | http://127.0.0.1:62234 |
| | | | http://127.0.0.1:62235 |
|---------------|--------------------------|-------------|------------------------|
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
* Opening service ingress-nginx/ingress-nginx-controller in default browser...
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
</code></pre>
<p>And getting <code>404</code> from <code>ingress-nginx</code> which means we can send requests to ingress:</p>
<pre><code>$ curl http://127.0.0.1:62234
curl : 404 Not Found
nginx
At line:1 char:1
+ curl http://127.0.0.1:62234
</code></pre>
<h2>Solutions</h2>
<p>Above I explained what happens. Here are three solutions to get it working:</p>
<ol>
<li>Use another <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">minikube driver</a> (e.g. virtualbox; I used <code>hyperv</code> since my laptop has Windows 10 Pro)</li>
</ol>
<p><code>minikube ip</code> will return a "normal" IP address of the virtual machine and all network functionality will work just fine. You will need to add this IP address to <code>/etc/hosts</code> for the domain used in the ingress rule.</p>
<p><strong>Note!</strong> This works even though <code>localhost</code> is shown in the <code>ADDRESS</code> column of the <code>kubectl get ing</code> output.</p>
<ol start="2">
<li>Use built-in kubernetes feature in Docker desktop for Windows.</li>
</ol>
<p>You will need to <a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters" rel="nofollow noreferrer">manually install <code>ingress-nginx</code></a> and change the <code>ingress-nginx-controller</code> service type from <code>NodePort</code> to <code>LoadBalancer</code> so that it is available on <code>localhost</code> and works. Please see <a href="https://stackoverflow.com/a/69113528/15537201">my other answer about Docker Desktop for Windows</a></p>
<ol start="3">
<li>(testing only) - use port-forward</li>
</ol>
<p>It's almost exactly the same idea as the <code>minikube service</code> command, but with more control. You will open a tunnel from host port <code>80</code> to the <code>ingress-nginx-controller</code> service (and eventually the pod) on port <code>80</code> as well. <code>/etc/hosts</code> should contain a <code>127.0.0.1 test.domain</code> entry.</p>
<pre><code>$ kubectl port-forward service/ingress-nginx-controller -n ingress-nginx 80:80
Forwarding from 127.0.0.1:80 -> 80
Forwarding from [::1]:80 -> 80
</code></pre>
<p>And testing it works:</p>
<pre><code>$ curl test.domain
StatusCode : 200
StatusDescription : OK
</code></pre>
<h2>Update for kubernetes in docker desktop on windows and ingress:</h2>
<p>On modern <code>ingress-nginx</code> versions, <code>.spec.ingressClassName</code> should be added to ingress rules. See the <a href="https://kubernetes.github.io/ingress-nginx/#i-have-only-one-instance-of-the-ingresss-nginx-controller-in-my-cluster-what-should-i-do" rel="nofollow noreferrer">latest updates</a>; an ingress rule should look like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
...
spec:
ingressClassName: nginx # can be checked by kubectl get ingressclass
rules:
- host: myapp.com
http:
...
</code></pre>
|
<p>I'm trying to configure a <code>gitlab-ci.yml</code> file to automatically build two Docker images. I've seen it being done with docker-compose, but in my case I don't want to use docker-compose.</p>
<p>The <code>docker-compose.yml</code> configuration that I've seen is something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3'
services:
vcxagency-base:
build:
context: https://github.com/AbsaOSS/vcxagencynode.git#master
dockerfile: ./vcxagency-base/vcxagency-base.Dockerfile
image: vcxagency-base:1.15.0
vcxagency-node:
build:
context: https://github.com/AbsaOSS/vcxagencynode.git#master
dockerfile: ./vcxagency-node/Dockerfile
image: vcxagency-node:0.3.1
ports:
- 8080:8080
depends_on:
- postgres
environment:
- LOG_LEVEL=debug
- LOG_ENABLE_INDYSDK=false
- LOG_JSON_TO_CONSOLE=false
- SERVER_PORT=8080
- SERVER_MAX_REQUEST_SIZE_KB=300
- SERVER_ENABLE_TLS=false
</code></pre>
<p>I haven't really worked with Docker at all. I tried to explore the documentation, but couldn't find something specific.</p>
<p>The goal would be to have these running on a Kubernetes cluster, but for now I just want to build the images in GitLab.</p>
| <p>You could make use of Kaniko as a docker build tool and GitLab's feature called matrix builds.</p>
<p><strong>How to use kaniko for building docker images</strong></p>
<p>Snippet below is taken from the official GitLab <a href="https://docs.gitlab.com/ee/ci/docker/using_kaniko.html#building-a-docker-image-with-kaniko" rel="noreferrer">docs</a></p>
<pre class="lang-yaml prettyprint-override"><code>build:
stage: build
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
script:
- mkdir -p /kaniko/.docker
- echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
- >-
/kaniko/executor
--context "${CI_PROJECT_DIR}"
--dockerfile "${CI_PROJECT_DIR}/Dockerfile"
--destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
rules:
- if: $CI_COMMIT_TAG
</code></pre>
<p><strong>What are matrix builds?</strong></p>
<p>GitLab introduced a feature for parallel job execution called matrix jobs: basically, the job is executed multiple times with different sets of variables (a variable matrix).</p>
<p>Taking the above snippet as a base you would result in something like below (suppose your dockerfiles are stored under <code><PROJECT_DIR>/dockerfiles/</code> and are named <code>A.dockerfile</code> and <code>B.dockerfile</code>, respectively):</p>
<pre class="lang-yaml prettyprint-override"><code>.kaniko-build:
stage: build
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
script:
- mkdir -p /kaniko/.docker
- echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/${DOCKERFILE}"
      --destination "${CI_REGISTRY_IMAGE}/${IMAGE_NAME}:${CI_COMMIT_TAG}"
  rules:
    - if: $CI_COMMIT_TAG
docker-build:
  extends: .kaniko-build
  parallel:
    matrix:
      # paths are relative to ${CI_PROJECT_DIR}, which --dockerfile prepends above;
      # IMAGE_NAME keeps the two builds from overwriting each other's tag
      - DOCKERFILE: dockerfiles/A.dockerfile
        IMAGE_NAME: image-a
      - DOCKERFILE: dockerfiles/B.dockerfile
        IMAGE_NAME: image-b
</code></pre>
|
<p>I am using helm 3, and would like to concatenate 3 secretRefKeys into a single value of Env Var.</p>
<p>This is my cronjob.yaml file:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: my-cronjob
spec:
schedule: {{ .Values.cron }}
jobTemplate:
spec:
template:
spec:
initContainers:
- name: PREFIX
valueFrom:
secretKeyRef:
name: {{ .Values.secretName }}
key: prefix
- name: ENV
valueFrom:
secretKeyRef:
name: {{ .Values.secretName }}
key: env
- name: SUFFIX
valueFrom:
secretKeyRef:
name: {{ .Values.secretName }}
key: suffix
- name: URL_FULL
value: $(PREFIX)$(ENV)$(SUFFIX)
containers:
.
.
.
.
.
</code></pre>
<p>I expect the value of <code>URL_FULL</code> to be the actual values of <code>prefix</code> <code>env</code> <code>suffix</code> concatenated.</p>
<p>But what I get with <code>helm template</code> command is literally what I write in the value:</p>
<pre><code> - name: URL_FULL
value: $(PREFIX)$(ENV)$(SUFFIX)
</code></pre>
<p>Thanks in advance.</p>
| <p>That's correct; when you submit the pod spec to the Kubernetes API it will contain that literal string including the <code>$(...)</code> references (and also the <code>secretRef:</code> blocks and not the values of the secrets).</p>
<p>When Kubernetes creates the Pods and starts the containers, at that point the cluster itself will fill in all of the environment variables and do the string concatenation. If this were a Deployment, you'd be able to verify this by running</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec deployment/my-deployment -it -- \
sh -c 'echo $URL_FULL'
</code></pre>
<p>(In a Job or CronJob you'd probably have to instrument the service itself to see this.)</p>
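<p>As an illustration of the substitution the kubelet performs, here is a simplified sketch in Python (simplified; real Kubernetes also honors declaration order and <code>$$</code> escaping):</p>

```python
import re

def expand(value: str, env: dict) -> str:
    # Replace each $(VAR) with the value of VAR; unresolved
    # references are left unchanged, as Kubernetes does.
    return re.sub(
        r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
        lambda m: env.get(m.group(1), m.group(0)),
        value,
    )

env = {"PREFIX": "https://", "ENV": "staging.", "SUFFIX": "example.com"}
print(expand("$(PREFIX)$(ENV)$(SUFFIX)", env))  # https://staging.example.com
```

<p>Variables that cannot be resolved are left as the literal <code>$(NAME)</code> string, which matches the Kubernetes behavior.</p>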
|
<p>I am using helm 3, and would like to concatenate 3 secretRefKeys into a single value of Env Var.</p>
<p>This is my cronjob.yaml file:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: my-cronjob
spec:
schedule: {{ .Values.cron }}
jobTemplate:
spec:
template:
spec:
initContainers:
- name: PREFIX
valueFrom:
secretKeyRef:
name: {{ .Values.secretName }}
key: prefix
- name: ENV
valueFrom:
secretKeyRef:
name: {{ .Values.secretName }}
key: env
- name: SUFFIX
valueFrom:
secretKeyRef:
name: {{ .Values.secretName }}
key: suffix
- name: URL_FULL
value: $(PREFIX)$(ENV)$(SUFFIX)
containers:
.
.
.
.
.
</code></pre>
<p>I expect the value of <code>URL_FULL</code> to be the actual values of <code>prefix</code> <code>env</code> <code>suffix</code> concatenated.</p>
<p>But what I get with <code>helm template</code> command is literally what I write in the value:</p>
<pre><code> - name: URL_FULL
value: $(PREFIX)$(ENV)$(SUFFIX)
</code></pre>
<p>Thanks in advance.</p>
| <p><code>helm template</code> is not going to resolve those environment variables, it is just going to render the template, as the documentation states.</p>
<p>You will need to deploy the chart. I've made a simple example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app: nginx
spec:
volumes:
- name: shared-volume
emptyDir: {}
initContainers:
- name: busybox
image: busybox
volumeMounts:
- name: shared-volume
mountPath: /nginx-data
command: ["/bin/sh"]
# sleep so that we could exec into it
args: ["-c", "sleep 6000"]
env:
- name: PROP_ONE
valueFrom:
secretKeyRef:
key: one
name: secret-sample
- name: PROP_TWO
valueFrom:
secretKeyRef:
key: two
name: secret-sample
- name: PROP_THREE
value: $(PROP_ONE)
containers:
- name: nginx
image: nginx
volumeMounts:
- name: shared-volume
mountPath: /usr/share/nginx/html
</code></pre>
<p>You can then issue :</p>
<pre><code>helm install foo .
</code></pre>
<p>Then issue a <code>helm ls</code> to see that it's there in the cluster.</p>
<p>Even if you now do a :</p>
<pre><code>kubectl describe pod nginx | grep PROP_THREE
</code></pre>
<p>you will see that the reference is:</p>
<pre><code>PROP_THREE: $(PROP_ONE)
</code></pre>
<p>If you really want to see that env variable, you will have to exec into the container (I do not know of another way):</p>
<pre><code>kubectl exec -it nginx -c busybox -- /bin/sh
</code></pre>
<p>And then:</p>
<pre><code> env | grep PROP_THREE
PROP_THREE=bar
</code></pre>
<p>to see that it is resolved correctly.</p>
|
<p>I am backuping my Postgresql database using this cronjob:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: postgres-backup
spec:
schedule: "0 2 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres-backup
image: postgres:10.4
command: ["/bin/sh"]
args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
env:
- name: PGPASS
valueFrom:
secretKeyRef:
name: pgpass
key: pgpass
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
restartPolicy: Never
volumes:
- name: postgres-backup-storage
hostPath:
path: /var/volumes/postgres-backups
type: DirectoryOrCreate
</code></pre>
<p>The cronjob gets executed successfully, and the backup is made and saved in the Job's container, but this container is stopped after successful execution of the script.
Of course I want to access the backup files in the container, but I can't because it is stopped/terminated.</p>
<p>Is there a way to execute shell commands in a container after it is terminated, so I can access the backup files saved in the container?</p>
<p>I know that I could do that on the node, but I don't have permission to access it.</p>
| <p><a href="https://stackoverflow.com/users/3638081/confused-genius">@confused genius</a> gave me a great idea to create another same container to access the dump files so this is the solution that works:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: postgres-backup
spec:
schedule: "0 2 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: postgres-backup
image: postgres:10.4
command: ["/bin/sh"]
args: ["-c", 'echo "$PGPASS" > /root/.pgpass && chmod 600 /root/.pgpass && pg_dump -Fc -h <host> -U <user> <db> > /var/backups/backup.dump']
env:
- name: PGPASS
valueFrom:
secretKeyRef:
name: dev-pgpass
key: pgpass
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
- name: postgres-restore
image: postgres:10.4
volumeMounts:
- mountPath: /var/backups
name: postgres-backup-storage
restartPolicy: Never
volumes:
- name: postgres-backup-storage
hostPath:
# Ensure the file directory is created.
path: /var/volumes/postgres-backups
type: DirectoryOrCreate
</code></pre>
<p>After that, one just needs to <code>sh</code> into the "postgres-restore" container and access the dump files.</p>
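<p>To locate the Job's pod and inspect the dumps from the second container, something like the following can be used (the pod and job names are placeholders to look up in your cluster):</p>

```
# list the CronJob's pods, then browse the shared volume
kubectl get pods --selector=job-name=<job-name>
kubectl exec -it <pod-name> -c postgres-restore -- ls -lh /var/backups
```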
<p>thanks</p>
|
<p>I have a multi-container pod in k8s, let's call them A and B. When stopping the pod, A must stop before B because A needs B until it's off.</p>
<p>To do that, I registered a <code>preStop</code> hook on A so A can gracefully stop before B.</p>
<p>However I'm not sure this is a good solution, because I miss some information I can't find in k8s documentation:</p>
<p>What happens when a multi-container pod is stopped?</p>
<ul>
<li>All containers <code>preStop</code> hooks are called, then when they are all over all containers receive <code>SIGTERM</code>, or</li>
<li>In parallel, all containers receive <code>preStop</code> if they have one or directly <code>SIGTERM</code> if they don't?</li>
</ul>
<p>In the second case, <code>preStop</code> is useless for what I want to do as B will be instantly killed.</p>
| <p>Typically, during pod deletion, the container runtime sends a TERM signal to the main process in each container.</p>
<p>According to <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">the official documentation</a>:</p>
<blockquote>
<ol>
<li><p>If one of the Pod's containers has defined a <code>preStop</code> <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks" rel="nofollow noreferrer">hook</a>,
the kubelet runs that hook inside of the container.</p>
</li>
<li><p>The kubelet triggers the container runtime to send a TERM signal to process 1 inside each container.</p>
</li>
</ol>
</blockquote>
<p>This numbering can be confusing: it looks like the TERM signal will be sent only after the <code>preStop</code> hook has finished.
I decided to check the actual order with a simple example below.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
restartPolicy: Never
volumes:
- name: config
configMap:
name: nginx-conf
containers:
- name: container-1
image: nginx
lifecycle:
preStop:
exec:
command: ["/bin/sleep","15"]
ports:
- containerPort: 80
- name: container-2
image: nginx
ports:
- containerPort: 81
volumeMounts:
- name: config
mountPath: /etc/nginx/conf.d
terminationGracePeriodSeconds: 30
</code></pre>
<p>Container-1 has a <code>preStop</code> hook with a 15-second delay.
I connected to both containers to observe the behavior during pod deletion.</p>
<p><strong>Result</strong></p>
<p>After pod deletion:</p>
<ol>
<li><p>Container-1 worked for 15 seconds, before the connection was lost</p>
</li>
<li><p>Container-2 immediately lost connection</p>
</li>
</ol>
<p><strong>Conclusion</strong></p>
<p>If the container has a <code>preStop</code> hook, it will try to execute it. Only then will it receive the TERM signal. <em>The main condition in this case: the grace period has not expired.</em></p>
<p>If the container doesn't have a <code>preStop</code> hook, it will receive the TERM signal immediately after the command to remove the pod. Thus, it will not wait for another container's <code>preStop</code> hook to finish.</p>
<blockquote>
<p><strong>Note:</strong> The containers in the Pod receive the TERM signal at different times and in an arbitrary order. If the order of shutdowns
matters, consider using a <code>preStop</code> hook to synchronize.</p>
</blockquote>
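<p>Applied to the original question, where A must stop before B: since each container's shutdown starts independently, you can give B a <code>preStop</code> hook that delays its TERM while A shuts down. A rough sketch (the image names are placeholders and the sleep duration is a guess; a robust version would poll A's shutdown instead of sleeping):</p>

```yaml
spec:
  terminationGracePeriodSeconds: 60   # must cover both shutdowns
  containers:
  - name: container-a                 # needs B while shutting down
    image: app-a:latest
  - name: container-b
    image: app-b:latest
    lifecycle:
      preStop:
        exec:
          # hold B's TERM back until A has had time to exit
          command: ["/bin/sleep", "20"]
```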
|
<p>I am using a service account with a role assigned to it using OIDC. I opened a shell in the pod and checked the current role:</p>
<p><a href="https://i.stack.imgur.com/57fNu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/57fNu.png" alt="enter image description here" /></a></p>
<p>but when my service does the same thing, it uses the node role:</p>
<p><a href="https://i.stack.imgur.com/UMFhZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UMFhZ.png" alt="enter image description here" /></a></p>
<h2>Versions of Java SDK</h2>
<ul>
<li>aws-java-sdk-core:1.11.505</li>
</ul>
| <blockquote>
<p>The containers in your pods must use an AWS SDK version that supports
assuming an IAM role via an OIDC web identity token file.</p>
</blockquote>
<p>Check that you meet the minimum requirements: for <a href="https://github.com/boto/boto3" rel="nofollow noreferrer">boto3</a> it is 1.9.220, for <a href="https://github.com/boto/botocore" rel="nofollow noreferrer">botocore</a> it is 1.12.200. Since you are using the AWS SDK for Java (aws-java-sdk-core 1.11.505) rather than boto3, the same rule applies: check that your SDK release is recent enough to support assuming a role via a web identity token file; early 1.11.x releases predate this feature.</p>
|
<p>I have 3 vps with docker installed. I have created a docker image and I would like to know if there is a possibility to configure docker or docker-compose in such a way that these 3 containers run only once a day at random hours.</p>
<p>I have not found any way other than to start the docker container from a bash script, called from the Linux crontab, which waits a random amount of time. Are there better solutions?
Maybe k8s / k3s?</p>
| <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>Possible solutions to this problem are listed below.</p>
<p><strong>Docker</strong></p>
<p>A bash script, called from the Linux crontab, that waits a random amount of time and then starts Docker.</p>
<p><strong>Kubernetes</strong></p>
<p>Since Docker / Docker-compose doesn't have its own way of scheduling a container to run at a random time, a CronJob might be the easiest way to do this.</p>
<p>More information about CronJobs is available in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">the official documentation</a></p>
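<p>A sketch of such a CronJob (the image and command are placeholders, and it assumes <code>shuf</code> exists in the image):</p>

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: run-daily-at-random-hour
spec:
  schedule: "0 0 * * *"            # trigger at midnight...
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: app
            image: my-image:latest          # placeholder image
            command: ["/bin/sh", "-c"]
            # ...then sleep a random offset of up to 23 hours so the
            # effective start time is different every day
            args: ["sleep $(shuf -i 0-82800 -n 1) && exec /app/run"]
```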
|
<p>I am using Velero to create and backup and restore, Velero has controllers which get triggered when I can create the custom objects.</p>
<pre><code>import veleroApi "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
restoreObj := veleroApi.Restore{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
DeletionGracePeriodSeconds: &gracePeriodSeconds,
},
Spec: veleroApi.RestoreSpec{
BackupName: "backup-name-20211101",
RestorePVs: &restorePV,
},
Status: veleroApi.RestoreStatus{},
}
</code></pre>
<p>But how can I submit this custom object to the Kube API server?</p>
<p>I used API client to apply the changes:</p>
<pre><code>apiClient.CoreV1().RESTClient().Patch(types.ApplyPatchType).Body(restoreObj).Do(context)
</code></pre>
<p>But I am getting:</p>
<pre><code>unknown type used for body: {TypeMeta:{Kind:Restore APIVersion:velero.io/v1} ObjectMeta:{Name: GenerateName: Namespace:velero SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:0xc000256018 Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[]} Spec:{BackupName:backup-name-20211101 ScheduleName: IncludedNamespaces:[] ExcludedNamespaces:[] IncludedResources:[] ExcludedResources:[] NamespaceMapping:map[] LabelSelector:nil RestorePVs:0xc0007a9088 PreserveNodePorts:<nil> IncludeClusterResources:<nil> Hooks:{Resources:[]}} Status:{Phase: ValidationErrors:[] Warnings:0 Errors:0 FailureReason: StartTimestamp:<nil> CompletionTimestamp:<nil> Progress:<nil>}}
</code></pre>
| <p>If you would like to create a client for a custom object, follow these steps:</p>
<ol>
<li>Describe the custom resource for which you would like to create a rest client:</li>
</ol>
<pre><code>kubectl describe CustomResourceDefinition <custom resource definition name>
</code></pre>
<p>Note down the <code>API and version</code> and the <code>Kind</code>; as an example it would look like:</p>
<pre><code>API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
</code></pre>
<p>Here, <code>apiextensions.k8s.io</code> is the API group and <code>v1</code> is the version.</p>
<ol start="2">
<li>Check if the <code>API version</code> that you got from step 1 is in the list of APIs:</li>
</ol>
<pre><code>kubectl get --raw "/"
</code></pre>
<ol start="3">
<li>Create the client:</li>
</ol>
<pre><code>func getClusterConfig() *rest.Config {
config, err := rest.InClusterConfig()
if err != nil {
glog.Fatal(err.Error())
}
return config
}
func getRestClient() *rest.RESTClient {
cfg := getClusterConfig()
gv := schema.GroupVersion{Group: "<API>", Version: "<version>"}
cfg.GroupVersion = &gv
cfg.APIPath = "/apis" // you can verify the path from step 2
var Scheme = runtime.NewScheme()
var Codecs = serializer.NewCodecFactory(Scheme)
cfg.NegotiatedSerializer = Codecs.WithoutConversion()
restClient, err := rest.RESTClientFor(cfg)
if err != nil {
panic(err.Error())
}
return restClient
}
</code></pre>
<p>Alternatively, check the answer from kozmo <a href="https://stackoverflow.com/a/63424133/2704032">here</a></p>
<hr />
<p>For Velero you can reuse the client they have.</p>
<p>As an example take a look at <a href="https://github.com/vmware-tanzu/velero/blob/3b2e9036d178831e9be9aa90403c4aad42793cb6/pkg/cmd/cli/restore/create.go#L319" rel="nofollow noreferrer">this code</a>:</p>
<pre><code>restore, err := o.client.VeleroV1().Restores(restore.Namespace).Create(context.TODO(), restore, metav1.CreateOptions{})
</code></pre>
|
<p>My use case is to make a kubernetes pod's metrics be available for consumption every 2 secs. It is required to poll the pod at this interval to maintain a healthy control plane (determine pods that are possibly choked and avoid routing traffic to those endpoints).</p>
<p>I'm using metrics-server right now, but it is not best suited for my use case.
I came across the note below, <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>Metrics Server is not meant for non-autoscaling purposes. For example,
don't use it to forward metrics to monitoring solutions, or as a
source of monitoring solution metrics. In such cases please collect
metrics from Kubelet /metrics/resource endpoint directly.</p>
<p>How often metrics are scraped? Default 60 seconds, can be changed
using metric-resolution flag. We are not recommending setting values
below 15s, as this is the resolution of metrics calculated by Kubelet.</p>
</blockquote>
<ol>
<li>How should one use the kubelet metrics endpoint directly? All examples I've come across use <code>metrics.k8s.io</code>.</li>
<li>The other approach is to get <code>/sys/fs/cgroup/cpu/cpuacct.usage</code> reading from the docker containers directly, but there needs to be an aggregation layer. How should one design this stats aggregation layer?</li>
</ol>
<p>Are there other approaches? What is the best recommended way to address my requirement? Thanks.</p>
| <p>I think the easiest would be to add a <code>/metrics</code> endpoint on your workloads and then scrape that using <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus" rel="nofollow noreferrer">prometheus</a>, and then implement the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-adapter" rel="nofollow noreferrer">prometheus adapter</a> to plug into HPA.</p>
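<p>If the workloads are scraped with the community Prometheus chart's default configuration, pod discovery can be driven by annotations. A minimal sketch (the pod name, image, port and path values are assumptions and must match wherever your app actually serves its metrics):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-workload                  # hypothetical name
  annotations:
    prometheus.io/scrape: "true"     # opt the pod into annotation-based discovery
    prometheus.io/port: "8080"       # port where /metrics is served
    prometheus.io/path: "/metrics"   # the default, shown for clarity
spec:
  containers:
  - name: app
    image: my-app:latest             # hypothetical image
    ports:
    - containerPort: 8080
</code></pre>
<p>Note that these annotations are a convention of the chart's default scrape config, not a Kubernetes feature; if you customized <code>prometheus.yml</code>, adjust accordingly.</p>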
|
<p>I have an Nginx ingress controller in my Kubernetes cluster and I use bitnami/nats.</p>
<p>The services produced are the following:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nats-client ClusterIP 10.72.13.82 <none> 4222/TCP 36m
nats-cluster ClusterIP 10.72.8.67 <none> 6222/TCP 36m
nats-headless ClusterIP None <none> 4222/TCP,6222/TCP 36m
nats-monitoring ClusterIP 10.72.2.102 <none> 8222/TCP 36m
</code></pre>
<p>When I kubefwd the port (4222) I can connect to it, but using the ingress below it doesn't work.</p>
<p>Here's my ingress.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-controller
annotations:
kubernetes.io/ingress.class: "nginx"
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: "letsencrypt-cluster-issuer"
spec:
tls:
- hosts:
- NATS_DOMAIN
secretName: secret
rules:
- host: NATS_DOMAIN
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nats-client
port:
number: 4222
</code></pre>
<p>Any help would be highly appreciated!</p>
| <p>Do you mind opening an issue <a href="https://github.com/bitnami/charts/issues" rel="nofollow noreferrer">here</a>? That way we could ask for more information and provide better help. Once it is solved, we can come back here and post the findings so other users can benefit too.
Please, when opening the issue, provide all the information that could help with debugging: commands executed, images used, charts used, logs, etc.</p>
|
<p>I have a Kubernetes cluster with an install of <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> (Prometheus 2.27.1, kube-state-metrics v2.0.0)</p>
<p>I would like to have a query to return how much time each pod was running, over last 24 hours</p>
<ul>
<li>If a pod is still running, the time from its creation to now</li>
<li>If a post has terminated, the time from creation to completion</li>
</ul>
<p>Importantly, I need exactly the time the pod existed, as opposed to CPU usage.</p>
<p>I can do something like this with:</p>
<pre><code>kube_pod_completion_time - kube_pod_created
</code></pre>
<p>but it returns nothing for pods that are still running. And, since Prometheus does not return metrics that are more than 5 min old, it will not report anything for pods that were terminated and deleted.</p>
<p>How would I query Prometheus without these issues?</p>
| <p>One working solution is this:</p>
<pre><code>sum by(namespace, pod) (
(last_over_time(kube_pod_completion_time[1d])
- last_over_time(kube_pod_created[1d]))
or
(time() - kube_pod_created)
)
</code></pre>
<p>The first part inside <code>sum</code> handles the case of pods that have terminated. We pick the last value of <code>kube_pod_completion_time</code> and <code>kube_pod_created</code> and compute the difference.</p>
<p>The second part handles the pods that are still running. In that case, there is a fresh value of the <code>kube_pod_created</code> metric, and we can subtract it from the current time.</p>
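<p>Since kube-prometheus-stack is already installed, the expression can also be precomputed as a recording rule through the operator's <code>PrometheusRule</code> CRD. A sketch under assumptions: the rule and record names here are made up, and the <code>release</code> label must match whatever your rule selector expects (by default, the Helm release name):</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-runtime-rules            # hypothetical name
  labels:
    release: kube-prometheus-stack   # must match your Helm release / ruleSelector
spec:
  groups:
  - name: pod-runtime.rules
    rules:
    - record: namespace_pod:runtime_seconds:1d
      expr: |
        sum by(namespace, pod) (
          (last_over_time(kube_pod_completion_time[1d])
            - last_over_time(kube_pod_created[1d]))
          or
          (time() - kube_pod_created)
        )
</code></pre>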
|
<p>We run a Kubernetes-compatible (OKD 3.11) on-prem / private cloud cluster with backend apps communicating with low-latency Redis databases used as caches and K/V stores. The new architecture design is about to divide worker nodes equally between two geographically distributed data centers ("regions"). We can assume static pairing between node names and regions, and now we have added labeling of nodes with region names as well.</p>
<p>What would be the recommended approach to protect low-latency communication with the in-memory databases, making client apps stick to the same region as the database they are allowed to use? Spinning up additional replicas of the databases is feasible, but does not prevent round-robin routing between the two regions...</p>
<p>Related: <a href="https://stackoverflow.com/questions/55743299/kubernetes-node-different-region-in-single-cluster">Kubernetes node different region in single cluster</a></p>
| <p>Posting this out of comments as community wiki for better visibility, feel free to edit and expand.</p>
<hr />
<p>Best option to solve this question is to use <a href="https://istio.io/latest/docs/tasks/traffic-management/locality-load-balancing/" rel="nofollow noreferrer"><code>istio - Locality Load Balancing</code></a>. Major points from the link:</p>
<blockquote>
<p>A locality defines the geographic location of a workload instance
within your mesh. The following triplet defines a locality:</p>
<ul>
<li><p>Region: Represents a large geographic area, such as us-east. A region typically contains a number of availability zones. In
Kubernetes, the label topology.kubernetes.io/region determines a
node’s region.</p>
</li>
<li><p>Zone: A set of compute resources within a region. By running services in multiple zones within a region, failover can occur between
zones within the region while maintaining data locality with the
end-user. In Kubernetes, the label topology.kubernetes.io/zone
determines a node’s zone.</p>
</li>
<li><p>Sub-zone: Allows administrators to further subdivide zones for more fine-grained control, such as “same rack”. The sub-zone concept
doesn’t exist in Kubernetes. As a result, Istio introduced the custom
node label topology.istio.io/subzone to define a sub-zone.</p>
</li>
</ul>
<p>That means that a pod running in zone <code>bar</code> of region <code>foo</code> is not
considered to be local to a pod running in zone <code>bar</code> of region <code>baz</code>.</p>
</blockquote>
<hr />
<p>Another option that can be considered with traffic balancing adjusting is suggested in comments:</p>
<p>use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer"><code>nodeAffinity</code></a> to achieve consistency between scheduling <code>pods</code> and <code>nodes</code> in specific "regions".</p>
<blockquote>
<p>There are currently two types of node affinity, called
<code>requiredDuringSchedulingIgnoredDuringExecution</code> and
<code>preferredDuringSchedulingIgnoredDuringExecution</code>. You can think of
them as "hard" and "soft" respectively, in the sense that the former
specifies rules that must be met for a pod to be scheduled onto a node
(similar to nodeSelector but using a more expressive syntax), while
the latter specifies preferences that the scheduler will try to
enforce but will not guarantee. The "IgnoredDuringExecution" part of
the names means that, similar to how nodeSelector works, if labels on
a node change at runtime such that the affinity rules on a pod are no
longer met, the pod continues to run on the node. In the future we
plan to offer <code>requiredDuringSchedulingRequiredDuringExecution</code> which
will be identical to <code>requiredDuringSchedulingIgnoredDuringExecution</code>
except that it will evict pods from nodes that cease to satisfy the
pods' node affinity requirements.</p>
<p>Thus an example of <code>requiredDuringSchedulingIgnoredDuringExecution</code>
would be "only run the pod on nodes with Intel CPUs" and an example
<code>preferredDuringSchedulingIgnoredDuringExecution</code> would be "try to run
this set of pods in failure zone XYZ, but if it's not possible, then
allow some to run elsewhere".</p>
</blockquote>
<p><strong>Update</strong>: based on <a href="https://stackoverflow.com/questions/70006961/best-method-to-keep-client-server-traffic-in-the-same-region-node-in-kubernete#comment123810994_70036319">@mirekphd comment</a>, it will still not be fully functioning in a way it was asked to:</p>
<blockquote>
<p>It turns out that in practice Kubernetes does not really let us switch
off secondary zone, as soon as we spin up a realistic number of pod
replicas (just a few is enough to see it)... they keep at least some
pods in the other zone/DC/region by design (which is clever when you
realize that it removes the dependency on the docker registry
survival, at least under default imagePullPolicy for tagged images),
<a href="https://github.com/kubernetes/kubernetes/issues/99630#issuecomment-790740081" rel="nofollow noreferrer">GitHub issue #99630 - NodeAffinity preferredDuringSchedulingIgnoredDuringExecution doesn't work well</a></p>
</blockquote>
<p>Please refer to <a href="https://stackoverflow.com/a/70041931/15537201">@mirekphd's answer</a></p>
|
<p>We run a Kubernetes-compatible (OKD 3.11) on-prem / private cloud cluster with backend apps communicating with low-latency Redis databases used as caches and K/V stores. The new architecture design is about to divide worker nodes equally between two geographically distributed data centers ("regions"). We can assume static pairing between node names and regions, and now we have added labeling of nodes with region names as well.</p>
<p>What would be the recommended approach to protect low-latency communication with the in-memory databases, making client apps stick to the same region as the database they are allowed to use? Spinning up additional replicas of the databases is feasible, but does not prevent round-robin routing between the two regions...</p>
<p>Related: <a href="https://stackoverflow.com/questions/55743299/kubernetes-node-different-region-in-single-cluster">Kubernetes node different region in single cluster</a></p>
| <p>So an effective region-pinning solution is more complex than just using <code>nodeAffinity</code> in the "preferred" version. That alone will cause you a lot of unpredictable surprises, due to the opinionated character of Kubernetes, which has zone spreading hard-coded, as seen in this <a href="https://github.com/kubernetes/kubernetes/issues/99630#issuecomment-790705286" rel="nofollow noreferrer">GitHub issue</a>, where they clearly try to put at least some eggs in another basket and see zone selection as an antipattern.</p>
<p>In practice the usefulness of <code>nodeAffinity</code> alone is restricted to a scenario with a very limited number of pod replicas, because when the pods number exceeds the number of nodes in a region (i.e. typically for the 3rd replica in a 2-nodes / 2-regions setup), scheduler will start "correcting" or "fighting" with user preference weights (even as unbalanced of 100:1) very much in favor of spreading, placing at least one "representative" pod on every node in every region (including the non-preferred ones with minimum possible weights of 1).</p>
<p>But this default zone spreading issue can be overcome if you create a single-replica container that will act as a "master" or "anchor" (a natural example being a database). For this single-pod "master" <code>nodeAffinity</code> will still work correctly - of course in the HA variant, i.e. "preferred" not "required" version. As for the remaining multi-pod apps, you use something else - <code>podAffinity</code> (this time in the "required" version), which will make the "slave" pods follow their "master" between zones, because setting any pod-based spreading disables the default zone spreading. You can have as many replicas of the "slave" pods as you want and never run into a single misplaced pod (at least at schedule time), because of the "required" affinity used for "slaves". Note that the known limitation of <code>nodeAffinity</code> applies here as well: the number of "master" pod replicas must not exceed the number of nodes in a region, or else "zone spreading" will kick in.</p>
<p>And here's an example of how to label the "master" pod correctly for the benefit of <code>podAffinity</code> and using a deployment config YAML file: <a href="https://stackoverflow.com/a/70041308/9962007">https://stackoverflow.com/a/70041308/9962007</a></p>
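<p>To make the pattern concrete, here is a hedged sketch of both pieces: a single-replica "master" pinned with preferred node affinity, and "slave" replicas that follow it with required pod affinity. All names, images and label values are made up, and the region label key is whatever your nodes actually carry (on clusters as old as OKD 3.11 that may be the legacy <code>failure-domain.beta.kubernetes.io/region</code> or your own custom label, rather than <code>topology.kubernetes.io/region</code>):</p>
<pre><code># single-replica "master" (e.g. the database), preferring the primary region
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-master
  template:
    metadata:
      labels:
        app: my-master
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values: ["primary-dc"]
      containers:
      - name: db
        image: redis:6
---
# multi-replica "slaves", required to land in the master's region
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-master
            topologyKey: topology.kubernetes.io/region
      containers:
      - name: app
        image: my-app:latest
</code></pre>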
|
<ol>
<li><p>I created a build config from the git repo (Docker based app).</p>
<p><code>oc new-build <git-url> --name=myApp</code></p></li>
<li><p>Created a deployment config using:</p>
<p><code>oc create dc myApp --image=<image-stream-repo></code></p></li>
</ol>
<p>I am adding labels to this deployment config from the UI.</p>
<ol start="3">
<li>Now when I deploy the application (from the UI) or use <code>oc rollout</code>, the POD is created and the application is running in the POD.</li>
</ol>
<p><strong>But the label (name: myApp) is not being applied to the POD.</strong> As a result the <code>selector</code> in the <code>service</code> is failing to identify the POD. The service is configured with label <code>name: myApp</code> as POD selector.</p>
<p>If I go to the created POD and <code>edit yaml</code> and add <code>name: myApp</code> in the <code>labels</code> section, only then the POD is being identified by the service.</p>
<p>In case anyone has faced a similar issue, please provide your inputs on overcoming this problem.</p>
| <p>To set a label on a <code>Pod</code> using a <code>DeploymentConfig</code>, the label must be placed in a much more deeply nested key, as shown below, or else it will be set on the <code>DeploymentConfig</code> itself, not the <code>Pod</code>:</p>
<pre><code>objects:
- kind: DeploymentConfig
apiVersion: v1
metadata:
name: my-name
# labels: NOT HERE (FOR DC ONLY)!
spec:
replicas: 1
selector:
app: my-name
strategy:
type: Rolling
template:
metadata:
labels:
app: my-name
myAppsGroup: myApp
</code></pre>
<p>You can verify the correctness of pod labeling using <code>oc describe pod my-name</code></p>
|
<p>I have 2 kubernetes clusters in the IBM cloud, one has 2 Nodes, the other one 4.</p>
<p>The one that has 4 Nodes is working properly but at the other one I had to temporarily remove the worker nodes due to monetary reasons (shouldn't be payed while being idle).</p>
<p>When I reactivated the two nodes, everything seemed to start up fine and as long as I don't try to interact with Pods it still looks fine on the surface, no messages about inavailability or critical health status. OK, I deleted two obsolete <code>Namespace</code>s which got stuck in the <code>Terminating</code> state, but I could resolve that issue by restarting a cluster node (don't exactly know anymore which one it was).</p>
<p>When everything looked ok, I tried to access the kubernetes dashboard (everything done before was on IBM management level or in the command line) but surprisingly I found it unreachable with an error page in the browser stating:</p>
<blockquote>
<p>503: Service Unavailable</p>
</blockquote>
<p>There was a small JSON message at the bottom of that page, which said:</p>
<pre class="lang-json prettyprint-override"><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": { },
"status": "Failure",
"message": "error trying to reach service: read tcp 172.18.190.60:39946-\u003e172.19.151.38:8090: read: connection reset by peer",
"reason": "ServiceUnavailable",
"code": 503
}
</code></pre>
<p>I sent a <code>kubectl logs kubernetes-dashboard-54674bdd65-nf6w7 --namespace=kube-system</code> where the <code>Pod</code> was shown as running, but the result was not the logs; it was this message instead:</p>
<pre class="lang-none prettyprint-override"><code>Error from server: Get "https://10.215.17.75:10250/containerLogs/kube-system/kubernetes-dashboard-54674bdd65-nf6w7/kubernetes-dashboard":
read tcp 172.18.135.195:56882->172.19.151.38:8090:
read: connection reset by peer
</code></pre>
<p>Then I found out I'm neither able to get the logs of <strong>any</strong> <code>Pod</code> running in that cluster, nor am I able to deploy any new custom kubernetes object that requires scheduling (I actually could apply <code>Service</code>s or <code>ConfigMap</code>s, but no <code>Pod</code>, <code>ReplicaSet</code>, <code>Deployment</code> or similar).</p>
<p>I already tried to</p>
<ul>
<li>reload the worker nodes in the workerpool</li>
<li>restart the worker nodes in the workerpool</li>
<li>restarted the kubernetes-dashboard <code>Deployment</code></li>
</ul>
<p>Unfortunately, none of the above actions changed the accessibility of the <code>Pod</code>s.</p>
<p>There's another thing that might be related (though I'm not quite sure it actually is):</p>
<p>In the other cluster that runs fine, there are three calico <code>Pod</code>s running and all three are up while in the problematic cluster only 2 of the three calico <code>Pod</code>s are up and running, the third one stays in <code>Pending</code> state and a <code>kubectl describe pod calico-blablabla-blabla</code> reveals the reason, an <code>Event</code></p>
<pre class="lang-none prettyprint-override"><code>Warning FailedScheduling 13s default-scheduler
0/2 nodes are available: 2 node(s) didn't have free ports for the requested pod ports.
</code></pre>
<p>Does anyone have a clue about what's going on in that cluster and can point me to possible solutions? I don't really want to delete the cluster and spawn a new one but I cannot use the user interfaces (dashboard or cli).</p>
<h2>Edit</h2>
<p>The result of <code>kubectl describe pod kubernetes-dashboard-54674bdd65-4m2ch --namespace=kube-system</code>:</p>
<pre class="lang-none prettyprint-override"><code>Name: kubernetes-dashboard-54674bdd65-4m2ch
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: 10.215.17.82/10.215.17.82
Start Time: Mon, 15 Nov 2021 09:01:30 +0100
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=54674bdd65
Annotations: cni.projectcalico.org/containerID: ca52cefaae58d8e5ce6d54883cb6a6135318c8db53d231dc645a5cf2e67d821e
cni.projectcalico.org/podIP: 172.30.184.2/32
cni.projectcalico.org/podIPs: 172.30.184.2/32
container.seccomp.security.alpha.kubernetes.io/kubernetes-dashboard: runtime/default
kubectl.kubernetes.io/restartedAt: 2021-11-10T15:47:14+01:00
kubernetes.io/psp: ibm-privileged-psp
Status: Running
IP: 172.30.184.2
IPs:
IP: 172.30.184.2
Controlled By: ReplicaSet/kubernetes-dashboard-54674bdd65
Containers:
kubernetes-dashboard:
Container ID: containerd://bac57850055cd6bb944c4d893a5d315c659fd7d4935fe49083d9ef8ae03e5c31
Image: registry.eu-de.bluemix.net/armada-master/kubernetesui-dashboard:v2.3.1
Image ID: registry.eu-de.bluemix.net/armada-master/kubernetesui-dashboard@sha256:f14f581d36b83fc9c1cfa3b0609e7788017ecada1f3106fab1c9db35295fe523
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
--namespace=kube-system
State: Running
Started: Mon, 15 Nov 2021 09:01:37 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 50m
memory: 100Mi
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Readiness: http-get https://:8443/ delay=10s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sc9kw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-sc9kw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 600s
node.kubernetes.io/unreachable:NoExecute op=Exists for 600s
Events: <none>
</code></pre>
| <h1>Problem resolved…</h1>
<p>The cause of the problem was an update of the cluster to Kubernetes version 1.21 while the cluster met the following conditions:</p>
<ul>
<li>private and public service endpoint enabled</li>
<li>VRF disabled</li>
</ul>
<h2>Root cause:</h2>
<p>In Kubernetes version 1.21, Konnectivity replaces OpenVPN as the network proxy that is used to secure the communication from the Kubernetes API server master to the worker nodes in the cluster.<br />
When using Konnectivity, a problem exists with master-to-worker-node communication when all of the above-mentioned conditions are met.</p>
<h2>Solution steps:</h2>
<ul>
<li>disabled the private service endpoint (the public one seems not to be a problem) by using the command<br />
<code>ibmcloud ks cluster master private-service-endpoint disable --cluster <CLUSTER_NAME></code>
(this command is provider specific, if you are experiencing the same problem with a different provider or on a local installation, find out how to disable that private service endpoint)</li>
<li>refreshed the cluster master using
<code>ibmcloud ks cluster master refresh --cluster <CLUSTER_NAME></code> and finally</li>
<li>reloaded all the worker nodes (in the web console, should be possible through a command as well)</li>
<li>waited for about 30 minutes:
<ul>
<li>Dashboard available / reachable again</li>
<li><code>Pod</code>s accessible and schedulable again</li>
</ul>
</li>
</ul>
<hr />
<h2>General recommendation:</h2>
<p><strong>BEFORE</strong> you update any cluster to kubernetes 1.21, check if you have enabled the private service endpoint. If you have, either disable it or delay the update until you can, or enable VRF (virtual routing and forwarding), which I couldn't but was told it was likely to resolve the issue.</p>
|
<p>I want to implement custom logic to determine readiness for my pod, and I went over this: <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state" rel="noreferrer">https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.kubernetes-probes.external-state</a> and they mention an example property:
<code>management.endpoint.health.group.readiness.include=readinessState,customCheck</code></p>
<p>Question is - how do I override <code>customCheck</code>?
In my case I want to use HTTP probes, so the yaml looks like:</p>
<pre><code>readinessProbe:
initialDelaySeconds: 10
periodSeconds: 10
httpGet:
path: /actuator/health
port: 12345
</code></pre>
<p>So then again - where and how should I apply logic that would determine when the app is ready (just like the link above, i'd like to rely on an external service in order for it to be ready)</p>
| <p><code>customCheck</code> is the key for your custom <code>HealthIndicator</code>. The key for a given <code>HealthIndicator</code> is the name of the bean without the <code>HealthIndicator</code> suffix.</p>
<p>You can read:
<a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.writing-custom-health-indicators" rel="nofollow noreferrer">https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.health.writing-custom-health-indicators</a></p>
<p>You are defining readinessProbe, so probably hiting /actuator/health/readiness is a better choice.</p>
<pre><code>import org.springframework.boot.actuate.availability.AvailabilityStateHealthIndicator;
import org.springframework.boot.actuate.health.Status;
import org.springframework.boot.availability.ApplicationAvailability;
import org.springframework.boot.availability.AvailabilityState;
import org.springframework.boot.availability.ReadinessState;
import org.springframework.stereotype.Component;

// The bean name "customCheckHealthIndicator" yields the health key "customCheck".
@Component
public class CustomCheckHealthIndicator extends AvailabilityStateHealthIndicator {

    private final YourService yourService;

    public CustomCheckHealthIndicator(ApplicationAvailability availability, YourService yourService) {
        super(availability, ReadinessState.class, (statusMappings) -> {
            statusMappings.add(ReadinessState.ACCEPTING_TRAFFIC, Status.UP);
            statusMappings.add(ReadinessState.REFUSING_TRAFFIC, Status.OUT_OF_SERVICE);
        });
        this.yourService = yourService;
    }

    @Override
    protected AvailabilityState getState(ApplicationAvailability applicationAvailability) {
        if (yourService.isInitCompleted()) {
            return ReadinessState.ACCEPTING_TRAFFIC;
        } else {
            return ReadinessState.REFUSING_TRAFFIC;
        }
    }
}
</code></pre>
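<p>With the indicator registered as a bean named <code>customCheckHealthIndicator</code> (which yields the health key <code>customCheck</code>), the wiring on both sides would look roughly like this; the port is taken from your question, and the property forcing the probe endpoints on is only needed when not running on a detected Kubernetes environment:</p>
<pre><code># application.properties
management.endpoint.health.probes.enabled=true
management.endpoint.health.group.readiness.include=readinessState,customCheck
</code></pre>
<pre><code>readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 10
  httpGet:
    path: /actuator/health/readiness
    port: 12345
</code></pre>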
|
<p>I have a situation where I am deploying some chart, let's call it "myChart".</p>
<p>Let's suppose I have a pipeline, where I am doing below:</p>
<pre><code>helm delete myChart_1.2 -n <myNamespace>
</code></pre>
<p>And <strong>right after</strong>, I am installing a new one:</p>
<pre><code>helm install myChart_1.3 -n <myNamespace>
</code></pre>
<p>Does Kubernetes or maybe Helm knows that all the resources should be deleted first and then install new?</p>
<p>For instance there might be some PVC and PV that are still not deleted. Is there any problem with that, should I add some waits before deployment?</p>
| <p>Helm delete (a.k.a. uninstall) should remove the objects managed in a given release before exiting.</p>
<p>Still, when the command returns: you could be left with resources in a Terminating state, pending actual deletion.</p>
<p>Usually, the culprit is a PVC that may still be attached to a running container.</p>
<p>Or objects such as ReplicaSets or Pods: most likely, your Helm chart installs Deployments, DaemonSets, StatefulSets, ... and the top-level objects may appear to be deleted while their child objects are still being terminated.</p>
<p>Although this shouldn't be an issue for Helm, assuming that your application is installed using a generated name, and as long as your chart is able to create multiple instances of a same application, in a same cluster/namespace, ... without them overlapping ( => if all resources managed through Helm have unique names, which is not always the case ).</p>
<p>If your chart is hosted on a public repository, let us know what to check. And if you're not one of the maintainer for that chart: beware that Helm charts could go from amazing to very bad, depending on who's contributing, what use cases have been met so far, ...</p>
|
<p>Can I replicate data between Kubernetes PV into two separate clusters situated in different data centers?</p>
<p>I have a cluster with associated PV running in Primary site. I have a separate cluster running in DR site.</p>
<p>How do I continuously replicate data from the primary site to the DR site, so that when the application is running from DR, the data written to the primary PVs is available there?</p>
<p>Application writes files to the PV like xls, csv etc.</p>
<p>I can use any OSS storage orchestrator like openebs, rook, storageos etc.</p>
<p>Database is outside of kubernetes.</p>
| <p><a href="https://stackoverflow.com/users/1328672/narain">Narain</a> is right. <strong>Kubernetes doesn't contain any functionality that would allow you to synchronize two PVs used by two different clusters</strong>. So you would need to find your own solution to synchronize those two filesystems. It can be an existing solution like <code>lsyncd</code>, proposed in <a href="https://unix.stackexchange.com/a/307049">this thread</a> or any custom solution like the above mentioned <code>rsync</code> which can be wrapped into a simple <code>bash</code> script and run periodically in <code>cron</code>.</p>
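<p>If you settle on the <code>rsync</code>-in-<code>cron</code> approach, it can run inside the primary cluster itself as a <code>CronJob</code> that mounts the PVC. This is only a sketch under several assumptions: the PVC name, the DR endpoint, the pre-provisioned SSH key secret and the image are all hypothetical, and <code>batch/v1</code> CronJobs require Kubernetes 1.21+ (use <code>batch/v1beta1</code> on older clusters):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: pv-sync                        # hypothetical
spec:
  schedule: "*/5 * * * *"              # every 5 minutes
  concurrencyPolicy: Forbid            # don't overlap long-running syncs
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: rsync
            image: my-rsync-ssh:latest # any image shipping rsync + ssh
            command: ["sh", "-c"]
            args:
            - >
              rsync -az --delete
              -e "ssh -i /ssh/id_rsa -o StrictHostKeyChecking=no"
              /data/ syncuser@dr-site.example.com:/data/
            volumeMounts:
            - name: data
              mountPath: /data
            - name: ssh-key
              mountPath: /ssh
              readOnly: true
          volumes:
          - name: data
            persistentVolumeClaim:
              claimName: my-app-pvc    # your PVC
          - name: ssh-key
            secret:
              secretName: dr-ssh-key
              defaultMode: 0400
</code></pre>
<p>Note this gives periodic, one-way synchronization rather than continuous replication; for near-real-time needs, a storage orchestrator with built-in replication (as mentioned in the question) is a better fit.</p>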
|
<p>I want to list my deployments based on a key value pair found in the <code>.spec.selector.matchLabels</code> field.</p>
<p>Doing so using the plain <code>labels</code> is easy, but I could not find a way to match / fetch the deployment that satisfies the condition that a certain <code>key=value</code> is present in the following section</p>
<pre><code>spec:
[...]
selector:
matchLabels:
app: myapp
process: web
release: myrelease
</code></pre>
<p>It does not seem this can be done using the <a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#ListOptions" rel="noreferrer"><code>ListOptions</code></a>.</p>
| <p>It is not supported:</p>
<ul>
<li><a href="https://github.com/kubernetes/client-go/issues/713#issuecomment-557540936" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/issues/713#issuecomment-557540936</a></li>
<li>and <a href="https://github.com/kubernetes/kubernetes/issues/53459" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/53459</a></li>
</ul>
<p>You have to filter on the client side:</p>
<pre><code>// assumes an initialized *kubernetes.Clientset in `clientset`, plus imports:
//   "context", "fmt", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
depl, err := clientset.AppsV1().Deployments("some_namespace").List(context.Background(), metav1.ListOptions{})
if err != nil {
    panic(err.Error())
}
// filter client-side on the spec.selector.matchLabels field
for _, item := range depl.Items {
    if item.Spec.Selector.MatchLabels["app"] == "myapp" {
        fmt.Println("found it")
    }
}
</code></pre>
|
<p>I'm confused with the following response.</p>
<p>What's the meaning of <code>"apiVersion": "v1"</code> here?</p>
<p>I'm expecting the <code>apiVersion</code> to be <code>v1beta1</code>.</p>
<pre><code>$ curl http://127.0.0.1:8080/apis/batch/v1beta1
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "batch/v1beta1",
"resources": [
{
"name": "cronjobs",
"singularName": "",
"namespaced": true,
"kind": "CronJob",
"verbs": [
"create",
"delete",
"deletecollection",
"get",
"list",
"patch",
"update",
"watch"
],
"shortNames": [
"cj"
],
"categories": [
"all"
],
"storageVersionHash": "h/JlFAZkyyY="
},
{
"name": "cronjobs/status",
"singularName": "",
"namespaced": true,
"kind": "CronJob",
"verbs": [
"get",
"patch",
"update"
]
}
]
}
</code></pre>
| <p>The <code>"apiVersion": "v1"</code> in the response refers to the <code>APIResourceList</code> discovery object itself, which is a core (meta) <code>v1</code> type; the group and version you actually queried are reported in the <code>groupVersion</code> field as <code>batch/v1beta1</code>. This reflects the modular structure of the K8s API, which allows for versioning and grouping.
You can find more information on the following pages:</p>
<ul>
<li><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/api-group.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/api-group.md</a></li>
<li><a href="https://kubernetes.io/docs/reference/using-api/#api-groups" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/#api-groups</a></li>
<li><a href="https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html" rel="nofollow noreferrer">https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-apiversion-definition-guide.html</a></li>
</ul>
|
<p>I'm trying to deploy a Kubernetes Operator using <a href="https://kopf.readthedocs.io/en/stable/" rel="nofollow noreferrer">Kopf</a> and I'm getting the following error:</p>
<pre><code>kopf._cogs.clients.errors.APIForbiddenError: ('exchangerates.operators.brennerm.github.io is forbidden: User "system:serviceaccount:default:exchangerates-operator" cannot list resource "exchangerates" in API group "operators.brennerm.github.io" at the cluster scope', {'kind': 'Status', 'apiVersion': 'v1', 'metadata': {}, 'status': 'Failure', 'message': 'exchangerates.operators.brennerm.github.io is forbidden: User "system:serviceaccount:default:exchangerates-operator" cannot list resource "exchangerates" in API group "operators.brennerm.github.io" at the cluster scope', 'reason': 'Forbidden', 'details': {'group': 'operators.brennerm.github.io', 'kind': 'exchangerates'}, 'code': 403})
</code></pre>
<p>What's confusing is that if I check the permissions granted to the Service Account it looks like it has the correct permissions:</p>
<pre><code>$ kubectl auth can-i list exchangerates --as=system:serviceaccount:default:exchangerates-operator
yes
$ kubectl auth can-i list exchangerates --as=system:serviceaccount:default:exchangerates-operator --all-namespaces
yes
</code></pre>
<p>Is there somewhere else I should be looking to troubleshoot the issue?</p>
| <p>User <a href="https://stackoverflow.com/users/857383/sergey-vasilyev" title="3,401 reputation">Sergey Vasilyev</a> has tested this configuration and mentioned in the comment:</p>
<blockquote>
<p>You are right, "*" works. I tried your repo locally with Minikube 1.24.0 & K8s 1.22.3 — it works, there are no permission errors. The operator and the setup are both correct. Similarly for K3d — it works. I assume it is something with your local setup or old images left somewhere.</p>
</blockquote>
<p>I also tested this: I ran it locally on Minikube and had no problems. Your setup looks fine and everything works as expected. The problem may lie with some dependency in the image, or with Minikube leftovers. Bear in mind that Minikube is mainly used for testing and learning purposes, so some of its features might not be ideal. To solve your problem, try creating a new cluster.</p>
|
<p>I have 3 vps with docker installed. I have created a docker image and I would like to know if there is a possibility to configure docker or docker-compose in such a way that these 3 containers run only once a day at random hours.</p>
<p>I have not found any way other than starting the containers from a bash script, called from the Linux crontab, that first waits a random amount of time. Are there better solutions?
Maybe k8s / k3s?</p>
| <p>If "randomness" is part of your application behaviour, then you should put this logic inside one of the services/containers, and this becomes a no-brainer: you run the entire stack with <code>docker-compose</code> and it just works.</p>
<p>Other than that, <code>crontab</code> or another external scheduler (such as a Kubernetes CronJob) is the way to go if starting those three containers at a random time once a day is a hard requirement.</p>
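<p>A minimal sketch of the crontab approach (the compose file path and service name are placeholders): cron fires at a fixed time, and a random sleep spreads the actual start across the day:</p>

```shell
#!/bin/sh
# run-once-daily.sh - invoked by cron at midnight, e.g.:
#   0 0 * * * /opt/app/run-once-daily.sh    (crontab entry; path is a placeholder)

# Draw 4 random bytes and reduce modulo 86400 to get a delay in [0, 86399] seconds.
DELAY=$(( $(od -An -N4 -tu4 /dev/urandom) % 86400 ))
echo "sleeping ${DELAY}s before starting the container"
sleep "$DELAY"

# Start the service once; compose file path and service name are hypothetical.
docker compose -f /opt/app/docker-compose.yml up -d my-service
```

The same random-sleep trick can be moved into a container entrypoint if you prefer keeping the schedule logic inside the stack.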
|
<p>I am running Spark on Kubernetes and need to monitor the Spark Job in runtime.<br />
How can I access the SparkUI page of my job while it's running?</p>
| <ol>
<li>Enable port forwarding for the Spark driver pod:</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward <driver-pod-name> 4040:4040
</code></pre>
<ol start="2">
<li>Access the SparkUI from your web browser at <code>http://localhost:4040</code></li>
</ol>
<hr />
<p>You can also make sure SparkUI is running by looking for the relevant log:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl logs <driver-pod-name> | grep SparkUI
</code></pre>
<p>Example output:</p>
<blockquote>
<p>21/11/22 09:50:21 INFO Utils: Successfully started service 'SparkUI'
on port 4040. 21/11/22 09:50:21 INFO SparkUI: Bound SparkUI to
0.0.0.0, and started at <a href="http://spark-xxxxxxxxxxxxxxxx-driver-svc.spark.svc:4040" rel="nofollow noreferrer">http://spark-xxxxxxxxxxxxxxxx-driver-svc.spark.svc:4040</a></p>
</blockquote>
|
<p>We are running a single NodeJS instance in a Pod with a request of 1 CPU, and no limit. Upon load testing, we observed the following:</p>
<pre><code>NAME CPU(cores) MEMORY(bytes)
backend-deployment-5d6d4c978-5qvsh 3346m 103Mi
backend-deployment-5d6d4c978-94d2z 3206m 99Mi
</code></pre>
<p>If NodeJS is only running a single thread, how could it be consuming more than 1000m CPU, when running directly on a Node it would only utilize a single core? Is kubernetes somehow letting it borrow time across cores?</p>
| <p>Although Node.js runs the main application code in a single thread, the Node.js runtime is multi-threaded. Node.js has an internal <a href="https://nodejs.org/en/docs/guides/dont-block-the-event-loop/" rel="nofollow noreferrer">worker pool</a> that is used to run background tasks, including I/O and certain CPU-intensive processing like crypto functions. In addition, if you use the <a href="https://nodejs.org/api/worker_threads.html" rel="nofollow noreferrer">worker_threads</a> facility (not to be confused with the worker pool), then you would be directly accessing additional threads in Node.js.</p>
|
<p>On my kubernetes nodes there are</p>
<ol>
<li>prioritized pods</li>
<li>dispensable pods</li>
</ol>
<p>Therefore I would like to have QoS class of <code>Guaranteed</code> for the prioritized pods.
To achieve a <code>Guaranteed</code> class the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">cpu/memory requests/limits must meet some conditions</a>. Therefore:</p>
<blockquote>
<p>For every Container in the Pod, the CPU limit must equal the CPU
request</p>
</blockquote>
<p>But I would like to set a higher CPU limit than request, so that the prioritized pods can use whatever free CPU resources are available.</p>
<p>Simple example: A Node with 4 cores has:</p>
<ul>
<li>1 prioritized pod with 2000 CPU request and 3900 CPU limit</li>
<li>3 dispensable pods, each with a 500 CPU request and limit.</li>
</ul>
<p>If the prioritized pod had a 2000 CPU request and limit, 2 cores would be wasted, because the dispensable pods don't use CPU most of the time.</p>
<p>If the prioritized pod had a 3900 CPU request and limit, I would need an extra node for the dispensable pods.</p>
<p><strong>Questions</strong></p>
<p>Is it possible to set explicitly the <code>Guaranteed</code> class to a pod even with difference CPU request and limit?</p>
<p>If it's not possible: Why is there no way to explicitly set the QoS class?</p>
<p><strong>Remarks</strong></p>
<p>There's an <a href="https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/" rel="nofollow noreferrer">system-cluster-critical</a> option. But I think this should only be used for critical k8s add-on pods but not for critical applications.</p>
| <blockquote>
<p>Is it possible to set explicitly the <code>Guaranteed</code> class to a pod even with difference CPU request and limit?</p>
</blockquote>
<p><strong>Yes, however you will need to use an additional plugin: <a href="https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/kep/9-capacity-scheduling" rel="nofollow noreferrer">capacity-scheduling</a> used with <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass" rel="nofollow noreferrer"><code>PriorityClass</code></a>:</strong></p>
<blockquote>
<p>There is increasing demand to use Kubernetes to manage batch workloads (ML/DL). In those cases, one challenge is to improve cluster utilization while ensuring that each user has a reasonable amount of resources. The problem can be partially addressed by the Kubernetes <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">ResourceQuota</a>. The native Kubernetes ResourceQuota API can be used to specify the maximum overall resource allocation per namespace. The quota enforcement is done through an admission check. A quota consumer (e.g., a Pod) cannot be created if the aggregated resource allocation exceeds the quota limit. In other words, the overall resource usage is aggregated based on Pod's spec (i.e., cpu/mem requests) when it's created. The Kubernetes quota design has the limitation: the quota resource usage is aggregated based on the resource configurations (e.g., Pod cpu/mem requests specified in the Pod spec). Although this mechanism can guarantee that the actual resource consumption will never exceed the ResourceQuota limit, it might lead to low resource utilization as some pods may have claimed the resources but failed to be scheduled. For instance, actual resource consumption may be much smaller than the limit.</p>
</blockquote>
<hr />
<blockquote>
<p>Pods can be created at a specific <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#pod-priority" rel="nofollow noreferrer">priority</a>. You can control a pod's consumption of system resources based on a pod's priority, by using the <code>scopeSelector</code> field in the quota spec.</p>
<p>A quota is matched and consumed only if <code>scopeSelector</code> in the quota spec selects the pod.</p>
<p>When quota is scoped for priority class using <code>scopeSelector</code> field, quota object is restricted to track only following resources:</p>
<ul>
<li><code>pods</code></li>
<li><code>cpu</code></li>
<li><code>memory</code></li>
<li><code>ephemeral-storage</code></li>
<li><code>limits.cpu</code></li>
<li><code>limits.memory</code></li>
<li><code>limits.ephemeral-storage</code></li>
<li><code>requests.cpu</code></li>
<li><code>requests.memory</code></li>
<li><code>requests.ephemeral-storage</code></li>
</ul>
</blockquote>
<p>This plugin also supports <a href="https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/kep/9-capacity-scheduling#additional-preemption-details" rel="nofollow noreferrer">preemption</a> (example for ElasticQuota):</p>
<blockquote>
<p>Preemption happens when a pod is unschedulable, i.e., failed in PreFilter or Filter phases.</p>
<p>In particular for capacity scheduling, the failure reasons could be:</p>
<ul>
<li>Prefilter Stage</li>
<li>sum(allocated res of pods in the same elasticquota) + pod.request > elasticquota.spec.max</li>
<li>sum(allocated res of pods in the same elasticquota) + pod.request > sum(elasticquota.spec.min)</li>
</ul>
<p>So the preemption logic will attempt to make the pod schedulable, with a cost of preempting other running pods.</p>
</blockquote>
<p>Examples of yaml files and usage can be found <a href="https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/kep/9-capacity-scheduling#design-details" rel="nofollow noreferrer">in the plugin description</a>.</p>
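<p>As a minimal sketch of the priority-scoped quota mentioned above (adapted from the Kubernetes ResourceQuota docs; the class name and amounts are placeholders):</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: prioritized
value: 1000000
globalDefault: false
description: "For prioritized pods only."
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: prioritized-quota
spec:
  hard:
    cpu: "8"
    memory: 16Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["prioritized"]
</code></pre>
<p>Pods created with <code>priorityClassName: prioritized</code> are then tracked by this quota; the capacity-scheduling plugin's <code>ElasticQuota</code> adds the min/max elasticity on top of this mechanism.</p>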
|
<p>I'm using spring boot 2.5.6 and I'm generating the docker image with the spring boot maven plugin.
I'm deploying the application using AWS EKS with nodes managed by fargate.</p>
<p>The plugin configuration is the following</p>
<pre><code><plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
</code></pre>
<p>The command I use to execute it is the following</p>
<pre><code>./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=my-image-name
</code></pre>
<p>When the application is deployed on AWS EKS, the application print the following data</p>
<pre><code>Setting Active Processor Count to 2
Adding $JAVA_OPTS to $JAVA_TOOL_OPTIONS
Calculated JVM Memory Configuration:
-XX:MaxDirectMemorySize=10M
-Xmx408405K
-XX:MaxMetaspaceSize=128170K
-XX:ReservedCodeCacheSize=240M
-Xss1M
(Total Memory: 1G, Thread Count: 250, Loaded Class Count: 20215, Headroom: 0%)
Enabling Java Native Memory Tracking
Adding 128 container CA certificates to JVM truststore
Spring Cloud Bindings Enabled
Picked up JAVA_TOOL_OPTIONS:
-Djava.security.properties=/layers/paketo-buildpacks_bellsoft-liberica/java-security-properties/java-security.properties
-XX:+ExitOnOutOfMemoryError
-XX:ActiveProcessorCount=2
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath="/var/log/containers/heapDump.hprof"
-XX:MaxDirectMemorySize=10M
-Xmx408405K
-XX:MaxMetaspaceSize=128170K
-XX:ReservedCodeCacheSize=240M
-Xss1M
-XX:+UnlockDiagnosticVMOptions
-XX:NativeMemoryTracking=summary
-XX:+PrintNMTStatistics
-Dorg.springframework.cloud.bindings.boot.enable=true
</code></pre>
<p>If I go inside the container and I run the command "free -h" I get the following output</p>
<pre><code>total mem : 7.7G
used mem : 730M
free mem : 4.6G
shared : 820K
buff/cache : 2.4G
available
</code></pre>
<p>Why is -Xmx set to only ~400 MB? And why is the total memory only 1 GB?</p>
| <p>Posting this out of comments for better visibility.</p>
<hr />
<p>An important thing to mention: when the <code>free</code> command is run inside a pod's container, it shows the memory of the whole node where the pod is scheduled and running, not the container's own allocation.</p>
<p>This is why it's very important to set memory <code>requests</code> and <code>limits</code> for Java applications: without them, the JVM may size its memory allocation incorrectly.</p>
<hr />
<p>There are two main options for resource allocation (in this particular case, <code>memory</code>):</p>
<ul>
<li><p>requests (<code>spec.containers[].resources.requests.memory</code>) - the Kubernetes scheduler has to find a node that has at least the requested amount of memory available.</p>
<p>It's very important to set the <code>requests</code> reasonably, since they are used for scheduling and there's a chance the scheduler won't be able to find a node with enough free memory to place the pod - <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-that-is-too-big-for-your-nodes" rel="nofollow noreferrer">good example of incorrect requests</a></p>
</li>
<li><p>limits (<code>spec.containers[].resources.limits.memory</code>) - the kubelet ensures that the pod will not consume more than the specified limit, since containers in a pod are allowed to consume more than they requested.</p>
<p>It's also important to set <code>limits</code> for predictable resource consumption, since containers can exceed the requested memory and consume all of the node's memory until the <code>OOM killer</code> gets involved. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#if-you-do-not-specify-a-memory-limit" rel="nofollow noreferrer">Possible cases when limits are not set</a></p>
</li>
</ul>
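<p>Putting this together for the Spring Boot container from the question, a minimal sketch (the sizes are illustrative) would be:</p>
<pre><code>resources:
  requests:
    memory: "1Gi"   # used by the scheduler to pick a node with enough free memory
    cpu: "500m"
  limits:
    memory: "1Gi"   # hard cap enforced via cgroups; exceeding it triggers the OOM killer
    cpu: "2"
</code></pre>
<p>With an explicit memory limit in place, the Paketo buildpack's memory calculator sizes the JVM flags (<code>-Xmx</code> and friends) against the container's cgroup limit; without one it falls back to a default, which would explain the 1G "Total Memory" seen in the logs.</p>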
|
| <p>We are planning to introduce AWS spot instances in production (non-prod is already running on spot). To achieve HA we are running HPA with a minimum of 2 replicas for all critical deployments. Because of spot instance behaviour we also want to run on-demand instances, with one pod of each deployment running on an on-demand instance.</p>
<p><strong>Question:</strong></p>
<p>Is there any way I can split the pods so that one pod of the deployment launches on on-demand and all the other pods (the second one, since the minimum is 2, plus any the HPA adds) of the same deployment launch on spot instances?</p>
<p>We already using nodeaAffinity and podAntiAffinity since we have multiple node groups for different reasons. Below is the snippet.</p>
<pre><code> nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: category
operator: In
values:
- <some value>
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: <lable key>
operator: In
values:
- <lable value>
topologyKey: "kubernetes.io/hostname"
</code></pre>
| <p>The short answer is <code>No</code>: there is no such way to define this per replica. Since you are already using podAntiAffinity, adding the deployment's own pod labels to the selector ensures no two replicas land on the same host (if that's not what you are already doing). Then use the <a href="https://github.com/aws/aws-node-termination-handler" rel="nofollow noreferrer">aws-node-termination-handler</a> to drain and reschedule without abrupt downtime during spot interruptions.</p>
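<p>A sketch of the self-anti-affinity described above: the selector matches the deployment's own pod label (the <code>app: backend</code> label is a placeholder):</p>
<pre><code>metadata:
  labels:
    app: backend                  # this deployment's own pod label (placeholder)
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - backend       # same label as this pod, so replicas repel each other
          topologyKey: "kubernetes.io/hostname"
</code></pre>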
|