<p>So I am using the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">nginx ingress</a> and installed it through the helm chart:</p>
<pre><code>helm install --set controller.kind=DaemonSet --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-size-slug"="lb-large" --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-hostname"="some\.url\.com" ingress-nginx ingress-nginx/ingress-nginx
</code></pre>
<p>This automatically created a load balancer on DigitalOcean as well.</p>
<p>As far as I understand so far, referencing <a href="https://kubernetes.github.io/ingress-nginx/user-guide/custom-errors/" rel="nofollow noreferrer">this</a>, I have to:</p>
<ol>
<li>Create a docker image which will act as default backend, like <a href="https://github.com/kubernetes/ingress-nginx/tree/main/images/custom-error-pages" rel="nofollow noreferrer">this</a>.</li>
<li>Have to set the following things in the helm chart:
<ul>
<li>defaultBackend.enabled => true</li>
<li>defaultBackend.image => The image created in step 1.</li>
<li>controller.config.custom-http-errors => [404, 503, ...] (all the errors I want to be handled by the default backend; see the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/configmap.md" rel="nofollow noreferrer">config values reference</a>).</li>
</ul>
</li>
<li>Upgrade the helm chart.</li>
</ol>
<p>Would this be the right and also easiest approach or is there a simpler way?</p>
<p>Also would upgrading the helm chart, remove the old loadbalancer and create a new one?</p>
| <blockquote>
<p>Would this be the right and also easiest approach or is there a simpler way?</p>
</blockquote>
<p>The steps presented in the question, based on the <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-errors/" rel="nofollow noreferrer">official NGINX Ingress Controller documentation</a>, are correct and seem to be the easiest approach.</p>
<blockquote>
<p>Also would upgrading the helm chart, remove the old loadbalancer and create a new one?</p>
</blockquote>
<p>After running <code>helm upgrade</code>, the LoadBalancer will stay the same: the IP address won't change.
By running the <code>helm list</code> command you can see that the upgrade took place, by checking the <code>REVISION</code> and <code>UPDATED</code> fields.</p>
<pre><code>NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ingress-nginx default 4 2021-09-06 10:38:14.447456942 +0000 UTC deployed ingress-nginx-3.35.0 0.48.1
</code></pre>
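<p>Putting the values from step 2 into a values file makes the change easy to review before upgrading. A sketch, where the image repository and tag are placeholders for the image built in step 1, and the error list is just an example:</p>
<pre><code># custom-error-values.yaml (sketch; repository and tag are placeholders)
defaultBackend:
  enabled: true
  image:
    repository: <your-registry>/custom-error-pages
    tag: <tag>
controller:
  config:
    custom-http-errors: "404,503"
</code></pre>
<p>It could then be applied with <code>helm upgrade ingress-nginx ingress-nginx/ingress-nginx -f custom-error-values.yaml</code>, keeping the settings from the original install.</p>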
|
<p>I am creating a bash script to automate certain actions in my cluster. One of the commands is: <code>kubectl delete -f example.yaml</code>.</p>
<p>The problem is that when the resources defined in the YAML do not exist, the following error is printed:</p>
<pre><code>Error from server (NotFound): error when deleting "example.yaml": deployments.apps "my_app" not found
</code></pre>
<p>I am looking to add an additional step that first checks whether a set of resources defined in a YAML file exist in the cluster. Is there a command that would allow me to do so?</p>
<p>From the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">documentation</a>, I found:</p>
<pre><code># Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.
kubectl diff -f ./my-manifest.yaml
</code></pre>
<p>but I find it difficult to parse the output that it returns. Is there a better alternative?</p>
| <p>To find out whether the same object is already present in the cluster exactly as described in the manifest file, you can use the return code of the <code>kubectl diff</code> command.</p>
<pre><code>Exit status:
0 No differences were found.
1 Differences were found.
>1 Kubectl or diff failed with an error.
</code></pre>
<p>Example:</p>
<pre><code>kubectl diff -f crazy.yml &>/dev/null
rc=$?
if [ $rc -eq 0 ]; then
    echo "Exact object is already installed on the cluster"
elif [ $rc -eq 1 ]; then
    echo "Exact object is not installed, or it differs from the manifest file"
else
    echo "Unable to determine the difference"
fi
</code></pre>
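<p>The exit-code convention comes from <code>diff(1)</code> itself, so you can reproduce the same behavior locally without a cluster:</p>

```shell
# diff(1) exit codes, which kubectl diff mirrors:
# 0 = no differences, 1 = differences found, >1 = an error occurred
left=$(mktemp); right=$(mktemp)
printf 'a\n' > "$left"
printf 'a\n' > "$right"
diff "$left" "$right" >/dev/null 2>&1; same_rc=$?        # identical files
printf 'b\n' > "$right"
diff "$left" "$right" >/dev/null 2>&1; differ_rc=$?      # differing files
diff "$left" /nonexistent-path >/dev/null 2>&1; error_rc=$?  # missing file
echo "same=$same_rc differ=$differ_rc error=$error_rc"
rm -f "$left" "$right"
```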
<p>Alternatively, if you really want to parse the output, you can use the following environment variable to print the diff output in your desired format:</p>
<blockquote>
<p>KUBECTL_EXTERNAL_DIFF environment variable can be used to select your
own diff command. Users can use external commands with params too,</p>
</blockquote>
|
<p>Apologies if this is a really simple question.</p>
<p>I have 2 applications that can potentially share the same template:</p>
<pre><code>applications:
#
app1:
containerName: app1
replicaCount: 10
logLevel: warn
queue: queue1
#
app2:
containerName: app2
replicaCount: 20
logLevel: info
queue: queue2
...
</code></pre>
<p>If I create a single template for both apps, is there a wildcard or variable I can use
that will iterate over both of the apps (i.e. app1 or app2)? E.g. the bit where I've put <SOMETHING_HERE> below:</p>
<pre><code>spec:
  env:
    - name: LOG_LEVEL
      value: "{{ .Values.applications.<SOMETHING_HERE>.logLevel }}"
</code></pre>
<p>Currently (which I'm sure is not very efficient) I have two separate templates that each define their own app, e.g.:</p>
<p><strong>app1_template.yaml</strong></p>
<pre><code>{{ .Values.applications.app1.logLevel }}
</code></pre>
<p><strong>app2_template.yaml</strong></p>
<pre><code>{{ .Values.applications.app2.logLevel }}
</code></pre>
<p>I'm pretty sure this is not the way I'm supposed to do it.</p>
<p>Any help on this would be greatly appreciated.</p>
| <p>One of the solutions would be to have one template and multiple values files, one per deployment/environment:</p>
<pre><code>spec:
  env:
    - name: LOG_LEVEL
      value: "{{ .Values.logLevel }}"
</code></pre>
<p><em>values-app1.yaml</em>:</p>
<pre><code>containerName: app1
replicaCount: 10
logLevel: warn
queue: queue1
</code></pre>
<p><em>values-app2.yaml</em>:</p>
<pre><code>containerName: app2
replicaCount: 20
logLevel: info
queue: queue2
</code></pre>
<p>Then specify which values file should be used by adding this to the helm command:</p>
<pre><code>APP=app1   # or app2
# "myapp-${APP}" below is a placeholder release name
helm upgrade --install myapp-${APP} . --values ./values-${APP}.yaml
</code></pre>
<p>You can also have shared values, let's say in a regular <code>values.yaml</code>, and provide multiple files:</p>
<pre><code>APP=app1
helm upgrade --install myapp-${APP} . --values ./values.yaml --values ./values-${APP}.yaml
</code></pre>
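<p>If instead both apps must come from a single chart install, Helm's <code>range</code> action can iterate over the <code>applications</code> map from the question in one template. A sketch with an abbreviated deployment body (the <code>selector</code> and other required fields are omitted for brevity):</p>
<pre><code>{{- range $name, $app := .Values.applications }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $name }}
spec:
  replicas: {{ $app.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ $app.containerName }}
          env:
            - name: LOG_LEVEL
              value: {{ $app.logLevel | quote }}
{{- end }}
</code></pre>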
|
<p>I have setup a Postgres pod on my Kubernetes cluster, and I am trying to troubleshoot it a bit.</p>
<p>I would like to use the <a href="https://hub.docker.com/_/postgres" rel="nofollow noreferrer">official Postgres image</a> and deploy it to my Kubernetes cluster using <code>kubectl</code>. Given that my Postgres server connection details are:</p>
<pre><code>host: mypostgres
port: 5432
username: postgres
password: 12345
</code></pre>
<p>And given that I <em>think</em> the command will be <em>something</em> like:</p>
<pre><code>kubectl run -i --tty --rm debug --image=postgres --restart=Never -- sh
</code></pre>
<p>What do I need to do so that I can deploy this image to my cluster, connect to my Postgres server and start running SQL command against it (for troubleshooting purposes)?</p>
| <p>If you're primarily interested in troubleshooting, then you're probably looking for the <code>kubectl port-forward</code> command, which will expose a container port on your local host. First, you'll need to deploy the Postgres pod; you haven't shown what your pod manifest looks like, so I'm going to assume a <code>Deployment</code> like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgres
  namespace: sandbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - env:
            - name: POSTGRES_PASSWORD
              value: secret
            - name: POSTGRES_USER
              value: example
            - name: POSTGRES_DB
              value: example
          image: docker.io/postgres:13
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres-data
      volumes:
        - emptyDir: {}
          name: postgres-data
</code></pre>
<p>Once this is running, you can access postgres with the <code>port-forward</code>
command like this:</p>
<pre><code>kubectl -n sandbox port-forward deploy/postgres 5432:5432
</code></pre>
<p>This should result in:</p>
<pre><code>Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
</code></pre>
<p>And now we can connect to Postgres using <code>psql</code> and run queries
against it:</p>
<pre><code>$ psql -h localhost -U example example
psql (13.4)
Type "help" for help.
example=#
</code></pre>
<hr />
<p><code>kubectl port-forward</code> is only useful as a troubleshooting mechanism. If
you were trying to access your <code>postgres</code> pod from another pod, you
would create a <code>Service</code> and then use the service name as the hostname
for your client connections.</p>
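<p>For illustration, such a <code>Service</code> could look like the following sketch, which simply selects the <code>app: postgres</code> label used by the <code>Deployment</code> above:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: sandbox
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
</code></pre>
<p>Other pods in the cluster could then connect with <code>psql -h postgres.sandbox -U example example</code>.</p>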
<hr />
<p><strong>Update</strong></p>
<p>If your goal is to deploy a <em>client</em> container so that you can log
into it and run <code>psql</code>, the easiest solution is just to <code>kubectl exec</code>
into the postgres container itself. Assuming you were using the
<code>Deployment</code> shown earlier in this question, you could run:</p>
<pre><code>kubectl exec -it deploy/postgres -- bash
</code></pre>
<p>This would get you a shell prompt <em>inside</em> the postgres container. You
can run <code>psql</code> and not have to worry about authentication:</p>
<pre><code>$ kubectl exec -it deploy/postgres -- bash
$ psql -U example example
psql (13.4 (Debian 13.4-1.pgdg100+1))
Type "help" for help.
example=#
</code></pre>
<p>If you want to start up a separate container, you can use the <code>kubectl debug</code> command:</p>
<pre><code>kubectl debug -it deploy/postgres --image=docker.io/postgres:13 -- bash
</code></pre>
<p>This gets you a root prompt in a debug pod. If you know the ip address
of the postgres pod, you can connect to it using <code>psql</code>. To get
the address of the pod, run this on your local host:</p>
<pre><code>$ kubectl get pod/postgres-6df4c549f-p2892 -o jsonpath='{.status.podIP}'
10.130.0.11
</code></pre>
<p>And then inside the debug container:</p>
<pre><code>root@postgres-debug:/# psql -h 10.130.0.11 -U example example
</code></pre>
<p>In this case you would have to provide an appropriate password,
because you are accessing postgres from "another machine", rather than
running directly inside the postgres pod.</p>
<p>Note that in the above answer I've used the shortcut
<code>deploy/<deployment_name></code>, which avoids having to know the name of the
pod created by the <code>Deployment</code>. You can replace that with
<code>pod/<podname></code> in all cases.</p>
|
<p>I've hit a wall and I'm hoping the SO community can advise on where to go next. I've set up a 6-node Kubernetes cluster with Calico as the networking service. I've only got two pods, the DNS debugging pod from the Kubernetes docs and a MySQL pod (well, and the kube-system pods).
Anyways, I've been at this all day. I've started from scratch 3 times and I keep hitting a wall when it comes to DNS. I've been trying to sort out why I can't access my pods externally. Here are my configs.</p>
<p><strong>mysql.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: new_namespace
spec:
  type: ExternalName
  externalName: mysql.new_namespace.svc.cluster.local
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: new_namespace
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: *******
          securityContext:
            runAsUser: 0
            allowPrivilegeEscalation: false
          ports:
            - name: mysql
              containerPort: 3306
              protocol: TCP
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: gluster-claim
</code></pre>
<p>Along with others, I've been primarily following <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">this guide</a>, but have been unsuccessful in determining my issue. DNS issues just... make no sense to me.</p>
<pre><code>$ kubectl exec -i -t -nnew_namespace dnsutils -- nslookup mysql
Server: 192.168.128.10
Address: 192.168.128.10#53
*** Can't find mysql.new_namespace.svc.cluster.local: No answer
</code></pre>
<p>It seems like things <em>should</em> be working...</p>
<pre><code>$ kubectl exec -i -t -nnew_namespace dnsutils -- nslookup kubernetes.default
Server: 192.168.128.10
Address: 192.168.128.10#53
Name: kubernetes.default.svc.cluster.local
Address: 192.168.128.1
</code></pre>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-58497c65d5-mcmb4 1/1 Running 0 31m
kube-system calico-node-27ln4 1/1 Running 0 36m
kube-system calico-node-dngfs 1/1 Running 6 (39m ago) 45m
kube-system calico-node-nq6bz 1/1 Running 0 38m
kube-system calico-node-p6jwz 1/1 Running 0 35m
kube-system calico-node-p8fzr 1/1 Running 0 35m
kube-system calico-node-wlzr9 1/1 Running 0 35m
kube-system calico-typha-68857595fc-kgnvx 1/1 Running 0 45m
kube-system calico-typha-68857595fc-n4hhq 1/1 Running 0 45m
kube-system calico-typha-68857595fc-vjgkc 1/1 Running 0 45m
kube-system coredns-78fcd69978-25bxh 1/1 Running 0 26m
kube-system coredns-78fcd69978-cfl52 1/1 Running 0 26m
kube-system etcd-new_namespace-master 1/1 Running 3 49m
kube-system kube-apiserver-new_namespace-master 1/1 Running 0 49m
kube-system kube-controller-manager-new_namespace-master 1/1 Running 0 31m
kube-system kube-proxy-4zx4m 1/1 Running 0 35m
kube-system kube-proxy-hhvh7 1/1 Running 0 38m
kube-system kube-proxy-m8sph 1/1 Running 0 35m
kube-system kube-proxy-qrfx7 1/1 Running 0 49m
kube-system kube-proxy-tkb4m 1/1 Running 0 35m
kube-system kube-proxy-vct78 1/1 Running 0 36m
kube-system kube-scheduler-new_namespace-master 1/1 Running 3 49m
new_namespace dnsutils 1/1 Running 0 30m
new_namespace mysql-554fd8859d-hb7lp 1/1 Running 0 4m5s
</code></pre>
<pre><code>$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 192.168.128.1 <none> 443/TCP 51m
kube-system calico-typha ClusterIP 192.168.239.47 <none> 5473/TCP 47m
kube-system kube-dns ClusterIP 192.168.128.10 <none> 53/UDP,53/TCP,9153/TCP 51m
new_namespace gluster-cluster ClusterIP 192.168.180.197 <none> 1/TCP 30m
new_namespace mysql ExternalName <none> mysql.new_namespace.svc.cluster.local <none> 31m
</code></pre>
<pre><code>$ kubectl get endpoints --all-namespaces
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 10.1.0.125:6443 52m
kube-system calico-typha 10.1.0.126:5473,10.1.0.127:5473,10.1.0.128:5473 48m
kube-system kube-dns 192.168.13.1:53,192.168.97.65:53,192.168.13.1:53 + 3 more... 52m
new_namespace gluster-cluster 10.1.0.125:1,10.1.0.126:1,10.1.0.127:1 + 3 more... 31m
</code></pre>
<pre><code>$ kubectl describe endpoints kube-dns --namespace=kube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=CoreDNS
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2021-09-06T15:30:06Z
Subsets:
Addresses: 192.168.13.1,192.168.97.65
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
dns-tcp 53 TCP
dns 53 UDP
metrics 9153 TCP
Events: <none>
</code></pre>
<p>And the logs...don't really mean anything to me. It looks like things are working though? Yet I still can't access mysql..</p>
<pre><code>$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns new_namespace-master: Mon Sep 6 16:01:47 2021
[INFO] 192.168.119.1:52410 - 18128 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 97 0.00009327s
[INFO] 192.168.119.1:41837 - 46102 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000240183s
[INFO] 192.168.119.1:42485 - 36923 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000097762s
[INFO] 192.168.119.1:54354 - 34171 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000155781s
[INFO] 192.168.119.1:36491 - 48004 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000075232s
[INFO] 192.168.119.1:58078 - 26522 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000096242s
[INFO] 192.168.119.1:59389 - 51728 "AAAA IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000110561s
[INFO] 192.168.119.1:39553 - 24302 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000114412s
[INFO] 192.168.119.1:60340 - 28351 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000175322s
[INFO] 192.168.119.1:36494 - 12725 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149332s
[INFO] 192.168.119.1:45875 - 25210 "AAAA IN mysql. udp 23 false 512" NXDOMAIN qr,rd,ra,ad 98 0.000629398s
[INFO] 192.168.119.1:37467 - 44662 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 97 0.000115082s
[INFO] 192.168.119.1:37792 - 59085 "AAAA IN mysql. udp 23 false 512" NXDOMAIN qr,aa,rd,ra 98 0.000043841s
[INFO] 192.168.119.1:47263 - 56267 "AAAA IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000241662s
[INFO] 192.168.119.1:47070 - 59276 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000144522s
[INFO] 192.168.119.1:46812 - 32557 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.00008474s
[INFO] 192.168.119.1:57113 - 14895 "AAAA IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000052781s
[INFO] 192.168.119.1:51403 - 18192 "AAAA IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000166912s
[INFO] 192.168.119.1:52077 - 43229 "A IN kubernetes.default.new_namespace.svc.cluster.local. udp 61 false 512" NXDOMAIN qr,aa,rd 154 0.000199803s
[INFO] 192.168.119.1:60907 - 16052 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082662s
</code></pre>
<p>Oh and this was the init command I used:</p>
<pre><code>sudo kubeadm reset ; sudo kubeadm init --pod-network-cidr=192.168.0.0/17 --service-cidr=192.168.128.0/17 --apiserver-advertise-address 10.1.0.125 --control-plane-endpoint 10.1.0.125
</code></pre>
| <p>Turns out I just hadn't installed Calico correctly. I started from scratch one more time and installed Calico using <a href="https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises" rel="nofollow noreferrer">this guide</a> (NOT THE QUICKSTART), and everything's working. I'm not really sure how I figured out that Calico was installed incorrectly; I think it was because I had two Calico controllers in two different namespaces and I was very confused as to why. Anyways, I reran the same init command, followed the guide, set up MySQL, and I have access!</p>
|
<p>I have created a .NET Core microservice using Microsoft sign-in as explained <a href="https://stackoverflow.com/questions/69031953/how-to-set-the-redirect-uri-when-using-microsoft-sign-in-in-a-net-5-application">here</a>. When running it locally or in a local Docker container, everything is working as expected. However, when running in <a href="https://aws.amazon.com/eks/" rel="nofollow noreferrer">EKS</a>, I get an annoying warning message in the logs:</p>
<blockquote>
<p>Failed to determine the https port for redirect.</p>
</blockquote>
<p>This warning is shown in the picture below, which also shows that I checked which port numbers are used (only 80, for HTTP). It would be nice if <code>netstat -tulp</code> showed a port number for HTTPS, because then I could just set it in the application settings (<a href="https://learn.microsoft.com/en-us/aspnet/core/security/enforcing-ssl?view=aspnetcore-2.2&tabs=visual-studio#port-configuration" rel="nofollow noreferrer">as explained by the Microsoft docs</a>) and resolve it directly.</p>
<p>The Microsoft docs clearly explain what to do when</p>
<blockquote>
<p>The middleware logs the warning "Failed to determine the https port
for redirect."</p>
</blockquote>
<p>but they do not make clear what to do if everything works fine in all environments except when running the application in EKS. Probably I should do something in my EKS/Kubernetes configuration, but I have no clue what.</p>
<p>Here is the service file:</p>
<p><a href="https://i.stack.imgur.com/qAF6T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qAF6T.png" alt="enter image description here" /></a></p>
<p>Here is the warning message shown in the logs.</p>
<p><a href="https://i.stack.imgur.com/IvKji.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IvKji.png" alt="enter image description here" /></a></p>
<p>How do I resolve the warning message?</p>
| <p>I can see you are using <code>app.UseHsts();</code> in your <code>Startup.cs</code> file, which will force clients to use HTTPS. See the documentation <a href="https://learn.microsoft.com/en-us/aspnet/core/security/enforcing-ssl?view=aspnetcore-5.0&tabs=visual-studio#http-strict-transport-security-protocol-hsts" rel="nofollow noreferrer">here</a>.</p>
<p>But based on your log line <code>Now listening on: http://[::]:80</code>, your service is only listening on HTTP, not HTTPS.</p>
<p>You could try one of the following:</p>
<ol>
<li>Simply delete the <code>app.UseHsts();</code> call and use only HTTP.</li>
<li>Enable HTTPS by configuring <code>spec.template.spec.containers.env</code> in the Kubernetes <code>deployment.yaml</code>. You will need a server TLS certificate in your container.</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: webapi
labels:
app: weather-forecast
spec:
replicas: 1
selector:
matchLabels:
service: webapi
template:
metadata:
labels:
app: weather-forecast
service: webapi
spec:
containers:
- name: webapi
image: exploredocker.azurecr.io/webapi:v1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
protocol: TCP
- containerPort: 443
protocol: TCP
env:
- name: ASPNETCORE_URLS
value: https://+:443;http://+:80
- name: ASPNETCORE_Kestrel__Certificates__Default__Path
value: /https/server_cert.pfx
- name: ASPNETCORE_Kestrel__Certificates__Default__Password
value: password
</code></pre>
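<p>Note that the manifest above points Kestrel at <code>/https/server_cert.pfx</code> but does not show how that file gets into the container. One way, as a sketch assuming you have already created a Secret named <code>webapi-tls</code> holding the <code>.pfx</code> file, is to mount it as a volume under <code>spec.template.spec</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>      # sketch: mount a .pfx from a Secret (the names here are assumptions)
      containers:
        - name: webapi
          volumeMounts:
            - name: https-cert
              mountPath: /https
              readOnly: true
      volumes:
        - name: https-cert
          secret:
            secretName: webapi-tls
</code></pre>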
|
<p>I have two helm charts, the second one nested inside the first.
Tree output:</p>
<pre><code>.
└── outerChart
├── charts
│ └── innerChart
│ ├── charts
│ ├── Chart.yaml
│ ├── templates
│ │ ├── deployment.yaml
│ │ ├── _helpers.tpl
│ │ ├── ingress.yaml
│ │ ├── NOTES.txt
│ │ ├── serviceaccount.yaml
│ │ └── service.yaml
│ └── values.yaml
├── Chart.yaml
├── templates
└── values.yaml
</code></pre>
<p>I have a global variable in the outer <code>values.yaml</code>:</p>
<pre><code>global:
  myreplicacount: 4
</code></pre>
<p>In the nested chart there is a values file with</p>
<pre><code>replicaCount: 1
</code></pre>
<p>and a deployment that uses it:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "foo.fullname" . }}
  labels:
{{ include "foo.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
</code></pre>
<p>The straightforward way to use the global var is to replace:</p>
<pre><code>replicas: {{ .Values.replicaCount }}
</code></pre>
<p>with:</p>
<pre><code>replicas: {{ .Values.global.myreplicacount }}
</code></pre>
<p>in the <code>deployment.yaml</code>.</p>
<p>But is it possible to use that in the nested <code>values.yaml</code>? I would like to keep the inner chart intact. I am stuck in a situation where one software team provides me the inner chart and another provides the needed configuration through global variables. <code>replicaCount</code> is just an example; they have many variables that need to change, and if I edit the inner chart that will make inner-chart updates difficult.</p>
<p>Just adding:</p>
<pre><code>replicaCount: {{ .Values.global.myreplicacount }}
</code></pre>
<p>in outerChart/charts/innerChart/values.yaml
returns</p>
<pre><code>): error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.global.myreplicacount":interface {}(nil)}
</code></pre>
| <p>Inside the nested chart, <code>.Values</code> is seen as the contents of <code>.Values.nested</code> (using whatever name for the dependency is in <code>requirements.yaml</code> or <code>Chart.yaml</code>), merged with the top-level <code>global:</code> key.</p>
<pre class="lang-yaml prettyprint-override"><code># requirements.yaml
- name: nested # <-- this name will be used in values.yaml
version: "^1"
repository: ...
</code></pre>
<p>You can provide specific settings for that dependency using its name as a key in <code>values.yaml</code>. This doesn't require <code>global:</code> or any changes to the dependency chart.</p>
<pre class="lang-yaml prettyprint-override"><code># outer chart values.yaml, or `helm install -f` YAML file
nested: # <-- matches name: in requirements.yaml above
  replicaCount: 1
</code></pre>
<p>For a Deployment replica count in particular, I'd prefer this option over possible alternatives. It's likely that each Deployment and StatefulSet will need a different replica count, especially across multiple charts, and it doesn't make sense to globally say "absolutely everything will have exactly 4 replicas".</p>
<p>If you can edit the nested chart, and you want it to optionally accept a <code>global:</code> value, you could honor that if a local value isn't set</p>
<pre class="lang-yaml prettyprint-override"><code># charts/nested/deployment.yaml
replicas: {{ .Values.replicaCount | default .Values.global.replicaCount | default 1 }}
</code></pre>
|
<p>I am using the Elasticsearch exporter to pull Elasticsearch metrics into Prometheus.
I installed the helm chart and can see the metrics on <a href="http://127.0.0.1:9108/metrics" rel="nofollow noreferrer">http://127.0.0.1:9108/metrics</a> with port forwarding.
But I don't see any metrics coming into Prometheus.
Can someone please tell me where to start troubleshooting the issue?</p>
| <p>There are a few options that might help you:</p>
<ul>
<li>Check <a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus-elasticsearch-exporter/values.yaml" rel="nofollow noreferrer">ServiceMonitor</a> configuration of <code>prometheus-elasticsearch-exporter</code> and ensure that it is enabled:</li>
</ul>
<pre><code>serviceMonitor:
  enabled: true
</code></pre>
<ul>
<li>Read about <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#prometheusioscrape" rel="nofollow noreferrer">Scrape</a> configuration of <code>kube-prometheus-stack</code> and setup it according to your goals.</li>
</ul>
<blockquote>
<p>By default, Prometheus discovers PodMonitors and ServiceMonitors within its namespace, that are labeled with the same release tag as the prometheus-operator release. Sometimes, you may need to discover custom PodMonitors/ServiceMonitors, for example used to scrape data from third-party applications. An easy way of doing this, without compromising the default PodMonitors/ServiceMonitors discovery, is allowing Prometheus to discover all PodMonitors/ServiceMonitors within its namespace, without applying label filtering. <strong>To do so, you can set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues and prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues to false.</strong></p>
</blockquote>
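<p>If you go with the second option, the quoted settings translate into a values fragment like this (a sketch for the <code>kube-prometheus-stack</code> chart):</p>
<pre><code>prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    podMonitorSelectorNilUsesHelmValues: false
</code></pre>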
|
<p>I have a stateful Java web service which connects to MongoDB; currently it is deployed as a jar directly on the VM. This service has a property in its properties file, i.e. <code>primary</code>.</p>
<p>If we set <code>primary</code> to true then it is primary instance of the application and on this instance mongodb aggregation will happen.</p>
<p>If we set <code>primary</code> to false then it is secondary instance of the application and on this instance no mongodb aggregation will happen but it will serve all read and write queries.</p>
<p>Now we are migrating towards k8s deployment.</p>
<p>Challenges are :</p>
<p><strong>How do we set the value of primary during the k8s pod deployment having multiple replicas ?</strong></p>
<p><strong>How do we handle the failure of primary instance of application pod ?</strong></p>
<p>I am open for any architectural changes suggestions in order to facilitate better deployment structure.</p>
<p>Thanks in advance.</p>
| <p>For <strong>primary</strong> and <strong>secondary</strong>, you can pass <strong>environment</strong> variables to the Pod, and based on them the Pod will start with the respective configuration.</p>
<pre><code>import java.util.Map;

public class EnvMap {
    public static void main(String[] args) {
        // Print every environment variable passed to the container;
        // the application can read its role flag (e.g. "primary") from here.
        Map<String, String> env = System.getenv();
        for (Map.Entry<String, String> entry : env.entrySet()) {
            System.out.format("%s=%s%n", entry.getKey(), entry.getValue());
        }
    }
}
</code></pre>
<p>You can create two different Deployments in Kubernetes: one for the primary and a second for the secondary.</p>
<blockquote>
<p>How do we set the value of primary during the k8s pod deployment
having multiple replicas ?</p>
</blockquote>
<p>You can set an environment variable with the same value across all replicas of the primary Deployment, and a different value for the secondary.</p>
<blockquote>
<p>How do we handle the failure of primary instance of application pod ?</p>
</blockquote>
<p>You can add <strong>readiness</strong> and <strong>liveness</strong> probes to the Deployment, so if any failure occurs Kubernetes will automatically restart the Pod.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</a></p>
<p>The two Deployments could look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: primary
  labels:
    purpose: primary
spec:
  replicas: 1
  selector:
    matchLabels:
      purpose: primary
  template:
    metadata:
      labels:
        purpose: primary
    spec:
      containers:
        - name: primary
          image: gcr.io/google-samples/node-hello:1.0
          env:
            - name: primary
              value: "true"
</code></pre>
<p>The second Deployment's environment variable would look like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: secondary
  labels:
    purpose: secondary
spec:
  selector:
    matchLabels:
      purpose: secondary
  template:
    metadata:
      labels:
        purpose: secondary
    spec:
      containers:
        - name: secondary
          image: gcr.io/google-samples/node-hello:1.0
          env:
            - name: primary
              value: "false"
</code></pre>
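<p>Inside the application, the role check then reduces to reading that variable. A sketch (the variable name <code>primary</code> and the defaulting behavior are assumptions, not part of the question):</p>

```java
public class Role {
    // Treat the "primary" environment variable (assumed name) as a boolean
    // flag; any value other than "true" (case-insensitive), including an
    // unset variable, means the instance runs as a secondary.
    static boolean isPrimary(String value) {
        return value != null && value.equalsIgnoreCase("true");
    }

    public static void main(String[] args) {
        boolean primary = isPrimary(System.getenv("primary"));
        System.out.println(primary ? "running as primary" : "running as secondary");
    }
}
```

<p>The MongoDB aggregation scheduler can then be started only when the check returns true.</p>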
<p>If you are using a StatefulSet instead of a Deployment, you can configure the same environment variable there as well.</p>
|
<p>I enabled AGIC in the Azure portal and then created <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">Fanout Ingress</a>. But it's not working. I checked Rules (ingress-appgateway > Rules > Path-based routing) and paths are targeting correct backend pool.</p>
<p>When I am testing health probe, it's failing ("MC_..." resource group > ingress-appgateway > Health probes > Click test) - showing error :</p>
<blockquote>
<p>One or more of your backend instances are unhealthy. It is recommended
to address this health issue first before attaching the probe</p>
</blockquote>
<p>I tried:</p>
<ul>
<li>disabling and enabling AGIC -> <strong>did not work</strong></li>
<li>using <code>pathType: ImplementationSpecific</code> (instead of <code>pathType: Prefix</code>) -> <strong>did not work</strong></li>
<li><code>nginx.ingress.kubernetes.io/rewrite-target: /$1</code> & <code>/foo(/|$)(.*)</code> -> <strong>did not work</strong></li>
</ul>
<p>At the top of the Overview page of "ingress-appgateway" (Azure portal), showing error:</p>
<blockquote>
<p>All the instances in one or more of your backend pools are unhealthy.
This will result in a 502 error when you try to access your
application hosted behind the Application Gateway. Please check the
backend health and resolve the issue.</p>
</blockquote>
<p>It works only if I remove the paths (<code>/foo</code> &amp; <code>/bar</code>) and target a single service.</p>
<p>FYI, I am using Azure CNI networking and existing VNet (dedicated subnet).</p>
<p><code>deployment.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: demo-web-app1
  namespace: demo
spec:
  selector:
    app: demo-web-app1
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web-app2
  namespace: demo
spec:
  selector:
    app: demo-web-app2
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web-app1
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-web-app1
  template:
    metadata:
      labels:
        app: demo-web-app1
    spec:
      containers:
        - name: demo-web-app1
          image: myacr.azurecr.io/myacr6472:375
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web-app2
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-web-app2
  template:
    metadata:
      labels:
        app: demo-web-app2
    spec:
      containers:
        - name: demo-web-app2
          image: myacr.azurecr.io/myacr6472:375
          ports:
            - containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo-web-app
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /foo
            pathType: Prefix
            backend:
              service:
                name: demo-web-app1
                port:
                  number: 4200
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: demo-web-app2
                port:
                  number: 8080
</code></pre>
| <p>Using the annotation <code>appgw.ingress.kubernetes.io/backend-path-prefix: "/"</code>, we are able to specify multiple path-based rules:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
spec:
  rules:
    - http:
        paths:
          - path: /service1/*
            pathType: Prefix
            backend:
              service:
                name: k8-boot-graphql-rest-aks-apm-test
                port:
                  number: 8081
          - path: /service2/*
            pathType: Prefix
            backend:
              service:
                name: k8-sboot-restapi-test
                port:
                  number: 8080

# http://host/service1/actuator/health ---> Microservice 1
# http://host/service2/actuator/health ---> Microservice 2
</code></pre>
<p>Sample deployment script I have used</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: k8-boot-graphql-rest-aks-apm-test
  labels:
    app: k8-boot-graphql-rest-aks-apm-test
spec:
  containers:
    - image: sreyasvpariyath/k8-boot-graphql-rest-aks-apm-test:latest #CHANGE
      imagePullPolicy: Always
      name: k8-boot-graphql-rest-aks-apm-test
      ports:
        - containerPort: 8081
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: k8-boot-graphql-rest-aks-apm-test
spec:
  selector:
    app: k8-boot-graphql-rest-aks-apm-test
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
</code></pre>
|
<p>I have what seems like an API issue that results in not being able to create a job from a cron job.</p>
<p>I have a CronJob Helm file using the API like so:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
</code></pre>
<p>Deploying that with helm works just fine.</p>
<p>Then after it is deployed I attempt to create a job using that cronjob like so:</p>
<pre><code> kubectl create job $(helm-release-name) --from=cronjob/connector-config
</code></pre>
<p>This used to create the job based on the chart above. Now however, since upgrading to 1.19.11 I instead get this error:</p>
<pre><code>##[error]error: unknown object type *v1beta1.CronJob
commandOutput
##[error]The process
'/opt/hostedtoolcache/kubectl/1.22.1/x64/kubectl' failed with exit code 1
</code></pre>
<p>If I change the api in the Helm chart to this:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
</code></pre>
<p>Then the Helm chart fails to deploy.</p>
<pre><code>UPGRADE FAILED: unable to recognize "": no matches for kind "CronJob" in version
"batch/v1"
</code></pre>
<p>Suggestions?</p>
<p>Thanks!</p>
| <p>CronJob became generally available (GA) in Kubernetes version 1.21 and later.</p>
<p>On those versions you should use <code>apiVersion: batch/v1</code>; older clusters (like your 1.19.11 server) only support <code>batch/v1beta1</code>.</p>
<p><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/</a></p>
<blockquote>
<p>CronJobs was promoted to general availability in Kubernetes v1.21. If
you are using an older version of Kubernetes, please refer to the
documentation for the version of Kubernetes that you are using, so
that you see accurate information. Older Kubernetes versions do not
support the batch/v1 CronJob API.</p>
</blockquote>
<p>You can check the support Kubernetes API versions using</p>
<pre><code>kubectl api-resources
kubectl api-versions
</code></pre>
<p>You can also try:</p>
<pre><code>kubectl explain <Resource type>
kubectl explain cronjob
</code></pre>
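<p>On a 1.21+ cluster, a minimal <code>batch/v1</code> CronJob manifest would look like the sketch below (the name, schedule and command are just placeholders, not taken from your chart):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: connector-config
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: job
              image: busybox
              command: ["echo", "hello"]
          restartPolicy: OnFailure
</code></pre>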
|
<p>I uninstalled calico using:
'kubectl delete -f calico.yaml'</p>
<p>and installed weave using:
'export kubever=$(kubectl version | base64 | tr -d '\n')'
'kubectl apply -f "<a href="https://cloud.weave.works/k8s/net?k8s-version=" rel="nofollow noreferrer">https://cloud.weave.works/k8s/net?k8s-version=</a>$kubever"'</p>
<p>When I deploy my pods they remain in "ContainerCreating" status.
When I check the pod events I see the error below:</p>
<p>'networkPlugin cni failed to set up pod "saccofrontend-d7444fd6d-998gf_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/'</p>
<p>I manually deleted this file.
A reinstall of docker and kubeadm did not help either; I am still getting the same error.</p>
<p>Please advise what could be prompting kubelet to still use Calico as the CNI even though I uninstalled it.</p>
| <p>My issue was that I was using the command below to setup the Calico CNI:</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
</code></pre>
<p>Using the link below instead worked. It's the same without the version I guess.</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
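<p>If the error persists after switching plugins, it is usually because Calico's CNI config files are still present on the node. You can check for and clean up leftovers manually (these paths are the conventional defaults, so adjust them to your setup):</p>
<pre><code>ls /etc/cni/net.d/
# remove any leftover calico entries, e.g.:
sudo rm /etc/cni/net.d/10-calico.conflist
sudo systemctl restart kubelet
</code></pre>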
|
<p>I've recently joined a new project in Kubernetes. The previous team didn't seem to manage deployments well, and some tasks are handled by single pods run with init containers in them.</p>
<p>So, for example, we have a namespace "test" with pods in it; some were run manually, outside of any deployment, by other team members, and contain <code>initContainers</code>. I need to find a particular pod with an init container in it and get its name, manifest and so on.</p>
<p>The cheat sheet in Kubernetes docs suggests a solution with getting a container id:</p>
<pre><code>kubectl get pods -n test -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3
</code></pre>
<p>It gives me an ID of <code>InitContainer</code>.</p>
<p>When I try to get name of it and respectively pod name I try to use this code snippet:</p>
<pre><code>kubectl get pod -n test -o jsonpath='{range .items[?(@.status.containerStatuses[].containerID=="docker://686bc30be6e870023dcf611f7a7808516e041c892a236e565ba2bd3e0569ff7a")]}{.metadata.name}{end}'
</code></pre>
<p>and it gives me nothing so far.</p>
<p>Is there more elegant and easy way of getting pod names with <code>initcontainers</code> in a particular namespace?</p>
<p>I also tried this solution:</p>
<pre><code>kubectl get pod -n test -o custom-columns='POD_NAME:metadata.name,INIT_NAME:spec.InitContainers[*].name'
</code></pre>
<p>but it returns nothing.</p>
<p>The solution I'm using now is parsing the YAML output with a "for" loop in bash, but it doesn't feel good to me.</p>
<p>Any suggestions?</p>
| <p>You need to adjust your query to use <code>initContainerStatuses</code> instead of <code>containerStatuses</code>, so that it matches on init containers only:</p>
<pre><code>kubectl get pod -n <namespace> -o jsonpath='{range .items[?(@.status.initContainerStatuses[].containerID=="docker://<container_id>")]}{.metadata.name}{end}'
</code></pre>
<p>For example:</p>
<pre><code>kubectl get pod -n kube-system -o jsonpath='{range .items[?(@.status.initContainerStatuses[].containerID=="docker://e235d512c3a5472c8f7de6e73c724317639c9132c07193cb9")]}{.metadata.name}{end}'
weave-net-s26tf
</code></pre>
</code></pre>
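<p>As a side note, the <code>custom-columns</code> attempt in the question returns nothing because the field names are case-sensitive: the pod spec field is <code>spec.initContainers</code>, not <code>spec.InitContainers</code>. With the lowercase name it should list pods together with their init container names (a sketch, not tested against your cluster):</p>
<pre><code>kubectl get pod -n test -o custom-columns='POD_NAME:metadata.name,INIT_NAME:spec.initContainers[*].name'
</code></pre>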
|
<p>Update:<br />
A colleague who works for Microsoft said:</p>
<blockquote>
<p>Changelog entry for this behaviour change is here: <a href="https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/docs-ref-conceptual/release-notes-azure-cli.md#aks-3" rel="nofollow noreferrer">https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/docs-ref-conceptual/release-notes-azure-cli.md#aks-3</a></p>
</blockquote>
<hr>
<p>I'm following the proper instructions and the documentation must be out of date.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-service-principal" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/kubernetes-service-principal</a></p>
<blockquote>
<p><strong>Automatically create and use a service principal.</strong><br />
When you create an AKS cluster in the Azure portal or using the az aks create command, Azure can automatically generate a service principal.
In the following Azure CLI example, a service principal is not specified. In this scenario, the Azure CLI creates a service principal for the AKS cluster. To successfully complete the operation, your Azure account must have the proper rights to create a service principal.</p>
</blockquote>
<pre><code>az aks create --name myAKSCluster --resource-group myResourceGroup
</code></pre>
<p>This is what happened a few months ago - see <strong>Finished service principal creation</strong>:</p>
<p><a href="https://i.stack.imgur.com/0IF56.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0IF56.png" alt="enter image description here" /></a></p>
<p>Now when I try I get <strong>Add role propagation</strong>:</p>
<p><a href="https://i.stack.imgur.com/blJZd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/blJZd.png" alt="enter image description here" /></a></p>
<p>The problem is querying the servicePrincipalProfile.clientId results in <strong>msi</strong>, I need the guid of the service principal not the Managed Service Identity.</p>
<pre><code>$CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)
echo $CLIENT_ID
</code></pre>
<p>Used to work:</p>
<p><a href="https://i.stack.imgur.com/tDZXj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDZXj.png" alt="enter image description here" /></a></p>
<p>Now its changed:</p>
<p><a href="https://i.stack.imgur.com/GnNxh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GnNxh.png" alt="enter image description here" /></a></p>
<p>How do I create the Kubernetes Cluster with a Service Principal as the documentation states and how it used to work?</p>
<p>Repro steps:</p>
<p><a href="https://github.com/MeaningOfLights/AzureTraining/blob/master/Hands-On-Labs-That-Work/80-Kubernetes.md" rel="nofollow noreferrer">https://github.com/MeaningOfLights/AzureTraining/blob/master/Hands-On-Labs-That-Work/80-Kubernetes.md</a></p>
<p><a href="https://github.com/MeaningOfLights/AzureTraining/blob/master/Hands-On-Labs-That-Work/85-Kubernetes-Deployment.md" rel="nofollow noreferrer">https://github.com/MeaningOfLights/AzureTraining/blob/master/Hands-On-Labs-That-Work/85-Kubernetes-Deployment.md</a></p>
| <p>For reference: I got the same result, and following your <a href="https://github.com/MicrosoftDocs/azure-docs-cli/blob/master/docs-ref-conceptual/release-notes-azure-cli.md#aks-3" rel="nofollow noreferrer">link</a> I found that this worked.</p>
<pre><code>az aks show -g aks -n cluster --query identityProfile.kubeletidentity.clientId -o tsv
</code></pre>
<p><a href="https://i.stack.imgur.com/FaaOL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FaaOL.png" alt="enter image description here" /></a></p>
<p>and this returned the appropriate GUID, which I could use for my RBAC assignment:</p>
<pre><code># get the clientId of the cluster
$clientId = (az aks show -g aks -n cluster --query identityProfile.kubeletidentity.clientId -o tsv)
# get the resourceId of the registry
$acrId = (az acr show -g acr -n myacr --query id -o tsv)
# give the cluster identity pull rights on the registry
az role assignment create --assignee $clientId --role AcrPull --scope $acrId
</code></pre>
|
<p>I have the following YAML which I need to apply using the Kubernetes Go SDK (and not the CLI).
I didn't find a way with the Go SDK as it is a custom resource. Any idea how I can apply it to Kubernetes via code?</p>
<p>This is the file</p>
<p>Any example will be very helpful!</p>
<pre><code>apiVersion: aps.dp.com/v1alpha1
kind: Edtack
metadata:
  name: ed01
  namespace: ctr
spec:
  intRef:
    name: evr
  stack:
    - name: vectnt
      namespace: aps
      path: https://packages.timber.io/helm/latest/vect-0.11.0.tgz
      valuesRef:
        name: vecvalues
    - name: ek
      namespace: lg
      path: rescharts/bing
    - name: apigw-gloo-ee
      namespace: apw
      path: https://common.cdn.repositories.cloud.sap/api-gateway/apigw-gloo-ee/apigw-gloo-ee-0.3.0.tgz
      pullSecretRef:
        name: svr-secret
      valuesSecretRef:
        name: apis
    - name: kuback
      namespace: kube-prom
      path: https://github.com/prometheus-community/helm-charts/releases/download/kube-prometheus-stack-16.12.0/kube-prometheus-stack-16.12.0.tgz
      valuesSecretRef:
        name: kubes
</code></pre>
| <p>You can use the <a href="https://github.com/pytimer/k8sutil" rel="nofollow noreferrer">k8sutil</a> repo, see the <a href="https://github.com/pytimer/k8sutil/tree/main/examples/apply" rel="nofollow noreferrer">apply</a> example:</p>
<pre><code>package main

import (
    "context"
    "flag"
    "log"
    "path/filepath"

    "github.com/pytimer/k8sutil/apply"
    "k8s.io/client-go/discovery"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

const applyStr = `
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - name: web
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
`

func main() {
    var kubeconfig *string
    if home := homedir.HomeDir(); home != "" {
        kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
    } else {
        kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
    }
    flag.Parse()

    // use the current context in kubeconfig
    config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
    if err != nil {
        panic(err.Error())
    }
    dynamicClient, err := dynamic.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }
    discoveryClient, err := discovery.NewDiscoveryClientForConfig(config)
    if err != nil {
        panic(err.Error())
    }

    applyOptions := apply.NewApplyOptions(dynamicClient, discoveryClient)
    if err := applyOptions.Apply(context.TODO(), []byte(applyStr)); err != nil {
        log.Fatalf("apply error: %v", err)
    }
}
</code></pre>
|
<p>How will I be able to change the access mode of a persistent volume claim? Currently it is showing as RWO, and I need to change it to RWX.</p>
<p>Many thanks in advance.</p>
| <p><code>accessModes</code> in a PersistentVolumeClaim (PVC) is an <strong>immutable field</strong> and <strong>cannot be changed once applied</strong>. However, you can change the bound PersistentVolume's (PV) <code>accessModes</code> in the way suggested by @aurelius, which will automatically update the PVC's <code>accessModes</code>.</p>
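<p>In practice, the usual workaround is to recreate the claim. A rough sketch (the names <code>my-claim</code> and <code>my-pv</code> are placeholders; verify each step against your storage setup):</p>
<pre><code># keep the data: make sure the PV is not deleted with the PVC
kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# save the claim, then delete it
kubectl get pvc my-claim -o yaml > my-claim.yaml
kubectl delete pvc my-claim

# the released PV keeps a claimRef; clear it so the new PVC can bind
kubectl patch pv my-pv --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'

# edit my-claim.yaml: set accessModes to ["ReadWriteMany"], then
kubectl apply -f my-claim.yaml
</code></pre>
<p>Note that <code>ReadWriteMany</code> only works if the PV lists it in its own <code>accessModes</code> and the underlying storage/provisioner actually supports it.</p>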
|
<p>I created a Kubernetes cluster deployment with 3 Pods, and all are running fine, but when I try to curl the internal IPs of the Pods I cannot reach them. In the describe section I could see this error: "MountVolume.SetUp failed for volume "default-token-twhht" : failed to sync secret cache".</p>
<p>Errors below:</p>
<pre><code>5m51s Normal RegisteredNode node/ip-10-1-1-4 Node ip-10-1-1-4 event: Registered Node ip-10-1-1-4 in Controller
57m Normal Scheduled pod/nginx-deployment-585449566-9bqp7 Successfully assigned default/nginx-deployment-585449566-9bqp7 to ip-10-1-1-4
57m Warning FailedMount pod/nginx-deployment-585449566-9bqp7 MountVolume.SetUp failed for volume "default-token-twhht" : failed to sync secret cache: timed out waiting for the condition
57m Normal Pulling pod/nginx-deployment-585449566-9bqp7 Pulling image "nginx:latest"
56m Normal Pulled pod/nginx-deployment-585449566-9bqp7 Successfully pulled image "nginx:latest" in 12.092210534s
56m Normal Created pod/nginx-deployment-585449566-9bqp7 Created container nginx
56m Normal Started pod/nginx-deployment-585449566-9bqp7 Started container nginx
57m Normal Scheduled pod/nginx-deployment-585449566-9hlhz Successfully assigned default/nginx-deployment-585449566-9hlhz to ip-10-1-1-4
57m Warning FailedMount pod/nginx-deployment-585449566-9hlhz MountVolume.SetUp failed for volume "default-token-twhht" : failed to sync secret cache: timed out waiting for the condition
57m Normal Pulling pod/nginx-deployment-585449566-9hlhz Pulling image "nginx:latest"
56m Normal Pulled pod/nginx-deployment-585449566-9hlhz Successfully pulled image "nginx:latest" in 15.127984291s
56m Normal Created pod/nginx-deployment-585449566-9hlhz Created container nginx
56m Normal Started pod/nginx-deployment-585449566-9hlhz Started container nginx
57m Normal Scheduled pod/nginx-deployment-585449566-ffkwf Successfully assigned default/nginx-deployment-585449566-ffkwf to ip-10-1-1-4
57m Warning FailedMount pod/nginx-deployment-585449566-ffkwf MountVolume.SetUp failed for volume "default-token-twhht" : failed to sync secret cache: timed out waiting for the condition
57m Normal Pulling pod/nginx-deployment-585449566-ffkwf Pulling image "nginx:latest"
56m Normal Pulled pod/nginx-deployment-585449566-ffkwf Successfully pulled image "nginx:latest" in 9.459864756s
56m Normal Created pod/nginx-deployment-585449566-ffkwf Created container nginx
</code></pre>
| <p>You can add an additional RBAC role permission to your Pod's service account, reference <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">1</a> <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#define-and-assign" rel="nofollow noreferrer">2</a> <a href="https://kubernetes.io/docs/reference/access-authn-authz/node/" rel="nofollow noreferrer">3</a>.</p>
<p>Ensure as well that you have workload identity set up, reference <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#enable_on_cluster" rel="nofollow noreferrer">4</a>.</p>
<hr />
<p>This can also happen when the apiserver is under high load; you could use more, smaller nodes to spread your pods, and increase your resource requests.</p>
|
<p>I'm trying to deploy a MongoDB replica set on a microk8s cluster. I have installed a VM running Ubuntu 20.04. After the deployment, the mongo pods do not run but crash. I've enabled the microk8s storage, dns and rbac add-ons but the problem persists. Can anyone help me find the reason behind it? Below is my manifest file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
      environment: test
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: replicaset
                      operator: In
                      values:
                        - MainRepSet
                topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
        - name: secrets-volume
          secret:
            secretName: shared-bootstrap-data
            defaultMode: 256
      containers:
        - name: mongod-container
          #image: pkdone/mongo-ent:3.4
          image: mongo
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--wiredTigerCacheSizeGB"
            - "0.1"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "MainRepSet"
            - "--auth"
            - "--clusterAuthMode"
            - "keyFile"
            - "--keyFile"
            - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
            - "--setParameter"
            - "authenticationMechanisms=SCRAM-SHA-1"
          resources:
            requests:
              cpu: 0.2
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: secrets-volume
              readOnly: true
              mountPath: /etc/secrets-volume
            - name: mongodb-persistent-storage-claim
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongodb-persistent-storage-claim
      spec:
        storageClassName: microk8s-hostpath
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
</code></pre>
<p>Also, here are the pv, pvc and sc outputs:</p>
<pre><code>yyy@xxx:$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mongodb-persistent-storage-claim-mongo-0 Bound pvc-1b3de8f7-e416-4a1a-9c44-44a0422e0413 5Gi RWO microk8s-hostpath 13m
yyy@xxx:$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-5b75ddf6-abbd-4ff3-a135-0312df1e6703 20Gi RWX Delete Bound container-registry/registry-claim microk8s-hostpath 38m
pvc-1b3de8f7-e416-4a1a-9c44-44a0422e0413 5Gi RWO Delete Bound default/mongodb-persistent-storage-claim-mongo-0 microk8s-hostpath 13m
yyy@xxx:$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
microk8s-hostpath (default) microk8s.io/hostpath Delete Immediate false 108m
</code></pre>
<pre><code>yyy@xxx:$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
metrics-server-8bbfb4bdb-xvwcw 1/1 Running 1 148m
dashboard-metrics-scraper-78d7698477-4qdhj 1/1 Running 0 146m
kubernetes-dashboard-85fd7f45cb-6t7xr 1/1 Running 0 146m
hostpath-provisioner-5c65fbdb4f-ff7cl 1/1 Running 0 113m
coredns-7f9c69c78c-dr5kt 1/1 Running 0 65m
calico-kube-controllers-f7868dd95-wtf8j 1/1 Running 0 150m
calico-node-knzc2 1/1 Running 0 150m
</code></pre>
<p>I have installed the cluster using this command:</p>
<p>sudo snap install microk8s --classic --channel=1.21</p>
<p>Output of mongodb deployment:</p>
<pre><code>yyy@xxx:$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mongo-0 0/1 CrashLoopBackOff 5 4m18s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 109m
service/mongodb-service ClusterIP None <none> 27017/TCP 4m19s
NAME READY AGE
statefulset.apps/mongo 0/3 4m19s
</code></pre>
<p>Pod logs:</p>
<pre><code>yyy@xxx:$ kubectl logs pod/mongo-0
{"t":{"$date":"2021-09-07T16:21:13.191Z"},"s":"F", "c":"CONTROL", "id":20574, "ctx":"-","msg":"Error during global initialization","attr":{"error":{"code":2,"codeName":"BadValue","errmsg":"storage.wiredTiger.engineConfig.cacheSizeGB must be greater than or equal to 0.25"}}}
</code></pre>
<pre><code>yyy@xxx:$ kubectl describe pod/mongo-0
Name: mongo-0
Namespace: default
Priority: 0
Node: citest1/192.168.9.105
Start Time: Tue, 07 Sep 2021 16:17:38 +0000
Labels: controller-revision-hash=mongo-66bd776569
environment=test
replicaset=MainRepSet
role=mongo
statefulset.kubernetes.io/pod-name=mongo-0
Annotations: cni.projectcalico.org/podIP: 10.1.150.136/32
cni.projectcalico.org/podIPs: 10.1.150.136/32
Status: Running
IP: 10.1.150.136
IPs:
IP: 10.1.150.136
Controlled By: StatefulSet/mongo
Containers:
mongod-container:
Container ID: containerd://458e21fac3e87dcf304a9701da0eb827b2646efe94cabce7f283cd49f740c15d
Image: mongo
Image ID: docker.io/library/mongo@sha256:58ea1bc09f269a9b85b7e1fae83b7505952aaa521afaaca4131f558955743842
Port: 27017/TCP
Host Port: 0/TCP
Command:
numactl
--interleave=all
mongod
--wiredTigerCacheSizeGB
0.1
--bind_ip
0.0.0.0
--replSet
MainRepSet
--auth
--clusterAuthMode
keyFile
--keyFile
/etc/secrets-volume/internal-auth-mongodb-keyfile
--setParameter
authenticationMechanisms=SCRAM-SHA-1
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 07 Sep 2021 16:24:03 +0000
Finished: Tue, 07 Sep 2021 16:24:03 +0000
Ready: False
Restart Count: 6
Requests:
cpu: 200m
memory: 200Mi
Environment: <none>
Mounts:
/data/db from mongodb-persistent-storage-claim (rw)
/etc/secrets-volume from secrets-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7nf8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mongodb-persistent-storage-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mongodb-persistent-storage-claim-mongo-0
ReadOnly: false
secrets-volume:
Type: Secret (a volume populated by a Secret)
SecretName: shared-bootstrap-data
Optional: false
kube-api-access-b7nf8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m53s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 7m52s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 7m50s default-scheduler Successfully assigned default/mongo-0 to citest1
Normal Pulled 7m25s kubelet Successfully pulled image "mongo" in 25.215669443s
Normal Pulled 7m21s kubelet Successfully pulled image "mongo" in 1.192994197s
Normal Pulled 7m6s kubelet Successfully pulled image "mongo" in 1.203239709s
Normal Pulled 6m38s kubelet Successfully pulled image "mongo" in 1.213451175s
Normal Created 6m38s (x4 over 7m23s) kubelet Created container mongod-container
Normal Started 6m37s (x4 over 7m23s) kubelet Started container mongod-container
Normal Pulling 5m47s (x5 over 7m50s) kubelet Pulling image "mongo"
Warning BackOff 2m49s (x23 over 7m20s) kubelet Back-off restarting failed container
</code></pre>
| <p>The logs you provided show that you have an incorrectly set parameter <code>wiredTigerCacheSizeGB</code>. In your case it is 0.1, and according to the message</p>
<pre><code>"code":2,"codeName":"BadValue","errmsg":"storage.wiredTiger.engineConfig.cacheSizeGB must be greater than or equal to 0.25"
</code></pre>
<p>it should be at least 0.25.</p>
<p>In the section <code>containers</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: mongod-container
    #image: pkdone/mongo-ent:3.4
    image: mongo
    command:
      - "numactl"
      - "--interleave=all"
      - "mongod"
      - "--wiredTigerCacheSizeGB"
      - "0.1"
      - "--bind_ip"
      - "0.0.0.0"
      - "--replSet"
      - "MainRepSet"
      - "--auth"
      - "--clusterAuthMode"
      - "keyFile"
      - "--keyFile"
      - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
      - "--setParameter"
      - "authenticationMechanisms=SCRAM-SHA-1"
</code></pre>
<p>you should change the value here:</p>
<pre><code>- "--wiredTigerCacheSizeGB"
- "0.1"
</code></pre>
<p>from <code>"0.1"</code> to any value greater than or equal to <code>"0.25"</code>.</p>
<hr />
<p>Additionally I have seen another error:</p>
<pre class="lang-yaml prettyprint-override"><code>1 pod has unbound immediate PersistentVolumeClaims
</code></pre>
<p>It should be related to what I wrote earlier. However, you may find alternative ways to solve it <a href="https://stackoverflow.com/questions/52668938/pod-has-unbound-persistentvolumeclaims">here</a>, <a href="https://stackoverflow.com/questions/59506631/k8s-pod-has-unbound-immediate-persistentvolumeclaims-mongodb">here</a> and <a href="https://github.com/helm/charts/issues/12521" rel="nofollow noreferrer">here</a>.</p>
|
<p>On my macOS machine I can use the commands below to launch the Kubernetes dashboard:</p>
<pre><code># launch k8s dashboard
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
# Enable the local access proxy, listening on port 8001 by default
$ kubectl proxy
</code></pre>
<p>I have a question about this: why is the <code>$ kubectl proxy</code> command needed?
When the dashboard is launched, doesn't it already listen on a port by default? Why is a proxy required?</p>
| <p>This command allows the dashboard to call the Kubernetes API Server. It handles locating the API server and authenticating: see <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#using-kubectl-proxy" rel="nofollow noreferrer">https://kubernetes.io/docs/using-kubectl-proxy</a></p>
<p>Without it, the dashboard will not be able to connect to the API Server, since the API Server's endpoint is unknown from the dashboard's perspective.
If you want to use the API Server endpoint directly, you will have to add an authentication token to each request, and you will also have to configure CORS on the Kubernetes API Server. See the option <code>--cors-allowed-origins</code>: <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/</a></p>
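<p>For comparison, talking to the API server directly means attaching a token yourself. A rough sketch (the service-account and secret names are assumptions, and token retrieval differs across Kubernetes versions):</p>
<pre><code>APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode)
curl -k -H "Authorization: Bearer $TOKEN" "$APISERVER/api"
</code></pre>
<p><code>kubectl proxy</code> does this locating and authenticating for you, which is why the dashboard instructions recommend it.</p>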
|
<p>I have a big application structured with <strong><a href="https://fastapi.tiangolo.com/" rel="nofollow noreferrer">FastAPI</a></strong> (with many routers), that runs in AWS Lambda. I want to migrate it to a container inside <strong><a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a></strong>. From my research, <strong><a href="https://www.openfaas.com/" rel="nofollow noreferrer">OpenFaaS</a></strong> is a great solution.
However I can't find documentation about how to do this.</p>
<p><em>Does anyone has references or a better solution?</em></p>
| <p>If you are using Python or Ruby, you can create a Dockerfile, use it to build a Docker image, and simply deploy that on Kubernetes. For example, for a Ruby app:</p>
<pre><code>FROM ruby:2.7-alpine3.11
WORKDIR /home/app
COPY . .
RUN bundle install
CMD ["ruby", "main.rb"]
</code></pre>
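<p>Since the question is about FastAPI, an equivalent Python Dockerfile might look like this (the module path <code>main:app</code> and the uvicorn server are assumptions about your project layout):</p>
<pre><code>FROM python:3.9-slim
WORKDIR /home/app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
</code></pre>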
<p>For OpenFaaS, they have provided good labs with documentation on creating async functions etc.</p>
<p>Labs : <a href="https://github.com/openfaas/workshop" rel="nofollow noreferrer">https://github.com/openfaas/workshop</a></p>
<p>If you are looking for <a href="https://github.com/openfaas/faas/tree/master/sample-functions" rel="nofollow noreferrer">examples</a> you can check out the official repo only : <a href="https://github.com/openfaas/faas/tree/master/sample-functions" rel="nofollow noreferrer">https://github.com/openfaas/faas/tree/master/sample-functions</a></p>
<p><strong>Extra</strong></p>
<p>There is also another good option <a href="https://knative.dev/docs/" rel="nofollow noreferrer">Knative</a> or <a href="https://kubeless.io/" rel="nofollow noreferrer">Kubeless</a></p>
<p>You can find the python Kubeless example and CI/CD example : <a href="https://github.com/harsh4870/kubeless-kubernetes-ci-cd" rel="nofollow noreferrer">https://github.com/harsh4870/kubeless-kubernetes-ci-cd</a></p>
|
<p>Imagine I have some pods I need on separate k8s nodes, I could use something like this if I know both pods have a label <code>my/label=somevalue</code></p>
<pre class="lang-yaml prettyprint-override"><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        podAffinityTerm:
          labelSelector:
            matchLabels:
              my/label: somevalue
</code></pre>
<p>I have some pods I need separated according to multiple values of the same label, where the values aren't known up-front (each value is a sharding key calculated by an operator).</p>
<p>Is there a way of specifying a <code>podAffinityTerm</code> which applies to any pods sharing the same value of <code>my.label</code>, regardless of the actual value?</p>
<p>E.g.</p>
<pre><code>Pod a has my/label=x
Pod b has my/label=x
Pod c has my/label=y
Pod d has my/label=y
</code></pre>
<p>I'd need pods a & b separated from each other, and pods c & d separated from each other, but e.g. a and d can coexist on the same node.</p>
| <p>As far as I know, there is no built-in way to specify affinity without knowing the label values. When creating a pod you need to provide both the key and the value, so for affinity to work properly you need to know the value at creation time and put it in the appropriate YAML file.</p>
<p>Theoretically, you could create a custom script, e.g. in bash, which fetches the value for you</p>
<blockquote>
<p>its a sharding key which is calculated by an operator</p>
</blockquote>
<p>and then substitutes it into the YAML manifests before they are applied, so the value is set correctly when the pod is created.</p>
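<p>A minimal sketch of such a templating step, with hypothetical names throughout (the <code>__SHARD_KEY__</code> placeholder and the way the key is obtained are assumptions):</p>
<pre><code># Hypothetical templating step: an operator computes the sharding key,
# and a script substitutes it into the manifest before `kubectl apply`.
SHARD_KEY="x"   # value calculated elsewhere (e.g. by the operator)

cat &gt; /tmp/pod-template.yaml &lt;&lt;'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
  labels:
    my/label: __SHARD_KEY__
EOF

RENDERED="$(sed "s/__SHARD_KEY__/${SHARD_KEY}/" /tmp/pod-template.yaml)"
printf '%s\n' "$RENDERED"   # pipe to `kubectl apply -f -` in a real cluster
</code></pre>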
<p>Additionally, you can also have a look at <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/" rel="nofollow noreferrer">Well-Known Labels, Annotations and Taints</a>.</p>
<p>Depending on your exact situation, you can try to solve the problem with them. See also <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">this page</a>. There you will find everything about assigning pod to nodes.</p>
|
<p>Firstly, this is my folder:</p>
<p><a href="https://i.stack.imgur.com/0zvFX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0zvFX.png" alt="enter image description here" /></a></p>
<p>This is my Dockerfile:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "PlatformService.dll"]
</code></pre>
<p>platforms-depl.yaml (deployment file)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: platforms-depl
spec:
replicas: 1
selector:
matchLabels:
app: platformservice
template:
metadata:
labels:
app: platformservice
spec:
containers:
- name: platformservice
image: hao14102000/platformservice:latest
</code></pre>
<p>platforms-np-srv.yaml (NodePort Service file)</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: platformnpservice-srv
spec:
type: NodePort
selector:
app: platformservice
ports:
- name: platformservice
protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>When I apply 2 files this is what I see:</p>
<p><a href="https://i.stack.imgur.com/Tf5KP.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Tf5KP.png" alt="enter image description here" /></a></p>
<p>When I try to connect on port <code>31023</code> using both options below:</p>
<pre><code>http://localhost:31023/api/platforms
http://10.109.215.230:31023/api/platforms
</code></pre>
<p>It doesn't work. This happens:</p>
<p><a href="https://i.stack.imgur.com/IR7Xg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/IR7Xg.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/mjW6Y.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mjW6Y.png" alt="enter image description here" /></a></p>
<p>I don't know what wrong with this...</p>
| <h2>What happens with Linux containers:</h2>
<p>Kubernetes on Docker Desktop for Windows by default runs its components in WSL2 (Windows Subsystem for Linux), which is a separate virtual machine with its own IP address and its own <code>localhost</code>. This is why the service is not reachable on <code>localhost</code> from the host OS (in this case Windows).</p>
<p>Another option is to disable the <code>Use the WSL 2 based engine</code> setting; <code>hyper-v</code> will then be used and a virtual machine will be created. However, Docker Desktop recommends WSL2 for its performance benefits.</p>
<h2>Available options how to access the service using WSL2:</h2>
<ol>
<li>Fastest and easiest (loadbalancer)</li>
</ol>
<p>Set up a <code>Service</code> of type <code>LoadBalancer</code>. The <code>EXTERNAL-IP</code> will be <code>localhost</code>, which solves the problem immediately. For example:</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11m
nginx LoadBalancer 10.110.15.53 localhost 8080:30654/TCP 4m10s
</code></pre>
<p>Nginx is available in browser on <code>localhost:8080</code>.</p>
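<p>For reference, a minimal <code>LoadBalancer</code> Service manifest matching the example output above might look like this (the selector label is an assumption about how the nginx pods are labelled):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx        # assumed pod label
  ports:
  - port: 8080        # port reachable on localhost
    targetPort: 80    # container port
</code></pre>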
<ol start="2">
<li>Using virtual machine's IP and nodeport</li>
</ol>
<p>Another option is to find the <code>WSL</code> virtual machine's IP address and then access the service on that IP and the <code>nodeport</code>.</p>
<p>To find the WSL VM address, run the <code>wsl</code> command to connect to the VM and then look up its IP address:</p>
<pre><code>wsl
# ip a | grep eth0
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
inet 172.19.xxx.yyy/20 brd 172.19.xxx.yyy scope global eth0
</code></pre>
<p>Nginx is available in browser on <code>172.19.xxx.yyy:30654</code>.</p>
<ol start="3">
<li>Port-forward - for testing purposes</li>
</ol>
<p><code>Port-forward</code> is useful for testing purposes, but it shouldn't be used on production systems.</p>
<p>To start the proxy to the service, run following command:</p>
<pre><code>kubectl port-forward service/nginx 8080:80 &
</code></pre>
<p>Nginx is available in browser on <code>localhost:8080</code></p>
<h2>Assumptions when Hyper-V is used</h2>
<p>First <code>hyper-v</code> should be installed on host machine. Note that not all versions of Windows are supported. Please refer to documentation on which versions and how to enable <code>hyper-v</code> <a href="https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v" rel="noreferrer">here</a>.</p>
<p>When <code>using WSL2 based engine</code> is deselected, <code>hyper-v</code> is used to work with containers. It creates a separate virtual machine which can be found in <code>Hyper-v Manager</code>.</p>
<ul>
<li><code>nodeport</code> works on localhost + nodeport</li>
<li><code>loadbalancer</code> doesn't work, you can't connect to <code>localhost</code> with service port even though <code>External-IP</code> shows <code>localhost</code>.</li>
</ul>
<h2>Windows containers on Windows Docker Desktop</h2>
<p>It's also possible to run Windows containers on Windows Docker Desktop.</p>
<p>It's required to change the daemon in use: in the system tray, select <code>Switch to Windows containers</code>. See <a href="https://docs.docker.com/desktop/windows/#switch-between-windows-and-linux-containers" rel="noreferrer">Switch between Windows and Linux containers</a>.</p>
<p>However, the <code>kubernetes</code> option will become unavailable, because the <code>control plane</code> components are designed to run on a <code>linux</code> host.</p>
<h2>Environment:</h2>
<p><strong>OS</strong>: Windows 10 Enterprise, build: 19041.1165</p>
<p><strong>Docker Desktop</strong>: 4.0.0 (67817)</p>
<p><strong>Engine</strong>: 20.10.8</p>
<p><strong>Kubernetes</strong>: 1.21.4</p>
<h2>Useful links:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="noreferrer">Service types in Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer">Kubernetes port-forwarding</a></li>
<li><a href="https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/about/" rel="noreferrer">Hyper-V</a></li>
<li><a href="https://docs.docker.com/desktop/windows/#switch-between-windows-and-linux-containers" rel="noreferrer">Docker Desktop for Windows users</a></li>
</ul>
|
<p>I am testing out Airflow on Kubernetes. Currently, I am running a simple DAG and purposefully trying to crash it to see what happens by running a memory-intensive task. My process queries a very large table and I'm trying to generate an Out Of Memory (OOM) error. I know it's happening because the task fails but the only logs I'm able to see give a vague error:</p>
<pre><code>subprocess.CalledProcessError: Command '['/tmp/venvuydlj38n/bin/python', '/tmp/venvuydlj38n/script.py', '/tmp/venvuydlj38n/script.in', '/tmp/venvuydlj38n/script.out', '/tmp/venvuydlj38n/string_args.txt']' died with <Signals.SIGKILL: 9>.
</code></pre>
<p>This is the only thing I see. Nothing in Kubernetes tells me why either. The pod doesn't restart and I don't see anything in <code>kubectl get events</code>. I don't understand why Kubernetes isn't evicting the Pod. From the Kubernetes documentation, Kubernetes should kill this pod and reschedule it, but that doesn't look like it's happening; yet my process still gets the SIGKILL 9 error. At this point I know it's an OOM error, but I want to find the logs that will confirm that. If I'm getting the SIGKILL 9 error for some other reason, I want to know.</p>
<p>Some other info: I set my Celery worker to have a 2GB resource limit.</p>
| <p>Take a look at this issue: <a href="https://github.com/kubernetes/kubernetes/issues/50632" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/50632</a></p>
<p>and this article:
<a href="https://medium.com/back-market-engineering/a-story-of-kubernetes-celery-resource-and-a-rabbit-ec2ef9e37e9f" rel="nofollow noreferrer">https://medium.com/back-market-engineering/a-story-of-kubernetes-celery-resource-and-a-rabbit-ec2ef9e37e9f</a></p>
<blockquote>
<p>When a container consumes more memory than its limit, the OOM Killer
sends a SIGKILL (signal number 9) to one of its processes.</p>
<p>If the container runs only one process, as it is often advised, the
container will then exit with the same exit code. K8s will see this
and report it, marking the container as OOMKilled. If, however, there
are multiple processes running on the container, as is the case here
with our Celery server and its 4 worker children processes, the
SIGKILL may target any of those children. The Celery server will
notice it</p>
</blockquote>
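<p>As an aside, the signal number maps directly to the exit code Kubernetes reports for OOM-killed containers: a process killed with SIGKILL exits with status 137 (128 + 9). A small illustration, unrelated to any cluster:</p>
<pre><code># Illustration only: a SIGKILL'd process reports exit status 137 (128 + 9),
# the same code Kubernetes shows for OOMKilled containers.
sleep 30 &amp;
pid=$!
kill -9 "$pid"
wait "$pid"
status=$?
echo "exit status: $status"
</code></pre>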
|
<p>I recently noticed a big accumulation of pods with status 'Shutdown'. We have been using Kubernetes since October, 2020.</p>
<p>Production and staging is running on the same nodes except that staging uses preemtible nodes to cut the cost. The containers are also stable in staging. (Failures occur rarely as they are caught in testing before).</p>
<p>Service provider Google Cloud Kubernetes.</p>
<p>I familiarized myself with the docs and tried searching however neither I recognize neither google helps with this particular status. There are no errors in the logs.</p>
<p><a href="https://i.stack.imgur.com/pC7LW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pC7LW.png" alt="example of bunch of shutdown pods" /></a>
<a href="https://i.stack.imgur.com/gEh6n.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gEh6n.png" alt="pod description only says failed" /></a></p>
<p>I have no problem pods being stopped. Ideally I'd like K8s to automatically delete these shutdown pods. If I run <code>kubectl delete po redis-7b86cdccf9-zl6k9</code>, it goes away in a blink.</p>
<p><code>kubectl get pods | grep Shutdown | awk '{print $1}' | xargs kubectl delete pod</code> is manual temporary workaround.</p>
<p>PS. <code>k</code> is an alias to <code>kubectl</code> in my environment.</p>
<p>Final example: it happens across all namespaces // different containers.
<a href="https://i.stack.imgur.com/PXHY8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PXHY8.png" alt="enter image description here" /></a></p>
<p>I stumbled upon few related issues explaining the status
<a href="https://github.com/kubernetes/website/pull/28235" rel="noreferrer">https://github.com/kubernetes/website/pull/28235</a>
<a href="https://github.com/kubernetes/kubernetes/issues/102820" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/102820</a></p>
<p>"When pods were evicted during the graceful node shutdown, they are marked as failed. Running <code>kubectl get pods</code> shows the status of the the evicted pods as <code>Shutdown</code>."</p>
| <p>The evicted pods are not removed on purpose: as the k8s team says <a href="https://github.com/kubernetes/kubernetes/issues/54525#issuecomment-340035375" rel="noreferrer">here</a>, they are kept so that they can be inspected after eviction.</p>
<p>I believe the best approach here would be to create a <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer">CronJob</a>, as already mentioned:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: del-shutdown-pods
spec:
  schedule: "0 12 * * *"    # once a day at noon
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner   # a service account with RBAC rights to list/delete pods
          containers:
          - name: cleanup
            image: bitnami/kubectl          # busybox does not ship kubectl
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - kubectl get pods | grep Shutdown | awk '{print $1}' | xargs kubectl delete pod
          restartPolicy: OnFailure
</code></pre>
|
<p>I have a minimal web application running on a local minikube cluster.
The backend exposes an API at <code>/be/test/hi</code>, and the Service name is also <code>be</code>.
When I send a GET request from the frontend to the backend I get:</p>
<pre><code>main.js:15 GET http://be/be/test/hi net::ERR_NAME_NOT_RESOLVED
</code></pre>
<p>If I run <code>nslookup be</code> from the frontend pod I get the correct DNS resolution, and running
<code>curl be/be/test/hi</code> I get the right response from the backend (a simple 'Hello world' string).</p>
<p>What am I missing?</p>
<p>backend.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: be
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: be
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: be
spec:
replicas: 1
selector:
matchLabels:
app: be
template:
metadata:
labels:
app: be
spec:
containers:
- name: be
image: kubebe
imagePullPolicy: Never
ports:
- containerPort: 8080
</code></pre>
<p>frontend.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: fe
spec:
ports:
- port: 80
targetPort: 8080
selector:
app: fe
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 1
selector:
matchLabels:
app: fe
template:
metadata:
labels:
app: fe
spec:
containers:
- name: fe
image: kubefe
imagePullPolicy: Never
ports:
- containerPort: 8080
</code></pre>
<p>main.js</p>
<pre><code>const URL="http://be/be/test/hi"
function httpGetAsync(){
var xmlHttp = new XMLHttpRequest();
xmlHttp.onreadystatechange = function() {
if (xmlHttp.readyState == 4 && xmlHttp.status == 200)
console.log(xmlHttp.responseText);
}
xmlHttp.open("GET", URL, true);
xmlHttp.send(null);
}
</code></pre>
<p>frontend Dockerfile (kubefe image):</p>
<pre><code>FROM centos/httpd-24-centos7:2.4
RUN mkdir -p /var/www/html/fe
COPY ./index.html ./main.js /var/www/html/fe/
</code></pre>
<p>EDIT: I've resolved my issues but I think the actual question is still unanswered.</p>
<p>To solve this I've simple removed the protocol and host from my url and set up proxy rules in /etc/httpd/conf.d/default-site.conf</p>
<p>default-site.conf</p>
<pre><code><VirtualHost *:8080>
ProxyPreserveHost Off
ProxyRequests Off
ProxyPass /be/ http://be/be/
</VirtualHost>
</code></pre>
| <p><strong>Your frontend code (JavaScript) executes in the user's browser, not inside the Kubernetes cluster.</strong> The browser resolves the hostname <code>be</code> with its own DNS, which knows nothing about cluster-internal Service names; that is why <code>nslookup</code> and <code>curl</code> succeed from inside the pod while the browser request fails with <code>ERR_NAME_NOT_RESOLVED</code>.</p>
<p>You have three options here:</p>
<ol>
<li>Create a NodePort Service for your backend and call it via the node's address</li>
<li>Rely on kube-proxy / <code>kubectl port-forward</code> to forward a local port to the Service</li>
<li>Create an Ingress that routes browser requests to the backend</li>
</ol>
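<p>As a sketch of option 3, an Ingress could route browser requests for <code>/be</code> to the backend Service and everything else to the frontend. This assumes an ingress controller is running (e.g. the minikube NGINX addon); the resource name is illustrative:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fe-be
spec:
  rules:
  - http:
      paths:
      - path: /be          # the backend already serves under /be
        pathType: Prefix
        backend:
          service:
            name: be
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fe
            port:
              number: 80
</code></pre>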
|
<p>I have created a Kind cluster with containerd runtime.
Here is my node:</p>
<pre><code>root@dev-001:~# k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
local-cluster-control-plane Ready control-plane,master 7d8h v1.20.2 172.18.0.2 <none> Ubuntu 20.10 5.4.0-81-generic containerd://1.4.0-106-gce4439a8
local-cluster-worker Ready <none> 7d8h v1.20.2 172.18.0.5 <none> Ubuntu 20.10 5.4.0-81-generic containerd://1.4.0-106-gce4439a8
local-cluster-worker2 Ready <none> 7d8h v1.20.2 172.18.0.3 <none> Ubuntu 20.10 5.4.0-81-generic containerd://1.4.0-106-gce4439a8
local-cluster-worker3 Ready <none> 7d8h v1.20.2 172.18.0.4 <none> Ubuntu 20.10 5.4.0-81-generic containerd://1.4.0-106-gce4439a8
</code></pre>
<p>How I can ssh into nodes?</p>
<p>Kind version: 0.11.1 or greater</p>
<p>Runtime: containerd ( not docker )</p>
| <p>Kind Kubernetes uses Docker to <strong><a href="https://kind.sigs.k8s.io/" rel="noreferrer">create containers which act as Kubernetes nodes</a></strong>:</p>
<blockquote>
<p><a href="https://sigs.k8s.io/kind" rel="noreferrer">kind</a> is a tool for running local Kubernetes clusters using Docker container “nodes”.</p>
</blockquote>
<p>So basically the layers are: your host -> containers hosted on your host's Docker, which act as <strong>Kubernetes nodes</strong> -> container runtimes on those nodes, which run the pods.</p>
<p>So, to "SSH" into a node you exec into the corresponding Docker container. Let's do it.</p>
<p>First, we will get list of nodes by running <code>kubectl get nodes -o wide</code>:</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane,master 5m5s v1.21.1 172.18.0.2 <none> Ubuntu 21.04 5.11.0-1017-gcp containerd://1.5.2
kind-worker Ready <none> 4m38s v1.21.1 172.18.0.4 <none> Ubuntu 21.04 5.11.0-1017-gcp containerd://1.5.2
kind-worker2 Ready <none> 4m35s v1.21.1 172.18.0.3 <none> Ubuntu 21.04 5.11.0-1017-gcp containerd://1.5.2
</code></pre>
<p>Let's suppose we want to SSH into <code>kind-worker</code> node.</p>
<p>Now, we will get list of docker containers (<code>docker ps -a</code>) and check if all nodes are here:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ee204ad5fd1 kindest/node:v1.21.1 "/usr/local/bin/entr…" 10 minutes ago Up 8 minutes kind-worker
434f54087e7c kindest/node:v1.21.1 "/usr/local/bin/entr…" 10 minutes ago Up 8 minutes 127.0.0.1:35085->6443/tcp kind-control-plane
2cb2e9465d18 kindest/node:v1.21.1 "/usr/local/bin/entr…" 10 minutes ago Up 8 minutes kind-worker2
</code></pre>
<p>Take a look at the <code>NAMES</code> column - here are nodes names used in Kubernetes.</p>
<p>Now we will use standard <a href="https://docs.docker.com/engine/reference/commandline/exec/" rel="noreferrer"><code>docker exec</code> command</a> to connect to the running container and connect to it's shell - <code>docker exec -it kind-worker sh</code>, then we will run <code>ip a</code> on the container to check if IP address matches the address from the <code>kubectl get nodes</code> command:</p>
<pre><code># ls
bin boot dev etc home kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
...
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
inet 172.18.0.4/16 brd 172.18.255.255 scope global eth0
...
#
</code></pre>
<p>As you can see, we successfully connected to the node used by Kind Kubernetes - the IP address <code>172.18.0.4</code> matches the IP address from the <code>kubectl get nodes</code> command.</p>
|
| <p>I am having an issue on GKE where this error is being spewed from all namespaces. I'm not sure what the issue might be or how to troubleshoot it.</p>
<p>message: "MountVolume.SetUp failed for "<strong>volume-name</strong>-token-m4rtn" : failed to sync secret cache: timed out waiting for the condition"</p>
<p>It occurs for almost all pods in all namespaces. Has anyone come across this or have any ideas on how I can troubleshoot?</p>
| <p>The error you are receiving points to a problem with RBAC (role-based access control) permissions: it looks like the service account used by the pod does not have enough permissions.</p>
<p>In other words, the default service account in the namespace you are deploying to is not authorized to mount the secret that you are trying to mount into your Pod.</p>
<p>You can find further information at <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Using RBAC Authorization</a>.
You can also take a look at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#define-and-assign" rel="nofollow noreferrer">Google's documentation</a>.</p>
<p>For example, the following Role grants read access (get, watch, and list) to all pods in the accounting Namespace:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: accounting
name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
</code></pre>
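<p>A Role grants nothing by itself; it has to be bound to a subject. A sketch of a RoleBinding that grants the Role above to the namespace's default service account (the binding name and subject are illustrative):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: accounting
subjects:
- kind: ServiceAccount
  name: default        # the service account used by the pod
  namespace: accounting
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>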
<p>Also you can take a look at the following similar cases <a href="https://www.reddit.com/r/kubernetes/comments/noyrvi/failed_to_sync_configmap_cache_timed_out/" rel="nofollow noreferrer">Case in Reddit</a>, <a href="https://stackoverflow.com/questions/64908290/pods-starting-but-not-working-in-kubernetes">StackOverflow case</a></p>
|
<p>Searching the web for <em>how to deploy a locally build docker image in kubernetes</em>, I found solutions in combination with <strong>minikube</strong> only.</p>
<p>For example: <a href="https://medium.com/swlh/how-to-run-locally-built-docker-images-in-kubernetes-b28fbc32cc1d" rel="nofollow noreferrer">minikube</a></p>
<p>So I'm wondering, is it possible to use locally build docker images in an Kubernetes Cluster established by <code>kubeadm</code> or is the preferred way to deploy those images to an private registry and pull it afterwards from within the cluster?</p>
| <p>Using a private registry is definitely the way to do it. Some rely on a third-party registry, others deploy one in Kubernetes. Solutions include Artifactory, Nexus, Portus, Harbor, GitLab, docker-registry, ... most of which can use S3 storage and scale horizontally.</p>
<p>Now, with minikube, hosting a registry might not be practical: you can build your images directly against minikube's Docker daemon. But you shouldn't rely on that anywhere else: on a multi-node kubeadm cluster, every node must be able to pull the image, which is exactly what a registry provides.</p>
|
<p>We're trying to install the <code>ingress-nginx</code> controller onto an Azure Kubernetes Service (AKS) cluster, following the steps from the <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller" rel="nofollow noreferrer">Azure documentation</a>.</p>
<p>Kubernetes version: 1.21.1
Chart version: 3.36.</p>
<p>The command we're using:</p>
<pre><code>SET REGISTRY_NAME=
SET ACR_URL=%REGISTRY_NAME%.azurecr.io
SET CONTROLLER_REGISTRY=k8s.gcr.io
SET CONTROLLER_IMAGE=ingress-nginx/controller
SET CONTROLLER_TAG=v0.48.1
SET PATCH_REGISTRY=docker.io
SET PATCH_IMAGE=jettech/kube-webhook-certgen
SET PATCH_TAG=v1.5.1
SET DEFAULTBACKEND_REGISTRY=k8s.gcr.io
SET DEFAULTBACKEND_IMAGE=defaultbackend-amd64
SET DEFAULTBACKEND_TAG=1.5
SET NAMESPACE=ingress-basic
kubectl create namespace %NAMESPACE%
kubectl apply -n %NAMESPACE% -f .\limitRanges.yaml
helm install nginx-ingress ingress-nginx/ingress-nginx ^
--namespace %NAMESPACE% ^
--version 3.36.0 ^
--set controller.replicaCount=2 ^
--set controller.nodeSelector."kubernetes\.io/os"=linux ^
--set controller.image.registry=%ACR_URL% ^
--set controller.image.image=%CONTROLLER_IMAGE% ^
--set controller.image.tag=%CONTROLLER_TAG% ^
--set controller.image.digest="" ^
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux ^
--set controller.admissionWebhooks.patch.image.registry=%ACR_URL% ^
--set controller.admissionWebhooks.patch.image.image=%PATCH_IMAGE% ^
--set controller.admissionWebhooks.patch.image.tag=%PATCH_TAG% ^
--set controller.admissionWebhooks.patch.image.digest="" ^
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux ^
--set defaultBackend.image.registry=%ACR_URL% ^
--set defaultBackend.image.image=%DEFAULTBACKEND_IMAGE% ^
--set defaultBackend.image.tag=%DEFAULTBACKEND_TAG% ^
--set defaultBackend.image.digest="" ^
-f internal-load-balancer.yaml ^
--debug
</code></pre>
<p>When running, the output is:</p>
<pre><code>install.go:173: [debug] Original chart version: "3.36.0"
install.go:190: [debug] CHART PATH: C:\Users\......\AppData\Local\Temp\helm\repository\ingress-nginx-3.36.0.tgz
client.go:290: [debug] Starting delete for "nginx-ingress-ingress-nginx-admission" ServiceAccount
client.go:319: [debug] serviceaccounts "nginx-ingress-ingress-nginx-admission" not found
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "nginx-ingress-ingress-nginx-admission" ClusterRole
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "nginx-ingress-ingress-nginx-admission" ClusterRoleBinding
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "nginx-ingress-ingress-nginx-admission" Role
client.go:319: [debug] roles.rbac.authorization.k8s.io "nginx-ingress-ingress-nginx-admission" not found
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "nginx-ingress-ingress-nginx-admission" RoleBinding
client.go:319: [debug] rolebindings.rbac.authorization.k8s.io "nginx-ingress-ingress-nginx-admission" not found
client.go:128: [debug] creating 1 resource(s)
client.go:290: [debug] Starting delete for "nginx-ingress-ingress-nginx-admission-create" Job
client.go:319: [debug] jobs.batch "nginx-ingress-ingress-nginx-admission-create" not found
client.go:128: [debug] creating 1 resource(s)
client.go:519: [debug] Watching for changes to Job nginx-ingress-ingress-nginx-admission-create with timeout of 5m0s
client.go:547: [debug] Add/Modify event for nginx-ingress-ingress-nginx-admission-create: ADDED
client.go:586: [debug] nginx-ingress-ingress-nginx-admission-create: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:547: [debug] Add/Modify event for nginx-ingress-ingress-nginx-admission-create: MODIFIED
client.go:586: [debug] nginx-ingress-ingress-nginx-admission-create: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
</code></pre>
<p>If I look at the pod logs for the job <code>nginx-ingress-ingress-nginx-admission-create</code>, I see the following log:</p>
<pre><code>W0909 06:34:24.393154 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
{"err":"an error on the server (\"\") has prevented the request from succeeding (get secrets nginx-ingress-ingress-nginx-admission)","level":"fatal","msg":"error getting secret","source":"k8s/k8s.go:109","time":"2021-09-09T06:34:34Z"}
</code></pre>
<p>I'm a little lost on where to look for additional information. I can see the error seems to be relating to getting a secret, and I can't see that secret under a <code>kubectl get secrets -A</code> command. I'm assuming the <code>\"\"</code> portion is supposed to be the error message, but it's not helping.</p>
<p>I have been able to install this chart successfully on a brand new, throwaway cluster. My guess is that it's an RBAC or permissions type problem, but without anything further about where to look, I'm out of ideas.</p>
| <p>You need to quote the values. I would also suggest simplifying the command, because all of these values are already set by default inside the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml" rel="nofollow noreferrer">Helm chart of ingress-nginx</a>:</p>
<pre><code>SET NAMESPACE=ingress-basic
kubectl create namespace %NAMESPACE%
kubectl apply -n %NAMESPACE% -f .\limitRanges.yaml
helm install nginx-ingress ingress-nginx/ingress-nginx ^
--namespace %NAMESPACE% ^
--version "4.0.1" ^
--set controller.replicaCount="2" ^
-f internal-load-balancer.yaml ^
--debug
</code></pre>
|
<p>just looking for some clarification here
I have a 2 node etcd cluster:</p>
<pre><code>master01=http://10.1.1.21:2379,master02=http://10.1.1.22:2379
</code></pre>
<p>all running fine.
If I login to master01 and do the following:</p>
<pre><code>etcdctl --cluster=true endpoint health
</code></pre>
<p>i get a good response:</p>
<pre><code>http://10.1.1.21:2379 is healthy: successfully committed proposal: took = 25.628392ms
http://10.1.1.22:2379 is healthy: successfully committed proposal: took = 42.98645ms
</code></pre>
<p>all operations get, put are running as expected.</p>
<pre><code>ETCDCTL_API=3 etcdctl --endpoints=http://10.1.1.21:2379,http://10.1.1.22:2379 get date
</code></pre>
<p>The trouble starts when I drop one of the nodes, so if i kill one node I am now getting errors instead of results for example:</p>
<pre><code>ETCDCTL_API=3 etcdctl --endpoints=http://10.1.1.21:2379,http://10.1.1.22:2379 get date
{"level":"warn","ts":"2021-09-09T08:58:22.175Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000e0a80/#initially=[http://10.1.1.21:2379;http://10.1.1.22:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
Error: context deadline exceeded
</code></pre>
<p>In this case I killed off master01. Am I doing something wrong?</p>
| <p>An etcd cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, quorum is (n/2)+1. For any odd-sized cluster, adding one node will always increase the number of nodes necessary for quorum. Although adding a node to an odd-sized cluster appears better since there are more machines, the fault tolerance is worse since exactly the same number of nodes may fail without losing quorum but there are more nodes that can fail. If the cluster is in a state where it can’t tolerate any more failures, adding a node before removing nodes is dangerous because if the new node fails to register with the cluster (e.g., the address is misconfigured), quorum will be permanently lost.</p>
<p>So in your case two etcd nodes provide the same fault tolerance as one, which is why an odd number of etcd members is always recommended. With one of your two members down, quorum (2 of 2) is lost, so the surviving member cannot commit writes or serve linearizable reads. <code>code = DeadlineExceeded desc = context deadline exceeded</code> means the client was not able to get an answer from the etcd server and timed out, which is exactly what you see after killing master01. Please refer to the doc below to learn more:</p>
<p><a href="https://etcd.io/docs/v3.3/faq/#what-is-failure-tolerance" rel="noreferrer">etcd fault tolerance table</a></p>
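<p>The quorum arithmetic can be sketched with a small shell loop (illustrative only): with 2 members, quorum is also 2, so losing either member stops the cluster:</p>
<pre><code># Quorum arithmetic for an etcd cluster of n members:
# quorum = floor(n/2) + 1, fault tolerance = n - quorum.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerance=$(( n - quorum ))"
done
</code></pre>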
|
<p>I currently have airflow running in a Kubernetes cluster in Azure using the helm chart for Apache airflow. I am able to use the API from the VM where I port forward the web server, using the endpoint: http://localhost:8080/api/v1/dags/test_trigger/dagRuns</p>
<p>Can anyone point me in the right direction for how I can interact with the API from other locations, or just expose the API endpoint in general to be able to be called from other locations?</p>
<p>Thanks,</p>
| <p>There is a short chapter in Airflow Helm Chart's Production Guide:</p>
<p><a href="https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#accessing-the-airflow-ui" rel="nofollow noreferrer">https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#accessing-the-airflow-ui</a></p>
<p>It's about setting up Ingress or LoadBalancer essentially.</p>
<p>Accessing the API is the same as accessing the webserver: they use the same port and run on the same webserver component, so the pointers there should guide you in what to do.</p>
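<p>As an illustration, a minimal Ingress exposing the webserver (and with it the REST API under <code>/api/v1</code>) might look like the following. The host name is a placeholder, and the Service name and port assume the chart's defaults for a release named <code>airflow</code>; check <code>kubectl get svc</code> for the actual values:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: airflow-webserver
spec:
  rules:
  - host: airflow.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: airflow-webserver  # assumed: &lt;release&gt;-webserver
            port:
              number: 8080
</code></pre>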
|
<p>I’m working on to install the Kubernetes cluster on Azure Ubuntu Virtual Machine 20.04 LTS. For that I have followed below articles.</p>
<p><a href="https://www.edureka.co/blog/install-kubernetes-on-ubuntu#InstallingKubeadmKubeletKubectl" rel="nofollow noreferrer">https://www.edureka.co/blog/install-kubernetes-on-ubuntu#InstallingKubeadmKubeletKubectl</a>
<a href="https://adamtheautomator.com/install-kubernetes-ubuntu/#Initializing_a_Kubernetes_Cluster" rel="nofollow noreferrer">https://adamtheautomator.com/install-kubernetes-ubuntu/#Initializing_a_Kubernetes_Cluster</a></p>
<p>Whenever I execute this <code>kubeadm init --apiserver-advertise-address=<ip-address-of-kmaster-vm> --pod-network-cidr=192.168.0.0/16</code> cmd for initializing a Kubernetes Cluster on Master node, then I’m getting the following error:</p>
<blockquote>
<p>[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.</p>
</blockquote>
<p>So, can anyone help me out for installing the Kubernetes Cluster on Azure Ubuntu Virtual Machine.</p>
| <p>I have also faced this problem and solved this by changing the cgroup driver of docker to systemd</p>
<pre><code>sudo vi /lib/systemd/system/docker.service
# Modify the ExecStart line as below:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
sudo systemctl status kubelet
</code></pre>
<p>If the kubelet status is active and running, you can then apply the <code>kubeadm join</code> command and install Kubernetes.</p>
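<p>As an alternative to editing the systemd unit, the same cgroup driver change can usually be made via <code>/etc/docker/daemon.json</code> (a sketch; back up any existing file first, since Docker will refuse to start if the same option is set in both places):</p>
<pre><code>{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
</code></pre>
<p>followed by <code>sudo systemctl restart docker</code>, after which <code>docker info | grep -i cgroup</code> should report <code>systemd</code>.</p>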
|
<p>I've just deployed websocket based <code>echo-server</code> on <a href="https://aws.amazon.com/eks/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc&eks-blogs.sort-by=item.additionalFields.createdDate&eks-blogs.sort-order=desc" rel="nofollow noreferrer">AWS EKS</a>. I see it's running stable and okay but when I was searching for implementation details I was finding only articles that were saying something about <code>nginx ingress controller</code> or <code>AWS application loadbalancer</code> and a lot of troubles with them.</p>
<p>Do I miss anything in my current, vanilla config? Do I need the AWS ALB or nginx ingress controller?</p>
<p>Thank you for all the replies.
All the best.</p>
| <blockquote>
<p>Do I miss anything in my current, vanilla config?</p>
</blockquote>
<p>You probably exposed your <code>echo-server</code> app using <a href="https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types/67931156#67931156">service type - <code>ClusterIP</code> or <code>NodePort</code></a> which is fine if you only need to access your app locally in the cluster (<code>ClusterIP</code>) or using your <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">node</a> IP address (<code>NodePort</code>).</p>
<blockquote>
<p>Do I need the AWS ALB or nginx ingress controller?</p>
</blockquote>
<p>They are different things, but they share a common goal - to make your websocket app available externally and <a href="https://stackoverflow.com/questions/45079988/ingress-vs-load-balancer/55161523#55161523">distribute traffic based on defined L7 routing routes</a>. It's a good solution if you have multiple deployments, so you need to decide for yourself whether you need some kind of Ingress Controller. If you are planning to deploy your application into production you should consider using those solutions, but it may also be fine with service type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a>.</p>
<p><strong>EDIT:</strong></p>
<p>If you are already using service type LoadBalancer your app is already available externally. Ingress controller provides additional configuration possibilities to configure L7 traffic route to your cluster (Ingress Controllers are often using LoadBalancer under the hood). Check this <a href="https://stackoverflow.com/questions/45079988/ingress-vs-load-balancer/45084511#45084511">answer</a> for more details about differences between LoadBalancer and Ingress.</p>
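<p>For reference, exposing the <code>echo-server</code> with a LoadBalancer service is a small manifest; the labels and ports below are assumptions and should be adjusted to match your deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: echo-server
spec:
  type: LoadBalancer
  selector:
    app: echo-server     # assumed pod label of the echo-server deployment
  ports:
  - protocol: TCP
    port: 80             # port exposed by the cloud load balancer
    targetPort: 8080     # assumed container port of the echo-server
</code></pre>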
<p>Also check:</p>
<ul>
<li><a href="https://www.nginx.com/blog/aws-alb-vs-nginx-plus/" rel="nofollow noreferrer">Choosing the Right Load Balancer on Amazon: AWS Application Load Balancer vs. NGINX Plus</a></li>
<li><a href="https://blog.getambassador.io/configuring-kubernetes-ingress-on-aws-dont-make-these-mistakes-1a602e430e0a" rel="nofollow noreferrer">Configuring Kubernetes Ingress on AWS? Don’t Make These Mistakes</a></li>
<li><a href="https://websockets.readthedocs.io/en/latest/howto/kubernetes.html" rel="nofollow noreferrer">WebSocket - Deploy to Kubernetes</a></li>
<li><a href="https://stackoverflow.com/questions/45079988/ingress-vs-load-balancer/45084511">LoadBalancer vs Ingress</a></li>
</ul>
|
<p>I have an application running in Kubernetes as a <code>StatefulSet</code> that starts 2 pods. It has configured a liveness probe and a readiness probe.</p>
<p>The <code>liveness probe</code> call a simple <code>/health</code> endpoint that responds when the server is done loading</p>
<p>The <code>readiness probe</code>, wait for some start-up job to complete. The job can take several minutes in some cases, and only when it finish the api of the application is ready to start accepting requests.</p>
<p>Even when the api is not available my app also run side jobs that don't depend on it, and I expect them to be done while the startup is happening too.</p>
<p><strong>Is it possible to force Kubernetes deployment to complete and deploy 2 pods, even when the readiness probe is still not passing?</strong></p>
<p>From the docs I get that the only effect of a readiness probe not passing is that the current pod won't be included as available in the loadbalancer service (which is actually the only effect that I want).</p>
<blockquote>
<p>If the readiness probe fails, the endpoints controller removes the
Pod's IP address from the endpoints of all Services that match the
Pod.</p>
</blockquote>
<p>However I am also seeing that the deployment never finishes, since pod 1 readiness probe is not passing and pod 2 is never created.</p>
<pre><code>kubectl rollout restart statefulset/pod
kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-0 1/2 Running 0 28m
</code></pre>
<p>If the readiness probe failure, always prevent the deployment, <strong>Is there other way to selectively expose only ready pods in the loadbalancer, while not marking them as Unready during the deployment?</strong></p>
<p>Thanks in advance!</p>
| <h2>StatefulSet deployment</h2>
<blockquote>
<p>Is it possible to force kubernetes deployment to complete and deploy 2
pods, even when the readiness probe is still not passing?</p>
</blockquote>
<p>Assuming <code>statefulSet</code> is meant rather than <code>deployment</code> as the object, the answer is no, it's not possible by design. The most important point is the second one:</p>
<ul>
<li>For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.</li>
<li>Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.</li>
<li>When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.</li>
</ul>
<blockquote>
<p>When the nginx example above is created, three Pods will be deployed
in the order web-0, web-1, web-2. web-1 will not be deployed before
web-0 is Running and Ready, and web-2 will not be deployed until web-1
is Running and Ready</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees" rel="nofollow noreferrer">StatefulSets - Deployment and scaling guaranties</a></p>
<h2>Readiness probe, endpoints and potential workaround</h2>
<blockquote>
<p>If the readiness probe failure, always prevent the deployment, Is
there other way to selectively expose only ready pods in the load
balancer, while not marking them as Unready during the deployment?</p>
</blockquote>
<p>This is by design: pods are added to service endpoints once they are in the <code>ready</code> state.</p>
<p>A potential workaround can be used, and at least in a simple example it does work; however, you should try it and evaluate whether this approach will suit your case. It is fine to use for an initial deployment.</p>
<p>The <code>statefulSet</code> can be started without the <code>readiness</code> probe included; this way the <code>statefulSet</code> will start the pods one by one when the previous one is <code>running and ready</code>. The <code>liveness</code> probe may need <code>initialDelaySeconds</code> set up so Kubernetes won't restart the pod thinking it's unhealthy. Once the <code>statefulSet</code> is fully up and ready, you can add the <code>readiness</code> probe to the <code>statefulSet</code>.</p>
<p>When the <code>readiness</code> probe is added, Kubernetes will restart all pods again, starting from the last one, and your application will need to start again.</p>
<p>The idea is to start all pods so that they can serve requests at roughly the same time, whereas with the <code>readiness</code> probe applied, only one pod will start in 5 minutes, for instance; the next pod will take 5 minutes more, and so on.</p>
<h2>Example</h2>
<p>A simple example to see what's going on, based on an <code>nginx</code> webserver and a <code>sleep 30</code> command which, when the <code>readiness</code> probe is set up, makes Kubernetes think that the pod is <code>not ready</code>.</p>
<ol>
<li>Apply the <code>headless service</code></li>
<li>Comment out the <code>readiness</code> probe in the <code>statefulSet</code> and apply the manifest</li>
<li>Observe that each pod is created right after the previous pod is <code>running and ready</code></li>
<li>Uncomment the <code>readiness</code> probe and apply the manifest</li>
<li>Kubernetes will recreate all pods starting from the last one, this time waiting for the <code>readiness</code> probe to complete and flag a pod as <code>running and ready</code>.</li>
</ol>
<p>Very convenient to use this command to watch for progress:</p>
<pre><code>watch -n1 kubectl get pods -o wide
</code></pre>
<p><code>nginx-headless-svc.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
</code></pre>
<p><code>nginx-statefulset.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: web
command: ["/bin/bash", "-c"]
args: ["sleep 30 ; echo sleep completed ; nginx -g \"daemon off;\""]
readinessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 1
periodSeconds: 5
</code></pre>
<h2>Update</h2>
<p>Thanks to @jesantana for this much easier solution.</p>
<p>If all pods have to be scheduled at once and it's not necessary to wait for pod readiness, <code>.spec.podManagementPolicy</code> can be set to <code>Parallel</code>. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies" rel="nofollow noreferrer">Pod Management Policies</a></p>
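<p>Applied to the example above, that change is a single field in the <code>statefulSet</code> spec (sketch; the rest of the manifest stays as shown earlier):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  podManagementPolicy: Parallel   # launch and terminate all pods in parallel
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  # ... template unchanged
</code></pre>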
<h2>Useful links:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">Kubernetes statefulsets</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">Kubernetes liveness, readiness and startup probes</a></li>
</ul>
|
<p>I have kube-dns running in a (bare metal) cluster. I know that it works, as I can <a href="https://github.com/kubernetes/dns/blob/master/docs/specification.md" rel="noreferrer">interpolate a service to a name</a> I've created and get a host entry:</p>
<pre><code>$ host elk-service-headless.default.svc.cluster.local
elk-service-headless.default.svc.cluster.local has address 10.42.0.151
elk-service-headless.default.svc.cluster.local has address 10.42.0.152
elk-service-headless.default.svc.cluster.local has address 10.42.0.153
(...)
</code></pre>
<p>What I can't figure out how to do is to list all of the records that kube-dns is holding. I've tried the <a href="https://stackoverflow.com/questions/19322962/how-can-i-list-all-dns-records">standard DNS tricks</a> like dig and <code>host -l</code> and can't get them. But in any case, there must be a way to do this from Kubernetes itself. I tried inspecting ConfigMaps and didn't find what I'm looking for.</p>
| <p>This post will help you find the internal DNS record of your K8s services on a cluster that runs kube-dns:</p>
<ol>
<li>Find the ClusterIP of the kube-dns service:</li>
</ol>
<p><code>kubectl -n kube-system get svc kube-dns</code></p>
<p><a href="https://i.stack.imgur.com/LocI9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LocI9.png" alt="enter image description here" /></a></p>
<p>Now we know the internal K8s DNS resolver IP is 172.20.0.10</p>
<ol start="2">
<li>Find the application service endpoint IP:</li>
</ol>
<p><code>kubectl -n fe get ep</code></p>
<p><a href="https://i.stack.imgur.com/Ce7aC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Ce7aC.png" alt="enter image description here" /></a></p>
<ol start="3">
<li>Exec into the application pod:</li>
</ol>
<p><code>kubectl -n fe exec -it fe-app-575fdf6cb6-lt7t6 -- sh</code></p>
<ol start="4">
<li>Get DNS service name:</li>
</ol>
<p><a href="https://i.stack.imgur.com/dzXUd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dzXUd.png" alt="enter image description here" /></a></p>
<ol start="5">
<li>A script to list all the K8s SVC DNS records:</li>
</ol>
<pre><code>#!/bin/bash
echo =========== Create an ubuntu pod ==================
kubectl run ubuntu --image=ubuntu -- bash -c "while true; do echo hello; sleep 10;done"
# Wait for the pod "ubuntu" to contain the status condition of type "Ready"
kubectl wait --for=condition=Ready pod/ubuntu
# Save a sorted list of IPs of all of the k8s SVCs:
kubectl get svc -A|egrep -v 'CLUSTER-IP|None'|awk '{print $4}'|sort -V > ips
# Copy the ip list to our Ubuntu pod:
kubectl cp ips ubuntu:/
echo =========== Installing dig tool into the pod ===============
kubectl exec -it ubuntu -- apt-get update
kubectl exec -it ubuntu -- apt install -y dnsutils
# Print 7 blank lines
yes '' | sed 7q
echo =========== Print all k8s SVC DNS records ====================
for ip in $(cat ips); do echo -n "$ip "; kubectl exec -it ubuntu -- dig -x $ip +short; done
echo ====== End of list =====================
echo ========= Cleanup ===============
kubectl delete po ubuntu
rm ips
exit 0
</code></pre>
|
<p>I am facing a weird issue with my pods. I launch around 20 pods in my env, and every time some random 3-4 of them hang with <code>Init:0/1</code> status. On checking the pod status, the init container shows Running status (it should terminate after its task is finished) and the app container shows the Waiting/PodInitializing stage. The same init container image and specs are used across all 20 pods, but this issue happens with some random pods every time. And on terminating these stuck pods, they get stuck in Terminating state.</p>
<p>If I ssh to the node on which such a pod is launched and run <code>docker ps</code>, it shows the init container in Running state, but running <code>docker exec</code> throws an error that the container doesn't exist. This init container pulls configs from a Consul server, and on checking its volume (obtained from <code>docker inspect</code>) I found that it has pulled all the key-value pairs correctly and saved them under the defined file name. I have checked resources on all the nodes and more than enough is available on all.</p>
<p>Below is detailed example of on the pod acting like this. </p>
<p>Kubectl Version :</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Pods :</p>
<pre><code>kubectl get pods -n dev1|grep -i session-service
session-service-app-75c9c8b5d9-dsmhp 0/1 Init:0/1 0 10h
session-service-app-75c9c8b5d9-vq98k 0/1 Terminating 0 11h
</code></pre>
<p>Pods Status :</p>
<pre><code>kubectl describe pods session-service-app-75c9c8b5d9-dsmhp -n dev1
Name: session-service-app-75c9c8b5d9-dsmhp
Namespace: dev1
Node: ip-192-168-44-18.ap-southeast-1.compute.internal/192.168.44.18
Start Time: Fri, 27 Apr 2018 18:14:43 +0530
Labels: app=session-service-app
pod-template-hash=3175746185
release=session-service-app
Status: Pending
IP: 100.96.4.240
Controlled By: ReplicaSet/session-service-app-75c9c8b5d9
Init Containers:
initpullconsulconfig:
Container ID: docker://c658d59995636e39c9d03b06e4973b6e32f818783a21ad292a2cf20d0e43bb02
Image: shr-u-nexus-01.myops.de:8082/utils/app-init:1.0
Image ID: docker-pullable://shr-u-nexus-01.myops.de:8082/utils/app-init@sha256:7b0692e3f2e96c6e54c2da614773bb860305b79922b79642642c4e76bd5312cd
Port: <none>
Args:
-consul-addr=consul-server.consul.svc.cluster.local:8500
State: Running
Started: Fri, 27 Apr 2018 18:14:44 +0530
Ready: False
Restart Count: 0
Environment:
CONSUL_TEMPLATE_VERSION: 0.19.4
POD: sand
SERVICE: session-service-app
ENV: dev1
Mounts:
/var/lib/app from shared-volume-sidecar (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bthkv (ro)
Containers:
session-service-app:
Container ID:
Image: shr-u-nexus-01.myops.de:8082/sand-images/sessionservice-init:sitv12
Image ID:
Port: 8080/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/etc/appenv from shared-volume-sidecar (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bthkv (ro)
Conditions:
Type Status
Initialized False
Ready False
PodScheduled True
Volumes:
shared-volume-sidecar:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-bthkv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bthkv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>Container Status on Node :</p>
<pre><code>sudo docker ps|grep -i session
c658d5999563 shr-u-nexus-01.myops.de:8082/utils/app-init@sha256:7b0692e3f2e96c6e54c2da614773bb860305b79922b79642642c4e76bd5312cd "/usr/bin/consul-t..." 10 hours ago Up 10 hours k8s_initpullconsulconfig_session-service-app-75c9c8b5d9-dsmhp_dev1_c2075f2a-4a18-11e8-88e7-02929cc89ab6_0
da120abd3dbb gcr.io/google_containers/pause-amd64:3.0 "/pause" 10 hours ago Up 10 hours k8s_POD_session-service-app-75c9c8b5d9-dsmhp_dev1_c2075f2a-4a18-11e8-88e7-02929cc89ab6_0
f53d48c7d6ec shr-u-nexus-01.myops.de:8082/utils/app-init@sha256:7b0692e3f2e96c6e54c2da614773bb860305b79922b79642642c4e76bd5312cd "/usr/bin/consul-t..." 10 hours ago Up 10 hours k8s_initpullconsulconfig_session-service-app-75c9c8b5d9-vq98k_dev1_42837d12-4a12-11e8-88e7-02929cc89ab6_0
c26415458d39 gcr.io/google_containers/pause-amd64:3.0 "/pause" 10 hours ago Up 10 hours k8s_POD_session-service-app-75c9c8b5d9-vq98k_dev1_42837d12-4a12-11e8-88e7-02929cc89ab6_0
</code></pre>
<p>On running Docker exec (same result with kubectl exec) :</p>
<pre><code>sudo docker exec -it c658d5999563 bash
rpc error: code = 2 desc = containerd: container not found
</code></pre>
| <p>A Pod can be stuck in Init status due to many reasons.</p>
<p>PodInitializing or Init status means that the Pod contains an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">Init container</a> that hasn't finished (Init containers are specialized containers that run before app containers in a Pod; they can contain utilities or setup scripts). If the pod's status is <code>Init:0/1</code>, it means one init container has not finished; <code>init:N/M</code> means the Pod has M Init Containers, and N have completed so far.</p>
<p><a href="https://i.stack.imgur.com/O9Kgz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/O9Kgz.png" alt="Architecture" /></a></p>
<hr />
<hr />
<h2>Gathering information</h2>
<p>For these scenarios the best approach is to gather information, as the <strong>root cause</strong> can be different in every PodInitializing issue.</p>
<ul>
<li><p><code>kubectl describe pods pod-XXX</code> with this command you can get much info about the pod and check whether there's any meaningful event as well. Save the init container name.</p>
</li>
<li><p><code>kubectl logs pod-XXX</code> this command prints the logs for a container in a pod or specified resource.</p>
</li>
<li><p><em><code>kubectl logs pod-XXX -c init-container-xxx</code></em> This is the most accurate, as it prints the logs of the init container. <em>You can get the init container name by describing the pod, in order to replace "init-container-XXX" with, for example, "copy-default-config" as below:</em></p>
<p><a href="https://i.stack.imgur.com/NJyjr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NJyjr.png" alt="enter image description here" /></a></p>
<p>The output of <code>kubectl logs pod-XXX -c init-container-xxx</code> can show meaningful info about the issue, reference:</p>
<p><a href="https://i.stack.imgur.com/76Bfa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/76Bfa.png" alt="Image-logs" /></a></p>
<p>In the image above we can see that the <strong>root cause</strong> is that the init container can't download the plugins from jenkins (timeout), here now we can check connection config, proxy, dns; or just modify the yaml to deploy the container without the plugins.</p>
</li>
</ul>
<p>Additional:</p>
<ul>
<li><p><code>kubectl describe node node-XXX</code> describing the pod will give you the name of its node, which you can also inspect with this command.</p>
</li>
<li><p><code>kubectl get events</code> to list the cluster events.</p>
</li>
<li><p><code>journalctl -xeu kubelet | tail -n 10</code> kubelet logs on systemd (<code>journalctl -xeu docker | tail -n 1</code> for docker).</p>
</li>
</ul>
<hr />
<p><strong>Solutions</strong></p>
<p>The solutions depends on the information gathered, once the <strong>root cause</strong> is found.</p>
<p>When you find a log with an insight of the root cause, you can investigate that specific root cause.</p>
<p>Some examples:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/41580#issuecomment-287990085" rel="noreferrer">1</a> > There, this happened when the init container was deleted; it can be fixed by deleting the pod so it is recreated, or by redeploying it. Same scenario in <a href="https://github.com/kubernetes/kubernetes/issues/29972#issuecomment-304992389" rel="noreferrer">1.1</a>.</p>
<p><a href="https://docs.microfocus.com/itom/Network_Operations_Management:2020.08/TSPodsStuckinInitState" rel="noreferrer">2</a> > If you found "bad address 'kube-dns.kube-system'" the PVC may not be recycled correctly, solution provided in <a href="https://docs.microfocus.com/itom/Network_Operations_Management:2020.08/TSPodsStuckinInitState" rel="noreferrer">2</a> is running <code>/opt/kubernetes/bin/kube-restart.sh</code>.</p>
<p><a href="https://waspro-tistory-com.translate.goog/643?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=nui,in,elem#:%7E:text=kubectl%20logs%20-f%20myapp-pod%20-c%20init-failed-test" rel="noreferrer">3</a> > There, a sh file was not found, the solution would be to modify the yaml file or remove the container if unnecessary.</p>
<p><a href="https://stackoverflow.com/q/51843212/11928136">4</a> > A FailedSync was found, and it was solved restarting docker on the node.</p>
<p>In general you can modify the yaml, for example to avoid using an outdated URL, try to recreate the affected resource, or just remove the init container that causes the issue from your deployment. However the specific solution will depend on the specific root cause.</p>
|
<p>I am trying to start a postgres pod on microk8s kubernetes cluster. At the moment the postgres container with all its data is started locally on the host machine.</p>
<p>The question is: Is it possible to map the current volume (from local docker volume ) to the kubernetes pod deployment?</p>
<p>I have used <code>kompose</code> to convert the <code>docker-compose.yml</code> to appropriate <code>.yaml</code> files for kubernetes deployment.</p>
<p>The above-mentioned <code>kompose</code> command creates <code>postgres-deployment.yaml</code>, <code>postgres-service.yaml</code>, and 2 <code>persistentvolumeclaims</code> (from the volumes mapped in the docker-compose file: one for pg_data and the other for the init_db script).</p>
<p>Do I need to generate <code>PersistentVolume</code> mappings alongside the <code>PersistentVolumeClaims</code> that were automatically generated by <code>kompose</code>, and how would they look?</p>
<p>EDIT: Using the yaml below I made 2 <code>volumes</code> and 2 <code>volumeclaims</code> for the postgres container, one for the data and one for the init_db script. Running that and then exposing the service endpoints worked.
WARNING: Because the database was running in the Docker container on the host machine and in the Kubernetes pod at the same time, data corruption happened.</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/lib/docker/volumes/dummy_pgdata/_data"
</code></pre>
| <p>Posted community wiki for better visibility. Feel free to expand it.</p>
<hr />
<p>There is possibility to share same docker volume with the Kubernetes pod by <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">defining custom PersistentVolume</a> with <code>storageClassName</code> and <code>hostPath</code> set as below:</p>
<pre><code>storageClassName: manual
hostPath:
path: "/var/lib/docker/volumes/{docker-volume}"
</code></pre>
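<p>A matching claim (a sketch; the names and sizes are assumptions) that would bind to such a volume could look like:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim      # assumed name
spec:
  storageClassName: manual     # must match the PersistentVolume
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
</code></pre>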
<p><strong>However, there is no possibility to share the same Postgres data files between different Postgres installations.</strong> It may cause data corruption. Check <a href="https://stackoverflow.com/a/43583234/16391991">this answer</a> for more details.</p>
|
<p>I am unable to connect to our Kubernetes cluster. The <code>kubectl</code> command does not seem to take the configuration into account...</p>
<p>When I issue a <code>kubectl cluster-info</code> (or <code>kubectl get pods</code>)
I get the following error message:</p>
<blockquote>
<p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
</blockquote>
<p>I was suspecting that the <code>~/.kube/config</code> was pointing to my minikube but it is not the case:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS...==
server: https://10.95.xx.yy:6443
name: cluster.local
contexts:
- context:
cluster: cluster.local
namespace: xxx-cluster-xxx-xx-username
user: username
name: username-context
current-context: ""
kind: Config
preferences: {}
users:
- name: username
user:
client-certificate: .certs/username.crt
client-key: .certs/username.key
</code></pre>
<p>Surprisingly, the <code>$KUBECONFIG</code> environment variable is set to the correct path:</p>
<pre><code>KUBECONFIG=/Users/username/.kube/config
</code></pre>
<p>and the <code>kubectl config view</code> works fine (a.k.a. is not pointing to <code>localhost</code> but to <code>https://10.95.xx.yy:6443</code>)</p>
<p>Finally, I also try to specify the path to the config file when invoking <code>kubectl</code> (<code>kubectl get pods --kubeconfig=/Users/username/.kube/config</code>), but the error remains the same...</p>
| <p>Your current context is unset, as seen with <code>current-context: ""</code>; if you were to run <code>kubectl --context username-context get pods</code> I would expect it to do more what you want. If that turns out to be the case, one can run <code>kubectl config use-context username-context</code> to set the <code>current-context</code> going forward</p>
|
<p>I have an application that has two parts: one deployer and the application runtime environment. The deployer needs to have access to different namespaces to be able to launch, edit and delete the application deployments, svc, configmaps, etc.</p>
<p>I first launch the deployer via a helm chart and then the deployer exposes some APIs to manage the application (launch, edit, delete).</p>
<p>My question is how to write the <code>ClusterRole</code> for my deployer that can only have access to a set of pre-created namespaces without giving it full cluster access (deployer should not be able to create, edit or delete namespaces). OR I have to create one <code>Role</code> for each of those namespaces and add them to the Helm chart of the deployer before installing it?</p>
| <p>You can create a <code>ClusterRole</code> that will describe what the role can do. Then create a <code>RoleBinding</code> in each namespace you want the role user to have the priviliges. Here is a nice example from the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-example" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>A RoleBinding can also reference a ClusterRole to grant the permissions defined in that ClusterRole to resources inside the RoleBinding's namespace. This kind of reference lets you define a set of common roles across your cluster, then reuse them within multiple namespaces.</p>
<p>For instance, even though the following RoleBinding refers to a ClusterRole, "dave" (the subject, case sensitive) will only be able to read Secrets in the "development" namespace, because the RoleBinding's namespace (in its metadata) is "development".</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "dave" to read secrets in the "development" namespace.
# You need to already have a ClusterRole named "secret-reader".
kind: RoleBinding
metadata:
name: read-secrets
#
# The namespace of the RoleBinding determines where the permissions are granted.
# This only grants permissions within the "development" namespace.
namespace: development
subjects:
- kind: User
name: dave # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
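<p>For completeness, the <code>secret-reader</code> ClusterRole referenced above is defined in the same documentation example roughly as follows; in your case the rules would instead grant access to deployments, services, configmaps, etc.:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
</code></pre>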
|
<p>I am just starting out with <code>kubernetes</code>. I have created a simple app which is working fine with the external IP address. It's a <code>mongodb-express</code> with <code>mongo-db</code> backend. I am trying to get a domain name mapped to it, instead of the IP address.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
etc
---
apiVersion: v1
kind: Service
metadata:
name: mongo-express-service
spec:
selector:
app: mongo-express
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 8081
</code></pre>
<p>This was a walk in the park with AWS. Simply created a zone and added the name servers to the domain name provider, however in GCP this does not work. I have created a zone in cloudDNS but cannot seem to map it to the cluster. The documentation is a bit complex for me to understand. Can someone help?</p>
<pre><code>kubernetes ClusterIP 10.28.0.1 <none> 443/TCP 51m
mongo-express-service LoadBalancer 10.28.7.249 34.69.73.210 80:30055/TCP 42m
mongodb-service ClusterIP 10.28.1.163 <none> 27017/TCP 45m
</code></pre>
| <p><strong>Cloud DNS</strong> configuration works the same way as the <strong>Route 53</strong> configuration: you add a <strong>CNAME</strong> record or an <strong>A</strong> record as required.</p>
<p>For a simple use case, you can follow the document below, which covers attaching a static IP to DNS and mapping the domain to it.</p>
<p>As you have already exposed the service with type <strong>LoadBalancer</strong>, you can follow on from here:</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#configuring_your_domain_name_records" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#configuring_your_domain_name_records</a></p>
<p>Make sure your nameservers are set correctly:</p>
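<p>For example, assuming you already created a managed zone in Cloud DNS (here called <code>my-zone</code>, with a placeholder domain), an <strong>A</strong> record pointing at the load balancer IP from your service output could be added like this (a sketch; adjust the zone and domain names to your setup):</p>

```
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add 34.69.73.210 \
  --name=mongo.example.com. --ttl=300 --type=A --zone=my-zone
gcloud dns record-sets transaction execute --zone=my-zone
```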
<blockquote>
<p>If your nameserver is Cloud DNS: Follow Cloud DNS Quickstart guide to
configure DNS A record for your domain name with the reserved IP
address of your application.</p>
<p>If your nameserver is another provider: Refer to your DNS service's
documentation on setting DNS A records to configure your domain name.
If you choose to use Cloud DNS instead, refer to Migrating to Cloud
DNS.</p>
</blockquote>
<p><strong>Extra</strong> :</p>
<p>You can also use ExternalDNS to automatically add <strong>Ingress</strong> records to Cloud DNS.</p>
<p>Example : <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/gke.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/gke.md</a></p>
|
<p>My metrics-server was sudden not working and got below information:</p>
<pre><code>$ kubectl get apiservices |egrep metrics
v1beta1.metrics.k8s.io kube-system/metrics-server False (MissingEndpoints)
</code></pre>
<p>I tried to implement below but still not okay:</p>
<pre><code>$ git clone https://github.com/kubernetes-incubator/metrics-server.git
$ cd metrics-server
$ kubectl apply -f deploy/1.8+/
</code></pre>
<p>Please advise, thanks.</p>
| <p>I solved this issue as follows:</p>
<p>Download metrics-server:</p>
<pre><code>wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml
</code></pre>
<p>Remove metrics server:</p>
<pre><code>kubectl delete -f components.yaml
</code></pre>
<p>Edit the downloaded file and add the <em>- --kubelet-insecure-tls</em> flag:</p>
<pre><code>labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
</code></pre>
<p>Create the service once again:</p>
<pre><code>kubectl apply -f components.yaml
</code></pre>
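<p>Afterwards you can confirm the APIService is healthy again (these commands assume a working cluster context; it may take a minute for metrics to appear):</p>

```
kubectl get apiservices | grep metrics
# AVAILABLE should now show True instead of False (MissingEndpoints)
kubectl top nodes
```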
|
<p>We have load balancing, discovery, etc in the Spring cloud. But the same load balancing, discovery is available in Kubernetes as well. So just wanted to know when we should go with Spring cloud (load balancing or discovery) and when to use Kubernetes</p>
| <p>It depends on your use-case. There can be situations where you need to directly use <code>Eureka server registry</code> and <code>Eureka client discovery</code> offered by <code>Spring Cloud Netflix</code>. <code>Ribbon</code> is the client side load balancer provided by <code>Spring Cloud Netflix</code>.</p>
<p>In my experience, <code>Eureka</code> can be used in almost any environment, whether that is your local data centre or cloud infrastructure. However, depending on the deployment environment, there are many alternatives for achieving the server registry mechanism, and sometimes those alternatives are the better solution. I will give you an example below...</p>
<p><strong>If you host your application in your local server (Local data centre)</strong></p>
<p>Now in this scenario you can use <code>Eureka</code> and continue your server registry and discovery mechanism. (That is not the only way. I mentioned <code>Eureka</code> for this scenario because it would be a good use case for it)</p>
<p><strong>If you host your application in AWS infrastructure</strong></p>
<p>The <code>AWS</code> environment gives you lots of benefits and services, so you can drop the burden of maintaining and implementing <code>Eureka</code>. You can achieve the same behaviour simply with <code>AWS load balancers</code> and <code>AWS target groups</code>, and go even further by adding <code>AWS auto scaling groups</code>. Within AWS itself there are many other ways to achieve this as well.</p>
<p>Long story short: for your scenario, keep using the capabilities <code>Kubernetes</code> already provides unless you have a specific reason to use <code>Eureka</code> and are willing to put in the extra effort to implement it. Select what suits you best depending on time, effort, maintainability, performance, etc.</p>
<p>Hope this helps you get an idea. Happy coding!</p>
|
<p>The DevSpace-Django tutorial in question is the following:</p>
<p><a href="https://devspace.cloud/blog/2019/10/18/deploy-django-to-kubernetes" rel="nofollow noreferrer">https://devspace.cloud/blog/2019/10/18/deploy-django-to-kubernetes</a></p>
<p>Trying something completely barebones to understand how <code>devspace</code> works to decide if I want to make the switch from <code>skaffold</code>. There are a number of things in the above tutorial that seem to be no longer accurate or have changed in more recent versions of <code>devpsace</code>.</p>
<p>At any rate, I'm not able to connect to the <a href="https://docs.djangoproject.com/en/3.2/intro/tutorial01/#the-development-server" rel="nofollow noreferrer">Django tutorial app</a> when I navigate to <code>localhost:8000</code>. It just says "Can't connect to the server."</p>
<p>This is the output I get when I <code>devspace dev</code>:</p>
<pre><code> devspace dev
[warn] There is a newer version of DevSpace: v5.15.0. Run `devspace upgrade` to upgrade to the newest version.
[info] Using namespace 'mysite'
[info] Using kube context 'mysite'
[info] Execute 'helm upgrade mysite component-chart --namespace mysite --values /var/folders/tl/wqf19mws155_7bkqyw401z_w0000gn/T/325113665 --install --repo https://charts.devspace.sh --repository-config='' --version 0.8.0 --kube-context mysite'
[info] Execute 'helm list --namespace mysite --output json --kube-context mysite'
[done] √ Deployed helm chart (Release revision: 5)
[done] √ Successfully deployed mysite with helm
[done] √ Port forwarding started on 8000:8000 (mysite/mysite-7b856bb78b-2ztpf-devspace)
#########################################################
[info] DevSpace UI available at: http://localhost:8090
#########################################################
[0:sync] Waiting for pods...
[0:sync] Starting sync...
[0:sync] Sync started on /Users/cjones/Projects/Apps/Test/mysite <-> . (Pod: mysite/mysite-7b856bb78b-2ztpf-devspace)
[0:sync] Waiting for initial sync to complete
[info] Opening 'http://localhost:8000' as soon as application will be started (timeout: 4m0s)
[info] Opening shell to pod:container mysite-7b856bb78b-2ztpf-devspace:container-0
Installing Python Dependencies
Requirement already satisfied: asgiref==3.4.1 in /usr/local/lib/python3.9/site-packages (from -r requirements.txt (line 1)) (3.4.1)
Requirement already satisfied: Django==3.2.7 in /usr/local/lib/python3.9/site-packages (from -r requirements.txt (line 2)) (3.2.7)
Requirement already satisfied: pytz==2021.1 in /usr/local/lib/python3.9/site-packages (from -r requirements.txt (line 3)) (2021.1)
Requirement already satisfied: sqlparse==0.4.1 in /usr/local/lib/python3.9/site-packages (from -r requirements.txt (line 4)) (0.4.1)
WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 21.1.2; however, version 21.2.4 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
____ ____
| _ \ _____ __/ ___| _ __ __ _ ___ ___
| | | |/ _ \ \ / /\___ \| '_ \ / _` |/ __/ _ \
| |_| | __/\ V / ___) | |_) | (_| | (_| __/
|____/ \___| \_/ |____/| .__/ \__,_|\___\___|
|_|
Welcome to your development container!
This is how you can work with it:
- Run `python main.py` to build the application
- Files will be synchronized between your local machine and this container
- Some ports will be forwarded, so you can access this container on your local machine via localhost:
Image ImageSelector LabelSelector Ports (Local:Remote)
username/app 8000:8000
root@mysite-7b856bb78b-2ztpf-devspace:/app#
</code></pre>
<p>This is the <code>DevSpace.yaml</code>:</p>
<pre><code>version: v1beta10
# `vars` specifies variables which may be used as ${VAR_NAME} in devspace.yaml
vars:
- name: IMAGE
value: username/app
# `deployments` tells DevSpace how to deploy this project
deployments:
- name: mysite
# This deployment uses `helm` but you can also define `kubectl` deployments or kustomizations
helm:
# We are deploying the so-called Component Chart: https://devspace.sh/component-chart/docs
componentChart: true
# Under `values` we can define the values for this Helm chart used during `helm install/upgrade`
# You may also use `valuesFiles` to load values from files, e.g. valuesFiles: ["values.yaml"]
values:
containers:
- image: ${IMAGE} # Use the value of our `${IMAGE}` variable here (see vars above)
service:
ports:
- port: 8000
# `dev` only applies when you run `devspace dev`
dev:
# `dev.ports` specifies all ports that should be forwarded while `devspace dev` is running
# Port-forwarding lets you access your application via localhost on your local machine
ports:
- imageSelector: ${IMAGE} # Select the Pod that runs our `${IMAGE}`
forward:
- port: 8000
# `dev.open` tells DevSpace to open certain URLs as soon as they return HTTP status 200
# Since we configured port-forwarding, we can use a localhost address here to access our application
open:
- url: http://localhost:8000
# `dev.sync` configures a file sync between our Pods in k8s and your local project files
sync:
- imageSelector: ${IMAGE} # Select the Pod that runs our `${IMAGE}`
excludePaths:
- .git/
uploadExcludePaths:
- Dockerfile
# `dev.terminal` tells DevSpace to open a terminal as a last step during `devspace dev`
terminal:
imageSelector: ${IMAGE} # Select the Pod that runs our `${IMAGE}`
# With this optional `command` we can tell DevSpace to run a script when opening the terminal
# This is often useful to display help info for new users or perform initial tasks (e.g. installing dependencies)
# DevSpace has generated an example ./devspace_start.sh file in your local project - Feel free to customize it!
command:
- ./devspace_start.sh
# Since our Helm charts and manifests deployments are often optimized for production,
# DevSpace let's you swap out Pods dynamically to get a better dev environment
replacePods:
- imageSelector: ${IMAGE} # Select the Pod that runs our `${IMAGE}`
# Since the `${IMAGE}` used to start our main application pod may be distroless or not have any dev tooling, let's replace it with a dev-optimized image
# DevSpace provides a sample image here but you can use any image for your specific needs
replaceImage: loftsh/python:latest
# Besides replacing the container image, let's also apply some patches to the `spec` of our Pod
# We are overwriting `command` + `args` for the first container in our selected Pod, so it starts with `sleep 9999999`
# Using `sleep 9999999` as PID 1 (instead of the regular ENTRYPOINT), allows you to start the application manually
patches:
- op: replace
path: spec.containers[0].command
value:
- sleep
- op: replace
path: spec.containers[0].args
value:
- "9999999"
- op: remove
path: spec.containers[0].securityContext
# `profiles` lets you modify the config above for different environments (e.g. dev vs production)
profiles:
# This profile is called `production` and you can use it for example using: devspace deploy -p production
# We generally recommend to use the base config without any profiles as optimized for development (e.g. image build+push is disabled)
- name: production
# This profile adds our image to the config so that DevSpace will build, tag and push our image before the deployment
merge:
images:
app:
image: ${IMAGE} # Use the value of our `${IMAGE}` variable here (see vars above)
dockerfile: ./Dockerfile
</code></pre>
<p>This is the <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.8-slim-buster
# Create project directory (workdir)
WORKDIR /app
# Add requirements.txt to WORKDIR and install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Add source code files to WORKDIR
ADD . .
# Application port (optional)
EXPOSE 8000
# Container start command
# It is also possible to override this in devspace.yaml via images.*.cmd
CMD ["manage.py", "runserver", "8000"]
</code></pre>
<p>Both DevSpace UI and <code>kubectl get pods</code> show the Pod is running.</p>
<p>This is deploying to <code>minikube</code> using the Docker driver.</p>
<p>Nothing else is running on port <code>8000</code> and I've also tried <code>8080</code> by changing every occurrence of <code>8000</code> to <code>8080</code>.</p>
<p>The Django project is only what you get when you run: <code>django-admin startproject mysite</code>. Nothing fancy.</p>
<p>It isn't clear to me what I'm doing wrong here. Suggestions?</p>
<p>EDIT:</p>
<p>Also, meant to point out on M1 Mac. Not sure if it could be related but worth pointing out.</p>
<p>Tried with <code>docker-desktop</code> cluster and <code>minikube</code> thinking it could be a potential issue with <code>minikube</code>. Same issue.</p>
<p>Also, tried the Quick Start guide here using a DevSpace provided project with the same results:</p>
<p><a href="https://devspace.sh/cli/docs/quickstart" rel="nofollow noreferrer">https://devspace.sh/cli/docs/quickstart</a></p>
| <p>A rather important step is left out of the <a href="https://devspace.cloud/blog/2019/10/18/deploy-django-to-kubernetes" rel="nofollow noreferrer">Django tutorial</a>: after running <code>devspace dev</code>, you need to run the following in the <code>devspace</code> terminal that comes up:</p>
<pre><code>./manage.py runserver
</code></pre>
<p>This will launch the application in the browser.</p>
|
<p>I am running my script inside a Kubernetes container and a zip file is downloaded.</p>
<ol>
<li><p>I want to verify how can I access the content inside a zip file when I am triggering tests from a windows machine and the tests are running inside the Kubernetes container through automation.</p>
</li>
<li><p>How can I do mapping of the Kubernetes downloads folder so that I can access it as a windows folder on my local machine even after the testcase execution is completed.</p>
</li>
</ol>
| <blockquote>
<p>I want to verify how can I access the content inside a zip file when I
am triggering tests from a windows machine and the tests are running
inside the Kubernetes container through automation.</p>
</blockquote>
<p>If possible, you can write an API that uploads the zip file to object storage (such as AWS S3 or any other storage) and access the file from there, though this may cost extra if the zip files are large.</p>
<p>Another option is to save the data in a persistent volume, but if you are running a managed service (GKE, EKS) you won't be able to access that data directly from your local PC.</p>
<p>If you are running locally on a Windows PC, you can mount a folder into the Pod and access the created zip from your local system.</p>
<blockquote>
<p>How can I do mapping of the Kubernetes downloads folder so that I can
access it as a windows folder on my local machine even after the
testcase execution is completed.</p>
</blockquote>
<p>You can use the <strong><a href="https://stackoverflow.com/questions/45090337/kubernetes-on-windows-persistent-volume">hostPath</a></strong> volume type in your Pod spec to mount a file or directory from the host node's filesystem.</p>
<p>POD example :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hostpath-volume-pod
spec:
containers:
- name: my-hostpath-volume-pod
image: <Image name>
volumeMounts:
- name: foo
mountPath: "C:\\etc\\foo"
readOnly: true
volumes:
- name: foo
hostPath:
path: "C:\\etc\\foo"
</code></pre>
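<p>For a one-off copy (with or without volumes), <code>kubectl cp</code> can also pull a file from a running container to your local machine while the Pod is still alive (the pod name and paths below are placeholders):</p>

```
kubectl cp default/my-test-pod:/app/downloads/result.zip ./result.zip
```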
|
<p>I want to patch an existing secret resource within Kubernetes. The object is called <code>centos-secretstorage</code> within the <code>default</code> namespace. I want to add a simple label of <code>test: empty</code>. However, this fails when the secret object <code>centos-secretstorage</code> exists, but it doesn't have any existing labels. If I manually label the secret with something else beforehand via <code>kubectl label centos-secretstorage hello=world</code>, and rerun my golang code. It is able to add the <code>test: empty</code> label successfully.</p>
<p>However, I want to have this be able to add a label regardless if existing labels exist or not.</p>
<pre><code>type secret struct {
namespace string
name string
}
func main() {
k8sClient := k8CientInit()
vaultSecret := secret{
namespace: "default",
name: "centos-secretstorage",
}
vaultSecret.patchSecret(k8sClient)
}
type patchStringValue struct {
Op string `json:"op"`
Path string `json:"path"`
Value string `json:"value"`
}
func (p *secret) patchSecret(k8sClient *kubernetes.Clientset) {
emptyPayload := []patchStringValue{{
Op: "add",
Path: "/metadata/labels/test",
Value: "empty",
}}
emptyPayloadBytes, _ := json.Marshal(emptyPayload)
fmt.Println(string(emptyPayloadBytes))
emptyres, emptyerr := k8sClient.CoreV1().Secrets(p.namespace).Patch(p.name, types.JSONPatchType, emptyPayloadBytes)
if emptyerr != nil {
log.Fatal(emptyerr.Error())
}
fmt.Println(emptyres.Labels)
}
</code></pre>
<p>Error: <code> the server rejected our request due to an error in our request</code></p>
| <p>The problem is that the <code>add</code> operation in the JSON patch strategy requires the path to point to an existing map, while the object you are patching does not have this map at all. This is why when <em>any</em> label exists, the patch succeeds. We can work around this by using a different patch strategy. I think the <code>merge</code> strategy should work well.</p>
<p>I was able to reproduce this (on a namespace, but the object doesn't matter) using <code>kubectl</code> (which is generally useful when debugging the Kubernetes API):</p>
<ul>
<li><code>kubectl patch ns a --type='json' -p='[{"op": "merge", "path": "/metadata/labels/test", "value":"empty"}]'</code> -> fails</li>
<li><code>kubectl patch ns a --type='merge' -p='{"metadata": {"labels": {"test": "empty"}}}'</code> -> succeeds</li>
</ul>
<p>Using Golang client-go it would look something like this (didn't actually compile / run this):</p>
<pre><code> payload := `{"metadata": {"labels": {"test": "empty"}}}`
emptyres, emptyerr := k8sClient.CoreV1().Secrets(p.namespace).Patch(p.name, types.MergePatchType, []byte(payload))
</code></pre>
<p>You can make the creation of the <code>payload</code> JSON nicer using structs, as you did with <code>patchStringValue</code>.</p>
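<p>For example, a small standalone sketch of building the same merge-patch body with structs (the <code>labelPatch</code> type and <code>buildLabelPatch</code> helper are illustrative names, not client-go API; you would pass the resulting bytes to <code>Patch(p.name, types.MergePatchType, payload)</code> as above):</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// labelPatch mirrors the {"metadata": {"labels": {...}}} shape of a merge patch.
type labelPatch struct {
	Metadata struct {
		Labels map[string]string `json:"labels"`
	} `json:"metadata"`
}

// buildLabelPatch returns the JSON merge-patch body that adds (or overwrites)
// a single label, whether or not the object already has a labels map.
func buildLabelPatch(key, value string) ([]byte, error) {
	var p labelPatch
	p.Metadata.Labels = map[string]string{key: value}
	return json.Marshal(&p)
}

func main() {
	payload, err := buildLabelPatch("test", "empty")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload))
	// prints: {"metadata":{"labels":{"test":"empty"}}}
	// then: k8sClient.CoreV1().Secrets(namespace).Patch(name, types.MergePatchType, payload)
}
```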
<p>More info on patch strategies can be found here:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/</a></li>
<li><a href="https://erosb.github.io/post/json-patch-vs-merge-patch/" rel="nofollow noreferrer">https://erosb.github.io/post/json-patch-vs-merge-patch/</a></li>
</ul>
|
<pre><code>initContainers:
- name: git-clone-openg2p
image: bitnami/odoo
command: ["/bin/sh","-c"]
args: ['apt-get git && git clone https://github.com/repo.git && git clone https://github.com/repo.git /bitnami/odoo']
volumeMounts:
- name: odoo-data
mountPath: /bitnami/odoo
</code></pre>
<p>I need to add add-ons by cloning git repository into <code>/bitnami/odoo</code>.
This is my init container configuration in yaml file. When I <code>helm install</code> and create pod it says "Invalid operation git" in the logs of pod.</p>
| <p>As far as I know, there is no <code>apt-get git</code> command; you probably want:</p>
<pre><code>apt-get update && apt-get install -y git
</code></pre>
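<p>With that fixed, the init container could look like this (a sketch keeping the repository URLs from the question; note that the <code>bitnami/odoo</code> image runs as a non-root user by default, which may prevent <code>apt-get</code> from installing packages):</p>

```yaml
initContainers:
  - name: git-clone-openg2p
    image: bitnami/odoo
    command: ["/bin/sh", "-c"]
    args: ['apt-get update && apt-get install -y git && git clone https://github.com/repo.git /bitnami/odoo']
    volumeMounts:
      - name: odoo-data
        mountPath: /bitnami/odoo
```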
|
<p>I'm trying to set up a database migration job for dotnet entity framework.
It seems that I cannot connect to mysql database service from kubernetes job, but I can connect from my desktop when I forward ports.</p>
<p>This is my working MySql deployment + service:</p>
<pre><code>kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:8.0
name: mysql
env:
- name: MYSQL_DATABASE
value: myDatabase
- name: MYSQL_USER
value: myUser
- name: MYSQL_PASSWORD
value: myPassword
- name: MYSQL_ROOT_PASSWORD
value: myRootPassword
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
<p>I'm not pasting persistent volume claims for brevity.</p>
<p>This works as I'm able to connect to mysql from my desktop after i do:
<code>kubectl port-forward deployment/mysql 3306:3306</code> and connect through MySQL Workbench.</p>
<p>What I cannot do is to run migrations from the job that uses Dockerfile which has C# database project with dbContext in order to run db migration.</p>
<p>The job:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: candles-downloader-db-migration
spec:
backoffLimit: 0
template:
spec:
containers:
- name: candles-service-migration
image: migration
imagePullPolicy: Never
env:
- name: CONNECTION_STRING
value: server=mysql.default.svc.cluster.local:3306;uid=myUser;pwd=myPassword;database=myDatabase
restartPolicy: Never
</code></pre>
<p>As you can see I'm passing the connection string via environment variable CONNECTION_STRING</p>
<p>Then there's Dockerfile for the job:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
RUN dotnet tool install --global dotnet-ef --version 5.0.9
ENV PATH $PATH:/root/.dotnet/tools
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj .
RUN dotnet restore
# Copy everything else and build
COPY ./ .
ENTRYPOINT dotnet ef database update -v --connection $CONNECTION_STRING
</code></pre>
<p>I have the image built on my minikube cluster.
When the job starts, the dockerfile container gets the connection string. For debugging I used -v flag for verbose output.</p>
<p>Here is the output from the failed job (unimportant parts truncated for brevity): <code>kubectl logs candles-downloader-db-migration-gqndm</code></p>
<pre><code>Finding IDesignTimeServices implementations in assembly 'Infrastructure.Persistence.Sql'...
No design-time services were found.
Migrating using database 'myDatabase' on server 'mysql.default.svc.cluster.local:3306'.
'CandlesServiceDbContext' disposed.
System.InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseMySql' call.
---> MySql.Data.MySqlClient.MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts.
</code></pre>
<p>I suspect the problems with connection string.
I used <code>server=mysql.default.svc.cluster.local:3306;uid=myUser;pwd=myPassword;database=myDatabase</code>
But I've tried with different server values as well:</p>
<ul>
<li><code>mysql.default.svc.cluster.local:3306</code></li>
<li><code>mysql.default.cluster.local:3306</code></li>
<li><code>mysql.svc.cluster.local:3306</code></li>
<li><code>mysql:3306</code></li>
<li>even my local cluster ip for the mysql service <code>10.97.213.180:3306</code></li>
</ul>
<p>None of them works. I always get this error in the job logs:</p>
<blockquote>
<p>Unable to connect to any of the specified MySQL hosts.</p>
</blockquote>
<p>Should my job / container on different pod see the mysql server on other pod through the kubernetes service? I thought so, but it looks like it's "invisible".</p>
| <p>I figured it out after reading the Kubernetes documentation:
<a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a></p>
<p>I've installed DNS utils with the following command:
<code>kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml</code></p>
<p>Then I was able to test whether my 'mysql' service was discoverable by name:
<code>kubectl exec -i -t dnsutils -- nslookup mysql</code></p>
<p>And it was. The output was:</p>
<pre><code>Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 10.97.213.180
</code></pre>
<p>But after specifying name with port, it failed:
<code>kubectl exec -i -t dnsutils -- nslookup mysql:3306</code></p>
<pre><code>Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find mysql:3306: NXDOMAIN
command terminated with exit code 1
</code></pre>
<p>So, as expected, the error was in the connection string.
I had to change from</p>
<pre><code>server=mysql:3306; ...
</code></pre>
<p>to</p>
<pre><code>server=mysql;port=3306; ...
</code></pre>
<p>and my migrations ran in the job.</p>
|
<p>So I've been trying to fix this for days now and I'm beyond stuck.</p>
<p>My app is running, and I can access the site when I go to the default url (example.com). I can refresh on this url without issues, and I can navigate through the pages being rendered through react router as long as I don't refresh on any other page. (e.g.) refreshing on example.com/path1 doesn't work, I get a 404 error.</p>
<p>My current ingress file looks like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myApp-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- my.app.com
secretName: myApp-tls
rules:
- host: "my.app.com"
http:
paths:
- pathType: Prefix
path: /.*
backend:
service:
name: myApp
port:
number: 80
</code></pre>
<p>I've added many of the most common replies to issues such as this, but it only makes matters worse, such as white screen with "Unexpected token '<'" errors for all javascript files it tries to load, and none of the pages loading at all.</p>
<p>For example, I've tried:</p>
<pre><code> nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/configuration-snippet: |
try_files $uri $uri/ /index.html;
</code></pre>
<p>I've tried adding additional paths to the controller for each route available in the app, I've tried setting the path to just "/" or "/(.*)", nothing is working.</p>
<p>Any help would be greatly appreciated! thank you!</p>
<p>EDIT:</p>
<pre><code># latest active node image
FROM node:14-alpine
# Create app directory
RUN mkdir /app
RUN mkdir -p /app/node_modules && chown -R node:node /app
WORKDIR /app
COPY package.json /app
COPY tsconfig.json /app
COPY webpack.config.js /app
COPY .env.prod .env
ADD src /app/src
ADD dist /app/dist
RUN mkdir -p /app/dist/js && chown -R node:node /app/dist/js
ADD server /app/server
ADD assets /app/assets
ADD config /app/config
ARG NPM_TOKEN
COPY .npmrc_docker .npmrc
RUN npm cache verify
RUN npm install
RUN npm run build:prod
RUN rm -f .npmrc
# Expose necessary port
EXPOSE 3000
# Compile typescript and start app
CMD ["cross-env", "NODE_ENV=production", "PORT=3000", "ts-node", "server/server"]
</code></pre>
<p>I've also tried doing it with <code>CMD ["npx", "http-server", "./dist", "-p", "3000"]</code> since the server part is only doing the following:</p>
<pre><code> this.app.use(express.static(path.resolve(__dirname, "../dist")));
this.app.get("*", (request: express.Request, response: express.Response) => {
response.sendFile(path.resolve(__dirname, "../dist/index.html"));
});
</code></pre>
| <p>Your <code>path: /.*</code> has no capture groups, so <code>/$1</code> and <code>/$2</code> are undefined.</p>
<p>Try the following:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myApp-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- my.app.com
secretName: myApp-tls
rules:
- host: "my.app.com"
http:
paths:
- pathType: Prefix
path: /(.*)
backend:
service:
name: myApp
port:
number: 80
</code></pre>
|
<p>I am trying to create a cli tool for kubernetes. I need to generate Bearer Token for communicating with kubernetes API. How can I generate the token from Kubeconfig File? I do not want to use external library or kubectl.<br><br>
Here is example Kubeconfig File:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01USXhNREU1TVRReU0xb1hEVE13TVRJd09ERTVNVFF5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTUE4CmhEcDBvRVUzNTFFTEVPTzZxd3dUQkZ2U2ZxWWlGOE0yR0VZMXNLRFZ0MUNyL3czOS83QkhCYi9NOE5XaW9vdlQKZ2hsZlN2TXhsaTBRUVRSQmd5NHp2ZkNveXdBWkg0dWphYTBIcW43R2tkbUdVUC94RlZoWEIveGhmdHY5RUFBNwpMSW1CT3dZVHJ6ajRtS0JxZ3RTenhmVm5hN2J2U2oxV203bElYaTNaSkZzQmloSFlwaXIwdFZEelMzSGtEK2d0Cno1RkhOU0dnSS9MTlczOWloTU1RQ0g0ZFhtQVVueGFmdFdwUlRQOXFvSHJDWTZxVlNlbEVBYm40UWZVZ2ZUaDEKMUNhdW01bllOUjlDZ3lPOStNY0hXMTdYV0c4NGdGV3p6VUxPczVXbUo0VVY4RjdpdkVhMVJlM2Q3VkpKUEF6VwpCME4rWFFmcXg5UTArRWlXWklVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZBV0p0Y2RLYjRRQWU2ekw4NzdvN3FQNVVWNWZNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCYWt3bE1LL2VmWUpyNlVlWEpkenBURmRaS0lYbWFEaWxhZ3ZNOGNkci9nVjJlWVlEdgpRY3FvcUwvNS95U3Y1T2ZpR0MrU25nUXhZMHp0a0VVQm04N1NOR1dYLzd1VlUwbytVV2tzZERLR2JhRmxIVE9PCmFBR3dndEZ4T1YzeTF1WnZJVm8vbW12WTNIMTBSd29uUE8yMU5HMEtRWkRFSStjRXFFb1JoeDFtaERCeGVSMUgKZzdmblBJWTFUczhWM2w0SFpGZ015anpwVWtHeUNjMVYxTDk5Vk55UHJISEg0L1FibVM5UWdkNUNWZXNlRm9HaApOVkQ4ZHRjUmpWM2tGYVVJelJ6a3lRMG1FMXk1RXRXMWVZZnF4QnAxNUN3NnlSenNWMzcrdlNab0pSS1FoNGw4CjB1b084cFhCMGQ4V1hMNml0UWp2ZjJOQnBnOU1nY0Q2QzEvZgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.1.18:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lJYldUcHpDV25zTVl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURFeU1UQXhPVEUwTWpOYUZ3MHlNVEV5TVRBeE9URTBNalZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBGT09JcnZiTGd1SWJmVXUKd29BaG5SaktEQVFCdkp3TlliSWZkSlNGSFBhY1ljbmVUcUVSVXFZeEs4azFHRytoR0FDTlFPb2VNV3Q1anNjRwpuN0FFdHhscUJQUzNQMzBpMVhLSmZnY2Q1OXBxaG1kOVFIdFNOVTlUTVlaM2dtY0x4RGl1cXZFRGI0Q042UTl6CkI3Yk5iUDE4Y3pZdHVwbUJrY2plMFF1ZEd2dktHcWhaY1NkVFZMT3ErcTE0akM4TTM5UmgzdDk1ZEM2aWRYaUsKbWE3WGs5YnJtalJnWDZRVUJJc0xwTnkvc3dJaUFiUTlXSm1YL2VkdHhYTGpENllNK1JzQ0JkbGc5MEhhcURqdgpKSlcwQ2g4cDJkV1ZwalQrWjBMd2ZnUENBN1YzS1o4YWdldHhwQ0xQcmxlOTdnRStVM1BKbXJVY0lBaVJlbzFoCmsvOXVqUUlEQVFBQm8wZ3dSakFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0h3WURWUjBqQkJnd0ZvQVVCWW0xeDBwdmhBQjdyTXZ6dnVqdW8vbFJYbDh3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFDeXVKazdjdVppdzhmQW5teUdSa0trdFAzUE5LUnBCdDVnUVdjUzJuRFUrTmpIMjh1MmpGUDQ5Cm1xbjY1SGpmQU9iOVREUUlRcUtZaWdjYTViOXFYRXlDWHZEN1k1SXJ4RmN3VnEvekdZenFYWjVkR0srUnlBUlQKdm0rQzNaTDV0N2hJc1RIYWJ0SkhTYzhBeFFPWEdTd1h0YkJvdHczd2ZuSXB0alY1SG1VYjNmeG9KQUU4S1hpTgpHcXZ5alhpZHUwc1RtckszOHM5ZjZzTFdyN1lOQTlKNEh4ditkNk15ZFpSWDhjS3VRaFQzNDFRcTVEVnRCT1BoCjBpb1Mwa0JEUDF1UWlIK0tuUE9MUmtnYXAyeDhjMkZzcFVEY1hJQlBHUDBPR1VGNWFMNnhIa2NsZ0Q5eHFkU0cKMVlGVjJUamtjNHN2U1hMSkt1cmU1S2IrODcyQlZWWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMEZPT0lydmJMZ3VJYmZVdXdvQWhuUmpLREFRQnZKd05ZYklmZEpTRkhQYWNZY25lClRxRVJVcVl4SzhrMUdHK2hHQUNOUU9vZU1XdDVqc2NHbjdBRXR4bHFCUFMzUDMwaTFYS0pmZ2NkNTlwcWhtZDkKUUh0U05VOVRNWVozZ21jTHhEaXVxdkVEYjRDTjZROXpCN2JOYlAxOGN6WXR1cG1Ca2NqZTBRdWRHdnZLR3FoWgpjU2RUVkxPcStxMTRqQzhNMzlSaDN0OTVkQzZpZFhpS21hN1hrOWJybWpSZ1g2UVVCSXNMcE55L3N3SWlBYlE5CldKbVgvZWR0eFhMakQ2WU0rUnNDQmRsZzkwSGFxRGp2SkpXMENoOHAyZFdWcGpUK1owTHdmZ1BDQTdWM0taOGEKZ2V0eHBDTFBybGU5N2dFK1UzUEptclVjSUFpUmVvMWhrLzl1alFJREFRQUJBb0lCQUEvclVxRTAyYnJiQnNIZwpTb0p5YUI4cEZjZDFSdXl5d0JNSEdZQS9HU3p0YTJYTmx6OUs3NWZ4T3pDdFgzRk9sbkRQR2Z3cjU4Sy9BN3IxCldudzVaeUxXdmxOQ24vNHFBYzl0d1RQd04walFWL09OVlBUb2Q0KzdVQkFveGxrZ3ByV0gzMUVRdWNKN2dGeWUKNFp0bFRLMVhjWHNjV01JNW1MMGJMR3V0QjRSWU5meHAwZ1AxekJ6Z2FLYjVGK2xVcFdHZ2w1dHNHay9ncm9uSwpUVkVCQmtBT0lyU0pFemc5YUJ2emJMS0h3TnZlL1QrVEdJTGVZalpRYVkxL1lLN2JpbFVkaFlQOGI2OWhxbFZnClVxc0hpRjVXNzYzenMrdXl5azNtUU1yblJKQ2ZUWDNTRWhOVm1BdTl0TXh2eE1BRk9QT1lLb3FPb25LNHdrZWwKU21HUHBnRUNnWUVBNjJhMjdWdlgrMVRlellIWmZWSW8rSi8welVmZERqZ0MvWG1zTlBWdkhXdXlkOUVRQ1JXKwpOS1FpOGdMWmNUSEpWU3RidkpRVENSQUdCL0wzM09SUTI5Tm1KNnVVUWNNR0pBSzhwckdLKytwTXF3NHRPdzMvCkhDblVQZGVaSGFVVVFnODVJeWMrbmg5QnFQWndXclk3REZEbENUOXI5cVZJN1RvS0ptd2RjdlVDZ1lFQTRvNVUKZDZXdFpjUk5vV041UUorZVJkSDRkb2daQnRjQ0ExTGNWRDdxUzYrd0s2eTdRU05zem9wWTc1MnlLWU91N2FCWQo2RlhDQVRHaG0ranN6ZE14allrV2ROdGxwbDZ4ejZRZmN6ZWgydjVUQVdpRkZyMTlqU1RkLzNrRlFDNytpeUQyCnZRSHpacXZZSUhtQ3VleldHRFJrVVB2dzk1dTFranphcEZCRHZqa0NnWUJXZUpLMXVra3FiOUN3V1FTVmZuckMKYWErNVFLNjVMR1ljeW5jeHRQNnVKZ09XODlzYUd6eVZoYjI0Zk1kM1J6eVg1cWQ2TEVLWno2TUhoSDc4UzNwUQpaZVZlcVM1NndiTWR3MHVkU0JhdjF5OTJubXlMQnVjeFowUXB1MnJwY3R4d0w3dGphR1VlSElrNEVkN1AwNlQ1Ckx6WVRJWkw5TlZZR25vMWY4OU1WaVFLQmdRQ2RKQjNnYzNGSEloYTZkM1cxNWtEd3FzZ001eTk4dUF0MFpMZmcKVTFkTnNnbWU4WXRjamdhOVorWnlKVTViVHpRNUxEd2V3c1R5OFFyb1NuSmQvVHZrc1E1N2RXWVhOSjFlcWJjSwp3cTZvYURrSXhBZDBFM0VQUW1BZEFFTXRGcXVGc3hLUlhOWUlBKysvN3FoRzc4ZzhON0xSSFQ4eGI3Wk1QWnRsCjF5cDF1UUtCZ0VGemtmR3VzeGxJU2xOY1V
DUGFTUUx6bTZqYmdjdUlXcjZaN053R01pVHM3b2x5TnQrdnpiRnMKbnk5d1pnbHlsS0M2NjcreXpIa0tkbnZBdWRuS290bDhybzRCOVhjUHhGWDJ5NnpwZWIxWS91STZpVzl4Y2NSNQozbUlVS2QrOGdMczRrTUttL2dXYjZxTHdPZ3pjQWJIbTV6SVhBMXQ5TUJWYlE2ZHEvMlZDCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
</code></pre>
| <blockquote>
<p>I need to generate Bearer Token for communicating with kubernetes API</p>
</blockquote>
<p>You cannot "generate" these tokens yourself. They are <em>issued</em> by the control plane and signed with a <em>private key</em> that only the control plane holds; it would be a security hole if you could generate them on the client side. You can, however, <em>request</em> a token for a ServiceAccount from the API server (e.g. <code>kubectl create token <serviceaccount></code> on kubectl 1.24+), but the minting and signing always happen server-side.</p>
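<p>To see why: a service account token is just a JWT whose payload anyone can decode, but whose signature only the holder of the control plane's signing key can produce. A minimal sketch (the token below is constructed locally, purely for illustration):</p>

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload of a JWT.

    Anyone can read the claims; only the holder of the signing key,
    i.e. the control plane, can produce a valid signature over them.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake header.payload.signature token for demonstration only.
header = base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).rstrip(b"=")
claims = {"iss": "kubernetes/serviceaccount",
          "sub": "system:serviceaccount:default:default"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")
token = b".".join([header, payload, b"fake-signature"]).decode()

print(decode_jwt_claims(token)["sub"])
```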
|
<p>I'm having some issues with permissions and I'm really hoping someone can point me to where I'm going wrong...</p>
<p>I've got a Kube cluster set up and functioning (for example, I'm running a mysql pod and connecting to it without issue), and I've been trying to get a Postgresql pod running with TLS support. The service that will be connecting to this pod requires TLS, so going without TLS is unfortunately not an option.</p>
<p>Here's where things get a bit messy, everything functions - except for the fact that for some reason Postgres init can't seem to read my certificate files that are stored in Kube secrets. Seems like whatever options I choose, Postgres init returns the following:</p>
<pre><code>$ kubectl logs data-server-97469df55-8wd6q
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgres ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
sh: locale: not found
2021-09-11 20:03:54.323 UTC [32] WARNING: no usable system locales were found
performing post-bootstrap initialization ... ok
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgres -l logfile start
waiting for server to start....2021-09-11 20:04:01.882 GMT [37] FATAL: could not load server certificate file "/var/lib/postgres-secrets/server.crt": Permission denied
2021-09-11 20:04:01.882 GMT [37] LOG: database system is shut down
pg_ctl: could not start server
Examine the log output.
stopped waiting
</code></pre>
<p>I HIGHLY suspect my issue is the very first line, but I'm not sure how to go about resolving this in Kubernetes. How do I tell Kubernetes that I need to mount my secrets so that user 'postgres' can read them (being lazy and doing a <code>chmod 0777</code> does not work)?</p>
<p>These are my configs:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: data-server
labels:
app: data-server
spec:
ports:
- name: data-server
targetPort: 5432
protocol: TCP
port: 5432
selector:
app: data-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: data-server
spec:
selector:
matchLabels:
app: data-server
strategy:
type: Recreate
template:
metadata:
labels:
app: data-server
spec:
serviceAccountName: default
containers:
- name: postgres
image: postgres:alpine
imagePullPolicy: IfNotPresent
args:
- -c
- hba_file=/var/lib/postgres-config/pg_hba.conf
- -c
- config_file=/var/lib/postgres-config/postgresql.conf
env:
- name: PGDATA
value: /var/lib/postgres
- name: POSTGRES_PASSWORD_FILE
value: /var/lib/postgres-secrets/postgres-pwd.txt
ports:
- name: data-server
containerPort: 5432
hostPort: 5432
protocol: TCP
volumeMounts:
- name: postgres-config
mountPath: /var/lib/postgres-config
- name: postgres-storage
mountPath: /var/lib/postgres-data
- name: postgres-secrets
mountPath: /var/lib/postgres-secrets
volumes:
- name: postgres-config
configMap:
name: data-server
- name: postgres-storage
persistentVolumeClaim:
claimName: gluster-claim
- name: postgres-secrets
secret:
secretName: data-server
defaultMode: 0640
</code></pre>
<p>Secrets:</p>
<pre><code>$ kubectl get secret
NAME TYPE DATA AGE
data-server Opaque 5 131m
default-token-nq7pv kubernetes.io/service-account-token 3 5d5h
</code></pre>
<p>PV / PVC</p>
<pre><code>$ kubectl describe pv,pvc
Name: gluster-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: default/gluster-claim
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 50Gi
Node Affinity: <none>
Message:
Source:
Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
EndpointsName: gluster-cluster
EndpointsNamespace: <unset>
Path: /gv0
ReadOnly: false
Events: <none>
Name: gluster-claim
Namespace: default
StorageClass:
Status: Bound
Volume: gluster-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 50Gi
Access Modes: RWX
VolumeMode: Filesystem
Used By: data-server-97469df55-8wd6q
dnsutils
mysql-6f47967858-xngbr
</code></pre>
| <p>Figured it out. Turns out all it needed was this block in the template spec:</p>
<pre><code>securityContext:
runAsUser: 70
fsGroup: 70
</code></pre>
<p>Took way too long to find a <a href="https://stackoverflow.com/questions/66316489/dockerfile-run-addgroup-group-postgres-in-use">reference</a> to this via Google. It also seems a bit odd: what happens if I want to switch off alpine to something else? The UID/GID won't be the same, so I'd have to look those up and change them here too. Using numeric IDs rather than names seems silly.</p>
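<p>For reference, the block goes at the pod level of the Deployment's <code>template.spec</code>; 70 is the UID/GID of the <code>postgres</code> user in the alpine-based image, and other base images may use different IDs:</p>

```yaml
spec:
  template:
    spec:
      securityContext:
        runAsUser: 70   # UID of the "postgres" user in postgres:alpine
        fsGroup: 70     # files from secret volumes get this group, so postgres can read them
      containers:
      - name: postgres
        image: postgres:alpine
        # ...rest of the container spec unchanged
```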
|
<p>I'm new to K8s and currently following a course that uses K8s. I am using minikube on my local machine. I'm stuck on the part in which I need to directly communicate with ingress-nginx, which happens to be in a separate namespace.</p>
<p>Here's some information regarding my setup.</p>
<pre><code> ✗ client (main) ✗ kubectl get namespace
NAME STATUS AGE
default Active 25d
ingress-nginx Active 21d
kube-node-lease Active 25d
kube-public Active 25d
kube-system Active 25d
</code></pre>
<p>Service</p>
<pre><code> ✔︎ client (main) ✗ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.105.88.90 <none> 80:30306/TCP,443:31814/TCP 21d
ingress-nginx-controller-admission ClusterIP 10.101.75.17 <none> 443/TCP 21d
</code></pre>
<p>My ingress-srv.yaml file</p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
labels:
name: ingress-service
spec:
rules:
- host: ticketing.dev
http:
paths:
- pathType: Prefix
path: "/api/users/(.*)"
backend:
service:
name: auth-srv
port:
number: 3000
- pathType: Prefix
path: /?(.*)
backend:
service:
name: client-srv
port:
number: 3000
</code></pre>
<p>I looked for some information over <a href="https://github.com/kubernetes/ingress-nginx/issues/3665" rel="nofollow noreferrer">github</a> and to do cross communication what I need to do is doing <code>http://<service>.<namespace>.svc.cluster.local:<port>/</code>. Based on that info, I came up with
<code>http://ingress-nginx-controller.ingress-nginx.svc.cluster.local</code> and test it using curl.</p>
<pre><code>✔︎ client (main) ✗ curl http://ingress-nginx-controller.ingress-nginx.svc.cluster.local
curl: (6) Could not resolve host: ingress-nginx-controller.ingress-nginx.svc.cluster.local
</code></pre>
<p>I'm following a course called <a href="https://www.udemy.com/course/microservices-with-node-js-and-react/" rel="nofollow noreferrer"> Microservices with Node JS and React </a></p>
<p>Hope for your answers and insights.
Thank you.</p>
| <p>Having your ingress controller in the <strong>ingress-nginx</strong> namespace is perfectly fine; any other namespace would work just as well.</p>
<p>You create the Ingress object in the namespace where your application runs, regardless of where the controller runs.</p>
<p>The controller will find Ingress objects across namespaces automatically, using the class you specified in the <strong>Ingress</strong> object.</p>
<p>In simple words:</p>
<p>If your <strong>app1</strong> runs in namespace <strong>application</strong>, just create one <strong>Ingress</strong> in the <strong>application</strong> namespace pointing to the <strong>app1</strong> <em>service</em> and you are done.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
labels:
name: ingress-service
spec:
rules:
- host: ticketing.dev
http:
paths:
- pathType: Prefix
path: "/api/users/(.*)"
backend:
service:
name: auth-srv
port:
number: 3000
- pathType: Prefix
path: /?(.*)
backend:
service:
name: client-srv
port:
number: 3000
</code></pre>
<ul>
<li>Ingress Controller (a plain Pod/Deployment along with a Service that handles the routing and proxying; Nginx in this example);</li>
<li>Ingress rules (a separate Kubernetes resource with <strong>kind: Ingress</strong>. They only take effect if an Ingress Controller already exists and its <strong>class</strong> is referenced in the Ingress.)</li>
</ul>
<p>The Ingress Controller can be deployed in any namespace and is usually deployed in a namespace separate from your app services, as you have done with <strong>ingress-nginx</strong>. Out of the box it sees Ingress rules in all namespaces in the cluster and will pick them up.</p>
<p>The Ingress rules, however, must reside in the namespace of the app they configure.</p>
|
<p>I have the following code:</p>
<pre><code>config.load_incluster_config()
v1 = client.CoreV1Api()
my_init_containers = v1.read_namespaced_pod(namespace=NAMESPACE, name=pod_name).spec.init_containers
</code></pre>
<p>This gives me all the containers in <code>init_containers</code>. My problem is that I want to get only specific container: <code>git-sync-init-container</code>.</p>
<p>I can do:</p>
<pre><code>init = v1.read_namespaced_pod( namespace=NAMESPACE, name=pod_name).spec.init_containers[2]
</code></pre>
<p>But I don't want to use indexes. How can I get a container by name?
I tried to iterate over <code>my_init_containers</code> searching for <code>name=git-sync-init-container</code> but this is a <code>V1Container</code> type and it doesn't let me iterate over it.</p>
| <p>You are close — just iterate over the list and compare each container's <code>name</code>:</p>
<pre class="lang-py prettyprint-override"><code>my_init_containers = v1.read_namespaced_pod(namespace=NAMESPACE, name=pod_name).spec.init_containers
for container in my_init_containers:
if container.name == "the_one_im_looking_for":
init = container
break
</code></pre>
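<p>Equivalently, as a one-liner with <code>next()</code>, which returns <code>None</code> when no init container matches. A self-contained sketch (stand-in objects are used here in place of the real <code>V1Container</code> instances returned by <code>read_namespaced_pod</code>):</p>

```python
from types import SimpleNamespace

def find_init_container(init_containers, name):
    """Return the first init container whose .name matches, or None."""
    return next((c for c in init_containers if c.name == name), None)

# Stand-ins for pod.spec.init_containers; the real objects expose .name the same way.
my_init_containers = [
    SimpleNamespace(name="setup-db"),
    SimpleNamespace(name="git-sync-init-container"),
]
init = find_init_container(my_init_containers, "git-sync-init-container")
print(init.name)
```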
|
<p>I have a kubernetes cluster, with two nodes running.</p>
<p>I have argocd being used to handle pulling in any changes to my microservice (one microservice, currently, but I will be adding to that).</p>
<p>My application is being built as a helm chart. So when my repo changes, i update my helm chart, and then argocd sees that the helm chart has changes and applies those changes to the cluster.</p>
<p>I'm looking to add Istio as my service mesh to my cluster. With using Istio there will be quite a few yaml configuration files.</p>
<p>My question is, <strong>how can I have my cluster auto update my istio configurations like how argocd updates when my helm chart changes?</strong></p>
<p>Of course, I could put the istio configuration files in the helm chart, but my thoughts on that were:</p>
<ol>
<li>do i want my istio configurations tied to my application?</li>
<li>even if I did do #1, which I am not opposed to, there are many istio configurations that will apply cluster-wide, not just to my one microservice, and those definitely wouldn't make sense to tie into my specific one microservice, argo-cd application. <strong>So how would I handle auto updating cluster-wide istio files?</strong></li>
</ol>
<p>Another option could be to use the argocd app of apps pattern, but from what I've read that doesn't seem to have the greatest support yet.</p>
| <ol>
<li><p>In my opinion, you should package Istio components like <code>VirtualService</code>, <code>RequestAuthentication</code> etc. into the application chart if they "belong" to the application. You could even add <code>Gateways</code> and <code>Certificates</code> to the app if it fits your development model (i.e., no separate team manages these concerns). Using a tool like Crossplane, you could even include database provisioning or other infrastructure in your app. That way, the app is self-contained and its configuration is not spread across multiple places.</p>
</li>
<li><p>For the cluster-wide configuration, you could create an "infrastructure" chart. This could live in its own Argo CD app, or even be deployed before your apps (perhaps in the same phase in which Argo CD itself is deployed).</p>
</li>
</ol>
|
<p>I wanted to host a TDengine cluster in Kubernetes, then met an error when I enabled coredump in the container.</p>
<p>I've searched Stack Overflow and found the Docker solution, <a href="https://stackoverflow.com/questions/60574458/how-to-modify-the-core-pattern-when-building-docker-image">How to modify the `core_pattern` when building docker image</a>, but not the Kubernetes one.</p>
| <p>Here's an example (note that for a Deployment the <code>apiVersion</code> should be <code>apps/v1</code>; the old <code>extensions/v1beta1</code> API was removed in Kubernetes 1.16):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: my-image
        name: my-app
        ...
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 0
</code></pre>
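<p>Note that <code>kernel.core_pattern</code> is a node-level sysctl that is not namespaced per container, so writing it from inside a pod requires a privileged container. A common sketch (assuming your cluster permits privileged init containers; the image, name, and core path here are illustrative):</p>

```yaml
initContainers:
- name: set-core-pattern
  image: busybox
  securityContext:
    privileged: true   # required: /proc/sys/kernel/core_pattern is read-only otherwise
  command: ["sh", "-c", "echo '/corefile/core-%e-%p' > /proc/sys/kernel/core_pattern"]
```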
|
<h1>The problem</h1>
<h2>Error photo</h2>
<p><a href="https://i.stack.imgur.com/4tpWA.png" rel="nofollow noreferrer">Flyway Job Error</a></p>
<p>I'm currently attempting to use flyway to perform migrations in postgres. I have a feeling the init scripts are unable to be found based on this error message.
Not too sure what I'm missing as the init scripts worked when using docker containers.</p>
<p>Any help would be appreciated for solving this issue.</p>
<h1>What I have tried:</h1>
<ol>
<li>So far I have tried playing with the connection string inside the job to see if it was a connection problem.</li>
<li>Then I tried using a config map with the init sql script inside to hopefully have the script visible for migrations. Didn't have too much luck with that approach.
I have attached the example that I found for this approach.
<a href="https://github.com/arkuksin/scs-db-versioning/blob/master/k8s/migration/migration_configmap.yml" rel="nofollow noreferrer">Migrations with ConfigMap for init sql scripts</a></li>
</ol>
<p>I have some more research links that I found but they didn't quite fit or work with my problem and weren't as noteworthy or were outdated.</p>
<ol>
<li><a href="https://blog.sebastian-daschner.com/entries/flyway-migrate-databases-managed-k8s" rel="nofollow noreferrer">Database migrations in Kubernetes with flyway video</a></li>
<li><a href="https://reflectoring.io/flyway-kubernetes-continuous-database-configuration/" rel="nofollow noreferrer">Continuous Database configuration with flyway</a></li>
</ol>
<h1>Code/Kubernetes setup</h1>
<p>What follows is my current setup:</p>
<p>Side note: I am using kind to pull my docker images locally and create my clusters, I will attach my docker compose and docker files at the end.</p>
<h2>Flyway Setup</h2>
<p>I have Flyway as a Job in Kubernetes with this structure:</p>
<h3>Flyway job file</h3>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: flyway-migration-job
spec:
template:
spec:
containers:
- name: flyway
image: flyway/flyway:6.3.1
command: ["flyway -url=jdbc:postgresql://postgres-master:5432/wallet -user=test_user -password=testuser1234 -connectRetries=60 migrate"]
restartPolicy: Never
backoffLimit: 1
</code></pre>
<h2>Postgres Setup</h2>
<h3>Postgres Deployment file:</h3>
<p>This job is using postgres which I have created a deployment for using:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-master
labels:
app: postgres-master
spec:
replicas: 1
selector:
matchLabels:
app: postgres-master
template:
metadata:
labels:
app: postgres-master
spec:
containers:
- name: postgres-master
image: postgres:12
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: wallet
- name: POSTGRES_USER
value: test_user
- name: POSTGRES_PASSWORD
value: testuser1234
</code></pre>
<h3>Postgres service file</h3>
<p>I have then exposed the Postgres pod using this Postgres service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-master
labels:
app: postgres-master
spec:
type: NodePort
ports:
- protocol: TCP
port: 5432
targetPort: 5432
selector:
app: postgres-master
</code></pre>
<h2>Node setup</h2>
<p>I also have a node deployment which connects to the database pod and I have confirmed that I have been able to connect to the database using hostname postgres-master(pod name), the above login details and port 5432.</p>
<h3>Node deployment file</h3>
<p>I've attached the node deployment file for completeness but it works.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: node-deployment
labels:
app: node-deployment
spec:
replicas: 3
selector:
matchLabels:
app: node-deployment
template:
metadata:
labels:
app: node-deployment
spec:
containers:
- name: node
image: wallet-api-20_web:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
env:
- name: PG_HOST
value: "postgres-master.default"
</code></pre>
<h2>Docker setup</h2>
<h3>Docker compose</h3>
<p>Then I have my docker compose file which I use to build my images:
Side note - all of the docker images worked together, including migrations, before I started adding Kubernetes.</p>
<pre><code>version: '3.5'
services:
web:
build:
dockerfile: ./Dockerfile
container_name: wallet-api-web
depends_on:
- flyway
ports:
- "8080:8080"
postgres-db:
build:
dockerfile: ./utils/db.Dockerfile
container_name: postgres-db
environment:
POSTGRES_PASSWORD: testuser1234
POSTGRES_USER: test_user
POSTGRES_DB: wallet
image: postgres:12
ports:
- "5432:5432"
restart: always
flyway:
command: -url=jdbc:postgresql://db:5432/wallet -user=test_user -password=testuser1234 -connectRetries=60 migrate
container_name: flyway-migrations
depends_on:
- db
image: flyway/flyway:6.3.1
volumes:
- ${PWD}/migrations/sql:/flyway/sql
</code></pre>
<h3>Postgres Dockerfile</h3>
<p>Here is my dockerfile for postgres:</p>
<pre><code>FROM postgres:13
ENV POSTGRES_DB wallet
</code></pre>
<h3>Node dockerfile</h3>
<pre><code>FROM node:15
ENV NODE_ENV=production
WORKDIR /app
# Copying the package json first enables caching.
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install --production
COPY . .
RUN npm run build
ENTRYPOINT npm run start
</code></pre>
<h3>Flyway Migration scripts</h3>
<p>This is the only migration script I have and it was working before moving to Kubernetes.</p>
<pre><code>CREATE TABLE transaction_events (
transaction_version int,
transaction_type VARCHAR(200),
coins int,
transaction_id VARCHAR(500),
wallet_id VARCHAR(500)
)
</code></pre>
<p>Any help would be appreciated, I have a feeling this might be something small I am missing.</p>
<p>If anymore information is required please let me know.</p>
<p>Feel free to add any questions as well and I will do my best to provide further context.</p>
<p>Thank you.</p>
| <p>I suspect the way you are calling <code>command</code> is the issue. Each argument needs to be a separate list element: <code>command: ["flyway", "-url=jdbc:postgresql://postgres-master:5432/wallet", "-user=test_user", "-password=testuser1234", "-connectRetries=60", "migrate"]</code></p>
<p>Also, you do not really need <code>command</code> at all. You can use <code>args</code> instead, since the image entrypoint already invokes flyway:
<code>args: ["-url=jdbc:postgresql://postgres-master:5432/wallet", "-user=test_user", "-password=testuser1234", "-connectRetries=60", "migrate"]</code></p>
<p>Both should work either way.</p>
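<p>Put together, a sketch of the corrected Job. Note the SQL migration files still need to end up at <code>/flyway/sql</code>, e.g. mounted from a ConfigMap, just as the docker-compose setup mounted them as a volume (the <code>flyway-sql</code> ConfigMap name below is hypothetical):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: flyway-migration-job
spec:
  template:
    spec:
      containers:
      - name: flyway
        image: flyway/flyway:6.3.1
        args:
        - -url=jdbc:postgresql://postgres-master:5432/wallet
        - -user=test_user
        - -password=testuser1234
        - -connectRetries=60
        - migrate
        volumeMounts:
        - name: sql
          mountPath: /flyway/sql
      volumes:
      - name: sql
        configMap:
          name: flyway-sql   # hypothetical ConfigMap holding the V1__*.sql files
      restartPolicy: Never
  backoffLimit: 1
```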
|
<p>I'm playing around with kubernetes ConfigMaps. In the <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#configmaps-and-pods" rel="nofollow noreferrer">official documentation</a>, I see "file-like keys" in the <code>data</code> field:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: game-demo
data:
# file-like keys
game.properties: |
enemy.types=aliens,monsters
player.maximum-lives=5
user-interface.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
</code></pre>
<p>Is it possible to break these "file-like keys" into different files and reference them in this ConfigMap resource?</p>
<p>I see several benefits of this approach:</p>
<ul>
<li>Slimmed down ConfigMap</li>
<li>Proper syntax highlighting for the "file-like" configurations</li>
<li>Can run auto formatters against the "file-like" configurations</li>
</ul>
| <blockquote>
<ul>
<li>Proper syntax highlighting for the "file-like" configurations</li>
<li>Can run auto formatters against the "file-like" configurations</li>
</ul>
</blockquote>
<p>Yes, it is easier to save the files as proper files on your machine and in Git.</p>
<p>I propose that you use the <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kustomize feature of kubectl</a> and use <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#configmapgenerator" rel="nofollow noreferrer">configMapGenerator</a> to generate the ConfigMap instead.</p>
<p>Example <code>kustomization.yaml</code> (saved in the same directory as your files, e.g. in <code>config/</code>)</p>
<pre><code>configMapGenerator:
- name: game-demo
files:
- game.properties
- user-interface.properties
</code></pre>
<p>Then you can apply (and generate the configMap) with (if your config is in <code>config/</code>):</p>
<pre><code>kubectl apply -k config/
</code></pre>
<p>Or you can preview the "generated" configMap with:</p>
<pre><code>kubectl kustomize config/
</code></pre>
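<p>One thing to be aware of: by default <code>configMapGenerator</code> appends a content hash to the generated name (e.g. <code>game-demo-5g2mh569b2</code>; the hash shown here is illustrative), so that workloads referencing the ConfigMap through kustomize get rolled automatically when a file changes. If you need the fixed name <code>game-demo</code>, disable the suffix:</p>

```yaml
generatorOptions:
  disableNameSuffixHash: true

configMapGenerator:
- name: game-demo
  files:
  - game.properties
  - user-interface.properties
```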
|
<p>I'm trying to run minikube with hyperv without open an Administrator powershell.
Is there any way?
I'm doing this:</p>
<pre><code>choco install minikube
minikube.exe start --vm-driver "hyperv"
</code></pre>
<p>If I try to launch minikube start from a normal powershell it gives me this message:</p>
<pre><code>X hyperv does not appear to be installed
</code></pre>
| <p>To launch minikube from a non-admin PowerShell, you need to add your non-admin user to the "Hyper-V Administrators" group.</p>
<p>Open PowerShell with administrator rights and run the command below to add the current user to the
"Hyper-V Administrators" group. You need to sign out and sign back in for the change to take effect.</p>
<pre><code>Add-LocalGroupMember -Group "Hyper-V Administrators" -Member ([System.Security.Principal.WindowsIdentity]::GetCurrent().Name)
</code></pre>
|
<p>I am working on a Kubernetes CronJob in GKE to <a href="https://cloud.google.com/sql/docs/mysql/import-export/exporting" rel="nofollow noreferrer">export SQL databases from GCP Cloud SQL</a>. I have a single GCP Cloud SQL instance whose Google service account is <code>p848827672298-eef1pd@gcp-sa-cloud-sql.iam.gserviceaccount.com</code>. If I add to this service account the permission to access and create bucket objects, running the <code>gcloud sql export sql ...</code> command <em>works</em>: the database is exported to my bucket.</p>
<p>However, what I want to do is use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Workload Identity</a> to bind a Kubernetes service account to the <code>p848827...</code> Google service account so that the CronJob can export the Cloud SQL database to my bucket. I try to do this by running this command:</p>
<pre><code>gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[K8_NAMESPACE/K8_SERVICE_ACCOUNT]" \
p848827672298-eef1pd@gcp-sa-cloud-sql.iam.gserviceaccount.com
</code></pre>
<p>(Of course, I replaced <code>PROJECT_ID</code>, <code>K8_NAMESPACE</code>, and <code>K8_SERVICE_ACCOUNT</code> with the appropriate values for my project.)</p>
<p>This results in</p>
<pre><code>ERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) NOT_FOUND: Service account projects/PROJECT_ID/serviceAccounts/p848827672298-eef1pd@gcp-sa-cloud-sql.iam.gserviceaccount.com does not exist.
</code></pre>
<p>How do I bind the Cloud SQL service account to my Kubernetes service account?</p>
| <p>First, make sure the relevant API is enabled; try disabling and re-enabling it once. I faced a similar issue and that fixed it: <code>gcloud services enable compute.googleapis.com</code>
<a href="https://dzone.com/articles/enabling-gke-workload-identity" rel="nofollow noreferrer">https://dzone.com/articles/enabling-gke-workload-identity</a></p>
<p>If you have a service account JSON file, you can also inject it directly into the pod as a volume-mounted secret.</p>
<pre><code>kubectl create secret generic echo --from-file service-account.json
</code></pre>
<p>Now suppose you deploy the application with the secret injected:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: echo
labels:
app: echo
spec:
replicas: 1
selector:
matchLabels:
app: echo
template:
metadata:
labels:
app: echo
name: echo
spec:
containers:
- name: echo
image: "gcr.io/hightowerlabs/echo"
env:
- name: "GOOGLE_APPLICATION_CREDENTIALS"
value: "/var/run/secret/cloud.google.com/service-account.json"
- name: "PROJECT_ID"
valueFrom:
configMapKeyRef:
name: echo
key: project-id
- name: "TOPIC"
value: "echo"
volumeMounts:
- name: "service-account"
mountPath: "/var/run/secret/cloud.google.com"
- name: "certs"
mountPath: "/etc/ssl/certs"
volumes:
- name: "service-account"
secret:
secretName: "echo"
- name: "certs"
hostPath:
path: "/etc/ssl/certs"
</code></pre>
<p>Example : <a href="https://github.com/kelseyhightower/gke-service-accounts-tutorial#google-cloud-service-accounts-with-google-container-engine-gke---tutorial" rel="nofollow noreferrer">https://github.com/kelseyhightower/gke-service-accounts-tutorial#google-cloud-service-accounts-with-google-container-engine-gke---tutorial</a></p>
|
<p>I would like to deploy an ngingx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs).</p>
<p>Not even sure if that is easily possible and I should just switch to an external solution, such as HAProxy or an nginx.</p>
<p>Required behavior:</p>
<pre><code>192.168.0.1-H"domain.com":443/frontend -> 192.168.0.1 (eth0) -> ingress -> service-frontend
192.168.0.1-H"domain.com":443/backend -> 192.168.0.1 (eth0) -> ingress -> service-backend
88.88.88.88-H"domain.com":443/frontend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
88.88.88.88-H"domain.com":443/backend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
</code></pre>
<p>And then later the eth1 interface should be able to be switched on, so that requests on that interface behave the same as on eth0.</p>
<p>I would like to be able to deploy multiple instances of services for load-balancing. I would like to keep the configuration in my namespace (if possible) so I can always delete and apply everything at once.</p>
<p>I'm using this guide as a reference: <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/</a></p>
<p>I was able to get something working with minikube, but obviously could not expose any external IPs and performance was quite bad. For that, I just configured a "kind: Ingress" and that was it.</p>
<p>So far, the ingress controller that's default on microk8s seems to listen on all interfaces and I can only configure it in its own namespace. Defining my own ingress seems to not have any effect.</p>
| <blockquote>
<p>I would like to deploy an ngingx-ingress controller on my self-hosted
Kubernetes (microk8s) that is configurable to listen on one or more
interfaces (external IPs).</p>
</blockquote>
<p>For the above scenario, you have to deploy multiple Nginx ingress controllers, each with a different <strong>class name</strong>.</p>
<p>Official document : <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/</a></p>
<p>So in this scenario, you have to create the Kubernetes service with <strong>Loadbalancer</strong> IP and each will point to the respective deployment and class will be used in the ingress object.</p>
<p>If you looking forward to use the multiple domains with a single ingress controller you can easily do it by mentioning the host into ingress.</p>
<p>Example for two domain :</p>
<ol>
<li>bar.foo.dev</li>
<li>foo.bar.dev</li>
</ol>
<p>YAML example</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontdoor-bar
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- bar.foo.dev
secretName: tls-secret-bar
rules:
- host: bar.foo.dev
http:
paths:
- backend:
serviceName: barfoo
servicePort: 80
path: /(.*)
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontdoor-foo
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
tls:
- hosts:
- foo.bar.dev
secretName: tls-secret-foo
rules:
- host: foo.bar.dev
http:
paths:
- backend:
serviceName: foobar
servicePort: 9000
path: /(.*)
</code></pre>
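<p>Note: the examples above use the <code>networking.k8s.io/v1beta1</code> API, which was removed in Kubernetes v1.22. On newer clusters the first Ingress would look roughly like this (a sketch, untested against your cluster):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontdoor-bar
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx   # replaces the kubernetes.io/ingress.class annotation
  tls:
    - hosts:
        - bar.foo.dev
      secretName: tls-secret-bar
  rules:
    - host: bar.foo.dev
      http:
        paths:
          - path: /(.*)
            pathType: ImplementationSpecific   # regex paths need this pathType
            backend:
              service:
                name: barfoo
                port:
                  number: 80
</code></pre>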
|
<p>The following error is returned:</p>
<pre><code>error: you must specify two or three arguments: verb, resource, and optional resourceName
</code></pre>
<p>when I executed:</p>
<pre><code>kubectl auth --as=system:serviceaccount:mytest1:default can-i use psp 00-mytest1
</code></pre>
<p>I already have following manifests for <code>podsecuritypolicy</code> (psp.yaml), <code>role</code> (role.yaml) and <code>rolebinding</code> (rb.yaml) and deployed in the namespace <code>mytest1</code>.</p>
<p>psp.yaml</p>
<pre><code> apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: 00-mytest1
labels: {}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
allowPrivilegeEscalation: false
requiredDropCapabilities:
- ALL
runAsUser:
rule: 'MustRunAsNonRoot'
runAsGroup:
rule: 'MustRunAs'
ranges:
- min: 1000
max: 1000
- min: 1
max: 65535
supplementalGroups:
rule: 'MayRunAs'
ranges:
- min: 1
max: 65535
fsGroup:
rule: 'MayRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
hostNetwork: false
hostIPC: false
hostPID: false
hostPorts: []
volumes:
- configMap
- downwardAPI
- emptyDir
- projected
- secret
</code></pre>
<p>role.yaml</p>
<pre><code> apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: mytest1
namespace: "mytest1"
labels: {}
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['00-mytest1']
</code></pre>
<p>and rb.yaml</p>
<pre><code> apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: mytest1
namespace: "mytest1"
labels: {}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: mytest1
subjects:
- kind: ServiceAccount
name: default
namespace: "mytest1"
</code></pre>
<p>I expect the return <code>yes</code> or <code>no</code> for the <code>kubectl auth can-i ...</code> check and not the above mentioned error. Is the use-case for the auth check correct? I appreciate the correction.</p>
| <p>You are missing the flag <code>--subresource</code>. If I execute</p>
<pre><code>kubectl auth --as=system:serviceaccount:mytest1:default can-i use psp --subresource=00-mytest1
</code></pre>
<p>I get a clear answer. In my situation:</p>
<pre><code>no
</code></pre>
<p>You can also get a warning like this:</p>
<pre><code>Warning: resource 'podsecuritypolicies' is not namespace scoped in group 'policy'
</code></pre>
<p>But it is related directly to your config.</p>
<p>For more information about kubectl auth can-i command check</p>
<pre><code>kubectl auth can-i --help
</code></pre>
<p>in your terminal.
You can also read <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">this doc</a>.</p>
|
<p>We are using Spring Boot in Kubernetes in our project. Recently we decided to add graceful shutdown for our application in order to ensure seamless deployments and make sure no requests are failed during pod terminations for whatever reason.
Even though Spring Boot provides graceful shutdown, it seems there is still a probability that requests may fail due to the fact that Kubernetes starts removing a pod from endpoints and sends the SIGTERM signal to the pod at the same time in parallel.
Quote from kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>At the same time as the kubelet is starting graceful shutdown, the control plane removes that shutting-down Pod from Endpoints (and, if enabled, EndpointSlice) objects where these represent a Service with a configured selector.</p>
</blockquote>
<p>This is also described in more detail <a href="https://youtu.be/wBtUIkMgzU8?t=3063" rel="nofollow noreferrer">here</a>. The solution is provided there as well, which is</p>
<blockquote>
<p>you can add a sleep in a preStop hook of the pod spec and, of course, <em><strong>configure that sleep to be whatever makes sense for your use case</strong></em> :</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/IjAcq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IjAcq.png" alt="enter image description here" /></a></p>
<p>In the example provided this sleep is configured to be 10 seconds, but I'm wondering what is the reasonable value for that, so the pod's termination is not unnecessarily delayed?</p>
<p>Thanks.</p>
| <p>It seems like this might be up to your preferences but it looks like the 5-10 seconds sleep is a recommended range:</p>
<blockquote>
<p>In “Kubernetes in Action”, Lukša recommends 5–10 seconds
<a href="https://blog.gruntwork.io/delaying-shutdown-to-wait-for-pod-deletion-propagation-445f779a8304" rel="nofollow noreferrer">https://blog.gruntwork.io/delaying-shutdown-to-wait-for-pod-deletion-propagation-445f779a8304</a></p>
</blockquote>
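<p>For reference, the preStop sleep from the screenshot can be sketched in the pod spec like this (the names and the 10-second value are placeholders to tune for your case):</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 40   # should exceed the sleep plus the app's own shutdown time
  containers:
    - name: app
      image: my-app:latest            # hypothetical image
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]
</code></pre>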
|
<p>I have a simple stateful with SFTP features and a second service try to connect to it.</p>
<p>When ISTIO is enabled, the connection is closed by the sftp service.
We can find this log:</p>
<pre><code>Bad protocol version identification '\026\003\001'
</code></pre>
<p>The routing is OK.</p>
<p>The service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: foo
namespace: bar
spec:
ports:
- port: 22
targetPort: 22
protocol: TCP
name: tcp-sftp
selector:
app.kubernetes.io/instance: foo-bar
</code></pre>
<p>I tried adding a VirtualService, with no luck:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: foo-bar-destination-virtualservice
namespace: bar
spec:
hosts:
- foo.bar.svc.cluster.local
tcp:
- match:
- port: 22
route:
- destination:
host: foo.bar.svc.cluster.local
port:
number: 22
</code></pre>
<p>The workaround is to disable the sidecar on the sftp pod for now:</p>
<pre><code>sidecar.istio.io/inject: "false"
</code></pre>
<p>It seems Envoy proxy (and Istio by proxy) does not support the SFTP protocol (<a href="https://github.com/istio/istio/issues/7664#issuecomment-411948731" rel="nofollow noreferrer">reference</a>).<br />
Your workaround is currently the only way to make it work.</p>
<p>If you want your auto-discovered services in the mesh to route/access your SFTP service, you can additionally create <a href="https://istio.io/latest/docs/reference/config/networking/service-entry/" rel="nofollow noreferrer"><code>ServiceEntry</code></a> pointing to it.</p>
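<p>A minimal <code>ServiceEntry</code> sketch for the sidecar-less SFTP workload; the names mirror the Service above, but treat the exact fields as assumptions to adapt:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: foo-sftp
  namespace: bar
spec:
  hosts:
    - foo.bar.svc.cluster.local
  location: MESH_INTERNAL
  resolution: DNS
  ports:
    - number: 22
      name: tcp-sftp
      protocol: TCP
</code></pre>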
|
<p>I have an architecture with multiple pods subscribing to a GCP Topic.</p>
<p>Every pod handles messages while it's up but is not interested in receiving messages it missed when it was not up.</p>
<p>In ActiveMQ these were non-persistent messages, but I don't see an equivalent in GCP.
The only thing I thought of is message lifetime, with a minimum of 10 minutes.</p>
<p>Is this possible in GCP, and where can it be configured?</p>
| <p>There is no option in Cloud Pub/Sub to disable storage. You have two options.</p>
<ol>
<li><p>As you suggest, set the message retention duration to the minimum, 10 minutes. This does mean you'll get messages that are up to ten minutes old when the pod starts up. The disadvantage to this approach is that if there is an issue with your pod and it falls more than ten minutes behind in processing messages, then it will not get those messages, even when it is up.</p>
</li>
<li><p>Use the <a href="https://cloud.google.com/pubsub/docs/replay-overview#seek_to_a_time" rel="nofollow noreferrer">seek operation</a> and seek to seek forward to the current timestamp. When a pod starts up, the first thing it could do is issue a Seek command that would acknowledge all messages before the provided timestamp. Note that this operation is eventually consistent and so it is possible you may still get some older messages when you initially start your subscriber (or, if you are using push, once your endpoint is up). Also keep in mind that the seek operation is an administrative operation and therefore is limited to <a href="https://cloud.google.com/pubsub/quotas" rel="nofollow noreferrer">6,000 operations a minute</a> (100 operations per second). Therefore, if you have a lot of pods that are often restarting, you may run into this limit.</p>
</li>
</ol>
|
<p>I am trying to get some secret objects from my cluster using the <code>k8s</code> <code>go</code> client library as follows</p>
<pre><code>secret, err := clientset.CoreV1().Secrets("mynamespace").Get("name-of-my-secret", metav1.GetOptions{})
</code></pre>
<p>This worked well as long as I had to get just one object.</p>
<p>I have a case now where I need to fetch several such secret objects, however this now has to be done based on labels. (i.e. fetch all such secret objects matching the <code>foo=someprovider.someurl.com/path1=value1</code> label)</p>
<p>I don't see however in the relevant <a href="https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GetOptions" rel="nofollow noreferrer">docs</a> a way to pass label selectors to the <code>GetOptions</code> struct that I assume performs such kind of filtering.</p>
<p>Any suggestions on how to select (potentially multiple) such resources based on labels?</p>
<p>The label selector is not part of <code>GetOptions</code>; it belongs to <code>ListOptions</code>, so use <code>List</code> instead of <code>Get</code>:</p>
<pre><code>clientset.CoreV1().Secrets("").List(context.TODO(), metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
</code></pre>
|
<p>So I've been trying to fix this for days now and I'm beyond stuck.</p>
<p>My app is running, and I can access the site when I go to the default url (example.com). I can refresh on this url without issues, and I can navigate through the pages being rendered through react router as long as I don't refresh on any other page. (e.g.) refreshing on example.com/path1 doesn't work, I get a 404 error.</p>
<p>My current ingress file looks like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myApp-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- my.app.com
secretName: myApp-tls
rules:
- host: "my.app.com"
http:
paths:
- pathType: Prefix
path: /.*
backend:
service:
name: myApp
port:
number: 80
</code></pre>
<p>I've added many of the most common replies to issues such as this, but it only makes matters worse, such as white screen with "Unexpected token '<'" errors for all javascript files it tries to load, and none of the pages loading at all.</p>
<p>For example, I've tried:</p>
<pre><code> nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/configuration-snippet: |
try_files $uri $uri/ /index.html;
</code></pre>
<p>I've tried adding additional paths to the controller for each route available in the app, I've tried setting the path to just "/" or "/(.*)", nothing is working.</p>
<p>Any help would be greatly appreciated! thank you!</p>
<p>EDIT:</p>
<pre><code># latest active node image
FROM node:14-alpine
# Create app directory
RUN mkdir /app
RUN mkdir -p /app/node_modules && chown -R node:node /app
WORKDIR /app
COPY package.json /app
COPY tsconfig.json /app
COPY webpack.config.js /app
COPY .env.prod .env
ADD src /app/src
ADD dist /app/dist
RUN mkdir -p /app/dist/js && chown -R node:node /app/dist/js
ADD server /app/server
ADD assets /app/assets
ADD config /app/config
ARG NPM_TOKEN
COPY .npmrc_docker .npmrc
RUN npm cache verify
RUN npm install
RUN npm run build:prod
RUN rm -f .npmrc
# Expose necessary port
EXPOSE 3000
# Compile typescript and start app
CMD ["cross-env", "NODE_ENV=production", "PORT=3000", "ts-node", "server/server"]
</code></pre>
<p>I've also tried doing it with <code>CMD ["npx", "http-server", "./dist", "-p", "3000"]</code> since the server part is only doing the following:</p>
<pre><code> this.app.use(express.static(path.resolve(__dirname, "../dist")));
this.app.get("*", (request: express.Request, response: express.Response) => {
response.sendFile(path.resolve(__dirname, "../dist/index.html"));
});
</code></pre>
| <p>So I think I found the issue, it seems to be working now, I still run into a problem of cmd shift r occasionally giving an unexpected token, but as a whole it seems to work.</p>
<p>I added the run script to the package.json, and I'm using that as the CMD now</p>
<p><code>CMD ["npm", "run", "start:prod"]</code></p>
<p>I also moved the javascript from the webpack build back to the root of the dist directory.</p>
<p>I don't really understand why it works now, and if I find anymore information on it, I'll update it here.</p>
<p>Edit:</p>
<p>It turns out this didn't work, I think at some point the app was cached and it just seemed like it was working, still stuck and hoping to find a fix for this.</p>
|
<p>I created two sample application(tcp-server and tcp-client) for checking TCP connection in istio environment. Used the below link to create server and client in g0</p>
<p><a href="https://www.linode.com/docs/guides/developing-udp-and-tcp-clients-and-servers-in-go/" rel="nofollow noreferrer">https://www.linode.com/docs/guides/developing-udp-and-tcp-clients-and-servers-in-go/</a></p>
<p>Deployed the application in kubernetes cluster and tested without istio, it is working fine.</p>
<p>But after installing istio (demo configuration, followed this url to install istio: <a href="https://istio.io/latest/docs/setup/getting-started/" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/getting-started/</a>),
and redeploying the apps so they get the envoy-proxy sidecar injected, the client is not connecting to the server.</p>
<p>Also, running the below command does connect to the server successfully:</p>
<pre><code>sh -c "echo world | nc 10.244.1.29 1234"
</code></pre>
<p>What am I doing wrong?</p>
| <p>Posting the solution I found.</p>
<p>Issue: I was trying to connect to the server using the node's ip address and nodePort, which somehow does not work in an istio environment.</p>
<p>Solution: From the client, instead of providing the ip address and nodePort of the server node, give the service name and container port of the server app.</p>
<p>Extra info: To use a client from outside the cluster, create a gateway and virtual service for the server. In your external client, provide the nodePort and ip address of the istio-ingress pod as the server destination.</p>
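<p>The gateway and virtual service mentioned in the extra info could be sketched like this (port numbers and names are assumptions based on the question):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tcp-server-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 1234
        name: tcp
        protocol: TCP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tcp-server
spec:
  hosts:
    - "*"
  gateways:
    - tcp-server-gateway
  tcp:
    - match:
        - port: 1234
      route:
        - destination:
            host: tcp-server   # the server app's Service name
            port:
              number: 1234
</code></pre>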
|
<p>I have a Kubernetes cluster <code>v1.22.1</code> set up in bare metal CentOS. I am facing a problem when setting up Nginx Ingress controller following <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/" rel="nofollow noreferrer">this link</a>.</p>
<p>I followed exactly the same in step 1-3 but got a <code>CrashLoopBackOff</code> error in nginx ingress controller pod. I checked the logs of the pod and found below:</p>
<pre><code>[root@dev1 deployments]# kubectl logs -n nginx-ingress nginx-ingress-5cd5c7549d-hw6l7
I0910 23:15:20.729196 1 main.go:271] Starting NGINX Ingress controller Version=1.12.1 GitCommit=6f72db6030daa9afd567fd7faf9d5fffac9c7c8f Date=2021-09-08T13:39:53Z PlusFlag=false
W0910 23:15:20.770569 1 main.go:310] The '-use-ingress-class-only' flag will be deprecated and has no effect on versions of kubernetes >= 1.18.0. Processing ONLY resources that have the 'ingressClassName' field in Ingress equal to the class.
F0910 23:15:20.774788 1 main.go:314] Error when getting IngressClass nginx: the server could not find the requested resource
</code></pre>
<p>I believe I have the IngressClass setup properly as shown in below:</p>
<pre><code>[root@dev1 deployments]# kubectl get IngressClass
NAME CONTROLLER PARAMETERS AGE
nginx nginx.org/ingress-controller <none> 2m12s
</code></pre>
<p>So I have no idea why it said "Error when getting IngressClass nginx". Can anyone shed some light on this, please?</p>
| <h2>Reproduction and what happens</h2>
<p>I created a one node cluster using <code>kubeadm</code> on CentOS 7. And got the same error.</p>
<p>You and I were able to proceed further only because we missed this command at the beginning:</p>
<pre><code>git checkout v1.12.1
</code></pre>
<p>The main difference is <code>ingress-class.yaml</code> has <code>networking.k8s.io/v1beta1</code> in <code>v1.12.1</code> and <code>networking.k8s.io/v1</code> in <code>master</code> branch.</p>
<p>After I went here for the second time and switched the branch, I immediately saw this error:</p>
<pre><code>$ kubectl apply -f common/ingress-class.yaml
error: unable to recognize "common/ingress-class.yaml": no matches for kind "IngressClass" in version "networking.k8s.io/v1beta1"
</code></pre>
<p>It looks like the other resources have not been updated for use on kubernetes <code>v1.22+</code> yet.</p>
<p>Please <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122" rel="nofollow noreferrer">see deprecated migration guide - v1.22 - ingress</a></p>
<h2>How to proceed further</h2>
<ul>
<li><p>I tested exactly the same approach on a cluster with <code>v1.21.4</code> and it worked like a charm. So you may consider downgrading the cluster.</p>
</li>
<li><p>If you're not tied to using the NGINX ingress controller (supported by <code>Nginx inc</code>), you can try <code>ingress-nginx</code>, which is developed by the <code>kubernetes community</code>. I tested it on <code>v1.22</code> and it works fine. Please find
<a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal" rel="nofollow noreferrer">Installation on bare metal cluster</a>.</p>
</li>
</ul>
<p>P.S. It may be confusing, but there are two free nginx ingress controllers which are developed by different teams. Also there's a third option - NGINX Plus, which is paid and has more options. Please <a href="https://docs.nginx.com/nginx-ingress-controller/intro/nginx-ingress-controllers/" rel="nofollow noreferrer">see the difference here</a></p>
|
<p>I have been trying to find a solution to this among the previously asked questions, but I can't find one that works for my use case (which to me seems like a general use case)</p>
<p>So I have a load balancer service and 5 pods in a namespace that share a label <code>app=abc_application</code>. So when I want to follow logs in all pods simultaneously, I use this</p>
<p><code>kubectl logs -f -l app=abc_application -c abc_application_container</code></p>
<p>Now my use case looks like this. I have a request that failed an hour back and I want to check the logs. I wanted to use the <code>--since=60m</code> argument but that doesn't work with the above command.</p>
<p>Is there any alternative than getting logs of individual pods? Can this command not be integrated?</p>
<p>You can also do the same with <strong>logs</strong>:</p>
<pre><code>kubectl logs <POD name> -n <Namespace name> --since-time='2021-09-21T10:00:00Z'
</code></pre>
<p>Using a simple <code>--since</code> with logs</p>
<pre><code>kubectl logs <POD name> -n <Namespace name> --since=60m (accepts durations like 5s, 2m, or 3h)
</code></pre>
<p>If you want to tail the logs by a few lines</p>
<pre><code>kubectl logs <POD name> -n <Namespace name> --tail=200
</code></pre>
<p>If you want to <strong>grep</strong> anything from the logs</p>
<pre><code>kubectl logs <POD name> -n <Namespace name> | grep <string>
</code></pre>
<p>With the above commands, you can also pass the container name with <strong>-c</strong> and a label selector with <strong>-l</strong>.</p>
<p>Reference : <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_logs/" rel="nofollow noreferrer">https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_logs/</a></p>
|
<p>I have been trying to find a solution to this among the previously asked questions, but I can't find one that works for my use case (which to me seems like a general use case)</p>
<p>So I have a load balancer service and 5 pods in a namespace that share a label <code>app=abc_application</code>. So when I want to follow logs in all pods simultaneously, I use this</p>
<p><code>kubectl logs -f -l app=abc_application -c abc_application_container</code></p>
<p>Now my use case looks like this. I have a request that failed an hour back and I want to check the logs. I wanted to use the <code>--since=60m</code> argument but that doesn't work with the above command.</p>
<p>Is there any alternative than getting logs of individual pods? Can this command not be integrated?</p>
| <p>I tried this with <code>kubectl tail</code> and got it working</p>
<p>To install <code>kubectl tail</code>: <code>kubectl krew install tail</code></p>
<p><code>kubectl tail -n <namespace> -l app=abc_application --since=2h</code></p>
|
<p>The Kubernetes documentation says:</p>
<blockquote>
<p>The administrator creates one <code>ResourceQuota</code> for each namespace.</p>
</blockquote>
<p>However, Kubernetes API does not prevent from creating more than one <code>ResourceQuota</code> per namespace.</p>
<p><strong>What happens if I have two <code>ResourceQuota</code>s in one namespace?</strong> Which one is used? The one with lower limits or the one with higher limits?</p>
<p>I cannot find the answer without testing it, which takes some time.</p>
| <p>Yes, that's true, it's possible to create multiple ResourceQuota in the same namespace and it has been mentioned in few cases in <a href="https://github.com/kubernetes/kubernetes/issues/23698" rel="nofollow noreferrer">this issue</a>. It's important to remember that the ResourceQuota is one of the Admission Controllers turned on by default as per <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#resourcequota" rel="nofollow noreferrer">the official documentation</a>:</p>
<blockquote>
<p>This admission controller will observe the incoming request and ensure that it does not violate any of the constraints enumerated in the ResourceQuota object in a Namespace.</p>
</blockquote>
<p>That means that regardless of the number of ResourceQuotas in the same namespace, a request is admitted only if it satisfies all of them. If it violates any one ResourceQuota, it will simply fail.</p>
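<p>For illustration, two quotas can coexist in one namespace and a request must satisfy both; the names and limits below are made up:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-pods
  namespace: mytest
spec:
  hard:
    pods: "10"             # at most 10 pods in the namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-memory
  namespace: mytest
spec:
  hard:
    requests.memory: 2Gi   # total memory requests capped at 2Gi
</code></pre>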
|
<p>I got really confused since I am new to Kubernetes.
Is there any difference between a Kubernetes Endpoint and a ClusterIP?</p>
<p>An Endpoint in Kubernetes is just an IP address and port tied to some resource, and you rarely want to think about them at all, as they are just used by other resources like Services. It is, though, a Kubernetes resource that can be found, listed and described.</p>
<p>You can list all Endpoint resources in the cluster with <code>kubectl get endpoints -A</code></p>
<p>A ClusterIP on the other hand is just an IP address on the internal Kubernetes network. This is where all pods communicate with each other through Services.</p>
<p>A Service is a way for network traffic to flow to pods. If you want to expose a pod only on the internal network for other pods to communicate with you would setup a Service with the type ClusterIP, and use the name of the service as a DNS name. Inside pods on the network you can always call the service names in stead of the actual IP addresses.</p>
<p>If you want the pods to be exposed externally to the Kubernetes cluster, you have multiple ways to do this. You can create a Service with type NodePort, which will open the same port on all cluster nodes and direct traffic to the pod through that port. You could also set up a Service with type LoadBalancer, but it is a little more complicated depending on where your cluster is living.</p>
|
<p>I'm trying to deploy okteto environment on Visual Studio Code to use Remote Development on Kubernetes.</p>
<p>Following the official steps (<a href="https://okteto.com/blog/remote-kubernetes-development/" rel="nofollow noreferrer">https://okteto.com/blog/remote-kubernetes-development/</a>), I executed "Okteto: up" and selected manifest(vscode-remote-go/okteto.yml), but got this error:</p>
<pre><code>Installing dependencies...
x couldn't download syncthing, please try again
</code></pre>
<p>By changing the log level, I also got these logs:</p>
<pre><code>C:\Users\user\AppData\Local\Programs\okteto.exe up -f 'c:\Workspace\...my_project...\vscode-remote-go\okteto.yml' --remote '22100' --loglevel=debug
time="2021-09-13T14:09:32+09:00" level=info msg="starting up command"
time="2021-09-13T14:09:32+09:00" level=info msg="failed to get latest version from github: fail to get releases from github: Get \"https://api.github.com/repos/okteto/okteto/releases?per_page=5\": dial tcp: lookup api.github.com: no such host"
Installing dependencies...
time="2021-09-13T14:09:32+09:00" level=info msg="installing syncthing for windows/amd64"
time="2021-09-13T14:09:32+09:00" level=info msg="failed to download syncthing, retrying: failed to download syncthing from https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip: Get \"https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip\": dial tcp: lookup github.com: no such host"
time="2021-09-13T14:09:33+09:00" level=info msg="installing syncthing for windows/amd64"
time="2021-09-13T14:09:33+09:00" level=info msg="failed to download syncthing, retrying: failed to download syncthing from https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip: Get \"https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip\": dial tcp: lookup github.com: no such host"
time="2021-09-13T14:09:34+09:00" level=info msg="installing syncthing for windows/amd64"
time="2021-09-13T14:09:34+09:00" level=info msg="failed to upgrade syncthing: failed to download syncthing from https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip: Get \"https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip\": dial tcp: lookup github.com: no such host"
time="2021-09-13T14:09:34+09:00" level=info msg="couldn't download syncthing, please try again"
x couldn't download syncthing, please try again
</code></pre>
<p>This environment is behind my corporate proxy, and okteto.exe may not use Windows proxy setting. When I directly enter the URL (<a href="https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip" rel="nofollow noreferrer">https://github.com/syncthing/syncthing/releases/download/v1.10.0/syncthing-windows-amd64-v1.10.0.zip</a>) it can be downloaded using proxy.</p>
<p>Is it possible to use okteto behind proxy?</p>
<p>Using a proxy is not currently supported in Okteto. We're looking into it though.</p>
<p>For now, a workaround is to manually download the syncthing binary and save it as <code>%HOME%\.okteto\syncthing.exe</code></p>
|
<p>I have a local kubernetes cluster (minikube), that is trying to load images from my local Docker repo.</p>
<p>When I do a "docker images", I get:</p>
<pre><code>cluster.local/container-images/app-shiny-app-validation-app-converter 1.6.9
cluster.local/container-images/app-shiny-app-validation 1.6.9
</code></pre>
<p>Given I know the above images are there, I run some helm commands which uses these images, but I get the below error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 66s (x2 over 2m12s) kubelet Back-off pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 66s (x2 over 2m12s) kubelet Error: ImagePullBackOff
Normal Pulling 51s (x3 over 3m24s) kubelet Pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 11s (x3 over 2m13s) kubelet Failed to pull image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9": rpc error: code = Unknown desc = Error response from daemon: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Warning Failed 11s (x3 over 2m13s) kubelet Error: ErrImagePull
</code></pre>
<p>Anyone know how I can fix this? Seems the biggest problem is <code>Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution</code></p>
<p>Since minikube is being used, you can refer to their documentation.
It is recommended that if an <code>imagePullPolicy</code> is being used, it needs to be set to <code>Never</code>. If set to <code>Always</code>, it will try to reach out and pull from the network.</p>
<p>From docs: <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/pushing/</a>
"Tip 1: Remember to turn off the imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never) in your yaml file. Otherwise Kubernetes won’t use your locally build image and it will pull from the network."</p>
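<p>In practice that is a one-line change in the container spec; a sketch using the image from the question:</p>
<pre><code>spec:
  containers:
    - name: app-converter
      image: cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9
      imagePullPolicy: Never   # use the image already present in minikube's Docker daemon
</code></pre>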
|
<p>We have a bunch of pods that use RabbitMQ. If the pods are shut down by K8S with SIGTERM, we have found that our RMQ client (Python Pika) has no time to close the connection to RMQ Server causing it to think those clients are still alive until 2 heartbeats are missed.</p>
<p>Our investigation has turned up that on SIGTERM, K8S kills all in- and most importantly OUTbound TCP connections, among other things (removing endpoints, etc.) Tried to see if any connections were still possible during preStop hooks, but preStop seems very internally focused and no traffic got out.</p>
<p>Has anybody else experienced this issue and solved it? All we need to do is be able to get a message out the door before kubelet slams the door. Our pods are not K8S "Services" so some <a href="https://stackoverflow.com/questions/62567844/kubernetes-graceful-shutdown-continue-to-serve-traffic-during-termination">suggestions</a> didn't help.</p>
<p>Steps to reproduce:</p>
<ol>
<li>add preStop hook sleep 30s to Sender pod</li>
<li>tail logs of Receiver pod to see inbound requests</li>
<li>enter Sender container's shell & loop curl Receiver - requests appear in the logs</li>
<li><code>k delete pod</code> to start termination of Sender pod</li>
<li>curl requests immediately begin to hang in Sender, nothing in the Receiver logs</li>
</ol>
<p>SIGTERM kills nothing; it's up to your application to decide how to handle it. SIGKILL is sent some time later, which does forcibly nuke the process, but 1) it also closes all sockets, which RMQ can detect, and 2) you control how long the container has to close cleanly via <code>terminationGracePeriodSeconds</code>.</p>
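<p>A sketch of the relevant pod spec knob; the grace period bounds how long the app has between SIGTERM and SIGKILL (the value below is an assumption):</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 60   # default is 30; raise it if clean shutdown is slow
  containers:
    - name: worker
      image: my-worker:latest         # hypothetical image that closes its RMQ connection on SIGTERM
</code></pre>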
|
<p>Does the Kubernetes scheduler place the pods on the nodes only based on their requested resources and nodes' available resources at the current snapshot of the server or it also takes into account the node's historical resource utilization?</p>
| <p>In the official <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">Kubernetes documentation</a> we can find process and metrics used by <code>kube-scheduler</code> for choosing node for pod.</p>
<p>Basically this is 2-step process:</p>
<blockquote>
<p>kube-scheduler selects a node for the pod in a 2-step operation:</p>
<ol>
<li>Filtering</li>
<li>Scoring</li>
</ol>
</blockquote>
<p><em>Filtering</em> step is responsible for getting list of nodes which actually are able to run a pod:</p>
<blockquote>
<p>The <em>filtering</em> step finds the set of Nodes where it's feasible to schedule the Pod. For example, the PodFitsResources filter checks whether a candidate Node has enough available resource to meet a Pod's specific resource requests. After this step, the node list contains any suitable Nodes; often, there will be more than one. If the list is empty, that Pod isn't (yet) schedulable.</p>
</blockquote>
<p><em>Scoring</em> step is responsible for choosing the best node from the list generated by the <em>filtering</em> step:</p>
<blockquote>
<p>In the <em>scoring</em> step, the scheduler ranks the remaining nodes to choose the most suitable Pod placement. The scheduler assigns a score to each Node that survived filtering, basing this score on the active scoring rules.</p>
</blockquote>
<blockquote>
<p>Finally, kube-scheduler assigns the Pod to the Node with the highest ranking. If there is more than one node with equal scores, kube-scheduler selects one of these at random.</p>
</blockquote>
<p>When the node with the highest score is chosen, scheduler notifies the API server:</p>
<blockquote>
<p>...picks a Node with the highest score among the feasible ones to run the Pod. The scheduler then notifies the API server about this decision in a process called <em>binding</em>.</p>
</blockquote>
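<p>The 2-step flow can be illustrated with a short, self-contained Python sketch. This is a conceptual illustration only, not the real kube-scheduler code; the resource check stands in for <code>PodFitsResources</code> and the scoring stands in for <code>LeastRequestedPriority</code>:</p>

```python
# Conceptual sketch of the scheduler's 2-step node selection
# (illustration only -- not the real kube-scheduler implementation).

def filter_nodes(nodes, pod_request):
    """Filtering: keep only nodes with enough free CPU/memory."""
    return [n for n in nodes
            if n["free_cpu"] >= pod_request["cpu"]
            and n["free_mem"] >= pod_request["mem"]]

def score_node(node, pod_request):
    """Scoring: favor nodes with more resources left after placement."""
    cpu_score = (node["free_cpu"] - pod_request["cpu"]) / node["total_cpu"]
    mem_score = (node["free_mem"] - pod_request["mem"]) / node["total_mem"]
    return (cpu_score + mem_score) / 2

def schedule(nodes, pod_request):
    feasible = filter_nodes(nodes, pod_request)
    if not feasible:
        return None  # no feasible node: the pod stays Pending
    return max(feasible, key=lambda n: score_node(n, pod_request))
```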
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler" rel="nofollow noreferrer">Factors that are taken into consideration for scheduling</a>:</p>
<ul>
<li>Individual and collective resource requirements</li>
<li>Hardware</li>
<li>Policy constraints</li>
<li>Affinity and anti-affinity specifications</li>
<li>Data locality</li>
<li>Inter-workload interference</li>
<li>Others...</li>
</ul>
<p>More detailed information about these parameters can be found <a href="https://kubernetes.io/docs/reference/scheduling/policies/" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>The following <em>predicates</em> implement filtering:</p>
<ul>
<li><code>PodFitsHostPorts</code>: Checks if a Node has free ports (the network protocol kind) for the Pod ports the Pod is requesting.</li>
<li><code>PodFitsHost</code>: Checks if a Pod specifies a specific Node by its hostname.</li>
<li><code>PodFitsResources</code>: Checks if the Node has free resources (eg, CPU and Memory) to meet the requirement of the Pod.</li>
<li><code>MatchNodeSelector</code>: Checks if a Pod's Node <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">Selector</a> matches the Node's <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels" rel="nofollow noreferrer">label(s)</a>.</li>
<li><code>NoVolumeZoneConflict</code>: Evaluate if the <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Volumes</a> that a Pod requests are available on the Node, given the failure zone restrictions for that storage.</li>
<li><code>NoDiskConflict</code>: Evaluates if a Pod can fit on a Node due to the volumes it requests, and those that are already mounted.</li>
<li><code>MaxCSIVolumeCount</code>: Decides how many <a href="https://kubernetes.io/docs/concepts/storage/volumes/#csi" rel="nofollow noreferrer">CSI</a> volumes should be attached, and whether that's over a configured limit.</li>
<li><code>PodToleratesNodeTaints</code>: checks if a Pod's <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">tolerations</a> can tolerate the Node's <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">taints</a>.</li>
<li><code>CheckVolumeBinding</code>: Evaluates if a Pod can fit due to the volumes it requests. This applies for both bound and unbound <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PVCs</a>.</li>
</ul>
</blockquote>
<blockquote>
<p>The following <em>priorities</em> implement scoring:</p>
<ul>
<li><code>SelectorSpreadPriority</code>: Spreads Pods across hosts, considering Pods that belong to the same <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> or <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a>.</li>
<li><code>InterPodAffinityPriority</code>: Implements preferred <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">inter-pod affinity and anti-affinity</a>.</li>
<li><code>LeastRequestedPriority</code>: Favors nodes with fewer requested resources. In other words, the more Pods that are placed on a Node, and the more resources those Pods use, the lower the ranking this policy will give.</li>
<li><code>MostRequestedPriority</code>: Favors nodes with most requested resources. This policy will fit the scheduled Pods onto the smallest number of Nodes needed to run your overall set of workloads.</li>
<li><code>RequestedToCapacityRatioPriority</code>: Creates a requestedToCapacity based ResourceAllocationPriority using default resource scoring function shape.</li>
<li><code>BalancedResourceAllocation</code>: Favors nodes with balanced resource usage.</li>
<li><code>NodePreferAvoidPodsPriority</code>: Prioritizes nodes according to the node annotation <code>scheduler.alpha.kubernetes.io/preferAvoidPods</code>. You can use this to hint that two different Pods shouldn't run on the same Node.</li>
<li><code>NodeAffinityPriority</code>: Prioritizes nodes according to node affinity scheduling preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution. You can read more about this in <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Assigning Pods to Nodes</a>.</li>
<li><code>TaintTolerationPriority</code>: Prepares the priority list for all the nodes, based on the number of intolerable taints on the node. This policy adjusts a node's rank taking that list into account.</li>
<li><code>ImageLocalityPriority</code>: Favors nodes that already have the <a href="https://kubernetes.io/docs/reference/glossary/?all=true#term-image" rel="nofollow noreferrer">container images</a> for that Pod cached locally.</li>
<li><code>ServiceSpreadingPriority</code>: For a given Service, this policy aims to make sure that the Pods for the Service run on different nodes. It favours scheduling onto nodes that don't have Pods for the service already assigned there. The overall outcome is that the Service becomes more resilient to a single Node failure.</li>
<li><code>EqualPriority</code>: Gives an equal weight of one to all nodes.</li>
<li><code>EvenPodsSpreadPriority</code>: Implements preferred <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">pod topology spread constraints</a>.</li>
</ul>
</blockquote>
<p>Answering your question:</p>
<blockquote>
<p>Does it take into account the node's historical resource utilization?</p>
</blockquote>
<p>As you can see, the list above contains no parameters related to historical resource utilization. I also did some research and did not find any information about it, so the scheduler appears to consider only the current state of the nodes.</p>
|
<p><code>kubectl</code> allows you to create ad hoc Jobs based on existing CronJobs.</p>
<p>This works great, but the documentation does not describe how to pass arguments when creating the job.</p>
<p>Example:</p>
<pre><code>kubectl -n my-namespace create job --from=cronjob/myjob my-job-clone
</code></pre>
<p>Is there any way I can pass arguements to this job upon creation?</p>
<p>Although <code>kubectl</code> currently does not allow you to use the <code>--from</code> flag and specify a command in the same clause, you can work around this limitation by getting the yaml from a dry run and using <code>yq</code> to apply a patch to it.</p>
<p>For example:</p>
<pre><code># get the original yaml file
kubectl create job myjob --from cronjob/mycronjob --dry-run=client --output yaml > original.yaml
# generate a patch with your new arguments (note: yq v3 syntax)
yq new 'spec.template.spec.containers[0].args[+]' '{INSERT NEW ARGS HERE}' > patch.yaml
# apply the patch
yq merge --arrays update patch.yaml original.yaml > final.yaml
# create job from the final yaml
kubectl create -f final.yaml
</code></pre>
|
<p>I have a local kubernetes cluster (minikube), that is trying to load images from my local Docker repo.</p>
<p>When I do a "docker images", I get:</p>
<pre><code>cluster.local/container-images/app-shiny-app-validation-app-converter 1.6.9
cluster.local/container-images/app-shiny-app-validation 1.6.9
</code></pre>
<p>Given I know the above images are there, I run some helm commands which uses these images, but I get the below error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 66s (x2 over 2m12s) kubelet Back-off pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 66s (x2 over 2m12s) kubelet Error: ImagePullBackOff
Normal Pulling 51s (x3 over 3m24s) kubelet Pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 11s (x3 over 2m13s) kubelet Failed to pull image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9": rpc error: code = Unknown desc = Error response from daemon: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Warning Failed 11s (x3 over 2m13s) kubelet Error: ErrImagePull
</code></pre>
<p>Anyone know how I can fix this? Seems the biggest problem is <code>Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution</code></p>
| <ol>
<li>Add <code>cluster.local</code> to your <code>/etc/hosts</code> file in all your kubernetes nodes.</li>
</ol>
<pre><code>192.168.12.34 cluster.local
</code></pre>
<ol start="2">
<li>Check whether you can login to registry using <code>docker login cluster.local</code></li>
<li>If your registry uses self-signed certificates, copy the <code>cluster.local.crt</code> certificate to every Kubernetes worker node as <code>/etc/docker/certs.d/cluster.local/ca.crt</code></li>
</ol>
|
<p>I am new to <code>Kubernetes</code> and using an EKS cluster endpoint provided by a third party. I am trying to create a simple nginx deployment using the following command:</p>
<pre><code>kubectl create deployment nginx-depl --image=nginx
</code></pre>
<p>It gives me following error:</p>
<pre><code>error: failed to create deployment: admission webhook "validate.kyverno.svc" denied the request:
resource Deployment/comp-dev/nginx-depl was blocked due to the following policies
edison-platform-policy-disallow-pod-without-resources:
validate-resources: 'validation error: Error : Unable to install - container spec does not specify resource request. Rule validate-resources[0] failed at path /spec/template/spec/containers/0/resources/requests/. Rule validate-resources[1] failed at path /metadata/labels/AllowContainerWithoutResourcesRequests/.'
edison-platform-policy-disallow-privileged-container:
autogen-validate-allowPrivilegeEscalation: 'validation error: Privileged mode is not allowed. Set allowPrivilegeEscalation to false. Rule autogen-validate-allowPrivilegeEscalation[0] failed at path /spec/template/spec/containers/0/securityContext/. Rule autogen-validate-allowPrivilegeEscalation[1] failed at path /spec/template/metadata/labels/AllowPrivilegedEscalation/.'
edison-platform-policy-disallow-root-user:
autogen-validate-runAsNonRoot: 'validation error: Running as root user is not allowed. Set runAsNonRoot to true. Rule autogen-validate-runAsNonRoot[0] failed at path /spec/template/spec/securityContext/runAsNonRoot/. Rule autogen-validate-runAsNonRoot[1] failed at path /spec/template/spec/securityContext/runAsUser/. Rule autogen-validate-runAsNonRoot[2] failed at path /spec/template/spec/containers/0/securityContext/. Rule autogen-validate-runAsNonRoot[3] failed at path /spec/template/spec/containers/0/securityContext/. Rule autogen-validate-runAsNonRoot[4] failed at path /spec/template/metadata/labels/AllowRootUserAccess/.'
edison-platform-policy-disallow-unknown-registries:
autogen-validate-registries: 'validation error: Unknown image registry. Rule autogen-validate-registries failed at path /spec/template/spec/containers/0/image/'
</code></pre>
<p>Is the public image registry blocked in EKS? Or has the third-party EKS provider not enabled the public Docker registry?</p>
<p>The cluster is installed with <a href="https://thenewstack.io/kyverno-a-new-cncf-sandbox-project-offers-kubernetes-native-policy-management/" rel="nofollow noreferrer">Kyverno</a>. Your <code>create</code> request was rejected by this policy engine based on policies set up by the provider. Try the following spec:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
securityContext:
runAsUser: 1000
containers:
- name: busybox
image: docker.io/busybox:latest
command: ["sh","-c"]
args: ["sleep 3600"]
resources:
requests:
cpu: 100m
memory: 100Mi
securityContext:
allowPrivilegeEscalation: false
runAsNonRoot: true
</code></pre>
<p>Note that running Nginx as a non-root user is not covered here.</p>
|
<p>I have these two ingresses for a blue (production) and green (canary) deployment and I want in my nginx snippet to set certain stuff based on whether the request is served from production service or from canary. What should be the <code>if</code> statement in the snippet?</p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: blue-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/configuration-snippet: |
if (canary) { <--- WHAT CAN I CHECK HERE?
...
} else {
...
}
spec:
...
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: green-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-cookie: "green"
spec:
...
</code></pre>
<p>I don't think there is a specific <em>canary property</em> you can check against.<br />
You could, however, look for a specific <em>cookie</em> that your canary deployment sets (your green ingress already routes by the <code>green</code> cookie):</p>
<pre><code>if ($http_cookie ~* "id=([^;]+)(?:;|$)") {
...
}
</code></pre>
<p>Note that, while nginx does have an <a href="http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if" rel="nofollow noreferrer"><code>if</code> directive</a>, there is no <code>else</code>. <em>The equivalent of else is everything that isn't modified inside an <code>if</code></em>.</p>
|
<p>I have an AKS cluster with a web application. I want to provision an nginx Ingress controller to expose the app to the internet and later enable TLS.</p>
<p>I have been following the official documentation</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic</a></p>
<p>and</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip</a></p>
<p>But I always end up with a pending nginx-ingress service with this error</p>
<pre><code>reason: SyncLoadBalancerFailed
message: >-
Error syncing load balancer: failed to ensure load balancer: instance not
found
</code></pre>
<p><a href="https://i.stack.imgur.com/hA27b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hA27b.png" alt="enter image description here" /></a></p>
<p>I have seen</p>
<p><a href="https://stackoverflow.com/questions/55625051/how-to-fix-failed-to-ensure-load-balancer-error-for-nginx-ingress">How to fix "failed to ensure load balancer" error for nginx ingress</a></p>
<p>and googled the error but so far no luck</p>
<p>Does anyone know what could it be?</p>
<p>Or, is there some working example I can start from?</p>
<p>I believe you are using a static IP address with the NGINX Ingress Controller service. This issue pops up when the cloud controller manager cannot find the static Azure Public IP Address resource in the resource group referenced by the NGINX Ingress Controller's service annotation (if no resource group is explicitly specified via the annotation, it looks for the Azure Public IP Address resource in the <a href="https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks" rel="nofollow noreferrer">AKS cluster's node resource group</a>).</p>
<p>If you have created the static Azure Public IP Address resource in the node resource group then please ensure that the Azure Public IP address resource exists.</p>
<p>If you have created the static Azure Public IP Address resource in a different resource group, then:</p>
<ul>
<li><p>Please ensure the cluster identity used by the AKS cluster has delegated permissions to the other resource group, such as Network Contributor.</p>
<pre><code>az role assignment create \
--assignee <Client ID of cluster identity> \
--role "Network Contributor" \
--scope /subscriptions/<subscription id>/resourceGroups/<Public IP address resource group name>
</code></pre>
<p><strong>Note:</strong> Your cluster identity can be a <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-service-principal?tabs=azure-cli" rel="nofollow noreferrer">service principal</a> or a <a href="https://learn.microsoft.com/en-us/azure/aks/use-managed-identity" rel="nofollow noreferrer">managed identity</a>.</p>
</li>
<li><p>In the <code>helm install</code> command to deploy an NGINX Ingress Controller, please add the following argument:<br />
<code>--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"=$PublicIpAddressResourceGroupName</code></p>
<p>Thus, if you are following <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip#ip-and-dns-label" rel="nofollow noreferrer">this document</a> the helm install command should look something like:</p>
<pre><code># Use Helm to deploy an NGINX ingress controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-basic \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.image.registry=$ACR_URL \
--set controller.image.image=$CONTROLLER_IMAGE \
--set controller.image.tag=$CONTROLLER_TAG \
--set controller.image.digest="" \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
--set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
--set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.image.registry=$ACR_URL \
--set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
--set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
--set controller.service.loadBalancerIP=$STATIC_IP \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"=$PublicIpAddressResourceGroupName
</code></pre>
</li>
</ul>
<p>For more information please check <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip#create-a-service-using-the-static-ip-address" rel="nofollow noreferrer">here</a>.</p>
|
<p>For example, I run a Pod in a public cloud cluster. The Pod has a main container running the app. The cluster has an environment variable named <code>ABC</code>. Within the main container, I wish to access the environment variable <code>ABC</code>. What is the best way to do so?</p>
| <p><strong>Very simple option</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: envar-demo
labels:
purpose: demonstrate-envars
spec:
containers:
- name: envar-demo-container
image: gcr.io/google-samples/node-hello:1.0
env:
- name: DEMO_GREETING
value: "Hello from the environment"
- name: DEMO_FAREWELL
value: "Such a sweet sorrow"
</code></pre>
<p>Read more : <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/</a></p>
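<p>Inside the container, the application then reads the variable from its process environment. For example, in Python (using the <code>DEMO_GREETING</code> variable from the spec above):</p>

```python
import os

# read the variable injected via the Pod spec's `env:` section;
# fall back to a default when it is not set (e.g. outside the cluster)
greeting = os.environ.get("DEMO_GREETING", "no greeting set")
print(greeting)
```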
<p><strong>Option 1</strong></p>
<p>If the variables are not sensitive, you can store them in a ConfigMap; the ConfigMap is injected into the Pod and your app can read the variables from the environment or from mounted files.</p>
<p>Read more about configmap : <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/configmap/</a></p>
<p>Example configmap :</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
SPECIAL_LEVEL: very
SPECIAL_TYPE: charm
</code></pre>
<p>example pod</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: special-config
restartPolicy: Never
</code></pre>
<p>You can also mount files containing lists of variables, as the example above does, depending on your requirements.</p>
<p><strong>Option 2</strong> :</p>
<p>You can also use a Secret if your variable is sensitive: <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p>
<p><strong>Option 3</strong> :</p>
<p>If you want to follow best practices, you can use <strong>Vault</strong> with Kubernetes to manage the variables of all your different microservices.</p>
<p>Vault : <a href="https://www.vaultproject.io/" rel="nofollow noreferrer">https://www.vaultproject.io/</a></p>
<p>Example : <a href="https://github.com/travelaudience/kubernetes-vault-example" rel="nofollow noreferrer">https://github.com/travelaudience/kubernetes-vault-example</a></p>
<p>It provides key-value pair management and good security options as well.</p>
|
<p>The Python <code>kubernetes</code> module provides two methods for listing secrets:</p>
<ul>
<li><code>list_namespaced_secret</code></li>
<li><code>list_secret_for_all_namespaces</code></li>
</ul>
<p>I want to list the secrets for the namespace in which a container is running.</p>
<p>The <code>list_secret_for_all_namespaces</code> method isn't appropriate because it attempts to list secrets at the cluster scope, and I don't need or want that level of access.</p>
<p>The <code>list_namespaced_secret</code> method requires an explicit namespace name. I can provide that as an argument to my Python code...</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: example
namespace: my-sandbox
spec:
containers:
- command:
- /path/to/my-python-code.py
arguments:
- my-sandbox
</code></pre>
<p>...but that's clumsy, because it means the pod manifest needs to be
modified every time it's deployed to a new namespace.</p>
<p>Is there a way to acquire the current namespace name via the API?</p>
<p>You can use <code>fieldRef</code> in the pod's manifest to expose the namespace as an environment variable:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-envars-fieldref
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "sh", "-c"]
args:
- while true; do
echo -en '\n';
printenv
sleep 10;
done;
env:
- name: MY_POD_NAMESPACE # <----this
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: Never
</code></pre>
<p>You can read the env variable and consume it with <code>list_namespaced_secret</code>.</p>
<pre><code>from kubernetes import client, config
import os

# inside a pod, use load_incluster_config(); load_kube_config() reads a
# local kubeconfig and is meant for running outside the cluster
config.load_incluster_config()
v1 = client.CoreV1Api()
k = v1.list_namespaced_secret(str(os.environ.get("MY_POD_NAMESPACE")))
for i in k.items:
print(i.metadata.name)
</code></pre>
|
<p>I am deploying a regular Deployment in Azure AKS, and I want to use Key Vault to store the secrets that give access to a database.</p>
<p>This is my deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: sonarqube
name: sonarqube
spec:
selector:
matchLabels:
app: sonarqube
replicas: 1
template:
metadata:
labels:
app: sonarqube
spec:
containers:
- name: sonarqube
image: sonarqube:8.9-developer
resources:
requests:
cpu: 500m
memory: 1024Mi
limits:
cpu: 2000m
memory: 4096Mi
volumeMounts:
- mountPath: "/mnt/secrets/"
name: secrets-store-inline
- mountPath: "/opt/sonarqube/data/"
name: sonar-data-new
- mountPath: "/opt/sonarqube/extensions/plugins/"
name: sonar-extensions-new2
env:
- name: "SONARQUBE_JDBC_USERNAME"
valueFrom:
secretKeyRef:
name: test-secret
key: username
- name: "SONARQUBE_JDBC_PASSWORD"
valueFrom:
secretKeyRef:
name: test-secret
key: password
- name: "SONARQUBE_JDBC_URL"
valueFrom:
configMapKeyRef:
name: sonar-config
key: url
ports:
- containerPort: 9000
protocol: TCP
volumes:
- name: sonar-data-new
persistentVolumeClaim:
claimName: sonar-data-new
- name: sonar-extensions-new2
persistentVolumeClaim:
claimName: sonar-extensions-new2
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "azure-kv-provider"
</code></pre>
<p>and this is my secret storage class:</p>
<pre><code>kind: SecretProviderClass
metadata:
name: azure-kv-provider
spec:
provider: azure
secretObjects:
- data:
- key: username
objectName: username
- key: password
objectName: password
secretName: test-secret
type: Opaque
parameters:
usePodIdentity: "false"
useAssignedIdentity: "true"
userAssignedIdentityID: "zzzz-zzzz-zzzz-zzzz-zzzz"
keyvaultName: "dbkvtz"
cloudName: ""
objects: |
array:
- |
objectName: test
objectType: secret
objectAlias: username
objectVersion: ""
- |
objectName: test
objectType: secret
objectAlias: password
objectVersion: ""
resourceGroup: "myresourcegroup"
subscriptionId: "yyyy-yyyy-yyyy-yyy-yyyy"
tenantId: "xxxx-xxxx-xxxx-xxx-xxxx"
</code></pre>
<p>Where "zzzz-zzzz-zzzz-zzzz-zzzz" is the Client ID of the created Managed Identity.</p>
<p>In the Key Vault that I created ("dbkvtz") I added the managed identity through "Access Policy". On the other hand, under the managed identity's "Azure Role Assignments" I am not able to add any role -- no role assignments are found for the selected subscription. I don't know whether it is necessary to add a role there.</p>
<p>The AKS cluster is set up for system-assigned managed identity. I want to use managed identities to access the Key Vault, so I created a managed identity with client ID "zzzz-zzzz-zzzz-zzzz-zzzz" (where each "z" is a value from 0-9a-z).</p>
<p>I am not too familiar with Key Vault integration in AKS, so I am not sure if the config is OK.</p>
<p>I am getting this error:</p>
<p><strong>kubectl describe pods:</strong></p>
<pre><code> Normal Scheduled 19m default-scheduler Successfully assigned default/sonarqube-6bdb9cfc85-npbfw to aks-agentpool-16966606-vmss000000
Warning FailedMount 5m43s (x5 over 16m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[secrets-store-inline sonar-data-new sonar-extensions-new2 default-token-t45tw]: timed out waiting for the condition
Warning FailedMount 3m27s kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[default-token-t45tw secrets-store-inline sonar-data-new sonar-extensions-new2]: timed out waiting for the condition
Warning FailedMount 71s (x2 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[secrets-store-inline], unattached volumes=[sonar-data-new sonar-extensions-new2 default-token-t45tw secrets-store-inline]: timed out waiting for the condition
Warning FailedMount 37s (x17 over 19m) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/sonarqube-6bdb9cfc85-npbfw, err: rpc error: code = Unknown desc = failed to mount objects, error: failed to create auth config, error: failed to get credentials, nodePublishSecretRef secret is not set
</code></pre>
<p>logs az aks show -g RG -n SonarQubeCluster</p>
<pre><code>{
"aadProfile": null,
"addonProfiles": {
"azurepolicy": {
"config": null,
"enabled": true,
"identity": {
"clientId": "yy",
"objectId": "zz",
"resourceId": "/subscriptions/xx/resourcegroups/MC_xx_SonarQubeCluster_southcentralus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/azurepolicy-sonarqubecluster"
}
},
"httpApplicationRouting": {
"config": null,
"enabled": false,
"identity": null
},
"omsagent": {
"config": {
"logAnalyticsWorkspaceResourceID": "/subscriptions/xx/resourceGroups/DefaultResourceGroup-SCUS/providers/Microsoft.OperationalInsights/workspaces/DefaultWorkspace-44e26024-4977-4419-8d23-0e1e22e8804e-SCUS"
},
"enabled": true,
"identity": {
"clientId": "yy",
"objectId": "zz",
"resourceId": "/subscriptions/xx/resourcegroups/MC_xx_SonarQubeCluster_southcentralus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/omsagent-sonarqubecluster"
}
}
},
"agentPoolProfiles": [
{
"availabilityZones": [
"1"
],
"count": 2,
"enableAutoScaling": false,
"enableEncryptionAtHost": null,
"enableFips": false,
"enableNodePublicIp": null,
"enableUltraSsd": null,
"gpuInstanceProfile": null,
"kubeletConfig": null,
"kubeletDiskType": "OS",
"linuxOsConfig": null,
"maxCount": null,
"maxPods": 110,
"minCount": null,
"mode": "System",
"name": "agentpool",
"nodeImageVersion": "AKSUbuntu-1804gen2containerd-2021.07.25",
"nodeLabels": {},
"nodePublicIpPrefixId": null,
"nodeTaints": null,
"orchestratorVersion": "1.20.7",
"osDiskSizeGb": 128,
"osDiskType": "Managed",
"osSku": "Ubuntu",
"osType": "Linux",
"podSubnetId": null,
"powerState": {
"code": "Running"
},
"provisioningState": "Succeeded",
"proximityPlacementGroupId": null,
"scaleDownMode": null,
"scaleSetEvictionPolicy": null,
"scaleSetPriority": null,
"spotMaxPrice": null,
"tags": null,
"type": "VirtualMachineScaleSets",
"upgradeSettings": null,
"vmSize": "Standard_DS2_v2"
}
],
"apiServerAccessProfile": {
"authorizedIpRanges": null,
"enablePrivateCluster": false,
"enablePrivateClusterPublicFqdn": null,
"privateDnsZone": null
},
"autoScalerProfile": null,
"autoUpgradeProfile": null,
"azurePortalFqdn": "sonarqubecluster-dns-4b5e95d4.portal.hcp.southcentralus.azmk8s.io",
"disableLocalAccounts": null,
"diskEncryptionSetId": null,
"dnsPrefix": "SonarQubeCluster-dns",
"enablePodSecurityPolicy": null,
"enableRbac": true,
"extendedLocation": null,
"fqdn": "sonarqubecluster-dns-4b5e95d4.hcp.southcentralus.azmk8s.io",
"fqdnSubdomain": null,
"httpProxyConfig": null,
"id": "/subscriptions/xx/resourcegroups/RG/providers/Microsoft.ContainerService/managedClusters/SonarQubeCluster",
"identity": {
"principalId": "yy",
"tenantId": "rr",
"type": "SystemAssigned",
"userAssignedIdentities": null
},
"identityProfile": {
"kubeletidentity": {
"clientId": "yy",
"objectId": "zz",
"resourceId": "/subscriptions/xx/resourcegroups/MC_xx_SonarQubeCluster_southcentralus/providers/Microsoft.ManagedIdentity/userAssignedIdentities/SonarQubeCluster-agentpool"
}
},
"kubernetesVersion": "1.20.7",
"linuxProfile": null,
"location": "southcentralus",
"maxAgentPools": 100,
"name": "SonarQubeCluster",
"networkProfile": {
"dnsServiceIp": "10.0.0.10",
"dockerBridgeCidr": "172.17.0.1/16",
"loadBalancerProfile": {
"allocatedOutboundPorts": null,
"effectiveOutboundIPs": [
{
"id": "/subscriptions/xx/resourceGroups/MC_xx_SonarQubeCluster_southcentralus/providers/Microsoft.Network/publicIPAddresses/nn",
"resourceGroup": "MC_xx_SonarQubeCluster_southcentralus"
}
],
"idleTimeoutInMinutes": null,
"managedOutboundIPs": {
"count": 1
},
"outboundIPs": null,
"outboundIpPrefixes": null
},
"loadBalancerSku": "Standard",
"natGatewayProfile": null,
"networkMode": null,
"networkPlugin": "kubenet",
"networkPolicy": null,
"outboundType": "loadBalancer",
"podCidr": "10.244.0.0/16",
"serviceCidr": "10.0.0.0/16"
},
"nodeResourceGroup": "MC_xx_SonarQubeCluster_southcentralus",
"podIdentityProfile": null,
"powerState": {
"code": "Running"
},
"privateFqdn": null,
"privateLinkResources": null,
"provisioningState": "Succeeded",
"resourceGroup": "RG",
"securityProfile": null,
"servicePrincipalProfile": {
"clientId": "msi"
},
"sku": {
"name": "Basic",
"tier": "Free"
},
"type": "Microsoft.ContainerService/ManagedClusters",
"windowsProfile": null
}
</code></pre>
<p>Any idea of what is wrong?</p>
<p>Thank you in advance.</p>
| <p>The <code>userAssignedIdentityID</code> in your <code>SecretProviderClass</code> must be the user-assigned kubelet managed identity ID (the managed identity of the node pool) and not the managed identity created for your AKS cluster, because the volumes are accessed via the kubelet on the nodes.</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: azure-kvname-user-msi
spec:
provider: azure
parameters:
usePodIdentity: "false"
useVMManagedIdentity: "true"
userAssignedIdentityID: "<Kubelet identity ID>"
keyvaultName: "kvname"
</code></pre>
<p>You also need to assign a Role to this Kubelet Identity:</p>
<pre><code>resource "azurerm_role_assignment" "akv_kubelet" {
scope = azurerm_key_vault.akv.id
role_definition_name = "Key Vault Secrets Officer"
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
}
</code></pre>
<p>or</p>
<pre><code>export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export AKV_ID=$(az keyvault show -g <resource group> -n <akv name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "Key Vault Secrets Officer" --scope $AKV_ID
</code></pre>
<p>Documentation can be found <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/user-assigned-msi-mode/#configure-user-assigned-managed-identity-to-access-keyvault" rel="nofollow noreferrer">here for user-assigned identity</a> and <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/identity-access-modes/system-assigned-msi-mode/" rel="nofollow noreferrer">here for system-assigned identity.</a></p>
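<p>For completeness, here is a sketch of a pod consuming that <code>SecretProviderClass</code> through the Secrets Store CSI driver (pod and mount names are illustrative); it is the kubelet, using the kubelet identity above, that pulls the secrets at volume mount time:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
  name: busybox-secrets-store
spec:
  containers:
    - name: busybox
      image: registry.k8s.io/e2e-test-images/busybox:1.29
      command: ["/bin/sleep", "10000"]
      volumeMounts:
        - name: secrets-store
          mountPath: "/mnt/secrets-store"
          readOnly: true
  volumes:
    - name: secrets-store
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "azure-kvname-user-msi"
</code></pre>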
|
<p>We have a microservices based application running in openshift for which we need to design an API gateway.</p>
<p>This API gateway is expected to perform minimal checks like authentication , routing to appropriate micro services , logging , tracing etc ( Authorisation would be handled by respective micro services )</p>
<p>Should we go for a commercial product ( nginx , Red Hat OpenShift API gateway) or an open source spring cloud gateway would suffice ?</p>
<p>Please note these APIs are not exposed to external world and does not need all features of a full fledged API gateway like developer portal ,usage limits by consumer etc.</p>
| <blockquote>
<p>Should we go for a commercial product ( nginx , Red Hat OpenShift API
gateway) or an open source spring cloud gateway would suffice ?</p>
</blockquote>
<p>The open-source version of Kong will also work.</p>
<p>I would suggest checking & reading this article first: <a href="https://medium.com/@harsh.manvar111/api-gateway-identity-server-comparison-ec439468cc8a" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/api-gateway-identity-server-comparison-ec439468cc8a</a></p>
<p><a href="https://i.stack.imgur.com/W8cTu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W8cTu.png" alt="enter image description here" /></a></p>
<p>You can check out the Kong API gateway, as it supports authentication & authorization with API keys, along with options like OIDC, JWT, and many supported plugins.</p>
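<p>For illustration, a minimal declarative Kong configuration that protects an internal route with the <code>key-auth</code> plugin could look like the sketch below (service, route, and credential names are hypothetical):</p>
<pre><code>_format_version: "2.1"
services:
  - name: orders-service
    url: http://orders.default.svc.cluster.local:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: key-auth   # consumers must present a valid apikey
consumers:
  - username: internal-client
    keyauth_credentials:
      - key: replace-with-a-real-key
</code></pre>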
<p>As you don't require a developer portal, the open-source edition should be sufficient; if you ever want a dashboard, <a href="https://github.com/pantsel/konga" rel="nofollow noreferrer">Konga</a> is a good option.</p>
<p>Kong is very lightweight, runs on the <strong>Nginx</strong> platform in the background, has a wide variety of plugins, and enjoys good community support.</p>
<p>There are also plugins for logging & monitoring: you can enable them and Kong will forward the data to a central logging system.</p>
<p>People also use Kong at the edge & as an internal service mesh for <strong>mTLS</strong> and networking, if required.</p>
<p>We have been using Kong for a long time and have written some articles on Kong & Keycloak as well, if you are using Keycloak as your IAM: <a href="https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56" rel="nofollow noreferrer">https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56</a></p>
<p>You can also check out a basic setup of Kong & Konga: <a href="https://faun.pub/kong-api-gateway-with-konga-dashboard-ae95b6d1fec7" rel="nofollow noreferrer">https://faun.pub/kong-api-gateway-with-konga-dashboard-ae95b6d1fec7</a></p>
<p>Kong installation: <a href="https://docs.konghq.com/enterprise/2.5.x/deployment/installation/docker/" rel="nofollow noreferrer">https://docs.konghq.com/enterprise/2.5.x/deployment/installation/docker/</a></p>
<p>There is also another API gateway worth a look, WSO2 API Manager: <a href="https://wso2.com/api-manager/" rel="nofollow noreferrer">https://wso2.com/api-manager/</a></p>
|
<p>In order to apply Kubernetis for the first time as a backend developer of a startup, I looked for AWS' guide to EKS and found a good document and followed it.</p>
<p>The link to the guide is as follows.
<a href="https://aws-eks-web-application.workshop.aws/en/10-intro.html" rel="nofollow noreferrer">https://aws-eks-web-application.workshop.aws/en/10-intro.html</a></p>
<p>Here, I proceeded with the start with an AWS Account method on 2-1, and I'm omitting all the previous options.</p>
<p>ex. 5-2. (Option) Add Console Credential</p>
<p>In the initial attempt, we proceeded with the process with Option, because we continued to fail in the progress application stage and are trying new things.</p>
<p>All processes were easy until "10-1. Apply HPA stage".</p>
<p>However, when I checked the HPA Status through the kubectl get hpa command, CPU usage was marked as unknown.</p>
<p>The guide said that if you try a little later, it will come out properly, so I tried it an hour later, but it was the same.</p>
<p>So, when I checked the status through the kubectl describe hpa command, I found that there was an error due to the missing cpu request as below.</p>
<pre><code>Name: demo-flask-backend-hpa
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 14 Sep 2021 09:03:53 +0000
Reference: Deployment/demo-flask-backend
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 30%
Min replicas: 1
Max replicas: 5
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: missing request for cpu
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 5s horizontal-pod-autoscaler failed to get cpu utilization: missing request for cpu
Warning FailedComputeMetricsReplicas 5s horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
</code></pre>
<p>To solve this problem, we looked in many ways, but we haven't found a suitable solution because we still have little knowledge about Kubernetis.</p>
<p>The yaml setting files created so far are as follows.</p>
<p>All the instructions used in the Bashshell followed the guide, and there were no critical errors other than deprecated errors.</p>
<p>How could I solve this error?</p>
<hr />
<p><strong>flask-hpa.yaml</strong></p>
<pre><code>---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: demo-flask-backend-hpa
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: demo-flask-backend
minReplicas: 1
maxReplicas: 5
targetCPUUtilizationPercentage: 30
</code></pre>
<hr />
<p><strong>flask-deployment.yaml</strong></p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-flask-backend
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: demo-flask-backend
template:
metadata:
labels:
app: demo-flask-backend
spec:
containers:
- name: demo-flask-backend
image: $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/demo-flask-backend:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
resources:
requests:
cpu: 250m
limits:
cpu: 500m
</code></pre>
<hr />
<p><strong>ingress.yaml</strong></p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "backend-ingress"
namespace: default
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- http:
paths:
- path: /contents
pathType: Prefix
backend:
service:
name: "demo-flask-backend"
port:
number: 8080
- path: /services
pathType: Prefix
backend:
service:
name: "demo-nodejs-backend"
port:
number: 8080
- path: /
pathType: Prefix
backend:
service:
name: "demo-frontend"
port:
number: 80
</code></pre>
| <p>The HPA relies on Metrics Server data to decide whether or not to scale the pods.</p>
<pre><code>kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</code></pre>
<p>Installation: <a href="https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html</a></p>
<p>On AWS (EKS) you have to install it first, whereas on GKE it comes pre-installed by default.</p>
<p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-metrics-server/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/eks-metrics-server/</a></p>
<p>You can check whether the Metrics Server is running using:</p>
<pre><code>kubectl top pods
</code></pre>
<p>If the output shows resource usage, your Metrics Server is up & running and the HPA problem lies elsewhere; in that case, double-check that every container in the target pods actually declares <code>resources.requests.cpu</code>, since the error above complains about a missing CPU request.</p>
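<p>For reference, the same HPA expressed in the newer <code>autoscaling/v2</code> API is sketched below (on Kubernetes 1.21 the same spec is available as <code>autoscaling/v2beta2</code>; the <code>autoscaling/v1</code> version from the question is equivalent):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-flask-backend-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-flask-backend
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 30
</code></pre>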
|
<p>I am trying to deploy multiple cronjob using same helm chart. I have defined cronjobs in values.yaml file which is below.</p>
<pre><code>cronjob:
crons:
"0":
name: one-minute-cron
schedule: "*/1 * * * *"
"1":
name: five-minute-cron
schedule: "*/5 * * * *"
metadata:
namespace: "{{K8S_NS}}"
creationTimestamp: null
restartPolicy: OnFailure
image:
repository: "{{CI_REGISTRY_IMAGE}}/{{CI_COMMIT_REF_SLUG}}:{{CI_COMMIT_SHA}}.{{CI_PIPELINE_IID}}"
pullPolicy: "Always"
imagePullSecrets: git-image-pull-secret-cron
restartPolicy: OnFailure
resources:
requests:
cpu: 1.0
memory: "128Mi"
limits:
cpu: 2.0
memory: "192Mi"
</code></pre>
<p>Below is my cronjob.yaml file from templates folder.</p>
<pre><code>{{- range $job, $val := .Values.cronjob.crons }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
namespace: {{ $.Values.cronjob.metadata.namespace }}
spec:
concurrencyPolicy: Allow
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: {{ $.Values.cronjob.metadata.creationTimestamp }}
name: {{ $.Values.cronjob.crons.name }}
spec:
template:
metadata:
creationTimestamp: {{ $.Values.cronjob.metadata.creationTimestamp }}
spec:
containers:
- image: {{ $.Values.cronjob.image.repository }}
imagePullPolicy: {{ $.Values.cronjob.image.pullPolicy }}
name: {{ $.Values.cronjob.crons.name }}
resources:
requests:
memory: {{ $.Values.cronjob.resources.requests.memory }}
cpu: {{ $.Values.cronjob.resources.requests.cpu }}
limits:
memory: {{ $.Values.cronjob.resources.limits.memory }}
cpu: {{ $.Values.cronjob.resources.limits.cpu }}
dnsPolicy: ClusterFirst
restartPolicy: {{ $.Values.cronjob.image.restartPolicy }}
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
imagePullSecrets:
- name: {{ $.Values.cronjob.image.imagePullSecrets }}
schedule: {{ quote $.Values.cronjob.crons.schedule }}
successfulJobsHistoryLimit: 3
suspend: false
status: {}
---
{{- end }}
</code></pre>
<p>When I run this in CICD pipeline for deployment in gitlab it throws below error.</p>
<pre><code>Error: UPGRADE FAILED: error validating "": error validating data: [ValidationError(CronJob.spec.jobTemplate.spec.template.spec.containers[0]): missing required field "name" in io.k8s.api.core.v1.Container, ValidationError(CronJob.spec): missing required field "schedule" in io.k8s.api.batch.v1beta1.CronJobSpec]
</code></pre>
<p>Note: I have copied the whole repository from gitlab except the sensitive information such as gitlab secrets, templates. I have checked the other blogs as well for this but none of them are helping to get these crons working. Github repo link is <a href="https://github.com/shaileshsutar88/helm-multiple-cron" rel="nofollow noreferrer">here</a></p>
| <p>First,
I think that <code>cronjob.crons</code> should be an array. Try changing it to:</p>
<pre><code>cronjob:
crons:
- id: "0"
name: "one-minute-cron"
schedule: "*/1 * * * *"
- id: "1"
name: five-minute-cron
schedule: "*/5 * * * *"
</code></pre>
<p>Then you can iterate over "crons" via:</p>
<pre><code>{{- range $cron := .Values.cronjob.crons }}
</code></pre>
<p>You can then access the "cron" fields like so:</p>
<pre><code>...
imagePullPolicy: {{ $.Values.cronjob.image.pullPolicy }}
name: {{ $cron.name }}
</code></pre>
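<p>Putting it together, a minimal template iterating over that array could look like the sketch below (only the fields relevant to the error are shown; note that each CronJob also needs its own top-level <code>metadata.name</code>, and the <code>restartPolicy</code> is read from <code>cronjob.restartPolicy</code> as defined in the values file, not from <code>cronjob.image</code>):</p>
<pre><code>{{- range $cron := .Values.cronjob.crons }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $cron.name }}
  namespace: {{ $.Values.cronjob.metadata.namespace }}
spec:
  schedule: {{ $cron.schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: {{ $cron.name }}
              image: {{ $.Values.cronjob.image.repository }}
              imagePullPolicy: {{ $.Values.cronjob.image.pullPolicy }}
          restartPolicy: {{ $.Values.cronjob.restartPolicy }}
{{- end }}
</code></pre>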
|
<p>I am running Spring Batch application in Kubernetes environment. The k8s cluster have one master and three worker nodes. I am testing spring batch under high load, which is spawning around 100 worker pods. However, all the 100 pods are coming up only on two out of three worker nodes. No node selector or additional labeling has been done on the nodes.</p>
<p>I have used Spring cloud deployer Kubernetes to create worker pods in Kubernetes.</p>
<p>The versions involved are:</p>
<ul>
<li>Spring Boot: 2.1.9.RELEASE</li>
<li>Spring Cloud: 2020.0.1</li>
<li>Spring Cloud Deployer: 2.5.0</li>
<li>Spring Cloud Task: 2.1.1.RELEASE</li>
<li>Kubernetes: 1.21</li>
</ul>
<p>How can I ensure that worker pods get scheduled on all available worker nodes evenly?</p>
<p>Following is the partition handler implementation responsible for launching the tasks.</p>
<pre><code>@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer) {
Resource resource = this.resourceLoader.getResource(resourceSpec);
DeployerPartitionHandler partitionHandler = new DeployerPartitionHandler(taskLauncher, jobExplorer, resource,
"worker");
commandLineArgs.add("--spring.profiles.active=worker");
commandLineArgs.add("--spring.cloud.task.initialize.enable=false");
commandLineArgs.add("--spring.batch.initializer.enabled=false");
commandLineArgs.add("--spring.cloud.task.closecontext_enabled=true");
commandLineArgs.add("--logging.level.root=DEBUG");
partitionHandler.setCommandLineArgsProvider(new PassThroughCommandLineArgsProvider(commandLineArgs));
partitionHandler.setEnvironmentVariablesProvider(environmentVariablesProvider());
partitionHandler.setApplicationName(appName + "worker");
partitionHandler.setMaxWorkers(maxWorkers);
return partitionHandler;
}
@Bean
public EnvironmentVariablesProvider environmentVariablesProvider() {
return new SimpleEnvironmentVariablesProvider(this.environment);
}
</code></pre>
| <p>Posting this out of comments as a community wiki for better visibility, feel free to edit and expand.</p>
<hr />
<p>There are scheduling mechanics which can prevent scheduling pods on some nodes:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and tolerations</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">Node selector</a></p>
</li>
</ul>
<p>If nothing is set, it's worth trying to rejoin the node. For instance it might not be registered correctly (this solved the issue above).</p>
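<p>If all three workers turn out to be schedulable, you can also explicitly ask the scheduler to spread the worker pods across nodes with <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/" rel="nofollow noreferrer">topology spread constraints</a>. A sketch of the relevant pod spec fragment, assuming the worker pods carry a hypothetical <code>app: batch-worker</code> label:</p>
<pre><code>spec:
  topologySpreadConstraints:
    - maxSkew: 1                           # allow at most 1 pod difference between nodes
      topologyKey: kubernetes.io/hostname  # treat every node as its own topology domain
      whenUnsatisfiable: ScheduleAnyway    # prefer, but do not hard-require, even spread
      labelSelector:
        matchLabels:
          app: batch-worker
  containers:
    - name: worker
      image: registry.example.com/batch-worker:latest   # illustrative image
</code></pre>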
|
<p>Such as <code>system:masters</code>、<code>system:anonymous</code>、<code>system:unauthenticated</code>.</p>
<p>Is there a way to have all system groups that do not contain external creation, just the system,kubectl command or a list?</p>
<p>I searched the Kubernetes documentation but didn't find a list or a way to get it.</p>
| <p><strong>There is no build-in command to list all the default user groups from the Kubernetes cluster.</strong></p>
<p>However you can try to workaround in several options:</p>
<ul>
<li>You can create your own custom script (e.g. in Bash) based on the <code>kubectl get clusterrole</code> command.</li>
<li>You can try installing some <a href="https://krew.sigs.k8s.io/plugins/" rel="nofollow noreferrer">plugins</a>. The plugin <a href="https://github.com/corneliusweig/rakkess/blob/master/README.md" rel="nofollow noreferrer">rakkess</a> could help you:</li>
</ul>
<blockquote>
<p>Have you ever wondered what access rights you have on a provided kubernetes cluster? For single resources you can use <code>kubectl auth can-i list deployments</code>, but maybe you are looking for a complete overview? This is what <code>rakkess</code> is for. It lists access rights for the current user and all server resources, similar to <code>kubectl auth can-i --list</code>.</p>
</blockquote>
<p>See also more information about:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/" rel="nofollow noreferrer">kubelet authentication / authorization</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#anonymous-requests" rel="nofollow noreferrer">anonymous requests</a></li>
</ul>
|
<p>I was wondering if there are any ways to spin up another job, on the event that a pod controlled by a job fails, which would update the job controlling that pod’s memory request value, maybe double it, before the pod restarts?</p>
<p>I’ve looked up PreStop container lifestyle hooks, operators, etc. Right now the best solution seems to be a custom controller that runs over all jobs and if they are in Reboot state or something like that and have a certain label then it doubles their memory request.</p>
| <p>You could use <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">Vertical Pod Autoscaler</a> in <code>Auto</code> or <code>Recreate</code> mode.<br />
It supports Jobs and CronJobs, as well as Deployments.</p>
<p>Please be aware of the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#known-limitations" rel="nofollow noreferrer">limitations</a>: for example, it cannot be used together with an HPA on CPU or memory, and VPA can't evict pods which are not run under a controller.</p>
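<p>A sketch of such a VPA object targeting a CronJob (names and resource bounds are illustrative):</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-job-vpa
spec:
  targetRef:
    apiVersion: batch/v1
    kind: CronJob
    name: my-cronjob
  updatePolicy:
    updateMode: "Auto"       # apply recommendations by evicting/recreating pods
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          memory: 128Mi
        maxAllowed:
          memory: 2Gi
</code></pre>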
|
<p>What does it mean that Kubernetes removed support for Docker but supports containerd. Docker developed containerd and uses it for managing containers.</p>
<p>Are there 2 different containerd?</p>
<ol>
<li>one that is supported by kubernetes</li>
<li>another one which is used by docker</li>
</ol>
<p>If not, then why kubernetes cannot talk/use to containerd used by the Docker daemon?</p>
| <h2>In short</h2>
<ul>
<li><a href="https://containerd.io/" rel="nofollow noreferrer">containerd</a> is part of Docker (the part that pulls and runs images)</li>
<li>Since Kubernetes is concerned only with pulling and running images and not creating them, it makes little sense to support Docker (which does not implement <a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/" rel="nofollow noreferrer">CRI</a>) as a runtime, when containerd (which implements CRI) is the only thing that is needed</li>
<li>Even though Kubernetes drops support for Docker as a container runtime, images created with docker will still work in Kubernetes since they follow the <a href="https://github.com/opencontainers/runtime-spec/blob/master/spec.md" rel="nofollow noreferrer">OCI-spec</a>.</li>
</ul>
<blockquote>
<p>Docker as an underlying runtime is being deprecated in favor of
runtimes that use the Container Runtime Interface (CRI) created for
Kubernetes. Docker-produced images will continue to work in your
cluster with all runtimes, as they always have.</p>
</blockquote>
<h2>The longer answer</h2>
<p><strong>containerd is a container runtime, which is a piece of software that pulls images, manages networking, storage and starts containers</strong> (using <a href="https://github.com/opencontainers/runc" rel="nofollow noreferrer">runc</a>). There are other container runtimes, such as <a href="https://cri-o.io/" rel="nofollow noreferrer">cri-o</a> that can be used for this purpose also. Those container runtime can handle any image following the OCI-spec, which is a specification of the container image format.</p>
<p><strong>Docker on the other hand is much more than that</strong>. It is a multi-competent tool that can both create OCI compliant images as well as running them. When Docker pulls or runs an image it uses containerd.</p>
<p>Kubernetes relies heavily on a plugin architecture where various needs like storage, networking etc can be fullfilled by any implementation that adheres to the interface Kubernetes specifies. For container runtimes, <strong>Kubernetes defines the Container Runtime Interface CRI, and any container runtime that implements that interface can be used to pull images and launch containers in Kubernetes</strong>.</p>
<p>The extra features, like creating images etc that comes with Docker has little to no value for Kubernetes, which is concerned with pulling images and running containers only. <strong>The fact that Docker does not implement CRI makes it unneccessary complex for Kubernetes to support Docker as a runtime</strong>, when all it needs is the containerd runtime.</p>
<blockquote>
<p>[...]</p>
<p>Docker is cool and useful because it has a lot of UX enhancements
that make it really easy for humans</p>
<p>[...]</p>
<p>As a result of this human-friendly abstraction layer, your Kubernetes
cluster has to use another tool called Dockershim to get at what it
really needs, which is containerd. That’s not great, because it gives
us another thing that has to be maintained and can possibly break.
What’s actually happening here is that Dockershim is being removed
from Kubelet as early as v1.23 release, which removes support for
Docker as a container runtime as a result. You might be thinking to
yourself, but if containerd is included in the Docker stack, why does
Kubernetes need the Dockershim?</p>
<p>Docker isn’t compliant with CRI, the Container Runtime Interface. If
it were, we wouldn’t need the shim, and this wouldn’t be a thing</p>
</blockquote>
<p>Quotes are taken from the article <a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="nofollow noreferrer">Don't Panic: Kubernetes and Docker</a></p>
|
<p>I am using the mssql docker image (Linux) for sql server 2019. The default user is not <code>root</code> but <code>mssql</code>.
I need to perform some operations as <code>root</code> inside the container:</p>
<pre><code>docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
</code></pre>
<p>Then I start the shell as <code>root</code>:</p>
<pre><code>docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
</code></pre>
<p>This works.</p>
<p>Now I need to do this in a container deployed in an AKS cluster</p>
<pre><code>kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
</code></pre>
<p>as expected. But then:</p>
<pre><code>kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
</code></pre>
<p>So when the container is in an AKS cluster, starting a shell as <code>root</code> doesn't work.</p>
<p>I then try to ssh into the node and use docker from inside:</p>
<pre><code>kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
</code></pre>
<p>Looks like a Kubernetes cluster node doesn't have docker installed!</p>
<p>Any clues?</p>
<p><strong>EDIT</strong></p>
<p>The image I used locally and in Kubernetes is exactly the same,</p>
<pre><code>mcr.microsoft.com/mssql/server:2019-latest untouched
</code></pre>
| <p><a href="https://stackoverflow.com/users/10008173/david-maze" title="74,731 reputation">David Maze</a> has aptly pointed out in a comment:</p>
<blockquote>
<p>Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?</p>
</blockquote>
<p>Generally, if you want to change something permanently you have to create a new image. Everything you described behaved exactly as it was supposed to. First you exec'd into the container in Docker and logged in as root; in Kubernetes, however, it is a completely different container. Note that <code>kubectl exec --user</code> does not select a user inside the container at all: the <code>--user</code> flag refers to an auth-info entry in your kubeconfig, which is why it fails with <code>auth info "root" does not exist</code>. Second, even if you made a change, it would only exist until the container dies. If you want to modify something permanently, you have to create a new image with all the components and the configuration you need. For more information look at the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">pod lifecycle</a>.</p>
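<p>A minimal sketch of such a custom image, installing the missing tooling at build time and then dropping back to the unprivileged user (the installed package is just an example):</p>
<pre><code>FROM mcr.microsoft.com/mssql/server:2019-latest

# Become root only for build-time changes
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends sudo \
 && rm -rf /var/lib/apt/lists/*

# Drop back to the image's default unprivileged user
USER mssql
</code></pre>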
|
<p>I have my hyperledger fabric blockchain deployed on k8s in the <strong>namespace: hlf-blockchain</strong> and my client app is deployed is in another <strong>namespace: hlf-app</strong></p>
<p>The cpp-profile template is below. url-> <em><code>grpcs://<service-name>.<namespace>:<port></code></em> which enables cross namespace communication.</p>
<pre><code>{
"name": "test-network",
"version": "1.0.0",
"client": {
"organization": "MyOrg",
"connection": {
"timeout": {
"peer": {
"endorser": "10000"
}
}
}
},
"organizations": {
"TboxOrg": {
"mspid": "MyOrg",
"peers": [
"peer0",
"peer1",
"peer2"
],
"certificateAuthorities": [
"ca-myorg"
]
}
},
"peers": {
"peer0": {
"url": "grpcs://peer0.hlf-blockchain:${P0PORT}",
"tlsCACerts": {
"pem": "${PEERPEM}"
},
"grpcOptions": {
"ssl-target-name-override": "peer0",
"hostnameOverride": "peer0",
"request-timeout": 10000,
"grpc.keepalive_time_ms": 60000
}
},
"peer1": {
"url": "grpcs://peer1.hlf-blockchain:${P1PORT}",
"tlsCACerts": {
"pem": "${PEERPEM}"
},
"grpcOptions": {
"ssl-target-name-override": "peer1",
"hostnameOverride": "peer1",
"request-timeout": 10000,
"grpc.keepalive_time_ms": 60000
}
},
"peer2-tbox": {
"url": "grpcs://peer2.hlf-blockchain:${P2PORT}",
"tlsCACerts": {
"pem": "${PEERPEM}"
},
"grpcOptions": {
"ssl-target-name-override": "peer2",
"hostnameOverride": "peer2",
"request-timeout": 10000,
"grpc.keepalive_time_ms": 60000
}
}
},
"certificateAuthorities": {
"ca-tboxorg": {
"url": "https://ca-myorg.hlf-blockchain:${CAPORT}",
"caName": "ca-myorg",
"tlsCACerts": {
"pem": ["${CAPEM}"]
},
"httpOptions": {
"verify": false
}
}
}
}
</code></pre>
<p>From my client-app using <strong>fabrid-sdk-go</strong> I am able to connect to the network using the gateway. While invoking the chaincode I am getting the following error:</p>
<pre><code>Endorser Client Status Code: (2) CONNECTION_FAILED. Description: dialing connection on target [peer0:7051]: connection is in TRANSIENT_FAILURE\nTransaction processing for endorser
</code></pre>
<p>I am able to invoke the transactions using cli command from the same <strong>namespace: hfl-blockchain</strong></p>
<p>My <strong>peer service configuration</strong>:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: peer0
labels:
app: peer0
spec:
selector:
name: peer0
type: ClusterIP
ports:
- name: grpc
port: 7051
protocol: TCP
- name: event
port: 7061
protocol: TCP
- name: couchdb
port: 5984
protocol: TCP
</code></pre>
<p>I believe this error is due to communication error between different namespace, which the client apps gets from the cpp-profile.</p>
<p>What's the correct way to configure the peer service or the cpp connection profile?</p>
| <p>You are correct, the discovery service is returning network URLs that are unreachable from the <code>hlf-blockchain</code> namespace.</p>
<p>It is possible to run a Gateway client in a different namespace from the Fabric network. If you are using Kube DNS, each
of the fabric nodes can be referenced with a fully qualified host name <code><service-name>.<namespace>.svc.cluster.local</code>.</p>
<p>In order to connect a gateway client across namespaces, you will need to introduce the .svc.cluster.local
Fully Qualified Domain Name to the fabric URLs returned by discovery:</p>
<ul>
<li><p>In your TLS CA enrollments, make sure that the certificate signing requests include a valid Subject Alternate Name
with the FQDN. For example, if your peer0 TLS certificate is only valid for the host <code>peer0</code>, then the grpcs://
connection will be rejected in the TLS handshake when connecting to grpcs://peer0.hlf-blockchain.svc.cluster.local.</p>
</li>
<li><p>In the Gateway Client Connection Profile, use the FQDN when connecting to the discovery peers. In addition
to the peer <code>url</code> attribute, make sure to address host names in the <code>grpcOptions</code> stanzas.</p>
</li>
<li><p>Discovery will return the peer host names as specified in the core.yaml <code>peer.gossip.externalendpoint</code>
(<code>CORE_PEER_GOSSIP_EXTERNALENDPOINT</code> env) parameter. Make sure that this specifies the FQDN for all peers
visible to discovery.</p>
</li>
<li><p>Discovery will return the orderer host names as specified in the configtx.yaml organization <code>OrdererEndpoints</code> stanza.
Make sure that these URLs specify the FQDN for all orderers.</p>
</li>
</ul>
<p>Regarding the general networking, make sure to double-check that the gateway client application has visibility and a
network route to the pods running fabric services in a different namespace. Depending on your Calico configuration
and Kube permissions, it's possible that traffic is getting blocked before it ever reaches the Fabric services.</p>
|
<p>I am trying kubernetes and seem to have hit bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as <strong>wrkr1</strong> or <strong>wrkr2</strong> (machine hostnames on my network) but can successfully resolve hostnames such as google.com or stackoverflow.com.</p>
<p>My cluster is a basic setup with one master and 2 worker nodes.</p>
<p><strong>What works from within the pod:</strong></p>
<ul>
<li><p>curl to <strong>google.com</strong> from pod -- works</p>
</li>
<li><p>curl to another service(kubernetes) from pod -- works</p>
</li>
<li><p>curl to another machine on same LAN via its IP address such as 192.168.x.x -- works</p>
</li>
<li><p>curl to another machine on same LAN via its hostname such as wrkr1 -- does not work</p>
</li>
</ul>
<p><strong>What works from the node hosting pod:</strong></p>
<ul>
<li>curl to google.com --works</li>
<li>curl to another machine on same LAN via
its IP address such as 192.168.x.x -- works</li>
<li>curl to another machine
on same LAN via its hostname such as wrkr1 -- works.</li>
</ul>
<blockquote>
<p>Note: the pod cidr is completely different from the IP range used in
LAN</p>
</blockquote>
<p>the node contains a hosts file with entry corresponding to wrkr1's IP address (although I've checked node is able to resolve hostname without it also but I read somewhere that a pod inherits its nodes DNS resolution so I've kept the entry)</p>
<p>Kubernetes Version: <strong>1.19.14</strong></p>
<p>Ubuntu Version: <strong>18.04 LTS</strong></p>
<p>Need help as to whether this is normal behavior and what can be done if I want pod to be able to resolve hostnames on local LAN as well?</p>
| <h2>What happens</h2>
<blockquote>
<p>Need help as to whether this is normal behavior</p>
</blockquote>
<p>This is normal behaviour: there is no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster. It simply doesn't know what happens on your host, in particular the entries in <code>/etc/hosts</code>, because pods don't have access to this file.</p>
<blockquote>
<p>I read somewhere that a pod inherits its nodes DNS resolution so I've
kept the entry</p>
</blockquote>
<p>This is the point where a tricky thing happens. <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer">There are four available</a> <code>DNS policies</code>, which are applied per pod. We will take a look at the two of them that are most commonly used:</p>
<ul>
<li>"<strong>Default</strong>": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.</li>
<li>"<strong>ClusterFirst</strong>": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured</li>
</ul>
<p>The trickiest ever part is this (from the same link above):</p>
<blockquote>
<p>Note: "Default" is not the default DNS policy. If dnsPolicy is not
explicitly specified, then "ClusterFirst" is used.</p>
</blockquote>
<p>That means that all pods that do not have a <code>DNS policy</code> set will run with <code>ClusterFirst</code>, and they won't be able to see <code>/etc/resolv.conf</code> on the host. I tried changing this to <code>Default</code> and indeed, it could then resolve everything the host can; however, internal resolving stopped working, so it's not an option.</p>
<p>For example, the <code>coredns</code> deployment runs with the <code>Default</code> dnsPolicy, which allows <code>coredns</code> to resolve hosts.</p>
<h2>How this can be resolved</h2>
<p><strong>1. Add <code>local</code> domain to <code>coreDNS</code></strong></p>
<p>This requires adding an <code>A</code> record per host. Here's a part of the edited coredns configmap:</p>
<p>This line should be within the <code>.:53 {</code> block:</p>
<pre><code>file /etc/coredns/local.record local
</code></pre>
<p>This part goes right after the block above ends (the SOA information was taken from the example; it doesn't make any difference here):</p>
<pre><code>local.record: |
local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
wrkr1. IN A 172.10.10.10
wrkr2. IN A 172.11.11.11
</code></pre>
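<p>Putting the two fragments together, the edited ConfigMap might look roughly like this. Note this is only a sketch: the default server block varies between Kubernetes versions, so only the <code>file</code> line and the <code>local.record</code> key should be treated as the actual additions:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        file /etc/coredns/local.record local
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  local.record: |
    local. IN SOA sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
    wrkr1. IN A 172.10.10.10
    wrkr2. IN A 172.11.11.11
</code></pre>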
<p>Then the <code>coreDNS</code> deployment should be edited to include this file:</p>
<pre><code>$ kubectl edit deploy coredns -n kube-system
volumes:
- configMap:
defaultMode: 420
items:
- key: Corefile
path: Corefile
- key: local.record # 1st line to add
path: local.record # 2nd line to add
name: coredns
</code></pre>
<p>And restart <code>coreDNS</code> deployment:</p>
<pre><code>$ kubectl rollout restart deploy coredns -n kube-system
</code></pre>
<p>Just in case, check if the <code>coredns</code> pods are <code>running and ready</code>:</p>
<pre><code>$ kubectl get pods -A | grep coredns
kube-system coredns-6ddbbfd76-mk2wv 1/1 Running 0 4h46m
kube-system coredns-6ddbbfd76-ngrmq 1/1 Running 0 4h46m
</code></pre>
<p>If everything's done correctly, now newly created pods will be able to resolve hosts by their names. <a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">Please find an example in coredns documentation</a></p>
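<p>A quick way to verify is to resolve one of the names from a throwaway pod (assuming the record names used above; <code>busybox:1.28</code> is just a convenient image that ships <code>nslookup</code>):</p>
<pre><code>$ kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup wrkr1
</code></pre>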
<p><strong>2. Set up DNS server in the network</strong></p>
<p>While <code>avahi</code> looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from <code>coredns</code> to <code>avahi</code>, but it is possible to set up a proper DNS server in the network and this way have everything resolved.</p>
<p><strong>3. Deploy <code>avahi</code> to kubernetes cluster</strong></p>
<p>There's a ready image with <code>avahi</code> <a href="https://hub.docker.com/r/flungo/avahi" rel="nofollow noreferrer">here</a>. If it's deployed into the cluster with <code>dnsPolicy</code> set to <code>ClusterFirstWithHostNet</code> and, most importantly, <code>hostNetwork: true</code>, it will be able to use the host's network adapter to discover all available hosts within the network.</p>
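<p>A rough sketch of the pod-level settings this requires (the image name comes from the link above; everything else is illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: avahi
spec:
  selector:
    matchLabels:
      app: avahi
  template:
    metadata:
      labels:
        app: avahi
    spec:
      hostNetwork: true                  # use the node's network adapter
      dnsPolicy: ClusterFirstWithHostNet # keep cluster DNS working despite hostNetwork
      containers:
      - name: avahi
        image: flungo/avahi
</code></pre>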
<h2>Useful links:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy" rel="nofollow noreferrer">Pods DNS policy</a></li>
<li><a href="https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/" rel="nofollow noreferrer">Custom DNS entries for kubernetes</a></li>
</ul>
|
<p>I'm trying to add ingress to my nginx container.</p>
<p>The following ingress template gives me "parse error (&lt;&gt;/ingress.yaml:71: unexpected EOF)". I went through it trying to mark possible missing end statements, but even adding an arbitrary end at the end of the file didn't fix it. I am out of ideas as to what causes this EOF.</p>
<p>So the question is the general: What causes "unexpected EOF" in the file?</p>
<pre class="lang-yaml prettyprint-override"><code>{{- if .Values.web.ingress.enabled }}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: {{ .Release.Name }}-proxy-ingress
labels:
tier: intelowl
component: proxy
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.labels }}
{{ toYaml . | indent 4 }}
{{- end }} # {{- with .Values.labels }}
{{- if .Values.web.ingress.annotations }}
annotations:
{{- with .Values.web.ingress.annotations }}
{{ toYaml . | indent 4 }}
{{- end }} # {{- with .Values.web.ingress.annotations }}
{{- end }} # {{- if .Values.web.ingress.annotations }}
spec:
{{- if .Values.web.ingress.tls.enabled }}
tls:
- hosts:
- {{ .Values.web.ingress.host }}
secretName: {{ .Values.web.ingress.tls.secretName }}
{{- end }} # {{- if .Values.web.ingress.tls.enabled }}
rules:
- http:
paths:
{{- range .Values.web.ingress.precedingPaths }}
- path: {{ .path }}
backend:
service:
name: {{ .serviceName }}
port:
number: {{ .servicePort }}
{{- end }} # {{- range .Values.web.ingress.precedingPaths }}
- backend:
service:
name: {{ .Release.Name }}-proxy
port:
number: {{ ternary 443 80 .Values.web.ingress.tls.enabled }}
{{- if .Values.web.ingress.path }}
path: {{ .Values.web.ingress.path }}
{{- end }} # {{- if .Values.web.ingress.path }}
{{- range .Values.web.ingress.succeedingPaths }}
- path: {{ .path }}
backend:
service:
name: {{ .serviceName }}
port:
number: {{ .servicePort }}
{{- end }} # {{- range .Values.web.ingress.succeedingPaths }}
{{- if .Values.web.ingress.host }}
host: {{ .Values.web.ingress.host }}
{{- end }} # {{- if .Values.web.ingress.host }}
{{- end }} # {{- if .Values.web.ingress.enabled }}
</code></pre>
| <p>Your file is generally structured like so:</p>
<pre class="lang-none prettyprint-override"><code>{{- if .Values.someCondition }}
...
{{- end }} # {{- if .Values.someCondition }}
</code></pre>
<p>However, the Go <code>text/template</code> engine runs before any YAML parsing happens, so the trailing <code># {{- if ... }}</code> is <em>not</em> a comment to it. Each such line contains an <code>if</code> action, the matching <code>end</code>, and then a second, unterminated <code>if</code>.</p>
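<p>This is easy to reproduce with Go's <code>text/template</code> directly (a minimal sketch outside of Helm):</p>

```go
package main

import (
	"fmt"
	"text/template"
)

// parseBroken feeds text/template the same shape as the failing lines:
// the "{{- if }}" inside the trailing YAML comment opens a second,
// unclosed if action, so parsing fails.
func parseBroken() error {
	src := "{{- if .X }}\nvalue\n{{- end }} # {{- if .X }}\n"
	_, err := template.New("demo").Parse(src)
	return err
}

func main() {
	// Prints a parse error mentioning an unexpected EOF,
	// just like the Helm error in the question.
	fmt.Println(parseBroken())
}
```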
<p>The <code>text/template</code> language has its own <code>{{/* comment */}}</code> syntax, and in principle you could use this instead:</p>
<pre class="lang-none prettyprint-override"><code>{{- if .Values.someCondition }}
...
{{- end }}{{/* if .Values.someCondition */}}
</code></pre>
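<p>As a side note, you can surface this kind of parse error without installing anything by rendering the chart locally (the chart path here is a placeholder):</p>
<pre><code>$ helm template ./my-chart --debug
</code></pre>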
<p>Beyond the extra <code>if</code>s hidden in those comments, the file you show seems to have the right number of <code>{{ end }}</code>s.</p>
<p>I'd probably avoid this style myself. Usually these condition blocks are fairly short; you could break the template into multiple <code>define</code> named templates if that helps.</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
labels:
tier: intelowl
et: cetera
{{- include "more-labels" . | indent 4 }}
{{- include "ingress-annotations" . | indent 2 }}
{{- define "more-labels" -}}
{{ with .Values.labels }}{{ toYaml . }}{{ end }}
{{- end -}}
{{- define "ingress-annotations" -}}
{{- with .Values.web.ingress.annotations }}
annotations:
{{ toYaml . | indent 2 }}
{{- end -}}
{{- end -}}
</code></pre>
<p>Especially for labels you might find that some of the values will be reused in all of your objects, and so including a template in the <code>_helpers.tpl</code> file to produce the common values will reduce some of the repetitiveness.</p>
|
<p>My goal is to setup an <strong>ingress nginx</strong> within my kubernetes cluster. The deployment seems to work as I guess, the logs are looking good.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create--1-n5h28 0/1 Completed 0 4d8h
pod/ingress-nginx-admission-patch--1-czsfn 0/1 Completed 0 4d8h
pod/ingress-nginx-controller-7f7f8685b8-xvldg 1/1 Running 0 10m
pod/web-app-59555dbf95-slqc4 1/1 Running 0 20m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.97.224.250 <none> 80:32666/TCP,443:31657/TCP 4d8h
service/ingress-nginx-controller-admission ClusterIP 10.100.7.97 <none> 443/TCP 4d8h
service/web-app-internal ClusterIP 10.103.22.145 <none> 80/TCP 20m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 4d8h
deployment.apps/web-app 1/1 1 1 20m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-55b65fcbff 0 0 0 22h
replicaset.apps/ingress-nginx-controller-5f7d486f4d 0 0 0 43m
replicaset.apps/ingress-nginx-controller-76bdf9b5f6 0 0 0 3h47m
replicaset.apps/ingress-nginx-controller-7d7489d947 0 0 0 44m
replicaset.apps/ingress-nginx-controller-7f7f8685b8 1 1 1 10m
replicaset.apps/ingress-nginx-controller-7fdc4896dd 0 0 0 22h
replicaset.apps/ingress-nginx-controller-86668dc4fc 0 0 0 22h
replicaset.apps/ingress-nginx-controller-8cf5559f8 0 0 0 4d8h
replicaset.apps/ingress-nginx-controller-f58499759 0 0 0 62m
replicaset.apps/web-app-59555dbf95 1 1 1 20m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 2s 4d8h
job.batch/ingress-nginx-admission-patch 1/1 7s 4d8h
</code></pre>
<p>I've already experienced some issues, stated in <a href="https://stackoverflow.com/questions/69167854/kubernetes-ingress-nginx-controller-internal-error-occurred-failed-calling-web">this question</a>.
The deployment I use is the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
namespace: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app: web-app
template:
metadata:
labels:
app: web-app
spec:
containers:
- name: web-app
image: registry/web-app:latest
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: web-app-internal
namespace: ingress-nginx
spec:
selector:
app: web-app
ports:
- port: 80
targetPort: 80
protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/enable-access-log: "true"
name: web-app-ingress
namespace: ingress-nginx
labels:
name: web-app-ingress
spec:
rules:
- host: web.app
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: web-app-internal
port:
number: 80
</code></pre>
<p>First of all, let me explain that I'm trying to make the ingress accessible via <code>http</code> internally for the moment. When this is working, the next step will be to establish an <code>ssl</code>-certified connection.</p>
<p>Last but not least, a few more relevant data:</p>
<ul>
<li>the host defined within the ingress rule resolves to the ip of the externally hosted load-balancer outside the cluster within my own network</li>
<li><code>curl -v http://web.app</code> returns the following output:</li>
</ul>
<pre><code>* Trying x.x.x.x...
* TCP_NODELAY set
* Connected to web.app (x.x.x.x) port 80 (#0)
> GET / HTTP/1.1
> Host: web.app
> User-Agent: curl/7.64.1
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 400 Bad Request
<
Client sent an HTTP request to an HTTPS server.
* Closing connection 0
</code></pre>
<p>I'm a newbie to all things k8s-related; any guess what I'm missing?</p>
<p>Many thanks in advance!</p>
| <p>Solved the problem myself: it was an incorrect nginx load-balancer setup. It did indeed pass the <strong>443</strong> and <strong>80</strong> traffic, but not to the port that the <strong>ingress-nginx-controller</strong> service exposed for <strong>http</strong> on my worker nodes. After fixing this, everything works fine.</p>
|
<p>Can anyone explain to me why running my load test on one pod gives better TPS than when scaling to two pods?</p>
<p>I expected that when running the same scenario with the same configuration on 2 pods the TPS will be increased but this is not what happened.</p>
<p>Is it normal behaviour that scaling horizontally does not improve the total number of requests?</p>
<p>Please note that I didn't get any failures on one pod; I just scaled to 2 for high availability.</p>
| <p>If you are using any sort of database, this is the place to optimize to increase TPS. Here is why:</p>
<p>Assume your database is running as fast as possible: the pod can handle the network connections and query against the database, but the database is slow because its CPU/memory/TPS are already maxed out; increasing the amount of TPS from pod 1 will not increase your TPS because the DB TPS is the limiting factor.</p>
<p>Now if you add pod 2, your DB still already has maxed out CPU/Memory/TPS, but now it has to use some of the CPU it was using before to complete transactions, to manage DB connections from pod 2 that have to be queued until the CPU/Memory of the DB can handle them; ultimately lowering TPS of the DB and TPS of the entire app.</p>
<p>TLDR: If your DB is maxed, adding pods to make transactions against it lowers the TPS because the DB has to use resources that were actively handling transactions (your limiting factor for TPS) to handle queued DB connections instead.</p>
<p>To fix this, vertically scale your write DBs, horizontally scale your read DBs, optimize DB transactions with indexes or better queries, use PgBouncer to manage DB connections, and make sure your DB transaction type is the most efficient for your use case.</p>
|
<p>I work with a dev shop as a contractor & I'm pitching Kubernetes to my CTO.</p>
<p>It's on the premise that they can deploy multiple websites and abstract away multi-server management.</p>
<p>However, the one stipulation is that in this new cluster of resources they would be able to point multiple different domains at it and still be able to route requests accordingly.</p>
<p>So my question is: how can I manage multiple domains on a single Kubernetes cluster?</p>
<p>I don't know if this sort of thing is possible in Kubernetes, any help would be appreciated.</p>
| <p>You can use an ingress with multiple domain names:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p>In the <code>rules</code> section, you can define multiple hosts like:</p>
<pre><code>rules:
- host: host1.com
http:
paths:
...
- host: host2.com
http:
paths:
...
</code></pre>
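<p>For illustration, a complete two-domain Ingress might look like this sketch (the service names are placeholders; with an NGINX ingress controller, each host gets its own virtual server):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-domain
spec:
  ingressClassName: nginx
  rules:
  - host: host1.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site1-svc   # placeholder: service backing host1.com
            port:
              number: 80
  - host: host2.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: site2-svc   # placeholder: service backing host2.com
            port:
              number: 80
</code></pre>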
|
<p>Can anyone explain to me why running my load test on one pod gives better TPS than when scaling to two pods?</p>
<p>I expected that when running the same scenario with the same configuration on 2 pods the TPS will be increased but this is not what happened.</p>
<p>Is it normal behaviour that scaling horizontally does not improve the total number of requests?</p>
<p>Please note that I didn't get any failures on one pod; I just scaled to 2 for high availability.</p>
| <p>It really depends on what your pod does, as @spencer mentioned. Besides that, there are still many factors that will impact your expectation:</p>
<ol>
<li>Does your pod have leader election?</li>
<li>QPS/Burst settings (for a controller, since I have no idea what your pod does).</li>
<li>...</li>
</ol>
<p>Based on your case, I guess your pod itself is not the TPS-limiting factor.</p>
<p>Basically, increasing the pod replica count should at least not lower the TPS.</p>
|
<p>I am following <a href="https://konghq.com/blog/kubernetes-ingress-api-gateway/" rel="nofollow noreferrer">this procedure</a> to deploy konghq in my Kubernetes.</p>
<p>The key installation command there is this:</p>
<pre><code>$ kubectl create -f https://konghq.com/blog/kubernetes-ingress-api-gateway/
</code></pre>
<p>It works fine when I create one single konghq deployment, but it doesn't work for two deployments. What would I need to do? I changed the namespace but realized that a lot of the resources are created outside of the namespace.</p>
| <p>There is no sense in creating 2 ingress controllers under 1 namespace. If you would like to have multiple ingress rules under 1 namespace, you are welcome to create 1 Ingress controller and multiple rules.</p>
<p>Consider creating 2 ingress controllers in case you have multiple namespaces.
For example, check <a href="https://github.com/Kong/kubernetes-ingress-controller/issues/960#" rel="nofollow noreferrer">Multiple Ingress in different namespaces
</a></p>
<blockquote>
<p>I am trying to setup 2 Ingress controllers in my k8s cluster under 2
namespaces. Reason for this setup: Need one to be public which has
route to only one service that we want to expose. Need another one to
be private and has routes to all services including the internal
services</p>
</blockquote>
<p>To dig deeper into your issue, it would be nice to have logs, errors, etc.</p>
<p>In case you still DO need 2 controllers, I would recommend you change the namespace resource limits (to avoid issues) and then try to deploy again.</p>
<p>To check: <a href="https://discuss.kubernetes.io/t/multiple-kong-ingress-controller-or-just-one-to-different-environments/5778" rel="nofollow noreferrer">Multiple kong ingress controller or just one to different environments</a></p>
|
<p>I have a persistent volume that has read/write access on one specific node.</p>
<p>How can I mount this persistent volume on all other nodes as read only?</p>
<p>Here is what I was thinking, in my pvc.yaml:</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# This name uniquely identifies the PVC. This is used in deployment.
name: public-pv-claim
namespace: default
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
resources:
# This is the request for storage. Should be available in the cluster.
requests:
storage: 1Gi
</code></pre>
<p>and in the specific node</p>
<pre><code> ...
volumes:
- name: public
# This volume is based on PVC
persistentVolumeClaim:
# Name of the PVC created earlier
claimName: public-pv-claim
containers:
- name: specific
# Volume mounts for this container
volumeMounts:
# Volume 'public' is mounted to path '/public'
- name: data
mountPath: "/public"
...
</code></pre>
<p>and for pods of other nodes:</p>
<pre><code> ...
volumes:
- name: public
# This volume is based on PVC
persistentVolumeClaim:
# Name of the PVC created earlier
claimName: public-pv-claim
containers:
- name: other
...
volumeMounts:
- name: public
mountPath: "/public"
readOnly: true
...
</code></pre>
| <p>The solution I found was to give the persistent volume the <code>ReadWriteMany</code> access mode and then mount it with <code>readOnly</code> set to <code>true</code> in the definition of the mounted volumes. Here are the .yaml files.</p>
<p>The persistent volume claim... pvc.yaml:</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# This name uniquely identifies the PVC. This is used in deployment.
name: public-pv-claim
namespace: default
spec:
accessModes:
- ReadWriteMany # All nodes have read/write access to the volume
resources:
# This is the request for storage. Should be available in the cluster.
requests:
storage: 1Gi
</code></pre>
<p>and in the specific node that should be allowed to write to the volume container_write_access_to_pv.yaml:</p>
<pre><code> ...
volumes:
- name: public
# This volume is based on PVC
persistentVolumeClaim:
# Name of the PVC created earlier
claimName: public-pv-claim
containers:
- name: specific
# Volume mounts for this container
volumeMounts:
# Volume is mounted to path '/public'
      - name: public
mountPath: "/public"
...
</code></pre>
<p>and for pods on other nodes that should have read-only access:
container_with_read_only_access_to_pv.yaml:</p>
<pre><code> ...
volumes:
- name: public
# This volume is based on PVC
persistentVolumeClaim:
# Name of the PVC created earlier
claimName: public-pv-claim
containers:
- name: other
...
volumeMounts:
- name: public
# Volume is mounted to path '/public' in read-only mode
mountPath: "/public"
readOnly: true
...
</code></pre>
|