<p>I used a Helm chart to install PostgreSQL in Kubernetes.</p>
<p>I cannot connect to Postgres from my app server. When I try to connect to the database, the server throws a <code>HostNotFoundError</code>.</p>
<p>Does anyone have experience installing a database on Kubernetes? Please help me.</p>
<p>Thanks!</p>
<pre><code>kubectl get pod
NAME READY STATUS RESTARTS AGE
apollo-b47f886f-g9c8f 1/1 Running 0 2d19h
jupiter-7799cf84db-zfm9q 1/1 Running 0 5m23s
jupiter-migration-kkmqz 0/1 Error 0 3h22m
nginx-ingress-ingress-nginx-controller-55d6b7dc57-nn8q6 1/1 Running 0 20d
postgres-postgresql-0 1/1 Running 0 18h
</code></pre>
<pre><code>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apollo ClusterIP 10.92.8.51 <none> 80/TCP 20d
jupiter ClusterIP 10.92.7.8 <none> 80/TCP 2d14h
kubernetes ClusterIP 10.92.0.1 <none> 443/TCP 20d
nginx-ingress-ingress-nginx-controller LoadBalancer 10.92.11.40 External IP 80:31096/TCP,443:31324/TCP 20d
nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.92.14.117 <none> 443/TCP 20d
postgres-postgresql ClusterIP 10.92.3.97 <none> 5432/TCP 18h
postgres-postgresql-headless ClusterIP None <none> 5432/TCP
</code></pre>
<pre><code>sequelize config
module.exports = {
  production: {
    username: 'postgres',
    password: 'mypassword',
    database: 'jupiter',
    host: 'postgres-postgresql',
    dialect: 'postgres',
  },
};
</code></pre>
<pre><code>Server logs
error HostNotFoundError [SequelizeHostNotFoundError]: getaddrinfo ENOTFOUND postgres-postgresql
at Client._connectionCallback (/app/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:175:24)
at Client._handleErrorWhileConnecting (/app/node_modules/pg/lib/client.js:305:19)
at Client._handleErrorEvent (/app/node_modules/pg/lib/client.js:315:19)
at Connection.emit (node:events:394:28)
at Socket.reportStreamError (/app/node_modules/pg/lib/connection.js:52:12)
at Socket.emit (node:events:394:28)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
parent: Error: getaddrinfo ENOTFOUND postgres-postgresql
06:02:30 0|jupiter |
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'postgres-postgresql\n'
},
original: Error: getaddrinfo ENOTFOUND postgres-postgresql
06:02:30 0|jupiter |
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'postgres-postgresql\n'
}
}
</code></pre>
| <p>If what @AndD pointed out is cleared up and it still doesn't work:</p>
<ol>
<li><p>Check the CoreDNS pods under <code>kube-system</code> for any issue.</p>
</li>
<li><p>Try using the FQDN (fully qualified domain name): <code>postgres-postgresql.[YOURNAMESPACE].svc.cluster.local</code></p>
</li>
<li><p>Try the <code>postgres-postgresql</code> service ClusterIP directly.</p>
</li>
<li><p>Try <code>postgres-postgresql-0</code> in case there is a problem with your Service networking.</p>
</li>
</ol>
<p>Note also that the hostname in the logged error is <code>'postgres-postgresql\n'</code>, with a trailing newline, so it is worth checking the configured host value for a stray newline as well.</p>
|
<p>I have tried desperately to apply a simple pod specification without any luck, even with this previous answer: <a href="https://stackoverflow.com/questions/48534980/mount-local-directory-into-pod-in-minikube">Mount local directory into pod in minikube</a></p>
<p>The yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
  - image: httpd
    name: hostpath-pod
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /tmp/data/
</code></pre>
<p>I started minikube cluster with: <code>minikube start --mount-string="/tmp:/tmp" --mount</code> and there are 3 files in <code>/tmp/data</code>:</p>
<pre class="lang-sh prettyprint-override"><code>ls /tmp/data/
file2.txt file3.txt hello
</code></pre>
<p>However, this is what I get when I do <code>kubectl describe pods</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m26s default-scheduler Successfully assigned default/hostpath-pod to minikube
Normal Pulled 113s kubelet, minikube Successfully pulled image "httpd" in 32.404370125s
Normal Pulled 108s kubelet, minikube Successfully pulled image "httpd" in 3.99427232s
Normal Pulled 85s kubelet, minikube Successfully pulled image "httpd" in 3.906807762s
Normal Pulling 58s (x4 over 2m25s) kubelet, minikube Pulling image "httpd"
Normal Created 54s (x4 over 112s) kubelet, minikube Created container hostpath-pod
Normal Pulled 54s kubelet, minikube Successfully pulled image "httpd" in 4.364295872s
Warning Failed 53s (x4 over 112s) kubelet, minikube Error: failed to start container "hostpath-pod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/tmp/data" to rootfs at "/data" caused: stat /tmp/data: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning BackOff 14s (x6 over 103s) kubelet, minikube Back-off restarting failed container
</code></pre>
<p>Not sure what I'm doing wrong here. If it helps I'm using minikube version <code>v1.23.2</code> and this was the output when I started minikube:</p>
<pre><code>😄 minikube v1.23.2 on Darwin 11.5.2
▪ KUBECONFIG=/Users/sachinthaka/.kube/config-ds-dev:/Users/sachinthaka/.kube/config-ds-prod:/Users/sachinthaka/.kube/config-ds-dev-cn:/Users/sachinthaka/.kube/config-ds-prod-cn
✨ Using the hyperkit driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing hyperkit VM for "minikube" ...
❗ This VM is having trouble accessing https://k8s.gcr.io
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳 Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
📁 Creating mount /tmp:/tmp ...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
❗ /usr/local/bin/kubectl is version 1.18.0, which may have incompatibilites with Kubernetes 1.22.2.
▪ Want kubectl v1.22.2? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
</code></pre>
<p>Anything I can try? :'(</p>
<h2>Update 1</h2>
<ul>
<li>Changing from <code>minikube</code> to <code>microk8s</code> helped. But I'm still not seeing anything inside <code>/data/</code> in the pod.</li>
<li>Also changing from <code>/tmp/</code> to a different folder helped in <code>minikube</code>. Something to do with MacOs.</li>
</ul>
| <p><strong>OP has said, that problem is solved:</strong></p>
<blockquote>
<p>changing from /tmp/ to a different folder helped in minikube. Something to do with MacOs
For some reason minikube doesn't like <code>/tmp/</code></p>
</blockquote>
<p><strong>An explanation of this problem:</strong>
You cannot mount <code>/tmp</code> to <code>/tmp</code>. The problem isn't with macOS, but with the way the mount is done. I tried to recreate this problem in several ways. With Docker I got a very interesting error:</p>
<pre><code>docker: Error response from daemon: Duplicate mount point: /tmp.
</code></pre>
<p>This error makes it clear what the problem is. If you mount your directory elsewhere, everything should work (which was confirmed):</p>
<blockquote>
<p>Do I understand correctly, that when you changed the mount point to a different folder, does it work?</p>
</blockquote>
<blockquote>
<p>that is correct. For some reason minikube doesn't like <code>/tmp/</code></p>
</blockquote>
<p>I know you are using hyperkit instead of Docker as in my case, but the only difference will be in the message you get on the screen. In the case of Docker, it is very clear.</p>
|
<p>Pod lifecycle is managed by the kubelet in the data plane.</p>
<p>As per the definition: if the liveness probe fails, the kubelet kills the container.</p>
<p>A Pod is just one or more containers sharing a dedicated network namespace &amp; IPC namespace with a sandbox (pause) container.</p>
<hr />
<p>Say the Pod is a single-app-container Pod; then, upon liveness failure:</p>
<ul>
<li>Does kubelet kill the Pod?</li>
</ul>
<p>or</p>
<ul>
<li>Does kubelet kill the container (only) within the Pod?</li>
</ul>
| <p>The kubelet uses liveness probes to know when to restart a container (<strong>NOT</strong> the entire Pod). If the liveness probe fails, the kubelet kills the container, and then the container may be restarted, however it depends on its <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restart policy</a>.</p>
<hr />
<p>I've created a simple example to demonstrate how it works.</p>
<p>First, I've created an <code>app-1</code> Pod with two containers (<code>web</code> and <code>db</code>).
The <code>web</code> container has a liveness probe configured, which always fails because the <code>/healthz</code> path is not configured.</p>
<pre><code>$ cat app-1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: app-1
  name: app-1
spec:
  containers:
  - image: nginx
    name: web
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
  - image: postgres
    name: db
    env:
    - name: POSTGRES_PASSWORD
      value: example
</code></pre>
<p>After applying the above manifest and waiting some time, we can describe the <code>app-1</code> Pod to check that only the <code>web</code> container has been restarted and the <code>db</code> container is running without interruption:<br />
<strong>NOTE:</strong> I only provided important information from the <code>kubectl describe pod app-1</code> command, not the entire output.</p>
<pre><code>$ kubectl apply -f app-1.yml
pod/app-1 created
$ kubectl describe pod app-1
Name: app-1
...
Containers:
web:
...
Restart Count: 4 <--- Note that the "web" container was restarted 4 times
Liveness: http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
...
db:
...
Restart Count: 0 <--- Note that the "db" container works fine
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
...
Normal Killing 78s (x2 over 108s) kubelet Container web failed liveness probe, will be restarted
...
</code></pre>
<p>We can connect to the <code>db</code> container to see if it is running:<br />
<strong>NOTE:</strong> We can use the <code>db</code> container even when the <code>web</code> container is restarting.</p>
<pre><code>$ kubectl exec -it app-1 -c db -- bash
root@app-1:/#
</code></pre>
<p>In contrast, after connecting to the <code>web</code> container, we can observe that the liveness probe restarts this container:</p>
<pre><code>$ kubectl exec -it app-1 -c web -- bash
root@app-1:/# command terminated with exit code 137
</code></pre>
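<p>Whether a killed container comes back depends on the Pod's <code>restartPolicy</code>, which applies to all containers in the Pod. A minimal sketch (names and port are illustrative):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-2
spec:
  restartPolicy: Never    # Always (default) | OnFailure | Never
  containers:
  - image: nginx
    name: web
    livenessProbe:
      httpGet:
        path: /healthz    # not served, so the probe will fail
        port: 8080
```

<p>With <code>restartPolicy: Never</code>, the kubelet still kills the container on liveness failure, but the container is not restarted and the Pod eventually ends up in the <code>Failed</code> phase.</p>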
|
<p>I have a role with full privileges on EKS, EC2, and IAM, which is attached to an EC2 instance.</p>
<p>I am trying to access my EKS cluster from this EC2 instance. I also added the EC2 instance ARN, as below, to the trust relationship of the role the instance assumes. However, I still get the error below when trying to access the cluster using <code>kubectl</code> from the CLI inside the EC2 instance.</p>
<p>I tried the commands below to get the kubeconfig written to the instance home directory, from which I execute these commands.</p>
<pre><code>aws sts get-caller-identity
$ aws eks update-kubeconfig --name eks-cluster-name --region aws-region --role-arn arn:aws:iam::XXXXXXXXXXXX:role/testrole
</code></pre>
<p>Error I'm getting:</p>
<pre><code>error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::769379794363:assumed-role/dev-server-role/i-016d7738c9cb84b96 is not authorized to perform: sts:AssumeRole on resource xxx
</code></pre>
| <p>Community wiki answer for better visibility.</p>
<p>The problem is solved by taking a good tip from the comment:</p>
<blockquote>
<p>Don't specify <code>role-arn</code> if you want it to use the instance profile.</p>
</blockquote>
<p>OP has confirmed:</p>
<blockquote>
<p>thanks @jordanm that helped</p>
</blockquote>
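<p>In other words, the command from the question can be run without the <code>--role-arn</code> flag, so the AWS CLI uses the instance profile credentials directly. A sketch (cluster name and region are the placeholders from the question):</p>

```shell
# Compose the kubeconfig update without --role-arn; with no explicit role,
# the AWS CLI falls back to the EC2 instance profile credentials.
CLUSTER=eks-cluster-name
REGION=aws-region
CMD="aws eks update-kubeconfig --name ${CLUSTER} --region ${REGION}"
echo "$CMD"
# Run it on the instance, then verify access with: kubectl get nodes
```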
|
<p>I am trying to host my own Nextcloud server using Kubernetes.</p>
<p>I want my Nextcloud server to be accessed from <code>http://localhost:32738/nextcloud</code> but every time I access that URL, it gets redirected to <code>http://localhost:32738/login</code> and gives me <code>404 Not Found</code>.</p>
<p>If I replace the path with:</p>
<pre><code>path: /
</code></pre>
<p>then, it works without problems on <code>http://localhost:32738/login</code> but as I said, it is not the solution I am looking for. The login page should be accessed from <code>http://localhost:32738/nextcloud/login</code>.</p>
<p>Going to <code>http://127.0.0.1:32738/nextcloud/</code> does work for the initial setup but after that it becomes inaccessible as it always redirects to:</p>
<pre><code>http://127.0.0.1:32738/apps/dashboard/
</code></pre>
<p>and not to:</p>
<pre><code>http://127.0.0.1:32738/nextcloud/apps/dashboard/
</code></pre>
<p>This is my yaml:</p>
<pre><code>#Nextcloud-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-server
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-server-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-server-pod
    spec:
      containers:
      - name: nextcloud
        image: nextcloud:22.2.0-apache
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-name
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-password
        - name: POSTGRES_HOST
          value: nextcloud-database:5432
        volumeMounts:
        - name: server-storage
          mountPath: /var/www/html
          subPath: server-data
      volumes:
      - name: server-storage
        persistentVolumeClaim:
          claimName: nextcloud
---
#Nextcloud-Serv
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-server
  labels:
    app: nextcloud
spec:
  selector:
    pod-label: nextcloud-server-pod
  ports:
  - port: 80
    protocol: TCP
    name: nextcloud-server
---
#Database-Dep
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-database
  labels:
    app: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      pod-label: nextcloud-database-pod
  template:
    metadata:
      labels:
        pod-label: nextcloud-database-pod
    spec:
      containers:
      - name: postgresql
        image: postgres:13.4
        env:
        - name: POSTGRES_DATABASE
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-name
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-password
        - name: POSTGRES_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: db-rootpassword
        - name: PGDATA
          value: /var/lib/postgresql/data/
        volumeMounts:
        - name: database-storage
          mountPath: /var/lib/postgresql/data/
          subPath: data
      volumes:
      - name: database-storage
        persistentVolumeClaim:
          claimName: nextcloud
---
#Database-Serv
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-database
  labels:
    app: nextcloud
spec:
  selector:
    pod-label: nextcloud-database-pod
  ports:
  - port: 5432
    protocol: TCP
    name: nextcloud-database
---
#PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-pv
  labels:
    type: local
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
---
#PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
#Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: nextcloud-server
            port:
              number: 80
        pathType: Prefix
        path: /nextcloud(/.*)
---
#Secret
apiVersion: v1
kind: Secret
metadata:
  name: nextcloud
  labels:
    app: nextcloud
immutable: true
stringData:
  db-name: nextcloud
  db-username: nextcloud
  db-password: changeme
  db-rootpassword: longpassword
  username: admin
  password: changeme
</code></pre>
<p>ingress-nginx was installed with:</p>
<pre><code>helm install nginx ingress-nginx/ingress-nginx
</code></pre>
<p>Please tell me if you want me to supply more information.</p>
| <p>In your case there is a difference between the URL exposed by the backend service and the path specified in the Ingress rule. That's why you get an error.</p>
<p>To avoid that, you can use a rewrite rule.</p>
<p>With a rewrite rule, the matched Ingress path is rewritten to the value you provide.
For example, the annotation <code>nginx.ingress.kubernetes.io/rewrite-target: /login</code> will rewrite the URL <code>/nextcloud/login</code> to <code>/login</code> before sending the request to the backend service.</p>
<p>But:</p>
<blockquote>
<p>Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions.</p>
</blockquote>
<p>On <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">this documentation</a> you can find following example:</p>
<pre><code>$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - backend:
          service:
            name: http-svc
            port:
              number: 80
        path: /something(/|$)(.*)
        pathType: ImplementationSpecific
' | kubectl create -f -
</code></pre>
<blockquote>
<p>In this ingress definition, any characters captured by <code>(.*)</code> will be assigned to the placeholder <code>$2</code>, which is then used as a parameter in the <code>rewrite-target</code> annotation.</p>
</blockquote>
<p>So in your URL you could see the wanted <code>/nextcloud/login</code>, but rewriting will cause the path to change to <code>/login</code> in the Ingress rule and find your backend. I would suggest using one of the following options:</p>
<pre><code>path: /nextcloud(/.*)
</code></pre>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$1
</code></pre>
<p>or</p>
<pre><code>path: /nextcloud/login
</code></pre>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /login
</code></pre>
<p>See also <a href="https://docs.pivotal.io/pks/1-7/nsxt-ingress-rewrite-url.html" rel="nofollow noreferrer">this article</a>.</p>
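<p>For the second option, the relevant Ingress fragment would look roughly like this (untested sketch; the service name is taken from the question):</p>

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /login
spec:
  rules:
  - http:
      paths:
      - path: /nextcloud/login
        pathType: Prefix
        backend:
          service:
            name: nextcloud-server
            port:
              number: 80
```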
|
<p>I use a Kubernetes application with a Service that has a single Endpoint. If I retrieve it using <code>curl</code>, I get something like this:</p>
<pre><code>{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    ...
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "172.16.235.204"
        }
      ],
      ...
    }
  ]
}
</code></pre>
<p>Now under given circumstances, my application needs to change its IP with a curl PATCH. I am using the following code:</p>
<pre><code>KUBE_SERVICE_DIR="/var/run/secrets/kubernetes.io/serviceaccount"
KUBE_TOKEN_FILENAME="${KUBE_SERVICE_DIR}/token"
KUBE_CA_CERT="${KUBE_SERVICE_DIR}/ca.crt"
KUBE_TOKEN=$(cat ${KUBE_TOKEN_FILENAME})
BODY='[{"op": "replace", "path": "subsets/0/addresses/0/ip", "value": "172.16.140.18"}]'
curl -XPATCH --cacert ${KUBE_CA_CERT} -H "Content-Type: application/json-patch+json" -H "Authorization: Bearer ${KUBE_TOKEN}" --data "${BODY}" "https://${KUBERNETES_API_HOST}:${KUBERNETES_API_PORT}/api/v1/namespaces/cf-db/endpoints/cfdb-ccdb"
</code></pre>
<p>As you can see, my patch JSON is trying to apply a replace operation on the path <code>subsets/0/addresses/0/ip</code>, setting the new IP.
But when I run it I get:</p>
<pre><code>{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "the server rejected our request due to an error in our request",
  "reason": "Invalid",
  "details": {
  },
  "code": 422
}
</code></pre>
<p>Can someone help me figure out what's wrong with my code? I tried a lot of variations, also using examples from StackOverflow, but no luck.</p>
| <p>After a lot of trial and error, I found the solution below.</p>
<p>A solution like this is used in Patroni (though there it is written in Python), where the Service has one Endpoint that always references the master node. When a failover or switchover occurs, the Patroni code updates the Service Endpoint. Code like this can be used whenever you have a StatefulSet and you always want the Service to reference the master node, even in a failover scenario.</p>
<pre><code>KUBE_SERVICE_DIR="/var/run/secrets/kubernetes.io/serviceaccount"
KUBE_TOKEN_FILENAME="${KUBE_SERVICE_DIR}/token"
KUBE_CA_CERT="${KUBE_SERVICE_DIR}/ca.crt"
KUBE_TOKEN=$(cat ${KUBE_TOKEN_FILENAME})
KUBERNETES_API_HOST=${KUBERNETES_SERVICE_HOST}
KUBERNETES_API_PORT=${KUBERNETES_SERVICE_PORT}

generatePatchData()
{
  local MASTER_IP=$1
  local MASTER_PORT=$2
  cat <<EOF
{"subsets": [{"addresses": [{"ip": "$MASTER_IP"}], "ports": [{"name": "postgresql", "port": $MASTER_PORT, "protocol": "TCP"}]}]}
EOF
}

patchEndpointIP() {
  local MASTER_IP=$1
  local MASTER_PORT=$2
  curl -XPATCH --cacert ${KUBE_CA_CERT} -H "Content-Type: application/merge-patch+json" -H "Authorization: Bearer ${KUBE_TOKEN}" "https://${KUBERNETES_API_HOST}:${KUBERNETES_API_PORT}/api/v1/namespaces/cf-db/endpoints/cfdb-ccdb" --data "$(generatePatchData $MASTER_IP $MASTER_PORT)"
}

getEndpointIP() {
  curl -sSk -H "Authorization: Bearer ${KUBE_TOKEN}" "https://${KUBERNETES_API_HOST}:${KUBERNETES_API_PORT}/api/v1/namespaces/cf-db/endpoints/cfdb-ccdb"
}

patchEndpointIP "172.16.140.13" "5431"
getEndpointIP
</code></pre>
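<p>For reference, the original <code>json-patch</code> attempt most likely failed with 422 because JSON Patch (RFC 6902) paths must begin with a slash: <code>subsets/0/addresses/0/ip</code> should have been <code>/subsets/0/addresses/0/ip</code>. A sketch of the corrected body (IP value taken from the question):</p>

```shell
# RFC 6902 requires each "path" to start with "/"; without it the API
# server rejects the patch as invalid (HTTP 422).
BODY='[{"op": "replace", "path": "/subsets/0/addresses/0/ip", "value": "172.16.140.18"}]'
echo "$BODY"
```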
|
<p>I am using a manually created Kubernetes Cluster (using <code>kubeadm</code>) deployed on AWS ec2 instances (manually created ec2 instances). I want to use AWS EBS volumes for Kubernetes persistent volume. How can I use AWS EBS volumes within Kubernetes cluster for persistent volumes?</p>
<p>Cluster details:</p>
<ul>
<li><code>kubectl</code> version: 1.19</li>
<li><code>kubeadm</code> version: 1.19</li>
</ul>
| <p>Posted community wiki for better visibility with general solution as there are no further details / logs provided. Feel free to expand it.</p>
<hr />
<p>The official supported way to mount <a href="https://aws.amazon.com/ebs/" rel="nofollow noreferrer">Amazon Elastic Block Store</a> as <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Kubernetes volume</a> on the self-managed Kubernetes cluster running on AWS is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore" rel="nofollow noreferrer"><code>awsElasticBlockStore</code>volume type</a>.</p>
<p>To manage the lifecycle of Amazon EBS volumes on the self-managed Kubernetes cluster running on AWS please install <a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver" rel="nofollow noreferrer">Amazon Elastic Block Store Container Storage Interface Driver</a>.</p>
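<p>Once the CSI driver is installed, dynamic provisioning can be sketched with a <code>StorageClass</code> and a <code>PersistentVolumeClaim</code> along these lines (names and size are illustrative, not from the question):</p>

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com        # the EBS CSI driver
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce                 # an EBS volume attaches to a single node
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
```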
|
<h2>Current Architecture</h2>
<p>I have a microservice (let's call it the publisher) which generates events for a particular resource. There's a web app which shows the updated list of the resource, but this view doesn't talk to the microservice directly. There's a view service in between, which calls the publisher whenever the web app requests the list of resources.<br />
To show the latest updates, the web app uses long polling, which calls the view service, and the view service calls the publisher (the view service does more than that, e.g. collecting data from other sources, but that's probably not relevant here). The publisher also publishes any update on all the resources via pub/sub, which the view service is currently not using.</p>
<h2>Proposed Changes</h2>
<p>To reduce the API calls and make the system more efficient, I am trying to implement websockets instead of the long polling. In theory it works like the web app will subscribe to events from the view service. The view service will listen to the resource update events from the publisher and whenever there's any message on the pubsub topic, it'll send an event to the web app.</p>
<h2>Problem</h2>
<p>The problem I am experiencing with websockets is that the view service is deployed on Kubernetes and can therefore have multiple pods running at the same time. Different instances of the web app can listen for events from those different pods, so it may happen that a pub/sub message is received by pod A while the websocket listener that requires this resource is connected to pod B. If pod A acks the message to discard it because it can't find any event listener, we lose the event and the web app will not have the updated data. If pod A nacks the message so that it can be picked up by another pod that might benefit from it, there may be no pod at all with a websocket listener that can use the message, and the message will keep circulating, blocking the pub/sub queue forever.</p>
<h2>Possible Solutions</h2>
<p>The first solution that came to mind is to create a different subscription for each pod, so every pod receives each event at least once and we won't block the pub/sub queue. The challenge with this approach is that pods can die at any time, leaving their subscriptions abandoned, and after a few weeks I'd be dealing with tons of abandoned subscriptions with overflowing messages.</p>
<p>Another solution is to have a database where the pub/sub messages are stored; the different pods would periodically query it for events, check for a matching listener, and remove handled events. But this doesn't solve the problem of events with no listener. Additionally, I don't want to add a database just for this issue (the current long polling is a much better architecture than that).</p>
<p>The third solution is to implement the websockets inside the publisher; however, this is the least preferred option, as the codebase there is really huge and nobody likes adding new functionality to it.</p>
<p>The final solution is to run just one pod of the view service at all times, but that defeats the purpose of having a microservice and being on Kubernetes. Additionally, it won't be scalable.</p>
<h2>Question</h2>
<p>Is there any recommended way or any other way I can connect the pubsub events to the web app using websockets without adding unnecessary complexity? I would love an example if there's one available anywhere else.</p>
| <p>There is no easy solution. First of all, in the websocket pattern, each pod is responsible for sending events to the web apps connected to it, and therefore for gathering the correct backend events for those web apps. In this design, each pod needs to filter for the messages it must deliver.</p>
<p>The most naive implementation is to duplicate all the messages in every pod (and therefore in every subscription), but that is inefficient and costly (in addition to the time spent discarding all the unneeded messages).</p>
<hr />
<p>We can imagine a more efficient solution: create, on each pod, a list of subscriptions, one per open webapp channel, and set a filter parameter on each subscription. Of course, the publisher needs to add an attribute to each message that the subscription filters can match on.</p>
<p>When the session is over, the subscription must be deleted.</p>
<p>In case of a crash, I propose this pattern: store, in a database, the subscription ID, the filter content (the webapp session), and the ID of the pod in charge of the filtering and delivery. Then detect the pod crash, or run a scheduled pod, to check that all the running pods are registered in the database. If a pod is in the database but not running, delete all its related subscriptions.</p>
<p>If you are able to detect the pod crash in real time, you can dispatch the active webapp sessions to the other running pods, or to the newly created one.</p>
<hr />
<p>As you see, the design isn't simple and requires controls, checks, and garbage collection.</p>
|
<p>I am running this command in my Mac terminal, trying to submit my test Spark job to one of our k8s clusters:</p>
<pre><code>ID_TOKEN=`kubectl config view --minify -o jsonpath='{.users[0].user.auth-provider.config.id-token}'`
./bin/spark-submit \
--master k8s://https://c2.us-south.containers.cloud.ibm.com:30326 \
--deploy-mode cluster \
--name Hello \
--class scala.example.Hello \
--conf spark.kubernetes.namespace=isap \
--conf spark.executor.instances=3 \
--conf spark.kubernetes.container.image.pullPolicy=Always \
--conf spark.kubernetes.container.image.pullSecrets=default-us-icr-io \
--conf spark.kubernetes.container.image=us.icr.io/cedp-isap/spark-for-apps:2.4.1 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.authenticate.driver.caCertFile=/usr/local/opt/spark/ca.crt \
--conf spark.kubernetes.authenticate.submission.oauthToken=$ID_TOKEN \
local:///opt/spark/jars/interimetl_2.11-1.0.jar
</code></pre>
<p>And I already created the service account &quot;spark&quot;, as well as the cluster role and cluster role binding YAML like this:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: isap
  name: pod-mgr
rules:
- apiGroups: ["rbac.authorization.k8s.io", ""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create", "delete"]
</code></pre>
<p>and </p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-mgr-spark
  namespace: isap
subjects:
- kind: ServiceAccount
  name: spark
  namespace: isap
roleRef:
  kind: ClusterRole
  name: pod-mgr
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>But when I run the above spark-submit command, I see a log like this:</p>
<pre><code>20/06/15 02:45:02 INFO LoggingPodStatusWatcherImpl: State changed, new state:
pod name: hello-1592203500709-driver
namespace: isap
labels: spark-app-selector -> spark-0c7f50ab2d21427aac9cf2381cb4bb64, spark-role -> driver
pod uid: 375674d2-784a-4b32-980d-953488c8a8b2
creation time: 2020-06-15T06:45:02Z
service account name: default
volumes: kubernetes-credentials, spark-local-dir-1, spark-conf-volume, default-token-p8pgf
node name: N/A
start time: N/A
container images: N/A
phase: Pending
status: []
</code></pre>
<p>You will notice it is still using the service account &quot;default&quot;, rather than &quot;spark&quot;,
and the executor pods cannot be created in my k8s cluster. Also, no logs are displayed in the created driver pod.</p>
<p>Could anyone help take a look at what I missed here? Thanks!</p>
| <p>I am not sure if you have already figured out the issue. Hope my input is still useful.</p>
<p>There are two places where requests are checked against RBAC.</p>
<p>First, when you execute spark-submit, it calls the k8s web API to create the driver pod. Later the driver pod calls the k8s API to create the executor pods.</p>
<p>I saw you already created the spark service account, role, and role binding. You also use them for your driver pod. That is fine, but the problem is that you haven't assigned an identity for the submission that creates the driver pod, so k8s believes you are still using &quot;system:anonymous&quot;.</p>
<p>You can authenticate the submission via the &quot;spark.kubernetes.authenticate.submission.*&quot; settings; here is one example:</p>
<pre><code> spark-submit ^
--master k8s://xxx ^
--name chen-pi ^
--deploy-mode cluster ^
--driver-memory 8g ^
--executor-memory 16g ^
--executor-cores 2 ^
--conf spark.kubernetes.container.image=gcr.io/spark-operator/spark-py:v3.1.1 ^
--conf spark.kubernetes.file.upload.path=/opt/spark ^
--conf spark.kubernetes.namespace=default^
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark^
--conf spark.kubernetes.authenticate.caCertFile=./local/ca.crt ^
--conf spark.kubernetes.authenticate.oauthTokenFile=./local/token ^
--conf spark.kubernetes.authenticate.submission.oauthTokenFile=./local/token ^
./spark-examples/python/skeleton/skeleton.py
</code></pre>
|
<p>We are using AKS version 1.19.11 and would like to know whether we can enable the configurable scaling behavior of the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> in AKS as well.</p>
<p>If yes: the current HPA setting in use is <code>apiVersion: autoscaling/v1</code>. Is it possible to configure these HPA behavior properties with this API version?</p>
| <p>If you ask specifically about <code>behavior</code> field, the answer is: <strong>no, it's not available in <code>apiVersion: autoscaling/v1</code></strong> and if you want to leverage it, you need to use <code>autoscaling/v2beta2</code>. It's clearly stated <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Starting from v1.18 the v2beta2 API allows scaling behavior to be
configured through the HPA behavior field.</p>
</blockquote>
<p>If you have doubts, you can easily check it on your own by trying to apply a new <code>HorizontalPodAutoscaler</code> object definition, containing this field, but instead of required <code>autoscaling/v2beta2</code>, use <code>autoscaling/v1</code>. You should see the error message similar to the one below:</p>
<pre><code>error: error validating "nginx-multiple.yaml": error validating data: [ValidationError(HorizontalPodAutoscaler.spec): unknown field "behavior" in io.k8s.api.autoscaling.v1.HorizontalPodAutoscalerSpec, ValidationError(HorizontalPodAutoscaler.spec): unknown field "metrics" in io.k8s.api.autoscaling.v1.HorizontalPodAutoscalerSpec]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>As you can see both <code>metrics</code> and <code>behavior</code> fields in <code>spec</code> are not valid in <code>autoscaling/v1</code> however they are perfectly valid in <code>autoscaling/v2beta2</code> API.</p>
<p>To check whether your <strong>AKS</strong> cluster supports this API version, run:</p>
<pre><code>$ kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
</code></pre>
<p>If your result is similar to mine (i.e. you can see <code>autoscaling/v2beta2</code>), it means your <strong>AKS</strong> cluster supports this API version.</p>
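<p>For reference, a minimal <code>HorizontalPodAutoscaler</code> manifest using the <code>behavior</code> field under <code>autoscaling/v2beta2</code> could look like the sketch below (the name, target, metric and thresholds are just placeholders):</p>

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx            # placeholder target workload
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
      policies:
      - type: Pods
        value: 1
        periodSeconds: 60               # remove at most 1 pod per minute
```

<p>Applying the same manifest with <code>apiVersion: autoscaling/v1</code> produces the validation error shown above.</p>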
|
<p>I have deployed mongodb as a StatefulSet on K8S. It does not connect when I try to reach the DB using the connection string URI (e.g. mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017/cool_db), but it connects and returns results when I use the Endpoint IP addresses.</p>
<pre><code># kubectl get sts
NAME READY AGE
mongo 2/2 7h33m
#kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-0 2/2 Running 0 7h48m
mongo-1 2/2 Running 2 7h47m
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10h
mongo ClusterIP None <none> 27017/TCP 7h48m
</code></pre>
<p>I'm trying to test the connection using the connection string URI in Python with the process below, but it fails.</p>
<pre><code>>>> import pymongo
>>> client = pymongo.MongoClient("mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017/cool_db")
>>> db = client.cool_db
>>> print db.cool_collection.count()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/pymongo/collection.py", line 1800, in count
return self._count(cmd, collation, session)
File "/usr/lib64/python2.7/site-packages/pymongo/collection.py", line 1600, in _count
_cmd, self._read_preference_for(session), session)
File "/usr/lib64/python2.7/site-packages/pymongo/mongo_client.py", line 1454, in _retryable_read
read_pref, session, address=address)
File "/usr/lib64/python2.7/site-packages/pymongo/mongo_client.py", line 1253, in _select_server
server = topology.select_server(server_selector)
File "/usr/lib64/python2.7/site-packages/pymongo/topology.py", line 235, in select_server
address))
File "/usr/lib64/python2.7/site-packages/pymongo/topology.py", line 193, in select_servers
selector, server_timeout, address)
File "/usr/lib64/python2.7/site-packages/pymongo/topology.py", line 209, in _select_servers_loop
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: mongo-0.mongo:27017: [Errno -2] Name or service not known,mongo-1.mongo:27017: [Errno -2] Name or service not known
</code></pre>
<p>If we use the Endpoint IP addresses, we get a response from the DB.</p>
<pre><code>>>> import pymongo
>>> client = pymongo.MongoClient("mongodb://10.244.1.8,10.244.2.9:27017/cool_db")
>>> db = client.cool_db
>>> print db.cool_collection.count()
0
>>>
</code></pre>
<p>I have tried different URIs, like <code>client = pymongo.MongoClient("mongodb://mongo-0.mongo:27017/cool_db")</code>, but they do not work either. Could anyone please help me?</p>
| <p>I had the same issue with a Spring Java connection when I used the mongodb-community-operator. The solution was: don't use the aggregated string of pod hostnames, just use the service. So, in your case, instead of a concatenated string of the pods like this</p>
<pre><code>mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017/cool_db
</code></pre>
<p>I used the service, which was also generated as a secret in my namespace, written under the key <code>connectionString.standardSrv</code>:</p>
<pre><code>mongodb+srv://<username>:<password>@mongo-svc.mongo.svc.cluster.local/cool_db?ssl=false
</code></pre>
<p>(I'm assuming your StatefulSet is named mongo and your namespace is named mongo too.)
In that case, I could not set the username and password as extra parameters, only this one URI, but that worked.</p>
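<p>For reference, inside the cluster a pod behind a headless service is resolvable as <code>&lt;pod&gt;.&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local</code>, and the service itself as <code>&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local</code>. A tiny helper to build these names (the concrete names below are only assumptions about your setup):</p>

```python
def pod_fqdn(pod: str, service: str, namespace: str) -> str:
    """DNS name of one pod behind a headless service."""
    return f"{pod}.{service}.{namespace}.svc.cluster.local"

def service_fqdn(service: str, namespace: str) -> str:
    """DNS name of the service itself."""
    return f"{service}.{namespace}.svc.cluster.local"

# Assuming a StatefulSet "mongo" with headless service "mongo" in namespace "mongo":
print(pod_fqdn("mongo-0", "mongo", "mongo"))  # mongo-0.mongo.mongo.svc.cluster.local
print(service_fqdn("mongo", "mongo"))         # mongo.mongo.svc.cluster.local
```

<p>Note that the short form <code>mongo-0.mongo</code> only resolves from pods in the same namespace; from anywhere else you need the fully qualified name.</p>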
|
<p>I recently evaluated Kubernetes with a simple test project and I was able to update image of StatefulSet with command like this:</p>
<pre><code>kubectl set image statefulset/cloud-stateful-set cloud-stateful-container=ncccloud:v716
</code></pre>
<p>I'm now trying to get our real system to work in Kubernetes, and the pods don't do anything when I try to update the image, even though I'm using basically the same command.</p>
<p>It says:</p>
<blockquote>
<p>statefulset.apps "cloud-stateful-set" image updated</p>
</blockquote>
<p>And <code>kubectl describe statefulset.apps/cloud-stateful-set</code> says:</p>
<blockquote>
<p>"Image: ncccloud:v716"</p>
</blockquote>
<p>But <code>kubectl describe pod cloud-stateful-set-0</code> and <code>kubectl describe pod cloud-stateful-set-1</code> say:</p>
<blockquote>
<p>"Image: ncccloud:latest"</p>
</blockquote>
<p>The ncccloud:latest is an image which doesn't work:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cloud-stateful-set-0 0/1 CrashLoopBackOff 7 13m
cloud-stateful-set-1 0/1 CrashLoopBackOff 7 13m
mssql-deployment-6cd4ff766-pzz99 1/1 Running 1 55m
</code></pre>
<p>Another strange thing is that every time I try to apply the StatefulSet it says <strong>configured</strong> instead of <strong>unchanged</strong>.</p>
<pre><code>$ kubectl apply -f k8s/cloud-stateful-set.yaml
statefulset.apps "cloud-stateful-set" configured
</code></pre>
<p>Here is my cloud-stateful-set.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cloud-stateful-set
labels:
app: cloud
group: service
spec:
replicas: 2
# podManagementPolicy: Parallel
serviceName: cloud-stateful-set
selector:
matchLabels:
app: cloud
template:
metadata:
labels:
app: cloud
group: service
spec:
containers:
- name: cloud-stateful-container
image: ncccloud:latest
imagePullPolicy: Never
ports:
- containerPort: 80
volumeMounts:
- name: cloud-stateful-storage
mountPath: /cloud-stateful-data
volumeClaimTemplates:
- metadata:
name: cloud-stateful-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
</code></pre>
<p>Here is full output of <code>kubectl describe pod/cloud-stateful-set-1</code>:</p>
<pre><code>Name: cloud-stateful-set-1
Namespace: default
Node: docker-for-desktop/192.168.65.3
Start Time: Tue, 02 Jul 2019 11:03:01 +0300
Labels: app=cloud
controller-revision-hash=cloud-stateful-set-5c9964c897
group=service
statefulset.kubernetes.io/pod-name=cloud-stateful-set-1
Annotations: <none>
Status: Running
IP: 10.1.0.20
Controlled By: StatefulSet/cloud-stateful-set
Containers:
cloud-stateful-container:
Container ID: docker://3ec26930c1a81caa39d5c5a16c4e25adf7584f90a71e0110c0b03ecb60dd9592
Image: ncccloud:latest
Image ID: docker://sha256:394427c40e964e34ca6c9db3ce1df1f8f6ce34c4ba8f3ab10e25da6e89678830
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Tue, 02 Jul 2019 11:19:03 +0300
Finished: Tue, 02 Jul 2019 11:19:03 +0300
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gzxpx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloud-stateful-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: cloud-stateful-storage-cloud-stateful-set-1
ReadOnly: false
default-token-gzxpx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gzxpx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned cloud-stateful-set-1 to docker-for-desktop
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "pvc-4c9e1796-9c9a-11e9-998f-00155d64fa03"
Normal SuccessfulMountVolume 19m kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-gzxpx"
Normal Pulled 17m (x5 over 19m) kubelet, docker-for-desktop Container image "ncccloud:latest" already present on machine
Normal Created 17m (x5 over 19m) kubelet, docker-for-desktop Created container
Normal Started 17m (x5 over 19m) kubelet, docker-for-desktop Started container
Warning BackOff 4m (x70 over 19m) kubelet, docker-for-desktop Back-off restarting failed container
</code></pre>
<p>Here is full output of <code>kubectl describe statefulset.apps/cloud-stateful-set</code>:</p>
<pre><code>Name: cloud-stateful-set
Namespace: default
CreationTimestamp: Tue, 02 Jul 2019 11:02:59 +0300
Selector: app=cloud
Labels: app=cloud
group=service
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app":"cloud","group":"service"},"name":"cloud-stateful-set","names...
Replicas: 2 desired | 2 total
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cloud
group=service
Containers:
cloud-stateful-container:
Image: ncccloud:v716
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/cloud-stateful-data from cloud-stateful-storage (rw)
Volumes: <none>
Volume Claims:
Name: cloud-stateful-storage
StorageClass:
Labels: <none>
Annotations: <none>
Capacity: 10Mi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-0 in StatefulSet cloud-stateful-set successful
Normal SuccessfulCreate 25m statefulset-controller create Pod cloud-stateful-set-1 in StatefulSet cloud-stateful-set successful
</code></pre>
<p>I'm using Docker Desktop on Windows, if it matters.</p>
| <p>In my case <code>imagePullPolicy</code> was already set to <code>Always</code>, so that was not the cause. Patching the image on the StatefulSet directly:</p>
<pre><code>kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"gcr.io/google_containers/nginx-slim:0.8"}]'
</code></pre>
<p>helped in my case, see k8s docs: <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update</a></p>
|
<p>I have created an <code>Elasticsearch</code> resource using the below yaml manifest after installing the eck-operator as mentioned <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">here</a>.</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.15.0
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
</code></pre>
<p>After this manifest is applied, I can get the status manually by executing:</p>
<pre><code>kubectl get elasticsearch -n ecknamespace
</code></pre>
<p>and the output would be as follows:</p>
<pre><code>$ kubectl get elasticsearch -n ecknamespace
NAME HEALTH NODES VERSION PHASE AGE
quickstart green 3 7.15.0 Ready 3d17h
</code></pre>
<p>Using the <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">Kubernetes C# Client</a>, how do I get the above data programmatically?</p>
| <p>The client includes an <a href="https://github.com/kubernetes-client/csharp/tree/master/examples/customResource" rel="nofollow noreferrer">example</a> of how to interact with custom resources.</p>
<p>It will require you to define the classes described in the files <code>cResource.cs</code> and <code>CustomResourceDefinition.cs</code>.</p>
<p>Afterwards, the following code should list the <code>elasticsearch</code> resource:</p>
<pre><code>var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
var client = new GenericClient(config, "elasticsearch.k8s.elastic.co", "v1", "elasticsearches");
var elasticSearches = await client.ListNamespacedAsync<CustomResourceList<CResource>>("default").ConfigureAwait(false);
foreach (var es in elasticSearches.Items)
{
Console.WriteLine(es.Metadata.Name);
}
</code></pre>
<p><strong>EDIT</strong> after OP's comments: to view all fields of the custom resource, one needs to edit the <code>CustomResource</code> class (file <code>CustomResourceDefinition.cs</code> in the example) with the corresponding fields.</p>
|
<p><strong>Update #2:</strong>
I have checked the health status of my instances within the auto scaling group - there the instances are marked as "healthy". (Screenshot added)</p>
<p>I followed <a href="https://docs.aws.amazon.com/de_de/elasticloadbalancing/latest/classic/ts-elb-healthcheck.html#ts-elb-healthcheck-autoscaling" rel="nofollow noreferrer">this trouble-shooting tutorial</a> from AWS - without success:</p>
<blockquote>
<p><strong>Solution</strong>: Use the ELB health check for your Auto Scaling group. When you use the ELB health check, Auto Scaling determines the health status of your instances by checking the results of both the instance status check and the ELB health check. For more information, see <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-elb-healthcheck.html" rel="nofollow noreferrer">Adding health checks to your Auto Scaling group</a> in the <em>Amazon EC2 Auto Scaling User Guide</em>.</p>
</blockquote>
<hr />
<p><strong>Update #1:</strong>
I found out that the two node instances are "OutOfService" (as seen in the screenshots below) because they are failing the health check from the load balancer - could this be the problem? And how do I solve it?</p>
<p>Thanks!</p>
<hr />
<p>I am currently on the home stretch to host my ShinyApp on AWS.</p>
<p>To make the hosting scalable, I decided to use AWS - more precisely an EKS cluster.</p>
<p>For the creation I followed this tutorial: <a href="https://github.com/z0ph/ShinyProxyOnEKS" rel="nofollow noreferrer">https://github.com/z0ph/ShinyProxyOnEKS</a></p>
<p>So far everything worked, except for the last step: "When accessing the load balancer address and port, the login interface of ShinyProxy can be displayed normally".</p>
<p>The load balancer gives me the following error message as soon as I try to call it with the corresponding port: <strong>ERR_EMPTY_RESPONSE.</strong></p>
<p>I have to admit that I am currently a bit lost and lack a starting point where the error could be.</p>
<p>I was already able to host the Shiny sample application in the cluster (step 3.2 in the tutorial), so it must be somehow due to shinyproxy, kubernetes proxy or the loadbalancer itself.</p>
<p>I link you to the following information below:</p>
<ul>
<li>Overview EC2 Instances (Workspace + Cluster Nodes)</li>
<li>Overview Loadbalancer</li>
<li>Overview Repositories</li>
<li>Dockerfile ShinyProxy</li>
<li>Dockerfile Kubernetes Proxy</li>
<li>Dockerfile ShinyApp (sample application)</li>
</ul>
<p>I have painted over some of the information to be on the safe side - if there is anything important, please let me know.</p>
<p>If you need anything else I haven't thought of, just give me a hint!</p>
<p>And please excuse the confusing question and formatting - I just don't know how to word / present it better. sorry!</p>
<p>Many thanks and best regards</p>
<hr />
<p><strong>Overview EC2 Instances (Workspace + Cluster Nodes)</strong></p>
<p><a href="https://i.stack.imgur.com/54Hst.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/54Hst.png" alt="enter image description here" /></a></p>
<p><strong>Overview Loadbalancer</strong></p>
<p><a href="https://i.stack.imgur.com/yd72j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yd72j.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/IF1Hq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IF1Hq.png" alt="enter image description here" /></a></p>
<p><strong>Overview Repositories</strong></p>
<p><a href="https://i.stack.imgur.com/9nd41.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9nd41.png" alt="enter image description here" /></a></p>
<p><strong>Dockerfile ShinyProxy (source <a href="https://github.com/openanalytics/shinyproxy-config-examples/tree/master/03-containerized-kubernetes" rel="nofollow noreferrer">https://github.com/openanalytics/shinyproxy-config-examples/tree/master/03-containerized-kubernetes</a>)</strong></p>
<p><a href="https://i.stack.imgur.com/lKhG3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lKhG3.png" alt="enter image description here" /></a></p>
<p><strong>Dockerfile Kubernetes Proxy (source <a href="https://github.com/openanalytics/shinyproxy-config-examples/tree/master/03-containerized-kubernetes" rel="nofollow noreferrer">https://github.com/openanalytics/shinyproxy-config-examples/tree/master/03-containerized-kubernetes</a> - Fork)</strong></p>
<p><a href="https://i.stack.imgur.com/q7ZIz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q7ZIz.png" alt="enter image description here" /></a></p>
<p><strong>Dockerfile ShinyApp (sample application)</strong></p>
<p><a href="https://i.stack.imgur.com/yNunz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yNunz.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/DvC8T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DvC8T.png" alt="enter image description here" /></a></p>
<p><strong>The following files are 1:1 from the tutorial:</strong></p>
<ol>
<li>application.yaml (shinyproxy)</li>
<li>sp-authorization.yaml</li>
<li>sp-deployment.yaml</li>
<li>sp-service.yaml</li>
</ol>
<p><strong>Health-Status in the AutoScaling-Group</strong></p>
<p><a href="https://i.stack.imgur.com/Pca5V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pca5V.png" alt="enter image description here" /></a></p>
| <p>Unfortunately, there is a known issue in <code>AWS</code></p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/80579" rel="nofollow noreferrer">externalTrafficPolicy: Local with Type: LoadBalancer AWS NLB health checks failing · Issue #80579 · kubernetes/kubernetes</a></p>
<blockquote>
<p>Closing this for now since it's a known issue</p>
</blockquote>
<p>As per <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer"><code>k8s</code> manual</a>:</p>
<blockquote>
<p><code>.spec.externalTrafficPolicy</code> - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: <code>Cluster</code> (default) and <code>Local</code>. <code>Cluster</code> obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. <code>Local</code> preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading.</p>
</blockquote>
<p>But you may try to work around the <code>Local</code> policy as described <a href="https://stackoverflow.com/a/58001933/5720818">in this answer</a>.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/80579" rel="nofollow noreferrer">Upd:</a></p>
<p>This is actually a known limitation where the AWS cloud provider does not allow for <code>--hostname-override</code>, see <a href="https://github.com/kubernetes/kubernetes/issues/54482" rel="nofollow noreferrer">#54482</a> for more details.</p>
<p>Upd 2: There is a workaround via patching <code>kube-proxy</code>:</p>
<p>As per <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-troubleshoot-unhealthy-targets-nlb/" rel="nofollow noreferrer">AWS KB</a></p>
<blockquote>
<p>A Network Load Balancer with the <a href="https://github.com/kubernetes/kubernetes/issues/61486" rel="nofollow noreferrer">externalTrafficPolicy is set to Local</a> (from the Kubernetes website), with a custom Amazon VPC DNS on the DHCP options set. To resolve this issue, patch kube-proxy with the hostname override flag.</p>
</blockquote>
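<p>For reference, the workaround described in the AWS article boils down to adding a <code>--hostname-override</code> flag to the <code>kube-proxy</code> DaemonSet in <code>kube-system</code>, roughly like the fragment below (a sketch only; verify it against your cluster's actual manifest before applying):</p>

```yaml
# Fragment of the kube-proxy DaemonSet container spec (namespace kube-system);
# flag and env names follow the AWS knowledge-center article.
containers:
- name: kube-proxy
  command:
  - kube-proxy
  - --hostname-override=$(NODE_NAME)   # report the node's name instead of custom DNS
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: spec.nodeName
```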
|
<p>Do anyone know what's the libraries load order based on, in a single folder on Tomcat 8?</p>
<p>Here is my situation:<br />
There is this customer Java application deployed on Tomcat that, for some reason, has the same class in multiple libraries in the same webapp-shared folder. I know it's wrong, everyone knows that, but this customer refuses to fix this.<br />
We have to deploy this application on kubernetes, so we created a Dockerfile and everything else, and it's working correctly <em>most of the time</em>.<br />
When a pod is deployed on some nodes, it seems that the load order of the libraries is different, and that causes the application to not work properly.</p>
<p>So, basically, what I'm asking is: is there anyone who actually knows what's the load order based on? Is it the filesystem used? Is it the docker overlay? Can it be the LC_COLLATE? Is it actually pseudo random?<br />
These nodes are basically identical and I'm really having a hard time trying to find out what the difference could be.<br />
Thank you.</p>
| <p>The only ordering specified by the Servlet Specification is that classes in <code>/WEB-INF/classes</code> have precedence over <code>/WEB-INF/lib/*.jar</code>.</p>
<p>In Tomcat 7 and earlier the order of libraries in the <code>lib</code> directory depended on the underlying filesystem, so that it would break when you moved your web application to a different OS or to a different Java EE server. In Tomcat 8 and later it is random by design. As random as a <code>HashSet</code> is.</p>
<p>See discussion in <a href="https://bz.apache.org/bugzilla/show_bug.cgi?id=57129" rel="nofollow noreferrer">Bug 57129</a>. Regarding <code>Resources</code> and <code>PreResources</code> see <a href="https://tomcat.apache.org/tomcat-8.5-doc/config/resources.html" rel="nofollow noreferrer">configuration reference</a>.</p>
|
<p>I'm new to Kubernetes and plan to use Google Kubernetes Engine. Hypothetically speaking, let's say I have a K8s cluster with 2 worker nodes. Each node would have its own pod housing the same application. This application will grab a file from some persistent volume and generate an output file that will be pushed back into a different persistent volume. Both pods in my cluster would be doing this continuously until there are no input files in the persistent volume left to be processed. Do the pods inherently know NOT to grab the same file that one pod is already using? If not, how would I be able account for this? I would like to avoid 2 pods using the same input file.</p>
| <blockquote>
<p>Do the pods inherently know NOT to grab the same file that one pod is already using?</p>
</blockquote>
<p>Pods are just processes. Two separate processes accessing files from a shared directory are going to run into conflicts unless they have some sort of coordination mechanism.</p>
<h2>Option 1</h2>
<p>Have one process whose job it is to enumerate the available files. Your two workers connect to this process and receive filenames via some sort of queue/message bus/etc. When they finish processing a file, they request the next one, until all files are processed. Because only a single process is enumerating the files and passing out the work, there's no option for conflict.</p>
<h2>Option 2</h2>
<p><em>In general</em>, renaming files is an atomic operation. Each worker creates a subdirectory within your PV. To claim a file, it renames the file into the appropriate subdirectory and then processes it. Because renames are atomic, even if both workers happen to pick the same file at the same time, only one will succeed.</p>
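<p>Option 2 can be sketched in a few lines (Python here purely for illustration; the file and directory names are made up):</p>

```python
import os
import tempfile

def claim(path, worker_dir):
    """Try to claim a file by atomically renaming it into this worker's directory."""
    os.makedirs(worker_dir, exist_ok=True)
    target = os.path.join(worker_dir, os.path.basename(path))
    try:
        os.rename(path, target)  # atomic on POSIX when staying on one filesystem
        return target
    except OSError:              # another worker renamed (claimed) it first
        return None

# Two workers race for the same input file in a shared directory.
base = tempfile.mkdtemp()
src = os.path.join(base, "input.csv")
open(src, "w").close()

first = claim(src, os.path.join(base, "worker-1"))
second = claim(src, os.path.join(base, "worker-2"))
print(first is not None, second is None)  # True True: only one claim succeeds
```

<p>On a PV this only holds if the rename stays within the same mounted filesystem, which is the case for subdirectories of one volume.</p>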
<h2>Option 3</h2>
<p>If your files have some sort of systematic naming convention, you can divide the list of files between your two workers (e.g., "everything that ends in an even number is processed by worker 1, and everything that ends with an odd number is processed by worker 2").</p>
<hr />
<p>Etc. There are many ways to coordinate this sort of activity. The wikipedia entry on <a href="https://en.wikipedia.org/wiki/Synchronization_(computer_science)" rel="nofollow noreferrer">Synchronization</a> may be of interest.</p>
|
<p>I'm trying to code a bash function to get and decode a kubernetes secret's data in just one short command. My idea is to have something like <code>kgsecd -n <namespace> <secret-name> <secret-data></code> that is just mapped to <code>kubectl get secret -n <namespace> <secret-name> -o "jsonpath={.data.<secret-data>}" | base64 -d</code>. I have already coded it as follows:</p>
<pre class="lang-sh prettyprint-override"><code> kgsecd() {
secretData="${@: -1}"
kubectlParams=(${@:1:-1})
echo "kubectl get secret ${kubectlParams} -o \"jsonpath={.data.$secretData}\" | base64 -d"
kubectl get secret "${kubectlParams}" -o "jsonpath={.data.$secretData}" | base64 -d;
}
</code></pre>
<p>However, I'm struggling to make it work as when I call it, it doesn't show any output in the terminal (apart from that from the <code>echo</code> sentence), but if I copy&paste and executed the output of the echo sentence, it works as expected. Let me show you what I mean with an example:</p>
<pre class="lang-sh prettyprint-override"><code>$> kgsecd -n my-ns my-secret secret-data
kubectl get secret -n my-ns my-secret -o "jsonpath={.data.secret-data}" | base64 -d
$>
</code></pre>
<p>But when I execute <code>kubectl get secret -n my-ns my-secret -o "jsonpath={.data.secret-data}" | base64 -d</code>, I get the expected result.</p>
| <p>If you are using <code>bash</code>, try the following; the only change is the way <code>kubectlParams</code> is assigned. Here, <code>kubectlParams</code> is assigned the 1st through the penultimate (<code>$#-1</code>) argument.</p>
<p>Also, if <code>"${kubectlParams}"</code> is quoted, it is expanded as a single word. E.g. <code>-n my-ns my-secret</code> would be passed as one string, i.e. a single argument to <code>kubectl</code>. kubectl understands <code>-n</code>, <code>my-ns</code> and <code>my-secret</code> as separate arguments, but not <code>-n my-ns my-secret</code> as one token.</p>
<pre><code>kgsecd() {
secretData="${@: -1}"
kubectlParams=${@:1:$#-1}
echo "kubectl get secret ${kubectlParams} -o \"jsonpath={.data.$secretData}\" | base64 -d"
kubectl get secret ${kubectlParams} -o "jsonpath={.data.$secretData}" | base64 -d
}
</code></pre>
<p>Execution output:</p>
<pre><code>#test secret created:
kubectl create secret generic my-secret -n my-ns --from-literal=secret-data=helloooooo
#function output
kgsecd -n my-ns my-secret secret-data
kubectl get secret -n my-ns my-secret -o "jsonpath={.data.secret-data}" | base64 -d
helloooooo
</code></pre>
<p>Manual command execution output:</p>
<pre><code>kubectl get secret -n my-ns my-secret -o "jsonpath={.data.secret-data}" | base64 -d
helloooooo
</code></pre>
<p><code>zsh</code> solution(see comments by OP):</p>
<pre><code>kgsecd() { kubectl get secret ${@:1:-1} -o "jsonpath={.data.${@: -1}}" | base64 -d }
</code></pre>
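<p>The parameter slicing used in both variants can be checked in isolation (the sample arguments are placeholders):</p>

```shell
# Simulate the function's positional parameters (placeholder values)
set -- -n my-ns my-secret secret-data

# Everything except the last argument: passed through to kubectl
params=${@:1:$#-1}
# The last argument: the key under .data
last="${@: -1}"

echo "$params"   # -n my-ns my-secret
echo "$last"     # secret-data
```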
|
<p>I am trying to access a helm value dynamically through another variable, as I am utilizing the range functionality to create multiple deployments. Take for example this section of my deployment file</p>
<pre><code>{{- range $teams := .Values.teams }}
.
.
.
ports:
- containerPort: {{ .Values.deployment.backend.($teams.tag).serverPort }}
protocol: {{ .Values.deployment.backend.($teams.tag).serverProtocol }}
- containerPort: {{ .Values.deployment.backend.($teams.tag).authPort }}
protocol: {{ .Values.deployment.backend.($teams.tag).authProtocol }}
.
.
.
---
{{- end }}
</code></pre>
<p>with a values.yml file of</p>
<pre><code>teams:
- name: TeamA
tag: teamA
- name: TeamB
tag: teamB
- name: TeamC
tag: teamC
deployment:
backend:
teamA:
serverPort: 10001
serverProtocol: TCP
authPort: 10010
authProtocol: TCP
teamB:
serverPort: 9101
serverProtocol: TCP
authPort: 9110
authProtocol: TCP
teamC:
serverPort: 9001
serverProtocol: TCP
authPort: 9010
authProtocol: TCP
</code></pre>
<p>I am not able to figure out how to pass $teams.tag to be evaluated to return the overall value of containerPort for example.</p>
<p>Any help is appreciated.</p>
<p>Cheers</p>
| <p>Helm itself provides a number of functions for you to manipulate the value.</p>
<p>Here is one way to handle your use case with the <code>get</code> function.</p>
<pre><code>{{- $ := . -}}
{{- range $teams := .Values.teams }}
.
.
.
ports:
- containerPort: {{ (get $.Values.deployment.backend $teams.tag).serverPort }}
protocol: {{ (get $.Values.deployment.backend $teams.tag).serverProtocol }}
- containerPort: {{ (get $.Values.deployment.backend $teams.tag).authPort }}
protocol: {{ (get $.Values.deployment.backend $teams.tag).authProtocol }}
.
.
.
---
{{- end }}
</code></pre>
<p>Please note that the scope changes inside the <code>range</code> block, so you need to save <code>.</code> into <code>$</code> beforehand in order to access the root scope.</p>
<p>You can also refer to this doc, <a href="https://helm.sh/docs/chart_template_guide/function_list/" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/function_list/</a>, to learn more about the functions you can use.</p>
|
<p>I am new to Kubernetes and I am experimenting with some of these in my local development. Before I give my problem statement here is my environment and the state of my project.</p>
<ol>
<li>I have Windows 10 with WSL2 enable with Ubuntu running through VS Code.</li>
<li>I have enabled the required plugins in VS Code (like Kubernetes, Docker and such of those)</li>
<li>I have Docker desktop installed which has WSL2 + Ubuntu + Kubernetes enabled.</li>
<li>I have ASP.Net Core 5 working version from my local system + ubuntu through Docker</li>
<li>I have dockerfile + docker compose file and I have tested them all with and without SSL port and those are working with SSL and without SSL as well. (for that I have modified the program to accept non-SSL request).</li>
</ol>
<p>Coming to the Dockerfile:
-- It has the required ports exposed, like 5000 (for non-SSL) and 5001 (for SSL).
Coming to the docker compose file:
-- It has the required mappings, like <code>5000:80</code> and <code>5001:443</code>.</p>
<p>-- It also has environment variable for URLs like</p>
<pre><code>ASPNETCORE_URLS=https://+5001;http://+5000
</code></pre>
<p>-- It also has environment variable for Certificate path like</p>
<pre><code>ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
</code></pre>
<p>-- It also has environment variable for Certificate password like</p>
<pre><code>ASPNETCORE_Kestrel__Certificates__Default__Password=SECRETPASSWORD
</code></pre>
<p>Now, when I say <code>docker compose up --build</code>, it builds the project and also starts the containers.
I am able to access the site through https://localhost:5001 as well as http://localhost:5000</p>
<p>Now, coming to Kubernetes
-- I have used the kompose tool to generate Kubernetes-specific yaml files
-- I haven't made any changes to them. I ran the command <code>kompose convert -f docker-compose.yaml -o ./.k8</code>
-- finally, I ran <code>kubectl apply -f .k8</code></p>
<p>It starts the container but it immediately fails. I checked the logs and they say the following:</p>
<pre><code>crit: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to start Kestrel.
Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
at Interop.Crypto.CheckValidOpenSslHandle(SafeHandle handle)
at Internal.Cryptography.Pal.OpenSslX509CertificateReader.FromFile(String fileName, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(String fileName, String password)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Certificates.CertificateConfigLoader.LoadCertificate(CertificateConfig certInfo, String endpointName)
at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.LoadDefaultCert()
at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.Reload()
at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.Load()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.BindAsync(CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
Unhandled exception. Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
</code></pre>
| <p>In "It has required mapping like <code>5000:80</code> and <code>5000:443</code>", the second mapping should actually be <code>5001:443</code> (port 5001 is the one that maps to the HTTPS port 443).</p>
<p>Based on this error message</p>
<blockquote>
<p>"Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file"</p>
</blockquote>
<p>It seems the certificate file doesn't exist in the following location: <code>/https/aspnetapp.pfx</code></p>
<p>Run the image, using the following Docker command:</p>
<pre><code>docker run -it --entrypoint sh <image name>
</code></pre>
<p>This drops you into the container without running the entrypoint. Do a <code>cd /https/</code> and check whether the certificate is located in this folder; if it is not, that is probably the problem.</p>
|
<p>I have a Rancher running inside a Kubernetes cluster. It is installed using helm chart. The Rancher web UI is exposed using an ingress.</p>
<p>There is a DNS record for this ingress in an external DNS: rancher.myexample.com (this is just en example! DNS name)</p>
<p>I have a wildcard TLS certificate that covers *.myexample.com</p>
<p>How to use this TLS certificate for Rancher exposed via ingress?</p>
| <p>You can add the certificate from Resources > Secrets > Certificates by clicking Add Certificate.</p>
<p>The exact path may vary depending on the Rancher version you are using.</p>
<p>Read more at : <a href="https://rancher.com/docs/rancher/v2.5/en/k8s-in-rancher/certificates/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.5/en/k8s-in-rancher/certificates/</a></p>
<p><strong>Option 2</strong></p>
<p>Create secret with the certificate details, and attach cert to ingress.</p>
<p>Setting default certificate of the Nginx ingress : <a href="https://rancher.com/docs/rke/latest/en/config-options/add-ons/ingress-controllers/#configuring-an-nginx-default-certificate" rel="nofollow noreferrer">https://rancher.com/docs/rke/latest/en/config-options/add-ons/ingress-controllers/#configuring-an-nginx-default-certificate</a></p>
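<p>For option 2, the secret can be created from the wildcard certificate files with <code>kubectl</code>. A sketch, assuming Rancher runs in the usual <code>cattle-system</code> namespace (the file names are illustrative):</p>

```shell
# Create a TLS secret from the wildcard cert and key in Rancher's namespace
kubectl -n cattle-system create secret tls tls-rancher-ingress \
  --cert=fullchain.pem \
  --key=privkey.pem
```

<p>When installing the Rancher chart, <code>--set ingress.tls.source=secret</code> tells it to use this secret for the ingress.</p>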
|
<p>I have a .Net Core Api that returns data from SQL and a respective pdf/document from Azure Blob container/storage. I am developing a Proof of Concept to host this Api in AKS (Azure Kubernetes). My prototype returns data successfully from SQL (on Linux). However, I am unable to pull the document/pdf from blob storage.</p>
<p>Need information on how I can access blob storage from AKS.</p>
<p>Many thanks Ramesh</p>
| <p>You can use Blobfuse to access Blob storage from AKS.</p>
<p>Blobfuse works by creating a virtual file system on a Linux host. Whenever a file on that system is requested, the driver will retrieve it from Blob Storage and put it in a temporary folder so the operations can be completed, then it writes the changed file back to Blob Storage once the file handles on the file have been released.</p>
<p><strong>Tradeoff</strong>: Access to files may be slow and highly latent.</p>
<p><strong>Advantage</strong>: Cheaper than file storage for the same relative amount of data.</p>
<p>Links: <a href="https://github.com/kubernetes-sigs/blob-csi-driver" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/blob-csi-driver</a></p>
<p><a href="https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/install-driver-on-aks.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/docs/install-driver-on-aks.md</a></p>
<p><a href="https://docs.seldon.io/projects/seldon-core/en/latest/examples/triton_gpt2_example_azure_setup.html" rel="nofollow noreferrer">https://docs.seldon.io/projects/seldon-core/en/latest/examples/triton_gpt2_example_azure_setup.html</a></p>
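<p>With the CSI driver installed, access is typically wired up through a StorageClass and a PVC mounted into the pod. A minimal sketch, assuming the provisioner name from the blob-csi-driver docs (names and sizes are illustrative; check the driver docs for your version):</p>

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blob-fuse
provisioner: blob.csi.azure.com   # provisioner name per the blob-csi-driver docs
parameters:
  skuName: Standard_LRS
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pdf-storage
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: blob-fuse
  resources:
    requests:
      storage: 10Gi
```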
|
<pre><code>Warning FailedMount 23m (x55 over 3h) kubelet, ip-172-31-3-191.us-east-2.compute.internal Unable to attach or mount volumes: unmounted volumes=[mysql-persistent-storage], unattached volumes=[mysql-persistent-storage default-token-6vkr2]: timed out waiting for the condition
Warning FailedMount 4m (x105 over 3h7m) kubelet, ip-172-31-3-191.us-east-2.compute.internal (combined from similar events): MountVolume.SetUp failed for volume "pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/408ffa6b-1f64-4f1a-adfd-01b77ad7b886/volumes/kubernetes.ioaws-ebs/pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-011d7bb42da888b82 /var/lib/kubelet/pods/408ffa6b-1f64-4f1a-adfd-01b77ad7b886/volumes/kubernetes.ioaws-ebs/pv
Output: Running scope as unit run-9240.scope.
mount: /var/lib/kubelet/pods/408ffa6b-1f64-4f1a-adfd-01b77ad7b886/volumes/kubernetes.io~aws-ebs/pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-2a/vol-011d7bb42da888b82 does not exist.
Warning FailedAttachVolume 2m38s (x92 over 3h7m) attachdetach-controller AttachVolume.NewAttacher failed for volume "pv" : Failed to get AWS Cloud Provider. GetCloudProvider returned instead
</code></pre>
<p>I am running <code>kubernetes</code> cluster on <code>AWS EC2</code> machines.</p>
<p>When I am trying to attach <code>EBS</code> volume I am getting the above error.</p>
| <p>There are at least three possible solutions:</p>
<ul>
<li>check your <code>k8s</code> version; maybe you'll want to update it;</li>
<li>install <code>NFS</code> on your infrastructure;</li>
<li>fix <code>inbound</code> CIDR rule</li>
</ul>
<h1><code>k8s</code> version</h1>
<p>There is a known issue:
<a href="https://github.com/kubernetes/kubernetes/issues/77916" rel="nofollow noreferrer">MountVolume.SetUp failed for volume pvc mount failed: exit status 32 · Issue #77916 · kubernetes/kubernetes</a></p>
<p>And it's fixed in <a href="https://github.com/kubernetes/kubernetes/issues/77663#issuecomment-493453846" rel="nofollow noreferrer">#77663</a></p>
<p>So, check your <code>k8s</code> version</p>
<pre><code>kubectl version
kubeadm version
</code></pre>
<h1>Install NFS on your infrastructure</h1>
<p>There are three answers where installing <code>NFS</code> helped in cases like yours:</p>
<p><a href="https://stackoverflow.com/a/49571145/5720818">A: 49571145</a>
<a href="https://stackoverflow.com/a/52457325/5720818">A: 52457325</a>
<a href="https://stackoverflow.com/a/55922445/5720818">A: 55922445</a></p>
<h1>Fix inbound CIDR rule</h1>
<p>One of the solutions is to add the cluster's node CIDR range to the inbound rules of the relevant security group.</p>
<p>See <a href="https://stackoverflow.com/a/61868673/5720818">A: 61868673</a></p>
<blockquote>
<p>I was facing this issue because the Kubernetes cluster node CIDR range was not present in the inbound rules of the Security Group of my AWS EC2 instance (where my NFS server was running)</p>
</blockquote>
<blockquote>
<p>Solution: Added my Kubernetes cluster node CIDR range to the inbound rules of the Security Group.</p>
</blockquote>
|
<p>I've set up an nfs server that serves a RMW pv according to the example at <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="nofollow noreferrer">https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs</a></p>
<p>This setup works fine for me in lots of production environments, but in one specific GKE cluster instance, mounting stopped working after the pods restarted.</p>
<p>From kubelet logs I see the following repeating many times</p>
<blockquote>
<p>Unable to attach or mount volumes for pod "api-bf5869665-zpj4c_default(521b43c8-319f-425f-aaa7-e05c08282e8e)": unmounted volumes=[shared-mount], unattached volumes=[geekadm-net deployment-role-token-6tg9p shared-mount]: timed out waiting for the condition; skipping pod</p>
</blockquote>
<blockquote>
<p>Error syncing pod 521b43c8-319f-425f-aaa7-e05c08282e8e ("api-bf5869665-zpj4c_default(521b43c8-319f-425f-aaa7-e05c08282e8e)"), skipping: unmounted volumes=[shared-mount], unattached volumes=[geekadm-net deployment-role-token-6tg9p shared-mount]: timed out waiting for the condition</p>
</blockquote>
<p>Manually mounting the nfs on any of the nodes works just fine: <code>mount -t nfs <service ip>:/ /tmp/mnt</code></p>
<p>How can I further debug the issue? Are there any other logs I could look at besides kubelet?</p>
| <p>If mounting the volume is too slow, the pod can get kicked off the node, and you may see messages like these in the logs.</p>
<p>The kubelet even reports this issue in its logs.<br />
<strong>Sample kubelet log:</strong><br />
Setting volume ownership for /var/lib/kubelet/pods/c9987636-acbe-4653-8b8d-
aa80fe423597/volumes/kubernetes.io~gce-pd/pvc-fbae0402-b8c7-4bc8-b375-
1060487d730d and fsGroup set. If the volume has a lot of files then setting
volume ownership could be slow, see <a href="https://github.com/kubernetes/kubernetes/issues/69699" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/69699</a></p>
<p><strong>Cause:</strong><br />
The <code>pod.spec.securityContext.fsGroup</code> setting causes the kubelet to run chown and chmod on all the files in the volumes mounted for a given pod. This can be very time-consuming for big volumes with many files.</p>
<p>By default, Kubernetes recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a Pod's securityContext when that volume is mounted. From the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">document</a>.</p>
<p><strong>Solution:</strong><br />
You can deal with it in the following ways.</p>
<ol>
<li>Reduce the number of files in the volume.</li>
<li>Stop using the fsGroup setting.</li>
</ol>
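<p>The document linked above also describes <code>fsGroupChangePolicy</code>, which (on reasonably recent Kubernetes versions) lets you skip the recursive ownership change when the volume root already has the expected ownership. A sketch (image and values are illustrative):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  securityContext:
    fsGroup: 2000
    # Only change ownership/permissions when the volume root
    # doesn't already match, instead of walking every file:
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app
    image: nginx
```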
|
<p>I have a question regarding Kubernetes. In the non-container world, I have observed that a JVM with too little RAM allocated will consume 100% CPU on all cores, because the garbage collector does not have enough memory and thus runs more often (I guess).</p>
<p>In Kubernetes, I often see containers used now. Can a Java process there do the same, causing 100% CPU usage on an entire node and, in the process, bringing that node down, provided that no requests and limits are configured and the node is overprovisioned with other pods?</p>
| <p>A Kubernetes pod runs an application as a regular container, which is basically a regular process on the node. So yes, without limits a single JVM can saturate the node's CPUs.</p>
<p>Nevertheless, you can (and should) limit the maximum resources that can be used:
for each container you can specify resource limits (preventing consumption of all resources) and requests (helping with optimal scheduling) like so:</p>
<pre><code>containers:
...
- image: ...
resources:
limits:
cpu: 2500m
memory: 500Mi
requests:
cpu: 300m
memory: 500Mi
</code></pre>
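<p>If you can't guarantee that every pod spec sets limits, a namespace-level <code>LimitRange</code> applies defaults to containers that omit them. A sketch (the name and values are illustrative):</p>

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:            # applied as "limits" when a container specifies none
      cpu: "1"
      memory: 512Mi
    defaultRequest:     # applied as "requests" when a container specifies none
      cpu: 200m
      memory: 256Mi
```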
|
<p>My ingress.yml file is below</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: "example-issuer"
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: example-service
servicePort: http
path: /
tls:
- secretName: example-tls-cert
hosts:
- example.com
</code></pre>
<p>After changing apiVersion: networking.k8s.io/v1beta1 to networking.k8s.io/v1, I am getting the below error.</p>
<p>error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend)</p>
| <p>Try the following:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: "example-issuer"
spec:
rules:
- host: example.com
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: example-service
port:
number: 80
tls:
- secretName: example-tls-cert
hosts:
- example.com
</code></pre>
|
<p>I am running FPM and nginx as two containers in one pod. My app is working and I can access it, but the browser does not render the CSS files. No errors in the console.
My deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
labels:
app: test
spec:
replicas: 1
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
volumes:
- name: shared-files
emptyDir: {}
- name: nginx-config-volume
configMap:
name: test
containers:
- image: test-php
name: app
ports:
- containerPort: 9000
protocol: TCP
volumeMounts:
- name: shared-files
mountPath: /var/appfiles
lifecycle:
postStart:
exec:
command: ['sh', '-c', 'cp -r /var/www/* /var/appfiles']
- image: nginx
name: nginx
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- name: shared-files
mountPath: /var/appfiles
- name: nginx-config-volume
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
</code></pre>
<p>Nginx config:</p>
<pre><code>events {
}
http {
server {
listen 80;
root /var/appfiles/;
index index.php index.html index.htm;
# Logs
access_log /var/log/nginx/tcc-webapp-access.log;
error_log /var/log/nginx/tcc-webapp-error.log;
location / {
# try_files $uri $uri/ =404;
# try_files $uri $uri/ /index.php?q=$uri&$args;
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
}
</code></pre>
<p>I can open the page in the browser and I can see all components, links, buttons and so on, but the page is not styled and it looks like the CSS is not loaded.</p>
| <p>In order to resolve your issue, make sure the ConfigMap name referenced in your deployment matches the ConfigMap you actually create (here <code>nginx-configmap</code>, while the deployment references <code>test</code>); the nginx configuration in that ConfigMap can be as follows:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: "nginx-configmap"
data:
nginx.conf: |
server {
listen 80;
server_name _;
charset utf-8;
root /var/appfiles/;
access_log /var/log/nginx/tcc-webapp-access.log;
error_log /var/log/nginx/tcc-webapp-error.log;
location / {
index index.php;
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
</code></pre>
<p>You can find <a href="https://medium.com/flant-com/stateful-app-files-in-kubernetes-d015311e5e6b" rel="nofollow noreferrer">the medium article</a> useful for you.</p>
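<p>One more thing worth checking when a page loads but stylesheets are ignored: nginx decides the <code>Content-Type</code> of static files from <code>mime.types</code>, and browsers refuse to apply CSS served with the wrong type. If you mount a full <code>nginx.conf</code> (as in the question), make sure the <code>http</code> block includes something like:</p>

```nginx
http {
    include       /etc/nginx/mime.types;   # maps .css -> text/css, .js -> application/javascript
    default_type  application/octet-stream;
    # ... server blocks ...
}
```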
|
<p>Commands used:</p>
<pre><code>git clone https://github.com/helm/charts.git
</code></pre>
<hr />
<pre><code>cd charts/stable/prometheus
</code></pre>
<hr />
<pre><code>helm install prometheus . --namespace monitoring --set rbac.create=true
</code></pre>
<p>After running the 3rd command I got the below error:</p>
<p><a href="https://i.stack.imgur.com/XCVd0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XCVd0.png" alt="enter image description here" /></a></p>
<p>Can anyone please help me out with this issue?</p>
<p>Thanks...</p>
| <p>On the <a href="https://github.com/helm/charts/tree/master/stable/prometheus#prometheus" rel="nofollow noreferrer">GitHub page</a> you can see that this repo it deprecated:</p>
<blockquote>
<p>DEPRECATED and moved to <a href="https://github.com/prometheus-community/helm-charts" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts</a></p>
</blockquote>
<p>So I'd recommend to add and use <a href="https://github.com/prometheus-community/helm-charts" rel="nofollow noreferrer">Prometheus Community Kubernetes Helm Charts</a> repository:</p>
<pre><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
</code></pre>
<p>Then you can install Prometheus with your flags using following command:</p>
<pre><code>helm install prometheus prometheus-community/prometheus --namespace monitoring --set rbac.create=true
</code></pre>
<p>If you really want to stick to the version from the old repository, you don't have to clone repo to your host. Just <a href="https://github.com/helm/charts/tree/master/stable/prometheus#prometheus" rel="nofollow noreferrer">follow steps from the repository page</a>. Make sure you have <code>https://charts.helm.sh/stable</code> repository added to the helm by running <code>helm repo list</code>. If not, add it using command:</p>
<pre><code>helm repo add stable https://charts.helm.sh/stable
</code></pre>
<p>Then, you can install your chart:</p>
<pre><code>helm install prometheus stable/prometheus --namespace monitoring --set rbac.create=true
</code></pre>
|
<p>I am using Docker Desktop version 3.6.0 which has Kubernetes 1.21.3.</p>
<p>I am following this tutorial to get started on Istio</p>
<p><a href="https://istio.io/latest/docs/setup/getting-started/" rel="nofollow noreferrer">https://istio.io/latest/docs/setup/getting-started/</a></p>
<p>Istio is properly installed as per the instructions.</p>
<p>Now whenever I try to apply the Istio configuration by issuing the command <code>kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml</code>, I get the following error:</p>
<pre><code>unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "VirtualService" in version "networking.istio.io/v1alpha3"
</code></pre>
<p>I checked on the internet and found that the Gateway and VirtualService resources are missing.</p>
<p>If I run <code>kubectl get crd</code>, I get "No resources found".</p>
<p>Content of bookinfo-gateway.yaml:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: bookinfo-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bookinfo
spec:
hosts:
- "*"
gateways:
- bookinfo-gateway
http:
- match:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
route:
- destination:
host: productpage
port:
number: 9080
</code></pre>
| <p>The CRDs for Istio should be installed as part of the <code>istioctl install</code> process; I'd recommend re-running the install if you don't have them available.</p>
<pre><code>>>> ~/.istioctl/bin/istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete
</code></pre>
<p><code>kubectl get po -n istio-system</code> should look like:</p>
<pre><code>>>> kubectl get po -n istio-system
NAME READY STATUS RESTARTS AGE
istio-egressgateway-7ddb45fcdf-ctnp5 1/1 Running 0 3m20s
istio-ingressgateway-f7cdcd7dc-zdqhg 1/1 Running 0 3m20s
istiod-788ff675dd-9p75l 1/1 Running 0 3m32s
</code></pre>
<p>Otherwise your initial install has gone wrong somewhere.</p>
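<p>After re-running the install, you can confirm the CRDs are registered before applying the manifest again:</p>

```shell
# List the resource kinds that the Istio networking API group now serves
kubectl api-resources --api-group=networking.istio.io

# Or check for the two specific CRDs the manifest needs
kubectl get crd gateways.networking.istio.io virtualservices.networking.istio.io
```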
|
<p>Is there a way to load a kernel module ("modprobe nfsd" in my case) automatically after starting/upgrading nodes in GKE? We are running an NFS server pod on our Kubernetes cluster and it dies after every GKE upgrade.</p>
<p>I tried both the COS and Ubuntu images; neither of them seems to have nfsd loaded by default.</p>
<p>Also tried something like this, but it seems it does not do what it is supposed to do:</p>
<pre><code>kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: nfsd-modprobe
labels:
app: nfsd-modprobe
spec:
template:
metadata:
labels:
app: nfsd-modprobe
spec:
hostPID: true
containers:
- name: nfsd-modprobe
image: gcr.io/google-containers/startup-script:v1
imagePullPolicy: Always
securityContext:
privileged: true
env:
- name: STARTUP_SCRIPT
value: |
#! /bin/bash
modprobe nfs
modprobe nfsd
while true; do sleep 1; done
</code></pre>
| <p>I faced the same issue. The existing answer is correct; I want to expand it with a working example of an NFS pod within a Kubernetes cluster that has the capabilities and libraries to load the required modules.</p>
<p>It has two important parts:</p>
<ul>
<li>privileged mode</li>
<li>mounted <code>/lib/modules</code> directory within the container to use it</li>
</ul>
<p><code>nfs-server.yaml</code></p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: nfs-server-pod
spec:
containers:
- name: nfs-server-container
image: erichough/nfs-server
securityContext:
privileged: true
env:
- name: NFS_EXPORT_0
value: "/test *(rw,no_subtree_check,insecure,fsid=0)"
volumeMounts:
- mountPath: /lib/modules # mounting modules into container
name: lib-modules
readOnly: true # make sure it's readonly
- mountPath: /test
name: export-dir
volumes:
- hostPath: # using hostpath to get modules from the host
path: /lib/modules
type: Directory
name: lib-modules
- name: export-dir
emptyDir: {}
</code></pre>
<p>Reference which helped as well - <a href="https://github.com/ehough/docker-nfs-server/blob/develop/doc/feature/auto-load-kernel-modules.md" rel="nofollow noreferrer">Automatically load required kernel modules</a>.</p>
|
<p>I have a python program that fetches the client IP using request.client.host headers and FastAPI.
This program is running on a Kubernetes pod (ip-pod).
I have another gateway API implemented using KrakenD, and this runs on another pod in the Kubernetes cluster.
The Kubernetes yaml files for both (ip-pod and KrakenD) have the property <code>externalTrafficPolicy: Local</code>.
I am unable to retrieve the real IP of the user, and this could be because KrakenD is not allowing the real IP to reach the ip-pod.
I have tested the program by exposing the ip-pod to the internet using <code>type: LoadBalancer</code> and that way it gives the correct client IP. But when I use the KrakenD gateway, the IP is something different (a private IP).</p>
| <p>You can use the <code>no-op</code> encoding with KrakenD, which will forward the request to the backend untouched:</p>
<p><a href="https://www.krakend.io/docs/endpoints/no-op/" rel="nofollow noreferrer">https://www.krakend.io/docs/endpoints/no-op/</a></p>
<p>You can also check the header-forwarding config in your KrakenD YAML:</p>
<pre><code>"headers_to_pass":[
"*"
]
</code></pre>
<p>If the client IP is present in a header, KrakenD will forward it to the backend.</p>
<p><a href="https://www.krakend.io/docs/endpoints/parameter-forwarding/#sending-all-client-headers-to-the-backends" rel="nofollow noreferrer">https://www.krakend.io/docs/endpoints/parameter-forwarding/#sending-all-client-headers-to-the-backends</a></p>
<p>Reference:</p>
<pre><code>{
"endpoint": "/api/v1/{uid}/user",
"method": "GET",
"headers_to_pass": [ "*" ],
"querystring_params": [ "*" ],
"output_encoding": "no-op",
"concurrent_calls": 1,
"backend": [
{
"url_pattern": "/api/{uid}/user",
"encoding": "no-op",
"host": [
"http://IP:Port"
]
}
]
}
</code></pre>
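<p>On the application side you can then prefer the forwarded header over the direct peer address. A minimal sketch of that logic (the helper name is illustrative and not KrakenD- or FastAPI-specific; in FastAPI you would pass <code>request.headers</code> and <code>request.client.host</code>):</p>

```python
def client_ip(headers: dict, peer_host: str) -> str:
    """Return the first X-Forwarded-For entry if present, else the direct peer."""
    forwarded = headers.get("x-forwarded-for", "")
    if forwarded:
        # The left-most entry is the original client; later entries are proxies.
        return forwarded.split(",")[0].strip()
    return peer_host

# The gateway pod is the direct peer, but the header carries the real client.
print(client_ip({"x-forwarded-for": "203.0.113.7, 10.0.0.2"}, "10.0.0.2"))  # 203.0.113.7
```

<p>Note that this is only trustworthy when the header is set by an infrastructure component you control, since clients can spoof it otherwise.</p>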
|
<p>I am trying to execute the <code>kubectl</code> command using a python script but keep getting an error. I have a requirement to execute the <code>kubectl</code> command to create a pod and check the pod log for any failure.</p>
<p>What am I doing wrong here?</p>
<pre><code>import subprocess
command = 'kubectl apply -f deployment.yaml'
check_output= subprocess.check_output(command)
print(check_output)
</code></pre>
<p>error</p>
<pre><code>Traceback (most recent call last):
File "/usr/bin/cma-scripts/kubectl.py", line 6, in <module>
check_output= subprocess.check_output(command)
File "/usr/local/lib/python3.9/subprocess.py", line 424, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/local/lib/python3.9/subprocess.py", line 505, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/local/lib/python3.9/subprocess.py", line 951, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/lib/python3.9/subprocess.py", line 1821, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'kubectl apply -f deployment.yaml'
</code></pre>
| <p>You can execute <code>kubectl</code> commands with Python, but you can also use the <a href="https://github.com/kubernetes-client/python" rel="noreferrer">Python client</a> for the Kubernetes API.</p>
<p>Below I will give examples for both options.</p>
<h3>Executing kubectl commands with Python.</h3>
<p>You can use the <a href="https://docs.python.org/3.7/library/subprocess.html" rel="noreferrer">subprocess</a> module:</p>
<pre><code>$ cat script-1.py
#!/usr/bin/python3.7
import subprocess
subprocess.run(["kubectl", "apply", "-f", "deployment.yaml"])
$ ./script-1.py
deployment.apps/web-app created
</code></pre>
<p>You can also use the <a href="https://docs.python.org/3.7/library/os.html" rel="noreferrer">os</a> module:</p>
<pre><code>$ cat script-1.py
#!/usr/bin/python3.7
import os
os.system("kubectl apply -f deployment.yaml")
$ ./script-1.py
deployment.apps/web-app created
</code></pre>
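<p>A side note on why the original call failed: <code>subprocess.check_output('kubectl apply -f deployment.yaml')</code> treats the whole string as the name of a single executable, hence the <code>FileNotFoundError</code>. Splitting the command into an argument list avoids that (demonstrated here with a harmless <code>echo</code> in place of <code>kubectl</code>):</p>

```python
import shlex
import subprocess

command = "echo hello"       # stand-in for "kubectl apply -f deployment.yaml"
args = shlex.split(command)  # -> ["echo", "hello"]
output = subprocess.check_output(args, text=True)
print(output.strip())
```

<p>Alternatively, passing <code>shell=True</code> lets the shell parse the string, but the argument-list form is safer.</p>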
<h3>Using the Python client for the kubernetes API.</h3>
<p>As previously mentioned, you can also use a Python client to create a Deployment.</p>
<p>Based on the <a href="https://github.com/kubernetes-client/python/blob/master/examples/deployment_create.py" rel="noreferrer">deployment_create.py</a> example, I've created a script to deploy <code>deployment.yaml</code> in the <code>default</code> Namespace:</p>
<pre><code>$ cat script-2.py
#!/usr/bin/python3.7
from os import path
import yaml
from kubernetes import client, config
def main():
config.load_kube_config()
with open(path.join(path.dirname(__file__), "deployment-1.yaml")) as f:
dep = yaml.safe_load(f)
k8s_apps_v1 = client.AppsV1Api()
resp = k8s_apps_v1.create_namespaced_deployment(
body=dep, namespace="default")
print("Deployment created. status='%s'" % resp.metadata.name)
if __name__ == '__main__':
main()
$ ./script-2.py
Deployment created. status='web-app'
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE
web-app 1/1 1 1
</code></pre>
|
<p>My frontend runs on nginx, and I'm serving a bunch of <code>.chunk.js</code> files that are built from react.</p>
<p>Every time I update my frontend, I rebuild the docker image and update the kubernetes deployment. However, <strong>some users might still be trying to fetch the old js files.</strong></p>
<p>I would <em>like</em> Google Cloud CDN to serve the stale, cached version of the old files, however it seems that <a href="https://cloud.google.com/cdn/docs/serving-stale-content" rel="nofollow noreferrer">it will only serve stale content in the event of errors or the server being unreachable, not a 404.</a></p>
<p>Cloud CDN also has something called "negative caching", <a href="https://cloud.google.com/cdn/docs/using-negative-caching" rel="nofollow noreferrer">however that seems to be for deciding how long a 404 is cached.</a></p>
<h1></h1>
<p><strong>--> What's the best way to temporarily serve old files on Google Cloud? Can this be done with Cloud CDN?</strong></p>
<p>(Ideally without some funky build process that requires deploying the old files as well)</p>
| <p>I have this issue as well; if you find a way to set it up on Google CDN, please let me know.</p>
<p>Here is my Workaround:</p>
<ol>
<li><p>Choose <code>Use origin settings based on Cache-Control headers</code></p>
</li>
<li><p>Since most browsers cache static JS assets, I reduce my <code>Cache-Control</code> header for <code>.html</code> to be reasonably low or normal, like 5-60 minutes, whereas the javascript files have a much longer cache time, like a week.</p>
</li>
</ol>
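<p>With <code>Use origin settings based on Cache-Control headers</code>, the split in the steps above might look like this at the origin (nginx syntax; the durations are illustrative):</p>

```nginx
# Hashed chunk files: cache for a week; a new build produces new file names.
location ~* \.(js|css)$ {
    add_header Cache-Control "public, max-age=604800, immutable";
}

# The HTML entry point: keep it short so new deployments are picked up quickly.
location = /index.html {
    add_header Cache-Control "public, max-age=300";
}
```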
<p>Some Context: After deployment, if google serves the old index.html from its CDN cache, the user's browser will request old JS files. If it's time for those JS files to be re-validated, google will see they are now 404s, and send a 404 response instead of the JS file. The workaround above makes sure that the JS files are highly likely to be available in the cache, while the index.html is updated more frequently.</p>
<p><strong>Update:</strong> This works... but there appears to be a caveat: if the page isn't a frequently trafficked page, google will eventually return a 404 on the javascript file before the specified time. Even though <a href="https://cloud.google.com/cdn/docs/caching#:%7E:text=Cloud%20CDN%20revalidates%20cached%20objects,if%20it%20were%202%2C592%2C000%20seconds" rel="nofollow noreferrer">google docs</a> state it won't get revalidated for 30 days, this appears to be false.</p>
<p><strong>Update 2</strong>: Google's Response:</p>
<blockquote>
<p>The expiration doc says "Cloud CDN revalidates cached objects that are older than 30 days." It doesn't say that Google won't revalidate prior to 30 days. Things fall out of cache arbitrarily quickly and max-age is just an upper bound.</p>
</blockquote>
|
<p>Whenever a job run fails I want to suspend the cronjob so that no further jobs are started. Is there any possible way?
k8s version: 1.10</p>
| <p>You can configure it simply using <code>suspend: true</code>:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: my-job
spec:
suspend: true
parallelism: 2
completions: 10
template:
spec:
containers:
- name: my-container
image: busybox
command: ["sleep", "5"]
restartPolicy: Never
</code></pre>
<p>Any currently running jobs will complete but future jobs will be suspended.</p>
<p>Read more at : <a href="https://kubernetes.io/blog/2021/04/12/introducing-suspended-jobs/" rel="nofollow noreferrer">https://kubernetes.io/blog/2021/04/12/introducing-suspended-jobs/</a></p>
<p>If you are on an older version, you can use <code>backoffLimit: 1</code>:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: error
spec:
backoffLimit: 1
template:
</code></pre>
<p><code>.spec.backoffLimit</code> limits the number of times a Pod is retried when running inside a Job.</p>
<p>If you can't suspend it, you can still make sure a failed job won't keep re-running by using the following settings:</p>
<blockquote>
<p><strong>backoffLimit</strong> means the number of times it will retry before it is
considered failed. The default is 6.</p>
<p><strong>concurrencyPolicy</strong> means it will run 0 or 1 times, but
not more.</p>
<p><strong>restartPolicy</strong>: Never means it won't restart on failure.</p>
</blockquote>
|
<p>I'm trying to create an Istio ingress gateway (istio: 1.9.1, EKS: 1.18) with a duplicate <code>targetPort</code> like this:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
name: istio
spec:
components:
ingressGateways:
- name: ingressgateway
k8s:
service:
ports:
- port: 80
targetPort: 8080
name: http2
- port: 443
name: https
targetPort: 8080
</code></pre>
<p>but I get the error:</p>
<pre><code>- Processing resources for Ingress gateways.
✘ Ingress gateways encountered an error: failed to update resource with server-side apply for obj Deployment/istio-system/istio: failed to create typed patch object: errors:
.spec.template.spec.containers[name="istio-proxy"].ports: duplicate entries for key [containerPort=8080,protocol="TCP"]
</code></pre>
<p>I am running Istio in EKS so we terminate TLS at the NLB, so all traffic (<code>http</code> and <code>https</code>) should go to the pod on the same port (8080)</p>
<p>Any ideas how I can solve this issue?</p>
| <p>In the end I had to use different <code>targetPort</code> values to get this working.</p>
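<p>For reference, a sketch of the kind of configuration that avoids the duplicate-key error, assuming the default Istio proxy ports of 8080 for HTTP and 8443 for HTTPS (adjust to your setup):</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
spec:
  components:
    ingressGateways:
      - name: ingressgateway
        k8s:
          service:
            ports:
              - port: 80
                targetPort: 8080
                name: http2
              - port: 443
                targetPort: 8443
                name: https
</code></pre>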
|
<p>I created a private Docker registry POD in my Kubernetes cluster.</p>
<p>Here are the relevant settings for the pod:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
private-repository-k8s-686564966d-8grr8 1/1 Running 2 (7h10m ago) 9d
$ kubectl describe pods private-repository-k8s-686564966d-8grr8
...
Containers:
private-repository-k8s:
Container ID: docker://faadba7513c6a1bae6ab96480fcc230ae94a1c8e27c20928f3f93bfd2e7b7714
Image: registry:2
Image ID: docker-pullable://registry@sha256:265d4a5ed8bf0df27d1107edb00b70e658ee9aa5acb3f37336c5a17db634481e
Port: 5000/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 05 Oct 2021 14:49:07 -0700
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Sun, 26 Sep 2021 16:36:48 -0700
Finished: Tue, 05 Oct 2021 14:48:43 -0700
Ready: True
Restart Count: 2
Environment:
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/registry.crt
REGISTRY_HTTP_TLS_KEY: /certs/registry.key
Mounts:
/certs from certs-vol (rw)
/var/lib/registry from registry-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mqknd (ro)
...
Volumes:
certs-vol:
Type: HostPath (bare host directory volume)
Path: /opt/certs
HostPathType: Directory
registry-vol:
Type: HostPath (bare host directory volume)
Path: /opt/registry
HostPathType: Directory
kube-api-access-mqknd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
</code></pre>
<p>I generated the certs on the master as follows:</p>
<pre><code>$ sudo openssl req -newkey rsa:4096 -nodes -sha256 -keyout \
/opt/certs/registry.key -x509 -days 365 -out /opt/certs/registry.crt
</code></pre>
<p>Then the folder with the crt and key files are shared via NFS mount across all of the workers.</p>
<p>When I try to push an image from outside the k8s cluster and I get the following error:</p>
<pre><code>$ docker push k8s-master:31320/nginx:1.17 The push refers to repository [k8s-master:31320/nginx]
Get "https://k8s-master:31320/v2/": x509: certificate is not valid for any names, but wanted to match k8s-master
</code></pre>
<p>The logs from the POD show this:</p>
<pre><code>$ kubectl logs private-repository-k8s-686564966d-8grr8 -f
...
2021/10/06 05:06:02 http: TLS handshake error from 10.108.82.192:28058: remote error: tls: bad certificate
</code></pre>
<p>This proves to me that the request is hitting the pod, but the TLS certs weren't set up properly.</p>
<p>I'm trying to push the Docker image from my macOS client to this private Docker registry on a k8s cluster (each node in the cluster running Ubuntu).</p>
<p>I'm a bit shaky on the TLS stuff, but my understanding is that I'm using a self-signed cert (which should be fine as I'm only accessing this from my internal network). But I assume I need to do something from my Mac client to setup the TLS certs in order to access the registry. I have already tried adding the crt and key files to my Keychain and that didn't work. I cannot figure out what to do here.</p>
<p>I'm using these instructions:
<a href="https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/" rel="nofollow noreferrer">https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/</a></p>
<p>I'm running k8s v1.22.0. I have 4 VMs running Ubuntu 20.04.2 LTS inside a single rack server using VMware ESXi: 1 master, 3 worker VMs. I'm trying to push the docker image from my MacBook.</p>
| <p>First, I found the CN (Common Name) was not setup property in the certificate (reference: <a href="https://github.com/docker/for-linux/issues/248" rel="nofollow noreferrer">https://github.com/docker/for-linux/issues/248</a>). Once I regenerated the certificate I hit this issue:</p>
<pre><code>$ docker push k8s-master:31320/nginx:1.17 The push refers to repository [k8s-master:31320/nginx]
Get "https://k8s-master:31320/v2/": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
</code></pre>
<p>Then I found I needed to add SAN (subjectAltName) to the certificate. I did this as follows:</p>
<pre><code>$ sudo openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/certs/registry.key -x509 -days 365 -out /opt/certs/registry.crt -addext "subjectAltName = DNS:k8s-master, DNS:k8s-master.local"
</code></pre>
<p>I restarted the registry pod and then I ran into this error:</p>
<pre><code>$ docker push k8s-master:31320/nginx:1.17
The push refers to repository [k8s-master:31320/nginx]
Get "https://k8s-master:31320/v2/": x509: certificate signed by unknown authority
</code></pre>
<p>At this point, I realized MacOS client needed the certificate installed into the Keychain. I downloaded the registry.crt file and install it in Keychain (drag and drop). I also had to go into the Keychain, double-clicked on the certificate, opened the "Trust" drop down and selected "Always Trust". Then I restarted Docker on my MacOS.</p>
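<p>As an alternative to the drag-and-drop approach, the same trust setup can be scripted on macOS from the command line (the certificate path below is illustrative):</p>
<pre><code>$ sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain ~/Downloads/registry.crt
</code></pre>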
<p>At this point push started to work:</p>
<pre><code>$ docker push k8s-master:31320/nginx:1.17
The push refers to repository [k8s-master:31320/nginx]
65e1ea1dc98c: Pushed
88891187bdd7: Pushed
6e109f6c2f99: Pushed
0772cb25d5ca: Pushed
525950111558: Pushed
476baebdfbf7: Pushed
1.17: digest: sha256:39065444eb1acb2cfdea6373ca620c921e702b0f447641af5d0e0ea1e48e5e04 size: 1570
</code></pre>
|
<p>I'm working on an umbrella chart which has several child charts.
On the top level, I have a file values-ext.yaml which contains some values which are used in the child charts.</p>
<pre><code>sql:
common:
host: <SQL Server host>
user: <SQL user>
pwd: <SQL password>
</code></pre>
<p>These settings are read in configmap.yaml of a child chart. In this case, a SQL Server connection string is built up from these settings.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: "childchart-config"
labels:
app: some-app
chart: my-child-chart
data:
ConnectionStrings__DbConnection: Server={{ .Values.sql.common.host }};Database=some-db
</code></pre>
<p>I test the chart from the umbrella chart dir like this: <code>helm template --values values-ext.yaml .</code>
It gives me this error:</p>
<pre><code>executing "my-project/charts/child-chart/templates/configmap.yaml" at <.Values.sql.common.host>:
nil pointer evaluating interface {}.host
</code></pre>
<p>So, it clearly can't find the values that I want to read from the values-ext.yaml file.
I should be able to pass in additional values files like this, right?
I also tried with <code>$.Values.sql.common.host</code> but it doesn't seem to matter.</p>
<p>What's wrong here?</p>
| <p>When the child charts are rendered, their <code>.Values</code> are <a href="https://docs.helm.sh/docs/topics/charts/#scope-dependencies-and-values" rel="nofollow noreferrer">a subset of the parent chart values</a>. Using <code>$.Values</code> to "escape" the current scope doesn't affect this at all. So within <code>child-chart</code>, .Values in effect refers to what <code>.Values.child-chart</code> would have referred to in the parent.</p>
<p>There are three main things you can do about this:</p>
<ol>
<li><p>Move the settings down one level in the YAML file; you'd have to repeat them for each child chart, but they could be used unmodified.</p>
<pre class="lang-yaml prettyprint-override"><code>child-chart:
sql:
common: { ... }
</code></pre>
</li>
<li><p>Move the settings under a <a href="https://docs.helm.sh/docs/topics/charts/#global-values" rel="nofollow noreferrer"><code>global:</code></a> key. All of the charts that referenced this value would have to reference <code>.Values.global.sql...</code>, but it would be consistent across the parent and child charts.</p>
<pre class="lang-yaml prettyprint-override"><code>global:
sql:
common: { ... }
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>ConnectionStrings__DbConnection: Server={{ .Values.global.sql.common.host }};...
</code></pre>
</li>
<li><p>Create the ConfigMap in the parent chart and indirectly refer to it in the child charts. It can help to know that all of the charts will be installed as part of the same Helm release, and if you're using the standard <code>{{ .Release.Name }}-{{ .Chart.Name }}-suffix</code> naming pattern, the <code>.Release.Name</code> will be the same in all contexts.</p>
<pre class="lang-yaml prettyprint-override"><code># in a child chart, that knows it's being included by the parent
env:
- name: DB_CONNECTION
valueFrom:
configMapKeyRef:
name: '{{ .Release.Name }}-parent-dbconfig'
key: ConnectionStrings__DbConnection
</code></pre>
</li>
</ol>
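<p>For the third option, the parent chart defines the ConfigMap itself; a minimal sketch, reusing the connection-string template from the child chart (the file name is illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code># templates/configmap.yaml in the parent (umbrella) chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: '{{ .Release.Name }}-parent-dbconfig'
data:
  ConnectionStrings__DbConnection: Server={{ .Values.sql.common.host }};Database=some-db
</code></pre>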
|
<p>We have a basic AKS cluster setup, and we need to whitelist the AKS outbound IP address in one of our services. I scanned the AKS cluster settings in the Azure portal but was not able to find any outbound IP address.</p>
<p>How do we get the outbound IP?</p>
<p>Thanks -Nen</p>
| <p>If you are using an AKS cluster with a <strong>Standard SKU Load Balancer</strong> i.e.</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Standard
</code></pre>
<p>and the <code>outboundType</code> is set to <code>loadBalancer</code> i.e.</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
</code></pre>
<p>then you should be able to fetch the outbound IP addresses for the AKS cluster like (mind the capital <code>IP</code>):</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id
[
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MC_xxxxxx_xxxxxx_xxxxx/providers/Microsoft.Network/publicIPAddresses/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
]
# Using $PUBLIC_IP_RESOURCE_ID obtained from the last step
$ az network public-ip show --ids $PUBLIC_IP_RESOURCE_ID --query ipAddress -o tsv
xxx.xxx.xxx.xxx
</code></pre>
<p>For more information please check <a href="https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard" rel="noreferrer">Use a public Standard Load Balancer in Azure Kubernetes Service (AKS)</a></p>
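<p>If helpful, the two steps above can be combined into a single command (illustrative; mind the quoting):</p>
<pre><code>$ az network public-ip show \
    --ids $(az aks show -g $RG -n akstest --query "networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id" -o tsv) \
    --query ipAddress -o tsv
</code></pre>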
<hr />
<p>If you are using an AKS cluster with a <strong>Basic SKU Load Balancer</strong> i.e.</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Basic
</code></pre>
<p>and the <code>outboundType</code> is set to <code>loadBalancer</code> i.e.</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
</code></pre>
<p>Load Balancer Basic chooses a single frontend to be used for outbound flows when multiple (public) IP frontends are candidates for outbound flows. This <strong>selection is not configurable</strong>, and you should <strong>consider the selection algorithm to be</strong> <em><strong>random</strong></em>. This public IP address is only valid for the lifespan of that resource. If you delete the Kubernetes <code>LoadBalancer</code> service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can <a href="https://learn.microsoft.com/en-us/azure/aks/egress" rel="noreferrer">create and use a static public IP address</a>, as @nico-meisenzahl mentioned.</p>
<p>The static IP address works only as long as you have one Service on the AKS cluster (with a Basic Load Balancer). When multiple addresses are configured on the Azure Load Balancer, any of these public IP addresses are a candidate for outbound flows, and one is selected at random. Thus every time a Service gets added, you will have to add that corresponding IP address to the whitelist which isn't very scalable. [<a href="https://learn.microsoft.com/en-us/azure/aks/egress#create-a-service-with-the-static-ip" rel="noreferrer">Reference</a>]</p>
<hr />
<p>In the latter case, we would recommend setting <code>outBoundType</code> to <code>userDefinedRouting</code> at the time of AKS cluster creation. If <code>userDefinedRouting</code> is set, AKS won't automatically configure egress paths. The egress setup must be done by you.</p>
<p>The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.</p>
<h5>Load balancer creation with userDefinedRouting</h5>
<p>AKS clusters with an outbound type of UDR receive a standard load balancer (SLB) only when the first Kubernetes service of type 'loadBalancer' is deployed. The load balancer is configured with a public IP address for inbound requests and a backend pool for <em>inbound</em> requests. Inbound rules are configured by the Azure cloud provider, but no <strong>outbound public IP address or outbound rules</strong> are configured as a result of having an outbound type of UDR. Your UDR will still be the only source for egress traffic.</p>
<p>Azure load balancers <a href="https://azure.microsoft.com/pricing/details/load-balancer/" rel="noreferrer">don't incur a charge until a rule is placed</a>.</p>
<p>[<strong>!! Important:</strong> Using outbound type is an advanced networking scenario and requires proper network configuration.]</p>
<p>Here's instructions to <a href="https://learn.microsoft.com/en-us/azure/aks/egress-outboundtype#deploy-a-cluster-with-outbound-type-of-udr-and-azure-firewall" rel="noreferrer">Deploy a cluster with outbound type of UDR and Azure Firewall</a></p>
|
<p>I have a similar issue like <a href="https://stackoverflow.com/questions/66772611/kubernetes-ingress-nginx-controller-is-not-found">Kubernetes Ingress Nginx Controller is Not Found</a>.</p>
<p>I'm trying to deploy the project locally. I get a 404 for the backend <a href="http://posts.com/posts" rel="nofollow noreferrer">http://posts.com/posts</a> and for the frontend <a href="http://posts.com" rel="nofollow noreferrer">http://posts.com</a>.
It only works if I use the URL <a href="http://posts.com:32685/posts" rel="nofollow noreferrer">http://posts.com:32685/posts</a> for the backend; <a href="http://posts.com:32685" rel="nofollow noreferrer">http://posts.com:32685</a> is not working for the frontend.
I googled a lot and I am stuck :(</p>
<p><strong>ingress-srv.yaml</strong>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: posts.com
http:
paths:
- path: /posts/create
pathType: Prefix
backend:
service:
name: blog-posts-clusterip-srv
port:
number: 4000
- path: /posts
pathType: Prefix
backend:
service:
name: blog-query-srv
port:
number: 4002
- path: /posts/?(.*)/comments
pathType: Prefix
backend:
service:
name: blog-comments-srv
port:
number: 4001
- path: /?(.*)
pathType: Prefix
backend:
service:
name: blog-client-srv
port:
number: 3000
</code></pre>
<p><strong>posts-depl.yaml:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blog-posts-depl
spec:
replicas: 1
selector:
matchLabels:
app: blog-posts
template:
metadata:
labels:
app: blog-posts
spec:
containers:
- name: blog-posts
image: meylisday/blog-posts
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: blog-posts-clusterip-srv
spec:
selector:
app: blog-posts
ports:
- name: blog-posts
protocol: TCP
port: 4000
targetPort: 4000
</code></pre>
<p><strong>posts-svr.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: blog-posts-srv
spec:
type: NodePort
selector:
app: blog-posts
ports:
- name: blog-posts
protocol: TCP
port: 4000
targetPort: 4000
</code></pre>
<p><strong>k get all</strong></p>
<pre><code> NAME READY STATUS RESTARTS AGE
pod/blog-client-depl-64ff878fdf-cbd6d 1/1 Running 0 29m
pod/blog-comments-depl-c7c998884-c4r8m 1/1 Running 0 41m
pod/blog-event-bus-depl-7f67777497-4tjbs 1/1 Running 0 40m
pod/blog-moderation-depl-666bdccc66-pzqkc 1/1 Running 0 38m
pod/blog-posts-depl-5f66df48c4-dcqgn 1/1 Running 0 37m
pod/blog-query-depl-658d489d7c-4sds5 1/1 Running 0 36m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/blog-client-srv ClusterIP 10.111.45.128 <none> 3000/TCP 17h
service/blog-comments-srv ClusterIP 10.98.76.124 <none> 4001/TCP 4d16h
service/blog-event-bus-srv ClusterIP 10.101.244.228 <none> 4005/TCP 42d
service/blog-moderation-srv ClusterIP 10.109.100.160 <none> 4003/TCP 4d16h
service/blog-posts-clusterip-srv ClusterIP 10.98.80.155 <none> 4000/TCP 42d
service/blog-posts-srv NodePort 10.104.129.247 <none> 4000:32685/TCP 43d
service/blog-query-srv ClusterIP 10.103.8.73 <none> 4002/TCP 4d16h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 44d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/blog-client-depl 1/1 1 1 17h
deployment.apps/blog-comments-depl 1/1 1 1 4d16h
deployment.apps/blog-event-bus-depl 1/1 1 1 4d18h
deployment.apps/blog-moderation-depl 1/1 1 1 4d16h
deployment.apps/blog-posts-depl 1/1 1 1 5d17h
deployment.apps/blog-query-depl 1/1 1 1 4d16h
</code></pre>
<p><strong>k get service -n ingress-nginx</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.99.133.41 localhost 80:30383/TCP,443:31513/TCP 20h
ingress-nginx-controller-admission ClusterIP 10.107.76.88 <none> 443/TCP 20h
</code></pre>
<p><strong>kubectl describe ingress ingress-srv</strong></p>
<pre><code>Name: ingress-srv
Namespace: default
Address: localhost
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
posts.com
/posts/create blog-posts-clusterip-srv:4000 (10.1.0.183:4000)
/posts blog-query-srv:4002 (10.1.0.184:4002)
/posts/?(.*)/comments blog-comments-srv:4001 (10.1.0.180:4001)
/?(.*) blog-client-srv:3000 (10.1.0.185:3000)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 30m (x15 over 5h28m) nginx-ingress-controller Scheduled for sync
Normal Sync 24m (x4 over 25m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p><strong>k get ingress ingress-srv</strong></p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-srv <none> posts.com localhost 80 20h
</code></pre>
<p>The entire project is available here <a href="https://github.com/meylisday/blog-react-nodejs" rel="nofollow noreferrer">https://github.com/meylisday/blog-react-nodejs</a>
I'll be glad any hint. Thank you.</p>
| <p>I solved my problem: I needed to stop the local Apache web server, which was occupying port 80.</p>
|
<p>I have an on-premise Kubernetes cluster v1.22.1 (1 master &amp; 2 worker nodes) and wanted to run Jenkins slave agents on this cluster using the Kubernetes plugin for Jenkins. Jenkins is currently hosted outside of the K8s cluster, running 2.289.3. For the Kubernetes credentials in the Jenkins cloud configuration, I created a new service account with the cluster role <code>cluster-admin</code> and provided the token secret text to Jenkins. The connection between Jenkins and Kubernetes is established successfully; however, when I run a Jenkins job to create pods in Kubernetes, the pods show an error and do not come online.</p>
<p>Below are the Kubernetes Logs.
<a href="https://i.stack.imgur.com/Kj6qm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Kj6qm.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/xVVg1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xVVg1.png" alt="enter image description here" /></a></p>
<p>Jenkins logs
<a href="https://i.stack.imgur.com/03cJw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/03cJw.png" alt="enter image description here" /></a>
Has anyone experienced such an issue when connecting from a Jenkins master installed outside the Kubernetes cluster?</p>
| <p>I understand that RootCAConfigMap publishes <code>kube-root-ca.crt</code> in each namespace for the default service account. Starting from Kubernetes version 1.22, RootCAConfigMap is enabled by default, and hence when creating pods this certificate for the default account is being used. Please find more info on bound service account tokens and projected volumes here:
<a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/</a></p>
<p>To stop automatically mounting the token volume for the default service account (or whichever service account is used to create the pods), simply set <code>automountServiceAccountToken: false</code> in the ServiceAccount config, which should then allow Jenkins to create slave pods on the Kubernetes cluster. I have tested this successfully in my on-premise cluster.</p>
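<p>For example (the service account name and namespace are illustrative; use the account referenced in your Jenkins cloud configuration):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-agent
  namespace: jenkins
automountServiceAccountToken: false
</code></pre>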
|
<p>I have a requirement to rewrite all URLs to lowercase.</p>
<p>E.g. <code>test.com/CHILD</code> to <code>test.com/child</code></p>
<p>Frontend application is developed on docker on azure kubernetes services. Ingress is controlled on nginx ingress controller.</p>
| <p>You can rewrite URLs using Lua as described in the <a href="https://www.rewriteguide.com/nginx-enforce-lower-case-urls/" rel="nofollow noreferrer">Enforce Lower Case URLs (NGINX)</a> article.</p>
<p>All we need to do is add the following configuration block to nginx:</p>
<pre><code>location ~ [A-Z] {
rewrite_by_lua_block {
ngx.redirect(string.lower(ngx.var.uri), 301);
}
}
</code></pre>
<p>I will show you how it works.</p>
<hr />
<p>First, I created an Ingress resource with the previously mentioned configuration:</p>
<pre><code>$ cat test-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/server-snippet: |
location ~ [A-Z] {
rewrite_by_lua_block {
ngx.redirect(string.lower(ngx.var.uri), 301);
}
}
spec:
rules:
- http:
paths:
- path: /app-1
pathType: Prefix
backend:
service:
name: app-1
port:
number: 80
$ kubectl apply -f test-ingress.yaml
ingress.networking.k8s.io/test-ingress created
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress <none> * <PUBLIC_IP> 80 58s
</code></pre>
<p>Then I created a sample <code>app-1</code> Pod and exposed it on port <code>80</code>:</p>
<pre><code>$ kubectl run app-1 --image=nginx
pod/app-1 created
$ kubectl expose pod app-1 --port=80
service/app-1 exposed
</code></pre>
<p>Finally, we can test if rewrite works as expected:</p>
<pre><code>$ curl -I <PUBLIC_IP>/APP-1
HTTP/1.1 301 Moved Permanently
Date: Wed, 06 Oct 2021 13:53:56 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive
Location: /app-1
$ curl -L <PUBLIC_IP>/APP-1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>Additionally, in the <code>ingress-nginx-controller</code> logs, we can see the following log entries:</p>
<pre><code>10.128.15.213 - - [06/Oct/2021:13:54:34 +0000] "GET /APP-1 HTTP/1.1" 301 162 "-" "curl/7.64.0" 83 0.000 [-] [] - - - - c4720e38c06137424f7b951e06c3762b
10.128.15.213 - - [06/Oct/2021:13:54:34 +0000] "GET /app-1 HTTP/1.1" 200 615 "-" "curl/7.64.0" 83 0.001 [default-app-1-80] [] 10.4.1.13:80 615 0.001 200 f96b5664765035de8832abebefcabccf
</code></pre>
|
<p>I have built a structure for a microservice app. <code>auth</code> is the first Dockerfile I have made, and as far as I can tell it is not building.</p>
<pre><code>C:\Users\geeks\Desktop\NorthernHerpGeckoSales\NorthernHerpGeckoSales\auth>docker build -t giantgecko/auth .
[+] Building 0.1s (9/9) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 206B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 53B 0.0s
=> [internal] load metadata for docker.io/library/node:alpine 0.0s
=> [1/5] FROM docker.io/library/node:alpine 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 479B 0.0s
=> [2/5] WORKDIR /app 0.0s
=> CACHED [3/5] COPY package.json . 0.0s
=> CACHED [4/5] RUN npm install 0.0s
=> ERROR [5/5] COPY /Users/geeks/Desktop/NorthernHerpGeckoSales/NorthernHerpGeckoSales/auth . 0.0s
------
> [5/5] COPY /Users/geeks/Desktop/NorthernHerpGeckoSales/NorthernHerpGeckoSales/auth .:
------
failed to compute cache key: "/Users/geeks/Desktop/NorthernHerpGeckoSales/NorthernHerpGeckoSales/auth" not found: not found
</code></pre>
| <p>I was able to resolve this by changing my Dockerfile to:</p>
<pre class="lang-sh prettyprint-override"><code>FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
<p>The fix came after I changed <code>COPY</code> from the absolute path to <code>. .</code>, and cleared the npm cache with <code>npm cache clean -f</code>.</p>
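<p>For context: <code>COPY</code> source paths are resolved inside the build context (the <code>.</code> argument to <code>docker build</code>), not as absolute host paths, which is why the original line failed. A sketch:</p>
<pre class="lang-sh prettyprint-override"><code># Resolved relative to the build context sent to the daemon,
# i.e. the directory `docker build` was run from:
COPY . .
# An absolute path like the one below is looked up *inside* the
# context, and fails if no such path exists there:
# COPY /Users/geeks/Desktop/NorthernHerpGeckoSales/NorthernHerpGeckoSales/auth .
</code></pre>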
|
<p>I am developing an application which uses Pod Identity to connect to Azure Sql Database.</p>
<p>After deploying it on Azure Kubernetes Service (<strong>AKS</strong>), POD(application) connects to Azure Sql using PodIdentity (Managed Identity).</p>
<p><strong>How can I assign the same identity to POD while running on my local k8s cluster?</strong></p>
<p>My deployment yaml looks like</p>
<pre><code>kind: Deployment
metadata:
name: xxx
labels:
app: xxx
spec:
selector:
matchLabels:
appName: xxx
replicas: 1
template:
metadata:
labels:
appName: xxx
aadpodidbinding: samplepodidentity
spec:
containers:
- name: xxx
image: xxx
env:
- name: xxx
value: "xxx"
- name: UpdateDbTraceEndpoint
value: "xxx"
ports:
- containerPort: 80
</code></pre>
<p><strong>Update:</strong></p>
<p>I tried the the <a href="https://azure.github.io/aad-pod-identity/docs/demo/standard_walkthrough/" rel="nofollow noreferrer">standard walk thru</a> but still it is giving error.</p>
<p><a href="https://i.stack.imgur.com/CD23V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CD23V.png" alt="enter image description here" /></a></p>
<p><strong>According to me MIC is not working on local K8s cluster</strong>. How can I get it working?</p>
| <p>You can deploy <a href="https://azure.github.io/aad-pod-identity/" rel="nofollow noreferrer"><code>aad-pod-identity</code></a> on your local cluster using <a href="https://azure.github.io/aad-pod-identity/docs/getting-started/installation/#helm" rel="nofollow noreferrer">helm</a> or the <a href="https://azure.github.io/aad-pod-identity/docs/getting-started/installation/#quick-install" rel="nofollow noreferrer">YAML Deployment files</a>.</p>
<p>The main difference is that you can't use some of the <code>az aks</code> commands and instead perform the steps manually like creating the <code>AzureIdentity</code> resource.</p>
<p>The <a href="https://azure.github.io/aad-pod-identity/docs/demo/standard_walkthrough/" rel="nofollow noreferrer">standard walkthrough</a> doc covers the details.</p>
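<p>As a sketch, the manual steps include creating the <code>AzureIdentity</code> and <code>AzureIdentityBinding</code> resources yourself; the resource ID and client ID below are placeholders, and the <code>selector</code> must match the <code>aadpodidbinding</code> label used in your Deployment:</p>
<pre><code>apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: samplepodidentity
spec:
  type: 0                 # 0 = user-assigned managed identity
  resourceID: /subscriptions/<sub-id>/resourcegroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>
  clientID: <client-id>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: samplepodidentity-binding
spec:
  azureIdentity: samplepodidentity
  selector: samplepodidentity
</code></pre>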
|
<p>I am using <strong>DirectXMan12/k8s-prometheus-adapter</strong> to push external metrics from Prometheus to Kubernetes.</p>
<p>After pushing the external metric, how can I verify the data in k8s?</p>
<p>When I hit <strong>kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq</strong> I got the following result, but after that I do not know how to fetch the actual metric values:</p>
<pre><code>{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "external.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "subscription_back_log",
"singularName": "",
"namespaced": true,
"kind": "ExternalMetricValueList",
"verbs": [
"get"
]
}]
}
</code></pre>
| <p>The metric is namespaced, so you will need to add the namespace into the URL. Contrary to what the other answer suggests, I believe you don't need to include pods into the URL. This is an external metric. External metrics are not associated to any kubernetes object, so only the namespace should suffice:</p>
<ul>
<li><code>/apis/external.metrics.k8s.io/v1beta1/namespaces/<namespace>/<metric_name></code></li>
</ul>
<p>Here's an example that works for me, using an external metric in my setup:</p>
<pre><code>$ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "external.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "redis_key_size",
"singularName": "",
"namespaced": true,
"kind": "ExternalMetricValueList",
"verbs": [
"get"
]
}
]
}
</code></pre>
<pre><code>$ kubectl get --raw /apis/external.metrics.k8s.io/v1beta1/namespaces/default/redis_key_size
{
"kind": "ExternalMetricValueList",
"apiVersion": "external.metrics.k8s.io/v1beta1",
"metadata": {},
"items": [
{
"metricName": "redis_key_size",
"metricLabels": {
"key": "..."
},
"timestamp": "2021-10-07T09:00:01Z",
"value": "0"
},
...
]
}
</code></pre>
|
<p>Let's say I've a deployment. For some reason it's not responding after sometime. Is there any way to tell Kubernetes to rollback to previous version automatically on failure?</p>
| <p>You mentioned that:</p>
<blockquote>
<p>I've a deployment. For some reason it's not responding after sometime.</p>
</blockquote>
<p>In this case, you can use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noreferrer">liveness and readiness</a> probes:</p>
<blockquote>
<p>The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.</p>
</blockquote>
<blockquote>
<p>The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
</blockquote>
<p>The above probes may prevent you from deploying a corrupted version, however liveness and readiness probes aren't able to rollback your Deployment to the previous version. There was a similar <a href="https://github.com/kubernetes/kubernetes/issues/23211" rel="noreferrer">issue</a> on Github, but I am not sure there will be any progress on this matter in the near future.</p>
<p>If you really want to automate the rollback process, below I will describe a solution that you may find helpful.</p>
<hr />
<p>This solution requires running <code>kubectl</code> commands from within the Pod.
In short, you can use a script to continuously monitor your Deployments, and when errors occur you can run <code>kubectl rollout undo deployment DEPLOYMENT_NAME</code>.</p>
<p>First, you need to decide how to find failed Deployments. As an example, I'll check for Deployments whose rollout has been in progress for more than 10 seconds, using the following command:<br />
<strong>NOTE:</strong> You can use a different command depending on your needs.</p>
<pre><code>kubectl rollout status deployment ${deployment} --timeout=10s
</code></pre>
<p>To constantly monitor all Deployments in the <code>default</code> Namespace, we can create a Bash script:</p>
<pre><code>#!/bin/bash
while true; do
sleep 60
deployments=$(kubectl get deployments --no-headers -o custom-columns=":metadata.name" | grep -v "deployment-checker")
echo "====== $(date) ======"
for deployment in ${deployments}; do
if ! kubectl rollout status deployment ${deployment} --timeout=10s 1>/dev/null 2>&1; then
echo "Error: ${deployment} - rolling back!"
kubectl rollout undo deployment ${deployment}
else
echo "Ok: ${deployment}"
fi
done
done
</code></pre>
<p>We want to run this script from inside the Pod, so I converted it to a <code>ConfigMap</code>, which will allow us to mount the script in a volume (see: <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-files-from-a-pod" rel="noreferrer">Using ConfigMaps as files from a Pod</a>):</p>
<pre><code>$ cat check-script-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: check-script
data:
checkScript.sh: |
#!/bin/bash
while true; do
sleep 60
deployments=$(kubectl get deployments --no-headers -o custom-columns=":metadata.name" | grep -v "deployment-checker")
echo "====== $(date) ======"
for deployment in ${deployments}; do
if ! kubectl rollout status deployment ${deployment} --timeout=10s 1>/dev/null 2>&1; then
echo "Error: ${deployment} - rolling back!"
kubectl rollout undo deployment ${deployment}
else
echo "Ok: ${deployment}"
fi
done
done
$ kubectl apply -f check-script-configmap.yml
configmap/check-script created
</code></pre>
<p>I've created a separate <code>deployment-checker</code> ServiceAccount with the <code>edit</code> ClusterRole assigned, and our Pod will run under this ServiceAccount:<br />
<strong>NOTE:</strong> I've created a Deployment instead of a single Pod.</p>
<pre><code>$ cat all-in-one.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: deployment-checker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: deployment-checker-binding
subjects:
- kind: ServiceAccount
name: deployment-checker
namespace: default
roleRef:
kind: ClusterRole
name: edit
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: deployment-checker
name: deployment-checker
spec:
selector:
matchLabels:
app: deployment-checker
template:
metadata:
labels:
app: deployment-checker
spec:
serviceAccountName: deployment-checker
volumes:
- name: check-script
configMap:
name: check-script
containers:
- image: bitnami/kubectl
name: test
command: ["bash", "/mnt/checkScript.sh"]
volumeMounts:
- name: check-script
mountPath: /mnt
</code></pre>
<p>After applying the above manifest, the <code>deployment-checker</code> Deployment was created and started monitoring Deployment resources in the <code>default</code> Namespace:</p>
<pre><code>$ kubectl apply -f all-in-one.yaml
serviceaccount/deployment-checker created
clusterrolebinding.rbac.authorization.k8s.io/deployment-checker-binding created
deployment.apps/deployment-checker created
$ kubectl get deploy,pod | grep "deployment-checker"
deployment.apps/deployment-checker 1/1 1
pod/deployment-checker-69c8896676-pqg9h 1/1 Running
</code></pre>
<p>Finally, we can check how it works. I've created three Deployments (<code>app-1</code>, <code>app-2</code>, <code>app-3</code>):</p>
<pre><code>$ kubectl create deploy app-1 --image=nginx
deployment.apps/app-1 created
$ kubectl create deploy app-2 --image=nginx
deployment.apps/app-2 created
$ kubectl create deploy app-3 --image=nginx
deployment.apps/app-3 created
</code></pre>
<p>Then I changed the image for the <code>app-1</code> to the incorrect one (<code>nnnginx</code>):</p>
<pre><code>$ kubectl set image deployment/app-1 nginx=nnnginx
deployment.apps/app-1 image updated
</code></pre>
<p>In the <code>deployment-checker</code> logs we can see that the <code>app-1</code> has been rolled back to the previous version:</p>
<pre><code>$ kubectl logs -f deployment-checker-69c8896676-pqg9h
...
====== Thu Oct 7 09:20:15 UTC 2021 ======
Ok: app-1
Ok: app-2
Ok: app-3
====== Thu Oct 7 09:21:16 UTC 2021 ======
Error: app-1 - rolling back!
deployment.apps/app-1 rolled back
Ok: app-2
Ok: app-3
</code></pre>
|
<p>I have 15 pods that run different PHP applications in a Kubernetes cluster. Each application must include a PHP library that is updated regularly; it's always the same lib/version for all pods.
What is the best approach to share this library with all pods?</p>
<p>I tried to find a solution but my doubts persist.
Here is what I considered:</p>
<ol>
<li><p>Include the lib in the application container during the build. This solution creates a consistent application, but I have to re-deploy all 15 applications. Does that make sense?</p>
</li>
<li><p>Include the lib as a shared volume among all pods; in this case I can update only the lib and all applications will be updated. It seems better, even if I don't like this option, because each application depends on the shared volume.</p>
</li>
</ol>
<p>What are your thoughts regarding this issue?</p>
| <p>Always make your images self-contained. Don't build an image that can't run unless something outside the container space is present, and don't try to use a volume to share code. (That is, pick your first option over the second.)</p>
<hr />
<p>Say you have your shared library, and you have your 15 applications A through O that depend on it. Right now they're running last week's build <code>image: service-a:20210929</code> and so on. Service A needs a new feature with support in the shared library, so you update that and redeploy it and announce the feature to your users. But then you discover that the implementation in the shared library causes a crash in service B on some specific customer requests. What now?</p>
<p>If you've built each service as a standalone image, this is easy. <code>helm rollback service-b</code>, or otherwise change service B back to the <code>20210929</code> tag while service A is still using the updated <code>20211006</code> tag. You're running mixed versions, but that's probably okay, and you can fix the library and redeploy everything tomorrow, and your system as a whole is still online.</p>
<p>But if you're trying to share the library, you're stuck. The new version breaks a service, so you need to roll it back; but rolling back the library would require you to also know to roll back the service that depends on the newer version, and so on. You'd have to undeploy your new feature even though the thing that's broken is only indirectly related to it.</p>
<hr />
<p>If the library is really used everywhere, and if disk space is actually your limiting factor, you could restructure your image setup to have three layers:</p>
<ol>
<li>The language interpreter itself;</li>
<li><code>FROM language-interpreter</code>, plus the shared library;</li>
<li><code>FROM shared-library:20211006</code>, plus this service's application code.</li>
</ol>
<p>If they're identical, the lower layers can be shared between images, and the image pull mechanics know to not pull layers that are already present on the system. However, this can lead to a more complex build setup that might be a little trickier to replicate in developer environments.</p>
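A sketch of that layering, with hypothetical image names and paths (none of these are taken from the question):

```dockerfile
# Layer 2: language interpreter plus the shared library.
# Build and tag this as e.g. shared-library:20211006 (hypothetical tag).
FROM php:8.0-fpm
COPY shared-lib/ /opt/shared-lib/
```

Each service image then starts from that tag (<code>FROM shared-library:20211006</code>) and adds only its own application code, so the interpreter and library layers are stored and pulled once per node.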
|
<p>I'm aiming to mount a persistent volume onto a postgres stateful set. To test persistence, I:</p>
<ul>
<li>deploy the stateful set, with mounted volume (<code>kubectl apply -f postgres.yml</code>)</li>
<li>create a table and insert a row</li>
<li>rescale the stateful set to 0, then delete it (<code>kubectl scale statefulset/postgresql-db --replicas 0 && kubectl delete statefulset/postgresql-db</code>)</li>
<li>redeploy the stateful set (<code>kubectl apply -f postgres.yml</code>)</li>
</ul>
<p>In my new deployment the db is empty, I expected the table and data to be persistent, why is this?</p>
<p>I've been following a few articles to do this, notably:</p>
<ul>
<li><a href="https://www.bmc.com/blogs/kubernetes-postgresql/" rel="nofollow noreferrer">https://www.bmc.com/blogs/kubernetes-postgresql/</a></li>
<li><a href="https://medium.com/@xcoulon/storing-data-into-persistent-volumes-on-kubernetes-fb155da16666" rel="nofollow noreferrer">https://medium.com/@xcoulon/storing-data-into-persistent-volumes-on-kubernetes-fb155da16666</a></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code># kubectl apply -f postgres.yml
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/tp/projects/bitbuyer"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql-db
spec:
serviceName: postgresql-db-service
selector:
matchLabels:
app: postgresql-db
replicas: 1
template:
metadata:
labels:
app: postgresql-db
spec:
containers:
- name: postgresql-db
image: postgres:latest
volumeMounts:
- name: postgres-pv
mountPath: /data
env:
- name: POSTGRES_PASSWORD
value: pgpassword
- name: PGDATA
value: /data/pgdata
volumes:
- name: postgres-pv
persistentVolumeClaim:
claimName: postgres-pvc
</code></pre>
| <p>The issue could be due to the volume mount configuration; for Postgres, the data directory is typically mounted like this:</p>
<pre><code>volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
subPath: pgdata
</code></pre>
<p>However, this is what I am using for Postgres:</p>
<p><a href="https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/postgres.yaml" rel="nofollow noreferrer">https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/postgres.yaml</a></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
ports:
- name: pgql
port: 5432
targetPort: 5432
protocol: TCP
selector:
app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: "postgres"
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:9.5
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
subPath: pgdata
env:
- name: POSTGRES_USER
value: root
- name: POSTGRES_PASSWORD
value: password
- name: POSTGRES_DB
value: kong
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
ports:
- containerPort: 5432
terminationGracePeriodSeconds: 60
volumeClaimTemplates:
- metadata:
name: postgres-data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 10Gi
</code></pre>
|
<p>In Kubernetes we can use a filter to limit the output to the resources of interest, and I wonder whether it is possible to list the top 5 most recently created pods using only a filter.</p>
<p>The current approach lists all pods first and then pipes them to another Unix command (<code>head</code>):</p>
<pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp | head -n 5
</code></pre>
<p>But I guess it takes quite a long time to fetch everything from the server side first and only then list the first 5.</p>
<p>Is there a special filter that would make this more efficient?</p>
| <p>There are several aspects which prevent you from solving this question using only a <code>filter</code>:</p>
<ol>
<li><p><code>Filter</code> itself:</p>
<blockquote>
<p>Field selectors are essentially resource filters. By default, no
selectors/filters are applied, meaning that all resources of the
specified type are selected. This makes the kubectl queries kubectl
get pods and kubectl get pods --field-selector "" equivalent.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="nofollow noreferrer">Reference - Field selectors</a>.</p>
<p>And its limitations on <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/#supported-operators" rel="nofollow noreferrer">supported operations</a>:</p>
<blockquote>
<p>You can use the =, ==, and != operators with field selectors (= and ==
mean the same thing). This kubectl command, for example, selects all
Kubernetes Services that aren't in the default namespace:</p>
<p>kubectl get services --all-namespaces --field-selector
metadata.namespace!=default</p>
</blockquote>
<p>It can't compare values with <code>></code> or <code><</code> operators, so it cannot select, for example, the most recently created pods.</p>
</li>
<li><p>I compared requests with <code>--v=8</code> to see which exact API call is performed when <code>kubectl get pods</code> is executed with different options:</p>
<pre><code>$ kubectl get pods -A --sort-by=.metadata.creationTimestamp --v=8
I1007 12:40:45.296727 562 round_trippers.go:432] GET https://10.186.0.2:6443/api/v1/pods?includeObject=Object
</code></pre>
<p>and</p>
<pre><code>$ kubectl get pods -A --field-selector=metadata.namespace=kube-system --v=8
I1007 12:41:42.873609 1067 round_trippers.go:432] GET https://10.186.0.2:6443/api/v1/pods?fieldSelector=metadata.namespace%3Dkube-system&limit=500
</code></pre>
<p>The difference is that with <code>--field-selector</code>, <code>kubectl</code> adds <code>&limit=500</code> to the request, which is a point where some data inconsistency can appear when using <code>kubectl</code> from the terminal, while <code>--sort-by</code> fetches all the data from the <code>api server</code> to the client.</p>
</li>
<li><p>Using <code>-o jsonpath</code> works the same way as a regular <code>kubectl get pods</code> request and again has the limit of 500 results, which may lead to data inconsistency:</p>
<pre><code>$ kubectl get pods -A --v=7 -o jsonpath='{range.items[*]}{.metadata.creationTimestamp}{"\t"}{.metadata.name}{"\n"}{end}'
I1007 12:52:25.505109 6943 round_trippers.go:432] GET https://10.186.0.2:6443/api/v1/pods?limit=500
</code></pre>
<p>Note that even the documentation relies on other Linux commands (<code>jq</code>, <code>grep</code>, <code>sort</code>, <code>|</code>) to work with the initial results obtained from the Kubernetes API. See <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources" rel="nofollow noreferrer">examples of Viewing, finding resources</a>.</p>
</li>
</ol>
<p>So, to confirm <a href="https://stackoverflow.com/questions/69408472/how-to-get-top-n-latest-created-pods-inside-kubernetes-based-on-filter#comment122686547_69408472">@P...'s comment</a>, you will need to get the data to the client first and only then work on it.</p>
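As a client-side sketch (the helper name is ours, not a kubectl feature), you can let <code>kubectl</code> print just the creation timestamp and name, and do the "top N" selection with standard Unix tools; since RFC 3339 timestamps in the same timezone sort lexicographically, a plain reverse sort is enough:

```shell
#!/usr/bin/env bash
# Read "timestamp name" lines on stdin and print the N newest names.
newest() {
  local n=$1
  sort -r | head -n "$n" | awk '{print $2}'
}

# Hypothetical usage against a live cluster:
# kubectl get pods --no-headers \
#   -o custom-columns='TS:.metadata.creationTimestamp,NAME:.metadata.name' \
#   | newest 5
```

Note this still transfers the full pod list to the client; only the selection happens locally.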
|
<p>I was trying to use the netshoot image together with the Nginx server image in the same pod.</p>
<p>If I do not include the sleep argument in the netshoot container,</p>
<pre><code>args:
- sleep
- '9999999999999999999'
</code></pre>
<p>the pod status always becomes CrashLoopBackOff. I did not figure out why I have to put sleep argument to sleep current thread in netshoot container. Do containers have to consist of application servers or running jars to retain running status? I think I missed the fundamental point about the running mechanism of containers. Thanks.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
# Define the Deployment Name
name: nginx-deployment
labels:
app: nginx
spec:
# Define the Number of Pods
replicas: 1
# Define the Selector
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers: # Container Details
- name: nginx
image: nginx:latest # Image
ports:
- containerPort: 80
- name: netshoot
image: nicolaka/netshoot:latest
args:
- sleep
- '9999999999999999999'
</code></pre>
| <p>Containers are a wrapper around a command, and they run until that command exits. The default command for netshoot is <code>zsh</code>. Like other shells, they run until they reach an exit command, a signal interrupts them, or the input they process returns an EOF. Without any stdin input defined, a container will reach that EOF immediately.</p>
<p>As an aside, a common noop command for containers is <code>tail -f /dev/null</code>.</p>
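Expressed as a pod spec fragment (a sketch mirroring the manifest in the question), that noop looks like this:

```yaml
containers:
- name: netshoot
  image: nicolaka/netshoot:latest
  # keep the container alive without doing any work
  command: ["tail", "-f", "/dev/null"]
```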
|
<p>I want to run two containers on a single pod.</p>
<p><code>container1</code> is a test that tries to connect to a SQL Server Database that runs on <code>container2</code>.</p>
<p>How can I make sure that the sql container (<code>container2</code>) will run and be ready before <code>container1</code> starts?</p>
<p><code>initContainer</code> won't work here, as it will run before both containers.</p>
<p>This is my compose.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: sql-test-pod
labels:
name: sql-test
spec:
restartPolicy: Never
containers:
- name: my-sqldb
image: docker-registry.com/database
imagePullPolicy: Always
resources:
limits:
memory: "4096Mi"
cpu: "750m"
requests:
memory: "4096Mi"
cpu: "750m"
- name: tests
tty: true
stdin: true
image: docker-registry.com/test
imagePullPolicy: Always
resources:
limits:
memory: "4096Mi"
cpu: "750m"
requests:
memory: "4096Mi"
cpu: "750m"
env:
- name: sqlhostname
value: "SqlHostnamePlaceholder"
nodeSelector:
kubernetes.io/os: windows
tolerations:
- key: "windows"
operator: "Equal"
value: "2019"
effect: "NoSchedule"
</code></pre>
| <p>In order to make sure that <code>container1</code> starts only after <code>container2</code>'s SQL Server is up, the only way I found is to use the container's <code>postStart</code> lifecycle hook.
<code>postStart</code> is triggered right after the container is created. It is true that there is no guarantee that the postStart handler is called before the container's entrypoint is called, <strong>but it turns out that the kubelet code that starts the containers blocks the start of the next container until the post-start handler terminates.</strong></p>
<p>And this is how my new manifest looks:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: sql-test-pod
labels:
name: sql-test
spec:
restartPolicy: Never
containers:
- name: my-sqldb
image: docker-registry.com/database
imagePullPolicy: Always
lifecycle:
postStart:
exec:
command: ['powershell.exe', '-command', "$connectionString = 'Server=sql-test-pod;User Id=user;Password=password'; $sqlConnection = New-Object System.Data.SqlClient.SqlConnection $connectionString; $i=0; while($i -lt 6) {Try { $i++;$sqlConnection.Open();$sqlConnection.Close(); return}Catch {Write-Error $_; start-sleep 30}}"]
resources:
limits:
memory: "4096Mi"
cpu: "750m"
requests:
memory: "4096Mi"
cpu: "750m"
- name: tests
tty: true
stdin: true
image: docker-registry.com/test
imagePullPolicy: Always
resources:
limits:
memory: "4096Mi"
cpu: "750m"
requests:
memory: "4096Mi"
cpu: "750m"
env:
- name: sqlhostname
value: "sql-test-pod"
nodeSelector:
kubernetes.io/os: windows
tolerations:
- key: "windows"
operator: "Equal"
value: "2019"
effect: "NoSchedule"
</code></pre>
<p>You can find a similar case <a href="https://medium.com/@marko.luksa/delaying-application-start-until-sidecar-is-ready-2ec2d21a7b74" rel="nofollow noreferrer">here</a>.</p>
|
<p>Assuming that there is only one admission controller Pod running, and the admission controller has a webhook that will be triggered by Pod deletion events.</p>
<p><strong>Example Scenario</strong></p>
<p>There are 2 Pods (Pod A and Pod B) within a namespace. 2 different users (Alice and Bob) perform Pod deletion at the exact same time, in which:</p>
<ol>
<li>Alice deletes the Pod A</li>
<li>Bob deletes the Pod B</li>
</ol>
<p>In this specific scenario, will the admission controller handle both admission requests serially or in parallel? In other words, will the admission controller handle the admission request for Pod A before that of Pod B (or vice versa), or will it handle both admission requests at the same time?</p>
<p><strong>General Scenario</strong></p>
<p>The admission requests are sent from the API Server to the admission controller. Generally speaking, is it possible for multiple admission requests to be sent to the admission controller at the exact same time?</p>
<p>And if so, will the admission controller handle them in parallel via some built-in parallelism mechanism, or will the admission controller queue them and process them serially?</p>
| <p>Since the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#options" rel="nofollow noreferrer">kube-apiserver options</a> include the <code>--max-mutating-requests-inflight</code> and <code>--max-requests-inflight</code> flags, which are used to determine the server's total concurrency limit, I think the admission controller also supports concurrent processing. Otherwise it would be a bottleneck for handling API requests.</p>
<p>This holds as long as we are using the default configuration.</p>
<p>But on the other hand, we can customize our environment and specify how requests should be processed.
For that purpose, <a href="https://kubernetes.io/docs/concepts/cluster-administration/flow-control" rel="nofollow noreferrer">API Priority and Fairness (APF)</a> can be used. APF classifies and isolates requests in a more fine-grained way than <code>--max-mutating-requests-inflight</code> and <code>--max-requests-inflight</code>.</p>
<blockquote>
<p>Without APF enabled, overall concurrency in the API server is limited by the <code>kube-apiserver</code> flags <code>--max-requests-inflight</code> and <code>--max-mutating-requests-inflight</code>. With APF enabled, the concurrency limits defined by these flags are summed and then the sum is divided up among a configurable set of <em>priority levels</em>. Each incoming request is assigned to a single priority level, and each priority level will only dispatch as many concurrent requests as its configuration allows.</p>
</blockquote>
<blockquote>
<p>The default configuration, for example, includes separate priority levels for leader-election requests, requests from built-in controllers, and requests from Pods.</p>
</blockquote>
<p>So, it's quite a broad question. It all depends on which admission controller we are using (built-in or custom), which controls are applied (the <code>--max-mutating-requests-inflight</code> and <code>--max-requests-inflight</code> command-line flags), and the APF configuration.</p>
|
<p>The <code>help</code> of the <code>kubectl port-forward</code> says <code>The forwarding session ends when the selected pod terminates, and rerun of the command is needed to resume forwarding.</code></p>
<p>Although it does not auto-reconnect when the pod terminates the command does not return either and just hangs with errors:</p>
<pre><code>E0929 11:57:50.187945 62466 portforward.go:400] an error occurred forwarding 8000 -> 8080: error forwarding port 8080 to pod a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249, uid : network namespace for sandbox "a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249" is closed
Handling connection for 8000
E0929 12:02:44.505938 62466 portforward.go:400] an error occurred forwarding 8000 -> 8080: error forwarding port 8080 to pod a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249, uid : failed to find sandbox "a1fe1d167955e1c345e0f8026c4efa70a84b9d46029037ebc5b69d9da5d30249" in store: not found
</code></pre>
<p>I would like it to return so that I can handle this error and write a script that will rerun it.</p>
<p>Is there any way or workaround for how to do it?</p>
| <p>Based on the information described on the <a href="https://github.com/kubernetes/kubernetes/issues/67059" rel="nofollow noreferrer">Kubernetes issues page on GitHub</a>, I can suppose that this is normal behavior in your case: the port-forward connection cannot be canceled on pod deletion, since there is no connection management inside the REST connectors on the server side.</p>
<blockquote>
<p>A connection being maintained from kubectl all the way through to the kubelet hanging open even if the pod doesn't exist.</p>
</blockquote>
<blockquote>
<p>We'll proxy a websocket connection kubectl->kubeapiserver->kubelet on port-forward.</p>
</blockquote>
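The question asks for a way to script the rerun. A minimal, generic retry wrapper is sketched below; the commented-out <code>kubectl port-forward</code> invocation is an assumption (pod name and ports are placeholders), and note that, as described above, the session may hang rather than exit, so you may additionally need to monitor the tunnel and kill a stuck process:

```shell
#!/usr/bin/env bash
# Rerun a command until it succeeds, up to a maximum number of attempts.
retry() {
  local max=$1; shift
  local n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "giving up after ${max} attempts" >&2
      return 1
    fi
    sleep 1
  done
}

# Hypothetical usage (placeholders, not from the question):
# retry 100 kubectl port-forward pod/my-pod 8000:8080
```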
|
<p>My airflow service runs as a kubernetes deployment, and has two containers, one for the <code>webserver</code> and one for the <code>scheduler</code>.
I'm running a task using a KubernetesPodOperator, with <code>in_cluster=True</code> parameters, and it runs well, I can even <code>kubectl logs pod-name</code> and all the logs show up. </p>
<p>However, the <code>airflow-webserver</code> is unable to fetch the logs:</p>
<pre><code>*** Log file does not exist: /tmp/logs/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log
*** Fetching from: http://pod-name-7dffbdf877-6mhrn:8793/log/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='pod-name-7dffbdf877-6mhrn', port=8793): Max retries exceeded with url: /log/dag_name/task_name/2020-05-19T23:17:33.455051+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fef6e00df10>: Failed to establish a new connection: [Errno 111] Connection refused'))
</code></pre>
<p>It seems as the pod is unable to connect to the airflow logging service, on port 8793. If I <code>kubectl exec bash</code> into the container, I can curl localhost on port 8080, but not on 80 and 8793.</p>
<p>Kubernetes deployment:</p>
<pre><code># Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: pod-name
namespace: airflow
spec:
replicas: 1
selector:
matchLabels:
app: pod-name
template:
metadata:
labels:
app: pod-name
spec:
restartPolicy: Always
volumes:
- name: airflow-cfg
configMap:
name: airflow.cfg
- name: dags
emptyDir: {}
containers:
- name: airflow-scheduler
args:
- airflow
- scheduler
image: registry.personal.io:5000/image/path
imagePullPolicy: Always
volumeMounts:
- name: dags
mountPath: /airflow_dags
- name: airflow-cfg
mountPath: /home/airflow/airflow.cfg
subPath: airflow.cfg
env:
- name: EXECUTOR
value: Local
- name: LOAD_EX
value: "n"
- name: FORWARDED_ALLOW_IPS
value: "*"
ports:
- containerPort: 8793
- containerPort: 8080
- name: airflow-webserver
args:
- airflow
- webserver
- --pid
- /tmp/airflow-webserver.pid
image: registry.personal.io:5000/image/path
imagePullPolicy: Always
volumeMounts:
- name: dags
mountPath: /airflow_dags
- name: airflow-cfg
mountPath: /home/airflow/airflow.cfg
subPath: airflow.cfg
ports:
- containerPort: 8793
- containerPort: 8080
env:
- name: EXECUTOR
value: Local
- name: LOAD_EX
value: "n"
- name: FORWARDED_ALLOW_IPS
value: "*"
</code></pre>
<p>Note: if Airflow is run in a dev environment (locally instead of Kubernetes), it all works perfectly.</p>
| <p>Creating a Persistent Volume and storing the logs on it might help.</p>
<pre><code>--
kind: PersistentVolume
apiVersion: v1
metadata:
name: testlog-volume
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 2Gi
hostPath:
path: /opt/airflow/logs/
storageClassName: standard
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: testlog-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: standard
</code></pre>
<p>If you are using the Helm chart to deploy Airflow, you can set:</p>
<pre><code> --set executor=KubernetesExecutor --set logs.persistence.enabled=true --set logs.persistence.existingClaim=testlog-volume
</code></pre>
|
<p>I have this mixed cluster which shows all the nodes as Ready (both Windows & Linux ones). However, only the Linux nodes have aws-node & kube-proxy pods. I RDPed into a Windows node and can see a kube-proxy service.</p>
<p>My question remains: do the Windows nodes need aws-node & kube-proxy pods in the kube-system namespace or do they work differently than Linux ones?</p>
| <p><code>kube-proxy</code> pods are part of a default Kubernetes installation. They are automatically created, and are needed on both Linux and Windows.</p>
<blockquote>
<p>kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.</p>
<p>kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
<sup>[<a href="https://kubernetes.io/docs/concepts/overview/components/#kube-proxy" rel="nofollow noreferrer">source</a>]</sup></p>
</blockquote>
<p><code>aws-node</code> pod is a part of AWS CNI plugin for Kubernetes</p>
<blockquote>
<p>The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes is deployed with each of your Amazon EC2 nodes in a Daemonset with the name <code>aws-node</code>. The plugin consists of two primary components:<br />
<em>[...]</em>
<sup>[<a href="https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html" rel="nofollow noreferrer">source</a>]</sup></p>
</blockquote>
<p>It is currently only supported on Linux. Windows nodes use a different CNI plugin - <a href="https://github.com/aws/amazon-vpc-cni-plugins/tree/master/plugins/vpc-shared-eni" rel="nofollow noreferrer">vpc-shared-eni</a></p>
|
<p>I have three nodes and a Deployment with 9 replicas. When I deploy, the Pods are scheduled unevenly across the nodes.</p>
<p>How can I make sure that the Pods are spread evenly across the three nodes?</p>
| <p>You should configure <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">topology spread constraints</a>. These control how pods are scheduled across your Kubernetes cluster based on regions, zones, and nodes.</p>
<p>For spreading across nodes, you could use <em>maxSkew</em> (maxSkew describes the degree to which Pods may be unevenly distributed):</p>
<pre><code>spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app   # must match the labels of the Pods being spread
</code></pre>
|
<p>I'm confused about the relationship between two parameters: the <code>requests</code> value and the <code>cpu.shares</code> value of the cgroup, which is updated once the Pod is deployed. According to the reading I've done so far, <code>cpu.shares</code> reflects some kind of priority when competing for a chance to consume the CPU, and it's a relative value.</p>
<p>So my question is: why does Kubernetes treat the <code>request</code> value of the CPU as an absolute value when scheduling? When it comes to the CPU, processes get a time slice to execute based on their priorities (according to the CFS mechanism). To my knowledge, there's no such thing as giving a process a fixed number of CPUs (1 CPU, 2 CPUs, etc.). So, if the <code>cpu.shares</code> value is used to prioritize tasks, why does Kubernetes consider the exact request value (e.g. 1500m, 200m) to find a node?</p>
<p>Please correct me if I've got this wrong. Thanks!</p>
| <p>Answering your questions from the main question and comments:</p>
<blockquote>
<p>So my question why kubernetes considers the <code>request</code> value of the CPU as an absolute value when scheduling?</p>
</blockquote>
<blockquote>
<p>To my knowledge, there's no such thing called giving such amounts of CPUs (1CPU, 2CPUs etc.). So, if the <code>cpu.share</code> value is considered to prioritize the tasks, why kubernetes consider the exact request value (Eg: 1500m, 200m) to find out a node?</p>
</blockquote>
<p>It's because decimal CPU values from the requests <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu" rel="nofollow noreferrer">are always converted to values in millicores, like 0.1 is equal to 100m, which can be read as "one hundred millicpu" or "one hundred millicores"</a>. Those units are specific to Kubernetes:</p>
<blockquote>
<p>Fractional requests are allowed. A Container with <code>spec.containers[].resources.requests.cpu</code> of <code>0.5</code> is guaranteed half as much CPU as one that asks for 1 CPU. The expression <code>0.1</code> is equivalent to the expression <code>100m</code>, which can be read as "one hundred millicpu". Some people say "one hundred millicores", and this is understood to mean the same thing. A request with a decimal point, like <code>0.1</code>, is converted to <code>100m</code> by the API, and precision finer than <code>1m</code> is not allowed. For this reason, the form <code>100m</code> might be preferred.</p>
</blockquote>
<blockquote>
<p>CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.</p>
</blockquote>
<p>Based on the above, remember that you can request, say, 1.5 CPUs of the node by specifying <code>cpu: 1.5</code> or <code>cpu: 1500m</code>.</p>
<blockquote>
<p>Just wanna know lowering the <code>cpu.share</code> value in cgroups (which is modified by k8s after the deployment) affects to the cpu power consume by the process. For an instance, assume that A, B containers have 1024, 2048 shares allocated. So the available resources will be split into 1:2 ratio. So would it be the same as if we configure cpu.share as 10, 20 for two containers. Still the ratio is 1:2</p>
</blockquote>
<p>Let's make it clear - it's true that the ratio is the same, but the values are different. 1024 and 2048 in <code>cpu.shares</code> means <code>cpu: 1000m</code> and <code>cpu: 2000m</code> defined in Kubernetes resources, while 10 and 20 means <code>cpu: 10m</code> and <code>cpu: 20m</code>.</p>
<blockquote>
<p>Let's say the cluster nodes are based on Linux OS. So, how kubernetes ensure that request value is given to a container? Ultimately, OS will use configurations available in a cgroup to allocate resource, right? It modifies the <code>cpu.shares</code> value of the cgroup. So my question is, which files is modified by k8s to tell operating system to give <code>100m</code> or <code>200m</code> to a container?</p>
</blockquote>
<p>Yes, your thinking is correct. Let me explain in more detail.</p>
<p>Generally on the Kubernetes node <a href="https://medium.com/omio-engineering/cpu-limits-and-aggressive-throttling-in-kubernetes-c5b20bd8a718" rel="nofollow noreferrer">there are three cgroups under the root cgroup</a>, named as <em>slices</em>:</p>
<blockquote>
<p>The k8s uses <code>cpu.share</code> file to allocate the CPU resources. In this case, the root cgroup inherits 4096 CPU shares, which are 100% of available CPU power(1 core = 1024; this is fixed value). The root cgroup allocate its share proportionally based on children’s <code>cpu.share</code> and they do the same with their children and so on. In typical Kubernetes nodes, there are three cgroup under the root cgroup, namely <code>system.slice</code>, <code>user.slice</code>, and <code>kubepods</code>. The first two are used to allocate the resource for critical system workloads and non-k8s user space programs. The last one, <code>kubepods</code> is created by k8s to allocate the resource to pods.</p>
</blockquote>
<p>To check which files are modified we need to go to the <code>/sys/fs/cgroup/cpu</code> directory. <a href="https://gist.github.com/mcastelino/b8ce9a70b00ee56036dadd70ded53e9f#cpu-resource-management" rel="nofollow noreferrer">Here we can find a directory called <code>kubepods</code></a> (one of the above-mentioned <em>slices</em>), which holds the <code>cpu.shares</code> files for all pods. Inside the <code>kubepods</code> directory we can find two other folders - <code>besteffort</code> and <code>burstable</code>. It is worth mentioning that Kubernetes has three <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes" rel="nofollow noreferrer">QoS classes</a>:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-guaranteed" rel="nofollow noreferrer">Guaranteed</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-burstable" rel="nofollow noreferrer">Burstable</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-besteffort" rel="nofollow noreferrer">BestEffort</a></li>
</ul>
<p>Each pod is assigned a QoS class, and depending on that class its cgroup is located in the corresponding directory (except for Guaranteed pods, whose cgroups are created directly in the <code>kubepods</code> directory).</p>
<p>For example, I'm creating a pod with following definition:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
spec:
selector:
matchLabels:
app: test-deployment
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: test-deployment
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
resources:
requests:
cpu: 300m
- name: busybox
image: busybox
args:
- sleep
- "999999"
resources:
requests:
cpu: 150m
</code></pre>
<p>Based on the definitions mentioned earlier, this pod will be assigned the QoS class <code>Burstable</code>, so its cgroup will be created in the <code>/sys/fs/cgroup/cpu/kubepods/burstable</code> directory.</p>
<p>Now we can check <code>cpu.shares</code> set for this pod:</p>
<pre><code>user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90 $ cat cpu.shares
460
</code></pre>
<p>It is correct, as one container requests 300m and the second one 150m (450m in total), and the value is <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">calculated by multiplying the millicore request by 1024 and dividing by 1000</a>. For each container we have sub-directories as well:</p>
<pre><code>user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90/fa6194cbda0ccd0b1dc77793bfbff608064aa576a5a83a2f1c5c741de8cf019a $ cat cpu.shares
153
user@cluster /sys/fs/cgroup/cpu/kubepods/burstable/podf13d6898-69f9-44eb-8ea6-5284e1778f90/d5ba592186874637d703544ceb6f270939733f6292e1fea7435dd55b6f3f1829 $ cat cpu.shares
307
</code></pre>
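<p>The arithmetic behind these numbers can be sketched in a few lines of Python (the helper name here is mine, purely illustrative - not part of any Kubernetes API):</p>

```python
def millicores_to_shares(millicores: int) -> int:
    """Convert a CPU request in millicores to a cgroup cpu.shares value.

    One full CPU (1000m) corresponds to 1024 shares; the fractional part
    is truncated, which is why 450m becomes 460 rather than 460.8.
    """
    return millicores * 1024 // 1000

# Values from the pod above: 300m (nginx) + 150m (busybox)
print(millicores_to_shares(300 + 150))  # 460 -> pod-level cpu.shares
print(millicores_to_shares(150))        # 153 -> busybox container
print(millicores_to_shares(300))        # 307 -> nginx container
```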
<p>If you want to read more about Kubernetes CPU management, I'd recommend reading the following:</p>
<ul>
<li><a href="https://medium.com/omio-engineering/cpu-limits-and-aggressive-throttling-in-kubernetes-c5b20bd8a718" rel="nofollow noreferrer">CPU limits and aggressive throttling in Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/" rel="nofollow noreferrer">Control CPU Management Policies on the Node</a></li>
<li><a href="https://medium.com/@kkwriting/kubernetes-resource-limits-and-kernel-cgroups-337625bab87d" rel="nofollow noreferrer">Kubernetes resource limits and kernel cgroups</a></li>
<li><a href="https://www.infoq.com/presentations/evolve-kubernetes-resource-manager/" rel="nofollow noreferrer">How to Evolve Kubernetes Resource Management Model</a></li>
</ul>
|
<p>I want to use vars without ConfigMaps or Secrets. Declaring a value would be sufficient for me. But I couldn't see any documentation regarding vars attributes or how I can use. Do you know any docs about this? Thanks!</p>
<pre><code>vars:
- name: ROUTE_HOST
objref:
kind: ConfigMap
name: template-vars
apiVersion: v1
fieldref:
fieldpath: data.ROUTE_HOST
</code></pre>
| <p>Summarizing Jonas's comments:</p>
<blockquote>
<p>WARNING: There are plans to deprecate vars. For existing users of vars, we recommend migration to <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/" rel="nofollow noreferrer">replacements</a> as early as possible. There is a guide for converting vars to replacements at the bottom of this page under “convert vars to replacements”. For new users, we recommend never using vars, and starting with replacements to avoid migration in the future.</p>
</blockquote>
<p>Please find more information in <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/vars/" rel="nofollow noreferrer">the official documentation</a>.</p>
<p>Try to use replacements as it's suggested above.</p>
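<p>For illustration, the <code>vars</code> example from the question could be expressed as a replacement roughly like this (the target <code>kind</code> and <code>fieldPaths</code> below are assumptions - adapt them to wherever <code>$(ROUTE_HOST)</code> was actually used):</p>
<pre><code>replacements:
  - source:
      kind: ConfigMap
      name: template-vars
      fieldPath: data.ROUTE_HOST
    targets:
      - select:
          kind: Route        # hypothetical target kind
        fieldPaths:
          - spec.host        # hypothetical field that referenced $(ROUTE_HOST)
</code></pre>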
|
<p>I am not sure if the issue is related to promtail (helm chart used) or to helm itself.</p>
<p>I want to update the default host value for loki chart to a local host used on kubernetes, so I tried with this:</p>
<pre><code>helm upgrade --install --namespace loki promtail grafana/promtail --set client.url=http://loki:3100/loki/api/v1/push
</code></pre>
<p>And with a custom values.yaml like this:</p>
<pre><code>helm upgrade --install --namespace loki promtail grafana/promtail -f promtail.yaml
</code></pre>
<p>But it still uses the wrong default URL:</p>
<pre><code>level=warn ts=2021-10-08T11:51:59.782636939Z caller=client.go:344 component=client host=loki-gateway msg="error sending batch, will retry" status=-1 error="Post \"http://loki-gateway/loki/api/v1/push\": dial tcp: lookup loki-gateway on 10.43.0.10:53: no such host"
</code></pre>
<p>If I inspect the <code>config.yaml</code> it's using, it doesn't contain the internal URL I supplied during installation:</p>
<pre><code>root@promtail-69hwg:/# cat /etc/promtail/promtail.yaml
server:
log_level: info
http_listen_port: 3101
client:
url: http://loki-gateway/loki/api/v1/push
</code></pre>
<p>Any ideas? or anything I am missing?</p>
<p>Thanks</p>
| <p>I don't think <code>client.url</code> is a value in the helm chart, but rather one inside a config file that your application is using.</p>
<p>Try setting <a href="https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml#L223-L239" rel="nofollow noreferrer"><code>config.lokiAddress</code></a>:</p>
<pre><code>config:
lokiAddress: http://loki-gateway/loki/api/v1/push
</code></pre>
<p>It <a href="https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml#L339-L353" rel="nofollow noreferrer">gets templated</a> into the config file I mentioned.</p>
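<p>Alternatively, the same value can be set inline without a values file (assuming the chart version you are using exposes <code>config.lokiAddress</code> as in the linked <code>values.yaml</code>):</p>
<pre><code>helm upgrade --install --namespace loki promtail grafana/promtail \
  --set config.lokiAddress=http://loki:3100/loki/api/v1/push
</code></pre>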
|
<p>In my scenario a user has access to four namespaces only; he switches between namespaces using the contexts below. How can I give him access to CRDs along with his existing access to the four namespaces?</p>
<pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* dev-crd-ns-user dev dev-crd-ns-user dev-crd-ns
dev-mon-fe-ns-user dev dev-mon-fe-ns-user dev-mon-fe-ns
dev-strimzi-operator-ns dev dev-strimzi-operator-ns-user dev-strimzi-operator-ns
dev-titan-ns-1 dev dev-titan-ns-1-user dev-titan-ns-1
hifi@101common:/root$ kubectl get secret
NAME TYPE DATA AGE
default-token-mh7xq kubernetes.io/service-account-token 3 8d
dev-crd-ns-user-token-zd6xt kubernetes.io/service-account-token 3 8d
exfo@cmme101common:/root$ kubectl get crd
Error from server (Forbidden): customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:dev-crd-ns:dev-crd-ns-user" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the cluster scope
</code></pre>
<p>I tried the two options below. Option 2 is the recommended approach, but neither worked.</p>
<pre><code>Error from server (Forbidden): customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:dev-crd-ns:dev-crd-ns-user" cannot list resource "customresourcedefinitions" in API group "apiextensions.k8s.io" at the **cluster scope**
</code></pre>
<h1>Option 1: Adding CRD to existing role</h1>
<h2>role</h2>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations:
name: dev-ns-user-full-access
namespace: dev-crd-ns
rules:
- apiGroups:
- ""
- extensions
- apps
- networking.k8s.io
- apiextensions.k8s.io
resources:
- '*'
- customresourcedefinitions
verbs:
- '*'
- apiGroups:
- batch
resources:
- jobs
- cronjobs
verbs:
- '*'
</code></pre>
<h2>role binding</h2>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations:
name: dev-crd-ns-user-view
namespace: dev-crd-ns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: dev-crd-ns-user-full-access
subjects:
- kind: ServiceAccount
name: dev-crd-ns-user
namespace: dev-crd-ns
</code></pre>
<h1>Option 2 : Adding CRD as a new role to "dev-crd-ns" namespace</h1>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: dev-crd-ns
name: crd-admin
rules:
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: crd-admin
namespace: dev-crd-ns
subjects:
- kind: ServiceAccount
name: dev-crd-ns-user
namespace: dev-crd-ns
roleRef:
kind: Role
name: crd-admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
| <p>You need to create <a href="https://v1-18.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#role-example" rel="nofollow noreferrer">Role</a> and <a href="https://v1-20.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-example" rel="nofollow noreferrer">RoleBinding</a> for each service account like <code>dev-crd-ns-user</code>.</p>
<p>For <strong>dev-crd-ns-user</strong>:</p>
<ul>
<li>Update the existing Role or create a new one:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: dev-crd-ns
name: crd-admin
rules:
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f crd-admin-role.yaml
</code></pre>
<ul>
<li>Update the existing RoleBinding with this new Role or create a new one:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: crd-admin
namespace: dev-crd-ns
subjects:
- kind: ServiceAccount
name: dev-crd-ns-user
namespace: dev-crd-ns
roleRef:
kind: Role
name: crd-admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f crd-admin-role-binding.yaml
</code></pre>
<p>Now, the SA <code>dev-crd-ns-user</code> will have all the access to <code>customresourcedefinitions</code>.</p>
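<p>To verify that the binding works, you can impersonate the service account with <code>kubectl auth can-i</code>; it should print <code>yes</code>:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl auth can-i list customresourcedefinitions \
    --as=system:serviceaccount:dev-crd-ns:dev-crd-ns-user
</code></pre>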
<p>Follow similar steps for the rest of the service accounts.</p>
|
<p>How to <em>list all</em> certificates & make a <em>describe</em> in the particular namespaces using <a href="https://github.com/kubernetes-client/python/tree/release-18.0" rel="nofollow noreferrer">kubernetes python cli</a>?</p>
<pre><code># list certificates
kubectl get certificates -n my-namespace
# describe a certificate
kubectl describe certificate my-certificate -n my-namespace
</code></pre>
<p>Kubernetes by default doesn't have a kind <code>Certificate</code>; you must first install <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">cert-manager's</a> <code>CustomResourceDefinition</code>s.</p>
<p>This means that in the Kubernetes Python client we must use the <a href="https://github.com/kubernetes-client/python/blob/8a36dfb113868862d9ef8fd0a44b1fb7621c463a/kubernetes/client/api/custom_objects_api.py" rel="nofollow noreferrer">custom objects API</a>, specifically in your case the functions <code>list_namespaced_custom_object()</code> and <code>get_namespaced_custom_object()</code>.</p>
<p>The code below has two functions: one returns all certificates (equivalent to the <code>kubectl get certificates</code> command), and the second returns information about one specific certificate (equivalent to the <code>kubectl describe certificate {certificate-name}</code> command). It is based on <a href="https://github.com/kubernetes-client/python/blob/8a36dfb113868862d9ef8fd0a44b1fb7621c463a/examples/namespaced_custom_object.py" rel="nofollow noreferrer">this example code</a>:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config()
api = client.CustomObjectsApi()
# kubectl get certificates -n my-namespace
def list_certificates():
resources = api.list_namespaced_custom_object(
group = "cert-manager.io",
version = "v1",
namespace = "my-namespace",
plural = "certificates"
)
return resources
# kubectl describe certificate my-certificate -n my-namespace
def get_certificate():
resource = api.get_namespaced_custom_object(
group = "cert-manager.io",
version = "v1",
name = "my-certificate",
namespace = "my-namespace",
plural = "certificates"
)
return resource
</code></pre>
<p>Keep in mind that both functions are returning <a href="https://www.w3schools.com/python/python_dictionaries.asp" rel="nofollow noreferrer">Python dictionaries</a>.</p>
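<p>Since both calls return plain dictionaries, extracting, for example, the certificate names is straightforward. A minimal sketch (the <code>sample</code> data below is made up purely to illustrate the response shape):</p>

```python
def certificate_names(cert_list: dict) -> list:
    """Extract metadata.name from a list_namespaced_custom_object() result."""
    return [item["metadata"]["name"] for item in cert_list.get("items", [])]

# Trimmed-down, illustrative sample of the dictionary the API returns
sample = {
    "apiVersion": "cert-manager.io/v1",
    "kind": "CertificateList",
    "items": [
        {"metadata": {"name": "my-certificate", "namespace": "my-namespace"}},
        {"metadata": {"name": "other-certificate", "namespace": "my-namespace"}},
    ],
}

print(certificate_names(sample))  # ['my-certificate', 'other-certificate']
```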
|
<p>I'm trying to create an Istio ingress gateway (istio: 1.9.1, EKS: 1.18) with a duplicate <code>targetPort</code> like this:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
name: istio
spec:
components:
ingressGateways:
- name: ingressgateway
k8s:
service:
ports:
- port: 80
targetPort: 8080
name: http2
- port: 443
name: https
targetPort: 8080
</code></pre>
<p>but I get the error:</p>
<pre><code>- Processing resources for Ingress gateways.
✘ Ingress gateways encountered an error: failed to update resource with server-side apply for obj Deployment/istio-system/istio: failed to create typed patch object: errors:
.spec.template.spec.containers[name="istio-proxy"].ports: duplicate entries for key [containerPort=8080,protocol="TCP"]
</code></pre>
<p>I am running Istio in EKS so we terminate TLS at the NLB, so all traffic (<code>http</code> and <code>https</code>) should go to the pod on the same port (8080)</p>
<p>Any ideas how I can solve this issue?</p>
<p>Up to Istio 1.8, this type of configuration worked (although with initial problems) - <a href="https://github.com/kubernetes/kubernetes/issues/53526" rel="nofollow noreferrer">github #53526</a>.</p>
<p>As of version 1.9 an error is generated and you can no longer use the same port twice in one definition. Related reports have also been created (or reopened) on GitHub:</p>
<ul>
<li><a href="https://github.com/istio/istio/issues/35349" rel="nofollow noreferrer">github #35349</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/pull/53576" rel="nofollow noreferrer">github #53576</a></li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/1622" rel="nofollow noreferrer">github #1622</a></li>
</ul>
<p>Note, however, that the reports above concern Kubernetes and Istio versions that were already outdated at the time of writing this answer.</p>
<p>Possible workarounds:</p>
<ul>
<li>Use different <code>targetPort</code></li>
<li>Upgrade Kubernetes and Istio to newest versions.</li>
</ul>
|
<p>In my umbrella Helm chart, I defined a dependency to Redis:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: my-project
version: 0.1.0
dependencies:
- name: redis
version: ~6.2.x
repository: https://charts.bitnami.com/bitnami
</code></pre>
<p>At time of writing, the latest version is 6.2.6 (see <a href="https://bitnami.com/stack/redis/helm" rel="nofollow noreferrer">https://bitnami.com/stack/redis/helm</a>).</p>
<p>But when I execute <code>helm dependency update my-project</code>, Helm downloads version 6.2.0 instead of 6.2.6.
When I try to install my chart, it fails:
<code>Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2"]</code>
Without the dependency to Redis, my chart installs fine.</p>
<p>I also tried to point at that specific Redis version in chart.yaml, but then <code>helm dependency list</code> returns:</p>
<pre><code>NAME VERSION REPOSITORY STATUS
redis 6.2.6 https://charts.bitnami.com/bitnami wrong version
</code></pre>
<p>I'm running Kubernetes in Docker Desktop on my laptop.
The versions I'm using:</p>
<ul>
<li>Helm version: 3.7.0-rc.2</li>
<li>K8s server: 1.21.2</li>
<li>K8s client: 1.21.4</li>
</ul>
<p>When I install Redis independently using <code>helm install my-release bitnami/redis</code>, the installation succeeds.</p>
<p>How do I use Redis 6.2.6 as a dependency in my chart?</p>
<p>Kubernetes <strong>1.21</strong> no longer serves the deprecated <code>extensions/v1beta1</code> and <code>apps/v1beta2</code> APIs; the current API version for both Deployment and StatefulSet is <strong>apps/v1</strong>.</p>
<p>Simple ref : <a href="https://stackoverflow.com/a/66164857/5525824">https://stackoverflow.com/a/66164857/5525824</a></p>
<p>Since the chart version you resolved still uses the older APIs, you either need to adjust its templates or use the latest chart for installation.</p>
<p>You can check your K8s cluster supported API using</p>
<pre><code>for kind in `kubectl api-resources | tail -n +2 | awk '{ print $1 }'`; do kubectl explain $kind; done | grep -e "KIND:" -e "VERSION:"
</code></pre>
<p>output</p>
<pre><code>KIND:     Deployment
VERSION:  apps/v1
KIND:     StatefulSet
VERSION:  apps/v1
</code></pre>
<p>Or use simple command : <code>kubectl api-versions</code></p>
<p>You should checkout this <strong>Bitnami Redis</strong> document : <a href="https://artifacthub.io/packages/helm/bitnami/redis" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/redis</a></p>
<p>It was updated a few days ago and, in your case, should work with only minor API-related changes.</p>
<p>If you check the stable Redis version helm chart : <a href="https://github.com/helm/charts/blob/master/stable/redis/templates/redis-master-statefulset.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/redis/templates/redis-master-statefulset.yaml</a></p>
<p>stateful API version : <code>apiVersion: apps/v1</code></p>
<p>You change your Bitnami helm chart API using the : <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis#common-parameters" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/redis#common-parameters</a></p>
<p>You can change the API version at : <a href="https://github.com/bitnami/charts/blob/9f9d8aa887608e39aaab4ca1a80677605825b888/bitnami/redis/templates/master/statefulset.yaml#L2" rel="nofollow noreferrer">https://github.com/bitnami/charts/blob/9f9d8aa887608e39aaab4ca1a80677605825b888/bitnami/redis/templates/master/statefulset.yaml#L2</a></p>
<blockquote>
<p>Previous versions of this Helm Chart use apiVersion: v1 (installable
by both Helm 2 and 3), this Helm Chart was updated to apiVersion: v2
(installable by Helm 3 only). Here you can find more information about
the apiVersion field. The different fields present in the Chart.yaml
file has been ordered alphabetically in a homogeneous way for all the
Bitnami Helm Charts</p>
</blockquote>
<p>Read more at : <a href="https://helm.sh/docs/topics/charts/#the-apiversion-field" rel="nofollow noreferrer">https://helm.sh/docs/topics/charts/#the-apiversion-field</a></p>
<p>Or : <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis#to-1200" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/redis#to-1200</a></p>
<p>You have two options:</p>
<ol>
<li>Edit the Helm chart so that its templates use the latest stable APIs</li>
<li>Downgrade the cluster to <strong>1.16</strong> or <strong>1.18</strong> and keep using your old Redis chart, which fails on <strong>1.21</strong></li>
</ol>
|
<p>When I'm learning kubernetes with book <em>KubernetesInAction</em> Ch.09 and make some pratice.</p>
<hr />
<p>I use <code>kubectl rolling-update kubia-v1 kubia-v2 --image=luksa/kubia:v2</code></p>
<p>and he tell me <code>Error: unknown command "rolling-update" for "kubectl"</code></p>
<p>I find it in github <a href="https://github.com/kubernetes/kubectl/commit/d3af7e08624bfa7c2f52714b47cfe96a52d15fc0" rel="nofollow noreferrer">Remove deprecated rolling-update command</a>.</p>
<p>I'm confused why they removed this command?</p>
<p>Hope to get an answer, thank you!</p>
<p>The main reason is that <code>rolling-update</code> was driven entirely by the client. In production you cannot guarantee the network is always stable: if the connection between <code>kubectl</code> and the API server is lost mid-upgrade, you end up stuck in an intermediate state, with pods of different versions running together.</p>
<p>Another important point is that Kubernetes is designed declaratively: you tell it the desired final state and the control plane converges to it, rather than <code>kubectl</code> imperatively dictating each detailed step of the upgrade.</p>
<p>That is why the Deployment resource was introduced as the replacement.</p>
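<p>For reference, the declarative equivalent of the book's <code>rolling-update</code> example uses a Deployment plus <code>kubectl rollout</code> (assuming a Deployment named <code>kubia</code> with a container named <code>nodejs</code>, as in the book):</p>
<pre><code>kubectl set image deployment/kubia nodejs=luksa/kubia:v2
kubectl rollout status deployment/kubia
kubectl rollout undo deployment/kubia   # roll back if something goes wrong
</code></pre>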
|
<p>I'm trying to adapt my Spring boot application to k8s environment and want to use ConfigMaps as property sources.
I faced that if I'm using</p>
<pre><code>kubernetes:
  config:
    sources:
      - name: application-config
</code></pre>
<p>for application with name <code>appName</code> then any other ConfigMaps with Spring cloud kubernetes convention names like <code>appName-kubernetes</code> or <code>appName-dev</code> is silently ignored. Looks like listed sources in <code>config.sources</code> overrides and disables usage of any other PropertySources from ConfigMaps. <br>
I'm forced to use specific name for ConfigMap ('application-config' in sample above). <p>
So question is - how (if) can I specify both <code>config.sources</code> and simultaneously have ConfigMaps with names <code>appName-*</code> picked up correctly?</p>
<p>After some debugging through <code>ConfigMapPropertySource</code>, I achieved this with:</p>
<pre><code>kubernetes:
  config:
    name: @project.artifactId@
    sources:
      - name: application-config
      - name: @project.artifactId@
</code></pre>
<p>Now it loads <code>application-config</code>, <code>application-config-kubernetes</code>, <code>appName-kubernetes</code> and other profile-specific ConfigMaps.</p>
|
<p>Below is an example <code>IngressRoute</code>.</p>
<p>I would like to have something like this where the first part of the domain would map to a Kubernetes service without having to statically define the service name.</p>
<p>service1.api.test.com -> service1</p>
<p>service2.api.test.com -> service1</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: ingressroute
namespace: default
spec:
entryPoints:
- web
routes:
- match: "HostRegexp(`{subdomain:[a-z]+}.api.test.com`)"
kind: Rule
services:
- name: whoami # can this be dynamic?
port: 80
</code></pre>
<p>No, it's not possible. Even if the host regular expression matches, Traefik cannot derive the target service from the captured subdomain, so each service name has to be set statically.</p>
<p>You will therefore need separate match rules for the different services, which can live in the same IngressRoute:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: ingressroute
namespace: default
spec:
entryPoints:
- web
routes:
- match: "HostRegexp(`someservice.yourdomain.com`) && PathPrefix(`/`)"
kind: Rule
services:
- name: whoamiV1
port: 80
- match: "HostRegexp(`yourdomain.com`,`{subdomain:[a-zA-Z0-9-]+}.yourdomain.com`) &&PathPrefix(`/api/someservice/`)"
kind: Rule
services:
- name: whoamiV2
port: 80
</code></pre>
<p>For more details you can refer the <a href="https://doc.traefik.io/traefik/providers/kubernetes-crd/" rel="nofollow noreferrer">docs</a></p>
|
<p>I'm trying to deploy vernemq on kubernetes and want to access it with subdomain with ssl but ssl should be terminated before request go to vernemq on port 1883.</p>
<p>Create a file <code>haproxy-ingress-values.yaml</code> with the following content:</p>
<pre><code>controller:
tcp:
"1883": "default/vernemq:1883:::default/mqtt-tls"
</code></pre>
<p><code>default/vernemq:1883</code> is the <code>vernemq</code> service with port 1883, and <code>default/mqtt-tls</code> is the TLS secret used to terminate the MQTT connection.</p>
<p>Then use the following command to upgrade your <code>haproxy-ingress</code>:</p>
<pre><code>helm upgrade haproxy-ingress haproxy-ingress/haproxy-ingress --create-namespace --namespace ingress-controller --version 0.13.4 -f haproxy-ingress-values.yaml
</code></pre>
<p>Just replace <code>upgrade</code> with <code>install</code> when installing haproxy-ingress from scratch.</p>
<p>Finally, deploy the following Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: vernemq-ingress
annotations:
kubernetes.io/ingress.class: haproxy
ingress.kubernetes.io/tcp-service-port: "1883"
spec:
rules:
- http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: vernemq
port:
number: 1883
</code></pre>
|
<ul>
<li>My service (tomcat/java) is running on a kubernetes cluster (AKS).</li>
<li>I would like to write the log files (tomcat access logs, application logs with logback) to an AzureFile volume.</li>
<li>I do not want to write the access logs to the stdout, because I do not want to mix the access logs with the application logs.</li>
</ul>
<p><strong>Question</strong></p>
<p>I expect that all logging is done asynchronously, so that writing to the slow AzureFile volume should not affect the performance.
Is this correct?</p>
<p><strong>Update</strong></p>
<p>In the end I want to collect the logfiles so that I can send all logs to ElasticSearch.</p>
<p>Especially I need a way to collect the access logs.</p>
| <p>If you want to send your access logs to Elastic Search, you just need to extend the <a href="https://tomcat.apache.org/tomcat-9.0-doc/api/org/apache/catalina/valves/AbstractAccessLogValve.html" rel="nofollow noreferrer"><code>AbstractAccessLogValve</code></a> and implement the <code>log</code> method.</p>
<p>The <code>AbstractAccessLogValve</code> already contains the logic to format the messages, so you only need to add the logic that sends the formatted message on.</p>
|
<p>I am creating an <code>EKS-Anywhere</code> local cluster by following these steps: <a href="https://anywhere.eks.amazonaws.com/docs/getting-started/local-environment/" rel="nofollow noreferrer">Create local cluster | EKS Anywhere</a></p>
<p>Getting the following error after executing this command.</p>
<p><code>eksctl anywhere create cluster -f $CLUSTER_NAME.yaml</code></p>
<pre><code>Performing setup and validations
Warning: The docker infrastructure provider is meant for local development and testing only
✅ Docker Provider setup is valid
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing storage class on workload cluster
Installing cluster-api providers on workload cluster
Moving cluster management from bootstrap to workload cluster
Error: failed to create cluster: error moving CAPI management from source to target: failed moving management cluster: Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Error: action failed after 10 attempts: failed to connect to the management cluster: action failed after 9 attempts: Get https://127.0.0.1:43343/api?timeout=30s: EOF
</code></pre>
| <h1>Upgrade your <code>cert-manager</code></h1>
<p>There is known issue: <a href="https://github.com/kubernetes-sigs/cluster-api/issues/3836" rel="nofollow noreferrer">clusterctl init fails when existing cert-manager runs 1.0+ · Issue #3836 · kubernetes-sigs/cluster-api</a></p>
<p>And there is a solution: <a href="https://github.com/kubernetes-sigs/cluster-api/pull/4013" rel="nofollow noreferrer">⚠️ Upgrade cert-manager to v1.1.0 by fabriziopandini · Pull Request #4013 · kubernetes-sigs/cluster-api</a></p>
<p>And <a href="https://github.com/kubernetes-sigs/cluster-api/issues/3836#issuecomment-836599445" rel="nofollow noreferrer">it works</a>:</p>
<blockquote>
<p>Cluster API is using cert-manager v1.1.0 now, so this should not be a problem anymore</p>
</blockquote>
<p>So, I'd suggest upgrading.</p>
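<p>If cert-manager was installed from the static manifests, one common way to upgrade is to apply the v1.1.0 release manifest (adjust this to however cert-manager was actually installed in your cluster):</p>
<pre><code>kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml
</code></pre>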
|
<p>We are getting warnings in our production logs for .Net Core Web API services that are running in Kubernetes.</p>
<blockquote>
<p>Storing keys in a directory '{path}' that may not be persisted outside
of the container. Protected data will be unavailable when container is
destroyed.","@l":"Warning","path":"/root/.aspnet/DataProtection-Keys",SourceContext:"Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository"</p>
</blockquote>
<p>We do not explicitly call <code>services.AddDataProtection()</code> in Startup, but it seems that we are getting the warnings for services that use .NET Core 3.1 and .NET 5 (not for .NET Core 2.1) and that also have the following in Startup:</p>
<pre><code>services.AddAuthentication Or
services.AddMvc()
</code></pre>
<p>(Maybe there are other conditions that I am missing.)</p>
<p>I am not able to identify exactly where it is called, but locally I can see the related DLLs that are loaded before XmlKeyManager accesses DataProtection-Keys:</p>
<pre><code> Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\Microsoft.Win32.Registry.dll'.
Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\System.Xml.XDocument.dll'.
Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\System.Private.Xml.Linq.dll'.
Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\System.Private.Xml.dll'.
Loaded 'C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.1.19\System.Resources.ResourceManager.dll'.
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager:
Using 'C:\Users\..\AppData\Local\ASP.NET\DataProtection-Keys' as key repository and Windows DPAPI to encrypt keys at rest.
</code></pre>
<p>Is it safe to ignore such warnings, considering that we do not use DataProtection explicitly, do not use authentication encryption for long periods, and haven't seen any issues during testing?</p>
<p>Or does the error mean that if different pods are involved for the same client, authentication may fail, and it would be better to do something like what is suggested in <a href="https://stackoverflow.com/questions/61452280/how-to-store-data-protection-keys-outside-of-the-docker-container">How to store Data Protection-keys outside of the docker container?</a>?</p>
| <p>After analyzing how our applications use protected data (authentication cookies, CSRF tokens, etc.), our team decided that “Protected data will be unavailable when container is destroyed.” is just a warning with no customer impact, so we ignore it.</p>
<p>But <a href="https://www.howtogeek.com/693183/what-does-ymmv-mean-and-how-do-you-use-it/" rel="nofollow noreferrer">YMMV</a>.</p>
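<p>For reference, if you later decide the keys <em>do</em> need to survive container restarts, a minimal sketch is to persist them to a mounted volume via <code>PersistKeysToFileSystem</code> (the <code>/var/dpkeys</code> mount path below is an assumption, e.g. a PersistentVolumeClaim mounted into the pod):</p>
<pre><code>// Startup.ConfigureServices -- persist Data Protection keys to a mounted volume
// so they survive pod restarts and are shared between replicas.
public void ConfigureServices(IServiceCollection services)
{
    services.AddDataProtection()
        .PersistKeysToFileSystem(new DirectoryInfo("/var/dpkeys"));
}
</code></pre>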
|
<p>I created a new kubernetes cluster (GKE) and installed an application using helm.</p>
<p>I was able to locate the gke-node on which my pod was deployed using the command,
<code>kubectl get pod -n namespace -o wide</code></p>
<p>After that I logged on to the Kubernetes node; however, on this node I am unable to view the Docker images. This is the case on v1.21.4-gke.2300.</p>
<p>In the older version, v1.19.13-gke.1200, I was able to observe the Docker images on the nodes when they were pulled from the repository.</p>
<p>I can view the list of Docker images on v1.21.4-gke.2300 with the command
<code>kubectl get pods -n geo-test -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c</code></p>
<p>But I would like to know where in the cluster these images are stored and why I am not able to observe them on the node like I did in the older version.</p>
<p>My reason for asking is that in version v1.19.13-gke.1200 I was able to perform minor changes on a Docker image, build a custom image and use that for installation instead of storing the images in GCR and pulling them from there.
Any suggestion on how to go about this in the new version?</p>
<p><a href="https://i.stack.imgur.com/bZhmj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bZhmj.png" alt="v1.19.13-gke.1200" /></a></p>
| <p>Starting with GKE node version 1.19, the default node image for Linux nodes is the Container-Optimized OS with containerd (cos_containerd) variant instead of the Container-Optimized OS with Docker (cos) variant.</p>
<p>Now instead of <code>docker</code> commands you can use <code>crictl</code>. Refer to the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/" rel="noreferrer">crictl user guide</a> for the complete set of supported features and usage information.</p>
<p>Specifically, you can run <code>crictl images</code> to list the images available on the node.</p>
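<p>For example, after SSH-ing into a <code>cos_containerd</code> node, the common Docker commands map roughly like this (a sketch; whether <code>sudo</code> is needed depends on your setup):</p>
<pre><code># list images cached on the node (replaces `docker images`)
sudo crictl images

# list running containers (replaces `docker ps`)
sudo crictl ps

# pull an image manually (replaces `docker pull`)
sudo crictl pull busybox:latest
</code></pre>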
|
<p>We need to use the <strong>proxy protocol V2</strong> on GCP and we were not able to find a way to do it.
Any idea if it's supported? If yes, how can we do it?</p>
<p>Something similar to this configuration on AWS
<a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol" rel="nofollow noreferrer">https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol</a></p>
| <p>Google Cloud TCP Proxy Load Balancers support <strong>Proxy Protocol Version 1</strong>.</p>
<p>Version 2 is not supported at this time.</p>
<p>Refer to the documentation for the supported versions of this command:</p>
<p><a href="https://cloud.google.com/sdk/gcloud/reference/compute/target-tcp-proxies/create" rel="nofollow noreferrer">gcloud compute target-tcp-proxies create</a></p>
<p>Also documented in the REST API:</p>
<p><a href="https://cloud.google.com/compute/docs/reference/rest/v1/targetTcpProxies/setProxyHeader#request-body" rel="nofollow noreferrer">Method: targetTcpProxies.setProxyHeader</a></p>
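<p>For example, enabling proxy protocol version 1 when creating the target proxy looks like this (a sketch; <code>my-tcp-proxy</code> and <code>my-backend-service</code> are placeholder names):</p>
<pre><code>gcloud compute target-tcp-proxies create my-tcp-proxy \
    --backend-service=my-backend-service \
    --proxy-header=PROXY_V1
</code></pre>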
|
<p>For the needs of a project i have created 2 Kubernetes clusters on GKE.</p>
<p><strong>Cluster 1</strong>: 10 containers in <em>one Pod</em></p>
<p><strong>Cluster 2</strong>: 10 containers in 10 <em>different Pods</em></p>
<p>All containers are connected and constitute an application.</p>
<p>What I would like to do is to generate some load and observe how the VPA will autoscale the containers.</p>
<p>Until now, using the "<strong>Auto</strong>" mode, I have noticed that VPA changes values only once, at the beginning and not while I generate load, and that the Upper Bound is so high that it doesn't need any change!</p>
<p>Would you suggest me:</p>
<p><strong>1)</strong> to use Auto or Recommendation mode?</p>
<p>and</p>
<p><strong>2)</strong> to create 1 or 2 replicas of my application?</p>
<p>Also I would like to say that 2 of the 10 containers are <strong>MySQL</strong> and <strong>MongoDB</strong>. So if I have to create 2 replicas, I should use <strong>StatefulSets or operators</strong>, right?</p>
<p>Thank you very much!!</p>
| <p>I'm not sure you mean this when you say:</p>
<blockquote>
<p>Cluster 1: 10 containers in one Pod</p>
<p>Cluster 2: 10 containers in different Pods</p>
</blockquote>
<p>First of all, you are not following best practice: ideally, you should keep a single container in a single Pod.</p>
<p>Running 10 containers in one Pod is too much; if there is interdependency, your code should use the Kubernetes Service names so the components can connect to each other.</p>
<blockquote>
<p>to create 1 or 2 replicas of my application?</p>
</blockquote>
<p>Yes, it is always better to run multiple replicas of the application, so that if anything happens (even a node going down), your Pod on another node would still be running.</p>
<blockquote>
<p>Also i would like to say that 2 of 10 containers is mysql and mongoDB
. So if i have to create 2 replicas, i should use statefulsets or
operators, right?</p>
</blockquote>
<p>You can use both operators and StatefulSets; there is no conflict in that, since an operator ideally creates the StatefulSets for you.</p>
<p>Implementing MySQL replication across the replicas manually would be hard unless you have good experience as a DBA and know what you are doing.</p>
<p>With an operator you get the benefit of automatic backups, automatic replication management and other such things.</p>
<p>Operators indirectly create the StatefulSet or Deployment, but you won't have to manage much or worry about replication, failover planning and DB strategy.</p>
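<p>As an illustration of the StatefulSet approach, a heavily trimmed MySQL StatefulSet could look like this (a sketch only; the Secret name, image tag and storage size are assumptions, and it does not configure replication between the replicas by itself, which is exactly what an operator would handle for you):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql          # headless Service providing stable per-pod DNS names
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret   # assumed Secret name
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:         # one PVC per replica, so data survives rescheduling
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
</code></pre>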
|
<p>I get this error when trying to get ALB logs:</p>
<pre><code>root@b75651fde30e:/apps/tekton/deployment# kubectl logs -f ingress/tekton-dashboard-alb-dev
error: cannot get the logs from *v1.Ingress: selector for *v1.Ingress not implemented
</code></pre>
<p>The load balancer YAML:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tekton-dashboard-alb-dev
namespace: tekton-pipelines
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/load-balancer-name: tekton-dashboard-alb-dev
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/tags: "Cost=SwiftALK,VantaOnwer=foo@bar.com,VantaNonProd=true,VantaDescription=ALB Ingress for Tekton Dashboard,VantaContainsUserData=false,VantaUserDataStored=None"
alb.ingress.kubernetes.io/security-groups: sg-034ca9846b81fd721
kubectl.kubernetes.io/last-applied-configuration: ""
spec:
defaultBackend:
service:
name: tekton-dashboard
port:
number: 9097
</code></pre>
<p><strong>Note:</strong> <code>sg-034ca9846b81fd721</code> restricts access to our VPN CIDRs</p>
<p>The Ingress is up, as revealed by:</p>
<pre><code>root@b75651fde30e:/apps/tekton/deployment# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
tekton-dashboard-alb-dev <none> * tekton-dashboard-alb-dev-81361211.us-east-1.elb.amazonaws.com 80 103m
root@b75651fde30e:/apps/tekton/deployment# kubectl describe ingress/tekton-dashboard-alb-dev
Name: tekton-dashboard-alb-dev
Namespace: tekton-pipelines
Address: tekton-dashboard-alb-dev-81361211.us-east-1.elb.amazonaws.com
Default backend: tekton-dashboard:9097 (172.18.5.248:9097)
Rules:
Host Path Backends
---- ---- --------
* * tekton-dashboard:9097 (172.18.5.248:9097)
Annotations: alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/load-balancer-name: tekton-dashboard-alb-dev
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/security-groups: sg-034ca9846b81fd721
alb.ingress.kubernetes.io/tags:
Cost=SwiftALK,VantaOnwer=swiftalkdevteam@digite.com,VantaNonProd=true,VantaDescription=ALB Ingress for SwifTalk Web Microservices,VantaCon...
alb.ingress.kubernetes.io/target-type: ip
kubernetes.io/ingress.class: alb
Events: <none>
</code></pre>
| <p><strong>The error you received means that the logs for your object are not implemented. It looks like you're trying to get logs from the wrong place.</strong></p>
<p>I am not able to reproduce your problem on AWS, but I tried to do it on GCP and the situation was very similar. You cannot get logs from <code>ingress/tekton-dashboard-alb-dev</code>, and this is normal behaviour. If you want to get the logs of your ALB, you have to find the appropriate pod and then extract the logs from it. Let me show you how I did it on GCP. The commands are the same, but the pod names will be different.</p>
<p>First I have executed:</p>
<pre><code>kubectl get pods --all-namespaces
</code></pre>
<p>Output:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-controller-57cb5bf694-722ml 1/1 Running 0 18d
-----
and many other not related pods in other namespaces
</code></pre>
<p>You can find directly your pod with command:</p>
<pre><code>kubectl get pods -n ingress-nginx
</code></pre>
<p>Output:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-57cb5bf694-722ml 1/1 Running 0 18d
</code></pre>
<p>Now you can get logs from <code>ingress controller</code> by command:</p>
<pre><code>kubectl logs -n ingress-nginx ingress-nginx-controller-57cb5bf694-722ml
</code></pre>
<p>in your situation:</p>
<pre><code>kubectl logs -n <your namespace> <your ingress controller pod>
</code></pre>
<p>The output should be similar to this:</p>
<pre><code>-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v0.46.0
Build: 6348dde672588d5495f70ec77257c230dc8da134
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.6
-------------------------------------------------------------------------------
I0923 05:26:20.053561 8 flags.go:208] "Watching for Ingress" class="nginx"
W0923 05:26:20.053753 8 flags.go:213] Ingresses with an empty class will also be processed by this Ingress controller
W0923 05:26:20.054185 8 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0923 05:26:20.054502 8 main.go:241] "Creating API client" host="https://10.16.0.1:443"
I0923 05:26:20.069482 8 main.go:285] "Running in Kubernetes cluster" major="1" minor="20+" git="v1.20.9-gke.1001" state="clean" commit="1fe18c314ed577f6047d2712a9d1c8e498e22381" platform="linux/amd64"
I0923 05:26:20.842645 8 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0923 05:26:20.846132 8 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18"
W0923 05:26:20.849470 8 main.go:127] No IngressClass resource with name nginx found. Only annotation will be used.
I0923 05:26:20.866252 8 ssl.go:532] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0923 05:26:20.917594 8 nginx.go:254] "Starting NGINX Ingress controller"
I0923 05:26:20.942084 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"42dc476e-3c5c-4cc9-a6a4-266edecb2a4b", APIVersion:"v1", ResourceVersion:"5600", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I0923 05:26:22.118459 8 nginx.go:296] "Starting NGINX process"
I0923 05:26:22.118657 8 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I0923 05:26:22.119481 8 nginx.go:316] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0923 05:26:22.120266 8 controller.go:146] "Configuration changes detected, backend reload required"
I0923 05:26:22.126350 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-57cb5bf694-8c9tn"
I0923 05:26:22.214194 8 controller.go:163] "Backend successfully reloaded"
I0923 05:26:22.214838 8 controller.go:174] "Initial sync, sleeping for 1 second"
I0923 05:26:22.215234 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-57cb5bf694-722ml", UID:"b9672f3c-ecdf-473e-80f5-529bbc5bc4e5", APIVersion:"v1", ResourceVersion:"59016530", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I0923 05:27:00.596169 8 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0923 05:27:00.596305 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-57cb5bf694-722ml"
157.230.143.29 - - [23/Sep/2021:08:28:25 +0000] "GET / HTTP/1.1" 400 248 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:54.0) Gecko/20100101 Firefox/70.0" 165 0.000 [] [] - - - - d47be1e37ea504aca93d59acc7d36a2b
157.230.143.29 - - [23/Sep/2021:08:28:26 +0000] "\x00\xFFK\x00\x00\x00\xE2\x00 \x00\x00\x00\x0E2O\xAAC\xE92g\xC2W'\x17+\x1D\xD9\xC1\xF3,kN\x17\x14" 400 150 "-" "-" 0 0.076 [] [] - - - - c497187f4945f8e9e7fa84d503198e85
157.230.143.29 - - [23/Sep/2021:08:28:26 +0000] "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.138 [] [] - - - - 4067a2d34d0c1f2db7ffbfc143540c1a
167.71.216.70 - - [23/Sep/2021:12:02:23 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03\xDB\xBBo*K\xAE\x9A&\x8A\x9B)\x1B\xB8\xED3\xB7\xE16N\xEA\xFCS\x22\x14V\xF7}\xC8&ga\xDA\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.300 [] [] - - - - ff6908bb17b0da020331416773b928b5
167.71.216.70 - - [23/Sep/2021:12:02:23 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03a\xBF\xFB\xC1'\x03S\x83D\x5Cn$\xAB\xE1\xA6%\x93G-}\xD1C\xB2\xB0E\x8C\x8F\xA8q-\xF7$\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.307 [] [] - - - - fee3a478240e630e6983c60d1d510f52
66.240.205.34 - - [23/Sep/2021:12:04:11 +0000] "145.ll|'|'|SGFjS2VkX0Q0OTkwNjI3|'|'|WIN-JNAPIER0859|'|'|JNapier|'|'|19-02-01|'|'||'|'|Win 7 Professional SP1 x64|'|'|No|'|'|0.7d|'|'|..|'|'|AA==|'|'|112.inf|'|'|SGFjS2VkDQoxOTIuMTY4LjkyLjIyMjo1NTUyDQpEZXNrdG9wDQpjbGllbnRhLmV4ZQ0KRmFsc2UNCkZhbHNlDQpUcnVlDQpGYWxzZQ==12.act|'|'|AA==" 400 150 "-" "-" 0 0.086 [] [] - - - - 365d42d67e7378359b95c71a8d8ce983
147.182.148.98 - - [23/Sep/2021:12:04:17 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03\xABA\xF4\xD5\xB7\x95\x85[.v\xDB\xD1\x1B\x04\xE7\xB4\xB8\x92\x82\xEC\xCC\xDDr\xB7/\xBD\x93/\xD0f4\xB3\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.152 [] [] - - - - 858c2ad7535de95c84dd0899708a3801
164.90.203.66 - - [23/Sep/2021:12:08:19 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03\x93\x81+_\x95\xFA\xEAj\xA7\x80\x15 \x179\xD7\x92\xAE\xA9i+\x9D`\xA07:\xD2\x22\xB3\xC6\xF3\x22G\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.237 [] [] - - - - 799487dd8ec874532dcfa7dad1c02a27
164.90.203.66 - - [23/Sep/2021:12:08:20 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03\xB8\x22\xCB>1\xBEM\xD4\x92\x95\xEF\x1C0\xB5&\x1E[\xC5\xC8\x1E2\x07\x1C\x02\xA1<\xD2\xAA\x91F\x00\xC6\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.193 [] [] - - - - 4604513713d4b9fb5a7199b7980fa7f2
164.90.203.66 - - [23/Sep/2021:12:16:10 +0000] "\x16\x03\x01\x01\xFC\x01\x00\x01\xF8\x03\x03[\x16\x02\x94\x98\x17\xCA\xB5!\xC11@\x08\xD9\x89RE\x970\xC2\xDF\xFF\xEBh\xA0i\x9Ee%.\x07{\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.116 [] [] - - - - 23019f0886a1c30a78092753f6828e74
77.247.108.81 - - [23/Sep/2021:14:52:51 +0000] "GET /admin/config.php HTTP/1.1" 400 248 "-" "python-requests/2.26.0" 164 0.000 [] [] - - - - 04630dbf3d0ff4a4b7138dbc899080e5
209.141.48.211 - - [23/Sep/2021:16:17:46 +0000] "" 400 0 "-" "-" 0 0.057 [] [] - - - - 3c623b242909a99e18178ec10a814d7b
209.141.62.185 - - [23/Sep/2021:18:13:11 +0000] "GET /config/getuser?index=0 HTTP/1.1" 400 248 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0" 353 0.000 [] [] - - - - 2640cf06912615a7600e814dc893884b
125.64.94.138 - - [23/Sep/2021:19:49:08 +0000] "GET / HTTP/1.0" 400 248 "-" "-" 18 0.000 [] [] - - - - b633636176888bc3b7f6230f691e0724
2021/09/23 19:49:20 [crit] 39#39: *424525 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 125.64.94.138, server: 0.0.0.0:443
125.64.94.138 - - [23/Sep/2021:19:49:21 +0000] "GET /favicon.ico HTTP/1.1" 400 650 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4 240.111 Safari/537.36" 197 0.000 [] [] - - - - ede08c8fb12e8ebaf3adcbd2b7ea5fd5
125.64.94.138 - - [23/Sep/2021:19:49:22 +0000] "GET /robots.txt HTTP/1.1" 400 650 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4 240.111 Safari/537.36" 196 0.000 [] [] - - - - fae50b56a11600abc84078106ba4b008
125.64.94.138 - - [23/Sep/2021:19:49:22 +0000] "GET /.well-known/security.txt HTTP/1.1" 400 650 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4 240.111 Safari/537.36" 210 0.000 [] [] - - - - ad82bcac7d7d6cd9aa2d044d80bb719d
87.251.75.145 - - [23/Sep/2021:21:29:10 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.180 [] [] - - - - 8c2b62bcdf26ac1592202d0940fc30b8
167.71.102.181 - - [23/Sep/2021:21:54:58 +0000] "\x00\x0E8K\xA3\xAAe\xBCn\x14\x1B\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.027 [] [] - - - - 65b8ee37a2c6bf8368843e4db3b90b2a
185.156.72.27 - - [23/Sep/2021:22:03:55 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.139 [] [] - - - - 92c6ad2d71b961bf7de4e345ff69da10
185.156.72.27 - - [23/Sep/2021:22:03:55 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.140 [] [] - - - - fe0424f8ecf9afc1d0154bbca2382d13
34.86.35.21 - - [23/Sep/2021:22:54:41 +0000] "\x16\x03\x01\x00\xE3\x01\x00\x00\xDF\x03\x03\x0F[\xA9\x18\x15\xD3@4\x7F\x7F\x98'\xA9(\x8F\xE7\xCCDd\xF9\xFF`\xE3\xCE\x9At\x05\x97\x05\xB1\xC3}\x00\x00h\xCC\x14\xCC\x13\xC0/\xC0+\xC00\xC0,\xC0\x11\xC0\x07\xC0'\xC0#\xC0\x13\xC0\x09\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 2.039 [] [] - - - - c09d38bf2cd925dac4d9e5d5cb843ece
2021/09/24 02:41:15 [crit] 40#40: *627091 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 184.105.247.252, server: 0.0.0.0:443
61.219.11.151 - - [24/Sep/2021:03:40:51 +0000] "dN\x93\xB9\xE6\xBCl\xB6\x92\x84:\xD7\x03\xF1N\xB9\xC5;\x90\xC2\xC6\xBA\xE1I-\x22\xDDs\xBA\x1FgC:\xB1\xA7\x80+\x00\x00\x00\x00%\xFDK:\xAAW.|J\xB2\xB5\xF5'\xA5l\xD3V(\xB7\x01%(CsK8B\xCE\x9A\xD0z\xC7\x13\xAD" 400 150 "-" "-" 0 0.203 [] [] - - - - 190d00221eefc869b5938ab6380f835a
46.101.155.106 - - [24/Sep/2021:04:56:37 +0000] "HEAD / HTTP/1.0" 400 0 "-" "-" 17 0.000 [] [] - - - - e8c108201c37d7457e4578cf68feacf8
46.101.155.106 - - [24/Sep/2021:04:56:38 +0000] "GET /system_api.php HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" 255 0.000 [] [] - - - - b3032f9a9b3f4f367bdee6692daeb05c
46.101.155.106 - - [24/Sep/2021:04:56:39 +0000] "GET /c/version.js HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" 253 0.000 [] [] - - - - 9104ab72a0232caf6ff98da57d325144
46.101.155.106 - - [24/Sep/2021:04:56:40 +0000] "GET /streaming/clients_live.php HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" 267 0.000 [] [] - - - - 341cbb6cf424b348bf8b788f79373b8d
46.101.155.106 - - [24/Sep/2021:04:56:41 +0000] "GET /stalker_portal/c/version.js HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" 268 0.000 [] [] - - - - 9954fd805fa092595057dbf83511bd92
46.101.155.106 - - [24/Sep/2021:04:56:42 +0000] "GET /stream/live.php HTTP/1.1" 400 248 "-" "AlexaMediaPlayer/2.1.4676.0 (Linux;Android 5.1.1) ExoPlayerLib/1.5.9" 209 0.000 [] [] - - - - 3c9409419c1ec59dfc08c10cc3eb6eef
</code></pre>
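<p>On EKS the same idea applies to the AWS Load Balancer Controller that programs your ALB (a sketch; the controller usually runs in <code>kube-system</code>, but your namespace may differ):</p>
<pre><code># find the controller pod
kubectl get pods -n kube-system \
  -l app.kubernetes.io/name=aws-load-balancer-controller

# tail its logs via the deployment name
kubectl logs -n kube-system deployment/aws-load-balancer-controller -f
</code></pre>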
|
<p>I'm developing <a href="https://github.com/msrumon/microservice-architecture" rel="nofollow noreferrer">this dummy project</a> and trying to make it work locally via <code>Skaffold</code>.</p>
<p>There are 3 services in my project (running on ports <code>3001</code>, <code>3002</code> and <code>3003</code> respectively), wired via <code>NATS server</code>.</p>
<p>The problem is: I get different kinds of errors each time I run <code>skaffold debug</code>, and one/more service(s) don't work.</p>
<p>At times, I don't get any errors, and all services work as expected. The followings are some of the errors:</p>
<pre><code>Waited for <...>s due to client-side throttling, not priority and fairness,
request: GET:https://kubernetes.docker.internal:6443/api/v1/namespaces/default/pods?labelSelector=app%!D(MISSING). <...>%!C(MISSING)app.kubernetes.io%!F(MISSING)managed-by%!D(MISSING)skaffold%!C(MISSING)skaffold.dev%!F(MISSING)run-id%!D(MISSING)<...>` (from `request.go:668`)
- `0/1 nodes are available: 1 Insufficient cpu.` (from deployments)
- `UnhandledPromiseRejectionWarning: NatsError: CONNECTION_REFUSED` (from apps)
- `UnhandledPromiseRejectionWarning: Error: getaddrinfo EAI_AGAIN nats-service` (from apps)
</code></pre>
<p>I'm at a loss and can't help myself anymore. I hope someone here will be able to help me out.</p>
<p>Thanks in advance.</p>
<p>PS: Below is my machine's config, in case it's my machine's fault.</p>
<pre><code>Processor: AMD Ryzen 7 1700 (8C/16T)
Memory: 2 x 8GB DDR4 2667MHz
Graphics: AMD Radeon RX 590 (8GB)
OS: Windows 10 Pro 21H1
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ docker version
Client:
Version: 19.03.12
API version: 1.40
Go version: go1.13.12
Git commit: 0ed913b8-
Built: 07/28/2020 16:36:03
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 20.10.8
API version: 1.41 (minimum version 1.12)
Go version: go1.16.6
Git commit: 75249d8
Built: Fri Jul 30 19:52:10 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.9
GitCommit: e25210fe30a0a703442421b0f60afac609f950a3
runc:
Version: 1.0.1
GitCommit: v1.0.1-0-g4144b63
docker-init:
Version: 0.19.0
GitCommit: de40ad0
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I use WSL2 (Debian) and <code>docker-desktop</code> is the context of Kubernetes.</p>
| <p>The main reason of issues like this one is that you are setting <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#if-you-specify-a-cpu-limit-but-do-not-specify-a-cpu-request" rel="nofollow noreferrer">only CPU limit (without setting CPU request) so Kubernetes automatically assigns a CPU request which is equal to the CPU limit</a>:</p>
<blockquote>
<p>If you specify a CPU limit for a Container but do not specify a CPU request, Kubernetes automatically assigns a CPU request that matches the limit. Similarly, if a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit.</p>
</blockquote>
<p>So as requests are equal to the limits, your node can't meet these requirements (you have 16 CPUs available; to start all services you need 24 CPUs) - that's why you are getting <code>0/1 nodes are available: 1 Insufficient cpu</code> error message.</p>
<p><em>How to fix it?</em></p>
<ul>
<li>Set a <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">CPU request</a> which is different from the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">CPU limit</a></li>
<li>Delete CPU limits</li>
</ul>
<p><em>But...</em></p>
<p>You wrote that:</p>
<blockquote>
<p>Should I also try setting up the <code>requests</code> key and set the lower limit too? Or what about completely omitting it?
I tried that one, and still same issue.</p>
</blockquote>
<p>So if you deleted all CPU limits from all deployments and you still get errors related to insufficient CPU, it clearly shows that your app is too resource-hungry. I'd suggest optimizing the application in terms of resource utilization. Another option is to increase the node's resources.</p>
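<p>For example, a container spec with a request lower than the limit could look like this (a sketch; the values are placeholders to adapt per service):</p>
<pre><code>resources:
  requests:
    cpu: "250m"      # what the scheduler reserves on the node
    memory: "256Mi"
  limits:
    cpu: "1"         # hard cap enforced at runtime
    memory: "512Mi"
</code></pre>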
|
<p>For example, I have an internal docker registry for my kube cluster, hosted on <code>internal-docker-registry.io:5000</code>.</p>
<p>When the pod from the kube cluster pulling image <code>busybox</code>, I don't want to pull from docker hub <code>docker.io</code>. But instead I want it to pull from <code>internal-docker-registry.io:5000</code>.</p>
<p><em>Note that I cannot change the image name to <code>internal-docker-registry.io:5000/busybox</code> since I don't own the spec and there are too man these kind of third party image in my cluster.</em></p>
| <p>I have posted a community wiki answer to summarize the topic:</p>
<p>As <a href="https://stackoverflow.com/users/10008173/david-maze" title="76,777 reputation">David Maze</a> well mentioned in the comment:</p>
<blockquote>
<p><strong>there is no "image registry search list"</strong>; an image name without a registry <em>always</em> uses <code>docker.io</code>, to avoid surprises where <em>e.g.</em> <code>FROM ubuntu</code> means something different depending on local configuration.</p>
</blockquote>
<p>So if you can't change the image name to <code>internal-docker-registry.io:5000/busybox</code>, then unfortunately you don't have the option to pull the image from your private registry (according to David's comment).</p>
<p>See also similar questions:</p>
<p><a href="https://stackoverflow.com/questions/33054369/how-to-change-the-default-docker-registry-from-docker-io-to-my-private-registry">How to change the default docker registry from docker.io to my private registry?</a></p>
<p><a href="https://stackoverflow.com/questions/66026479/how-to-change-default-k8s-cluster-registry">How to change default K8s cluster registry?</a></p>
|
<p>I have microk8s v1.22.2 running on Ubuntu 20.04.3 LTS.</p>
<p>Output from <code>/etc/hosts</code>:</p>
<pre><code>127.0.0.1 localhost
127.0.1.1 main
</code></pre>
<p>Excerpt from <code>microk8s status</code>:</p>
<pre><code>addons:
enabled:
dashboard # The Kubernetes dashboard
ha-cluster # Configure high availability on the current node
ingress # Ingress controller for external access
metrics-server # K8s Metrics Server for API access to service metrics
</code></pre>
<p>I checked for the running dashboard (<code>kubectl get all --all-namespaces</code>):</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-node-2jltr 1/1 Running 0 23m
kube-system pod/calico-kube-controllers-f744bf684-d77hv 1/1 Running 0 23m
kube-system pod/metrics-server-85df567dd8-jd6gj 1/1 Running 0 22m
kube-system pod/kubernetes-dashboard-59699458b-pb5jb 1/1 Running 0 21m
kube-system pod/dashboard-metrics-scraper-58d4977855-94nsp 1/1 Running 0 21m
ingress pod/nginx-ingress-microk8s-controller-qf5pm 1/1 Running 0 21m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 23m
kube-system service/metrics-server ClusterIP 10.152.183.81 <none> 443/TCP 22m
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.103 <none> 443/TCP 22m
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.197 <none> 8000/TCP 22m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 23m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 22m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 23m
kube-system deployment.apps/metrics-server 1/1 1 1 22m
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 22m
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 22m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 0 0 0 23m
kube-system replicaset.apps/calico-kube-controllers-f744bf684 1 1 1 23m
kube-system replicaset.apps/metrics-server-85df567dd8 1 1 1 22m
kube-system replicaset.apps/kubernetes-dashboard-59699458b 1 1 1 21m
kube-system replicaset.apps/dashboard-metrics-scraper-58d4977855 1 1 1 21m
</code></pre>
<p>I want to expose the microk8s dashboard within my local network to access it through <code>http://main/dashboard/</code></p>
<p>To do so, I did the following <code>nano ingress.yaml</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: public
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
name: dashboard
namespace: kube-system
spec:
rules:
- host: main
http:
paths:
- backend:
serviceName: kubernetes-dashboard
servicePort: 443
path: /
</code></pre>
<p>Enabling the ingress-config through <code>kubectl apply -f ingress.yaml</code> gave the following error:</p>
<pre><code>error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
</code></pre>
<p>Help would be much appreciated, thanks!</p>
<p><strong>Update:</strong>
@harsh-manvar pointed out a mismatch in the config version. I have rewritten ingress.yaml to a very stripped down version:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard
namespace: kube-system
spec:
rules:
- http:
paths:
- path: /dashboard
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 443
</code></pre>
<p>Applying this works. Also, the ingress rule gets created.</p>
<pre><code>NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
kube-system dashboard public * 127.0.0.1 80 11m
</code></pre>
<p>However, when I access the dashboard through <code>http://<ip-of-kubernetes-master>/dashboard</code>, I get a <code>400</code> error.</p>
<p>Log from the ingress controller:</p>
<pre><code>192.168.0.123 - - [10/Oct/2021:21:38:47 +0000] "GET /dashboard HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" 466 0.002 [kube-system-kubernetes-dashboard-443] [] 10.1.76.3:8443 48 0.000 400 ca0946230759edfbaaf9d94f3d5c959a
</code></pre>
<p>Does the dashboard also need to be exposed using the <code>microk8s proxy</code>? I thought the ingress controller would take care of this, or did I misunderstand this?</p>
| <p>To fix the error <code>error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"</code> you need to set <code>apiVersion</code> to <code>networking.k8s.io/v1</code>. From the <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="noreferrer">Kubernetes v1.16 article about deprecated APIs</a>:</p>
<blockquote>
<ul>
<li>NetworkPolicy in the <strong>extensions/v1beta1</strong> API version is no longer served
- Migrate to use the <strong>networking.k8s.io/v1</strong> API version, available since v1.8. Existing persisted data can be retrieved/updated via the new version.</li>
</ul>
</blockquote>
<p>Now moving to the second issue. You need to add a few annotations and make a few changes in your Ingress definition to expose the dashboard properly on the microk8s cluster:</p>
<ul>
<li>add <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer"><code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code> annotation</a></li>
<li>add <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="noreferrer"><code>nginx.ingress.kubernetes.io/configuration-snippet: | rewrite ^(/dashboard)$ $1/ redirect;</code> annotation</a></li>
<li>change <code>path: /dashboard</code> to <code>path: /dashboard(/|$)(.*)</code></li>
</ul>
<p>These are needed to properly forward requests to the backend pods - there is a <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-dashboard-custom-path/" rel="noreferrer">good explanation in this article</a>:</p>
<blockquote>
<p><strong>Note:</strong> The "nginx.ingress.kubernetes.io/rewrite-target" annotation rewrites the URL before forwarding the request to the backend pods. In <strong>/dashboard(/|$)(.*)</strong> for <strong>path</strong>, <strong>(.*)</strong> stores the dynamic URL that's generated while accessing the Kubernetes Dashboard. The "nginx.ingress.kubernetes.io/rewrite-target" annotation replaces the captured data in the URL before forwarding the request to the <strong>kubernetes-dashboard</strong> service. The "nginx.ingress.kubernetes.io/configuration-snippet" annotation rewrites the URL to add a trailing slash ("/") only if <strong>ALB-URL/dashboard</strong> is accessed.</p>
</blockquote>
<p>We also need two more changes:</p>
<ul>
<li>add <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="noreferrer"><code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"</code> annotation</a> to tell NGINX Ingress to communicate with Dashboard service using HTTPs</li>
<li>add <code>kubernetes.io/ingress.class: public</code> annotation <a href="https://stackoverflow.com/a/67041204/16391991">to use NGINX Ingress created by microk8s <code>ingress</code> plugin</a></li>
</ul>
<p>After implementing everything above, the final YAML file looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite ^(/dashboard)$ $1/ redirect;
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
kubernetes.io/ingress.class: public
name: dashboard
namespace: kube-system
spec:
rules:
- http:
paths:
- path: /dashboard(/|$)(.*)
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 443
</code></pre>
<p>It should work fine. No need to run <code>microk8s proxy</code> command.</p>
|
<p>Ever since I upgraded my EKS cluster to <code>v1.21</code>, I get the following error when triggering Cronjobs manually:</p>
<pre><code>➜ ~ kubectl create job --from=cronjob/elt-dim-customer-new test-1 -n dwh-dev
error: from must be an existing cronjob: no kind "CronJob" is registered for version "batch/v1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
➜ ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2",
GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean",
BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc",
Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.2-eks-0389ca3", GitCommit:"8a4e27b9d88142bbdd21b997b532eb6d493df6d2", GitTreeState:"clean",
BuildDate:"2021-07-31T01:34:46Z", GoVersion:"go1.16.5", Compiler:"gc",
Platform:"linux/amd64"}
</code></pre>
<p>This is the Cronjob structure (you can see the <code>apiVersion</code>):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: elt-dim-customer-new
spec:
schedule: {{ .Values.cron }}
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
</code></pre>
<p>Couldn't find what has changed in EKS 1.21. Will appreciate your assistance!</p>
| <p>Please upgrade your kubectl to 1.21 as well. CronJob graduated to the stable <code>batch/v1</code> API in Kubernetes 1.21, and the built-in scheme of your 1.20 client only knows CronJob as <code>batch/v1beta1</code> - that is why <code>kubectl create job --from=cronjob/...</code> fails on the client side even though the server accepts the manifest.</p>
|
<p>I have a question about Kubernetes networking.</p>
<p>My working scenario:</p>
<ul>
<li>I have a Jenkins container on my localhost, and this container is up and running. Inside Jenkins, I have a job. To access Jenkins, I use the URL "http://localhost:8080". (Jenkins is not running inside Kubernetes.)</li>
<li>My Flask app triggers the Jenkins job with this code:</li>
</ul>
</ul>
<blockquote>
<pre><code> @app.route("/create",methods=["GET","POST"])
def create():
if request.method =="POST":
dosya_adi=request.form["sendmail"]
server = jenkins.Jenkins('http://localhost:8080/', username='my-user-name', password='my-password')
server.build_job('jenkins_openvpn', {'FILE_NAME': dosya_adi}, token='my-token')
</code></pre>
</blockquote>
<ul>
<li>Then I Dockerized this Flask app. My image name is "jenkins-app".</li>
<li>If I run this command, everything works perfectly:</li>
</ul>
<blockquote>
<p><code>docker run -it --network="host" --name=jenkins-app jenkins-app</code></p>
</blockquote>
<p>But I want to do the same thing with Kubernetes. For that, I wrote this YAML file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: jenkins-pod
spec:
hostNetwork: true
containers:
- name: jenkins-app
image: jenkins-app:latest
imagePullPolicy: Never
ports:
- containerPort: 5000
</code></pre>
<ul>
<li>With this YAML file, I can access the Flask app on port 5000. But when I try to trigger the Jenkins job, I get an error like this: <code>requests.exceptions.ConnectionError</code></li>
</ul>
<p>Would you suggest if there is a way to do this with Kubernetes?</p>
| <p>I created an endpoint.yml file with the content below, which solved my problem:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Endpoints
metadata:
name: jenkins-server
subsets:
- addresses:
- ip: my-ps-ip
ports:
- port: 8080
</code></pre>
<p>Then I changed this line in my Flask app like this:</p>
<pre class="lang-py prettyprint-override"><code>server = jenkins.Jenkins('http://my-ps-ip:8080/', username='my-user-name', password='my-password')
</code></pre>
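<p>Note that a manually managed <code>Endpoints</code> object is normally paired with a selector-less <code>Service</code> of the same name, so that the in-cluster DNS name (here <code>jenkins-server</code>) resolves and traffic is forwarded to the listed IP. A minimal sketch of that companion Service (the name and port must match the Endpoints above):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-server   # must match the Endpoints object's name
spec:
  # no selector: Kubernetes will not manage the endpoints itself;
  # it uses the manually created Endpoints object instead
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
```

<p>With this in place, the Flask app can also connect via <code>http://jenkins-server:8080/</code> instead of hard-coding the host IP.</p>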
|
<p>How can I schedule a Kubernetes <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer">cron job</a> to run at a specific time and just once?</p>
<p>(Or alternatively, a Kubernetes job which is not scheduled to run right away, but delayed for some amount of time – what is in some scheduling systems referred to as "earliest time to run".)</p>
<p>The documentation says:</p>
<blockquote>
<p>Cron jobs can also schedule individual tasks for a specific time [...]</p>
</blockquote>
<p>But how does that work in terms of job history; is the control plane smart enough to know that the scheduling is for a specific time and won't be recurring?</p>
| <p>You can always put specific minute, hour, day, month in the schedule cron expression, for example 12:15am on 25th of December:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
spec:
schedule: "15 0 25 12 *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
<p>Unfortunately it does not support specifying the year (the single <code>*</code> in the cron expression is for the day of the week) but you have one year to remove the cronjob before the same date & time comes again for the following year.</p>
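<p>If you want to be extra safe about the job never firing again the following year, you can also flip the CronJob's <code>suspend</code> flag right after its single run (for example with <code>kubectl patch cronjob hello -p '{"spec":{"suspend":true}}'</code>). A sketch of the relevant field:</p>

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "15 0 25 12 *"
  suspend: true   # no new jobs are scheduled while this is true
```

<p>Suspending keeps the job history around, while deleting the CronJob removes it entirely.</p>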
|
<p>I have a Raspberry Pi Cluster consisting of 1-Master 20-Nodes:</p>
<ul>
<li>192.168.0.92 (Master)</li>
<li>192.168.0.112 (Node w/ USB Drive)</li>
</ul>
<p>I mounted a USB drive to <code>/media/hdd</code> & set a label <code>- purpose=volume</code> to it.</p>
<p>Using the following I was able to setup a NFS server:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: storage
labels:
app: storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv
namespace: storage
spec:
capacity:
storage: 3.5Ti
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /media/hdd
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: purpose
operator: In
values:
- volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: local-claim
namespace: storage
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
requests:
storage: 3Ti
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-server
namespace: storage
labels:
app: nfs-server
spec:
replicas: 1
selector:
matchLabels:
app: nfs-server
template:
metadata:
labels:
app: nfs-server
name: nfs-server
spec:
containers:
- name: nfs-server
image: itsthenetwork/nfs-server-alpine:11-arm
env:
- name: SHARED_DIRECTORY
value: /exports
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: local-claim
nodeSelector:
purpose: volume
---
kind: Service
apiVersion: v1
metadata:
name: nfs-server
namespace: storage
spec:
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
clusterIP: 10.96.0.11
selector:
app: nfs-server
</code></pre>
<p>And I was even able to make a persistent volume with this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-nfs-volume
labels:
directory: mysql
spec:
capacity:
storage: 200Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: slow
nfs:
path: /mysql
server: 10.244.19.5
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-nfs-claim
spec:
storageClassName: slow
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
selector:
matchLabels:
directory: mysql
</code></pre>
<p>But when I try to use the volume like so:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-nfs-claim
</code></pre>
<p>I get an "NFS version or transport protocol not supported" error.</p>
| <p>When seeing <strong><code>mount.nfs: requested NFS version or transport protocol is not supported</code></strong> error there are three main reasons:</p>
<blockquote>
<ol>
<li>NFS services are not running on NFS server</li>
<li>NFS utils not installed on the client</li>
<li>NFS service hung on NFS server</li>
</ol>
</blockquote>
<p>According to <a href="https://kerneltalks.com/troubleshooting/mount-nfs-requested-nfs-version-or-transport-protocol-is-not-supported/" rel="nofollow noreferrer">this article</a>, there are three solutions to resolve the problem with your error.</p>
<p><strong>First one:</strong></p>
<p>Log in to the NFS server and check the NFS service status. If the command
<code>service nfs status</code> reports that NFS services are stopped on the server, just start them using <code>service nfs start</code>, then retry the same mount command on the client.</p>
<p><strong>Second one:</strong></p>
<p>If your problem isn't resolved after trying the first solution,</p>
<blockquote>
<p>try <a href="https://kerneltalks.com/tools/package-installation-linux-yum-apt/" rel="nofollow noreferrer">installing package</a> nfs-utils on your server.</p>
</blockquote>
<p><strong>Third one:</strong></p>
<blockquote>
<p>Open file <code>/etc/sysconfig/nfs</code> and try to check below parameters</p>
</blockquote>
<pre><code># Turn off v4 protocol support
#RPCNFSDARGS="-N 4"
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
</code></pre>
<blockquote>
<p>Removing hash from <code>RPCNFSDARGS</code> lines will turn off specific version support. This way clients with mentioned NFS versions won’t be able to connect to the NFS server for mounting share. If you have any of it enabled, try disabling it and mounting at the client after the NFS server service restarts.</p>
</blockquote>
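<p>If the server turns out to serve only a specific NFS version, you can also pin the version the client requests via <code>mountOptions</code> on the PersistentVolume. A minimal sketch against the PV from the question (the <code>nfsvers</code> value here is an assumption - use whichever version your server actually exports):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-nfs-volume
spec:
  mountOptions:
    - nfsvers=4.1   # force the client to request this NFS version
  nfs:
    path: /mysql
    server: 10.244.19.5
```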
|
<p>I'm trying to spread my <code>ingress-nginx-controller</code> pods such that:</p>
<ul>
<li>Each availability zone has the same # of pods (+- 1).</li>
<li>Pods prefer Nodes that currently run the least pods.</li>
</ul>
<p>Following other questions here, I have set up Pod Topology Spread Constraints in my pod deployment:</p>
<pre><code> replicas: 4
topologySpreadConstraints:
- labelSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
- labelSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
</code></pre>
<p>I currently have 2 Nodes, each in a different availability zone:</p>
<pre><code>$ kubectl get nodes --label-columns=topology.kubernetes.io/zone,kubernetes.io/hostname
NAME STATUS ROLES AGE VERSION ZONE HOSTNAME
ip-{{node1}}.compute.internal Ready node 136m v1.20.2 us-west-2a ip-{{node1}}.compute.internal
ip-{{node2}}.compute.internal Ready node 20h v1.20.2 us-west-2b ip-{{node2}}.compute.internal
</code></pre>
<p>After running <code>kubectl rollout restart</code> for that deployment, I get 3 pods in one Node, and 1 pod in the other, which has a skew of <code>2 > 1</code>:</p>
<pre><code>$ kubectl describe pod ingress-nginx-controller -n ingress-nginx | grep 'Node:'
Node: ip-{{node1}}.compute.internal/{{node1}}
Node: ip-{{node2}}.compute.internal/{{node2}}
Node: ip-{{node1}}.compute.internal/{{node1}}
Node: ip-{{node1}}.compute.internal/{{node1}}
</code></pre>
<p>Why is my constraint not respected? How can I debug the pod scheduler?</p>
<p>My kubectl version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.0-beta.0.607+269d62d895c297", GitCommit:"269d62d895c29743931bfaaec6e8d37ced43c35f", GitTreeState:"clean", BuildDate:"2021-03-05T22:28:02Z", GoVersion:"go1.16", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p><code>kubectl rollout restart</code> spins up new pods and then terminates old pods <strong>after</strong> all the new pods are up and running.</p>
<p>From the pod topology spread constraints <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#known-limitations" rel="nofollow noreferrer">known limitations</a> section, constraints don't remain satisfied when pods are removed. The recommended mitigation is to use the <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">Descheduler</a>, which you already seem to be using, judging from your comment.</p>
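<p>For reference, a Descheduler policy that evicts pods violating topology spread constraints could look like the sketch below (the policy API was <code>descheduler/v1alpha1</code> at the time of writing - check the project's README for the current format):</p>

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
    params:
      includeSoftConstraints: false   # only enforce DoNotSchedule constraints
```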
|
<p>Trying to run a pod based on an image with this <code>Dockerfile</code>:</p>
<pre><code>...
ENTRYPOINT [ "./mybashscript", ";", "flask" ]
CMD [ "run" ]
</code></pre>
<p>I would be expecting the full command to be <code>./mybashscript; flask run</code>.
However, in this example, the <code>pod</code> / <code>container</code> executes <code>./mybashscript</code> but not <code>flask</code>.</p>
<p>I also tried a couple of variations like:</p>
<pre><code>...
ENTRYPOINT [ "/bin/bash", "-c", "./mybashscript && flask" ]
CMD [ "run" ]
</code></pre>
<p>Now, <code>flask</code> gets executed but <code>run</code> is ignored.</p>
<p>PS: I am trying to understand why this doesn't work and I am aware that I can fit all into the <code>entrypoint</code> or shove everything inside the <code>bash</code> script, but that is not the point.</p>
| <p>In both cases you show here, you use the JSON-array <a href="https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example" rel="nofollow noreferrer">exec form</a> for <code>ENTRYPOINT</code> and <code>CMD</code>. This means no shell is run, except in the second case where you run it explicitly. The two parts are just <a href="https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact" rel="nofollow noreferrer">combined together into a single command</a>.</p>
<p>The first construct runs the script <code>./mybashscript</code>, which must be executable and have a valid "shebang" line (probably <code>#!/bin/bash</code>). The script is passed three arguments, which you can see in the shell variables <code>$1</code>, <code>$2</code>, and <code>$3</code>: a semicolon <code>;</code>, <code>flask</code>, and <code>run</code>.</p>
<p>The second construct runs <code>/bin/bash -c './mybashscript && flask' run</code>. <code>bash -c</code> takes a single argument, which is <code>./mybashscript && flask</code>; the remaining argument <code>run</code> is interpreted as a positional argument, and the command inside <code>bash -c</code> would see it as <code>$0</code>, not <code>$1</code>.</p>
<hr />
<p>The arbitrary split of <code>ENTRYPOINT</code> and <code>CMD</code> you show doesn't really make sense. The only really important difference between the two is that it is easier to change <code>CMD</code> when you run the container, for example by putting it after the image name in a <code>docker run</code> command. It makes sense to put all of the command in the command part, or none of it, but not really to put half of the command in one part and half in another.</p>
<p>My first pass here would be to write:</p>
<pre class="lang-sh prettyprint-override"><code># no ENTRYPOINT
CMD ./mybashscript && flask run
</code></pre>
<p>Docker will insert a <code>sh -c</code> wrapper for you in bare-string <a href="https://docs.docker.com/engine/reference/builder/#shell-form-entrypoint-example" rel="nofollow noreferrer">shell form</a>, so the <code>&&</code> has its usual Bourne-shell meaning.</p>
<hr />
<p>This setup looks like you're trying to run an initialization script before the main container command. There's a reasonably standard pattern of using an <code>ENTRYPOINT</code> for this. Since it gets passed the <code>CMD</code> as parameters, the script can end with <code>exec "$@"</code> to run the <code>CMD</code> (potentially as overridden in the <code>docker run</code> command). The entrypoint script could look like</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/sh
# entrypoint.sh
./mybashscript
exec "$@"
</code></pre>
<p>(If you wrote <code>mybashscript</code>, you could also end it with the <code>exec "$@"</code> line, and use that script as the entrypoint.)</p>
<p>In the Dockerfile, set this wrapper script as the <code>ENTRYPOINT</code>, and then whatever the main command is as the <code>CMD</code>.</p>
<pre class="lang-sh prettyprint-override"><code>ENTRYPOINT ["./entrypoint.sh"] # must be a JSON array
CMD ["flask", "run"] # can be either form
</code></pre>
<p>If you provide an alternate command, it replaces <code>CMD</code>, and so the <code>exec "$@"</code> line will run that command instead of what's in the Dockerfile, but the <code>ENTRYPOINT</code> wrapper still runs.</p>
<pre class="lang-sh prettyprint-override"><code># See the environment the wrapper sets up
docker run --rm your-image env
# Double-check the data directory setup
docker run --rm -v $PWD/data:/data your-image ls -l /data
</code></pre>
<hr />
<p>If you <em>really</em> want to use the <code>sh -c</code> form and the split <code>ENTRYPOINT</code>, then the command inside <code>sh -c</code> has to read <code>$@</code> to find its positional arguments (the <code>CMD</code>), plus you need to know the first argument is <code>$0</code> and not <code>$1</code>. The form you show would be functional if you wrote</p>
<pre class="lang-sh prettyprint-override"><code># not really recommended but it would work
ENTRYPOINT ["/bin/sh", "-c", "./mybashscript && flask \"$@\"", "flask"]
CMD ["run"]
</code></pre>
|
<p>My container code needs to know in which environment it is running on GKE, more specifically what cluster and project. In standard kubernetes, this could be retrieved from current-context value (<code>gke_<project>_<cluster></code>).</p>
<p>Kubernetes has a downward api that can push pod info to containers - see <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</a> - but unfortunately nothing from "higher" entities.</p>
<p>Any thoughts on how this can be achieved ?</p>
<p>Obviously I do not want to explicit push any info at deployment (e.g. as <code>env</code> in the <code>configMap</code>). I rather deploy using a generic/common yaml and have the code at runtime retrieve the info from env or file and branch accordingly.</p>
| <p>You can query the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#gke_mds" rel="nofollow noreferrer">GKE metadata server</a> from within your code. In your case, you'd want to query the <code>/computeMetadata/v1/instance/attributes/cluster-name</code> and <code>/computeMetadata/v1/project/project-id</code> endpoints to get the cluster and project. The client libraries for each supported language all have simple wrappers for accessing the metadata API as well.</p>
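<p>A minimal sketch of querying those endpoints from Python (assumption: the code runs inside a GKE pod, where <code>metadata.google.internal</code> resolves to the metadata server; the <code>Metadata-Flavor</code> header is mandatory or the server rejects the request):</p>

```python
import urllib.request

METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1"


def metadata_url(path):
    """Build the full metadata-server URL for a relative path."""
    return "%s/%s" % (METADATA_BASE, path)


def query_metadata(path):
    """Fetch one metadata value; only works from inside GCP/GKE."""
    req = urllib.request.Request(
        metadata_url(path),
        headers={"Metadata-Flavor": "Google"},  # required header
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()


# Inside a GKE pod these return the cluster name and project ID:
# cluster = query_metadata("instance/attributes/cluster-name")
# project = query_metadata("project/project-id")
```

<p>The same values are also exposed through the metadata helpers in the official client libraries, which is usually the more robust option.</p>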
|
<p>What is the different between</p>
<pre><code>--dry-run
--dry-run=client
--dry-run=server
</code></pre>
<p>opportunities?</p>
<p>And is there any purpose other than create a definition file?</p>
<p>Thank you for your time.</p>
| <p>Passage from the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">official Kubernetes <code>kubectl</code> references</a>:</p>
<blockquote>
<p>[--dry-run] Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.</p>
</blockquote>
<p>The following table should explain it in a much simpler way:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: center;">sends data to server/cluster</th>
<th style="text-align: center;">perform change on server/cluster</th>
<th style="text-align: center;">validation by the server/cluster</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;"><code>--dry-run client</code></td>
<td style="text-align: center;">no</td>
<td style="text-align: center;">no</td>
<td style="text-align: center;">no</td>
</tr>
<tr>
<td style="text-align: left;"><code>--dry-run server</code></td>
<td style="text-align: center;">yes</td>
<td style="text-align: center;">no</td>
<td style="text-align: center;">yes</td>
</tr>
<tr>
<td style="text-align: left;"><code>--dry-run none</code></td>
<td style="text-align: center;">yes</td>
<td style="text-align: center;">yes</td>
<td style="text-align: center;">yes</td>
</tr>
</tbody>
</table>
</div> |
<p>First I installed Lens on my Mac. When I tried to shell into one of the pods, there was a message saying that I don't have kubectl installed, so I installed kubectl and it worked properly.</p>
<p>Now I try to change ConfigMaps but I get an error:</p>
<blockquote>
<p>kubectl/1.18.20/kubectl not found</p>
</blockquote>
<p>When I check the kubectl folder, there are 2 kubectl versions: 1.18.20 and 1.21.</p>
<p>1.21 is the one that I installed before.</p>
<p>How can I change the kubectl version that Lens uses from 1.18.20 to 1.21?</p>
<p>Note:</p>
<ul>
<li>Lens: 5.2.0-latest.20210908.1</li>
<li>Electron: 12.0.17</li>
<li>Chrome: 89.0.4389.128</li>
<li>Node: 14.16.0</li>
<li>© 2021 Mirantis, Inc.</li>
</ul>
<p>Thanks in advance, sorry for bad English</p>
| <p>You can set the kubectl path at File -&gt; Preferences -&gt; Kubernetes -&gt; PATH TO KUBECTL BINARY. Or you can check "Download kubectl binaries matching the Kubernetes cluster version"; this way Lens will use the same version as your target cluster.</p>
<p>By the way, you should upgrade to the latest Lens version, v5.2.5.</p>
|
<p>When I run skaffold this is the error I get. Skaffold generates tags, checks the cache, starts the deploy then it cleans up.</p>
<pre><code>- stderr: "error: error parsing C: ~\k8s\\ingress-srv.yaml: error converting YAML to JSON: yaml: line 20: mapping values are not allowed in this context
\n"
- cause: exit status 1
</code></pre>
<p>Docker creates a container for the server. Here is the ingress server yaml file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: northernherpgeckosales.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: front-end-srv
port:
number: 3000
</code></pre>
<p>For good measure here is the skaffold file:</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: giantgecko/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
- image: giantgecko/front-end
context: front-end
docker:
dockerfile: Dockerfile
sync:
manual:
- src: '**/*.js'
dest: .
</code></pre>
| <p>Take a closer look at your Ingress definition file (starting from line 19):</p>
<pre class="lang-yaml prettyprint-override"><code>- path: /?(.*)
pathType: Prefix
backend:
service:
name: front-end-srv
port:
number: 3000
</code></pre>
<p>You have unnecessary indentation from line 20 (<code>pathType: Prefix</code>) to the end of the file. Just format your YAML file properly. The previous <code>path: /api/users/?(.*)</code> block is fine - no unnecessary indentation.</p>
<p>Final YAML looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: northernherpgeckosales.dev
http:
paths:
- path: /api/users/?(.*)
pathType: Prefix
backend:
service:
name: auth-srv
port:
number: 3000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: front-end-srv
port:
number: 3000
</code></pre>
|
<p>I have an application that runs some code and at the end sends an email with a report of the data. When I deploy pods on <strong>GKE</strong>, certain pods get terminated and a new pod is created due to Auto Scale, but the problem is that the termination happens after my code has finished, and the email is sent twice for the same data.</p>
<p>Here is the <strong>JSON</strong> file of the deploy API:</p>
<pre><code>{
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "$name",
"namespace": "$namespace"
},
"spec": {
"template": {
"metadata": {
"name": "********"
},
"spec": {
"priorityClassName": "high-priority",
"containers": [
{
"name": "******",
"image": "$dockerScancatalogueImageRepo",
"imagePullPolicy": "IfNotPresent",
"env": $env,
"resources": {
"requests": {
"memory": "2000Mi",
"cpu": "2000m"
},
"limits":{
"memory":"2650Mi",
"cpu":"2650m"
}
}
}
],
"imagePullSecrets": [
{
"name": "docker-secret"
}
],
"restartPolicy": "Never"
}
}
}
}
</code></pre>
<p>and here is a screen-shot of the pod events:</p>
<p><a href="https://i.stack.imgur.com/IvZIZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IvZIZ.png" alt="enter image description here" /></a></p>
<p>Any idea how to fix that?</p>
<p>Thank you in advance.</p>
| <p>Perhaps you are affected by this note from the Jobs documentation:</p>
<blockquote>
<p>Note that even if you specify <code>.spec.parallelism</code> = 1 and <code>.spec.completions</code> = 1 and <code>.spec.template.spec.restartPolicy</code> = "Never", the same program may sometimes be started twice.</p>
</blockquote>
<p>In other words, a Job only guarantees <em>at least once</em> execution. What happens if you increase <code>terminationGracePeriodSeconds</code> in your YAML file?</p>
<p>– @danyL</p>
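<p>Since the same Job can legitimately run twice, another mitigation is to make the final email step idempotent, e.g. by recording a per-report marker in shared durable storage before sending. A minimal sketch (the in-memory set here stands in for that store - in a real setup it would be a database row, a GCS object, or similar, and the check-and-set would need to be atomic):</p>

```python
def send_report_once(report_id, send_email, sent_markers):
    """Send the report email only if no marker exists yet for report_id.

    sent_markers stands in for durable shared storage; both duplicate
    pods must see the same store for the guard to work.
    """
    if report_id in sent_markers:
        return False  # a duplicate pod already handled this report
    sent_markers.add(report_id)  # record the marker before sending
    send_email(report_id)
    return True


# Simulate two pods running the same job for the same report:
sent_markers = set()
outbox = []
first = send_report_once("report-2021-10-11", outbox.append, sent_markers)
second = send_report_once("report-2021-10-11", outbox.append, sent_markers)
# Only one email ends up in the outbox.
```

<p>This works regardless of why the duplicate pod was started, and is independent of the grace-period tweak.</p>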
|
<p>In the documentation <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="noreferrer">here</a> it is stated that deleting a pod is a voluntary disruption that <code>PodDisruptionBudget</code> should protect against.</p>
<p>I have created a simple test:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: test
spec:
minAvailable: 1
selector:
matchLabels:
app: test
---
apiVersion: v1
kind: Pod
metadata:
name: test
labels:
app: test
spec:
containers:
- name: test
image: myimage
</code></pre>
<p>Now if I run <code>apply</code> and then <code>delete pod test</code>, there is no trouble deleting this pod.</p>
<p>If I now run <code>cordon node</code>, then it is stuck as it cannot evict the last pod (which is correct). But the same behavior seems to not be true for deleting the pod.</p>
<p>The same goes if I create a deployment with minimum 2 replicas and just delete both at the same time - they are deleted as well (not one by one).</p>
<p>Do I misunderstand something here?</p>
<p>The link in your question refers to static pods managed by the kubelet; I guess you want this <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">link</a> instead.</p>
<p><code>...if I run apply and then delete pod test, there is no trouble deleting this pod</code></p>
<p>A PDB protects pods managed by one of these controllers: Deployment, ReplicationController, ReplicaSet or StatefulSet.</p>
<p><code>...if I create a deployment with minimum 2 replicas and just delete both at the same time - they are deleted as well (not one by one)</code></p>
<p>PDB does not consider explicitly deleting a deployment a voluntary disruption. From the K8s documentation:</p>
<blockquote>
<p>Caution: Not all voluntary disruptions are constrained by Pod
Disruption Budgets. For example, deleting deployments or pods bypasses
Pod Disruption Budgets.</p>
</blockquote>
<p>Hope this helps to clear the mist.</p>
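<p>For illustration (all names are hypothetical), pairing the PDB with a controller-managed workload would look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: test
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: myimage
</code></pre>
<p>With this in place, eviction-based operations such as <code>kubectl drain</code> will remove the pods one at a time, while <code>kubectl delete pod</code> still bypasses the budget.</p>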
|
<p>I am trying to call k8s api in one k8s pod. But hit the following permission issue:</p>
<pre><code>User "system:serviceaccount:default:flink" cannot list resource "nodes" in API group "" at the cluster scope.
</code></pre>
<p>In my yaml file, I have already specified the <code>Role</code> & <code>RoleBinding</code>. What am I missing here?</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flink
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: zeppelin-server-role
rules:
- apiGroups: [""]
resources: ["pods", "services", "configmaps", "deployments", "nodes"]
verbs: ["create", "get", "update", "patch", "list", "delete", "watch"]
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles", "rolebindings"]
verbs: ["bind", "create", "get", "update", "patch", "list", "delete", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: zeppelin-server-role-binding
namespace: default
subjects:
- kind: ServiceAccount
name: flink
roleRef:
kind: ClusterRole
name: zeppelin-server-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>You are deploying zeppelin-server on Kubernetes, right? Your yaml file with the service account looks good, but to be sure that everything works, follow these steps:</p>
<ul>
<li><code>kubectl get clusterrole</code></li>
</ul>
<p>and you should get <code>zeppelin-server-role</code> role.</p>
<ul>
<li>check if your account '<strong>flink</strong>' has a binding to clusterrole "zeppelin-server-role"</li>
</ul>
<p><code>kubectl get clusterrolebinding</code></p>
<p>if there is none, you can create it with the following command:</p>
<p><code>kubectl create clusterrolebinding zeppelin-server-role-binding --clusterrole=zeppelin-server-role --serviceaccount=default:flink</code></p>
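<p>If you prefer to keep it declarative, the equivalent manifest (using the names from your yaml) is a <code>ClusterRoleBinding</code> instead of the namespaced <code>RoleBinding</code>:</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: zeppelin-server-role-binding
subjects:
- kind: ServiceAccount
  name: flink
  namespace: default
roleRef:
  kind: ClusterRole
  name: zeppelin-server-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>A <code>RoleBinding</code> only grants permissions within its own namespace, which is why the cluster-scoped <code>nodes</code> resource stays forbidden.</p>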
<ul>
<li>finally, check if you really act as this account:</li>
</ul>
<p><code>kubectl get deploy flink-deploy -o yaml</code></p>
<p>if you can't see the "serviceAccount" and "serviceAccountName" settings in the output, i.e. you get something like:</p>
<pre><code>...
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
...
</code></pre>
<p>then add this account you want flink to use:</p>
<p><code>kubectl patch deploy flink-deploy -p '{"spec":{"template":{"spec":{"serviceAccountName":"flink"}}}}'</code></p>
|
<p>I have deployed on prod 10+ java/js microservices in GKE, all is good, none use external volumes, its a simple process in pipeline of generating new image, pushing to container registry and when upgrading the app to new version, just deploy new deployment with the new image and pods using rolling update are upgraded.</p>
<p>My question is how would it look like with Common Lisp application ? The main benefit of the language is that the code can be changed in runtime. Should the config <code>.lisp</code> files be attached as ConfigMap? (update to ConfigMap still requires recreation of pods for the new ConfigMap changes to be applied) Or maybe as some volume? (but what about there being 10x pods of the same deployment? all read from the same volume? what if there are 50 pods or more (wont there be some problems?)) And should the deploy of new version of the application look like v1 and v2 (new pods) or do we use somehow the benefits of runtime changes (with solutions I mentioned above), and the pods version stays the same, while the new code is added via some external solution</p>
| <p>I would probably generate an image with the compiled code, and possibly a post-dump image, then rely on Kubernetes to restart pods in your Deployment or StatefulSet in a sensible way. If necessary (and web-based), use Readiness checks to gate what pods will be receiving requests.</p>
<p>As an aside, the projected contents of a ConfigMap should show up inside the container, unless you have specified the filename(s) of the projected keys from the ConfigMap, so it should be possible to keep the source that way, then have either the code itself check for updates or have another mechanism to signal "time for a reload". But, unless you pair that with compilation, you would probably end up with interpreted code.</p>
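<p>A minimal sketch of that ConfigMap approach (all names are assumptions, not taken from your setup):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: lisp-config
data:
  config.lisp: |
    ;; runtime-tunable settings
    (defparameter *max-workers* 8)
---
apiVersion: v1
kind: Pod
metadata:
  name: lisp-app
spec:
  containers:
  - name: app
    image: my-lisp-image
    volumeMounts:
    - name: config
      mountPath: /app/config
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: lisp-config
</code></pre>
<p>The kubelet eventually refreshes the projected files when the ConfigMap changes (as long as no <code>subPath</code> mount is used), so the application can watch <code>/app/config/config.lisp</code> and <code>load</code> it again.</p>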
|
<p>An existing nginx ingress named <code>nginx-proxy</code> running on the K8 cluster.</p>
<p>Now, there is a requirement from the Dev team to disable TLS 1.0, 1.1 support.</p>
<p>Upon searching, I could see <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="nofollow noreferrer">this solution</a> using configmap.</p>
<p>Do you think applying/creating a new configmap as follows to an existing nginx ingress helps me to resolve the issue?</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-proxy
data:
ssl-protocols: "TLSv1.2 TLSv1.3"
</code></pre>
<p>Would adding a new configmap like that to an existing nginx ingress break anything? Because this is for the production website.</p>
<p>A piece of advice would be really helpful.</p>
| <blockquote>
<p>To provide the most secure baseline configuration possible,</p>
<p>nginx-ingress defaults to using TLS 1.2 and 1.3 only, with a <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-ciphers" rel="nofollow noreferrer">secure set of TLS ciphers</a>. <sup>[<a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers" rel="nofollow noreferrer">source</a>]</sup></p>
</blockquote>
<p>It seems ingress-nginx uses TLS <em>1.2</em> and <em>1.3</em> <strong>only</strong> by default. The snippet you added to your question can be used to enable older TLS versions - like <em>1.0</em> and <em>1.1</em>.</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-config
data:
ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
</code></pre>
<p><sup>[<a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#legacy-tls" rel="nofollow noreferrer">source</a>]</sup></p>
<p>You can check which versions of TLS (and ciphers) are enabled by issuing</p>
<pre class="lang-text prettyprint-override"><code>nmap --script ssl-enum-ciphers -p 443 <ingress-nginx>
</code></pre>
<p>replace <code><ingress-nginx></code> with your ingress IP.</p>
|
<p>I have a dotnet application which is not working as a non-root user, even though I am exposing it on port 5000, greater than the 1024 requirement. </p>
<pre><code>WORKDIR /app
EXPOSE 5000
COPY app $local_artifact_path
RUN chown www-data:www-data /app /app/*
RUN chmod 777 /app
USER www-data
ENTRYPOINT dotnet $app_entry_point
</code></pre>
<p>The stacktrace is</p>
<pre><code>warn: Microsoft.AspNetCore.DataProtection.Repositories.EphemeralXmlRepository[50]
Using an in-memory repository. Keys will not be persisted to storage.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[59]
Neither user profile nor HKLM registry available. Using an ephemeral key repository. Protected data will be unavailable when application exits.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {551dd8d6-67f6-4c6a-b5a4-9ea86b69593b} may be persisted to storage in unencrypted form.
crit: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to start Kestrel.
System.Net.Sockets.SocketException (13): Permission denied
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass21_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindEndpointAsync(ListenOptions endpoint, AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.ListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.AnyIPListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`
2 createBinding)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
Unhandled exception. System.Net.Sockets.SocketException (13): Permission denied
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass21_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindEndpointAsync(ListenOptions endpoint, AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.ListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.AnyIPListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`2 createBinding)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at SanUserManagementService.Program.Main(String[] args) in /home/jankins/workspace/Daniel/dotnet/SanUserManagementService/Program.cs:line 10
Aborted (core dumped)
</code></pre>
<p>Any help related to this will be appreciated!
Thanks!</p>
| <p>In my case the <code>ASPNETCORE_URLS</code> setting in environment variables or <code>appsettings.json</code> was set to <code>http://+:80</code>.</p>
<p>Changing it to <code>http://+:5000</code> worked. Make sure you change your Docker port bindings as well, or load balancer settings if using AWS.</p>
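<p>For example, applied to the Dockerfile from the question (a sketch; adjust to your build):</p>
<pre><code>WORKDIR /app
EXPOSE 5000
# make Kestrel bind to the unprivileged port instead of 80
ENV ASPNETCORE_URLS=http://+:5000
COPY app $local_artifact_path
RUN chown www-data:www-data /app /app/*
USER www-data
ENTRYPOINT dotnet $app_entry_point
</code></pre>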
|
<p>I uninstalled nginx-ingress from my AKS cluster and almost all the resources got deleted, but my main service, which is of type LoadBalancer, is still there. I tried deleting it manually using the delete command but it's still not deleting. I don't know what the issue is, please help me out with this.</p>
<p>Thanks in advance</p>
<p>It may be stuck due to the <code>service.kubernetes.io/load-balancer-cleanup</code> finalizer.</p>
<p>Check <a href="https://stackoverflow.com/questions/66067108/azure-k8s-not-able-to-delete-load-balancer-service">Azure-k8s: Not able to delete Load Balancer service?</a>, it can happen LB type service stuck in such way.</p>
<p>I would suggest you open the service by running <code>kubectl edit svc service_name</code>, remove the part below, and save it again.</p>
<pre><code> finalizers:
- service.kubernetes.io/load-balancer-cleanup
</code></pre>
<p>If that doesn't help, please add the detailed verbose deletion output and <code>kubectl describe svc <service-name></code> to your question, as mentioned before.</p>
|
<p>To define a VPC Link into API Gateway we have to declare an NLB in eks (LoadBalancer service) to access the pod in the VPC.</p>
<p>When we define ingress resource, we can group them into one ALB with the annotation <code>alb.ingress.kubernetes.io/group.name</code></p>
<p>It seems it is not possible to do the same with multiple services behind one network load balancer. Is it possible? Or is it just a bad idea to expose multiple micro-services (with different endpoints) on the same NLB, with port as the discriminant?</p>
| <p>Quick answer: Not possible as of today</p>
<p>AWS LB ingress controller supports ALB and NLB, but keep in mind that the ALB ingress controller:</p>
<ul>
<li>watches the <code>Ingress</code> objects with <code>alb.ingress.kubernetes.io/*</code> annotations for ALB</li>
<li>also watches <code>Service</code> objects with <code>service.beta.kubernetes.io/*</code> annotations for NLB</li>
</ul>
<p>As I am writing this, there are no annotations under <code>service.beta.kubernetes.io/*</code> that implement what you need.</p>
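<p>For reference, an NLB is requested per <code>Service</code> through annotations like the following, so each such Service gets its own NLB (names are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>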
|
<p>An existing nginx ingress named <code>nginx-proxy</code> running on the K8 cluster.</p>
<p>Now, there is a requirement from the Dev team to disable TLS 1.0, 1.1 support.</p>
<p>Upon searching, I could see <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="nofollow noreferrer">this solution</a> using configmap.</p>
<p>Do you think applying/creating a new configmap as follows to an existing nginx ingress helps me to resolve the issue?</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-proxy
data:
ssl-protocols: "TLSv1.2 TLSv1.3"
</code></pre>
<p>Would adding a new configmap like that to an existing nginx ingress break anything? Because this is for the production website.</p>
<p>A piece of advice would be really helpful.</p>
<p>You can follow the official documentation for disabling TLS 1.0:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-config
data:
ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
ssl-protocols: "TLSv1.2 TLSv1.3"
</code></pre>
<p>You also need to update the ciphers to match the protocol versions instead of using the default set.</p>
<p>You can try with the above <strong>configmap</strong>.</p>
<p>Also, I would recommend updating the SSL/TLS cert you are using in the ingress.</p>
<p>If you are using cert-manager, try deleting the secret containing the SSL/TLS cert for the ingress endpoint and let cert-manager obtain the cert again.</p>
|
<p>One of the pods in my local cluster can't be started because I get <code>Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-data-volume nats-initdb-volume kube-api-access-5b5cz]: timed out waiting for the condition</code> error.</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deployment-nats-db-5f5f9fd6d5-wrcpk 0/1 ContainerCreating 0 19m
deployment-nats-server-57bbc76d44-tz5zj 1/1 Running 0 19m
$ kubectl describe pods deployment-nats-db-5f5f9fd6d5-wrcpk
Name: deployment-nats-db-5f5f9fd6d5-wrcpk
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Tue, 12 Oct 2021 21:42:23 +0600
Labels: app=nats-db
pod-template-hash=5f5f9fd6d5
skaffold.dev/run-id=1f5421ae-6e0a-44d6-aa09-706a1d1aa011
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/deployment-nats-db-5f5f9fd6d5
Containers:
nats-db:
Container ID:
Image: postgres:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 256Mi
Requests:
cpu: 250m
memory: 128Mi
Environment Variables from:
nats-db-secrets Secret Optional: false
Environment: <none>
Mounts:
/docker-entrypoint-initdb.d from nats-initdb-volume (rw)
/var/lib/postgresql/data from nats-data-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5b5cz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nats-data-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nats-pvc
ReadOnly: false
nats-initdb-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nats-pvc
ReadOnly: false
kube-api-access-5b5cz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned default/deployment-nats-db-5f5f9fd6d5-wrcpk to docker-desktop
Warning FailedMount 4m9s (x2 over 17m) kubelet Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-initdb-volume kube-api-access-5b5cz nats-data-volume]: timed out waiting for the condition
Warning FailedMount 112s (x6 over 15m) kubelet Unable to attach or mount volumes: unmounted volumes=[nats-data-volume], unattached volumes=[nats-data-volume nats-initdb-volume kube-api-access-5b5cz]: timed out waiting for the condition
</code></pre>
<p>I don't know where the issue is. The PVs and PVCs are all seemed to be successfully applied.</p>
<pre><code>$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nats-pv 50Mi RWO Retain Bound default/nats-pvc local-hostpath-storage 21m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/nats-pvc Bound nats-pv 50Mi RWO local-hostpath-storage 21m
</code></pre>
<p>Following are the configs for SC, PV and PVC:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-hostpath-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nats-pv
spec:
capacity:
storage: 50Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: local-hostpath-storage
hostPath:
path: /mnt/wsl/nats-pv
type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nats-pvc
spec:
volumeName: nats-pv
resources:
requests:
storage: 50Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
storageClassName: local-hostpath-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-nats-db
spec:
selector:
matchLabels:
app: nats-db
template:
metadata:
labels:
app: nats-db
spec:
containers:
- name: nats-db
image: postgres:latest
envFrom:
- secretRef:
name: nats-db-secrets
volumeMounts:
- name: nats-data-volume
mountPath: /var/lib/postgresql/data
- name: nats-initdb-volume
mountPath: /docker-entrypoint-initdb.d
resources:
requests:
cpu: 250m
memory: 128Mi
limits:
cpu: 1000m
memory: 256Mi
volumes:
- name: nats-data-volume
persistentVolumeClaim:
claimName: nats-pvc
- name: nats-initdb-volume
persistentVolumeClaim:
claimName: nats-pvc
</code></pre>
<p>This pod will be started successfully if I comment out <code>volumeMounts</code> and <code>volumes</code> keys. And it's specifically with this <code>/var/lib/postgresql/data</code> path. Like if I remove <code>nats-data-volume</code> and keep <code>nats-initdb-volume</code>, it's started successfully.</p>
<p>Can anyone help me where I'm wrong exactly? Thanks in advance and best regards.</p>
| <p><code>...if I remove nats-data-volume and keep nats-initdb-volume, it's started successfully.</code></p>
<p>The same PVC cannot be mounted twice in the pod; that's why the condition cannot be met.</p>
<p>Looking at your spec, it seems you don't mind which worker node will run your postgres pod. In that case you don't need a PV/PVC; you can mount the hostPath directly, like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-nats-db
spec:
selector:
matchLabels:
app: nats-db
template:
metadata:
labels:
app: nats-db
spec:
containers:
- name: nats-db
image: postgres:latest
envFrom:
- secretRef:
name: nats-db-secrets
volumeMounts:
- name: nats-data-volume
mountPath: /var/lib/postgresql/data
- name: nats-data-volume
mountPath: /docker-entrypoint-initdb.d
resources:
requests:
cpu: 250m
memory: 128Mi
limits:
cpu: 1000m
memory: 256Mi
volumes:
- name: nats-data-volume
hostPath:
path: /mnt/wsl/nats-pv
type: DirectoryOrCreate
</code></pre>
|
<p>I've done this a dozen times before but this time, I cannot seem to connect to my web server using HTTPS. I created an <code>AWS EKS</code> cluster using <code>eksctl</code>. I deployed my deployments and services using <code>kubectl</code>. I have service URLs which are resolving on port <code>80</code>.</p>
<p>I take the service URLs, put them in <code>CNAME</code> records, and Cloudflare resolves via <code>http</code> but not <code>https</code>. I get <code>521</code> errors; when I accept connections on port <code>443</code> in my <code>Kubernetes</code> services, I get <code>SSL handshake</code> errors.</p>
<p>The thing that confuses me is I thought Cloudflare provided an <code>SSL</code> layer but using my service URLs on port <code>80</code>. It seems though that it's redirecting requests from <code>cloudflare:443</code> to <code>my-eks-cluster:443</code>.</p>
<p>How do I debug this further to get some insight into what is going on ?</p>
<p>Since your cluster works and accepts traffic, the most probable reason is that an <code>Encryption mode</code> is enabled in your Cloudflare config.</p>
<p>And, according to your post, you want to disable <code>https</code> entirely on the origin side:</p>
<blockquote>
<p>The thing that confuses me is I thought Cloudflare provided an SSL layer but using my service URLs on port 80. It seems though that it's redirecting requests from cloudflare:443 to my-eks-cluster:443.</p>
</blockquote>
<p>So, you may want to check the SSL settings to be sure that the current <code>Encryption mode</code> is <code>Off</code>.</p>
<p>As per the Cloudflare documentation:
<a href="https://developers.cloudflare.com/ssl/origin-configuration/ssl-modes" rel="nofollow noreferrer">Encryption modes · Cloudflare SSL docs</a></p>
<blockquote>
<p>Mode <code>Off</code>
Setting your encryption mode to <strong>Off (not recommended)</strong> redirects any HTTPS request to plaintext HTTP.</p>
</blockquote>
|
<p>I have a jenkins service deployed in EKS v 1.16 using helm chart. The PV and PVC had been accidentally deleted so I have recreated the PV and PVC as follows:</p>
<p>Pv.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-vol
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: ext4
volumeID: aws://us-east-2b/vol-xxxxxxxx
capacity:
storage: 120Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: jenkins-ci
namespace: ci
persistentVolumeReclaimPolicy: Retain
storageClassName: gp2
volumeMode: Filesystem
status:
phase: Bound
</code></pre>
<p>PVC.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-ci
namespace: ci
spec:
storageClassName: gp2
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 120Gi
volumeMode: Filesystem
volumeName: jenkins-vol
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 120Gi
phase: Bound
</code></pre>
<p>kubectl describe sc gp2</p>
<pre><code>Name: gp2
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2","namespace":""},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
</code></pre>
<p>The issue I'm facing is that the pod is not running when it's scheduled on a node in a different availability zone than the EBS volume. How can I fix this?</p>
| <p>Add a nodeSelector to your deployment file, which will match it to a node in the needed availability zone (in your case us-east-2b):</p>
<pre><code> nodeSelector:
topology.kubernetes.io/zone: us-east-2b
</code></pre>
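<p>As an alternative sketch, the zone constraint can be expressed on the PV itself with <code>nodeAffinity</code>, so the scheduler only places pods using this claim onto nodes in the volume's zone:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-vol
spec:
  # ...rest of the spec as in your Pv.yaml...
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east-2b
</code></pre>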
|
<p>So, I'm trying to install a chart with helm3 to a Kubernetes cluster (EKS).
I have a terraform configuration below. The actual cluster is active and visible.</p>
<pre><code>variable "aws_access_key" {}
variable "aws_secret_key" {}
locals {
cluster_name = "some-my-cluster"
}
provider "aws" {
region = "eu-central-1"
access_key = var.aws_access_key
secret_key = var.aws_secret_key
}
data "aws_eks_cluster" "cluster" {
name = local.cluster_name
}
data "aws_eks_cluster_auth" "cluster" {
name = data.aws_eks_cluster.cluster.name
}
output "endpoint" {
value = data.aws_eks_cluster.cluster.endpoint
}
output "kubeconfig-certificate-authority-data" {
value = data.aws_eks_cluster.cluster.certificate_authority.0.data
}
output "identity-oidc-issuer" {
value = "${data.aws_eks_cluster.cluster.identity.0.oidc.0.issuer}"
}
provider "kubernetes" {
version = "~>1.10.0"
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
provider "helm" {
version = "~>1.0.0"
debug = true
alias = "my_helm"
kubernetes {
host = data.aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
load_config_file = false
}
}
data "helm_repository" "stable" {
name = "stable"
url = "https://kubernetes-charts.storage.googleapis.com"
}
resource "helm_release" "mydatabase" {
provider = helm.my_helm
name = "mydatabase"
chart = "stable/mariadb"
namespace = "default"
set {
name = "mariadbUser"
value = "foo"
}
set {
name = "mariadbPassword"
value = "qux"
}
}
</code></pre>
<p>When I run <code>terraform apply</code> I see an <code>error: Error: Kubernetes cluster unreachable</code></p>
<p>Any thoughts? I will also appreciate some ideas on how to debug the issue - the debug option doesn't work.</p>
<p>Can confirm that it works with newly created <a href="https://github.com/kharandziuk/eks-getting-started-helm3" rel="noreferrer">cluster</a>.</p>
<p>The solution to this problem has to do with the kubernetes provider's authentication. The only workaround I could find that works is to replace the token-based authentication with the <code>exec</code> block shown below:</p>
<pre><code>provider "kubernetes" {
version = "~>1.10.0"
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
command = "aws"
}
load_config_file = false
}
</code></pre>
|
<p>I have a simple pod with an nginx container which returns the text <code>healthy</code> on path <code>/</code>. I have Prometheus scraping port 80 on path <code>/</code>. When I ran <code>up == 0</code> in the Prometheus dashboard, it showed this pod, which means this pod is not healthy. But when I tried ssh-ing into the container, it was running fine, and I saw in the nginx log that Prometheus was pinging <code>/</code> and getting 200 responses. Any idea why?</p>
<p>deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
metadata:
labels:
...
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/"
prometheus.io/port: "80"
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx
readOnly: true
ports:
- containerPort: 80
volumes:
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: nginx.conf
path: nginx.conf
</code></pre>
<p>nginx.conf</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-conf
data:
nginx.conf: |
http {
server {
listen 80;
location / {
return 200 'healthy\n';
}
}
}
</code></pre>
<p>nginx access log</p>
<pre><code>192.168.88.81 - - [xxx +0000] "GET / HTTP/1.1" 200 8 "-" "Prometheus/2.26.0"
192.168.88.81 - - [xxx +0000] "GET / HTTP/1.1" 200 8 "-" "Prometheus/2.26.0"
192.168.88.81 - - [xxx +0000] "GET / HTTP/1.1" 200 8 "-" "Prometheus/2.26.0"
</code></pre>
<p>When you add these annotations to pods, Prometheus expects the given path to return Prometheus-readable metrics. But <code>'healthy\n'</code> is not a valid Prometheus metrics format.</p>
<pre class="lang-yaml prettyprint-override"><code> annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/"
prometheus.io/port: "80"
</code></pre>
<p><strong>Recommended Fix:</strong></p>
<ul>
<li>Use <a href="https://github.com/nginxinc/nginx-prometheus-exporter" rel="nofollow noreferrer">nginx-prometheus-exporter</a> as sidecar.</li>
<li>Add sidecar info to annotation so that Prometheus can scrape metrics from it.</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
metadata:
labels:
...
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/metrics"
prometheus.io/port: "9113"
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx
readOnly: true
ports:
- containerPort: 80
- name: nginx-exporter
args:
- "-nginx.scrape-uri=http://localhost:80/stub_status" # nginx address
image: nginx/nginx-prometheus-exporter:0.9.0
ports:
- containerPort: 9113
volumes:
- name: nginx-conf
configMap:
name: nginx-conf
items:
- key: nginx.conf
path: nginx.conf
</code></pre>
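<p>Note that the exporter's <code>-nginx.scrape-uri</code> expects nginx's <code>stub_status</code> endpoint, which the ConfigMap shown in the question does not expose. A minimal addition to that <code>nginx.conf</code> (the access restrictions are one reasonable choice, not required) could look like:</p>
<pre class="lang-none prettyprint-override"><code>events {}
http {
  server {
    listen 80;
    location / {
      return 200 'healthy\n';
    }
    location /stub_status {
      stub_status;       # requires the ngx_http_stub_status_module
      allow 127.0.0.1;   # the sidecar scrapes over localhost in the same pod
      deny all;
    }
  }
}
</code></pre>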
<p>Now, try querying <code>nginx_up</code> from Prometheus. The nginx-prometheus-exporter also ships with a <a href="https://github.com/nginxinc/nginx-prometheus-exporter/tree/master/grafana" rel="nofollow noreferrer">Grafana dashboard</a> that you can try.</p>
|
<p>I'm wondering about an approach one has to take for our server setup. We have pods that are short-lived. They are started up with a minimum of 3 pods, and each server waits on a single request that it handles; then the pod is destroyed. I'm not sure of the mechanism by which the pod is destroyed, but my question is not about that part anyway.</p>
<p>There is an "active session count" metric that I am envisioning. Each of these pod resources could make a rest call to some "metrics" pod that we would create for our cluster. The metrics pod would expose a <code>sessionStarted</code> and <code>sessionEnded</code> endpoint - which would increment/decrement the kubernetes <code>activeSessions</code> metric. That metric would be what is used for horizontal autoscaling of the number of pods needed.</p>
<p>Since a pod merely being "up" says nothing about whether it has an active session (an idle pod counts as zero active sessions), each pod would make a REST call to the metrics server to increment the count when a session starts and decrement it again when the session ends.</p>
<p>Is it correct to think that I need this metric server (and write it myself)? Or is there something that Prometheus exposes where this type of metric is supported already - rest clients and all (for various languages), that could modify this metric?</p>
<p>Looking for guidance and confirmation that I'm on the right track. Thanks!</p>
<p>There is no single way to solve this, and the question is somewhat opinion-based. However, there is a useful <a href="https://stackoverflow.com/questions/68176705/how-to-get-active-connections-count-of-a-kubernetes-pod">similar question on StackOverflow</a>; check the comments there for some tips. If nothing there works, you will probably have to write such a metrics service yourself, as Kubernetes offers no ready-made solution for this.</p>
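<p>If you do end up writing it yourself, a minimal sketch of the "metrics pod" described in the question could look like the following. This is stdlib-only Python; the endpoint names follow the question, while the metric name and port are arbitrary choices:</p>

```python
# Sketch of a session-counting metrics service: worker pods POST to
# /sessionStarted and /sessionEnded, and Prometheus scrapes GET /metrics,
# which returns the gauge in the Prometheus text exposition format.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class SessionCounter:
    """Thread-safe counter for sessions currently in flight."""

    def __init__(self):
        self._lock = threading.Lock()
        self._active = 0

    def inc(self):
        with self._lock:
            self._active += 1

    def dec(self):
        with self._lock:
            self._active = max(0, self._active - 1)  # never go negative

    def render(self):
        # HELP/TYPE comments plus one sample, per the exposition format.
        with self._lock:
            return (
                "# HELP active_sessions Sessions currently being handled\n"
                "# TYPE active_sessions gauge\n"
                f"active_sessions {self._active}\n"
            )


COUNTER = SessionCounter()


class MetricsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Worker pods call these endpoints on session start/end.
        if self.path == "/sessionStarted":
            COUNTER.inc()
        elif self.path == "/sessionEnded":
            COUNTER.dec()
        self.send_response(204)
        self.end_headers()

    def do_GET(self):
        # Prometheus (or a custom-metrics adapter) scrapes this endpoint.
        if self.path == "/metrics":
            body = COUNTER.render().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


# To run inside the metrics pod:
#   HTTPServer(("", 9100), MetricsHandler).serve_forever()
```

<p>To drive the Horizontal Pod Autoscaler from this gauge you would still need a custom-metrics pipeline (for example the Prometheus Adapter) that exposes <code>active_sessions</code> through the custom metrics API.</p>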
<p>Also consider <a href="https://flink.apache.org/flink-architecture.html" rel="nofollow noreferrer">Apache Flink</a>. It has a <a href="https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/resource-providers/standalone/kubernetes/#using-standalone-kubernetes-with-reactive-mode" rel="nofollow noreferrer">Reactive Mode</a> that works in combination with Kubernetes:</p>
<blockquote>
<p><a href="https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/elastic_scaling/#reactive-mode" rel="nofollow noreferrer">Reactive Mode</a> allows to run Flink in a mode, where the Application Cluster is always adjusting the job parallelism to the available resources. In combination with Kubernetes, the replica count of the TaskManager deployment determines the available resources. Increasing the replica count will scale up the job, reducing it will trigger a scale down. This can also be done automatically by using a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a>.</p>
</blockquote>
|