<p><a href="https://i.stack.imgur.com/dMwWE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dMwWE.png" alt="enter image description here" /></a></p>
<p>Error in text:</p>
<blockquote>
<p>Error from server: Get
<a href="https://10.128.15.203:10250/containerLogs/default/postgres-54db6bdb8b-cmrsb/postgres" rel="nofollow noreferrer">https://10.128.15.203:10250/containerLogs/default/postgres-54db6bdb8b-cmrsb/postgres</a>: EOF</p>
</blockquote>
<p>How can I solve this issue, and what could the reason be? I used <a href="https://severalnines.com/database-blog/using-kubernetes-deploy-postgresql" rel="nofollow noreferrer">this tutorial</a> to configure everything.</p>
<p><strong>kubectl describe pods postgres-54db6bdb8b-cmrsb</strong></p>
<pre><code>Name:           postgres-54db6bdb8b-cmrsb
Namespace:      default
Priority:       0
Node:           gke-booknotes-pool-2-c1d23e62-r6nb/10.128.15.203
Start Time:     Sat, 14 Dec 2019 23:27:20 +0700
Labels:         app=postgres
                pod-template-hash=54db6bdb8b
Annotations:    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container postgres
Status:         Running
IP:             10.56.1.3
IPs:            <none>
Controlled By:  ReplicaSet/postgres-54db6bdb8b
Containers:
  postgres:
    Container ID:   docker://1a607cfb9a8968d708ff79419ec8bfc7233fb5ad29fb1055034ddaacfb793d6a
    Image:          postgres:10.4
    Image ID:       docker-pullable://postgres@sha256:9625c2fb34986a49cbf2f5aa225d8eb07346f89f7312f7c0ea19d82c3829fdaa
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
      Exit Code:    128
      Started:      Sat, 14 Dec 2019 23:54:00 +0700
      Finished:     Sat, 14 Dec 2019 23:54:00 +0700
    Ready:          False
    Restart Count:  25
    Requests:
      cpu:  100m
    Environment Variables from:
      postgres-config  ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /var/lib/postgresql/data from postgredb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t48dw (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  postgredb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-pv-claim
    ReadOnly:   false
  default-token-t48dw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t48dw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned default/postgres-54db6bdb8b-cmrsb to gke-booknotes-pool-2-c1d23e62-r6nb
Normal Pulled 28m (x5 over 30m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image "postgres:10.4" already present on machine
Normal Created 28m (x5 over 30m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres
Warning Failed 28m (x5 over 30m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container "postgres": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Warning BackOff 27m (x10 over 29m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container
Warning Failed 23m (x4 over 25m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container "postgres": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Warning BackOff 22m (x11 over 25m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container
Normal Pulled 22m (x5 over 25m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image "postgres:10.4" already present on machine
Normal Created 22m (x5 over 25m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres
Normal Pulled 19m (x4 over 20m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image "postgres:10.4" already present on machine
Normal Created 19m (x4 over 20m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres
Warning Failed 19m (x4 over 20m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container "postgres": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Warning BackOff 18m (x11 over 20m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container
Normal Created 15m (x4 over 17m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres
Warning Failed 15m (x4 over 17m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container "postgres": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Normal Pulled 14m (x5 over 17m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image "postgres:10.4" already present on machine
Warning BackOff 12m (x19 over 17m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container
Normal Pulled 5m38s (x5 over 8m29s) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image "postgres:10.4" already present on machine
Normal Created 5m38s (x5 over 8m27s) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres
Warning Failed 5m37s (x5 over 8m24s) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container "postgres": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Warning BackOff 5m24s (x10 over 7m58s) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container
</code></pre>
<p>Here is also my yaml files:</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
</code></pre>
<p><strong>postgres-configmap.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: some_password
</code></pre>
<p><strong>postgres-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres
</code></pre>
<p><strong>postgres-storage.yaml</strong></p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>After I changed RWM to RWO, I get this (I've deleted the old instances and created a new one):</p>
<pre><code>Name:           postgres-54db6bdb8b-wgvr2
Namespace:      default
Priority:       0
Node:           gke-booknotes-pool-1-3e566443-dc08/10.128.15.236
Start Time:     Sun, 15 Dec 2019 04:56:57 +0700
Labels:         app=postgres
                pod-template-hash=54db6bdb8b
Annotations:    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container postgres
Status:         Running
IP:             10.56.6.13
IPs:            <none>
Controlled By:  ReplicaSet/postgres-54db6bdb8b
Containers:
  postgres:
    Container ID:   docker://1070018c2a670cc7e0248e6269c271c3cba022fdd2c9cc5099a8eb4da44f7d65
    Image:          postgres:10.4
    Image ID:       docker-pullable://postgres@sha256:9625c2fb34986a49cbf2f5aa225d8eb07346f89f7312f7c0ea19d82c3829fdaa
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
      Exit Code:    128
      Started:      Sun, 15 Dec 2019 10:56:21 +0700
      Finished:     Sun, 15 Dec 2019 10:56:21 +0700
    Ready:          False
    Restart Count:  76
    Requests:
      cpu:  100m
    Environment Variables from:
      postgres-config  ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /var/lib/postgresql/data from postgredb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t48dw (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  postgredb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-pv-claim
    ReadOnly:   false
  default-token-t48dw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t48dw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 81s (x1629 over 6h) kubelet, gke-booknotes-pool-1-3e566443-dc08 Back-off restarting failed container
</code></pre>
| <h3>1. How to fix CrashLoopBackOff (postgres) - GCP</h3>
<p>The issue is here: error while creating mount source path '/mnt/data': mkdir /mnt/data: <strong>read-only</strong> file system.</p>
<p>You need to make sure <code>postgres-pv-claim</code> is writable. Recreate the PV and the PV claim with ReadWriteOnce (RWO) access (you must have set it to read-only by mistake, which is why you ran into the issue), then deploy the postgres pod again, which should fix the issue.</p>
<h3>2. Fixing FailedScheduling 69s (x10 over 7m35s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 4 times)</h3>
<p>When provisioning persistent volumes in GKE, you don't need to create PersistentVolume objects; they are dynamically provisioned by GKE. To solve the <code>Warning FailedScheduling 69s (x10 over 7m35s)</code> issue:</p>
<ol>
<li>remove <code>storageClassName</code> property from your pvc and</li>
<li>delete pv</li>
</ol>
<p>which should fix the issue. Please see the revised <code>postgres-storage.yaml</code> below.</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre>
|
<p>I'm trying to use the Kubernetes <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#go-client" rel="nofollow noreferrer">Go-Client</a> in my program (which will run outside my Kubernetes cluster), and it requires access to a config file. The config file contains a token, service account details, etc., so that the program can access the Kubernetes cluster. </p>
<p>The config file is of the form below: </p>
<pre><code>apiVersion: v1
kind: Config
users:
- name: testsa
  user:
    token: my-token
clusters:
- cluster:
    certificate-authority-data: my-cert
    server: my-server
  name: self-hosted-cluster
contexts:
- context:
    cluster: self-hosted-cluster
    user: testsa
  name: test-name
current-context: test-context
</code></pre>
<p>In the above file I need to provide <code>my-token</code>, <code>my-cert</code> &amp; <code>my-server</code> as environment variables, since I can't hard-code them in a file kept in the repository for security reasons. How can I do that? </p>
| <p>Thanks, Markus, for the hint link. </p>
<p>I am writing the answer in Go, as the original link showed how to do it from the command line. The steps are as follows: </p>
<ul>
<li>Replace the fields to be modified in the file with something of the form <code>${X}</code>. In my case, for example, I replaced <code>my-token</code> with <code>${mytoken}</code> and so on.</li>
<li>Then set <code>mytoken</code> as an environment variable so that your code can access it at runtime, e.g. by running <code>export mytoken="abcd"</code> on the command line.</li>
<li>Say the file name is <code>config</code>.</li>
<li>Execute the following code: </li>
</ul>
<pre><code>package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    // Read the token from the environment (set e.g. via `export mytoken=abcd`).
    mytoken := os.Getenv("mytoken")
    // Build a sed expression that replaces the ${mytoken} placeholder.
    part := fmt.Sprintf("s/${mytoken}/%s/g", mytoken)
    // Note: the empty "" argument after -i is the BSD/macOS form of in-place
    // editing; with GNU sed, drop that argument.
    command := exec.Command("sed", "-i", "", "-e", part, "config")
    command.Run()
}
</code></pre>
<p>This will do the required replacement during runtime. </p>
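<p>To see the substitution mechanics in isolation, here is a minimal shell sketch of the same idea. The template file path, the <code>${mytoken}</code> placeholder and the value <code>abcd</code> are illustrative assumptions, not part of the original setup:</p>

```shell
# Create a hypothetical config template containing a ${mytoken} placeholder.
cat > /tmp/config.tmpl <<'EOF'
users:
- name: testsa
  user:
    token: ${mytoken}
EOF

# Export the secret as an environment variable, then let sed substitute it,
# mirroring what the Go program above does via exec.Command.
export mytoken="abcd"
sed -e "s/\${mytoken}/${mytoken}/g" /tmp/config.tmpl > /tmp/config

grep 'token:' /tmp/config
```

Writing the rendered config to a separate path (instead of editing in place) keeps only the placeholder in the repository copy, which is one way to avoid accidentally committing the secret.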
|
<p>Currently, I'm trying to create a Kubernetes cluster on Google Cloud with two <strong>load balancers</strong>: one for backend (in Spring boot) and another for frontend (in Angular), where each service (load balancer) communicates with 2 replicas (pods). To achieve that, I created the following ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
spec:
  rules:
  - http:
      paths:
      - path: /rest/v1/*
        backend:
          serviceName: sample-backend
          servicePort: 8082
      - path: /*
        backend:
          serviceName: sample-frontend
          servicePort: 80
</code></pre>
<p>The ingress above mentioned can make the frontend app communicate with the REST API made available by the backend app. However, I have to create <strong>sticky sessions</strong>, so that every user communicates with the same POD because of the authentication mechanism provided by the backend. To clarify, if one user authenticates in POD #1, the cookie will not be recognized by POD #2.</p>
<p>To get around this issue, I read that the <strong>Nginx-ingress</strong> manages to deal with this situation, and I installed it through the steps available here: <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a> using Helm. </p>
<p>You can find below the diagram for the architecture I'm trying to build:</p>
<p><a href="https://i.stack.imgur.com/iZeHX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iZeHX.png" alt="enter image description here"></a></p>
<p>With the following services (I will just paste one of the services, the other one is similar):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
  type: LoadBalancer
</code></pre>
<p>And I declared the following ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: sample-cookie
spec:
  rules:
  - http:
      paths:
      - path: /rest/v1/*
        backend:
          serviceName: sample-backend
          servicePort: 8082
      - path: /*
        backend:
          serviceName: sample-frontend
          servicePort: 80
</code></pre>
<p>After that, I run <code>kubectl apply -f sample-nginx-ingress.yaml</code> to apply the ingress; it is created and its status is OK. However, when I access the URL that appears in the "Endpoints" column, the browser can't connect.
Am I doing anything wrong?</p>
<h1>Edit 1</h1>
<p>** Updated service and ingress configurations **</p>
<p>After some help, I've managed to access the services through the Ingress Nginx. Below are the configurations:</p>
<h2>Nginx Ingress</h2>
<p>The paths shouldn't contain the "*", unlike the default Kubernetes ingress, where the "*" is mandatory to route the paths I want.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "sample-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /rest/v1/
        backend:
          serviceName: sample-backend
          servicePort: 8082
      - path: /
        backend:
          serviceName: sample-frontend
          servicePort: 80
</code></pre>
<h2>Services</h2>
<p>Also, the services shouldn't be of type "LoadBalancer" but "<strong>ClusterIP</strong>" as below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
  type: ClusterIP
</code></pre>
<p>However, I still can't achieve sticky sessions in my Kubernetes cluster, since I'm still getting 403 and even the cookie name is not replaced, so I guess the annotations are not working as expected.</p>
| <p>I looked into this matter and I have found a solution to your issue. </p>
<p><strong>To achieve sticky session for both paths you will need two definitions of ingress.</strong></p>
<p>I created example configuration to show you the whole process: </p>
<p><strong>Steps to reproduce:</strong> </p>
<ul>
<li>Apply Ingress definitions </li>
<li>Create deployments</li>
<li>Create services</li>
<li>Create Ingresses </li>
<li>Test </li>
</ul>
<p>I assume that the cluster is provisioned and is working correctly. </p>
<h2>Apply Ingress definitions</h2>
<p>Follow this <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">Ingress link</a> to check whether there are any prerequisites needed before installing the Ingress controller on your infrastructure. </p>
<p>Apply the command below to provide all the mandatory prerequisites: </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
</code></pre>
<p>Run the command below to apply the generic configuration that creates a service: </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
</code></pre>
<h2>Create deployments</h2>
<p>Below are 2 example deployments to respond to the Ingress traffic on specific services: </p>
<p><strong>hello.yaml:</strong> </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 5
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"
</code></pre>
<p>Apply this first deployment configuration by invoking command:</p>
<p><code>$ kubectl apply -f hello.yaml</code></p>
<p><strong>goodbye.yaml:</strong> </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: goodbye
spec:
  selector:
    matchLabels:
      app: goodbye
      version: 2.0.0
  replicas: 5
  template:
    metadata:
      labels:
        app: goodbye
        version: 2.0.0
    spec:
      containers:
      - name: goodbye
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
</code></pre>
<p>Apply this second deployment configuration by invoking command:</p>
<p><code>$ kubectl apply -f goodbye.yaml</code></p>
<p>Check if the deployments created the pods correctly: </p>
<p><code>$ kubectl get deployments</code></p>
<p>It should show something like this: </p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
goodbye 5/5 5 5 2m19s
hello 5/5 5 5 4m57s
</code></pre>
<h2>Create services</h2>
<p>To connect to the pods created earlier, you will need to create services. Each service will be assigned to one deployment. Below are the 2 services to accomplish that:</p>
<p><strong>hello-service.yaml:</strong> </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
</code></pre>
<p>Apply first service configuration by invoking command:</p>
<p><code>$ kubectl apply -f hello-service.yaml</code></p>
<p><strong>goodbye-service.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: goodbye-service
spec:
  type: NodePort
  selector:
    app: goodbye
    version: 2.0.0
  ports:
  - name: goodbye-port
    protocol: TCP
    port: 50001
    targetPort: 50001
</code></pre>
<p>Apply second service configuration by invoking command:</p>
<p><code>$ kubectl apply -f goodbye-service.yaml</code></p>
<p><strong>Keep in mind that both configurations use type: <code>NodePort</code></strong></p>
<p>Check if services were created successfully: </p>
<p><code>$ kubectl get services</code> </p>
<p>Output should look like this:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
goodbye-service NodePort 10.0.5.131 <none> 50001:32210/TCP 3s
hello-service NodePort 10.0.8.13 <none> 50001:32118/TCP 8s
</code></pre>
<h2>Create Ingresses</h2>
<p>To achieve sticky sessions you will need to create 2 ingress definitions. </p>
<p>Definitions are provided below: </p>
<p><strong>hello-ingress.yaml:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
</code></pre>
<p><strong>goodbye-ingress.yaml:</strong> </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goodbye-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "goodbye-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /v2/
        backend:
          serviceName: goodbye-service
          servicePort: goodbye-port
</code></pre>
<p>Please change <code>DOMAIN.NAME</code> in both ingresses to one appropriate to your case.
I would advise looking at this <a href="https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/" rel="noreferrer">Ingress Sticky session</a> link.
Both Ingresses are configured for HTTP-only traffic. </p>
<p>Apply both of them invoking command: </p>
<p><code>$ kubectl apply -f hello-ingress.yaml</code></p>
<p><code>$ kubectl apply -f goodbye-ingress.yaml</code></p>
<p>Check if both configurations were applied: </p>
<p><code>$ kubectl get ingress</code></p>
<p>Output should be something like this: </p>
<pre class="lang-sh prettyprint-override"><code>NAME HOSTS ADDRESS PORTS AGE
goodbye-ingress DOMAIN.NAME IP_ADDRESS 80 26m
hello-ingress DOMAIN.NAME IP_ADDRESS 80 26m
</code></pre>
<h2>Test</h2>
<p>Open your browser and go to <code>http://DOMAIN.NAME</code>
Output should be like this: </p>
<pre><code>Hello, world!
Version: 1.0.0
Hostname: hello-549db57dfd-4h8fb
</code></pre>
<p><code>Hostname: hello-549db57dfd-4h8fb</code> is the name of the pod. Refresh it a couple of times. </p>
<p>It should stay the same. </p>
<p>To check if another route is working go to <code>http://DOMAIN.NAME/v2/</code>
Output should be like this: </p>
<pre><code>Hello, world!
Version: 2.0.0
Hostname: goodbye-7b5798f754-pbkbg
</code></pre>
<p><code>Hostname: goodbye-7b5798f754-pbkbg</code> is the name of the pod. Refresh it a couple of times. </p>
<p>It should stay the same. </p>
<p>To ensure that the cookies are not changing, open the developer tools (usually F12) and navigate to the cookies section. You can reload the page a few times to check that they stay the same. </p>
<p><a href="https://i.stack.imgur.com/Odr0O.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Odr0O.png" alt="Cookies"></a></p>
|
<p>I am using service mesh <a href="https://istio.io/" rel="noreferrer">https://istio.io/</a> installed on top of kubernetes and have installed
the example <a href="https://istio.io/docs/examples/bookinfo/" rel="noreferrer">https://istio.io/docs/examples/bookinfo/</a>, that ISTIO provides on their website. </p>
<p>Assume, I've created a service <strong>FOO</strong> and would like to call the service <strong>ratings</strong> through the virtual service <strong>ratings</strong>. </p>
<p><a href="https://i.stack.imgur.com/Z7uIY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z7uIY.png" alt="enter image description here"></a></p>
<ul>
<li><p>How to call <strong>ratings</strong> within <strong>FOO</strong>? Which address do I have to
provide to the http client in the <strong>FOO</strong> service to call <strong>ratings</strong>? Do
I have to create a virtual service for <strong>ratings</strong>? <strong>ratings</strong> should
not be accessible outside of the Kubernetes cluster. </p></li>
<li><p>When <strong>FOO</strong> calls <strong>ratings</strong>, will the request go first through its
own envoy proxy, or does it go directly to the <strong>ratings</strong> envoy proxy?</p></li>
</ul>
<p><strong>Follow-up question</strong></p>
<p>Here are all virtual services installed on the kubernetes cluster: </p>
<p>[<img src="https://i.stack.imgur.com/I5etw.jpg" alt="enter image description here"><a href="https://i.stack.imgur.com/I5etw.jpg" rel="noreferrer">2</a></p>
<p>The cluster IP address is: </p>
<p><a href="https://i.stack.imgur.com/R1q8p.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/R1q8p.jpg" alt="enter image description here"></a></p>
<p>The question is, how can I call the <strong>ratings</strong> service in <strong>FOO</strong> service? With the <strong>Cluster IP</strong> address?</p>
| <blockquote>
<p>How to call RATINGS within FOO? Which address do I have to provide the http client in the FOO service to call RATINGS. Do I have to create a virtual service for RATINGS? RATINGS should not be accessible outside of kubernetes cluster.</p>
</blockquote>
<p>You can still call other services the same way you would without istio. Since the service only needs to be accessible inside the cluster, you'll want to expose it with a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">clusterIP service</a>. You can then call the service by name using <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes DNS</a>. In general, the service is available at <code>http(s)://{service-name}.{namespace}</code>. To call a service in the same namespace, you can leave the <code>.{namespace}</code> suffix out of the url.</p>
<p>While it is not necessary to create a VirtualService, <a href="https://istio.io/docs/ops/best-practices/traffic-management/#set-default-routes-for-services" rel="nofollow noreferrer">it is advised by istio</a>:</p>
<blockquote>
<p>Although the default Istio behavior conveniently sends traffic from
any source to all versions of a destination service without any rules
being set, creating a VirtualService with a default route for every
service, right from the start, is generally considered a best practice
in Istio.</p>
</blockquote>
<blockquote>
<p>When FOO calls RATINGS, will the request go first through its own envoy proxy, or does it go directly to the RATINGS envoy proxy?</p>
</blockquote>
<p>It will go through both envoy proxies. This is how istio can manage the routing of your requests and provide traffic insights like tracing.</p>
<p>The outbound envoy proxy can be bypassed though, with the <a href="https://istio.io/docs/tasks/traffic-management/egress/egress-control/#direct-access-to-external-services" rel="nofollow noreferrer">traffic.sidecar.istio.io/includeOutboundIPRanges annotation</a>.</p>
|
<p>I am trying to create a pipeline job for <code>Angular</code> code to deploy the application into a k8s cluster. Below is the code for the pipeline's container <code>podTemplate</code>; during the build I get the following error.</p>
<pre><code>def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(
    cloud: 'kubernetes',
    namespace: 'test',
    imagePullSecrets: ['regcred'],
    label: label,
    containers: [
        containerTemplate(name: 'nodejs', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'docker', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'kubectl', image: 'k8spoc1/kubctl:latest', ttyEnabled: true, command: 'cat')
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
        hostPathVolume(hostPath: '/root/.m2/repository', mountPath: '/root/.m2/repository')
    ]
) {
    node(label) {
        def scmInfo = checkout scm
        def image_tag
        def image_name
        sh 'pwd'
        def gitCommit = scmInfo.GIT_COMMIT
        def gitBranch = scmInfo.GIT_BRANCH
        def commitId
        commitId = scmInfo.GIT_COMMIT[0..7]
        image_tag = "${scmInfo.GIT_BRANCH}-${scmInfo.GIT_COMMIT[0..7]}"
        stage('NPM Install') {
            container('nodejs') {
                withEnv(["NPM_CONFIG_LOGLEVEL=warn"]) {
                    sh 'npm install'
                }
            }
        }
    }
}
</code></pre>
<p>Error from Jenkins:</p>
<pre><code>[Pipeline] Start of Pipeline
[Pipeline] podTemplate
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: Labels must follow required specs - https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set: Ubuntu-82f3782f-b5aa-4029-9c51-57610153747c
Finished: FAILURE
</code></pre>
<p>Do I need to specify a <code>spec</code> value in my <code>Jenkins</code> file?</p>
| <p>The error message you get:</p>
<pre><code>ERROR: Labels must follow required specs - https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set: Ubuntu-82f3782f-b5aa-4029-9c51-57610153747c
</code></pre>
<p>points out quite precisely what is wrong with your <code>Pod</code> template. As you can see in the <a href="https://kubernetes.io/docs/concepts/overv" rel="nofollow noreferrer">link</a> to the kubernetes documentation given in the <code>ERROR</code> message, you need to follow certain rules when defining a <code>Pod</code>. The <code>labels</code> element is a <code>dictionary</code>/<code>map</code> field that requires you to provide at least one key-value pair, so you cannot just write <code>label: label</code> in your specification.</p>
<p>You can try to define your <code>PodTemplate</code> in <code>yaml</code> format (which is mostly used in <strong>kubernetes</strong>) like in <a href="https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates" rel="nofollow noreferrer">this</a> example:</p>
<pre><code>podTemplate(yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
"""
) {
    node(POD_LABEL) {
        container('busybox') {
            sh "hostname"
        }
    }
}
</code></pre>
<p>As you can read <a href="https://github.com/jenkinsci/kubernetes-plugin#pod-and-container-template-configuration" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>label The label of the pod. Can be set to a unique value to avoid
conflicts across builds, or omitted and POD_LABEL will be defined
inside the step.</p>
</blockquote>
<p><code>label</code> field can be omitted at all so first you can try without it and you shouldn't get any error message.</p>
|
<p>I've been working with Kubernetes for quite a while, but still often got confused about Volume, PersistentVolume and PersistemtVolumeClaim. It would be nice if someone could briefly summarize the difference of them.</p>
| <p>Volume - For a pod to reference storage that is external, it needs a volume spec. This volume can come from a ConfigMap, from Secrets, from a PersistentVolumeClaim, from a hostPath, etc.</p>
<p>PersistentVolume - It is a representation of storage that has been made available. Plugins for the cloud providers enable creating this resource.</p>
<p>PersistentVolumeClaim - This claims specific resources, and if a PersistentVolume available in the namespace matches the claim's requirements, the claim gets bound to that PersistentVolume.</p>
<p>At this point the PVC/PV still aren't used. Then in the Pod spec, the Pod references the claim as a volume, and the storage is attached to the Pod.</p>
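<p>As an illustrative sketch (all names, the size and the hostPath are made up), the three objects fit together like this:</p>
<pre><code># PersistentVolume: 1Gi of storage is made available
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
# PersistentVolumeClaim: a request for 1Gi; it binds to a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# Pod: the volume spec references the claim, which attaches the storage
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc
</code></pre>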
|
<p>I encountered a problem when integrating the K8S nginx ingress. I installed the nginx ingress controller and created the testing ingress resources according to the instructions in the documentation, but I am not able to reach the service via the ingress host. The test service itself works fine and is accessible via its cluster IP. Am I missing something?</p>
<blockquote>
<p>Install script</p>
</blockquote>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
</code></pre>
<blockquote>
<p>Bare-metal Using NodePort</p>
</blockquote>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
</code></pre>
<p><strong>Ingress controller is OK</strong>
<a href="https://i.stack.imgur.com/0vmLv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0vmLv.png" alt="enter image description here"></a></p>
<blockquote>
<p>Testing ingress resource</p>
</blockquote>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
app: my-nginx
template:
metadata:
labels:
app: my-nginx
spec:
containers:
- name: my-nginx
image: nginx:latest
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: my-nginx
labels:
app: my-nginx
spec:
ports:
- port: 80
protocol: TCP
name: http
selector:
app: my-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-nginx
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: nginx1.beencoding.com
http:
paths:
- path: /
backend:
serviceName: nginx-1
servicePort: 80
</code></pre>
<p><strong>We can see the test nginx pod raised and works fine, I can access the nginx index page by cluster IP</strong>
<a href="https://i.stack.imgur.com/I6Vf4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I6Vf4.png" alt="enter image description here"></a></p>
<p><strong>But I can't access nginx1.beencoding.com</strong>
<a href="https://i.stack.imgur.com/g1dce.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g1dce.png" alt="enter image description here"></a></p>
<p><strong>Can't access via browser</strong>
<a href="https://i.stack.imgur.com/GvD8C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GvD8C.png" alt="enter image description here"></a></p>
| <p>I have solved the problem by setting <code>hostNetwork: true</code> on the ingress controller.</p>
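<p>For reference, the flag goes into the pod spec of the ingress controller Deployment (a sketch; the image tag is an example, adapt it to the one from your mandatory.yaml):</p>
<pre><code>spec:
  template:
    spec:
      # The controller pod shares the node's network namespace, so
      # ports 80/443 are reachable directly on the node's IP address
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
</code></pre>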
|
<p>We want to disable <code>oc get/describe</code> for <code>secrets</code> to prevent token login</p>
<p>The current policy prevents create, update and delete, but not the viewing of secrets</p>
<pre><code>package admission
import data.k8s.matches
# Deny all user for doing secret ops except policyadmin
deny[query] {
matches[[resource]]
not "policyadmin" == resource.userInfo.username
"Secret" == resource.kind.kind
msg := sprintf("Custom Unauthorized user: %v", [resource.userInfo.username])
query = {
"id": "policy-admin-for-secret-only",
"resource": {
"kind": kind,
"namespace": namespace,
"name": name
},
"resolution": {
"message": msg
},
}
}
</code></pre>
<p>The data in the resource object is just: </p>
<blockquote>
<p>{\"kind\": {\"group\": \"\", \"kind\": \"Secret\", \"version\":
\"v1\"}, \"name\": \"s5-token-n6v6q\", \"namespace\": \"demo\",
\"operation\": \"DELETE\", \"resource\": {\"group\": \"\",
\"resource\": \"secrets\", \"version\": \"v1\"}, \"uid\":
\"748cdab2-1c1d-11ea-8b11-080027f8814d\", \"userInfo\": {\"groups\":
[\"system:cluster-admins\", \"system:masters\",
\"system:authenticated\"], \"username\": \"system:admin\"}</p>
</blockquote>
<p>The example in <a href="https://github.com/raffaelespazzoli/openshift-opa/blob/master/examples/authorization-webhooks/unreadable_secrets.rego" rel="nofollow noreferrer">https://github.com/raffaelespazzoli/openshift-opa/blob/master/examples/authorization-webhooks/unreadable_secrets.rego</a> uses the <strong>resource.spec</strong> object, but I don't think it's available in my <code>input/AdmissionReview</code> object?</p>
<p>I am using </p>
<ul>
<li>minishift 1.24 </li>
<li>openshift v3.9.0+2e78773-56 </li>
<li>kubernetes v1.9.1+a0ce1bc657 </li>
<li>etcd 3.2.16</li>
</ul>
| <p>Admission control in Kubernetes does NOT let you control a <code>get</code>. It only lets you control <code>create</code>, <code>update</code>, <code>delete</code>, and <code>connect</code>. The API docs for the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#validatingwebhookconfiguration-v1-admissionregistration-k8s-io" rel="nofollow noreferrer">validating webhook</a> and its descendant RuleWithOperations (no handy link) don't make this clear, but the <a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#admission-control" rel="nofollow noreferrer">docs introducing API access</a> state it explicitly.</p>
<p>To control <code>get</code>, you need to use <a href="https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#authorization" rel="nofollow noreferrer">authorization</a>. You could use <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> to restrict who can <code>get</code> any of the <code>Secret</code>s. To use OPA for authorization you would need the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules" rel="nofollow noreferrer">authorization webhook mode</a>.</p>
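<p>Since RBAC is additive (there is no deny rule), the way to restrict <code>get</code> on <code>Secret</code>s is to grant that verb only to the allowed user. A sketch, with made-up role names and the <code>demo</code> namespace from your example:</p>
<pre><code># Only policyadmin is bound to this Role; every other user simply
# has no Role granting "get" on secrets, so the API server denies it.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: demo
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: demo
subjects:
  - kind: User
    name: policyadmin
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>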
<p>In Andrew's code that you link to, he is using an authorization webhook, not an admission control webhook. That's why some of the data he is using from <code>input</code> isn't the same as what you see from an admission control webhook. Taking a quick look at his writeup, it seems you need to follow his instructions to <a href="https://github.com/raffaelespazzoli/openshift-opa#enable-authorization" rel="nofollow noreferrer">Enable Authorization</a>.</p>
|
<ol>
<li>I use openssl to create a wildcard self-signed certificate. I set the certificate validity duration to ten years (I double-checked the validity duration by inspecting the certificate with openssl)</li>
<li>I create a Kubernetes secret with the private key and certificate prepared in step 1 with following <code>kubectl</code> command:
<code>kubectl create secret tls my-secret -n test --key server.key --cert server.crt</code></li>
<li>We use nginx ingress controller version 0.25.1 running on AWS EKS</li>
<li>I refer to this secret in the Kubernetes ingress of my service</li>
<li>When connecting to my service via browser and inspecting the certificate, I notice it is issued by
"Kubernetes ingress Controller Fake certificate" and <strong>expires in one year instead of ten years</strong></li>
</ol>
<p>This certificate is used for internal traffic only, we expect the validity duration to be ten years. Why is it changed to one year? What can be done to keep the validity duration in the original certificate?</p>
<p><code>kubectl get secret dpaas-secret -n dpaas-prod -o yaml</code>:</p>
<pre><code>apiVersion: v1
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2ekNDQXFlZ0F3SUJBZ0lRSE5tMUQwakYxSHBQYVk2cTlCR2hGekFOQmdrcWhraUc5dzBCQVFzRkFEQlEKTVJrd0Z3WURWUVFLRXhCaFkyMWxJRk5sYkdZZ1UybG5ibVZrTVJVd0V3WURWUVFMRXd4aFkyMWxJR1Y0WVcxdwpiR1V4SERBYUJnTlZCQU1URTJGamJXVWdVMlZzWmlCVGFXZHVaV1FnUTBFd0hoY05NVGt4TWpFMk1UUXhOak14CldoY05Namt4TWpFMk1ERXhOak14V2pCWk1Rc3dDUVlEVlFRR0V3SlZVekVaTUJjR0ExVUVDaE1RWVdOdFpTQlQKWld4bUlGTnBaMjVsWkRFUk1BOEdBMVVFQ3hNSVlXTnRaUzVqYjIweEhEQWFCZ05WQkFNTUV5b3VaSEJ6TG0xNQpZMjl0Y0dGdWVTNWpiMjB3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzRjVmtaCndJY1cwS1VpTk5zQ096YjlleFREQU11SENwT3Jia2ExNmJHVHhMd2FmSmtybzF4TGVjYU8yM3p0SkpaRTZEZG8KZlB1UXAyWlpxVjJoL0VqL3ozSmZrSTRKTXdRQXQyTkd2azFwZk9YVlJNM1lUT1pWaFNxMU00Sm01ZC9BUHMvaApqZTdueUo4Y1J1aUpCMUh4SStRRFJpMllBK3Nzak12ZmdGOTUxdVBwYzR6Skppd0huK3JLR0ZUOFU3d2FueEdJCnRGRC9wQU1LaXRzUEFheW9aT2czazk4ZnloZS9ra1Z0TlNMTTdlWEZTODBwUEg2K3lEdGNlUEtMZ3N6Z3BqWFMKWGVsSGZLMHg1TXhneXIrS0dTUWFvU3Q0SVVkYWZHa05meVVYVURTMmtSRUdzVERVcC94R2NnQWo3UGhJeGdvZAovNWVqOXRUWURwNG0zWStQQWdNQkFBR2pnWXN3Z1lnd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXCk1CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1COEdBMVVkSXdRWU1CYUEKRk5VOE90NXBtM0dqU3pPTitNSVJta3k3dVBVRU1DZ0dBMVVkRVFRaE1CK0NIU291WkhCekxuVnpMV1ZoYzNRdApNUzV0ZVdOdmJYQmhibmt1WTI5dE1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQlJITlN5MXZoWXRoeUpHRHpkCnpjSi9FR3QxbktXQkEzdFovRmtlRDhpelRiT282bmt0Sys5Rk52YmNxeGFQbzZSTWVtM2VBcEZCQkNFYnFrQncKNEZpRkMzODFRTDFuZy9JL3pGS2lmaVRlK0xwSCtkVVAxd1IzOUdOVm9mdWdNcjZHRmlNUk5BaWw4MVZWLzBEVworVWZ2dFg5ZCtLZU0wRFp4MGxqUkgvRGNubEVNN3Z3a3F5NXA2VklJNXN6Y1F4WTlGWTdZRkdMUms4SllHeXNrCjVGYW8vUFV1V1ZTaUMzRk45eWZ0Y3c1UGZ6MSt4em1PSmphS3FjQWkvcFNlSVZzZ0VTNTFZYm9JTUhLdDRxWGcKSlFqeU9renlKbFhpRzhBa2ZRMVJraS91cmlqZllqaWl6M04yUXBuK2laSFNhUmJ5OUhJcGtmOGVqc3VTb2wrcgpxZ3N4Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdUhGWkdjQ0hGdENsSWpUYkFqczIvWHNVd3dETGh3cVRxMjVHdGVteGs4UzhHbnlaCks2TmNTM25HanR0ODdTU1dST2czYUh6N2tLZG1XYWxkb2Z4SS84OXlYNUNPQ1RNRUFMZGpScjVOYVh6bDFVVE4KMkV6bVZZVXF0VE9DWnVYZndEN1A0WTN1NThpZkhFYm9pUWRSOFNQa0EwWXRtQVByTEl6TDM0QmZlZGJqNlhPTQp5U1lzQjUvcXloaFUvRk84R3A4UmlMUlEvNlFEQ29yYkR3R3NxR1RvTjVQZkg4b1h2NUpGYlRVaXpPM2x4VXZOCktUeCt2c2c3WEhqeWk0TE00S1kxMGwzcFIzeXRNZVRNWU1xL2loa2tHcUVyZUNGSFdueHBEWDhsRjFBMHRwRVIKQnJFdzFLZjhSbklBSSt6NFNNWUtIZitYby9iVTJBNmVKdDJQandJREFRQUJBb0lCQUJuWFY2SnlCUHMvVkVPTQpvRHFaelVTS1lBaEtMam5IVTVVcktDRUlrdWFmSTdPYVRXTjl5Y3FSVHk1b3RnSUxwRG9YUnR3TzFyZ1huQkZuCjEwU0Fza0dVOFBOT3IzZStmQXNWcG9VYzJIKzFEZ1pwVTJYQXNHeSs4WkxkbXFHTUIyTko2Wm95Wm94MjRVUDIKODFGdmd4MkQ1OGhGcHRHcml1RjlBSHRaNHdhUXhnNE9EckNnQUVmZU8rTlNsckEvZHB0bERFcDJYUHBVVGg5VQpKMGk2b3VGZjUwTllHZXptMTR5ZkpWMDhOdWJGYjNjWldNOUZYZXAvUDhnZTFXZXBRemJsZWtWcEQ0eGZQa1ZjCjNwam1GREszdUVuSC9qcmRKeDJUb0NjTkJqK21nemN6K1JNZUxTNVBQZ29sc0huSFVNdkg4eG51cis5MURlRXgKWVlSYUtRRUNnWUVBMkVjUjFHcTFmSC9WZEhNU3VHa2pRUXJBZ0QvbEpiL0NGUExZU2xJS0pHdmV5Vi9qbWtKNApRdUFiYWFZRDVOdmtxQ0dDd2N1YXZaU05YTnFkMGp5OHNBWmk0M0xOaCt0S1VyRDhOVlVYM2ZiR2wyQUx0MTFsCmVUM0ZRY1NVZmNreEZONW9jdEFrV3hLeG5hR2hpOHJlelpjVStRZkMxdDF5VTdsOXR0MUgrODhDZ1lFQTJsRjQKRDBnZHpKWHduQnBJM0FJSjNyMDlwalZpMXlScGgyUDRvK1U2a1cvTmE5NnRLcTg5VEg3c0ViQkpSWEVjUDdBawpSYnNHN1p4SStCQkF5cy9YdFhkOWowdXRBODc4c1hsbm5ZVTlUa21xQXRoYVBCaWh3K00wZE9LNnFhRFpueWZBCnE5Z2NoZ0ZvS3pGenBuMzVvVzllNStId2EyYWk2YW8yZnFHSFlFRUNnWUVBcVVaR3dEaWN2MHJXYUlSQVhMRjkKZEVUVUVnendickU5V0dRUndXbWdvbzBESEIyKzZGZXFCTDJlOXZ1SEJMTE9yb0U3OUM1RmVLZ3lWRUNQVWFOVQpFM21NSUhVVVJKTjE0bTYvbDRaNFhiUHVEMENQS3Y4Z2t0b3o3NXZLbFFESk40b3p1ZGtLKzNVUUswMzhRSXVTCkF0dURBTDZBVXVlVHVjL3VneGVDWmFVQ2dZQnFZUlE5YmdpSExmQ21QL0NNczdtWGZXTFM0R1NmTExEM05mRnIKKzBDRXFaUFJJaG9ER0l5bi81aU1MZmdtREMyVm93Q3BzYTU0alpUSXV6SzNJSHVkZ3ZIOXB3UlJQTVRJdmIyTgpkZVVmaHFsKzVXbGlxeVgzeTNnK0ZGU2NYekpyYVBWclJzenZSelE1Qjhtd3NPVzRrZ29PdDN0cytnQWNGOEtpCkJaZHZnUUtCZ1FDdTBSQmwrTkRIZ2RiZC9YUnN0ZmZ
WWUlDRnpPOFJoWUo0NGpZYWRVc3BtcW5BYUp3T3liL0EKek9FNC9RbE85MGx2WCtYbXFud3FKMzlDOVF0SDFCeTZGdVhhVCtGdVdxeWVlK3MyNEd1Rnp5b3pUVnBZNURROQpSUS9iL2NQbXZuMTZMMnlTaVY2d3N3Nk8xWGdtc2ZCZ0JHUjJsdU5PTjIwNGdnazRZWGgvZ3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
kind: Secret
metadata:
creationTimestamp: "2019-12-16T14:31:59Z"
name: dpaas-secret
namespace: dpaas-prod
resourceVersion: "134564"
selfLink: /api/v1/namespaces/dpaas-prod/secrets/dpaas-secret
uid: d1c692b6-2010-11ea-bce8-1247666f5179
type: kubernetes.io/tls
</code></pre>
<p><code>kubectl describe ingress ingress-test4 -n dpaas-prod</code>:</p>
<pre><code>Name: ingress-test4
Namespace: dpaas-prod
Address: ad6c6ea681f5d11ea91440a6af5c8987-559e0a22f4b3e398.elb.us-east-1.amazonaws.com
Default backend: default-http-backend:80 (<none>)
TLS:
dpaas-secret terminates
Rules:
Host Path Backends
---- ---- --------
test4.dps.mycompany.com
/ cpe-test4:80 (10.0.13.222:8080,10.0.38.178:8080)
Annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: false
nginx.ingress.kubernetes.io/server-alias: test4.dps.us-east-1.mycompany.com
nginx.ingress.kubernetes.io/ssl-redirect: true
Events: <none>
</code></pre>
| <p>In general, "Kubernetes Ingress Controller Fake Certificate" indicates a problem with the certificate itself or with your setup. You can read more about it <a href="https://github.com/kubernetes/ingress-nginx/issues/1044" rel="nofollow noreferrer">here</a>, <a href="https://rancher.com/docs/rancher/v2.x/en/installation/options/troubleshooting/#cert-cn-is-kubernetes-ingress-controller-fake-certificate" rel="nofollow noreferrer">here</a>, <a href="https://github.com/kubernetes/ingress-nginx/issues/4674" rel="nofollow noreferrer">here</a> and <a href="https://github.com/kubernetes/ingress-nginx/issues/1984" rel="nofollow noreferrer">here</a>. </p>
<p>None of these posts will tell you directly how to solve your problem, as the cause can vary widely and depends on your certificate and how it was generated. </p>
<p><a href="https://github.com/kubernetes/ingress-nginx/issues/4674#issuecomment-558504185" rel="nofollow noreferrer">Here</a> for example, it's reported that problem was not in the certificate itself but in his ingress:</p>
<blockquote>
<p>I just realized that I was missing the host in the rule per se (not
sure if this is required, but it fixed the issues and now the cert.
being used is the one I'm providing and not the Fake Kubernetes one).
Example of my ingress:</p>
</blockquote>
<p>So, as suggested in the comments, you reviewed the steps used to generate your certificate and discovered that adding the certificate's common name to the list of SANs and regenerating the self-signed certificate fixed the problem. </p>
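<p>For completeness, a sketch of how such a certificate can be generated with the SAN included (the hostnames are examples; the <code>-addext</code> flag requires OpenSSL 1.1.1 or newer):</p>
<pre><code># 10-year self-signed certificate whose CN is also listed as a SAN
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout server.key -out server.crt \
  -subj "/CN=*.dps.mycompany.com" \
  -addext "subjectAltName=DNS:*.dps.mycompany.com,DNS:dps.mycompany.com"

# Verify the SAN list before creating the Kubernetes secret
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"
</code></pre>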
|
<p>The two URLs below are given in GitHub and both return a 404 error:
<a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml</a> and <a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml</a></p>
| <p>This broken link appears in a lot of documentation; the correct one is in the GitHub repo:
<a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard</a></p>
<p>Try this one instead, it works:
<a href="https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml</a></p>
|
<p>I have a Kubernetes setup of 7 Apache Ignite servers and over 100 clients.</p>
<p>With my current Apache Ignite configuration, I am seeing the following line of log for the servers:</p>
<pre><code>java.lang.OutOfMemoryError: Java heap space
</code></pre>
<p>Below is the memory configuration Apache Ignite server:</p>
<ul>
<li>Pod memory limit: 2Gb</li>
<li>Xmx: 768m</li>
</ul>
<p>I would like to know what the optimum memory configuration for the Apache Ignite cluster should be.</p>
| <p>It depends on what you're trying to do -- persistence and SQL tend to use more heap space for example -- but both 2Gb and 768Mb are <em>much</em> smaller than I'd expect for an in-memory database.</p>
<p>The <a href="https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/memory-tuning" rel="nofollow noreferrer">tuning guide</a> suggests 10Gb as a starting point:</p>
<pre><code>-server
-Xms10g
-Xmx10g
-XX:+AlwaysPreTouch
-XX:+UseG1GC
-XX:+ScavengeBeforeFullGC
-XX:+DisableExplicitGC
</code></pre>
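<p>Whichever heap size you settle on, keep the pod memory limit comfortably above it, since Ignite also allocates off-heap data regions. A sketch of how that might look in the container spec (the sizes and the env variable name are assumptions to adapt to your image):</p>
<pre><code>containers:
  - name: ignite
    image: apacheignite/ignite:2.7.6
    env:
      - name: JVM_OPTS   # JVM options consumed by the Ignite image
        value: "-server -Xms10g -Xmx10g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC"
    resources:
      limits:
        memory: 16Gi     # heap + off-heap data regions + overhead
</code></pre>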
|
<p>For some legacy systems, I need to activate TLSv1.1 on my NGINX ingress controller until they are switched to TLSv1.2.
It should be fairly easy according to the documentation, but I am getting a handshake error. It looks like nginx is not serving any certificate at all.</p>
<p>ConfigMap:</p>
<pre><code>apiVersion: v1
data:
log-format: '{"time": "$time_iso8601", "x-forwarded-for": "$http_x_forwarded_for",
"remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for",
"request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent,
"request_time": $request_time, "status":$status, "vhost": "$host", "request_proto":
"$server_protocol", "path": "$uri", "request_query": "$args", "request_length":
$request_length, "duration": $request_time,"method": "$request_method", "http_referrer":
"$http_referer", "http_user_agent": "$http_user_agent" }'
log-format-escape-json: "true"
log-format-upstream: '{"time": "$time_iso8601", "x-forwarded-for": "$http_x_forwarded_for",
"remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for",
"request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent,
"request_time": $request_time, "status":$status, "vhost": "$host", "request_proto":
"$server_protocol", "path": "$uri", "request_query": "$args", "request_length":
$request_length, "duration": $request_time,"method": "$request_method", "http_referrer":
"$http_referer", "http_user_agent": "$http_user_agent" }'
ssl-protocols: TLSv1.1 TLSv1.2
kind: ConfigMap
metadata:
name: nginx-ingress-controller
namespace: nginx
</code></pre>
<p>curl:</p>
<pre><code>$ curl https://example.com/healthcheck -I --tlsv1.2
HTTP/2 200
....
$ curl https://example.com/healthcheck -I --tlsv1.1 -k -vvv
* Trying 10.170.111.150...
* TCP_NODELAY set
* Connected to example.com (10.170.111.150) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.1 (OUT), TLS handshake, Client hello (1):
* TLSv1.1 (IN), TLS alert, Server hello (2):
* error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure
</code></pre>
<p>openssl:</p>
<pre><code>$ openssl s_client -servername example.com -connect example.com:443 -tls1_2
CONNECTED(00000007)
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, CN = DigiCert SHA2 Secure Server CA
verify return:1
depth=0 C = US, L = NY, O = Example, CN = example.com
verify return:1
---
Certificate chain
...
---
Server certificate
...
issuer=/C=US/O=DigiCert Inc/CN=DigiCert SHA2 Secure Server CA
---
No client certificate CA names sent
Server Temp Key: ECDH, X25519, 253 bits
---
SSL handshake has read 3584 bytes and written 345 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
....
Verify return code: 0 (ok)
---
$ openssl s_client -servername example.com -connect example.com:443 -tls1_1
CONNECTED(00000007)
4541097580:error:14004410:SSL routines:CONNECT_CR_SRVR_HELLO:sslv3 alert handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.260.1/libressl-2.6/ssl/ssl_pkt.c:1205:SSL alert number 40
4541097580:error:140040E5:SSL routines:CONNECT_CR_SRVR_HELLO:ssl handshake failure:/BuildRoot/Library/Caches/com.apple.xbs/Sources/libressl/libressl-22.260.1/libressl-2.6/ssl/ssl_pkt.c:585:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Start Time: 1576574691
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
</code></pre>
<p>A sum-up of questions:</p>
<p>1) How to enable TLSv1.1 on Nginx ingress?</p>
<p>2) Can I see in the logs (where) which tls version was used to connect? I cannot find anything with kubectl logs -n Nginx pod?</p>
| <p>For anyone else having this problem, the ConfigMap below works. But please consider deactivating TLSv1 and TLSv1.1 as soon as possible!</p>
<pre><code>apiVersion: v1
data:
log-format: '{"time": "$time_iso8601", "x-forwarded-for": "$http_x_forwarded_for",
"remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for",
"request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent,
"request_time": $request_time, "status":$status, "vhost": "$host", "request_proto":
"$server_protocol", "path": "$uri", "request_query": "$args", "request_length":
$request_length, "duration": $request_time,"method": "$request_method", "http_referrer":
"$http_referer", "http_user_agent": "$http_user_agent" }'
log-format-escape-json: "true"
log-format-upstream: '{"time": "$time_iso8601", "x-forwarded-for": "$http_x_forwarded_for",
"remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for",
"request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent,
"request_time": $request_time, "status":$status, "vhost": "$host", "request_proto":
"$server_protocol", "path": "$uri", "request_query": "$args", "request_length":
$request_length, "duration": $request_time,"method": "$request_method", "http_referrer":
"$http_referer", "http_user_agent": "$http_user_agent" }'
ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA
ssl-early-data: "true"
ssl-protocols: TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
kind: ConfigMap
metadata:
name: nginx-ingress-controller
namespace: nginx
</code></pre>
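<p>Regarding the second question: nginx exposes the negotiated protocol and cipher as the <code>$ssl_protocol</code> and <code>$ssl_cipher</code> variables, so you can add them to the JSON log format in the same ConfigMap to see which TLS version each client used, for example:</p>
<pre><code># trimmed log format shown for brevity; add the two tls_* fields to yours
log-format-upstream: '{"time": "$time_iso8601", "vhost": "$host", "status": $status,
  "tls_version": "$ssl_protocol", "tls_cipher": "$ssl_cipher"}'
</code></pre>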
|
<p>We have a platform built using microservices architecture, which is deployed using Kubernetes and Ingress. One of the platform's components is a Django Rest API.
The yaml for the Ingress is the below (I have changed only the service names & endpoints):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dare-ingress
annotations:
kubernetes.io/ingress.provider: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$1
certmanager.k8s.io/issuers: "letsencrypt-prod"
certmanager.k8s.io/acme-challenge-type: http01
spec:
tls:
- hosts:
- demo-test.com
secretName: dare-ingress-tls
rules:
- host: demo-test.com
http:
paths:
- path: /prov/?(.*)
backend:
serviceName: prov
servicePort: 8082
- path: /(prov-ui/?(.*))
backend:
serviceName: prov-ui
servicePort: 8080
- path: /flask-api/?(.*)
backend:
serviceName: flask-api
servicePort: 80
- path: /django-rest/?(.*)
backend:
serviceName: django-rest
servicePort: 8000
</code></pre>
<p>The django component is the last one. I have a problem with the swagger documentation. While all the REST calls work fine, when I want to view the documentation the page does not load. This is because it requires login, and the redirection to the documentation does not work.</p>
<p>I mean that, without Ingress the documentation url is for example: <a href="https://demo-test.com/docs" rel="nofollow noreferrer">https://demo-test.com/docs</a> but using Ingress, the url should be <a href="https://demo-test.com/django-rest/login" rel="nofollow noreferrer">https://demo-test.com/django-rest/login</a> and then <a href="https://demo-test.com/django-rest/docs" rel="nofollow noreferrer">https://demo-test.com/django-rest/docs</a> but the redirection does not work, I get a 404 error.
Does anyone have any idea how to fix this in Ingress?</p>
| <p>I managed to fix the redirection error (and stay logged in) using the FORCE_SCRIPT_NAME as suggested in a comment in this <a href="https://stackoverflow.com/questions/58723145/hosting-django-on-aks-behind-nginx-ingress">thread</a> </p>
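<p>Concretely, that means telling Django about the ingress path prefix in <code>settings.py</code> (the prefix below matches the ingress path from the question; adapt it to yours):</p>
<pre><code># settings.py -- Django is served under the /django-rest prefix, so the
# login/docs redirects must be generated with the prefix included
FORCE_SCRIPT_NAME = '/django-rest'
USE_X_FORWARDED_HOST = True  # trust the Host header forwarded by the ingress
</code></pre>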
<p>However, now the swagger documentation is not properly loaded:</p>
<p><a href="https://i.stack.imgur.com/APvBG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/APvBG.png" alt="enter image description here"></a></p>
<p>I followed the suggestions from <a href="https://github.com/marcgibbons/django-rest-swagger/issues/758" rel="nofollow noreferrer">here</a> regarding the swagger documentation but still the page cannot be loaded properly. Any ideas?</p>
|
<p>While trying to perform <code>helm install</code></p>
<blockquote>
<p>Error: unable to build kubernetes objects from release manifest:
[unable to recognize "": no matches for kind "Service" in version
"extensions/v1beta1", error validating "": error validating data:
ValidationError(Deployment.spec): missing required field "selector" in
io.k8s.api.apps.v1.DeploymentSpec]</p>
</blockquote>
<p>My <code>service.yaml</code> looks like below</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Service
metadata:
name: helm-xxx-helper-api
spec:
type: NodePort
ports:
- nodePort: 31235
port: 80
targetPort: 8080
selector:
app: helm-xxx-helper
</code></pre>
<p>My <code>deployment.yaml</code> </p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helm-xxx-helper
spec:
replicas: 2
selector:
matchLabels:
name: helm-xxx-helper
template:
metadata:
labels:
app: helm-xxx-helper
spec:
containers:
- name: helm-xxx-helper
image: xxxxxxxxx:5001/devops/xxx-helper:latest
imagePullPolicy: Always
env:
- name: XXX_STAGE
value: "DEV"
ports:
- containerPort: 8080
</code></pre>
<p>What could be the issue here?</p>
| <p>This error means that you are running Kubernetes 1.16 or newer.</p>
<p><strong>Issue 1</strong> - With <code>Service</code></p>
<p>In this version the <code>apiVersion</code> of many objects has changed (Deployments, StatefulSets, etc.). More details can be found <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="noreferrer">here</a>.</p>
<p>A <code>Service</code> must use <code>apiVersion: v1</code>. Otherwise you will receive errors like</p>
<pre><code>error: unable to recognize "STDIN": no matches for kind "Service" in version "extensions/v1beta1"
error: unable to recognize "STDIN": no matches for kind "Service" in version "extensions/v1"
error: unable to recognize "STDIN": no matches for kind "Service" in version "apps/v1"
</code></pre>
<p><strong>Issue 2</strong> - With <code>Deployment</code>.</p>
<ul>
<li><code>spec.selector.matchLabels</code> must match the labels of the pod template, which use the key <code>app</code>, not <code>name</code>. So in this case, instead of <code>name: helm-xxx-helper</code> you need to use <code>app: helm-xxx-helper</code>, otherwise you will receive an error like:</li>
</ul>
<pre><code>The Deployment "helm-xxx-helper" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"helm-xxx-helper"}: `selector` does not match template `labels`
</code></pre>
<ul>
<li>wrong YAML indentation. In your code you have</li>
</ul>
<pre><code>...
selector:
matchLabels:
name: helm-xxx-helper
...
</code></pre>
<p>The value for <code>matchLabels</code> must be indented under <code>selector</code>. Also, as mentioned in the previous point, you need to change <code>name</code> to <code>app</code>.</p>
<p>Proper format with the correct value of <code>matchLabels</code>:</p>
<pre><code>...
selector:
matchLabels:
app: helm-xxx-helper
...
</code></pre>
<p>You can read about <code>Labels</code> and <code>Selectors</code> <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="noreferrer">here</a>.</p>
<p>As you mentioned it is <code>HELM</code>, you will need to either use a Kubernetes version older than 1.16 or change the <code>apiVersion</code> in each object YAML in the <code>templates</code> directory.
There was already a similar case; please check <a href="https://stackoverflow.com/a/58482194/11148139">this thread</a> for more information.</p>
<p>Below both YAMLs which will create <code>Service</code> and <code>Deployment</code>. Tested on Kubernetes 1.16.1.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: helm-xxx-helper-api
spec:
type: NodePort
ports:
- nodePort: 31235
port: 80
targetPort: 8080
selector:
app: helm-xxx-helper
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helm-xxx-helper
spec:
replicas: 2
selector:
matchLabels:
app: helm-xxx-helper
template:
metadata:
labels:
app: helm-xxx-helper
spec:
containers:
- name: helm-xxx-helper
image: nginx # As I dont have your image ive put nginx
imagePullPolicy: Always
env:
- name: XXX_STAGE
value: "DEV"
ports:
- containerPort: 8080
</code></pre>
|
<p>We have a kubernetes cluster with istio proxy running.
At first I created a cronjob which reads from database and updates a value if found. It worked fine.</p>
<p>Then it turns out we already had a service that does the database update so I changed the database code into a service call.</p>
<pre><code>conn := dial.Service("service:3550", grpc.WithInsecure())
client := protobuf.NewServiceClient(conn)
client.Update(ctx)
</code></pre>
<p>But istio rejects the calls with an RBAC error. It just rejects them and doesn't say why.</p>
<p>Is it possible to add a role to a cronjob? How can we do that?</p>
<p>The mTLS meshpolicy is PERMISSIVE.</p>
<p>Kubernetes version is 1.17 and istio version is 1.3</p>
<pre><code>API Version: authentication.istio.io/v1alpha1
Kind: MeshPolicy
Metadata:
Creation Timestamp: 2019-12-05T16:06:08Z
Generation: 1
Resource Version: 6578
Self Link: /apis/authentication.istio.io/v1alpha1/meshpolicies/default
UID: 25f36b0f-1779-11ea-be8c-42010a84006d
Spec:
Peers:
Mtls:
Mode: PERMISSIVE
</code></pre>
<p>The cronjob yaml</p>
<pre><code>Name: cronjob
Namespace: serve
Labels: <none>
Annotations: <none>
Schedule: */30 * * * *
Concurrency Policy: Allow
Suspend: False
Successful Job History Limit: 1
Failed Job History Limit: 3
Pod Template:
Labels: <none>
Containers:
service:
Image: service:latest
Port: <none>
Host Port: <none>
Environment:
JOB_NAME: (v1:metadata.name)
Mounts: <none>
Volumes: <none>
Last Schedule Time: Tue, 17 Dec 2019 09:00:00 +0100
Active Jobs: <none>
Events:
</code></pre>
<p><em>edit</em>
I have turned off RBAC for my namespace in ClusterRBACConfig and now it works. So my conclusion is that cronjobs are affected by roles, and it should be possible to add a role and call other services.</p>
| <p>The <code>cronjob</code> needs proper permissions in order to run if RBAC is enabled.</p>
<p>One of the solutions in this case would be to add a <code>ServiceAccount</code> to the <code>cronjob</code> configuration file that has enough privileges to execute what it needs to.</p>
<p>Since you already have existing services in the namespace, you can check whether there is an existing <code>ServiceAccount</code> for the specific namespace by using:</p>
<pre><code>$ kubectl get serviceaccounts -n serve
</code></pre>
<p>If there is an existing <code>ServiceAccount</code>, you can add it to your cronjob manifest YAML file.</p>
<p>Like in this example:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: adwords-api-scale-up-cron-job
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 100
      template:
        spec:
          serviceAccountName: scheduled-autoscaler-service-account
          containers:
          - name: adwords-api-scale-up-container
            image: bitnami/kubectl:1.15-debian-9
            command:
            - bash
            args:
            - "-xc"
            - |
              kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
            volumeMounts:
            - name: kubectl-config
              mountPath: /.kube/
              readOnly: true
          volumes:
          - name: kubectl-config
            hostPath:
              path: $HOME/.kube # Replace $HOME with an explicit path
          restartPolicy: OnFailure
</code></pre>
<p>Then, under <code>Pod Template</code>, the <code>Service Account</code> should be visible:</p>
<pre><code>$ kubectl describe cronjob adwords-api-scale-up-cron-job
Name: adwords-api-scale-up-cron-job
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"adwords-api-scale-up-cron-job","namespace":"default"},...
Schedule: */2 * * * *
Concurrency Policy: Allow
Suspend: False
Successful Job History Limit: 3
Failed Job History Limit: 1
Starting Deadline Seconds: <unset>
Selector: <unset>
Parallelism: <unset>
Completions: <unset>
Active Deadline Seconds: 100s
Pod Template:
Labels: <none>
Service Account: scheduled-autoscaler-service-account
Containers:
adwords-api-scale-up-container:
Image: bitnami/kubectl:1.15-debian-9
Port: <none>
Host Port: <none>
Command:
bash
Args:
-xc
kubectl scale --replicas=2 --v=7 deployment/adwords-api-deployment
Environment: <none>
Mounts:
/.kube/ from kubectl-config (ro)
Volumes:
kubectl-config:
Type: HostPath (bare host directory volume)
Path: $HOME/.kube
HostPathType:
Last Schedule Time: <unset>
Active Jobs: <none>
Events: <none>
</code></pre>
<p>In case of a custom RBAC configuration, I suggest referring to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes</a> documentation.</p>
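<p>As an illustrative sketch only (the role name and verbs here are assumptions to adapt to what the job actually calls), a namespaced <code>Role</code> plus <code>RoleBinding</code> for such a service account could look like:</p>
<pre><code># Illustrative only: names and verbs are assumptions, adjust to your needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cronjob-scaler
  namespace: serve
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "deployments/scale"]
  verbs: ["get", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cronjob-scaler
  namespace: serve
subjects:
- kind: ServiceAccount
  name: scheduled-autoscaler-service-account
  namespace: serve
roleRef:
  kind: Role
  name: cronjob-scaler
  apiGroup: rbac.authorization.k8s.io
</code></pre>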
<p>Hope this helps.</p>
|
<p>I have a helm chart that deploys a number of Kubernetes resources. One of them is a resource that is of a Custom Resource Definition (CRD) type (<code>ServiceMonitor</code> used by <code>prometheus-operator</code>). </p>
<p>I am looking for a way, how to "tell" helm that I'd want to create this resource only if such a CRD is defined in the cluster OR to ignore errors <em>only</em> caused by the fact that such a CRD is missing.</p>
<p>Is that possible and how can I achieve that?</p>
| <p>Helm's <a href="https://v2.helm.sh/docs/chart_template_guide/#built-in-objects" rel="noreferrer">Capabilities</a> object can tell you if an entire API class is installed in the cluster. I don't think it can test for a specific custom resource type.</p>
<p>In your <code>.tpl</code> files, you can wrap the entire file in a <code>{{ if }}...{{ end }}</code> block. Helm doesn't especially care if the rendered version of a file is empty.</p>
<p>That would lead you to a file like:</p>
<pre><code>{{ if .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" -}}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  ...
{{ end -}}
</code></pre>
<p>That would get installed if the operator is installed in the cluster, and skipped if not.</p>
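<p>If you are on Helm 3, one way to exercise this condition locally (the release and chart names below are placeholders) is to render the chart with the API group advertised via a flag:</p>
<pre><code># Simulate a cluster that serves the Prometheus Operator API group;
# the ServiceMonitor manifest should appear in the rendered output.
helm template my-release ./my-chart --api-versions monitoring.coreos.com/v1

# Without the flag, the guarded file renders empty and is skipped.
helm template my-release ./my-chart
</code></pre>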
|
<p>I managed to create a cluster using GKE with GCE ingress successfully. However, it takes a long time for the Ingress to detect that the service is ready (I already set both livenessProbe and readinessProbe).
My pod setup:</p>
<pre><code>Containers:
...
gateway:
Liveness: http-get http://:5100/api/v1/gateway/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:5100/api/v1/gateway/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
...
</code></pre>
<p>and ingress</p>
<pre><code>...
Name: main-ingress
Host Path Backends
---- ---- --------
<host>
/api/v1/gateway/ gateway:5100 (<ip:5100>)
/api/v1/gateway/* gateway:5100 (<ip:5100>)
web:80 (<ip>)
Annotations:
ingress.kubernetes.io/backends: {"k8s-be-***":"HEALTHY","k8s-be-***":"HEALTHY","k8s-be-***":"HEALTHY"}
kubernetes.io/ingress.allow-http: false
</code></pre>
<p>What I notice is that if I kill all the services and redeploy, the backend stays at <code>UNHEALTHY</code> for quite some time before it picks them up, even though Kubernetes itself detects that the pods/services are all running.</p>
<p>I also noticed that when <code>livenessProbe</code> and <code>readinessProbe</code> are set, the backend health check generated by ingress-gce is the following:</p>
<pre><code>Backend
Timeout: 30 seconds
Backend Health check
Interval: 70 seconds
Timeout: 1 second
Unhealthy threshold: 10 consecutive failures
Healthy threshold: 1 success
</code></pre>
<p>Whereas if I just deploy a simple nginx pod without specifying <code>livenessProbe</code> and <code>readinessProbe</code>, the backend generated is the following</p>
<pre><code>Backend
Timeout: 30 seconds
Backend Health Check
Interval: 60 seconds
Timeout: 60 seconds
Unhealthy threshold: 10 consecutive failures
Healthy threshold: 1 success
</code></pre>
<p>Is the Backend health check the root cause of the slowness of picking things up? If so, any idea how to speed it up?</p>
<hr>
<p><strong>UPDATE</strong>
Wanted to clarify after reading <a href="https://stackoverflow.com/a/59374808/364569">@yyyyahir's answer below</a></p>
<p>I understand that when creating new ingress it will take much longer because the ingress controller needs to provision the new Load Balancer, backend and all the other related things.</p>
<p>However, what I also notice is that when I release a new version of the service (through Helm - the deployment is set to Recreate rather than RollingUpgrade) OR if the pod dies (out of memory) and is restarted, it takes quite a while before the backend status is healthy again, despite the Pod already being in a running/healthy state (this is with the existing Ingress and Load Balancer in GCP). Is there a way to speed this up?</p>
| <p>When using GCE Ingress, you need to wait for the load balancer provisioning time before the <a href="https://cloud.google.com/load-balancing/docs/backend-service" rel="nofollow noreferrer">backend service</a> is deemed as healthy.</p>
<p>Consider that when you use this ingress class, you're relying on the GCE infrastructure that automatically has to provision an <a href="https://cloud.google.com/load-balancing/docs/https/" rel="nofollow noreferrer">HTTP(S) load balancer</a> and all of its components before sending requests into the cluster.</p>
<p>When you set up a deployment without <code>readinessProbe</code>, the default values are going to be applied to the load balancer health check:</p>
<pre><code>Backend Health Check
Interval: 60 seconds
Timeout: 60 seconds
Unhealthy threshold: 10 consecutive failures
Healthy threshold: 1 success
</code></pre>
<p>However, using the <code>readinessProbe</code> <a href="https://github.com/kubernetes/ingress-gce/blob/69ae0fe59e6ab69d8865339fa096e4cb142309c4/pkg/backends/syncer.go#L224" rel="nofollow noreferrer">will add the <code>periodSeconds</code> value to the default health check configuration</a>. So, in your case, you had <code>10</code> seconds + <code>60</code> by default = <code>70</code>.</p>
<pre><code>Backend Health check
Interval: 70 seconds
Timeout: 1 second
Unhealthy threshold: 10 consecutive failures
Healthy threshold: 1 success
</code></pre>
<p>Note that GKE will only use the <code>readinessProbe</code> to set the health check in the load balancer; the liveness probe is never used.</p>
<p>This means that the lowest value will always be that of the default load balancer health check, <code>60</code>. Since these values are automatically set when the load balancer is invoked from GKE, there is no way to change them.</p>
<p>Wrapping up, you have to wait for the load balancer provisioning period (around ~1-3 minutes) plus the <code>periodSeconds</code> value set in your <code>readinessProbe</code>.</p>
|
<p>Does <code>kubectl drain</code> first make sure that pods with <code>replicas=1</code> are healthy on some other node?<br>
Assume the pod is controlled by a deployment, and the pods can indeed be moved to other nodes.
Currently, as I see it, drain only evicts (deletes) the pods from the node, without scheduling replacements first.</p>
<p>In addition to <a href="https://stackoverflow.com/users/8803619/suresh-vishnoi">Suresh Vishnoi</a>'s answer:</p>
<p>If a <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="noreferrer">PodDisruptionBudget</a> is not specified and you have a deployment with one replica, the pod will be terminated and <strong>then</strong> a new pod will be scheduled on a new node.</p>
<p>To make sure your application will be available during the node draining process, you have to specify a PodDisruptionBudget and create more replicas. If you have 1 pod with <code>minAvailable: 30%</code>, it will refuse to drain with the following error: </p>
<pre><code>error when evicting pod "pod01" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
</code></pre>
<p>Briefly, this is how the draining process works:</p>
<p>As explained in documentation <code>kubectl drain</code> command "safely evicts all of your pods from a node before you perform maintenance on the node and allows the pod’s containers to gracefully terminate and will respect the <code>PodDisruptionBudgets</code> you have specified” </p>
<p>Drain does two things:</p>
<ol>
<li><p>Cordons the node - the node is marked as unschedulable, so new pods cannot be scheduled on it. This makes sense: if we know the node will be under maintenance, there is no point in scheduling a pod there only to reschedule it on another node because of that maintenance. From the Kubernetes perspective, it adds a taint to the node: <code>node.kubernetes.io/unschedulable:NoSchedule</code></p></li>
<li><p>Evicts/deletes the pods - after the node is marked as unschedulable, it tries to evict the pods that are running on that node. It uses the <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#the-eviction-api" rel="noreferrer">Eviction API</a>, which takes <code>PodDisruptionBudgets</code> into account (if it's not supported, it will delete pods). It calls the DELETE method against the Kubernetes API but honours <code>GracePeriodSeconds</code>, so it lets a pod finish its processes. </p></li>
</ol>
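<p>For illustration, a <code>PodDisruptionBudget</code> matching the <code>minAvailable: 30%</code> example above could look like this (the names and labels are assumptions):</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 30%
  selector:
    matchLabels:
      app: my-app
</code></pre>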
|
<p>We have a .config file for accessing our Azure Kubernetes cluster, but I want to manage my personal Kubernetes environment on my local machine as well. So I decided to add my personal cluster info to the ".config" file. My cluster name is "grpccluster". I have the two config file contents below: the first one is the original, the second one is my modification. But it is not working, and I don't know whether it will affect my company's Kubernetes structure.
Original:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://xxxxxxxx.io:443
  name: xxxmaincluster
contexts:
- context:
    cluster: xxxmaincluster
    user: clusteruser1
  name: xxxmaincluster
current-context: xxxmaincluster
kind: Config
preferences: {}
users:
- name: clusteruser1
  user:
    client-certificate-data: xxxxxxxx
    client-key-data: xxxxxxxxxxx
    token: xxxxxx
</code></pre>
<p>Modified : </p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx
    server: https://xxxxxxxx.io:443
  name: xxxmaincluster
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: 127.0.0.1
  name: grpcluster
contexts:
- context:
    cluster: grpcluster
    namespace: rancher
    user: user-qkvpz
  name: grpcluster
- context:
    cluster: xxxmaincluster
    user: clusteruser1
  name: xxxmaincluster
current-context: xxxmaincluster
kind: Config
preferences: {}
users:
- name: clusteruser1
  user:
    client-certificate-data: xxxxxxxx
    client-key-data: xxxxxxxxxxx
    token: xxxxxx
</code></pre>
<p>But I want to know whether this is correct or not. And how can I produce a crt file for Kubernetes? Can you help me with these questions? </p>
| <p>Here's relevant documentation: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">Configure Access to Multiple Clusters</a></p>
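<p>As a concrete sketch (the server URL, file paths, and names below are placeholders you must adjust), you can let <code>kubectl</code> append the entries instead of hand-editing the file:</p>
<pre><code># Add the local cluster entry (note: the server field needs a scheme and port).
kubectl config set-cluster grpccluster \
  --server=https://127.0.0.1:6443 \
  --certificate-authority=/path/to/ca.crt \
  --embed-certs=true

# Add the user credentials.
kubectl config set-credentials user-qkvpz \
  --client-certificate=/path/to/client.crt \
  --client-key=/path/to/client.key \
  --embed-certs=true

# Add the context and switch to it when needed.
kubectl config set-context grpccluster --cluster=grpccluster \
  --namespace=rancher --user=user-qkvpz
kubectl config use-context grpccluster
</code></pre>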
|
<p>We have some developer logs in Kubernetes pods. What is the best method to expose these logs so the developers can see them?</p>
<p>Any specific tools that we can use?</p>
<p>I have the option of Graylog, but I'm not sure whether it can be customized to ingest the developer logs.</p>
| <p>The most basic method would be to simply use <code>kubectl logs</code> command:</p>
<blockquote>
<p>Print the logs for a container in a pod or specified resource. If the
pod has only one container, the container name is optional.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">Here</a> you can find more details regarding the command and it's flags alongside some useful examples.</p>
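<p>A few common invocations (the pod and container names below are placeholders):</p>
<pre><code># Logs of a single-container pod.
kubectl logs my-pod

# Logs of a specific container, following new output.
kubectl logs my-pod -c my-container -f

# Last 100 lines from the previous (crashed) instance of the container.
kubectl logs my-pod --tail=100 --previous
</code></pre>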
<p>Also, you may want to use:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/" rel="nofollow noreferrer">Logging Using Elasticsearch and Kibana</a></p></li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/" rel="nofollow noreferrer">Logging Using Stackdriver</a></p></li>
</ul>
<p>Both should do the trick in your case.</p>
<p>Please let me know if that is what you had in mind and if my answer was helpful.</p>
|
<p>I want to define a livenessProbe with an httpHeader whose value is secret.</p>
<p>This syntax is invalid:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: X-Custom-Header
      valueFrom:
        secretKeyRef:
          name: my-secret-key
          value: secret
</code></pre>
<p>If I specify <code>my-secret-key</code> with value <code>secret</code> as an environment variable named <code>MY_SECRET_KEY</code>, the following could work:</p>
<pre><code>livenessProbe:
  exec:
    command:
    - curl
    - --fail
    - -H
    - "X-Custom-Header: $MY_SECRET_KEY"
    - 'http://localhost:8080/healthz'
</code></pre>
<p>Unfortunately, it doesn't, due to the way the quotation marks are evaluated. If I type the command <code>curl --fail -H "X-Custom-Header: $MY_SECRET_KEY" http://localhost:8080/healthz</code> directly on the container, it works.</p>
<p>I've also tried many combinations of single quotes and escaping the double quotes.</p>
<p>Does anyone know of a workaround?</p>
<p>Here are some examples with curl and wget. Note that Kubernetes itself expands <code>$(VAR)</code> references in <code>command</code>/<code>args</code> from the container's environment, while a plain <code>$VAR</code> is left for the shell started by <code>/bin/sh -c</code> to expand:</p>
<pre><code>exec:
  command:
  - /bin/sh
  - -c
  - "curl -H 'Authorization: Bearer $(AUTH_TOKEN)' 'http://example.com'"
</code></pre>
<hr>
<pre><code>exec:
  command:
  - /bin/sh
  - -c
  - "wget --spider --header \"Authorization: Bearer $AUTH_TOKEN\" http://some.api.com/spaces/${SPACE_ID}/entries"
</code></pre>
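<p>For completeness, the <code>$AUTH_TOKEN</code> used above has to exist as a container environment variable. A sketch wiring it from a secret (the secret name and key here are assumptions):</p>
<pre><code>env:
  - name: AUTH_TOKEN
    valueFrom:
      secretKeyRef:
        name: my-secret-key
        key: token
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - "curl --fail -H \"X-Custom-Header: $AUTH_TOKEN\" http://localhost:8080/healthz"
</code></pre>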
|
<p>I have the following definitions in my custom namespace:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "delete", "patch", "create"]
- apiGroups: ["extensions", "apps"]
  resources: ["deployments", "deployments/scale"]
  verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test
subjects:
- kind: User
  name: test-sa
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: test
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Running <code>describe role test</code></p>
<pre><code>Name: test
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"test","namespace":"test-namesapce...
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods/exec [] [] [get list delete patch create]
pods [] [] [get list delete patch create]
deployments.apps/scale [] [] [get list delete patch create]
deployments.apps [] [] [get list delete patch create]
deployments.extensions/scale [] [] [get list delete patch create]
deployments.extensions [] [] [get list delete patch create]
</code></pre>
<p>When I'm trying to run the command <code>kubectl get pods</code> in a pod that is using this service account, I'm getting the following error:</p>
<blockquote>
<p>Error from server (Forbidden): pods is forbidden: User
"system:serviceaccount:test-namespace:test-sa" cannot list resource
"pods" in API group "" in the namespace "test-namespace"</p>
</blockquote>
<p>Where is that misconfigured?</p>
<p>The problem was with the <code>subjects</code> of the <code>RoleBinding</code>: the service account was referenced as <code>kind: User</code>, so the binding never matched <code>system:serviceaccount:test-namespace:test-sa</code>. The correct definition would be:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test-sa
roleRef:
  kind: Role
  name: test
  apiGroup: rbac.authorization.k8s.io
</code></pre>
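<p>To verify the permissions without exec-ing into a pod, <code>kubectl auth can-i</code> can impersonate the service account:</p>
<pre><code># Should answer "yes" once the RoleBinding references kind: ServiceAccount.
kubectl auth can-i list pods \
  --as=system:serviceaccount:test-namespace:test-sa \
  -n test-namespace
</code></pre>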
|
<p>When I deploy without the ingress-gateway, I can get access via port-forwarding directly to the LoadBalancer of the application in the browser. But through the ingress-gateway it is not working.
Sidecar injection is enabled!</p>
<p>Istio v1.4.0<br>
Minukube v1.5.2<br>
Kubernetes v1.16.0 </p>
<p>Istio instalation:</p>
<pre><code>istioctl manifest apply \
--set values.global.mtls.enabled=false \
--set values.tracing.enabled=true \
--set values.kiali.enabled=true \
--set values.grafana.enabled=true \
--namespace istio-system
</code></pre>
<p>Then deploy React with NGINX.</p>
<pre><code># Frontend service
apiVersion: v1
kind: Service
metadata:
  name: front-web
  namespace: demo
spec:
  type: NodePort
  selector:
    app: front-web
  ports:
  - name: http
    port: 80
---
# Frontend app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-web
  namespace: demo
  labels:
    app: front-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: front-web
  template:
    metadata:
      labels:
        app: front-web
    spec:
      containers:
      - name: front-web
        image: sergeygreat/front-web:v1.0.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
</code></pre>
<p>Gateway</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gw
  namespace: demo
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs
  namespace: demo
spec:
  hosts:
  - "*"
  gateways:
  - demo-gw
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        port:
          number: 80
        host: front-web.default.svc.cluster.local # <- tried just `front-web`, no luck
</code></pre>
<pre><code>>> kubectl get svc -n istio-system
istio-ingressgateway LoadBalancer 10.104.158.110 <pending> 80:31332/TCP,...
</code></pre>
<p>Everything is running, but there is no access via <code>http://minikube_ip:31332</code>.</p>
<p>For port 80 in browser: <a href="http://minikube_ip" rel="nofollow noreferrer">http://minikube_ip</a> - <code>404 Not Found openresty/1.15.8.2</code><br>
For port 31332 in browser: <a href="http://minikube_ip:31332" rel="nofollow noreferrer">http://minikube_ip:31332</a> - 503 Error </p>
<blockquote>
<blockquote>
<p>minikube ssh</p>
</blockquote>
</blockquote>
<pre><code>$ curl -i http://localhost:31332
HTTP/1.1 503 Service Unavailable
date: Mon, 16 Dec 2019 16:04:32 GMT
server: istio-envoy
content-length: 0
$ curl -i http://192.168.99.101:31332
HTTP/1.1 503 Service Unavailable
date: Mon, 16 Dec 2019 16:04:45 GMT
server: istio-envoy
content-length: 0
</code></pre>
<p>Can please someone help?</p>
| <p>Try to recreate minikube:</p>
<pre><code>>>minikube stop
>>minikube delete
>>minikube start --memory=16384 --cpus=4 --vm-driver=virtualbox --kubernetes-version=v1.16.0
</code></pre>
<p>If that does not help, try binding to another port:</p>
<pre><code>- Deployment set to 80
- Service type should be NodePort and bind it to port 8080 targetPort:80
- VirtualService host "*" port 8080
</code></pre>
<p>It should work! </p>
<p>If that does not work, try removing this part from the VirtualService:</p>
<pre><code>- match:
- uri:
exact: /
</code></pre>
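<p>For debugging, assuming <code>istioctl</code> is on your path, you can also check what configuration the ingress gateway actually received and whether the service has endpoints (the <code>istio=ingressgateway</code> label below is the standard one; adjust if yours differs):</p>
<pre><code># Routes the ingress gateway received from Pilot.
istioctl proxy-config routes \
  $(kubectl get pod -n istio-system -l istio=ingressgateway \
    -o jsonpath='{.items[0].metadata.name}') -n istio-system

# The VirtualService destination must resolve to healthy endpoints.
kubectl get endpoints front-web -n demo
</code></pre>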
|
<p>I have my own Spring Cloud Data Flow processor with Python inside; I used this sample as a guide: <a href="https://dataflow.spring.io/docs/recipes/polyglot/processor/" rel="nofollow noreferrer">https://dataflow.spring.io/docs/recipes/polyglot/processor/</a>.
Then I want to scale out and create three of these processors, so using <code>spring.cloud.deployer.myApp.count=3</code> I create 3 pods with Python inside.
I slightly modified the code from the sample: when I create a Kafka consumer, I also pass a group id, so messages should be load balanced.</p>
<pre><code>consumer = KafkaConsumer(get_input_channel(), group_id=get_consumer_group(), bootstrap_servers=[get_kafka_binder_brokers()])
</code></pre>
<p>The issue is that SCDF creates a Kafka topic with only 1 partition, so messages arrive at one pod only.
So I am wondering:</p>
<ul>
<li>Should I somehow configure SCDF to create a Kafka topic with 3 partitions?</li>
<li>Or should I not rely on SCDF and create topics on my own in Python? I suppose this will be redundant, since SCDF also creates this topic.</li>
<li>What component in SCDF is actually responsible for Kafka topics creation? And how can I influence it regarding number of partitions?</li>
<li>If I stop this stream and launch again with 4 processor steps, should the topic be extended with the 4th partition? Because currently no new partitions get created.</li>
</ul>
<p>I was able to get around this by introducing the following code into the Python container startup script, extending the code provided in <a href="https://dataflow.spring.io/docs/recipes/polyglot/processor/" rel="nofollow noreferrer">https://dataflow.spring.io/docs/recipes/polyglot/processor/</a>. It uses the arguments passed by the SCDF server to get the broker URL, topic name, and number of instances:</p>
<pre><code>import logging
import sys

from kafka.admin import KafkaAdminClient, NewTopic, NewPartitions
from kafka.errors import TopicAlreadyExistsError, InvalidPartitionsError

admin_client = KafkaAdminClient(bootstrap_servers=[get_kafka_binder_brokers()], client_id=sys.argv[0])
partition_count = int(get_cmd_arg("spring.cloud.stream.instanceCount"))

# create the Kafka topic if it does not exist
new_topic = NewTopic(name=get_input_channel(), num_partitions=partition_count, replication_factor=1)
try:
    admin_client.create_topics(new_topics=[new_topic])
except TopicAlreadyExistsError:
    logging.info(f"Topic {get_input_channel()} was already created")

# add Kafka partitions to the existing topic
new_partitions = NewPartitions(total_count=partition_count)
try:
    admin_client.create_partitions(topic_partitions={get_input_channel(): new_partitions})
except InvalidPartitionsError:
    logging.info(f"No need to increase Kafka partitions for topic {get_input_channel()}")
</code></pre>
|
<p>I was playing with Kubernetes in Minikube, and I was able to deploy a sample Spring Boot application into Kubernetes. </p>
<p>I am exploring the Kubernetes ConfigMap. I could successfully run a Spring Boot application with a Spring Cloud starter, picking up the property keys from the ConfigMap. Up to here I am successful.</p>
<p>The issue I am currently facing is the ConfigMap reload. </p>
<p>Here is my config map :</p>
<p><strong>ConfigMap.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: minikube-sample
  namespace: default
data:
  app.data.name: name
  application.yml: |-
    app:
      data:
        test: test
</code></pre>
<p><strong>bootstrap.yaml</strong></p>
<pre><code>management:
  endpoint:
    health:
      enabled: true
    info:
      enabled: true
    restart:
      enabled: true
spring:
  application:
    name: minikube-sample
  cloud:
    kubernetes:
      config:
        enabled: true
        name: ${spring.application.name}
        namespace: default
      reload:
        enabled: true
</code></pre>
<p><strong>HomeController:</strong></p>
<pre><code>package com.minikube.sample.rest.controller;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.minikube.sample.properties.PropertiesConfig;
import lombok.Getter;
import lombok.Setter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Lookup;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @author Gorantla, Eresh
 * @created 06-12-2018
 */
@RestController
@RequestMapping("/home")
public class HomeResource {

    @Autowired
    PropertiesConfig config;

    @GetMapping("/data")
    public ResponseEntity<ResponseData> getData() {
        ResponseData responseData = new ResponseData();
        responseData.setId(1);
        responseData.setName(config.getName());
        responseData.setPlace("Hyderabad");
        responseData.setValue(config.getTest());
        return new ResponseEntity<>(responseData, HttpStatus.OK);
    }

    @Getter
    @Setter
    public class ResponseData {
        private String name;
        private Integer id;
        private String place;
        private String value;
    }
}
</code></pre>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: minikube-sample
  namespace: default
spec:
  selector:
    matchLabels:
      app: minikube-sample
  replicas: 1
  template:
    metadata:
      labels:
        app: minikube-sample
    spec:
      containers:
      - name: minikube-sample
        image: minikube-sample:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: env.namespace
          value: default
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: minikube-sample
</code></pre>
<p>I used @ConfigurationProperties to reload properties.</p>
<p><strong>Dependencies</strong></p>
<pre><code> <dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes</artifactId>
<version>1.1.0.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-config</artifactId>
<version>1.1.0.RELEASE</version>
</dependency>
</code></pre>
<p><strong>What I did:</strong>
I went through the Spring Cloud documentation: <strong>"The view role on the service account is required in order to listen for config map changes."</strong>
Then I created the cluster view role through the command below:</p>
<pre><code>C:\Users\eresh.gorantla\apps\minikube-sample\src\main\fabric8 (master -> origin)
λ kubectl create clusterrolebinding minikube-sample --clusterrole=view --serviceaccount=default:minikube --namespace=default
clusterrolebinding.rbac.authorization.k8s.io/minikube-sample created
</code></pre>
<p>But when I update the ConfigMap in Kubernetes, the properties are not reloaded on the fly.
I suspect something is wrong with the clusterrolebinding.
Please provide your thoughts. Any help is appreciated. </p>
| <p>The Deployment doesn't have <code>serviceAccountName</code> configured so it uses the <code>default</code> service account. The command in the question, however - <code>kubectl create clusterrolebinding ... --serviceaccount=default:minikube...</code> - is for an account named <code>minikube</code> in the <code>default</code> namespace.</p>
<p>Moreover, creating <code>clusterrolebinding</code> may be "too much" when <code>rolebinding</code> for the namespace would work.</p>
<p>With the Deployment being for the <code>default</code> namespace (<code>metadata.namespace: default</code>), this should create a proper <code>rolebinding</code> to grant read-only permission to the <code>default</code> account:</p>
<pre><code>kubectl create rolebinding default-sa-view \
--clusterrole=view \
--serviceaccount=default:default \
--namespace=default
</code></pre>
<p>For reference, see <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Using RBAC Authorization</a>. </p>
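<p>You can confirm that the binding grants what the reload mechanism needs by impersonating the default service account:</p>
<pre><code># Both should answer "yes" after the rolebinding is created.
kubectl auth can-i get configmaps --as=system:serviceaccount:default:default -n default
kubectl auth can-i watch configmaps --as=system:serviceaccount:default:default -n default
</code></pre>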
|
<p>I am new to Kubernetes. I am trying to launch a Kubernetes cluster on my local Mac machine. I am using the following command to launch it:</p>
<pre><code>minikube start --vm-driver=hyperkit
</code></pre>
<p>I am getting following error:</p>
<pre><code>/usr/local/bin/kubectl is version 1.14.7, and is incompatible with Kubernetes 1.17.0.
You will need to update /usr/local/bin/kubectl or use 'minikube kubectl' to connect with this cluster
</code></pre>
<p>Now, while executing the following command: </p>
<pre><code>minikube kubectl
</code></pre>
<p>It is not doing anything, just showing the basic commands along with their usages.</p>
<p>And when trying to upgrade kubectl, it says it is already up to date.</p>
<p>I have not found any solution for this. Any idea regarding how to fix this ?</p>
| <p>The best solution for you is to <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-macos" rel="nofollow noreferrer">update kubectl manually</a>. To perform this you need to download the binary: </p>
<p><a href="https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/darwin/amd64/kubectl" rel="nofollow noreferrer">https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/darwin/amd64/kubectl</a></p>
<p>Change permissions of kubectl to be executable:</p>
<pre><code>$ chmod +x ./kubectl
</code></pre>
<p>And move it to <code>/usr/local/bin/</code>, overwriting the old one: </p>
<pre><code>$ sudo mv ./kubectl $(which kubectl)
</code></pre>
<p>To check the effects, run:</p>
<pre><code>$ kubectl version
</code></pre>
|
<ol>
<li>I created a self-signed TLS certificate and private key via Terraform. The files are called server.key and server.crt.</li>
<li><p>I create a Kubernetes TLS secret with this certificate and private key using this command:<br>
<em>kubectl create secret tls dpaas-secret -n dpaas-prod --key server.key --cert server.crt</em></p></li>
<li><p>This works fine: nginx ingress SSL termination works, and the following kubectl command: <em>kubectl get secret test-secret -o yaml -n dpaas-prod</em><br>
returns correct output with the tls.crt PEM data and tls.key PEM data (see the correct output in step 6 below).</p></li>
<li><p>Since we use Terraform, I tried creating the same secret via the Terraform Kubernetes provider with the same server.key and server.crt files. However, this time the command:<br>
<em>kubectl get secret test-secret -o yaml -n dpaas-prod</em><br>
returned weird output for the crt PEM and key PEM (see the output in step 5 below), and the SSL termination on my nginx ingress does not work.<br>
This is how I create the Kubernetes secret via Terraform:</p></li>
</ol>
<pre><code>resource "kubernetes_secret" "this" {
  metadata {
    name      = "dpaas-secret"
    namespace = "dpaas-prod"
  }

  data = {
    "tls.crt" = "${path.module}/certs/server.crt"
    "tls.key" = "${path.module}/certs/server.key"
  }

  type = "kubernetes.io/tls"
}
</code></pre>
<ol start="5">
<li>Bad output following step 4; notice the short values of tls.crt and tls.key (secret created via Terraform): </li>
</ol>
<pre><code>apiVersion: v1
data:
  tls.crt: bXktdGVzdHMvY2VydHMvc2VydmVyLmNydA==
  tls.key: bXktdGVzdHMvY2VydHMvc2VydmVyLmtleQ==
kind: Secret
metadata:
  creationTimestamp: "2019-12-17T16:18:22Z"
  name: dpaas-secret
  namespace: dpaas-prod
  resourceVersion: "9879"
  selfLink: /api/v1/namespaces/dpaas-prod/secrets/dpaas-secret
  uid: d84db7f0-20e8-11ea-aa92-1269ad9fd693
type: kubernetes.io/tls
</code></pre>
<ol start="6">
<li>Correct output following step 2 (secret created via the kubectl command): </li>
</ol>
<pre><code>apiVersion: v1
data:
tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR4RENDQXF5Z0F3SUJBZ0lRZGM0cmVoOHRQNFZPN3NkTWpzc1lTVEFOQmdrcWhraUc5dzBCQVFzRkFEQkUKTVJBd0RnWURWUVFLRXdkbGVHRnRjR3hsTVJRd0VnWURWUVFMRXd0bGVHRnRjR3hsS
UdSd2N6RWFNQmdHQTFVRQpBd3dSS2k1a2NITXVaWGhoYlhCc1pTNWpiMjB3SGhjTk1Ua3hNakUzTVRZeE5UVXpXaGNOTWpreE1qRTNNRE14Ck5UVXpXakJaTVFzd0NRWURWUVFHRXdKVlV6RVVNQklHQTFVRUNoTUxaWGhoYlhCc1pTQmtjSE14R0RBV0JnTlYKQkFzV
EQyVjRZVzF3YkdVdVpIQnpMbU52YlRFYU1CZ0dBMVVFQXd3UktpNWtjSE11WlhoaGJYQnNaUzVqYjIwdwpnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDaDU0enBwVTBod1hscm1qVXpOeVl0Ckp5WG9NSFU4WFpXTzhoVG9KZ09YUDU5N
nZFVmJQRXJlQ1VxM1BsZXB5SkRrcHNWbHo1WWc1TWp4NkVGTnlxNVQKOHVLUlVZUXNPVzNhd1VCbzM2Y3RLZEVvci8wa0JLNXJvYTYyR2ZFcHJmNVFwTlhEWnY3T1Y1YU9VVjlaN2FFTwpNNEl0ejJvNWFVYm5mdHVDZVdqKzhlNCtBS1phVTlNOTFCbFROMzFSUUFSR
3RnUzE4MFRzcVlveGV3YXBoS3FRCmUvTm5TeWF6ejUyTU5jeml6WTRpWXlRUU9EbUdEOEtWRGRJbWxJYXFoYXhiVGVTMldWZFJzdmpTa2xVZ0pGMUUKb2VWaWo1KytBd0FBczYwZkI2M1A4eFB1NEJ3cmdGTmhTV2F2ZXdJV1RMUXJPV1I2V2wvWTY1Q3lnNjlCU0xse
gpBZ01CQUFHamdad3dnWmt3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01CCkJnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUI4R0ExVWRJd1FZTUJhQUZPanRBdTJoNDN0WjhkS1YKaHUzc2xVS3VJYTlHTURrR0ExV
WRFUVF5TURDQ0VTb3VaSEJ6TG1WNFlXMXdiR1V1WTI5dGdoc3FMbVJ3Y3k1MQpjeTFsWVhOMExURXVaWGhoYlhCc1pTNWpiMjB3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUlxRVlubHdwQnEyCmNmSnhNUUl0alF4ZTlDK2FDTnZXS1VOZjlwajlhZ3V6YXNxTW9wU
URWTFc1dnZxU21FbHJrVXNBQzJPemZ3K2UKRkxKNUFvOFg3VFcxTHBqbk01Mm1FVjRZYUcvM05hVTg5dWhOb0FHd0ZPbU5TK3ZldU12N3RKQjhsUHpiQ1k3VApKaG9TL2lZVE9jUEZUN1pmNkVycjFtd1ZkWk1jbEZuNnFtVmxwNHZGZk1pNzRFWnRCRXhNaDV3aWU3Q
Wl4Z2tTCmZaVno4QUEzTWNpalNHWFB6YStyeUpJTnpYY0gvM1FRaVdLbzY5SUQrYUlSYTJXUUtxVlhVYmk0bmlZaStDUXcKeTJuaW5TSEVCSDUvOHNSWVZVS1ZjNXBPdVBPcFp0RmdqK1l6d1VsWGxUSytLRTR0R21Ed09teGxvMUNPdGdCUAorLzFXQWdBN1p0QT0KL
S0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBb2VlTTZhVk5JY0Y1YTVvMU16Y21MU2NsNkRCMVBGMlZqdklVNkNZRGx6K2ZlcnhGCld6eEszZ2xLdHo1WHFjaVE1S2JGWmMrV0lPVEk4ZWhCVGNxdVUvTGlrVkdFT
ERsdDJzRkFhTituTFNuUktLLzkKSkFTdWE2R3V0aG54S2EzK1VLVFZ3MmIremxlV2psRmZXZTJoRGpPQ0xjOXFPV2xHNTM3YmdubG8vdkh1UGdDbQpXbFBUUGRRWlV6ZDlVVUFFUnJZRXRmTkU3S21LTVhzR3FZU3FrSHZ6WjBzbXM4K2RqRFhNNHMyT0ltTWtFRGc1C
mhnL0NsUTNTSnBTR3FvV3NXMDNrdGxsWFViTDQwcEpWSUNSZFJLSGxZbytmdmdNQUFMT3RId2V0ei9NVDd1QWMKSzRCVFlVbG1yM3NDRmt5MEt6bGtlbHBmMk91UXNvT3ZRVWk1Y3dJREFRQUJBb0lCQUZaNjFqdmUvY293QytrNwozM3JSMUdSOTZyT1JYcTIxMXpNW
mY2MVkwTVl6Ujc1SlhrcVRjL0lSeUlVRW1kS292U3hGSUY5M2VGdHRtU0FOCnpRUCtaUXVXU3dzUUhhZDVyWUlSZzVRQkVzeis3eWZxaVM1NkNhaVlIamhLdHhScVNkTk5tSmpkSlBHV3UyYWQKZEc4V2pOYUhFTnZqVkh3Q0RjdU5hVGJTSHhFOTAwSjhGQTg0c3d2M
lZFUGhSbExXVjJudVpLTko5aGIrY2IzVQpsZ2JrTnVxMkFsd2Y3MkRaTVRXZ21DM3N1Z004eGYwbWFCRWV3UXdETVdBZis2dWV6MEJ5V0hLdThwNHZRREJvCjBqQVYzOGx6UHppTDU3UTZYbFdnYjIxWUh2QmJMSVVKcEFRMGdrcGthaEFNVmJlbHdiSDJVR25wOXcrb
zU3MnIKTmhWMFJXRUNnWUVBeEtFU3FRWEV5dmpwWU40MGxkbWNhTmdIdFd1d3RpUThMMWRSQitEYXBUbWtZbzFlWXJnWgpzNi9HbStRalFtNlU1cG04dHFjTm1nL21kTDJzdk9uV1Y1dnk5eThmZHo3OFBjOFlobXFJRE5XRE9tZG9wUVJsCmxsVUZ6S0NwRmVIVTJRW
URYYjBsNWNvZzYyUVFUaWZIdjFjUGplWlltc2I5elF0cDd6czJZMGtDZ1lFQTBzcFoKTWRRUUxiRkZkWDlYT05FU2xvQlp3Slg5UjFQZVA0T2F4eUt2a01jQXFFQ0Npa05ZU3FvOU55MkZQNVBmQlplQgpWbzYvekhHR0dqVkFPQUhBczA5My8zZUZxSFRleWVCSzhQR
kJWMHh5em9ZZThxYUhBR1JxVnpjS240Zy9LVjhWClpjVGMwTm5aQzB5b09NZkhYUTVnQm1kWnpBVXBFOHlqZzhucGV0c0NnWUVBd0ZxU1ZxYytEUkhUdk4ranNiUmcKUG5DWG1mTHZ2RDlXWVRtYUc0cnNXaFk1cWUrQ0ZqRGpjOVRSQmsvMzdsVWZkVGVRVlY2Mi82L
3VVdVg2eGhRNwppeGtVWnB2Q3ZIVHhiY1hheUNRUFUvN0xrYWIzeC9hMUtvdWlVTHdhclQxdmE1OW1TNTF1SlkzSEJuK3RNOGZXCnNHZ0szMVluOThJVEp6T3pQa1UrdjRFQ2dZQUpRQkFCKzhocCtPbVBhbk10Yng5ZHMydzg0MWdtRlN3ZnBXcloKYWxCQ0RqbWRLS
mVSOGJxaUxDNWJpWWZiYm1YUEhRTDBCWGV0UlI0WmNGVE5JR2FRZCtCUU9iS0gzZmtZNnRyZgpEL2RLR1hUQVUycHdRNWFSRWRjSTFNV0drcmdTM0xWWHJmZnl3bHlmL2xFempMRFhDSloyTVhyalZTYWtVOHFwCk1lY3BHUUtCZ0c0ZjVobDRtL1EvaTNJdGZJbGw4W
DFRazFUSXVDK0JkL1NGM0xVVW5JUytyODA4dzVuaTNCUnEKNXgvQjFnRUhZbTNjSTROajR5ZzEvcE1CejhPMk1PeFhVbVNVZVh6dit1MG5oOFQxUE96eDJHOTNZaVlOL0cvNQpjMlBMSFMvTTlmVjhkTEVXL0hBVFM3K0hsMDFGQlVlREhrODQrVXlha2V2ZFU2djdVZ
2ErCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
kind: Secret
metadata:
creationTimestamp: "2019-12-17T16:16:56Z"
name: dpaas-secret
namespace: dpaas-prod
resourceVersion: "9727"
selfLink: /api/v1/namespaces/dpaas-prod/secrets/dpaas-secret
uid: a5739134-20e8-11ea-a977-0a0ee9b505f9
type: kubernetes.io/tls
</code></pre>
<p>Question: why is the Kubernetes secret PEM data short (corrupt?) when the secret is added with Terraform?<br>
We use Terraform version 0.12.8 and Kubernetes 1.13.</p>
<p>To reproduce this, here are the file links:</p>
<ul>
<li>server.crt - <a href="https://drive.google.com/open?id=1vFakHWkx9JxyDAEzFE_5fJomEkmQQ7Wt" rel="nofollow noreferrer">https://drive.google.com/open?id=1vFakHWkx9JxyDAEzFE_5fJomEkmQQ7Wt</a> </li>
<li>server.key - <a href="https://drive.google.com/open?id=1wc5Xn-yHWDDY9mFQ2l42k2KqeCObb-L3" rel="nofollow noreferrer">https://drive.google.com/open?id=1wc5Xn-yHWDDY9mFQ2l42k2KqeCObb-L3</a></li>
</ul>
| <p>The problem is that you are encoding the <strong>paths</strong> of the certificate files into the secret and not the <strong>contents</strong> of the files.</p>
<p>You can see this is the case if you base64-decode the secret strings in your example:</p>
<pre><code>$ echo -n bXktdGVzdHMvY2VydHMvc2VydmVyLmNydA== | base64 -d
my-tests/certs/server.crt
$ echo -n bXktdGVzdHMvY2VydHMvc2VydmVyLmtleQ== | base64 -d
my-tests/certs/server.key
</code></pre>
<p>So, instead of this:</p>
<pre><code>data = {
"tls.crt" = "${path.module}/certs/server.crt"
"tls.key" = "${path.module}/certs/server.key"
}
</code></pre>
<p>do this:</p>
<pre><code>data = {
"tls.crt" = file("${path.module}/certs/server.crt")
"tls.key" = file("${path.module}/certs/server.key")
}
</code></pre>
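<p>Put together, a minimal sketch of the full resource (assuming the secret is managed with the Terraform Kubernetes provider's <code>kubernetes_secret</code> resource; the names follow the output above):</p>

```hcl
resource "kubernetes_secret" "dpaas_secret" {
  metadata {
    name      = "dpaas-secret"
    namespace = "dpaas-prod"
  }

  # file() reads the certificate contents; the provider handles
  # the base64 encoding of the secret data for you.
  data = {
    "tls.crt" = file("${path.module}/certs/server.crt")
    "tls.key" = file("${path.module}/certs/server.key")
  }

  type = "kubernetes.io/tls"
}
```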
|
<p>I am using the awesome Powerlevel9k theme for my Zsh.</p>
<p>I defined a custom kubecontext element to show my kubernetes cluster (context) and namespace (see code below).</p>
<p>While I conditionally set the foreground color through the <code>color</code> variable, I would like to set the background color instead, to better see when I am working on the production cluster.
Is that somehow possible with Powerlevel9k? All I could find is that I can set the background color of the prompt element statically with <code>POWERLEVEL9K_CUSTOM_KUBECONTEXT_BACKGROUND='075'</code>.</p>
<pre><code># Kubernetes Current Context/Namespace
custom_prompt_kubecontext() {
local kubectl_version="$(kubectl version --client 2>/dev/null)"
if [[ -n "$kubectl_version" ]]; then
# Get the current Kubernetes context
local cur_ctx=$(kubectl config view -o=jsonpath='{.current-context}')
cur_namespace="$(kubectl config view -o=jsonpath="{.contexts[?(@.name==\"${cur_ctx}\")].context.namespace}")"
# If the namespace comes back empty, set it to "default".
if [[ -z "${cur_namespace}" ]]; then
cur_namespace="default"
fi
local k8s_final_text="$cur_ctx/$cur_namespace"
local color='%F{black}'
[[ $cur_ctx == "prod" ]] && color='%F{196}'
echo -n "%{$color%}\U2388 $k8s_final_text%{%f%}" # \U2388 is Kubernetes Icon
#"$1_prompt_segment" "$0" "$2" "magenta" "black" "$k8s_final_text" "KUBERNETES_ICON"
fi
}
POWERLEVEL9K_CUSTOM_KUBECONTEXT="custom_prompt_kubecontext"
# Powerlevel9k configuration
POWERLEVEL9K_LEFT_PROMPT_ELEMENTS=(context dir vcs custom_kubecontext)
</code></pre>
<p>Here is a screenshot of the current setup in action:</p>
<p><a href="https://i.stack.imgur.com/2cAFI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cAFI.png" alt="screenshot showing the different kubecontext prompt colors depending on the current kubecontext"></a></p>
| <p>Disclaimer: I'm the author of powerlevel10k.</p>
<p>No, this is not possible in powerlevel9k. It is, however, possible in powerlevel10k. Powerlevel10k is backward compatible with powerlevel9k configuration, meaning that you won't have to change any <code>POWERLEVEL9K</code> parameters if you decide to switch.</p>
<p>Powerlevel10k has several advantages over its predecessor:</p>
<ol>
<li>It's over 10 times faster.</li>
<li>It has a builtin configuration wizard. Type <code>p10k configure</code> to access it.</li>
<li>It has many new features. One of them is relevant to you. The builtin <code>kubecontext</code> supports <em>context classes</em> that allow you to style this prompt segment differently depending on which kubernetes context is currently active. Here's the excerpt from the configuration that <code>p10k configure</code> generates:</li>
</ol>
<pre><code># Kubernetes context classes for the purpose of using different colors, icons and expansions with
# different contexts.
#
# POWERLEVEL9K_KUBECONTEXT_CLASSES is an array with even number of elements. The first element
# in each pair defines a pattern against which the current kubernetes context gets matched.
# More specifically, it's P9K_CONTENT prior to the application of context expansion (see below)
# that gets matched. If you unset all POWERLEVEL9K_KUBECONTEXT_*CONTENT_EXPANSION parameters,
# you'll see this value in your prompt. The second element of each pair in
# POWERLEVEL9K_KUBECONTEXT_CLASSES defines the context class. Patterns are tried in order. The
# first match wins.
#
# For example, given these settings:
#
# typeset -g POWERLEVEL9K_KUBECONTEXT_CLASSES=(
# '*prod*' PROD
# '*test*' TEST
# '*' DEFAULT)
#
# If your current kubernetes context is "deathray-testing/default", its class is TEST
# because "deathray-testing/default" doesn't match the pattern '*prod*' but does match '*test*'.
#
# You can define different colors, icons and content expansions for different classes:
#
# typeset -g POWERLEVEL9K_KUBECONTEXT_TEST_FOREGROUND=0
# typeset -g POWERLEVEL9K_KUBECONTEXT_TEST_BACKGROUND=2
# typeset -g POWERLEVEL9K_KUBECONTEXT_TEST_VISUAL_IDENTIFIER_EXPANSION='⭐'
# typeset -g POWERLEVEL9K_KUBECONTEXT_TEST_CONTENT_EXPANSION='> ${P9K_CONTENT} <'
typeset -g POWERLEVEL9K_KUBECONTEXT_CLASSES=(
# '*prod*' PROD # These values are examples that are unlikely
# '*test*' TEST # to match your needs. Customize them as needed.
'*' DEFAULT)
typeset -g POWERLEVEL9K_KUBECONTEXT_DEFAULT_FOREGROUND=7
typeset -g POWERLEVEL9K_KUBECONTEXT_DEFAULT_BACKGROUND=5
typeset -g POWERLEVEL9K_KUBECONTEXT_DEFAULT_VISUAL_IDENTIFIER_EXPANSION='⎈'
</code></pre>
<p>You can also customize the text content of <code>kubecontext</code>. You'll find more info in <code>~/.p10k.zsh</code> once you run <code>p10k configure</code>. Oh, and <code>kubecontext</code> is about 1000 times faster in powerlevel10k.</p>
|
<p>If you run taint command on Kubernetes master:</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>it allows you to schedule pods on the master,
so it acts as both a worker node and a master.</p>
<p>I have tried running a 3-server cluster where all nodes have both roles. At first look, I didn't notice any issues.</p>
<p>Do you think this solution can nowadays be used to run a small cluster for a production service? If not, what are the real downsides? In which situations does this setup fail compared with the standard setup?</p>
<p>Assume that etcd is running on all three servers.</p>
<p>Thank you</p>
| <p>The standard reason to run separate master nodes and worker nodes is to keep a busy workload from interfering with the cluster proper.</p>
<p>Say you have three nodes as proposed. One winds up running a database; one runs a Web server; the third runs an asynchronous worker pod. Suddenly you get a bunch of traffic into your system, the Rails application is using 100% CPU, the Sidekiq worker is cranking away at 100% CPU, the MySQL database is trying to handle some complicated joins and is both high CPU and also is using all of the available disk bandwidth. You run <code>kubectl get pods</code>: which node is actually able to service these requests? If your application triggers the Linux out-of-memory killer, can you guarantee that it won't kill <code>etcd</code> or <code>kubelet</code>, both of which are critical to the cluster working?</p>
<p>If this is running in a cloud environment, you can often get away with smaller (cheaper) nodes to be the masters. (Kubernetes on its own doesn't need a huge amount of processing power, but it does need it to be reliably available.)</p>
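<p>As a side note, if you later decide to dedicate the masters again, the taint removed by the question's command can be restored (replace <code><node-name></code> with a real node):</p>

```shell
# Restore the NoSchedule taint so regular workloads are no longer
# scheduled on the master node:
kubectl taint nodes <node-name> node-role.kubernetes.io/master=:NoSchedule
```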
|
<p>When I run <code>kubectl get pods</code> it shows the pod existing and ready, but when I run <code>kubectl port-forward</code> I get a <code>pod not found</code> error. What's going on here?</p>
<pre><code>(base):~ zwang$ k get pods -n delivery
NAME READY STATUS RESTARTS AGE
screenshot-history-7f76489574-wntkf 1/1 Running 86 7h18m
(base):~ zwang$ k port-forward screenshot-history-7f76489574-wntkf 8000:8000
Error from server (NotFound): pods "screenshot-history-7f76489574-wntkf" not found
</code></pre>
| <p>You need to specify the namespace on the <code>port-forward</code> command too. <code>kubectl port-forward -n delivery screenshot-history-7f76489574-wntkf 8000:8000</code></p>
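<p>As a side note, if you work in that namespace a lot you can make it the default for the current context, so the <code>-n</code> flag is no longer needed:</p>

```shell
# Make "delivery" the default namespace for the current kubectl context:
kubectl config set-context --current --namespace=delivery

# Plain commands now target the delivery namespace:
kubectl port-forward screenshot-history-7f76489574-wntkf 8000:8000
```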
|
<p>I'm trying to restrict a <code>ServiceAccount</code>'s RBAC permissions to manage secrets in all namespaces:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gitlab-secrets-manager
rules:
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- gitlab-registry
verbs:
- get
- list
- create
- update
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab-service-account
namespace: gitlab
secrets:
- name: gitlab-service-account-token-lllll
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gitlab-service-account-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: gitlab-secrets-manager
subjects:
- kind: ServiceAccount
name: gitlab-service-account
namespace: gitlab
</code></pre>
<p>So far, I've created the ServiceAccount and the related CRB, however, actions are failing:</p>
<pre><code>secrets "gitlab-registry" is forbidden: User "system:serviceaccount:gitlab:default" cannot get resource "secrets" in API group "" in the namespace "shamil"
</code></pre>
<p>Anyone know what I'm missing?</p>
| <p>You can do the following steps:</p>
<ul>
<li>First, ensure that your ServiceAccount named <code>gitlab-service-account</code> exists in the <code>gitlab</code> namespace.</li>
<li>Then create the <code>ClusterRole</code> as you have defined it:</li>
</ul>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: gitlab-secrets-manager
rules:
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- gitlab-registry
verbs:
- get
- list
- create
- update
</code></pre>
<ul>
<li>Then create a <code>ClusterRoleBinding</code> to grant the permissions at the cluster level.</li>
</ul>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: gitlab-secrets-manager-clusterrolebinding
subjects:
- kind: ServiceAccount
name: gitlab-service-account
namespace: gitlab
roleRef:
kind: ClusterRole
name: gitlab-secrets-manager
apiGroup: rbac.authorization.k8s.io
</code></pre>
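<p>Two things worth checking afterwards. First, the error message in the question says the request came from <code>system:serviceaccount:gitlab:default</code>, which suggests the pod is still running as the <code>default</code> ServiceAccount; make sure the pod spec sets <code>serviceAccountName: gitlab-service-account</code>. Second, you can verify the grant without deploying anything by impersonating the ServiceAccount (the namespace <code>shamil</code> is taken from the error message):</p>

```shell
# Ask the API server whether the ServiceAccount may read the named secret:
kubectl auth can-i get secrets/gitlab-registry \
  --as=system:serviceaccount:gitlab:gitlab-service-account \
  -n shamil
# Should print "yes" once the ClusterRoleBinding is in place.
```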
|
<p>Sorry for my poor English.</p>
<p>I'm tuning a Java VM that runs in a k8s pod.</p>
<p>Normally the Java VM shouldn't use all the memory. Some memory should be used by the kernel for cache, buffers, resource management and so on.
I know containers share the host OS kernel.
The question is: does the resource limit of a pod include kernel memory such as the file cache?</p>
<p>Thanks.</p>
| <p>Given your question, I assume you have already searched for how to set up memory limits in K8s -> <code>https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run</code></p>
<p>That section explains that the limit you set is converted to a Docker flag (if you use Docker in k8s): <code>spec.containers[].resources.limits.memory</code> -> <code>--memory=""</code></p>
<p>If you check the official Docker documentation about kernel memory -> <code>https://docs.docker.com/config/containers/resource_constraints/#--kernel-memory-details</code>, you will see that kernel memory isn't mentioned in the K8s documentation. So I assume the value is left at the default for your particular type of Linux system.</p>
|
<p>I want to expose my MariaDB pod using an Nginx ingress TCP service, following the steps at <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a>. MariaDB is running in the default namespace, with the mariadb service type set to ClusterIP. I am running the Nginx ingress controller in the nginx-ingress namespace and have also defined the <code>tcp-services</code> ConfigMap for the <code>mariadb</code> service. But I am unable to connect to the MariaDB database from outside of the cluster.</p>
<p>From the Nginx controller log I can see it is reading the tcp-services ConfigMap.</p>
<p>Ingress configuration</p>
<pre><code>containers:
- args:
- /nginx-ingress-controller
- --default-backend-service=nginx-ingress/nginx-ingress-default-backend
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=nginx-ingress/nginx-ingress-controller
- --default-ssl-certificate=nginx-ingress/ingress-tls
- --tcp-services-configmap=nginx-ingress/tcp-services
- --udp-services-configmap=nginx-ingress/udp-services
</code></pre>
<p>ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: nginx-ingress
data:
3306: "default/mariadb:3306"
</code></pre>
<p>Ingress controller nginx config for TCP Service</p>
<pre><code> # TCP services
server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-default-mariadb-3306";
}
listen 3306;
proxy_timeout 600s;
proxy_pass upstream_balancer;
}
</code></pre>
<p>When I connect from an external server, I get this message:</p>
<pre><code>ERROR 2002 (HY000): Can't connect to MySQL server on
</code></pre>
<p>Any tips to troubleshoot this issue?</p>
<p>Thanks.</p>
<p>Update: I was missing the TCP port in my Service definition; after adding it I was able to access MySQL on that Service port. Thanks to <code>Emanuel Bennici</code> for pointing this out.</p>
<p>Here is my service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress-controller
spec:
externalTrafficPolicy: Cluster
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
- name: 3066-tcp
port: 3066
protocol: TCP
targetPort: 3066-tcp
selector:
app: nginx-ingress
component: controller
release: nginx-ingress
sessionAffinity: None
type: NodePort
</code></pre>
| <p>Please check whether you have opened the MySQL port in the Pod.
To open the port, create a Pod like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mysql
namespace: default
labels:
name: mysql
spec:
containers:
- name: mysql
image: docker.io/bitnami/mariadb:10.3.20-debian-9-r19
ports:
- containerPort: 3306
protocol: TCP
</code></pre>
<p>Then you have to create a <em>service</em> so you can <em>talk directly</em> to the MySQL Pod through the service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: svc-mysql
namespace: default
labels:
run: mysql
spec:
ports:
- port: 3306
targetPort: 3306
protocol: TCP
selector:
name: mysql
</code></pre>
<p>If the Nginx Ingress Controller is working correctly you can now add the following line to your <em>tcp-services-configmap</em>:</p>
<pre><code>3306: "default/svc-mysql:3306"
</code></pre>
<p><em>Please note</em> that you have to add the MySQL port to the <em>Nginx Ingress Service</em>, like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: nginx-ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: LoadBalancer
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
- name: proxie-tcp-mysql
port: 3306
targetPort: 3306
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>Now you can use the <em>external IP</em> of the <em>Nginx Ingress Controller</em> to connect to your MySQL Server.</p>
<hr>
<p>Please provide more information about your setup in future Questions :)</p>
|
<p>The circuit breaker doesn't trip on <code>httpConsecutiveErrors: 1</code> (for a 500 response). All requests pass through and return a 500 instead.
The circuit breaker should trip and return a 503 (Service Unavailable) instead.</p>
<p>I followed the steps in the <a href="https://istio.io/docs/tasks/traffic-management/circuit-breaking.html" rel="noreferrer">circuit breaker setup</a>.</p>
<p>Once httpbin is up you can simulate a 500 with it.
Request:</p>
<pre><code>kubectl exec -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 1 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/status/500
</code></pre>
<p>Running this will simulate 20 requests returning 500.</p>
<p>But with the circuit breaker applied, it should allow just the one request to return a 500; the remaining requests should be tripped and a 503 returned. This doesn't happen.
Issue raised on GitHub: <a href="https://github.com/istio/issues/issues/363" rel="noreferrer">GitHub issue</a></p>
| <p>Yes, currently the circuit breaker does not handle HTTP 500; so far it only works with HTTP 502/503/504. Work has started on bringing HTTP 500 into the scope of the circuit breaker. You can check this GitHub <a href="https://github.com/istio/api/issues/909" rel="nofollow noreferrer">issue</a> for more detail.</p>
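<p>For reference, the circuit breaker in the linked task is configured through <code>outlierDetection</code> in a <code>DestinationRule</code>; here is a minimal sketch along the lines of that task (the host and values follow the httpbin example):</p>

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1   # at the time of writing, counts 502/503/504 but not 500
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
```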
|
<p>I have a Kubernetes Job running a BackgroundService in ASP.NET Core.</p>
<p>After doing some processing the BackgroundService initiates a shutdown by calling StopApplication.</p>
<p>Is there a way to shutdown the application and set the Kubernetes job to failed status when there is an exception in a BackgroundService?</p>
| <blockquote>
<p>"Is there a way to shutdown the application and set the Kubernetes job
to failed status ... ?"</p>
</blockquote>
<p>Yes, by anticipating that your application will exit with non-zero exit code when encountering an exception. That's the way how Kubernetes Job's final <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#handling-pod-and-container-failures" rel="nofollow noreferrer">status</a> is determined. </p>
<p>In other words, if the container's process finishes with a non-zero code, it causes the Pod to terminate and eventually the Job status becomes <code>'Failed'</code>.</p>
<p>When an ASP.NET Core application is stopped programmatically with the <code>'StopApplication()'</code> method, it exits with a <code>'0'</code> exit code (example usage <a href="https://edi.wang/post/2019/3/7/restart-an-aspnet-core-application-programmatically" rel="nofollow noreferrer">here</a>).</p>
<p><strong>Possible solution:</strong></p>
<p>Your program needs to handle the exception and explicitly set a non-zero exit code (<code>[1,250]</code>) before performing the graceful shutdown with <code>StopApplication()</code>.</p>
<p>Pseudo code:</p>
<pre><code> catch (Exception ex)
{
_logger.LogError(ex, "Worker process caught exception");
Environment.ExitCode = 1;
}
finally
{
applicationLifetime.StopApplication();
}
</code></pre>
<p>based on example <a href="https://github.com/aspnet/Extensions/issues/2363#issuecomment-532719733" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have deployed MongoDB as a StatefulSet on K8S. It does not connect when I try to connect to the DB using the connection string URI (e.g. mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017/cool_db), but it connects and returns results when I use the endpoint IP addresses.</p>
<pre><code># kubectl get sts
NAME READY AGE
mongo 2/2 7h33m
#kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-0 2/2 Running 0 7h48m
mongo-1 2/2 Running 2 7h47m
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10h
mongo ClusterIP None <none> 27017/TCP 7h48m
</code></pre>
<p>I'm trying to test the connection using the connection string URI in Python as shown below, but it fails.</p>
<pre><code>>>> import pymongo
>>> client = pymongo.MongoClient("mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017/cool_db")
>>> db = client.cool_db
>>> print db.cool_collection.count()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/pymongo/collection.py", line 1800, in count
return self._count(cmd, collation, session)
File "/usr/lib64/python2.7/site-packages/pymongo/collection.py", line 1600, in _count
_cmd, self._read_preference_for(session), session)
File "/usr/lib64/python2.7/site-packages/pymongo/mongo_client.py", line 1454, in _retryable_read
read_pref, session, address=address)
File "/usr/lib64/python2.7/site-packages/pymongo/mongo_client.py", line 1253, in _select_server
server = topology.select_server(server_selector)
File "/usr/lib64/python2.7/site-packages/pymongo/topology.py", line 235, in select_server
address))
File "/usr/lib64/python2.7/site-packages/pymongo/topology.py", line 193, in select_servers
selector, server_timeout, address)
File "/usr/lib64/python2.7/site-packages/pymongo/topology.py", line 209, in _select_servers_loop
self._error_message(selector))
pymongo.errors.ServerSelectionTimeoutError: mongo-0.mongo:27017: [Errno -2] Name or service not known,mongo-1.mongo:27017: [Errno -2] Name or service not known
</code></pre>
<p>If we use the endpoint IP addresses, we get a response from the DB.</p>
<pre><code>>>> import pymongo
>>> client = pymongo.MongoClient("mongodb://10.244.1.8,10.244.2.9:27017/cool_db")
>>> db = client.cool_db
>>> print db.cool_collection.count()
0
>>>
</code></pre>
<p>I have tried different URIs, like client = pymongo.MongoClient("mongodb://mongo-0.mongo:27017/cool_db"), but they don't work. Could anyone please help me?</p>
| <p>From <a href="https://stackoverflow.com/a/58118505/10490683">my previous similar answer</a>:</p>
<p>From within the cluster you should reference the MongoDB Pods using their fully qualified DNS names. For StatefulSet Pods behind a headless Service the form is <code><pod-name>.<service-name>.<namespace>.svc.cluster.local</code>, e.g. <code>mongo-0.mongo.default.svc.cluster.local</code>.</p>
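<p>As an illustrative sketch (not from the original answer), the question's connection URI can be rebuilt from those fully qualified names; the namespace <code>default</code> is an assumption based on the question's <code>kubectl get svc</code> output:</p>

```python
def mongo_uri(pods, service, namespace, port=27017, db="cool_db"):
    """Build a MongoDB URI from StatefulSet pod DNS names.

    Each pod behind a headless Service is reachable at
    <pod>.<service>.<namespace>.svc.cluster.local.
    """
    hosts = ",".join(
        f"{pod}.{service}.{namespace}.svc.cluster.local:{port}" for pod in pods
    )
    return f"mongodb://{hosts}/{db}"

uri = mongo_uri(["mongo-0", "mongo-1"], "mongo", "default")
print(uri)
# mongodb://mongo-0.mongo.default.svc.cluster.local:27017,mongo-1.mongo.default.svc.cluster.local:27017/cool_db
```

<p>Passing the resulting URI to <code>pymongo.MongoClient</code> should then resolve from any namespace in the cluster.</p>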
|
<p><strong>ENVIRONMENT:</strong></p>
<pre><code>Kubernetes version: v1.16.3
OS: CentOS 7
Kernel: Linux k8s02-master01 3.10.0-1062.4.3.el7.x86_64 #1 SMP Wed Nov 13 23:58:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p><strong>WHAT HAPPENED:</strong></p>
<p>I have a Wordpress Deployment running a container built from a custom Apache/Wordpress image. The image exposes port 8080 instead of 80 <em>(Dockerfile below)</em>. The Pod is exposed to the world through Traefik reverse proxy. Everything works fine without any liveness or readiness checks. Pod gets ready and Wordpress is accessible from <a href="https://www.example.com/" rel="nofollow noreferrer">https://www.example.com/</a>.</p>
<p>I tried adding liveness and readiness probes and they both repeatedly fail with "connection refused". When I remove both probes and reapply the Deployment, it works again. It works until the probe hits the failure threshold, at which point the container goes into an endless restart loop and becomes inaccessible. </p>
<p><strong>POD EVENTS:</strong></p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned development/blog-wordpress-5dbcd9c7c7-kdgpc to gg-k8s02-worker02
Normal Killing 16m (x2 over 17m) kubelet, gg-k8s02-worker02 Container blog-wordpress failed liveness probe, will be restarted
Normal Created 16m (x3 over 18m) kubelet, gg-k8s02-worker02 Created container blog-wordpress
Normal Started 16m (x3 over 18m) kubelet, gg-k8s02-worker02 Started container blog-wordpress
Normal Pulled 13m (x5 over 18m) kubelet, gg-k8s02-worker02 Container image "wordpress-test:test12" already present on machine
Warning Unhealthy 8m17s (x35 over 18m) kubelet, gg-k8s02-worker02 Liveness probe failed: Get http://10.244.3.83/: dial tcp 10.244.3.83:80: connect: connection refused
Warning BackOff 3m27s (x27 over 11m) kubelet, gg-k8s02-worker02 Back-off restarting failed container
</code></pre>
<p><strong>POD LOGS:</strong></p>
<pre><code>WordPress not found in /var/www/html - copying now...
WARNING: /var/www/html is not empty! (copying anyhow)
Complete! WordPress has been successfully copied to /var/www/html
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.83. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.83. Set the 'ServerName' directive globally to suppress this message
[Wed Dec 11 06:39:07.502247 2019] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.3.11 configured -- resuming normal operations
[Wed Dec 11 06:39:07.502323 2019] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
10.244.3.1 - - [11/Dec/2019:06:39:18 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
10.244.3.1 - - [11/Dec/2019:06:39:33 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
10.244.3.1 - - [11/Dec/2019:06:39:48 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
10.244.3.1 - - [11/Dec/2019:06:40:03 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
10.244.3.1 - - [11/Dec/2019:06:40:18 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
</code></pre>
<p><strong>DOCKERFILE ("wordpress-test:test12"):</strong></p>
<pre><code>FROM wordpress:5.2.4-apache
RUN sed -i 's/Listen 80/Listen 8080/g' /etc/apache2/ports.conf;
RUN sed -i 's/:80/:8080/g' /etc/apache2/sites-enabled/000-default.conf;
# RUN sed -i 's/#ServerName www.example.com/ServerName localhost/g' /etc/apache2/sites-enabled/000-default.conf;
EXPOSE 8080
CMD ["apache2-foreground"]
</code></pre>
<p><strong>DEPLOYMENT:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blog-wordpress
namespace: development
labels:
app: blog
spec:
selector:
matchLabels:
app: blog
tier: wordpress
replicas: 4
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 2
template:
metadata:
labels:
app: blog
tier: wordpress
spec:
volumes:
- name: blog-wordpress
persistentVolumeClaim:
claimName: blog-wordpress
containers:
- name: blog-wordpress
# image: wordpress:5.2.4-apache
image: wordpress-test:test12
securityContext:
runAsUser: 65534
allowPrivilegeEscalation: false
capabilities:
add:
- "NET_ADMIN"
- "NET_BIND_SERVICE"
- "SYS_TIME"
resources:
requests:
cpu: "250m"
memory: "64Mi"
limits:
cpu: "500m"
memory: "128Mi"
ports:
- name: liveness-port
containerPort: 8080
readinessProbe:
initialDelaySeconds: 15
httpGet:
path: /index.php
port: 8080
timeoutSeconds: 15
periodSeconds: 15
failureThreshold: 5
livenessProbe:
initialDelaySeconds: 10
httpGet:
path: /index.php
port: 8080
timeoutSeconds: 10
periodSeconds: 15
failureThreshold: 5
env:
# Database
- name: WORDPRESS_DB_HOST
value: blog-mysql
- name: WORDPRESS_DB_NAME
value: wordpress
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: blog-mysql
key: username
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: blog-mysql
key: password
- name: WORDPRESS_TABLE_PREFIX
value: wp_
- name: WORDPRESS_AUTH_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: auth-key
- name: WORDPRESS_SECURE_AUTH_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: secure-auth-key
- name: WORDPRESS_LOGGED_IN_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: logged-in-key
- name: WORDPRESS_NONCE_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: nonce-key
- name: WORDPRESS_AUTH_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: auth-salt
- name: WORDPRESS_SECURE_AUTH_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: secure-auth-salt
- name: WORDPRESS_LOGGED_IN_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: logged-in-salt
- name: WORDPRESS_NONCE_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: nonce-salt
- name: WORDPRESS_CONFIG_EXTRA
value: |
define('WPLANG', 'fr_FR');
define('WP_CACHE', false);
define('WP_MEMORY_LIMIT', '64M');
volumeMounts:
- name: blog-wordpress
mountPath: "/var/www/html/wp-content"
</code></pre>
<p><strong>DEPLOYMENT SERVICE:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: blog-wordpress
namespace: development
labels:
app: blog
spec:
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
app: blog
tier: wordpress
type: ClusterIP
</code></pre>
<p><strong>TRAEFIK INGRESSROUTE:</strong></p>
<pre><code>##
# HTTP
##
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: blog
namespace: development
spec:
entryPoints:
- http
routes:
- match: Host(`example.com`)
kind: Rule
services:
- name: blog-wordpress
port: 80
middlewares:
- name: redirect-to-https
namespace: kube-system
---
##
# HTTPS
##
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: blog-https
namespace: development
spec:
entryPoints:
- https
routes:
- match: Host(`example.com`) && PathPrefix(`/`)
kind: Rule
services:
- name: blog-wordpress
port: 80
tls:
certResolver: letsencrypt
</code></pre>
<p>Thank you!</p>
| <p>For anyone interested I've managed to solve this issue.</p>
<p>I was getting a 301 redirect response from WordPress because WordPress was forcing my domain name, example.com. I solved this by disabling WordPress's canonical redirection feature for the specific request <strong><a href="http://POD_IP:8080/index.php" rel="nofollow noreferrer">http://POD_IP:8080/index.php</a></strong>.</p>
<p><strong>Here's how:</strong></p>
<p>Added the Pod IP address as an environment variable:</p>
<pre><code>- name: K8S_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
</code></pre>
<p>Created a Wordpress plugin with a custom <strong>redirect_canonical</strong> filter that prevents Wordpress from redirecting <strong><a href="http://POD_IP:8080/index.php" rel="nofollow noreferrer">http://POD_IP:8080/index.php</a></strong>:</p>
<pre><code><?php
/**
* Plugin Name: Kubernetes Liveness Probe Exception
*/
add_filter('redirect_canonical', function($redirect_url, $requested_url) {
$K8S_POD_IP = getenv('K8S_POD_IP');
$LIVENESS_URL = "http://" . $K8S_POD_IP . ":8080/index.php";
if ($requested_url == $LIVENESS_URL) {
return $requested_url;
}
return $redirect_url;
}, 10, 2);
</code></pre>
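<p>For context, the liveness probe that this plugin makes an exception for would look something like the following in the Wordpress container spec (a sketch: the path and port are inferred from the URL above, the timings are assumed):</p>

```yaml
livenessProbe:
  httpGet:
    path: /index.php   # the request the plugin exempts from canonical redirection
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
```

<p>Without the plugin, this probe would receive a 301 instead of a 200 and Kubernetes would keep restarting the container.</p>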
|
<p>I'm trying to test and implement Traefik's <em>https</em> redirect feature in my kubernetes cluster per Traefik's documentation: <a href="https://docs.traefik.io/middlewares/overview/" rel="nofollow noreferrer">https://docs.traefik.io/middlewares/overview/</a>. Here's the definition of the <code>Middleware</code> and <code>IngressRoute</code>:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: ingressroutetls
namespace: default
spec:
entryPoints:
- web
- websecure
routes:
- match: Host(`your.domain.name`) && Host(`www.your.domain.name`)
kind: Rule
services:
- name: traefik-dashboard
port: 8080
middlewares:
- name: redirectscheme
tls:
secretName: cloud-tls
</code></pre>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: redirectscheme
spec:
redirectScheme:
scheme: https
</code></pre>
<p>However, <strong>https</strong>://your.domain.name works while <strong>http</strong>://your.domain.name gives me a <em>404 page not found</em>.
Does anyone know what I have misconfigured?</p>
| <blockquote>
<p>that worked for me:</p>
</blockquote>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: a3-ing
namespace: default
spec:
entryPoints:
- websecure
routes:
- match: Host(`example.com`)
kind: Rule
services:
- name: whoami
port: 80
tls:
certResolver: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: a3-ing-red
namespace: default
spec:
entryPoints:
- web
routes:
- match: Host(`example.com`)
middlewares:
- name: test-redirectscheme
kind: Rule
services:
- name: whoami
port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: a3-ing-www
namespace: default
spec:
entryPoints:
- websecure
routes:
- match: Host(`www.example.com`)
kind: Rule
services:
- name: whoami
port: 80
tls:
certResolver: default
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: a3-ing-www-red
namespace: default
spec:
entryPoints:
- web
routes:
- match: Host(`www.example.com`)
kind: Rule
middlewares:
- name: test-redirectscheme
services:
- name: whoami
port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: test-redirectscheme
namespace: default
spec:
redirectScheme:
scheme: https
</code></pre>
|
<p>I would like to configure custom DNS in CoreDNS (to bypass NAT loopback issue, meaning that within the network, IP are not resolved the same than outside the network).</p>
<p>I tried to modify the ConfigMap for CoreDNS with a 'fake' domain just to test, but it does not work.
I am using minik8s.</p>
<p>Here the config file of config map coredns:</p>
<pre><code>apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4
cache 30
loop
reload
loadbalance
}
consul.local:53 {
errors
cache 30
forward . 10.150.0.1
}
kind: ConfigMap
</code></pre>
<p>Then I try to resolve this address using busybox, but it does not work.</p>
<pre><code>$kubectl exec -ti busybox -- nslookup test.consul.local
> nslookup: can't resolve 'test.consul.local'
command terminated with exit code 1
</code></pre>
<p>Even kubernetes DNS is failing</p>
<pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre>
| <p>I've reproduced your scenario and it works as intended. </p>
<p>Here I'll describe two different ways to use custom DNS on Kubernetes. The first is at the Pod level: you can customize which DNS server your pod will use. This is useful in specific cases where you don't want to change this configuration for all pods.</p>
<p>To achieve this, you need to add some optional fields. To know more about it, please read <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">this</a>.
Example: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox-custom
namespace: default
spec:
containers:
- name: busybox
image: busybox:1.28
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
dnsPolicy: "None"
dnsConfig:
nameservers:
- 8.8.8.8
searches:
- ns1.svc.cluster-domain.example
- my.dns.search.suffix
options:
- name: ndots
value: "2"
- name: edns0
restartPolicy: Always
</code></pre>
<pre><code>$ kubectl exec -ti busybox-custom -- nslookup cnn.com
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
Name: cnn.com
Address 1: 2a04:4e42::323
Address 2: 2a04:4e42:400::323
Address 3: 2a04:4e42:200::323
Address 4: 2a04:4e42:600::323
Address 5: 151.101.65.67
Address 6: 151.101.129.67
Address 7: 151.101.193.67
Address 8: 151.101.1.67
</code></pre>
<pre><code>$ kubectl exec -ti busybox-custom -- nslookup kubernetes.default
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre>
<p>As you can see, this method will create problem to resolve internal DNS names.</p>
<p>The second way to achieve this is to change the DNS at the cluster level. This is the way you chose. For comparison, here is my CoreDNS ConfigMap:</p>
<pre><code>$ kubectl get cm coredns -n kube-system -o yaml
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . 8.8.8.8 8.8.4.4
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
</code></pre>
<p>As you can see, I don't have <code>consul.local:53</code> entry. </p>
<blockquote>
<p><a href="https://www.consul.io/" rel="nofollow noreferrer">Consul</a> is a service networking solution to connect and secure services
across any runtime platform and public or private cloud</p>
</blockquote>
<p>This kind of setup is not common and I don't think you need to include this entry in your setup. <strong>This might be your issue and when I add this entry, I face the same issues you reported.</strong></p>
<pre><code>$ kubectl exec -ti busybox -- nslookup cnn.com
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: cnn.com
Address 1: 2a04:4e42:200::323
Address 2: 2a04:4e42:400::323
Address 3: 2a04:4e42::323
Address 4: 2a04:4e42:600::323
Address 5: 151.101.65.67
Address 6: 151.101.193.67
Address 7: 151.101.1.67
Address 8: 151.101.129.67
</code></pre>
<pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
</code></pre>
<p>Another main problem is that you are debugging DNS using the latest busybox image. I highly recommend you avoid any version newer than 1.28, as it has known <a href="https://github.com/docker-library/busybox/issues/48" rel="nofollow noreferrer">problems</a> regarding name resolution. </p>
<p>The best busybox image you can use to troubleshoot DNS is <a href="https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/admin/dns/busybox.yaml" rel="nofollow noreferrer">1.28</a>, as <a href="https://stackoverflow.com/users/1288818/oleg-butuzov" title="1,840 reputation">Oleg Butuzov</a> recommended in the comments. </p>
|
<p>I'm doing a deployment on the GKE service, and when I try to access the page I get the message</p>
<p>ERR_CONNECTION_REFUSED</p>
<p>I have defined a load balancing service for deployment and the configuration is as follows.</p>
<p>This is the .yaml for the deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: bonsai-onboarding
spec:
selector:
matchLabels:
app: bonsai-onboarding
replicas: 2
template:
metadata:
labels:
app: bonsai-onboarding
spec:
containers:
- name: bonsai-onboarding
image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
ports:
- containerPort: 3000
</code></pre>
<p>This is the service .yaml file.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: lb-onboarding
spec:
type: LoadBalancer
selector:
app: bonsai-onboarding
ports:
- protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>This is working fine, and all is green in GKE :)</p>
<pre><code>kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/bonsai-onboarding-8586b9b699-flhbn 1/1 Running 0 3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9 1/1 Running 0 3h23m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP XX.xx.yy.YY <none> 443/TCP 29d
service/lb-onboarding LoadBalancer XX.xx.yy.YY XX.xx.yy.YY 3000:32618/TCP 3h
</code></pre>
<p>Then when I tried to connect, the error was ERR_CONNECTION_REFUSED.</p>
<p>I think it is a network issue, because I ran the following tests from my local machine:</p>
<pre><code>Ping [load balancer IP] ---> Correct
Telnet [Load Balancer IP] 3000 ---> Correct
</code></pre>
<p>From Cloud Shell I forwarded port 3000 to 8080 and, from another Cloud Shell, made a curl to <a href="http://localhost:8080" rel="nofollow noreferrer">http://localhost:8080</a>, which worked fine.</p>
<p>Any idea about the problem?</p>
<p>Thanks in advance</p>
<p>I've changed your deployment a little bit to check it on my cluster, because your image was unreachable:</p>
<ul>
<li><p>deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: bonsai-onboarding
spec:
selector:
matchLabels:
app: bonsai-onboarding
replicas: 2
template:
metadata:
labels:
app: bonsai-onboarding
spec:
containers:
- name: bonsai-onboarding
image: nginx:latest
ports:
- containerPort: 80
</code></pre></li>
<li><p>service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: lb-onboarding
spec:
type: LoadBalancer
selector:
app: bonsai-onboarding
ports:
- protocol: TCP
port: 3000
targetPort: 80
</code></pre></li>
</ul>
<p>and it works out of the box:</p>
<pre><code>kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/bonsai-onboarding-7bdf584499-j2nv7 1/1 Running 0 6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh 1/1 Running 0 6m58s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.XXX.XXX.1 <none> 443/TCP 8m35s
service/lb-onboarding LoadBalancer 10.XXX.XXX.230 35.XXX.XXX.235 3000:31637/TCP 67s
</code></pre>
<p>and I'm able to reach <code>35.XXX.XXX.235:3000</code> from any IP:</p>
<pre><code>Welcome to nginx!
...
Thank you for using nginx.
</code></pre>
<p>You can check if your app is reachable using this command:</p>
<pre><code>nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
</code></pre>
<p>Maybe the cause of your "ERR_CONNECTION_REFUSED" problem lies in the configuration of your image? I found no problem with your deployment and load balancer configuration.</p>
|
<p>TL;DR. I'm lost as to how to access the data after deleting a PVC, as well as why PV wouldn't go away after deleting a PVC.</p>
<p>Steps I'm taking:</p>
<ol>
<li><p>created a disk in GCE manually:</p>
<pre><code>gcloud compute disks create --size 5Gi disk-for-rabbitmq --zone europe-west1-b
</code></pre></li>
<li><p>ran:</p>
<pre><code>kubectl apply -f /tmp/pv-and-pvc.yaml
</code></pre>
<p>with the following config:</p>
<pre><code># /tmp/pv-and-pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-for-rabbitmq
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 5Gi
gcePersistentDisk:
fsType: ext4
pdName: disk-for-rabbitmq
persistentVolumeReclaimPolicy: Delete
storageClassName: standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-for-rabbitmq
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: standard
volumeName: pv-for-rabbitmq
</code></pre></li>
<li><p>deleted a PVC manually (on a high level: I'm simulating a disastrous scenario here, like accidental deletion or misconfiguration of a <code>helm</code> release):</p>
<pre><code>kubectl delete pvc pvc-for-rabbitmq
</code></pre></li>
</ol>
<p>At this point I see the following:</p>
<pre><code>$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-for-rabbitmq 5Gi RWO Delete Released staging/pvc-for-rabbitmq standard 8m
$
</code></pre>
<blockquote>
<p>A side question, just improve my understanding: <strong>why PV is still there, even though it has a reclaim policy set to <code>Delete</code>?</strong> Isn't this what the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#delete" rel="noreferrer">docs</a> say for the <code>Delete</code> reclaim policy?</p>
</blockquote>
<p>Now if I try to re-create the PVC to regain access to the data in PV:</p>
<pre><code>$ kubectl apply -f /tmp/pv-and-pvc.yaml
persistentvolume "pv-for-rabbitmq" configured
persistentvolumeclaim "pvc-for-rabbitmq" created
$
</code></pre>
<p>I still get this for <code>pv</code>s, e.g. a PV is stuck in <code>Released</code> state:</p>
<pre><code>$
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-for-rabbitmq 5Gi RWO Delete Released staging/pvc-for-rabbitmq standard 15m
$
</code></pre>
<p>...and I get this for <code>pvc</code>s:</p>
<pre><code>$
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-for-rabbitmq Pending pv-for-rabbitmq 0 standard 1m
$
</code></pre>
<p>Looks like my PV is stuck in <code>Released</code> status, and PVC cannot access the PV which is not in <code>Available</code> status.</p>
<p>So, why can't the same PV and PVC be friends again? <strong>How do I make a PVC regain access to the data in the existing PV?</strong></p>
| <pre><code>kubectl patch pv pv-for-rabbitmq -p '{"spec":{"claimRef": null}}'
</code></pre>
<p>This worked for me. Clearing <code>spec.claimRef</code> removes the stale reference to the deleted PVC, moving the PV from <code>Released</code> back to <code>Available</code> so a new claim can bind to it.</p>
|
<p>I want to remove the server name from the error pages returned by the <strong>Kubernetes Nginx ingress controller</strong>. I set server-tokens to false, which takes care of the headers, but when I curl or open the ingress in the browser I still get the server name. </p>
<blockquote>
<p>server-tokens : "False"</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/2o4Gp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2o4Gp.png" alt="enter image description here"></a></p>
<p>I want to get rid of the server name while returning a 404 or any other error for that matter. What is the easiest way to achieve this? I don't have a default backend. Is there any way I can edit the ingress for this backend and add a custom HTML page directly without having to deploy default backends or inject the page using the nginx-configmap?</p>
| <p><code>server-tokens</code> only removes info from the HTTP response header. You'll need to define custom error pages similar to</p>
<pre><code>server {
...
error_page 500 502 503 504 /custom_50x.html;
}
</code></pre>
<p><a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page" rel="nofollow noreferrer">http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page</a></p>
|
<p>I know there are a lot of similar questions, but as far as I have browsed, none of them has a solution.
Coming to the issue: I have created a service account (using a command), a role (using a .yaml file), and a role binding (using .yaml files). The role grants access only to pods. But when I log into the dashboard (token method) using the SA that the role is attached to, I'm able to view all the resources without any restrictions. Here are the files and commands used by me.</p>
<p><strong>Role.yaml:</strong></p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: assembly-prod
name: testreadrole
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
</code></pre>
<p><strong>RoleBinding.yaml</strong></p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: testrolebinding
namespace: assembly-prod
subjects:
- kind: ServiceAccount
name: testsa
apiGroup: ""
roleRef:
kind: Role
name: testreadrole
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Command used to create service account:
<code>kubectl create serviceaccount <saname> --namespace <namespacename></code></p>
<p><strong>UPDATE:</strong> I created a service account and did not attach any kind of role to it. When I tried to log in with this SA, it let me through and I was able to perform all kinds of activities, including deleting "secrets". So by default all SAs seem to assume admin access, and that is the reason why my above roles are not working. Is this behavior expected? If yes, then how can I change it?</p>
| <p>Try the below steps</p>
<pre><code># create service account
kubectl create serviceaccount pod-viewer
# Create cluster role/role
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-viewer
rules:
- apiGroups: [""] # core API group
resources: ["pods", "namespaces"]
verbs: ["get", "watch", "list"]
---
# create cluster role binding
kubectl create clusterrolebinding pod-viewer \
--clusterrole=pod-viewer \
--serviceaccount=default:pod-viewer
# get service account secret
kubectl get secret | grep pod-viewer
pod-viewer-token-6fdcn kubernetes.io/service-account-token 3 2m58s
# get token
kubectl describe secret pod-viewer-token-6fdcn
Name: pod-viewer-token-6fdcn
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: pod-viewer
kubernetes.io/service-account.uid: bbfb3c4e-2254-11ea-a26c-0242ac110009
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InBvZC12aWV3ZXItdG9rZW4tNmZkY24iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoicG9kLXZpZXdlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImJiZmIzYzRlLTIyNTQtMTFlYS1hMjZjLTAyNDJhYzExMDAwOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnBvZC12aWV3ZXIifQ.Pgco_4UwTCiOfYYS4QLwqgWnG8nry6JxoGiJCDuO4ZVDWUOkGJ3w6-8K1gGRSzWFOSB8E0l2YSQR4PB9jlc_9GYCFQ0-XNgkuiZBPvsTmKXdDvCNFz7bmg_Cua7HnACkKDbISKKyK4HMH-ShgVXDoMG5KmQQ_TCWs2E_a88COGMA543QL_BxckFowQZk19Iq8yEgSEfI9m8qfz4n6G7dQu9IpUSmVNUVB5GaEsaCIg6h_AXxDds5Ot6ngWUawvhYrPRv79zVKfAxYKwetjC291-qiIM92XZ63-YJJ3xbxPAsnCEwL_hG3P95-CNzoxJHKEfs_qa7a4hfe0k6HtHTWA
ca.crt: 1025 bytes
namespace: 7 bytes
</code></pre>
<p>Log in to the dashboard using the above token. You should see only pods and namespaces:</p>
<p><a href="https://i.stack.imgur.com/D9bDi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D9bDi.png" alt="dashboard showing only pods and namespaces"></a></p>
|
<p>We want to deploy using ArgoCD from our Jenkinsfile (which is slightly not how this is intended to be done but close enough), and after done some experiments want to try using the official container with the CLI, so we have added this snippet to our our pipeline kubernetes yaml:</p>
<pre><code> - name: argocdcli
image: argoproj/argocli
command:
- argo
args:
- version
tty: true
</code></pre>
<p>Unfortunately the usual way to keep these containers alive is to invoke <code>cat</code> in the container, which isn't there, so it fails miserably. Actually the <em>only</em> command in there is the "argo" command which doesn't have a way to sleep infinitely. (We are going to report this upstream so it will be fixed, but while we wait for that....)</p>
<p>My question therefore is: is there a way to tell Kubernetes that we <em>know</em> this pod cannot keep itself up on its own, and therefore not to tear it down immediately? </p>
| <p>Unfortunately it's not possible since as you stated, <code>argo</code> is the only command available on this image. </p>
<p>It can be confirmed <a href="https://github.com/argoproj/argo/blob/master/Dockerfile#L97-L102" rel="nofollow noreferrer">here</a>: </p>
<pre><code>####################################################################################################
# argocli
####################################################################################################
FROM scratch as argocli
COPY --from=argo-build /go/src/github.com/argoproj/argo/dist/argo-linux-amd64 /bin/argo
ENTRYPOINT [ "argo" ]
</code></pre>
<p>As we can see on this output, running argo is all this container is doing:</p>
<pre><code>$ kubectl run -i --tty --image argoproj/argocli argoproj1 --restart=Never
argo is the command line interface to Argo
Usage:
argo [flags]
argo [command]
...
</code></pre>
<p>You can optionally create your own image based on that and include <code>sleep</code>, so it'll be possible to keep it running, as in this example: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- name: busybox
image: busybox:1.28
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
|
<p>I want to have two instances of the same Pod, with an environment variable set to a different value in each of them.
How can we achieve this? </p>
<p>Thanks</p>
| <p>You can achieve what you want using one pod containing 2 different containers. </p>
<p>Here is an example on how to achieve that: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox1
image: busybox:1.28
env:
- name: VAR1
value: "Hello I'm VAR1"
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
- name: busybox2
image: busybox:1.28
env:
- name: VAR2
value: "VAR2 here"
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
<p>We are creating 2 containers, one with <code>VAR1</code> and the second with <code>VAR2</code>. </p>
<pre><code>$ kubectl exec -ti busybox -c busybox1 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=busybox
TERM=xterm
VAR1=Hello I'm VAR1
KUBERNETES_PORT_443_TCP_ADDR=10.31.240.1
KUBERNETES_SERVICE_HOST=10.31.240.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.31.240.1:443
KUBERNETES_PORT_443_TCP=tcp://10.31.240.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
HOME=/root
</code></pre>
<pre><code>$ kubectl exec -ti busybox -c busybox2 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=busybox
TERM=xterm
VAR2=VAR2 here
KUBERNETES_PORT=tcp://10.31.240.1:443
KUBERNETES_PORT_443_TCP=tcp://10.31.240.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.31.240.1
KUBERNETES_SERVICE_HOST=10.31.240.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
HOME=/root
</code></pre>
<p>As you can see, they have the same hostname (inherited from the Pod name) and different variables. </p>
|
<p>Currently, the Fabric system consists of components running in multiple containers, and maintaining all those containers will be a challenge; thus I want to use a powerful container management platform such as Kubernetes to deploy Fabric. </p>
<p>Could anybody help? </p>
<p>Properly configuring and deploying Hyperledger Fabric components to Kubernetes is not trivial, and it is also only one part of the puzzle. Populating the Fabric network with <code>channels</code> and <code>chaincodes</code> is just as important as launching the Fabric network. In my opinion, this second part is highly overlooked.</p>
<p>This GitHub repository called <a href="https://github.com/APGGroeiFabriek/PIVT" rel="nofollow noreferrer">Hyperledger Fabric meets Kubernetes</a> contains a couple of Helm charts to:</p>
<ul>
<li>Configure and launch the whole HL Fabric network, either:
<ul>
<li>A simple one, one peer per organization and Solo orderer</li>
<li>Or scaled up one, multiple peers per organization and Kafka or Raft orderer</li>
</ul></li>
<li>Populate the network declaratively:
<ul>
<li>Create the channels, join peers to channels, update channels for Anchor peers</li>
<li>Install/Instantiate all chaincodes, or some of them, or upgrade them to newer version</li>
</ul></li>
<li>Add new peer organizations to an already running network declaratively</li>
<li>Backup and restore the state of whole network</li>
</ul>
<p>Tested against Fabric versions 1.4.1 to 1.4.4 and just works fine.</p>
<p><strong>PS:</strong> I'm the author of these Helm charts. As mentioned in our repo, we had implemented these Helm charts for our project's needs, and as the results looks very promising, decided to share the source code with the HL Fabric community. Hopefully it will fill a large gap!</p>
<p>I can confirm, we are using the very same Helm charts in all our environments and CI/CD pipelines since May 2019 without any issues, and will go to production in early 2020.</p>
|
<p>I'm writing an application to be deployed to Kubernetes where I want each pod to hold
a TCP connection to all the other pods in the replica set so any container can notify the other
containers in some use case. If a pod is added, a new connection should be created between it and all
the other pods in the replica set.</p>
<p>From a given pod, how do I open a TCP connection to all the other pods? I.e., how do I discover all
their IP addresses?</p>
<p>I have a ClusterIP Service DNS name that points to these pods by selector, and that's the mechanism
I've thought of trying to use so far. E.g., open a TCP connection to that DNS name repeatedly in a
loop, requesting a container ID over the connection each time, pausing when I've gotten say at least
three connections for each container ID. Then repeating that every minute or so to get new pods that
have been added. A keepalive could be used to remove connections to pods that have gone away.</p>
<p>Is there a better way to do it?</p>
<p>If it matters, I'm writing this application in Go.</p>
| <blockquote>
<p>E.g., open a TCP connection to that DNS name repeatedly in a loop, requesting a container ID over the connection each time, pausing when I've gotten say at least three connections for each container ID. Then repeating that every minute or so to get new pods that have been added.</p>
</blockquote>
<p>This does not sound like a solid solution.</p>
<p>Kubernetes does <strong>not</strong> provide the functionality that you are looking for out-of-the-box.</p>
<p><strong>Detect pods</strong></p>
<p>A possible solution is to query the Kubernetes API Server, for the pods matching your <em>selector</em> (labels). Use <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a> and look at e.g. <a href="https://github.com/kubernetes/client-go/blob/master/examples/in-cluster-client-configuration/main.go#L53" rel="nofollow noreferrer">example on how to list Pods</a>. You may want to <em>watch</em> for new pods, or query regularly.</p>
<p><strong>Connect to Pods</strong></p>
<p>When you know the <em>name</em> of a pod that you want to connect to, you can use <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods" rel="nofollow noreferrer">DNS to connect to pods</a>.</p>
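<p>Alternatively, a headless Service over the same selector turns discovery into a plain DNS lookup: with <code>clusterIP: None</code>, the cluster DNS returns an A record for every ready pod instead of a single virtual IP. A sketch (the service name, selector, and port are assumptions):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: peers            # resolve "peers.<namespace>.svc.cluster.local"
spec:
  clusterIP: None        # headless: DNS returns all pod IPs
  selector:
    app: my-app          # assumed label on your pods
  ports:
    - port: 9000         # assumed peer-to-peer port
```

<p>Resolving that name from inside a pod then yields the IPs of all peers, which you can dial directly.</p>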
|
<p>I have set up statsd-exporter to scrape metrics from a gunicorn web server. My goal is to filter the request duration metric to only successful requests (non-5xx); however, in statsd-exporter there is no way to tag the status code on the duration metric. Can anyone suggest a way to add the status code to the request duration metric, or a way to filter only successful request durations in Prometheus?</p>
<p>In particular, I want to extract a successful-request duration histogram from statsd-exporter into Prometheus. </p>
<p>To export successful-request duration histogram metrics from the gunicorn web server to Prometheus, you would need to add this functionality to the gunicorn source code.</p>
<p>First take a look at the code that exports statsd metrics <a href="https://github.com/benoitc/gunicorn/blob/aa8b258f937867a8a453b426e5c26db84a8ab879/gunicorn/instrument/statsd.py#L99" rel="nofollow noreferrer">here</a>.
You should see this piece of code:</p>
<pre><code>status = resp.status
...
self.histogram("gunicorn.request.duration", duration_in_ms)
</code></pre>
<p>By changing the code to something like this:</p>
<pre><code>self.histogram("gunicorn.request.duration.%d" % status, duration_in_ms)
</code></pre>
<p>from this moment on you will have metric names exported with the status code included, like <code>gunicorn_request_duration_200</code> or <code>gunicorn_request_duration_404</code> etc.</p>
<p>You can also modify it a little bit and move the status code to a label by adding a configuration like the one below to your <code>statsd_exporter</code>:</p>
<pre><code>mappings:
- match: gunicorn.request.duration.*
name: "gunicorn_http_request_duration"
labels:
status: "$1"
job: "gunicorn_request_duration"
</code></pre>
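<p>To see both transformations end to end, here is a small plain-Python sketch (not gunicorn or statsd_exporter code) of how the status code travels from the dotted statsd name into the Prometheus label:</p>

```python
import re

# Mirrors the modified statsd call: the status code becomes the last
# segment of the dotted metric name, one series per status.
def statsd_name(status: int) -> str:
    return "gunicorn.request.duration.%d" % status

# Mirrors the statsd_exporter mapping "gunicorn.request.duration.*":
# the wildcard segment is captured as $1 and exposed as the "status" label.
MAPPING = re.compile(r"gunicorn\.request\.duration\.(\d+)")

def to_prometheus(name: str):
    m = MAPPING.fullmatch(name)
    return ("gunicorn_http_request_duration", {"status": m.group(1)})

print(to_prometheus(statsd_name(404)))
# ('gunicorn_http_request_duration', {'status': '404'})
```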
<p>So your metrics will now look like this:</p>
<pre><code># HELP gunicorn_http_request_duration Metric autogenerated by statsd_exporter.
# TYPE gunicorn_http_request_duration summary
gunicorn_http_request_duration{job="gunicorn_request_duration",status="200",quantile="0.5"} 2.4610000000000002e-06
gunicorn_http_request_duration{job="gunicorn_request_duration",status="200",quantile="0.9"} 2.4610000000000002e-06
gunicorn_http_request_duration{job="gunicorn_request_duration",status="200",quantile="0.99"} 2.4610000000000002e-06
gunicorn_http_request_duration_sum{job="gunicorn_request_duration",status="200"} 2.4610000000000002e-06
gunicorn_http_request_duration_count{job="gunicorn_request_duration",status="200"} 1
gunicorn_http_request_duration{job="gunicorn_request_duration",status="404",quantile="0.5"} 3.056e-06
gunicorn_http_request_duration{job="gunicorn_request_duration",status="404",quantile="0.9"} 3.056e-06
gunicorn_http_request_duration{job="gunicorn_request_duration",status="404",quantile="0.99"} 3.056e-06
gunicorn_http_request_duration_sum{job="gunicorn_request_duration",status="404"} 3.056e-06
gunicorn_http_request_duration_count{job="gunicorn_request_duration",status="404"} 1
</code></pre>
<p>And now, to query all metrics except those with a 5xx status in Prometheus, you can run:</p>
<pre><code>gunicorn_http_request_duration{status=~"[^5].*"}
</code></pre>
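<p>Prometheus label matchers are fully anchored regular expressions, so <code>[^5].*</code> keeps every status whose first character is not <code>5</code>. The idea can be checked with an equivalent regex in plain Python (a sketch, not PromQL itself):</p>

```python
import re

# Anchored equivalent of the PromQL matcher status=~"[^5].*":
# keep every status code that does not start with "5".
non_5xx = re.compile(r"[^5].*")

codes = ["200", "301", "404", "500", "503"]
kept = [c for c in codes if non_5xx.fullmatch(c)]
print(kept)  # ['200', '301', '404']
```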
<p>Let me know if it was helpful.</p>
|
<p>I have created a hashicorp vault deployment and configured kubernetes auth. The vault container calls kubernetes api internally from the pod to do k8s authentication, and that call is failing with 500 error code (connection refused). I am using docker for windows kubernetes.</p>
<p>I added the below config to vault for kubernetes auth mechanism.</p>
<p><strong>payload.json</strong></p>
<pre><code>{
"kubernetes_host": "http://kubernetes",
"kubernetes_ca_cert": <k8s service account token>
}
</code></pre>
<pre><code>curl --header "X-Vault-Token: <vault root token>" --request POST --data @payload.json http://127.0.0.1:8200/v1/auth/kubernetes/config
</code></pre>
<p>I got 204 response as expected.</p>
<p>And I created a role for kubernetes auth using which I am trying to login to vault:</p>
<p><strong>payload2.json</strong></p>
<pre><code>{
"role": "tanmoy-role",
"jwt": "<k8s service account token>"
}
</code></pre>
<pre><code>curl --request POST --data @payload2.json http://127.0.0.1:8200/v1/auth/kubernetes/login
</code></pre>
<p>The above curl is giving below response:</p>
<blockquote>
<p>{"errors":["Post <a href="http://kubernetes/apis/authentication.k8s.io/v1/tokenreviews" rel="nofollow noreferrer">http://kubernetes/apis/authentication.k8s.io/v1/tokenreviews</a>: dial tcp 10.96.0.1:80: connect: connection refused"]}</p>
</blockquote>
<p>Below is my kubernetes service, which is up and running properly; I can also access the kubernetes dashboard by using a proxy.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13d
</code></pre>
<p>I am not able to figure out why 'kubernetes' service is not accessible from inside the container. Any help would be greatly appreciated.</p>
<p><strong>Edit 1.</strong> My vault pod and service are working fine:</p>
<p><strong>service</strong></p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vault-elb-int LoadBalancer 10.104.197.76 localhost,192.168.0.10 8200:31650/TCP,8201:31206/TCP 26h
</code></pre>
<p><strong>Pod</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
vault-84c65db6c9-pj6zw 1/1 Running 0 21h
</code></pre>
<p><strong>Edit 2.</strong>
As John suggested, I changed the 'kubernetes_host' in payload.json to '<a href="https://kubernetes" rel="nofollow noreferrer">https://kubernetes</a>'. But now I am getting this error:</p>
<pre><code>{"errors":["Post https://kubernetes/apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority"]}
</code></pre>
| <p>Your login request is being sent to the <code>tokenreview</code> endpoint on port 80. I think this is because your <code>kubernetes_host</code> specifies an <code>http</code> URL. The 500 response is because the API server is not listening on port 80, but on 443 instead (as you can see in your service list output).</p>
<p>Try changing to <code>https</code> when configuring the auth, i.e. </p>
<pre><code>payload.json
{
"kubernetes_host": "https://kubernetes",
"kubernetes_ca_cert": <k8s service account token>
}
</code></pre>
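<p>Regarding the follow-up x509 error: the API server's certificate is signed by the cluster CA, so <code>kubernetes_ca_cert</code> should contain the cluster CA certificate rather than a service account token. Inside a pod, the CA is conventionally mounted alongside the token; a hedged sketch using the default in-cluster paths (adjust if your setup differs):</p>

```shell
# Default service account mount inside a pod (illustrative paths):
CA_CERT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# kubernetes_ca_cert gets the CA; the JWT used for login is the token.
```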
|
<p>I followed the tutorial from <a href="https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/" rel="nofollow noreferrer">https://www.haproxy.com/documentation/hapee/1-9r1/traffic-management/kubernetes-ingress-controller/</a> to setup the haproxy controller (community edition) and the "Echo Server" App that should be accessed.</p>
<p>When I run <code>curl -L -H 'Host: echo.example.com' localhost:30884</code> I get the desired response</p>
<pre><code>Request served by app-58f7d69f54-p2kq8
HTTP/1.1 GET /
Host: echo.example.com
User-Agent: curl/7.61.1
Accept: */*
X-Forwarded-For: 10.42.0.0
</code></pre>
<p>However, if I just use <code>curl -L echo.example.com:30884</code> I get the response <code>default backend - 404</code>.
So apparently the request reaches the ingress controller, but the ingress controller doesn't know which host should be used.</p>
<p>Running <code>kubectl get ing</code> gives me</p>
<pre><code>eignungstest eignungstest.example.com 10.43.173.120 80 17h
kube-web-view dashboard.example.com 80 16h
web-ingress echo.example.com 80 16h
</code></pre>
<p>It seems there is no address assigned, but running the same commands as above with the host <code>eignungstest.example.com</code> gives the same results, so that should be irrelevant.</p>
<p>Is there a setting I'm missing that prevents the host header from being passed when I don't explicitly specify it?</p>
| <p>The ingress controller and the ingress look fine. </p>
<p>As you can see in your <code>kubectl</code> response, the ingress points to port 80 of the service that serves <code>echo.example.com</code>. </p>
<p>So it only knows about port 80 of echo.example.com; when you try <code>curl -L echo.example.com:30884</code>, the request targets that particular port, and the controller routes the traffic to the default backend. </p>
<p>Make sure you can directly <code>curl -L echo.example.com</code>; if that doesn't work, check whether the service has something similar to this:</p>
<pre><code>spec:
ports:
- port: 80
targetPort: 3001
</code></pre>
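<p>One way to see the difference between the two requests: with <code>-H</code> the Host header is exactly what you pass, while without it curl derives the header from the URL, including any non-default port. A quick illustration (hostnames as in the question):</p>

```shell
# Host header sent: "echo.example.com"
curl -L -H 'Host: echo.example.com' localhost:30884

# Host header sent: "echo.example.com:30884", which may not match the
# ingress host rule "echo.example.com"
curl -v -L echo.example.com:30884
```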
<p>You might have done the service part right as well. Hope this helps.</p>
|
<p>I have a pod running and want to port-forward so I can access the pod from the internal network.
I don't know what port it is listening on, though; there is no service yet.</p>
<p>I describe the pod:</p>
<pre><code>$ kubectl describe pod queue-l7wck
Name: queue-l7wck
Namespace: default
Priority: 0
Node: minikube/192.168.64.3
Start Time: Wed, 18 Dec 2019 05:13:56 +0200
Labels: app=work-queue
chapter=jobs
component=queue
Annotations: <none>
Status: Running
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/queue
Containers:
queue:
Container ID: docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63
Image: gcr.io/kuar-demo/kuard-amd64:blue
Image ID: docker-pullable://gcr.io/kuar-demo/kuard-amd64@sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 18 Dec 2019 05:14:02 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mbn5b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-mbn5b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mbn5b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/queue-l7wck to minikube
Normal Pulling 31h kubelet, minikube Pulling image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Pulled 31h kubelet, minikube Successfully pulled image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Created 31h kubelet, minikube Created container queue
Normal Started 31h kubelet, minikube Started container queue
</code></pre>
<p>even the JSON has nothing:</p>
<pre><code>$ kubectl get pods queue-l7wck -o json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2019-12-18T03:13:56Z",
"generateName": "queue-",
"labels": {
"app": "work-queue",
"chapter": "jobs",
"component": "queue"
},
"name": "queue-l7wck",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "ReplicaSet",
"name": "queue",
"uid": "a9ec07f7-07a3-4462-9ac4-a72226f54556"
}
],
"resourceVersion": "375402",
"selfLink": "/api/v1/namespaces/default/pods/queue-l7wck",
"uid": "af43027d-8377-4227-b366-bcd4940b8709"
},
"spec": {
"containers": [
{
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imagePullPolicy": "Always",
"name": "queue",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-mbn5b",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"enableServiceLinks": true,
"nodeName": "minikube",
"priority": 0,
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"tolerationSeconds": 300
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"tolerationSeconds": 300
}
],
"volumes": [
{
"name": "default-token-mbn5b",
"secret": {
"defaultMode": 420,
"secretName": "default-token-mbn5b"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63",
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imageID": "docker-pullable://gcr.io/kuar-demo/kuard-amd64@sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229",
"lastState": {},
"name": "queue",
"ready": true,
"restartCount": 0,
"started": true,
"state": {
"running": {
"startedAt": "2019-12-18T03:14:02Z"
}
}
}
],
"hostIP": "192.168.64.3",
"phase": "Running",
"podIP": "172.17.0.2",
"podIPs": [
{
"ip": "172.17.0.2"
}
],
"qosClass": "BestEffort",
"startTime": "2019-12-18T03:13:56Z"
}
}
</code></pre>
<p>How do you check what port a pod is listening on with kubectl?</p>
<p><strong>Update</strong></p>
<p>If I ssh into the pod and run <code>netstat -tulpn</code> as suggested in the comments I get:</p>
<pre><code>$ kubectl exec -it queue-pfmq2 -- sh
~ $ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/kuard
</code></pre>
<p>But this method is not using <code>kubectl</code>.</p>
| <p>Your container image declares a port at build time (port 8080 in your case) using the <a href="https://docs.docker.com/engine/reference/builder/#expose" rel="noreferrer">EXPOSE instruction in the Dockerfile</a>. Since the exposed port is baked into the image metadata, k8s does not keep track of it, because k8s does not need to take steps to open it.</p>
<p>Since k8s is not responsible for opening the port, you won't be able to find the listening port using kubectl or by checking the pod YAML.</p>
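<p>If you want to see what the image itself declares, the <code>EXPOSE</code>d ports are recorded in the image metadata; for example, assuming you have docker available locally:</p>

```shell
docker image inspect gcr.io/kuar-demo/kuard-amd64:blue \
  --format '{{.Config.ExposedPorts}}'
```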
|
<p>I'm working in a company where we have build a SaaS-application with the PHP/MySQL/Redis/JS-stack.</p>
<p>It runs in our own Google Kubernetes Engine as a hosted/managed service for our clients.</p>
<p>Most of our business logic is placed in the PHP-layer.</p>
<p>Now we want to provide our product as an on-premise solution, so that our clients can deploy the whole product in their own private cloud or maybe even in their own basement.</p>
<p>This also means that the clients get access to all our "business secrets" (business logic) and this worries me.
We are a small business and we are doing business with large companies. Some of them might be interested in doing what we do themselves.</p>
<p>PHP is not compiled, and the obfuscators I have found do not seem to obfuscate the code much.</p>
<p>We have talked about following solutions:</p>
<ol>
<li><p>Trust the customers to not copy/get inspired by our solution and only protect it through law/contracts</p></li>
<li><p>Move some vital parts of our logic to our own server and make the on-prem solutions call to our server sometimes. This way we can control licensing etc. But we fear that the customers then will say that it is not totally on-prem. Which is true.</p></li>
<li><p>Obfuscate it the best we can and hope for the best. Maybe we are just paranoid?</p></li>
<li><p>Do something that makes it possible for the clients to run the code in their private cloud without accessing the source code - we have looked into Google Cloud Marketplace, but as far as I can see, the clients will get access to the VM.</p></li>
</ol>
<p>Do you have any good inputs in this regard?
We are looking for creative solutions. The main goal is to protect our business.</p>
<p>It would be best if we could offer our Kubernetes setup as a fully on-prem solution, but if you have good ideas to how we can provide it to client's private cloud (even only in GCP) that is also very welcome.</p>
<p>Thanks in advance!</p>
| <p>Your question relates more to IP (intellectual property) than to k8s and clouds.</p>
<p>Solutions are:</p>
<h1>Trust and open source</h1>
<blockquote>
<ol>
<li>Trust the customers to not copy/get inspired by our solution and only protect it through law/contracts</li>
</ol>
</blockquote>
<p>You are right. And there is power in that.</p>
<blockquote>
<p>“The best way to find out if you can trust somebody is to trust them.”
― Ernest Hemingway </p>
</blockquote>
<p>You can select an appropriate license and give your app to the client.
See <a href="https://stackoverflow.com/a/336068/5720818">this answer</a>: </p>
<blockquote>
<p>Some parts of a EULA that come to mind:</p>
<ul>
<li>Limiting your liability if the product has bugs or causes damage.</li>
<li>Spelling out how the customer can use their licensed software, for how long, on how many machines, with or without redistribution rights, etc.</li>
<li>Giving you rights to audit their site, so you can enforce the licenses.</li>
<li>What happens if they violate the EULA, e.g. they lose their privilege to use your software.</li>
</ul>
<p>You should consult a legal professional to prepare a commercial EULA.</p>
<ul>
<li>"<a href="http://discuss.joelonsoftware.com/default.asp?biz.5.293027.14" rel="nofollow noreferrer">EULA advice</a>" on joelonsoftware</li>
<li>"<a href="http://www.avangate.com/articles/eula-software_75.htm" rel="nofollow noreferrer">How to Write an End User License Agreement</a>" </li>
</ul>
</blockquote>
<h1>Not-prem</h1>
<blockquote>
<ol start="2">
<li>Move some vital parts of our logic to our own server and make the on-prem solutions call to our server sometimes. This way we can control licensing etc. But we fear that the customers then will say that it is not totally on-prem. Which is true.</li>
</ol>
</blockquote>
<p>Not the best solution, since it's not truly on-prem.
Your clients' servers may be located in a secure zone behind a firewall, without access to your server.</p>
<p>Still, it's a popular approach.
For example, see how <a href="https://www.vepp.com/" rel="nofollow noreferrer">Vepp</a> works.</p>
<blockquote>
<ol start="3">
<li>Obfuscate it the best we can and hope for the best. Maybe we are just paranoid?</li>
</ol>
</blockquote>
<p>Solutions are:</p>
<ul>
<li><a href="https://www.ioncube.com/php_encoder.php" rel="nofollow noreferrer">ionCube PHP Encoder</a></li>
<li><a href="https://www.sourceguardian.com/" rel="nofollow noreferrer">SourceGuardian</a></li>
<li><a href="http://www.semanticdesigns.com/Products/Obfuscators/PHPObfuscator.jsp" rel="nofollow noreferrer">Thicket<sup>™</sup> Obfuscator for PHP</a> by Semantic Designs</li>
<li><a href="https://www.zend.com/products/zend-guard" rel="nofollow noreferrer">Zend Guard</a></li>
</ul>
<p>There are some notable examples of PHP-driven self-hosted applications,
e.g. <a href="https://www.bitrix24.com/self-hosted/" rel="nofollow noreferrer">self-hosted Bitrix24</a>.</p>
<h1>Private cloud with encryption</h1>
<blockquote>
<ol start="4">
<li>Do something that makes it possible for the clients to run the code in their private cloud without accessing the source code - we have looked into Google Cloud Marketplace, but as far as I can see, the clients will get access to the VM.</li>
</ol>
</blockquote>
<p>Yes, you can distribute your app as encrypted VM.</p>
<ul>
<li>VirtualBox: install the <a href="https://www.virtualbox.org/wiki/Downloads" rel="nofollow noreferrer">Oracle VM VirtualBox Extension Pack</a> and enable disk encryption</li>
<li>VMware vSphere: you can use <a href="https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.security.doc/GUID-E6C5CE29-CD1D-4555-859C-A0492E7CB45D.html" rel="nofollow noreferrer">virtual machine encryption</a> since v6.5</li>
<li><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#how-ebs-encryption-works" rel="nofollow noreferrer">AWS</a>, <a href="https://cloud.google.com/storage/docs/encryption/customer-managed-keys" rel="nofollow noreferrer">GCP</a>, and <a href="https://learn.microsoft.com/en-us/azure/security/fundamentals/encryption-overview" rel="nofollow noreferrer">Azure</a> support encryption of your data. If your client agrees to cloud hosting, this might work.</li>
</ul>
|
<p>I have deployed multiple microservices on an AKS cluster and exposed them via an nginx ingress controller. The ingress points to a static IP with the DNS name blabla.eastus.azure.com.</p>
<p>The applications are exposed on blabla.eastus.azure.com/application1/, blabla.eastus.azure.com/application2/, etc. </p>
<p>I have created a Traffic Manager profile blabla.trafficmanager.net in Azure. How should I configure the AKS ingress in Traffic Manager such that Traffic Manager routes requests to an application deployed behind the AKS ingress?</p>
<pre><code>---Ingress.yaml configuration used
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: ns
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: blabla.eastus.azure.com
http:
paths:
- backend:
serviceName: application1
servicePort: 80
path: /application1(/|$)(.*)
- backend:
serviceName: application2
servicePort: 80
path: /application2(/|$)(.*)
- backend:
serviceName: aks-helloworld
servicePort: 80
path: /(.*)
</code></pre>
<p>When I curl <code>http://blabla.trafficmanager.net</code>, the response is <code>default backend - 404</code>.</p>
<p>When I update the host in the ingress to <code>blabla.trafficmanager.net</code>, I am able to access the application through <a href="http://blabla.trafficmanager.net" rel="nofollow noreferrer">http://blabla.trafficmanager.net</a>/application1.</p>
<p>The same is true for any custom CNAME created. I created a CNAME custom.domain.com and pointed it to blabla.eastus.azure.com. So unless I update the host in the ingress directly to custom.domain.com, I am not able to access it through the custom domain.</p>
| <p>The actual request will never pass through Traffic Manager. Traffic Manager is a DNS-based load balancing solution offered by Azure. </p>
<p>When you browse the Azure TM endpoint, it resolves the DNS query and gives you an IP. Then your browser requests that IP address directly.</p>
<p>In your case, your AKS cluster should have a public endpoint to which TM can resolve the DNS query. You also need to create a CNAME record mapping your custom domain to the TM FQDN. If this is not done, you will get a 404.</p>
<p>The custom header settings mentioned above are for the probes; the actual request is sent from the client browser to the endpoint/IP which the TM resolves to. </p>
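<p>Since Traffic Manager works purely at the DNS level, the resulting lookup chain can be inspected with an ordinary DNS query (records shown are illustrative):</p>

```shell
dig +short custom.domain.com
# custom.domain.com.         CNAME blabla.trafficmanager.net.
# blabla.trafficmanager.net. CNAME blabla.eastus.azure.com.   (healthy endpoint)
```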
|
<p><strong>Problem</strong> </p>
<ul>
<li>Trying to use Spring boot admin to do a deep monitoring of spring boot micro services running in Kubernetes.</li>
<li>Spring boot admin listing the micro services but pointing to the internal IPs.</li>
</ul>
<blockquote>
<p>Spring boot admin application listing page showing the internal IP</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/bYDce.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bYDce.png" alt="Spring boot admin application listing page showing the internal IP"></a></p>
<blockquote>
<p>The application details page has almost zero info</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/1Gq3B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Gq3B.png" alt="The application details page almost zero info"></a></p>
<p><strong>Details</strong></p>
<ul>
<li>Kubernetes 1.15</li>
<li>Spring boot applications are getting discovered by Spring boot admin using Spring cloud discovery</li>
<li>spring-cloud-kubernetes version 1.1.0.RELEASE</li>
<li>The problem is that the IPs are of internal pod network and would not be accessible to the users in any real world scenario.</li>
</ul>
<blockquote>
<p>Any hints on how to approach this scenario ? Any alternatives ?</p>
</blockquote>
<p>Also I was wondering how spring boot admin would behave in case of pods with more than one replica. I think it is close to impossible to point to a unique pod replica through ingress or node port.</p>
<blockquote>
<p>Hack I am working on</p>
</blockquote>
<p>I could start another pod which exposes a Linux desktop to the end user. From a browser on this desktop, the user may be able to access the pod-network IPs. It is just a wild thought, as a hack.</p>
| <p>Spring Boot Admin registers each application/client based on its name, via the property below:</p>
<pre><code>spring.boot.admin.client.instance.name=${spring.application.name}
</code></pre>
<p>If all your pods have the same name, it can register them based on individual IPs by enabling the <em>prefer-ip</em> property (which is false by default): </p>
<pre><code>spring.boot.admin.client.instance.prefer-ip=true
</code></pre>
<p><strong>In your case</strong>, you want SBA to register based on the Kubernetes load-balanced URL, so the <em>service-base-url</em> property should be set to the corresponding application's URL:</p>
<pre><code>spring.boot.admin.client.instance.service-base-url=http://myapp.com
</code></pre>
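<p>Putting the pieces together, a client-side configuration might look like the following (the hostnames are placeholders, not values from the question):</p>

```properties
spring.application.name=my-service
# Where the Spring Boot Admin server is reachable:
spring.boot.admin.client.url=http://spring-boot-admin.example.com
# Externally reachable URL that SBA should use for this instance:
spring.boot.admin.client.instance.service-base-url=http://myapp.example.com
```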
|
<p>I need to run a docker container when a Cloud Function is triggered. Also, I need this container to mount a folder from an NFS file server VM that is running on Google Compute Engine. </p>
<ul>
<li><p>How can I run a docker container from a Firebase/Cloud Function?</p></li>
<li><p>If it is not possible, can I trigger a docker container hosted in Kubernetes from one of these functions and how?</p></li>
</ul>
| <p>What you describe sounds like an ideal story for the Google product called <a href="https://cloud.google.com/run/" rel="nofollow noreferrer">Cloud Run</a>. In summary and at the highest level ... Cloud Run lets you run a docker container on demand when an incoming REST request arrives. You are billed for only the duration of execution of your container. When there is no traffic/requests, the container is automatically spun down and you are no longer charged. You can control how many requests are served per container (1 request = 1 container or multiple requests served by the same container or any combinations).</p>
<p>Effectively, Cloud Run is an alternative to Cloud Functions. With Cloud Functions you provide the function body and Cloud Functions provides the environment in which it runs. With Cloud Run, you provide the container which provides the function implementation AND the environment in which it runs. In both cases, Google owns the startup, shutdown and other management of the environment including scaling.</p>
<p>Google on the phrase "GCP Cloud Run" and you will find a wealth of documentation. I strongly suggest that this be your first research area given your described requirement. Please feel free to post additional questions tagged with google-cloud-run if you need further elaboration. If Cloud Run isn't a good answer to your question, please also post back so that we may all understand better.</p>
|
<p>On my helm chart I have a job with the <code>pre-install</code> hook where I need to use a property from my secrets. However when I try to install my helm chart I get the following error on my <code>pre-install</code> job:</p>
<blockquote>
<p>Error: secret "SecretsFileName" not found</p>
</blockquote>
<p>Aren't secrets created before the pod's execution? What's the problem here, and how can I solve it?</p>
<p>Notes: </p>
<ul>
<li>I want to use secrets to have the properties encrypted. I don't want to use the decrypted value directly on my pod;</li>
<li>I already read <a href="https://stackoverflow.com/questions/51957676/helm-install-in-certain-order">Helm install in certain order</a> but I still don't understand the reason for this error;</li>
<li>I already tried to use <code>"helm.sh/hook": pre-install,post-delete</code> and <code>"helm.sh/hook-weight": "1"</code> on secrets, and <code>"helm.sh/hook-weight": "2"</code> on my pod but the problem remains.</li>
</ul>
<p><strong>My pre-install job:</strong></p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: "MyPodName"
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
#some more code
spec:
template:
#some more code
spec:
dnsPolicy: {{ .Values.specPolicy.dnsPolicy }}
restartPolicy: {{ .Values.specPolicy.restartPolicy }}
volumes:
- name: {{ .Values.volume.name }}
persistentVolumeClaim:
claimName: {{ .Values.volume.claimName }}
securityContext:
{{- toYaml .Values.securityContext | nindent 8 }}
containers:
- name: "MyContainerName"
#some more code
env:
- name: SECRET_TO_USE
valueFrom:
secretKeyRef:
name: SecretsFileName
key: PROP_FROM_SCRETS
#some more code
</code></pre>
<p><strong>My secrets file:</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: "SecretsFileName"
labels:
app: "MyAppName"
#some more code
type: Opaque
data:
PROP_FROM_SCRETS: eHB0bw==
</code></pre>
| <p>While <a href="https://helm.sh/docs/topics/charts_hooks/" rel="noreferrer">Helm hooks</a> are typically Jobs, there's no requirement that they are, and Helm doesn't do any analysis on the contents of a hook object to see what else it might depend on. If you read through the installation sequence described there, it is (7) install things tagged as hooks, (8) wait for those to be ready, then (9) install everything else; it waits for the Job to finish before it installs the Secret it depends on.</p>
<p>The first answer, then, is that you also need to tag your Secret as a hook for it to be installed during the pre-install phase, with a modified weight so that it gets installed before the main Job (smaller weight numbers happen sooner):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
</code></pre>
<p>The next question is when this Secret gets <em>deleted</em>. The documentation <a href="https://helm.sh/docs/topics/charts_hooks/#hook-resources-are-not-managed-with-corresponding-releases" rel="noreferrer">notes</a> that <code>helm uninstall</code> won't delete hook resources; you need to add a separate <a href="https://helm.sh/docs/topics/charts_hooks/#hook-deletion-policies" rel="noreferrer"><code>helm.sh/hook-delete-policy</code> annotation</a>, or else it will stick around until the next time the hook is scheduled to be run. This reads to me as saying that if you modify the Secret (or the values that make it up) and upgrade (not delete and reinstall) the chart, the Secret won't get updated.</p>
<p>I'd probably just create two copies of the Secret, one that's useful at pre-install time and one that's useful for the primary chart lifecycle. You could create a template to render the Secret body and then call that twice:</p>
<pre><code>{{- define "secret.content" -}}
type: Opaque
data:
PROP_FROM_SCRETS: eHB0bw==
{{- end -}}
---
apiVersion: v1
kind: Secret
metadata:
name: "SecretsFileName"
labels:
app: "MyAppName"
{{ include "secret.content" . }}
---
apiVersion: v1
kind: Secret
metadata:
name: "SecretsFileName-preinst"
labels:
app: "MyAppName"
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
{{ include "secret.content" . }}
</code></pre>
|
<p>I'm experiencing intermittent failures to respond when making outbound connections such as RPC calls. It is logged by my application (Java) like this: </p>
<p><code>org.apache.http.NoHttpResponseException: RPC_SERVER.com:443 failed to respond !</code></p>
<p><strong>Outbound connection flow</strong></p>
<p>Kubernetes Node -> ELB for internal NGINX -> internal NGINX ->[Upstream To]-> ELB RPC server -> RPC server instance</p>
<p>This problem does not occur on plain EC2 (AWS). </p>
<p>I'm able to reproduce on my localhost by doing this</p>
<ol>
<li>Run main application which act as client in port 9200</li>
<li>Run RPC server in port 9205</li>
<li>Client will make a connection to server using port 9202</li>
<li>Run <code>$ socat TCP4-LISTEN:9202,reuseaddr TCP4:localhost:9205</code> that will listen on port 9202 and then forward it to 9205 (RPC Server)</li>
<li>Add rules on iptables using <code>$ sudo iptables -A INPUT -p tcp --dport 9202 -j DROP</code></li>
<li>Trigger an RPC call, and it will return the same error message as I described before</li>
</ol>
<p><strong>Hypothesis</strong></p>
<p>Caused by NAT on Kubernetes. As far as I know, NAT uses <code>conntrack</code>, and <code>conntrack</code> can break the TCP connection if it was idle for some period of time; the client will then assume the connection is still established although it isn't. (CMIIW)</p>
<p>I have also tried scaling <code>kube-dns</code> to 10 replicas, and the problem still occurs. </p>
<p><strong>Node Specification</strong></p>
<p>Uses Calico as the network plugin.</p>
<p><code>$ sysctl -a | grep conntrack</code></p>
<pre><code>net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_buckets = 65536
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_count = 1585
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_expect_max = 1024
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_helper = 1
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_max = 262144
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 3600
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_timestamp = 0
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.nf_conntrack_max = 262144
</code></pre>
<p><strong>Kubelet config</strong> </p>
<pre><code>[Service]
Restart=always
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CLOUD_ARGS=--cloud-provider=aws"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CLOUD_ARGS
</code></pre>
<p><strong>Kubectl version</strong> </p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.7", GitCommit:"8e1552342355496b62754e61ad5f802a0f3f1fa7", GitTreeState:"clean", BuildDate:"2017-09-28T23:56:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>Kube-proxy Log</strong></p>
<pre><code>W1004 05:34:17.400700 8 server.go:190] WARNING: all flags other than --config, --write-config-to, and --cleanup-iptables are deprecated. Please begin using a config file ASAP.
I1004 05:34:17.405871 8 server.go:478] Using iptables Proxier.
W1004 05:34:17.414111 8 server.go:787] Failed to retrieve node info: nodes "ip-172-30-1-20" not found
W1004 05:34:17.414174 8 proxier.go:483] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
I1004 05:34:17.414288 8 server.go:513] Tearing down userspace rules.
I1004 05:34:17.443472 8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I1004 05:34:17.443518 8 conntrack.go:52] Setting nf_conntrack_max to 262144
I1004 05:34:17.443555 8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1004 05:34:17.443584 8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1004 05:34:17.443851 8 config.go:102] Starting endpoints config controller
I1004 05:34:17.443888 8 config.go:202] Starting service config controller
I1004 05:34:17.443890 8 controller_utils.go:994] Waiting for caches to sync for endpoints config controller
I1004 05:34:17.443916 8 controller_utils.go:994] Waiting for caches to sync for service config controller
I1004 05:34:17.544155 8 controller_utils.go:1001] Caches are synced for service config controller
I1004 05:34:17.544155 8 controller_utils.go:1001] Caches are synced for endpoints config controller
</code></pre>
<p><code>$ lsb_release -s -d</code>
Ubuntu 16.04.3 LTS</p>
| <p>Check the value of <code>sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait</code> inside the pod that contains your program. It is possible that the value on the node that you listed (3600) isn't the same as the value inside the pod.</p>
<p>If the value in the pod is too small (e.g. 60), and your Java client half-closes the TCP connection with a FIN when it finishes transmitting, but the response takes longer than the close_wait timeout to arrive, nf_conntrack will lose the connection state and your client program will not receive the response.</p>
<p>You may need to change the behavior of the client program to not use a TCP half-close, OR modify the value of <code>net.netfilter.nf_conntrack_tcp_timeout_close_wait</code> to be larger. See <a href="https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/</a>.</p>
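<p>As a quick sanity check (the pod name here is a placeholder), you can compare the value on the node with the value seen inside the pod:</p>
<pre><code># on the node
sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait
# inside the pod
kubectl exec -it YOUR_POD -- sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait
</code></pre>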
|
<p>I have an ingress service with the following config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /api/?(.*)
backend:
serviceName: my-service
servicePort: 3001
- path: /auth/?(.*)
backend:
serviceName: my-service
servicePort: 3001
</code></pre>
<p>The problem is that when I run this on my minikube I cannot connect properly.
When I type <code>IP/api/test</code> in the browser it shows <code>not found</code>, even though my Express endpoint is:</p>
<pre><code>app.get('/api/test', (req, res) => {
return res.send({ hi: 'there' });
});
</code></pre>
<p>But <code>IP/api/api/test</code> (double <code>api</code>) works and delivers expected response. Obviously I would like to get there with <code>IP/api/test</code>. How can I achieve that in my ingress config?</p>
| <p>If you want to access <code>http://.../api/test</code> by calling <code>curl http://.../api/test</code>, then you don't need a rewrite for that path, so just drop the <code>/api</code> prefix from the path match:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /?(.*)
backend:
serviceName: my-service
servicePort: 3001
- path: /auth/?(.*)
backend:
serviceName: my-service
servicePort: 3001
</code></pre>
<p>This configuration will, for example, rewrite the following:</p>
<pre><code>http://.../api/test  -> http://.../api/test
http://.../auth/test -> http://.../test
http://.../asdasdasd -> http://.../asdasdasd
</code></pre>
<hr>
<p>Update:</p>
<p>In case you have another rewrite with <code>path: /?(.*)</code>, you can instead modify your app's route to:</p>
<pre><code>app.get('/test', (req, res) => {
return res.send({ hi: 'there' });
});
</code></pre>
<p>and use the original ingress configuration you posted in your question.</p>
<p>This way the request you have issues with will be resolved as follows:</p>
<p><code>IP/api/test</code> -> <code>IP/test</code></p>
<p>This is also the reason why you saw <code>IP/api/api/test</code> work before: one <code>/api/</code> was removed by the rewrite, so <code>IP/api/api/test</code> became <code>IP/api/test</code> at your app.</p>
<p>Yes, you can have multiple rewrites in an ingress, as long as they don't loop or rewrite too many times.</p>
|
<p>I'm writing an application to be deployed to Kubernetes where I want each pod to hold
a TCP connection to all the other pods in the replica set so any container can notify the other
containers in some use case. If a pod is added, a new connection should be created between it and all
the other pods in the replica set.</p>
<p>From a given pod, how do I open a TCP connection to all the other pods? I.e., how do I discover all
their IP addresses?</p>
<p>I have a ClusterIP Service DNS name that points to these pods by selector, and that's the mechanism
I've thought of trying to use so far. E.g., open a TCP connection to that DNS name repeatedly in a
loop, requesting a container ID over the connection each time, pausing when I've gotten say at least
three connections for each container ID. Then repeating that every minute or so to get new pods that
have been added. A keepalive could be used to remove connections to pods that have gone away.</p>
<p>Is there a better way to do it?</p>
<p>If it matters, I'm writing this application in Go.</p>
| <p>If the mentioned communication between the pods is not the main functionality of this application, but rather the application requires this functionality, I would strongly suggest not re-inventing the wheel and use a 3rd party component.</p>
<p><a href="https://redis.io/" rel="nofollow noreferrer">Redis</a> looks like a very good candidate, in particular <a href="https://redis.io/topics/pubsub" rel="nofollow noreferrer">Pub/Sub</a> mechanism of Redis.</p>
<p>So, all of the pods will subscribe to a certain channel in Redis, and when a pod wants to notify others, it will simply publish a message on the same channel. </p>
<p>Redis has client libraries in many languages including <a href="https://redis.io/clients#go" rel="nofollow noreferrer">Go</a>. Also very easy to deploy to Kubernetes via its <a href="https://github.com/helm/charts/tree/master/stable/redis" rel="nofollow noreferrer">Helm chart</a>.</p>
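<p>As a quick illustration of the Pub/Sub mechanism (the channel name is arbitrary, and the Redis host name depends on how you deploy it — the Helm chart exposes a master service), you can try it with <code>redis-cli</code>:</p>
<pre><code># in one pod: subscribe to the channel
redis-cli -h YOUR_REDIS_HOST SUBSCRIBE pod-events

# in another pod: notify all subscribers
redis-cli -h YOUR_REDIS_HOST PUBLISH pod-events "hello from pod A"
</code></pre>
<p>Each of your pods would do the equivalent of the SUBSCRIBE call through the Go client library and react to incoming messages, with no need to track peer IPs at all.</p>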
|
<p>We have a Couchbase cluster deployed on OpenShift. The Couchbase web console opens, but we are not able to add buckets to it. To do that, we have to declare the buckets in the YAML file and then redeploy using the OpenShift operator.
Is this the intended behavior, or can we add buckets without re-deploying the cluster?</p>
| <p>Yes, that's the intended behavior. Much of the cluster management, like buckets and adding of nodes, is under the control of the operator.</p>
<p>It is possible to <a href="https://docs.couchbase.com/operator/current/couchbase-cluster-config.html#disablebucketmanagement" rel="nofollow noreferrer">disable bucket management if you have a need to</a>, but the intent is that you'll operate through kubectl.</p>
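<p>As a sketch only (check the linked documentation for the exact field name and API version supported by your operator release), disabling bucket management looks roughly like this in the cluster spec:</p>
<pre><code>apiVersion: couchbase.com/v1
kind: CouchbaseCluster
metadata:
  name: my-cluster
spec:
  # field name as per the linked operator documentation;
  # verify the spelling for your operator version
  disableBucketManagement: true
</code></pre>
<p>With this set, buckets created through the web console are no longer reverted by the operator's reconciliation.</p>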
|
<p>I am having an issue with Kubernetes on GKE:
I am unable to resolve services by name using the internal DNS.</p>
<p>This is my configuration:</p>
<p>Google GKE v1.15</p>
<pre><code>kubectl get namespaces
NAME STATUS AGE
custom-metrics Active 183d
default Active 245d
dev Active 245d
kube-node-lease Active 65d
kube-public Active 245d
kube-system Active 245d
stackdriver Active 198d
</code></pre>
<p>I've deployed a couple of simple services based on the openjdk 11 Docker image, built with Spring Boot + Actuator so that each has an <code>/actuator/health</code> endpoint to test in dev:</p>
<pre><code>kubectl get pods --namespace=dev
NAME READY STATUS RESTARTS AGE
test1-5d86946c49-h9t9l 1/1 Running 0 3h1m
test2-5bb5f4ff8d-7mzc8 1/1 Running 0 3h10m
</code></pre>
<p>If I exec into one of the pods and try a lookup:</p>
<pre><code>kubectl --namespace=dev exec -it test1-5d86946c49-h9t9 -- /bin/bash
root@test1-5d86946c49-h9t9:/app# cat /etc/resolv.conf
nameserver 10.40.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local europe-west1-b.c.back-office-236415.internal c.back-office-236415.internal google.internal
options ndots:5
root@test1-5d86946c49-h9t9:/app# nslookup test2
Server: 10.40.0.10
Address: 10.40.0.10#53
** server can't find test2: NXDOMAIN
</code></pre>
<p>The same issue occurs if I go into the test2 pod and try to resolve test1. Is there a special configuration for the namespace to enable DNS resolution? Shouldn't this be automatic?</p>
| <p>I have reproduced this using master version 1.15 and a Service of type <code>ClusterIP</code>, and I am able to do a lookup from the Pod of one service to another. For creating Kubernetes Services in a Google Kubernetes Engine cluster, [1] might be helpful.</p>
<p>To see the services:</p>
<pre><code>$ kubectl get svc --namespace=default
</code></pre>
<p>To get a shell in a Pod of the deployment:</p>
<pre><code>$ kubectl exec -it [Pod Name] sh
</code></pre>
<p>To look up a service:</p>
<pre><code>$ nslookup [Service Name]
</code></pre>
<p>Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod's DNS search list includes the Pod's own namespace and the cluster's default domain.</p>
<p>"Normal" (not headless) Services are assigned a DNS A record for a name of the form <code>my-svc.my-namespace.svc.cluster-domain.example</code>. This resolves to the cluster IP of the Service.</p>
<p>"Headless" (without a cluster IP) Services are also assigned a DNS A record for a name, though it resolves to the set of IPs of the Pods selected by the Service.</p>
<p>However, DNS policies can be set on a per-Pod basis. Currently Kubernetes supports the following Pod-specific DNS policies, specified in the <code>dnsPolicy</code> field of a Pod spec [2]:</p>
<p><code>Default</code>: The Pod inherits the name resolution configuration from the node that the Pod runs on.</p>
<p><code>ClusterFirst</code>: Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured.</p>
<p><code>ClusterFirstWithHostNet</code>: Pods running with hostNetwork need to use this DNS policy.</p>
<p><code>None</code>: Allows a Pod to ignore DNS settings from the Kubernetes environment. All DNS settings are expected to be provided via the <code>dnsConfig</code> field in the Pod spec.</p>
<p>[1]-<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps</a>
[2]-<a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config</a></p>
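<p>One quick check from inside your pod is to try the fully qualified service name (namespace taken from your question):</p>
<pre><code>nslookup test2.dev.svc.cluster.local
</code></pre>
<p>If the FQDN resolves while the short name does not, the issue is likely in how the container image's resolver handles the search list rather than in cluster DNS itself. Also note that a Service object for <code>test2</code> must exist for the name to resolve at all — pod names from a Deployment are not resolvable through service DNS on their own.</p>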
|
<p>I have the below configuration in ingress.yaml, which forwards requests with URIs like /default/demoservice/health or /custom/demoservice/health to the demoservice backend. I would like to extract the first part of the URI (i.e. default or custom in the example above) and pass it as a custom header to the upstream.</p>
<p>I've deployed the ingress ConfigMap with a custom header</p>
<pre><code>X-MyVariable-Path: ${request_uri}
</code></pre>
<p>but this sends the full request URI. How can I split it?</p>
<pre><code>- path: "/(.*?)/(demoservice.*)$"
backend:
serviceName: demoservice
servicePort: 80
</code></pre>
| <p>I have found a solution, tested it, and it works.<br>
All you need is to add the following annotations to your ingress object:</p>
<pre><code>nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header X-MyVariable-Path $1;
</code></pre>
<p>Here <code>$1</code> references whatever is captured by the first group of the regexp in the <code>path:</code> field.</p>
<p>I've reproduced your scenario using the following yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header X-MyVariable-Path $1;
nginx.ingress.kubernetes.io/use-regex: "true"
name: foo-bar-ingress
spec:
rules:
- http:
paths:
- backend:
serviceName: echo
servicePort: 80
path: /(.*?)/(demoservice.*)$
---
apiVersion: v1
kind: Service
metadata:
labels:
run: echo
name: echo
spec:
ports:
- port: 80
targetPort: 80
selector:
run: echo
---
apiVersion: v1
kind: Pod
metadata:
labels:
run: echo
name: echo
spec:
containers:
- image: mendhak/http-https-echo
imagePullPolicy: Always
name: echo
</code></pre>
<p>You can test using curl:</p>
<p><code>curl -k https://<your_ip>/default/demoservice/healthz</code></p>
<p>Output:</p>
<pre><code> {
"path": "/default/demoservice/healthz",
"headers": {
"host": "192.168.39.129",
"x-request-id": "dfcc67a80f5b02e6fe6c647c8bf8cdf0",
"x-real-ip": "192.168.39.1",
"x-forwarded-for": "192.168.39.1",
"x-forwarded-host": "192.168.39.129",
"x-forwarded-port": "443",
"x-forwarded-proto": "https",
"x-scheme": "https",
"x-myvariable-path": "default", # your variable here
"user-agent": "curl/7.52.1",
"accept": "*/*"
},
"method": "GET",
"body": "",
"fresh": false,
"hostname": "192.168.39.129",
"ip": "::ffff:172.17.0.4",
"ips": [],
"protocol": "http",
"query": {},
"subdomains": [],
"xhr": false,
"os": {
"hostname": "echo"
}
}
</code></pre>
<p>I hope it helps =)</p>
|
<p>I have a values.yaml that has the following:</p>
<pre><code>abc:
env:
- name: name01
value: value01
- name: name02
value: value02
</code></pre>
<p>and I have another values file values-dev.yaml that I add when installing using -f and it has:</p>
<pre><code>abc:
env:
- name: name03
value: value03
</code></pre>
<p>and using range I list them in my template. My hope was that the list would become like this after both files are applied:</p>
<pre><code>abc:
env:
- name: name01
value: value01
- name: name02
value: value02
- name: name03
value: value03
</code></pre>
<p>but the values-dev.yaml values will override the one in the values.yaml and it becomes:</p>
<pre><code>abc:
env:
- name: name03
value: value03
</code></pre>
<p>How can I achieve merging these 2 lists with the same field names from different values files?</p>
| <p>The short answer is that you cannot merge <code>lists</code>.</p>
<p>In your case <code>abc.env</code> is the <code>key</code> and the <code>value</code> is a list. Let me re-write your first values file in an equivalent notation and it will be more clear:</p>
<pre><code>abc:
env: [{name: name01, value: value01}, {name: name02, value: value02}]
</code></pre>
<p>So Helm is doing what is expected: overriding the key <code>abc.env</code> with the last value provided.</p>
<p>The solution is to restructure your values files, like this:</p>
<pre><code>abc:
env:
name01: value01
name02: value02
</code></pre>
<p>This way, you can merge and override your values files as desired. It also makes it much easier to override a single value with a command-line flag, for example:</p>
<pre><code>--set abc.env.name01=different
</code></pre>
<p>With a small Helm template loop, it's easy to pass those values as environment variables to your pods:</p>
<pre><code>...
containers:
- name: abc
image: abc
env:
{{- range $key, $value := .Values.abc.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
</code></pre>
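<p>For example, with the two restructured values files above merged, the rendered container spec would come out roughly as follows (note that <code>range</code> over a map iterates keys in alphabetical order):</p>
<pre><code>env:
  - name: name01
    value: "value01"
  - name: name02
    value: "value02"
  - name: name03
    value: "value03"
</code></pre>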
|
<p>I am using Docker for Desktop on Windows 10 Professional with Hyper-V, and I am not using minikube. I have installed the Kubernetes cluster via Docker for Desktop, as shown below:</p>
<p><a href="https://i.stack.imgur.com/EvtdQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/EvtdQ.png" alt="enter image description here"></a></p>
<p>It shows the Kubernetes is successfully installed and running.</p>
<p>When I run the following command:</p>
<pre><code>kubectl config view
</code></pre>
<p>I get the following output:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://localhost:6445
name: docker-for-desktop-cluster
contexts:
- context:
cluster: docker-for-desktop-cluster
user: docker-for-desktop
name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
</code></pre>
<p>However, when I run</p>
<pre><code>kubectl cluster-info
</code></pre>
<p>I am getting the following error:</p>
<pre><code>Unable to connect to the server: dial tcp [::1]:6445: connectex: No connection could be made because the target machine actively refused it.
</code></pre>
<p>It seems like there is some network issue; I am not sure how to resolve this.</p>
| <p>I know this is an old question, but the following helped me resolve a similar issue. The root cause was that I had minikube installed previously and it was still set as my default context.
I was getting the following error:</p>
<pre><code>Unable to connect to the server: dial tcp 192.168.1.8:8443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
</code></pre>
<p>In the power-shell run the following command:</p>
<pre><code>> kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
docker-desktop docker-desktop docker-desktop
docker-for-desktop docker-desktop docker-desktop
* minikube minikube minikube
</code></pre>
<p>This lists all the contexts. If you installed minikube in the past, a <code>*</code> mark will indicate that it is the currently selected default context. You can switch to the <strong>docker-desktop</strong> context as follows:</p>
<pre><code>> kubectl config use-context docker-desktop
</code></pre>
<p>Run the get-contexts command again to verify the * mark.</p>
<p>Now, the following command should work:</p>
<pre><code>> kubectl get pods
</code></pre>
|
<p>Does Kubernetes GET API actually support <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/" rel="noreferrer"><code>fieldSelector</code> parameter</a> to query values of array fields? </p>
<p>For example, I have a Pod like:</p>
<pre><code>apiGroup: v1
kind: Pod
metadata:
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: grpc-ping-r7f8r-deployment-54d688d777
uid: 262bab1a-1c79-11ea-8e23-42010a800016
</code></pre>
<p>Can I do something like:</p>
<pre><code>kubectl get pods --field-selector 'metadata.ownerReferences.uid=262bab1a-1c79-11ea-8e23-42010a800016'
</code></pre>
<p>This command fails (<em><code>field label not supported: metadata.ownerReferences.uid</code></em>). I suspect the reason is that <code>ownerReferences</code> is itself an array field. I've also tried the following, but they didn't work:</p>
<ul>
<li><code>metadata.ownerReferences[*].uid=</code></li>
<li><code>metadata.ownerReferences[].uid=</code></li>
</ul>
<p>I might try client-go SDK for Kubernetes API, but I suspect it won't work for the same reason.</p>
<p>Is there a server-side way to query by this? Thanks much. </p>
| <p>The <code>--field-selector</code> flag only works with a limited set of fields.</p>
<p>For Pods, these are:</p>
<pre><code>"metadata.name",
"metadata.namespace",
"spec.nodeName",
"spec.restartPolicy",
"spec.schedulerName",
"spec.serviceAccountName",
"status.phase",
"status.podIP",
"status.podIPs",
"status.nominatedNodeName"
</code></pre>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/9d577d8a29893062dfbd669997396dbd01ab0e47/pkg/apis/core/v1/conversion.go#L33" rel="noreferrer">Reference</a></li>
</ul>
<p>But you can perform the task by using <code>jq</code>. Here is a command I used for listing all ready nodes; it demonstrates the kind of array-field selection you're looking for.</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl get nodes -o json | jq -r '.items[] | select(.status.conditions[] | select(.type=="Ready" and .status=="True")) | .metadata.name '
master-0
node-1
node-3
</code></pre>
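<p>For your original question, a filter on <code>ownerReferences</code> following the same pattern (the UID taken from your example) would look like this:</p>
<pre><code>$ kubectl get pods -o json | jq -r '.items[]
    | select(.metadata.ownerReferences[]?.uid == "262bab1a-1c79-11ea-8e23-42010a800016")
    | .metadata.name'
</code></pre>
<p>The <code>[]?</code> iterator skips pods that have no <code>ownerReferences</code> at all instead of erroring on them.</p>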
|
<p>I have a few Kubernetes clusters with a different number of nodes in each, and my deployment config has "replicas: #nodes". There is no specific scheduling configuration for the pod, but after deployment I see strange behavior in how the pods are distributed across nodes.</p>
<p>Example:</p>
<p><em>For a 30-node cluster (30 replicas), all 30 pod replicas were distributed across only 25 nodes, while the other 5 nodes sat idle in the cluster.</em>
Similar cases occur on many other clusters, and the count varies with every new deployment/redeployment.</p>
<p>Question:</p>
<p>I want to distribute my pod replicas across all nodes. If I set "replicas: #nodes" then I should have one pod replica on each node, and if I increase/double the replica count it should distribute evenly.
Is there any specific configuration for this in the deployment YAML for Kubernetes?</p>
<p>Below is my configuration with pod anti-affinity, but it still behaves as described above. I also tried "requiredDuringSchedulingIgnoredDuringExecution", which did deploy one pod on each node, but if I increase the replicas or any node goes down during the deployment, the whole deployment fails.</p>
<pre><code>metadata:
labels:
app: test1
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- test1
topologyKey: kubernetes.io/hostname
</code></pre>
| <p>If you need a 1 pod per node deployment, you should be using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">daemonSet</a> instead of a replicaSet. Alternately, if you need more than 1 pod per node and still want the pod distribution to be mostly even, you can use pod anti-affinity as I discussed <a href="https://stackoverflow.com/questions/57057561/kubernetes-evenly-distribute-the-replicas-across-the-cluster/57060271#57060271">in this post</a></p>
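<p>A minimal DaemonSet for your case could look like this (a sketch; the image and labels are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test1
spec:
  selector:
    matchLabels:
      app: test1
  template:
    metadata:
      labels:
        app: test1
    spec:
      containers:
      - name: test1
        image: your-image:tag
</code></pre>
<p>The scheduler then guarantees exactly one pod per eligible node, and nodes added later automatically receive a pod.</p>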
|
<p>I'm getting the error below while installing a Kubernetes cluster on Ubuntu 18.04. The Kubernetes master is ready, and I'm using flannel as the pod network. The error appears when I add my first node to the cluster using the join command.</p>
<pre><code> Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 11 Dec 2019 05:43:02 +0000 Wed, 11 Dec 2019 05:38:47 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 11 Dec 2019 05:43:02 +0000 Wed, 11 Dec 2019 05:38:47 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 11 Dec 2019 05:43:02 +0000 Wed, 11 Dec 2019 05:38:47 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 11 Dec 2019 05:43:02 +0000 Wed, 11 Dec 2019 05:38:47 +0000 KubeletNotReady Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
</code></pre>
<p>Update:</p>
<p>I noticed the following on the worker node:</p>
<pre><code> root@worker02:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2019-12-11 06:47:41 UTC; 27s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 14247 (kubelet)
Tasks: 14 (limit: 2295)
CGroup: /system.slice/kubelet.service
└─14247 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driv
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.085292 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kuber
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.086115 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-nbss2" (UniqueName
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.087975 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubern
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.088104 14247 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kube
Dec 11 06:47:43 worker02 kubelet[14247]: I1211 06:47:43.088153 14247 reconciler.go:156] Reconciler: start to sync state
Dec 11 06:47:45 worker02 kubelet[14247]: E1211 06:47:45.130889 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:48 worker02 kubelet[14247]: E1211 06:47:48.134042 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:50 worker02 kubelet[14247]: E1211 06:47:50.538096 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:53 worker02 kubelet[14247]: E1211 06:47:53.131425 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
Dec 11 06:47:56 worker02 kubelet[14247]: E1211 06:47:56.840529 14247 csi_plugin.go:267] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condit
</code></pre>
<p>Please let me know how to fix this.</p>
| <p>I had the same issue, except that in my case a running k8s cluster suddenly hit the CSINodeInfo error on some of the nodes, and those nodes dropped out of the cluster. After searching with Google for some days, I finally got the answer from
<a href="https://github.com/kubernetes/kubernetes/issues/86094" rel="noreferrer">Node cannot join #86094</a>.</p>
<p>Just editing /var/lib/kubelet/config.yaml to add:</p>
<pre><code>featureGates:
CSIMigration: false
</code></pre>
<p>... at the end of the file seemed to allow the cluster to start as expected.</p>
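<p>After changing the config file, you will typically need to restart the kubelet for the setting to take effect:</p>
<pre><code>sudo systemctl restart kubelet
</code></pre>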
|
<p>I'm trying to deploy a RabbitMQ pod in my Kubernetes cluster, so I'm using the RabbitMQ image hosted by Google: <a href="https://github.com/GoogleCloudPlatform/rabbitmq-docker/blob/master/3/README.md#connecting-to-a-running-rabbitmq-container-kubernetes" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/rabbitmq-docker/blob/master/3/README.md#connecting-to-a-running-rabbitmq-container-kubernetes</a></p>
<p>The documentation says, under "Starting a RabbitMQ instance":</p>
<p>Replace <code>your-erlang-cookie</code> with a valid cookie value. For more information, see <code>RABBITMQ_ERLANG_COOKIE</code> in Environment Variables.</p>
<p>Copy the following content to a pod.yaml file, and run <code>kubectl create -f pod.yaml</code>.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: some-rabbitmq
labels:
name: some-rabbitmq
spec:
containers:
- image: launcher.gcr.io/google/rabbitmq3
name: rabbitmq
env:
- name: "RABBITMQ_ERLANG_COOKIE"
value: "unique-erlang-cookie"
</code></pre>
<p>How can I generate the Erlang cookie? I have found nothing after days of searching the internet. I have RabbitMQ installed on my Windows machine, and I never generated an Erlang cookie there.</p>
<p>How can I do this? Thanks</p>
| <p>It's any unique value; the only constraint is that every connected instance of RabbitMQ (that is, every Pod in your StatefulSet) has the same cookie value.</p>
<p>A good way to specify this is with a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Secret</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: RABBITMQ_ERLANG_COOKIE
valueFrom:
secretKeyRef:
name: rabbitmq
key: erlangCookie
</code></pre>
<p>This requires you to create the Secret. As a starting point, you can run a one-off imperative command to create a random Secret:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret generic rabbitmq \
--from-literal=erlangCookie=$(dd if=/dev/urandom bs=30 count=1 | base64)
</code></pre>
<p>For actual production use you'd need to store that credential somewhere safe and be able to inject (or recreate) it at deployment time. Managing that is a little beyond the scope of this question.</p>
|
<p>Please! Is it possible to squash multiple helm templates into one and then refer to it as a one-liner in a deployment file?</p>
<p>EG: </p>
<pre><code> {{- define "foo.deploy" -}}
value:
{{- include "foo.1" . | nindent 6 }}
{{- include "foo.2" . | nindent 6 }}
{{- include "foo.3" . | nindent 6 }}
</code></pre>
<p>And then do an <strong>{{- include "foo.deploy" . }}</strong> in a separate deployment file.</p>
<p>Which should then contain foo.1, foo.2 and foo.3, and their respective definitions.</p>
<p>As opposed to literally writing out all three different 'includes' especially if you've got loads.</p>
<p>Much appreciated,</p>
<p>Thanks,</p>
| <p>A <strong>named template</strong> (sometimes called a partial or a subtemplate) is simply a template defined inside a file and given a name.</p>
<p>Template names are global. As a result, if two templates are declared with the same name, the last occurrence is the one that is used. Since templates in subcharts are compiled together with top-level templates, it is best to give your templates chart-specific names. A popular naming convention is to prefix each defined template with the name of the chart: <code>{{ define "mychart.labels" }}</code>.</p>
<p>You can find more information about named templates here: <a href="https://helm.sh/docs/chart_template_guide/named_templates/" rel="nofollow noreferrer">named-template</a>.</p>
<p>A proper configuration file should look like this:</p>
<pre><code>{{/* Generate basic labels */}}
{{- define "mychart.labels" }}
labels:
generator: helm
date: {{ now | htmlDate }}
{{- end }}
</code></pre>
<p>In your case, that part of the file should look like:</p>
<pre><code>{{- define "foo.deploy" -}}
{{- include "foo.1" . | nindent 6 }}
{{- include "foo.2" . | nindent 6 }}
{{- include "foo.3" . | nindent 6 }}
{{ end }}
</code></pre>
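<p>You can then consume the combined template from a deployment file with a single include, e.g. (indentation depends on where the block lands in your manifest):</p>
<pre><code>spec:
  template:
    metadata:
      annotations:
        {{- include "foo.deploy" . | nindent 8 }}
</code></pre>
<p>This expands <code>foo.1</code>, <code>foo.2</code>, and <code>foo.3</code> in one line instead of three separate includes.</p>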
|
<p>I have deployed a 3 pod mongodb statefulset in kubernetes and I am attempting to use the new <code>mongodb+srv</code> connection string (mongodb 3.6) to connect to the headless k8s service that has the SRV records for the cluster members.</p>
<p>However, the connection fails as follows (the mongo command is being executed on the first pod in the statefulset):</p>
<pre><code>root@mongodb-0:/# mongo "mongodb+srv://mongodb-headless.mongo.svc.cluster.local"
FailedToParse: Hostname mongodb-0.mongodb-headless.mongo.svc.cluster.local. is not within the domain mongo.svc.cluster.local
try 'mongo --help' for more information
</code></pre>
<p>Here is the headless service configuration:</p>
<pre><code>kubectl describe svc/mongodb-headless -n mongo
Name: mongodb-headless
Namespace: mongo
Labels: app=mongodb-headless
chart=mongodb-1.0.1
heritage=Tiller
release=mongo
Annotations: service.alpha.kubernetes.io/tolerate-unready-endpoints=true
Selector: app=mongodb,release=mongo
Type: ClusterIP
IP: None
Port: mongodb 27017/TCP
TargetPort: 27017/TCP
Endpoints:
192.168.16.8:27017,192.168.208.3:27017,192.168.64.9:27017
Session Affinity: None
Events: <none>
</code></pre>
<p>The mongodb cluster is functional and I can connect to the members over localhost or using a separate (non-headless) service (e.g. <code>mongo "mongodb://mongodb.mongo.svc.cluster.local"</code>).</p>
<p>Am I missing something in the <code>mongodb+srv</code> requirements/implementation or do I need to adjust something in my k8s deployment?</p>
| <p>Connections using <code>mongodb+srv://</code> <a href="http://docs.mongodb.com/manual/reference/connection-string/#urioption.tls" rel="nofollow noreferrer">use SSL/TLS by default</a>. You need to disable this manually by adding <code>tls=false</code> or <code>ssl=false</code>.</p>
<p>The following connection URI works for me on a MongoDB 4.2 three-member replica set on GKE.</p>
<pre><code>mongo "mongodb+srv://svc-headless.my-namespace.cluster.local/?tls=false&ssl=false"
</code></pre>
<p>Adding the <code>replicaSet</code> query option may also help you connect to the right replica set:</p>
<pre><code>mongo "mongodb+srv://svc-headless.my-namespace.cluster.local/?tls=false&ssl=false&replicaSet=rs0"
</code></pre>
|
<p>I am trying to build a Docker image for a .NET Core app on Windows, which I am planning to host on Kubernetes,</p>
<p>with the following details:</p>
<pre><code>#docker file
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 8989
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["amazing-app.csproj", "amazing-app/"]
RUN dotnet restore "amazing-app/amazing-app.csproj"
COPY . .
WORKDIR "/src/amazing-app"
RUN dotnet build "amazing-app.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "amazing-app.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "amazing-app.dll"]
</code></pre>
<p>directory Structure is >></p>
<p><a href="https://i.stack.imgur.com/eB3af.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eB3af.png" alt="enter image description here"></a></p>
<p>After running <code>docker build -t amazing-app-one .</code> I am getting the following error at step 10/16. The application builds and runs locally, but Docker is not able to build an image from it.</p>
<pre><code>$> <Path>\amazing-app>docker build -t amazing-app-one .
Sending build context to Docker daemon 5.741MB
Step 1/16 : FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
---> 08096137b740
Step 2/16 : WORKDIR /app
---> Running in 14f5f9b7b3e5
Removing intermediate container 14f5f9b7b3e5
---> ae4846eda3f7
Step 3/16 : EXPOSE 8989
---> Running in 0f464e383641
Removing intermediate container 0f464e383641
---> 6b855b84749e
Step 4/16 : FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
---> 9817c25953a8
Step 5/16 : WORKDIR /src
---> Running in 5a8fc99a3ecf
Removing intermediate container 5a8fc99a3ecf
---> 694d5063e8e6
Step 6/16 : COPY ["amazing-app.csproj", "amazing-app/"]
---> 450921f630c3
Step 7/16 : RUN dotnet restore "amazing-app/amazing-app.csproj"
---> Running in ddb564e875be
Restore completed in 1.81 sec for /src/amazing-app/amazing-app.csproj.
Removing intermediate container ddb564e875be
---> b59e0c1dfb4d
Step 8/16 : COPY . .
---> 695977f3b543
Step 9/16 : WORKDIR "/src/amazing-app"
---> Running in aa5575c99ce3
Removing intermediate container aa5575c99ce3
---> 781b4c552434
Step 10/16 : RUN dotnet build "amazing-app.csproj" -c Release -o /app/build
---> Running in 3a602c34b5a9
Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
Restore completed in 36.78 ms for /src/amazing-app/amazing-app.csproj.
CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point [/src/amazing-app/amazing-app.csproj]
Build FAILED.
CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point [/src/amazing-app/amazing-app.csproj]
0 Warning(s)
1 Error(s)
Time Elapsed 00:00:02.37
The command '/bin/sh -c dotnet build "amazing-app.csproj" -c Release -o /app/build' returned a non-zero code: 1
</code></pre>
<p>find the source code on <a href="https://github.com/Kundan22/amazing-app/tree/master/amazing-app" rel="nofollow noreferrer">github link</a></p>
<p>Am I missing something here? Any help is much appreciated.</p>
| <p>The issue is in the folder structure. You copied your source into the <code>/src</code> folder (see lines 8 and 11 of the Dockerfile), but the *.csproj file is copied into the <code>amazing-app</code> subfolder, so the project is built without the rest of the source files next to it.</p>
<p>You can check this by running an "intermediate" image, e.g. <code>781b4c552434</code> (the hash of the image just before the failing step), like so: <code>docker run -it --rm --name debug 781b4c552434 bash</code>, and then inspecting the file system with <code>ls</code> to see that the source code and the csproj file are located in different places.</p>
<p>So I suggest moving the Dockerfile to the root directory (next to .dockerignore) and updating the path to the csproj file (you also need to run the <code>docker build</code> command from the root directory). This Dockerfile should work:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 8989
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["amazing-app/amazing-app.csproj", "amazing-app/"]
RUN dotnet restore "amazing-app/amazing-app.csproj"
COPY . .
WORKDIR "/src/amazing-app"
RUN dotnet build "amazing-app.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "amazing-app.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "amazing-app.dll"]
</code></pre>
|
<p>After learning that we should have used a <code>StatefulSet</code> instead of a <code>Deployment</code> in order to be able to attach the same persistent volume to multiple pods and especially pods on different nodes, I tried changing our config accordingly.</p>
<p>However, even when using the same name for the volume claim as before, it seems to be creating an entirely new volume instead of using our existing one, hence the application loses access to the existing data when run as a <code>StatefulSet</code>.</p>
<p>Here's the volume claim part of our current <code>Deployment</code> config:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>This results in a claim with the same name.</p>
<p>And here's the template for the <code>StatefulSet</code>:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>This results in new claims for every pod, with the pod name and an ID per claim, like e.g. <code>gitea-server-data-gitea-server-0</code>.</p>
<p>The new claims are now using a new volume instead of the existing one. So I tried specifying the existing volume explicitly, like so:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
volumeName: pvc-c87ff507-fd77-11e8-9a7b-420101234567
resources:
requests:
storage: 20Gi
</code></pre>
<p>However, this results in pods failing to be scheduled and the new claim being "pending" indefinitely:</p>
<blockquote>
<p>pod has unbound immediate PersistentVolumeClaims (repeated times)</p>
</blockquote>
<p>So the question is: how can we migrate the volume claim(s) in a way that allows us to use the existing persistent volume and access the current application data from a new <code>StatefulSet</code> instead of the current <code>Deployment</code>?</p>
<p>(In case it is relevant, we are using Kubernetes on GKE.)</p>
| <p>In a StatefulSet, when you want to use a PVC to store your data, you define the PVC using <code>volumeClaimTemplates</code>, like:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>In this scenario, the following things could happen:</p>
<ul>
<li>If the StatefulSet name is <code>gitea-server</code> and the replica count is <code>1</code>, then
the only Pod of the StatefulSet will use the PVC named <code>gitea-server-data-gitea-server-0</code> (if it already exists in the cluster) or create a new one with that name (if it doesn't).</li>
<li>If the StatefulSet name is <code>gitea-server</code> and the replica count is <code>2</code>, then
the two Pods of the StatefulSet will use the PVCs named <code>gitea-server-data-gitea-server-0</code> and <code>gitea-server-data-gitea-server-1</code> respectively (if they already exist in the cluster) or create new PVCs with those names (if they don't), and so on.</li>
</ul>
<p>Generally, in a StatefulSet, generated PVC names follow the convention:</p>
<pre><code><volumeClaimTemplates name>-<StatefulSet name>-<Pod ordinal>
</code></pre>
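<p>As a quick illustration of this convention, in plain shell, using the names from this question:</p>

```shell
# A StatefulSet derives each PVC name as:
#   <volumeClaimTemplates name>-<StatefulSet name>-<Pod ordinal>
template="gitea-server-data"   # volumeClaimTemplates name
sts="gitea-server"             # StatefulSet name
replicas=2
for ordinal in $(seq 0 $((replicas - 1))); do
  echo "${template}-${sts}-${ordinal}"
done
# prints:
#   gitea-server-data-gitea-server-0
#   gitea-server-data-gitea-server-1
```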
<p>Now, if you create a PVC named <code>gitea-server-data-gitea-server-0</code> that otherwise looks like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-server-data-gitea-server-0
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>Then, after creating the PVC, if you create a StatefulSet with replica count <code>1</code> and the configuration defined above in <code>volumeClaimTemplates</code>, the StatefulSet will use this PVC (<code>gitea-server-data-gitea-server-0</code>).</p>
<p>You can also use this PVC in other workloads (like a Deployment) by specifying the field <code>spec.accessModes</code> as <code>ReadWriteMany</code> (provided the underlying storage supports that access mode).</p>
|
<p>I want to automate the use a certificate, that is created by <code>cert-manager</code> as documented <a href="https://cert-manager.io/docs/usage/certificate/" rel="nofollow noreferrer">here</a>, in a Helm chart. For example, the YAML below.</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: mypod
labels:
app: mypod
spec:
containers:
- name: mypod
image: repo/image:0.0.0
imagePullPolicy: Always
volumeMounts:
- name: certs
mountPath: /etc/certs
readOnly: true
ports:
- containerPort: 4443
protocol: TCP
volumes:
- name: certs
secret:
secretName: as_created_by_cert-manager
</code></pre>
<p>How do I submit the YAML for getting a <code>Certificate</code> from <code>cert-manager</code> and then plugin the generated <code>Secret</code> into the <code>Pod</code> YAML above, in a Helm chart?</p>
| <p>I am posting David's comment as a community wiki answer as requested by the OP:</p>
<blockquote>
<p>You should be able to write the YAML for the Certificate in the same
chart, typically in its own file. I'd expect it would work to create
them together, the generated Pod would show up as "Pending" in kubectl
get pods output until cert-manager actually creates the matching
Secret. – David Maze</p>
</blockquote>
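<p>A minimal sketch of such a Certificate, placed in its own template file in the same chart (e.g. <code>templates/certificate.yaml</code>). The <code>apiVersion</code> depends on your cert-manager version, and the issuer name, DNS name, and secret name below are placeholders; the secret name must match the <code>secretName</code> referenced by the Pod's volume:</p>

```yaml
apiVersion: cert-manager.io/v1alpha2   # may differ by cert-manager version
kind: Certificate
metadata:
  name: mypod-cert
spec:
  # cert-manager writes the signed certificate into this Secret,
  # which the Pod then mounts as its "certs" volume
  secretName: as-created-by-cert-manager
  dnsNames:
    - mypod.example.com                # assumed hostname
  issuerRef:
    name: my-issuer                    # an Issuer/ClusterIssuer assumed to exist
    kind: ClusterIssuer
```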
|
<p>I am planning to set up Apache NiFi on Kubernetes and take it to production. During my research I didn't find anyone who is using this combination for a production setup.</p>
<p>Is it a good idea to choose this combination? Could you please share your thoughts/experience about it?</p>
<p><a href="https://community.cloudera.com/t5/Support-Questions/NiFi-on-Kubernetes/td-p/203864" rel="noreferrer">https://community.cloudera.com/t5/Support-Questions/NiFi-on-Kubernetes/td-p/203864</a></p>
| <p>I would like to invite you to try a Helm chart at <a href="https://github.com/Datenworks/apache-nifi-helm" rel="nofollow noreferrer">https://github.com/Datenworks/apache-nifi-helm</a></p>
<p>We've been maintaining a 5-node NiFi cluster on GKE (Google Kubernetes Engine) in a production environment without major issues, and it performs pretty well. Please let me know if you find any issues when running this chart in your environment.</p>
|
<p>This question is regarding kubernetes storage.
I am using a local Kubernetes cluster where some of the applications to be deployed need to be backed by PVCs. The PVCs are provisioned dynamically. However, sometimes when there is no storage left on the cluster, the PVC request just gets stuck in a pending state forever.</p>
<p>Is there any way to check the available storage on the Kubernetes cluster? I checked the docs extensively and it is just not clear how to check the remaining storage capacity on a Kubernetes cluster.</p>
<p>Also, as per kubernetes docs the capacity of a node is different and the pvc allocation is bound to the pv which are a completely separate cluster resource just like nodes.</p>
<p>In that case, what storage do I need to check to find out whether there's any space available for, say, an x GB dynamic PVC?
Also, how do I check it?</p>
| <p>You can use <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/" rel="nofollow noreferrer">tools for monitoring resources</a>.</p>
<p>One of it is <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> you can combine it with <a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a> to visualize collected metrics.</p>
<p>Also take a look at <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#monitoring-ephemeral-storage-consumption" rel="nofollow noreferrer">compute-resources-consumption-monitoring</a>.</p>
<p>When local ephemeral storage is used, it is monitored on an ongoing basis by the kubelet. The monitoring is performed by scanning each emptyDir volume, log directories, and writable layers on a periodic basis. Starting with Kubernetes 1.15, emptyDir volumes (but not log directories or writable layers) may, at the cluster operator’s option, be managed by use of project quotas. Project quotas were originally implemented in XFS, and have more recently been ported to ext4fs. Project quotas can be used for both monitoring and enforcement; as of Kubernetes 1.16, they are available as alpha functionality for monitoring only.</p>
<p>Quotas are faster and more accurate than directory scanning. When a directory is assigned to a project, all files created under a directory are created in that project, and the kernel merely has to keep track of how many blocks are in use by files in that project. If a file is created and deleted, but with an open file descriptor, it continues to consume space. This space will be tracked by the quota, but will not be seen by a directory scan.</p>
<p>To enable use of project quotas, the cluster operator must do the following:</p>
<ul>
<li>enable the <code>LocalStorageCapacityIsolationFSQuotaMonitoring=true</code>
feature gate in the kubelet configuration. This defaults to false in
Kubernetes 1.16, so must be explicitly set to true</li>
<li>make sure that the root partition (or optional runtime partition) is
built with project quotas enabled. Note that all XFS filesystems support
project quotas, but ext4 filesystems must be built with them specially enabled.</li>
</ul>
<p>Make sure that the root partition (or optional runtime partition) is mounted with project quotas enabled.</p>
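<p>For reference, the feature gate mentioned above is set in the kubelet configuration file. A minimal sketch (all other kubelet settings omitted):</p>

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # defaults to false in Kubernetes 1.16, so it must be set explicitly
  LocalStorageCapacityIsolationFSQuotaMonitoring: true
```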
|
<p>when running <code>helm install</code> (helm 3.0.2) </p>
<p>I got the following error: Error:
<code>rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: PodSecurityPolicy, namespace: , name: po-kube-state-metrics</code> </p>
<p>But I can't find this resource, and the error doesn't tell me its namespace. How can I remove it?</p>
<p>When running <code>kubectl get all --all-namespaces</code> I see all the resources, but not <code>po-kube-state-metrics</code>... It also happens with other resources. Any idea?</p>
<p>I got the same error for the <code>monitoring-grafana</code> entity, and the result of
<code>kubectl get PodSecurityPolicy --all-namespaces</code> is:</p>
<p><code>monitoring-grafana false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,do</code> </p>
| <p>First of all, you need to make sure you've successfully uninstalled the Helm <code>release</code> before reinstalling. (Note that <code>PodSecurityPolicy</code> is a cluster-scoped resource, which is why the error shows an empty namespace and why it doesn't appear in <code>kubectl get all --all-namespaces</code>.)</p>
<p>To list all the releases, use:</p>
<pre><code>$ helm list --all --all-namespaces
</code></pre>
<p>To uninstall a release, use:</p>
<pre><code>$ helm uninstall <release-name> -n <namespace>
</code></pre>
<p>You can also use <code>--no-hooks</code> to skip running hooks for the command:</p>
<pre><code>$ helm uninstall <release-name> -n <namespace> --no-hooks
</code></pre>
<p>If uninstalling doesn't solve your problem, you can try the following command to cleanup:</p>
<pre><code>$ helm template <NAME> <CHART> --namespace <NAMESPACE> | kubectl delete -f -
</code></pre>
<p>Sample:</p>
<pre><code>$ helm template happy-panda stable/mariadb --namespace kube-system | kubectl delete -f -
</code></pre>
<p>Now, try installing again.</p>
<p><strong>Update:</strong></p>
<p>Let's consider that your chart name is <code>mon</code> and your release name is <code>po</code>. Since you are in the charts directory (<code>.</code>) like below:</p>
<pre><code>.
├── mon
│ ├── Chart.yaml
│ ├── README.md
│ ├── templates
│ │ ├── one.yaml
│ │ ├── two.yaml
│ │ ├── three.yaml
│ │ ├── _helpers.tpl
│ │ ├── NOTES.txt
│ └── values.yaml
</code></pre>
<p>Then you can skip the helm repo name (i.e. stable) in the <code>helm template</code> command. <code>Helm</code> will use your <code>mon</code> chart from the directory.</p>
<pre><code>$ helm template po mon --namespace mon | kubectl delete -f -
</code></pre>
|
<p>I am using istioctl to install istio in an EKS cluster. However, for the moment I will be using an nginx ingress for externally facing services. How can I just deploy the istio service internally, or at least avoid the automatically created ELB?</p>
| <p>You can do it by editing <a href="https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">istio-ingressgateway</a>.</p>
<p>Change <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">service type</a> from </p>
<p><strong>LoadBalancer</strong> -> Exposes the Service externally using a cloud provider’s load balancer</p>
<p>to </p>
<p><strong>ClusterIP</strong> -> Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. </p>
<p>Let's edit ingressgateway </p>
<pre><code>kubectl edit svc istio-ingressgateway -n istio-system
</code></pre>
<p>Then change the type from <strong>LoadBalancer</strong> to <strong>ClusterIP</strong>, and comment out (<strong>#</strong>) or <strong>delete</strong> every <code>nodePort</code> entry, since you won't use them anymore. They must be commented out or deleted, because otherwise the edit fails to save and nothing happens.</p>
<p><strong>EDIT</strong></p>
<blockquote>
<p>I can do this at install with istioctl using a values.yaml file?</p>
</blockquote>
<p>Yes, it's possible. </p>
<p>This is the value you need to change:</p>
<blockquote>
<p>values.gateways.istio-ingressgateway.type</p>
</blockquote>
<p>Example:</p>
<p>Creating a manifest for the Istio demo profile with a ClusterIP ingress gateway:</p>
<pre><code>istioctl manifest generate --set profile=demo --set values.gateways.istio-ingressgateway.type="ClusterIP" > $HOME/generated-manifest.yaml
</code></pre>
|
<p>After learning that we should have used a <code>StatefulSet</code> instead of a <code>Deployment</code> in order to be able to attach the same persistent volume to multiple pods and especially pods on different nodes, I tried changing our config accordingly.</p>
<p>However, even when using the same name for the volume claim as before, it seems to be creating an entirely new volume instead of using our existing one, hence the application loses access to the existing data when run as a <code>StatefulSet</code>.</p>
<p>Here's the volume claim part of our current <code>Deployment</code> config:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>This results in a claim with the same name.</p>
<p>And here's the template for the <code>StatefulSet</code>:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>This results in new claims for every pod, with the pod name and an ID per claim, like e.g. <code>gitea-server-data-gitea-server-0</code>.</p>
<p>The new claims are now using a new volume instead of the existing one. So I tried specifying the existing volume explicitly, like so:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: gitea-server-data
labels:
app: gitea
spec:
accessModes:
- ReadWriteOnce
volumeName: pvc-c87ff507-fd77-11e8-9a7b-420101234567
resources:
requests:
storage: 20Gi
</code></pre>
<p>However, this results in pods failing to be scheduled and the new claim being "pending" indefinitely:</p>
<blockquote>
<p>pod has unbound immediate PersistentVolumeClaims (repeated times)</p>
</blockquote>
<p>So the question is: how can we migrate the volume claim(s) in a way that allows us to use the existing persistent volume and access the current application data from a new <code>StatefulSet</code> instead of the current <code>Deployment</code>?</p>
<p>(In case it is relevant, we are using Kubernetes on GKE.)</p>
| <p>OK, so I spent quite some time trying out all kinds of different configs until finally learning that GCE persistent disks simply don't support <code>ReadWriteMany</code> to begin with.</p>
<p>The <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">GKE docs</a> go out of their way to never explicitly mention that you cannot actually mount <em>any</em> normal GKE persistent volume on multiple pods/nodes.</p>
<p>Apparently, the only way to get shared file storage between pods is to deploy either your own NFS/Gluster/etc. or to cough up a bunch of money and <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">use Google Cloud Filestore</a>, for which there is a GKE storage class, and which can indeed be mounted on multiple pods.</p>
<p>Unfortunately, that's not an option for this app, as Filestore pricing begins with 1TB minimum capacity at a whopping $0.20/GB/month, which means that <strong>the cheapest option available costs around $205 per month</strong>. We currently pay around $60/month, so that would more than triple our bill, simply to get rolling deployments without errors.</p>
|
<p>I am confused about the multi-container Pod design patterns<br>
(sidecar, adapter, ambassador).</p>
<p>What I understand is :<br>
<strong>Sidecar</strong> : container + container(share same resource and do other functions)<br>
<strong>Adapter</strong> : container + adapter(for checking other container's status. <em>e.g. monitoring</em>)<br>
<strong>Ambassador</strong> : container + proxy(to networking outside) </p>
<blockquote>
<p>But, According to <a href="https://istio.io/docs/setup/additional-setup/sidecar-injection/#injection" rel="noreferrer">Istio -Installing the Sidecar</a>, They introduce proxy as a sidecar pattern. </p>
</blockquote>
<p>Adapter is container, and Proxy is container too. </p>
<p>So, My question is <strong>What is differences between Sidecar pattern and Adapter&Ambassador pattern?</strong> </p>
<p>Is the Sidecar pattern concept contain Adapter&Ambassador pattern?</p>
| <p>First, you are right, the term <strong>sidecar</strong> container has now became a word for describing <em>an extra container</em> in your pod. <a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/" rel="noreferrer">Originally(?)</a> it was a specific multi-container design pattern.</p>
<h2>Multi-container design patterns</h2>
<p><strong>Sidecar pattern</strong></p>
<p>An extra container in your pod to <strong>enhance</strong> or <strong>extend</strong> the functionality of the main container.</p>
<p><strong>Ambassador pattern</strong></p>
<p>A container that <strong>proxy the network connection</strong> to the main container.</p>
<p><strong>Adapter pattern</strong></p>
<p>A container that <strong>transform output</strong> of the main container.</p>
<p>This is taken from the original article from 2015: <a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/" rel="noreferrer">Patterns for Composite Containers</a></p>
<h2>Summary</h2>
<p>Your note on</p>
<blockquote>
<p>But, According to Istio -Installing the Sidecar, They introduce proxy as a sidecar pattern.</p>
</blockquote>
<p>In the patterns above, both <em>Ambassador</em> and <em>Adapter</em> must in fact <strong>proxy</strong> the network connection, but they do it for different purposes. With Istio, this is done e.g. to terminate the <strong>mTLS</strong> connection, collect metrics, and more, in order to <strong>enhance</strong> your main container. So it actually is a <strong>sidecar pattern</strong>, but confusingly, as you correctly pointed out, all three patterns <em>proxy the connection</em> - just for different purposes.</p>
|
<p>The use case is to get the environment variable *COUNTRY from all the pods running in a namespace </p>
<pre><code>kubectl get pods podname -n namespace -o 'jsonpath={.spec.containers[0].env[?(@.name~="^COUNTRY")].value}'
</code></pre>
<p>This does not seem to work. Any leads?</p>
| <p>You can retrieve this information using the following command: </p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.spec.containers[*].env[*].name}{"\t"}{.spec.containers[*].env[*].value}{"\n"}{end}' | grep COUNTRY | cut -f 2
</code></pre>
<p>It will return the variables' content as follows:</p>
<pre><code>$ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.spec.containers[*].env[*].name}{"\t"}{.spec.containers[*].env[*].value}{"\n"}{end}' | grep COUNTRY | cut -f 2
123456
7890123
</code></pre>
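<p>If you want to see what the <code>grep</code>/<code>cut</code> stage does in isolation, you can feed it simulated output. The jsonpath above prints, per pod, the env names, a tab, then the values; the simulated pods below each have a single variable, and the names/values are made up:</p>

```shell
# Simulated jsonpath output: one "NAME<TAB>VALUE" pair per pod line
printf 'COUNTRY\tUS\nDB_HOST\tpostgres\nCOUNTRY\tDE\n' | grep COUNTRY | cut -f 2
# prints:
#   US
#   DE
```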
|
<p>Not sure if this is a Kubernetes, <code>ingress-nginx</code>, or ReactJS (<code>create-react-app</code>) issue...</p>
<p><strong>Project Structure</strong></p>
<pre><code>new-app/
client/
src/
App.js
index.js
Test.js
package.json
k8s/
client.yaml
ingress.yaml
server/
skaffold.yaml
</code></pre>
<p><strong>Issue</strong></p>
<ul>
<li>ReactJS front-end should be running at <code>192.168.64.5/client</code> when the cluster is spun up with Skaffold</li>
<li>Navigate to <code>192.168.64.5/client</code> and blank screen</li>
<li>Developer Console shows:</li>
</ul>
<p><a href="https://i.stack.imgur.com/5TE2g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5TE2g.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/PYarq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PYarq.png" alt="enter image description here"></a></p>
<p>Basically, it is trying to serve static files from <code>/static</code>, but I need them to come from <code>/client/static</code></p>
<p><strong>Proposed Solutions</strong></p>
<p>Assuming this is a ReactJS issue, the solutions have been the following:</p>
<ul>
<li>Setting <code>homepage</code> in the <code>package.json</code></li>
<li>Setting <code>basename</code> in the <code>react-router-dom</code></li>
<li>Following <a href="https://skryvets.com/blog/2018/09/20/an-elegant-solution-of-deploying-react-app-into-a-subdirectory/" rel="nofollow noreferrer">this "elegant" solution</a></li>
</ul>
<p>None seem to work in my case. Still shows assets trying to be served from <code>/static</code> instead of <code>/client/static</code>.</p>
<p><strong>ReactJS App Code</strong></p>
<pre><code>// App.js
import React from 'react';
import { BrowserRouter, Route } from 'react-router-dom';
import './App.css';
import Test from './Test';
const App = () => {
return (
<BrowserRouter
basename='/client'>
<>
<Route exact path='/' component={Test} />
</>
</BrowserRouter>
);
}
export default App;
</code></pre>
<pre><code>// Test.js
import React, { useState, useEffect } from 'react';
import logo from './logo.svg';
import './App.css';
import axios from 'axios';
const Test = () => {
const [data, setData] = useState('');
useEffect(() => {
const fetchData = async () => {
const result = await axios('/api/auth/test/');
setData(result.data);
};
fetchData();
}, []);
return (
<div className='App'>
<header className='App-header'>
<img src={logo} className='App-logo' alt='logo' />
<p>
Edit <code>src/App.js</code> and save to reload!
</p>
<p>
{data}
</p>
<a
className='App-link'
href='https://reactjs.org'
target='_blank'
rel='noopener noreferrer'
>
Learn React
</a>
</header>
</div>
)
};
export default Test;
</code></pre>
<pre><code>{
"name": "client",
"version": "0.1.0",
"private": true,
"homepage": "/client",
"dependencies": {
"axios": "^0.19.0",
"react": "^16.12.0",
"react-dom": "^16.12.0",
"react-router-dom": "^5.1.2",
"react-scripts": "3.2.0"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": "react-app"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
</code></pre>
<p><strong>Kubernetes/Skaffold and Docker Manifests</strong></p>
<pre><code># Dockerfile.dev
FROM node:alpine
EXPOSE 3000
WORKDIR '/app'
COPY ./client/package.json ./
RUN npm install
COPY ./client .
CMD ["npm", "run", "start"]
</code></pre>
<pre><code># ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/add-base-url: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /client/?(.*)
backend:
serviceName: client-cluster-ip-service
servicePort: 3000
</code></pre>
<pre><code># client.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: client-deployment
spec:
replicas: 3
selector:
matchLabels:
component: client
template:
metadata:
labels:
component: client
spec:
containers:
- name: client
image: clientappcontainers.azurecr.io/client
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: client-cluster-ip-service
spec:
type: ClusterIP
selector:
component: client
ports:
- port: 3000
targetPort: 3000
</code></pre>
<pre><code># skaffold.yaml
apiVersion: skaffold/v1beta15
kind: Config
build:
local:
push: false
artifacts:
- image: clientappcontainers.azurecr.io/client
docker:
dockerfile: ./client/Dockerfile.dev
sync:
manual:
- src: "***/*.js"
dest: .
- src: "***/*.html"
dest: .
- src: "***/*.css"
dest: .
deploy:
kubectl:
manifests:
- manifests/ingress.yaml
- manifests/client.yaml
</code></pre>
<p>So what am I doing wrong here?</p>
<p><strong>EDIT:</strong></p>
<p>I should note that things work fine when doing this though:</p>
<pre><code>- path: /?(.*)
backend:
serviceName: client-cluster-ip-service
servicePort: 3000
</code></pre>
<p>Repo to demo the issue:</p>
<p><a href="https://github.com/eox-dev/subdir-issue" rel="nofollow noreferrer">https://github.com/eox-dev/subdir-issue</a></p>
| <p>Apparently, this is not possible currently in CRA in a dev environment:</p>
<p><a href="https://github.com/facebook/create-react-app/issues/8222#issuecomment-568308139" rel="nofollow noreferrer">https://github.com/facebook/create-react-app/issues/8222#issuecomment-568308139</a></p>
|
<p>I have a Kubernetes cluster (Azure) with nginx-ingress, cert-manager and one application, and I was wondering if there's a way of packaging the whole system as one single unit, so that if there's any problem with the cluster it'd be easier to spin up another one.</p>
<p>My main idea is to make the manual configuration of nginx-ingress and cert-manager automatic, but I'm not sure how it'd be included in a Helm chart, if that's possible. If not, is there a way (or tool) to minimise the manual configuration of environments when spinning one up?</p>
<p>Thanks in advance.</p>
| <p>You need to bundle multiple Helm charts together here, likely as follows:</p>
<ol>
<li>Helm chart of your services</li>
<li>Other dependency services helm charts (like redis, kafka etc)</li>
<li>Nginx helm chart</li>
<li>Then you can add cert-manager's helm chart, which carries the cluster issuer and certificate; it can either create the TLS secret itself, under a name you specify (or you can add your own secret)</li>
<li>Then adding the ingress rule will place things inline for you.</li>
</ol>
<p>Just package everything up in the Helm chart <code>.tgz</code> format.</p>
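<p>A sketch of what the umbrella chart's <code>Chart.yaml</code> could look like (Helm 3 style; the chart names, versions, and repository URLs below are placeholders, not verified values):</p>

```yaml
apiVersion: v2
name: my-platform          # umbrella chart bundling everything
version: 0.1.0
dependencies:
  - name: nginx-ingress    # placeholder chart name/version
    version: 1.29.0
    repository: https://example.com/charts
  - name: cert-manager     # placeholder
    version: 0.13.0
    repository: https://example.com/charts
  - name: my-app           # your own application's chart
    version: 0.1.0
    repository: file://../my-app
```

<p>After <code>helm dependency update</code>, running <code>helm package</code> on the umbrella chart produces a single <code>.tgz</code> carrying all of it.</p>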
|
<p>I am deploying the Kubernetes UI (dashboard) using this command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>start proxy:</p>
<pre><code>kubectl proxy --address='172.19.104.231' --port=8001 --accept-hosts='^*$'
</code></pre>
<p>access ui:</p>
<pre><code>curl http://172.19.104.231:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
http://kubernetes.example.com/api/v1/namespaces/kube-system/services/kube-ui/#/dashboard/
</code></pre>
<p>the log output:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# curl http://172.19.104.231:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "services \"kubernetes-dashboard\" not found",
"reason": "NotFound",
"details": {
"name": "kubernetes-dashboard",
"kind": "services"
},
"code": 404}
</code></pre>
<p>How can I fix the problem? Checking pod status:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get pod --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-7d75c474bb-b2lwd 0/1 Pending 0 34h
</code></pre>
| <p>If you use K8S dashboard v2.0.0-betax,</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
</code></pre>
<p>Then use this to access the dashboard:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
</code></pre>
<p>If you use K8S dashboard v1.10.1,</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>Then use this to access the dashboard:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</code></pre>
<p>I also faced the same problem, but then I realized that dashboard v2.0.0-betax and v1.10.1 use different namespaces. The latest version uses the <code>kubernetes-dashboard</code> namespace, but the older one uses <code>kube-system</code>.</p>
|
<p>From the <a href="https://azure.microsoft.com/en-us/pricing/details/api-management/" rel="nofollow noreferrer">Azure API management pricing page</a> I see that Virtual Networks aren't supported besides the Developer and Premium tiers.</p>
<p>Currently with my developer tier subscription when configuring the VN of an APIM I can choose between "off", "External" and "Internal". With the other tiers, can I still use an External VN or no VN at all?</p>
<p>When I try to connect a kubernetes cluster/VM to the APIM, I have to configure the APIM with an external VN. So if that's not possible with the other subscription tiers, is it still possible to connect to a kubernetes cluster?</p>
| <p>VNET support (and hence the options) is available only in the Developer and Premium tiers.</p>
<p>You can still use APIM by routing requests from APIM to the <a href="https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard" rel="nofollow noreferrer">AKS load balancer</a> using a <a href="https://learn.microsoft.com/en-us/azure/aks/static-ip" rel="nofollow noreferrer">static IP</a> and overriding the <code>Host</code> header as required. If possible, you could also use <a href="https://github.com/Azure/application-gateway-kubernetes-ingress" rel="nofollow noreferrer">Azure Application Gateway as an ingress controller</a> too.</p>
<p><strong>When taking the load balancer approach,</strong> you could setup a <a href="https://learn.microsoft.com/en-us/azure/virtual-network/security-overview" rel="nofollow noreferrer">Network Security Group</a> to allow traffic only from APIM (and any other IPs/Services) to your AKS nodes.</p>
<p><strong>When taking the Application Gateway approach,</strong> you could setup <a href="https://learn.microsoft.com/en-us/azure/application-gateway/configuration-overview#allow-application-gateway-access-to-a-few-source-ips" rel="nofollow noreferrer">IP restrictions</a> to APIM.</p>
<p>You should be able to setup a similar source IP rule on your own ingress controller too instead of an NSG rule I suppose.</p>
|
<p>I need to grant access to one deployment and all pods of this deployment using RBAC.
I've managed to configure <code>Role</code> and <code>RoleBinding</code> for the <code>deploymet</code>, and it's working fine:</p>
<pre><code>---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: <my-namespace>
name: <deployment>-manager-role
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments"]
resourceNames: ["<deployment>"]
verbs: ["get", "list", "watch", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: <deployment>-manager-binding
namespace: <my-namespace>
subjects:
- kind: User
name: <username>
apiGroup: ""
roleRef:
kind: Role
name: <deployment>-manager-role
apiGroup: ""
</code></pre>
<p>Using this role user can access, update and patch the deployment. This deployment creates pods with dynamic names (like <code><deployment>-5594cbfcf4-v4xx8</code>). I tried to allow this user to access these pods (get, list, watch, read logs, exec, delete) using deployment name and using deployment name + wildcard char <code>*</code>:</p>
<pre><code>---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: <my-namespace>
name: <deployment>-pods-manager-role
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["pods"]
resourceNames: ["<deployment>*"]
verbs: ["get", "list", "watch", "update", "patch", "exec", "delete"]
</code></pre>
<p>I also updated the role binding. But when I try to get the pod:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl --context=<username>-ctx -n <namespace> get pods <deployment>-5594cbfcf4-v4xx8
</code></pre>
<p>I'm getting error:</p>
<blockquote>
<p>Error from server (Forbidden): pods "<deployment>-5594cbfcf4-v4xx8" is forbidden: User "<username>" cannot get resource "pods" in API group "" in the namespace "<namespace>"</p>
</blockquote>
<p>If I add <code><deployment>-5594cbfcf4-v4xx8</code> to the list of <code>resourceNames</code>, user can access this pod.</p>
<p>Is it possible to grant access to the specific pods based on deployment name?</p>
| <p>In Kubernetes, pods are considered ephemeral "cattle"; they come and go. You shouldn't try to manage RBAC per pod.</p>
<p>In your use case, there is unfortunately no way to grant a role over a set of pods matching a certain name, because the <code>resourceNames</code> field <strong>doesn't support patterns like prefixes/suffixes</strong>. Don't get confused: a single asterisk character (<code>'*'</code>) has a special meaning that means "all", but it's not a pattern. So, <code>'my-app-*'</code> in <code>resourceNames</code> will not work. There were tickets opened for this feature, but it wasn't implemented:<br>
<a href="https://github.com/kubernetes/kubernetes/issues/56582" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/56582</a></p>
<p>There was also a request to be able to manage RBAC over labels, but that feature isn't implemented either:<br>
<a href="https://github.com/kubernetes/kubernetes/issues/44703" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/44703</a></p>
<p>Therefore, you probably need to change your model to grant roles to users to manage all pods in a certain namespace. Your deployment should be the only "source of pods" in that namespace. That way, you will not need to specify any resource names.</p>
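<p>Following that model, the pod role drops <code>resourceNames</code> entirely. A sketch based on the role from the question — note that log access goes through the <code>pods/log</code> subresource, and <code>exec</code> is authorized as a <code>create</code> on the <code>pods/exec</code> subresource rather than as a verb on <code>pods</code>:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: <my-namespace>
  name: <deployment>-pods-manager-role
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]   # kubectl exec issues a create on pods/exec
</code></pre>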
|
<p>I have installed Tekton on a private kubernetes cluster. After that I wanted to create a first resource but got an exception:</p>
<p><strong>Internal error occurred: failed calling webhook "webhook.tekton.dev": Post <a href="https://tekton-pipelines-webhook.tekton-pipelines.svc:443/?timeout=30s" rel="nofollow noreferrer">https://tekton-pipelines-webhook.tekton-pipelines.svc:443/?timeout=30s</a>: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)</strong></p>
<p>As far as I know it is because of a restriction on the private cluster. My question is whether it is possible to change the port in the POST url to use 8443 instead of 443?</p>
| <p>You need to manually define a firewall rule to handle your Tekton webhook requests.
For example:
<a href="https://i.stack.imgur.com/AYxEt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AYxEt.png" alt="enter image description here"></a></p>
<p>Assuming that 10.44.0.0/14 is your endpoints network:</p>
<pre><code>Name: test
Namespace: tekton-pipelines
Labels: app.kubernetes.io/component=webhook-controller
app.kubernetes.io/name=tekton-pipelines
Annotations: <none>
Selector: app=tekton-pipelines-webhook
Type: ClusterIP
IP: 10.0.3.240
Port: <unset> 8443/TCP
TargetPort: 8443/TCP
Endpoints: 10.44.2.76:8443
Session Affinity: None
Events: <none>
</code></pre>
<p>You can find full problem explanation here:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/79739" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/79739</a></p>
|
<p>I’m working on installing the prometheus operator. I took the default helm chart and installed it via the following command:</p>
<pre><code>helm install po stable/prometheus-operator -n mon -f values.yaml
</code></pre>
<p>using <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">this</a> chart.</p>
<p>The installation was successful, however it didn’t take the values from the <code>values.yaml</code> file. What am I doing wrong? E.g. we don’t see two replicas of alertmanager (minimal reproducible example). </p>
<p>cat values.yaml</p>
<pre><code>prometheus-operator:
defaultRules:
grafana:
enabled: true
alertmanager:
alertmanagerSpec:
replicas: 2
</code></pre>
<p>the values yaml is on the folder which I execute the command. I'm using Helm 3</p>
| <p>Fairly certain it should look like this. The extra top-level <code>prometheus-operator:</code> key is only needed when the chart is pulled in as a dependency of an umbrella chart; when installing <code>stable/prometheus-operator</code> directly, the values belong at the top level:</p>
<pre><code>grafana:
enabled: true
alertmanager:
enabled: true
alertmanagerSpec:
replicas: 2
</code></pre>
|
<p>We are using Kubernetes on google cloud's Google Kubernetes Engine. Our system dynamically generates instances based on request and these instances call an external web service. The external service generates images and the bandwidth usage per instance is not small.</p>
<p>This external web service has an IP whitelisting configured.</p>
<p>Is there any way that I can funnel all the requests going from the selected pods (they are grouped within a node pool) to the external service with a single IP?</p>
| <p>The answer is <code>Yes</code>; there are actually several ways one can achieve this. I will describe a simple way to get it done: by tunnelling through a proxy server. </p>
<p>It could also be done by assigning external IPs to all your nodes and allowing them from the webservice, but many engineers prefer not to do that, because no one wants to expose the nodes to the external world, for a million security reasons.</p>
<p>Add a separate, very small (maybe nano) VM within the same cluster and install <code>HAProxy</code>, <code>Nginx</code> or your favourite proxy. Or install the proxy on one of the instances you already have, but make sure it has an external IP attached to it, and it should be close to your cluster in order to reduce any latency issues. </p>
<p>Now configure the proxy to accept connections on a particular port and route them to the instance running your external webservice. This is an example of what the HAProxy config would look like:</p>
<pre><code># forward TCP connections arriving on port 2020 to the external web service
listen port_2020
    bind :2020
    mode tcp
    server external-web-service externalwebservice.mycompany.com:443 check
</code></pre>
<p>After the completion of this setup, let's assume your k8s is running masters at <code>10.0.1.0/24</code> and nodes at <code>10.0.2.0/24</code>, and you added this additional proxy service somewhere at <code>10.10.1.101/32</code> with an external ip of <code>52.*.*.*</code> within the same VPC. Now all you have to do is open communication on <code>10.10.1.101</code> to accept communications to <code>port 2020</code> from <code>10.0.2.0/24</code>. </p>
<p>Now your pods have to keep polling <code>10.10.1.101:2020/api/health/check</code> instead of external webservice directly.</p>
<p>And now you can whitelist just the proxy vm ip <code>52.*.*.*</code> on your webservice vm without any issues.</p>
<p>This is just an example of how it could be done. But there are several approaches to get this done. There are many advanced ways of doing this using a sidecar as well.</p>
<p>Hope this is helpful.</p>
|
<p>How can I configure my Deployment to do a health check on the Spring Boot actuator endpoint?
I'm using Spring Boot 2 running on port 9000. (PS: a port-forward test works)</p>
<p>This is the error:</p>
<pre><code>Readiness probe failed: Get http://10.48.0.116:9000/actuator/health: dial tcp 10.48.0.116:9000: connect: connection refused
</code></pre>
<p>And that is my Deployment yml:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: my-api
namespace: vidolin
labels:
stack: api
app: my-api
spec:
replicas: 1
selector:
matchLabels:
stack: api
app: my-api
template:
metadata:
labels:
stack: api
app: my-api
spec:
imagePullSecrets:
- name: itdevregistry
containers:
- name: primary
image: vidolinregistry.azurecr.io/my/api
ports:
- containerPort: 9000
envFrom:
- configMapRef:
name: my-api-config
readinessProbe:
httpGet:
path: /actuator/health
port: 9000
initialDelaySeconds: 10
timeoutSeconds: 2
periodSeconds: 3
failureThreshold: 1
livenessProbe:
httpGet:
path: /actuator/health
port: 9000
initialDelaySeconds: 20
timeoutSeconds: 2
periodSeconds: 8
failureThreshold: 1
</code></pre>
| <p>Although I am replying a bit late, I think my implementation will help people implement Kubernetes readiness/liveness probes for Spring Boot applications in the future.</p>
<p>Description of my docker image</p>
<ol>
<li>Built from alpine linux</li>
<li>Spring Boot 2 application, accessible only via SSL</li>
<li>Spring boot actuator implemented /actuator/health which returns {"status":"UP"}</li>
</ol>
<p><strong>So I could not use the plain httpGet option of kubernetes, which defaults to the http protocol.</strong></p>
<p>That is why I created a small shell script, "check-if-healthy.sh", as part of the docker image to determine the status:</p>
<pre><code>check-if-healthy.sh
===================
curl -k https://localhost:8888/actuator/health | grep '"status":"UP"' > /dev/null && exit 0 || exit 1
</code></pre>
<p>Please note you need to add this script into the docker image, so that it will be available in the running container and accessible to kubernetes (kubernetes will effectively run <strong>"docker exec /bin/ash /home/config-server/check-if-healthy.sh"</strong>), like this:</p>
<pre><code>COPY docker/check-if-healthy.sh /home/config-server/check-if-healthy.sh
</code></pre>
<p>Then use the "exec" option of the kubernetes readiness probe to call the script, like this:</p>
<pre><code> readinessProbe:
exec:
command:
- /bin/ash
- /home/config-server/check-if-healthy.sh
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 50
livenessProbe:
exec:
command:
- /bin/ash
- /home/config-server/check-if-healthy.sh
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 50
</code></pre>
|
<p>My requirement is to scale up pods on a custom metric: as pending messages in the queue increase, the number of pods has to increase to process the jobs. In Kubernetes, scale-up is working fine with prometheus adapter & prometheus operator.</p>
<p>I have long-running processes in the pods, but the HPA checks the custom metric and tries to scale down. Due to this, a process gets killed in the middle of an operation and that message is lost. How can I make the HPA kill only free pods where no process is running? </p>
<h2>AdapterService to collect custom metrics</h2>
<pre><code>- seriesQuery: '{namespace="default",service="hpatest-service"}'
  resources:
    overrides:
      namespace:
        resource: "namespace"
      service:
        resource: "service"
  name:
    matches: "msg_consumergroup_lag"
  metricsQuery: 'avg_over_time(msg_consumergroup_lag{topic="test",consumergroup="test"}[1m])'
</code></pre>
<h2>HPA Configuration</h2>
<pre><code>- type: Object
  object:
    describedObject:
      kind: Service
      name: custommetric-service
    metric:
      name: msg_consumergroup_lag
    target:
      type: Value
      value: 2
</code></pre>
| <p>At present the HPA cannot be configured to accommodate workloads of this nature. The HPA simply sets the replica count on the deployment to a desired value according to the scaling algorithm, and the deployment chooses one or more pods to terminate.</p>
<p>There is a lot of discussion on this topic in <a href="https://github.com/kubernetes/kubernetes/issues/45509" rel="nofollow noreferrer">this Kubernetes issue</a> that may be of interest to you. It is not solved by the HPA, and may never be. There may need to be a different kind of autoscaler for this type of workload. Some suggestions are given in the link that may help you in defining one of these.</p>
<p>If I was to take this on myself, I would create a new controller, with corresponding CRD containing a job definition and the scaling requirements. Instead of scaling deployments, I would have it launch jobs. I would have the jobs do their work (process the queue) until they became idle (no items in the queue) then exit. The controller would only scale up, by adding jobs, never down. The jobs themselves would scale down by exiting when the queue is empty.</p>
<p>This would require that your jobs be able to detect when they become idle, by checking the queue and exiting if there is nothing there. If your queue read blocks forever, this would not work and you would need a different solution.</p>
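<p>A minimal sketch of what such a job could look like — the image name is hypothetical, and the drain-until-empty behaviour is implemented by your worker process itself:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  generateName: queue-worker-   # the controller stamps out one per scale-up
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: myrepo/queue-worker:latest   # hypothetical image
        # the worker consumes messages and exits 0 once the queue is
        # empty, so the job completes instead of being killed mid-task
</code></pre>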
<p>The <a href="https://book.kubebuilder.io" rel="nofollow noreferrer">kubebuilder project</a> has an excellent example of a job controller. I would start with that and extend it with the ability to check your published metrics and start the jobs accordingly.</p>
<p>Also see <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">Fine Parallel Processing Using a Work Queue</a> in the Kubernetes documentation.</p>
|
<p>I am trying to deploy <code>Postgresql</code> through helm on <code>microk8s</code>, but the pod keeps pending, showing a <code>pod has unbound immediate PersistentVolumeClaims</code> error.</p>
<p>I tried creating a <code>pvc</code> and a <code>storageclass</code> inside it, and editing them, but everything keeps pending. </p>
<p>Does anyone know what's preventing the <code>pvc</code> from claiming a <code>pv</code>?</p>
<p><a href="https://i.stack.imgur.com/xOHJh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xOHJh.png" alt="enter image description here"></a></p>
| <blockquote>
<p>on the 'PVC' it shows 'no persistent volumes available for this claim and no storage class is set' Error</p>
</blockquote>
<p>This means that you have to prepare PersistentVolumes for your platform that can be used by your PersistentVolumeClaims (e.g. with correct StorageClass or other requirements)</p>
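<p>On microk8s specifically, the usual fix is to enable the built-in hostpath storage addon (<code>microk8s.enable storage</code>), which creates a default StorageClass so claims can bind. Alternatively, you can pre-create a matching PersistentVolume by hand — a sketch, where the path is a hypothetical host directory and the capacity must cover the PVC request:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 8Gi              # must be >= the PVC request
  accessModes:
    - ReadWriteOnce           # must match the claim's access mode
  hostPath:
    path: /mnt/postgres-data  # hypothetical directory on the node
</code></pre>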
|
<p>I've created Hyper-V machine and tried to deploy Sawtooth on Minikube using Sawtooth YAML file :
<a href="https://sawtooth.hyperledger.org/docs/core/nightly/master/app_developers_guide/sawtooth-kubernetes-default.yaml" rel="noreferrer">https://sawtooth.hyperledger.org/docs/core/nightly/master/app_developers_guide/sawtooth-kubernetes-default.yaml</a></p>
<p>I changed the apiVersion i.e. <code>apiVersion: extensions/v1beta1</code> to <code>apiVersion: apps/v1</code>, though I have launched Minikube in Kubernetes v1.17.0 using this command </p>
<blockquote>
<p>minikube start --kubernetes-version v1.17.0</p>
</blockquote>
<p>After that I can't deploy the server. Command is </p>
<blockquote>
<p>kubectl apply -f sawtooth-kubernetes-default.yaml --validate=false</p>
</blockquote>
<p>It shows an error with "sawtooth-0" is invalid.</p>
<p><a href="https://i.stack.imgur.com/cHCaV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cHCaV.png" alt="enter image description here"></a></p>
<pre><code>---
apiVersion: v1
kind: List
items:
- apiVersion: apps/v1
kind: Deployment
metadata:
name: sawtooth-0
spec:
replicas: 1
selector:
matchLabels:
name: sawtooth-0
template:
metadata:
labels:
name: sawtooth-0
spec:
containers:
- name: sawtooth-devmode-engine
image: hyperledger/sawtooth-devmode-engine-rust:chime
command:
- bash
args:
- -c
- "devmode-engine-rust -C tcp://$HOSTNAME:5050"
- name: sawtooth-settings-tp
image: hyperledger/sawtooth-settings-tp:chime
command:
- bash
args:
- -c
- "settings-tp -vv -C tcp://$HOSTNAME:4004"
- name: sawtooth-intkey-tp-python
image: hyperledger/sawtooth-intkey-tp-python:chime
command:
- bash
args:
- -c
- "intkey-tp-python -vv -C tcp://$HOSTNAME:4004"
- name: sawtooth-xo-tp-python
image: hyperledger/sawtooth-xo-tp-python:chime
command:
- bash
args:
- -c
- "xo-tp-python -vv -C tcp://$HOSTNAME:4004"
- name: sawtooth-validator
image: hyperledger/sawtooth-validator:chime
ports:
- name: tp
containerPort: 4004
- name: consensus
containerPort: 5050
- name: validators
containerPort: 8800
command:
- bash
args:
- -c
- "sawadm keygen \
&& sawtooth keygen my_key \
&& sawset genesis -k /root/.sawtooth/keys/my_key.priv \
&& sawset proposal create \
-k /root/.sawtooth/keys/my_key.priv \
sawtooth.consensus.algorithm.name=Devmode \
sawtooth.consensus.algorithm.version=0.1 \
-o config.batch \
&& sawadm genesis config-genesis.batch config.batch \
&& sawtooth-validator -vv \
--endpoint tcp://$SAWTOOTH_0_SERVICE_HOST:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800"
- name: sawtooth-rest-api
image: hyperledger/sawtooth-rest-api:chime
ports:
- name: api
containerPort: 8008
command:
- bash
args:
- -c
- "sawtooth-rest-api -C tcp://$HOSTNAME:4004"
- name: sawtooth-shell
image: hyperledger/sawtooth-shell:chime
command:
- bash
args:
- -c
- "sawtooth keygen && tail -f /dev/null"
- apiVersion: apps/v1
kind: Service
metadata:
name: sawtooth-0
spec:
type: ClusterIP
selector:
name: sawtooth-0
ports:
- name: "4004"
protocol: TCP
port: 4004
targetPort: 4004
- name: "5050"
protocol: TCP
port: 5050
targetPort: 5050
- name: "8008"
protocol: TCP
port: 8008
targetPort: 8008
- name: "8800"
protocol: TCP
port: 8800
targetPort: 8800
</code></pre>
| <p>You need to fix your deployment <code>yaml</code> file. As you can see from your error message, the <code>Deployment.spec.selector</code> field can't be empty.</p>
<p>Update the <code>yaml</code> (i.e. add <code>spec.selector</code>) as shown in below:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: sawtooth-0
template:
metadata:
labels:
app.kubernetes.io/name: sawtooth-0
</code></pre>
<ul>
<li>Why <code>selector</code> field is important?</li>
</ul>
<p>The <code>selector</code> field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (<code>app.kubernetes.io/name: sawtooth-0</code>). However, more sophisticated selection rules are possible, as long as the Pod template itself satisfies the rule.</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer"> Reference </a></li>
</ul>
<p><strong>Update:</strong></p>
<p>The <code>apiVersion</code> for k8s service is <code>v1</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>- apiVersion: v1 # Update here
kind: Service
metadata:
    name: sawtooth-0
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: sawtooth-0
... ... ...
</code></pre>
|
<p>I deployed a grpc service on EKS and exposed the service using ingress. I deployed a demo https app and it worked. However, I have a problem with the grpc app: the service is running, but when I check the service logs I get an error.<br/> The grpc request does not even reach the server. The log is as follows:</p>
<blockquote>
<p>level=info msg="grpc: Server.Serve failed to create ServerTransport:
connection error: desc = \"transport: http2Server.HandleStreams
received bogus greeting from client: \\"GET / HTTP/1.1\\r\\nHost:
19\\"\"" system=system</p>
</blockquote>
<p>It seems it should receive http2 but it just has <code>HTTP/1.1</code>??</p>
<p>For ingress I tried </p>
<pre><code> alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxxx
alb.ingress.kubernetes.io/load-balancer-attributes: 'routing.http2.enabled=true'
</code></pre>
<p>for service.yml</p>
<pre><code> annotations:
service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2", "http": "HTTP2"}'
</code></pre>
<p>The deployed service itself seems fine. Once I have the ingress deployed, it keeps producing the error above.</p>
| <p>Not sure if Ingress supports HTTP2/gRPC. If you are using GKE, you could try <a href="https://cloud.google.com/endpoints/docs/grpc/get-started-kubernetes-engine" rel="nofollow noreferrer">ESP</a></p>
|
<p>In a scenario where multiple containers run multiple instances of a database, when data in one instance is changed, how is that change applied to the other instances so that data consistency is preserved? Take into consideration that the containers may have been placed on different nodes, so they can't share a common volume.</p>
| <p>Deploying database to Kubernetes in high-availability mode is a quite complex task. </p>
<p>To save a lot of time, please consider using <a href="https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english" rel="nofollow noreferrer">Kubernetes operators</a>. This way you can automate most of complex manual operations. Kubernetes operators allow you to describe desired configuration of an application (or database) in declarative way. </p>
<p>Kubernetes operator in general is a set of additional objects (pods, services, configmaps, secrets, RBAC, custom resources definitions, PV, PVC, ...) responsible to create and maintain complex application lifecycle. </p>
<p>Usually Kubernetes operators related to database applications take care about deploying database cluster of the desired size to Kubernetes and creating all required resources. Also they keep desired number of replicas, implement backup/restore procedures, configure data replication, forward client traffic to a healthy database pod and more. </p>
<p>Most of well-known databases have ready-to-use operators already. You can find some resources below:</p>
<p>Mysql Resources: </p>
<ul>
<li><a href="https://blogs.oracle.com/developers/introducing-the-oracle-mysql-operator-for-kubernetes" rel="nofollow noreferrer">Introducing the Oracle MySQL Operator for Kubernetes</a></li>
<li><a href="https://github.com/oracle/mysql-operator" rel="nofollow noreferrer">Oracle MySQL Operator on GitHub</a></li>
<li><a href="https://medium.com/oracledevs/getting-started-with-the-mysql-operator-for-kubernetes-8df48591f592" rel="nofollow noreferrer">Getting started with the MySQL Operator for Kubernetes</a></li>
</ul>
<p>Percona Resources:</p>
<ul>
<li><a href="https://www.percona.com/doc/kubernetes-operator-for-pxc/index.html" rel="nofollow noreferrer">Percona Kubernetes Operator for Percona XtraDB Cluster</a></li>
<li><a href="https://www.presslabs.com/docs/mysql-operator/" rel="nofollow noreferrer">MySQL Operator</a></li>
<li><a href="https://github.com/presslabs/mysql-operator" rel="nofollow noreferrer">MySQL Operator on Github</a></li>
</ul>
<p>PostgreSQL resources:</p>
<ul>
<li><a href="https://github.com/CrunchyData/postgres-operator" rel="nofollow noreferrer">Crunchy Data PostgreSQL Operator</a></li>
<li><a href="https://github.com/zalando/postgres-operator" rel="nofollow noreferrer">Postgres Operator</a></li>
</ul>
<p>MongoDB resources:</p>
<ul>
<li><a href="https://docs.mongodb.com/kubernetes-operator/master/tutorial/install-k8s-operator/#install-the-k8s-op-full" rel="nofollow noreferrer">Install the MongoDB Enterprise Kubernetes Operator</a></li>
</ul>
<p>In case you are planning to use a different type of database, please check for an available operator from the DB vendor by searching "<em>DBname kubernetes operator</em>"</p>
|
<p>I am trying to deploy my microservices into a Kubernetes cluster. My cluster has one master and one worker node; I created it for my R&D of Kubernetes deployment. When I try to deploy, I get an event error message like the following:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate
</code></pre>
<p><strong>My attempt</strong></p>
<p>While exploring the error, I found some comments in forums about restarting Docker on the node etc., so I restarted Docker. But the error is still the same.</p>
<p>When I tried the command <code>kubectl get nodes</code>, it shows that both nodes are masters and both are in <code>Ready</code> state.</p>
<pre><code>NAME STATUS ROLES AGE VERSION
mildevkub020 Ready master 6d19h v1.17.0
mildevkub040 Ready master 6d19h v1.17.0
</code></pre>
<p>I did not find the worker node here. I created one master (mildevkub020) and one worker node (mildevkub040) with one load balancer, and I followed the official Kubernetes documentation from the following link:</p>
<p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/" rel="noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/</a></p>
<p><strong>My question</strong></p>
<p>Is this error because of a cluster problem? I am not finding the worker node in the cluster, only master nodes.</p>
| <p>You can run the below commands to remove the taint from the master nodes, and then you should be able to deploy your pods on those nodes:</p>
<pre><code>kubectl taint nodes mildevkub020 node-role.kubernetes.io/master-
kubectl taint nodes mildevkub040 node-role.kubernetes.io/master-
</code></pre>
<p>Now regarding why its showing as master node check the command you ran to join the node with kubeadm. There are separate commands for master and worker node joining.</p>
|
<p>I need to get client IP in Kubernetes, but I can only get the Flannel internal IP <code>10.244.X.1</code> (in nginx via <code>$remote_addr</code>, <code>$http_x_forwarded_for</code> is always <code>-</code>).</p>
<p>The Network topo is:<br>
client browser -> k8s cluster node ip -> k8s service (A) -> the pod (which desire the client ip) -> other services</p>
<p>I have tried <code>externalTrafficPolicy</code> in <code>A</code>; it doesn't work. Does it only work on a cloud provider, or behind a LB? <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">[1]</a>
<br>
I also tried ingress-nginx <a href="https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#source-ip-address" rel="nofollow noreferrer">[2]</a>, get the same result.</p>
<p>My Environment:</p>
<pre><code> Kernel Version: 3.10.0-1062.4.1.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.2
Kubelet Version: v1.16.2
Kube-Proxy Version: v1.16.2
Network Add-on: Flannel
</code></pre>
<p>I have read some similar questions <a href="https://stackoverflow.com/questions/32112922/how-to-read-client-ip-addresses-from-http-requests-behind-kubernetes-services">[3]</a> <a href="https://stackoverflow.com/questions/32759517/how-to-get-the-real-ip-in-the-request-in-the-pod-in-kubernetes">[4]</a> <a href="https://stackoverflow.com/questions/58149992/how-to-get-clients-public-ip-on-the-pod">[5]</a> , looks answers are imply to cloud provider or LB.</p>
| <p>Kubernetes services work in one of three proxy modes: Userspace, iptables, and IPVS. I'm not familiar with IPVS, but in both Userspace and iptables modes, the client IP will be altered for NodePort and Load Balancer services [<a href="https://kubernetes.io/docs/concepts/services-networking/service/#ips-and-vips" rel="nofollow noreferrer">docs</a>].</p>
<p>Due to this proxying behaviour, the client IP will necessarily change, and that data can only be preserved if the client and server are speaking a "higher layer" protocol and there's a "higher layer" proxy that can capture the original client IP and inject it into the request before forwarding it on. If the client and server are talking HTTP, there are standard headers that an IaaS L7/HTTP(S) load balancer or nginx can inject for you. However, if you're using nginx-ingress, the problem is that it is sitting behind a NodePort or LoadBalancer service, so by the time the request reaches the actual nginx process the client IP has already been altered. You would need to run nginx outside the cluster to make this work.</p>
<p>If you're running this on a public cloud, I think most cloud's L7 LB solutions inject these <code>X-Forwarded-For</code> or <code>Forwarded</code> headers out of the box for you. Alternatively, consider designing your server such that it doesn't need the client IP -- if client IP is being used to authenticate a client or authorize a request, there are other more cloud-native patterns that could work instead.</p>
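<p>If you still want to experiment with <code>externalTrafficPolicy</code> on bare metal, a minimal NodePort sketch looks like this (the service name and ports are illustrative; results can still vary by network plugin, as the question observed):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app                     # illustrative name
spec:
  type: NodePort
  externalTrafficPolicy: Local     # skip the second SNAT hop so the pod sees the real client IP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
</code></pre>

<p>Note that with <code>Local</code>, requests sent to a node that has no matching pod are dropped rather than forwarded to another node.</p>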
|
<p>Is it best practice to include installation of sub-charts in global part of <code>values.yaml</code>. Example..</p>
<p>Root level <code>values.yaml</code></p>
<pre><code>global:
  foo: bar
  subchartA:
    enable: true
</code></pre>
<p>Or is the best practice to have subcharts outside of the global part, as shown?</p>
<pre><code>global:
  foo: bar

subchartA:
  enable: true
</code></pre>
<p>Please provide a brief explanation of why. Thank you.</p>
| <p>Subchart configuration settings need to be at the top level, outside a <code>global:</code> block.</p>
<p>At a style level, each chart should be independently installable, whether or not it's used as a subchart. Something like the <a href="https://github.com/helm/charts/tree/master/stable/mysql" rel="nofollow noreferrer">stable/mysql</a> chart is a reasonable example: you can manually <code>helm install mysql stable/mysql --set mysqlPassword=...</code> without mentioning <code>global</code>. That means when you include it as a dependency its settings need to be under the subchart's key in the <code>values.yaml</code> file.</p>
<p>At a mechanical level, when the subchart is run, the <code>subchartA</code> settings are promoted up to be <code>.Values</code> and then the original <code>global:</code> is merged with that (see <a href="https://helm.sh/docs/topics/chart_template_guide/subcharts_and_globals/" rel="nofollow noreferrer">Subcharts and Globals</a>). So the subchart itself needs to be aware of the difference</p>
<pre><code>{{/* Option 1 */}}
{{ .Values.global.subchartA.enabled }}
{{/* Option 2 (within subchartA) */}}
{{ .Values.enabled }}
</code></pre>
<p>and at the top level you need to use the form that's compatible with the included chart.</p>
<p>(If you browse through the <a href="https://github.com/helm/charts/tree/master/stable" rel="nofollow noreferrer">"stable" Helm chart repository</a> you'll see <code>global</code> used fairly sparingly; <code>rabbitmq</code> allows you to declare <code>global.imagePullSecrets</code> but that's close to it.)</p>
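<p>For example, a parent chart's <code>values.yaml</code> configuring a dependency named <code>subchartA</code> (the name from the question) at the top level would look roughly like this sketch:</p>

<pre><code>global:
  foo: bar              # visible everywhere as .Values.global.foo
subchartA:
  enabled: true         # inside subchartA this is simply .Values.enabled
</code></pre>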
|
<p>I'm curious why Kubernetes secrets require both a key and a name. When I create a secret I typically use the same value for both so I'm confused as to why they're both necessary.</p>
| <p>It is a design choice for flexibility and reusability. A single Secret, identified by its name, can hold multiple key/value pairs, so you can store related values together and reference whichever keys you need at deployment time. You can also reuse the same Secret in other deployments with different keys.</p>
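<p>As a sketch (names are illustrative): one Secret object can carry several keys, and a Pod references the name-plus-key pair to pick out a single value:</p>

<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # the Secret's name
type: Opaque
stringData:
  username: admin             # key "username"
  password: s3cr3t            # key "password"
</code></pre>

<p>and in a container spec:</p>

<pre><code>env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials  # which Secret
        key: password         # which key inside it
</code></pre>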
|
<p>I have a container that I want to run on Kubernetes Let's say image1</p>
<p>When I run <code>kubectl apply -f somePod.yml</code> (which runs image1), how can I start the image with the user that ran the <code>kubectl</code> command? </p>
| <p>It's not possible <a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#user-accounts-vs-service-accounts" rel="nofollow noreferrer">by design</a>. Please find explanation below:</p>
<p>In most cases Jobs create Pods, so I use Pods in my explanation.<br>
For Jobs it just means a slightly different YAML file. </p>
<pre><code>$ kubectl explain job.spec.
$ kubectl explain job.spec.template.spec
</code></pre>
<p><strong>Users</strong> run <code>kubectl</code> using <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer"><strong>user accounts</strong>. </a></p>
<p><strong>Pods</strong> run using <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer"><strong>service accounts</strong></a>. There is no way to run a pod <code>"from a user account"</code>.<br>
<strong>Note:</strong> in recent versions <code>spec.ServiceAccount</code> was replaced by <code>spec.serviceAccountName</code></p>
<p><em>However</em>, you can use <strong>user account credentials</strong> by running <code>kubectl</code> inside a Pod's container or making <code>curl</code> requests to Kubernetes <code>api-server</code> from inside a pod container.<br>
Using <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secrets</a> is the most convenient way to do that.</p>
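<p>As a sketch of that approach (all names are illustrative): store a user's kubeconfig in a Secret, for example with <code>kubectl create secret generic my-kubeconfig --from-file=config=$HOME/.kube/config</code>, then mount it and point <code>KUBECONFIG</code> at it:</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: kubectl-as-user            # illustrative name
spec:
  containers:
    - name: kubectl
      image: bitnami/kubectl       # any image that contains kubectl
      command: ["kubectl", "get", "pods"]
      env:
        - name: KUBECONFIG
          value: /kubeconfig/config
      volumeMounts:
        - name: kubeconfig
          mountPath: /kubeconfig
          readOnly: true
  volumes:
    - name: kubeconfig
      secret:
        secretName: my-kubeconfig
</code></pre>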
<p>What else you can do to differentiate users in the cluster: </p>
<ul>
<li>create a <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">namespace</a> for each user</li>
<li><a href="https://jeremievallee.com/2018/05/28/kubernetes-rbac-namespace-user.html" rel="nofollow noreferrer">limit user permission to specific namespace</a> </li>
<li>create default service account in that namespace. </li>
</ul>
<p>This way, if the user creates a Pod without specifying <code>spec.serviceAccountName</code>, it will by default use the <code>default</code> service account from the Pod's namespace. </p>
<p>You can even set for the <code>default service account</code> the same permissions as for the <code>user account</code>. The only difference would be that <code>service accounts</code> exist in the <em>specific namespace</em>.</p>
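<p>A sketch of that per-namespace setup (the user and namespace names are illustrative; <code>edit</code> is one of the built-in ClusterRoles):</p>

<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: alice                      # one namespace per user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-edit
  namespace: alice
subjects:
  - kind: User
    name: alice                    # must match the name in the user's credentials
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                       # built-in role: read/write on most namespaced objects
  apiGroup: rbac.authorization.k8s.io
</code></pre>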
<p>If you need to use the same namespace for all users, you can use <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/" rel="nofollow noreferrer">helm charts</a> to set the correct <code>serviceAccountName</code> for each user <em>( imagine you have service-accounts with the same name as users )</em> by using <a href="https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing" rel="nofollow noreferrer">--set</a> command line arguments as follows: </p>
<pre><code>$ cat testchart/templates/job.yaml
...
serviceAccountName: {{ .Values.saname }}
...
$ export SANAME=$(kubectl config view --minify -o jsonpath='{.users[0].name}')
$ helm template ./testchart --set saname=$SANAME
---
# Source: testchart/templates/job.yaml
...
serviceAccountName: kubernetes-admin
...
</code></pre>
<p>You can also specify <code>namespace</code> for each user in the same way.</p>
|
<p>I am trying to work through the Istio quickstart guide here and am having trouble with the step <a href="https://preliminary.istio.io/docs/examples/bookinfo/#confirm-the-app-is-accessible-from-outside-the-cluster" rel="nofollow noreferrer">confirm the app is accessible from outside the cluster</a>. </p>
<p>I'm on a mac and running docker for desktop rather than minikube.</p>
<p><a href="https://i.stack.imgur.com/3djcv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3djcv.png" alt="docker desktop version information"></a></p>
<p>I am also behind a proxy at work but even when I bypass the proxy I get the following error:</p>
<pre><code>$ curl http://${GATEWAY_URL}/productpage -v
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connection failed
* connect to 127.0.0.1 port 30489 failed: Connection refused
* Failed to connect to 127.0.0.1 port 30489: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 30489: Connection refused
</code></pre>
<p>The pod is running:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
details-v1-c5b5f496d-ccw5f 2/2 Running 0 18h
productpage-v1-c7765c886-xm2jd 2/2 Running 0 18h
ratings-v1-f745cf57b-5df2q 2/2 Running 0 18h
reviews-v1-75b979578c-jtc5l 2/2 Running 0 18h
reviews-v2-597bf96c8f-g7bzd 2/2 Running 0 18h
reviews-v3-54c6c64795-gbqqj 2/2 Running 0 18h
</code></pre>
<p>I'm able to curl it from within a pod:</p>
<pre><code>$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
</code></pre>
<p>I'm not certain what I may be doing wrong. Any guidance is greatly appreciated!</p>
| <p>I found the source of the confusion thanks to <a href="https://discuss.istio.io/t/istio-on-docker-edge-cant-access-service-externally/2408" rel="nofollow noreferrer">this post</a> in the Istio community. In that post, they mention running the following command to find the ingress port:</p>
<pre><code>export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
</code></pre>
<p>This differs from what you are instructed to do on the bookinfo <a href="https://istio.io/docs/examples/bookinfo/#determine-the-ingress-ip-and-port" rel="nofollow noreferrer">tutorial</a> when on step 3 you are asked to go <a href="https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">here</a> to determine your ingress host and port. There, if you are running locally and don't have a public loadbalancer you are told to pull the nodeport like this:</p>
<pre><code>export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
</code></pre>
<p>If I instead change the json element to 'port' it works perfectly!</p>
<pre><code>export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
</code></pre>
<p>Sample curl from tutorial:</p>
<pre><code>$ curl -v -s http://127.0.0.1:80/productpage | grep -o "<title>.*</title>"
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> GET /productpage HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 4183
< server: istio-envoy
< date: Tue, 24 Dec 2019 20:28:55 GMT
< x-envoy-upstream-service-time: 940
<
{ [4183 bytes data]
* Connection #0 to host 127.0.0.1 left intact
<title>Simple Bookstore App</title>
</code></pre>
<p>Hopefully this helps somebody that was struggling like I was.</p>
|