| prompt | response |
|---|---|
<p>After creating a deployment for a web-application, consisting of pod, internal service and ingress, the ingress can't be created. When executing <code>kubectl apply -f web-app.yaml</code>, I get the error:
<code>error when creating "web-app.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.100.7.97:443: connect: no route to host</code></p>
<p>After some retries it surprisingly worked, but after some further changes to the deployment the same error occurred again.</p>
<p><strong>web-app.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry/web-app:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-internal
  namespace: development
spec:
  selector:
    app: web-app
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: web-app-ingress
  namespace: development
  labels:
    name: web-app-ingress
spec:
  rules:
    - host: web.app
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: web-app-internal
                port:
                  number: 8080
</code></pre>
<p>Any clues what the issue could be?</p>
| <p>The error you are getting comes from the webhook that validates your Ingress, not from the Ingress object itself that you are trying to apply.</p>
<p>The validation webhook is configured by default to <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/admission-webhooks/validating-webhook.yaml#L27" rel="nofollow noreferrer">fail</a> the request if the webhook cannot process it. Is there a chance your ingress controller was unavailable for some reason while you were trying to apply the Ingress object?</p>
<p>Also, looking at a related <a href="https://stackoverflow.com/questions/52619828/kubernetes-no-route-to-host">SO question</a>, it might be an error related to DNS resolution.</p>
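<p>If the admission webhook keeps failing while the controller is flaky, one temporary workaround (at the cost of skipping validation) is to relax the webhook's <code>failurePolicy</code>. A sketch of the relevant fragment, assuming the default object names installed by ingress-nginx:</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-nginx-admission   # default name; verify with: kubectl get validatingwebhookconfigurations
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    # The default is Fail, which rejects the apply when the webhook
    # endpoint is unreachable ("no route to host")
    failurePolicy: Ignore
</code></pre>
<p>Switching back to <code>Fail</code> once the controller is healthy again is advisable, since <code>Ignore</code> lets invalid Ingress objects through.</p>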
|
<p>In k8s, a pod starts with a container named <code>pause</code>.<br />
The pause container helps the other containers share a network namespace.<br />
I know this, but I have a question.</p>
<p>What is the lifecycle of the pause container?<br />
What I want to know is: when a pod gets <code>CrashLoopBackOff</code> or temporarily doesn't work, does the pause container also stop?</p>
<p>If not, does the pause container maintain its own Linux namespaces?</p>
| <blockquote>
<p>When a pod gets <code>CrashLoopBackOff</code> or temporarily doesn't work, does the pause container also stop?</p>
</blockquote>
<p>No. A <a href="https://sysdig.com/blog/debug-kubernetes-crashloopbackoff/" rel="nofollow noreferrer">CrashLoopBackOff</a> is independent of the <code>pause</code> container. I reproduced this situation in Minikube with the Docker driver using the following YAML:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dummy-pod
spec:
  containers:
    - name: dummy-pod
      image: ubuntu
  restartPolicy: Always
</code></pre>
<p>For me, the command <code>kubectl get pods</code> returns:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
dummy-pod 0/1 CrashLoopBackOff 7 (2m59s ago) 14m
</code></pre>
<p>Then I logged into the node on which the crashed pod exists and ran <code>docker ps</code>:</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7985cf2b01ad k8s.gcr.io/pause:3.5 "/pause" 14 minutes ago Up 14 minutes k8s_POD_dummy-pod_default_0f278cd1-6225-4311-98c9-e154bf9b42a3_0
18eeb073fe71 6e38f40d628d "/storage-provisioner" 16 minutes ago Up 16 minutes k8s_storage-provisioner_storage-provisioner_kube-system_5c3cec65-5a2d-4881-aa34-d98e1098f17f_1
b7dd2640584d 8d147537fb7d "/coredns -conf /etc…" 17 minutes ago Up 17 minutes k8s_coredns_coredns-78fcd69978-h28mp_kube-system_f62eec5a-290c-4a42-b488-e1475d7f6ff2_0
d3acb4e61218 36c4ebbc9d97 "/usr/local/bin/kube…" 17 minutes ago Up 17 minutes k8s_kube-proxy_kube-proxy-bf75s_kube-system_39dc64cc-2eab-497d-bf13-b5e6d1dbc9cd_0
083690fe3672 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_coredns-78fcd69978-h28mp_kube-system_f62eec5a-290c-4a42-b488-e1475d7f6ff2_0
df0186291c8c k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-proxy-bf75s_kube-system_39dc64cc-2eab-497d-bf13-b5e6d1dbc9cd_0
06fdfb5eab54 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_storage-provisioner_kube-system_5c3cec65-5a2d-4881-aa34-d98e1098f17f_0
183f6cc10573 aca5ededae9c "kube-scheduler --au…" 17 minutes ago Up 17 minutes k8s_kube-scheduler_kube-scheduler-minikube_kube-system_6fd078a966e479e33d7689b1955afaa5_0
2d032a2ec51d f30469a2491a "kube-apiserver --ad…" 17 minutes ago Up 17 minutes k8s_kube-apiserver_kube-apiserver-minikube_kube-system_4889789e825c65fc82181cf533a96c40_0
cd157b628bc5 6e002eb89a88 "kube-controller-man…" 17 minutes ago Up 17 minutes k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_f8d2ab48618562b3a50d40a37281e35e_0
a2d5608e5bac 004811815584 "etcd --advertise-cl…" 17 minutes ago Up 17 minutes k8s_etcd_etcd-minikube_kube-system_08a3871e1baa241b73e5af01a6d01393_0
e9493a3f2383 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-apiserver-minikube_kube-system_4889789e825c65fc82181cf533a96c40_0
1088a8210eed k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-scheduler-minikube_kube-system_6fd078a966e479e33d7689b1955afaa5_0
f551447a77b6 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_etcd-minikube_kube-system_08a3871e1baa241b73e5af01a6d01393_0
c8414ee790d8 k8s.gcr.io/pause:3.5 "/pause" 17 minutes ago Up 17 minutes k8s_POD_kube-controller-manager-minikube_kube-system_f8d2ab48618562b3a50d40a37281e35e_0
</code></pre>
<p>The <code>pause</code> container is independent of the other containers in the pod. It holds the network namespace for the pod and does nothing else, and it keeps running even while the pod is in the CrashLoopBackOff state. If the pause container dies, Kubernetes considers the pod dead, kills it, and reschedules a new one; that did not happen here.</p>
<p>See also an <a href="https://stackoverflow.com/questions/48651269/what-are-the-pause-containers">explanation of <code>pause</code> containers</a>.</p>
|
<p>In k8s, a pod starts with a container named <code>pause</code>.<br />
The pause container helps the other containers share a network namespace.<br />
I know this, but I have a question.</p>
<p>What is the lifecycle of the pause container?<br />
What I want to know is: when a pod gets <code>CrashLoopBackOff</code> or temporarily doesn't work, does the pause container also stop?</p>
<p>If not, does the pause container maintain its own Linux namespaces?</p>
| <p>The pause container is a fully independent container like the others in the pod (other than the places where the namespaces overlap, as you mentioned). It starts when the pod is started up by the kubelet and is torn down when the pod is gone (deleted, scheduled elsewhere, whatever).</p>
|
<p>Can anyone explain why my load test gives better TPS on one pod than after scaling to two pods?</p>
<p>I expected that running the same scenario with the same configuration on 2 pods would increase the TPS, but this is not what happened.</p>
<p>Is it normal behaviour that scaling horizontally does not improve the total number of requests?</p>
<p>Please note that I didn't get any failures on one pod; I just scaled to 2 for high availability.</p>
| <pre><code>...my load test on one pod it gives better TPS rather than when scaling to two pods.
</code></pre>
<p>This can happen when the 2 pods race for the same resource and create a bottleneck.</p>
<pre><code>Is this normal behaviour that scaling horizontal not improve the total number of requests?
</code></pre>
<p>Throughput for client (web) requests can improve, but the backend, and sometimes the middleware too (if any), needs enough capacity to keep up.</p>
|
<p>I've installed <code>kong-ingress-controller</code> using a YAML file on a 3-node k8s cluster,
but I'm getting this (the status of the pod is <code>CrashLoopBackOff</code>):</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kong ingress-kong-74d8d78f57-57fqr 1/2 CrashLoopBackOff 12 (3m23s ago) 40m
[...]
</code></pre>
<p>There are 2 container declarations in the Kong YAML file: <code>proxy</code> and <code>ingress-controller</code>.
The first one is up and running, but the <code>ingress-controller</code> container is not:</p>
<pre><code>$kubectl describe pod ingress-kong-74d8d78f57-57fqr -n kong |less
[...]
ingress-controller:
Container ID: docker://8e9a3370f78b3057208b943048c9ecd51054d0b276ef6c93ccf049093261d8de
Image: kong/kubernetes-ingress-controller:1.3
Image ID: docker-pullable://kong/kubernetes-ingress-controller@sha256:cff0df9371d5ad07fef406c356839736ce9eeb0d33f918f56b1b232cd7289207
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 07 Sep 2021 17:15:54 +0430
Finished: Tue, 07 Sep 2021 17:15:54 +0430
Ready: False
Restart Count: 13
Liveness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
CONTROLLER_KONG_ADMIN_URL: https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY: true
CONTROLLER_PUBLISH_SERVICE: kong/kong-proxy
POD_NAME: ingress-kong-74d8d78f57-57fqr (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft7gg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ft7gg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46m default-scheduler Successfully assigned kong/ingress-kong-74d8d78f57-57fqr to kung-node-2
Normal Pulled 46m kubelet Container image "kong:2.5" already present on machine
Normal Created 46m kubelet Created container proxy
Normal Started 46m kubelet Started container proxy
Normal Pulled 45m (x4 over 46m) kubelet Container image "kong/kubernetes-ingress-controller:1.3" already present on machine
Normal Created 45m (x4 over 46m) kubelet Created container ingress-controller
Normal Started 45m (x4 over 46m) kubelet Started container ingress-controller
Warning BackOff 87s (x228 over 46m) kubelet Back-off restarting failed container
</code></pre>
<p>And here is the log of <code>ingress-controller</code> container:</p>
<pre><code>-------------------------------------------------------------------------------
Kong Ingress controller
Release:
Build:
Repository:
Go: go1.16.7
-------------------------------------------------------------------------------
W0907 12:56:12.940106 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2021-09-07T12:56:12Z" level=info msg="version of kubernetes api-server: 1.22" api-server-host="https://10.*.*.1:443" git_commit=632ed300f2c34f6d6d15ca4cef3d3c7073412212 git_tree_state=clean git_version=v1.22.1 major=1 minor=22 platform=linux/amd64
time="2021-09-07T12:56:12Z" level=fatal msg="failed to fetch publish-service: services \"kong-proxy\" is forbidden: User \"system:serviceaccount:kong:kong-serviceaccount\" cannot get resource \"services\" in API group \"\" in the namespace \"kong\"" service_name=kong-proxy service_namespace=kong
</code></pre>
<p>If someone could help me to get a solution, that would be awesome.</p>
<p>============================================================</p>
<p><strong>UPDATE</strong>:</p>
<p>The <code>kong-ingress-controller</code>'s yaml file:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongclusterplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongClusterPlugin
plural: kongclusterplugins
shortNames:
- kcp
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
namespace:
type: string
required:
- name
- namespace
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongconsumers.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .username
description: Username of a Kong Consumer
name: Username
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: KongConsumer
plural: kongconsumers
shortNames:
- kc
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
credentials:
items:
type: string
type: array
custom_id:
type: string
username:
type: string
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
plural: kongingresses
shortNames:
- ki
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
proxy:
properties:
connect_timeout:
minimum: 0
type: integer
path:
pattern: ^/.*$
type: string
protocol:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
read_timeout:
minimum: 0
type: integer
retries:
minimum: 0
type: integer
write_timeout:
minimum: 0
type: integer
type: object
route:
properties:
headers:
additionalProperties:
items:
type: string
type: array
type: object
https_redirect_status_code:
type: integer
methods:
items:
type: string
type: array
path_handling:
enum:
- v0
- v1
type: string
preserve_host:
type: boolean
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
regex_priority:
type: integer
request_buffering:
type: boolean
response_buffering:
type: boolean
snis:
items:
type: string
type: array
strip_path:
type: boolean
upstream:
properties:
algorithm:
enum:
- round-robin
- consistent-hashing
- least-connections
type: string
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
properties:
active:
properties:
concurrency:
minimum: 1
type: integer
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
http_path:
pattern: ^/.*$
type: string
timeout:
minimum: 0
type: integer
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
passive:
properties:
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
threshold:
type: integer
type: object
host_header:
type: string
slots:
minimum: 10
type: integer
type: object
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongPlugin
plural: kongplugins
shortNames:
- kp
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
required:
- name
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tcpingresses.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .status.loadBalancer.ingress[*].ip
description: Address of the load balancer
name: Address
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: TCPIngress
plural: tcpingresses
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
rules:
items:
properties:
backend:
properties:
serviceName:
type: string
servicePort:
format: int32
type: integer
type: object
host:
type: string
port:
format: int32
type: integer
type: object
type: array
tls:
items:
properties:
hosts:
items:
type: string
type: array
secretName:
type: string
type: object
type: array
type: object
status:
type: object
version: v1beta1
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kong-serviceaccount
namespace: kong
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kong-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- tcpingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- kongplugins
- kongclusterplugins
- kongcredentials
- kongconsumers
- kongingresses
- tcpingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kong-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: kong-serviceaccount
namespace: kong
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-type: nlb
name: kong-proxy
namespace: kong
spec:
ports:
- name: proxy
port: 80
protocol: TCP
targetPort: 8000
- name: proxy-ssl
port: 443
protocol: TCP
targetPort: 8443
selector:
app: ingress-kong
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kong-validation-webhook
namespace: kong
spec:
ports:
- name: webhook
port: 443
protocol: TCP
targetPort: 8080
selector:
app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ingress-kong
name: ingress-kong
namespace: kong
spec:
replicas: 1
selector:
matchLabels:
app: ingress-kong
template:
metadata:
annotations:
kuma.io/gateway: enabled
prometheus.io/port: "8100"
prometheus.io/scrape: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app: ingress-kong
spec:
containers:
- env:
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
- name: KONG_PORT_MAPS
value: 80:8000, 443:8443
- name: KONG_ADMIN_LISTEN
value: 127.0.0.1:8444 ssl
- name: KONG_STATUS_LISTEN
value: 0.0.0.0:8100
- name: KONG_DATABASE
value: "off"
- name: KONG_NGINX_WORKER_PROCESSES
value: "2"
- name: KONG_ADMIN_ACCESS_LOG
value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_PROXY_ERROR_LOG
value: /dev/stderr
image: kong:2.5
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-ssl
protocol: TCP
- containerPort: 8100
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
- env:
- name: CONTROLLER_KONG_ADMIN_URL
value: https://127.0.0.1:8444
- name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
value: "true"
- name: CONTROLLER_PUBLISH_SERVICE
value: kong/kong-proxy
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: kong/kubernetes-ingress-controller:1.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ingress-controller
ports:
- containerPort: 8080
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
serviceAccountName: kong-serviceaccount
</code></pre>
| <p>Having analysed the comments, it looks like changing <code>apiVersion</code> from <code>rbac.authorization.k8s.io/v1beta1</code> to <code>rbac.authorization.k8s.io/v1</code> has solved the problem for now; an alternative to this solution is to downgrade the cluster.</p>
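<p>For background: the <code>rbac.authorization.k8s.io/v1beta1</code> API was removed in Kubernetes 1.22, and the controller log above shows the api-server at v1.22.1, which fits the symptom. Only the headers of the two RBAC objects need to change; the rules, subjects and roleRef stay as they are:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1   # was rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kong-ingress-clusterrole
# rules unchanged
---
apiVersion: rbac.authorization.k8s.io/v1   # was rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kong-ingress-clusterrole-nisa-binding
# roleRef and subjects unchanged
</code></pre>
<p>Note that the <code>apiextensions.k8s.io/v1beta1</code> CustomResourceDefinitions in the same manifest were also removed in 1.22, so they will likely need the same treatment (migrating to <code>apiextensions.k8s.io/v1</code>).</p>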
|
<p>I have a service running as a <code>DaemonSet</code> across a number of kubernetes nodes. I would like to make some policy decisions based on the labels of the node on which each <code>DaemonSet</code> pod is running. From within a container, how do I know on which node this pod is running? Given this information, looking up the node labels through the API should be relatively easy.</p>
<p>What I'm doing right now is passing in the node name as an environment variable, like this:</p>
<pre><code>  env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
</code></pre>
<p>Is that the correct way of getting at this information? More specifically, is there any sort of API call that will answer the question, "where am I?"?</p>
| <p>Answering your question <em>"How do I know what node I'm on?"</em>:</p>
<p>Your approach is the best one: use <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api" rel="nofollow noreferrer">the Downward API</a>, which gives access to certain pod and container fields; in this case the pod's <code>spec.nodeName</code> field.</p>
<p>In general, the Downward API can expose pod information in two ways:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">environment variable(s)</a></li>
<li><a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">through file(s)</a></li>
</ul>
<p>Note, however, that <code>spec.nodeName</code> is one of the fields available only through environment variables, not through the downwardAPI volume, so the env var approach you already use is the right one here.</p>
<p>You can also access the <a href="https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/" rel="nofollow noreferrer">Kubernetes API from a pod</a> and get this information from there, but that is something of a workaround. It's better to take advantage of the fact that the node name is one of the pod's fields and use the Downward API, which is made for this purpose and officially described in the Kubernetes documentation.</p>
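<p>Given the node name, the follow-up step (looking up the node's labels) can be sketched from inside the pod roughly like this; it assumes the pod's service account has RBAC permission to <code>get</code> nodes:</p>
<pre><code># NODE_NAME is injected via the Downward API as in the question
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/nodes/${NODE_NAME}"
</code></pre>
<p>The <code>metadata.labels</code> field of the returned JSON contains the node labels on which to base policy decisions.</p>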
|
<p>I'm trying to deploy a mysql instance in k8s through a StatefulSet using the official Mysql image from DockerHub. I'm following the image documentation from DockerHub and providing <code>MYSQL_ROOT_PASSWORD</code>, <code>MYSQL_USER</code> and <code>MYSQL_PASSWORD</code> env vars, so the user should be automatically created, but it is not. The error I can see in container's logs is that <code>root</code> user is not able to connect at the point the user provided in <code>MYSQL_USER</code> is being created.</p>
<pre><code>2021-09-14 17:28:20+00:00 [Note] [Entrypoint]: Creating user foo_user
2021-09-14T17:28:20.860763Z 5 [Note] Access denied for user 'root'@'localhost' (using password: YES)
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
</code></pre>
<p>After some investigation, I've noticed that the problem occurs when the values for the env vars are taken from k8s secrets, but if I hardcode their values in the StatefulSet's manifest, it works just fine. You can see my current code below:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  serviceName: mysql-svc
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: "mysql:latest"
          env:
            - name: MYSQL_DATABASE
              value: 'foo_db'
            - name: MYSQL_USER
              value: 'foo_user'
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-sec
                  key: MYSQL_PASSWORD
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-sec
                  key: MYSQL_ROOT_PASSWORD
          ports:
            - containerPort: 3306
              protocol: TCP
          volumeMounts:
            - name: mysql-db
              mountPath: /var/lib/mysql
              subPath: mysql
  volumeClaimTemplates:
    - metadata:
        name: mysql-db
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 4Gi
</code></pre>
<p>And the <code>secrets.yml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: mysql-sec
  labels:
    app: mysql
type: Opaque
data:
  MYSQL_PASSWORD: ***************************
  MYSQL_ROOT_PASSWORD: ***************************
</code></pre>
<p>I've also tried to create the secrets first to make sure that the secrets already exist when the pod spins up, but without any success.</p>
<p>Any idea?</p>
| <p>I was finally able to find the root cause of the problem, and it had nothing to do with secrets... The problem was related to the "complexity" of the value picked for the password. I chose a strong password autogenerated by an online tool, something similar to <code>!6Y*]q~x+xG{9HQ~</code>, and for some unknown reason this value made the mysql Docker image's <code>/entrypoint.sh</code> script fail with the aforementioned error <code>Access denied for user 'root'@'localhost' (using password: YES)</code>. However, even though the script failed, the container and the mysql server were up and running, and I was able to get into it and successfully execute <code>mysql -u root --password="$MYSQL_ROOT_PASSWORD"</code>, so it seems pretty clear to me that the error lies in this script and the way it expands and uses the value of this env var. After replacing the password with a "less complex" one, it worked like a charm.</p>
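<p>If you do want to keep a complex password, make sure neither the shell nor the Secret manifest mangles it: single-quote it in the shell, and remember that values under <code>data:</code> in a Secret must be base64-encoded (plain values can go under <code>stringData:</code> instead). A quick sketch with a hypothetical password:</p>

```shell
# Hypothetical password with shell-hostile characters; single quotes
# keep the shell from interpreting any of them
PASS='!6Y*]q~x+xG{9HQ~'

# Values under "data:" in a Secret manifest must be base64-encoded
ENCODED=$(printf '%s' "$PASS" | base64)
echo "$ENCODED"

# The round trip must give back exactly the original value
printf '%s' "$ENCODED" | base64 -d
```

<p><code>kubectl</code> can also do the encoding for you: <code>kubectl create secret generic mysql-sec --from-literal=MYSQL_ROOT_PASSWORD="$PASS"</code>.</p>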
|
<p>OK, I have been banging my head against the wall for several days now...</p>
<p>My use case:
I am on my own bare-metal cloud. I am running Ubuntu machines and set up Kubernetes on 4 machines: one master and 3 workers. I created a private registry, cert-manager, etc.</p>
<p>The NFS shares are also on the worker nodes.</p>
<p>I have a piece of software that <strong>has to run as root inside a pod</strong>, and I want this root user to store data on a persistent volume on an NFS share.</p>
<p><code>root_squash</code> is biting me in the butt...</p>
<p>I have created volumes and claims, and all works OK as long as I am not root inside the pod. When root, the files on the NFS shares are squashed to <code>nobody:nogroup</code> and the root user inside the pod can no longer use them...</p>
<p>What to do?</p>
<p>1) Export the NFS share with the <code>no_root_squash</code> option, but this seems like a very bad idea given the security issues; I am not sure whether this can be mitigated by firewall rules alone.</p>
<p>2) I tried all kinds of <code>securityContext</code> options for <code>fsGroup</code> and uid/gid mount options; all work OK as long as you are not root in the pod, but I am not sure I understand this in full.</p>
<p>My PV yaml:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: s03-pv0004
  annotations:
    pv.beta.kubernetes.io/gid: "1023"
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/k8s/pv0004
    server: 212.114.120.61
</code></pre>
<p>As you can see, I created a dedicated NFS user with uid 1023 and use this to make the pods store data as this user... This works fine as long as I am not root inside the pods...</p>
<p>The pods I am running are MarkLogic pods in a StatefulSet, like so:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: marklogic
  namespace: default
spec:
  selector:
    matchLabels:
      app: marklogic
  serviceName: "ml-service"
  replicas: 3
  template:
    metadata:
      labels:
        app: marklogic
    spec:
      securityContext:
        fsGroup: 1023
      ... more
</code></pre>
<p><code>runAsUser: 1023</code> works, but again not if I want to be root inside the pod...</p>
<p>My question: can it be done? Run a pod as root and still use NFS as a persistent volume with a secure NFS share (one that does not use <code>no_root_squash</code>)?</p>
<p>Or do I need to drop the idea of NFS and move to an alternative like GlusterFS?</p>
| <p><strong>I will try to answer in very simple steps:</strong></p>
<p><strong>1. You can even make root_squash to work for k8s :</strong></p>
<pre><code>- run your containers as non root user: 1023 in your case
- chown -R 1023:1023 <nfs dir>
</code></pre>
<p><strong>2. You can make no_root_squash to work for k8s:</strong></p>
<pre><code>- run your containers as root user: 0
- chown -R root:root <nfs dir>
</code></pre>
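<p>Option 1 above maps onto the pod spec roughly like this (a sketch reusing the uid 1023 already defined in the question; <code>runAsGroup</code> is an assumption):</p>
<pre><code>spec:
  securityContext:
    runAsUser: 1023
    runAsGroup: 1023
    fsGroup: 1023
</code></pre>
<p>combined with <code>chown -R 1023:1023</code> on the exported directory on the NFS server.</p>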
|
<p>I'm trying to get access to my kubernetes cluster in my self hosted gitlab instance as it is described in the <a href="https://gitlab.jaqua.de/help/user/project/clusters/deploy_to_cluster.md#deployment-variables" rel="noreferrer">docs</a>.</p>
<pre><code>deploy:
  stage: deployment
  script:
    - kubectl create secret docker-registry gitlab-registry --docker-server="$CI_REGISTRY" --docker-username="$CI_DEPLOY_USER" --docker-password="$CI_DEPLOY_PASSWORD" --docker-email="$GITLAB_USER_EMAIL" -o yaml --dry-run=client | kubectl apply -f -
</code></pre>
<p>But I do get the error</p>
<pre><code>Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=secrets", GroupVersionKind: "/v1, Kind=Secret"
Name: "gitlab-registry", Namespace: "gitlab"
from server for: "STDIN": secrets "gitlab-registry" is forbidden: User "system:serviceaccount:gitlab:default" cannot get resource "secrets" in API group "" in the namespace "gitlab"
</code></pre>
<p>I do not understand the error. Why do I get a forbidden error?</p>
<hr />
<p><em>Update</em></p>
<p>The kubernetes cluster is integrated in gitlab at instance level.</p>
<p>But running <code>kubectl config view</code> in the CI pipeline gives me</p>
<pre><code>apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
</code></pre>
<hr />
<p><em>Update2</em></p>
<p>Thanks to AndD, the secret can be created with this role / service account:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: gitlab
name: gitlab-deploy
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: use-secrets
namespace: gitlab
subjects:
- kind: ServiceAccount
name: default
namespace: gitlab
roleRef:
kind: ClusterRole
name: gitlab-deploy
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>But running a simple apply for this namespace.yaml file</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: myns
</code></pre>
<p>gives me a similar error:</p>
<pre><code>Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=namespaces", GroupVersionKind: "/v1, Kind=Namespace"
Name: "myns", Namespace: ""
from server for: "namespace.yaml": namespaces "myns" is forbidden: User "system:serviceaccount:gitlab:default" cannot get resource "namespaces" in API group "" in the namespace "myns"
</code></pre>
<p>I used ClusterBinding to get this working even for a different namespace. What am I doing wrong?</p>
| <p>Kubernetes makes use of a Role-based access control (RBAC) to prevent Pods and Users from being able to interact with resources in the cluster, unless they are not authorized.</p>
<p>From the error, you can see that Gitlab is trying to use the <code>secrets</code> resource and also that it is using as <code>ServiceAccount</code> the <code>default</code> service account in its namespace.</p>
<p>This means that Gitlab is not configured to use a particular ServiceAccount, which means it makes use of the default one (there's a default service account in each namespace of the cluster)</p>
<hr />
<p>You can attach roles and their permissions to service accounts by using <code>Role</code> / <code>ClusterRole</code> and <code>RoleBinding</code> / <code>ClusterRoleBinding</code>.</p>
<p>Roles or ClusterRoles describe permissions. For example, a Role could be:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: gitlab
name: secret-user
rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>and this states that "whoever has this role, can do whatever (all the verbs) with secrets <strong>but only</strong> in the namespace <code>gitlab</code>"</p>
<p>If you want to give generic permissions in all namespaces, you can use a ClusterRole instead, which is very similar.</p>
<p>Once the Role is created, you then can attach it to a User, a Group or a ServiceAccount, for example:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: use-secrets
namespace: gitlab
subjects:
- kind: ServiceAccount
name: default
namespace: gitlab
roleRef:
# "roleRef" specifies the binding to a Role / ClusterRole
kind: Role # this must be Role or ClusterRole
name: secret-user # this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>and this bind the role previously created to the <code>ServiceAccount</code> called default in the namespace <code>gitlab</code>.</p>
<p>Then, <strong>all</strong> Pods running in the namespace <code>gitlab</code> and using the <code>default</code> service account, will be able to use <code>secrets</code> (use the verbs listed in the Role) <strong>but only</strong> in the namespace specified by the Role.</p>
<hr />
<p>As you can see, this aspect of Kubernetes is pretty complex and powerful, so have a look at the docs because they explain things <strong>really</strong> well and are also full of examples:</p>
<p>Service Accounts - <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p>
<p>RBAC - <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a></p>
<p>A list of RBAC resources - <a href="https://stackoverflow.com/questions/57872201/how-to-refer-to-all-subresources-in-a-role-definition">How to refer to all subresources in a Role definition?</a></p>
<hr />
<p><strong>UPDATE</strong></p>
<p>You are doing nothing wrong. It's just that you are trying to use the resource <code>namespace</code>, but Gitlab has no binding that gives access to that type of resource. With your <code>ClusterRole</code> you just gave it access to <code>secrets</code>, but nothing more.</p>
<p>Consider giving the ClusterRole more permissions, changing it to list all resources that you need to access:</p>
<pre><code>rules:
- apiGroups: [""] # "" indicates the core API group
resources: ["secrets", "namespaces", "pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>For example this will give access to secrets, namespaces and Pods.</p>
<p>As an alternative, you can bind Gitlab's service account to <code>cluster-admin</code> to directly give it access to <strong>everything</strong>.</p>
<pre><code>kubectl create clusterrolebinding gitlab-is-now-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=gitlab:default
</code></pre>
<p>Before doing so though, consider the following:</p>
<blockquote>
<p>Fine-grained role bindings provide greater security, but require more
effort to administrate. Broader grants can give unnecessary (and
potentially escalating) API access to ServiceAccounts, but are easier
to administrate.</p>
</blockquote>
<p>So, it is way more secure to first decide which resources can be used by Gitlab and then create a Role / ClusterRole giving access to only those resources (and with the verbs that you need)</p>
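<p>Either way, you can check the outcome without re-running the pipeline by impersonating the service account from the error message:</p>
<pre><code>kubectl auth can-i get secrets --as=system:serviceaccount:gitlab:default -n gitlab
kubectl auth can-i create namespaces --as=system:serviceaccount:gitlab:default
</code></pre>
<p>Each command prints <code>yes</code> or <code>no</code>, so you can confirm a binding took effect before pushing a new commit.</p>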
|
<p>I'm working on a library to read secrets from a given directory that I've got easily up and running with Docker Swarm by using the /run/secrets directory as the defined place to read secrets from. I'd like to do the same for a Kubernetes deployment but looking online I see many guides that advise using various Kubernetes APIs and libraries. Is it possible to simply read from disk as it is with Docker Swarm? If so, what is the directory that these are stored in?</p>
<p>Please read the <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">documentation</a>. Unlike Docker Swarm's fixed <code>/run/secrets</code> directory, Kubernetes has no predefined path: a mounted secret appears wherever you set its <code>mountPath</code>.</p>
<p>I see 2 practical ways to access the k8s secrets:</p>
<ol>
<li>Mount the secret as a file</li>
</ol>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: mypod
image: redis
volumeMounts:
- name: foo
mountPath: "/etc/foo"
readOnly: true
volumes:
- name: foo
secret:
secretName: mysecret
</code></pre>
<ol start="2">
<li>Expose the secret as an environment variable</li>
</ol>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
</code></pre>
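<p>With option 1, the secret behaves exactly like the Docker Swarm <code>/run/secrets</code> layout: each key of the secret becomes one file under the <code>mountPath</code> (here <code>/etc/foo/username</code> and <code>/etc/foo/password</code>), so a directory-reading library needs no Kubernetes API at all. A minimal sketch — the function name and default path are illustrative:</p>
<pre><code># Sketch: print every file in a secrets directory as key=value lines
# (each key of a mounted Secret is one file under the mountPath).
read_secrets() {
  dir="${1:-/etc/foo}"   # default matches the mountPath used above
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    printf '%s=%s\n' "$(basename "$f")" "$(cat "$f")"
  done
}
</code></pre>
<p>Calling <code>read_secrets /etc/foo</code> inside the pod would then print one <code>key=value</code> line per secret key mounted above.</p>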
|
<p>I'm not asking how to create a rootless container from scratch. Rather, I've been given some software deployed as pre-built Docker container images that run as root by default. I need to modify these containers so they can be deployed on Kubernetes, which means I need to make these containers rootless. To be clear, I DO NOT have the source to these containers so I can't simply rebuild them from scratch.</p>
<p>I've found plenty of articles about building rootless containers in general, but they all assume you're building your containers from scratch. I've spent hours searching but can't find anything about modifying an existing container to be rootless.</p>
<p>I realize this might be a very open question, but I don't know all the things I need to take into consideration. Two things I've been able to gather is adding a line such as <code>USER 1000</code> to Dockerfile, and adjusting ownership and permissions on certain files and folders. Beyond that, I'm not sure what I need to do.</p>
| <p>Create a user in the container and switch to it:</p>
<pre><code> Add a new user, named user;
Let this user have root privileges;
Set its password to password;
After the Container is started, log in as user and go directly to the user's home directory;
</code></pre>
<p>Put the following code snippet in the Dockerfile.</p>
<pre><code>RUN useradd --create-home --no-log-init --shell /bin/bash user \
    && adduser user sudo \
    && echo 'user:password' | chpasswd
USER user
WORKDIR /home/user
</code></pre>
<p>Use fixuid to modify the uid and gid of non-root users in the container;</p>
<p>After creating a non-root user with the above code, the user's uid and gid are generally 1000:1000.</p>
<p>Docker containers share the host's kernel, so there is still only one set of uids and gids, controlled by that kernel. In other words, a process executed as the newly created docker user (uid 1000) in the container is seen by the host as being run by whatever host user has uid 1000, which is not necessarily our account. It is as if an impostor replaced the real user, making it difficult to trace back who actually ran the process.</p>
<p>To solve this problem, you can specify the uid as the user's uid when adding the user, such as 1002;</p>
<pre><code>RUN addgroup --gid 1002 docker && \
adduser --uid 1002 --ingroup docker --home /home/docker --shell /bin/sh --gecos "" docker
</code></pre>
<p>A better solution is to use fixuid to switch the uid when the container starts:</p>
<pre><code>
RUN useradd --create-home --no-log-init --shell /bin/bash user\
&& adduser user sudo \
&& echo 'user:password' | chpasswd
RUN USER=user && \
GROUP=docker && \
curl -SsL https://github.com/boxboat/fixuid/releases/download/v0.4.1/fixuid-0.4.1-linux-amd64.tar.gz | tar -C /usr/local/bin -xzf - && \
chown root:root /usr/local/bin/fixuid && \
chmod 4755 /usr/local/bin/fixuid && \
mkdir -p /etc/fixuid && \
printf "user: $USER\ngroup: $GROUP\n" > /etc/fixuid/config.yml
USER user:docker
ENTRYPOINT ["fixuid"]
</code></pre>
<p>At this time, you need to specify the uid and gid when starting the container. The command is as follows:</p>
<pre><code>docker run --rm -it -u $(id -u):$(id -g) image-name bash
</code></pre>
|
<p>when a SIGTERM is received from k8s I want my sidecar to die only after the main container is finished. How do I create a dependency chain for shutdown routine?</p>
<p>I can use a <code>preStop</code> hook and wait for, say, 120 secs, but then pod cleanup always takes a constant 120 secs.</p>
<p>I am wondering if there is a way for my side car to check if my main container was killed. Or is it possible for the main container to signal my side car when it has finished its clean up (through k8s instead of a code change in my main container.</p>
| <p>In your <code>preStop</code> hook, check whether your main container is still running (instead of just waiting a fixed amount of time).</p>
<p>If the main container has an existing healthprobe (for example an http endpoint) you can call that from within the <code>preStop</code> hook.</p>
<p>Another option is to edit the entrypoint script in your main container to include a <a href="https://www.linuxjournal.com/content/bash-trap-command" rel="nofollow noreferrer">trap</a> command that creates a file on shutdown. The <code>preStop</code> hook of your sidecar to wait for this file be present.</p>
<pre><code>trap "touch /lifecycle/main-pod-terminated" SIGTERM
</code></pre>
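<p>A sketch of the sidecar's side of that trap approach — the volume name, mount path and poll interval are illustrative, and both containers must mount the same <code>emptyDir</code> at <code>/lifecycle</code>:</p>
<pre><code>volumes:
  - name: lifecycle
    emptyDir: {}
containers:
  - name: sidecar
    # ...
    volumeMounts:
      - name: lifecycle
        mountPath: /lifecycle
    lifecycle:
      preStop:
        exec:
          command:
            - /bin/sh
            - -c
            - while [ ! -f /lifecycle/main-pod-terminated ]; do sleep 1; done
</code></pre>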
<p>If using the preStop hook, just <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">keep the grace period countdown in mind</a> and increase the default (30 seconds) if required by setting an appropriate value of <code>terminationGracePeriodSeconds</code></p>
<blockquote>
<p><strong>Pod's termination grace period countdown begins before the <code>preStop</code>
hook is executed</strong>, so regardless of the outcome of the handler, the
container will eventually terminate within the Pod's termination grace
period</p>
</blockquote>
|
<ol>
<li><p>Once I register the crd into the k8s cluster, I can use .yaml to create it, without the operator running. Then what happens to these created resources?</p>
</li>
<li><p>I have seen the <code>Reconciler</code> of the operator, but it's more like an async status transfer. When we create a pod, we can directly get the pod ip from the create result. But I couldn't find a place to write my <code>OnCreate</code> hook. (I just see some <code>validate</code> webhooks, but never a hook that is called when the creation request is made, defines how to create the resource, and returns the created resource info to the caller.)</p>
</li>
<li><p>If my story is that for one kind of resource, in a time window, all coming creation will multiplex only one pod. Can you give me some advice?</p>
</li>
</ol>
| <p>The kubernetes <code>crd</code>/<code>controller</code> life cycle is a big story; I'll try to give a simple representation.</p>
<ol>
<li>After you register a new CRD and create a CR, <code>kube-api-server</code> does not care whether a related <code>controller</code> exists or not. See the process:
<a href="https://i.stack.imgur.com/WZUQj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WZUQj.png" alt="enter image description here" /></a></li>
</ol>
<p>That means the resource (your CR) will be stored in etcd, regardless of your <code>controller</code>.</p>
<ol start="2">
<li>Now let's talk about your controller. Your controller sets up a <code>list/watch</code> (actually a long-lived http connection) to the <code>api-server</code> and registers <code>hook</code>s (what you ask about, right?) for the different events: <code>onCreate</code>, <code>onUpdate</code> and <code>onDelete</code>. Actually you handle all events in your controller's <code>reconcile</code> (remember the responsibility of reconcile in kubernetes: move the current state to the desired state). See the diagram:
<a href="https://i.stack.imgur.com/OvDqY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OvDqY.png" alt="enter image description here" /></a></li>
</ol>
<ol start="3">
<li>For the <code>list/watch</code> link in your <code>controller</code>, you need to set up a different link for each kind of resource. For example: if you care about events for <code>pod</code>s, set up a <code>pod</code> <code>list/watch</code>; if you care about deployments, set up a <code>deployment</code> <code>list/watch</code>...</li>
</ol>
|
<p>I try to get a general understanding of OOMKilled events and I've found 2 different reasons:</p>
<ol>
<li><p><strong>Pod</strong> memory limit exceeded: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#exceed-a-container-s-memory-limit" rel="noreferrer">If the Container continues to consume memory beyond its limit, the Container is terminated.</a></p>
</li>
<li><p><strong>Node</strong> out of memory: <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior" rel="noreferrer">If the kubelet is unable to reclaim memory prior to a node experiencing system OOM, ... then kills the container ...</a></p>
</li>
</ol>
<p><strong>Questions</strong></p>
<ul>
<li>Is this correct?</li>
<li>Are there any other reasons?</li>
<li>Is it possible to see which reason caused the OOMKilled? (It's important to know the reason, because the remedy will be different.)</li>
</ul>
| <p>Both problems result in different error states:</p>
<p>1: An exceeded pod memory limit causes an <code>OOMKilled</code> termination</p>
<p>2: Node out of memory causes a <code>MemoryPressure</code> node condition and pod eviction.</p>
<pre><code>kubectl describe pod mypod-xxxx
...
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
...
</code></pre>
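<p>To tell the two cases apart after the fact, you can also query the terminated state directly (the pod name and container index below are placeholders):</p>
<pre><code># case 1: the container's last state shows the OOMKilled reason
kubectl get pod mypod-xxxx -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# case 2: an evicted pod carries the reason at the pod level
kubectl get pod mypod-xxxx -o jsonpath='{.status.reason}'

# and the node reports the pressure condition
kubectl describe node mynode | grep -A2 MemoryPressure
</code></pre>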
|
<p>I can't access to public IP assigned by MetalLB load Balancer</p>
<p>I created a Kubernetes cluster in Contabo. It's 1 master and 2 workers. Each one has its own public IP. </p>
<p>I did it with kubeadm + flannel. Later I did install MetalLB to use Load Balancing. </p>
<p>I used this manifest for installing nginx:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1
ports:
- name: http
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 80
selector:
app: nginx
type: LoadBalancer
</code></pre>
<p>It works, pods are running. I see the external IP adress after:</p>
<pre><code>kubectl get services
</code></pre>
<p><a href="https://i.stack.imgur.com/LJCxH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LJCxH.png" alt="enter image description here"></a>
From each node/host I can curl to that ip and port and I can get nginx's:</p>
<pre><code><h1>Welcome to nginx!</h1>
</code></pre>
<p>So far, so good. BUT:</p>
<p>What I still can't do is access that service (nginx) from my computer.
I can try to access to each node (master + 2 slaves) by their IP:PORT and nothing happens. The final goal is to have a domain that access to that service but I can't guess witch IP should I use.</p>
<p>What I'm missing?</p>
<p>Should MetalLB just expose my 3 possible IPs?
Should I add something else on each server as a reverse proxy?</p>
<p>I'm asking this here because all articles/tutorials on baremetal/VPS (non aws,GKE, etc...) do this on a kube on localhost and miss this basic issue. </p>
<p>Thanks.</p>
| <p>I have the very same hardware layout:</p>
<ul>
<li>a 3-node Kubernetes cluster, here with the 3 node IPs: 123.223.149.27, 22.36.211.68 and 192.77.11.164</li>
<li>running on (different) VPS providers (joined to one running cluster via JOIN, of course)</li>
</ul>
<p><strong>Target: "expose" the nginx via metalLB, so I can access my web-app from outside the cluster via browser via the IP of one of my VPS'</strong></p>
<p><strong>Problem: I do not have <em>a "range of IPs"</em> I could declare for the metallb</strong></p>
<p>Steps done:</p>
<ol>
<li>create one .yaml file for the Loadbalancer, the <strong>kindservicetypeloadbalancer.yaml</strong></li>
<li>create one .yaml file for the ConfigMap, containing the IPs of the 3 nodes, the <strong>kindconfigmap.yaml</strong></li>
</ol>
<pre><code>### start of the kindservicetypeloadbalancer.yaml
### for ensuring a unique name: loadbalancer name nginxloady
apiVersion: v1
kind: Service
metadata:
name: nginxloady
annotations:
metallb.universe.tf/address-pool: production-public-ips
spec:
ports:
- port: 80
targetPort: 80
selector:
app: nginx
type: LoadBalancer
</code></pre>
<p>below, the second .yaml file to be added to the Cluster:</p>
<pre><code> # start of the kindconfigmap.yaml
## info: the "production-public-ips" can be found
## within the annotations-sector of the kind: Service type: loadbalancer / the kindservicetypeloadbalancer.yaml
## as well... ...namespace: metallb-system & protocol: layer2
## note: as you can see, I added a /32 after every of my node-IPs
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: production-public-ips
protocol: layer2
addresses:
- 123.223.149.27/32
- 22.36.211.68/32
- 192.77.11.164/32
</code></pre>
<ul>
<li><p>add the LoadBalancer:</p>
<p><code>kubectl apply -f kindservicetypeloadbalancer.yaml</code></p>
</li>
<li><p>add the ConfigMap:</p>
<p><code>kubectl apply -f kindconfigmap.yaml </code></p>
</li>
<li><p>Check the status of the namespace ( "n" ) metallb-system:</p>
<p><code> kubectl describe pods -n metallb-system</code></p>
</li>
</ul>
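<p>Once the metallb pods are up, the service should report one of the node IPs as its external IP, which you can then reach from outside the cluster (service name from the manifest above):</p>
<pre><code>kubectl get service nginxloady
# then, from your local machine:
curl http://<EXTERNAL-IP>/
</code></pre>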
<p>PS:
actually it is all there:
<a href="https://metallb.universe.tf/installation/" rel="nofollow noreferrer">https://metallb.universe.tf/installation/</a></p>
<p>and here:
<a href="https://metallb.universe.tf/usage/#requesting-specific-ips" rel="nofollow noreferrer">https://metallb.universe.tf/usage/#requesting-specific-ips</a></p>
|
<p>I'm using <a href="https://github.com/Unitech/pm2" rel="nofollow noreferrer">pm2</a> to watch the directory holding the source code for my app-server's NodeJS program, running within a Kubernetes cluster.</p>
<p>However, I am getting this error:</p>
<pre><code>ENOSPC: System limit for number of file watchers reached
</code></pre>
<p>I searched on that error, and found this answer: <a href="https://stackoverflow.com/a/55763478">https://stackoverflow.com/a/55763478</a></p>
<pre><code># insert the new value into the system config
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
</code></pre>
<p>However, I tried running that in a pod on the target k8s node, and it says the command <code>sudo</code> was not found. If I remove the <code>sudo</code>, I get this error:</p>
<pre><code>sysctl: setting key "fs.inotify.max_user_watches": Read-only file system
</code></pre>
<p>How can I modify the file-system watcher limit from the 8192 found on my Kubernetes node, to a higher value such as 524288?</p>
| <p>I found a solution: use a privileged Daemon Set that runs on each node in the cluster, which has the ability to modify the <code>fs.inotify.max_user_watches</code> variable.</p>
<p>Add the following to a <code>node-setup-daemon-set.yaml</code> file, included in your Kubernetes cluster:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-setup
namespace: kube-system
labels:
k8s-app: node-setup
spec:
selector:
matchLabels:
name: node-setup
template:
metadata:
labels:
name: node-setup
spec:
containers:
- name: node-setup
image: ubuntu
command: ["/bin/sh","-c"]
args: ["/script/node-setup.sh; while true; do echo Sleeping && sleep 3600; done"]
env:
- name: PARTITION_NUMBER
valueFrom:
configMapKeyRef:
name: node-setup-config
key: partition_number
volumeMounts:
- name: node-setup-script
mountPath: /script
- name: dev
mountPath: /dev
- name: etc-lvm
mountPath: /etc/lvm
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumes:
- name: node-setup-script
configMap:
name: node-setup-script
defaultMode: 0755
- name: dev
hostPath:
path: /dev
- name: etc-lvm
hostPath:
path: /etc/lvm
---
apiVersion: v1
kind: ConfigMap
metadata:
name: node-setup-config
namespace: kube-system
data:
partition_number: "3"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: node-setup-script
namespace: kube-system
data:
node-setup.sh: |
#!/bin/bash
set -e
# change the file-watcher max-count on each node to 524288
# insert the new value into the system config
sysctl -w fs.inotify.max_user_watches=524288
# check that the new value was applied
cat /proc/sys/fs/inotify/max_user_watches
</code></pre>
<p>Note: The file above could probably be simplified quite a bit. (I was basing it on <a href="https://docs.ovh.com/gb/en/kubernetes/formating-nvme-disk-on-iops-nodes" rel="nofollow noreferrer">this guide</a>, and left in a lot of stuff that's probably not necessary for simply running the <code>sysctl</code> command.) If others succeed in trimming it further, while confirming that it still works, feel free to make/suggest those edits to my answer.</p>
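<p>For reference, here is a trimmed-down sketch keeping only what the sysctl change actually needs (a privileged container, no volumes or partition config — an untested simplification of the manifest above):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: node-setup
  template:
    metadata:
      labels:
        name: node-setup
    spec:
      containers:
        - name: node-setup
          image: busybox
          securityContext:
            privileged: true
          command: ["/bin/sh", "-c"]
          args:
            - sysctl -w fs.inotify.max_user_watches=524288 && while true; do sleep 3600; done
</code></pre>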
|
<p>So I've been trying to fix this for days now and I'm beyond stuck.</p>
<p>My app is running, and I can access the site when I go to the default url (example.com). I can refresh on this url without issues, and I can navigate through the pages being rendered through react router as long as I don't refresh on any other page. (e.g.) refreshing on example.com/path1 doesn't work, I get a 404 error.</p>
<p>My current ingress file looks like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myApp-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- my.app.com
secretName: myApp-tls
rules:
- host: "my.app.com"
http:
paths:
- pathType: Prefix
path: /.*
backend:
service:
name: myApp
port:
number: 80
</code></pre>
<p>I've added many of the most common replies to issues such as this, but it only makes matters worse, such as white screen with "Unexpected token '<'" errors for all javascript files it tries to load, and none of the pages loading at all.</p>
<p>For example, I've tried:</p>
<pre><code> nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/configuration-snippet: |
try_files $uri $uri/ /index.html;
</code></pre>
<p>I've tried adding additional paths to the controller for each route available in the app, I've tried setting the path to just "/" or "/(.*)", nothing is working.</p>
<p>Any help would be greatly appreciated! thank you!</p>
<p>EDIT:</p>
<pre><code># latest active node image
FROM node:14-alpine
# Create app directory
RUN mkdir /app
RUN mkdir -p /app/node_modules && chown -R node:node /app
WORKDIR /app
COPY package.json /app
COPY tsconfig.json /app
COPY webpack.config.js /app
COPY .env.prod .env
ADD src /app/src
ADD dist /app/dist
RUN mkdir -p /app/dist/js && chown -R node:node /app/dist/js
ADD server /app/server
ADD assets /app/assets
ADD config /app/config
ARG NPM_TOKEN
COPY .npmrc_docker .npmrc
RUN npm cache verify
RUN npm install
RUN npm run build:prod
RUN rm -f .npmrc
# Expose necessary port
EXPOSE 3000
# Compile typescript and start app
CMD ["cross-env", "NODE_ENV=production", "PORT=3000", "ts-node", "server/server"]
</code></pre>
<p>I've also tried doing it with <code>CMD ["npx", "http-server", "./dist", "-p", "3000"]</code> since the server part is only doing the following:</p>
<pre><code> this.app.use(express.static(path.resolve(__dirname, "../dist")));
this.app.get("*", (request: express.Request, response: express.Response) => {
response.sendFile(path.resolve(__dirname, "../dist/index.html"));
});
</code></pre>
| <p>Alright, so I found the real problem.</p>
<p>My domain uses both DigitalOcean and Microsoft name servers. I had the A record added on DO, but I guess it needs to be on both. So I added the A record to the other and now my config in my question works perfectly.</p>
|
<p>At first, I set limit range for namespace <code>kube-system</code> as below:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: cpu-limit-range
namespace: kube-system
spec:
limits:
- default:
cpu: 500m
memory: 500Mi
defaultRequest:
cpu: 100m
memory: 100Mi
type: Container
</code></pre>
<p>However, I later found that there is insufficient CPU and memory to start up my pod, as the limits from namespace <code>kube-system</code> already add up to &gt; 100%.</p>
<p>How can I reset reasonable limits for pods in <code>kube-system</code>? It would be best to set their limits to <code>unlimited</code>, but I don't know how to do that.</p>
<hr />
<p>Supplement information for namespace <code>kube-system</code>:</p>
<p><a href="https://i.stack.imgur.com/cU72x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cU72x.png" alt="enter image description here" /></a></p>
| <p>Not sure if your <code>kube-system</code> namespace has a limit set. You can confirm it checking the namespace itself:</p>
<pre><code>kubectl describe namespace kube-system
</code></pre>
<p>If you have a limit range or a resource quota set, it will appear in the description. Something like the following:</p>
<pre><code>Name: default-cpu-example
Labels: <none>
Annotations: <none>
Status: Active
No resource quota.
Resource Limits
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 500m 1 -
</code></pre>
<p>In this case I have set resource limits for my namespace.</p>
<p>Now I can list all the ResourceQuotas and LimitRanges using:</p>
<pre><code>kubectl get resourcequotas -n kube-system
kubectl get limitranges -n kube-system
</code></pre>
<p>If something is returned, you can simply remove it:</p>
<pre><code>kubectl delete resourcequotas NAME_OF_YOUR_RESOURCE_QUOTA -n kube-system
kubectl delete limitranges NAME_OF_YOUR_LIMIT_RANGE -n kube-system
</code></pre>
<p>I'm still not sure if that's your true problem, but that answers your question.</p>
<p>You can find more info here:</p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/</a></p>
|
| <p>I've installed <code>kong-ingress-controller</code> using a yaml file on a 3-node k8s cluster,
but I'm getting this (the status of the pod is <code>CrashLoopBackOff</code>):</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kong ingress-kong-74d8d78f57-57fqr 1/2 CrashLoopBackOff 12 (3m23s ago) 40m
[...]
</code></pre>
<p>There are 2 container declarations in the kong yaml file: <code>proxy</code> and <code>ingress-controller</code>.
The first one is up and running but the <code>ingress-controller</code> container is not:</p>
<pre><code>$kubectl describe pod ingress-kong-74d8d78f57-57fqr -n kong |less
[...]
ingress-controller:
Container ID: docker://8e9a3370f78b3057208b943048c9ecd51054d0b276ef6c93ccf049093261d8de
Image: kong/kubernetes-ingress-controller:1.3
Image ID: docker-pullable://kong/kubernetes-ingress-controller@sha256:cff0df9371d5ad07fef406c356839736ce9eeb0d33f918f56b1b232cd7289207
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 07 Sep 2021 17:15:54 +0430
Finished: Tue, 07 Sep 2021 17:15:54 +0430
Ready: False
Restart Count: 13
Liveness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
CONTROLLER_KONG_ADMIN_URL: https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY: true
CONTROLLER_PUBLISH_SERVICE: kong/kong-proxy
POD_NAME: ingress-kong-74d8d78f57-57fqr (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ft7gg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-ft7gg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 46m default-scheduler Successfully assigned kong/ingress-kong-74d8d78f57-57fqr to kung-node-2
Normal Pulled 46m kubelet Container image "kong:2.5" already present on machine
Normal Created 46m kubelet Created container proxy
Normal Started 46m kubelet Started container proxy
Normal Pulled 45m (x4 over 46m) kubelet Container image "kong/kubernetes-ingress-controller:1.3" already present on machine
Normal Created 45m (x4 over 46m) kubelet Created container ingress-controller
Normal Started 45m (x4 over 46m) kubelet Started container ingress-controller
Warning BackOff 87s (x228 over 46m) kubelet Back-off restarting failed container
</code></pre>
<p>And here is the log of <code>ingress-controller</code> container:</p>
<pre><code>-------------------------------------------------------------------------------
Kong Ingress controller
Release:
Build:
Repository:
Go: go1.16.7
-------------------------------------------------------------------------------
W0907 12:56:12.940106 1 client_config.go:614] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2021-09-07T12:56:12Z" level=info msg="version of kubernetes api-server: 1.22" api-server-host="https://10.*.*.1:443" git_commit=632ed300f2c34f6d6d15ca4cef3d3c7073412212 git_tree_state=clean git_version=v1.22.1 major=1 minor=22 platform=linux/amd64
time="2021-09-07T12:56:12Z" level=fatal msg="failed to fetch publish-service: services \"kong-proxy\" is forbidden: User \"system:serviceaccount:kong:kong-serviceaccount\" cannot get resource \"services\" in API group \"\" in the namespace \"kong\"" service_name=kong-proxy service_namespace=kong
</code></pre>
<p>If someone could help me to get a solution, that would be awesome.</p>
<p>============================================================</p>
<p><strong>UPDATE</strong>:</p>
<p>The <code>kong-ingress-controller</code>'s yaml file:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongclusterplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongClusterPlugin
plural: kongclusterplugins
shortNames:
- kcp
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
namespace:
type: string
required:
- name
- namespace
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongconsumers.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .username
description: Username of a Kong Consumer
name: Username
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: KongConsumer
plural: kongconsumers
shortNames:
- kc
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
credentials:
items:
type: string
type: array
custom_id:
type: string
username:
type: string
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
plural: kongingresses
shortNames:
- ki
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
proxy:
properties:
connect_timeout:
minimum: 0
type: integer
path:
pattern: ^/.*$
type: string
protocol:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
read_timeout:
minimum: 0
type: integer
retries:
minimum: 0
type: integer
write_timeout:
minimum: 0
type: integer
type: object
route:
properties:
headers:
additionalProperties:
items:
type: string
type: array
type: object
https_redirect_status_code:
type: integer
methods:
items:
type: string
type: array
path_handling:
enum:
- v0
- v1
type: string
preserve_host:
type: boolean
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
regex_priority:
type: integer
request_buffering:
type: boolean
response_buffering:
type: boolean
snis:
items:
type: string
type: array
strip_path:
type: boolean
upstream:
properties:
algorithm:
enum:
- round-robin
- consistent-hashing
- least-connections
type: string
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
properties:
active:
properties:
concurrency:
minimum: 1
type: integer
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
http_path:
pattern: ^/.*$
type: string
timeout:
minimum: 0
type: integer
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
passive:
properties:
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
threshold:
type: integer
type: object
host_header:
type: string
slots:
minimum: 10
type: integer
type: object
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongPlugin
plural: kongplugins
shortNames:
- kp
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
required:
- name
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tcpingresses.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .status.loadBalancer.ingress[*].ip
description: Address of the load balancer
name: Address
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: TCPIngress
plural: tcpingresses
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
rules:
items:
properties:
backend:
properties:
serviceName:
type: string
servicePort:
format: int32
type: integer
type: object
host:
type: string
port:
format: int32
type: integer
type: object
type: array
tls:
items:
properties:
hosts:
items:
type: string
type: array
secretName:
type: string
type: object
type: array
type: object
status:
type: object
version: v1beta1
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kong-serviceaccount
namespace: kong
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kong-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- tcpingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- kongplugins
- kongclusterplugins
- kongcredentials
- kongconsumers
- kongingresses
- tcpingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kong-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: kong-serviceaccount
namespace: kong
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-type: nlb
name: kong-proxy
namespace: kong
spec:
ports:
- name: proxy
port: 80
protocol: TCP
targetPort: 8000
- name: proxy-ssl
port: 443
protocol: TCP
targetPort: 8443
selector:
app: ingress-kong
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kong-validation-webhook
namespace: kong
spec:
ports:
- name: webhook
port: 443
protocol: TCP
targetPort: 8080
selector:
app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ingress-kong
name: ingress-kong
namespace: kong
spec:
replicas: 1
selector:
matchLabels:
app: ingress-kong
template:
metadata:
annotations:
kuma.io/gateway: enabled
prometheus.io/port: "8100"
prometheus.io/scrape: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app: ingress-kong
spec:
containers:
- env:
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
- name: KONG_PORT_MAPS
value: 80:8000, 443:8443
- name: KONG_ADMIN_LISTEN
value: 127.0.0.1:8444 ssl
- name: KONG_STATUS_LISTEN
value: 0.0.0.0:8100
- name: KONG_DATABASE
value: "off"
- name: KONG_NGINX_WORKER_PROCESSES
value: "2"
- name: KONG_ADMIN_ACCESS_LOG
value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_PROXY_ERROR_LOG
value: /dev/stderr
image: kong:2.5
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-ssl
protocol: TCP
- containerPort: 8100
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
- env:
- name: CONTROLLER_KONG_ADMIN_URL
value: https://127.0.0.1:8444
- name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
value: "true"
- name: CONTROLLER_PUBLISH_SERVICE
value: kong/kong-proxy
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: kong/kubernetes-ingress-controller:1.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ingress-controller
ports:
- containerPort: 8080
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
serviceAccountName: kong-serviceaccount
</code></pre>
| <p>I had installed <code>kubernetes:1.22</code> and tried to use <code>kong/kubernetes-ingress-controller:1.3</code>.</p>
<p>as @mdaniel said in the comment:</p>
<blockquote>
<p>Upon further investigation into that repo, 1.x only works up to k8s 1.21 so I'll delete my answer and you'll have to downgrade your cluster(!) or find an alternative Ingress controller</p>
</blockquote>
<p>Based on the documentation (see <a href="https://docs.konghq.com/kubernetes-ingress-controller/1.3.x/references/version-compatibility/" rel="nofollow noreferrer">KIC version compatibility</a>):</p>
<p><a href="https://i.stack.imgur.com/vpCS7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vpCS7.png" alt="version compatibility" /></a></p>
<p>As you can see, <code>Kong/kubernetes-ingress-controller</code> supports Kubernetes up to version <code>1.21</code> (at the time of writing this answer). So I downgraded my cluster to version <code>1.20</code>, which solved my problem.</p>
|
<p>I already have on my Ingress a lot of domains with so many paths as this is an environment with many microservices.</p>
<p>How can I edit my ingress in some way that when someone access to path <code>/servicex</code> it gets instead <code>/serviceb</code> for example</p>
<p>My current ingress is as follow (for simplicity I'm omitting some path from other hosts)</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: main-ingress
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
nginx.ingress.kubernetes.io/proxy-body-size: "100m"
nginx.ingress.kubernetes.io/max-worker-connections: "0"
nginx.ingress.kubernetes.io/max-worker-open-files: "0"
nginx.ingress.kubernetes.io/client-header-buffer-size: "4k"
spec:
tls:
- hosts:
- subdomain-a.domain.com
- subdomain-b.domain.com
- subdomain-c.domain.com
- subdomain-d.domain.com
- subdomain-e.domain.com
secretName: domain-com-secret
rules:
- host: subdomain-a.domain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: default-service
port:
number: 80
- path: /serviceb
pathType: Prefix
backend:
service:
name: b-service
port:
number: 80
- path: /servicec
pathType: Prefix
backend:
service:
name: c-service
port:
number: 80
- path: /serviced
pathType: Prefix
backend:
service:
name: d-service
port:
number: 80
- path: /servicee
pathType: Prefix
backend:
service:
name: e-service
port:
number: 80
- path: /servicee
pathType: Prefix
backend:
service:
name: e-service
port:
number: 80
- path: /servicef
pathType: Prefix
backend:
service:
name: f-service
port:
number: 80
- path: /serviceg
pathType: Prefix
backend:
service:
name: g-service
port:
number: 80
- host: subdomain-b.domain.com
< tons of other rules >
- host: subdomain-c.domain.com
< tons of other rules >
- host: subdomain-d.domain.com
< tons of other rules >
- host: subdomain-e.domain.com
< tons of other rules >
</code></pre>
<p>In other words we want to get access to the <code>/serviceb</code> if the user enters any of the following:</p>
<ul>
<li><a href="https://subdomain-a.domain.com/serviceb" rel="nofollow noreferrer">https://subdomain-a.domain.com/serviceb</a> -> nothing to do here as we already have this covered</li>
<li><a href="https://subdomain-a.domain.com/servicex" rel="nofollow noreferrer">https://subdomain-a.domain.com/servicex</a> -> this should be transformed into <a href="https://subdomain-a.domain.com/serviceb" rel="nofollow noreferrer">https://subdomain-a.domain.com/serviceb</a></li>
</ul>
<p>I have reviewed the rewrite option but its seems to me that if I add the annotation <code>nginx.ingress.kubernetes.io/rewrite-target: /servicex/$2</code> for example and then try to do something like:</p>
<pre><code>- path: /serviceb(/|$)(.*)
pathType: Prefix
backend:
service:
name: serviceb
port:
number: 80
</code></pre>
<p>Won't work because first, we already have that path taken and second I think that the annotation will apply to the whole thing and this will damage all the other routes.</p>
<p>Thanks in advance, any help would be appreciated</p>
| <p>The NGINX ingress controller behaves much like plain NGINX: in most cases the built-in annotations cover the standard NGINX configuration.</p>
<p>However, when you need a customization like this one, you are free to inject raw server or configuration snippets through these two annotations:</p>
<p><code>nginx.ingress.kubernetes.io/server-snippet</code> and <code>nginx.ingress.kubernetes.io/configuration-snippet</code>.</p>
<p>Inside a snippet you can write anything you would in an ordinary NGINX config.</p>
<p>Copying the working configuration from your comment here for others' reference:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |-
rewrite ^(/servicex)(.*) /serviceb$2 last;
</code></pre>
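<p>For completeness, a sketch of how this fits into the Ingress from the question (an assumption about placement - the snippet is added to the metadata annotations of the existing <code>main-ingress</code>; the <code>last</code> flag makes NGINX re-run location matching, so a request to <code>/servicex</code> ends up in the existing <code>/serviceb</code> location while all other paths are unaffected):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    # ... keep the existing annotations from the question ...
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      rewrite ^(/servicex)(.*) /serviceb$2 last;
spec:
  # ... existing tls and rules sections unchanged ...
</code></pre>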
|
<p>I want to route through Istio virtual Service to my microservice. When I use <code>@RequestParam</code> based input in prefix or even in exact it throws <code>404</code> for <code>/api/cities/{city_id}/tours</code> but the rest works fine.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: app
namespace: nsapp
spec:
gateways:
- app-gateway
hosts:
- app.*.*.*
http:
- match:
- uri:
prefix: "/api/cities/{city_id}/tours"
- uri:
prefix: "/api/countries/{country_id}/cities"
- uri:
prefix: "/api/countries"
route:
- destination:
host: appservice
port:
number: 9090
</code></pre>
| <p>This fragment</p>
<pre class="lang-yaml prettyprint-override"><code> - uri:
prefix: "/api/cities/{city_id}/tours"
- uri:
prefix: "/api/countries/{country_id}/cities"
</code></pre>
<p>will be taken literally. The <code>{city_id}</code> and <code>{country_id}</code> will not be replaced with your custom ID. Istio will look for a prefix that reads literally <code>/api/cities/{city_id}/tours</code> or <code>/api/countries/{country_id}/cities</code>, which doesn't exist, so you get a 404. If you want to match the expression to your custom ID, you have to use a regular expression. Look at <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#StringMatch" rel="nofollow noreferrer">this doc</a>. There you will find information about the capabilities of the StringMatch: <code>exact</code>, <code>prefix</code> or <code>regex</code>. You can find the syntax of the regular expressions used in Istio <a href="https://github.com/google/re2/wiki/Syntax" rel="nofollow noreferrer">here</a>.</p>
<p>Summary:
You should change <code>prefix</code> to <code>regex</code> and then create your own regular expression to match your custom ID. Example:</p>
<pre class="lang-yaml prettyprint-override"><code> - uri:
regex: "/api/cities/[a-zA-Z]+/tours"
- uri:
regex: "/api/countries/[a-zA-Z]+/cities"
</code></pre>
<p>In my example, there are only letters (upper or lower case) in the ID. Here you have to create your own regular expression based on <a href="https://github.com/google/re2/wiki/Syntax" rel="nofollow noreferrer">this documentation</a>.</p>
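<p>Istio's <code>regex</code> match uses RE2 syntax and matches the request path in full. For patterns this simple, Python's <code>re</code> module behaves the same way, so you can sanity-check a candidate expression locally before applying it (an approximation - <code>re</code> is not RE2, but they agree on this subset; the sample paths are hypothetical):</p>

```python
import re

# Candidate pattern from the example above: letters-only IDs.
pattern = re.compile(r"/api/cities/[a-zA-Z]+/tours")

# A letters-only city ID matches the full path.
assert pattern.fullmatch("/api/cities/Paris/tours")

# A numeric ID does NOT match [a-zA-Z]+ - widen the class if needed.
assert pattern.fullmatch("/api/cities/123/tours") is None
```

<p>If your IDs can contain digits or dashes, widen the character class accordingly, e.g. <code>[a-zA-Z0-9-]+</code>.</p>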
|
<p>We have been happily using ArgoCD with public repositories for a while, but we've run into problems trying to connect ArgoCD to a private repository. We have an <code>Application</code> that looks like this:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: example-app
namespace: argocd
spec:
destination:
name: example-cluster
namespace: open-cluster-management-agent
project: ops
source:
path: .
repoURL: ssh://git@github.com/example-org/example-repo.git
targetRevision: HEAD
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- Validate=false
- ApplyOutOfSyncOnly=true
</code></pre>
<p>And a corresponding secret with the secret key that looks like this:</p>
<pre><code>apiVersion: v1
metadata:
labels:
argocd.argoproj.io/secret-type: repository
name: example-repo
namespace: argocd
type: Opaque
stringData:
sshPrivateKey: |
-----BEGIN OPENSSH PRIVATE KEY-----
...
-----END OPENSSH PRIVATE KEY-----
url: ssh://git@github.com/example-org/example-repo.git
kind: Secret
</code></pre>
<p>ArgoCD fails to sync this app and shows the following error:</p>
<pre><code>rpc error: code = Unknown desc = error creating SSH agent: "SSH agent
requested but SSH_AUTH_SOCK not-specified"
</code></pre>
<p>I understand what that's telling me, but I'm not sure how to correct
the problem in the context of ArgoCD.</p>
| <p>You can review possible causes from <a href="https://github.com/argoproj/argo-cd/issues/1172" rel="nofollow noreferrer"><code>argoproj/argo-cd</code> issue 1172</a></p>
<p>Since you have the proper URL (<code>ssh://git@github.com/...</code>, as requested by <a href="https://github.com/operate-first/argocd-apps/pull/203" rel="nofollow noreferrer">PR 203</a>), double-check the indentation:</p>
<blockquote>
<p>It was an indentation problem caused by me :) in the <code>repositories.yaml</code>, causing a wrong unmarshalling and an empty <code>sshPrivateKeySecret</code>.<br />
The right version is here:</p>
<pre class="lang-yaml prettyprint-override"><code>- sshPrivateKeySecret:
key: sshPrivateKey
name: bitbucket
url: git@bitbucket.org:MY-PROJECT/MY-REPO
</code></pre>
</blockquote>
|
<p>I want to route through Istio virtual Service to my microservice. When I use <code>@RequestParam</code> based input in prefix or even in exact it throws <code>404</code> for <code>/api/cities/{city_id}/tours</code> but the rest works fine.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: app
namespace: nsapp
spec:
gateways:
- app-gateway
hosts:
- app.*.*.*
http:
- match:
- uri:
prefix: "/api/cities/{city_id}/tours"
- uri:
prefix: "/api/countries/{country_id}/cities"
- uri:
prefix: "/api/countries"
route:
- destination:
host: appservice
port:
number: 9090
</code></pre>
| <p>Prefix matching matches literal strings.</p>
<p><code>/api/countries</code> also matches what you intend to match with <code>/api/countries/{country_id}/cities</code></p>
<p><code>/api/cities/{city_id}/tours</code> however, does not work.</p>
<p>For more complex matching, you can use an exact and regex, as described in the <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest" rel="nofollow noreferrer"><code>VirtualService</code></a> documentation.</p>
<p>Maybe something like this (untested):</p>
<pre><code>exact: "/api/countries"
regex: "/api/countries/[^/]*/cities"
</code></pre>
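<p>Since the patterns above are untested, a quick local check may help. Istio evaluates <code>regex</code> against the whole path using RE2; Python's <code>re</code> module agrees with RE2 on patterns this simple (the sample IDs are hypothetical):</p>

```python
import re

# [^/]* matches exactly one path segment (anything except a slash).
countries_cities = re.compile(r"/api/countries/[^/]*/cities")

# One segment as the country ID matches.
assert countries_cities.fullmatch("/api/countries/fr/cities")

# An extra segment is rejected, since [^/]* cannot cross a slash.
assert countries_cities.fullmatch("/api/countries/fr/extra/cities") is None
```

<p>Note that <code>[^/]*</code> also matches an empty segment (<code>/api/countries//cities</code>); use <code>[^/]+</code> to require at least one character in the ID.</p>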
|
<p>I installed WSL2 on Windows10 followed these instructions: <a href="https://learn.microsoft.com/en-us/windows/wsl/install-win10" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/windows/wsl/install-win10</a>, manual install.</p>
<p>All commands worked for me however at the end when I open wsl terminal and type <code>kubectl</code> I have response <code>-sh: kubectl: not found</code>.</p>
<p>I installed Ubuntu 20.04 LTS and when I open the Ubuntu terminal then kubectl works there.</p>
<p>Powershell says it's installed correctly:</p>
<pre><code>PS C:\Users\michu> wsl --list --verbose
NAME STATE VERSION
*docker-desktop Running 2
docker-desktop-data Running 2
Ubuntu-20.04 Running 2
</code></pre>
<p>How can I make docker/kubectl work in the WSL terminal as well?<br />
Shouldn't it work right after all the instruction steps?</p>
| <p>The <strong>answer</strong> was in one of the comments; clarifying it here so it will be useful to others as well:</p>
<pre><code># run this command in order to enable kubectl in your wsl terminal
wsl --setdefault Ubuntu-20.04
</code></pre>
|
<p>I am trying to create a local cluster on a centos7 machine using eks anywhere. However I am getting below error. Please let me know if I am missing anything? Here is the link I am following to create the cluster. I have also attached the cluster create yaml file</p>
<p>Link:
<a href="https://aws.amazon.com/blogs/aws/amazon-eks-anywhere-now-generally-available-to-create-and-manage-kubernetes-clusters-on-premises/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/aws/amazon-eks-anywhere-now-generally-available-to-create-and-manage-kubernetes-clusters-on-premises/</a></p>
<p>Error:
Error: failed to create cluster: error waiting for external etcd for workload cluster to be ready: error executing wait: error: timed out waiting for the condition on clusters/dev-cluster</p>
<p><a href="https://i.stack.imgur.com/gUbVt.png" rel="nofollow noreferrer">clustercreate yaml file</a></p>
<hr />
| <p>The default spec will look for external etcd. To test it locally remove the <code>externalEtcdConfiguration</code>:</p>
<pre><code>apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: dev-cluster
spec:
clusterNetwork:
cni: cilium
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
controlPlaneConfiguration:
count: 1
datacenterRef:
kind: DockerDatacenterConfig
name: dev-cluster
kubernetesVersion: "1.21"
workerNodeGroupConfigurations:
- count: 1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
name: dev-cluster
spec: {}
---
</code></pre>
|
<p>I would like to deploy Airflow locally on Minikube and have a local folder mounted for DAGs handling.</p>
<p>Airflow is deployed like this:</p>
<pre><code>helm install $AIRFLOW_NAME apache-airflow/airflow \
--values values.yml \
--set logs.persistence.enabled=true \
--namespace $AIRFLOW_NAMESPACE \
--kubeconfig ~/.kube/config
</code></pre>
<p>The <code>values.yml</code> looks like this:</p>
<pre><code>executor: KubernetesExecutor
config:
core:
dags_folder: /dags
webserver:
extraVolumes:
- name: dags
hostPath:
path: /path/dags
extraVolumeMounts:
- name: dags
mountPath: /dags
</code></pre>
<p><code>kubectl describe pods airflow-webserver --kubeconfig ~/.kube/config --namespace airflow</code>:</p>
<pre><code>Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-config
Optional: false
logs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: airflow-logs
ReadOnly: false
dags:
Type: HostPath (bare host directory volume)
Path: /path/dags/
HostPathType:
airflow-webserver-token-xtq9h:
Type: Secret (a volume populated by a Secret)
SecretName: airflow-webserver-*
Optional: false
QoS Class: BestEffort
</code></pre>
<p>The volume <code>dags</code> appears to be correctly mounted but remains empty.
What could cause this behaviour?</p>
<p>Edit:
<code>kubectl describe pods airflow-scheduler-0 --kubeconfig ~/.kube/config --namespace airflow</code></p>
<pre><code> Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
/opt/airflow/dags from dags (rw)
/opt/airflow/logs from logs (rw)
/opt/airflow/pod_templates/pod_template_file.yaml from config (ro,path="pod_template_file.yaml")
/var/run/secrets/kubernetes.io/serviceaccount from airflow-scheduler-token-9zfpv (ro)
</code></pre>
<pre><code>Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-config
Optional: false
dags:
Type: HostPath (bare host directory volume)
Path: /path/dags
HostPathType:
logs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: airflow-logs
ReadOnly: false
airflow-scheduler-token-9zfpv:
Type: Secret (a volume populated by a Secret)
SecretName: airflow-scheduler-token-9zfpv
Optional: false
</code></pre>
| <p>Assuming that you have some DAGs in <code>/path/dags</code> already, you should mount your dags folder to the scheduler, not the webserver (if you are using Airflow 2). The scheduler is the component that parses DAGs; the webserver only displays them based on information stored in the database, so it does not actually need the DAG files (it used to need them in Airflow 1.10, before DAG serialization).</p>
<p>Also, I guess you should use LocalExecutor, not KubernetesExecutor, if you want to execute DAGs from a local folder - then the <code>dags</code> folder mounted into the scheduler will be available to the task processes spawned from the scheduler in the same container.</p>
<p>If you want to run the Kubernetes Executor and mount a host folder, I believe you will need to add it as a mount in your pod template file (you can generate such a pod template file using the Airflow CLI).</p>
<p>See <a href="https://airflow.apache.org/docs/apache-airflow/stable/executor/kubernetes.html#pod-template-file" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/executor/kubernetes.html#pod-template-file</a></p>
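<p>For the LocalExecutor route, a minimal sketch of the chart values might look like this (an assumption - it presumes the chart exposes <code>extraVolumes</code>/<code>extraVolumeMounts</code> under <code>scheduler</code> the same way it does under <code>webserver</code>; check your chart version):</p>
<pre><code>executor: LocalExecutor
config:
  core:
    dags_folder: /opt/airflow/dags
scheduler:
  extraVolumes:
    - name: dags
      hostPath:
        path: /path/dags
  extraVolumeMounts:
    - name: dags
      mountPath: /opt/airflow/dags
</code></pre>
<p>Also note that on Minikube a <code>hostPath</code> refers to the Minikube node's filesystem, not your workstation, so the local folder usually has to be mounted into the node first (for example <code>minikube mount /path/dags:/path/dags</code>) - otherwise the mounted directory will appear empty.</p>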
|
<p>Given this yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress-2
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: test.com
http:
paths:
- backend:
serviceName: admin
servicePort: 8080
path: /admin/*
pathType: ImplementationSpecific
- backend:
serviceName: keycloak
servicePort: 8080
path: /auth/*
pathType: ImplementationSpecific
</code></pre>
<p>I would like the <code>rewrite-target</code> to only apply to the service <code>admin</code>. Requests to <code>keycloak</code> should not be affected. How can I achieve that?</p>
| <p>You can split this into separate Ingress files or configs; just make sure each Ingress keeps a different name.</p>
<p>So you can create <strong>TWO</strong> Ingresses.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress-1
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: test.com
http:
paths:
- backend:
serviceName: admin
servicePort: 8080
path: /admin/*
pathType: ImplementationSpecific
</code></pre>
<p><strong>Ingress 2</strong> with a different config - the rewrite annotation is removed here, so requests to <code>keycloak</code> are passed through unchanged.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress-2
spec:
rules:
- host: test.com
http:
paths:
- backend:
serviceName: keycloak
servicePort: 8080
path: /auth/*
pathType: ImplementationSpecific
</code></pre>
<p>If you want to store everything in a single YAML file you can also do it by merging them with <code>---</code>.</p>
|
<h3>Bug Description</h3>
<p>My cluster uses Istio and one of service (java) which is deployed in mesh needs to connect to external resource <code>x.cmp.net/doc.pdf</code> with http and 443 port. This external resource using trusted wildcard cert (DigiCert) with subjects <code>*.cmp.net</code> and <code>cmp.net</code>.
When I try to use openssl to verify (from app container) ssl cert I'm getting <strong>Google cert</strong> (?? istio cert ?):</p>
<pre><code>opt$ **openssl s_client -showcerts -connect x.cmp.net:443**
CONNECTED(00000003)
depth=2 C = US, O = Google Trust Services LLC, CN = GTS Root R1
verify return:1
depth=1 C = US, O = Google Trust Services LLC, CN = GTS CA 1C3
verify return:1
depth=0 CN = *.google.com
verify return:1
---
Certificate chain
0 s:CN = *.google.com
</code></pre>
<p>Application is written in java and when app tries to download resource getting:</p>
<pre><code>No subject alternative DNS name matching shipjobmt.ista.net found.
</code></pre>
<p>My configuration:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: internalnet
namespace: ppct
spec:
hosts:
- x.cmp.net
- y.cmp.net
- z.cmp.net
exportTo:
- "."
ports:
- number: 443
name: https
protocol: HTTPS
location: MESH_EXTERNAL
resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: xinternalnet
namespace: ppct
spec:
host: x.cmp.net
trafficPolicy:
tls:
mode: SIMPLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: yinternalnet
namespace: ppct
spec:
host: y.cmp.net
trafficPolicy:
tls:
mode: SIMPLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: zinternalnet
namespace: ppct
spec:
host: z.cmp.net
trafficPolicy:
tls:
mode: SIMPLE
</code></pre>
<p><strong>I also have blocking mode on:</strong></p>
<pre><code>kubectl get istiooperator installed-state -n istio-system -o jsonpath='{.spec.meshConfig.outboundTrafficPolicy.mode}'
REGISTRY_ONLY
</code></pre>
<h3>Version</h3>
<pre><code>`istioctl version`
client version: 1.11.0
control plane version: 1.11.0
data plane version: 1.11.0 (65 proxies)
`kubectl version --short`
Client Version: v1.19.9
Server Version: v1.19.9
</code></pre>
<h3>Additional Information</h3>
<p><em>No response</em></p>
| <p>Your problem seems to be linked to double TLS (you can read more <a href="https://istio.io/latest/docs/ops/common-problems/network-issues/#double-tls" rel="nofollow noreferrer">here</a>).
There are two possible solutions:</p>
<h3>Solution 1</h3>
<p>In <code>ServiceEntry</code> define protocol as HTTP with redirection to 443</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
hosts:
- ...
ports:
- number: 80
name: http
protocol: HTTP
targetPort: 443
</code></pre>
<h3>Solution 2</h3>
<p>Disable TLS for outgoing traffic with <code>DestinationRule</code></p>
<pre class="lang-yaml prettyprint-override"><code>trafficPolicy:
tls:
mode: DISABLE
</code></pre>
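<p>Putting Solution 1 together with the <code>ServiceEntry</code> from the question, the full resource might look like this (a sketch only — host names are copied from the question, adjust for your environment):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: internalnet
  namespace: ppct
spec:
  hosts:
    - x.cmp.net
    - y.cmp.net
    - z.cmp.net
  exportTo:
    - "."
  ports:
    # The sidecar listens on plain HTTP and originates TLS towards 443,
    # so the app no longer sees Istio's cert on the HTTPS port
    - number: 80
      name: http
      protocol: HTTP
      targetPort: 443
  location: MESH_EXTERNAL
  resolution: DNS
</code></pre>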
|
<p>I would like to install kubectx on Ubuntu 20.04. I couldn't find any info on how to do it. Any comment is much appreciated. Thanks!</p>
<p>This package is contained in the <code>deb http://ftp.de.debian.org/debian buster main</code> repo, which is not available in Ubuntu.</p>
<p>First you need to add this to <code>/etc/apt/sources.list</code> like below:</p>
<pre><code>#for kubectlx
deb [trusted=yes] http://ftp.de.debian.org/debian buster main
</code></pre>
<p>Then run</p>
<pre><code>sudo apt-get update
</code></pre>
<p>Then you can install it with this command:</p>
<pre><code>sudo apt install kubectx
</code></pre>
|
<p>I would like to deploy Airflow locally on Minikube and have a local folder mounted for DAGs handling.</p>
<p>Airflow is deployed like this:</p>
<pre><code>helm install $AIRFLOW_NAME apache-airflow/airflow \
--values values.yml \
--set logs.persistence.enabled=true \
--namespace $AIRFLOW_NAMESPACE \
--kubeconfig ~/.kube/config
</code></pre>
<p>The <code>values.yml</code> looks like this:</p>
<pre><code>executor: KubernetesExecutor
config:
core:
dags_folder: /dags
webserver:
extraVolumes:
- name: dags
hostPath:
path: /path/dags
extraVolumeMounts:
- name: dags
mountPath: /dags
</code></pre>
<p><code>kubectl describe pods airflow-webserver --kubeconfig ~/.kube/config --namespace airflow</code>:</p>
<pre><code>Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-config
Optional: false
logs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: airflow-logs
ReadOnly: false
dags:
Type: HostPath (bare host directory volume)
Path: /path/dags/
HostPathType:
airflow-webserver-token-xtq9h:
Type: Secret (a volume populated by a Secret)
SecretName: airflow-webserver-*
Optional: false
QoS Class: BestEffort
</code></pre>
<p>The volume dags appears to be correctly mounted but remains empty.
What could cause this behaviour?</p>
<p>Edit:
<code>kubectl describe pods airflow-scheduler-0 --kubeconfig ~/.kube/config --namespace airflow</code></p>
<pre><code> Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
/opt/airflow/dags from dags (rw)
/opt/airflow/logs from logs (rw)
/opt/airflow/pod_templates/pod_template_file.yaml from config (ro,path="pod_template_file.yaml")
/var/run/secrets/kubernetes.io/serviceaccount from airflow-scheduler-token-9zfpv (ro)
</code></pre>
<pre><code>Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-config
Optional: false
dags:
Type: HostPath (bare host directory volume)
Path: /path/dags
HostPathType:
logs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: airflow-logs
ReadOnly: false
airflow-scheduler-token-9zfpv:
Type: Secret (a volume populated by a Secret)
SecretName: airflow-scheduler-token-9zfpv
Optional: false
</code></pre>
| <p>I was completely mistaking the <code>hostPath</code> parameter for my local machine.
<code>hostPath</code> refers to the Minikube node running the pod.</p>
<pre><code> extraVolumes:
- name: dags
hostPath:
path: /mnt/airflow/dags
type: Directory
extraVolumeMounts:
- name: dags
mountPath: /opt/airflow/dags
</code></pre>
<p>This will mount a volume between the Minikube host node and the pod.
The path <code>/mnt/airflow/dags</code> <strong>does not need to exist on the local machine</strong>.</p>
<p>The local DAGs folder can then be mounted into the Minikube node:</p>
<p><code>minikube mount ./dags/:/mnt/airflow/dags</code></p>
<p>See: <a href="https://medium.com/@ipeluffo/running-apache-airflow-locally-on-kubernetes-minikube-31f308e3247a" rel="nofollow noreferrer">https://medium.com/@ipeluffo/running-apache-airflow-locally-on-kubernetes-minikube-31f308e3247a</a></p>
|
<p>I'm beginning to build out a Kubernetes cluster for our applications. We are using Azure for cloud services, so my K8s cluster is built using AKS. The AKS cluster was created using the Azure portal interface. It has one node, and I am attempting to create a pod with a single container to deploy to the node. Where I am currently stuck is trying to connect to the AKS cluster from PowerShell.
The steps I have taken are:</p>
<pre><code>az login (followed by logging in)
az account set --subscription <subscription id>
az aks get-credentials --name <cluster name> --resource-group <resource group name>
kubectl get nodes
</code></pre>
<p>After entering the last line, I am left with the error: Unable to connect to the server: dial tcp: lookup : no such host</p>
<p>I've also gone down a few other rabbit holes found on SO and other forums, but quite honestly, I'm looking for a straightforward way to access my cluster before complicating it further.</p>
<p>Edit: So in the end, I deleted the resource I was working with and spun up a new version of AKS, and am now having no trouble connecting. Thanks for the suggestions though!</p>
| <p>As of now, the aks run command adds a <a href="https://learn.microsoft.com/en-gb/azure/aks/private-clusters#options-for-connecting-to-the-private-cluster" rel="nofollow noreferrer">fourth option</a> to connect to private clusters extending <a href="https://stackoverflow.com/users/7001697/darius">@Darius</a>'s <a href="https://stackoverflow.com/a/66002325">three options</a> posted earlier:</p>
<blockquote>
<ol start="4">
<li>Use the <a href="https://learn.microsoft.com/en-gb/azure/aks/command-invoke" rel="nofollow noreferrer">AKS Run Command feature</a>.</li>
</ol>
</blockquote>
<p>Below are some copy/paste excerpts of a simple command, and one that requires a file. It is possible to chain multiple commands with <code>&&</code>.</p>
<pre class="lang-bash prettyprint-override"><code>az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--command "kubectl get pods -n kube-system"
az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--command "kubectl apply -f deployment.yaml -n default" \
--file deployment.yaml
</code></pre>
<p>In case you get a <code>(ResourceGroupNotFound)</code> error, try adding the subscription, too</p>
<pre class="lang-bash prettyprint-override"><code>az aks command invoke \
--resource-group myResourceGroup \
--name myAKSCluster \
--subscription <subscription> \
--command "kubectl get pods -n kube-system"
</code></pre>
<p>You can also configure the default subscription:</p>
<pre class="lang-bash prettyprint-override"><code>az account set -s <subscription>
</code></pre>
|
<p>I have a Python application and it utilizes environment variables in Kubernetes configuration such as:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
namespace: default
data:
var1: foo
var2: bar
---
apiVersion: v1
kind: Pod
metadata:
name: my-pod
namespace: default
spec:
containers:
- envFrom:
- configMapRef:
name: my-config
</code></pre>
<p>So, it is fine when the app is Dockerized and running on a Kubernetes farm.</p>
<p>However, when running the application on a <strong>local machine</strong> without Docker and Kubernetes, with just a humble command:</p>
<pre class="lang-py prettyprint-override"><code>python app.py
</code></pre>
<p>I have to make the Python modules find the environment variables with <code>os.getenv('var1')</code> even if there is no ConfigMap or Pod on.</p>
<p>Is it possible to do this without adding extra code to the Python modules or adding environment variables to the local machine's system?</p>
| <p>In a shell, you could also simply temporarily assign the value(s) for the environment variable(s) right before calling the script. No need to change your script at all.</p>
<p>Consider the following <code>app.py</code> which just prints the environment variables <code>ENV_PARAM</code> and <code>ENV_PARAM2</code>:</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python3
import os
print(os.environ['ENV_PARAM'])
print(os.environ['ENV_PARAM2'])
</code></pre>
<p>When the vars are not set and you call it like this</p>
<pre class="lang-bash prettyprint-override"><code>python app.py
</code></pre>
<p>you will get a <code>KeyError</code>.</p>
<blockquote>
<p>KeyError: 'ENV_PARAM'</p>
</blockquote>
<p>When you instead specify the values in the same line and call it like this</p>
<pre class="lang-bash prettyprint-override"><code>ENV_PARAM='foo' ENV_PARAM2='bar' python app.py
</code></pre>
<p>it works fine. Output:</p>
<pre><code>foo
bar
</code></pre>
<p>This will not set the environment variable beyond that, so if you do</p>
<pre class="lang-bash prettyprint-override"><code>echo "$ENV_PARAM"
</code></pre>
<p>afterwards, it will return nothing. The environment variable was only set temporary, like you required.</p>
|
<p>I have a cluster with MySQL database which is a <strong>StatefulSet</strong>.</p>
<p>I would like to scale up my database with hpa.</p>
<p>The problem is that the second database that has been created is <strong>empty</strong>.
I don't know how to synchronize the second with the first replica.</p>
<p>Someone told me that I have to create Operators, but I thought that the problem could have been solved with StatefulSets...</p>
<p>This is mysql statefulset code:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 1
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7.21
imagePullPolicy: Always
resources:
requests:
memory: 50Mi #50
cpu: 50m
limits:
memory: 500Mi #220?
cpu: 400m #65
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-storage
mountPath: /var/lib/mysql
subPath: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret # MARK P
key: password
- name: MYSQL_ROOT_HOST
valueFrom:
secretKeyRef:
name: mysql-secret # MARK P
key: host
volumeClaimTemplates:
- metadata:
name: mysql-storage
spec:
accessModes:
- ReadWriteOnce
storageClassName: standard
resources:
requests:
storage: 5Gi
</code></pre>
<p>Does anyone have any idea?</p>
<p>You need to use volumes, which in K8S means a <code>PersistentVolume</code>.</p>
<p>Here is a working sample with all the required resources.</p>
<p><a href="https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/09-StatefulSet" rel="nofollow noreferrer">https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/09-StatefulSet</a></p>
<ul>
<li>Check out the <code>kustomization.yaml</code> to see what is actually required.</li>
<li>You will need the <code>StatefulSet</code>, an optional <code>ConfigMap</code>/<code>Secret</code>, and a <code>Volume</code> or a <code>PersistentVolume</code> &amp; <code>PersistentVolumeClaim</code>.</li>
</ul>
<h3>How to sync your DB?</h3>
<ul>
<li>In order to "sync" your DB, the <code>StatefulSet</code> pods need to share the same volume.</li>
<li>You also need the secondary instance to be set as <code>super-read-only</code>.</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: mysql
labels:
app: mysql
data:
primary.cnf: |
# Apply this config only on the primary.
[mysqld]
log-bin
replica.cnf: |
# Apply this config only on replicas.
[mysqld]
super-read-only
</code></pre>
<pre class="lang-yaml prettyprint-override"><code># Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
ports:
- name: mysql
port: 3306
clusterIP: None
selector:
app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the primary: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
name: mysql-read
labels:
app: mysql
spec:
ports:
- name: mysql
port: 3306
selector:
app: mysql
</code></pre>
<h2>Set up your primary and your replicas</h2>
<p>(Check out the command in the sample)</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/primary.cnf /mnt/conf.d/
else
cp /mnt/config-map/replica.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on primary (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing replica. (Need to remove the tailing semicolon!)
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_slave_info xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from primary. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm -f xtrabackup_binlog_info xtrabackup_slave_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
mysql -h 127.0.0.1 \
-e "$(<change_master_to.sql.in), \
MASTER_HOST='mysql-0.mysql', \
MASTER_USER='root', \
MASTER_PASSWORD='', \
MASTER_CONNECT_RETRY=10; \
START SLAVE;" || exit 1
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
</code></pre>
|
<p>What I have are multiple similar, simple <code>dockerfile</code>s.</p>
<p>But what I want is to have a single base <code>dockerfile</code> and my <code>dockerfile</code>s
pass their variables into it.</p>
<p>In my case the only difference between the <code>dockerfile</code>s is
their <code>EXPOSE</code>, so I think it's better to keep a base <code>dockerfile</code> and have the other <code>dockerfile</code>s only inject their variables into the base <code>dockerfile</code>, like a template engine.</p>
<p>A sample <code>dockerfile</code>:</p>
<pre><code>FROM golang:1.17 AS builder
WORKDIR /app
COPY . .
RUN go mod download
RUN go build -o /bin/app ./cmd/root.go
FROM alpine:latest
WORKDIR /bin/
COPY --from=builder /bin/app .
EXPOSE 8080
LABEL org.opencontainers.image.source="https://github.com/mohammadne/bookman-auth"
ENTRYPOINT ["/bin/app"]
CMD ["server", "--env=dev"]
</code></pre>
| <h1><code>IMPORT</code> directive will never be implemented</h1>
<p>A long time ago, an <code>IMPORT</code> directive was proposed for <code>Docker</code>.</p>
<p>Unfortunately, the issues are closed while the PRs are still open:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/69234878/using-shared-dockerfile-for-multiple-dockerfiles">docker - using shared dockerfile for multiple dockerfiles - Stack Overflow</a></li>
<li><a href="https://github.com/moby/moby/issues/735" rel="nofollow noreferrer">Proposal: Dockerfile add INCLUDE · Issue #735 · moby/moby</a></li>
<li><a href="https://github.com/moby/moby/issues/974" rel="nofollow noreferrer">implement an INCLUDE verb/instruction for docker build files · Issue #974 · moby/moby</a></li>
<li><a href="https://github.com/moby/moby/issues/40165" rel="nofollow noreferrer">Add INCLUDE feature · Issue #40165 · moby/moby</a></li>
<li><a href="https://github.com/moby/moby/pull/2108" rel="nofollow noreferrer">docker build: initial work on the include command by flavio · Pull Request #2108 · moby/moby</a></li>
<li><a href="https://www.isaac.nl/en/developer-blog/dockerfile-templating-to-automate-image-creation/index.html" rel="nofollow noreferrer">Dockerfile templating to automate image creation</a></li>
<li><a href="https://blog.dockbit.com/templating-your-dockerfile-like-a-boss-2a84a67d28e9" rel="nofollow noreferrer">Templating your Dockerfile like a boss! | by Ahmed ElGamil | Dockbit</a></li>
</ul>
<h1>Solution for your case</h1>
<p>But for your case, all you need is just a bit of <code>sed</code>.</p>
<p>E.g.:</p>
<pre><code># Set the shell variable first: $EXPOSED_PORT is expanded by the current
# shell, so a one-off "EXPOSED_PORT=8081 sed ..." prefix would not work here
EXPOSED_PORT=8081

# Case 1: in-place templating
sed -i "s/EXPOSE 8080/EXPOSE $EXPOSED_PORT/" Dockerfile

# Case 2: generating Dockerfile from template
sed "s/EXPOSE 8080/EXPOSE $EXPOSED_PORT/" Dockerfile.template > Dockerfile
</code></pre>
<p>Explanation:</p>
<ul>
<li><code>EXPOSED_PORT=8081</code> declares a local <code>bash</code> variable</li>
<li><code>sed</code> is a tool for text manipulation</li>
<li><code>sed -i "s/EXPOSE 8080/EXPOSE $EXPOSED_PORT/" Dockerfile</code> replaces <code>EXPOSE 8080</code> with <code>EXPOSE 8081</code> in place</li>
<li><code>sed "s/EXPOSE 8080/EXPOSE $EXPOSED_PORT/" Dockerfile.template > Dockerfile</code> generates the new <code>Dockerfile</code> from <code>Dockerfile.template</code></li>
</ul>
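<p>A self-contained demonstration of Case 2, using a minimal stand-in template in a temp directory (the full <code>Dockerfile</code> from the question would work the same way):</p>

```shell
# Generate a Dockerfile from a template, substituting the exposed port
tmp=$(mktemp -d)
printf 'FROM alpine:latest\nEXPOSE 8080\n' > "$tmp/Dockerfile.template"

EXPOSED_PORT=9090
sed "s/EXPOSE 8080/EXPOSE $EXPOSED_PORT/" "$tmp/Dockerfile.template" > "$tmp/Dockerfile"

# the generated file now contains "EXPOSE 9090"
cat "$tmp/Dockerfile"
```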
|
<p>I've installed kong-ingress-controller using a YAML file on a 3-node k8s cluster (bare metal; you can see the file at the bottom of the question) and everything is up and running:</p>
<pre><code>$kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default bar-deployment-64d5b5b5c4-99p4q 1/1 Running 0 12m
default foo-deployment-877bf9974-xmpj6 1/1 Running 0 15m
kong ingress-kong-5cd9db4db9-4cg4q 2/2 Running 0 79m
kube-system calico-kube-controllers-5f6cfd688c-5njnn 1/1 Running 0 18h
kube-system calico-node-5k9b6 1/1 Running 0 18h
kube-system calico-node-jbb7k 1/1 Running 0 18h
kube-system calico-node-mmmts 1/1 Running 0 18h
kube-system coredns-74ff55c5b-5q5fn 1/1 Running 0 23h
kube-system coredns-74ff55c5b-9bbbk 1/1 Running 0 23h
kube-system etcd-kubernetes-master 1/1 Running 1 23h
kube-system kube-apiserver-kubernetes-master 1/1 Running 1 23h
kube-system kube-controller-manager-kubernetes-master 1/1 Running 1 23h
kube-system kube-proxy-4h7hs 1/1 Running 0 20h
kube-system kube-proxy-sd6b2 1/1 Running 0 20h
kube-system kube-proxy-v9z8p 1/1 Running 1 23h
kube-system kube-scheduler-kubernetes-master 1/1 Running 1 23h
</code></pre>
<p><strong>but the problem is here</strong>:</p>
<p>the <strong><code>EXTERNAL-IP</code></strong> of the <strong><code>kong-proxy</code> service</strong> is <strong>pending</strong>, so I'm not able to reach my cluster from the outside:</p>
<pre><code>$kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default bar-service ClusterIP 10.103.49.102 <none> 5000/TCP 15m
default foo-service ClusterIP 10.102.52.89 <none> 5000/TCP 19m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
kong kong-proxy LoadBalancer 10.104.79.161 <pending> 80:31583/TCP,443:30053/TCP 82m
kong kong-validation-webhook ClusterIP 10.109.75.104 <none> 443/TCP 82m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 23h
</code></pre>
<pre><code>$ kubectl describe service kong-proxy -n kong
Name: kong-proxy
Namespace: kong
Labels: <none>
Annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector: app=ingress-kong
Type: LoadBalancer
IP Families: <none>
IP: 10.104.79.161
IPs: 10.104.79.161
Port: proxy 80/TCP
TargetPort: 8000/TCP
NodePort: proxy 31583/TCP
Endpoints: 192.168.74.69:8000
Port: proxy-ssl 443/TCP
TargetPort: 8443/TCP
NodePort: proxy-ssl 30053/TCP
Endpoints: 192.168.74.69:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>My k8s version is 1.20.1 and my Docker version is 19.3.10.
If someone could help me find a solution, that would be awesome.</p>
<p>=============================================</p>
<p><strong>kong-ingress-controller</strong> yaml file:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: kong
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongclusterplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongClusterPlugin
plural: kongclusterplugins
shortNames:
- kcp
scope: Cluster
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
namespace:
type: string
required:
- name
- namespace
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongconsumers.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .username
description: Username of a Kong Consumer
name: Username
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: KongConsumer
plural: kongconsumers
shortNames:
- kc
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
credentials:
items:
type: string
type: array
custom_id:
type: string
username:
type: string
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongingresses.configuration.konghq.com
spec:
group: configuration.konghq.com
names:
kind: KongIngress
plural: kongingresses
shortNames:
- ki
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
proxy:
properties:
connect_timeout:
minimum: 0
type: integer
path:
pattern: ^/.*$
type: string
protocol:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
read_timeout:
minimum: 0
type: integer
retries:
minimum: 0
type: integer
write_timeout:
minimum: 0
type: integer
type: object
route:
properties:
headers:
additionalProperties:
items:
type: string
type: array
type: object
https_redirect_status_code:
type: integer
methods:
items:
type: string
type: array
path_handling:
enum:
- v0
- v1
type: string
preserve_host:
type: boolean
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
regex_priority:
type: integer
request_buffering:
type: boolean
response_buffering:
type: boolean
snis:
items:
type: string
type: array
strip_path:
type: boolean
upstream:
properties:
algorithm:
enum:
- round-robin
- consistent-hashing
- least-connections
type: string
hash_fallback:
type: string
hash_fallback_header:
type: string
hash_on:
type: string
hash_on_cookie:
type: string
hash_on_cookie_path:
type: string
hash_on_header:
type: string
healthchecks:
properties:
active:
properties:
concurrency:
minimum: 1
type: integer
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
http_path:
pattern: ^/.*$
type: string
timeout:
minimum: 0
type: integer
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
passive:
properties:
healthy:
properties:
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
successes:
minimum: 0
type: integer
type: object
unhealthy:
properties:
http_failures:
minimum: 0
type: integer
http_statuses:
items:
type: integer
type: array
interval:
minimum: 0
type: integer
tcp_failures:
minimum: 0
type: integer
timeout:
minimum: 0
type: integer
type: object
type: object
threshold:
type: integer
type: object
host_header:
type: string
slots:
minimum: 10
type: integer
type: object
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kongplugins.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .plugin
description: Name of the plugin
name: Plugin-Type
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
- JSONPath: .disabled
description: Indicates if the plugin is disabled
name: Disabled
priority: 1
type: boolean
- JSONPath: .config
description: Configuration of the plugin
name: Config
priority: 1
type: string
group: configuration.konghq.com
names:
kind: KongPlugin
plural: kongplugins
shortNames:
- kp
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
config:
type: object
configFrom:
properties:
secretKeyRef:
properties:
key:
type: string
name:
type: string
required:
- name
- key
type: object
type: object
disabled:
type: boolean
plugin:
type: string
protocols:
items:
enum:
- http
- https
- grpc
- grpcs
- tcp
- tls
type: string
type: array
run_on:
enum:
- first
- second
- all
type: string
required:
- plugin
version: v1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: tcpingresses.configuration.konghq.com
spec:
additionalPrinterColumns:
- JSONPath: .status.loadBalancer.ingress[*].ip
description: Address of the load balancer
name: Address
type: string
- JSONPath: .metadata.creationTimestamp
description: Age
name: Age
type: date
group: configuration.konghq.com
names:
kind: TCPIngress
plural: tcpingresses
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
properties:
rules:
items:
properties:
backend:
properties:
serviceName:
type: string
servicePort:
format: int32
type: integer
type: object
host:
type: string
port:
format: int32
type: integer
type: object
type: array
tls:
items:
properties:
hosts:
items:
type: string
type: array
secretName:
type: string
type: object
type: array
type: object
status:
type: object
version: v1beta1
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kong-serviceaccount
namespace: kong
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kong-ingress-clusterrole
rules:
- apiGroups:
- ""
resources:
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
- extensions
- networking.internal.knative.dev
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- tcpingresses/status
verbs:
- update
- apiGroups:
- configuration.konghq.com
resources:
- kongplugins
- kongclusterplugins
- kongcredentials
- kongconsumers
- kongingresses
- tcpingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kong-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kong-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: kong-serviceaccount
namespace: kong
---
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-type: nlb
name: kong-proxy
namespace: kong
spec:
ports:
- name: proxy
port: 80
protocol: TCP
targetPort: 8000
- name: proxy-ssl
port: 443
protocol: TCP
targetPort: 8443
selector:
app: ingress-kong
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kong-validation-webhook
namespace: kong
spec:
ports:
- name: webhook
port: 443
protocol: TCP
targetPort: 8080
selector:
app: ingress-kong
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ingress-kong
name: ingress-kong
namespace: kong
spec:
replicas: 1
selector:
matchLabels:
app: ingress-kong
template:
metadata:
annotations:
kuma.io/gateway: enabled
prometheus.io/port: "8100"
prometheus.io/scrape: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app: ingress-kong
spec:
containers:
- env:
- name: KONG_PROXY_LISTEN
value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
- name: KONG_PORT_MAPS
value: 80:8000, 443:8443
- name: KONG_ADMIN_LISTEN
value: 127.0.0.1:8444 ssl
- name: KONG_STATUS_LISTEN
value: 0.0.0.0:8100
- name: KONG_DATABASE
value: "off"
- name: KONG_NGINX_WORKER_PROCESSES
value: "2"
- name: KONG_ADMIN_ACCESS_LOG
value: /dev/stdout
- name: KONG_ADMIN_ERROR_LOG
value: /dev/stderr
- name: KONG_PROXY_ERROR_LOG
value: /dev/stderr
image: kong:2.5
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- kong quit
livenessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-ssl
protocol: TCP
- containerPort: 8100
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /status
port: 8100
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
- env:
- name: CONTROLLER_KONG_ADMIN_URL
value: https://127.0.0.1:8444
- name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
value: "true"
- name: CONTROLLER_PUBLISH_SERVICE
value: kong/kong-proxy
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: kong/kubernetes-ingress-controller:1.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: ingress-controller
ports:
- containerPort: 8080
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
serviceAccountName: kong-serviceaccount
</code></pre>
| <p>The short answer is what @iglen_ said <a href="https://stackoverflow.com/a/69177863/12741668">in this answer</a>, but let me explain the solution in detail.</p>
<p>When using a cloud provider, <code>LoadBalancer</code>-type Services are managed and provisioned by the environment automatically (see the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#external-load-balancer-providers" rel="nofollow noreferrer">k8s docs</a>), but when building your own bare-metal cluster you need to add a component that provisions <code>IPs</code> for <code>LoadBalancer</code>-type Services. One such component is <a href="https://metallb.universe.tf" rel="nofollow noreferrer">MetalLB</a>, which is built exactly for this.</p>
<p>Before installing MetalLB, check the <a href="https://metallb.universe.tf/#requirements" rel="nofollow noreferrer">requirements</a>.</p>
<p>Before deploying MetalLB we need one preparatory step:</p>
<blockquote>
<p>If you’re using kube-proxy in IPVS mode, since Kubernetes v1.14.2 you
have to enable strict ARP mode.</p>
</blockquote>
<p>Note that you don't need this if you're using kube-router as the service proxy, because it enables strict ARP by default.
Enter this command:</p>
<p><code>$ kubectl edit configmap -n kube-system kube-proxy</code></p>
<p>In the page that opens, search for <strong>mode</strong>. In my case the mode is an empty string, so I don't need to change anything; but if the mode is set to <code>ipvs</code>, then, as the installation guide says, you need to set the configuration below in this file:</p>
<pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
strictARP: true
</code></pre>
<p>As the next step, run these commands:</p>
<pre><code>$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
</code></pre>
<p>After running the above commands we have this:</p>
<pre><code>$ kubectl get all -n metallb-system
NAME READY STATUS RESTARTS AGE
pod/controller-6b78sff7d9-2rv2f 1/1 Running 0 3m
pod/speaker-7bqev 1/1 Running 0 3m
pod/speaker-txrg5 1/1 Running 0 3m
pod/speaker-w7th5 1/1 Running 0 3m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 3 3 3 3 3 kubernetes.io/os=linux 3m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 3m
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-6b78sff7d9 1 1 1 3m
</code></pre>
<p>MetalLB needs a pool of <code>IPv4 addresses</code> to hand out:</p>
<pre><code>$ ip a s
1: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
[...]
inet 10.240.1.59/24 brd 10.240.1.255 scope global dynamic noprefixroute ens160
valid_lft 425669sec preferred_lft 421269sec
[...]
</code></pre>
<p><code>ens160</code> is my control-plane network interface, and as you can see its address is in the <code>10.240.1.0/24</code> network, so I'm going to assign a set of IP addresses from this network:</p>
<pre><code>$ sipcalc 10.240.1.59/24
-[ipv4 : 10.240.1.59/24] - 0
[CIDR]
Host address - 10.240.1.59
Host address (decimal) - 183500115
Host address (hex) - AF0031B
Network address - 10.240.1.0
Network mask - 255.255.255.0
Network mask (bits) - 24
Network mask (hex)	- FFFFFF00
Broadcast address - 10.240.1.255
Cisco wildcard - 0.0.0.255
Addresses in network - 256
Network range - 10.240.1.0 - 10.240.1.255
Usable range - 10.240.1.1 - 10.240.1.254
</code></pre>
<p>Now I'm going to take <strong>11</strong> IP addresses from the <code>Usable range</code> and assign them to MetalLB.
Let's create a <code>ConfigMap</code> for MetalLB:</p>
<pre><code>$ sudo nano metallb-cm.yaml
</code></pre>
<p>Paste the configuration below into <code>metallb-cm.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 10.240.1.100-10.240.1.110
</code></pre>
<p>Then save the file and run this command:</p>
<pre><code>$ kubectl create -f metallb-cm.yaml
</code></pre>
<p>Now let's check our services again:</p>
<pre><code>$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default bar-service ClusterIP 10.103.49.102 <none> 5000/TCP 15m
default foo-service ClusterIP 10.102.52.89 <none> 5000/TCP 19m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
kong kong-proxy LoadBalancer 10.104.79.161 10.240.1.100 80:31583/TCP,443:30053/TCP 82m
kong kong-validation-webhook ClusterIP 10.109.75.104 <none> 443/TCP 82m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 23h
</code></pre>
<p>As you can see, the Service of type <code>LoadBalancer</code> now has an external IP address.</p>
|
<p>I have a multi-container pod running on AWS EKS. One web app container running on port 80 and a Redis container running on port 6379.</p>
<p>Once the deployment goes through, manual curl probes against the pod's IP address:port from within the cluster always return good responses.<br />
The ingress-to-service path is fine as well.</p>
<p>However, the kubelet's probes are failing, leading to restarts, and I'm not sure yet how to replicate that probe failure, let alone fix it.</p>
<p>Thanks for reading!</p>
<p>Here are the events:</p>
<pre><code>0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Readiness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Liveness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Readiness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Readiness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Readiness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Liveness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Readiness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Liveness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/app-7cddfb865b-gsvbg Liveness probe failed: Get http://10.10.14.199:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Normal Killing pod/app-7cddfb865b-gsvbg Container app failed liveness probe, will be restarted
0s Normal Pulling pod/app-7cddfb865b-gsvbg Pulling image "registry/app:latest"
0s Normal Pulled pod/app-7cddfb865b-gsvbg Successfully pulled image "registry/app:latest"
0s Normal Created pod/app-7cddfb865b-gsvbg Created container app
</code></pre>
<p>Making things generic, this is my deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "16"
creationTimestamp: "2021-05-26T22:01:19Z"
generation: 19
labels:
app: app
chart: app-1.0.0
environment: production
heritage: Helm
owner: acme
release: app
name: app
namespace: default
resourceVersion: "234691173"
selfLink: /apis/apps/v1/namespaces/default/deployments/app
uid: 3149acc2-031e-4719-89e6-abafb0bcdc3c
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: app
release: app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 100%
type: RollingUpdate
template:
metadata:
annotations:
kubectl.kubernetes.io/restartedAt: "2021-09-17T09:04:49-07:00"
creationTimestamp: null
labels:
app: app
environment: production
owner: acme
release: app
spec:
containers:
- image: redis:5.0.6-alpine
imagePullPolicy: IfNotPresent
name: redis
ports:
- containerPort: 6379
hostPort: 6379
name: redis
protocol: TCP
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 500m
memory: 500Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- env:
- name: SYSTEM_ENVIRONMENT
value: production
envFrom:
- configMapRef:
name: app-production
- secretRef:
name: app-production
image: registry/app:latest
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 80
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 1
name: app
ports:
- containerPort: 80
hostPort: 80
name: app
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /
port: 80
scheme: HTTP
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: "1"
memory: 500Mi
requests:
cpu: "1"
memory: 500Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
priorityClassName: critical-app
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2021-08-10T17:34:18Z"
lastUpdateTime: "2021-08-10T17:34:18Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-05-26T22:01:19Z"
lastUpdateTime: "2021-09-17T16:48:54Z"
message: ReplicaSet "app-7f7cb8fd4" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 19
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
<p>This is my service yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2021-05-05T20:11:33Z"
labels:
app: app
chart: app-1.0.0
environment: production
heritage: Helm
owner: acme
release: app
name: app
namespace: default
resourceVersion: "163989104"
selfLink: /api/v1/namespaces/default/services/app
uid: 1f54cd2f-b978-485e-a1af-984ffeeb7db0
spec:
clusterIP: 172.20.184.161
externalTrafficPolicy: Cluster
ports:
- name: http
nodePort: 32648
port: 80
protocol: TCP
targetPort: 80
selector:
app: app
release: app
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
<p>Update 10/20/2021:</p>
<p>So I took the advice and tuned the readiness probe with these generous settings:</p>
<pre><code>readinessProbe:
failureThreshold: 3
httpGet:
path: /
port: 80
scheme: HTTP
initialDelaySeconds: 300
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
</code></pre>
<p>These are the events:</p>
<pre><code>5m21s Normal Scheduled pod/app-686494b58b-6cjsq Successfully assigned default/app-686494b58b-6cjsq to ip-10-10-14-127.compute.internal
5m20s Normal Created pod/app-686494b58b-6cjsq Created container redis
5m20s Normal Started pod/app-686494b58b-6cjsq Started container redis
5m20s Normal Pulling pod/app-686494b58b-6cjsq Pulling image "registry/app:latest"
5m20s Normal Pulled pod/app-686494b58b-6cjsq Successfully pulled image "registry/app:latest"
5m20s Normal Created pod/app-686494b58b-6cjsq Created container app
5m20s Normal Pulled pod/app-686494b58b-6cjsq Container image "redis:5.0.6-alpine" already present on machine
5m19s Normal Started pod/app-686494b58b-6cjsq Started container app
0s Warning Unhealthy pod/app-686494b58b-6cjsq Readiness probe failed: Get http://10.10.14.117:80/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>Oddly, I do see the readiness probe succeed when I request the health-check page (root page) manually. But be that as it may, the probe failures are not because the containers aren't running fine -- they are -- the problem lies somewhere else.</p>
| <p>Let's go over your probes so you can understand what is going on and perhaps find a way to fix it:</p>
<pre class="lang-yaml prettyprint-override"><code>
### Readiness probe - "waiting" for the container to be ready
### to get to work; while it fails, the pod receives no traffic.
###
### The liveness probe also runs once the container has started;
### it triggers restarts on failure, independently of readiness,
### so you might want to start with the readinessProbe first
livenessProbe:
### - Defines how many consecutive failures are allowed before
### the pod is restarted. Try increasing this number while you
### debug, then reduce it back to a lower value
failureThreshold: 3
httpGet:
path: /
port: 80
scheme: HTTP
###
### Delay before executing the first test
### As before - try to increase the delay and reduce it
### back when you figured out the correct value
###
initialDelaySeconds: 90
### How often (in seconds) to perform the test.
periodSeconds: 20
successThreshold: 1
### Number of seconds after which the probe times out.
### Since the value is 1 I assume that you did not change it.
### Same as before - increase the value and figure out
### what the correct value is
timeoutSeconds: 1
### Same comments as above + `initialDelaySeconds`
### Readiness is "waiting" for the container to be ready to
### get to work.
readinessProbe:
failureThreshold: 3
httpGet:
path: /
port: 80
scheme: HTTP
### Again, nothing new here - increase the value and then
### reduce it until you figure out the desired value
### for this probe
initialDelaySeconds: 90
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
</code></pre>
<p><a href="https://i.stack.imgur.com/snaeI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/snaeI.png" alt="enter image description here" /></a></p>
<hr />
<h3>View the logs/events</h3>
<ul>
<li>If you are not sure that the probes are the root cause, view the logs and the events to figure out what is actually failing</li>
</ul>
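<p>If the application simply needs a long warm-up, a <code>startupProbe</code> (stable since Kubernetes 1.20) is usually cleaner than inflating <code>initialDelaySeconds</code>: liveness and readiness checks are held off until it succeeds. A minimal sketch, with illustrative numbers:</p>

```yaml
startupProbe:
  httpGet:
    path: /
    port: 80
  # Allow up to 30 * 10s = 300s for the app to come up;
  # liveness/readiness only start once this probe succeeds.
  failureThreshold: 30
  periodSeconds: 10
```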
|
<p><strong>Can't replicate <code>kubectl apply -f</code> functionality with Python client</strong></p>
<p>I have created a deployment using <code>create_namespaced_deployment</code> (<a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/AppsV1Api.md#create_namespaced_deployment" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/AppsV1Api.md#create_namespaced_deployment</a>) in Python.</p>
<p>But when I tried to update the deployment manually using <code>kubectl apply -f</code>, I am getting warning message like below</p>
<pre><code>Warning: resource deployments/fronend-deployment is missing the
kubectl.kubernetes.io/last-applied-configuration annotation which is
required by kubectl apply.
kubectl apply should only be used on resources created declaratively
by either kubectl create --save-config or kubectl apply.
The missing annotation will be patched automatically
</code></pre>
<p>I want to get rid of this warning message. Is there another way?</p>
<p>I think the <code>create_namespaced_deployment</code> method internally uses <code>kubectl create -f</code>-style functionality. Is there any method that uses <code>kubectl apply -f</code>?</p>
<p>Let me know if you have any questions</p>
| <p>In order to supply a full answer, let's start by explaining the difference between<br />
<code>kubectl apply</code> and <code>kubectl create</code></p>
<hr />
<h3><code>kubectl apply</code> vs <code>kubectl create</code></h3>
<p>The key difference between <code>kubectl apply</code> and <code>kubectl create</code> is that apply creates Kubernetes objects through a <strong>declarative</strong> syntax,
while the <code>create</code> command is <strong>imperative</strong>.</p>
<hr />
<p><a href="https://i.stack.imgur.com/7IHhi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7IHhi.jpg" alt="enter image description here" /></a></p>
<hr />
<h3><code>apply</code></h3>
<ul>
<li>The <code>kubectl apply</code> is a cli command used to create or modify Kubernetes resources defined in a <strong>manifest file</strong>.</li>
<li>Manifest file is referred to as <strong><code>Declarative</code></strong>.</li>
<li>Using a <strong><code>Declarative</code></strong> files we "describe" what should be our <strong>final</strong> state of our cluster (regarding the resources which we define)</li>
<li>The state of the resource is <strong>declared in the manifest file</strong> and this is where the name is coming from, then <code>kubectl apply</code> is used to implement that state.</li>
</ul>
<h3><code>create</code></h3>
<ul>
<li><p>In contrast to <code>apply</code>, the create command <code>kubectl create</code> is used for <strong>creating</strong> a Kubernetes resource directly (for example, <code>kubectl create ns XXX</code>, which creates a namespace).</p>
</li>
<li><p>This is an <strong>Imperative</strong> usage.</p>
</li>
<li><p>You can also use <code>kubectl create</code> with a manifest file to create a <strong>new instance</strong> of the resource. However, if the resource already exists, <code>kubectl create</code> fails with an <code>AlreadyExists</code> error, whereas <code>kubectl apply</code> simply updates the existing resource.</p>
</li>
</ul>
<hr />
<p>Hopefully, now you can understand the warning you keep getting.</p>
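<p>For the Python client specifically: the warning comes from the missing <code>kubectl.kubernetes.io/last-applied-configuration</code> annotation, which <code>kubectl apply</code> relies on for its three-way merge. One workaround is to set that annotation yourself before calling <code>create_namespaced_deployment</code>, mimicking <code>kubectl create --save-config</code>. A sketch (the helper name <code>with_last_applied</code> is my own, not part of the client API):</p>

```python
import json

LAST_APPLIED = "kubectl.kubernetes.io/last-applied-configuration"

def with_last_applied(manifest: dict) -> dict:
    """Return a deep copy of `manifest` carrying the annotation that
    `kubectl apply` expects (mimicking `kubectl create --save-config`)."""
    out = json.loads(json.dumps(manifest))  # cheap deep copy
    annotations = out.setdefault("metadata", {}).setdefault("annotations", {})
    # kubectl stores the configuration as applied, without the annotation
    # itself, so the original (un-annotated) manifest is serialized here.
    annotations[LAST_APPLIED] = json.dumps(manifest, separators=(",", ":")) + "\n"
    return out

# Usage sketch (needs a configured cluster, so commented out):
# apps_v1.create_namespaced_deployment(
#     namespace="default", body=with_last_applied(deployment_manifest))
```

<p>With the annotation in place, a later <code>kubectl apply -f</code> against the same object should no longer print the warning.</p>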
|
<p>Up to now, Helm is the only package manager that I know for K8s. It can help deploy and manage k8s application dependencies seamlessly. Why are so many K8s applications still not adopting or prioritizing it?</p>
<p>I have used quite a few popular ones like Argo, Istio, etc. They seem to promote <code>$ kubectl apply -f &lt;http path to YAML config&gt;</code> as the default installation method. Some do support the Helm installation method, but it is pushed further down and in many cases lacks docs or is outdated. A few don't even support Helm charts.</p>
<p>How do people actually configure, deploy and manage big K8s applications with layers upon layers of dependencies, as is common on other platforms? Do they actually manage every dependency manually? If that is the case, the technical debt will be huge, as a K8s admin needs to understand every detail of the config logic for every dependency in their application.</p>
| <p>There are several aspects to that question.</p>
<h1>Cluster Security</h1>
<p>Historically, the first one was security.</p>
<p>Prior to Helm 3, Helm relied on a Tiller container running in your cluster: some kind of cluster admin able to create resources on your behalf.</p>
<p>Back then, one of the main argument against Helm was Tiller, and its being a privileged target for any attacker trying to escalate its privileges.</p>
<p>As of Helm 3: Tiller is gone. This argument is no longer valid - though beware Helm 2 is not dead, I still encounter Tillers pods from time to time.</p>
<h1>Compatibility</h1>
<p>Helm Charts often won't work out of the box, depending on your cluster, and usually require some patching.</p>
<p>A common issue would be charts trying to start privileged containers without any PodSecurityPolicy or SecurityContextConstraints configuration, resulting in containers not starting. Or more generally missing Pods or containers SecurityContexts, no way to customize them through existing Values, LoadBalancer Services when no cloud integration configured nor MetalLB available, ...</p>
<p>From experience, Charts tend to work great on Kubernetes with no RBAC, no security, ... Things could get "complicated", depending on which options your cluster enforces.</p>
<h1>Quality</h1>
<p>Which leads us to another common issue: quality. Which has two facets: the quality of the Kubernetes configuration provided, how relevant it is, how does it fit the application you are packaging. And the quality of the container images you are relying on, packaging that application.</p>
<p>Best case, your Chart comes from the editor of the software you are trying to install. Usually, the container image is relatively well built.</p>
<p>But the editor may not have a lot of Kubernetes expertise: they could be missing resource allocations, affinities/anti-affinities/topology spread constraints, there could be some deployment strategy or rolling params that don't make sense, very often trying to bind on privileged ports when this could be avoided, ... Sometimes they don't have much time to work on those topics, focusing on their product itself.</p>
<p>Historically, the first Charts I could try used to be written by some GitHub user, with no affiliation with the editor, and variable understanding of what they were doing in both writing Kubernetes configurations and Dockerfiles.</p>
<p>Regardless of editors involvement, I won't blame anyone in particular, but there are a wide variety of container images and Helm Charts, binding on privileged ports and running as root, with no reason other than "that's how it works when you apt-get install". Chmod 777s. Stuff that may not work. Stuff that would, but I would question the security implications. Things that could be simplified, ...</p>
<h1>Support</h1>
<p>Last point to take under consideration, evaluating a Chart, would be how well maintained are its container images and Kubernetes configurations.</p>
<p>Can you find sources on a public CVS? How active is it, any outstanding bugs, are those fixed in a timely manner? Would you be able to contribute yourself, maybe, should anything bad show up on your setup?</p>
<h1>Conclusion</h1>
<p>Based on those aspects, and the requirements you have on your cluster (is this a test, a dev that should host CI, a production for a PCI-DSS bank, ...) and the time you can spend setting it up: you may trust a given Helm Chart, or prefer building your own solution.</p>
<p>Although Helm architecture used to be the main blocker, prior Helm 3, nowadays the rational not to adopt a Chart usually relates to its quality.</p>
<p>My first grudge against Helm is the variable - and sometimes, very poor - quality of images and configurations. Which is not to say there aren't good or great Charts. But that's for you to evaluate, on case by case, in relation to your context.</p>
<p>As pointed in comments: Helm isn't always the definitive answer. You could use Kustomize. Kustomize can in turn install Helm Charts. Or not. You could apply plain-text files. You could consider ArgoCD applying configurations out of a Git repository. OpenShift has "Templates" (though it does not allow for lots of templating: in that regard, Helm is better)</p>
<p>If we are to consider alternatives, it depends on what you want to do. Considering that most of the time, I would deploy several objects, with relations to each other, ... the "best" way to do this, arguably, is an operator. You could write yours in Go, Ansible (operator-sdk), Java, Python (kopf), ... there are tons of libraries available. While Helm/Templates/Kustomize would just apply a bunch of configurations,
operators are state machines (wait for X to be up before creating Y).</p>
<hr />
<p>And answering your last question: how do people actually configure, deploy and manage big K8s applications with layers upon layers of dependencies?</p>
<p>Depends. There's no definitive answer to this.</p>
<p>I could tell you the customer I'm working with nowadays uses Terraform, with a custom provider that applies Ansible Playbooks, generating Kustomization files from some templates, in turn applying plaintext files or sometimes calling Helm Charts hosted on Artifactories. Those would sometime deploy Operators (written in java/go/python), which would in turn create their own objects, ...</p>
<p>You can do this however you want. As long as it's OK with your security department if you have one, usable by your devs, manageable by your ops, ...</p>
|
<p>I find some production use cases of K8s in public clouds that put a LoadBalancer-type Service in front of the Nginx Ingress. (You can find an example in the YAML below.)</p>
<p>As I understand it, an Ingress can be used to expose an internal service to the public, so what's the point of putting a load balancer in front of the ingress? Can I delete that Service?</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
annotations:
labels:
helm.sh/chart: ingress-nginx-3.27.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.45.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
kubernetes.io/elb.class: union
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
loadBalancerIP: xxx.xxx.xxx.xxx
externalTrafficPolicy: Cluster
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
</code></pre>
| <pre><code>...so what's the point to put a loadbalancer in front of the ingress?
</code></pre>
<p>This way you can take advantage of the cloud provider's LB facilities (e.g. multi-AZ support); then, with Ingress, you can further control routing using path-based or name-based virtual hosting for services in the cluster.</p>
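<p>For illustration, a minimal Ingress (service names are made up) that routes by path to two in-cluster Services behind the single load-balanced entry point:</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api          # /api/* goes to the API backend
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /             # everything else goes to the web frontend
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```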
<pre><code>Can I delete that service?
</code></pre>
<p>No. An Ingress only defines routing rules: it doesn't do port mapping or pod selection, and you can't resolve an Ingress name with DNS. Without that Service, external traffic has no entry point to reach the ingress controller.</p>
|
<p>I am using EKS with a node group for K8s.<br />
The desired node count is 1 and the maximum is 3.</p>
<p>Currently, only one EC2 instance (t3.xlarge) is running.<br />
When I deploy two pods in one namespace, each requesting 8G memory and 2 CPU, the first one is deployed successfully but the second one fails with the error <code>1 Insufficient cpu, 1 Insufficient memory</code>.</p>
<p>I expect EKS to scale up one more node to accommodate this new pod, but it doesn't.<br />
Does anyone know why the auto-scaling doesn't happen?
What am I understanding wrong?</p>
| <p>With Amazon EKS, nodes do not scale up on their own: you need to deploy the Cluster Autoscaler and meet the prerequisites below before it will work.</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html</a></p>
<h3>From the Official Docs above:</h3>
<p><strong>I have highlighted what might be your problem</strong></p>
<h3>Prerequisites</h3>
<ul>
<li><p>Before deploying the Cluster Autoscaler, you must meet the following prerequisites:</p>
</li>
<li><p>Have an existing Amazon EKS cluster – If you don’t have a cluster, see Creating an Amazon EKS cluster.</p>
</li>
<li><p>An existing IAM OIDC provider for your cluster. To determine whether you have one or need to create one, see Create an IAM OIDC provider for your cluster.</p>
</li>
<li><p><strong>Node groups with Auto Scaling groups tags – The Cluster Autoscaler requires the following tags on your Auto Scaling groups so that they can be auto-discovered.</strong></p>
</li>
<li><p>If you used eksctl to create your node groups, these tags are automatically applied.</p>
</li>
<li><p><strong>If you didn't use eksctl, you must manually tag your Auto Scaling groups with the following tags.</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>k8s.io/cluster-autoscaler/<cluster-name></td>
<td>owned</td>
</tr>
<tr>
<td>k8s.io/cluster-autoscaler/enabled</td>
<td>TRUE</td>
</tr>
</tbody>
</table>
</div></li>
</ul>
|
<p>Do containers in a Kubernetes Pod run in different "mount" namespaces? From what I read in a few online resources, when a container is launched each container points to an isolated file system, and the only thing that can be used to share file directories across containers is Kubernetes volumes. Hence I just wanted to understand: if they refer to different file systems and by default cannot refer to each other's file systems, are they running in different mount namespaces?</p>
| <p>That's right: each container runs in its own mount namespace with its own root filesystem, so containers can share files with each other only by using volumes.</p>
<p>Think of volume mounting as much the same as mounting a network partition: the same volume is mounted separately into each container's namespace, so there are no conflicts with the OS or between namespaces (we are talking about persistent volumes).</p>
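<p>For example, a minimal Pod (names are illustrative) where two containers keep separate root filesystems but share one directory through an <code>emptyDir</code> volume:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-files
spec:
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data   # same volume, mounted in this container's own namespace
```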
|
<p>I have several containers in my pod which are executing specific tasks. Two of the containers act as daemons and if they fail they should be restarted. On the other hand, the 3 other containers are meant to execute specific tasks only once.</p>
<p>My intuitive approach was to make them exit explicitly with <code>exit 0</code>, but this doesn't seem to prevent them from restarting and doing the job again and again. How should I proceed?</p>
| <p><strong>solution #1</strong></p>
<p>Assuming you are using some sort of package manager for k8s deployment. You can use similar feature like helm hooks <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">https://helm.sh/docs/topics/charts_hooks/</a> to execute specific tasks at specific lifecycle of pods.</p>
<p><strong>solution #2</strong></p>
<p>You can configure the Pod's <strong>restartPolicy</strong> to <strong>Never</strong>, as explained here:
<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy</a>. Note that <code>restartPolicy</code> applies to all containers in the Pod, so this fits best when the one-shot tasks run in their own Pod (for example as a Job), separate from the daemons.</p>
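<p>Since <code>restartPolicy</code> is set per Pod, a common pattern is to run each one-shot task as a Job. A sketch with illustrative names:</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-shot-task
spec:
  backoffLimit: 0            # don't retry on failure either
  template:
    spec:
      restartPolicy: Never   # don't restart the container once it exits
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo running the one-off task"]
```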
<p><strong>solution #3</strong></p>
<p>Depending on your requirements, you can also use a Pod's lifecycle handlers, as explained here, to execute tasks when a container starts or stops:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/</a></p>
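<p>Handlers are declared per container; a minimal sketch, close to the example in that page:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from postStart > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
```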
|
<p>Checking the latest image used in the metrics-server <a href="https://github.com/kubernetes-sigs/metrics-server/releases/tag/v0.5.0" rel="nofollow noreferrer">Github repo</a>, the tag used is <strong>v0.5.0</strong>, for arm64 I would usually add <em>arm64</em> to the image name and pull it.</p>
<p>But the image doesn't exist and doing an inspect to the base image shows that its arch is <em>amd64</em>.</p>
<p>In <a href="https://console.cloud.google.com/gcr/images/google-containers/global/metrics-server" rel="nofollow noreferrer">google's registry</a> the latest image is <code>v0.3.6</code>. So I'm not sure if support for arm64 has continued or staled.</p>
| <p>There's no need to append <em>arm64</em> starting with v0.3.7; the image is a multi-architecture image. See the official FAQ <a href="https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-to-run-metric-server-on-different-architecture" rel="nofollow noreferrer">here</a> with the complete image URL.</p>
|
<p>Why is there a restriction on expanding and reducing the storage config size in the Cassandra operator (DataStax)?</p>
<p><a href="https://github.com/datastax/cass-operator/issues/390" rel="nofollow noreferrer">https://github.com/datastax/cass-operator/issues/390</a></p>
<p>Why is there a validation/restriction in the StatefulSet on expanding the storage config?</p>
| <p>The exact same question was asked on <a href="https://community.datastax.com/questions/12269/" rel="nofollow noreferrer">https://community.datastax.com/questions/12269/</a> so I'm reposting my answer here.</p>
<p>Jim Dickinson already answered this question in <a href="https://github.com/datastax/cass-operator/issues/390" rel="nofollow noreferrer">issue #390</a> you referenced.</p>
<p>Attempts to change <code>StorageConfig</code> are blocked because the underlying <code>StatefulSet</code> also doesn't allow changes to its <code>PersistentVolumeClaim</code> template.</p>
<p>If a pod has run out of disk space, it is possible to workaround it by resizing the underlying volume provided it is supported in your environment. For example, public cloud providers allow volumes to be resized in certain configurations. You will need to consult the documentation of the relevant cloud provider for details.</p>
<p>Be aware that this workaround can be risky since it means that the underlying volume will be out of sync with the <code>CassandraDatacenter</code>, <code>StatefulSet</code> and/or <code>PersistentVolumeClaim</code> definitions. When you scale up the DC at a later date, the volume will be the originally defined size.</p>
<p>Note that we aim to document a workaround for the cass-operator in <a href="https://dtsx.io/3znMa05" rel="nofollow noreferrer">K8ssandra.io</a>. Cheers!</p>
|
<p>I am trying to learn how to use Kubernetes and tried following the guide <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">here</a> to create a local Kubernetes cluster with Docker driver.</p>
<p>However, I'm stuck at step 3: Interact with your cluster. I tried to run <code>minikube dashboard</code> and I keep getting this error:</p>
<pre><code>Unknown error (404)
the server could not find the requested resource (get ingresses.extensions)
</code></pre>
<p>My Kubernetes version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:39:34Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Can someone point out where the problem lies please?</p>
| <p>I installed minikube following the same guide, both locally and on a cloud provider, and it works for me :)
If you are just starting your adventure with Kubernetes, try installing everything from scratch.</p>
<p>The error you are getting is related to the different versions of your client and server. After installing according to the same guide, my versions look like this:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:39:34Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>You have different versions: client <code>v1.21.2</code> and server <code>v1.22.1</code>. Additionally, your <code>Platform</code> fields are not the same. To solve this, upgrade your client version or downgrade the server version so they match. For more, see <a href="https://stackoverflow.com/questions/51180147/determine-what-resource-was-not-found-from-error-from-server-notfound-the-se">this question</a>.</p>
|
<p>When I am accessing a Istio gateway <code>NodePort</code> from the Nginx server using <code>curl</code>, I am getting response properly, like below:</p>
<pre><code>curl -v "http://52.66.195.124:30408/status/200"
* Trying 52.66.195.124:30408...
* Connected to 52.66.195.124 (52.66.195.124) port 30408 (#0)
> GET /status/200 HTTP/1.1
> Host: 52.66.195.124:30408
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Sat, 18 Sep 2021 04:33:35 GMT
< content-type: text/html; charset=utf-8
< access-control-allow-origin: *
< access-control-allow-credentials: true
< content-length: 0
< x-envoy-upstream-service-time: 2
<
* Connection #0 to host 52.66.195.124 left intact
</code></pre>
<p>The same when I am configuring through Nginx proxy like below, I am getting <code>HTTP ERROR 426</code> through the domain.</p>
<p>Note: my domain is HTTPS - <a href="https://dashboard.example.com" rel="nofollow noreferrer">https://dashboard.example.com</a></p>
<pre><code>server {
server_name dashboard.example.com;
location / {
proxy_pass http://52.66.195.124:30408;
}
}
</code></pre>
<p>Can anyone help me to understand the issue?</p>
| <p>HTTP 426 error means <a href="https://httpstatuses.com/426" rel="nofollow noreferrer">upgrade required</a>:</p>
<blockquote>
<p>The server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol.</p>
</blockquote>
<p>or <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/426" rel="nofollow noreferrer">another info</a>:</p>
<blockquote>
<p>The HTTP <strong><code>426 Upgrade Required</code></strong> client error response code indicates that the server refuses to perform the request using the current protocol but might be willing to do so after the client upgrades to a different protocol.</p>
</blockquote>
<p>In your situation, you need to check which version of the HTTP protocol you are using; it seems to be too low. Look at <a href="https://github.com/envoyproxy/envoy/issues/2506" rel="nofollow noreferrer">this thread</a>: in that case, the fix was to upgrade from <code>1.0</code> to <code>1.1</code>.</p>
<p>You need to upgrade the HTTP protocol version in your NGINX config, as described here:</p>
<blockquote>
<p>This route is for a legacy API, which enabled NGINX cache for performance reason, but in this route's proxy config, it missed a shared config <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version" rel="nofollow noreferrer"><code>proxy_http_version 1.1</code></a>, which default to use HTTP 1.0 for all NGINX upstream.</p>
<p>And Envoy will return <code>HTTP 426</code> if the request is <code>HTTP 1.0</code>.</p>
</blockquote>
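<p>Applied to the proxy configuration from the question, the fix could look roughly like this (the upstream address is the one from the question; resetting the <code>Connection</code> header is an optional extra to enable upstream keep-alive):</p>

```nginx
server {
    server_name dashboard.example.com;

    location / {
        proxy_http_version 1.1;          # Envoy answers HTTP/1.0 requests with 426
        proxy_set_header Connection "";  # clear the header so keep-alive works upstream
        proxy_pass http://52.66.195.124:30408;
    }
}
```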
|
<p>I have a network with 3 ordering nodes running on a Kubernetes cluster. I am using NFS for persistent storage. The Kubernetes cluster is running on bare metal machines. I onboarded a new organization into the network and added that organization into the consortium. As soon as I executed the command <code>peer channel create</code>, the orderer started throwing the below mentioned error:</p>
<pre><code>2021-01-31 11:09:04.412 UTC [orderer.consensus.etcdraft] createOrReadWAL -> INFO 089 No WAL data found, creating new WAL at path '/var/hyperledger/production/orderer/etcdraft/wal/mvp1x-channel' channel=mvp1x-channel node=1
2021-01-31 11:09:04.549 UTC [orderer.consensus.etcdraft] createOrReadWAL -> WARN 08a failed to create a temporary WAL directory channel=mvp1x-channel node=1 tmp-dir-path=/var/hyperledger/production/orderer/etcdraft/wal/mvp1x-channel.tmp dir-path=/var/hyperledger/production/orderer/etcdraft/wal/mvp1x-channel error="expected \"/var/hyperledger/production/orderer/etcdraft/wal/mvp1x-channel.tmp\" to be empty, got [\".nfs000000000982c7c700005443\"]"
2021-01-31 11:09:04.549 UTC [orderer.commmon.multichannel] newChainSupport -> PANI 08b [channel: mvp1x-channel] Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to initialize WAL:expected "/var/hyperledger/production/orderer/etcdraft/wal/mvp1x-channel.tmp" to be empty, got [".nfs000000000982c7c700005443"]
panic: [channel: mvp1x-channel] Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to initialize WAL:expected "/var/hyperledger/production/orderer/etcdraft/wal/mvp1x-channel.tmp" to be empty, got [".nfs000000000982c7c700005443"]
</code></pre>
<p>After the automatic restart, it started throwing an error:</p>
<pre><code>2021-02-01 04:55:31.392 UTC [orderer.commmon.multichannel] newChain -> PANI 1519 Error creating chain support: error creating consenter forchannel: mvp1x-channel: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked
panic: Error creating chain support: error creating consenter for channel: mvp1x-channel: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked
goroutine 74 [running]:
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc00014fe40, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:230 +0x545
go.uber.org/zap.(*SugaredLogger).log(0xc0000101b0, 0xc000557004, 0x101b08c, 0x20, 0xc001ca7588, 0x1, 0x1, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0x100
go.uber.org/zap.(*SugaredLogger).Panicf(...)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(...)
/go/src/github.com/hyperledger/fabric/common/flogging/zap.go:74
github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).newChain(0xc000146a00, 0xc000452b40)
/go/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:369 +0x251
github.com/hyperledger/fabric/orderer/common/multichannel.(*BlockWriter).WriteConfigBlock(0xc00056f540, 0xc00021cb40, 0xc000573ca0, 0x9, 0x9)
/go/src/github.com/hyperledger/fabric/orderer/common/multichannel/blockwriter.go:118 +0x357
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).writeConfigBlock(0xc001d7a000, 0xc00021cb40, 0x12)
/go/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:1226 +0x28f
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).writeBlock(0xc001d7a000, 0xc00021cb40, 0x12)
/go/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:805 +0x1a9
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).apply(0xc001d7a000, 0xc00059db10, 0x1, 0x1)
/go/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:990 +0x24a
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).run(0xc001d7a000)
/go/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:713 +0x95c
created by github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).Start
/go/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:360 +0x230
</code></pre>
<p>I believe the first error is thrown because the NFS client creates a file with pattern .nfsxxxxxxx. But I don't understand the second error and I am looking for a way to resolve it.</p>
<p>I am also looking for a better storage alternative to the NFS server for a bare-metal cluster.</p>
<p>Any suggestion/feedback would be appreciated</p>
<p>Update: The other 2 orderers are running fine.</p>
| <p>This bug is fixed in <a href="https://github.com/hyperledger/fabric/pull/2934" rel="nofollow noreferrer">https://github.com/hyperledger/fabric/pull/2934</a>; see the related issue <a href="https://github.com/hyperledger/fabric/issues/2931" rel="nofollow noreferrer">https://github.com/hyperledger/fabric/issues/2931</a> for details.</p>
|
<p>I have an application using Azure Kubernetes. Everything was working fine and the API gave me 200 response all the time, but last week I started receiving 500 internal server errors from the API management, and it indicated that its a backend error. I ran the server locally and sent requests to the API and it worked, so I figured the problem happens somewhere in Azure Kubernetes.</p>
<p>However the logs were super cryptic and didn't add that much info so I never really found out what was the problem. I just ran my code to deploy the image again and it got fixed but there was no way to realize that's the problem.</p>
<p>This time I managed to fix the problem but I'm looking for a better way to troubleshoot 500 internal server error in Azure. I have looked all through the Azure documentation but haven't found anything other than the logs, which weren't really helpful in my case. How do you usually go about troubleshooting 500 errors in applications running in Kubernetes?</p>
| <p>In general, it all depends on the specific situation you are dealing with. Nevertheless, you should always start by looking at the logs (application event logs and server logs) and searching them for information about the error. Error 500 is the effect, not the cause: to find out what caused it, you need to dig into the logs. Often, you can tell what went wrong and fix the problem right away.</p>
<p>If you want to reproduce the problem, check the comment of <a href="https://stackoverflow.com/users/10008173/david-maze" title="75,151 reputation">David Maze</a>:</p>
<blockquote>
<p>I generally try to figure out what triggers the error, reproduce it in a local environment (not Kubernetes, not Docker, no containers at all), debug, write a regression test, fix the bug, get a code review, redeploy. That process isn't especially unique to Kubernetes; it's the same way I'd debug an error in a customer environment where I don't have direct access to the remote systems, or in a production environment where I don't want to risk breaking things further.</p>
</blockquote>
<p>See also:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/6324463/how-to-debug-azure-500-internal-server-error">similar question</a></li>
<li><a href="https://support.cloudways.com/en/articles/5121238-how-to-resolve-500-internal-server-error" rel="nofollow noreferrer">how to solve 500 error</a></li>
</ul>
|
<p>I noticed that all the documentations on deploying operators in kubernetes always use a simple <code>kubectl create -f /point/to/some/big/blob/deploying/the/operator.yaml</code> as the "newer" alternative to a plain helm chart. That made me wonder why operator deployments are not usually managed by helm so we can depend upon it in our charts.</p>
<p>For example if I want a Vitess database cluster as part of my helm-managed application deployment to my understanding I'm out of luck because their helm chart is deprecated and they recommend using the operator. However there's no way for me to ensure the operator is present in a cluster by declaring it as a dependency in my Chart.yaml.</p>
<p>Is there any reason we can't deploy operators using helm charts? The only one I could think of is that CRDs are not namespaced so we can't have multiple versions of an operator running in the same cluster and hence we would break stuff if we try to roll out two applications that demand different versions of an operator.</p>
<p>What are my possibilities to solve the issue of not being able to depend upon another piece of software in my deployment?</p>
| <p>A problem with helm dependencies is that they cannot be shared between multiple projects. So operator-as-helm-dependency would break as soon as multiple helm charts depended on the same operator: helm would try to deploy multiple instances of that operator, which is probably not what you want.</p>
<p>That being said, multiple operators (e.g. Strimzi) do offer installation as a helm chart.</p>
|
<p>I've begun testing Google Cloud's Kubernetes Engine new Autopilot feature, and I'm having a fair number of issues with the autoscaling backend.</p>
<p>I have a fairly standard cluster configuration with basically all of the defaults selected. I'm attempting to deploy a number of microservices backed by a single nginx ingress to manage traffic.</p>
<p>What I'm seeing is that the compute resources are simply not scaling up, and I'm not seeing anything in the configuration that would suggest why this is.</p>
<p>If I review the cluster logs, I see results that explain why the scaling isn't happening, but I have no explanation to explain the problem it's reporting or how to fix them.</p>
<p>For example:</p>
<pre><code>{
"noDecisionStatus": {
"measureTime": "1623700519",
"noScaleUp": {
"unhandledPodGroupsTotalCount": 1,
"unhandledPodGroups": [
{
"podGroup": {
"totalPodCount": 1,
"samplePod": {
"name": "proxy-9d779889d-zt8pv",
"namespace": "default",
"controller": {
"kind": "ReplicaSet",
"apiVersion": "apps/v1",
"name": "proxy-9d779889d"
}
}
},
"napFailureReasons": [
{
"messageId": "no.scale.up.nap.pod.zonal.resources.exceeded",
"parameters": [
"northamerica-northeast1-c"
]
}
]
}
],
...
</code></pre>
<blockquote>
<p>no.scale.up.nap.pod.zonal.resources.exceeded</p>
</blockquote>
<p>This seems to suggest I'm not allowed to scale this single-zone cluster beyond where it currently is. But I can't see any documentation explaining this limitation. Additionally, the resources I'm currently using seem way too low to be hitting a quota.</p>
<p>The cluster currently reports a total <strong>CPU provision of 3.25, and memory of 12 GB</strong>.</p>
<p>That's using 6 deployments of 1 pod each. I can't imagine that being the limit of a single availability zone in GKE.</p>
<p>The logs are consistently updating, which leads me to believe GKE is <em>trying</em> to scale, but it keeps being declined. I need to know why and how I can fix that.</p>
<p>Kubernetes version: 1.19.9-gke.1900</p>
| <p>Using the default configuration gives you a public cluster. A public cluster assigns an external IP to each node, which can cause your cluster to hit the external IP address quota.</p>
<p>What you want is to configure the cluster as a private cluster. Private does not mean it cannot be accessed from the public internet at all; it just means nodes and pods are isolated from the internet by default.</p>
|
<p>I'm having an application which stores data in a cloud instance of mongoDB. So If I explain further on requirement, I'm currently having data organized at collection level like below.</p>
<pre><code>collection_1 : [{doc_1}, {doc_2}, ... , {doc_n}]
collection_2 : [{doc_1}, {doc_2}, ... , {doc_n}]
...
...
collection_n : [{doc_1}, {doc_2}, ... , {doc_n}]
</code></pre>
<p>Note: my collection name is a unique ID to represent collection and in this explanation I'm using <code>collection_1, collection_2 ...</code> to represent that ids.</p>
<p>So I want to change this data model to a single collection model as below. The collection ID will be embedded into document to uniquely identify the data.</p>
<pre><code>global_collection: [{doc_x, collection_id : collection_1}, {doc_y, collection_id : collection_1}, ...]
</code></pre>
<p>I'm having the data access layer(data insert, delete, update and create operations) for this application written using Java backend.</p>
<p>Additionally, the entire application is deployed on k8s cluster.</p>
<p>My requirement is to do this migration (data access layer change and existing data migration) with a zero downtime and without impacting any operation in the application. Assume that my application is a heavily used application which has a high concurrent traffic.</p>
<p>What is the proper way to handle this, experts please provide me the guidance..??</p>
<p>For example, if I consider the backend (data access layer) change, I may use a temporary code in java to support both the models and do the migration using an external client. If so, what is the proper way to do the implementation change, is there any specific design patterns for this??</p>
<p>Likewise a complete explanation for this is highly appreciated...</p>
| <p>I think you have honestly already hinted at the simplest answer.</p>
<p>First, update your data access layer to handle both the new and old schema: Inserts and updates should update both the new and old in order to keep things in sync. Queries should only look at the old schema as it's the source of record at this point.</p>
<p>Then copy all data from the old to the new schema.</p>
<p>Then update the data access to query the new data. This will keep the old data updated, but will allow full testing of the new data before making any changes that would result in the two sets of data being out of sync. It will also help facilitate rolling updates (i.e. applications with both new and old data access code will still function at the same time).</p>
<p>Finally, update the data access layer to only access the new schema and then delete the old data.</p>
<p>Except for this final stage, you can always roll back to the previous version should you encounter problems.</p>
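<p>The dual-write phase can be sketched as follows (in Go rather than Java, with an in-memory stand-in for MongoDB; all type and function names here are illustrative, not from the actual application):</p>

```go
package main

import "fmt"

// Doc is a stand-in for a MongoDB document.
type Doc map[string]interface{}

// dualWriteStore writes to both the old per-collection model and the
// new single-collection model, but reads only from the old one
// (phase 1 of the migration, where the old schema is the source of record).
type dualWriteStore struct {
	old map[string][]Doc // old model: one document slice per collection
	new []Doc            // new model: single collection, tagged with collection_id
}

func newDualWriteStore() *dualWriteStore {
	return &dualWriteStore{old: make(map[string][]Doc)}
}

// Insert writes the document to both schemas so they stay in sync.
func (s *dualWriteStore) Insert(collection string, d Doc) {
	s.old[collection] = append(s.old[collection], d)

	tagged := Doc{"collection_id": collection} // embed the collection ID
	for k, v := range d {
		tagged[k] = v
	}
	s.new = append(s.new, tagged)
}

// Find reads from the old schema only, which remains authoritative
// until the backfill of existing data has completed.
func (s *dualWriteStore) Find(collection string) []Doc {
	return s.old[collection]
}

func main() {
	s := newDualWriteStore()
	s.Insert("collection_1", Doc{"name": "doc_x"})
	fmt.Println(len(s.Find("collection_1")), len(s.new))
}
```

<p>The same shape applies with the real MongoDB driver: the insert path writes both schemas in one logical operation (ideally with retry or reconciliation for partial failures), while reads stay pinned to the old schema until the backfill completes and you flip the read path over.</p>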
|
<p>Upon submitting a few jobs (say, 50) targeted at a single node, I am getting pod status "OutOfpods" for a few of them. I have reduced the maximum number of pods on this worker node to "10", but I still observe the issue above.
Kubelet configuration is default with no changes.</p>
<p>kubernetes version: v1.22.1</p>
<ul>
<li>Worker Node</li>
</ul>
<p>OS: CentOS 7.9
Memory: 528 GB
CPU: 40 cores</p>
<p>kubectl describe pod :</p>
<blockquote>
<p>Warning OutOfpods 72s kubelet Node didn't have enough resource: pods, requested: 1, used: 10, capacity: 10</p>
</blockquote>
| <p>I have realized this to be a known issue for kubelet v1.22 as confirmed <a href="https://github.com/kubernetes/kubernetes/issues/104560" rel="nofollow noreferrer">here</a>. The fix will be reflected in the next latest release.</p>
<p>Simple resolution here is to downgrade kubernetes to v1.21.</p>
|
<p>There are a lot of fields out there in the library, but they basically only tell whether a job has finished or not. How do I check whether a job finished successfully or finished with a failure?</p>
<pre><code>if con.Type == v1.JobComplete && con.Status == corev1.ConditionTrue && job.Status.Succeeded > 0 {
fmt.Printf("Job: %v Completed Successfully: %v\n", name, con)
break
} else if con.Type == v1.JobFailed && con.Status == corev1.ConditionTrue {
if job.Status.Active == 0 && job.Status.Succeeded == 0 {
fmt.Printf("Job: %v Failed: %v\n", name, con)
break
}
}
</code></pre>
<p>This is how I am checking now; I am not completely sure this is correct.</p>
| <p>Similar to Bharath's answer, except this also handles the race condition where the function is called after the job has been created but before the job's pod becomes active.</p>
<pre><code>func getJobStatus(jobName string) error {
// k8sClient := initialize k8s client
job, err := k8sClient.BatchV1().Jobs(h.namespace).Get(jobName, metav1.GetOptions{})
if err != nil {
return err
}
if job.Status.Active == 0 && job.Status.Succeeded == 0 && job.Status.Failed == 0 {
return fmt.Errorf("%s hasn't started yet", job.Name)
}
if job.Status.Active > 0 {
return fmt.Errorf("%s is still running", job.Name)
}
if job.Status.Succeeded > 0 {
return nil // Job ran successfully
}
return fmt.Errorf("%s has failed with error", job.Name)
}
</code></pre>
|
<p>I am trying to convert an Istio service mesh running on k8s from <code>http</code> to <code>https</code> but stumbled upon many problems. I don't really understand what are all the steps required to do that.</p>
<p>As I know, there are 2 kinds of traffic that requires TLS in a mesh:</p>
<ul>
<li><p><strong>between internal services</strong>: skimming through the Istio docs tells me that Istio will somehow automatically configure mTLS between services, so all of them communicate securely without any extra configuration. However, I still don't understand deeply how this mTLS is implemented. How does it differ from normal TLS, and what is mTLS's role in the other kind of traffic (client outside to service inside)?</p>
</li>
<li><p><strong>from client outside to a service inside</strong>: this is where I don't know what to do. I know that in order for a service to have TLS it needs TLS certificate by a trusted CA. However, as the outer client will not talk directly to the service inside but only through the Istio ingress gateway. Do I need to provide cert for every service or only the ingress gateway? All of my services are now exposing port 80 for <code>HTTP</code>. Do I need to convert all of them to port 443 and <code>HTTPS</code> or just the ingress gateway is enough?</p>
</li>
</ul>
<p>Regarding the certificates, if I just use self-signing certs for now, can I just create cert and key with openssl and create secrets from it (maybe sync between namespaces with <code>kubed</code>), then all services use the same cert and key? Everywhere suggests me to use <code>cert-manager</code>. However, I don't know if it is worth the effort?</p>
<p>I would be really thankful if anyone can explain with some illustrations.</p>
| <p>In general, if you need a good explanation of the issues related to Istio (also with pictures), I recommend that you check the documentation. You can find around <a href="https://istio.io/latest/search/?q=tls&site=docs" rel="nofollow noreferrer">540 topics</a> related to TLS in Istio.</p>
<p>Istio is a very well documented service. Here you can find more information about <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/tls-configuration/" rel="nofollow noreferrer">Understanding TLS Configuration</a>. You can also find good article about <a href="https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/" rel="nofollow noreferrer">Mutual TLS Migration</a>.</p>
<blockquote>
<p>However I still don't understand deeply how they implement this mTLS, how does it differ from normal TLS and what is mTLS role in the other kind of traffic (client outside to service inside).</p>
</blockquote>
<p>Mutual TLS, or mTLS for short, is a method for <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-authentication/" rel="nofollow noreferrer">mutual authentication</a>. mTLS ensures that the parties at each end of a network connection are who they claim to be by verifying that they both have the correct private <a href="https://www.cloudflare.com/learning/ssl/what-is-a-cryptographic-key/" rel="nofollow noreferrer">key</a>. The information within their respective <a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate" rel="nofollow noreferrer">TLS certificates</a> provides additional verification. You can read more about it <a href="https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/" rel="nofollow noreferrer">here</a>. Additionally yo can also see page about <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-http/" rel="nofollow noreferrer">HTTP Traffic</a> (mTLS is required for this case).</p>
<blockquote>
<p>All of my services are now exposing port 80 for HTTP. Do I need to convert all of them to port 443 and HTTPS or just the ingress gateway is enough?</p>
</blockquote>
<p>It is possible to create <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/" rel="nofollow noreferrer">Ingress Gateway without TLS Termination</a>:</p>
<blockquote>
<p>The <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/" rel="nofollow noreferrer">Securing Gateways with HTTPS</a> task describes how to configure HTTPS ingress access to an HTTP service. This example describes how to configure HTTPS ingress access to an HTTPS service, i.e., configure an ingress gateway to perform SNI passthrough, instead of TLS termination on incoming requests.</p>
</blockquote>
<p><strong>EDIT (added more explanation and documentation links):</strong></p>
<blockquote>
<p><a href="https://istio.io/latest/about/service-mesh/" rel="nofollow noreferrer">Service mesh</a> uses a proxy to intercept all your network traffic, allowing a broad set of application-aware features based on configuration you set.</p>
</blockquote>
<blockquote>
<p>Istio securely provisions strong identities to every workload with X.509 certificates. Istio agents, running alongside each Envoy proxy, work together with istiod to automate key and certificate rotation at scale. The <a href="https://istio.io/latest/docs/concepts/security/#pki" rel="nofollow noreferrer">following diagram</a> shows the identity provisioning flow.</p>
</blockquote>
<blockquote>
<p>Peer <a href="https://istio.io/latest/docs/concepts/security/#authentication" rel="nofollow noreferrer">authentication</a>: used for service-to-service authentication to verify the client making the connection. Istio offers <a href="https://en.wikipedia.org/wiki/Mutual_authentication" rel="nofollow noreferrer">mutual TLS</a> as a full stack solution for transport authentication, which can be enabled without requiring service code changes.</p>
</blockquote>
<p>Peer authentication modes that are supported: <code>Permissive</code>, <code>Strict</code>, and <code>Disable</code>.</p>
<p>In order to answer this question:</p>
<blockquote>
<p>All of my services are now exposing port 80 for HTTP. Do I need to convert all of them to port 443 and HTTPS or just the ingress gateway is enough?</p>
</blockquote>
<p>more fully: an Istio Gateway can expose services from the Istio service mesh to the outside using plain HTTP, with TLS termination, or in PASSTHROUGH TLS mode. With TLS termination, you only need a certificate at the ingress gateway (issued by a trusted CA, or managed by cert-manager integrated with the Istio Gateway), while the services behind it can keep serving plain HTTP on port 80. You can read more about this topic <a href="https://istio.io/latest/docs/ops/integrations/certmanager/" rel="nofollow noreferrer">here</a>.</p>
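<p>As an illustration, a Gateway terminating TLS at the ingress (the gateway name, hostname, and secret name below are placeholders) could look like:</p>

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway    # use Istio's default ingress gateway deployment
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE                   # terminate TLS here
        credentialName: my-tls-secret  # k8s secret holding the cert and key
      hosts:
        - "dashboard.example.com"
```

<p>Behind this Gateway, the VirtualService routes to services that still listen on plain HTTP port 80, with Istio's mTLS securing the hop from the gateway to each sidecar.</p>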
|
<p>I am replacing a Kubernetes secret and I want to make sure I am catching all places in the cluster which use it.</p>
<p>Is there a way to tell without reading all deployment YAMLs using K8s or helm?</p>
<p>We have multiple services deployed on the same cluster and sharing secrets. Some using Helm, some don't.</p>
| <p>You can use secrets in several different ways; they are not always mounted as volumes. So the most convenient approach is to check the secret's namespace for all object kinds that could reference the secret in their specs.</p>
<p>For a manual check, here are two commands: the first searches for references to a given secret name among k8s objects, and the second helps to find which object contains the reference.</p>
<pre><code>kubectl get deployments,statefulsets,daemonsets,cronjobs,jobs,pods -n namespace-name -o yaml | grep secret_name
kubectl get deployments,statefulsets,daemonsets,cronjobs,jobs,pods -n namespace-name -o yaml | grep -i -e "^ name:" -e "^ kind" -e secret_name
</code></pre>
<p>Annotation noise can be filtered out with <code>grep -v -e annotation -e last-applied</code>, or probably even more simply with <code>grep -v "\"kind"</code>.</p>
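<p>If you want to automate the check, the same idea can be expressed in code. Below is a minimal illustrative sketch (not an official tool) that scans a pod spec dictionary, as returned by <code>kubectl get pod -o json</code> or the Kubernetes Python client, for the common places a secret can be referenced: volumes, environment variables, <code>envFrom</code>, and <code>imagePullSecrets</code>.</p>

```python
def find_secret_refs(pod_spec, secret_name):
    """Return a list of locations in a pod spec that reference secret_name."""
    hits = []
    # Secrets mounted as volumes
    for v in pod_spec.get("volumes", []):
        if v.get("secret", {}).get("secretName") == secret_name:
            hits.append(f"volume '{v['name']}'")
    # Secrets injected as environment variables
    for c in pod_spec.get("containers", []) + pod_spec.get("initContainers", []):
        for env in c.get("env", []):
            ref = env.get("valueFrom", {}).get("secretKeyRef", {})
            if ref.get("name") == secret_name:
                hits.append(f"env '{env['name']}' in container '{c['name']}'")
        for ef in c.get("envFrom", []):
            if ef.get("secretRef", {}).get("name") == secret_name:
                hits.append(f"envFrom in container '{c['name']}'")
    # Secrets used to pull images
    for ips in pod_spec.get("imagePullSecrets", []):
        if ips.get("name") == secret_name:
            hits.append("imagePullSecrets")
    return hits
```

<p>Run it against each pod spec in the namespace (and against the pod templates of Deployments, StatefulSets, etc.) to get a full picture of where the secret is consumed.</p>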
|
<p>I have installed cert manager on a k8s cluster:</p>
<pre><code>helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.5.3 --set installCRDs=true
</code></pre>
<p>My objective is to do mTLS communication between micro-services running in the same namespace.</p>
<p>For this purpose I have created a CA issuer, i.e.:</p>
<pre><code>kubectl get issuer -n sandbox -o yaml
apiVersion: v1
items:
- apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"cert-manager.io/v1","kind":"Issuer","metadata":{"annotations":{},"name":"ca-issuer","namespace":"sandbox"},"spec":{"ca":{"secretName":"tls-internal-ca"}}}
creationTimestamp: "2021-09-16T17:24:58Z"
generation: 1
managedFields:
- apiVersion: cert-manager.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
.: {}
f:ca:
.: {}
f:secretName: {}
manager: HashiCorp
operation: Update
time: "2021-09-16T17:24:58Z"
- apiVersion: cert-manager.io/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
.: {}
f:conditions: {}
manager: controller
operation: Update
time: "2021-09-16T17:24:58Z"
name: ca-issuer
namespace: sandbox
resourceVersion: "3895820"
selfLink: /apis/cert-manager.io/v1/namespaces/sandbox/issuers/ca-issuer
uid: 90f0c811-b78d-4346-bb57-68bf607ee468
spec:
ca:
secretName: tls-internal-ca
status:
conditions:
message: Signing CA verified
observedGeneration: 1
reason: KeyPairVerified
status: "True"
type: Ready
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>Using this CA issuer, I have created certificates for my two micro-services, i.e.:</p>
<pre><code>kubectl get certificate -n sandbox
NAME READY SECRET Age
service1-certificate True service1-certificate 3d
service2-certificate True service2-certificate 2d23h
</code></pre>
<p>which is configured as</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
annotations:
meta.helm.sh/release-name: service1
meta.helm.sh/release-namespace: sandbox
creationTimestamp: "2021-09-17T10:20:21Z"
generation: 1
labels:
app.kubernetes.io/managed-by: Helm
managedFields:
- apiVersion: cert-manager.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:meta.helm.sh/release-name: {}
f:meta.helm.sh/release-namespace: {}
f:labels:
.: {}
f:app.kubernetes.io/managed-by: {}
f:spec:
.: {}
f:commonName: {}
f:dnsNames: {}
f:duration: {}
f:issuerRef:
.: {}
f:kind: {}
f:name: {}
f:renewBefore: {}
f:secretName: {}
f:subject:
.: {}
f:organizations: {}
f:usages: {}
manager: Go-http-client
operation: Update
time: "2021-09-17T10:20:21Z"
- apiVersion: cert-manager.io/v1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:privateKey: {}
f:status:
.: {}
f:conditions: {}
f:notAfter: {}
f:notBefore: {}
f:renewalTime: {}
f:revision: {}
manager: controller
operation: Update
time: "2021-09-20T05:14:12Z"
name: service1-certificate
namespace: sandbox
resourceVersion: "5177051"
selfLink: /apis/cert-manager.io/v1/namespaces/sandbox/certificates/service1-certificate
uid: 0cf1ea65-92a1-4b03-944e-b847de2c80d9
spec:
commonName: example.com
dnsNames:
- service1
duration: 24h0m0s
issuerRef:
kind: Issuer
name: ca-issuer
renewBefore: 12h0m0s
secretName: service1-certificate
subject:
organizations:
- myorg
usages:
- client auth
- server auth
status:
conditions:
- lastTransitionTime: "2021-09-20T05:14:13Z"
message: Certificate is up to date and has not expired
observedGeneration: 1
reason: Ready
status: "True"
type: Ready
notAfter: "2021-09-21T05:14:13Z"
notBefore: "2021-09-20T05:14:13Z"
renewalTime: "2021-09-20T17:14:13Z"
revision: 5
</code></pre>
<p>Now, as you can see in the configuration, I have configured the certificates to be renewed every 12 hours. However, the secrets created via this custom certificate resource still show the age from when they were first created. I was expecting these TLS secrets to be renewed by cert-manager every day, i.e.:</p>
<pre><code>kubectl get secrets service1-certificate service2-certificate -n sandbox -o wide
NAME TYPE DATA AGE
service1-certificate kubernetes.io/tls 3 2d23h
service2-certificate kubernetes.io/tls 3 3d1h
</code></pre>
<p>Is there something wrong in my understanding? In the <code>cert-manager</code> pod logs I do see some errors around renewing, i.e.:</p>
<pre><code>I0920 05:14:04.649158 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="sandbox/service1-certificate" "message"="Renewing certificate as renewal was scheduled at 2021-09-19 08:24:13 +0000 UTC" "reason"="Renewing"
I0920 05:14:04.649235 1 conditions.go:201] Setting lastTransitionTime for Certificate "service1-certificate" condition "Issuing" to 2021-09-20 05:14:04.649227766 +0000 UTC m=+87949.327215532
I0920 05:14:04.652174 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="sandbox/service2-certificate" "message"="Renewing certificate as renewal was scheduled at 2021-09-19 10:20:22 +0000 UTC" "reason"="Renewing"
I0920 05:14:04.652231 1 conditions.go:201] Setting lastTransitionTime for Certificate "service2-certificate" condition "Issuing" to 2021-09-20 05:14:04.652224302 +0000 UTC m=+87949.330212052
I0920 05:14:04.671111 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:04.671094596 +0000 UTC m=+87949.349082328
I0920 05:14:04.671344 1 conditions.go:190] Found status change for Certificate "service1-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:04.671332206 +0000 UTC m=+87949.349319948
I0920 05:14:12.703039 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service2-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:12.703896 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:12.7038803 +0000 UTC m=+87957.381868045
I0920 05:14:12.749502 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service1-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service1-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:12.750096 1 conditions.go:190] Found status change for Certificate "service1-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:12.750082572 +0000 UTC m=+87957.428070303
I0920 05:14:13.009032 1 controller.go:161] cert-manager/controller/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service1-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service1-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.117843 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service2-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.119366 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:13.119351795 +0000 UTC m=+87957.797339520
I0920 05:14:13.122820 1 controller.go:161] cert-manager/controller/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service2-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.123907 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service2-certificate-t92qh" condition "Approved" to 2021-09-20 05:14:13.123896104 +0000 UTC m=+87957.801883833
I0920 05:14:13.248082 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service1-certificate-p9stz" condition "Approved" to 2021-09-20 05:14:13.248071551 +0000 UTC m=+87957.926059296
I0920 05:14:13.253488 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service2-certificate-t92qh" condition "Ready" to 2021-09-20 05:14:13.253474153 +0000 UTC m=+87957.931461871
I0920 05:14:13.388001 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service1-certificate-p9stz" condition "Ready" to 2021-09-20 05:14:13.387983783 +0000 UTC m=+87958.065971525
</code></pre>
</code></pre>
| <h2>Short answer</h2>
<p>Based on logs and details from certificate you provided it's safe to say <strong>it's working as expected</strong>.</p>
<p>Pay attention to <code>revision: 5</code> in your certificate, which means that certificate has been renewed 4 times already. If you try to look there now, this will be 6 or 7 because certificate is updated every 12 hours.</p>
<h2>Logs</h2>
<p>The first thing which can be really confusing is the <code>error messages</code> in the <code>cert-manager</code> pod. These are mostly noisy messages which are not really helpful by themselves.</p>
<p>See about it here <a href="https://github.com/jetstack/cert-manager/issues/3501#issuecomment-884003519" rel="nofollow noreferrer">Github issue comment</a> and here <a href="https://github.com/jetstack/cert-manager/issues/3667" rel="nofollow noreferrer">github issue 3667</a>.</p>
<p>In case logs are really needed, <code>verbosity level</code> should be increased by setting <code>args</code> to <code>--v=5</code> in the <code>cert-manager</code> deployment. To edit a deployment run following command:</p>
<pre><code>kubectl edit deploy cert-manager -n cert-manager
</code></pre>
<h2>How to check certificate/secret</h2>
<p>When a certificate is renewed, the secret's and certificate's age do not change, but their content is updated; for instance the <code>resourceVersion</code> in the <code>secret</code> and the <code>revision</code> in the certificate.</p>
<p>Below are options to check if certificate was renewed:</p>
<ol>
<li><p>Check this by getting secret in <code>yaml</code> before and after renew:</p>
<pre><code>kubectl get secret example-certificate -o yaml > secret-before
</code></pre>
</li>
</ol>
<p>And then run <code>diff</code> between them. It will be seen that <code>tls.crt</code> as well as <code>resourceVersion</code> is updated.</p>
<ol start="2">
<li><p>Look into certificate <code>revision</code> and <code>dates</code> in status
(I set duration to minimum possible <code>1h</code> and renewBefore <code>55m</code>, so it's updated every 5 minutes):</p>
<pre><code> $ kubectl get cert example-cert -o yaml
notAfter: "2021-09-21T14:05:24Z"
notBefore: "2021-09-21T13:05:24Z"
renewalTime: "2021-09-21T13:10:24Z"
revision: 7
</code></pre>
</li>
<li><p>Check events in the namespace where certificate/secret are deployed:</p>
<pre><code>$ kubectl get events
117s Normal Issuing certificate/example-cert The certificate has been successfully issued
117s Normal Reused certificate/example-cert Reusing private key stored in existing Secret resource "example-staging-certificate"
6m57s Normal Issuing certificate/example-cert Renewing certificate as renewal was scheduled at 2021-09-21 13:00:24 +0000 UTC
6m57s Normal Requested certificate/example-cert Created new CertificateRequest resource "example-cert-bs8g6"
117s Normal Issuing certificate/example-cert Renewing certificate as renewal was scheduled at 2021-09-21 13:05:24 +0000 UTC
117s Normal Requested certificate/example-cert Created new CertificateRequest resource "example-cert-7x8cf" UTC
</code></pre>
</li>
<li><p>Look at <code>certificaterequests</code>:</p>
<pre><code>$ kubectl get certificaterequests
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
example-cert-2pxdd True True ca-issuer system:serviceaccount:cert-manager:cert-manager 14m
example-cert-54zzc True True ca-issuer system:serviceaccount:cert-manager:cert-manager 4m29s
example-cert-8vjcm True True ca-issuer system:serviceaccount:cert-manager:cert-manager 9m29s
</code></pre>
</li>
<li><p>Check logs in <code>cert-manager</code> pod to see four stages:</p>
<pre><code>I0921 12:45:24.000726 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="default/example-cert" "message"="Renewing certificate as renewal was scheduled at 2021-09-21 12:45:24 +0000 UTC" "reason"="Renewing"
I0921 12:45:24.000761 1 conditions.go:201] Setting lastTransitionTime for Certificate "example-cert" condition "Issuing" to 2021-09-21 12:45:24.000756621 +0000 UTC m=+72341.194879378
I0921 12:45:24.120503 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "example-cert-mxvbm" condition "Approved" to 2021-09-21 12:45:24.12049391 +0000 UTC m=+72341.314616684
I0921 12:45:24.154092 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "example-cert-mxvbm" condition "Ready" to 2021-09-21 12:45:24.154081971 +0000 UTC m=+72341.348204734
</code></pre>
</li>
</ol>
<h2>Note</h2>
<p>Very importantly, not all <code>issuers</code> support the <code>duration</code> and <code>renewBefore</code> fields. E.g. <code>letsencrypt</code> still doesn't work with them and uses a default of 90 days.</p>
<p><a href="https://cert-manager.io/docs/release-notes/release-notes-1.1/#duration" rel="nofollow noreferrer">Reference</a>.</p>
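<p>For reference, the quick renewal test mentioned above can be reproduced with a minimal <code>Certificate</code> manifest like the sketch below (names and the issuer are placeholders modeled on this question's setup, not an exact copy of it). With <code>duration: 1h</code> and <code>renewBefore: 55m</code>, the certificate is re-issued roughly every 5 minutes, which makes it easy to watch the <code>revision</code> counter climb:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  namespace: sandbox
spec:
  secretName: example-certificate
  commonName: example.com
  dnsNames:
    - example
  duration: 1h        # short lifetime for testing
  renewBefore: 55m    # renew 55m before expiry, i.e. ~5 minutes after issuance
  issuerRef:
    kind: Issuer
    name: ca-issuer
</code></pre>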
|
<p>I have tried using the Patch Class to scale the Deployments but unable to do so. Please let me know how to do it. i have researched a lot but no proper docs on it/Example to achieve.</p>
<pre><code>public static async Task<V1Scale> Scale([FromBody] ReplicaRequest request)
{
try
{
// Use the config object to create a client.
using (var client = new Kubernetes(config))
{
// Create a json patch for the replicas
var jsonPatch = new JsonPatchDocument<V1Scale>();
// Set the new number of replicas
jsonPatch.Replace(e => e.Spec.Replicas, request.Replicas);
// Create the patch
var patch = new V1Patch(jsonPatch,V1Patch.PatchType.ApplyPatch);
var list = client.ListNamespacedPod("default");
//client.PatchNamespacedReplicaSetStatus(patch, request.Deployment, request.Namespace);
//var _result = await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace,null, null,true,default);
await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace,null, "M", true, default);
}
}
catch (Microsoft.Rest.HttpOperationException e)
{
Console.WriteLine(e.Response.Content);
}
return null;
}
public class ReplicaRequest
{
public string Deployment { get; set; }
public string Namespace { get; set; }
public int Replicas { get; set; }
}
</code></pre>
|
<p>The <code>JsonPatchDocument<T></code> you are using generates Json Patch, but you are specifying ApplyPatch.</p>
<p>Edit as of 2022-04-19:<br />
The <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">kubernetes-client/csharp</a> library changed the serializer to <code>System.Text.Json</code> and this doesn't support <code>JsonPatchDocument<T></code> serialization, hence we have to do it beforehand</p>
<h4><code>From version 7 of the client library:</code></h4>
<pre class="lang-cs prettyprint-override"><code>var jsonPatch = new JsonPatchDocument<V1Scale>();
jsonPatch.ContractResolver = new DefaultContractResolver
{
NamingStrategy = new CamelCaseNamingStrategy()
};
jsonPatch.Replace(e => e.Spec.Replicas, request.Replicas);
var jsonPatchString = Newtonsoft.Json.JsonConvert.SerializeObject(jsonPatch);
var patch = new V1Patch(jsonPatchString, V1Patch.PatchType.JsonPatch);
await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace);
</code></pre>
<h4><code>Works until version 6 of the client library:</code></h4>
<p>Either of these should work:</p>
<p>Json Patch:</p>
<pre class="lang-cs prettyprint-override"><code>var jsonPatch = new JsonPatchDocument<V1Scale>();
jsonPatch.Replace(e => e.Spec.Replicas, request.Replicas);
var patch = new V1Patch(jsonPatch, V1Patch.PatchType.JsonPatch);
await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace);
</code></pre>
<p>Json Merge Patch:</p>
<pre class="lang-cs prettyprint-override"><code>var jsonMergePatch = new V1Scale { Spec = new V1ScaleSpec { Replicas = request.Replicas } };
var patch = new V1Patch(jsonMergePatch, V1Patch.PatchType.MergePatch);
await client.PatchNamespacedDeploymentScaleAsync(patch, request.Deployment, request.Namespace);
</code></pre>
<h3>About the different patch options:</h3>
<p>They are described here (official C# client):
<a href="https://github.com/kubernetes-client/csharp/blob/master/src/KubernetesClient/Kubernetes.Header.cs#L32" rel="nofollow noreferrer">https://github.com/kubernetes-client/csharp/blob/master/src/KubernetesClient/Kubernetes.Header.cs#L32</a></p>
<ul>
<li>Nice article about the difference between JSON Patch and JSON Merge Patch: <a href="https://erosb.github.io/post/json-patch-vs-merge-patch/" rel="nofollow noreferrer">https://erosb.github.io/post/json-patch-vs-merge-patch/</a></li>
<li>Strategic merge patch is a custom k8s version of Json Patch:
<a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md</a></li>
<li>And ApplyPatch is the "server side apply" that replaces it in yaml format:
<a href="https://kubernetes.io/docs/reference/using-api/server-side-apply/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/server-side-apply/</a></li>
</ul>
|
<p>I have to process tasks stored in a work queue and I am launching this kind of Job to do it:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 10
  backoffLimit: 0
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["some", "long", "command"]
      restartPolicy: Never
</code></pre>
<p>The problem is that if one of the Pod managed by the Job fails, the Job will terminate all the other Pods before they can complete. On my side, I would like the Job to be marked as failed but I do not want its Pods to be terminated. I would like them to continue running and finish processing the items they have picked in the queue.</p>
<p>Is there a way to do that please?</p>
| <p>As already mentioned in the comments, you can set <code>restartPolicy: OnFailure</code>, which means the kubelet will perform restarts until the Job succeeds; such a retry <a href="https://docs.openshift.com/container-platform/4.1/nodes/jobs/nodes-nodes-jobs.html" rel="nofollow noreferrer">doesn't increment the number of failures</a>. To avoid a loop of endless failing, you can set <code>activeDeadlineSeconds</code> to some value.</p>
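<p>Putting those suggestions together, a sketch of such a Job manifest could look like this (the deadline value is illustrative; pick one that comfortably covers your queue processing time):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  parallelism: 10
  activeDeadlineSeconds: 3600   # mark the Job failed after 1 hour instead of retrying forever
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["some", "long", "command"]
      restartPolicy: OnFailure  # kubelet restarts the failed container in place; other Pods keep running
</code></pre>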
|
<p>I am trying to migrate our cassandra tables to use <code>liquibase</code>. Basically the idea is trivial, have a <code>pre-install</code> and <code>pre-upgrade</code> job that will run some <code>liquibase</code> scripts and will manage our database upgrade.</p>
<p>For that purpose I have created a custom docker image that will have the actual <code>liquibase cli</code> and then I can invoke it from the job. For example:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Release.Name }}-update-job"
namespace: spring-k8s
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
template:
metadata:
name: "{{ .Release.Name }}-cassandra-update-job"
namespace: spring-k8s
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
restartPolicy: Never
containers:
- name: pre-install-upgrade-job
image: "lq/liquibase-cassandra:1.0.0"
command: ["/bin/bash"]
args:
- "-c"
- "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.username }} --url {{ .Values.liquibase.url | squote }} --file {{ .Values.liquibase.file }}"
</code></pre>
<p>Where <code>.Values.liquibase.file</code> == <code>databaseChangelog.json</code>.</p>
<p>So this image <code>lq/liquibase-cassandra:1.0.0</code> basically has a script <code>liquibase-cassandra.sh</code> that when passed some arguments can do its magic and update the DB schema (not going to go into the details).</p>
<p>The problem is that last argument: <code>--file {{ .Values.liquibase.file }}</code>. This file resides <em>not</em> in the image, obviously, but in each micro-services repository.</p>
<p>I need a way to "copy" that file to the image, so that I could invoke it. One way would be to rebuild <code>lq/liquibase-cassandra</code> all the time (with the same lifecycle as the project itself) and copy the file into it, but that takes time and seems cumbersome at best. What am I missing?</p>
| <p>It turns out that <code>helm</code> hooks can be used for other things, not only jobs. As such, I can mount this file into a <code>ConfigMap</code> <em>before</em> the Job even starts (the file I care about resides in <code>resources/databaseChangelog.json</code>):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: "liquibase-changelog-config-map"
namespace: spring-k8s
annotations:
helm.sh/hook: pre-install, pre-upgrade
helm.sh/hook-delete-policy: hook-succeeded
helm.sh/hook-weight: "1"
data:
{{ (.Files.Glob "resources/*").AsConfig | indent 2 }}
</code></pre>
<p>And then just reference it inside the job:</p>
<pre><code> .....
spec:
restartPolicy: Never
volumes:
- name: liquibase-changelog-config-map
configMap:
name: liquibase-changelog-config-map
defaultMode: 0755
containers:
- name: pre-install-upgrade-job
volumeMounts:
- name: liquibase-changelog-config-map
mountPath: /liquibase-changelog-file
image: "lq/liquibase-cassandra:1.0.0"
command: ["/bin/bash"]
args:
- "-c"
- "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.username }} --url {{ .Values.liquibase.url | squote }} --file {{ printf "/liquibase-changelog-file/%s" .Values.liquibase.file }}"
</code></pre>
|
<p>I am new to stackoverflow, so pardon if I have not followed all rules for asking this question.</p>
<p>So, I am using this helm chart: <a href="https://github.com/helm/charts/tree/master/stable/mysql" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/mysql</a> for deploying mysql for our production env. In fact, we have this setup running in production, but it creates only 1 pod of mysql, whereas I need to run 3 pods. I tried to set replicas=3 in the values.yaml file of the helm chart and redeploy the chart, but it is continuously failing with the error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m58s default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 4 times)
Normal Scheduled 4m58s default-scheduler Successfully assigned test-helm/cluster-mysql-5bcdf87779-vdb2s to innolx12896
Normal Pulled 4m57s kubelet, innolx12896 Container image "busybox:1.29.3" already present on machine
Normal Created 4m56s kubelet, innolx12896 Created container
Normal Started 4m56s kubelet, innolx12896 Started container
Warning BackOff 4m32s (x2 over 4m37s) kubelet, innolx12896 Back-off restarting failed container
Normal Pulled 2m52s (x4 over 4m55s) kubelet, innolx12896 Container image "mysql:5.7.14" already present on machine
Normal Created 2m52s (x4 over 4m55s) kubelet, innolx12896 Created container
Warning Unhealthy 2m52s (x3 over 3m12s) kubelet, innolx12896 Readiness probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
Normal Killing 2m52s kubelet, innolx12896 Killing container with id docker://cluster-mysql:Container failed liveness probe.. Container will be killed and recreated.
Normal Started 2m51s (x4 over 4m54s) kubelet, innolx12896 Started container
Warning Unhealthy 2m12s (x4 over 3m42s) kubelet, innolx12896 Liveness probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
</code></pre>
<p>Also, when I look for the <strong>mysqld</strong> socket, it is missing, and I don't understand why: a single-pod deployment works fine, but a 3-pod deployment produces this error.</p>
<p>Also there might be ways to solve this problem by explicitly defining the paths for the sock which other StackOverflow answers suggest like this answer <a href="https://stackoverflow.com/questions/11990708/error-cant-connect-to-local-mysql-server-through-socket-var-run-mysqld-mysq">1</a>
or this answer <a href="https://stackoverflow.com/questions/5376427/cant-connect-to-local-mysql-server-through-socket-var-mysql-mysql-sock-38">2</a>. But the caveat is we have whole production chart deployment automated through Jenkins and there is no scope of explicitly running those commands for every production release.</p>
<p>So, I need to know how to solve this problem or maybe know what is causing this problem, so that I can change or alter some parameters in the helm chart I mentioned in the beginning. Also tell me if it is not possible to scale this deployment, meaning this helm chart doesn't support or is not meant for multiple mysql pods.</p>
<p>Thanks in advance.</p>
| <p>It looks like the Readiness probe failed & the Liveness probe is failing in your case. For MySQL, please try with the below (in <code>templates/deployment.yaml</code>) for Readiness & Liveness:</p>
<pre class="lang-yaml prettyprint-override"><code>ports:
- name: mysql
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
</code></pre>
|
<p>Below is kubernetes POD definition</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: static-web
labels:
role: myrole
spec:
containers:
- name: web
image: nginx
ports:
- name: web
containerPort: 80
protocol: TCP
</code></pre>
<p>as I have not specified the resources, how much Memory & CPU will be allocated? Is there a kubectl to find what is allocated for the POD?</p>
| <p>If resources are not specified for the Pod, the Pod will be scheduled to any node and resources are not considered when choosing a node.</p>
<p>The Pod might be "terminated" if it uses more memory than available or get little CPU time as Pods with specified resources will be prioritized. It is a good practice to set resources for your Pods.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">Configure Quality of Service for Pods</a> - your Pod will be classified as "Best Effort":</p>
<blockquote>
<p>For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.</p>
</blockquote>
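<p>To check what class a running Pod was assigned, you can read <code>status.qosClass</code>, e.g. <code>kubectl get pod static-web -o jsonpath='{.status.qosClass}'</code>. The classification rules themselves are simple enough to sketch in code; the following is an illustrative helper (not part of any official API) that derives the QoS class from a pod spec dictionary:</p>

```python
def qos_class(pod_spec):
    """Derive a pod's QoS class from its spec, following the Kubernetes rules."""
    has_requests, has_limits, guaranteed = False, False, True
    for c in pod_spec.get("containers", []):
        res = c.get("resources", {})
        req, lim = res.get("requests", {}), res.get("limits", {})
        if req:
            has_requests = True
        if lim:
            has_limits = True
        # Guaranteed: every container needs cpu and memory limits,
        # with requests equal to limits (unset requests default to limits).
        for resource in ("cpu", "memory"):
            if resource not in lim or req.get(resource, lim.get(resource)) != lim[resource]:
                guaranteed = False
    if not has_requests and not has_limits:
        return "BestEffort"
    return "Guaranteed" if guaranteed else "Burstable"
```

<p>Applied to the Pod in this question (no <code>resources</code> at all), the result is <code>BestEffort</code>, matching what the API server would report.</p>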
|
<p>I'm trying to understand why this particular <code>socat</code> command isn't working in my case, where I run it in an IPv6-only Kubernetes cluster.</p>
<p>Cluster is build on top of AWS with Calico CNI & containerd. Provisioned using <code>kubeadm</code> and Kubernetes 1.21.</p>
<p>I have run the following <code>socat</code> command which binds to loopback interface <code>::1</code>,</p>
<pre><code>kubectl --context=$CLUSTER1 run --image=alpine/socat socat -- tcp6-listen:15000,bind=\[::1\],fork,reuseaddr /dev/null
</code></pre>
<p>And then I try to <code>port-forward</code> and <code>curl</code> to <code>15000</code> port,</p>
<pre><code>kubectl --context=$CLUSTER1 port-forward pod/socat 35000:15000 --address=::1
curl -ivg http://localhost:35000
</code></pre>
<p>I get the error,</p>
<pre><code>Forwarding from [::1]:35000 -> 15000
Handling connection for 35000
E0830 17:09:59.604799 79802 portforward.go:400] an error occurred forwarding 35000 -> 15000: error forwarding port 15000 to pod a8ba619774234e73f4c1b4fe4ff47193af835cffc56cb6ad1a8f91e745ac74e9, uid : failed to execute portforward in network namespace "/var/run/netns/cni-8bade2c1-28c9-6776-5326-f10d55fd0ff9": failed to dial 15000: dial tcp4 127.0.0.1:15000: connect: connection refused
</code></pre>
<p>Its listening to <code>15000</code> as,</p>
<pre><code>Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 ::1:15000 :::* LISTEN 1/socat
</code></pre>
<p>However if I run the following it works fine,</p>
<pre><code>kubectl --context=$CLUSTER1 run --image=alpine/socat socat -- tcp6-listen:15000,bind=\[::\],fork,reuseaddr /dev/null
</code></pre>
<p>Not sure I understand why <code>port-forward</code> would fail for the loopback interface binding <code>::1</code> but not for catch all <code>::</code>. Can someone please shed some light on this ?</p>
| <p>For those of you running into a similar issue with your IPv6-only Kubernetes clusters, here's what I have found so far.</p>
<p><strong>Background:</strong> It seems that this is a generic issue relating to IPv6 and CRI.
I was running <code>containerd</code> in my setup and <code>containerd</code> versions <code>1.5.0</code>-<code>1.5.2</code> added two PRs (<a href="https://github.com/containerd/containerd/commit/11a78d9d0f1664466fa3fffebd8ff234f3ef2677" rel="nofollow noreferrer">don't use socat for port forwarding</a> and <a href="https://github.com/containerd/containerd/commit/305b42583073aa1435d15ff5773c049827fa8d51" rel="nofollow noreferrer">use happy-eyeballs for port-forwarding</a>) which fixed a number of issues in IPv6 port-forwarding.</p>
<p><strong>Potential fix:</strong> Further to pulling in <code>containerd</code> version <code>1.5.2</code> (as part of Ubuntu 20.04 LTS) I was also getting the error <code>IPv4: dial tcp4 127.0.0.1:15021: connect: connection refused IPv6 dial tcp6: address localhost: no suitable address found</code> when port-forwarding. This is caused by a DNS issue when resolving <code>localhost</code>. Hence I added <code>localhost</code> to resolve as <code>::1</code> in the host machine with the following command.</p>
<pre><code>sed -i 's/::1 ip6-localhost ip6-loopback/::1 localhost ip6-localhost ip6-loopback/' /etc/hosts
</code></pre>
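<p>To illustrate, here is the same <code>sed</code> substitution applied to a throwaway sample file under <code>/tmp</code>, so you can verify the effect before touching your real <code>/etc/hosts</code>:</p>

```shell
# Create a sample copy mimicking the default Ubuntu IPv6 hosts entries
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
::1 ip6-localhost ip6-loopback
EOF

# Same substitution as above, run against the sample copy
sed -i 's/::1 ip6-localhost ip6-loopback/::1 localhost ip6-localhost ip6-loopback/' /tmp/hosts.sample

grep '^::1' /tmp/hosts.sample
# ::1 localhost ip6-localhost ip6-loopback
```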
<p>I think the important point here is to check your container runtime to make sure IPv6 (tcp6 binding) is supported.</p>
|
<p>Can anyone advise: if we are monitoring our <code>EKS</code> cluster using <code>prometheus</code>, what would be the units for the metric <code>kube_metrics_server_pods_cpu</code> by default?</p>
| <h1>CPU is measured in nanocores.</h1>
<p><code>kube_metrics_server_pods_cpu</code> is measured in nanocores.</p>
<p>I agree with @noam-yizraeli</p>
<p>As per the <a href="https://github.com/olxbr/metrics-server-exporter/blob/bdf831c4e9d794e2f760edf217cf923a7c461d21/app.py#L130" rel="nofollow noreferrer">source code</a> of the <a href="https://github.com/olxbr/metrics-server-exporter" rel="nofollow noreferrer">metrics-server-exporter</a>, there is <code>pod_container_cpu</code> variable.</p>
<pre><code>metrics_pods_cpu.add_sample('kube_metrics_server_pods_cpu', value=int(pod_container_cpu), labels={ 'pod_name': pod_name, 'pod_namespace': pod_namespace, 'pod_container_name': pod_container_name })
</code></pre>
<p><code>pod_container_cpu</code> is declared <a href="https://github.com/olxbr/metrics-server-exporter/blob/bdf831c4e9d794e2f760edf217cf923a7c461d21/app.py#L123" rel="nofollow noreferrer">here</a></p>
<p>And <a href="https://github.com/olxbr/metrics-server-exporter/blob/bdf831c4e9d794e2f760edf217cf923a7c461d21/README.md" rel="nofollow noreferrer">README.md</a> says:</p>
<blockquote>
<p>kube_metrics_server_nodes_cpu</p>
<ul>
<li>Provides nodes CPU information in nanocores.</li>
</ul>
</blockquote>
<h1>Memory is measured in kibibytes.</h1>
<p>As for the memory usage, <a href="https://github.com/olxbr/metrics-server-exporter/blob/master/README.md" rel="nofollow noreferrer">the same README.md says</a>:</p>
<blockquote>
<p><code>kube_metrics_server_nodes_mem</code></p>
<ul>
<li>Provides nodes memory information in kibibytes.</li>
</ul>
</blockquote>
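<p>To make those units concrete, here is a small sketch (plain Python, just the arithmetic — nothing Kubernetes-specific) converting raw samples into more familiar values:</p>

```python
def nanocores_to_cores(nanocores: int) -> float:
    # kube_metrics_server_pods_cpu reports nanocores: 1 core = 1e9 nanocores
    return nanocores / 1_000_000_000

def kibibytes_to_mebibytes(kib: int) -> float:
    # kube_metrics_server_nodes_mem reports kibibytes: 1 MiB = 1024 KiB
    return kib / 1024

# A pod using half a core and 256 MiB of memory:
print(nanocores_to_cores(500_000_000))   # 0.5
print(kibibytes_to_mebibytes(262_144))   # 256.0
```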
|
<p>I am trying to output the value of .metadata.name followed by the student's name from the .spec.template.spec.containers[].students[] array, using a regex in JSONPath for a kubectl query.</p>
<p>I had actually asked a similar question linked here for this in jq.</p>
<p><a href="https://stackoverflow.com/questions/69260360/how-do-i-print-a-specific-value-of-an-array-given-a-condition-in-jq-if-there-is/69260669?noredirect=1#comment122445926_69260669">How do I print a specific value of an array given a condition in jq if there is no key specified</a></p>
<p>The solution worked but I am wondering if there is an alternative solution for it using JsonPath or go-template perhaps (without the need for using jq).</p>
<p>For example, if the students[] array contains a name matching the word "Jeff", I would like the output to display as below:</p>
<p>student-deployment: Jefferson</p>
<p><strong>What I've tried:</strong></p>
<p>For JsonPath, I've tried the query below:</p>
<pre><code>kubectl get deployment -o=jsonpath="{range .items[?(@.spec.template.spec.containers[*].students[*])]}{'\n'}{.metadata.name}{':\t'}{range .spec.template.spec.containers[*]}{.students[?(@=="Jefferson")]}{end}{end}"
</code></pre>
<p>But this only works for exact matches. Is it possible to use JSONPath to query with a regex? I've read that the JSONPath regex operator =~ doesn't work in kubectl. I did try piping to grep and findstr, but they still returned all the values inside the array. Other than jq, is there another way to retrieve the regex-filtered output?</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/61406" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/61406</a></p>
<p>The deployment template below is in json and I shortened it to only the relevant parts.</p>
<pre><code>{
"apiVersion": "v1",
"items": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"name": "student-deployment",
"namespace": "default"
},
"spec": {
"template": {
"spec": {
"containers": [
{
"students": [
"Alice",
"Bob",
"Peter",
"Sally",
"Jefferson"
]
}
]
}
}
}
}
]
}
</code></pre>
| <p>The kubectl documentation for JSONPath support explicitly states that regular expressions are not supported in JSONPath, and recommends using jq for that instead.</p>
<p><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/jsonpath/</a></p>
<p><a href="https://i.stack.imgur.com/VKr41.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VKr41.png" alt="enter image description here" /></a></p>
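<p>Since the regex match has to happen outside kubectl, the filtering logic looks like this (a plain-Python sketch over the JSON from the question; <code>jq</code>'s <code>test("Jeff")</code> achieves the same thing):</p>

```python
import json
import re

# Shortened sample of `kubectl get deployment -o json` from the question
doc = json.loads("""
{"items": [{"metadata": {"name": "student-deployment"},
            "spec": {"template": {"spec": {"containers": [
                {"students": ["Alice", "Bob", "Peter", "Sally", "Jefferson"]}]}}}}]}
""")

pattern = re.compile("Jeff")
for item in doc["items"]:
    for container in item["spec"]["template"]["spec"]["containers"]:
        for student in container.get("students", []):
            if pattern.search(student):
                print(f'{item["metadata"]["name"]}: {student}')
# student-deployment: Jefferson
```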
|
<p>I'm running a MySQL image in my one-node cluster for local testing purposes only.</p>
<p>I would like to be able to delete the database when needed to have the image build a new database from scratch, but I can't seem to find where or how I can do that easily.</p>
<p>I am on Windows, using Docker Desktop to manage my Docker images and Kubernetes cluster with WSL2. The pod uses a persistent volume/claim which can be seen below.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/MySQLTemp"
type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p>The volume part of my deployment looks like:</p>
<pre class="lang-yaml prettyprint-override"><code> volumeMounts:
- name: mysql-persistent
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
<p>Is there a command I can use to either see where this database is stored on my Windows or WSL2 machine so I can delete it manually, or delete it from the command line through <code>docker</code> or <code>kubectl</code>?</p>
| <p>A brilliant work-around to Docker Desktop's inability to easily support persistent volumes using WSL2 and KinD when the host file system is Windows.</p>
<p>I could not get hostpath to work either. To make this a generic solution when using KinD and Docker Desktop, I simply changed the hostPath entry from <code>/run/desktop/mnt/host/c/MySQLTemp</code> to <code>/run/desktop/mnt/host/v</code> (the Windows drive I wanted to access), and it worked!</p>
<p>Kudos to Ral :-)</p>
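<p>In YAML terms this means pointing the PersistentVolume's <code>hostPath</code> at Docker Desktop's WSL2 mount of the Windows drive — a sketch of the relevant fragment (the drive letter and folder below are examples, substitute your own):</p>

```yaml
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # C:\MySQLTemp on the Windows host, exposed through Docker Desktop's WSL2 mount
    path: "/run/desktop/mnt/host/c/MySQLTemp"
    type: DirectoryOrCreate
```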
|
<p>I deployed a k3s cluster into 2 raspberry pi 4. One as a master and the second as a worker using the script k3s offered with the following options:</p>
<p>For the master node:</p>
<pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --bind-address 192.168.1.113 (which is the master node ip)' sh -
</code></pre>
<p>To the agent node:</p>
<pre><code>curl -sfL https://get.k3s.io | \
K3S_URL=https://192.168.1.113:6443 \
K3S_TOKEN=<master-token> \
  INSTALL_K3S_EXEC='agent' sh -
</code></pre>
<p>Everything seems to work, but <code>kubectl top nodes</code> returns the following:</p>
<pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k3s-master 137m 3% 1285Mi 33%
k3s-node-01 <unknown> <unknown> <unknown> <unknown>
</code></pre>
<p>I also tried to deploy the k8s dashboard, according to what is written in <a href="https://rancher.com/docs/k3s/latest/en/installation/kube-dashboard/" rel="nofollow noreferrer">the docs</a> but it fails to work because it can't reach the metrics server and gets a timeout error:</p>
<pre><code>"error trying to reach service: dial tcp 10.42.1.11:8443: i/o timeout"
</code></pre>
<p>and I see a lot of errors in the pod logs:</p>
<pre><code>2021/09/17 09:24:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2021/09/17 09:25:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2021/09/17 09:26:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2021/09/17 09:27:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
</code></pre>
<p>logs from the <code>metrics-server</code> pod:</p>
<pre><code>elet_summary:k3s-node-01: unable to fetch metrics from Kubelet k3s-node-01 (k3s-node-01): Get https://k3s-node-01:10250/stats/summary?only_cpu_and_memory=true: dial tcp 192.168.1.106:10250: connect: no route to host
E0917 14:03:24.767949 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:k3s-node-01: unable to fetch metrics from Kubelet k3s-node-01 (k3s-node-01): Get https://k3s-node-01:10250/stats/summary?only_cpu_and_memory=true: dial tcp 192.168.1.106:10250: connect: no route to host
E0917 14:04:24.767960 1 manager.go:111] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:k3s-node-01: unable to fetch metrics from Kubelet k3s-node-01 (k3s-node-01): Get https://k3s-node-01:10250/stats/summary?only_cpu_and_memory=true: dial tcp 192.168.1.106:10250: connect: no route to host
</code></pre>
| <p>Moving this out of comments for better visibility.</p>
<hr />
<p>After creating a small cluster, I wasn't able to reproduce this behaviour: <code>metrics-server</code> worked fine for both nodes, and <code>kubectl top nodes</code> showed information and metrics about both available nodes (though it took some time to start collecting the metrics).</p>
<p>Which leads to troubleshooting steps why it doesn't work. Checking <code>metrics-server</code> logs is the most efficient way to figure this out:</p>
<pre><code>$ kubectl logs metrics-server-58b44df574-2n9dn -n kube-system
</code></pre>
<p>Based on logs it will be different steps to continue, for instance in comments above:</p>
<ul>
<li>first it was <code>no route to host</code> which is related to network and lack of possibility to resolve hostname</li>
<li>then <code>i/o timeout</code>, which means a route exists but the service did not respond. This may happen due to a firewall blocking certain ports/sources, <code>kubelet</code> not running (it listens on port <code>10250</code>), or, as it turned out for the OP, an issue with <code>ntp</code> which affected certificates and connections.</li>
<li>errors may differ in other cases; it's important to find the exact error and troubleshoot further based on it.</li>
</ul>
|
<p>I'm running a simple Spark job on Kubernetes cluster that writes data to HDFS with Hive catologization. For whatever reason my app fails to run Spark SQL commands with the following exception:</p>
<pre><code>21/09/22 09:23:54 ERROR SplunkStreamListener: |exception=org.apache.spark.sql.AnalysisException
org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: java.io.IOException There is no primary group for UGI spark (auth:SIMPLE));
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
at org.apache.spark.sql.hive.HiveExternalCatalog.createDatabase(HiveExternalCatalog.scala:183)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createDatabase(ExternalCatalogWithListener.scala:47)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:211)
at org.apache.spark.sql.execution.command.CreateDatabaseCommand.run(ddl.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withAction(Dataset.scala:3369)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643)
</code></pre>
<p>I'm connecting to the Hive metastore via a Thrift URL. The docker container runs the application as a non-root user. Is there some kind of group the user needs to be added to in order to sync with the metastore?</p>
| <p>Try adding this before setting up the Spark context:</p>
<pre><code>System.setProperty("HADOOP_USER_NAME", "root")
</code></pre>
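<p>If you are on PySpark, a rough equivalent (a sketch — it must run before the SparkSession/JVM is created, or it has no effect) is to export the variable through the environment:</p>

```python
import os

# Hadoop's UserGroupInformation picks this up when the JVM starts,
# so it must be set before building the SparkSession
os.environ["HADOOP_USER_NAME"] = "root"

# spark = SparkSession.builder.enableHiveSupport().getOrCreate()  # created afterwards
```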
|
<p>I've created a Kubernetes cluster on Google Cloud and even though the application is running properly (which I've checked running requests inside the cluster) it seems that the NEG health check is not working properly. Any ideas on the cause?</p>
<p>I've tried to change the service from NodePort to LoadBalancer, different ways of adding annotations to the service. I was thinking that perhaps it might be related to the https requirement in the django side.</p>
<p><a href="https://i.stack.imgur.com/G29KX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G29KX.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/wEbwG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wEbwG.png" alt="enter image description here" /></a></p>
<pre><code># [START kubernetes_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
name: moner-app
labels:
app: moner-app
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: moner-app
template:
metadata:
labels:
app: moner-app
spec:
containers:
- name: moner-core-container
image: my-template
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
limits:
memory: "512Mi"
startupProbe:
httpGet:
path: /ht/
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
failureThreshold: 30
timeoutSeconds: 10
periodSeconds: 10
initialDelaySeconds: 90
readinessProbe:
initialDelaySeconds: 120
httpGet:
path: "/ht/"
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
periodSeconds: 10
failureThreshold: 3
timeoutSeconds: 10
livenessProbe:
initialDelaySeconds: 30
failureThreshold: 3
periodSeconds: 30
timeoutSeconds: 10
httpGet:
path: "/ht/"
port: 5000
httpHeaders:
- name: "X-Forwarded-Proto"
value: "https"
volumeMounts:
- name: cloudstorage-credentials
mountPath: /secrets/cloudstorage
readOnly: true
env:
# [START_secrets]
- name: THIS_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: GRACEFUL_TIMEOUT
value: '120'
- name: GUNICORN_HARD_TIMEOUT
value: '90'
- name: DJANGO_ALLOWED_HOSTS
value: '*,$(THIS_POD_IP),0.0.0.0'
ports:
- containerPort: 5000
args: ["/start"]
# [START proxy_container]
- image: gcr.io/cloudsql-docker/gce-proxy:1.16
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=moner-dev:us-east1:core-db=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
resources:
requests:
memory: "64Mi"
limits:
memory: "128Mi"
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
# [END proxy_container]
# [START volumes]
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir: {}
- name: cloudstorage-credentials
secret:
secretName: cloudstorage-credentials
# [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
name: moner-svc
annotations:
cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
labels:
app: moner-svc
spec:
type: NodePort
ports:
- name: moner-core-http
port: 5000
protocol: TCP
targetPort: 5000
selector:
app: moner-app
# [END service]
---
# [START certificates_setup]
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: managed-cert
spec:
domains:
- domain.com
- app.domain.com
# [END certificates_setup]
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: moner-backendconfig
spec:
customRequestHeaders:
headers:
- "X-Forwarded-Proto:https"
healthCheck:
checkIntervalSec: 15
port: 5000
type: HTTP
requestPath: /ht/
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: managed-cert-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: moner-ssl
networking.gke.io/managed-certificates: managed-cert
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: moner-svc
port:
name: moner-core-http
</code></pre>
| <p>Apparently, you didn't have a GCP firewall rule allowing traffic on port 5000 to reach your GKE nodes. <a href="https://cloud.google.com/vpc/docs/using-firewalls#creating_firewall_rules" rel="nofollow noreferrer">Creating an ingress firewall rule</a> with source IP range 0.0.0.0/0 and port TCP 5000, targeted at your GKE nodes, allows your setup to work even with port 5000.</p>
|
<p>I'm trying to run a minimalistic sample of oauth2-proxy with Keycloak. I used oauth2-proxy's <a href="https://github.com/oauth2-proxy/oauth2-proxy/tree/master/contrib/local-environment/kubernetes" rel="nofollow noreferrer">k8s example</a>, which uses dex, to build up my keycloak example.
The problem is that I don't seem to get the proxy to work:</p>
<pre class="lang-sh prettyprint-override"><code># kubectl get pods
NAME READY STATUS RESTARTS AGE
httpbin-774999875d-zbczh 1/1 Running 0 2m49s
keycloak-758d7c758-27pgh 1/1 Running 0 2m49s
oauth2-proxy-5875dd67db-8qwqn 0/1 CrashLoopBackOff 2 2m49s
</code></pre>
<p>Logs indicate a network error:</p>
<pre class="lang-sh prettyprint-override"><code># kubectl logs oauth2-proxy-5875dd67db-8qwqn
[2021/09/22 08:14:56] [main.go:54] Get "http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration": dial tcp 127.0.0.1:80: connect: connection refused
</code></pre>
<p>I believe I have set up the ingress correctly, though.</p>
<h3>Steps to reproduce</h3>
<ol>
<li>Set up the cluster:</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>#Creare kind cluster
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/kind-cluster.yaml
kind create cluster --name oauth2-proxy --config kind-cluster.yaml
#Setup dns
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/custom-dns.yaml
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
#Setup ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl --namespace ingress-nginx rollout status --timeout 5m deployment/ingress-nginx-controller
#Deploy
#import keycloak master realm
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/keycloak/master-realm.json
kubectl create configmap keycloak-import-config --from-file=master-realm.json=master-realm.json
</code></pre>
<ol start="2">
<li>Deploy the test application. My <code>deployment.yaml</code> file:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>###############oauth2-proxy#############
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
name: oauth2-proxy
name: oauth2-proxy
spec:
replicas: 1
selector:
matchLabels:
name: oauth2-proxy
template:
metadata:
labels:
name: oauth2-proxy
spec:
containers:
- args:
- --provider=oidc
- --oidc-issuer-url=http://keycloak.localtest.me/auth/realms/master
- --upstream="file://dev/null"
- --client-id=oauth2-proxy
- --client-secret=72341b6d-7065-4518-a0e4-50ee15025608
- --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
- --email-domain=*
- --scope=openid profile email users
- --cookie-domain=.localtest.me
- --whitelist-domain=.localtest.me
- --pass-authorization-header=true
- --pass-access-token=true
- --pass-user-headers=true
- --set-authorization-header=true
- --set-xauthrequest=true
- --cookie-refresh=1m
- --cookie-expire=30m
- --http-address=0.0.0.0:4180
image: quay.io/oauth2-proxy/oauth2-proxy:latest
# image: "quay.io/pusher/oauth2_proxy:v5.1.0"
name: oauth2-proxy
ports:
- containerPort: 4180
name: http
protocol: TCP
livenessProbe:
httpGet:
path: /ping
port: http
scheme: HTTP
initialDelaySeconds: 0
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /ping
port: http
scheme: HTTP
initialDelaySeconds: 0
timeoutSeconds: 1
successThreshold: 1
periodSeconds: 10
resources:
{}
---
apiVersion: v1
kind: Service
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
spec:
type: ClusterIP
ports:
- port: 4180
targetPort: 4180
name: http
selector:
name: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: oauth2-proxy
name: oauth2-proxy
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
large_client_header_buffers 4 32k;
spec:
rules:
- host: oauth2-proxy.localtest.me
http:
paths:
- path: /
backend:
serviceName: oauth2-proxy
servicePort: 4180
---
# ######################httpbin##################
apiVersion: apps/v1
kind: Deployment
metadata:
name: httpbin
spec:
replicas: 1
selector:
matchLabels:
name: httpbin
template:
metadata:
labels:
name: httpbin
spec:
containers:
- image: kennethreitz/httpbin:latest
name: httpbin
resources: {}
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
hostname: httpbin
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: httpbin-svc
labels:
app: httpbin
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
selector:
name: httpbin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: httpbin
labels:
name: httpbin
annotations:
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email
nginx.ingress.kubernetes.io/auth-signin: http://oauth2-proxy.localtest.me/oauth2/start
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.localtest.me/oauth2/auth
spec:
rules:
- host: httpbin.localtest.me
http:
paths:
- path: /
backend:
serviceName: httpbin-svc
servicePort: 80
---
# ######################keycloak#############
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: keycloak
name: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- args:
- -Dkeycloak.migration.action=import
- -Dkeycloak.migration.provider=singleFile
- -Dkeycloak.migration.file=/etc/keycloak_import/master-realm.json
- -Dkeycloak.migration.strategy=IGNORE_EXISTING
env:
- name: KEYCLOAK_PASSWORD
value: password
- name: KEYCLOAK_USER
value: admin@example.com
- name: KEYCLOAK_HOSTNAME
value: keycloak.localtest.me
- name: PROXY_ADDRESS_FORWARDING
value: "true"
image: quay.io/keycloak/keycloak:15.0.2
# image: jboss/keycloak:10.0.0
name: keycloak
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
volumeMounts:
- mountPath: /etc/keycloak_import
name: keycloak-config
hostname: keycloak
volumes:
- configMap:
defaultMode: 420
name: keycloak-import-config
name: keycloak-config
---
apiVersion: v1
kind: Service
metadata:
name: keycloak-svc
labels:
app: keycloak
spec:
type: ClusterIP
sessionAffinity: None
ports:
- name: http
targetPort: http
port: 8080
selector:
app: keycloak
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: keycloak
spec:
tls:
- hosts:
- "keycloak.localtest.me"
rules:
- host: "keycloak.localtest.me"
http:
paths:
- path: /
backend:
serviceName: keycloak-svc
servicePort: 8080
---
</code></pre>
<pre class="lang-sh prettyprint-override"><code># kubectl apply -f deployment.yaml
</code></pre>
<ol start="3">
<li>Configure the <code>/etc/hosts</code> file on the development machine to include the <code>localtest.me</code> domains:</li>
</ol>
<pre><code>127.0.0.1 oauth2-proxy.localtest.me
127.0.0.1 keycloak.localtest.me
127.0.0.1 httpbin.localtest.me
127.0.0.1 localhost
</code></pre>
<p>Note that I can reach <code>http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration</code> with no problem from my host browser. It appears that the <code>oauth2-proxy</code>'s pod cannot reach the service via the ingress. Would really appreciate any sort of help here.</p>
| <p>It turned out that I needed to add keycloak to <code>custom-dns.yaml</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
hosts {
10.244.0.1 dex.localtest.me. # <----Configured for dex
10.244.0.1 oauth2-proxy.localtest.me
fallthrough
}
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
</code></pre>
<p>With keycloak added, it looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
hosts {
10.244.0.1 keycloak.localtest.me
10.244.0.1 oauth2-proxy.localtest.me
fallthrough
}
}
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
</code></pre>
|
<p>I'm trying to figure out how to change one string inside a ConfigMap in Kubernetes. I have a pretty simple ConfigMap:</p>
<pre><code>apiVersion: v1
data:
config.cfg: |-
[authentication]
USERNAME=user
PASSWORD=password
[podname]
PODNAME=metadata.podName
kind: ConfigMap
metadata:
name: name_here
</code></pre>
<p>And I need to mount the ConfigMap inside a couple of pods, but PODNAME should match the current pod's name. Is this possible some other way? Thanks!</p>
| <p>I do not think it can be done with a ConfigMap alone. But you can set environment variables in your pod spec that reference pod fields (the Downward API).</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-ref-pod-name
spec:
containers:
- name: test-container
image: busybox
command: [ "sh", "-c"]
args:
- env | grep PODNAME
env:
- name: PODNAME
valueFrom:
fieldRef:
fieldPath: metadata.name
restartPolicy: Never
</code></pre>
<p>See official documentation: <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables</a></p>
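<p>If the application insists on reading the pod name from the mounted file itself, one workaround (a sketch only — the image, script path and mount paths are placeholders) is to combine the Downward API variable with a small substitution over the mounted template at container start:</p>

```yaml
spec:
  containers:
  - name: app
    image: my-app:latest   # placeholder
    command: ["sh", "-c"]
    args:
    - sed "s/metadata.podName/$PODNAME/" /config-template/config.cfg > /config/config.cfg && exec /app/start
    env:
    - name: PODNAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: config-template
      mountPath: /config-template
    - name: config
      mountPath: /config
  volumes:
  - name: config-template
    configMap:
      name: name_here      # the ConfigMap from the question
  - name: config
    emptyDir: {}
```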
|
<p>Is there a way that I can get release logs for a particular Kubernetes release within my cluster, given that the ReplicaSets related to that deployment are no longer serving pods?</p>
<p>For example, <code>kubectl rollout history deployment/pod1-dep</code> would show:</p>
<p>1</p>
<p>2 <- failed deploy</p>
<p>3 <- Latest deployment successful</p>
<p>If I want to pick the logs related to the events in revision <code>2</code>, is that possible, or is there a way to achieve such functionality?</p>
| <p><em>This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.</em></p>
<p>As David Maze rightly suggested in his comment above:</p>
<blockquote>
<p>Once a pod is deleted, its logs are gone with it. If you have some
sort of external log collector that will generally keep historical
logs for you, but you needed to have set that up before you attempted
the update.</p>
</blockquote>
<p>So the answer to your particular question is: <strong>no, you can't get such logs once those particular pods are deleted.</strong></p>
|
<p>I deployed a brand new k8s cluster using kubespray. Everything works fine, but all of the calico-related pods are not ready, and after many hours of debugging I couldn't find out why the calico pods are crashing.</p>
<p>One other important thing is that the <code>calicoctl node status</code> output is not stable; every time it is called it shows something different:</p>
<pre><code>Calico process is not running.
</code></pre>
<pre><code>Calico process is running.
None of the BGP backend processes (BIRD or GoBGP) are running.
</code></pre>
<pre><code>Calico process is running.
IPv4 BGP status
+----------------+-------------------+-------+----------+---------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+----------------+-------------------+-------+----------+---------+
| 192.168.231.42 | node-to-node mesh | start | 06:23:41 | Passive |
+----------------+-------------------+-------+----------+---------+
IPv6 BGP status
No IPv6 peers found.
</code></pre>
<p>Another log message that shows up often is the following:</p>
<pre><code>bird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory
bird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory
</code></pre>
<p>Also tried changing IP_AUTODETECTION_METHOD with each of the following but nothing changed:</p>
<pre><code>kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=can-reach=www.google.com
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=can-reach=8.8.8.8
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth1
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=eth.*
</code></pre>
<h2>Expected Behavior</h2>
<p>All pods, daemonsets, deployments and replicasets related to calico should be in the READY state.</p>
<h2>Current Behavior</h2>
<p>All pods, daemonsets, deployments and replicasets related to calico are in the NOT READY state.</p>
<h2>Possible Solution</h2>
<p>Nothing yet, I am asking for help on how to debug / overcome this issue.</p>
<h2>Steps to Reproduce (for bugs)</h2>
<p>It's the latest version of kubespray, with the following context & environment.</p>
<pre><code>git reflog
7e4b176 HEAD@{0}: clone: from https://github.com/kubernetes-sigs/kubespray.git
</code></pre>
<h2>Context</h2>
<p>I'm trying to deploy a k8s cluster which has one master and one worker node. Also note that the servers taking part in this cluster are located in an almost airgapped/offline environment with limited access to the global internet. Of course, the ansible process of deploying the cluster using kubespray was successful, but I'm facing this issue with the calico pods.</p>
<h2>Your Environment</h2>
<pre><code>cat inventory/mycluster/hosts.yaml
all:
hosts:
node1:
ansible_host: 192.168.231.41
ansible_port: 32244
ip: 192.168.231.41
access_ip: 192.168.231.41
node2:
ansible_host: 192.168.231.42
ansible_port: 32244
ip: 192.168.231.42
access_ip: 192.168.231.42
children:
kube_control_plane:
hosts:
node1:
kube_node:
hosts:
node1:
node2:
etcd:
hosts:
node1:
k8s_cluster:
children:
kube_control_plane:
kube_node:
calico_rr:
hosts: {}
</code></pre>
<pre><code>calicoctl version
Client Version: v3.19.2
Git commit: 6f3d4900
Cluster Version: v3.19.2
Cluster Type: kubespray,bgp,kubeadm,kdd,k8s
</code></pre>
<pre><code>cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
</code></pre>
<pre><code>uname -r
3.10.0-1160.42.2.el7.x86_64
</code></pre>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<pre><code> kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready control-plane,master 19h v1.21.4 192.168.231.41 <none> CentOS Linux 7 (Core) 3.10.0-1160.42.2.el7.x86_64 docker://20.10.8
node2 Ready <none> 19h v1.21.4 192.168.231.42 <none> CentOS Linux 7 (Core) 3.10.0-1160.42.2.el7.x86_64 docker://20.10.8
</code></pre>
<pre><code>kubectl get all --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/calico-kube-controllers-8575b76f66-57zw4 0/1 CrashLoopBackOff 327 19h 192.168.231.42 node2 <none> <none>
kube-system pod/calico-node-4hkzb 0/1 Running 245 14h 192.168.231.42 node2 <none> <none>
kube-system pod/calico-node-hznhc 0/1 Running 245 14h 192.168.231.41 node1 <none> <none>
kube-system pod/coredns-8474476ff8-b6lqz 1/1 Running 0 19h 10.233.96.1 node2 <none> <none>
kube-system pod/coredns-8474476ff8-gdkml 1/1 Running 0 19h 10.233.90.1 node1 <none> <none>
kube-system pod/dns-autoscaler-7df78bfcfb-xnn4r 1/1 Running 0 19h 10.233.90.2 node1 <none> <none>
kube-system pod/kube-apiserver-node1 1/1 Running 0 19h 192.168.231.41 node1 <none> <none>
kube-system pod/kube-controller-manager-node1 1/1 Running 0 19h 192.168.231.41 node1 <none> <none>
kube-system pod/kube-proxy-dmw22 1/1 Running 0 19h 192.168.231.41 node1 <none> <none>
kube-system pod/kube-proxy-wzpnv 1/1 Running 0 19h 192.168.231.42 node2 <none> <none>
kube-system pod/kube-scheduler-node1 1/1 Running 0 19h 192.168.231.41 node1 <none> <none>
kube-system pod/nginx-proxy-node2 1/1 Running 0 19h 192.168.231.42 node2 <none> <none>
kube-system pod/nodelocaldns-6h5q2 1/1 Running 0 19h 192.168.231.42 node2 <none> <none>
kube-system pod/nodelocaldns-7fwbd 1/1 Running 0 19h 192.168.231.41 node1 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 19h <none>
kube-system service/coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 19h k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/calico-node 2 2 0 2 0 kubernetes.io/os=linux 19h calico-node quay.io/calico/node:v3.19.2 k8s-app=calico-node
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 19h kube-proxy k8s.gcr.io/kube-proxy:v1.21.4 k8s-app=kube-proxy
kube-system daemonset.apps/nodelocaldns 2 2 2 2 2 kubernetes.io/os=linux 19h node-cache k8s.gcr.io/dns/k8s-dns-node-cache:1.17.1 k8s-app=nodelocaldns
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.apps/calico-kube-controllers 0/1 1 0 19h calico-kube-controllers quay.io/calico/kube-controllers:v3.19.2 k8s-app=calico-kube-controllers
kube-system deployment.apps/coredns 2/2 2 2 19h coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns
kube-system deployment.apps/dns-autoscaler 1/1 1 1 19h autoscaler k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3 k8s-app=dns-autoscaler
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
kube-system replicaset.apps/calico-kube-controllers-8575b76f66 1 1 0 19h calico-kube-controllers quay.io/calico/kube-controllers:v3.19.2 k8s-app=calico-kube-controllers,pod-template-hash=8575b76f66
kube-system replicaset.apps/coredns-8474476ff8 2 2 2 19h coredns k8s.gcr.io/coredns/coredns:v1.8.0 k8s-app=kube-dns,pod-template-hash=8474476ff8
kube-system replicaset.apps/dns-autoscaler-7df78bfcfb 1 1 1 19h autoscaler k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3 k8s-app=dns-autoscaler,pod-template-hash=7df78bfcfb
</code></pre>
| <p>Fortunately, increasing <code>timeoutSeconds</code> for both the <code>livenessProbe</code> & <code>readinessProbe</code> from <strong>1</strong> to <strong>60</strong> fixes the issue.</p>
<pre><code>kubectl edit -n kube-system daemonset.apps/calico-node
kubectl edit -n kube-system deployment.apps/calico-kube-controllers
</code></pre>
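<p>The same change can also be applied non-interactively with a strategic-merge patch; this is a sketch (the container name <code>calico-node</code> matches the kubespray-generated daemonset, adjust if yours differs, and the deployment can be patched the same way):</p>
<pre><code>kubectl -n kube-system patch daemonset calico-node --patch '
spec:
  template:
    spec:
      containers:
        - name: calico-node
          livenessProbe:
            timeoutSeconds: 60
          readinessProbe:
            timeoutSeconds: 60'
</code></pre>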
<p><a href="https://github.com/projectcalico/calico/issues/4935" rel="nofollow noreferrer">https://github.com/projectcalico/calico/issues/4935</a></p>
|
<p>I am following <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">setting up the Azure File share to the pod</a>.</p>
<ul>
<li>created the namespace</li>
<li>created the secrets as specified</li>
<li>pod configuration</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test-storage-pod
namespace: storage-test
spec:
containers:
- image: nginx:latest
name: test-storage-pod
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: azure
mountPath: /mnt/azure-filestore
volumes:
- name: azure
azureFile:
secretName: azure-storage-secret
shareName: appdata/data
readOnly: false
</code></pre>
<ul>
<li><code>kubectl describe -n storage-test pod/<pod-name></code> or <code>kubectl get -n storage-test event</code></li>
</ul>
<pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE
2m13s Normal Scheduled pod/test-storage-pod Successfully assigned storage-test/test-storage-pod to aks-default-1231523-vmss00001a
6s Warning FailedMount pod/test-storage-pod MountVolume.SetUp failed for volume "azure" : Couldn't get secret default/azure-storage-secret
11s Warning FailedMount pod/test-storage-pod Unable to attach or mount volumes: unmounted volumes=[azure], unattached volumes=[default-token-gzxk8 azure]: timed out waiting for the condition
</code></pre>
<p>Question:</p>
<ul>
<li>The secret is created under the namespace storage-test as well; is it that the kubelet first checks for the secret under the default namespace?</li>
</ul>
| <p>You are probably working in the default namespace; that's why the kubelet first checks the default namespace. Try switching to your created namespace with the command:</p>
<blockquote>
<p>kubens storage-test</p>
</blockquote>
<p>Then try running your pod under the storage-test namespace once again.</p>
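<p>If you don't have the <code>kubens</code> plugin installed, plain <code>kubectl</code> can do the same checks (namespace and names as in the question):</p>
<pre><code># verify the secret actually exists in the pod's namespace
kubectl -n storage-test get secret azure-storage-secret

# switch the current context's default namespace (kubectl-native alternative to kubens)
kubectl config set-context --current --namespace=storage-test
</code></pre>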
|
<p>I have started using KubernetesExecutor and I have set up a PV/PVC with an AWS EFS to store logs for my dags. I am also using s3 remote logging.</p>
<p>All the logging is working perfectly fine after a dag completes. However, I want to be able to see the logs of my jobs as they are running for long running ones.</p>
<p>When I exec into my scheduler pod, while an executor pod is running, I am able to see the <code>.log</code> file of the currently running job because of the shared EFS. However, when I <code>cat</code> the log file, I do not see the logs as long as the executor is still running. Once the executor finishes however, I can see the full logs both when I <code>cat</code> the file and in the airflow UI.</p>
<p>Weirdly, on the other hand, when I exec into the executor pod as it is running, and I <code>cat</code> the exact same log file in the shared EFS, I am able to see the correct logs up until that point in the job, and when I immediately <code>cat</code> from the scheduler or check the UI, I can also see the logs up until that point.</p>
<p>So it seems that when I <code>cat</code> from within the executor pod, it is causing the logs to be flushed in some way, so that it is available everywhere. Why are the logs not flushing regularly?</p>
<p>Here are the config variables I am setting, note these env variables get set in my webserver/scheduler and executor pods:</p>
<pre><code># ----------------------
# For Main Airflow Pod (Webserver & Scheduler)
# ----------------------
export PYTHONPATH=$HOME
export AIRFLOW_HOME=$HOME
export PYTHONUNBUFFERED=1
# Core configs
export AIRFLOW__CORE__LOAD_EXAMPLES=False
export AIRFLOW__CORE__SQL_ALCHEMY_CONN=${AIRFLOW__CORE__SQL_ALCHEMY_CONN:-postgresql://$DB_USER:$DB_PASSWORD@$DB_HOST:5432/$DB_NAME}
export AIRFLOW__CORE__FERNET_KEY=$FERNET_KEY
export AIRFLOW__CORE__DAGS_FOLDER=$AIRFLOW_HOME/git/dags/$PROVIDER-$ENV/
# Logging configs
export AIRFLOW__LOGGING__BASE_LOG_FOLDER=$AIRFLOW_HOME/logs/
export AIRFLOW__LOGGING__REMOTE_LOGGING=True
export AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID=aws_default
export AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER=s3://path-to-bucket/airflow_logs
export AIRFLOW__LOGGING__TASK_LOG_READER=s3.task
export AIRFLOW__LOGGING__LOGGING_CONFIG_CLASS=config.logging_config.LOGGING_CONFIG
# Webserver configs
export AIRFLOW__WEBSERVER__COOKIE_SAMESITE=None
</code></pre>
<p>My logging config looks like the one in the question <a href="https://stackoverflow.com/questions/55526759/airflow-1-10-2-not-writing-logs-to-s3">here</a></p>
<p>I thought this could be a python buffering issue so added <code>PYTHONUNBUFFERED=1</code>, but that didn't help. This is happening whether I use the <code>PythonOperator</code> or <code>BashOperator</code></p>
<p>Is it the case that K8sExecutors logs just won't be available during their runtime? Only after? Or is there some configuration I must be missing?</p>
| <p>I had the same issue, and these are the things that helped me; worth checking them on your end:</p>
<ul>
<li><code>PYTHONUNBUFFERED=1</code> is not enough on its own, but it is necessary to view logs in realtime. Please keep it</li>
<li>Have EFS mounted in the webserver, scheduler, and pod_template (executor) pods.</li>
<li>Your experience of the log file only being complete after the task has finished makes me wonder whether the PVC you use for logs has the ReadWriteMany access mode</li>
<li>Are the paths you cat in the different pods identical? Do they include the full task format, e.g. <code>efs/logs/dag_that_executes_via_KubernetesPodOperator/task1/2021-09-21T19\:00\:21.894859+00\:00/1.log</code>? Asking because, before I had EFS hooked up in every place (scheduler, web, pod_template), I could only access executor logs that do not include the task name and task time</li>
<li>Have the EFS logs folder owned by the airflow user (uid 50000 for me; you may have to prepare this from a different place), group root, mode 755</li>
<li>Do not have <code>AIRFLOW__LOGGING__LOGGING_CONFIG_CLASS</code> set up. Try to get things running as vanilla as possible before introducing a custom logging config</li>
</ul>
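<p>For reference, a sketch of a log PVC with the ReadWriteMany access mode mentioned above (the name and the storage class <code>efs-sc</code> are placeholders for whatever your EFS CSI driver setup uses):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-logs
spec:
  accessModes:
    - ReadWriteMany      # lets scheduler, webserver and executor pods mount it simultaneously
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
</code></pre>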
<p>If you have remote logging set up, I understand that after the task completes, the first line in the UI is going to say <code>Reading remote log from</code>, but what does the first line say for you while the task is running? Does it mention reading the remote log, or the usage of a local log file?</p>
<ul>
<li>If it mentions the remote log, this would mean that you don't have EFS hooked up in every place.</li>
<li>If it mentions a local log file, I would check your EFS settings (ReadWriteMany) and the directory ownership and mode</li>
</ul>
|
<p>Any ideas how can I replace variables via Kustomize? I simply want to use a different ACCOUNT_ID and IAM_ROLE_NAME for each overlay.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::${ACCOUNT_ID}:role/${IAM_ROLE_NAME}
</code></pre>
<p>Thanks in advance!</p>
| <p>Kustomize doesn't use "variables". The way you would typically handle this is by patching the annotation in an overlay. That is, you might start with a base directory that looks like:</p>
<pre><code>base
├── kustomization.yaml
└── serviceaccount.yaml
</code></pre>
<p>Where <code>serviceaccount.yaml</code> contains your <code>ServiceAccount</code> manifest:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: my-service-account
  annotations:
    eks.amazonaws.com/role-arn: "THIS VALUE DOESN'T MATTER"
</code></pre>
<p>And <code>kustomization.yaml</code> looks like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- serviceaccount.yaml
</code></pre>
<p>Then in your overlays, you would replace the <code>eks.amazonaws.com/role-arn</code> annotation by using a patch. For example, if you had an overlay called <code>production</code>, you might end up with this layout:</p>
<pre><code>.
├── base
│ ├── kustomization.yaml
│ └── serviceaccount.yaml
└── overlay
└── production
├── kustomization.yaml
└── patch_aws_creds.yaml
</code></pre>
<p>Where <code>overlay/production/patch_aws_creds.yaml</code> looks like:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: my-service-account
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::1234:role/production-role
</code></pre>
<p>And <code>overlay/production/kustomization.yaml</code> looks like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
  - path: patch_aws_creds.yaml
</code></pre>
<p>With this in place, running...</p>
<pre><code>kustomize build overlay/production
</code></pre>
<p>...would generate output using your production role information, and so forth for any other overlays you choose to create.</p>
<hr />
<p>If you don't like the format of the strategic merge patch, you can use a json patch document instead. Here's what it would look like inline in your <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- target:
version: v1
kind: ServiceAccount
name: my-service-account
patch: |-
- op: replace
path: /metadata/annotations/eks.amazonaws.com~1role-arn
value: arn:aws:iam::1234:role/production-role
</code></pre>
<p>I don't think this really gets you anything, though.</p>
|
<p>I have a YAML file defining multiple Kubernetes resources of various types (separated with <code>---</code> according to the YAML spec):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
# ...
spec:
# ...
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# ...
rules:
# ...
---
# etc
</code></pre>
<p>Now, I want to parse this list into a slice of <code>client.Object</code> instances, so I can apply some filtering and transforms, and eventually send them to the cluster using</p>
<pre class="lang-golang prettyprint-override"><code>myClient.Patch( # myClient is a client.Client instance
ctx,
object, # object needs to be a client.Object
client.Apply,
client.ForceOwnership,
client.FieldOwner("my.operator.acme.inc"),
)
</code></pre>
<p>However, I can't for the life of me figure out how to get from the YAML doc to <code>[]client.Object</code>. The following gets me <em>almost</em> there:</p>
<pre class="lang-golang prettyprint-override"><code>results := make([]client.Object, 0)
scheme := runtime.NewScheme()
clientgoscheme.AddToScheme(scheme)
apiextensionsv1beta1.AddToScheme(scheme)
apiextensionsv1.AddToScheme(scheme)
decode := serializer.NewCodecFactory(scheme).UniversalDeserializer().Decode
data, err := ioutil.ReadAll(reader)
if err != nil {
return nil, err
}
for _, doc := range strings.Split(string(data), "---") {
object, gvk, err := decode([]byte(doc), nil, nil)
if err != nil {
return nil, err
}
// object is now a runtime.Object, and gvk is a schema.GroupVersionKind
// taken together, they have all the information I need to expose a
// client.Object (I think) but I have no idea how to actually construct a
// type that implements that interface
result = append(result, ?????)
}
return result, nil
</code></pre>
<p>I am totally open to other parser implementations, of course, but I haven't found anything that gets me any further. But this seems like it <em>must</em> be a solved problem in the Kubernetes world... so how do I do it?</p>
| <p>I was finally able to make it work! Here's how:</p>
<pre class="lang-golang prettyprint-override"><code>import (
"k8s.io/client-go/kubernetes/scheme"
"sigs.k8s.io/controller-runtime/pkg/client"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)
func deserialize(data []byte) (*client.Object, error) {
apiextensionsv1.AddToScheme(scheme.Scheme)
apiextensionsv1beta1.AddToScheme(scheme.Scheme)
decoder := scheme.Codecs.UniversalDeserializer()
runtimeObject, groupVersionKind, err := decoder.Decode(data, nil, nil)
if err != nil {
return nil, err
}
return runtime
}
</code></pre>
<p>A couple of things that seem key (but I'm not sure my understanding is 100% correct here):</p>
<ul>
<li>while the declared return type of <code>decoder.Decode</code> is <code>(runtime.Object, *scheme.GroupVersionKind, error)</code>, the returned first item of that tuple is actually a <code>client.Object</code> and can be cast as such without problems.</li>
<li>By using <code>scheme.Scheme</code> as the baseline before adding the <code>apiextensions.k8s.io</code> groups, I get all the "standard" resources registered for free.</li>
<li>If I use <code>scheme.Codecs.UniversalDecoder()</code>, I get errors about <code> no kind "CustomResourceDefinition" is registered for the internal version of group "apiextensions.k8s.io" in scheme "pkg/runtime/scheme.go:100"</code>, and the returned <code>groupVersionKind</code> instance shows <code>__internal</code> for version. No idea why this happens, or why it <em>doesn't</em> happen when I use the <code>UniversalDeserializer()</code> instead.</li>
</ul>
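<p>One caveat if you keep the <code>strings.Split(string(data), "---")</code> approach from the question: it also splits on <code>---</code> sequences that appear inside string values. A self-contained sketch of a splitter that only honors document separators on their own line (the helper name is mine, not part of any library):</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
    "fmt"
    "strings"
)

// splitYAMLDocs splits a multi-document YAML stream on "---" separators,
// but only when the separator is a whole line, so a "---" embedded in a
// string value does not break a document in half.
func splitYAMLDocs(data string) []string {
    var docs []string
    var current []string
    flush := func() {
        if doc := strings.TrimSpace(strings.Join(current, "\n")); doc != "" {
            docs = append(docs, doc)
        }
        current = current[:0]
    }
    for _, line := range strings.Split(data, "\n") {
        if strings.TrimRight(line, " \t") == "---" {
            flush()
            continue
        }
        current = append(current, line)
    }
    flush()
    return docs
}

func main() {
    manifest := "kind: ClusterRole\n---\nkind: Deployment\nnote: \"contains --- inline\"\n"
    for i, doc := range splitYAMLDocs(manifest) {
        fmt.Printf("document %d starts with: %s\n", i, strings.SplitN(doc, "\n", 2)[0])
    }
}
</code></pre>
<p>Each returned chunk can then be fed to the deserializer above one document at a time.</p>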
|
<p>I tested those queries. The first query was half the value of the second query:</p>
<p><code>sum(container_memory_working_set_bytes{image!="",name=~"^k8s_.*",pod=~"$pod"}) by (pod)</code></p>
<p>and</p>
<p><code>sum (container_memory_working_set_bytes{pod=~"$pod"}) by (pod)</code></p>
<p>Why is writing <code>image! = "", name = ~ "^ k8s_. *"</code> Halving the value?</p>
| <p>That's because <code>cAdvisor</code> takes these values from <code>cgroups</code>. The structure of cgroups looks like a tree, where there are branches for each pod, and every pod has child cgroups for each container in it. This is how it looks (<code>systemd-cgls</code>):</p>
<pre><code>├─kubepods
│ ├─podb0c98680-4c6d-4788-95ef-0ea8b43121d4
│ │ ├─799e2d3f0afe0e43d8657a245fe1e97edfdcdd00a10f8a57277d310a7ecf4364
│ │ │ └─5479 /bin/node_exporter --path.rootfs=/host --web.listen-address=0.0.0.0:9100
│ │ └─09ce1040f746fb497d5398ad0b2fabed1e4b55cde7fb30202373e26537ac750a
│ │ └─5054 /pause
</code></pre>
<p>The resource value for each cgroup is <em><strong>cumulative over all of its children</strong></em>. That's how you got the memory utilization doubled: you summed the total pod consumption together with each container in it.</p>
<p>If you execute those queries in Prometheus, you would notice the duplicated values:</p>
<pre><code>{pod="cluster-autoscaler-58b9c77456-krl5m"} 59076608
{container="POD",pod="cluster-autoscaler-58b9c77456-krl5m"} 708608
{container="cluster-autoscaler",pod="cluster-autoscaler-58b9c77456-krl5m"} 58368000
</code></pre>
<p>The first one is the parent cgroup. As you see, it has no <code>container</code> label. The two others in this example are <a href="https://stackoverflow.com/questions/48651269/what-are-the-pause-containers">the pause container</a> and the actual application. Combining their values you will get the value of the parent cgroup:</p>
<pre class="lang-py prettyprint-override"><code>>>> 708608 + 58368000 == 59076608
True
</code></pre>
<p>There are multiple ways to fix the problem. For example, you can exclude the metrics without a container name by adding the <code>container!=""</code> label filter.</p>
<p>Another (more difficult) way to solve this is to drop the cumulative metrics in <code>metric_relabel_configs</code> (prometheus.yml). I.e. you can write a relabeling rule that will drop metrics without a container name. <strong>Be careful with this one</strong>, you may accidentally drop all non-cadvisor metrics.</p>
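<p>For illustration, a sketch of such a rule; it only targets cAdvisor <code>container_*</code> series so that other metrics without a <code>container</code> label survive:</p>
<pre><code>metric_relabel_configs:
  - source_labels: [__name__, container]
    separator: ;
    regex: 'container_.*;'   # a container_* metric whose container label is empty
    action: drop
</code></pre>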
|
<p>I'm just adding the containers part of the spec. Everything is otherwise set up and working fine and values are hardcoded here. This is a simple Postgres pod that is part of a single replica deployment with its own PVC to persist state. But the problem is having nothing to do with my pod/deployment setup.</p>
<pre><code>containers:
- name: postgres-container
image: postgres
imagePullPolicy: Always
volumeMounts:
- name: postgres-internal-volume
mountPath: /var/lib/postgresql/data
subPath: postgres
envFrom:
- configMapRef:
name: postgres-internal-cnf
ports:
- containerPort: 5432
command: ['psql']
args: [-U postgres -tc "SELECT 1 FROM pg_database WHERE datname = 'dominion'" | grep -q 1 || psql -h localhost -p 5432 -U postgres -c "CREATE DATABASE dominion"]
</code></pre>
<p>This command will create a database if it does not already exist. If I create the deployment and exec into the pod and run this command everything works fine. If I however run it here the pod fails to spin up and I get this error:</p>
<pre><code>psql: error: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
</code></pre>
<p>I was under the impression that this error comes from the default connection values being incorrect, but here I am hardcoding the localhost and the port number.</p>
| <p>With your pod spec, you've replaced the default command -- which starts up the postgres server -- with your own command, so the server never starts. The proper way to perform initialization tasks with the official Postgres image is <a href="https://github.com/docker-library/docs/blob/master/postgres/README.md#initialization-scripts" rel="nofollow noreferrer">in the documentation</a>.</p>
<p>You want to move your initialization commands into a ConfigMap, and then mount the scripts into <code>/docker-entrypoint-initdb.d</code> as described in those docs.</p>
<p>The docs have more details, but here's a short example. We want to run
<code>CREATE DATABASE dominion</code> when the postgres server starts (and only
if it is starting with an empty data directory). We can define a
simple SQL script in a <code>ConfigMap</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-init-scripts
data:
create-dominion-db.sql: |
CREATE DATABASE dominion
</code></pre>
<p>And then mount that script into the appropriate location in the pod
spec:</p>
<pre><code>volumes:
- name: postgres-init-scripts
configMap:
name: postgres-init-scripts
containers:
- name: postgres-container
image: postgres
imagePullPolicy: Always
volumeMounts:
- name: postgres-internal-volume
mountPath: /var/lib/postgresql/data
subPath: postgres
      - name: postgres-init-scripts
        mountPath: /docker-entrypoint-initdb.d/create-dominion-db.sql
        subPath: create-dominion-db.sql
envFrom:
- configMapRef:
name: postgres-internal-cnf
ports:
- containerPort: 5432
</code></pre>
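<p>You can then verify that the init script ran (the pod name below is a placeholder). Keep in mind that scripts in <code>/docker-entrypoint-initdb.d</code> only execute when the data directory is empty, so clear the PVC's <code>postgres</code> subPath before re-testing:</p>
<pre><code>kubectl exec -it POSTGRES_POD_NAME -- psql -U postgres -lqt | grep dominion
</code></pre>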
|
<p><strong>ENVIRONMENT:</strong></p>
<pre><code>Kubernetes version: v1.16.3
OS: CentOS 7
Kernel: Linux k8s02-master01 3.10.0-1062.4.3.el7.x86_64 #1 SMP Wed Nov 13 23:58:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p><strong>WHAT HAPPENED:</strong></p>
<p>I have a Wordpress Deployment running a container built from a custom Apache/Wordpress image. The image exposes port 8080 instead of 80 <em>(Dockerfile below)</em>. The Pod is exposed to the world through Traefik reverse proxy. Everything works fine without any liveness or readiness checks. Pod gets ready and Wordpress is accessible from <a href="https://www.example.com/" rel="nofollow noreferrer">https://www.example.com/</a>.</p>
<p>I tried adding liveness and readiness probes and they both repeatedly fail with "connection refused". When I remove both probes and reapply the Deployment, it works again. It works until the probe hits the failure threshold, at which point the container goes into an endless restart loop and becomes inaccessible.</p>
<p><strong>POD EVENTS:</strong></p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned development/blog-wordpress-5dbcd9c7c7-kdgpc to gg-k8s02-worker02
Normal Killing 16m (x2 over 17m) kubelet, gg-k8s02-worker02 Container blog-wordpress failed liveness probe, will be restarted
Normal Created 16m (x3 over 18m) kubelet, gg-k8s02-worker02 Created container blog-wordpress
Normal Started 16m (x3 over 18m) kubelet, gg-k8s02-worker02 Started container blog-wordpress
Normal Pulled 13m (x5 over 18m) kubelet, gg-k8s02-worker02 Container image "wordpress-test:test12" already present on machine
Warning Unhealthy 8m17s (x35 over 18m) kubelet, gg-k8s02-worker02 Liveness probe failed: Get http://10.244.3.83/: dial tcp 10.244.3.83:80: connect: connection refused
Warning BackOff 3m27s (x27 over 11m) kubelet, gg-k8s02-worker02 Back-off restarting failed container
</code></pre>
<p><strong>POD LOGS:</strong></p>
<pre><code>WordPress not found in /var/www/html - copying now...
WARNING: /var/www/html is not empty! (copying anyhow)
Complete! WordPress has been successfully copied to /var/www/html
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.83. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.244.3.83. Set the 'ServerName' directive globally to suppress this message
[Wed Dec 11 06:39:07.502247 2019] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.3.11 configured -- resuming normal operations
[Wed Dec 11 06:39:07.502323 2019] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
10.244.3.1 - - [11/Dec/2019:06:39:18 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
10.244.3.1 - - [11/Dec/2019:06:39:33 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
10.244.3.1 - - [11/Dec/2019:06:39:48 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
10.244.3.1 - - [11/Dec/2019:06:40:03 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
10.244.3.1 - - [11/Dec/2019:06:40:18 +0000] "GET /index.php HTTP/1.1" 301 264 "-" "kube-probe/1.16"
</code></pre>
<p><strong>DOCKERFILE ("wordpress-test:test12"):</strong></p>
<pre><code>FROM wordpress:5.2.4-apache
RUN sed -i 's/Listen 80/Listen 8080/g' /etc/apache2/ports.conf;
RUN sed -i 's/:80/:8080/g' /etc/apache2/sites-enabled/000-default.conf;
# RUN sed -i 's/#ServerName www.example.com/ServerName localhost/g' /etc/apache2/sites-enabled/000-default.conf;
EXPOSE 8080
CMD ["apache2-foreground"]
</code></pre>
<p><strong>DEPLOYMENT:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blog-wordpress
namespace: development
labels:
app: blog
spec:
selector:
matchLabels:
app: blog
tier: wordpress
replicas: 4
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 2
template:
metadata:
labels:
app: blog
tier: wordpress
spec:
volumes:
- name: blog-wordpress
persistentVolumeClaim:
claimName: blog-wordpress
containers:
- name: blog-wordpress
# image: wordpress:5.2.4-apache
image: wordpress-test:test12
securityContext:
runAsUser: 65534
allowPrivilegeEscalation: false
capabilities:
add:
- "NET_ADMIN"
- "NET_BIND_SERVICE"
- "SYS_TIME"
resources:
requests:
cpu: "250m"
memory: "64Mi"
limits:
cpu: "500m"
memory: "128Mi"
ports:
- name: liveness-port
containerPort: 8080
readinessProbe:
initialDelaySeconds: 15
httpGet:
path: /index.php
port: 8080
timeoutSeconds: 15
periodSeconds: 15
failureThreshold: 5
livenessProbe:
initialDelaySeconds: 10
httpGet:
path: /index.php
port: 8080
timeoutSeconds: 10
periodSeconds: 15
failureThreshold: 5
env:
# Database
- name: WORDPRESS_DB_HOST
value: blog-mysql
- name: WORDPRESS_DB_NAME
value: wordpress
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: blog-mysql
key: username
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: blog-mysql
key: password
- name: WORDPRESS_TABLE_PREFIX
value: wp_
- name: WORDPRESS_AUTH_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: auth-key
- name: WORDPRESS_SECURE_AUTH_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: secure-auth-key
- name: WORDPRESS_LOGGED_IN_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: logged-in-key
- name: WORDPRESS_NONCE_KEY
valueFrom:
secretKeyRef:
name: blog-wordpress
key: nonce-key
- name: WORDPRESS_AUTH_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: auth-salt
- name: WORDPRESS_SECURE_AUTH_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: secure-auth-salt
- name: WORDPRESS_LOGGED_IN_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: logged-in-salt
- name: WORDPRESS_NONCE_SALT
valueFrom:
secretKeyRef:
name: blog-wordpress
key: nonce-salt
- name: WORDPRESS_CONFIG_EXTRA
value: |
define('WPLANG', 'fr_FR');
define('WP_CACHE', false);
define('WP_MEMORY_LIMIT', '64M');
volumeMounts:
- name: blog-wordpress
mountPath: "/var/www/html/wp-content"
</code></pre>
<p><strong>DEPLOYMENT SERVICE:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: blog-wordpress
namespace: development
labels:
app: blog
spec:
ports:
- protocol: TCP
port: 80
targetPort: 8080
selector:
app: blog
tier: wordpress
type: ClusterIP
</code></pre>
<p><strong>TRAEFIK INGRESSROUTE:</strong></p>
<pre><code>##
# HTTP
##
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: blog
namespace: development
spec:
entryPoints:
- http
routes:
- match: Host(`example.com`)
kind: Rule
services:
- name: blog-wordpress
port: 80
middlewares:
- name: redirect-to-https
namespace: kube-system
---
##
# HTTPS
##
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: blog-https
namespace: development
spec:
entryPoints:
- https
routes:
- match: Host(`example.com`) && PathPrefix(`/`)
kind: Rule
services:
- name: blog-wordpress
port: 80
tls:
certResolver: letsencrypt
</code></pre>
<p>Thank you!</p>
| <p>Just to give another way: WordPress will try to redirect because the probe requests are missing the <code>X-Forwarded</code> HTTP headers they would carry if they reached WordPress through the proxy.</p>
<p>Something like this works without the need for custom PHP:</p>
<pre><code> livenessProbe:
initialDelaySeconds: 10
httpGet:
path: /
port: 8080
httpHeaders:
- name: X-Forwarded-Proto
value: https
- name: X-Forwarded-Host
value: www.your-wordpress-domain-here.com
- name: Host
value: www.your-wordpress-domain-here.com
timeoutSeconds: 10
periodSeconds: 15
failureThreshold: 5
</code></pre>
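<p>You can reproduce what the probe sees with <code>curl</code> from another pod in the cluster (the pod IP is taken from the question's logs; the hostname is a placeholder):</p>
<pre><code># without the forwarding headers WordPress answers with a 301 redirect,
# as seen in the question's access log
curl -s -o /dev/null -w '%{http_code}\n' http://10.244.3.83:8080/

# with the headers the request should pass straight through
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'X-Forwarded-Proto: https' -H 'Host: www.your-wordpress-domain-here.com' \
  http://10.244.3.83:8080/
</code></pre>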
|
<p>I have been trying to watch some resources in my K8s cluster and after reading some blogs about watch vs informers, i've decided to go with Informers.</p>
<p>I came across this example of how to use one: <a href="https://github.com/Netflix-Skunkworks/kubernetes-client-java/blob/master/examples/src/main/java/io/kubernetes/client/examples/InformerExample.java" rel="nofollow noreferrer">https://github.com/Netflix-Skunkworks/kubernetes-client-java/blob/master/examples/src/main/java/io/kubernetes/client/examples/InformerExample.java</a></p>
<p>In the example, I see that the SharedIndexInformer is defined as such:</p>
<pre><code> factory.sharedIndexInformerFor(
(CallGeneratorParams params) -> {
return coreV1Api.listNodeCall(
null,
null,
null,
null,
null,
params.resourceVersion,
params.timeoutSeconds,
params.watch,
null,
null);
},
V1Node.class,
V1NodeList.class);
</code></pre>
<p>Based on my understanding of how lambdas are written, this basically says that we're creating a <code>sharedIndexInformer</code> from the factory by passing it a param Call (returned by coreV1Api.listNodeCall).</p>
<p>The Call object is created by this dynamic method which takes in a <code>CallGeneratorParams</code> argument.</p>
<p>I do not seem to understand how and where this argument is passed in from in the case of a SharedInformerFactory. It's very evident that some fields within the <code>params</code> variable is being used in building the <code>listNodeCall</code> but where and how is this object constructed ?</p>
| <p>Well it's a ride down a rabbit hole.</p>
<blockquote>
<p>I suggest to keep the diagrams <a href="https://github.com/huweihuang/kubernetes-notes/blob/master/code-analysis/kube-controller-manager/sharedIndexInformer.md" rel="nofollow noreferrer">from the official docs</a> open in separate tab/window in order to appreciate the whole picture better.</p>
</blockquote>
<p>In order to understand this, you would have to look at the implementation of the <code>SharedInformerFactory</code>, especially the <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/SharedInformerFactory.java#L115" rel="nofollow noreferrer">sharedIndexInformerFor</a> call.</p>
<p>Notice how the lambda is just passed further down to construct a new <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/ListerWatcher.java" rel="nofollow noreferrer">ListWatcher</a> instance <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/SharedInformerFactory.java#L194" rel="nofollow noreferrer">(method at line 194)</a>, which is then passed into a new <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/impl/DefaultSharedIndexInformer.java" rel="nofollow noreferrer">DefaultSharedIndexInformer</a> instance <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/SharedInformerFactory.java#L144" rel="nofollow noreferrer">(statement at line 144)</a>.</p>
<p>So now we have an instance of a <code>SharedIndexInformer</code> that passes the <code>ListerWatcher</code> yet further down to its <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/cache/Controller.java" rel="nofollow noreferrer">Controller</a> <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/impl/DefaultSharedIndexInformer.java#L99" rel="nofollow noreferrer">(constructor line 99)</a>. Now the <code>Controller</code> is started when the <code>Informer</code> itself runs (see the <code>run()</code> method).</p>
<p>To make it even more complex, the <code>Controller</code> delegates the actual list/watch work to a <code>Reflector</code>. A <code>Reflector</code>, according to <a href="https://github.com/kubernetes/kubernetes/blob/353f0a5eabe4bd8d31bb67275ee4beeb4655be3f/staging/src/k8s.io/client-go/tools/cache/reflector.go" rel="nofollow noreferrer">reflector.go</a></p>
<blockquote>
<p>Reflector watches a specified resource and causes all changes to be reflected in the given store.</p>
</blockquote>
<p>So its job is to call <code>list</code> and <code>watch</code> until it is told to stop. So when the <code>Controller</code> starts, it also schedules its <code>Reflector</code> to <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/cache/Controller.java#L116" rel="nofollow noreferrer">run periodically</a></p>
<p>At last. When the <code>Reflector</code> runs, it calls the <code>list</code> method, which .. <em>drum roll</em> .. <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/SharedInformerFactory.java#L205" rel="nofollow noreferrer">executes the lambda</a> you were asking about. And the <code>param</code> variable in the lambda is .. <em>another drum roll</em> .. <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/cache/ReflectorRunnable.java#L89" rel="nofollow noreferrer">created in the Reflector here</a></p>
<p>Pretty neat, wouldn't you say?</p>
<p>Let me know, if you need further help/clarification.</p>
<p>Cheers</p>
|
<p>I need to reproduce a case where one service in k8s, due to poor resource management, acts like a slow-socket attack and thus causes wrong and unexpected behavior in another service that does not handle slow socket responses/requests well. Currently it is impossible to put the slow-socket service into that state on demand. To debug and verify, I need to put something between the services of interest that works as a controlled slow-socket proxy (lowering traffic speed, adding delays, introducing timeouts and so on). Is there an out-of-the-box solution that can be deployed to k8s to work as a slow socket proxy?</p>
| <p>Kubernetes does not provide built-in functionality to test what you describe.</p>
<p>Instead, the failure you mentioned can be tested with 3rd party tools. One tool you can use is Litmus Chaos, an application designed around chaos engineering principles.</p>
<p>Inside Litmus Chaos, different "chaos" scenarios are defined as experiments. There are a number of built-in experiments you can use. One experiment is called <code>Pod Network Latency</code>, as documented <a href="https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-network-latency/" rel="nofollow noreferrer">here</a>.</p>
<p>As you have not provided the pod spec, I cannot help you to construct the experiment YAML. You can refer to the doc and build your "test case" after installation of Litmus Chaos.</p>
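<p>For illustration, a <code>ChaosEngine</code> manifest for the <code>Pod Network Latency</code> experiment could look roughly like the sketch below. The namespace, app label and service account are placeholders you would replace with your own values:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: slow-network-chaos
  namespace: default                # placeholder: namespace of the target app
spec:
  appinfo:
    appns: default                  # placeholder: namespace of the slow service
    applabel: "app=my-service"      # placeholder: label selector of the target pods
    appkind: deployment
  engineState: active
  chaosServiceAccount: pod-network-latency-sa
  experiments:
    - name: pod-network-latency
      spec:
        components:
          env:
            - name: NETWORK_LATENCY         # latency to inject, in milliseconds
              value: "2000"
            - name: TOTAL_CHAOS_DURATION    # how long to keep the latency, in seconds
              value: "120"
</code></pre>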
|
<p>We are accessing a shared database from multiple pods in Kubernetes. All the pods may write/update and read from the db. How do we handle data integrity in this case? Are JPA isolation levels enough for handling this, or do we need to go with pessimistic locking? Can you also help me with the difference between both?</p>
| <p>Your question has nothing to do with Kubernetes. This is simply the concurrent database access you'll get whenever more than one connection accesses your database.</p>
<p>If you want to avoid problems like <a href="https://www.geeksforgeeks.org/concurrency-problems-in-dbms-transactions/#:%7E:text=In%20the%20lost%20update%20problem,by%20transaction%202%20on%20X." rel="nofollow noreferrer">lost update</a> then you need locking.</p>
<p>There are two types of locking.</p>
<ul>
<li>Optimistic</li>
<li>Pessimistic</li>
</ul>
<p>Optimistic is handled by Hibernate with a version field. Pessimistic is the locking mechanism of the database.</p>
<p>You should also read Hibernate's documentation about locking:
<a href="https://docs.jboss.org/hibernate/orm/5.5/userguide/html_single/Hibernate_User_Guide.html#locking" rel="nofollow noreferrer">https://docs.jboss.org/hibernate/orm/5.5/userguide/html_single/Hibernate_User_Guide.html#locking</a></p>
|
<p>I'm trying to configure a single ALB across multiple namespaces in aws EKS, each namespace has its own ingress resource.</p>
<p>I'm trying to configure the ingress controller <code>aws-loadbalancer-controller</code> on a k8s v1.20.</p>
<p>The problem i'm facing is that each time I try to deploy a new service it always spin-up a new classic loadbalancer in addition to the shared ALB specified in the ingress config.</p>
<p><a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/</a></p>
<p><a href="https://i.stack.imgur.com/eylHl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eylHl.png" alt="archi" /></a></p>
<pre class="lang-yaml prettyprint-override"><code># service-realm1-dev.yaml:
apiVersion: v1
kind: Service
metadata:
name: sentinel
annotations:
external-dns.alpha.kubernetes.io/hostname: realm1.dev.sentinel.mysite.io
namespace: realm1-dev
labels:
run: sentinel
spec:
ports:
- port: 5001
name: ps1
protocol: TCP
selector:
app: sentinel
type: LoadBalancer
</code></pre>
<pre class="lang-yaml prettyprint-override"><code># ingress realm1-app
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: sentinel-ingress
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
alb.ingress.kubernetes.io/success-codes: 200-300
alb.ingress.kubernetes.io/healthy-threshold-count: "2"
alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}]'
name: sentinel-ingress-controller
namespace: realm1-dev
spec:
rules:
- host: realm1.dev.sentinel.mysite.io
http:
paths:
- path: /
pathType: Prefix
backend:
servicePort: use-annotation
serviceName: sentinel
</code></pre>
<p>Also, I'm using external-dns to create a Route 53 record set, and then I use the same configured DNS to route requests to the specific EKS service. Is there any issue with this approach?</p>
| <p>I was able to make it work using only one ALB.
@YYashwanth, using Nginx was my fallback plan; I'm trying to keep the configuration as simple as possible. Maybe in the future, when we try to deploy our solution on other cloud providers, we will use the Nginx ingress controller.</p>
<p>1- To start, the service type should be NodePort; using LoadBalancer will create a classic LB.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: sentinel-srv
annotations:
external-dns.alpha.kubernetes.io/hostname: operatorv2.dev.sentinel.mysite.io
namespace: operatorv2-dev
labels:
run: jsflow-sentinel
spec:
ports:
- port: 80
targetPort: 80
name: ps1
protocol: TCP
selector:
app: sentinel-app
type: NodePort
</code></pre>
<p>2- Second, we need to configure <code>group.name</code> via the annotation <code>alb.ingress.kubernetes.io/group.name</code> so the ingress controller merges all ingress configurations.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
alb.ingress.kubernetes.io/healthcheck-path: /
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: "5"
alb.ingress.kubernetes.io/healthy-threshold-count: "2"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80} ]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/success-codes: "200"
alb.ingress.kubernetes.io/tags: createdBy=aws-controller
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/unhealthy-threshold-count: "2"
external-dns.alpha.kubernetes.io/hostname: operatorv2.sentinel.mysite.io
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: sentinel-group
name: dev-operatorv2-sentinel-ingress-controller
namespace: operatorv2-dev
spec:
rules:
- host: operatorv2.dev.sentinel.mysite.io
http:
paths:
- path: /*
backend:
servicePort: 80
serviceName: sentinel-srv
</code></pre>
<p><a href="https://i.stack.imgur.com/XEfLF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEfLF.png" alt="enter image description here" /></a></p>
|
<p>I am using a ReplicationController to create a pod running redis container.</p>
<p>The redis container is monitored by Redis Sentinel. And there is a problem: if redis crashes and restarts too fast, it may cause trouble for Redis Sentinel when the voting is in progress.</p>
<pre><code>{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"name": "redis",
"labels": { "name" : "redis" }
},
"spec": {
"replicas": 1,
"selector": {
"name":"redis"
},
"template": {
"metadata": {
"labels": {
"name":"redis"
}
},
"spec": {
"volumes": [
//...
],
"containers": [
//...
],
"restartPolicy": "Always"
}
}
}
}
</code></pre>
<p>Would it be possible to delay the restart? I.e., restart the container 60 seconds after the last crash.</p>
| <p>Better alternatives to <code>sleep</code>:</p>
<ol>
<li><code>terminationGracePeriodSeconds</code>:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: test
spec:
replicas: 1
template:
spec:
containers:
- name: test
image: ...
terminationGracePeriodSeconds: 60
</code></pre>
<ol start="2">
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">preStop handler</a>, read more about <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">lifecycle hooks</a></li>
</ol>
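<p>As a minimal sketch of the <code>preStop</code> approach (assuming the container image ships a shell), a hook that delays termination, and therefore the subsequent restart, by 60 seconds could look like:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
  - name: redis
    image: redis
    lifecycle:
      preStop:
        exec:
          # delay shutdown; terminationGracePeriodSeconds must be at least this long
          command: ["/bin/sh", "-c", "sleep 60"]
</code></pre>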
<p>Read also: <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">Kubernetes best practices: terminating with grace</a></p>
|
<p>I want to know a brief explanation or an example of how to migrate a <code>Kubernetes</code> application to <code>GCP</code> from <code>AWS</code>.</p>
<p>What services are implicated like <code>EKS</code> or <code>EC2</code> and <code>GKE</code> or <code>Compute Engine</code>.</p>
<p>I'm very new to migration, I don't know too much about <code>AWS</code> and I recently started using <code>GCP</code>.</p>
<p>Thanks in advance.</p>
| <p>It depends.</p>
<h1>At first, <code>AWS</code> -> <code>GCP</code> resources mapping:</h1>
<p>First, you'll want to know the mapping between <code>AWS</code> and <code>GCP</code> resources.
There are several articles:</p>
<ul>
<li><a href="https://osamaoracle.com/2020/06/19/services-mapping-aws-azure-gcp-oc-ibm-and-alibab-cloud/" rel="nofollow noreferrer">Cloud Services Mapping For AWS, Azure, GCP ,OCI, IBM and Alibaba provider – Technology Geek</a></li>
<li><a href="https://www.lucidchart.com/blog/cloud-terminology-glossary" rel="nofollow noreferrer">Cloud Terminology Glossary for AWS, Azure, and GCP | Lucidchart</a>:</li>
<li><a href="https://www.cloudhealthtech.com/blog/cloud-comparison-guide-glossary-aws-azure-gcp" rel="nofollow noreferrer">Cloud Services Terminology Guide: Comparing AWS vs Azure vs Google | CloudHealth by VMware</a></li>
</ul>
<h1>Migrate <code>AWS EKS</code> to <code>GCP GKE</code>: the hard way</h1>
<p>If your cluster is deployed with <strong>managed kubernetes</strong> service:</p>
<ul>
<li>from <code>Elastic Kubernetes Service</code> (<code>EKS</code>)</li>
<li>to <code>Google Kubernetes Engine</code> (<code>GKE</code>)</li>
</ul>
<p>Then it would be hard to migrate, simply due to the complexity of the <code>kubernetes</code> architecture and the differences in cluster management approaches between <code>AWS</code> and <code>GCP</code>.</p>
<h1>Migrate VMs and cluster deployed using your own <code>k8s</code> manifest.</h1>
<p>If your <code>kubernetes</code> cluster is deployed on cloud virtual machines with <code>k8s</code> or <code>helm</code> manifests, then it would be easier.</p>
<p>And there are two ways:</p>
<ul>
<li>Either migrate <code>VM</code>s using <code>GCP</code> <code>Migrate Connector</code> (as @vicente-ayala said in his answer)</li>
<li>Or import your infrastructure to the <code>terraform</code> manifest, change resources definitions step-by-step, and then apply this updated manifest to <code>GCP</code></li>
</ul>
<h2>Migrating with <code>Migrate Connector</code></h2>
<p>You can find the latest manual on migrating VMs here:</p>
<h3>Prerequisites</h3>
<p>As per <a href="https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrating-vms" rel="nofollow noreferrer"><code>GCP</code> manual</a>,</p>
<blockquote>
<p>Before you can migrate a source VM to Google Cloud, you must configure the migration environment on your on-premises data center and on Google Cloud. See:</p>
<ul>
<li><p><a href="https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/enable-services" rel="nofollow noreferrer">Enabling Migrate for Compute Engine services</a></p>
</li>
<li><p><a href="https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrate-connector" rel="nofollow noreferrer">Installing the Migrate Connector</a></p>
</li>
</ul>
</blockquote>
<h2>Migrating</h2>
<p><a href="https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to" rel="nofollow noreferrer">How-to Guides | Migrate for Compute Engine | Google Cloud</a></p>
<ul>
<li><a href="https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrating-vms" rel="nofollow noreferrer">Migrating individual VMs</a></li>
<li><a href="https://cloud.google.com/migrate/compute-engine/docs/5.0/how-to/migrating-vm-groups" rel="nofollow noreferrer">Migrating VM groups</a></li>
</ul>
<h1>Migrating using <code>Terraform</code> and <code>Terraformer</code></h1>
<p>There is a great tool for reverse Terraform <a href="https://github.com/GoogleCloudPlatform/terraformer" rel="nofollow noreferrer">GoogleCloudPlatform/terraformer. Infrastructure to Code</a></p>
<p>A CLI tool that generates <code>tf</code>/<code>json</code> and <code>tfstate</code> files based on existing infrastructure (reverse Terraform).</p>
<p>And you can import your infrastructure into <code>terraform</code> manifest:</p>
<pre><code> terraformer import aws --resources=vpc,subnet --connect=true --regions=eu-west-1 --profile=prod
</code></pre>
<p>You'll get the <code>terraform</code> manifest declared with <a href="https://registry.terraform.io/providers/hashicorp/aws/latest" rel="nofollow noreferrer"><code>aws provider</code></a></p>
<p>And you may try to replace every <code>AWS</code> resource with the appropriate <code>GCP</code> resource. There is an official <code>terraform GCP</code> provider: <a href="https://registry.terraform.io/providers/hashicorp/google/latest" rel="nofollow noreferrer">hashicorp/google</a>. Unfortunately, there is no mapping between the <code>terraform</code> resources of the two cloud providers. But, again, you may use one of these mapping lists:</p>
<ul>
<li><a href="https://osamaoracle.com/2020/06/19/services-mapping-aws-azure-gcp-oc-ibm-and-alibab-cloud/" rel="nofollow noreferrer">Cloud Services Mapping For AWS, Azure, GCP ,OCI, IBM and Alibaba provider – Technology Geek</a></li>
<li><a href="https://www.lucidchart.com/blog/cloud-terminology-glossary" rel="nofollow noreferrer">Cloud Terminology Glossary for AWS, Azure, and GCP | Lucidchart</a>:</li>
<li><a href="https://www.cloudhealthtech.com/blog/cloud-comparison-guide-glossary-aws-azure-gcp" rel="nofollow noreferrer">Cloud Services Terminology Guide: Comparing AWS vs Azure vs Google | CloudHealth by VMware</a></li>
</ul>
<p>And then apply the new <code>GCP</code> manifest:</p>
<pre><code>terraform init
terraform plan
terraform apply
</code></pre>
<h1>Additional resources on <code>AWS</code> <-> <code>GCP</code></h1>
<ul>
<li><a href="https://insights.daffodilsw.com/blog/gcp-to-aws-migration-why-and-how-to-make-the-move" rel="nofollow noreferrer">GCP to AWS Migration: Why and How to Make the Move</a></li>
<li><a href="https://www.youtube.com/watch?v=OCQx-neI5UY" rel="nofollow noreferrer">GCP | Google Cloud Migrate for Compute Engine | AWS to GCP Migration using Velostrata - YouTube</a></li>
<li><a href="https://www.youtube.com/watch?v=WdqQEcEUwRE" rel="nofollow noreferrer">Managing a Large and Complex GCP Migration (Cloud Next '19) - YouTube</a></li>
<li><a href="https://www.leverege.com/blogpost/lessons-learned-migrating-from-gcp-to-aws" rel="nofollow noreferrer">Lessons Learned Migrating from GCP to AWS | Leverege</a></li>
<li><a href="https://searchcloudcomputing.techtarget.com/tip/How-to-approach-a-GCP-to-AWS-migration" rel="nofollow noreferrer">How to approach a GCP-to-AWS migration</a></li>
<li><a href="https://www.rapyder.com/case-studies/gcp-to-aws-case-study-ride-sharing-platform/" rel="nofollow noreferrer">How Rapyder helped ride-sharing app migrate from GCP to AWS | Rapyder</a></li>
<li><a href="https://biqmind.com/wp-content/uploads/2020/04/Biqmind-CloudMigration-UseCase-2020-04.pdf" rel="nofollow noreferrer">Cloud Migration Use Case: Moving From AWS to GCP (PDF)</a></li>
</ul>
|
<p>We are using <code>kubernetes/ingress-nginx</code> for our Azure AKS instance. I have a URI that is 9kb long approximately (it contains a <code>post_logout_redirect_uri</code> and a very long <code>id_token_hint</code> for our Identity server, running in .Net core 2.2).</p>
<p>However, I cannot get past the ingress as nginx is rejecting the query with <code>414 URI Too Long</code>. I can see the request in the Nginx logs but not on the Identity server logs, so it is clearly getting bounced before.</p>
<p>I have tried to update the nginx configuration using config map, but without success. The settings are applied (and have helped me fix other issues before). However, in this case nothing I try seems to have worked. Here is the config map I'm using:</p>
<pre><code>apiVersion: v1
data:
http2-max-header-size: "64k"
http2-max-field-size: "32k"
proxy-body-size: "100m"
client-header-buffer-size: "64k"
large-client-header-buffers: "4 64k"
kind: ConfigMap
metadata:
name: nginx-ingress-controller
namespace: kube-system
</code></pre>
<p>Here are the ingress annotations for the Identity server:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress-name
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/send_timeout: "180"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "180"
nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
nginx.ingress.kubernetes.io/proxy-send-timeout: "180"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: "authorization,content-type"
nginx.ingress.kubernetes.io/proxy-body-size: 250m
nginx.ingress.kubernetes.io/proxy-buffer-size: "64k"
</code></pre>
<p>Finally, if I check the nginx config on the pod it does contain my updated values, in the global config section.</p>
<pre><code>...
keepalive_timeout 75s;
keepalive_requests 100;
client_body_temp_path /tmp/client-body;
fastcgi_temp_path /tmp/fastcgi-temp;
proxy_temp_path /tmp/proxy-temp;
ajp_temp_path /tmp/ajp-temp;
client_header_buffer_size 64k;
client_header_timeout 60s;
large_client_header_buffers 4 64k;
client_body_buffer_size 8k;
client_body_timeout 60s;
http2_max_field_size 32k;
http2_max_header_size 64k;
http2_max_requests 1000;
types_hash_max_size 2048;
server_names_hash_max_size 1024;
server_names_hash_bucket_size 64;
map_hash_bucket_size 64;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
variables_hash_bucket_size 128;
variables_hash_max_size 2048;
underscores_in_headers off;
ignore_invalid_headers on;
...
</code></pre>
<p>Any info or suggestions would be appreciated, thanks!</p>
| <p>I also tried the following annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/large_client_header_buffers: 200m
nginx.ingress.kubernetes.io/proxy-body-size: 200m
</code></pre>
<p>They didn't help; what did help is the server snippet I added to the Ingress yaml:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
http2_max_header_size 256k;
http2_max_field_size 256k;
</code></pre>
|
<p>I followed the steps from <code>AWS</code> knowledge base to create persistent storage: <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/" rel="noreferrer">Use persistent storage in Amazon EKS</a></p>
<p>Unfortunately, <code>PersistentVolume</code>(PV) wasn't created:</p>
<pre><code>kubectl get pv
No resources found
</code></pre>
<p>When I checked the PVC logs, I'm getting the following provisioning failed message:</p>
<pre><code>storageclass.storage.k8s.io "ebs-sc" not found
failed to provision volume with StorageClass "ebs-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
</code></pre>
<p>I'm using <code>Kubernetes v1.21.2-eks-0389ca3</code></p>
<hr />
<p>Update:</p>
<p>The storageclass.yaml used in the example has provisioner set to ebs.csi.aws.com</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>When I updated it using @gohm'c answer, it created a pv.</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
</code></pre>
| <pre><code>storageclass.storage.k8s.io "ebs-sc" not found
failed to provision volume with StorageClass "ebs-sc"
</code></pre>
<p>You need to create the storage class "ebs-sc" after the EBS CSI driver is installed. Example:</p>
<pre><code>cat << EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ebs-sc
provisioner: ebs.csi.aws.com
parameters:
type: gp3
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
</code></pre>
<p>See <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">here</a> for more options.</p>
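<p>For completeness, a PVC that consumes this storage class could look like the sketch below (name and size are placeholders). Note that with <code>volumeBindingMode: WaitForFirstConsumer</code> the volume is only provisioned once a pod actually references the claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi           # placeholder size
</code></pre>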
|
| <p>GKE version - 1.14.
Currently I have two private GKE clusters (a Vault cluster and an app cluster).</p>
<p>I am getting the following errors:</p>
<pre><code>vault errors -
auth.kubernetes.auth_kubernetes_b0f01fa6: login unauthorized due to: Post "https://10.V.V.194:443/apis/authentication.k8s.io/v1/tokenreviews": dial tcp `10.V.V.194`:443: i/o timeout
</code></pre>
<p>-> where</p>
<pre><code>10.V.V.194 -- is master IP address (no https://) via `kubectl cluster-info
</code></pre>
<p>Application pod logs</p>
<pre><code> * permission denied" backoff=1.324573453
2020-10-12T14:39:46.421Z [INFO] auth.handler: authenticating
2020-10-12T14:40:16.427Z [ERROR] auth.handler: error authenticating: error="Error making API request.
URL: PUT http://10.LB.LB.38:8200/v1/auth/kubernetes/login
Code: 403. Errors:
* permission denied" backoff=2.798763368
</code></pre>
<p>-> Where</p>
<pre><code>http://10.LB.LB.38:8200 is Internal LB IP
</code></pre>
<p>Vault setup</p>
<pre><code> NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
test-vault LoadBalancer 240.130.0.59 10.LB.LB.38 8200:32105/TCP,8201:31147/TCP
</code></pre>
<p>How the K8s auth method is enabled:</p>
<pre><code> $ export VAULT_SA_NAME=$(kubectl get sa vault-auth -o jsonpath="{.secrets[*]['name']}")
$ export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data.token}" | base64 --decode; echo)
$ export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
# determine Kubernetes master IP address (no https://) via `kubectl cluster-info`
$ export K8S_HOST=<K8S_MASTER_IP> ----- App cluster ip
# set VAULT_TOKEN & VAULT_ADDR before next steps
$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
token_reviewer_jwt="$SA_JWT_TOKEN" \
kubernetes_host="https://$K8S_HOST:443" \
kubernetes_ca_cert="$SA_CA_CRT"
</code></pre>
<p>How the Vault injector is set up in the application cluster:</p>
<pre><code>name: AGENT_INJECT_VAULT_ADDR
value: http://10.LB.LB.38:8200
</code></pre>
<p>Cluster B ( app cluster )</p>
<pre><code>kubectl create serviceaccount vault-auth -n default
-----
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: role-tokenreview-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: vault-auth
namespace: default
vault auth enable kubernetes
-----------
vault write auth/kubernetes/config kubernetes_host="${K8S_HOST}"
kubernetes_ca_cert="${VAULT_SA_CA_CRT}"
token_reviewer_jwt="${TR_ACCOUNT_TOKEN}"
-----------
vault secrets enable -path=secret/ kv
-----------
vault policy write myapp-kv-rw - <<EOF
path "secret/myapp/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
--------------
vault write auth/kubernetes/role/myapp-role \
bound_service_account_names=default \
bound_service_account_namespaces=default \
policies=default,myapp-kv-rw \
ttl=15m
</code></pre>
<p>Can you please let me know if I missed anything?</p>
| <p>Curious as to which version of k8s you're using. I was having the same issue using <code>v1.21.1</code>. I had to add an issuer per the docs (<a href="https://www.vaultproject.io/docs/auth/kubernetes" rel="nofollow noreferrer">https://www.vaultproject.io/docs/auth/kubernetes</a>)</p>
<blockquote>
<p>Kubernetes 1.21+ clusters may require setting the service account issuer to the same value as kube-apiserver's --service-account-issuer flag. This is because the service account JWTs for these clusters may have an issuer specific to the cluster itself, instead of the old default of kubernetes/serviceaccount</p>
</blockquote>
<p>like so</p>
<pre><code>vault write auth/kubernetes/config \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
issuer="\"test-aks-cluster-dns-d6cbb78e.hcp.uksouth.azmk8s.io\""
</code></pre>
<p>and the issuer can be obtained by running <code>kubectl proxy & curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer </code></p>
|
| <p>I'm running my application on an EKS cluster. A few days back we encountered an issue. Say we have an application pod running with a replica count of one; the pods run on AWS nodes (VMs) named like below.</p>
<pre><code>ams-99547bd55-9fp6r 1/1 Running 0 7m31s 10.255.114.81 ip-10-255-12-11.eu-central-1.compute.internal
mongodb-58746b7584-z82nd 1/1 Running 0 21h 10.255.113.10 ip-10-255-12-11.eu-central-1.compute.internal
</code></pre>
<p>Here are my running services:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ams-service NodePort 172.20.81.165 <none> 3030:30010/TCP 18m
mongodb-service NodePort 172.20.158.81 <none> 27017:30003/TCP 15d
</code></pre>
<p>I have a setting.conf.yaml file deployed as a ConfigMap, where I keep the application-related configuration:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: ama-settings
labels:
conf: ams-settings
data:
config : |
    "git": {
      "prefixUrl": "ssh://git@10.255.12.11:30001/app-server/repos/ams/",
      "author": {
        "name": "app poc",
        "mail": "app@domain.com"
      }
    },
    "mongodb": {
      "host": "10.255.12.11",
      "port": "30003",
      "database": "ams",
      "ssl": false
    }
</code></pre>
<p>This works as expected, but when I delete my running pod and re-deploy it, the pod may be placed on some other AWS node (EC2 VM).</p>
<p>During this time my application does not work until I edit my setting.conf.yaml file and update it with the new AWS node IP where my pod is running.</p>
<p>The question is how to use the service name instead of the AWS node IP, because we don't want to change the IP address every time an existing VM goes down.</p>
| <p>Ideally, instead of using the AWS node IP, the server should bind to <code>0.0.0.0</code> (all interfaces). <a href="https://discuss.kubernetes.io/t/security-implications-of-binding-server-to-127-0-0-1-vs-0-0-0-0-vs-pod-ip/13880" rel="nofollow noreferrer">Reference doc</a></p>
<p>example in Node</p>
<pre><code>const cors = require("cors");
app.use(cors());
const port = process.env.PORT || 8000;
app.listen(port,"0.0.0.0" ,() => {
console.log(`Server is running on port ${port}`);
});
</code></pre>
<p>However, if you want to address the service by name:</p>
<p>you can use the fully qualified service name, though I am not sure it will work as the bind host; <code>0.0.0.0</code> would be the better option there</p>
<pre><code><service.name>.<namespace name>.svc.cluster.local
</code></pre>
<p>example</p>
<pre><code>ams-service.default.svc.cluster.local
</code></pre>
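<p>Applied to the ConfigMap from the question, using the in-cluster service name (and the service port, not the NodePort) for the MongoDB connection would look roughly like this, assuming both services live in the <code>default</code> namespace:</p>
<pre><code>"mongodb": {
  "host": "mongodb-service.default.svc.cluster.local",
  "port": "27017",
  "database": "ams",
  "ssl": false
}
</code></pre>
<p>This name stays stable no matter which node the MongoDB pod lands on.</p>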
|
| <p>I am tasked with integrating oauth2 proxy into an existing Kubernetes deployment in order to secure the application's endpoints. We are using Azure as the IDP and an HC Vault sidecar to inject secrets into the pod. The existing app is one container, and the oauth2 proxy will be another container in the same pod. The Vault secrets are meant to be injected as environment variables, using annotations. (The annotations configuration works fine.)</p>
<p>I am not sure how to wire the Vault secrets into the oauth2 container, since the oauth2 container already has runtime args it needs.</p>
<p>How can I both source the secrets from HC Vault and pass my runtime args to the container? It seems like I can do one thing or the other with <code>args:</code>, but not both. Here is how I can do one or the other:</p>
<pre><code>- name: oauth2-proxy
image: <image>
# args to source the environment variables from vault secret/sidecar injection
args: ["/bin/sh", "-c", "source /vault/secrets/app && <entrypoint script>"]
# args to pass the oauth2 runtime parameters
args:
- --provider=azure
- --email-domain=mydomain.com
- --http-address=0.0.0.0:4180
- --azure-tenant=123456789
</code></pre>
| <p>Ideally, you should use the Vault injector to inject the variables into the pod:</p>
<p><a href="https://learn.hashicorp.com/tutorials/vault/kubernetes-sidecar" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/vault/kubernetes-sidecar</a></p>
<p>A simple Helm command: <code>helm install vault hashicorp/vault --namespace vault --set "injector.externalVaultAddr=<vault address>"</code></p>
<p>Get the token and cluster details:</p>
<pre><code>VAULT_HELM_SECRET_NAME=$(kubectl get secrets -n vault --output=json | jq -r '.items[].metadata | select(.name|startswith("vault-token-")).name')
TOKEN_REVIEW_JWT=$(kubectl get secret $VAULT_HELM_SECRET_NAME -n vault --output='go-template={{ .data.token }}' | base64 --decode)
KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')
KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
</code></pre>
<p>Add the cluster details to the Vault Kubernetes auth method configuration:</p>
<pre><code>vault write auth/<auth-method-name>/config \
token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$KUBE_CA_CERT"
</code></pre>
<p>Use annotations on the pod to fetch the variables from Vault:</p>
<pre><code>annotations:
vault.hashicorp.com/agent-inject: 'true'
vault.hashicorp.com/auth-path: auth/<auth-method-name>
vault.hashicorp.com/agent-inject-secret-secrets: "kv/<path-of-secret>"
vault.hashicorp.com/role: 'app'
vault.hashicorp.com/agent-inject-template-secrets: |
{{- with secret "kv/<path-of-secret>" -}}
#!/bin/sh
set -e
{{- range $key, $value := .Data.data }}
export {{ $key }}={{ $value }}
{{- end }}
exec "$@"
{{- end }}
</code></pre>
<p>You can add a shell script to your app that checks whether the secrets file exists and, if it does, sources it into the environment before starting the app:</p>
<p><strong>run.sh</strong></p>
<pre><code>#!/bin/bash
if [ -f '/vault/secrets/secrets' ]; then
source '/vault/secrets/secrets'
fi
node dist/server.js
</code></pre>
<p>Your container entrypoint then runs this shell script, so you no longer need to pass the source command as an arg. Note that this approach does require changing the existing Docker image.</p>
<p>Reference: <a href="https://learn.hashicorp.com/tutorials/vault/kubernetes-sidecar#pod-with-annotations" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/vault/kubernetes-sidecar#pod-with-annotations</a></p>
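<p>If you prefer not to change the image, you can also combine both requirements in a single <code>args</code> entry (a YAML mapping cannot have two <code>args</code> keys): source the Vault file, then <code>exec</code> the proxy with its runtime flags. This is a sketch; the secrets path and binary path are assumptions based on the question:</p>
<pre><code>- name: oauth2-proxy
  image: <image>
  command: ["/bin/sh", "-c"]
  args:
    - >-
      source /vault/secrets/app &&
      exec /bin/oauth2-proxy
      --provider=azure
      --email-domain=mydomain.com
      --http-address=0.0.0.0:4180
      --azure-tenant=123456789
</code></pre>
<p>Using <code>exec</code> replaces the shell with the proxy process, so signals from the kubelet reach it directly.</p>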
|
<p>This is the simplest config straight from the docs, but when I create the service, kubectl lists the target port as something random. Setting the target port to 1337 in the YAML:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sails-svc
spec:
selector:
app: sails
ports:
- port: 1337
targetPort: 1337
type: LoadBalancer
</code></pre>
<p>And this is what k8s sets up for services:</p>
<pre><code>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP <X.X.X.X> <none> 443/TCP 23h
sails LoadBalancer <X.X.X.X> <X.X.X.X> 1337:30203/TCP 3m6s
svc-postgres ClusterIP <X.X.X.X> <none> 5432/TCP 3m7s
</code></pre>
<p>Why is k8s setting the target port to <code>30203</code>, when I'm specifying <code>1337</code>? It does the same thing if I try other port numbers, <code>80</code> gets <code>31887</code>. I've read <a href="https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-nodeport-allocation" rel="nofollow noreferrer">the docs</a> but disabling those attributes did nothing in GCP. What am I not configuring correctly?</p>
| <p>The <code>kubectl get services</code> output shows <strong>Port:NodePort/Protocol</strong>, not the target port. By default, and for convenience, the Kubernetes control plane allocates the NodePort from a range (default: <strong>30000-32767</strong>; see the example in this <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">documentation</a>).</p>
<p>To get the TargetPort information try using</p>
<pre><code>kubectl get service <your service name> --output yaml
</code></pre>
<p>This command shows all port details, including <code>targetPort</code>, and the stable external IP address under <code>loadBalancer.ingress</code>.</p>
<p>Refer to this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps#creating_a_service_of_type_loadbalancer" rel="nofollow noreferrer">documentation</a> for more details on creating a Service of type LoadBalancer.</p>
|
<p>I am deploying a kubernetes app via github on GCP clusters. Everything works fine then.. I came across <code>cloud deploy delivery pipeline</code>..now I am stuck.</p>
<p>Following the <a href="https://cloud.google.com/deploy/docs/quickstart-basic?_ga=2.141149938.-1343950568.1631260475&_gac=1.47309141.1631868766.CjwKCAjw-ZCKBhBkEiwAM4qfF2mz0qQw_k68XtDo-SSlglr1_U2xTUO0C2ZF8zBOdMlnf_gQVwDi3xoCQ8IQAvD_BwE" rel="nofollow noreferrer">docs</a> here</p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
build:
artifacts:
- image: skaffold-example
deploy:
kubectl:
manifests:
- k8s-*
</code></pre>
<p>In the <code>k8s</code> folder I have my deployment files like so</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ixh-auth-depl
labels:
app: ixh-auth
spec:
replicas: 1
selector:
matchLabels:
app: ixh-auth
template:
metadata:
labels:
app: ixh-auth
spec:
containers:
- name: ixh-auth
image: mb/ixh-auth:latest
ports:
- containerPort: 3000
resources:
requests:
cpu: 100m
memory: 500Mi
</code></pre>
<p>but it gives the error <code>invalid kubernetes manifest</code>. I cannot find anything to read on this and don't know how to proceed.</p>
| <p>The correct way to declare the manifests is to list them explicitly; the wildcard apparently didn't work. The folder name here would be <code>k8s-manifests</code>.</p>
<pre><code>deploy:
kubectl:
manifests:
- k8s-manifests/redis-deployment.yml
- k8s-manifests/node-depl.yml
- k8s-manifests/node-service.yml
</code></pre>
|
<p>I am deploying a kubernetes app via github on GCP clusters. Everything works fine then.. I came across <code>cloud deploy delivery pipeline</code>..now I am stuck.</p>
<p>Following the <a href="https://cloud.google.com/deploy/docs/quickstart-basic?_ga=2.141149938.-1343950568.1631260475&_gac=1.47309141.1631868766.CjwKCAjw-ZCKBhBkEiwAM4qfF2mz0qQw_k68XtDo-SSlglr1_U2xTUO0C2ZF8zBOdMlnf_gQVwDi3xoCQ8IQAvD_BwE" rel="nofollow noreferrer">docs</a> here</p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
build:
artifacts:
- image: skaffold-example
deploy:
kubectl:
manifests:
- k8s-*
</code></pre>
<p>In the <code>k8s</code> folder I have my deployment files like so</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ixh-auth-depl
labels:
app: ixh-auth
spec:
replicas: 1
selector:
matchLabels:
app: ixh-auth
template:
metadata:
labels:
app: ixh-auth
spec:
containers:
- name: ixh-auth
image: mb/ixh-auth:latest
ports:
- containerPort: 3000
resources:
requests:
cpu: 100m
memory: 500Mi
</code></pre>
<p>but it gives the error <code>invalid kubernetes manifest</code>. I cannot find anything to read on this and don't know how to proceed.</p>
| <p>@Abhishek Rai, I agree with your answer. Google Cloud Deploy uses <strong>skaffold render</strong> to render your Kubernetes manifests, replacing untagged image names with the tagged image names of the container images you're deploying. Then, when you promote the release, Google Cloud Deploy uses <strong>skaffold apply</strong> to apply the manifests and deploy the images to your Google Kubernetes Engine cluster. The manifests section should list the paths of the YAML files, as:</p>
<pre><code>deploy:
kubectl:
manifests:
- PATH_TO_MANIFEST
</code></pre>
<p>so that the error will not be encountered. Refer to the <a href="https://cloud.google.com/deploy/docs/skaffold" rel="nofollow noreferrer">document</a> for more details.</p>
|
<p>I have a resource task to create nodes in EKS. My problem begin when I'm trying to define some tags according to the value of a specific variable and if not, don't declare the tag. Something like that:</p>
<pre><code>resource "aws_eks_node_group" "managed_workers" {
for_each = var.nodegroups[terraform.workspace]
cluster_name = aws_eks_cluster.cluster.name
node_group_name = each.value.Name
node_role_arn = aws_iam_role.managed_workers.arn
subnet_ids = aws_subnet.private.*.id
tags = merge(
var.tags[terraform.workspace], {
if somevar = value then
"topology.kubernetes.io/zone" = "us-east-2a" <--- THIS TAG IS CONDITIONAL
fi
"type" = each.value.type
"Name" = each.value.Name
"ob" = each.value.ob
"platform" = each.value.platform
})
</code></pre>
<p>Is there possible?</p>
| <p>Yes, you can do this. For example:</p>
<pre><code>resource "aws_eks_node_group" "managed_workers" {
for_each = var.nodegroups[terraform.workspace]
cluster_name = aws_eks_cluster.cluster.name
node_group_name = each.value.Name
node_role_arn = aws_iam_role.managed_workers.arn
subnet_ids = aws_subnet.private.*.id
tags = merge(
var.tags[terraform.workspace], {
"type" = each.value.type
"Name" = each.value.Name
"ob" = each.value.ob
"platform" = each.value.platform},
    [somevar == value ? {"topology.kubernetes.io/zone" = "us-east-2a"} : {}]...)
}
</code></pre>
<p>The <code>...</code> is for <a href="https://www.terraform.io/docs/language/expressions/function-calls.html#expanding-function-arguments" rel="nofollow noreferrer">expanding-function-arguments</a>.</p>
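<p>If the spread syntax feels opaque, an equivalent and arguably simpler form passes a conditional map directly, using an empty map when the condition is false (<code>merge</code> accepts empty maps; <code>somevar</code> is a placeholder from the question):</p>
<pre><code>tags = merge(
  var.tags[terraform.workspace],
  {
    "type"     = each.value.type
    "Name"     = each.value.Name
    "ob"       = each.value.ob
    "platform" = each.value.platform
  },
  somevar == value ? { "topology.kubernetes.io/zone" = "us-east-2a" } : {}
)
</code></pre>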
|
<p>We have a page that loads some large JavaScript files. When we hit the page, all the small files download fine, but one of the large files usually fails to download fully, with <strong>net::ERR_HTTP2_PROTOCOL_ERROR</strong>. The page is reachable only over a VPN connection, as it is not open to everyone.</p>
<p>Just to add, the Nginx ingress controller is used with the following settings for that ingress:</p>
<pre><code> nginx.ingress.kubernetes.io/configuration-snippet: |
gzip on;
gzip_types text/plain text/css image/png application/javascript;
if ($request_uri ~* \.(js|css|gif|jpeg|png)) {
expires 1M;
add_header Cache-Control "public";
}
nginx.ingress.kubernetes.io/http2-push-preload: "false"
nginx.ingress.kubernetes.io/proxy-body-size: 500M
nginx.ingress.kubernetes.io/proxy-bufferings: "off"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "36000"
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "36000"
nginx.ingress.kubernetes.io/proxy-send-timeout: "36000"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
</code></pre>
<p>Can we set another annotation on the Nginx ingress, or might this be a VPN issue? How can we resolve it?</p>
| <p>I solved this by changing the Nginx ingress configuration as follows:</p>
<pre><code>data:
client-max-body-size: 50M
keep-alive: "3600"
proxy-buffer-size: 500m
proxy-buffers-number: "8"
</code></pre>
<p>Hopefully this saves someone some time.</p>
|
<p>I'm using GKE and want to restrict my external load balancers from unwanted traffic. I found two options that problematic for me:</p>
<ol>
<li>Nginx plus + maxmind solution for geo filtering - I'm looking for an open source solution (and the maxmind lite is not available anymore).</li>
<li>GKE Ingress + Cloud armor, but I'm using nginx and other load balancers and not the GKE Ingress.</li>
</ol>
<p>I'm looking for a better solution, maybe in a global kubernetes level implemented as a daemonset or a regular deployment proxy.</p>
| <p>I would suggest checking out: <a href="https://lab.wallarm.com/how-to-protect-your-kubernetes-cluster-with-wallarm-configuration-and-finetuning-part-2-of-3/" rel="nofollow noreferrer">https://lab.wallarm.com/how-to-protect-your-kubernetes-cluster-with-wallarm-configuration-and-finetuning-part-2-of-3/</a></p>
<p>And the <strong>Wallarm WAF ingress controller</strong>: <a href="https://github.com/wallarm/ingress" rel="nofollow noreferrer">https://github.com/wallarm/ingress</a></p>
<p>With the <code>Nginx ingress</code>, there are several options to increase security:</p>
<p><strong><a href="https://cloudzone.io/implementing-waf-and-mutual-tls-on-kubernetes-with-nginx-modsecurity/" rel="nofollow noreferrer">ModSecurity</a></strong> for application-level request filtering, plus proxy payload size management.</p>
<p>For DDoS protection, you can use the <a href="https://medium.com/@chadsaun/mitigating-a-ddos-attack-with-ingress-nginx-and-kubernetes-12f309072367" rel="nofollow noreferrer">rate-limiting</a> and connection-handling options:</p>
<pre><code>nginx.ingress.kubernetes.io/limit-connections: '2'
nginx.ingress.kubernetes.io/limit-rpm: '60'
</code></pre>
<p>You can also whitelist a list of allowed IPs.</p>
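<p>For example, with the ingress-nginx <code>whitelist-source-range</code> annotation (the CIDRs below are placeholders):</p>
<pre><code>nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"
</code></pre>
<p>Requests from any source outside these ranges receive a 403 from the ingress.</p>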
|
<p>I am following this doc <a href="https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-deploy-elasticsearch.html</a> to deploy Elasticsearch on Kuberenete. I have created the spec file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.15.0
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
</code></pre>
<p>But I got an error when run <code>kubectl apply -f es.yml</code>:</p>
<pre><code> no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1"
</code></pre>
<p>what should I do to solve the issue? It seems K8S doesn't know this kind <code>Elasticsearch</code>. Do I need to install anything?</p>
| <p>It looks like you may have omitted installing the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">custom resource definitions (CRDs)</a> for Elastic.</p>
<p><code>Kubectl</code> version 1.16 and above:</p>
<pre><code>kubectl create -f https://download.elastic.co/downloads/eck/1.8.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/operator.yaml
</code></pre>
<p><code>Kubectl</code> version below 1.16:</p>
<pre><code>kubectl create -f https://download.elastic.co/downloads/eck/1.8.0/crds-legacy.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/1.8.0/operator-legacy.y
</code></pre>
|
<p>A couple of weeks ago I published a similar question regarding a Kubernetes deployment that uses Key Vault (with the User Assigned Managed Identity method). The issue was resolved, but when trying to implement everything from scratch, something doesn't make sense to me.</p>
<p>Basically, I am getting this error when mounting the volume:</p>
<pre><code>Volumes:
sonar-data-new:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: sonar-data-new
ReadOnly: false
sonar-extensions-new2:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: sonar-extensions-new2
ReadOnly: false
secrets-store-inline:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: secrets-store.csi.k8s.io
FSType:
ReadOnly: true
VolumeAttributes: secretProviderClass=azure-kv-provider
default-token-zwxzg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zwxzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12s default-scheduler Successfully assigned default/sonarqube-d44d498f8-46mpz to aks-agentpool-35716862-vmss000000
Warning FailedMount 3s (x5 over 11s) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod default/sonarqube-d44d498f8-46mpz, err: rpc error: code = Unknown desc = failed to mountobjects, error: failed to get objectType:secret, objectName:username, objectVersion:: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://SonarQubeHelm.vault.azure.net/secrets/username/?api-version=2016-10-01: StatusCode=400 -- Original Error: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Identity not found"}
</code></pre>
<p>This is my <strong>secret-class.yml</strong> file (name of the keyvault is correct). Also <strong>xxx-xxxx-xxx-xxx-xxxxx4b5ec83</strong> is the <strong>objectID</strong> of the AKS managed identity (<strong>SonarQubeHelm-agentpool</strong>)</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
name: azure-kv-provider
spec:
provider: azure
secretObjects:
- data:
- key: username
objectName: username
- key: password
objectName: password
secretName: test-secret
type: Opaque
parameters:
usePodIdentity: "false"
useVMManagedIdentity: "true"
userAssignedIdentityID: "xxx-xxxx-xxx-xxx-xxxxx4b5ec83"
keyvaultName: "SonarQubeHelm"
cloudName: ""
objects: |
array:
- |
objectName: username
objectType: secret
objectAlias: username
objectVersion: ""
- |
objectName: password
objectType: secret
objectAlias: password
objectVersion: ""
resourceGroup: "rg-LD-sandbox"
subscriptionId: "xxxx"
tenantId: "yyyy"
</code></pre>
<p>and this is my <strong>deployment.yml</strong> file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: sonarqube
name: sonarqube
spec:
selector:
matchLabels:
app: sonarqube
replicas: 1
template:
metadata:
labels:
app: sonarqube
spec:
containers:
- name: sonarqube
image: sonarqube:8.9-developer
resources:
requests:
cpu: 500m
memory: 1024Mi
limits:
cpu: 2000m
memory: 4096Mi
volumeMounts:
- mountPath: "/mnt/"
name: secrets-store-inline
- mountPath: "/opt/sonarqube/data/"
name: sonar-data-new
- mountPath: "/opt/sonarqube/extensions/plugins/"
name: sonar-extensions-new2
env:
- name: "SONARQUBE_JDBC_USERNAME"
valueFrom:
secretKeyRef:
name: test-secret
key: username
- name: "SONARQUBE_JDBC_PASSWORD"
valueFrom:
secretKeyRef:
name: test-secret
key: password
- name: "SONARQUBE_JDBC_URL"
valueFrom:
configMapKeyRef:
name: sonar-config
key: url
ports:
- containerPort: 9000
protocol: TCP
volumes:
- name: sonar-data-new
persistentVolumeClaim:
claimName: sonar-data-new
- name: sonar-extensions-new2
persistentVolumeClaim:
claimName: sonar-extensions-new2
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "azure-kv-provider"
</code></pre>
<p>I assigned proper permissions to the AKS managed identity to get access to the keyvault (<strong>xxx-xxxx-xxx-xxx-xxxxx4b5ec83</strong> is the <strong>objectID</strong> of the AKS managed identity - <strong>SonarQubeHelm-agentpool</strong>)</p>
<pre><code>xxxx@Azure:~/clouddrive/kubernetes/sonarqubekeyvault$ az role assignment list --assignee xxx-xxxx-xxx-xxx-xxxxx4b5ec83 --all
[
{
"canDelegate": null,
"condition": null,
"conditionVersion": null,
"description": null,
"id": "/subscriptions/xxxx-xxx-xxx-xxx-xxxe22e8804e/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm/providers/Microsoft.Authorization/roleAssignments/xxxx-x-xx-xx-xx86584218f",
"name": "xxxx-xx-x-x-xx86584218f",
"principalId": "xxx-xxx-x-xxx-xx3a4b5ec83",
"principalName": "xxxx-xxxx-xxx-xxx-xx79a3906b8",
"principalType": "ServicePrincipal",
"resourceGroup": "rg-LD-sandbox",
"roleDefinitionId": "/subscriptions/xxxx-xxxx-xxxx-xxx-0e1e22e8804e/providers/Microsoft.Authorization/roleDefinitions/xxx-xxxx-xxx-xxxx-xxxfe8e74483",
"roleDefinitionName": "Key Vault Administrator",
"scope": "/subscriptions/xxxx-xxx-xxx-xxxx-xxx2e8804e/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm",
"type": "Microsoft.Authorization/roleAssignments"
},
{
"canDelegate": null,
"condition": null,
"conditionVersion": null,
"description": null,
"id": "/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxe22e8804e/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm/providers/Microsoft.Authorization/roleAssignments/xxxx-xxxx-xxxx-xxxx-xxx5137f480",
"name": "xxxx-xxxx-xxxx-xxxx-xx5137f480",
"principalId": "xxxx-xxxx-xxxx-xxxx-xx3a4b5ec83",
"principalName": "xxxx-xxxx-xxxx-xxxx-xx79a3906b8",
"principalType": "ServicePrincipal",
"resourceGroup": "rg-LD-sandbox",
"roleDefinitionId": "/subscriptions/xxxx-xxxx-xxxx-xxxx-0e1e22e8804e/providers/Microsoft.Authorization/roleDefinitions/xxxx-xxxx-xxxx-xxxx-xx2c155cd7",
"roleDefinitionName": "Key Vault Secrets Officer",
"scope": "/subscriptions/xxxx/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm",
"type": "Microsoft.Authorization/roleAssignments"
}
]
</code></pre>
<p>This is the info about my Key Vault.</p>
<p><strong>az keyvault show --name SonarQubeHelm</strong></p>
<pre><code>{
"id": "/subscriptions/xxxx-xxxxx-xxxx-xxxx-xxxxxxxx804e/resourceGroups/rg-LD-sandbox/providers/Microsoft.KeyVault/vaults/SonarQubeHelm",
"location": "xxxxx",
"name": "SonarQubeHelm",
"properties": {
"accessPolicies": [
{
"applicationId": null,
"objectId": "xxxx-xxx-xxxx-xxxx-xxxxa4b5ec83",
"permissions": {
"certificates": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"Recover",
"Backup",
"Restore",
"ManageContacts",
"ManageIssuers",
"GetIssuers",
"ListIssuers",
"SetIssuers",
"DeleteIssuers",
"Purge"
],
"keys": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"Recover",
"Backup",
"Restore",
"Decrypt",
"Encrypt",
"UnwrapKey",
"WrapKey",
"Verify",
"Sign",
"Purge"
],
"secrets": [
"Get",
"List",
"Set",
"Delete",
"Recover",
"Backup",
"Restore",
"Purge"
],
"storage": null
},
"tenantId": "xxxx-xxxx-xxxx-xxxx-xxxxxdb8c610"
},
{
"applicationId": null,
"objectId": "xxxx-xxxx-xxxx-xxxx-xxxx531f67f8",
"permissions": {
"certificates": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"Recover",
"Backup",
"Restore",
"ManageContacts",
"ManageIssuers",
"GetIssuers",
"ListIssuers",
"SetIssuers",
"DeleteIssuers",
"Purge"
],
"keys": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"Recover",
"Backup",
"Restore",
"Decrypt",
"Encrypt",
"UnwrapKey",
"WrapKey",
"Verify",
"Sign",
"Purge"
],
"secrets": [
"Get",
"List",
"Set",
"Delete",
"Recover",
"Backup",
"Restore",
"Purge"
],
"storage": null
},
"tenantId": "xxxxx-xxxxxx-xxx-xxx8db8c610"
},
{
"applicationId": null,
"objectId": "xxx-xxxx-xxxx-xxx-xxxx0df6af9",
"permissions": {
"certificates": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"Recover",
"Backup",
"Restore",
"ManageContacts",
"ManageIssuers",
"GetIssuers",
"ListIssuers",
"SetIssuers",
"DeleteIssuers"
],
"keys": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"Recover",
"Backup",
"Restore"
],
"secrets": [
"Get",
"List",
"Set",
"Delete",
"Recover",
"Backup",
"Restore"
],
"storage": null
},
"tenantId": "xxx-xxx-xxx-xxx-xxx8db8c610"
}
],
"createMode": null,
"enablePurgeProtection": null,
"enableRbacAuthorization": false,
"enableSoftDelete": true,
"enabledForDeployment": false,
"enabledForDiskEncryption": false,
"enabledForTemplateDeployment": false,
"hsmPoolResourceId": null,
"networkAcls": null,
"privateEndpointConnections": null,
"provisioningState": "Succeeded",
"sku": {
"family": "A",
"name": "Standard"
},
"softDeleteRetentionInDays": 90,
"tenantId": "xxx-xxx-xxx-xxx-xxx68db8c610",
"vaultUri": "https://sonarqubehelm.vault.azure.net/"
},
"resourceGroup": "rg-LD-sandbox",
"systemData": null,
"tags": {},
"type": "Microsoft.KeyVault/vaults"
}
</code></pre>
<p>This is the <strong>CSI pod</strong> running at the moment:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
csi-secrets-store-provider-azure-1632148185-hggl4 1/1 Running 0 5h4m
ingress-nginx-controller-65c4f84996-99pkh 1/1 Running 0 5h49m
secrets-store-csi-driver-xsx2r 3/3 Running 0 5h4m
sonarqube-d44d498f8-46mpz 0/1 ContainerCreating 0 26m
</code></pre>
<p>I used this <strong>CSI driver</strong></p>
<p><em>helm install csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --set secrets-store-csi-driver.syncSecret.enabled=true --generate-name</em></p>
<p>To assign proper permissions to the AKS managed identity, I followed the steps below (just in case, I used the clientID as well, but it did not work). However, judging from the previous commands, the managed identity's permissions seem correct.</p>
<pre><code>export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export AKV_ID=$(az keyvault show -g <resource group> -n <akv name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "Key Vault Secrets Officer" --scope $AKV_ID
</code></pre>
<p>In <strong>Access Policies</strong> in the <strong>key vault</strong> "SonarQubeHelm" these are the applications added with all the permission: <strong>SonarQubeHelm-agentpool</strong> and <strong>SonarQubeHelm</strong></p>
<p>The key values in <strong>Secrets</strong> (inside Key Vault) are <strong>username</strong> and <strong>password</strong>.</p>
<p>Everything is on the same region, resource group and namespace (default) and I am working with 1 node cluster.</p>
<p>Any idea about this error?</p>
<p>Thanks in advance!</p>
| <p>After doing some tests, it seems that the process that I was following was correct. Most probably, I was using <code>principalId</code> instead of <code>clientId</code> in role assignment for the AKS managed identity.</p>
<p>Key points for someone else that is facing similar issues:</p>
<ol>
<li><p>Check what the managed identity created automatically by AKS is. Check for the <code>clientId</code>; e.g.,</p>
<pre><code>az vmss identity show -g MC_rg-LX-sandbox_SonarQubeHelm_southcentralus -n
aks-agentpool-xxxxx62-vmss -o yaml
</code></pre>
<blockquote>
<p><em>Remember that <code>MC_**</code> is the resource group that AKS creates automatically to keep AKS resources.</em> VMSS is the Virtual Machine Scale Set; you can get it under the same resource group.</p>
</blockquote>
</li>
<li><p>Check if it has correct permissions to access the Key Vault that you created: e.g., (where <em>xxxx-xxxx-xxx-xxx-xx79a3906b8</em> is the managed identity <code>clientId</code>):</p>
<pre><code>az role assignment list --assignee xxxx-xxxx-xxx-xxx-xx79a3906b8 --all
</code></pre>
<p>It should have:</p>
<pre><code>"roleDefinitionName": "Key Vault Administrator"
</code></pre>
</li>
<li><p>If it doesn't have correct permissions, assign them:</p>
<pre><code>export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.clientId -o tsv)
export AKV_ID=$(az keyvault show -g <resource group> -n <akv name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "Key Vault Secrets Officer" --scope $AKV_ID
</code></pre>
</li>
</ol>
<p>The role assignments for the managed identity in AKS works for <code>clientId</code>.</p>
|
<p>How to negate exit code status to be used in <code>Kubernetes</code> <code>livenessProbe</code>?</p>
<p>I'll be using the <code>grep</code> command and I would like to do below as a check.</p>
<ul>
<li>Return exit value 0, if <code>grep</code> has no hit</li>
<li>Return exit value 1, if <code>grep</code> has a hit</li>
</ul>
<p>Since normally, grep will return 0 if there is a hit, can I negate this something like shown below?</p>
<pre><code>!$(cat filet.txt | grep keyword)
</code></pre>
| <p>Yes, you can give it a try.</p>
<p>For example:</p>
<pre><code> livenessProbe:
exec:
command:
- /bin/bash
- -c
- cat filet.txt | grep keyword
initialDelaySeconds: 10
periodSeconds: 10
</code></pre>
<p>You may also find <code>grep -v</code> helpful, which inverts the matching:</p>
<pre><code>-v, --invert-match
Invert the sense of matching, to select non-matching lines.
</code></pre>
<p>You can also try <code>-c</code>, which prints the number of matches:</p>
<pre><code>echo 'Test' | grep -c T
</code></pre>
<p><strong>1</strong></p>
<pre><code>echo 'Test' | grep -c N
</code></pre>
<p><strong>0</strong></p>
<p><strong>shell script</strong></p>
<pre><code>#!/bin/sh
# exit 1 (probe fails) when the keyword is found, exit 0 otherwise
if grep -q keyword filet.txt; then
  exit 1
else
  exit 0
fi
</code></pre>
<p>The easiest way is a shell script that sets the exit code for liveness as you want (0 or 1); based on that, the kubelet will restart the pod:</p>
<pre><code>livenessProbe:
exec:
command:
- /bin/sh
- -c
- /home/test/health.sh
</code></pre>
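<p>For the specific negation asked about, the shell's <code>!</code> operator inverts a command's exit status directly, so no wrapper script is strictly needed. A minimal sketch (creating a sample <code>filet.txt</code> just for demonstration):</p>

```shell
# Create a sample file with no occurrence of "keyword"
printf 'some log line\n' > filet.txt

# `!` inverts grep's exit status: exit 0 when the keyword is absent,
# exit 1 when it is present -- exactly the inversion requested.
if ! grep -q keyword filet.txt; then
  echo "keyword absent: probe passes"
else
  echo "keyword found: probe fails"
fi
```

<p>In the probe itself this becomes <code>command: ["/bin/sh", "-c", "! grep -q keyword filet.txt"]</code>.</p>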
|
<p>I'm running Apache Flink in standalone Kubernetes (session) mode without Job Manager HA. But I need to deploy Job Manager HA, because only in HA mode can Flink persist jobs across Job Manager restarts.
Flink runs in a dedicated Kubernetes namespace, and I have permissions only for that namespace.</p>
<p>HA is enabled using this article:
<a href="https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/" rel="nofollow noreferrer">https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/</a></p>
<p>and I use yaml files from this article:
<a href="https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/#kubernetes-high-availability-services" rel="nofollow noreferrer">https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/standalone/kubernetes/#kubernetes-high-availability-services</a></p>
<p>I have, for example, a k8s namespace named flink-namespace. In this namespace I've created:</p>
<ul>
<li>serviceAccount named flink-sa</li>
<li>a role with permissions to create/edit ConfigMaps in this namespace</li>
</ul>
<p>So this serviceAccount has permissions to create/edit ConfigMaps, but only in this namespace.</p>
<p>After deployment, jobManager can't start and throws error:</p>
<p><em>Caused by: io.fabric8.kubernetes.client.KubernetesClientException: configmaps "flink-restserver-leader" is forbidden: User "system:serviceaccount:flink-namespace:flink-sa" cannot watch resource "configmaps" in API group "" in the namespace "default"</em></p>
<p>Which means that the serviceAccount Flink uses to manage ConfigMaps tries to create the ConfigMap in the "default" namespace, not in the "flink-namespace" namespace.</p>
<p>Does anybody know how to configure Flink to manage ConfigMaps in a specified namespace?</p>
| <p>Problem solved. The option that tells Flink which Kubernetes namespace it is running in can be found in the Flink source code. To solve this problem, set this in the config:</p>
<pre><code>kubernetes.namespace: YOUR_NAMESPACE_NAME
</code></pre>
|