| prompt | response |
|---|---|
<p>I have two mac mini computers and I'm trying to create a K8s cluster using k3d. I can create a cluster on a single host very easily. However, I'm having trouble finding any guidance on creating a cluster that has multiple hosts (machines). Any ideas?</p>
| <p>According to your question, you want to create a k8s cluster with k3d on multiple hosts.</p>
<p>But based on the information described on <a href="https://github.com/rancher/k3d/issues/408#issuecomment-733883237" rel="nofollow noreferrer">the k3d Issues page on GitHub</a> (issue still open), this solution:</p>
<blockquote>
<p>with the current version of k3 is not officially "supported"</p>
</blockquote>
<p>Thus, at the moment it looks like only a single-host cluster is possible with k3d.</p>
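<p>A possible workaround outside of k3d is to run plain k3s directly, with one machine as the server and the other as an agent. Note that k3s itself requires Linux, so on Mac minis this would have to run inside Linux VMs; the address and token below are placeholders. A rough sketch:</p>

```shell
# On the first machine: install k3s as the server (control plane).
curl -sfL https://get.k3s.io | sh -

# Read the join token on the server:
sudo cat /var/lib/rancher/k3s/server/node-token

# On the second machine: join the cluster as an agent.
# Replace 192.168.1.10 and <TOKEN> with your server's address and token.
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<TOKEN> sh -
```

<p>These are environment-bound installer commands, so treat them as an illustrative fragment rather than a script to paste verbatim.</p>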
|
<p>Following the documentation I try to setup the Seldon-Core quick-start <a href="https://docs.seldon.io/projects/seldon-core/en/v1.11.1/workflow/github-readme.html" rel="nofollow noreferrer">https://docs.seldon.io/projects/seldon-core/en/v1.11.1/workflow/github-readme.html</a></p>
<p>I don't have a LoadBalancer, so I would like to use port-forwarding to access the service.</p>
<p>I run the following script to set up the system:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash -ev
kind create cluster --name seldon
kubectl cluster-info --context kind-seldon
sleep 10
kubectl get pods -A
istioctl install -y
sleep 10
kubectl get pods -A
kubectl create namespace seldon-system
kubens seldon-system
helm install seldon-core seldon-core-operator \
  --repo https://storage.googleapis.com/seldon-charts \
  --set usageMetrics.enabled=true \
  --namespace seldon-system \
  --set istio.enabled=true
sleep 100
kubectl get validatingwebhookconfigurations
kubectl create namespace modelns
kubens modelns
kubectl apply -f - << END
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
  namespace: modelns
spec:
  name: iris
  predictors:
  - graph:
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/v1.12.0-dev/sklearn/iris
      name: classifier
    name: default
    replicas: 1
END
sleep 100
kubectl get pods -A
kubectl get svc -A
INGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system --selector="app=istio-ingressgateway" --output jsonpath='{.items[0].metadata.name}')
kubectl port-forward --namespace istio-system svc/${INGRESS_GATEWAY_SERVICE} 8080:80 &
</code></pre>
<p>I guess the port-forwarding argument <code>8080:80</code> is probably wrong.</p>
<p>I'm using the following script for testing:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash -ev
export INGRESS_HOST=localhost
export INGRESS_PORT=8080
SERVICE_HOSTNAME=$(kubectl get inferenceservice sklearn-iris -n kserve-test -o jsonpath='{.status.url}' | cut -d "/" -f 3)
curl -X POST http://$INGRESS_HOST:$INGRESS_PORT/seldon/modelns/iris-model/api/v1.0/predictions \
-H 'Content-Type: application/json' \
-d '{ "data": { "ndarray": [1,2,3,4] } }'
</code></pre>
<p>But I got the following error:</p>
<pre class="lang-sh prettyprint-override"><code>Handling connection for 8080
E1012 10:52:32.074812
52896 portforward.go:400] an error occurred forwarding 8080 -> 8080:
error forwarding port 8080 to pod b9bd4ff03c6334f4af632044fe54e1c2531e95976a5fe074e30b4258d145508a,
uid : failed to execute portforward in network namespace "/var/run/netns/cni-2b4d8573-3cfe-c70e-1c36-e0dc53cbd936": failed to connect to localhost:8080 inside namespace "b9bd4ff03c6334f4af632044fe54e1c2531e95976a5fe074e30b4258d145508a",
IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused
</code></pre>
<p>Can somebody please explain how to fix this? What is the right port-forwarding argument?</p>
| <p>If you install with Istio enabled, you also need to install an Istio gateway.</p>
<p>I tested your flow: it didn't work at first, but it did work after installing the following Istio gateway.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: seldon-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
</code></pre>
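<p>With the gateway in place, the original <code>8080:80</code> port-forward argument is fine (the gateway listens on port 80). A quick sketch of the retest, assuming the manifest above is saved as <code>seldon-gateway.yaml</code>:</p>

```shell
kubectl apply -f seldon-gateway.yaml

INGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system --selector="app=istio-ingressgateway" --output jsonpath='{.items[0].metadata.name}')
kubectl port-forward --namespace istio-system svc/${INGRESS_GATEWAY_SERVICE} 8080:80 &

curl -X POST http://localhost:8080/seldon/modelns/iris-model/api/v1.0/predictions \
  -H 'Content-Type: application/json' \
  -d '{ "data": { "ndarray": [1,2,3,4] } }'
```

<p>This is a command fragment that needs a running cluster; it mirrors your original script rather than prescribing a canonical flow.</p>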
<p>You can read more about istio configuration on Seldon Core here: <a href="https://docs.seldon.io/projects/seldon-core/en/latest/ingress/istio.html" rel="nofollow noreferrer">https://docs.seldon.io/projects/seldon-core/en/latest/ingress/istio.html</a></p>
|
<p>Is there a way to disable service links globally. There's a field in <code>podSpec</code>:</p>
<pre><code>enableServiceLinks: false
</code></pre>
<p>but it's <code>true</code> by default. I couldn't find anything in kubelet to kill it. Or is there some cool admission webhook toolchain I could use</p>
| <p>You can use the Kubernetes-native policy engine called <a href="https://kyverno.io/" rel="nofollow noreferrer">Kyverno</a>. Kyverno policies can validate, <strong>mutate</strong> (see: <a href="https://kyverno.io/docs/writing-policies/mutate/" rel="nofollow noreferrer">Mutate Resources</a>), and generate Kubernetes resources.</p>
<p>A Kyverno policy is a collection of rules that can be applied to the entire cluster (<code>ClusterPolicy</code>) or to the specific namespace (<code>Policy</code>).</p>
<hr />
<p>I will create an example to illustrate how it may work.</p>
<p>First we need to install Kyverno, you have the option of installing Kyverno directly from the latest release manifest, or using Helm (see: <a href="https://kyverno.io/docs/introduction/#quick-start" rel="nofollow noreferrer">Quick Start guide</a>):</p>
<pre><code>$ kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/definitions/release/install.yaml
</code></pre>
<p>After successful installation, we can create a simple <code>ClusterPolicy</code>:</p>
<pre><code>$ cat strategic-merge-patch.yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: strategic-merge-patch
spec:
  rules:
  - name: enableServiceLinks_false_globally
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          enableServiceLinks: false
$ kubectl apply -f strategic-merge-patch.yaml
clusterpolicy.kyverno.io/strategic-merge-patch created
$ kubectl get clusterpolicy
NAME                    BACKGROUND   ACTION   READY
strategic-merge-patch   true         audit    true
</code></pre>
<p>This policy adds <code>enableServiceLinks: false</code> to the newly created Pod.</p>
<p>Let's create a Pod and check if it works as expected:</p>
<pre><code>$ kubectl run app-1 --image=nginx
pod/app-1 created
$ kubectl get pod app-1 -oyaml | grep "enableServiceLinks:"
enableServiceLinks: false
</code></pre>
<p>It also works with <code>Deployments</code>, <code>StatefulSets</code>, <code>DaemonSets</code> etc.:</p>
<pre><code>$ kubectl create deployment deploy-1 --image=nginx
deployment.apps/deploy-1 created
$ kubectl get pod deploy-1-7cfc5d6879-kfdlh -oyaml | grep "enableServiceLinks:"
enableServiceLinks: false
</code></pre>
<p>More examples with detailed explanations can be found in the <a href="https://kyverno.io/docs/writing-policies/" rel="nofollow noreferrer">Kyverno Writing Policies documentation</a>.</p>
|
<p>I'm creating multiple pods at the same time in OpenShift, and I also want to check that the containers inside the pods are working correctly.
Some of these containers can take a while to start up, and I don't want to wait for one pod to be fully running before starting up the other one.</p>
<p>Are there any Openshift / Kubernetes checks I can do to ensure a container has booted up, while also going ahead with other deployments?</p>
| <p>Please configure the <a href="https://cloud.redhat.com/blog/liveness-and-readiness-probes" rel="nofollow noreferrer">Liveness and Readiness Probes</a></p>
<blockquote>
<ul>
<li>Liveness : Under what circumstances is it appropriate to restart the pod?</li>
<li>Readiness : under what circumstances should we take the pod out of the list of service endpoints so that it no longer responds to
requests?</li>
</ul>
</blockquote>
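<p>A minimal sketch of both probes on a single container; the image, paths, ports, and timings below are placeholders to adapt to your application:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    readinessProbe:             # removed from service endpoints while failing
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:              # container is restarted when this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20
```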
|
<p>There is a field in the CRD called 'storage'</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  ...
  versions:
  - name: v1
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
  ...
</code></pre>
<p>What does this mean? <br>
All the documentation says is the above comment:</p>
<pre><code> One and only one version must be marked as the storage version.
</code></pre>
<p>It just doesn't help at all.</p>
| <p>A k8s resource (including custom resource) can have support for multiple API versions (say <code>v1beta1</code>, <code>v1</code>, etc.) at once. It's there for various reasons such as API stability and backward compatibility.</p>
<p>As you know, on creation of a resource object, k8s stores it in persistent storage such as <code>etcd</code>. The version with <code>storage: true</code> indicates the version to be used when persisting the resource to storage. It would be impractical to store multiple copies of an object; that's why <code>One and only one version must be marked as the storage version</code>. When an object is requested in another version (one with <code>storage: false</code>), a conversion webhook converts between the versions, applying the required schema changes and custom logic.</p>
<p>Ref:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#overview" rel="nofollow noreferrer">custom-resource-definition-versioning</a></li>
</ul>
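<p>For illustration, a sketch of a CRD with two versions (the group, names, and schemas are placeholders): <code>v1beta1</code> objects are converted on the fly, while <code>v1</code> is what actually lands in <code>etcd</code>:</p>

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  group: example.com
  names:
    kind: CronTab
    plural: crontabs
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true     # clients can still read/write this version
    storage: false   # but objects are converted before persisting
    schema: ...
  - name: v1
    served: true
    storage: true    # every object is persisted in this schema
    schema: ...
```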
|
<p>I'm creating multiple pods at the same time in OpenShift, and I also want to check that the containers inside the pods are working correctly.
Some of these containers can take a while to start up, and I don't want to wait for one pod to be fully running before starting up the other one.</p>
<p>Are there any Openshift / Kubernetes checks I can do to ensure a container has booted up, while also going ahead with other deployments?</p>
| <p><code>...Some of these containers can take a while to start-up</code></p>
<p>A liveness probe is not a good option for containers that require extended startup time, mainly because you would have to set a long initial delay to cater for startup; that delay is irrelevant afterwards and leaves you unable to detect problems promptly during execution. Instead, use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes" rel="nofollow noreferrer">startup probe</a> to detect problems during startup and hand over to the liveness probe upon success; should the startup probe fail, the container is restarted according to its <code>restartPolicy</code>.</p>
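<p>A sketch of that handover (endpoint and timings are placeholders): here the container gets up to <code>failureThreshold * periodSeconds</code> = 300s to start before the liveness probe takes over:</p>

```yaml
containers:
- name: slow-app
  image: my-slow-app            # placeholder
  startupProbe:                 # only active during startup
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30
    periodSeconds: 10
  livenessProbe:                # takes over after the startup probe succeeds
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 3
    periodSeconds: 10
```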
|
<p>I am trying to create a deployment with <code>kubectl run</code>. I want to specify replicas as part of the command.
I get:</p>
<pre><code>Error: unknown flag: --replicas
</code></pre>
<p>Is this deprecated in the current version? If so, how do we specify replicas?</p>
| <p>Using <code>kubectl run</code> for deployments has been deprecated for quite some time and removed in a recent kubernetes version.</p>
<p>If you want to create a deployment using the kubectl commandline, use</p>
<pre><code>kubectl create deployment $deploymentname --image $image --replicas $replicacount
</code></pre>
<p>See the official documentation <a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer">here</a> for more detail.</p>
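<p>You can also change the replica count of an existing deployment afterwards; a short sketch (the names below are placeholders):</p>

```shell
# Create a deployment with three replicas:
kubectl create deployment my-app --image=nginx --replicas=3

# Scale it later:
kubectl scale deployment my-app --replicas=5
```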
|
<p>I have this in a <code>selenium-hub-service.yml</code> file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: selenium-srv
spec:
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    nodePort: 30001
  type: NodePort
  sessionAffinity: None
</code></pre>
<p>When I do <code>kubectl describe service</code> on terminal, I get the endpoint of kubernetes service as <code>192.168.49.2:8443</code>. I then take that and point the browser to <code>192.168.49.2:30001</code> but browser is not able to reach that endpoint. I was expecting to reach selenium hub.</p>
<p>When I do <code>minikube service selenium-srv --url</code>, which gives me <code>http://127.0.0.1:56498</code> and point browser to it, I can reach the hub.</p>
<p>My question is: why am I not able to reach through <code>nodePort</code>?</p>
<p>I would like to do it through <code>nodePort</code> way because I know the port beforehand and if kubernetes service end point remains constant then it may be easy to point my tests to a known endpoint when I integrate it with azure pipeline.</p>
<p>EDIT: output of <code>kubectl get service</code>:</p>
<pre><code>NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.96.0.1      <none>        443/TCP          4d
selenium-srv   NodePort    10.96.34.117   <none>        4444:30001/TCP   2d2h
</code></pre>
| <p>Posted as a community wiki answer based on <a href="https://github.com/kubernetes/minikube/issues/11193" rel="nofollow noreferrer">this GitHub topic</a>. Feel free to expand it.</p>
<p>The information below assumes that you are using <a href="https://minikube.sigs.k8s.io/docs/drivers/docker/" rel="nofollow noreferrer">the default driver docker</a>.</p>
<hr />
<p>Minikube on macOS behaves a bit differently than on Linux. On Linux, you have special interfaces used for Docker and for connecting to the minikube node port, like this one:</p>
<pre><code>3: docker0:
...
inet 172.17.0.1/16
</code></pre>
<p>And this one:</p>
<pre><code>4: br-42319e616ec5:
...
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-42319e616ec5
</code></pre>
<p>There is no such solution implemented on macOS. <a href="https://github.com/kubernetes/minikube/issues/11193#issuecomment-826331511" rel="nofollow noreferrer">Check this</a>:</p>
<blockquote>
<p>This is a known issue, Docker Desktop networking doesn't support ports. You will have to use minikube tunnel.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/minikube/issues/11193#issuecomment-826708118" rel="nofollow noreferrer">Also</a>:</p>
<blockquote>
<p>there is no bridge0 on Macos, and it makes container IP unreachable from host.</p>
</blockquote>
<p>That means you can't connect to your service using IP address <code>192.168.49.2</code>.</p>
<p>Check also this article: <a href="https://docs.docker.com/desktop/mac/networking/#known-limitations-use-cases-and-workarounds" rel="nofollow noreferrer">Known limitations, use cases, and workarounds - Docker Desktop for Mac</a>:</p>
<blockquote>
<p><strong>There is no docker0 bridge on macOS</strong>
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a <code>docker0</code> interface on the host. This interface is actually within the virtual machine.</p>
</blockquote>
<blockquote>
<p><strong>I cannot ping my containers</strong>
Docker Desktop for Mac can’t route traffic to containers.</p>
</blockquote>
<blockquote>
<p><strong>Per-container IP addressing is not possible</strong>
The docker (Linux) bridge network is not reachable from the macOS host.</p>
</blockquote>
<p>There are a few ways to <a href="https://github.com/kubernetes/minikube/issues/11193#issuecomment-826708118" rel="nofollow noreferrer">set up minikube to use NodePort at the localhost address on Mac, like this one</a>:</p>
<pre><code>minikube start --driver=docker --extra-config=apiserver.service-node-port-range=32760-32767 --ports=127.0.0.1:32760-32767:32760-32767
</code></pre>
<p>You can also use <code>minikube service</code> command which will return a URL to connect to a service.</p>
|
<p>We are leveraging Kubernetes ingress with external service <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#external-authentication" rel="nofollow noreferrer">JWT authentication</a> using <code>auth-url</code> as a part of the ingress.</p>
<p>Now we want to use the <code>auth-cache-key</code> annotation to control the caching of the JWT token. At present, our external auth service just responds with <code>200</code>/<code>401</code> by looking at the token. All our components are backend micro-services with REST APIs, and an incoming request may not be a UI request. How do we fill in the <code>auth-cache-key</code> for an incoming JWT token?</p>
<pre><code>annotations:
  nginx.ingress.kubernetes.io/auth-url: http://auth-service/validate
  nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
  nginx.ingress.kubernetes.io/auth-cache-key: '$remote_user$http_authorization'
  nginx.ingress.kubernetes.io/auth-cache-duration: '1m'
  kubernetes.io/ingress.class: "nginx"
</code></pre>
<p>Looking at the example, <code>$remote_user$http_authorization</code> is given in the K8s documentation. However, I'm not sure whether <code>$remote_user</code> will be set in our case, because this is not external basic auth. How do we decide on the auth cache key in this case?</p>
<p>There aren't many examples or much documentation around this.</p>
| <p>Posting a general answer, as no further details or explanation were provided.</p>
<p>It's true that there is not much documentation around, so I decided to dig into the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">NGINX Ingress source code</a>.</p>
<p>The value set in annotation <code>nginx.ingress.kubernetes.io/auth-cache-key</code> is a variable <code>$externalAuth.AuthCacheKey</code> <a href="https://github.com/kubernetes/ingress-nginx/blob/07e54431ff069a5452a1f68ca3dbb98da0e0f35b/rootfs/etc/nginx/template/nginx.tmpl#L986" rel="nofollow noreferrer">in code</a>:</p>
<pre><code>{{ if $externalAuth.AuthCacheKey }}
set $tmp_cache_key '{{ $server.Hostname }}{{ $authPath }}{{ $externalAuth.AuthCacheKey }}';
set $cache_key '';
</code></pre>
<p>As can see, <code>$externalAuth.AuthCacheKey</code> is used by variable <code>$tmp_cache_key</code>, which is encoded to <code>base64</code> format and set as variable <code>$cache_key</code> using <a href="https://github.com/openresty/lua-nginx-module#name" rel="nofollow noreferrer">lua NGINX module</a>:</p>
<pre><code>rewrite_by_lua_block {
ngx.var.cache_key = ngx.encode_base64(ngx.sha1_bin(ngx.var.tmp_cache_key))
}
</code></pre>
<p>Then <code>$cache_key</code> is used to set <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_key" rel="nofollow noreferrer">variable <code>$proxy_cache_key</code> which defines a key for caching</a>:</p>
<pre><code>proxy_cache_key "$cache_key";
</code></pre>
<p>Based on the above code, we can assume that we can use any <a href="https://www.javatpoint.com/nginx-variables" rel="nofollow noreferrer">NGINX variable</a> to set <code>nginx.ingress.kubernetes.io/auth-cache-key</code> annotation. Please note that some variables are only available if the <a href="http://nginx.org/en/docs/varindex.html" rel="nofollow noreferrer">corresponding module is loaded</a>.</p>
<p>Example - I set following <code>auth-cache-key</code> annotation:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-cache-key: '$proxy_host$request_uri'
</code></pre>
<p>Then, on the NGINX Ingress controller pod, in the file <code>/etc/nginx/nginx.conf</code> there is following line:</p>
<pre><code>set $tmp_cache_key '{my-host}/_external-auth-Lw-Prefix$proxy_host$request_uri';
</code></pre>
<p>If you set the <code>auth-cache-key</code> annotation to a nonexistent NGINX variable, NGINX will throw the following error:</p>
<pre><code>nginx: [emerg] unknown "nonexistent_variable" variable
</code></pre>
<p>It's up to you which variables you need.</p>
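<p>Since the lua block above is just SHA-1 plus base64, you can reproduce the resulting <code>$cache_key</code> locally to sanity-check what NGINX will cache under, for example with <code>openssl</code> (assumed to be on your <code>PATH</code>; the sample key value is illustrative):</p>

```shell
# Mimic ngx.encode_base64(ngx.sha1_bin(tmp_cache_key)) from the lua block.
cache_key() { printf '%s' "$1" | openssl sha1 -binary | base64; }

# Illustrative $tmp_cache_key value (hostname + auth path + annotation value):
cache_key 'my-host/_external-auth-Lw-Prefix/api/v1'
```

<p>The printed string is exactly what ends up in <code>proxy_cache_key</code> for that request.</p>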
<p>Please also check the following articles and topics:</p>
<ul>
<li><a href="https://www.nginx.com/blog/nginx-caching-guide/" rel="nofollow noreferrer">A Guide to Caching with NGINX and NGINX Plus</a></li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/2862" rel="nofollow noreferrer">external auth provider results in a <em>lot</em> of external auth requests</a></li>
</ul>
|
<p>I have a pod running RabbitMQ. Below is the deployment manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: service-rabbitmq
spec:
  selector:
    app: service-rabbitmq
  ports:
  - port: 5672
    targetPort: 5672
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-rabbitmq
spec:
  selector:
    matchLabels:
      app: deployment-rabbitmq
  template:
    metadata:
      labels:
        app: deployment-rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:latest
        volumeMounts:
        - name: rabbitmq-data-volume
          mountPath: /var/lib/rabbitmq
        resources:
          requests:
            cpu: 250m
            memory: 128Mi
          limits:
            cpu: 750m
            memory: 256Mi
      volumes:
      - name: rabbitmq-data-volume
        persistentVolumeClaim:
          claimName: rabbitmq-pvc
</code></pre>
<p>When I deploy it in my local cluster, I see the pod running for a while and then crashing afterwards, so basically it goes into a crash loop. Here are the logs I got from the pod:</p>
<pre><code>$ kubectl logs deployment-rabbitmq-649b8479dc-kt9s4
2021-10-14 06:46:36.182390+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-10-14 06:46:36.221717+00:00 [info] <0.222.0> Feature flags: [ ] implicit_default_bindings
2021-10-14 06:46:36.221768+00:00 [info] <0.222.0> Feature flags: [ ] maintenance_mode_status
2021-10-14 06:46:36.221792+00:00 [info] <0.222.0> Feature flags: [ ] quorum_queue
2021-10-14 06:46:36.221813+00:00 [info] <0.222.0> Feature flags: [ ] stream_queue
2021-10-14 06:46:36.221916+00:00 [info] <0.222.0> Feature flags: [ ] user_limits
2021-10-14 06:46:36.221933+00:00 [info] <0.222.0> Feature flags: [ ] virtual_host_metadata
2021-10-14 06:46:36.221953+00:00 [info] <0.222.0> Feature flags: feature flag states written to disk: yes
2021-10-14 06:46:37.018537+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-10-14 06:46:37.018646+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-10-14 06:46:37.045601+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-10-14 06:46:37.635024+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-10-14 06:46:37.635139+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit@deployment-rabbitmq-649b8479dc-kt9s4/quorum/rabbit@deployment-rabbitmq-649b8479dc-kt9s4
2021-10-14 06:46:37.849041+00:00 [info] <0.259.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-10-14 06:46:37.877504+00:00 [noti] <0.264.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
</code></pre>
<p>This log isn't very helpful; I can't find any error message here. The only useful line could be <code>Application syslog exited with reason: stopped</code>, but as far as I understand, it isn't. The event log isn't helpful either:</p>
<pre><code>$ kubectl describe pods deployment-rabbitmq-649b8479dc-kt9s4
Name:         deployment-rabbitmq-649b8479dc-kt9s4
Namespace:    default
Priority:     0
Node:         docker-desktop/192.168.65.4
Start Time:   Thu, 14 Oct 2021 12:45:03 +0600
Labels:       app=deployment-rabbitmq
              pod-template-hash=649b8479dc
              skaffold.dev/run-id=7af5e1bb-e0c8-4021-a8a0-0c8bf43630b6
Annotations:  <none>
Status:       Running
IP:           10.1.5.138
IPs:
  IP:  10.1.5.138
Controlled By:  ReplicaSet/deployment-rabbitmq-649b8479dc
Containers:
  rabbitmq:
    Container ID:   docker://de309f94163c071afb38fb8743d106923b6bda27325287e82bc274e362f1f3be
    Image:          rabbitmq:latest
    Image ID:       docker-pullable://rabbitmq@sha256:d8efe7b818e66a13fdc6fdb84cf527984fb7d73f52466833a20e9ec298ed4df4
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    0
      Started:      Thu, 14 Oct 2021 13:56:29 +0600
      Finished:     Thu, 14 Oct 2021 13:56:39 +0600
    Ready:          False
    Restart Count:  18
    Limits:
      cpu:     750m
      memory:  256Mi
    Requests:
      cpu:     250m
      memory:  128Mi
    Environment:  <none>
    Mounts:
      /var/lib/rabbitmq from rabbitmq-data-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9shdv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  rabbitmq-data-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  rabbitmq-pvc
    ReadOnly:   false
  kube-api-access-9shdv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Normal   Pulled   23m (x6 over 50m)      kubelet  (combined from similar events): Successfully pulled image "rabbitmq:latest" in 4.267310231s
  Normal   Pulling  18m (x16 over 73m)     kubelet  Pulling image "rabbitmq:latest"
  Warning  BackOff  3m45s (x307 over 73m)  kubelet  Back-off restarting failed container
</code></pre>
<p>What could be the reason for this crash-loop?</p>
<blockquote>
<p><strong>NOTE:</strong> <code>rabbitmq-pvc</code> is successfully bound. No issue there.</p>
</blockquote>
<h2>Update:</h2>
<p><a href="https://stackoverflow.com/a/58610625/11317272">This answer</a> indicates that RabbitMQ should be deployed as a <strong>StatefulSet</strong>, so I adjusted the manifest like so:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: service-rabbitmq
spec:
  selector:
    app: service-rabbitmq
  ports:
  - name: rabbitmq-amqp
    port: 5672
  - name: rabbitmq-http
    port: 15672
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset-rabbitmq
spec:
  selector:
    matchLabels:
      app: statefulset-rabbitmq
  serviceName: service-rabbitmq
  template:
    metadata:
      labels:
        app: statefulset-rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:latest
        volumeMounts:
        - name: rabbitmq-data-volume
          mountPath: /var/lib/rabbitmq/mnesia
        resources:
          requests:
            cpu: 250m
            memory: 128Mi
          limits:
            cpu: 750m
            memory: 256Mi
      volumes:
      - name: rabbitmq-data-volume
        persistentVolumeClaim:
          claimName: rabbitmq-pvc
</code></pre>
<p>The pod still goes into a crash loop, but the logs are slightly different.</p>
<pre><code>$ kubectl logs statefulset-rabbitmq-0
2021-10-14 09:38:26.138224+00:00 [info] <0.222.0> Feature flags: list of feature flags found:
2021-10-14 09:38:26.158953+00:00 [info] <0.222.0> Feature flags: [x] implicit_default_bindings
2021-10-14 09:38:26.159015+00:00 [info] <0.222.0> Feature flags: [x] maintenance_mode_status
2021-10-14 09:38:26.159037+00:00 [info] <0.222.0> Feature flags: [x] quorum_queue
2021-10-14 09:38:26.159078+00:00 [info] <0.222.0> Feature flags: [x] stream_queue
2021-10-14 09:38:26.159183+00:00 [info] <0.222.0> Feature flags: [x] user_limits
2021-10-14 09:38:26.159236+00:00 [info] <0.222.0> Feature flags: [x] virtual_host_metadata
2021-10-14 09:38:26.159270+00:00 [info] <0.222.0> Feature flags: feature flag states written to disk: yes
2021-10-14 09:38:26.830814+00:00 [noti] <0.44.0> Application syslog exited with reason: stopped
2021-10-14 09:38:26.830925+00:00 [noti] <0.222.0> Logging: switching to configured handler(s); following messages may not be visible in this log output
2021-10-14 09:38:26.852048+00:00 [noti] <0.222.0> Logging: configured log handlers are now ACTIVE
2021-10-14 09:38:33.754355+00:00 [info] <0.222.0> ra: starting system quorum_queues
2021-10-14 09:38:33.754526+00:00 [info] <0.222.0> starting Ra system: quorum_queues in directory: /var/lib/rabbitmq/mnesia/rabbit@statefulset-rabbitmq-0/quorum/rabbit@statefulset-rabbitmq-0
2021-10-14 09:38:33.760365+00:00 [info] <0.290.0> ra: meta data store initialised for system quorum_queues. 0 record(s) recovered
2021-10-14 09:38:33.761023+00:00 [noti] <0.302.0> WAL: ra_log_wal init, open tbls: ra_log_open_mem_tables, closed tbls: ra_log_closed_mem_tables
</code></pre>
<p>The feature flags are now marked, as you can see. There are no other notable changes, so I still need help.</p>
<h2>! New Issue !</h2>
<p>Head over <a href="https://stackoverflow.com/q/69574698/11317272">here</a>.</p>
| <p>The pod gets OOMKilled (see <code>Last State</code> and <code>Reason</code> in the describe output above), so you need to assign more resources (memory) to the pod.</p>
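<p>RabbitMQ's memory footprint easily exceeds a 256Mi limit. A hedged sketch of an adjusted <code>resources</code> block; the exact values depend on your workload (you can also cap RabbitMQ itself via its memory high-watermark setting):</p>

```yaml
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: 750m
    memory: 1Gi
```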
|
<p>For <code>ReplicaSets</code>, I see there is a way to use a Horizontal Pod Autoscaler (HPA) and set max/min values for the number of replicas allowed. Is there a similar feature for <code>StatefulSets</code>, since they also allow you to specify the number of replicas to deploy initially? For example, how would I tell Kubernetes to limit the number of pods it can deploy for a given <code>StatefulSet</code>?</p>
| <p>I have posted a community wiki answer for better visibility.
<a href="https://stackoverflow.com/users/213269/jonas" title="102,968 reputation">Jonas</a> rightly mentioned in a comment:</p>
<blockquote>
<p>First sentence in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">documentation</a>:</p>
</blockquote>
<blockquote>
<p>"The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set"</p>
</blockquote>
<p>In summary, <strong>it is possible to set min/max replicas for a StatefulSet using HPA.</strong> In <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">this documentation</a> you will learn how HPA works, how to use it, what is supported, etc. HPA will not work only with objects that can't be scaled, for example, DaemonSets.</p>
<p>See also this <a href="https://stackoverflow.com/questions/54663845/apply-hpa-for-statefulset-in-kubernetes">related question</a>.</p>
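<p>For illustration, a minimal <code>autoscaling/v2</code> HPA targeting a StatefulSet (names and thresholds below are placeholders):</p>

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-statefulset-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet      # HPA can target StatefulSets directly
    name: my-statefulset
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```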
|
<p>I want to read file paths from a persistent volume and store these file paths in a persistent queue of sorts. This would probably be done by an application contained within a pod. This persistent volume will be updated constantly with new files, which means I will need to constantly update the queue with new file paths. What if the application adding items to the queue crashes? Kubernetes would be able to reboot the application, but I do not want to add file paths that are already in the queue. The app would need to know what exists in the queue before adding files, at least I would think. I was leaning toward RabbitMQ, but apparently you cannot search a queue for specific items with this tool. What can I do to account for this issue? I am running this cluster on Google Kubernetes Engine, so this would be on the Google Cloud Platform.</p>
| <p>Have you ever heard of <a href="https://kubemq.io/" rel="nofollow noreferrer">KubeMQ</a>? There is <a href="https://github.com/kubemq-io/kubemq-community" rel="nofollow noreferrer">a KubeMQ community</a> you can turn to for guides and help.</p>
<p>As an alternative solution, you may find useful the <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">guide in the official Kubernetes documentation</a> on creating a work queue with Redis.</p>
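<p>Whichever queue you choose, the "don't re-add known paths" part can be handled with a separate persistent "seen" set that is checked before enqueueing (in Redis this could be a SET consulted before pushing to a LIST). A minimal local sketch of that idea, with plain files standing in for the Redis structures:</p>

```shell
rm -f seen.txt queue.txt     # start from a clean state for this demo

# enqueue_once: append a file path to the queue only if it was never seen.
# seen.txt plays the role of the "seen" set; queue.txt is the queue itself.
enqueue_once() {
  local path="$1"
  if ! grep -qxF "$path" seen.txt 2>/dev/null; then
    printf '%s\n' "$path" >> seen.txt
    printf '%s\n' "$path" >> queue.txt
  fi
}

enqueue_once /data/a.csv
enqueue_once /data/b.csv
enqueue_once /data/a.csv     # duplicate, silently skipped

cat queue.txt                # each path appears exactly once
```

<p>The key design point is that the dedup check and the queue are separate structures, so a crashed-and-restarted producer can safely re-scan the volume without double-enqueueing.</p>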
|
<p>Something wrong happened with my RPi 4 cluster based on k3sup.</p>
<p>Everything worked as expected until yesterday, when I had to reinstall the master node's operating system. For example, I have Redis installed on the master node and some pods on worker nodes. My pods can not connect to Redis via DNS: <code>redis-master.database.svc.cluster.local</code> (but they could the day before).</p>
<p>It throws an error that it cannot resolve the domain when I test with busybox like this:</p>
<pre><code>kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup redis-master.database.svc.cluster.local
</code></pre>
<p>When I want to ping my service with IP (also on busybox):</p>
<pre><code>kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- ping 10.43.115.159
</code></pre>
<p>It shows 100% packet loss.</p>
<p>I'm able to resolve the DNS issue by simply replacing the coredns config (replacing the line <code>forward . /etc/resolv.conf</code> with <code>forward . 192.168.1.101</code>), but I don't think that's a good solution, as earlier I didn't have to do that.</p>
<p>Also, it only solves the domain-to-IP mapping; the connection via IP still doesn't work.</p>
<p>My nodes:</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node-4 Ready <none> 10h v1.19.15+k3s2 192.168.1.105 <none> Debian GNU/Linux 10 (buster) 5.10.60-v8+ containerd://1.4.11-k3s1
node-3 Ready <none> 10h v1.19.15+k3s2 192.168.1.104 <none> Debian GNU/Linux 10 (buster) 5.10.60-v8+ containerd://1.4.11-k3s1
node-1 Ready <none> 10h v1.19.15+k3s2 192.168.1.102 <none> Debian GNU/Linux 10 (buster) 5.10.60-v8+ containerd://1.4.11-k3s1
node-0 Ready master 10h v1.19.15+k3s2 192.168.1.101 <none> Debian GNU/Linux 10 (buster) 5.10.63-v8+ containerd://1.4.11-k3s1
node-2 Ready <none> 10h v1.19.15+k3s2 192.168.1.103 <none> Debian GNU/Linux 10 (buster) 5.10.60-v8+ containerd://1.4.11-k3s1
</code></pre>
<p>Master node has a taint: <code>role=master:NoSchedule</code>.</p>
<p>Any ideas?</p>
<p><strong>UPDATE 1</strong></p>
<p>I'm able to connect to the redis pod. <code>/etc/resolv.conf</code> from <code>redis-master-0</code>:</p>
<pre><code>search database.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
</code></pre>
<p>All services on kubernetes:</p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 6d9h
kube-system traefik-prometheus ClusterIP 10.43.94.137 <none> 9100/TCP 6d8h
registry proxy-docker-registry ClusterIP 10.43.16.139 <none> 5000/TCP 6d8h
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d9h
kube-system metrics-server ClusterIP 10.43.101.30 <none> 443/TCP 6d9h
database redis-headless ClusterIP None <none> 6379/TCP 5d19h
database redis-master ClusterIP 10.43.115.159 <none> 6379/TCP 5d19h
kube-system traefik LoadBalancer 10.43.221.89 192.168.1.102,192.168.1.103,192.168.1.104,192.168.1.105 80:30446/TCP,443:32443/TCP 6d8h
</code></pre>
| <p>There was one more thing that was not mentioned: I'm using OpenVPN with a NordVPN server list on the master node, and privoxy on the worker nodes.</p>
<p>When you install and run OpenVPN before starting the Kubernetes master, OpenVPN adds rules that block Kubernetes networking. As a result, coredns does not work and you can't reach any pod via IP either.</p>
<p>I'm using an RPi 4 cluster, so for me it was good enough to simply re-install the master node, install Kubernetes first, and then configure OpenVPN. Now everything is working as expected.</p>
<p>Alternatively, it's enough to order your systemd units by adding <code>After</code> or <code>Before</code> to the service definition. My VPN systemd service looks like this:</p>
<pre><code>[Unit]
Description=Enable VPN for System
After=network.target
After=k3s.service
[Service]
Type=simple
ExecStart=/etc/openvpn/start-nordvpn-server.sh
[Install]
WantedBy=multi-user.target
</code></pre>
<p>This guarantees that the VPN starts only after Kubernetes.</p>
|
<p>I noticed that during the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a> pod creation/termination, there is a huge number of events created. Further "investigation" showed that each nginx pod creates</p>
<pre><code>42s Normal Sync ingress/name Scheduled for sync
</code></pre>
<p>event for each ingress object.</p>
<p>For perspective, with some approximate imaginary numbers:</p>
<ul>
<li>The moment you <code>kubectl rollout restart ingress-nginx</code> all ingress-nginx pods will terminate (not simultaneously as there is a proper PDB setup).</li>
<li>During restart, each pod will create <code>sync</code> event object for each ingress object in the cluster.</li>
<li>So if there are 100 ingress-nginx pods with 500 ingress objects, that will spawn 50k sync events.</li>
</ul>
<blockquote>
<p>I could not find any mentions about it in the docs/ingress-nginx issues</p>
</blockquote>
<p><strong>The question</strong>: is it expected behavior?</p>
| <p>This is expected behavior.</p>
<p>As we can see <a href="https://github.com/kubernetes/ingress-nginx/blob/6499393772ee786179b006b423e11950912e8295/internal/ingress/controller/store/store.go#L378" rel="nofollow noreferrer">here</a>, this is an informer, which creates the sync event for each valid ingress. In turn, this informer is added to the store on each ingress-controller pod, see more <a href="https://docs.nginx.com/nginx-ingress-controller/intro/how-nginx-ingress-controller-works/" rel="nofollow noreferrer">here</a>.</p>
|
<p>We want to use <a href="https://paketo.io" rel="nofollow noreferrer">Paketo.io</a> / <a href="https://buildpacks.io" rel="nofollow noreferrer">CloudNativeBuildpacks (CNB)</a> in <a href="https://docs.gitlab.com/ee/ci/" rel="nofollow noreferrer">GitLab CI</a> in the simplest way possible. Our GitLab setup uses an AWS EKS cluster with unprivileged GitLab CI Runners leveraging <a href="https://docs.gitlab.com/runner/executors/kubernetes.html" rel="nofollow noreferrer">the Kubernetes executor</a>. We also <strong>don't want to introduce security risks</strong> by <a href="https://docs.gitlab.com/runner/executors/kubernetes.html#using-docker-in-your-builds" rel="nofollow noreferrer">using Docker in our builds</a>. So we neither have our host’s <code>/var/run/docker.sock</code> exposed nor want to use <code>docker:dind</code>.</p>
<p>We found some guides on how to use Paketo with GitLab CI, like this one: <a href="https://tanzu.vmware.com/developer/guides/gitlab-ci-cd-cnb/" rel="nofollow noreferrer">https://tanzu.vmware.com/developer/guides/gitlab-ci-cd-cnb/</a>. But as described beneath the headline <code>Use Cloud Native Buildpacks with GitLab in GitLab Build Job WITHOUT Using the GitLab Build Template</code>, the approach relies on Docker and the pack CLI. We tried to reproduce this in our <code>.gitlab-ci.yml</code>, which looks like this:</p>
<pre><code>image: docker:20.10.9
stages:
- build
before_script:
- |
echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)"
apk add --no-cache curl
(curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.21.1/pack-v0.21.1-linux.tgz" | tar -C /usr/local/bin/ --no-same-owner -xzv pack)
build-image:
stage: build
script:
- pack --version
- >
pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest
--builder paketobuildpacks/builder:base
--path .
</code></pre>
<p>But as outlined, our setup does not support Docker, and we end up with the following error in our logs:</p>
<pre><code>...
$ echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)" # collapsed multi-line command
install pack CLI (see https://buildpacks.io/docs/tools/pack/)
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (7.79.1-r0)
(4/4) Installing curl (7.79.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 12 MiB in 26 packages
pack
$ pack --version
0.21.1+git-e09e397.build-2823
$ pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest --builder paketobuildpacks/builder:base --path .
ERROR: failed to build: failed to fetch builder image 'index.docker.io/paketobuildpacks/builder:base': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
</code></pre>
<p>Any idea on how to use Paketo Buildpacks with GitLab CI without having Docker present inside our GitLab Kubernetes runners (which seems to be kind of a best practice)? We also don't want our setup to become too complex - e.g. by adding <a href="https://buildpacks.io/docs/tools/kpack/" rel="nofollow noreferrer">kpack</a>.</p>
| <h2>TLDR;</h2>
<p>Use the Buildpack's lifecycle directly inside your <code>.gitlab-ci.yml</code> (<a href="https://gitlab.com/jonashackt/microservice-api-spring-boot/-/blob/main/.gitlab-ci.yml" rel="noreferrer">here's a fully working example</a>):</p>
<pre><code>image: paketobuildpacks/builder
stages:
- build
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
- mkdir ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
build-image:
stage: build
script:
- /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
</code></pre>
<hr />
<h2>The details: "using the lifecycle directly"</h2>
<p>There are ongoing discussions about this topic. Especially have a look into <a href="https://github.com/buildpacks/pack/issues/564" rel="noreferrer">https://github.com/buildpacks/pack/issues/564</a> and <a href="https://github.com/buildpacks/pack/issues/413#issuecomment-565165832" rel="noreferrer">https://github.com/buildpacks/pack/issues/413#issuecomment-565165832</a>. As stated there:</p>
<blockquote>
<p>If you're looking to build images in CI (not locally), I'd encourage
you to use the lifecycle directly for that, so that you don't need
Docker. Here's an example:</p>
</blockquote>
<p>The link to the example is broken, but it refers to the <a href="https://tekton.dev/" rel="noreferrer">Tekton</a> implementation on <a href="https://github.com/tektoncd/catalog/blob/main/task/buildpacks/0.3/buildpacks.yaml" rel="noreferrer">how to use buildpacks in a Kubernetes environment</a>. Here we get a first clue about what Stephen Levine referred to as <code>"to use the lifecycle directly"</code>. Inside it the crucial point is the usage of <code>command: ["/cnb/lifecycle/creator"]</code>. <strong>So this is the lifecycle everyone is talking about!</strong> And there's good documentation about this command, which can be found <a href="https://github.com/buildpacks/rfcs/blob/main/text/0026-lifecycle-all.md#usage" rel="noreferrer">in this CNB RFC</a>.</p>
<h2>Choosing a good image: paketobuildpacks/builder:base</h2>
<p>So how do we develop a working <code>.gitlab-ci.yml</code>? Let's start simple. Digging <a href="https://github.com/tektoncd/catalog/blob/main/task/buildpacks/0.3/buildpacks.yaml" rel="noreferrer">into the Tekton implementation</a> you'll see that the lifecycle command is executed inside an environment defined in <code>BUILDER_IMAGE</code>, which itself is documented as <code>The image on which builds will run (must include lifecycle and compatible buildpacks).</code> That sounds familiar! Can't we simply pick the builder image <code>paketobuildpacks/builder:base</code> from our pack CLI command? Let's try this locally on our workstation before committing too much noise into our GitLab. Choose a project you want to build (if you like, you can clone the example Spring Boot app I created at <a href="https://gitlab.com/jonashackt/microservice-api-spring-boot" rel="noreferrer">gitlab.com/jonashackt/microservice-api-spring-boot</a>) and run:</p>
<pre><code>docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app paketobuildpacks/builder bash
</code></pre>
<p>Now, inside the container powered by the <code>paketobuildpacks/builder</code> image, try to run the Paketo lifecycle directly with:</p>
<pre><code>/cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest
</code></pre>
<p>I only used the <code>-app</code> parameter of <a href="https://github.com/buildpacks/rfcs/blob/main/text/0026-lifecycle-all.md#usage" rel="noreferrer">the many possible parameters for the <code>creator</code> command</a>, since most of them have quite good defaults. But as our app directory is the current directory rather than the default <code>/workspace</code>, I configured it explicitly. We also need to define an <code><image-name></code> at the end, which will simply be used as the resulting container image name.</p>
<h2>The first .gitlab-ci.yml</h2>
<p>Both commands worked on my local workstation, so let's finally create a <code>.gitlab-ci.yml</code> using this approach (<a href="https://gitlab.com/jonashackt/microservice-api-spring-boot/-/blob/add-gitlab-ci/.gitlab-ci.yml" rel="noreferrer">here's a fully working example <code>.gitlab-ci.yml</code></a>):</p>
<pre><code>image: paketobuildpacks/builder
stages:
- build
build-image:
stage: build
script:
- /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
</code></pre>
<h2>docker login without docker</h2>
<p>As we don't have <code>docker</code> available inside our Kubernetes Runners, we can't log in to the GitLab Container Registry <a href="https://docs.gitlab.com/ee/user/packages/container_registry/#authenticate-by-using-gitlab-cicd" rel="noreferrer">as described in the docs</a>. So the following error occurred to me using this first approach:</p>
<pre><code>===> ANALYZING
ERROR: failed to get previous image: connect to repo store "gitlab.yourcompanyhere.cloud:4567/yourgroup/microservice-api-spring-boot:latest": GET https://gitlab.yourcompanyhere.cloud/jwt/auth?scope=repository%3Ayourgroup%2Fmicroservice-api-spring-boot%3Apull&service=container_registry: DENIED: access forbidden
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
</code></pre>
<p>Using the approach <a href="https://stackoverflow.com/a/46422186/4964553">described in this SO answer</a> fixed the problem. We need to create a <code>~/.docker/config.json</code> containing the GitLab Container Registry login information - and then <a href="https://github.com/buildpacks/spec/blob/main/platform.md#registry-authentication" rel="noreferrer">the Paketo build will pick it up, as stated in the docs</a>:</p>
<blockquote>
<p>If <code>CNB_REGISTRY_AUTH</code> is unset and a docker config.json file is
present, the lifecycle SHOULD use the contents of this file to
authenticate with any matching registry.</p>
</blockquote>
<p>Inside our <code>.gitlab-ci.yml</code> this could look like:</p>
<pre><code># We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
- mkdir ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
</code></pre>
<h2>Our final .gitlab-ci.yml</h2>
<p>As we're using <code>image: paketobuildpacks/builder</code> at the top of our <code>.gitlab-ci.yml</code>, we can now leverage the lifecycle directly - which is what we wanted to do in the first place. Just remember to use the correct GitLab CI variables to describe your <code><image-name></code> like this:</p>
<pre><code>/cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
</code></pre>
<p>Otherwise the Buildpack analyser step will break, and the image won't get pushed to the GitLab Container Registry in the end. So our final <code>.gitlab-ci.yml</code> looks like this (<a href="https://gitlab.com/jonashackt/microservice-api-spring-boot/-/blob/main/.gitlab-ci.yml" rel="noreferrer">here's the fully working example</a>):</p>
<pre><code>image: paketobuildpacks/builder
stages:
- build
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
- mkdir ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
build-image:
stage: build
script:
- /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
</code></pre>
<p>Our builds should now run successfully using Paketo/Buildpacks without pack CLI and Docker:</p>
<p><a href="https://i.stack.imgur.com/cuG16.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cuG16.png" alt="enter image description here" /></a></p>
<p>See <a href="https://gitlab.com/jonashackt/microservice-api-spring-boot/-/jobs/1679982122" rel="noreferrer">the full log of the example project here</a>.</p>
|
<p>I want to read file paths from a persistent volume and store these file paths into a persistent queue of sorts. This would probably be done with an application contained within a pod. This persistent volume will be updated constantly with new files. This means that I will need to constantly update the queue with new file paths. What if this application that is adding items to the queue crashes? Kubernetes would be able to reboot the application, but I do not want to add in file paths that are already in the queue. The app would need to know what exists in the queue before adding in files, at least I would think. I was leaning on RabbitMQ, but apparently you cannot search a queue for specific items with this tool. What can I do to account for this issue? I am running this cluster on Google Kubernetes Engine, so this would be on the Google Cloud Platform.</p>
| <blockquote>
<p>What if this application that is adding items to the queue crashes?
Kubernetes would be able to reboot the application, but I do not want
to add in file paths that are already in the queue. The app would need
to know what exists in the queue before adding in files</p>
</blockquote>
<p>If you are also looking for a search option, I would suggest using Redis instead of a plain queue. That said, I have pretty good experience running RabbitMQ on K8s when it comes to scaling and elasticity, and there is an <a href="https://github.com/helm/charts/tree/master/stable/rabbitmq-ha" rel="nofollow noreferrer">HA Helm chart of RabbitMQ</a> you can use.</p>
<p>I would recommend checking out Redis and using it as the backend to store the data. If you still want to create a queue on top of it, you can use Bull: <a href="https://github.com/OptimalBits/bull" rel="nofollow noreferrer">https://github.com/OptimalBits/bull</a></p>
<p>It uses Redis as the backing store, and you can create the queue using this library.</p>
<p>With Redis you will be taking a continuous dump every second or so, so there is little chance of missing data; RabbitMQ, on the other hand, offers persistent messaging plus options for acknowledgments.</p>
<p>In the end it comes down to the actual requirement you want to implement. If your application needs strict ordering of messages, Redis may not fit; in that case RabbitMQ would be best.</p>
|
<p>We need to scale up pods in our AKS Cluster. </p>
<p>Somethimes we need to spin up so much, that multiple extra nodes are needed. </p>
<p>Our Docker image is 20 GB to pull. <br>
(Yes, I know it's way too big, but it's impossible to reduce the size because it's an older application.) </p>
<p>But I'm wondering: if we spin up 5 new pods on 1 new node, do all 5 start pulling the 20 GB image at startup, since the image is not cached? </p>
<p>If they do, is there a way to prevent this? </p>
<p>I'd prefer to have 4 pods wait for pod 1 to download the image rather than downloading it 5 times.</p>
| <p>Images are cached on the node to speed up newly created pods on the same node.</p>
<p>A very simple way to prevent that would be starting one pod per node at first and then scaling it up so the container runtime can reuse the previously pulled image.</p>
|
<p>I heard ElasticSearch is already changing its license to SSPL. Because of that, it will not be considered as an OSS (OpenSource Software) anymore.</p>
<p>Do you know of a better OSS as replacement for ElasticSearch?</p>
<p>Hope suggested OSS has an official image in dockerhub since I will be using it also in Kubernetes.</p>
| <p>The <a href="https://opensearch.org/" rel="nofollow noreferrer">OpenSearch</a> project provided by AWS could be a good alternative.
It's forked from Elasticsearch and provides the same <a href="https://aws.amazon.com/fr/blogs/aws/amazon-elasticsearch-service-is-now-amazon-opensearch-service-and-supports-opensearch-10/" rel="nofollow noreferrer">features</a>.</p>
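<p>There is an official <code>opensearchproject/opensearch</code> image on Docker Hub, so a quick single-node instance for local testing can be started like this (a sketch; newer image versions may require additional settings such as an initial admin password, so check the image docs):</p>

```shell
docker run -d --name opensearch \
  -p 9200:9200 -p 9600:9600 \
  -e "discovery.type=single-node" \
  opensearchproject/opensearch:latest
```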
|
<p>I'm looking to create a small web application that lists some data about the ingresses in my cluster. The application will be hosted in the cluster itself, so I assume i'm going to need a service account attached to a backend application that calls the kubernetes api to get the data, then serves that up to the front end through a GET via axios etc. Am I along the right lines here?</p>
| <p>You can use the JavaScript Kubernetes client package for Node directly in your Node application to access the Kubernetes API server over its REST API.</p>
<pre><code>npm install @kubernetes/client-node
</code></pre>
<p>There are several ways to provide authentication information to your Kubernetes client.</p>
<p>This is code that worked for me:</p>
<pre><code>const k8s = require('@kubernetes/client-node');
const cluster = {
name: '<cluster-name>',
server: '<server-address>',
caData: '<certificate-data>'
};
const user = {
name: '<cluster-user-name>',
certData: '<certificate-data>',
keyData: '<certificate-key>'
};
const context = {
name: '<context-name>',
user: user.name,
cluster: cluster.name,
};
const kc = new k8s.KubeConfig();
kc.loadFromOptions({
clusters: [cluster],
users: [user],
contexts: [context],
currentContext: context.name,
});
const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api);
k8sApi.listNamespacedIngress('<namespace>').then((res) => {
console.log(res.body);
});
</code></pre>
<p>You need to create the API client according to your ingress version; in my case I was using <code>NetworkingV1Api</code>.</p>
<p>You can get further options from
<a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">https://github.com/kubernetes-client/javascript</a></p>
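<p>Since the question says the backend will be hosted in the cluster with a service account, note that the same package can pick up the pod's mounted service-account credentials instead of hard-coding certificates (a sketch; the service account still needs RBAC permission to list ingresses):</p>

```javascript
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
// Reads the token and CA from /var/run/secrets/kubernetes.io/serviceaccount
kc.loadFromCluster();

const api = kc.makeApiClient(k8s.NetworkingV1Api);
api.listNamespacedIngress('default').then((res) => console.log(res.body));
```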
|
<p>I want to list a node's pods and pod statues, eg.</p>
<pre><code>Node A
Pod1 Status
Pod2 Status
Node B
Pod1 Status
Pod2 Status
</code></pre>
<p>Is there a <code>kubectl</code> command I can use for this?</p>
| <p>Try this:<br />
<code>kubectl get pods -A --field-selector spec.nodeName=<node name> | awk '{print $2" "$4}'</code></p>
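<p>As an alternative that avoids awk, <code>kubectl</code>'s <code>custom-columns</code> output combined with <code>--sort-by</code> gives a node-grouped listing across all nodes (both are standard kubectl flags):</p>

```shell
kubectl get pods -A --sort-by=.spec.nodeName \
  -o custom-columns=NODE:.spec.nodeName,POD:.metadata.name,STATUS:.status.phase
```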
|
<p>I have a Docker container with MariaDB running in Microk8s (running on a single Unix machine).</p>
<pre><code># Hello World Deployment YAML
apiVersion: apps/v1
kind: Deployment
metadata:
name: mariadb
spec:
selector:
matchLabels:
app: mariadb
template:
metadata:
labels:
app: mariadb
spec:
containers:
- name: mariadb
image: mariadb:latest
env:
- name: MARIADB_ROOT_PASSWORD
value: sa
ports:
- containerPort: 3306
</code></pre>
<p>These are the logs:</p>
<pre><code>(...)
2021-09-30 6:09:59 0 [Note] mysqld: ready for connections.
Version: '10.6.4-MariaDB-1:10.6.4+maria~focal' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
</code></pre>
<p>Now,</p>
<ul>
<li>connecting to port 3306 on the machine does not work.</li>
<li>connecting after exposing the pod with a service (any type) on port 8081 also does not work.</li>
</ul>
<p>How can I get the connection through?</p>
| <p>The answer was given in the comments section, but to clarify I am posting the solution here as a Community Wiki.</p>
<p>In this case, the problem with the connection was resolved by setting <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#selector" rel="nofollow noreferrer"><code>spec.selector</code></a>.</p>
<blockquote>
<p>The <code>.spec.selector</code> field defines how the Deployment finds which Pods to manage. In this case, you select a label that is defined in the Pod template (<code>app: nginx</code>).</p>
</blockquote>
<blockquote>
<p><code>.spec.selector</code> is a required field that specifies a <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">label selector</a> for the Pods targeted by this Deployment.</p>
</blockquote>
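<p>For completeness, a Service whose <code>spec.selector</code> matches the pod template labels from the question could look like this (a sketch; the Service name and port mapping are assumptions):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  selector:
    app: mariadb      # must match the labels on the Deployment's pod template
  ports:
  - port: 3306
    targetPort: 3306
```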
|
<p>Prometheus-operator seems to generate a <code>prometheus-operated</code> service which just points to the Prometheus instance at port 9090.</p>
<p>What does this service do? We define other services to point at our Prometheus cluster.</p>
<p>What would be the repercussions of removing the <code>prometheus-operated</code> service?</p>
| <p>Based on the documentation, <code>prometheus-operated</code> is a governing service for statefulsets, in other words it's Prometheus's service endpoint which is used for its functioning.</p>
<p>Below are some references:</p>
<blockquote>
<p>What you are referring to is the governing service that point to the
synthesized Prometheus statefulsets. In the case of a second
Prometheus in the same namespace the same governing service will be
referenced, which in turn will add the IPs of all pods of the separate
Prometheus instances to the same governing service.</p>
</blockquote>
<p>Taken from <a href="https://github.com/prometheus-operator/prometheus-operator/issues/3805" rel="nofollow noreferrer">Rename Prometheus Operator Service #3805</a></p>
<p>Also another reference to the same idea:</p>
<blockquote>
<p>The Prometheus Operator reconciles services called prometheus-operated
and alertmanager-operated, which are used as governing Services for
the StatefulSets. To perform this reconciliation</p>
</blockquote>
<p>Taken from <a href="https://github.com/prometheus-operator/prometheus-operator/blob/1b13e573c7ad533010407544f88dc4a78320b134/Documentation/rbac.md" rel="nofollow noreferrer">Prometheus operator/Documentation/readme</a></p>
<p>One more commit that confirms that <code>prometheus-operated</code> is a governing service:</p>
<blockquote>
<p>pkg/prometheus: add Thanos service port to governing service
Currently, for service discovery of Prometheus instances a separate
headless service must be deployed.</p>
<p>This adds the Thanos grpc port to the existing Prometheus statefulset
governing service if a Thanos sidecar is given in the Prometheus
custom resource specification.</p>
<p>This way no additional service has to be deployed.</p>
</blockquote>
<p>Taken from <a href="https://github.com/prometheus-operator/prometheus-operator/pull/2754/commits/86d42de53ca2c2ea0d32264818d9750f409979da" rel="nofollow noreferrer">pkg/prometheus: add Thanos service port to governing service #2754</a></p>
<hr />
<blockquote>
<p>What would be the repercussions of removing the prometheus-operated service?</p>
</blockquote>
<p>It's quite an old answer, but it still holds: this service is part of Prometheus, and Prometheus components will fail if it is removed:</p>
<blockquote>
<p>The prometheus-operated service is an implementation detail of the
Prometheus Operator, it should not be touched, especially as all
Prometheus instances will be registered in this service</p>
</blockquote>
<p>Taken from <a href="https://github.com/prometheus-operator/prometheus-operator/issues/522#issuecomment-319016693" rel="nofollow noreferrer">kube-prometheus chart creates 3 different services pointing to the same pods #522</a></p>
<hr />
<p><a href="https://github.com/prometheus-operator/prometheus-operator/blob/master/pkg/prometheus/statefulset.go#L267-L313" rel="nofollow noreferrer">Code where this service is created</a></p>
<p>Taking into consideration that:</p>
<pre><code>const (
governingServiceName = "prometheus-operated"
...
)
</code></pre>
|
<p>I have a k3s cluster with my application pods running. In all the pods, when I log in (with the <code>kubectl exec <pod_name> -n <ns> -it /bin/bash</code> command), there is a <strong><code>kubernetes.io</code></strong> directory which contains a secret token that anyone can read if they do <code>cat token</code>:</p>
<pre><code>root@Ubuntu-VM: kubectl exec app-test-pod -n app-system -it /bin/bash
root@app-test-pod:/var/run/secrets/kubernetes.io/serviceaccount# ls -lhrt
total 0
lrwxrwxrwx 1 root root 12 Oct 11 12:07 token -> ..data/token
lrwxrwxrwx 1 root root 16 Oct 11 12:07 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 13 Oct 11 12:07 ca.crt -> ..data/ca.crt
</code></pre>
<p>This seems like a security threat (or vulnerability). Can someone let me know if there is a way to remove this dependency from the pod so that I can prevent users (even root users) from accessing this secret if they log in to the pod? Also, if this is possible, how will pods communicate with the API server?</p>
| <p>To clarify a couple of things:</p>
<blockquote>
<p>This seems a security threat (or vulnerability).</p>
</blockquote>
<p>It actually isn't a vulnerability unless you configured it to be one.
The ServiceAccount you are talking about is the <code>deafult</code> one which exists in every namespace.
By default that ServiceAccount does not have any permissions that make it unsafe.
If you want to you <em>can</em> add certain rights to the <code>default</code> ServiceAccount <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">using RBAC</a>. For example you can configure it to be able to list all Pods in the same namespace but unless you do that, the ServiceAccount is not considered a vulnerability at all and will not be able to retrieve any useful information.
This applies to <em>all</em> ServiceAccounts, not only the <code>default</code> one.</p>
<blockquote>
<p>Can someone let me know if there is a way to remove this dependency from pod so that I can restrict users (even root users also) to access this secret if they login to pod ?</p>
</blockquote>
<p>Yes it is possible, actually there are two options:</p>
<p>Firstly, there is a field called <code>automountServiceAccountToken</code> in the Pod <code>spec</code> section which you can set to <code>false</code> if you do not want the ServiceAccount token to be mounted at all.</p>
<p>Here is an example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
automountServiceAccountToken: false
[...]
</code></pre>
<p>Other than that you can create/edit a ServiceAccount and assign it the <code>automountServiceAccountToken: false</code> field:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
namespace: default
[...]
</code></pre>
<blockquote>
<p>Also If this is possible then how will pods do communicate with the API Server ?</p>
</blockquote>
<p>Pods actually do not need to communicate with the API server at all.
Even when using features like a <code>livenessProbe</code>, it is not necessary for Pods to talk to the API server.
As a matter of fact, most Pods <em>never</em> communicate with the API server.
The only reason a Pod would need to communicate with the API server is if it is planning on directly interacting with the cluster. Usually this is never required unless you want to write a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="noreferrer">custom operator</a> or something similar.
You will still be able to use all the functionality a Pod has to offer you even if you do not mount the ServiceAccount because all those features are based around a Kubernetes communicating with your Pod not the other way around (a <code>livenessProbe</code> for example is being evaluated by <code>kubelet</code>, there is no need at all for the Pod to communicate with the API).</p>
|
<p>I am new to Terraform and still learning.
I have to provide two roles (built-in and custom) in a resource block; is this supported in Terraform?</p>
<pre><code>variable "role_definition_id" {
type = list(string)
description = "Role definition id"
default = ["READER", "Custom_role"]
}
resource "azurerm_role_assignment" "example" {
for_each = toset(local.principal_ids)
scope = data.azurerm_subscription.primary.id
role_definition_name = var.role_definition_id
principal_id = each.key
}
</code></pre>
<p>error:</p>
<pre><code> Error: Incorrect attribute value type
│
│ on namespace/main.tf line 109, in resource "azurerm_role_assignment" "example":
│ 109: role_definition_name = var.role_definition_id
│ ├────────────────
│ │ var.role_definition_id is a list of dynamic, known only after apply
│
│ Inappropriate value for attribute "role_definition_name": string required.
</code></pre>
<p>I am already using for_each to pull in a list of principal_ids within the resource block, so I am wondering if there is a way to set this in a loop, so both the roles are applicable to the concerned principal_id.</p>
<p>I am unable to see any good example where there are multiple roles in a resource block..
Any suggestions?</p>
| <p><code>role_definition_name</code> should be a <strong>string</strong>, not a list. You can try the following:</p>
<pre><code>resource "azurerm_role_assignment" "example" {
  for_each             = { for idx, value in tolist(local.principal_ids) : idx => value }
  scope                = data.azurerm_subscription.primary.id
  role_definition_name = element(var.role_definition_id, each.key)
  principal_id         = each.value
}
</code></pre>
<p>The exact form depends on how <code>local.principal_ids</code> is defined, but sadly that information is not provided in the question.</p>
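<p>If the intent is instead to give <em>every</em> principal <em>both</em> roles (rather than pairing principals and roles one-to-one), a <code>setproduct</code>-based sketch would work, assuming <code>local.principal_ids</code> is a list of plain ID strings:</p>
<pre><code>locals {
  role_assignments = {
    for pair in setproduct(local.principal_ids, var.role_definition_id) :
    "${pair[0]}-${pair[1]}" => { principal_id = pair[0], role = pair[1] }
  }
}

resource "azurerm_role_assignment" "example" {
  for_each             = local.role_assignments
  scope                = data.azurerm_subscription.primary.id
  role_definition_name = each.value.role
  principal_id         = each.value.principal_id
}
</code></pre>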
|
<p>I have Docker Desktop (Windows) installed, and have turned on Kubernetes.</p>
<p>I've installed the Nginx ingress controller by running the following command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>(<a href="https://kubernetes.github.io/ingress-nginx/deploy/#docker-desktop" rel="nofollow noreferrer">above command from docs</a>)</p>
<p>I've applied the following YAML...</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blah
spec:
replicas: 1
selector:
matchLabels:
app: blah
template:
metadata:
labels:
app: blah
spec:
containers:
- name: blah
image: tutum/hello-world
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: blah
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30010
selector:
app: blah
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: blah
spec:
defaultBackend:
service:
name: blah
port:
name: http
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: blah
port:
name: HTTP
</code></pre>
<p>If I do a GET request from my Windows machine <code>http://kubernetes.docker.internal/</code> - I get this...</p>
<p><a href="https://i.stack.imgur.com/L9XX6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L9XX6.png" alt="enter image description here" /></a></p>
<p>The above, I'd expect to hit my pod.</p>
<p>If I do a get on a non-existent URL (eg. 'http://kubernetes.docker.internal/nonexistant'), I get an NGINX 404.</p>
<p>If I do it on the HTTPS version 'https://kubernetes.docker.internal' - I also get a 404.</p>
<p>If I access it via the NodePort 'http://kubernetes.docker.internal:30010', then it works as expected because it's not using the ingress.</p>
<p>It's almost as if the Nginx Controller has been installed, but any requests are just hitting Nginx directly, and ignoring any ingresses I create.</p>
<p>I'm sure I'm missing something fundamental - but any ideas about what I'm doing wrong?</p>
<h1>Update</h1>
<p>Following on from @clarj's comment, I looked at the nginx controller logs, and saw the following error: "ingress does not contain a valid IngressClass". So I've added the following to the ingress...</p>
<pre><code> annotations:
kubernetes.io/ingress.class: "nginx"
</code></pre>
<p>This has got rid of the error, but not fixed the problem.</p>
<p>Here are my logs now...</p>
<pre><code>-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.0.4
Build: 9b78b6c197b48116243922170875af4aa752ee59
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
-------------------------------------------------------------------------------
W1014 18:13:38.886167 7 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1014 18:13:38.886636 7 main.go:221] "Creating API client" host="https://10.96.0.1:443"
I1014 18:13:38.890654 7 main.go:265] "Running in Kubernetes cluster" major="1" minor="21" git="v1.21.4" state="clean" commit="3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae" platform="linux/amd64"
I1014 18:13:38.979187 7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1014 18:13:38.987243 7 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1014 18:13:38.992390 7 nginx.go:253] "Starting NGINX Ingress controller"
I1014 18:13:38.995200 7 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"4865a09a-18fe-4760-a466-742b63ab480f", APIVersion:"v1", ResourceVersion:"14917", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1014 18:13:40.095580 7 store.go:371] "Found valid IngressClass" ingress="default/blah" ingressclass="nginx"
I1014 18:13:40.095725 7 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"blah", UID:"389b024c-9148-4ed0-83b6-a8be5e241655", APIVersion:"networking.k8s.io/v1", ResourceVersion:"25734", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1014 18:13:40.193521 7 nginx.go:295] "Starting NGINX process"
I1014 18:13:40.193677 7 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I1014 18:13:40.193757 7 nginx.go:315] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1014 18:13:40.193876 7 controller.go:152] "Configuration changes detected, backend reload required"
I1014 18:13:40.195273 7 status.go:84] "New leader elected" identity="ingress-nginx-controller-5c8d66c76d-4gfd4"
I1014 18:13:40.213501 7 controller.go:169] "Backend successfully reloaded"
I1014 18:13:40.213577 7 controller.go:180] "Initial sync, sleeping for 1 second"
I1014 18:13:40.213614 7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5c8d66c76d-fb79x", UID:"69db0f7e-0137-48ee-b4aa-fc28d5208423", APIVersion:"v1", ResourceVersion:"25887", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1014 18:14:23.192016 7 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader
I1014 18:14:23.192035 7 status.go:84] "New leader elected" identity="ingress-nginx-controller-5c8d66c76d-fb79x"
</code></pre>
<h1>Update 2</h1>
<p>Actually, the fix in my 'Update' above did work. I just needed to use HTTP in my GET request.</p>
| <p>I worked it out, after a hint from @clarj about checking the Nginx Controller pod logs.</p>
<p>Turned out I was missing the following annotation from my ingress...</p>
<pre><code> annotations:
kubernetes.io/ingress.class: "nginx"
</code></pre>
<p>(see my 'Update' in my initial post)</p>
<p>After that fix - the GET request that had HTTPS then worked.</p>
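<p>Note that on ingress-nginx v1.x the <code>kubernetes.io/ingress.class</code> annotation is deprecated in favour of the <code>spec.ingressClassName</code> field, so an equivalent fix is:</p>
<pre><code>spec:
  ingressClassName: nginx
</code></pre>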
|
<p>Below is the current configuration for livenessProbe:</p>
<pre><code> livenessProbe:
httpGet:
path: /heartbeat
port: 8000
initialDelaySeconds: 2
timeoutSeconds: 2
periodSeconds: 8
failureThreshold: 2
</code></pre>
<hr />
<p>But response body for URL <code>.well-known/heartbeat</code> shows <code>status: "DOWN"</code> and the http return status as 200</p>
<p>So, Kubelet does not restart the container, due to http response status 200</p>
<hr />
<p>How to ensure Kubelet reads the response body instead of http return status? using <code>livenessProbe</code> configuration</p>
| <p>You can interpret the body by switching to an <code>exec</code> probe with a shell command (note this requires <code>curl</code> to be available in the container image), for example:</p>
<pre><code>livenessProbe:
  exec:
    command:
    - sh
    - -c
    - curl -s http://localhost:8000/heartbeat | grep 'status: "UP"'
</code></pre>
<p><code>grep</code> returns non-zero if the body contains <code>status: "DOWN"</code>, which makes the livenessProbe fail. You can of course adjust the URL and pattern according to your actual response body.</p>
|
<p>I have implemented a gRPC service, build it into a container, and deployed it using k8s, in particular AWS EKS, as a DaemonSet.</p>
<p>The Pod starts and turns to be in Running status very soon, but it takes very long, typically 300s, for the actual service to be accessible.</p>
<p>In fact, when I run <code>kubectl logs</code> to print the log of the Pod, it is empty for a long time.</p>
<p>I have logged something at the very starting of the service. In fact, my code looks like</p>
<pre class="lang-golang prettyprint-override"><code>package main
func init() {
log.Println("init")
}
func main() {
// ...
}
</code></pre>
<p>So I am pretty sure when there are no logs, the service is not started yet.</p>
<p>I understand that there may be a time gap between the Pod is running and the actual process inside it is running. However, 300s looks too long for me.</p>
<p>Furthermore, this happens randomly, sometimes the service is ready almost immediately. By the way, my runtime image is based on <a href="https://hub.docker.com/r/chromedp/headless-shell/" rel="nofollow noreferrer">chromedp headless-shell</a>, not sure if it is relevant.</p>
<p>Could anyone provide some advice for how to debug and locate the problem? Many thanks!</p>
<hr />
<p>Update</p>
<p>I did not set any readiness probes.</p>
<p>Running <code>kubectl get -o yaml</code> of my DaemonSet gives</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
deprecated.daemonset.template.generation: "1"
creationTimestamp: "2021-10-13T06:30:16Z"
generation: 1
labels:
app: worker
uuid: worker
name: worker
namespace: collection-14f45957-e268-4719-88c3-50b533b0ae66
resourceVersion: "47265945"
uid: 88e4671f-9e33-43ef-9c49-b491dcb578e4
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app: worker
uuid: worker
template:
metadata:
annotations:
prometheus.io/path: /metrics
prometheus.io/port: "2112"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: worker
uuid: worker
spec:
containers:
- env:
- name: GRPC_PORT
value: "22345"
- name: DEBUG
value: "false"
- name: TARGET
value: localhost:12345
- name: TRACKER
value: 10.100.255.31:12345
- name: MONITOR
value: 10.100.125.35:12345
- name: COLLECTABLE_METHODS
value: shopping.ShoppingService.GetShop
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: DISTRIBUTABLE_METHODS
value: collection.CollectionService.EnumerateShops
- name: PERFORM_TASK_INTERVAL
value: 0.000000s
image: xxx
imagePullPolicy: Always
name: worker
ports:
- containerPort: 22345
protocol: TCP
resources:
requests:
cpu: 1800m
memory: 1Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- env:
- name: CAPTCHA_PARALLEL
value: "32"
- name: HTTP_PROXY
value: http://10.100.215.25:8080
- name: HTTPS_PROXY
value: http://10.100.215.25:8080
- name: API
value: 10.100.111.11:12345
- name: NO_PROXY
value: 10.100.111.11:12345
- name: POD_IP
image: xxx
imagePullPolicy: Always
name: source
ports:
- containerPort: 12345
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/ssl/certs/api.crt
name: ca
readOnly: true
subPath: tls.crt
dnsPolicy: ClusterFirst
nodeSelector:
api/nodegroup-app: worker
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: ca
secret:
defaultMode: 420
secretName: ca
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
status:
currentNumberScheduled: 2
desiredNumberScheduled: 2
numberAvailable: 2
numberMisscheduled: 0
numberReady: 2
observedGeneration: 1
updatedNumberScheduled: 2
</code></pre>
<p>Furthermore, there are two containers in the Pod. Only one of them is exceptionally slow to start, and the other one is always fine.</p>
| <p>When you use <code>HTTP_PROXY</code> in your solution, watch out for how traffic may be routed differently from your underlying cluster network, which often results in unexpected timeouts.</p>
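<p>A common mitigation is to widen <code>NO_PROXY</code> so in-cluster destinations bypass the proxy entirely. A sketch (the CIDR is a placeholder; use your cluster's actual service and pod ranges):</p>
<pre><code>- name: NO_PROXY
  value: "localhost,127.0.0.1,10.100.0.0/16,.svc,.cluster.local"
</code></pre>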
|
<p>I have implemented a gRPC service, build it into a container, and deployed it using k8s, in particular AWS EKS, as a DaemonSet.</p>
<p>The Pod starts and turns to be in Running status very soon, but it takes very long, typically 300s, for the actual service to be accessible.</p>
<p>In fact, when I run <code>kubectl logs</code> to print the log of the Pod, it is empty for a long time.</p>
<p>I have logged something at the very starting of the service. In fact, my code looks like</p>
<pre class="lang-golang prettyprint-override"><code>package main
func init() {
log.Println("init")
}
func main() {
// ...
}
</code></pre>
<p>So I am pretty sure when there are no logs, the service is not started yet.</p>
<p>I understand that there may be a time gap between the Pod is running and the actual process inside it is running. However, 300s looks too long for me.</p>
<p>Furthermore, this happens randomly, sometimes the service is ready almost immediately. By the way, my runtime image is based on <a href="https://hub.docker.com/r/chromedp/headless-shell/" rel="nofollow noreferrer">chromedp headless-shell</a>, not sure if it is relevant.</p>
<p>Could anyone provide some advice for how to debug and locate the problem? Many thanks!</p>
<hr />
<p>Update</p>
<p>I did not set any readiness probes.</p>
<p>Running <code>kubectl get -o yaml</code> of my DaemonSet gives</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
deprecated.daemonset.template.generation: "1"
creationTimestamp: "2021-10-13T06:30:16Z"
generation: 1
labels:
app: worker
uuid: worker
name: worker
namespace: collection-14f45957-e268-4719-88c3-50b533b0ae66
resourceVersion: "47265945"
uid: 88e4671f-9e33-43ef-9c49-b491dcb578e4
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app: worker
uuid: worker
template:
metadata:
annotations:
prometheus.io/path: /metrics
prometheus.io/port: "2112"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: worker
uuid: worker
spec:
containers:
- env:
- name: GRPC_PORT
value: "22345"
- name: DEBUG
value: "false"
- name: TARGET
value: localhost:12345
- name: TRACKER
value: 10.100.255.31:12345
- name: MONITOR
value: 10.100.125.35:12345
- name: COLLECTABLE_METHODS
value: shopping.ShoppingService.GetShop
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: DISTRIBUTABLE_METHODS
value: collection.CollectionService.EnumerateShops
- name: PERFORM_TASK_INTERVAL
value: 0.000000s
image: xxx
imagePullPolicy: Always
name: worker
ports:
- containerPort: 22345
protocol: TCP
resources:
requests:
cpu: 1800m
memory: 1Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
- env:
- name: CAPTCHA_PARALLEL
value: "32"
- name: HTTP_PROXY
value: http://10.100.215.25:8080
- name: HTTPS_PROXY
value: http://10.100.215.25:8080
- name: API
value: 10.100.111.11:12345
- name: NO_PROXY
value: 10.100.111.11:12345
- name: POD_IP
image: xxx
imagePullPolicy: Always
name: source
ports:
- containerPort: 12345
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/ssl/certs/api.crt
name: ca
readOnly: true
subPath: tls.crt
dnsPolicy: ClusterFirst
nodeSelector:
api/nodegroup-app: worker
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: ca
secret:
defaultMode: 420
secretName: ca
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
status:
currentNumberScheduled: 2
desiredNumberScheduled: 2
numberAvailable: 2
numberMisscheduled: 0
numberReady: 2
observedGeneration: 1
updatedNumberScheduled: 2
</code></pre>
<p>Furthermore, there are two containers in the Pod. Only one of them is exceptionally slow to start, and the other one is always fine.</p>
| <p>I have posted community wiki answer to summarize the topic:</p>
<p>As <a href="https://stackoverflow.com/users/14704799/gohmc">gohm'c</a> has mentioned in the comment:</p>
<blockquote>
<p>Do connections made by container "source" always have to go thru HTTP_PROXY, even if it is connecting services in the cluster - do you think possible long time been taken because of proxy? Can try <code>kubectl exec -it <pod> -c <source> -- sh</code> and curl/wget external services.</p>
</blockquote>
<p>This is a good observation. Note that some connections can be made directly, and routing extra traffic through the proxy may introduce delays, for example when the proxy becomes a bottleneck. You can read more about using an HTTP proxy to access the Kubernetes API in the <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/http-proxy-access-api/" rel="nofollow noreferrer">documentation</a>.</p>
<p>Additionally you can also create <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">readiness probes</a> to know when a container is ready to start accepting traffic.</p>
<blockquote>
<p>A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
<p>The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.</p>
</blockquote>
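<p>For a container that is slow to come up, a <code>startupProbe</code> along these lines keeps the kubelet from restarting it during initialization (a sketch; the port matches the <code>worker</code> container above, and the thresholds are assumptions):</p>
<pre><code>startupProbe:
  tcpSocket:
    port: 22345        # the worker container's gRPC port
  periodSeconds: 10
  failureThreshold: 30 # allow up to 30 * 10s = 300s to start
</code></pre>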
|
<p>I'm trying to pass my client IP address through my NGINX Ingress using Kubernetes on Azure</p>
<p>I've created this configmap for the NGINX config to add the headers:</p>
<pre><code>apiVersion: v1
data:
X-Real-IP: $remote_addr;
X-Forwarded-For: $proxy_add_x_forwarded_for;
X-Forwarded-Proto: $proxy_x_forwarded_proto;
use-forwarded-headers: "true"
use-proxy-protocol: "true"
real-ip-header: "proxy_protocol"
kind: ConfigMap
metadata:
name: custom-headers
namespace: default
</code></pre>
<p>Then added this config to reference the previous file:</p>
<pre><code>apiVersion: v1
data:
proxy-set-headers: "custom-headers"
externalTrafficPolicy: Local
kind: ConfigMap
metadata:
name: ingress-nginx-controller
namespace: default
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>If I describe my nginx controller:</p>
<pre><code>kubectl describe deploy ingress-nginx-controller
</code></pre>
<p>I can see the line:</p>
<pre><code>--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
</code></pre>
<p>If I describe the ingress-nginx-controller configmap</p>
<pre><code>kubectl describe configmap ingress-nginx-controller
</code></pre>
<p>I can see the following in the data section:</p>
<pre><code>proxy-set-headers:
----
custom-headers
</code></pre>
<p>If I log out the nginx.conf file in my controller though, I can't see the changed values. For example this is still the default:</p>
<pre><code>proxy_set_header X-Forwarded-For $remote_addr;
</code></pre>
| <p>You are missing the namespace prefix in the <code>proxy-set-headers</code> value. Because you deployed the <code>custom-headers</code> ConfigMap to the default namespace, it should be:</p>
<pre><code>data:
proxy-set-headers: default/custom-headers
</code></pre>
|
<p>I heard ElasticSearch is already changing its license to SSPL. Because of that, it will not be considered as an OSS (OpenSource Software) anymore.</p>
<p>Do you know of a better OSS as replacement for ElasticSearch?</p>
<p>Hope suggested OSS has an official image in dockerhub since I will be using it also in Kubernetes.</p>
| <p>Elasticsearch was on SSPL, but we moved to a simpler license. check out <a href="https://www.elastic.co/blog/elastic-license-v2" rel="nofollow noreferrer">https://www.elastic.co/blog/elastic-license-v2</a> for details on that aspect</p>
|
<p>I'm using AWS EKS 1.21 with service account discovery enabled.<br />
Created an OIDC provider, the <code>.well-known/openid-configuration</code> endpoint returns a correct configuration:</p>
<pre><code>{
"issuer": "https://oidc.eks.eu-west-1.amazonaws.com/id/***",
"jwks_uri": "https://ip-***.eu-west-1.compute.internal:443/openid/v1/jwks",
"response_types_supported": [
"id_token"
],
"subject_types_supported": [
"public"
],
"id_token_signing_alg_values_supported": [
"RS256"
]
}
</code></pre>
<p>Created a ServiceAccount for one of my deployments and the pod gets this as projected volume:</p>
<pre><code> volumes:
- name: kube-api-access-b4xt9
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
</code></pre>
<p>The secret created for the ServiceAccount contains this token:</p>
<pre><code>{
"iss": "kubernetes/serviceaccount",
"kubernetes.io/serviceaccount/namespace": "sbx",
"kubernetes.io/serviceaccount/secret.name": "dliver-site-config-service-token-kz874",
"kubernetes.io/serviceaccount/service-account.name": "dliver-site-config-service",
"kubernetes.io/serviceaccount/service-account.uid": "c26ad760-9067-4d90-a327-b3d6e32bce42",
"sub": "system:serviceaccount:sbx:dliver-site-config-service"
}
</code></pre>
<p>The projected token mounted in to the pod contains this:</p>
<pre><code>{
"aud": [
"https://kubernetes.default.svc"
],
"exp": 1664448004,
"iat": 1632912004,
"iss": "https://oidc.eks.eu-west-1.amazonaws.com/id/***",
"kubernetes.io": {
"namespace": "sbx",
"pod": {
"name": "dliver-site-config-service-77494b8fdd-45pxw",
"uid": "0dd440a6-1213-4faa-a69e-398b83d2dd6b"
},
"serviceaccount": {
"name": "dliver-site-config-service",
"uid": "c26ad760-9067-4d90-a327-b3d6e32bce42"
},
"warnafter": 1632915611
},
"nbf": 1632912004,
"sub": "system:serviceaccount:sbx:dliver-site-config-service"
}
</code></pre>
<p>Kubernetes renews the projected token every hour, so everything looks fine.<br />
Except the projected token "exp" field:<br />
<code>"iat": 1632912004</code> which is <code>Wednesday, September 29, 2021 10:40:04 AM</code><br />
<code>"exp": 1664448004</code> which is <code>Thursday, September 29, 2022 10:40:04 AM</code></p>
<p>So the problem is that the projected token's expiry time is 1 year instead of around 1 hour, which makes Kubernetes' effort to renew the token basically useless.<br />
I searched for hours but was simply unable to figure out where this is coming from.<br />
The expiration flag is passed to the kube-api server: <code>--service-account-max-token-expiration="24h0m0s"</code>, so
my assumption is that this should be configured on the OIDC provider somehow, but I am unable to find any related documentation.</p>
<p>Any idea how to make the projected token expiry date around the same as the <code>expirationSeconds</code> in the pod projected volume?</p>
<h2>Update</h2>
<p>It only happens, when the projected token <code>expirationSeconds</code> is set to the default <code>3607</code> value, any other value gives the correct <code>exp</code> in the mounted token, which is really weird.</p>
| <p>Finally got an answer <a href="https://github.com/kubernetes/kubernetes/issues/105654#issuecomment-942777567" rel="noreferrer">elsewhere</a>.</p>
<blockquote>
<p>cluster operators can specify flag --service-account-extend-token-expiration=true to kube apiserver to allow tokens have longer expiration temporarily during the migration. Any usage of legacy token will be recorded in both metrics and audit logs.</p>
</blockquote>
<p>The "3607" magic number is part of the Bound Service Account Tokens safe rollout plan, <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md#safe-rollout-of-time-bound-token" rel="noreferrer">described in this kep</a>.
The actual number <a href="https://github.com/kubernetes/kubernetes/blob/ea0764452222146c47ec826977f49d7001b0ea8c/pkg/serviceaccount/claims.go#L35" rel="noreferrer">hardcoded in the source code</a>.<br />
The <code>--service-account-extend-token-expiration</code> flag was <a href="https://github.com/kubernetes/kubernetes/pull/96273" rel="noreferrer">set to true</a> by default from 1.20.</p>
<p>The mentioned metric/log info can be found <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md#serviceaccount-admission-controller-migration" rel="noreferrer">in the kep too</a> and was implemented <a href="https://github.com/kubernetes/kubernetes/pull/89549" rel="noreferrer">here</a>.<br />
To see these logs in EKS need to <a href="https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html" rel="noreferrer">enable audit logging on the cluster</a>, then check Cloudwatch for related log entries.</p>
<p>I used this query in Cloudwatch Log Insight to find which pods don't reload the token periodically:</p>
<pre class="lang-none prettyprint-override"><code>filter @logStream like 'kube-apiserver-audit'
| filter ispresent(`annotations.authentication.k8s.io/stale-token`)
| parse `annotations.authentication.k8s.io/stale-token` "subject: *," as subject
| stats count(*) as staleCount by subject, `user.username`
| sort staleCount desc
</code></pre>
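<p>To confirm what a mounted projected token actually contains, the payload segment can be decoded with nothing but the Python standard library. A sketch (in a pod the token string itself would be read from <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code>):</p>

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Base64url in JWTs is unpadded; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a minimal stand-in token with the iat/exp values from the question.
claims = {"iat": 1632912004, "exp": 1664448004}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{payload}.signature"
decoded = decode_jwt_payload(token)
print(decoded["exp"] - decoded["iat"])  # 31536000 seconds, i.e. exactly one year
```

<p>The <code>iat</code>/<code>exp</code> values above are the ones from the question; their difference is exactly 365 days.</p>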
|
<p>After creating a deployment with Kubernetes Python Client , and exposing the service with type ClusterIP, how can i get the Cluster-IP using the python client instead of using Kubectl command ?</p>
<p>Service was created based on the example code</p>
<pre><code>def create_service():
core_v1_api = client.CoreV1Api()
body = client.V1Service(
api_version="v1",
kind="Service",
metadata=client.V1ObjectMeta(
name="service-example"
),
spec=client.V1ServiceSpec(
selector={"app": "deployment"},
ports=[client.V1ServicePort(
port=5678,
target_port=5678
)]
)
)
# Creation of the Deployment in specified namespace
# (Can replace "default" with a namespace you may have created)
core_v1_api.create_namespaced_service(namespace="default", body=body)
</code></pre>
<p>There is a command to list pod ips on the documentation</p>
<pre><code>v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" %
(i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>I'm wondering if there's a way to do something similar with the Cluster-IP of services.</p>
| <p>Simply get the service and read its <code>spec.cluster_ip</code> property:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config()
api = client.CoreV1Api()
service = api.read_namespaced_service(name="kubernetes", namespace="default")
print(service.spec.cluster_ip)
# 10.100.0.1
</code></pre>
|
<p>I was under the impression in Kubernetes the way you expose a port for your application was by first exposing it in the Dockerfile using <code>EXPOSE</code>, then setting the <code>containerPort</code> setting in the deployment yaml file, and finally setting the <code>targetPort</code> in the service yaml file. I thought these all had to be the same value e.g. <code>7214</code>.</p>
<p>However I've just noticed that I've had the incorrect port exposed in one my applications Dockerfile as <code>7124</code> (but have the correct port in the other two files) like so :</p>
<p><strong>Dockerfile</strong></p>
<pre><code>expose 7124 #This is incorrect
</code></pre>
<p><strong>Deployment.yaml</strong></p>
<pre><code>ports:
- containerPort: 7214
</code></pre>
<p><strong>Service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: NodePort
ports:
- port: 7214
targetPort: 7214
</code></pre>
<p>However my other applications are hitting the service fine and the requests are being forwarded onto the application at port <code>7214</code> without issue.</p>
<p>Why is this working? Is it not required for me to expose the port in the Dockerfile at all? Does <code>containerPort</code> take precedence or something? <a href="https://stackoverflow.com/a/57565261/3275784">This users</a> answer on a similar question says the exposed port doesn't have to match, but doesn't explain why.</p>
| <p>The port on which a process is listening is only known to the developer of the process as the port binding happens in code. <code>EXPOSE</code> and <code>containerPort</code> are ways to communicate this to outside world.</p>
<p>The <code>EXPOSE</code> directive in the Dockerfile doesn't do anything other than act as documentation, telling the person reading your Dockerfile which port the process in the container might be listening on, plus some UI purposes when you run the <code>docker ps</code> command. It's a communication between the author of the Dockerfile and another person who might be using or modifying your image.</p>
<p>Even the <code>containerPort</code> section in your deployment.yaml has no effect on anything. It also acts as documentation for people reading your manifest, telling them which ports your process within the pod might be listening on. Another use is that you can give the port a name using the <code>name</code> field and then reference it by that name in other places, like the Service object.</p>
<p>The only thing that matters is the actual port that your process is listening on in the container and making sure that port is used in the <code>targetPort</code> field of the Service port.</p>
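<p>For example, a named <code>containerPort</code> can be referenced from the Service by name instead of repeating the number (a sketch, independent of the manifests above):</p>
<pre><code># Deployment container snippet
ports:
- name: http
  containerPort: 7214
---
# Service snippet
ports:
- port: 7214
  targetPort: http  # resolved by name to containerPort 7214
</code></pre>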
|
<p>I was following the <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">Run a Single-Instance Stateful Application</a> tutorial of Kubernetes (I changed the MySQL docker image's tag to 8), and it seems the server is running correctly:
<a href="https://i.stack.imgur.com/NztJs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NztJs.png" alt="enter image description here" /></a></p>
<p>But when I try to connect the server as the tutorial suggesting:</p>
<pre><code>kubectl run -it --rm --image=mysql:8 --restart=Never mysql-client -- mysql -h mysql -ppassword
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>ERROR 1045 (28000): Access denied for user 'root'@'10.1.0.99' (using password: YES)
pod "mysql-client" deleted</p>
</blockquote>
<hr />
<p>I already looked at those questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/65460113/cant-access-mysql-root-or-user-after-kubernetes-deployment">Can't access mysql root or user after kubernetes deployment</a></li>
<li><a href="https://stackoverflow.com/questions/64205150/access-mysql-kubernetes-deployment-in-mysql-workbench">Access MySQL Kubernetes Deployment in MySQL Workbench</a></li>
</ul>
<p>But changing the <code>mountPath</code> or <code>port</code> didn't work.</p>
| <p>By default, the <code>root</code> account can only be connected to from inside the container. Here's an updated version of the example that allows you to connect remotely:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:8.0.26
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
- name: MYSQL_ROOT_HOST
value: "%"
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
emptyDir: {}
# Following the original example, comment the emptyDir and uncomment the following if you have StorageClass installed.
# persistentVolumeClaim:
# claimName: mysql-pv-claim
</code></pre>
<p>No change to the client connect except for the image tag:</p>
<p><code>kubectl run -it --rm --image=mysql:8.0.26 --restart=Never mysql-client -- mysql -h mysql -ppassword</code></p>
<p>Test with <code>show databases;</code>:</p>
<pre><code>mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)
</code></pre>
|
<p>This is my DaemonSet</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx
spec:
selector:
matchLabels:
run: nginx
template:
metadata:
labels:
run: nginx
annotations:
"cluster-autoscaler.kubernetes.io/enable-ds-eviction": "false"
"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
spec:
containers:
- image: nginx
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
protocol: TCP
</code></pre>
<p>According to the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-enabledisable-eviction-for-a-specific-daemonset" rel="nofollow noreferrer">documentation</a>.</p>
<p><strong>cluster-autoscaler.kubernetes.io/enable-ds-eviction: "false"</strong> should prevent scaling down.</p>
<p>But GKE cluster autoscaler ignores this annotation.</p>
<p>Is there any way to prevent DaemonSet from being evicted by the cluster autoscaler?</p>
| <p>The <code>cluster-autoscaler.kubernetes.io/enable-ds-eviction</code> annotation does not apply once a node is actually empty. During the scale down process, while the autoscaler is evicting/rescheduling pods elsewhere, it will ignore daemonsets when <code>cluster-autoscaler.kubernetes.io/enable-ds-eviction</code> is set to false. Once the node is empty (i.e. no longer has any deployed pods), the node will then be removed (and the daemonset pod will be gracefully terminated).</p>
|
<p>I have a container image that is loading multiple large files on startup. When restarting the container, all files have to be loaded again.</p>
<p>What I want to do now is to start six instances which only load one file each, given an environment variable. Now my question is how to configure this. What I could do is create a new deployment+service for each file, but that seems incorrect because 99% of the content is the same, only the environment variable is different. Another option would be to have one pod with multiple containers and one gateway-like containers. But then when the pod is restarting, all files are loaded again.</p>
<p>What's the best strategy to do this?</p>
| <p>Ideally, you should keep it as <strong>deployment+service</strong> and create 5-6 different Secrets or ConfigMaps as needed, each storing the environment variables your application requires.</p>
<p>Inject these <code>Secrets</code> or <code>ConfigMaps</code> one by one into the different deployments.</p>
<blockquote>
<p>Another option would be to have one pod with multiple containers and
one gateway-like containers.</p>
</blockquote>
<p>That doesn't look like a scalable approach: if you run the 5-6 containers inside a single pod together with a gateway container, the pod becomes a single restart unit, and you cannot scale or update each container independently.</p>
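<p>To make this concrete, here is a minimal sketch of one such deployment (all names and the <code>FILE_TO_LOAD</code> variable are hypothetical); only the ConfigMap content and one label change between the six copies, which is easy to template with Helm or Kustomize:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: file-loader-env-1        # one ConfigMap per instance
data:
  FILE_TO_LOAD: "file-1.bin"     # the single file this instance loads
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: file-loader-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: file-loader
      instance: "1"
  template:
    metadata:
      labels:
        app: file-loader
        instance: "1"
    spec:
      containers:
      - name: file-loader
        image: registry.example.com/file-loader:latest   # hypothetical image
        envFrom:
        - configMapRef:
            name: file-loader-env-1
</code></pre>
<p>A single Service selecting only <code>app: file-loader</code> would load-balance across all six instances, or one Service per instance can target them individually via the <code>instance</code> label.</p>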
|
<p>I recently learned about <code>helm</code> and how easy it is to deploy the whole <code>prometheus</code> stack for monitoring a Kubernetes cluster, so I decided to try it out on a staging cluster at my work.</p>
<p>I started by creating a dedicates namespace on the cluster for monitoring with:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create namespace monitoring
</code></pre>
<p>Then, with <code>helm</code>, I added the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">prometheus-community repo</a> with:</p>
<pre class="lang-sh prettyprint-override"><code>helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
</code></pre>
<p>Next, I installed the chart with a <code>prometheus</code> release name:</p>
<pre class="lang-sh prettyprint-override"><code>helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
</code></pre>
<p>At this time I didn't pass any custom configuration because I'm still trying it out.</p>
<p>After the install is finished, it all looks good. I can access the prometheus dashboard with:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090 -n monitoring
</code></pre>
<p>There, I see a bunch of pre-defined alerts and rules that are monitoring but the problem is that I don't quite understand how to create new rules to check the pods in the <code>default</code> namespace, where I actually have my services deployed.</p>
<p>I am looking at <code>http://localhost:9090/graph</code> to play around with the queries and I can't seem to use any that will give me metrics on my pods in the <code>default</code> namespace.</p>
<p>I am a bit overwhelmed with the amount of information so I would like to know what did I miss or what am I doing wrong here?</p>
| <p>The Prometheus Operator includes several Custom Resource Definitions (CRDs), including <code>ServiceMonitor</code> (and <code>PodMonitor</code>). <code>ServiceMonitor</code>s are used to tell the Operator which services to monitor.</p>
<p>I'm familiar with the Operator, though not the Helm deployment, but I suspect you'll want to create <code>ServiceMonitor</code>s to generate metrics for your apps in any namespace (including <code>default</code>).</p>
<p>See: <a href="https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions" rel="nofollow noreferrer">https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions</a></p>
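<p>As an illustration, a minimal <code>ServiceMonitor</code> sketch for an app in the <code>default</code> namespace could look like the following (the app name, label, and port name are hypothetical; your Service must expose a metrics endpoint and carry a matching label):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
  labels:
    release: prometheus   # kube-prometheus-stack only picks up ServiceMonitors matching its selector
spec:
  namespaceSelector:
    matchNames:
    - default             # where the target Services live
  selector:
    matchLabels:
      app: my-app         # label on the Service to scrape
  endpoints:
  - port: http            # named port on the Service serving /metrics
    path: /metrics
</code></pre>
<p>Note that by default the kube-prometheus-stack chart only discovers ServiceMonitors labelled with the Helm release name (here <code>release: prometheus</code>), unless that selector behaviour is overridden in the chart values.</p>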
|
<p>Kubectl provides a nice way to convert environment variable files into secrets using:</p>
<pre><code>$ kubectl create secret generic my-env-list --from-env-file=envfile
</code></pre>
<p>Is there any way to achieve this in Helm? I tried the below snippet but the result was quite different:</p>
<pre><code>kind: Secret
metadata:
name: my-env-list
data:
{{ .Files.Get "envfile" | b64enc }}
</code></pre>
| <p>It appears kubectl just does the simple thing and only <a href="https://github.com/kubernetes/kubernetes/blob/v1.22.0/staging/src/k8s.io/kubectl/pkg/cmd/util/env_file.go#L56" rel="nofollow noreferrer">splits on a single <code>=</code> character</a> so the Helm way would be to replicate that behavior (helm has <code>regexSplit</code> which will suffice for our purposes):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
data:
{{ range .Files.Lines "envfile" }}
{{ if . }}
{{ $parts := regexSplit "=" . 2 }}
{{ index $parts 0 }}: {{ index $parts 1 | b64enc }}
{{ end }}
{{ end }}
</code></pre>
<p>The <code>{{ if . }}</code> guard is needed because <code>.Files.Lines</code> can yield empty strings (e.g. for blank lines), which of course don't match the pattern</p>
<p>Be aware that kubectl's version accepts <a href="https://github.com/kubernetes/kubernetes/blob/v1.22.0/staging/src/k8s.io/kubectl/pkg/cmd/util/env_file.go#L65-L66" rel="nofollow noreferrer">barewords looked up from the environment</a> which helm has no support for doing, so if your <code>envfile</code> is formatted like that, this specific implementation will fail</p>
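<p>To illustrate (values are hypothetical), an <code>envfile</code> such as:</p>
<pre><code>DB_HOST=localhost
DB_PASS=s3cret==
</code></pre>
<p>renders to a Secret whose <code>data</code> contains <code>DB_HOST: bG9jYWxob3N0</code> and <code>DB_PASS: czNjcmV0PT0=</code>. The second line shows why the split is limited to 2 parts: the extra <code>=</code> characters in the value survive intact.</p>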
|
<p>I have a pod that exposes an HTTP service.<br />
This pod has some HTTP endpoints which are available under <code>/accounts</code>.<br />
My goal is to access this sub-path via <code>accounts.example.com</code>.</p>
<p>For example if the url <code>accounts.example.com/test</code> is requested, the nginx ingress should route the request to <code>/accounts/test</code> on my pod.</p>
<p>In the nginx ingress documentation I cannot find a use-case with examples like this.</p>
| <p>You should use a rewrite to accomplish this.</p>
<p><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p>
<p>Here is an example; focus on this line: <code>path: /something(/|$)(.*)</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc
            port:
              number: 80
</code></pre>
<hr />
<ul>
<li>Review the different annotations available for rewriting:</li>
</ul>
<p><a href="https://i.stack.imgur.com/hNihu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hNihu.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/cWtTm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cWtTm.png" alt="enter image description here" /></a></p>
|
<p>I am trying to mount dags folder to be able to run a python script inside of the KubernetesPodOperator on Aiflow, but can't figure out how to do it. In production I would like to do it in Google Composer. Here is my task:</p>
<pre><code>kubernetes_min_pod = KubernetesPodOperator(
task_id='pod-ex-minimum',
cmds=["bash", "-c"],
arguments=["cd /usr/local/tmp"],
namespace='default',
image='toru2220/scrapy-chrome:latest',
is_delete_operator_pod=True,
get_logs=True,
in_cluster=False,
volumes=[
Volume("my-volume", {"persistentVolumeClaim": {"claimName": "my-volume"}})
],
volume_mounts=[
VolumeMount("my-volume", "/usr/local/tmp", sub_path=None, read_only=False)
],
)
</code></pre>
<p>I am trying to understand: what is the easiest way to mount the current folder where the DAG is?</p>
| <p>As per this <a href="https://cloud.google.com/composer/docs/composer-2/cloud-storage" rel="nofollow noreferrer">doc</a>, when you create an environment, Cloud Composer creates a Cloud Storage bucket and associates the bucket with your environment. The name of the bucket is based on the environment region, name, and a random ID such as “us-central1-b1-6efannnn-bucket”. Cloud Composer stores the source code for your workflows (DAGs) and their dependencies in specific folders in Cloud Storage and uses <a href="https://cloud.google.com/storage/docs/gcs-fuse" rel="nofollow noreferrer">Cloud Storage FUSE</a> to map the folders to the Airflow instances in your Cloud Composer environment.</p>
<p>The Cloud Composer runs on top of a GKE cluster with all the DAGs, tasks, and services running on a single node pool. As per your requirement, you are trying to mount a DAGs folder in your code, which is already mounted in the Airflow pods under “<strong>/home/airflow/gcs/dags</strong>” path. Please refer to this <a href="https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator#begin" rel="nofollow noreferrer">doc</a> for more information about KubernetesPodOperator in Cloud Composer.</p>
|
| <p>Looking for the best way to integrate ACR with AKS for a production environment. There seem to be multiple ways: during installation, after installation, using a service principal, using an image pull secret, etc.</p>
<p>So for our production environment, I am looking for the most recommended option, with the requirements as follows.</p>
<ul>
<li>Is it mandatory to attach the ACR during AKS creation itself?</li>
<li>What is the advantage of integrating the ACR during the AKS installation itself? (It seems we don't need to pass the image pull secret to the pod spec in that case, while for other options we do.)</li>
<li>What is the other way to integrate ACR with AKS? Will the (az aks update) command help in this case? If yes, how does it differ from the previous method of integrating during the AKS installation?</li>
<li>If I want to set up a secondary AKS cluster in another region but need to connect to the geo-replicated ACR instance of the primary ACR, how can I get that done? In this case, is it mandatory to attach the ACR during AKS installation, or is it also fine to do it later, post-installation?</li>
</ul>
| <p>IMHO the best way is Azure RBAC. You don't need to attach the ACR while creating the AKS. You can leverage Azure RBAC and assign the role "AcrPull" to the kubelet identity of your node pool. This can be done for every ACR you have:</p>
<pre><code>export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export ACR_ID=$(az acr show -g <resource group> -n <acr name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "AcrPull" --scope $ACR_ID
</code></pre>
<p>Terraform:</p>
<pre><code> resource "azurerm_role_assignment" "example" {
scope = azurerm_container_registry.acr.id
role_definition_name = "AcrPull"
principal_id = azurerm_kubernetes_cluster.aks.kubelet_identity[0].object_id
}
</code></pre>
|
<p>I'm trying to forward port 8080 for my nginx ingress controller for Kubernetes. I'm running the command:</p>
<pre><code>kubectl -n nginx-ingress port-forward nginx-ingress-768dfsssd5bf-v23ja 8080:8080 --request-timeout 0
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
</code></pre>
<p>It just hangs on <code>::1</code> forever. Is there a log somewhere I can view to see why its hanging?</p>
| <p>Well, it's expected behaviour - <code>kubectl port-forward</code> is not getting <a href="https://stackoverflow.com/questions/48863164/kubernetes-prompt-freezes-at-port-forward-command/48866118#48866118">daemonized by default</a>. From the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod" rel="nofollow noreferrer">official Kubernetes documentation</a>:</p>
<blockquote>
<p><strong>Note:</strong> <code>kubectl port-forward</code> does not return. To continue with the exercises, you will need to open another terminal.</p>
</blockquote>
<p>All logs will be shown in the <code>kubectl port-forward</code> command output:</p>
<pre><code>user@shell:~$ kubectl port-forward deployment/nginx-deployment 80:80
Forwarding from 127.0.0.1:80 -> 80
</code></pre>
<p><a href="https://stackoverflow.com/questions/53799600/kubernetes-port-forwarding-connection-refused">Including errors</a>:</p>
<pre><code>Handling connection for 88
Handling connection for 88
E1214 01:25:48.704335 51463 portforward.go:331] an error occurred forwarding 88 -> 82: error forwarding port 82 to pod a017a46573bbc065902b600f0767d3b366c5dcfe6782c3c31d2652b4c2b76941, uid : exit status 1: 2018/12/14 08:25:48 socat[19382] E connect(5, AF=2 127.0.0.1:82, 16): Connection refused
</code></pre>
<p>If you don't have any logs that means you didn't make an attempt to connect or you specified a wrong address / port.</p>
<p>As mentioned earlier, you can open a new terminal window; alternatively, you can run <code>kubectl port-forward</code> <a href="https://linuxize.com/post/how-to-run-linux-commands-in-background/" rel="nofollow noreferrer">in the background</a> by adding <code>&</code> at the end of the command:</p>
<pre><code>kubectl -n nginx-ingress port-forward nginx-ingress-768dfsssd5bf-v23ja 8080:8080 --request-timeout 0 &
</code></pre>
<p>If you want to run <code>kubectl port-forward</code> in background and save all logs to the file you can use <a href="https://linux.101hacks.com/unix/nohup-command/" rel="nofollow noreferrer"><code>nohup</code> command</a> + <code>&</code> at the end:</p>
<pre><code>nohup kubectl -n nginx-ingress port-forward nginx-ingress-768dfsssd5bf-v23ja 8080:8080 --request-timeout 0 &
</code></pre>
|
<p>I'm trying to add a self-signed certificate in my AKS cluster using Cert-Manager.</p>
<p>I created a <code>ClusterIssuer</code> for the CA certificate (to sign the certificate) and a second <code>ClusterIssuer</code> for the Certificate (self-signed) I want to use.</p>
<p>I am not sure if the <code>certificate2</code> is being used correctly by Ingress as it looks like it is waiting for some event.</p>
<p>Am I following the correct way to do this?</p>
<p>This is the first <code>ClusterIssuer</code> "clusterissuer.yml":</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: selfsigned
spec:
selfSigned: {}
</code></pre>
<p>This is the CA certificate "certificate.yml":</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: selfsigned-certificate
spec:
secretName: hello-deployment-tls-ca-key-pair
dnsNames:
- "*.default.svc.cluster.local"
- "*.default.com"
isCA: true
issuerRef:
name: selfsigned
kind: ClusterIssuer
</code></pre>
<p>This is the second <code>ClusterIssuer</code> "clusterissuer2.yml" for the certificate I want to use:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: hello-deployment-tls
spec:
ca:
secretName: hello-deployment-tls-ca-key-pair
</code></pre>
<p>and finally this is the self-signed certificate "certificate2.yml":</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: selfsigned-certificate2
spec:
secretName: hello-deployment-tls-ca-key-pair2
dnsNames:
- "*.default.svc.cluster.local"
- "*.default.com"
isCA: false
issuerRef:
name: hello-deployment-tls
kind: ClusterIssuer
</code></pre>
<p>I am using this certificate in an Ingress:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "hello-deployment-tls"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
name: sonar-ingress
spec:
tls:
- secretName: "hello-deployment-tls-ca-key-pair2"
rules:
- http:
paths:
- pathType: Prefix
path: "/"
backend:
serviceName: sonarqube
servicePort: 80
</code></pre>
<p>As I do not have any registered domain name I just want to use the public IP to access the service over <code>https://<Public_IP></code>.</p>
<p>When I access the service <code>https://<Public_IP></code> I can see "Kubernetes Ingress Controller Fake Certificate", so I guess this is because the certificate is not globally recognized by the browser.</p>
<p>The strange thing is here: theoretically the Ingress deployment is using <code>selfsigned-certificate2</code>, but it looks like it is not ready:</p>
<pre><code>kubectl get certificate
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 4h29m
selfsigned-certificate2 False hello-deployment-tls-ca-key-pair2 3h3m
selfsigned-secret True selfsigned-secret 5h25m
</code></pre>
<pre><code>kubectl describe certificate selfsigned-certificate2
.
.
.
Spec:
Dns Names:
*.default.svc.cluster.local
*.default.com
Issuer Ref:
Kind: ClusterIssuer
Name: hello-deployment-tls
Secret Name: hello-deployment-tls-ca-key-pair2
Status:
Conditions:
Last Transition Time: 2021-10-15T11:16:15Z
Message: Waiting for CertificateRequest "selfsigned-certificate2-3983093525" to complete
Reason: InProgress
Status: False
Type: Ready
Events: <none>
</code></pre>
<p>Any idea?</p>
<p>Thank you in advance.</p>
| <h2>ApiVersions</h2>
<p>First I noticed you're using the <code>v1alpha2</code> apiVersion, which is deprecated and will be removed in cert-manager <code>1.6</code>:</p>
<pre><code>$ kubectl apply -f cluster-alpha.yaml
Warning: cert-manager.io/v1alpha2 ClusterIssuer is deprecated in v1.4+, unavailable in v1.6+; use cert-manager.io/v1 ClusterIssuer
</code></pre>
<p>I used <code>apiVersion: cert-manager.io/v1</code> in reproduction.</p>
<p>Same for <code>v1beta1</code> ingress, consider updating it to <code>networking.k8s.io/v1</code>.</p>
<h2>What happens</h2>
<p>I started reproducing your setup step by step.</p>
<p>I applied <code>clusterissuer.yaml</code>:</p>
<pre><code>$ kubectl apply -f clusterissuer.yaml
clusterissuer.cert-manager.io/selfsigned created
$ kubectl get clusterissuer
NAME READY AGE
selfsigned True 11s
</code></pre>
<p>Pay attention that <code>READY</code> is set to <code>True</code>.</p>
<p><strong>Next</strong> I applied <code>certificate.yaml</code>:</p>
<pre><code>$ kubectl apply -f cert.yaml
certificate.cert-manager.io/selfsigned-certificate created
$ kubectl get cert
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 7s
</code></pre>
<p><strong>Next step</strong> is to add the second <code>ClusterIssuer</code> which is referenced to <code>hello-deployment-tls-ca-key-pair</code> secret:</p>
<pre><code>$ kubectl apply -f clusterissuer2.yaml
clusterissuer.cert-manager.io/hello-deployment-tls created
$ kubectl get clusterissuer
NAME READY AGE
hello-deployment-tls False 6s
selfsigned True 3m50
</code></pre>
<p>ClusterIssuer <code>hello-deployment-tls</code> is <strong>not</strong> ready. Here's why:</p>
<pre><code>$ kubectl describe clusterissuer hello-deployment-tls
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrGetKeyPair 10s (x5 over 75s) cert-manager Error getting keypair for CA issuer: secret "hello-deployment-tls-ca-key-pair" not found
Warning ErrInitIssuer 10s (x5 over 75s) cert-manager Error initializing issuer: secret "hello-deployment-tls-ca-key-pair" not found
</code></pre>
<p>This is expected behaviour since:</p>
<blockquote>
<p>When referencing a Secret resource in ClusterIssuer resources (eg
apiKeySecretRef) the Secret needs to be in the same namespace as the
cert-manager controller pod. You can optionally override this by using
the --cluster-resource-namespace argument to the controller.</p>
</blockquote>
<p><a href="https://docs.cert-manager.io/en/release-0.11/reference/clusterissuers.html" rel="noreferrer">Reference</a></p>
<h2>Answer - how to move forward</h2>
<p>I edited the <code>cert-manager</code> deployment so it will look for <code>secrets</code> in <code>default</code> namespace (this is not ideal, I'd use <code>issuer</code> instead in <code>default</code> namespace):</p>
<pre><code>$ kubectl edit deploy cert-manager -n cert-manager
spec:
containers:
- args:
- --v=2
- --cluster-resource-namespace=default
</code></pre>
<p>It takes about a minute for <code>cert-manager</code> to start. Redeployed <code>clusterissuer2.yaml</code>:</p>
<pre><code>$ kubectl delete -f clusterissuer2.yaml
clusterissuer.cert-manager.io "hello-deployment-tls" deleted
$ kubectl apply -f clusterissuer2.yaml
clusterissuer.cert-manager.io/hello-deployment-tls created
$ kubectl get clusterissuer
NAME READY AGE
hello-deployment-tls True 3s
selfsigned True 5m42s
</code></pre>
<p>Both are <code>READY</code>. Moving forward with <code>certificate2.yaml</code>:</p>
<pre><code>$ kubectl apply -f cert2.yaml
certificate.cert-manager.io/selfsigned-certificate2 created
$ kubectl get cert
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 33s
selfsigned-certificate2 True hello-deployment-tls-ca-key-pair2 6s
$ kubectl get certificaterequest
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
selfsigned-certificate-jj98f True True selfsigned system:serviceaccount:cert-manager:cert-manager 52s
selfsigned-certificate2-jwq5c True True hello-deployment-tls system:serviceaccount:cert-manager:cert-manager 25s
</code></pre>
<h2>Ingress</h2>
<p>When a <code>host</code> is not added to the <code>ingress</code>, it doesn't create any certificates and appears to use a fake one from <code>ingress</code>, issued by <code>CN = Kubernetes Ingress Controller Fake Certificate</code>.</p>
<p>Events from <code>ingress</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BadConfig 5s cert-manager TLS entry 0 is invalid: secret "example-cert" for ingress TLS has no hosts specified
</code></pre>
<p>When I added DNS to <code>ingress</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateCertificate 4s cert-manager Successfully created Certificate "example-cert"
</code></pre>
<h2>Answer, part 2 (about ingress, certificates and issuer)</h2>
<p>You don't need to create a certificate if you're referencing to <code>issuer</code> in <code>ingress</code> rule. Ingress will issue certificate for you when all details are presented, such as:</p>
<ul>
<li>annotation <code>cert-manager.io/cluster-issuer: "hello-deployment-tls"</code></li>
<li><code>spec.tls</code> part with host within</li>
<li><code>spec.rules.host</code></li>
</ul>
<p><strong>OR</strong></p>
<p>if you want to create certificate manually and ask ingress to use it, then:</p>
<ul>
<li>remove annotation <code>cert-manager.io/cluster-issuer: "hello-deployment-tls"</code></li>
<li>create certificate manually</li>
<li>refer to it in <code>ingress rule</code>.</li>
</ul>
<p>You can check certificate details in browser and find that it no longer has issuer as <code>CN = Kubernetes Ingress Controller Fake Certificate</code>, in my case it's empty.</p>
<h2>Note - cert-manager v1.4</h2>
<p>Initially I used a bit outdated <code>cert-manager v1.4</code> and got <a href="https://github.com/jetstack/cert-manager/issues/4142" rel="noreferrer">this issue</a> which has gone after updating to <code>1.4.1</code>.</p>
<p>It looks like:</p>
<pre><code>$ kubectl describe certificaterequest selfsigned-certificate2-45k2c
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal cert-manager.io 41s cert-manager Certificate request has been approved by cert-manager.io
Warning DecodeError 41s cert-manager Failed to decode returned certificate: error decoding certificate PEM block
</code></pre>
<h2>Useful links:</h2>
<ul>
<li><a href="https://docs.cert-manager.io/en/release-0.11/tasks/issuers/setup-selfsigned.html" rel="noreferrer">Setting up self-singed Issuer</a></li>
<li><a href="https://docs.cert-manager.io/en/release-0.11/tasks/issuers/setup-ca.html" rel="noreferrer">Setting up CA issuers</a></li>
<li><a href="https://docs.cert-manager.io/en/release-0.11/reference/clusterissuers.html" rel="noreferrer">Cluster Issuers</a></li>
</ul>
|
<p>I tried to deploy Prometheus using <code>https://prometheus-community.github.io/helm-charts</code>, but I wanted to have a custom <code>prometheus.yml</code>. Therefore the approach I used was to build the Prometheus Docker image by copying in the customised prometheus.yml file (via a simple pipeline).</p>
<p>Dockerfile I use</p>
<pre><code>FROM quay.io/prometheus/prometheus:v2.26.0
ADD config /etc/config/
</code></pre>
<p>It builds the image successfully, but when I try to deploy this image via Helm, the
container fails with the following error.</p>
<pre><code>level=error ts=2021-10-14T15:26:02.525Z caller=main.go:347 msg="Error loading config (--config.file=/etc/config/prometheus.yml)" err="open /etc/config/prometheus.yml: no such file or directory"
</code></pre>
<p>I am not sure if this is the ideal approach.
What can I do to have a customised <code>prometheus.yml</code> inside the Prometheus pod?
(I could keep the config within the Helm values.yaml, but I prefer a separate file so I can manage it easily.)</p>
| <p>The challenge was to manage the prometheus.yml as a separate file. I found the <code>extraConfigmapMounts</code> option in the community Prometheus stack.</p>
<p>Creating a ConfigMap from the <code>prometheus.yml</code> and mounting it into the application is a good way of achieving this.
Simply set the value <code>server.extraConfigmapMounts</code> to add the configuration:</p>
<pre><code>extraConfigmapMounts:
- name: prometheus-configmap
mountPath: /prometheus/config/
subPath: ""
configMap: <configMap name>
readOnly: true
</code></pre>
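<p>For completeness, the referenced ConfigMap can be created with <code>kubectl create configmap prometheus-configmap --from-file=prometheus.yml</code>, or declared as a manifest; the scrape config below is only a placeholder:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-configmap
  namespace: monitoring            # assumed namespace; match your release
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: prometheus
        static_configs:
          - targets: ['localhost:9090']
</code></pre>
<p>Keep in mind that the server still has to read its configuration from the mounted path; depending on the chart version there is a value to point the <code>--config.file</code> flag at it.</p>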
|
<p>I have created a Kubernetes cluster in the <a href="https://upcloud.com/community/tutorials/deploy-kubernetes-using-kubespray/" rel="nofollow noreferrer">cloud using this tutorial</a> and deployed [to the cluster] a backend application called <code>chatapp</code> from the Docker private registry. Since there is no option to set the service type to <code>LoadBalancer</code>, I had to resort to the <code>NodePort</code> type.
Here is the <code>chatapp-deployment.yml</code> file for reference:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: chatapp
spec:
selector:
app: chatapp
ports:
- protocol: "TCP"
port: 6443
targetPort: 3000
type: NodePort
externalIPs:
- A.B.C.D
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: chatapp
labels:
app: chatapp
spec:
replicas: 2
selector:
matchLabels:
app: chatapp
template:
metadata:
labels:
app: chatapp
spec:
imagePullSecrets:
- name: regsecret
containers:
- name: chatapp
image: sebastian/chatapp
imagePullPolicy: Always
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10;done"]
ports:
- containerPort: 3000
</code></pre>
<p><strong>Note:</strong> I removed the external IP for security reasons.</p>
<p>I had to assign the external IP manually since I couldn't set up <code>LoadBalancer</code> as the service type. Whenever I try accessing <code>http://A.B.C.D:6443</code>, I get the following:</p>
<pre><code>Client sent an HTTP request to an HTTPS server.
</code></pre>
<p>I went through this <a href="https://stackoverflow.com/questions/61017106/client-sent-an-http-request-to-an-https-server">link</a> but couldn't fix my issue with it. The external IP I have used is from the <code>master-o</code>.</p>
<p>While trying to access it with <a href="https://A.B.C.D:6443" rel="nofollow noreferrer">https://A.B.C.D:6443</a>, I get the following <code>403</code> message:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
</code></pre>
<p>How can I authorize access to my cluster? Any feedbacks and suggestions would be appreciated.</p>
| <p>Your request has reached the Kubernetes API server at 6443 instead of your chatapp. To access your chatapp, first retrieve the nodePort number: <code>kubectl describe service chatapp | grep -i nodeport</code>, then use this number to access your app at <code>http://a.b.c.d:<nodePort></code></p>
|
<p>I have used this document for creating kafka <a href="https://kow3ns.github.io/kubernetes-kafka/manifests/" rel="nofollow noreferrer">https://kow3ns.github.io/kubernetes-kafka/manifests/</a></p>
<p>I am able to create Zookeeper, but I'm facing an issue with the creation of Kafka: it gets an error connecting to Zookeeper.</p>
<p>These are the manifests I have used.</p>
<p>For Kafka:</p>
<p><a href="https://kow3ns.github.io/kubernetes-kafka/manifests/kafka.yaml" rel="nofollow noreferrer">https://kow3ns.github.io/kubernetes-kafka/manifests/kafka.yaml</a></p>
<p>For Zookeeper:</p>
<p><a href="https://github.com/kow3ns/kubernetes-zookeeper/blob/master/manifests/zookeeper.yaml" rel="nofollow noreferrer">https://github.com/kow3ns/kubernetes-zookeeper/blob/master/manifests/zookeeper.yaml</a></p>
<p><strong>The logs of the kafka</strong></p>
<pre><code> kubectl logs -f pod/kafka-0 -n kaf
[2021-10-19 05:37:14,535] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
compression.type = producer
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 1000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 0.10.2-IV0
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
listeners = PLAINTEXT://:9093
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /var/lib/kafka
log.dirs = /tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.10.2-IV0
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 1440
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
port = 9092
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 1000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol = GSSAPI
security.inter.broker.protocol = PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
unclean.leader.election.enable = true
zookeeper.connect = zk-cs.default.svc.cluster.local:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2021-10-19 05:37:14,569] INFO starting (kafka.server.KafkaServer)
[2021-10-19 05:37:14,570] INFO Connecting to zookeeper on zk-cs.default.svc.cluster.local:2181 (kafka.server.KafkaServer)
[2021-10-19 05:37:14,579] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2021-10-19 05:37:14,583] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:host.name=kafka-0.kafka-hs.kaf.svc.cluster.local (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.version=1.8.0_131 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/connect-api-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-file-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-json-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-runtime-0.10.2.1.jar:/opt/kafka/bin/../libs/connect-transforms-0.10.2.1.jar:/opt/kafka/bin/../libs/guava-18.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.0.jar:/opt/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/opt/kafka/bin/../libs/jackson-core-2.8.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.24.jar:/opt/kafka/bin/../libs/jersey-common-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/opt/kafka/bin/../libs/jersey-guava-2.24.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/opt/kafka/bin/../libs/jersey-server-2.24.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-
servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.3.jar:/opt/kafka/bin/../libs/kafka-clients-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-streams-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka-tools-0.10.2.1.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.2.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/reflections-0.9.10.jar:/opt/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/opt/kafka/bin/../libs/scala-library-2.11.8.jar:/opt/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.21.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:os.version=5.4.141-67.229.amzn2.x86_64 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.name=kafka (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.home=/home/kafka (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,583] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,584] INFO Initiating client connection, connectString=zk-cs.default.svc.cluster.local:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@5e0826e7 (org.apache.zookeeper.ZooKeeper)
[2021-10-19 05:37:14,591] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2021-10-19 05:37:14,592] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to zk-cs.default.svc.cluster.local:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:326)
at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: zk-cs.default.svc.cluster.local: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
... 10 more
[2021-10-19 05:37:14,594] INFO shutting down (kafka.server.KafkaServer)
[2021-10-19 05:37:14,597] INFO shut down completed (kafka.server.KafkaServer)
[2021-10-19 05:37:14,597] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to zk-cs.default.svc.cluster.local:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:106)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:88)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:326)
at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: zk-cs.default.svc.cluster.local: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
... 10 more
</code></pre>
<p><a href="https://i.stack.imgur.com/P1FuF.png" rel="nofollow noreferrer">Crash-Loop-Kafka</a>
<a href="https://i.stack.imgur.com/Ang7w.png" rel="nofollow noreferrer">kafka deployed manifest</a></p>
| <p>Your Kafka and Zookeeper deployments are running in the <code>kaf</code> namespace according to your screenshots, presumably you have set this up manually and applied the configurations while in that namespace? Neither the Kafka or Zookeeper YAML files explicitly state a namespace in metadata, so will be deployed to the active namespace when created.</p>
<p>Anyway, the Kafka deployment YAML you have is hardcoded to assume Zookeeper is setup in the <code>default</code> namespace, with the following line:</p>
<pre><code> --override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \
</code></pre>
<p>Change this to:</p>
<pre><code> --override zookeeper.connect=zk-cs.kaf.svc.cluster.local:2181 \
</code></pre>
<p>and it should connect. You can do this by downloading the YAML file and editing it locally before applying it, for example.</p>
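<p>For example, a minimal sketch of the download-and-patch approach (file names are illustrative; the snippet below patches a local stand-in for <code>kafka.yaml</code> so the change can be verified before <code>kubectl apply</code>):</p>

```shell
# Write a stand-in for the relevant line of kafka.yaml, then patch the
# hardcoded Zookeeper address to point at the "kaf" namespace.
printf '%s\n' '--override zookeeper.connect=zk-cs.default.svc.cluster.local:2181 \' > kafka-snippet.yaml
sed -i 's/zk-cs\.default\.svc\.cluster\.local/zk-cs.kaf.svc.cluster.local/' kafka-snippet.yaml
cat kafka-snippet.yaml
```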
<p>Alternatively deploy Zookeeper into the <code>default</code> namespace.</p>
<p>I also recommend looking at other options like <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka/#installing-the-chart" rel="nofollow noreferrer">Bitnami Kafka Helm charts</a> which deploy Zookeeper as needed with Kafka, manages most of the connection details and allows for easier customisation. It is also kept far more up to date.</p>
|
<p>I have a multithreaded application that requires many cores on a SINGLE instance.
Just wondering how to merge multiple pods (i.e. containers) into a single big node, so the application can run on this big node.
For example: 64 pods into a single one (i.e. 64 cores).</p>
<p>This is not for production or HA service, just computation.
Application cannot be re-written.</p>
<p>Have <a href="https://stackoverflow.com/questions/20306628/how-to-make-all-distributed-nodes-ram-available-to-a-single-node">this reference</a>, a bit outdated.</p>
<p>First of all, running multiple pods on a single node couldn't really be called "merging"; assigning or scheduling are better verbs, I guess.</p>
<p>You just need to label your node, then schedule your pods to be spawned on nodes with that label:</p>
<ol>
<li><p>Run <code>kubectl get nodes</code> and note the node name.</p>
</li>
<li><p><code>kubectl label nodes <node-name> bignode=yes</code></p>
</li>
<li><p>Verify your node is labelled correctly:</p>
<p><code>kubectl get nodes --show-labels</code></p>
</li>
<li><p>Then, in your pod definition, define a node selector:</p>
</li>
</ol>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
env: test
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
nodeSelector:
    bignode: "yes"   # <-- quoted, since a bare "yes" parses as a YAML boolean
</code></pre>
<p>More info in Kubernetes docs:
<a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p>
<p>Update:
The other way is to set your CPU request as high as you need, so Kubernetes will schedule your pods onto a node with sufficient CPU cores.</p>
<p>It could look like this:</p>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
env: test
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
resources:
requests:
memory: "2048Mi"
cpu: "60"
</code></pre>
<p>more info: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
<p>If your concern is only the number of CPU cores, the second solution is more reliable. If it's important to have all pods on a single node, you can read about <a href="https://www.cncf.io/blog/2021/07/27/advanced-kubernetes-pod-to-node-scheduling/" rel="nofollow noreferrer">affinity</a> rules too.</p>
|
<p>Note: I am not running locally on Minikube or something, but GKE - but could be any provider.</p>
<p>I want to be able to create users/contexts in K8s with openssl:</p>
<pre><code>openssl x509 -req -in juan.csr -CA CA_LOCATION/ca.crt -CAKey CA_LOCATION/ca.key -CAcreateserial -out juan.crt -days 500
</code></pre>
<p>How do I get the K8s <code>ca.crt</code> and <code>ca.key</code>? I found this for the <code>ca.crt</code>, but is this the right way, and I am still missing the <code>ca.key</code>:</p>
<pre><code>kubectl get secret -o jsonpath="{.items[?(@.type==\"kubernetes.io/service-account-token\")].data['ca\.crt']}" | base64 --decode
</code></pre>
<p>And is there another way than logging into the master node and reading <code>/etc/kubernetes/pki/</code>?</p>
<p>You don't. Google does not expose the keys, as per the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-trust#root_of_trust" rel="nofollow noreferrer">documentation</a>.</p>
<p>Specifically and I quote:</p>
<blockquote>
<p>An internal Google service manages root keys for this CA, which are non-exportable. This service accepts certificate signing requests, including those from the kubelets in each GKE cluster. Even if the API server in a cluster were compromised, the CA would not be compromised, so no other clusters would be affected.</p>
</blockquote>
<p>Using Kubernetes v1.19 and higher you can sign your CSR using the Kubernetes API itself as referenced <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/" rel="nofollow noreferrer">here</a>.</p>
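<p>The steps below assume a CSR file already exists; a minimal sketch to create one (the CN becomes the Kubernetes username and O a group; <code>mia</code> and <code>dev-team</code> are placeholders):</p>

```shell
# Generate a private key and a certificate signing request for user "mia".
openssl genrsa -out mia.key 2048
openssl req -new -key mia.key -out mia.csr -subj "/CN=mia/O=dev-team"
```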
<p>Encode your CSR <code>$ ENCODED=$(cat mia.csr | base64 | tr -d "\n")</code>.</p>
<p>Then post it to k8s:</p>
<pre><code>$ cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: mia
spec:
request: $ENCODED
signerName: kubernetes.io/kube-apiserver-client
#expirationSeconds: 86400 # Only supported on >=1.22
usages:
- client auth
EOF
</code></pre>
<p>And then approve the CSR:</p>
<pre><code>$ kubectl certificate approve mia
certificatesigningrequest.certificates.k8s.io/mia approved
</code></pre>
<p>Then download the signed certificate:</p>
<pre><code>kubectl get csr mia -o jsonpath='{.status.certificate}' | base64 -d > mia.crt
</code></pre>
<p>I wrote an example of the end to end flow <a href="https://medium.com/@mattgillard/creating-a-kubernetes-rbac-user-in-google-kubernetes-engine-gke-fa930217a052" rel="nofollow noreferrer">here</a>.</p>
|
<p>I use Autopilot on GKE. I've created some log based metrics that I'd like to use to scale up pods.</p>
<p>To begin with, I'm not sure if it's a great idea: the metric is just the number of records in the DB to process... I have a feeling that using logs to scale the app might create some weird infinite loop or something....</p>
<p>Anyhow, I've tried entering <code>logging.googleapis.com|user|celery-person-count</code> as an external metric and got <code>HPA cannot read metric value</code>. I installed the Stackdriver adapter but am not too sure how to use it either.</p>
| <p>GKE Autopilot clusters have <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Workload Identity</a> enabled for consuming other GCP services, including Cloud Monitoring.</p>
<p>You'll want to follow the steps <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/workload-metrics-autoscaling#step3" rel="nofollow noreferrer">here</a> in order to deploy the Custom Metrics Adapter on Autopilot clusters.</p>
<pre><code>kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user "$(gcloud config get-value account)"
kubectl create namespace custom-metrics
kubectl create serviceaccount --namespace custom-metrics \
custom-metrics-stackdriver-adapter
gcloud iam service-accounts create GSA_NAME
gcloud projects add-iam-policy-binding PROJECT_ID \
--member "serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
--role "roles/monitoring.viewer"
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[custom-metrics/custom-metrics-stackdriver-adapter]" \
GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
kubectl annotate serviceaccount \
--namespace custom-metrics custom-metrics-stackdriver-adapter \
iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
kubectl apply -f manifests/adapter_new_resource_model.yaml
</code></pre>
<p>Given that you've already deployed the adapter, you'll want to delete the deployment first, although you might just be able to run the steps starting at <code>gcloud iam ... </code></p>
<p>You'll need to replace GSA_NAME with a name of your choosing and PROJECT_ID with your Google Cloud project ID.</p>
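<p>Once the adapter is running, a hedged sketch of an HPA consuming the log-based metric from your question (the deployment name and target value are placeholders; the pipe-delimited metric name must match exactly what Cloud Monitoring reports):</p>

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: celery-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: celery-worker   # placeholder deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: logging.googleapis.com|user|celery-person-count
      target:
        type: AverageValue
        averageValue: "30"
```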
|
<p>We've been using <code>/stats/summary</code> to get <code>fs</code> metrics, which is like:</p>
<pre><code>"fs": {
"time": "2021-10-14T03:46:05Z",
"availableBytes": 17989276262,
"capacityBytes": 29845807308,
"usedBytes": 5856531046,
"inodesFree": 16799593,
"inodes": 17347097,
"inodesUsed": 57504
},
</code></pre>
<p>And due to this <a href="https://github.com/elastic/beats/issues/12792" rel="nofollow noreferrer">Move away from kubelet stats/summary</a>, we need to get the same data in another way.</p>
<p>We've tried <code>/metrics/cadvisor</code> and <code>/metrics/resources</code>, but were not able to get the <code>fs</code> data from them.
Also, it seems that cAdvisor will be deprecated as well (in TBD+2, <a href="https://github.com/kubernetes/kubernetes/issues/68522" rel="nofollow noreferrer">here</a>)</p>
<p>We've been searching the net for possible solution but can't seem to find any.</p>
<p>Any ideas on how this can be done?
Or could you point us in the right direction or to relevant documentation?</p>
<p>Thank you in advance.</p>
<p>Posted community wiki based on a GitHub topic. Feel free to expand it.</p>
<hr />
<p>Personally, I have not found any equivalent of this call (<code>/api/v1/nodes/<node name>/proxy/stats/summary</code>), and as it is still working and not deprecated in the newest Kubernetes versions (<code>1.21</code> and <code>1.22</code>), I'd recommend just using it and waiting for information about a replacement from the Kubernetes team. Check the information below:</p>
<p>Information from this <a href="https://github.com/kubernetes/kubernetes/issues/68522" rel="nofollow noreferrer">GitHub topic - # Reduce the set of metrics exposed by the kubelet #68522</a> (last edited: November 2020, issue open):</p>
<p>It seems that <code>/stats/summary/</code> does not have any replacement recommendation ready:</p>
<blockquote>
<p>[TBD] Propose out-of-tree replacements for kubelet monitoring endpoints</p>
</blockquote>
<p>They will keep the Summary API for the next four versions counting from the version in which replacement will be implemented:</p>
<blockquote>
<p>[TBD+4] Remove the Summary API, cAdvisor prometheus metrics and remove the <code>--enable-container-monitoring-endpoints</code> flag.</p>
</blockquote>
<hr />
<p>In Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md" rel="nofollow noreferrer"><code>v1.23</code> changelog</a> there is no information about changing anything related to the Summary API.</p>
<p>I'd suggest observing and pinging Kubernetes developers directly in <a href="https://github.com/kubernetes/kubernetes/issues/68522" rel="nofollow noreferrer">this GitHub topic</a> for more information.</p>
|
<p>I wish to create Persistent Volumes across multiple zones.</p>
<p>There are two lists inside the data object, one for az's and the other for vol id's.</p>
<p>The structure is:</p>
<pre><code>persistentVolume:
az:
- eu-west-2a
- eu-west-2b
- eu-west-2c
storageClassName: postgres
storage: 10Gi
accessModes: ReadWriteOnce
volumeID:
- vol-123
- vol-456
- vol-789
fsType: ext4
</code></pre>
<p>The Helm config:</p>
<pre><code>{{- $fullName := include "postgres.fullname" $ -}}
{{- range .Values.persistentVolume }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ $fullName }}
labels:
type: {{ $fullName }}
az: {{ .az }}
spec:
storageClassName: {{ .Values.persistentVolume.storageClassName }}
capacity:
storage: {{ .Values.persistentVolume.storage }}
accessModes:
- {{ .Values.persistentVolume.accessModes }}
awsElasticBlockStore:
volumeID: {{ .volumeID }}
fsType: {{ .Values.persistentVolume.fsType }}
{{- end }}
</code></pre>
<p>But this error with:</p>
<pre><code>template: postgres/templates/pv.yaml:10:11: executing "postgres/templates/pv.yaml" at <.az>: can't evaluate field az in type interface {}
</code></pre>
<p>Any tips or pointers would be much appreciated :-)</p>
| <p>It is not possible to do what you want with the current structure you have, but if you are willing to change your <code>values.yaml</code> a bit:</p>
<pre><code>persistentVolume:
az:
- region: eu-west-2a
volumeID: vol-123
- region: eu-west-2b
volumeID: vol-456
- region: eu-west-2c
volumeID: vol-789
storageClassName: postgres
storage: 10Gi
accessModes: ReadWriteOnce
fsType: ext4
</code></pre>
<p>then you could do (simplified example, but I am sure you will get the idea):</p>
<pre><code>{{- $storageClassName := .Values.persistentVolume.storageClassName }}
{{- $volumes := .Values.persistentVolume }}
{{- range $volumes.az }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-{{ .region }}
labels:
type: test
az: {{ .region }}
spec:
storageClassName: {{ $storageClassName }}
awsElasticBlockStore:
volumeID: {{ .volumeID }}
{{- end }}
</code></pre>
<p>Note the <code>{{- $storageClassName := .Values.persistentVolume.storageClassName }}</code>. You need to do this because <code>helm</code> has the notion of "scope", about which you can read much more in the docs.</p>
|
<p>The <code>.spec.selector</code> field defines how the Deployment finds which Pods to manage. But we also define labels inside the template, so what extra do we get from the <code>.spec.selector</code> field, given that the Deployment could also find the Pods to manage via the labels defined in the template?</p>
<p>In the code below, how can the pod with the label <code>occloud.oracle.com/open-network-policy: allow</code> be managed by the deployment, as it is not described in <code>spec.selector</code>?</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cheeseshop
spec:
replicas: 1
progressDeadlineSeconds: 180
selector:
matchLabels:
app.kubernetes.io/name: tutorial
app.kubernetes.io/component: cheeseshop
template:
metadata:
labels:
app.kubernetes.io/name: tutorial
app.kubernetes.io/component: cheeseshop
occloud.oracle.com/open-network-policy: allow
name: cheeseshop
</code></pre>
<p>The <code>spec.selector</code> field is used by the Deployment/ReplicaSet controllers. It must be a <strong>subset</strong> of the labels specified in the <code>podTemplate</code>. That is why you may have additional labels on your pods, and they will still be managed by the deployment.</p>
<p><code>spec.selector</code> is also used to check whether any existing <code>ReplicaSet</code> already matches these conditions. If the <code>Deployment controller</code> finds an orphan <code>ReplicaSet</code>, it will be adopted by the deployment instead of creating a new one.
See <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/deployment_controller.go#L222" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/deployment_controller.go#L222</a></p>
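<p>A minimal illustration of the subset rule, using the labels from the question:</p>

```yaml
# The selector only requires its own key/value pairs to be present on a pod:
selector:
  matchLabels:
    app.kubernetes.io/name: tutorial
    app.kubernetes.io/component: cheeseshop
# The pod template may carry extra labels without affecting the match, e.g.:
#   occloud.oracle.com/open-network-policy: allow
```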
|
<p>My remotely working setup is as follows:</p>
<p><code>PersistentVolume</code> is mounted to a <code>gcePersistentDisk</code>. Pod "Lagg" makes a claim on the entirety of the persistent disk. "Lagg" is a google containers <a href="http://gcr.io/google-containers/volume-nfs" rel="nofollow noreferrer">volume-nfs</a> image, which acts as the middleman between the <code>ReadWriteOnce</code> volume and an NFS <code>ReadWriteMany</code> volume that all of my other pods can access. Below is the Lagg NFS persistent volume YAML:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: lagg-volume
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
# kustomize does not add prefixes here, so they're placed ahead of time
server: test-lagg.test-project.svc.cluster.local
path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: lagg-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 5Gi
</code></pre>
<p>There is a second <code>PersistentVolume</code> that mounts to the pod via NFS, that other pods can claim. One of those pods is "Digit" which you can see the volume defining part below:</p>
<pre><code>spec:
template:
spec:
containers:
- name: digit
volumeMounts:
- name: lagg-connection
mountPath: "/cache"
volumes:
- name: lagg-connection
persistentVolumeClaim:
claimName: lagg-claim
</code></pre>
<p>Because I don't have a <code>gcePersistentDisk</code> for local testing, my local version of this cluster instead uses another persistent volume called "Lagg-local" which simply takes the place of the <code>gcePersistentDisk</code>, and looks like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: lagg-local-volume
labels:
type: local
spec:
storageClassName: manual
persistentVolumeReclaimPolicy: Delete
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
local:
path: /run/desktop/mnt/host/c/project/cache
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lagg-local-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
</code></pre>
<p>When I try to run this locally, I only get one error, and it's in the Digit pod, using describe, it says:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 47s default-scheduler Successfully assigned test-project/test-digit-cff6bd9c6-gz2sn to docker-desktop
Warning FailedMount 11s (x7 over 43s) kubelet MountVolume.SetUp failed for volume "test-lagg-volume" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs test-lagg.test-project.svc.cluster.local:/ /var/lib/kubelet/pods/80f686cf-47bb-478b-a581-c179794e2182/volumes/kubernetes.io~nfs/test-lagg-volume
Output: mount.nfs: Failed to resolve server test-lagg.test-project.svc.cluster.local: Name or service not known
</code></pre>
<p>From what I can see, the pods simply can't contact the NFS server or possibly can't resolve the DNS.
test-lagg exists and is running, and test-project is the namespace that both test-lagg (the service which points to the lagg NFS pods) and test-digit reside in. So I'm not entirely sure what is happening here.</p>
<p>I do believe the NFS server is working correctly, as a file "index.html" is created in the root of the volume that simply contains "Hello from NFS!"</p>
<p>The same error also happens if I use <code>cpuguy83/nfs-server</code> image instead of <code>google_containers/volume-nfs</code></p>
<p>A different error happens if I define the <code>clusterIP</code> rather than the DNS name, stating it doesn't have permissions.</p>
<p>I also don't think there's an issue with the connection to the service because running nslookup on the digit pod returns this:</p>
<pre><code>root@test-digit-7c6dc66659-q4trw:/var/www/static# nslookup test-lagg.test-project.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: test-lagg.test-project.svc.cluster.local
Address: 10.105.85.125
</code></pre>
<p>The NFS pod itself also has the volume mounted correctly:</p>
<pre><code>On GKE:
PS C:\Users\ral\Documents\Projects\Project\Kubernetes> kubectl exec next-lagg-69884bf49b-fn544 -- bash -c "findmnt /exports -o TARGET,SOURCE,FSTYPE"
TARGET SOURCE FSTYPE
/exports /dev/sdb ext4
On local:
PS C:\Users\ral\Documents\Projects\Project\Kubernetes> kubectl exec test-lagg-547cbb779-4qgbl -- bash -c "findmnt /exports -o TARGET,SOURCE,FSTYPE"
TARGET SOURCE FSTYPE
/exports C:\[/Project/cache] 9p
</code></pre>
| <p>DNS resolution problems with <code>google_containers/volume-nfs</code>, on non-GKE clusters, is a known issue:</p>
<ul>
<li><a href="https://github.com/kubernetes/examples/issues/390" rel="nofollow noreferrer">NFS example with a cluster local service name only works on GKE but not for minikube/kubeadm #390</a></li>
<li><a href="https://github.com/kubernetes/examples/issues/418" rel="nofollow noreferrer">Failed to resolve server nfs-server.default.svc.cluster.local: Name or service not known #418</a></li>
</ul>
<p>Basically, NFS server does not support hostnames, only IPs.</p>
<p>Alternatively you could use <a href="https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/example/nfs-provisioner/README.md" rel="nofollow noreferrer">csi-driver-nfs</a></p>
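<p>As an illustration of the IP-only approach, here is a PersistentVolume sketch pointing at the service's cluster IP from your <code>nslookup</code> output (cluster IPs can change if the service is recreated; the separate permissions error you saw with the IP points at NFS export options rather than name resolution):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-lagg-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.105.85.125   # clusterIP of the test-lagg service, not its DNS name
    path: "/"
```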
|
<p>I am testing a log previous command and for that I need a pod to restart.</p>
<p>I can get my pods using a command like</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get pods -n $ns -l $label
</code></pre>
<p>Which shows that my pods did not restart so far. I want to test the command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl logs $podname -n $ns --previous=true
</code></pre>
<p>That command fails because my pod did not restart making the <code>--previous=true</code> switch meaningless.</p>
<p>I am aware of this command to restart pods when configuration changed:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl rollout restart deployment myapp -n $ns
</code></pre>
<p>This does not restart the containers in a way that is meaningful for my log command test but rather terminates the old pods and creates new pods (which have a restart count of 0).</p>
<p>I tried various versions of exec to see if I can shut them down from within but most commands I would use are not found in that container:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec $podname -n $ns -- shutdown
kubectl exec $podname -n $ns -- shutdown now
kubectl exec $podname -n $ns -- halt
kubectl exec $podname -n $ns -- poweroff
</code></pre>
<p>How can I use a <code>kubectl</code> command to forcefully restart the pod with it retaining its identity and the restart counter increasing by one so that my test log command has a previous instance to return the logs from.</p>
<p>EDIT:
Connecting to the pod is <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">well described</a>.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n $ns exec --stdin --tty $podname -- /bin/bash
</code></pre>
<p>The process list shows only a handful running processes:</p>
<pre class="lang-sh prettyprint-override"><code>ls -1 /proc | grep -Eo "^[0-9]{1,5}$"
</code></pre>
<p>Process 1 seems to be the one running the pod's main application.
<code>kill 1</code> does nothing, not even killing the process with PID 1.</p>
<p>I am still looking into this at the moment.</p>
| <p>There are different ways to achieve your goal. I'll describe below most useful options.</p>
<h2>Crictl</h2>
<p>The most correct and efficient way is to restart the pod at the container runtime level.</p>
<p>I tested this on Google Cloud Platform - GKE and minikube with <code>docker</code> driver.</p>
<p>You need to <code>ssh</code> into the worker node where the pod is running. Then find its <code>POD ID</code>:</p>
<pre><code>$ crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9863a993e0396 87a94228f133e 3 minutes ago Running nginx-3 2 6d17dad8111bc
</code></pre>
<p>OR</p>
<pre><code>$ crictl pods -s ready
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
6d17dad8111bc About an hour ago Ready nginx-3 default 2 (default)
</code></pre>
<p>Then stop it:</p>
<pre><code>$ crictl stopp 6d17dad8111bc
Stopped sandbox 6d17dad8111bc
</code></pre>
<p>After some time, <code>kubelet</code> will start this pod again (with a different POD ID in CRI; however, the kubernetes cluster treats it as the same pod):</p>
<pre><code>$ crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f5f0442841899 87a94228f133e 41 minutes ago Running nginx-3 3 b628e1499da41
</code></pre>
<p>This is how it looks in cluster:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-3 1/1 Running 3 48m
</code></pre>
<p>Getting logs with <code>--previous=true</code> flag also confirmed it's the same POD for kubernetes.</p>
<h2>Kill process 1</h2>
<p>It works with most images, however not always.</p>
<p>E.g. I tested on simple pod with <code>nginx</code> image:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 27h
$ kubectl exec -it nginx -- /bin/bash
root@nginx:/# kill 1
root@nginx:/# command terminated with exit code 137
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 1 27h
</code></pre>
<h2>Useful link:</h2>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/" rel="nofollow noreferrer">Debugging Kubernetes nodes with crictl</a></li>
</ul>
|
<p>I created a mutating admission webhook in my Kubernetes cluster.
The api mutation webhook adds tolerations to a Deployment YAML.
However, it fails with the error <code>Internal error occurred: jsonpatch add operation does not apply: doc is missing path: "/spec/template/spec/tolerations/"</code></p>
<p>My Yaml Sample file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: webhook
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 50%
type: RollingUpdate
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: "nginx:1.17"
imagePullPolicy: Always
ports:
- containerPort: 80
name: nginx
</code></pre>
<p>Python code:</p>
<pre><code> mutations = []
mutations.append({"op": "add", "path": "/spec/template/spec/tolerations/" + str(counter),
"value": {key: t[key]}})
</code></pre>
<p>However, upon testing, the error appears as stated above.
Please help. :(</p>
| <p>I don't see the whole file, therefore I cannot correct it.
But the problem is with your path: you are probably referencing an array, so you have to specify an index. When you want to add an element to the end of an array, use <code>-1</code> like this: <code>spec/template/spec/containers/0/env/-1</code>. Also note that an <code>add</code> operation can only index into an array that already exists; if the array is absent from the original document, you must first add the whole array in one operation.</p>
<hr />
<p>Have a look at this example</p>
<ul>
<li>shortened yaml file:</li>
</ul>
<pre><code>...
spec:
template:
spec:
containers: # this is an array, there is 0 in the path
- name: somename
image: someimage
env: # another array, we are putting the element at the end, therefore index is -1
- name: SOME_VALUE_0
value: "foo"
- name: SOME_VALUE_1
value: "bar"
# I want to add second env variable SOME_VALUE_2 here
...
</code></pre>
<ul>
<li><code>kustomization.yaml</code></li>
</ul>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../link/to/your/yaml
patchesJson6902:
- target:
...
patch: |-
- op: "add"
path: "/spec/jobTemplate/spec/template/spec/containers/0/env/-1"
value: {name: "SOME_VALUE_2", value: "baz"}
</code></pre>
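<p>The same idea applied to the question's mutating webhook: a minimal Python sketch (function and field names are illustrative) that creates the <code>tolerations</code> array when it is absent and otherwise appends to it with the RFC 6902 end-of-array token <code>-</code>:</p>

```python
def build_toleration_patch(pod_spec: dict, tolerations: list) -> list:
    """Build an RFC 6902 JSON Patch that adds tolerations to a Deployment's pod spec."""
    mutations = []
    if "tolerations" not in pod_spec:
        # "add" cannot index into an array that does not exist yet,
        # so create the whole array in a single operation.
        mutations.append({
            "op": "add",
            "path": "/spec/template/spec/tolerations",
            "value": tolerations,
        })
    else:
        # The array exists: append each element with the "-" token.
        for t in tolerations:
            mutations.append({
                "op": "add",
                "path": "/spec/template/spec/tolerations/-",
                "value": t,
            })
    return mutations

# No tolerations in the original spec: one op creating the array.
patch = build_toleration_patch({}, [{"key": "gpu", "operator": "Exists"}])
```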
|
<p>We have been using Terraform for almost a year now to manage all kinds of resources on AWS from bastion hosts to VPCs, RDS and also EKS.</p>
<p>We are sometimes really baffled by the EKS module. It could however be due to lack of understanding (and documentation), so here it goes:</p>
<p><strong>Problem:</strong> Upsizing Disk (volume)</p>
<pre><code>
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "12.2.0"
cluster_name = local.cluster_name
cluster_version = "1.19"
subnets = module.vpc.private_subnets
#...
node_groups = {
first = {
desired_capacity = 1
max_capacity = 5
min_capacity = 1
instance_type = "m5.large"
}
}
</code></pre>
<p>I thought the default value for this (dev) k8s cluster's node could easily stay at the default 20 GB, but it's filling up fast, so I now want to change <code>disk_size</code> to, let's say, 40 GB.</p>
<p>=> I thought I could just add something like <code>disk_size=40</code> and done.</p>
<p><code>terraform plan</code> tells me I need to replace the node. This is a 1 node cluster, so not good. And even if it were I don't want to e.g. drain nodes. That's why I thought we are using managed k8s like EKS.</p>
<p><strong>Expected behaviour:</strong> since these are elastic volumes I should be able to upsize but not downsize, why is that not possible? I can def. do so from the AWS UI.</p>
<p>Sure with a slightly scary warning:</p>
<p><em>Are you sure that you want to modify volume vol-xx?
It may take some time for performance changes to take full effect.
You may need to extend the OS file system on the volume to use any newly-allocated space</em></p>
<p>But I can work with the provided docs on that: <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html?icmpid=docs_ec2_console" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html?icmpid=docs_ec2_console</a></p>
<p>Any guidelines on how to up the storage? If I do so with the UI but don't touch Terraform then my EKS state will be nuked/out of sync.</p>
| <p>To my knowledge, there is currently no way to resize an EKS node volume without recreating the node using Terraform.</p>
<p>Fortunately, there is a workaround: As you also found out, you can directly change the node size via the AWS UI or API. To update your state file afterward, you can run <code>terraform apply -refresh-only</code> to download the latest data (e.g., the increased node volume size). After that, you can change the node size in your Terraform plan to keep both plan and state in sync.</p>
<p>For the future, you might want to look into moving to ephemeral nodes as (at least my) experience shows that you will have unforeseeable changes to clusters and nodes from time to time. Already planning with replaceable nodes in mind will make these changes substantially easier.</p>
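<p>For reference, a sketch of where <code>disk_size</code> fits in the 12.x <code>node_groups</code> schema; as noted, changing it makes Terraform plan a node replacement:</p>

```hcl
node_groups = {
  first = {
    desired_capacity = 1
    max_capacity     = 5
    min_capacity     = 1
    instance_type    = "m5.large"
    disk_size        = 40 # GiB; Terraform will propose replacing the node
  }
}
```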
|
<p>I am new in Kubernetes and stuck on the issue. I was trying to renew letsencrypt SSL certificate. But when I try to get certificate by running following command</p>
<pre><code>kubectl get certificate
</code></pre>
<p>System throwing this exception</p>
<pre><code>Error from server: conversion webhook for cert-manager.io/v1alpha2, Kind=Certificate failed: Post https://cert-manager-webhook.default.svc:443/convert?timeout=30s: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "cert-manager-webhook-ca")
</code></pre>
<p>I have checked the pods also</p>
<p><a href="https://i.stack.imgur.com/R94Wn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R94Wn.png" alt="enter image description here" /></a></p>
<p>The "cert-manager-webhook" is in running state. When I check logs of this pod, I get the following response</p>
<p><a href="https://i.stack.imgur.com/BF0AR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BF0AR.png" alt="enter image description here" /></a></p>
<p>I have also tried to apply cluster-issuer after deleting it but face same issue</p>
<pre><code>kubectl apply -f cluster-issuer.yaml
</code></pre>
<p><a href="https://i.stack.imgur.com/hzNQa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hzNQa.png" alt="enter image description here" /></a></p>
<p>I also have done R&D about this but could not find any suitable solution. What's the issue here? Can someone please help me regarding this? Thanks.</p>
| <p>The problem was with "<strong>cert-manager-cainjector</strong>" pod status which was "<strong>CrashLoopBackOff</strong>" due to FailedMount as <strong>secret</strong> was not found for mounting. I have created that secret and after that it start working fine.</p>
|
<p>A node has plenty of info for metrics collection, under the cgroups <code>kubepods.slice</code> for example. But to complete a metric you have to relate a pod metric to a pod name. The name and namespace are themselves kind of static metrics of a pod, so they are the first things, alongside the pod UUID, that you need to describe a pod. How can I get this info from the node itself, without using kubectl and without accessing the remote Kubernetes database of pods?</p>
<p>I can find only container IDs / pod UUIDs as parts of the cgroup structure. Where are the name and namespace? (Ideally a whole YAML manifest; it's not that hard to store it on the node that is running the pod, right?)</p>
<p>If the node doesn't have this info, why not? That complicates collection practices: you cannot do instrumentation as a single program that collects pod metrics; you need an external process to post-process the metrics, mapping UUIDs to pod names and namespaces. That is definitely not the right approach when you could have one small program running on the node itself.</p>
| <p>You may use <code>docker</code> inspect:</p>
<pre><code>docker inspect <container-id> --format='{{index .Config.Labels "io.kubernetes.pod.name"}
docker inspect <container-id> --format='{{index .Config.Labels "io.kubernetes.pod.namespace"}}'
</code></pre>
<p>The following command will list all the pods and their namespaces running on the node. It uses Docker's data directory, where Docker maintains the pod info.</p>
<pre><code>find /var/lib/docker -name config.v2.json -exec perl -lnpe 's/.*pod\.name"*\s*:\s*"*([^"]+).*pod\.namespace"*\s*:\s*"*([^"]+).*/pod-name=$1 pod-namespace=$2/g' {} + |awk '!a[$0]++'|column -t
</code></pre>
<p>Example:</p>
<pre><code>find /var/lib/docker -name config.v2.json -exec perl -lnpe 's/.*pod\.name"*\s*:\s*"*([^"]+).*pod\.namespace"*\s*:\s*"*([^"]+).*/pod-name=$1 pod-namespace=$2/g' {} + |awk '!a[$0]++'|column -t
pod-name=huha pod-namespace=foo
pod-name=zoobar1 pod-namespace=default
pod-name=kube-proxy-l2hsb pod-namespace=kube-system
pod-name=weave-net-rbbwf pod-namespace=kube-system
pod-name=zoobar2 pod-namespace=default
</code></pre>
<p>IMO, the parsing could also be done with <code>jq</code>; I have used a <code>regex</code> here to show
the possibility of getting these values from the Docker data dir.</p>
<p>Note: for <code>crio</code> a similar json is placed under overlay directory. see OP comment below.</p>
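<p>The same extraction can be done with a proper JSON parser instead of a regex. A minimal Python sketch (the sample mimics only the relevant part of <code>config.v2.json</code>; the real file contains many more fields):</p>

```python
import json

# Stand-in for a container's config.v2.json content.
sample = json.dumps({
    "Config": {
        "Labels": {
            "io.kubernetes.pod.name": "huha",
            "io.kubernetes.pod.namespace": "foo",
        }
    }
})

def pod_identity(config_json: str):
    """Return (pod name, pod namespace) from a config.v2.json document."""
    labels = json.loads(config_json)["Config"]["Labels"]
    return labels.get("io.kubernetes.pod.name"), labels.get("io.kubernetes.pod.namespace")

print(pod_identity(sample))  # ('huha', 'foo')
```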
|
<p>I have a Deployment with x amount of gameservers (pods) running. I'm using Agones to make sure gameservers with players connected to them won't get stopped by downscaling. In addition, I use a Service ("connected" to all of the gameserves) which acts as a LoadBalancer for the pods and as I understand it, it will randomly choose a gameserver when a player connects to the service. This all works great when upscaling, but not so much when downscaling. Since Agones prevents gameservers with players on them from scaling down, the amount of pods will essentially never decrease because the service doesn't consider the amount of desired replicas (the actual amount is higher because gameservers with players on them won't be downscaled).</p>
<p>Is there a way to prevent the LoadBalancer service from picking a gameserver (replica) that's no longer desired? For example: current network load only requires 3 replicas, but currently there's 5 because there's 5 servers with players on them preventing them from shutting down. I would like to only spread new load accross the 3 desired replicas (gameservers) to give the other 2 the chance to reach 0 players so it's eventually able to shut itself down.</p>
| <p>Instead of using a LoadBalancer to spread players across your game instances, I'd recommend using the Agones <a href="https://agones.dev/site/docs/reference/gameserverallocation/" rel="nofollow noreferrer">GameServerAllocation</a> API to let Agones find an available game server for you.</p>
<p>If you allow multiple players to connect to the same game server, check out the <a href="https://agones.dev/site/docs/integration-patterns/player-capacity/" rel="nofollow noreferrer">integration pattern for allocating based on player capacity</a>. Agones will pack players onto game servers with available capacity (instead of spreading them out) which will prevent you from having a very small number of players spread across all game servers, which is what happens when you use a load balancer to assign players to game servers.</p>
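<p>For illustration, a minimal allocation request might look like this (the fleet name is made up; check the Agones documentation linked above for the schema matching your version):</p>

```yaml
apiVersion: allocation.agones.dev/v1
kind: GameServerAllocation
spec:
  required:
    matchLabels:
      agones.dev/fleet: my-game-fleet
```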
|
<p>I have two clusters in GCP.</p>
<ol>
<li>GKE cluster which has only postgres installed using Kubernetes.</li>
<li>A dataproc cluster.</li>
</ol>
<p>Now, if I make the postgres service internally load balanced to provide security, I can access it using my VPN configuration.</p>
<p>But a problem occurred while accessing Postgres from the dataproc cluster: the communication wasn't successful. Hence I had to make the postgres service publicly load balanced.</p>
<p>I want suggestions on how we can achieve security here: making the database less accessible, while keeping it accessible from the Dataproc cluster.</p>
| <p>If you are using the <strong>LoadBalancer</strong> to expose the service directly, and not using the Ingress, you can use the <strong>IP whitelisting</strong> option to whitelist your <strong>Dataproc cluster</strong> IPs.</p>
<p>Example</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
ports:
- port: 8765
targetPort: 9376
selector:
app: example
type: LoadBalancer
loadBalancerIP: 79.78.77.76
loadBalancerSourceRanges:
- 130.211.204.1/32
- 130.211.204.2/32
</code></pre>
<p>You can add the <strong>Dataproc cluster</strong> <strong>IPs</strong> (or the whole VPC subnet IP range in which the cluster is) to the <strong>LoadBalancer</strong> service, and only requests coming from the cluster will be able to access the database.</p>
<p>Refer to the <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-cidr-ip-address-loadbalancer/" rel="nofollow noreferrer">link</a> for more information</p>
<p><strong>Ingress</strong></p>
<p>If you are using the ingress to expose the database</p>
<p>You can use the annotation :</p>
<pre><code>ingress.kubernetes.io/whitelist-source-range
</code></pre>
<p>to whitelist the IPs</p>
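<p>For illustration, a minimal Ingress sketch with the whitelist annotation (annotation prefixes vary between ingress controllers and versions, and the host/service names here are made up):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: postgres-ingress
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "130.211.204.1/32,130.211.204.2/32"
spec:
  rules:
    - host: db.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: postgres
                port:
                  number: 8765
```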
|
<p>I have a set of Kubernetes pods (Kafka). They have been created by Terraform but somehow they "fell" out of the state (Terraform does not recognize them) and are wrongly configured (I don't need them anymore anyway).</p>
<p><a href="https://i.stack.imgur.com/zyuNP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zyuNP.png" alt="enter image description here" /></a></p>
<p>I now want to remove the pods from the cluster completely. The main problem is that even after I kill/delete them, they keep being recreated/restarting.</p>
<p>I tried:</p>
<pre><code>kubectl get deployments --all-namespaces
</code></pre>
<p>and then deleted the namespace the pods were in with</p>
<pre><code>kubectl delete -n <NS> deployment <DEPLOY>
</code></pre>
<p>This namespace got removed correctly. Still, if I now try to remove/kill the pods (forced and with cascade) they still re-appear. In the events, I can see they are re-created by kubelet but I don't know why nor how I can stop this behavior.</p>
<p><a href="https://i.stack.imgur.com/4KabW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4KabW.png" alt="enter image description here" /></a></p>
<p>I also checked</p>
<pre><code>kubectl get jobs --all-namespaces
</code></pre>
<p>But there are no resources found. And also</p>
<pre><code>kubectl get daemonsets.app --all-namespaces
kubectl get daemonsets.extensions --all-namespaces
</code></pre>
<p><a href="https://i.stack.imgur.com/T79R4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T79R4.png" alt="enter image description here" /></a></p>
<p>But I don't think that one of those is relevant for the Kafka deployment at all.</p>
<p>What else can I try to remove those pods? Any help is welcome.</p>
| <p>Ok, I was able to find the root cause.</p>
<p>With:</p>
<pre><code>kubectl get all --all-namespaces
</code></pre>
<p>I looked up everything that is related to the name of the pods. In this case, I found services that were related. After I deleted those services, the pods did not get recreated again.</p>
<p>I still think this is not a good solution to the problem ("Just delete everything that has the same name" ...) and I would be happy if someone can suggest a better solution to resolve this.</p>
|
<ul>
<li><p>Can Kubernetes pods share significant amount of memory?</p>
</li>
<li><p>Does copy-on-write style forking exist for pods?</p>
</li>
</ul>
<p>The purpose is to make pods spawn faster and use less memory.</p>
<p>Our scenario is that we have a dedicated game server to host in kubernetes. The problem is that one instance of the dedicated game server would take up a few GB of memory upfront (e.g. 3 GBs).</p>
<p>Also, we have a few such docker images of game servers, each for game A, game B... Let's call a pod that's running game A's image for game A <code>pod A</code>.</p>
<p>Let's say we now have 3 x <code>pod A</code>, 5 x <code>pod B</code>. Now players rushing into game B, so I need let's say another 4 * <code>pod B</code> urgently.</p>
<p>I can surely spawn 4 more <code>pod B</code>. Kubernetes supports this perfectly. However there are 2 problems:</p>
<ul>
<li>The booting of my game server is very slow (30s - 1min). Players don't want to wait.</li>
<li>More importantly for us, the cost of having this many pods that take up so much memory is very high, because pods do not share memory as far as I know. Whereas if it were a plain old EC2 machine or bare metal, processes could share memory because they can fork and then copy-on-write.</li>
</ul>
<p>Copy-on-write style forking and memory sharing seems to solve both problems.</p>
| <p>One of Kubernetes' assumptions is that <em>pods</em> are scheduled on different Nodes, which contradicts the idea of sharing common resources (does not apply for storage where <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">there are many options and documentation available</a>). The situation is different when it comes to sharing resources between <em>containers in one pod</em>, but for your issue this doesn't apply.</p>
<p>However, it seems that there is some possibility to share memory - not well documented and I guess very uncommon in Kubernetes. Check my answers with more details below:</p>
<blockquote>
<p>Can Kubernetes pods share significant amount of memory?</p>
</blockquote>
<p>What I found is that pods can share a common <a href="https://en.wikipedia.org/wiki/Inter-process_communication" rel="nofollow noreferrer">IPC</a> with the host (node).
You can check <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">Pod Security Policies</a>, especially <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer">field <code>hostIPC</code></a>:</p>
<blockquote>
<p><strong>HostIPC</strong> - Controls whether the pod containers can share the host IPC namespace.</p>
</blockquote>
<p>Some usage examples and possible security issues <a href="https://github.com/BishopFox/badPods/tree/main/manifests/hostipc#bad-pod-7-hostipc" rel="nofollow noreferrer">can be found here</a>:</p>
<ul>
<li><a href="https://github.com/BishopFox/badPods/tree/main/manifests/hostipc#inspect-devshm---look-for-any-files-in-this-shared-memory-location" rel="nofollow noreferrer">Shared <code>/dev/sh</code> directory</a></li>
<li><a href="https://github.com/BishopFox/badPods/tree/main/manifests/hostipc#look-for-any-use-of-inter-process-communication-on-the-host" rel="nofollow noreferrer">Use existing IPC facilities</a></li>
</ul>
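<p>For illustration, a minimal (and security-sensitive) pod sketch with host IPC enabled; the image and node names are made up:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-ipc-example
spec:
  hostIPC: true                            # share the node's IPC namespace (and its /dev/shm)
  nodeSelector:
    kubernetes.io/hostname: game-node-1    # keep cooperating pods on the same node
  containers:
    - name: game
      image: example/game-server:latest
```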
<p>Keep in mind that this solution is not common in Kubernetes. Pods with elevated <a href="https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/#why-is-podsecuritypolicy-going-away" rel="nofollow noreferrer">privileges are granted broader permissions than needed</a>:</p>
<blockquote>
<p>The way PSPs are applied to Pods has proven confusing to nearly everyone that has attempted to use them. It is easy to accidentally grant broader permissions than intended, and difficult to inspect which PSP(s) apply in a given situation.</p>
</blockquote>
<p>That's why the Kubernetes team marked Pod Security Policies as deprecated from Kubernetes <code>v1.21</code> - <a href="https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/" rel="nofollow noreferrer">check more information in this article</a>.</p>
<p>Also, if you are using multiple nodes in your cluster you <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">should use nodeSelector</a> to make sure that pods will be assigned to same node that means they will be able to share one (host's) IPC.</p>
<blockquote>
<p>Does copy-on-write style forking exist for pods?</p>
</blockquote>
<p>I did some research and didn't find any information about this possibility, so I think it is not possible.</p>
<hr />
<p>I think the main issue is that your game architecture is not "very suitable" for Kubernetes. Check these articles and websites about dedicated game servers in Kubernetes; maybe you will find them useful:</p>
<ul>
<li><a href="https://agones.dev/site/" rel="nofollow noreferrer">Agones</a></li>
<li><a href="https://www.gamedeveloper.com/programming/scaling-dedicated-game-servers-with-kubernetes-part-3-scaling-up-nodes" rel="nofollow noreferrer">Scaling Dedicated Game Servers with Kubernetes: Part 3 – Scaling Up Nodes</a></li>
<li><a href="https://cloud.google.com/files/DedicatedGameServerSolution.pdf" rel="nofollow noreferrer">Google Cloud - Dedicated Game Server Solution</a></li>
<li><a href="https://cloud.google.com/game-servers" rel="nofollow noreferrer">Google Cloud - Game Servers</a></li>
</ul>
|
<p>I would like to deploy an application whose pod should not go to running status (it should be non-operational). A user might trigger this when really required, using Infrastructure as Code (Terraform). I am aware of using <code>kubectl scale --replicas=0</code>. Any other leads or info will be well appreciated.</p>
| <p>You can keep the replica count at <strong>zero</strong> for the <strong>Deployment</strong> or <strong>Pod</strong> in your <strong>YAML</strong> file if you are using one.</p>
<p>Or if you are using the <strong>Terraform</strong></p>
<pre><code>resource "kubernetes_deployment" "example" {
metadata {
name = "terraform-example"
labels = {
test = "MyExampleApp"
}
}
spec {
replicas = 0
selector {
match_labels = {
test = "MyExampleApp"
}
}
template {
metadata {
labels = {
test = "MyExampleApp"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "example"
resources {
limits = {
cpu = "0.5"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "50Mi"
}
}
liveness_probe {
http_get {
path = "/nginx_status"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
}
}
}
}
</code></pre>
<p>There is no other way around it; you can use a Kubernetes <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">client</a> library to do this if you don't want to use Terraform.</p>
<p>If you want to edit the local file using Terraform, check out <strong>local-exec</strong>:</p>
<blockquote>
<p>This invokes a process on the machine running Terraform, not on the
resource.</p>
</blockquote>
<pre><code>resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo ${self.private_ip} >> private_ips.txt"
}
}
</code></pre>
<p>Using the <strong>sed</strong> command in <strong>local-exec</strong>, or any other command, you can update the YAML and apply it.</p>
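<p>For example, a hedged sketch of such a provisioner (the resource, file path and manifest layout are illustrative):</p>

```hcl
resource "null_resource" "scale_down" {
  provisioner "local-exec" {
    # Rewrite the replica count in a local manifest and apply it.
    command = "sed -i 's/replicas: .*/replicas: 0/' deployment.yaml && kubectl apply -f deployment.yaml"
  }
}
```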
<p><a href="https://www.terraform.io/docs/language/resources/provisioners/local-exec.html" rel="nofollow noreferrer">https://www.terraform.io/docs/language/resources/provisioners/local-exec.html</a></p>
|
<p>I have a set of Kubernetes pods (Kafka). They have been created by Terraform but somehow they "fell" out of the state (Terraform does not recognize them) and are wrongly configured (I don't need them anymore anyway).</p>
<p><a href="https://i.stack.imgur.com/zyuNP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zyuNP.png" alt="enter image description here" /></a></p>
<p>I now want to remove the pods from the cluster completely. The main problem is that even after I kill/delete them, they keep being recreated/restarting.</p>
<p>I tried:</p>
<pre><code>kubectl get deployments --all-namespaces
</code></pre>
<p>and then deleted the namespace the pods were in with</p>
<pre><code>kubectl delete -n <NS> deployment <DEPLOY>
</code></pre>
<p>This namespace got removed correctly. Still, if I now try to remove/kill the pods (forced and with cascade) they still re-appear. In the events, I can see they are re-created by kubelet but I don't know why nor how I can stop this behavior.</p>
<p><a href="https://i.stack.imgur.com/4KabW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4KabW.png" alt="enter image description here" /></a></p>
<p>I also checked</p>
<pre><code>kubectl get jobs --all-namespaces
</code></pre>
<p>But there are no resources found. And also</p>
<pre><code>kubectl get daemonsets.app --all-namespaces
kubectl get daemonsets.extensions --all-namespaces
</code></pre>
<p><a href="https://i.stack.imgur.com/T79R4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T79R4.png" alt="enter image description here" /></a></p>
<p>But I don't think that one of those is relevant for the Kafka deployment at all.</p>
<p>What else can I try to remove those pods? Any help is welcome.</p>
| <p>It's most likely a <code>statefulset</code> that controls the <code>pods</code>.</p>
<blockquote>
<p>the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet controller.
The Pods' names take the form "<statefulset name>-<ordinal index>".</p>
</blockquote>
<p>So, take a try for <code>kubectl get statefulset --all-namespaces</code></p>
|
<p>I'm trying to figure out how to use the nginx proxy cache with some specific rules. For example, when I'm hosting Ghost or WordPress, I don't want to cache the admin section. Using a server snippet, I've tried a lot of different combinations but still have issues with caching in the admin section.</p>
<pre><code>nginx.ingress.kubernetes.io/proxy-buffering: "on"
nginx.ingress.kubernetes.io/server-snippet: |-
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_ignore_headers Set-Cookie;
proxy_cache app_cache;
proxy_cache_lock on;
proxy_cache_valid any 30m;
add_header X-Cache-Status $upstream_cache_status;
</code></pre>
<p>I want to use an nginx code snippet for the (ghost|signout) paths to bypass the cache in the admin area, but I'm losing the proxy_pass context, resulting in a 502 Bad Gateway.</p>
<p>Here is the current ingress config, which caches every page, the admin path too:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/proxy-buffering: "on"
nginx.ingress.kubernetes.io/server-snippet: |-
proxy_cache my_blog_cache;
proxy_cache_lock on;
proxy_cache_valid any 30m;
add_header X-Cache-Status $upstream_cache_status;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
name: my-blog
namespace: web
spec:
rules:
- host: blog.example.com
http:
paths:
- backend:
serviceName: ingress-541322b8660dbd2ceb1e8ff1813f0dd5
servicePort: 2368
path: /
tls:
- hosts:
- blog.example.com
secretName: my-blog-cert
status:
loadBalancer:
ingress:
- ip: 1.2.3.4
</code></pre>
<p>Here is the nginx config I'm trying to achieve, but it isn't compatible with ingress annotations:</p>
<pre><code> location / {
proxy_cache my_blog_cache;
proxy_cache_valid 200 30m;
proxy_cache_valid 404 1m;
proxy_pass http://ghost_upstream;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_ignore_headers Set-Cookie;
proxy_hide_header Set-Cookie;
proxy_hide_header X-powered-by;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
expires 10m;
}
location /content/images {
alias /path/to/ghost/content/images;
access_log off;
expires max;
}
location /assets {
alias /path/to/ghost/content/themes/uno-master/assets;
access_log off;
expires max;
}
location /public {
alias /path/to/ghost/core/built/public;
access_log off;
expires max;
}
location /ghost/scripts {
alias /path/to/ghost/core/built/scripts;
access_log off;
expires max;
}
location ~ ^/(?:ghost|signout) {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://ghost_upstream;
add_header Cache-Control "no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0";
}
</code></pre>
<p>Thanks for your help!</p>
| <p>I had the exact same desire: to cache Ghost responses with respect to their <code>Cache-Control</code> headers directly on the Nginx Ingress in the Kubernetes cluster.</p>
<p>After some hours spent on it here is my solution:</p>
<h1>First step</h1>
<p>First of all, you need to define <code>proxy_cache_path</code> on Nginx Ingress <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="noreferrer"><code>ConfigMap</code> level</a> (the docs are really unclear on how to apply it tbh).</p>
<p>In my case, I manage Nginx Ingress installation via Helm, so I've added it to Helm values chart:</p>
<pre class="lang-yaml prettyprint-override"><code># Default values https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml
controller:
config:
http-snippet: "proxy_cache_path /tmp/nginx_my_cache levels=1:2 keys_zone=mycache:2m use_temp_path=off max_size=2g inactive=48h;"
</code></pre>
<p>Then apply this change:</p>
<pre><code>helm upgrade -f my-nginx-ingress-values.yaml ingress-nginx ingress-nginx/ingress-nginx --recreate-pods
</code></pre>
<h1>Second step</h1>
<p>Now that we have <code>proxy_cache_path</code> set, we need to configure the Ingress for a particular host with annotations:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myingress
namespace: mynamespace
annotations:
kubernetes.io/ingress.class: "nginx"
# Buffering must be enabled for Nginx disk cache to work.
nginx.ingress.kubernetes.io/proxy-buffering: "on"
# See https://www.nginx.com/blog/nginx-caching-guide/
# Cache Key Zone is configured in Helm config.
nginx.ingress.kubernetes.io/server-snippet: |
proxy_cache mycache;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_cache_revalidate on;
proxy_cache_lock on;
add_header X-Cache-Status $upstream_cache_status;
</code></pre>
<p>Note:</p>
<blockquote>
<p>I spent most of my time figuring out why I was still getting <code>MISS</code>es. It turned out to be the <code>nginx.ingress.kubernetes.io/proxy-buffering</code> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#proxy-buffering" rel="noreferrer">default in Ingress</a>, which is <code>off</code>; this DISABLES Nginx caching, so you <strong>have</strong> to set it to <code>on</code>, which is what we do.</p>
</blockquote>
<p>Apply the change to Ingress.</p>
<h1>Debugging resulting Nginx config</h1>
<p>You can, and I think should, verify the resulting <code>nginx.conf</code> used by the Ingress, which is generated as a result of applying the <code>ConfigMap</code> and the Ingress-level annotations.</p>
<p>To do so, you can copy <code>nginx.conf</code> from Ingress Controller pod to your local machine and verify its content (or <code>exec</code> into pod and see it there):</p>
<pre><code># Make sure to use correct namespace where Ingress Controller is deployed
# and correct Ingress Controller Pod name
kubectl cp -n default ingress-nginx-controller-xxxx:/etc/nginx/nginx.conf ~/Desktop/nginx.conf
</code></pre>
<p>It should contain all the changes we've made!</p>
<h1>Debugging actual response caching</h1>
<p>Now that we have everything configured, it's time to verify actual caching. Note that we have added the <code>X-Cache-Status</code> header, which will indicate whether it's a <code>HIT</code> or a <code>MISS</code>.</p>
<p>I personally like <a href="https://httpie.io/" rel="noreferrer">httpie</a> for HTTP requests from Terminal, you can use <code>curl</code> or browser:</p>
<p>The first request is going to be a <code>MISS</code>:</p>
<pre><code>http https://example.com/myimage.jpg
HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: public, max-age=31536000
Connection: keep-alive
Content-Length: 53588
Content-Type: image/jpeg
Date: Wed, 20 Oct 2021 10:39:06 GMT
ETag: W/"d154-17c3aa43389"
Last-Modified: Fri, 01 Oct 2021 06:56:52 GMT
Strict-Transport-Security: max-age=15724800; includeSubDomains
X-Cache-Status: MISS
X-Powered-By: Express
X-Request-ID: 0c73f97cb51d3071f14968720a26a99a
+-----------------------------------------+
| NOTE: binary data not shown in terminal |
+-----------------------------------------+
</code></pre>
<p>A second request to the same URL is now a <code>HIT</code> and doesn't hit the actual Ghost installation, success!</p>
<pre><code>http https://example.com/myimage.jpg
HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: public, max-age=31536000
Connection: keep-alive
Content-Length: 53588
Content-Type: image/jpeg
Date: Wed, 20 Oct 2021 10:39:43 GMT
ETag: W/"d154-17c3aa43389"
Last-Modified: Fri, 01 Oct 2021 06:56:52 GMT
Strict-Transport-Security: max-age=15724800; includeSubDomains
X-Cache-Status: HIT
X-Powered-By: Express
X-Request-ID: 0c73f97cb51d3071f14968720a26a99a
+-----------------------------------------+
| NOTE: binary data not shown in terminal |
+-----------------------------------------+
</code></pre>
<p>It's also useful to verify the logs on Ghost to double-check that cache HIT requests are actually served directly from Nginx and never hit Ghost.</p>
<hr />
|
<p>Although in an ideal world of Kubernetes you don't need to care about dependencies, the reality is different; we do have applications (consumer services) that rely on backing services (e.g. databases) to be available at startup.</p>
<p>To achieve this deployment order for services we have a few options available with k8s:</p>
<ul>
<li>check the backing service readiness in an init container</li>
<li>check the backing service readiness in the readiness probe</li>
<li>check the backing service readiness in the application itself</li>
</ul>
<p>Now, the backing services do own some configuration properties that are required by the consumer services as well e.g. the k8s service name, credentials, ports etc.</p>
<p>The usual approach I encountered is that these configuration properties are hardcoded into the initContainer/readinessProbe implementation for the consumer service. I think this is suboptimal as you have to manually keep the backing service and consumer service configuration in sync, duplicate the configuration in both services and manually reconfigure the consumer service when the backing service updates its config.</p>
<p>What are some best practices/patterns to keep the consumer service in sync with a backing service configuration changes?</p>
<p>Is it a good practice to rely on operators for backing service deployment and inject secrets/configmaps as requested by the consumer service through CRs?</p>
<p>Thanks!</p>
| <p>You are right.</p>
<p>I think the reality is that without automation it is impossible to manage a large, scalable system of consumers and producers.</p>
<p>Operators are widely used for handling database backups and other automation tasks.</p>
<p>If you look at some scenarios, such as Vault: <a href="https://www.hashicorp.com/blog/dynamic-database-credentials-with-vault-and-kubernetes" rel="nofollow noreferrer">https://www.hashicorp.com/blog/dynamic-database-credentials-with-vault-and-kubernetes</a></p>
<p>they are also using an Operator to solve these problems.</p>
<blockquote>
<p>HashiCorp Vault solves this problem by enabling operators to provide
dynamically generated credentials for applications. Vault manages the
lifecycle of credentials, rotating and revoking as required.</p>
</blockquote>
<blockquote>
<p>What are some best practices/patterns to keep the consumer service in
sync with a backing service configuration changes?</p>
</blockquote>
<p>You can use <strong>Vault</strong> to store <strong>key-value</strong> pairs at a central level, or use a Secret and ConfigMap to store credentials or files and inject them into the consumer services.</p>
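<p>As a sketch (all names below are placeholders, not from the question), this is how a consumer Pod can pull the backing service's config and credentials from a shared <code>ConfigMap</code> and <code>Secret</code> instead of hardcoding them:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: consumer-app
spec:
  containers:
  - name: app
    image: my-consumer-image
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: backing-db-config        # published by the backing service / operator
          key: host
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: backing-db-credentials   # rotated by the operator, e.g. via Vault
          key: password
</code></pre>
<p>When the operator rotates the Secret, the consumer only needs a restart (or a reloader sidecar) to pick up the change, instead of a manual edit of its manifest.</p>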
|
<p>I am new to K8s, and I am facing issues trying to connect to K8s NodePort service from outside the cluster.</p>
<p>I am unable to load the nginx default page when I try accessing it from my local machine using the URL: http://localhost:31008</p>
<p>I understand this is a common problem and I referred the below solutions,</p>
<p><a href="https://stackoverflow.com/questions/66212319/cannot-access-nodeport-service-outside-kubernetes-cluster">Cannot access NodePort service outside Kubernetes cluster</a></p>
<p><a href="https://stackoverflow.com/questions/62918016/cannot-access-microk8s-service-from-browser-using-nodeport-service">Cannot access Microk8s service from browser using NodePort service</a></p>
<p>However none of them are working for me.</p>
<p>Any guidance on this issue would be really appreciated. Thank you.</p>
<p>Setup:</p>
<p>Server OS: Ubuntu Server on AWS</p>
<p>K8s: minikube</p>
<p>Below is my deployment YAML:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: front-end
name: nginx
template:
metadata:
name: nginx
labels:
app: front-end
name: nginx
spec:
containers:
- name: nginx
image: nginx
</code></pre>
<p>Below is my Service YAML:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
app: front-end
name: nginx
ports:
- port: 8080
targetPort: 8080
nodePort: 31008
</code></pre>
<p>Below is the output of the command <code>kubectl get all</code>,</p>
<p><a href="https://i.stack.imgur.com/VTQHj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VTQHj.png" alt="enter image description here" /></a></p>
| <p>There is an issue in the target port config, as <strong>Nginx</strong> runs on port <strong>80</strong> by default:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
app: front-end
name: nginx
ports:
- port: 8080
targetPort: 80
nodePort: 31008
</code></pre>
<p>The <strong>target port</strong> should be <code>80</code></p>
<p>Config of Nginx:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
spec:
selector:
matchLabels:
run: my-nginx
replicas: 2
template:
metadata:
labels:
run: my-nginx
spec:
containers:
- name: my-nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Ref document : <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/</a></p>
|
<p>For some applications, starting or restarting needs more resources than normal running; for example, Elasticsearch or Flink. If a node has network jitter, all pods on that node restart at the same time. When this happens, CPU usage becomes very high on the node, which increases resource competition on it.</p>
<p>Now I want to start the pods on a single node in batches. How can I achieve this?</p>
| <p>Kubernetes has auto-healing.</p>
<p>You can let the Pods crash and Kubernetes will restart them as soon as sufficient memory or other resources become available.</p>
<p>Alternatively, if you want to add a wait so that the Pods start gradually, one by one,</p>
<p>you can use a sidecar together with the Pod lifecycle hooks to delay the main container; this is not ideal, but it can resolve your issue.</p>
<p>Basic example :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: sidecar-starts-first
spec:
containers:
- name: sidecar
image: my-sidecar
lifecycle:
postStart:
exec:
command:
- /bin/wait-until-ready.sh
- name: application
image: my-application
</code></pre>
<p><a href="https://i.stack.imgur.com/PtREE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PtREE.png" alt="enter image description here" /></a></p>
<p><strong>OR</strong></p>
<p>You can also use an <strong>init container</strong> to check another service's health and start the main container only once that service's Pod is ready.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init container</a></p>
<p>I would also recommend checking the PriorityClass feature: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/</a></p>
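<p>As an illustration, here is a minimal sketch of an init container that blocks the main container until a dependency is reachable (the service name, port, and images are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-waits-for-dependency
spec:
  initContainers:
  - name: wait-for-es
    image: busybox:1.35
    # Block until the dependency answers on its service port.
    command: ['sh', '-c', 'until nc -z elasticsearch 9200; do echo waiting; sleep 2; done']
  containers:
  - name: application
    image: my-application
</code></pre>
<p>Because init containers run to completion before the main containers start, staggering the wait conditions per application gives a rough batching effect during node recovery.</p>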
|
<p>I have created a server named <a href="https://github.com/CyBear-Jinni/cbj_remote-pipes" rel="nofollow noreferrer">Remote Pipes</a> that accepts client connections with streams and transfers data between them.</p>
<p>On one side there is a computer Hub and on the other side there are a number of App clients.</p>
<p>The Hub client connects to the Remote Pipes and a two-way stream remains open.</p>
<p>All the Apps clients connect to the Remote Pipes and a <a href="https://github.com/CyBear-Jinni/cbj_remote-pipes/blob/main/lib/domain/hube_server/smart_server_u.dart" rel="nofollow noreferrer">two-way stream remains open</a>.</p>
<p>Whenever the Hub wants to send data to the App clients, it sends it to the Remote Pipes, and the Remote Pipes forward the data to each connected App client through the already-opened stream.</p>
<p>Whenever one of the App clients wants to send data to the Hub, it sends it to the Remote Pipes, which combine all streams from the Apps and forward them through a single, already-opened stream to the Hub.</p>
<p><a href="https://i.stack.imgur.com/fbpjym.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fbpjym.jpg" alt="enter image description here" /></a></p>
<p>Remote Pipes do not store data nor use local storage nor use local DB and each instance is intended for one family.</p>
<p>So I want to create a Kubernetes pod with Remote Pipes for each family and all family members need to connect to the same pod.</p>
<p>There is no need for a persistent pod; if a pod gets deleted (when there are no connections), a new one is fine, as long as all the family members' App clients and the Hub client connect to the same pod.</p>
<p><strong>The Question:</strong></p>
<p>I am searching for a way to make <strong>multiple users connect to the same Kubernetes pod</strong> (like game/Zoom lobbies) and I am not sure what the best option is.</p>
<p>The routing must be created <strong>dynamically and be scalable</strong> so routing based on ports and Name-based routing are not a good fit.</p>
<p>Here are a number of terms that I found that may be related</p>
<ol>
<li>Stateful application</li>
<li>Headless services</li>
<li>Auto-labeling</li>
<li><a href="https://github.com/linode/linode-cloud-controller-manager" rel="nofollow noreferrer">kube-proxy</a></li>
<li>Host based routing</li>
<li>Path based routing</li>
<li>Header based routing</li>
<li>Software/Application Load balancer</li>
</ol>
<p>I am using Linode so using Linode NodeBalancers is preferable if load balancer is required.</p>
| <p><strong>Kafka</strong> might help if we put it in between; however, I am not sure what you are streaming. If it's payload data, that would be a good fit.</p>
<p><a href="https://i.stack.imgur.com/wqmBY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wqmBY.png" alt="enter image description here" /></a></p>
<p>as you have mentioned</p>
<blockquote>
<p>No need for a persistent pod</p>
</blockquote>
<p>You can use <strong>kind: Deployment</strong> for the application, while Kafka will run as a <strong>StatefulSet</strong>.</p>
<p>If you are using <strong>WebRTC</strong>, there won't be any issue.
<a href="https://i.stack.imgur.com/NPBlC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NPBlC.png" alt="enter image description here" /></a></p>
<p>You should checkout : <a href="https://cloud.google.com/architecture/orchestrating-gpu-accelerated-streaming-apps-using-webrtc" rel="nofollow noreferrer">https://cloud.google.com/architecture/orchestrating-gpu-accelerated-streaming-apps-using-webrtc</a></p>
|
<p>I'm working on a simple Python 3 application that uses print statements.
It is crucial for me to see the logs when I execute the <code>kubectl logs</code> command.
However, the logs are never present.
It looks like the application may not be running in the pod.
When I execute <code>kubectl exec -it</code> and go into the pod, I end up in the working directory; when I then manually run <code>python myApp.py</code>, it prints the logs.</p>
<p>My Dockerfile looks like this:</p>
<pre><code>FROM python:3.8-slim-buster
ENV PYTHONUNBUFFERED=0
WORKDIR /code
COPY . .
CMD [ "python", "/.main.py" ]
</code></pre>
<p>For a simple example I can use a piece of code which just prints:</p>
<pre><code>from time import sleep
while True:
print("Im printing ...")
sleep(10)
</code></pre>
<p>My pod definition look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: logbox
namespace: default
spec:
containers:
- name: logbox
image: <8888888888888888>logbox:0.0.1
imagePullPolicy: "IfNotPresent"
stdin: true
args:
- sh
- -c
- sleep infinity
</code></pre>
| <p>It might be that the <code>args</code> in the pod definition are overriding the Dockerfile <code>CMD</code> command. I'd give it a try without the <code>args</code> in the definition, or by matching them (so the args call <code>python ./main.py</code>).</p>
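<p>For example, a sketch of the pod definition with the overriding <code>args</code> removed, so the image's <code>CMD</code> runs and the print output shows up in <code>kubectl logs</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: logbox
  namespace: default
spec:
  containers:
  - name: logbox
    image: <8888888888888888>logbox:0.0.1
    imagePullPolicy: "IfNotPresent"
</code></pre>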
|
<p>I want to know what nodes correspond to pods in a K8s cluster. I am working with a 3 node K8s cluster which has 2 specific pods among other pods.</p>
<p><strong>How can I see which pod exists in which node using <code>kubectl</code>?</strong></p>
<p>When I use <code>kubectl get pods</code>, I get the following:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod1-7485f58945-zq8tg 1/1 Running 2 2d
pod2-64c4564b5c-8rh5x 1/1 Running 0 2d1h
</code></pre>
<p>Following is the version of K8s (<code>kubectl version</code>) that I am using</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.13", GitCommit:"53c7b65d4531a749cd3a7004c5212d23daa044a9", GitTreeState:"clean", BuildDate:"2021-07-15T20:53:19Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>Try <code>kubectl get pods -o wide</code>.</p>
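<p>The <code>-o wide</code> output adds <code>IP</code> and <code>NODE</code> columns; with the pods from the question it would look something like this (illustrative output, node names are placeholders):</p>
<pre><code>kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
pod1-7485f58945-zq8tg   1/1     Running   2          2d     10.244.1.5   node-1   <none>           <none>
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h   10.244.2.7   node-2   <none>           <none>
</code></pre>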
<p>You can get more details in this very detailed <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">Kubernetes cheatsheet</a>.</p>
|
<p>I am trying to make this basic example work on docker desktop on windows, I am not using <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">minikube</a>.</p>
<p>I managed to reach the service using NodePort with:</p>
<pre><code>http://localhost:31429
</code></pre>
<p>But when I try <code>http://hello-world.info</code> (made sure to add it in hosts) - <code>404 not found</code>.</p>
<pre><code>kubectl get svc --all-namespaces
NAMESPACE       NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes                           ClusterIP      10.96.0.1        <none>        443/TCP                      20m
default         web                                  NodePort       10.111.220.81    <none>        8080:31429/TCP               6m47s
ingress-nginx   ingress-nginx-controller             LoadBalancer   10.107.29.182    localhost     80:30266/TCP,443:32426/TCP   19m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP      10.101.138.244   <none>        443/TCP                      19m
kube-system     kube-dns                             ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP       20m

kubectl get ingress
NAME              CLASS    HOSTS              ADDRESS   PORTS   AGE
example-ingress   <none>   hello-world.info             80      21m
</code></pre>
<p>I am lost, can someone please help?
I also noticed that ADDRESS is empty.</p>
<p>Many thanks.</p>
| <p><em>Reproduced this case on Docker Desktop 4.1.1, Windows 10 Pro</em></p>
<ol>
<li><p>Install <a href="https://kubernetes.github.io/ingress-nginx/deploy/#docker-desktop" rel="nofollow noreferrer">Ingress Controller for Docker Desktop</a>:</p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml</code></p>
</li>
<li><p>As I understand it, @dev1334 used an example from <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">Set up Ingress on Minikube with the NGINX Ingress Controller</a> article. I also tried it with some modifications to the original example.</p>
</li>
<li><p>In the example for the <code>example-ingress.yaml</code> file in the <code>spec.rules</code> section, the host <code>hello-world.info</code> is specified. Since Docker Desktop for Windows adds to a hosts file in <code>C:\Windows\System32\drivers\etc\hosts</code> during installation the following entry: <code>127.0.0.1 kubernetes.docker.internal</code> I changed the host in the <code>example-ingress.yaml</code> from <code>hello-world.info</code> to <code>kubernetes.docker.internal</code></p>
</li>
<li><p>But Ingress still didn't work as expected due to the following error:
<code>"Ignoring ingress because of error while validating ingress class" ingress="default/example-ingress" error="ingress does not contain a valid IngressClass"</code></p>
<p>I added this line <code>kubernetes.io/ingress.class: "nginx"</code> to the annotations section in <code>example-ingress.yaml</code></p>
</li>
</ol>
<p>So, the final version of the <code>example-ingress.yaml</code> file is below.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
- path: /v2
pathType: Prefix
backend:
service:
name: web2
port:
number: 8080
</code></pre>
<p><strong>Test results</strong></p>
<pre><code>C:\Users\Andrew_Skorkin>kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default web-79d88c97d6-c8xnf 1/1 Running 0 112m
default web2-5d47994f45-cxtzm 1/1 Running 0 94m
ingress-nginx ingress-nginx-admission-create-sjdcq 0/1 Completed 0 114m
ingress-nginx ingress-nginx-admission-patch-wccc9 0/1 Completed 1 114m
ingress-nginx ingress-nginx-controller-5c8d66c76d-jb4w9 1/1 Running 0 114m
...
C:\Users\Andrew_Skorkin>kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d15h
default web NodePort 10.101.43.157 <none> 8080:32651/TCP 114m
default web2 NodePort 10.100.4.84 <none> 8080:30081/TCP 96m
ingress-nginx ingress-nginx-controller LoadBalancer 10.106.138.217 localhost 80:30287/TCP,443:32664/TCP 116m
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.111.208.242 <none> 443/TCP 116m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 7d15h
C:\Users\Andrew_Skorkin>curl kubernetes.docker.internal
Hello, world!
Version: 1.0.0
Hostname: web-79d88c97d6-c8xnf
C:\Users\Andrew_Skorkin>curl kubernetes.docker.internal/v2
Hello, world!
Version: 2.0.0
Hostname: web2-5d47994f45-cxtzm
</code></pre>
|
<p>Hi, I would like to update a YAML-like string inside a YAML file.</p>
<p>I have the following YAML file, <code>argocd.yaml</code>:</p>
<pre><code>---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
namespace: mynamespace
  name: my-app
spec:
project: xxx
destination:
server: xxx
namespace: xxx
source:
repoURL: xxx
targetRevision: dev
path: yyy
helm:
values: |-
image:
tag: "mytag"
repository: "myrepo-image"
registry: "myregistry"
</code></pre>
<p>Ultimately I want to replace the value of <code>tag</code>. Unfortunately this is YAML inside a YAML configuration.</p>
<p>My Idea so far was:</p>
<ol>
<li>extract value into another values.yaml</li>
<li>Update the tag</li>
<li>evaluate the values in the <code>argocd.yaml</code> with the <code>values.yaml</code></li>
</ol>
<p>So what worked is:</p>
<pre><code># get the yaml in the yaml and save as yaml
yq e .spec.source.helm.values argocd.yaml > helm_values.yaml
# replace the tag value
yq e '.image.tag=newtag' helm_values.yaml
</code></pre>
<p>And then I want to add the content of the <code>helm_values.yaml</code> file as a string into the <code>argocd.yaml</code>.
I tried the following, but I can't get it to work:</p>
<pre><code># idea 1
###################
yq eval 'select(fileIndex==0).spec.source.helm.values = select(fileIndex==1) | select(fileIndex==0)' argocd.yaml values.yaml
# this does not update the values but add back slashes
values: "\nimage:\n tag: \"mytag\"\n repository: \"myrepo-image\"\n registry: \"myregistry\""
# idea 2
##################
yq eval '.spec.source.helm.values = "'"$(< values.yaml)"'"' argocd.yaml
# here I am not getting the quote escaping correct and it fails with
Error: Parsing expression: Lexer error: could not match text starting at 2:9 failing at 2:12.
unmatched text: "newtag"
</code></pre>
<p>Any idea how to solve this, or is there a better way to replace a value in such a file?
I am using yq from <a href="https://mikefarah.gitbook.io/yq" rel="nofollow noreferrer">https://mikefarah.gitbook.io/yq</a></p>
| <p>Your second approach can work, but in a roundabout way, as <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer">mikefarah/yq</a> does not support updating <a href="http://yaml-multiline.info/" rel="nofollow noreferrer">multi-line block literals</a> yet</p>
<p>One way to solve this with the existing constructs is shown below, without having to create a temporary YAML file:</p>
<pre class="lang-none prettyprint-override"><code>o="$(yq e '.spec.source.helm.values' yaml | yq e '.image.tag="footag"' -)" yq e -i '.spec.source.helm.values = strenv(o)' argocd.yaml
</code></pre>
<p>The solution above relies on <a href="https://mikefarah.gitbook.io/yq/operators/env-variable-operators" rel="nofollow noreferrer">passing a user defined variable</a> as a string, that can be used in the yq expression using <code>strenv()</code>. The value for the variable <code>o</code> is set by using <a href="https://www.gnu.org/software/bash/manual/html_node/Command-Substitution.html" rel="nofollow noreferrer">Command substitution $(..)</a> feature in bash. Inside the substitution construct, we extract the block literal content as a YAML object and apply another filter to modify the tag as we need. So at the end of substitution, the value of <code>o</code> will be set to</p>
<pre class="lang-yaml prettyprint-override"><code>image:
tag: "footag"
repository: "myrepo-image"
registry: "myregistry"
</code></pre>
<p>The obtained result above is now set to the value of <code>.spec.source.helm.values</code> directly which the inplace modification flag (<code>-i</code>) to apply the changes on the actual file</p>
<hr />
<p>I have raised a feature request in the author's repo to provide a simpler way to do this <a href="https://github.com/mikefarah/yq/issues/974" rel="nofollow noreferrer">Support for updating YAML multi-line strings #974</a></p>
|
<p>I have a k8s cluster that runs just fine. It has several standalone MongoDB StatefulSets connected via NFS. The problem is, whenever there is a power outage, the MongoDB databases get corrupted:</p>
<pre><code>{"t":{"$date":"2021-10-15T13:10:06.446+00:00"},"s":"W", "c":"STORAGE", "id":22271, "ctx":"initandlisten","msg":"Detected unclean shutdown - Lock file is not empty","attr":{"lockFile":"/data/db/mongod.lock"}}
{"t":{"$date":"2021-10-15T13:10:07.182+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":0,"message":"[1634303407:182673][1:0x7f9515eb7a80], file:WiredTiger.wt, connection: __wt_block_read_off, 283: WiredTiger.wt: read checksum error for 4096B block at offset 12288: block header checksum of 0xc663f362 doesn't match expected checksum of 0xb8e27418"}}
</code></pre>
<p>The pod status remains CrashLoopBackOff, so I cannot do <code>kubectl exec -it usersdb-0 -- mongod --repair</code> because the pod is not running.</p>
<p>I have tried deleting WiredTiger.lock and mongod.lock but nothing seems to work. How can I repair these databases?</p>
| <p>Well, after several attempts I think I have finally made a breakthrough, so I wanted to leave this here for someone else.</p>
<p>Since mongod is not running, add the command</p>
<pre><code>command: ["sleep"]
args: ["infinity"]
</code></pre>
<p>in the resource file (assuming it is a StatefulSet), so the container stays up without starting mongod.
Then repair the database using the command:</p>
<pre><code>kubectl exec -it <NAME-OF-MONGODB-POD> -- mongod --dbpath /data/db --repair
</code></pre>
<p>This will repair the standalone MongoDB database. Now remove the <code>command</code> and <code>args</code> override, apply the resource YAML file, then kill the pod to recreate it afresh.</p>
<p>Now the mongodb pod should be working fine.</p>
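<p>For reference, a sketch of where the temporary override goes in the StatefulSet's container spec (the container name and image tag are placeholders):</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: usersdb
        image: mongo:4.4
        # Temporary override: keep the container alive without starting mongod,
        # so the repair can be run via kubectl exec. Remove after repairing.
        command: ["sleep"]
        args: ["infinity"]
</code></pre>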
|
<p>How do I fix the pop-up error <a href="http:///Users/162408.suryadi/Library/Application%20Support/Lens/node_module/lenscloud-lens-extension" rel="nofollow noreferrer">permission denied</a>?</p>
<p>The log:</p>
<pre><code> 50 silly saveTree +-- lens-survey@5.2.5-latest.20211001.2
50 silly saveTree +-- lens-telemetry@5.2.5-latest.20211001.2
50 silly saveTree `-- lenscloud-lens-extension@5.2.5-latest.20211001.2
51 warn Lens No description
52 warn Lens No repository field.
53 warn Lens No license field.
54 verbose stack Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'
55 verbose cwd /Users/162408.suryadi/Library/Application Support/Lens
56 verbose Darwin 20.4.0
57 verbose argv "/Applications/Lens.app/Contents/Frameworks/Lens Helper.app/Contents/MacOS/Lens Helper" "/Applications/Lens.app/Contents/Resources/app.asar/node_modules/npm/bin/npm-cli.js" "install" "--no-audit" "--only=prod" "--prefer-offline" "--no-package-lock"
58 verbose node v14.16.0
59 verbose npm v6.14.13
60 error code EACCES
61 error syscall access
62 error path /Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension
63 error errno -13
64 error Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'
64 error [Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'] {
64 error errno: -13,
64 error code: 'EACCES',
Could not load extensions: npm WARN checkPermissions Missing write access to /Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension
npm WARN enoent ENOENT: no such file or directory, open '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/untitled folder/package.json'
npm WARN Lens No description
npm WARN Lens No repository field.
npm WARN Lens No license field.
npm ERR! code EACCES
npm ERR! syscall access
npm ERR! path /Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'
npm ERR! [Error: EACCES: permission denied, access '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'] {
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! syscall: 'access',
npm ERR! path: '/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension'
npm ERR! }
npm ERR!
npm ERR! The operation was rejected by your operating system.
npm ERR! It is likely you do not have the permissions to access this file as the current user
npm ERR!
npm ERR! If you believe this might be a permissions issue, please double-check the
npm ERR! permissions of the file and its containing directories, or try running
npm ERR! the command again as root/Administrator.
npm ERR! A complete log of this run can be found in:
npm ERR! /Users/162408.suryadi/.npm/_logs/2021-10-21T02_16_36_016Z-debug.log
</code></pre>
| <p>In your logs you can find the description of your problem:</p>
<blockquote>
<p>It is likely you do not have the permissions to access this file as the current user.</p>
</blockquote>
<p>And one more thing to check:</p>
<blockquote>
<p>If you believe this might be a permissions issue, please double-check the permissions of the file and its containing directories, or try running the command again as root/Administrator.</p>
</blockquote>
<p>The error you got is because you cannot access the resource <code>/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension</code> as the current user. If you want to fix it, you have to change the permissions on this resource or change its owner. It is also possible (since you are using Kubernetes) that you will have to make such a change in the image of the system you are using.</p>
<p>To change the owner of the resource, run (note the quotes — the path contains a space):</p>
<pre><code>sudo chown -R $USER "/Users/162408.suryadi/Library/Application Support/Lens/node_modules/lenscloud-lens-extension"
</code></pre>
<p>You can also find many similar problems. In most cases, the only difference will be a different path to the resource; the mode of operation and the solution to the problem remain the same:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/48910876/error-eacces-permission-denied-access-usr-local-lib-node-modules">Error: EACCES: permission denied, access '/usr/local/lib/node_modules'</a></li>
<li><a href="https://stackoverflow.com/questions/52979927/npm-warn-checkpermissions-missing-write-access-to-usr-local-lib-node-modules">npm WARN checkPermissions Missing write access to /usr/local/lib/node_modules</a></li>
<li><a href="https://progressivecoder.com/how-to-easily-fix-missing-write-access-error-npm-install/" rel="nofollow noreferrer">https://progressivecoder.com/how-to-easily-fix-missing-write-access-error-npm-install/</a></li>
<li><a href="https://flaviocopes.com/npm-fix-missing-write-access-error/" rel="nofollow noreferrer">https://flaviocopes.com/npm-fix-missing-write-access-error/</a></li>
</ul>
|
<p>Below is the manifest for a Pod within a Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus-deployment
namespace: monitoring
labels:
app: prometheus
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
annotations:
version: "1"
spec:
containers:
- name: prometheus-server
image: prom/prometheus:v2.9.2
imagePullPolicy: "IfNotPresent"
args:
- --config.file=/etc/config/prometheus.yml
- --storage.tsdb.path=/data
- --web.console.libraries=/etc/prometheus/console_libraries
- --web.console.templates=/etc/prometheus/consoles
- --web.enable-lifecycle
volumeMounts:
- name: config-volume
mountPath: /etc/config/prometheus.yml
subPath: prometheus.yml
- name: prometheus-data
mountPath: /data
subPath: ""
resources:
limits:
cpu: 200m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
ports:
- containerPort: 9090
readinessProbe:
httpGet:
path: /-/ready
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
livenessProbe:
httpGet:
path: /-/healthy
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
runAsNonRoot: true
runAsUser: 65534
volumes:
- name: config-volume
configMap:
name: prometheus-config
- name: prometheus-data
emptyDir: {}
</code></pre>
<hr />
<p>where <code>prometheus-config</code> is being configured as shown below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: monitoring
data:
prometheus.yml: |
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
</code></pre>
<hr />
<ol>
<li><p>Does rolling out <code>prometheus-config</code> (only) with new changes make it visible to the running Pod (<code>prometheus-server</code>)?</p>
</li>
<li><p>Does <code>configMap:</code> overwrite the contents of <code>/etc/prometheus/prometheus.yml</code> (if any)?</p>
</li>
</ol>
| <p>The ConfigMap needs to exist at the time the Pod is created, unless the ConfigMap reference in the deployment is marked as "optional".</p>
<pre><code> volumes:
- name: config-volume
configMap:
name: prometheus-config
optional: true
</code></pre>
<p>If the optionally referenced ConfigMap comes to exist after the Pod is created, the Kubernetes periodic sync will update the projected content in the volume. The Kubernetes documentation describes how long this can take:</p>
<blockquote>
<p>Kubelet checks whether the mounted ConfigMap is fresh on every
periodic sync. However, it uses its local TTL-based cache for getting
the current value of the ConfigMap. As a result, the total delay from
the moment when the ConfigMap is updated to the moment when new keys
are projected to the pod can be as long as kubelet sync period (1
minute by default) + TTL of ConfigMaps cache (1 minute by default) in
kubelet. You can trigger an immediate refresh by updating one of the
pod's annotations.</p>
</blockquote>
<p>Reference: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#optional-references" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#optional-references</a></p>
<hr />
<p>Note that while volume-projected ConfigMap data is refreshed as described above, environment variables populated from a ConfigMap are read only at container start and are not updated for a running container.</p>
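<p>By contrast, a ConfigMap consumed through environment variables is resolved only once, when the container starts, so later ConfigMap updates never reach a running container. A sketch (the key name is hypothetical and is not in the ConfigMap from the question):</p>

```yaml
# Sketch: env vars from a ConfigMap are resolved once, at container start.
containers:
  - name: prometheus-server
    image: prom/prometheus:v2.9.2
    env:
      - name: LOG_LEVEL
        valueFrom:
          configMapKeyRef:
            name: prometheus-config
            key: log-level     # hypothetical key
            optional: true     # container still starts if the key is missing
```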
|
<p>I've recently started using NFS volumes for my clusters on-prem. This is the simplest and best solution for me; however, it seems pretty limited in regard to the actual mount options.</p>
<p>Is there any way to set mount options on the node/cluster in the volume.yml files?</p>
<ul>
<li>NFSv3</li>
<li>NFSv4/4.1</li>
<li>lookupcache</li>
<li>noatime</li>
<li>rsize,wsize</li>
</ul>
<p>I have an application that requires a specific version and also these mount options for performance.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
# FIXME: use the right IP
server: 10.244.1.4
path: "/"
</code></pre>
<p>Is there any way to add mount flags here?</p>
| <p>If someone is looking for answers in 2021, here is what is working for me.</p>
<pre><code> mountOptions:
- hard
- timeo=600
- retrans=3
- proto=tcp
- nfsvers=4.2
- port=2050
- rsize=4096
- wsize=4096
- noacl
- nocto
- noatime
- nodiratime
</code></pre>
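<p>For context, these options live under <code>spec.mountOptions</code> of the PersistentVolume object. A sketch folding a few of them into the PV from the question (the NFS server IP and path are the question's placeholders):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  mountOptions:          # handed to mount(8) when the volume is attached
    - nfsvers=4.1
    - noatime
    - lookupcache=pos
    - rsize=4096
    - wsize=4096
  nfs:
    server: 10.244.1.4   # placeholder from the question
    path: "/"
```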
|
<p>I have <a href="https://docs.aws.amazon.com/eks/latest/userguide/horizontal-pod-autoscaler.html" rel="nofollow noreferrer">HPA</a> for my Kubernetes-deployed app with <a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html" rel="nofollow noreferrer">cluster autoscaler</a>.
Scaling works properly for both pods and nodes, but during production load spikes I see a lot of 502 errors from ALB (<a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">aws-load-balancer-controller</a>).</p>
<p>It seems like I have enabled everything to achieve zero-downtime deployment / scaling:</p>
<ul>
<li>pod readiness probe is in place</li>
</ul>
<pre><code> readinessProbe:
httpGet:
path: /_healthcheck/
port: 80
</code></pre>
<ul>
<li>pod readiness gate <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/pod_readiness_gate/" rel="nofollow noreferrer">is enabled</a></li>
<li>ingress annotation uses <code>ip</code> target type</li>
</ul>
<pre><code>alb.ingress.kubernetes.io/target-type: ip
</code></pre>
<ul>
<li>healthcheck parameters are specified on the ingress resource</li>
</ul>
<pre><code>alb.ingress.kubernetes.io/healthcheck-path: "/healthcheck/"
alb.ingress.kubernetes.io/healthcheck-interval-seconds: "10"
</code></pre>
<p>but that doesn't help.</p>
<p>How to properly debug this kind of issue and which other parameters should I tune to completely eliminate 5xx errors from my load balancer?</p>
| <p>Here's a list of some extra things that I've added to my configuration alongside those mentioned above</p>
<ul>
<li>container <code>preStop</code> <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">hook</a></li>
</ul>
<pre><code>lifecycle:
preStop:
exec:
command: ["/bin/sleep", "30"]
</code></pre>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">termination grace period</a> on a pod <code>terminationGracePeriodSeconds: 40</code> (sleep time from the above + 10-15 seconds)</p>
</li>
<li><p>tune deregistration delay value on a target group by setting</p>
</li>
</ul>
<pre><code>alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=30
</code></pre>
<p>this annotation on an ingress resource. Usually the value should match the timeout of your backend web server (we don't want to keep a target registered longer than the longest possible request needs to finish).</p>
<p><strong>The main idea</strong> behind this tuning is to make sure changes in Pod state have enough time to propagate to the underlying AWS resources, so traffic is no longer routed from the ALB to a pod in the target group that has already been marked as terminated/unhealthy by k8s.</p>
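<p>Put together, the lifecycle hook and grace period from the list above land in the pod template like this (sketch; the image name is a placeholder):</p>

```yaml
spec:
  # preStop sleep + 10-15 s headroom for SIGTERM handling
  terminationGracePeriodSeconds: 40
  containers:
    - name: app
      image: my-app:latest   # placeholder image
      lifecycle:
        preStop:
          exec:
            # keep serving in-flight requests while the ALB deregisters the target
            command: ["/bin/sleep", "30"]
```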
<p>P.S. Make sure you always have enough pods to handle incoming requests (this is especially important for synchronous workers during a rolling redeploy). <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Consider</a> lower values for <code>maxUnavailable</code> and higher values for <code>maxSurge</code> if your cluster/worker nodes have the capacity to allocate the extra pods. So if one pod handles 100 reqs/min on average and your load is 400 reqs/min, make sure <code>num of replicas</code> - <code>maxUnavailable</code> > 4 (total reqs / reqs per pod).</p>
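<p>The capacity rule of thumb in the last sentence can be sketched as a quick check (request rates are illustrative):</p>

```python
def enough_ready_replicas(total_rpm: int, per_pod_rpm: int,
                          replicas: int, max_unavailable: int) -> bool:
    """Check the rule of thumb: replicas - maxUnavailable > total reqs / reqs per pod."""
    return replicas - max_unavailable > total_rpm / per_pod_rpm

# One pod handles 100 reqs/min on average, load is 400 reqs/min:
print(enough_ready_replicas(400, 100, replicas=6, max_unavailable=1))  # True
print(enough_ready_replicas(400, 100, replicas=5, max_unavailable=1))  # False
```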
|
<p>Can an Ingress rewrite a 405 back to the origin URL and change the HTTP error <code>405</code> to <code>200</code>?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /page/user/(.*)
pathType: Prefix
backend:
serviceName: front-user
servicePort: 80
- path: /page/manager/(.*)
pathType: Prefix
backend:
serviceName: front-admin
servicePort: 80
</code></pre>
<p>Nginx can allow visiting an HTML page with a <code>POST</code> method, but I want to know how to achieve this with an Ingress.</p>
<pre><code>server {
listen 80;
# ...
error_page 405 =200 @405;
location @405 {
root /srv/http;
proxy_method GET;
proxy_pass http://static_backend;
}
}
</code></pre>
<p>This is an example where Nginx allows visiting an HTML page with a <code>POST</code> method by changing <code>405</code> to <code>200</code> and rewriting the method to <code>GET</code>.</p>
| <p>You can use <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="nofollow noreferrer">server snippet</a> annotation to achieve it.</p>
<p>I also rewrote your ingress from the <code>extensions/v1beta1</code> apiVersion to <code>networking.k8s.io/v1</code>, because starting with Kubernetes <code>v1.22</code> the previous <code>apiVersion</code> is removed:</p>
<pre><code>$ kubectl apply -f ingress-snippit.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
</code></pre>
<p><code>Ingress-snippet-v1.yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/server-snippet: | # adds this block to server
error_page 405 =200 @405;
location @405 {
root /srv/http;
proxy_method GET;
proxy_pass http://static_backend; # tested with IP since I don't have this upstream
}
spec:
rules:
- http:
paths:
- path: /page/user/(.*)
pathType: Prefix
backend:
service:
name: front-user
port:
number: 80
- path: /page/manager/(.*)
pathType: Prefix
backend:
service:
name: front-admin
port:
number: 80
</code></pre>
<hr />
<p>Applying manifest above and verifying <code>/etc/nginx/nginx.conf</code> in <code>ingress-nginx-controller</code> pod:</p>
<pre><code>$ kubectl exec -it ingress-nginx-controller-xxxxxxxxx-yyyy -n ingress-nginx -- cat /etc/nginx/nginx.conf | less
...
## start server _
server {
server_name _ ;
listen 80 default_server reuseport backlog=4096 ;
listen 443 default_server reuseport backlog=4096 ssl http2 ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
# Custom code snippet configured for host _
error_page 405 =200 @405;
location @405 {
root /srv/http;
proxy_method GET;
proxy_pass http://127.0.0.1; # IP for testing purposes
}
location ~* "^/page/manager/(.*)" {
set $namespace "default";
set $ingress_name "frontend-ingress";
set $service_name "front-admin";
set $service_port "80";
set $location_path "/page/manager/(.*)";
set $global_rate_limit_exceeding n;
...
</code></pre>
|
<p>I am using a Dockerfile to run the Jupyter Notebook shell script. When the Jupyter terminal starts up, it starts at the /root path, but I want the terminal to start at /nfs by default.
What change can be made in the Dockerfile so that the terminal starts at the /nfs path?</p>
| <p>You can add the entry below to your Dockerfile. Every instruction after the <code>WORKDIR</code> step, as well as the container's default working directory, will then use the directory you specified.</p>
<pre><code>WORKDIR /nfs
</code></pre>
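<p>A minimal sketch of where the instruction fits (the base image and startup command are assumptions, not from the question):</p>

```dockerfile
# Assumed base image
FROM jupyter/base-notebook

# Every later instruction, and the container's default shell session,
# starts in /nfs (WORKDIR creates the directory if it does not exist)
WORKDIR /nfs

# Assumed startup command from the base image
CMD ["start-notebook.sh"]
```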
|
<p>I recently upgraded <code>ingress-nginx</code> to version 1.0.3.</p>
<p>As a result, I removed the <code>kubernetes.io/ingress.class</code> annotation from my ingress, and put <code>.spec.ingressClassName</code> instead.</p>
<p>I am running <code>cert-manager-v1.4.0</code>.</p>
<p>This morning I had an email saying that my Let's Encrypt certificate will expire in 10 days. I tried to figure out what was wrong with it - not positive that it was entirely due to the ingress-nginx upgrade.</p>
<p>I deleted the <code>CertificateRequest</code> to see if it would fix itself. I got a new <code>Ingress</code> with the challenge, but:</p>
<ol>
<li><p>The challenge ingress had the <code>kubernetes.io/ingress.class</code> annotation set correctly, even though my ingress has <code>.spec.ingressClassName</code> instead - don't know how or why, but it seems like it should be OK.</p>
</li>
<li><p>However, the challenge ingress wasn't picked up by the ingress controller, it said:</p>
</li>
</ol>
<p><code>ingress class annotation is not equal to the expected by Ingress Controller</code></p>
<p>I guess it wants only the <code>.spec.ingressClassName</code> even though I thought the annotation was supposed to work as well.</p>
<p>So I manually set <code>.spec.ingressClassName</code> on the challenge ingress. It was immediately seen by the ingress controller, and the rest of the process ran smoothly, and I got a new cert - yay.</p>
<p>It seems to me like this will happen again, so I need to know how to either:</p>
<ol>
<li><p>Convince <code>cert-manager</code> to create the challenge ingress with <code>.spec.ingressClassName</code> instead of <code>kubernetes.io/ingress.class</code>. Maybe this is fixed in 1.5 or 1.6?</p>
</li>
<li><p>Convince <code>ingress-nginx</code> to respect the <code>kubernetes.io/ingress.class</code> annotation for the challenge ingress. I don't know why this doesn't work.</p>
</li>
</ol>
| <h2>Issue</h2>
<p>The issue was fixed by certificate renewal; it works fine without manually setting <code>spec.ingressClassName</code> in the challenge ingress (I saw that behavior with an older version), so the issue was somewhere else.</p>
<p>Also, with the latest available (at the time of writing) <code>cert-manager v1.5.4</code>, the challenge ingress has the right setup out of the box:</p>
<pre><code>spec:
ingressClassName: nginx
---
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
cm-acme-http-solver-szxfg nginx dummy-host ip_address 80 11s
</code></pre>
<h2>How it works (concept)</h2>
<p>I'll describe the main steps of how this process works so troubleshooting will be straightforward in almost all cases. I'll take <code>letsencrypt staging</code> as the <code>issuer</code>.</p>
<p>There's a chain when <code>certificate</code> is requested to be created which <code>issuer</code> follows to complete (all resources have owners - previous resource in chain):</p>
<p><code>main ingress resource</code> -> <code>certificate</code> -> <code>certificaterequest</code> -> <code>order</code> -> <code>challenge</code> -> <code>challenge ingress</code>.</p>
<p>Knowing this, if something failed, you can go down by the chain and using <code>kubectl describe</code> command find where the issue appeared.</p>
<h2>Troubleshooting example</h2>
<p>I intentionally added a wrong domain to <code>.spec.tls.hosts</code> in the ingress and applied it. Below is how the chain looks (all names will be unique!):</p>
<p>See certificates:</p>
<pre><code>$ kubectl get cert
NAME READY SECRET AGE
lets-secret-test-2 False lets-secret-test-2 15m
</code></pre>
<p>Describe the <code>certificate</code> we are interested in (you can notice I changed the domain; a secret already existed):</p>
<pre><code>$ kubectl describe cert lets-secret-test-2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 16m cert-manager Existing issued Secret is not up to date for spec: [spec.commonName spec.dnsNames]
Normal Reused 16m cert-manager Reusing private key stored in existing Secret resource "lets-secret-test-2"
Normal Requested 16m cert-manager Created new CertificateRequest resource "lets-secret-test-2-pvb25"
</code></pre>
<p>Nothing suspicious here, moving forward.</p>
<pre><code>$ kubectl get certificaterequest
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
lets-secret-test-2-pvb25 True False letsencrypt-staging system:serviceaccount:cert-manager:cert-manager 19m
</code></pre>
<p>Describing <code>certificaterequest</code>:</p>
<pre><code>$ kubectl describe certificaterequest lets-secret-test-2-pvb25
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal cert-manager.io 19m cert-manager Certificate request has been approved by cert-manager.io
Normal OrderCreated 19m cert-manager Created Order resource default/lets-secret-test-2-pvb25-2336849393
</code></pre>
<p>Again, everything looks fine, no errors, moving forward to <code>order</code>:</p>
<pre><code>$ kubectl get order
NAME STATE AGE
lets-secret-test-2-pvb25-2336849393 pending 21m
</code></pre>
<p>It says <code>pending</code>, that's closer:</p>
<pre><code>$ kubectl describe order lets-secret-test-2-pvb25-2336849393
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 21m cert-manager Created Challenge resource "lets-secret-test-2-pvb25-2336849393-3788447910" for domain "dummy-domain"
</code></pre>
<p><code>Challenge</code> may shed some light, moving forward:</p>
<pre><code>$ kubectl get challenge
NAME STATE DOMAIN AGE
lets-secret-test-2-pvb25-2336849393-3788447910 pending dummy-domain 23m
</code></pre>
<p>Describing it:</p>
<pre><code>$ kubectl describe challenge lets-secret-test-2-pvb25-2336849393-3788447910
</code></pre>
<p>Checking <code>status</code>:</p>
<pre><code>Status:
Presented: true
Processing: true
Reason: Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://dummy-domain/.well-known/acme-challenge/xxxxyyyyzzzzz': Get "http://dummy-domain/.well-known/acme-challenge/xxxxyyyyzzzzz": dial tcp: lookup dummy-domain on xx.yy.zz.ww:53: no such host
State: pending
</code></pre>
<p>Now it's clear that something is wrong with the <code>domain</code>, so it's worth checking:</p>
<p>Found and fixed the "mistake":</p>
<pre><code>$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/ingress configured
</code></pre>
<p>Certificate is <code>ready</code>!</p>
<pre><code>$ kubectl get cert
NAME READY SECRET AGE
lets-secret-test-2 True lets-secret-test-2 26m
</code></pre>
<h2>Correct way to renew a certificate using cert-manager</h2>
<p>It's possible to renew a certificate by deleting the corresponding secret; however, the <a href="https://cert-manager.io/docs/usage/certificate/#actions-triggering-private-key-rotation" rel="nofollow noreferrer">documentation says it's not recommended</a>:</p>
<blockquote>
<p>Deleting the Secret resource associated with a Certificate resource is
<strong>not a recommended solution</strong> for manually rotating the private key. The
recommended way to manually rotate the private key is to trigger the
reissuance of the Certificate resource with the following command
(requires the kubectl cert-manager plugin):</p>
<p><code>kubectl cert-manager renew cert-1</code></p>
</blockquote>
<p>The <code>kubectl cert-manager</code> plugin installation process is described <a href="https://cert-manager.io/docs/usage/kubectl-plugin/#installation" rel="nofollow noreferrer">here</a>, along with other commands and examples.</p>
<h2>Useful links:</h2>
<ul>
<li><a href="https://cert-manager.io/docs/concepts/certificate/" rel="nofollow noreferrer">Certificate in cert-manager - concept</a></li>
<li><a href="https://cert-manager.io/docs/concepts/certificaterequest/" rel="nofollow noreferrer">Cert-manager - certificate request</a></li>
<li><a href="https://cert-manager.io/docs/concepts/acme-orders-challenges/" rel="nofollow noreferrer">ACME Orders and Challenges</a></li>
</ul>
|
<p>In my case, I have to deploy a deployment first and then patch a preStop hook onto the deployment in Jenkins.</p>
<p>I try to use</p>
<pre><code>kubectl -n mobile patch deployment hero-orders-app --type "json" -p '[
{"op":"add","path":"/spec/template/spec/containers/0/lifecycle/preStop/exec/command","value":[]},
{"op":"add","path":"/spec/template/spec/containers/0/lifecycle/preStop/exec/command/-","value":"/bin/sleep"},
{"op":"add","path":"/spec/template/spec/containers/0/lifecycle/preStop/exec/command/-","value":"10"}]'
</code></pre>
<p>but it returns</p>
<pre><code>The request is invalid
</code></pre>
<p>Can the <code>patch</code> command add a non-existent path, or do I need a different solution?</p>
<p>And here is the hero-orders-app deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hero-orders-app$SUFFIX
namespace: $K8S_NAMESPACE
labels:
branch: $LABEL
run: hero-orders-app$SUFFIX
spec:
selector:
matchLabels:
run: hero-orders-app$SUFFIX
revisionHistoryLimit: 5
minReadySeconds: 10
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
template:
metadata:
labels:
run: hero-orders-app$SUFFIX
namespace: $K8S_NAMESPACE
branch: $LABEL
role: hero-orders-app
spec:
dnsConfig:
options:
- name: ndots
value: "1"
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
run: hero-orders-app$SUFFIX
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: run
operator: In
values:
- hero-orders-app$SUFFIX
topologyKey: kubernetes.io/hostname
imagePullSecrets:
- name: $K8S_IMAGE_SECRETS
containers:
- name: hero-orders-$CLUSTER_NAME
image: $K8S_IMAGE
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1000
allowPrivilegeEscalation: false
capabilities:
drop:
- CHOWN
- NET_RAW
- SETPCAP
ports:
- containerPort: 3000
protocol: TCP
resources:
limits:
cpu: $K8S_CPU_LIMITS
memory: $K8S_RAM_LIMITS
requests:
cpu: $K8S_CPU_REQUESTS
memory: $K8S_RAM_REQUESTS
readinessProbe:
httpGet:
path: /gw-api/v2/_manage/health
port: 3000
initialDelaySeconds: 15
timeoutSeconds: 10
livenessProbe:
httpGet:
path: /gw-api/v2/_manage/health
port: 3000
initialDelaySeconds: 20
timeoutSeconds: 10
periodSeconds: 45
</code></pre>
<p>And it running on AWS with service, pdb and hpa.</p>
<p>here is my kubectl version</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.9", GitCommit:"7a576bc3935a6b555e33346fd73ad77c925e9e4a", GitTreeState:"clean", BuildDate:"2021-07-15T20:56:38Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>I will use a simpler deployment to demonstrate patching a lifecycle hook; you can apply the same technique to your own deployment.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
containers:
- name: busybox
image: busybox
command: ["ash","-c","while :; do echo $(date); sleep 1; done"]
</code></pre>
<p>The path must stop at <code>/spec/template/spec/containers/0/lifecycle</code>, otherwise you will get the response <strong>"The request is invalid."</strong></p>
<pre><code>kubectl patch deployment busybox --type json -p '[{"op":"add","path":"/spec/template/spec/containers/0/lifecycle","value":{"preStop": {"exec": {"command": ["/bin/sleep","10"]}}}}]'
deployment.apps/busybox patched
</code></pre>
<p>Once patched, the deployment will restart its pods. You can run <code>kubectl get deployment busybox -o yaml</code> to examine the patched result. If you patch again with the same value, there will be no change.</p>
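<p>The underlying reason the original three-op patch fails is that a JSON Patch <code>add</code> operation (RFC 6902) requires every parent of the target path to already exist; it creates only the final path segment. A minimal sketch with plain dicts standing in for the container spec:</p>

```python
def json_patch_add(doc: dict, path: str, value):
    """Minimal JSON Patch 'add' for dict targets: every parent of the path must exist."""
    *parents, leaf = [p for p in path.split("/") if p]
    target = doc
    for key in parents:
        if key not in target:
            raise KeyError(f"parent {key!r} of {path!r} does not exist")
        target = target[key]
    target[leaf] = value

container = {"name": "busybox"}  # no 'lifecycle' key yet

# Fails like kubectl: parent 'lifecycle' is missing
try:
    json_patch_add(container, "/lifecycle/preStop/exec/command", [])
except KeyError:
    print("the request is invalid")

# Works: add the whole 'lifecycle' subtree in one op
json_patch_add(container, "/lifecycle",
               {"preStop": {"exec": {"command": ["/bin/sleep", "10"]}}})
print(container["lifecycle"]["preStop"]["exec"]["command"])
```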
|
<p>I am running Airflow on Kubernetes from the <a href="https://github.com/helm/charts/tree/master/stable/airflow" rel="nofollow noreferrer">stable helm chart</a>. I'm running this in an AWS environment. This error exists with and without mounting any external volumes for log storage. I tried to set the configuration of the [logs] section to point to an EFS volume that I created. The PV gets mounted through a PVC but my containers are crashing (scheduler and web) due to the following error:</p>
<pre><code>*** executing Airflow initdb...
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/logging/config.py", line 565, in configure
handler = self.configure_handler(handlers[name])
File "/usr/local/lib/python3.6/logging/config.py", line 738, in configure_handler
result = factory(**kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/log/file_processor_handler.py", line 50, in __init__
os.makedirs(self._get_log_directory())
File "/usr/local/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/opt/airflow/logs/scheduler/2020-08-20'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 25, in <module>
from airflow.configuration import conf
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/__init__.py", line 47, in <module>
settings.initialize()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/settings.py", line 374, in initialize
LOGGING_CLASS_PATH = configure_logging()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/logging_config.py", line 68, in configure_logging
raise e
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/logging_config.py", line 63, in configure_logging
dictConfig(logging_config)
File "/usr/local/lib/python3.6/logging/config.py", line 802, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/lib/python3.6/logging/config.py", line 573, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'processor': [Errno 13] Permission denied: '/opt/airflow/logs/scheduler/2020-08-20'
</code></pre>
<p>Persistent volume (created manually not from the stable/airflow chart)</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capacity":{"storage":"5Gi"},"csi":{"driver":"efs.csi.aws.com","volumeHandle":"fs-e476a166"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"efs-sc","volumeMode":"Filesystem"}}
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-08-20T15:47:21Z"
finalizers:
- kubernetes.io/pv-protection
name: efs-pv
resourceVersion: "49476860"
selfLink: /api/v1/persistentvolumes/efs-pv
uid: 45d9f5ea-66c1-493e-a2f5-03e17f397747
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 5Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: efs-claim
namespace: airflow
resourceVersion: "49476857"
uid: 354103ea-f8a9-47f1-a7cf-8f449f9a2e8b
csi:
driver: efs.csi.aws.com
volumeHandle: fs-e476a166
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
volumeMode: Filesystem
status:
phase: Bound
</code></pre>
<p>Persistent Volume Claim for logs (created manually not from the stable/airflow chart):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"efs-claim","namespace":"airflow"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"5Gi"}},"storageClassName":"efs-sc"}}
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-08-20T15:47:46Z"
finalizers:
- kubernetes.io/pvc-protection
name: efs-claim
namespace: airflow
resourceVersion: "49476866"
selfLink: /api/v1/namespaces/airflow/persistentvolumeclaims/efs-claim
uid: 354103ea-f8a9-47f1-a7cf-8f449f9a2e8b
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
storageClassName: efs-sc
volumeMode: Filesystem
volumeName: efs-pv
status:
accessModes:
- ReadWriteMany
capacity:
storage: 5Gi
phase: Bound
</code></pre>
<p>My <code>values.yaml</code> below:</p>
<pre><code>airflow:
image:
repository: apache/airflow
tag: 1.10.10-python3.6
## values: Always or IfNotPresent
pullPolicy: IfNotPresent
pullSecret: ""
executor: KubernetesExecutor
fernetKey: "XXXXXXXXXHIVb8jK6lfmSAvx4mO6Arehnc="
config:
AIRFLOW__CORE__REMOTE_LOGGING: "True"
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: "s3://mybucket/airflow/logs"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: "MyS3Conn"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: "apache/airflow"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: "1.10.10-python3.6"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: "IfNotPresent"
AIRFLOW__KUBERNETES__WORKER_PODS_CREATION_BATCH_SIZE: "10"
AIRFLOW__KUBERNETES__LOGS_VOLUME_CLAIM: "efs-claim"
AIRFLOW__KUBERNETES__GIT_REPO: "git@github.com:org/myrepo.git"
AIRFLOW__KUBERNETES__GIT_BRANCH: "develop"
AIRFLOW__KUBERNETES__GIT_DAGS_FOLDER_MOUNT_POINT: "/opt/airflow/dags"
AIRFLOW__KUBERNETES__DAGS_VOLUME_SUBPATH: "repo/"
AIRFLOW__KUBERNETES__GIT_SSH_KEY_SECRET_NAME: "airflow-git-keys"
AIRFLOW__KUBERNETES__NAMESPACE: "airflow"
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: "True"
AIRFLOW__KUBERNETES__RUN_AS_USER: "50000"
AIRFLOW__CORE__LOAD_EXAMPLES: "False"
AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL: "60"
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: "airflow"
podAnnotations: {}
extraEnv: []
extraConfigmapMounts: []
extraContainers: []
extraPipPackages: []
extraVolumeMounts: []
extraVolumes: []
scheduler:
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
podDisruptionBudget:
enabled: true
maxUnavailable: "100%"
minAvailable: ""
connections:
- id: MyS3Conn
type: aws
extra: |
{
"aws_access_key_id": "XXXXXXXXX",
"aws_secret_access_key": "XXXXXXXX",
"region_name":"us-west-1"
}
refreshConnections: true
variables: |
{}
pools: |
{}
numRuns: -1
initdb: true
preinitdb: false
initialStartupDelay: 0
extraInitContainers: []
web:
resources: {}
replicas: 1
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
service:
annotations: {}
sessionAffinity: "None"
sessionAffinityConfig: {}
type: ClusterIP
externalPort: 8080
loadBalancerIP: ""
loadBalancerSourceRanges: []
nodePort:
http: ""
baseUrl: "http://localhost:8080"
serializeDAGs: false
extraPipPackages: []
initialStartupDelay: 0
minReadySeconds: 5
readinessProbe:
enabled: false
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
livenessProbe:
enabled: true
scheme: HTTP
initialDelaySeconds: 300
periodSeconds: 30
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 2
secretsDir: /var/airflow/secrets
secrets: []
secretsMap:
workers:
enabled: false
resources: {}
replicas: 1
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
autoscaling:
enabled: false
maxReplicas: 2
metrics: []
initialStartupDelay: 0
celery:
instances: 1
gracefullTermination: false
gracefullTerminationPeriod: 600
terminationPeriod: 60
secretsDir: /var/airflow/secrets
secrets: []
secretsMap:
flower:
enabled: false
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
basicAuthSecret: ""
basicAuthSecretKey: ""
urlPrefix: ""
service:
annotations: {}
type: ClusterIP
externalPort: 5555
loadBalancerIP: ""
loadBalancerSourceRanges: []
nodePort:
http: ""
initialStartupDelay: 0
extraConfigmapMounts: []
logs:
path: /opt/airflow/logs
persistence:
enabled: true
existingClaim: efs-claim
subPath: ""
storageClass: efs-sc
accessMode: ReadWriteMany
size: 1Gi
dags:
path: /opt/airflow/dags
doNotPickle: false
installRequirements: false
persistence:
enabled: false
existingClaim: ""
subPath: ""
storageClass: ""
accessMode: ReadOnlyMany
size: 1Gi
git:
url: git@github.com:org/myrepo.git
ref: develop
secret: airflow-git-keys
sshKeyscan: false
privateKeyName: id_rsa
repoHost: github.com
repoPort: 22
gitSync:
enabled: true
resources: {}
image:
repository: alpine/git
tag: latest
pullPolicy: Always
refreshTime: 60
initContainer:
enabled: false
resources: {}
image:
repository: alpine/git
tag: latest
pullPolicy: Always
mountPath: "/dags"
syncSubPath: ""
ingress:
enabled: false
web:
annotations: {}
path: ""
host: ""
livenessPath: ""
tls:
enabled: false
secretName: ""
precedingPaths: []
succeedingPaths: []
flower:
annotations: {}
path: ""
host: ""
livenessPath: ""
tls:
enabled: false
secretName: ""
rbac:
create: true
serviceAccount:
create: true
name: ""
annotations: {}
extraManifests: []
postgresql:
enabled: true
postgresqlDatabase: airflow
postgresqlUsername: postgres
postgresqlPassword: airflow
existingSecret: ""
existingSecretKey: "postgresql-password"
persistence:
enabled: true
storageClass: ""
accessModes:
- ReadWriteOnce
size: 5Gi
externalDatabase:
type: postgres
host: localhost
port: 5432
database: airflow
user: airflow
passwordSecret: ""
passwordSecretKey: "postgresql-password"
redis:
enabled: false
password: airflow
existingSecret: ""
existingSecretKey: "redis-password"
cluster:
enabled: false
slaveCount: 1
master:
resources: {}
persistence:
enabled: false
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
slave:
resources: {}
persistence:
enabled: false
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
externalRedis:
host: localhost
port: 6379
databaseNumber: 1
passwordSecret: ""
passwordSecretKey: "redis-password"
serviceMonitor:
enabled: false
selector:
prometheus: kube-prometheus
path: /admin/metrics
interval: "30s"
prometheusRule:
enabled: false
additionalLabels: {}
groups: []
</code></pre>
<p>I'm not really sure what to do here, so if anyone knows how to fix the permission error, please let me know.</p>
| <p>I have had this issue with the Google Cloud Platform and the helm airflow 1.2.0 chart (which uses airflow 2).
What ended up working was:</p>
<pre class="lang-yaml prettyprint-override"><code>extraInitContainers:
- name: fix-volume-logs-permissions
image: busybox
command: [ "sh", "-c", "chown -R 50000:0 /opt/airflow/logs/" ]
securityContext:
runAsUser: 0
volumeMounts:
- mountPath: /opt/airflow/logs/
name: logs
</code></pre>
<p>by tweaking based on Ajay's answer. Please note that:</p>
<ul>
<li>the values 50000:0 are based on uid and gid setup in your values.yaml</li>
<li>you need to use extraInitContainers under scheduler and not worker</li>
<li>"logs" seems to be the volume name automatically used by the helm logging config when enabled</li>
<li>Security context was necessary for me or else the chown failed due to unprivileged rights</li>
</ul>
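<p>Putting it together, in the 1.x chart the block lives under the <code>scheduler</code> key of <code>values.yaml</code> (a sketch; the uid/gid and mount path are the ones from this question and may differ in your setup):</p>
<pre class="lang-yaml prettyprint-override"><code>scheduler:
  extraInitContainers:
    - name: fix-volume-logs-permissions
      image: busybox
      # chown to the airflow uid (50000) so the scheduler can write logs
      command: [ "sh", "-c", "chown -R 50000:0 /opt/airflow/logs/" ]
      securityContext:
        runAsUser: 0        # must run as root for chown to succeed
      volumeMounts:
        - mountPath: /opt/airflow/logs/
          name: logs
</code></pre>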
|
<p>We have a simple release test for a <code>Redis</code> chart. After running <code>helm test myReleaseName --tls --cleanup</code>, we got </p>
<pre><code>RUNNING: myReleaseName-redis
ERROR: timed out waiting for the condition
</code></pre>
<p>There are several issues in Github repository at <a href="https://github.com/helm/helm/search?q=timed+out+waiting+for+the+condition&type=Issues" rel="noreferrer">https://github.com/helm/helm/search?q=timed+out+waiting+for+the+condition&type=Issues</a> but I did not find a solution to it. </p>
<p>What's going on here?</p>
| <p>For me, helm couldn't pull the image as it was in a private repo.</p>
<p><code>kubectl get events</code> helped me get the logs.</p>
<pre><code>9m38s Warning Failed pod/airflow-scheduler-bbd8696bf-5mfg7 Failed to pull image
</code></pre>
<p>After authenticating, helm install command worked.</p>
<p>REF: <a href="https://github.com/helm/charts/issues/11904" rel="nofollow noreferrer">https://github.com/helm/charts/issues/11904</a></p>
|
<p>I am working on a microservice app and I use nginx ingress. I setup rules with 3 services, when I mention host in the rules like this bellow it always gives me 404 for all the services</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-srv
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
cert-manager.io/issuer: "local-selfsigned"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
tls:
- hosts:
- "tradephlo.local"
secretName: tls-ca
rules:
- host: "tradephlo.local"
- http:
paths:
- path: /api/main/?(.*)
pathType: Prefix
backend:
service:
name: tradephlo-main-srv
port:
number: 4000
- path: /api/integration/?(.*)
pathType: Prefix
backend:
service:
name: tradephlo-integration-srv
port:
number: 5000
- path: /?(.*)
pathType: Prefix
backend:
service:
name: tradephlo-client-srv
port:
number: 3000
</code></pre>
<p>However if I put wildcard in the host under the rules it works perfectly</p>
<pre><code>rules:
- host: "*.tradephlo.local"
</code></pre>
<p>I don't want to generate wildcard SSL in the production. Please help me point out what I am doing wrong here.</p>
| <p>The problem is the dash <code>-</code> in the following line:</p>
<pre><code>rules:
- host: "tradephlo.local"
- http:
</code></pre>
<p>With the dash in place, these are 2 different rules with 2 different hosts - <code>tradephlo.local</code> and <code>*</code>.</p>
<p>We can check this with the following command:</p>
<p><code>kubectl describe ing ingress-srv</code></p>
<p>And we get this:</p>
<pre><code>$ kubectl describe ing ingress-srv
Name: ingress-srv
Namespace: default
Address: xxxxxxxxxx
Default backend: default-http-backend:80 (10.60.0.9:8080)
TLS:
tls-ca terminates tradephlo.local
Rules:
Host Path Backends
---- ---- --------
*
/api/main/?(.*) nginx:80 (yyyyy:80)
</code></pre>
<p>And we get this after removed <code>-</code>:</p>
<pre><code>$ kubectl describe ing ingress-srv
Name: ingress-srv
Namespace: default
Address: xxxx
Default backend: default-http-backend:80 (10.60.0.9:8080)
TLS:
tls-ca terminates tradephlo.local
Rules:
Host Path Backends
---- ---- --------
tradephlo.local
/api/main/?(.*) nginx:80 (yyyyyy:80)
</code></pre>
<p>So there is no need to use a wildcard: when you do, ingress treats <code>*.tradephlo.local</code> as a different host and falls through to the <code>*</code> rule.</p>
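<p>For completeness, the fixed <code>rules</code> section keeps <code>host</code> and <code>http</code> in the same list item:</p>
<pre><code>rules:
- host: "tradephlo.local"
  http:
    paths:
    - path: /api/main/?(.*)
      pathType: Prefix
      backend:
        service:
          name: tradephlo-main-srv
          port:
            number: 4000
    # ...the other two paths from the question, unchanged
</code></pre>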
|
<p>I have a multithreaded application that requires many cores on a SINGLE instance.
Just wondering how to merge multiple pods (ie containers) into a single big node, so the application can run on this big node.
For example: 64 pods into a single one (ie 64 cores).</p>
<p>This is not for production or HA service, just computation.
Application cannot be re-written.</p>
<p>Have <a href="https://stackoverflow.com/questions/20306628/how-to-make-all-distributed-nodes-ram-available-to-a-single-node">this reference</a>, a bit outdated.</p>
| <p>You can't do this with Kubernetes alone. K8s can't merge nodes into a single computational unit.</p>
<p>If you have 1 node, with 60 CPU cores, you can assign 60 cores to your application.<br />
If you have 2 nodes, with 30 CPU cores each, you can only assign 30 CPU cores for your application, on each node.</p>
<p>Solutions in the post you linked are the way to go in your case.</p>
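<p>To illustrate the single-node case above: on one 64-core node you would simply request the cores for the one pod (a sketch; leave a little headroom for system daemons):</p>
<pre><code>resources:
  requests:
    cpu: "60"
  limits:
    cpu: "60"
</code></pre>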
|
<p>For pod with java application there is security context:</p>
<pre><code>spec:
securityContext:
runAsUser: 888
runAsGroup: 888
fsGroup: 888
</code></pre>
<p>Deployment manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: springboot-demo
spec:
replicas: 1
selector:
matchLabels:
app: springboot-demo
template:
metadata:
labels:
app: springboot-demo
spec:
securityContext:
runAsUser: 888
runAsGroup: 888
containers:
- name: springboot-demo
image: k8s.192.168.20.15.nip.io:5443/springboot-demo:8.0.0
resources:
limits:
memory: "1024Mi"
cpu: "1000m"
ports:
- containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
name: springboot-demo
spec:
type: NodePort
selector:
app: springboot-demo
ports:
- port: 9000
targetPort: 9000
</code></pre>
<p>If I enter the container and run</p>
<blockquote>
<p>jcmd GC.heap_dump</p>
</blockquote>
<p>-> permission denied
<a href="https://i.stack.imgur.com/rxP05.png" rel="nofollow noreferrer">pic 1</a></p>
<p>If I enter as a root and run</p>
<blockquote>
<p>jcmd GC.heap_dump</p>
</blockquote>
<p>-> unable to open socket file /proc/1/root/tmp/.java_pid1 ...<a href="https://i.stack.imgur.com/TEQJf.png" rel="nofollow noreferrer">pic2</a></p>
<p>If I delete security context and run jcmd GC.heap_dump -> it works ok.</p>
<p>What could be the problem?</p>
| <p>I think this can have something to do with linux capabilies: <a href="https://linux-audit.com/linux-capabilities-101/" rel="nofollow noreferrer">https://linux-audit.com/linux-capabilities-101/</a></p>
<p>I would check which capabilies are available to sh/bash process started by 888 user, and compare those bash/sh started by root.</p>
|
<p>I have a bunch of services deployed to a Kubernetes cluster (various pods and services all load-balanced behind a gateway). I am making a REST call to one of them and getting unexpected errors, but the problem is I'm not actually sure <em>which</em> pod/service is actually throwing the error. I would like to check all the logs of every pod/service.</p>
<p>When I run <code>kubectl get namespace</code> I get:</p>
<pre><code>NAME STATUS AGE
another-app Active 22d
myapp Active 22d
myapp-database Active 22d
default Active 22d
kube-public Active 22d
kube-system Active 22d
zanzabar Active 22d
</code></pre>
<p>Is there a <code>kubectl log</code> command that can scan the entire cluster for logs and search them for a specific error message? For instance, say the error message I'm getting back from the REST (<code>curl</code>) is "Sorry mario your princess is in another castle". Is there any way I can use <code>kubectl log</code> to scan all pod/service logs for that phrase and display the results back to me? If not, then whats the best/easiest way <strong>using <code>kubectl</code></strong> to find the pod/service with the error message (and hopefully, more details behind my error)?</p>
| <p>You can fetch the logs of a particular pod or container (use the -c flag for a specific container) and grep for the error message by piping the log command into grep.</p>
<p>For example, if I want to fetch the logs of a pod named my-pod and grep for the phrase "error exists", the command is:</p>
<pre><code>kubectl logs my-pod | grep "error exists"
</code></pre>
<p>Refer this <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods" rel="nofollow noreferrer">document</a> for multiple ways of fetching logs and this <a href="https://stackoverflow.com/questions/53348331/extract-lines-from-kubernetes-log">similar question</a> for more information on grep usage.</p>
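<p>To scan the whole cluster for the phrase, you can loop over every pod (a rough sketch; add <code>--previous</code> if the failing container has restarted):</p>
<pre><code>for pod in $(kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'); do
  ns=${pod%%/*}; name=${pod##*/}
  if kubectl logs -n "$ns" "$name" --all-containers 2>/dev/null | grep -q "princess is in another castle"; then
    echo "match in $ns/$name"
  fi
done
</code></pre>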
|
<p>thanks for checking out my topic.</p>
<p>I'm currently working on having kustomize download the resource and base files from our git repository.
We have tried a few options, some following the documentation and some not, see below. But we are still not able to download from our remote repo, and when running kubectl apply it looks for a local resource based on the git url and file names.</p>
<pre><code>resources:
- ssh://git@SERVERURL:$PORT/$REPO.GIT
- git::ssh://git@SERVERURL:$PORT/$REPO.GIT
- ssh::git@SERVERURL:$PORT/$REPO.GIT
- git::git@SERVERURL:$PORT/$REPO.GIT
- git@SERVERURL:$PORT/$REPO.GIT
</code></pre>
<p>As a workaround I have added the git clone for the expected folder to my pipeline, but the goal is to have the bases/resources downloaded directly from the kustomization url.
Any ideas or some hints on how to get it running?</p>
| <p>Use <code>bases</code> instead of <code>resources</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.0
resources:
- rbac.yaml
- manifest.yaml
</code></pre>
<p>Add the complete path to your source and set the <code>ref</code> parameter to the tag or branch you want to download.</p>
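<p>The same <code>?ref=</code> suffix applies to ssh remotes; the exact URL form varies across kustomize versions, so treat this as a sketch to adapt:</p>
<pre><code>bases:
  - ssh://git@SERVERURL:PORT/REPO.git//path/to/overlay?ref=develop
</code></pre>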
|
<p>I see issues in the Spring cloud config server (Springboot) logs when connecting to the repo where configs are stored. I'm not sure if it's unable to clone because of credentials or something else (git-upload-pack not permitted). Any pointers to this would be great.</p>
<pre><code>2021-10-06 22:52:51.763 INFO 1 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2021-10-06 22:52:51.764 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2021-10-06 22:52:51.765 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 1 ms
2021-10-06 22:52:54.769 WARN 1 --- [nio-8080-exec-1] .c.s.e.MultipleJGitEnvironmentRepository : Error occured cloning to base directory.
org.eclipse.jgit.api.errors.TransportException: https://github.asdf.asdf.asdf.com/asdfad/sdasdf: git-upload-pack not permitted on 'https://github.asdf.asdf.adsf.com/sdfdf/asdfsad-configs/'
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:254) ~[org.eclipse.jgit-5.1.3.201810200350-r.jar!/:5.1.3.201810200350-r]
at org.eclipse.jgit.api.CloneCommand.fetch(CloneCommand.java:306) ~[org.eclipse.jgit-5.1.3.201810200350-r.jar!/:5.1.3.201810200350-r]
at org.eclipse.jgit.api.CloneCommand.call(CloneCommand.java:200) ~[org.eclipse.jgit-5.1.3.201810200350-r.jar!/:5.1.3.201810200350-r]
at org.springframework.cloud.config.server.environment.JGitEnvironmentRepository.cloneToBasedir(JGitEnvironmentRepository.java:612) [spring-cloud-config-server-3.0.4.jar!/:3.0.4]
at org.springframework.cloud.config.server.environment.JGitEnvironmentRepository.copyRepository(JGitEnvironmentRepository.java:587) [spring-cloud-config-server-3.0.4.jar!/:3.0.4]
at org.springframework.cloud.config.server.environment.JGitEnvironmentRepository.createGitClient(JGitEnvironmentRepository.java:570) [spring-cloud-config-server-3.0.4.jar!/:3.0.4]
at org.springframework.cloud.config.server.environment.JGitEnvironmentRepository.refresh(JGitEnvironmentRepository.java:267) [spring-cloud-config-server-3.0.4.jar!/:3.0.4]
at org.springframework.cloud.config.server.environment.JGitEnvironmentRepository.getLocations(JGitEnvironmentRepository.java:245) [spring-cloud-config-server-3.0.4.jar!/:3.0.4]
at org.springframework.cloud.config.server.environment.MultipleJGitEnvironmentRepository.getLocations(MultipleJGitEnvironmentRepository.java:139) [spring-cloud-config-server-3.0.4.jar!/:3.0.4]
</code></pre>
<p>The Spring boot app.properties for the config server app looks like this -</p>
<pre><code>spring.cloud.config.server.git.uri=https://github.sdf.sdasdf.asdf.com/asdf/asdf-configs
spring.cloud.config.server.git.username=github-token
spring.cloud.config.server.git.password={github_token}
</code></pre>
<p>The endpoint returns the response below:</p>
<pre><code>{"status":"DOWN","components":{"clientConfigServer":{"status":"UNKNOWN","details":{"error":"no property sources located"}},"configServer":{"status":"DOWN","details":{"repository":{"application":"app","profiles":"default"},"error":"org.springframework.cloud.config.server.environment.NoSuchRepositoryException: Cannot clone or checkout repository: https://github-token@github.sdf.sdf.dsfs.com/sdf/sdfsd-configs"}},"discoveryComposite":{"description":"Discovery Client not initialized","status":"UNKNOWN","components":{"discoveryClient":{"description":"Discovery Client not initialized","status":"UNKNOWN"}}},"diskSpace":{"status":"UP","details":{"total":103880232960,"free":24558080000,"threshold":10485760,"exists":true}},"livenessState":{"status":"UP"},"ping":{"status":"UP"},"readinessState":{"status":"UP"},"refreshScope":{"status":"UP"}},"groups":["liveness","readiness"]}
</code></pre>
| <p>The GitHub token needs to be passed as the username, whereas I was configuring it as the password property for the Spring Boot app. The password property needs to be left empty and the GitHub token assigned to the username, like below:</p>
<pre><code>spring.cloud.config.server.git.username=asdfasdxxxxxxxyssssysysyssysy
spring.cloud.config.server.git.password=
</code></pre>
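<p>Once restarted, you can verify the server can actually read the repo by hitting a config endpoint directly, e.g. (assuming the default port 8888 and a client application named <code>app</code> with the <code>default</code> profile):</p>
<pre><code>curl http://localhost:8888/app/default
</code></pre>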
|
<p><strong>Getting error</strong>
The postgres deployment for the service is failing. I checked the yaml with yamllint and it is valid, but I still get the error. The deployment file contains a ServiceAccount, a Service and a StatefulSet.</p>
<pre><code>install.go:158: [debug] Original chart version: ""
install.go:175: [debug] CHART PATH: /builds/xxx/xyxy/xyxyxy/xx/xxyy/src/main/helm
Error: YAML parse error on postgresdeployment.yaml: error converting YAML to JSON: yaml: line 24: did not find expected key
helm.go:75: [debug] error converting YAML to JSON: yaml: line 24: did not find expected key
YAML parse error on postgresdeployment.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
/home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
/home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
/home/circleci/helm.sh/helm/pkg/action/install.go:489
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/circleci/helm.sh/helm/pkg/action/install.go:230
main.runInstall
/home/circleci/helm.sh/helm/cmd/helm/install.go:223
main.newUpgradeCmd.func1
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:113
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:74
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
</code></pre>
<p><strong>postgresdeployment.yaml</strong></p>
<ol>
<li>Is there is any invalid yaml syntax?</li>
<li>Any indentation is missing ?</li>
<li>Which node is missing here?</li>
</ol>
<pre><code>{{- if contains "-dev" .Values.istio.suffix }}
# Postgre ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: postgres
---
# PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
---
# Postgre StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: postgres
spec:
serviceAccountName: postgres
securityContext:
{{- toYaml .Values.securityContext | nindent 8 }}
terminationGracePeriodSeconds: {{ default 60 .Values.terminationGracePeriodSeconds }}
volumes:
{{ include "xxx.volumes.logs.spec" . | indent 8 }}
- emptyDir: { }
name: postgres-disk
containers:
- name: postgres
image: "{{ template "xxx.dockerRegistry.hostport" . }}/postgres:latest"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: postgres
containerPort: 5432
livenessProbe:
tcpSocket:
port: 5432
failureThreshold: 3
initialDelaySeconds: 240
periodSeconds: 45
timeoutSeconds: 5
readinessProbe:
tcpSocket:
port: 5432
failureThreshold: 2
initialDelaySeconds: 180
periodSeconds: 5
timeoutSeconds: 20
resources:
{{ if .Values.lowResourceMode }}
{{- toYaml .Values.resources.low | nindent 12 }}
{{ else }}
{{- toYaml .Values.resources.high | nindent 12 }}
{{ end }}
env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
volumeMounts:
- name: postgres-disk
mountPath: /var/lib/postgresql/data
{{- end }}
</code></pre>
| <p>The templating mustaches in helm (and its golang text/template peer) must be one token; otherwise yaml believes that <code>{</code> opens a dict, and then the second <code>{</code> tries to open a <em>child</em> dict, and just like in JSON that's not a valid structure.</p>
<p>So you'll want:</p>
<pre class="lang-yaml prettyprint-override"><code> serviceAccountName: postgres
securityContext:
{{- toYaml .Values.securityContext | nindent 8 }}
</code></pre>
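<p>This class of error can be caught before installing by rendering the chart locally:</p>
<pre><code>helm template . --debug
# or render exactly what the failed install saw:
helm install myrelease . --dry-run --debug
</code></pre>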
|
<p>How can egress from a Kubernetes pod be limited to only specific FQDN/DNS with Azure CNI Network Policies?</p>
<p>This is something that can be achieved with:</p>
<p>Istio</p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
name: googleapis
namespace: default
spec:
destination:
service: "*.googleapis.com"
ports:
- port: 443
protocol: https
</code></pre>
<p>Cilium</p>
<pre><code>apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "fqdn"
spec:
endpointSelector:
matchLabels:
app: some-pod
egress:
- toFQDNs:
- matchName: "api.twitter.com"
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": kube-system
"k8s:k8s-app": kube-dns
toPorts:
- ports:
- port: "53"
protocol: ANY
rules:
dns:
- matchPattern: "*"
</code></pre>
<p>OpenShift</p>
<pre><code>apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
name: default-rules
spec:
egress:
- type: Allow
to:
dnsName: www.example.com
- type: Deny
to:
cidrSelector: 0.0.0.0/0
</code></pre>
<p>How can something similar be done with Azure CNI Network Policies?</p>
| <p>At the moment, network policies with FQDN/DNS rules are not supported on AKS.</p>
<p>If you use Azure CNI & the Azure Policy plugin, you get the default Kubernetes Network Policies.</p>
<p>If you use Azure CNI & the Calico policy plugin, you get advanced possibilities like Global Network Policies, but not the FQDN/DNS one. This is a paid feature on Calico Cloud, unfortunately.</p>
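<p>Without FQDN support, the closest built-in approximation is pinning egress to the destination's resolved IP ranges with a plain NetworkPolicy (the CIDR below is a placeholder and will break whenever the endpoint's IPs change):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-api
spec:
  podSelector:
    matchLabels:
      app: some-pod
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder for the API's published range
      ports:
        - port: 443
          protocol: TCP
</code></pre>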
|
<p>I have a cronjob which runs once every midnight; however one day I deployed a wrong version of it and consequently the pod it created failed soon after.</p>
<p>So the problem is that when I delete the failed pod, the cronjob immediately recreates it. How can I stop that? Its image is already broken, so however many times it recreates the pod it will fail again.</p>
<p>my question is "How can I delete failed pod created by cronjob?"</p>
<p>P.S. I'm using rancher but I'm not sure my problem is related to it.</p>
| <p><code>my question is "How can I delete failed pod created by cronjob?"</code></p>
<p>You can use <code>ttlSecondsAfterFinished</code> to control how long you want to keep <strong>either Complete or Failed job</strong> in your cluster.</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
...
spec:
schedule: ...
...
jobTemplate:
spec:
ttlSecondsAfterFinished: 43200 <-- example 12 hrs. 0 to clean immediately.
template:
...
</code></pre>
<p>Another thing to note: this is a K8s v1.21 beta feature at the time of this writing.</p>
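<p>For pods that have already failed, you can clean them up by hand, and suspend the CronJob while the image is broken:</p>
<pre><code>kubectl delete pods -n &lt;namespace&gt; --field-selector=status.phase=Failed
kubectl patch cronjob &lt;name&gt; -n &lt;namespace&gt; -p '{"spec":{"suspend":true}}'
</code></pre>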
|
<h3>What I'm Trying To Do</h3>
<p>I'm trying to make a deployment and watch for k8s events until the deployment is ready using <strong>k8s node api</strong> (Watch): <a href="https://github.com/kubernetes-client/javascript/blob/master/examples/typescript/watch/watch-example.ts" rel="noreferrer">https://github.com/kubernetes-client/javascript/blob/master/examples/typescript/watch/watch-example.ts</a></p>
<hr />
<h3>My Questions</h3>
<p>I have read this section: <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#watch-bookmarks" rel="noreferrer">https://kubernetes.io/docs/reference/using-api/api-concepts/#watch-bookmarks</a> over and over but I still can't understand, <strong>from the client perspective</strong>:</p>
<ol>
<li>What is this feature trying to solve?</li>
<li>When should I use it?</li>
<li>Can I not use it and not miss events?</li>
<li>How do I use it?</li>
<li>What should I do when I receive bookmark event.</li>
</ol>
| <p>I'll attempt to answer your questions from the client's perspective:</p>
<h2>What is this feature trying to solve? </h2>
<p>Its intended to reduce the load on kube-apiserver from clients that issue Watch requests for a resource type. Bookmarks are intended to let the client know that the server has sent the client all events up to the <code>resourceVersion</code> specified in the Bookmark event. This is to make sure that if the watch on the client side fails or the channel closes (after timeout), the client can resume the watch from <code>resourceVersion</code> last returned in the Bookmark event. It's more like a checkpoint event in the absence of real mutating events like Added, Modified, Deleted.</p>
<p>The client could already be caching the resourceVersion that is received as part of the regular Added, Modified, Deleted events, but consider a case where the resource you're watching has had no updates in the last 30 minutes and the <code>resourceVersion</code> you have cached is "100".
</p>
<p>In the absence of bookmark events, you will attempt to resume from "100", but the server could return a "410 Gone" indicating that it doesn't have that version or it is too old. In that case, the client will have to do a List to get the latest resourceVersion and initiate a watch from there or start the watch from latest by specifying no resourceVersion, which is more expensive for the server to process.</p>
<p>In the presence of bookmark events, the server might attempt to send you a Bookmark event with resourceVersion "150" indicating that you have processed up to that version and in the event of a restart you can start processing from there onwards.
This contrived example in the documentation shows just that. You start your watcher at rv="10245", you process some events up to rv="10596" and then the server sends you a bookmark with rv="12746" so in the event of a restart you can start processing from there onwards.</p>
<pre><code>GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true
---
200 OK
Transfer-Encoding: chunked
Content-Type: application/json
{
"type": "ADDED",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "10596", ...}, ...}
}
...
{
"type": "BOOKMARK",
"object": {"kind": "Pod", "apiVersion": "v1", "metadata": {"resourceVersion": "12746"} }
}
</code></pre>
<h2>When should I use it?</h2>
<p>For efficient processing of change events on the resource you're watching, it seems better to process Bookmarks than not to. There are no guarantees around the delivery of Bookmarks though - the server might not support it (if it is running an older version of Kubernetes), or it might not send them at all.</p>
<h2>Can I not use it and not miss events?</h2>
<p>Yes, you could totally do that, but your likelihood of missing events decreases if you use Bookmarks. Also remember that it benefits not only the client but also the kube-apiserver, because it could return events from its local cache if it has the "fresh" resourceVersion you received in a Bookmark, in the absence of which it will have to do a consistent read from etcd. I recommend <a href="https://www.youtube.com/watch?v=PLSDvFjR9HY" rel="noreferrer">this</a> video to understand how Watch events work under the hood.</p>
<h2>How do I use it?</h2>
<p>Depending on which client library you're using, the usage might differ, but in a raw HTTP request you can specify this query param: <strong>allowWatchBookmarks=true</strong></p>
<pre><code>GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245&allowWatchBookmarks=true
</code></pre>
<p>If you're using <a href="https://github.com/kubernetes/client-go" rel="noreferrer">client-go</a>, there is a tool called Reflector; <a href="https://github.com/kubernetes/client-go/blob/master/tools/cache/reflector.go#L409" rel="noreferrer">this</a> is how it requests bookmarks and <a href="https://github.com/kubernetes/client-go/blob/master/tools/cache/reflector.go#L518" rel="noreferrer">this</a> is how it processes them.</p>
<h2>What should I do when I receive a Bookmark event?</h2>
<p>Extract the resourceVersion from the Bookmark event and cache it for when you try to restart your watcher for any reason.</p>
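<p>With the javascript client from the watch example you linked, that boils down to passing <code>allowWatchBookmarks</code> in the query params and caching the version from every event, BOOKMARK included (a sketch; error handling and the restart logic are elided):</p>
<pre><code>const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const watch = new k8s.Watch(kc);

let lastResourceVersion = '0';
watch.watch(
  '/api/v1/namespaces/default/pods',
  { allowWatchBookmarks: true, resourceVersion: lastResourceVersion },
  (type, obj) => {
    // BOOKMARK objects carry only metadata.resourceVersion: cache it
    lastResourceVersion = obj.metadata.resourceVersion;
    if (type !== 'BOOKMARK') {
      console.log(type, obj.metadata.name);
    }
  },
  (err) => {
    // on close/error, restart the watch from lastResourceVersion
  }
);
</code></pre>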
|
<p>I'm working with Docker and Kubernetes on AWS (cloudformation yaml)</p>
<p>Yaml</p>
<pre><code>...
resources:
requests:
memory: "4Gi"
cpu: "0.5"
limits:
memory: "4Gi"
cpu: "0.5"
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM amazonlinux:latest
RUN yum -y install java-1.8.0
EXPOSE 8103
ADD myapp.jar
ENTRYPOINT ["java","-XX:MinRAMPercentage=50.0","-XX:MaxRAMPercentage=80.0","-jar","myapp.jar"]
</code></pre>
<p>Follow you can find the version of java "1.8.0_282" from the bash of my pod</p>
<pre><code>bash-4.2# java -XX:+PrintFlagsFinal -version | grep -iE 'HeapSize|PermSize|ThreadStackSize'
intx CompilerThreadStackSize = 0 {pd product}
uintx ErgoHeapSizeLimit = 0 {product}
uintx HeapSizePerGCThread = 87241520 {product}
uintx InitialHeapSize := 67108864 {product}
uintx LargePageHeapSizeThreshold = 134217728 {product}
uintx MaxHeapSize := 1073741824 {product}
intx ThreadStackSize = 1024 {pd product}
intx VMThreadStackSize = 1024 {pd product}
openjdk version "1.8.0_282"
OpenJDK Runtime Environment (build 1.8.0_282-b08)
OpenJDK 64-Bit Server VM (build 25.282-b08, mixed mode)
</code></pre>
<p>I read this <a href="https://merikan.com/2019/04/jvm-in-a-container/#java-10" rel="noreferrer">blog</a> where the author has explained that <em>"If you are running Java 8 update 191 or later, or Java 10, 11,12, 13 etc. you must NOT use the UseCGroupMemoryLimitForHeap option. Instead you should use the UseContainerSupport that is activated by default."</em></p>
<p>For this reason I have added the options "-XX:MinRAMPercentage=50.0","-XX:MaxRAMPercentage=80.0" in my ENTRYPOINT instruction.</p>
<pre><code>ENTRYPOINT ["java","-XX:MinRAMPercentage=50.0","-XX:MaxRAMPercentage=80.0","-jar","myapp.jar"]
</code></pre>
<p>Unfortunately the <code>MaxRAMPercentage</code> value didn't change in my pod, as you can see below (it still shows the default of 25%):</p>
<pre><code>bash-4.2# java -XX:+PrintFlagsFinal -version | grep -E "UseContainerSupport | InitialRAMPercentage | MaxRAMPercentage | MinRAMPercentage"
double InitialRAMPercentage = 1.562500 {product}
double MaxRAMPercentage = 25.000000 {product}
double MinRAMPercentage = 50.000000 {product}
bool UseContainerSupport = true {product}
openjdk version "1.8.0_282"
OpenJDK Runtime Environment (build 1.8.0_282-b08)
OpenJDK 64-Bit Server VM (build 25.282-b08, mixed mode)
</code></pre>
<p>I have also tried to add JAVA_OPTS to my YAML as follows, but nothing changed:</p>
<pre><code>env:
- name: JAVA_OPTS
value: "-XX:MinRAMPercentage=50.0 -XX:MaxRAMPercentage=80.0"
</code></pre>
<p>Finally the question :-)</p>
<p>How and where do I have to set the -XX:MinRAMPercentage and -XX:MaxRAMPercentage options in order to change the default value of MaxRAMPercentage? I want the JVM to use 3Gi of the 4Gi dedicated to the container in the YAML.</p>
<p>Thanks to everyone</p>
| <p>Please change your env variable from <code>JAVA_OPTS</code> to <code>JAVA_TOOL_OPTIONS</code>. Unlike <code>JAVA_OPTS</code>, which is only a convention honored by some launcher scripts, <code>JAVA_TOOL_OPTIONS</code> is read by the JVM itself at startup. This worked for me with Java 8.</p>
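<p>For completeness, this is how that could look in the deployment YAML from the question (a sketch; the JVM reads <code>JAVA_TOOL_OPTIONS</code> automatically, so no Dockerfile change is needed):</p>

```yaml
env:
  - name: JAVA_TOOL_OPTIONS
    value: "-XX:MinRAMPercentage=50.0 -XX:MaxRAMPercentage=80.0"
```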
|
<p>I have installed a 3 nodes cluster with <strong>K3S</strong>. Nodes are correctly detected by <code>kubectl</code> and I'm able to deploy images.</p>
<pre><code>$ k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,etcd,master 4h31m v1.22.2+k3s1
worker-01 Ready <none> 3h59m v1.22.2+k3s1
worker-02 Ready <none> 4h3m v1.22.2+k3s1
</code></pre>
<p>I've also installed <strong>Rancher</strong> latest version (<code>2.6.0</code>) via docker-compose:</p>
<pre><code>version: '2'
services:
rancher:
image: rancher/rancher:latest
restart: always
ports:
- "8080:80/tcp"
- "4443:443/tcp"
volumes:
- "rancher-data:/var/lib/rancher"
privileged: true
volumes:
rancher-data:
</code></pre>
<p>The dashboard is reachable from every node and I've imported an existing cluster, running the following command:</p>
<pre><code>curl --insecure -sfL https://192.168.1.100:4443/v3/import/66txfzmv4fnw6bqj99lpmdt6jlx4rpwblzhx96wvljc8gczphcn2c2_c-m-nz826pgl.yaml | kubectl apply -f -
</code></pre>
<p>The cluster appears as <strong>Active</strong> but with 0 nodes and with the message:</p>
<pre><code>[Pending] waiting for full cluster configuration
</code></pre>
<p>The full yaml status is here:</p>
<pre><code>apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
annotations:
field.cattle.io/creatorId: user-5bk6w
creationTimestamp: "2021-10-05T10:06:35Z"
finalizers:
- wrangler.cattle.io/provisioning-cluster-remove
generation: 1
managedFields:
- apiVersion: provisioning.cattle.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.: {}
v:"wrangler.cattle.io/provisioning-cluster-remove": {}
f:spec: {}
f:status:
.: {}
f:clientSecretName: {}
f:clusterName: {}
f:conditions: {}
f:observedGeneration: {}
f:ready: {}
manager: rancher
operation: Update
time: "2021-10-05T10:08:30Z"
name: ofb
namespace: fleet-default
resourceVersion: "73357"
uid: 1d03f05e-77b7-4361-947d-2ef5b50928f5
spec: {}
status:
clientSecretName: ofb-kubeconfig
clusterName: c-m-nz826pgl
conditions:
- lastUpdateTime: "2021-10-05T10:08:30Z"
status: "False"
type: Reconciling
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "False"
type: Stalled
- lastUpdateTime: "2021-10-05T14:08:52Z"
status: "True"
type: Created
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "True"
type: RKECluster
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "True"
type: BackingNamespaceCreated
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "True"
type: DefaultProjectCreated
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "True"
type: SystemProjectCreated
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "True"
type: InitialRolesPopulated
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "True"
type: CreatorMadeOwner
- lastUpdateTime: "2021-10-05T10:08:15Z"
status: "True"
type: Pending
- lastUpdateTime: "2021-10-05T10:08:15Z"
message: waiting for full cluster configuration
reason: Pending
status: "True"
type: Provisioned
- lastUpdateTime: "2021-10-05T14:08:52Z"
message: Waiting for API to be available
status: "True"
type: Waiting
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "True"
type: NoDiskPressure
- lastUpdateTime: "2021-10-05T10:06:35Z"
status: "True"
type: NoMemoryPressure
- lastUpdateTime: "2021-10-05T10:06:39Z"
status: "False"
type: Connected
- lastUpdateTime: "2021-10-05T14:04:52Z"
status: "True"
type: Ready
observedGeneration: 1
ready: true
</code></pre>
<p>Cluster agent shows no particular issue:</p>
<pre><code>$ kubectl -n cattle-system logs -l app=cattle-cluster-agent
time="2021-10-05T13:54:30Z" level=info msg="Connecting to wss://192.168.1.100:4443/v3/connect with token starting with 66txfzmv4fnw6bqj99lpmdt6jlx"
time="2021-10-05T13:54:30Z" level=info msg="Connecting to proxy" url="wss://192.168.1.100:4443/v3/connect"
</code></pre>
<p>Is there something I need to do to get the cluster fully running? I've tried downgrading Rancher to <code>2.5.0</code>, but I got the same issue.</p>
| <p>I believe this is an incompatibility with Kubernetes v1.22.</p>
<p>Having encountered the same issue with Rancher v2.6.0 when importing a new v1.22.2 cluster (running on IBM Cloud VPC infrastructure), I tailed the logs of the Docker container running Rancher and observed:</p>
<pre><code>2021/10/20 14:40:31 [INFO] Starting cluster controllers for c-m-cs78tnxc
E1020 14:40:31.346373 33 reflector.go:139] pkg/mod/github.com/rancher/client-go@v0.21.0-rancher.1/tools/cache/reflector.go:168: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.meta.k8s.io)
</code></pre>
<p><a href="https://kubernetes.io/blog/2021/07/26/update-with-ingress-nginx/" rel="nofollow noreferrer">Kubernetes v1.22 removes the beta Ingress APIs (which is why NGINX-Ingress moved to v1.x)</a>, which appears to be the cause, and there is an <a href="https://github.com/rancher/rancher/issues/33312" rel="nofollow noreferrer">open issue on the Rancher GitHub</a> to update Rancher for compatibility with Kubernetes v1.22.</p>
<p>In the meantime, after recreating the new cluster using Kubernetes v1.21.5 on the same infrastructure, I was able to import it into Rancher successfully.</p>
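<p>A quick way to check which Ingress API versions a cluster still serves (illustrative command; on v1.22 the <code>v1beta1</code> group version that Rancher 2.6.0 tries to watch is gone):</p>

```shell
kubectl api-versions | grep networking.k8s.io
# v1.21.x serves: networking.k8s.io/v1 and networking.k8s.io/v1beta1
# v1.22.x serves: networking.k8s.io/v1 only
```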
|
<p>What's the best way to get the Helm subchart service names to reference in my ingress controller, which will sit in the parent chart?</p>
<pre><code>values.yaml
---
ingress:
paths:
- serviceName: app-1
path: /app-1/*
port: 8080
- serviceName: app-2
path: /app-2/*
port: 8080
ingress.yaml
---
{{- range .Values.ingress.paths }}
- path: {{ .path }}
backend:
{{- $subchart := .serviceName -}}
serviceName: {{- include "$subchart.fullname" .}}
servicePort: {{ .port }}
{{- end }}
</code></pre>
<p>This fails with:</p>
<pre><code>template: no template "$subchart.fullname" associated with template "gotpl"
</code></pre>
| <p>Helm 3.7 has solved this problem:
<a href="https://github.com/helm/helm/pull/9957" rel="noreferrer">https://github.com/helm/helm/pull/9957</a>.<br />
You can now reference a subchart's named templates like this:</p>
<pre><code>{{ template "bar.fullname" .Subcharts.bar }}
</code></pre>
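<p>Applied to the loop from the question, the lookup could look like this (an untested sketch; it assumes each <code>serviceName</code> in the values file matches a subchart name under <code>.Subcharts</code>, and that each subchart defines a <code>&lt;chart&gt;.fullname</code> named template):</p>

```yaml
{{- range .Values.ingress.paths }}
- path: {{ .path }}
  backend:
    serviceName: {{ include (printf "%s.fullname" .serviceName) (index $.Subcharts .serviceName) }}
    servicePort: {{ .port }}
{{- end }}
```

Using <code>include</code> with a <code>printf</code>-built name is what makes the template name dynamic, and <code>index $.Subcharts .serviceName</code> hands the subchart its own scope so its <code>fullname</code> helper renders correctly.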
|
<p>Two of my microk8s clusters running version 1.21 just stopped working.</p>
<p>kubectl locally returns <code>The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?</code></p>
<p>microk8s.status says not running, and microk8s.inspect just checks four services:</p>
<pre><code>Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-kubelite is running
</code></pre>
<p>The apiserver is not mentioned, and it's not running (checking its status separately says "Will not run along with kubelite").</p>
<p>I didn't change anything on any of the machines.</p>
<p>I tried upgrading microk8s to 1.22 - no change.</p>
<p>journal.log for apiserver says:</p>
<pre><code>Oct 18 07:57:05 myserver microk8s.daemon-kubelite[30037]: I1018 07:57:05.143264 30037 daemon.go:65] Starting API Server
Oct 18 07:57:05 myserver microk8s.daemon-kubelite[30037]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
Oct 18 07:57:05 myserver microk8s.daemon-kubelite[30037]: I1018 07:57:05.144650 30037 server.go:654] external host was not specified, using 192.168.1.10
Oct 18 07:57:05 myserver microk8s.daemon-kubelite[30037]: W1018 07:57:05.144719 30037 authentication.go:507] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
</code></pre>
<p>snap services:</p>
<pre><code>Service Startup Current Notes
microk8s.daemon-apiserver enabled inactive -
microk8s.daemon-apiserver-kicker enabled active -
microk8s.daemon-cluster-agent enabled active -
microk8s.daemon-containerd enabled active -
microk8s.daemon-control-plane-kicker enabled inactive -
microk8s.daemon-controller-manager enabled inactive -
microk8s.daemon-etcd enabled inactive -
microk8s.daemon-flanneld enabled inactive -
microk8s.daemon-kubelet enabled inactive -
microk8s.daemon-kubelite enabled active -
microk8s.daemon-proxy enabled inactive -
microk8s.daemon-scheduler enabled inactive -
</code></pre>
<p>It's not this (<a href="https://github.com/ubuntu/microk8s/issues/2486" rel="nofollow noreferrer">https://github.com/ubuntu/microk8s/issues/2486</a>), both info.yaml and cluster.yaml have the correct contents.</p>
<p>All machines are virtual Ubuntus running in Hyper-V in a Windows Server cluster.</p>
| <p>Turns out there were two different problems in the cluster, and that I hadn't changed anything was not entirely true.</p>
<h3>Single-node cluster:</h3>
<p>cluster.yaml was not correct, it was empty. Copying the contents of localnode.yaml to cluster.yaml fixed the problem.</p>
<h3>Multi-node cluster:</h3>
<p>One node had gone offline (microk8s not running) due to a stuck unsuccessful auto-refresh of the microk8s snap.</p>
<p>I had temporarily shut down one node for a couple of days. That left only one node to hold the vote for the dqlite master, which failed. When the shut-down node was turned back on, the cluster had already failed. Unsticking the auto-refresh on the third node fixed the cluster.</p>
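<p>For anyone hitting the same multi-node symptom, these are the kinds of commands involved in finding and unsticking the refresh (illustrative; the ID passed to <code>snap abort</code> is whatever <code>snap changes</code> reports as stuck in "Doing"):</p>

```shell
# look for a refresh stuck in the "Doing" state
snap changes microk8s

# abort the stuck change by its ID, then retry the refresh
sudo snap abort <change-id>
sudo snap refresh microk8s

# wait until the node reports ready again
microk8s status --wait-ready
```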
|
<p>I want to put my docker image running react into kubernetes and be able to hit the main page. I am able to get the main page just running <code>docker run --rm -p 3000:3000 reactdemo</code> locally. When I try to deploy to my kubernetes (running locally via docker-desktop) I get no response until eventually a timeout.</p>
<p>I tried this same process below with a springboot docker image and I am able to get a simple json response in my browser.</p>
<p>Below is my Dockerfile, deployment yaml (with service inside it), and commands I'm running to try and get my results. Morale is low, any help would be appreciated!</p>
<p>Dockerfile:</p>
<pre><code># pull official base image
FROM node
# set working directory
RUN mkdir /app
WORKDIR /app
# install app dependencies
COPY package.json /app
RUN npm install
# add app
COPY . /app
#Command to build ReactJS application for deploy might not need this...
RUN npm run build
# start app
CMD ["npm", "start"]
</code></pre>
<p>Deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
spec:
replicas: 1
selector:
matchLabels:
app: demo
template:
metadata:
labels:
app: demo
spec:
containers:
- name: reactdemo
image: reactdemo:latest
imagePullPolicy: Never
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: demo
spec:
type: NodePort
selector:
app: demo
ports:
- port: 3000
targetPort: 3000
protocol: TCP
nodePort: 31000
</code></pre>
<p>I then open a port on my local machine to the nodeport for the service:</p>
<pre><code>PS C:\WINDOWS\system32> kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
Forwarding from 127.0.0.1:31000 -> 3000
</code></pre>
<p>My assumption is that everything is in place at this point and I should be able to open a browser to hit <code>localhost:31000</code>. I expected to see that spinning react symbol for their landing page just like I do when I only run a local docker container.</p>
<p>Here is it all running:</p>
<pre><code>$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/demo-854f4d78f6-7dn7c 1/1 Running 0 4s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/demo NodePort 10.111.203.209 <none> 3000:31000/TCP 4s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/demo 1/1 1 1 4s
NAME DESIRED CURRENT READY AGE
replicaset.apps/demo-854f4d78f6 1 1 1 4s
</code></pre>
<p>Some extra things to note:</p>
<ul>
<li>Although I don't have it set up currently, I did have my Spring Boot service in the deployment file. I logged into its pod and ensured the react container was reachable. It was.</li>
<li>I haven't done anything with my firewall settings (but sort of assume I don't have to, since the run with the Spring Boot service worked?)</li>
<li>I see this in Chrome developer tools and so far don't think it's related to my problem: crbug/1173575, non-JS module files deprecated. I see this response in the main browser page after some time:</li>
</ul>
<hr />
<pre><code>localhost didn’t send any data.
ERR_EMPTY_RESPONSE
</code></pre>
| <p>Thanks for all the feedback peeps! In trying out the solutions presented I found my error and it was pretty silly. I tried removing the service and trying the different port configs mentioned above. What solved it was using 127.0.0.1:31000 instead of localhost. Not sure why that fixed it but it did!</p>
<p>That being said, a few things I found while trying the suggestions above:</p>
<ol>
<li>I found that I couldn't hit the cluster without doing the port forwarding, regardless of whether I had a service defined or not.</li>
<li><code>containerPort</code>, to my understanding, is for Kubernetes pod-to-pod communication and doesn't affect application function from a user perspective (could be wrong!).</li>
<li>Good to know about minikube. I'm thinking about trying it out, and if I do I'll know why port 3000 stops working.</li>
</ol>
<p>Thanks</p>
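<p>In other words, the combination that worked was the port-forward from the question together with the loopback address spelled out:</p>

```shell
kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
# then browse to http://127.0.0.1:31000 -- NOT http://localhost:31000
```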
|
| <p>The command <code>kubectl addons list</code> throws an error:</p>
<pre><code>Error: unknown command "addons" for "kubectl"
Run 'kubectl --help' for usage.
</code></pre>
<p>The command <code>kubectl plugin list</code> seems to return something different:</p>
<pre><code>error: unable to find any kubectl plugins in your PATH
</code></pre>
<p>Thanks</p>
| <p>As you have stated, you want to enable ingress in your Kubernetes cluster.</p>
<p>This works differently on full-blown Kubernetes than in minikube.</p>
<p>On Kubernetes, you will need to deploy and configure an ingress controller. Having done that, the ingress controller will watch resources of type <code>Ingress</code>.</p>
<p>Check the official documentation <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">here</a> for a list of supported ingress controllers and <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">here</a> for an example of how to deploy the NGINX ingress controller.</p>
|