<p>I enabled <code>ingress</code> on <code>minikube</code></p>
<pre><code>C:\WINDOWS\system32>minikube addons enable ingress
- Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
* Verifying ingress addon...
* The 'ingress' addon is enabled
</code></pre>
<p>But when I list it, I don't see it</p>
<pre><code>C:\WINDOWS\system32>minikube kubectl -- get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-px725            1/1     Running   0          13d
etcd-minikube                      1/1     Running   0          13d
kube-apiserver-minikube            1/1     Running   6          13d
kube-controller-manager-minikube   1/1     Running   0          13d
kube-proxy-h7r79                   1/1     Running   0          13d
kube-scheduler-minikube            1/1     Running   0          13d
storage-provisioner                1/1     Running   76         13d
</code></pre>
<p>Is the <code>ingress</code> not enabled? How can I check?</p>
| <p>I have recreated this situation and got the same results. After executing the command:</p>
<pre><code>minikube addons enable ingress
</code></pre>
<p>I got the same output as yours:</p>
<pre><code> - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
- Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
* Verifying ingress addon...
* The 'ingress' addon is enabled
</code></pre>
<p>I also got the same output when I executed:</p>
<pre><code>minikube kubectl -- get pod -n kube-system
</code></pre>
<hr />
<p><strong>Solution:</strong>
First, list the namespaces with:</p>
<pre><code>minikube kubectl -- get namespaces
</code></pre>
<p>The output should look like this:</p>
<pre><code>NAME              STATUS   AGE
default           Active   4m46s
ingress-nginx     Active   2m28s
kube-node-lease   Active   4m47s
kube-public       Active   4m47s
kube-system       Active   4m47s
</code></pre>
<p>The ingress should be in the <code>ingress-nginx</code> namespace. Execute:</p>
<pre><code>minikube kubectl -- get pods --namespace ingress-nginx
</code></pre>
<p>and the output should look like this:</p>
<pre><code>NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-nqnvj        0/1     Completed   0          2m56s
ingress-nginx-admission-patch-62z9z         0/1     Completed   0          2m55s
ingress-nginx-controller-5d88495688-ssv5c   1/1     Running     0          2m56s
</code></pre>
<p><strong>Summary: your ingress controller is working; it just runs in a different namespace.</strong></p>
|
<p>I want to forward the ports</p>
<pre><code>kubectl port-forward ...
</code></pre>
<p>But for this I need to find out the name of pods, I run the command</p>
<pre><code>kubectl -n main_sp get pods
</code></pre>
<p>Getting a list:</p>
<pre><code>NAME                         READY   STATUS    RESTARTS   AGE
main-ms-hc-78469b74c-7lfdh   1/1     Running   0          13h
</code></pre>
<p>I'm trying</p>
<pre><code>kubectl port-forward main-ms-hc-78469b74c-7lfdh 8080:80
</code></pre>
<p>and I get</p>
<pre><code>Error from server (NotFound): pods "main-ms-hc-78469b74c-7lfdh" not found
</code></pre>
<p>What am I doing wrong?</p>
| <p>You need to specify the namespace as well when using port-forward:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl port-forward -n main_sp main-ms-hc-78469b74c-7lfdh 8080:80
</code></pre>
<p>To port-forward a pod:</p>
<pre><code>$ kubectl port-forward -n <namespace> <pod-name> <local-port>:<target-port>
</code></pre>
<p>To port-forward a pod via service name:</p>
<pre><code>$ kubectl port-forward -n <namespace> svc/<service-name> <local-port>:<target-port>
</code></pre>
|
<p>I have a Kubernetes <code>Job</code>, <code>job.yaml</code>:</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: gcr.io/project-id/my-image:latest
        command: ["sh", "run-vpn-script.sh", "/to/download/this"] # need to run this multiple times
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
</code></pre>
<p>I need to run the <code>command</code> with different parameters; I have about 30 parameters to run. I'm not sure what the best solution is here. I'm thinking of creating a container in a loop to run all the parameters. How can I do this? I want to run all the <code>commands</code> or containers simultaneously.</p>
| <p>Some of the ways you could do it, apart from the solutions proposed in other answers, are the following:</p>
<ul>
<li>With a templating tool like <code>Helm</code> where you would template the exact specification of your workload and then iterate over it with different values (see the example)</li>
<li>Use the Kubernetes official documentation on work queue topics:
<ul>
<li><a href="https://kubernetes.io/docs/tasks/job/indexed-parallel-processing-static/" rel="nofollow noreferrer">Indexed Job for Parallel Processing with Static Work Assignment</a> - alpha</li>
<li><a href="https://kubernetes.io/docs/tasks/job/parallel-processing-expansion/" rel="nofollow noreferrer">Parallel Processing using Expansions</a></li>
</ul>
</li>
</ul>
<hr />
<h3><code>Helm</code> example:</h3>
<p><code>Helm</code>, in short, is a templating tool that allows you to template your manifests (<code>YAML</code> files). With it you can have multiple <code>Job</code> instances, each with a different name and a different command.</p>
<p>Assuming that you've installed <code>Helm</code> by following this guide:</p>
<ul>
<li><em><a href="https://helm.sh/docs/intro/install/" rel="nofollow noreferrer">Helm.sh: Docs: Intro: Install</a></em></li>
</ul>
<p>You can create an example Chart that you will modify to run your <code>Jobs</code>:</p>
<ul>
<li><code>helm create chart-name</code></li>
</ul>
<p>You will need to delete everything in <code>chart-name/templates/</code> and clear the <code>chart-name/values.yaml</code> file.</p>
<p>After that you can create the <code>values.yaml</code> file that you will iterate over:</p>
<pre class="lang-yaml prettyprint-override"><code>jobs:
  - name: job1
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(3)"']
    image: perl
  - name: job2
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(20)"']
    image: perl
</code></pre>
<ul>
<li><code>templates/job.yaml</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>{{- range $jobs := .Values.jobs }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $jobs.name }}
  namespace: default # <-- FOR EXAMPLE PURPOSES ONLY!
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: {{ $jobs.image }}
        command: {{ $jobs.command }}
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
---
{{- end }}
</code></pre>
<p>Once you have created the above files, you can preview what will be applied to the cluster with the following command:</p>
<ul>
<li><code>$ helm template .</code> (inside the <code>chart-name</code> folder)</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job1
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(3)"]
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job2
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: my-container
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(20)"]
        securityContext:
          privileged: true
          allowPrivilegeEscalation: true
      restartPolicy: Never
</code></pre>
<blockquote>
<p>A side note #1!</p>
<p>This example will create <code>X</code> amount of <code>Jobs</code> where each one will be separate from the other. Please refer to the documentation on data persistency if the files that are downloaded are needed to be stored persistently (example: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes" rel="nofollow noreferrer">GKE</a>).</p>
</blockquote>
<blockquote>
<p>A side note #2!</p>
<p>You can also add your <code>namespace</code> definition in the templates (<code>templates/namespace.yaml</code>) so it will be created before running your <code>Jobs</code>.</p>
</blockquote>
<p>You can also install the above Chart with:</p>
<ul>
<li><code>$ helm install chart-name .</code> (inside the <code>chart-name</code> folder)</li>
</ul>
<p>After that you should see 2 completed <code>Jobs</code>:</p>
<ul>
<li><code>$ kubectl get pods</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME         READY   STATUS      RESTARTS   AGE
job1-2dcw5   0/1     Completed   0          82s
job2-9cv9k   0/1     Completed   0          82s
</code></pre>
<p>And the output they've produced:</p>
<ul>
<li><code>$ echo "one:"; kubectl logs job1-2dcw5; echo "two:"; kubectl logs job2-9cv9k</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>one:
3.14
two:
3.1415926535897932385
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/68159868/kubernetes-creation-of-multiple-deployment-with-one-deployment-file/68251390#68251390">Stackoverflow.com: Questions: Kubernetes creation of multiple deployment with one deployment file</a></em></li>
</ul>
|
<p>Whenever I set up a Rancher Kubernetes cluster with RKE, the cluster sets up perfectly. However, I'm getting the following warning message:</p>
<pre><code>WARN[0011] [reconcile] host [host.example.com] is a control plane node without reachable Kubernetes API endpoint in the cluster
WARN[0011] [reconcile] no control plane node with reachable Kubernetes API endpoint in the cluster found
</code></pre>
<p><i>(in the above message, the <code>host.example.com</code> is a placeholder for my actual host name, this message is given for each controlplane host specified in the cluster.yml)</i></p>
<p>How can I modify the RKE <code>cluster.yml</code> file or any other setting to avoid this warning?</p>
| <p>I don't believe you can suppress this warning since, as you indicate in your comments, it is valid on the first <code>rke up</code> command. It is only a warning, and a valid one at that, even though your configuration appears to handle the situation. If you are worried about the logs, you could have your log-aggregation tool ignore or filter out the warning when it appears in close proximity to the initial <code>rke up</code> command. However, I would think twice about filtering it blindly, as it can indicate a real issue (if, for example, you thought the control-plane containers were already running).</p>
|
<p>According to the Kubernetes documentation:</p>
<blockquote>
<p>The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.</p>
<p>Annotations, like labels, are key/value maps</p>
</blockquote>
<p>Then there is <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set" rel="nofollow noreferrer">a detailed explanation</a> on the syntax of the annotation keys. But it says nothing about the value part.</p>
<p>Where can I find more about the allowed length and character set for the value of an annotation in Kubernetes?</p>
| <p><a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/validation/objectmeta.go#L47" rel="nofollow noreferrer">Here</a> you can find the code that validates annotations in the current master branch:</p>
<pre class="lang-golang prettyprint-override"><code>func ValidateAnnotations(annotations map[string]string, fldPath *field.Path) field.ErrorList {
allErrs := field.ErrorList{}
for k := range annotations {
for _, msg := range validation.IsQualifiedName(strings.ToLower(k)) {
allErrs = append(allErrs, field.Invalid(fldPath, k, msg))
}
}
if err := ValidateAnnotationsSize(annotations); err != nil {
allErrs = append(allErrs, field.TooLong(fldPath, "", TotalAnnotationSizeLimitB))
}
return allErrs
}
</code></pre>
<p>The <code>keys</code> are validated according to the rules that you mentioned.
The only validation applied to the values is the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/validation/objectmeta.go#L60" rel="nofollow noreferrer">total length of all annotations</a> (size of keys + size of values for all annotations) that can't be longer than 256 kB.</p>
<pre class="lang-golang prettyprint-override"><code>const TotalAnnotationSizeLimitB int = 256 * (1 << 10) // 256 kB
...
func ValidateAnnotationsSize(annotations map[string]string) error {
var totalSize int64
for k, v := range annotations {
totalSize += (int64)(len(k)) + (int64)(len(v))
}
if totalSize > (int64)(TotalAnnotationSizeLimitB) {
return fmt.Errorf("annotations size %d is larger than limit %d", totalSize, TotalAnnotationSizeLimitB)
}
return nil
}
</code></pre>
|
<p>I am trying to deploy 2 replicas on <strong>k3s</strong>, each one on a different node. According to the documentation, it should be pretty easy, so I must be making some silly mistake that I am not able to find. When I apply the deploy file, both of my pods run on the same node (<em>node1</em>). If I switch that node off, the pods start on the other 2 nodes (<em>node2</em>, <em>node3</em>). When I bring <em>node1</em> back and redeploy the app, everything runs on <em>node1</em> again.
If somebody could please advise what is wrong in my configuration, I would be really grateful.
(I run a fresh k3s on 3 servers with the same HW configuration)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tbd-node-js-runner
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tbd-node-js-runner
  template:
    metadata:
      labels:
        app: tbd-node-js-runner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - tbd-node-js-runner
            topologyKey: topology.kubernetes.io/hostname
      containers:
      - name: tbd-node-js-runner
        image: tabidoo/nodejs-runner:latest
        ports:
        - containerPort:
        env:
        - name: CONNECTION_STRING
          value: "..."
        ...
        imagePullPolicy: Always
      imagePullSecrets:
      - name: regcred
</code></pre>
| <ul>
<li><p>It is due to an incorrect <code>topologyKey</code>: it should be <code>kubernetes.io/hostname</code>, not <code>topology.kubernetes.io/hostname</code>.</p>
</li>
<li><p>So it would be as follows:</p>
</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tbd-node-js-runner
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tbd-node-js-runner
  template:
    metadata:
      labels:
        app: tbd-node-js-runner
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - tbd-node-js-runner
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: tbd-node-js-runner
        image: tabidoo/nodejs-runner:latest
</code></pre>
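<p>If strict spreading would ever block scheduling (e.g. more replicas than available nodes), a softer variant is preferred anti-affinity, which lets the scheduler co-locate pods only as a last resort. A sketch of the relevant fragment, assuming the same <code>app</code> label as above (the weight value is just an example):</p>
<pre><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - tbd-node-js-runner
        topologyKey: "kubernetes.io/hostname"
</code></pre>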
|
<p>I am implementing a Kubernetes network policy for my app on K3s. I want to allow <code>egress</code> (external calls from the pod to the internet) for port <code>443</code>, i.e. <code>https</code> calls <em><strong>only</strong></em>, and deny/block all egress calls on port <code>80</code>, i.e. <code>http</code>. In short: allow <code>https</code> egress calls and deny <code>http</code> egress calls.<br/>
I am testing this with below <code>custom-dns.yaml</code> file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-deny-egress
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  # allow DNS resolution
  - ports:
    - port: 443
      protocol: UDP
    - port: 443
      protocol: TCP
</code></pre>
<p>After <code>kubectl apply -f custom-dns.yaml</code>, I create and log in to a pod: <code>kubectl run --restart=Never pod-v1 --image=busybox -i -t -l app=foo</code>, and test an http and an https URL with:</p>
<ol>
<li><code>wget https://www.google.com</code></li>
<li><code>wget http://drive.google.com/drive/u/0/my-drive</code> <br/><br/></li>
</ol>
<p>Both wget commands give a <code>wget: bad address</code> error. <br/><br/>
But when I do not apply this network policy, the same wget commands work and give the results below from the same pod:<br/>
<strong>i.</strong></p>
<pre><code> wget https://www.google.com
Connecting to www.google.com (172.217.167.164:443)
wget: note: TLS certificate validation not implemented
saving to 'index.html'
index.html 100% |******************************************************| 15264 0:00:00 ETA
'index.html' saved
</code></pre>
<p><strong>ii.</strong></p>
<pre><code>wget http://drive.google.com/drive/u/0/my-drive
Connecting to drive.google.com (142.250.192.46:80)
Connecting to drive.google.com (142.250.192.46:443)
wget: note: TLS certificate validation not implemented
Connecting to accounts.google.com (142.250.192.77:443)
saving to 'my-drive'
my-drive 100% |******************************************************| 92019 0:00:00 ETA
'my-drive' saved
</code></pre>
<p><strong>iii.</strong> <code>Telnet</code> to the google.com IP <code>172.217.167.164</code> on ports <code>80</code> & <code>443</code> succeeds</p>
<pre><code>#telnet 172.217.166.164 80
Connected to 172.217.166.164
^]q
# telnet 172.217.166.164 443
Connected to 172.217.166.164
^]
</code></pre>
<p><strong>iv.</strong> Similarly, <code>Telnet</code> to the drive.google.com IP <code>142.250.192.46</code> on ports <code>80</code> & <code>443</code> succeeds</p>
<p>What am I missing here ?</p>
| <ul>
<li><p>Once <code>Egress</code> is listed under <code>policyTypes</code>, a NetworkPolicy acts as a whitelist: every egress connection that is not explicitly allowed is denied. Your policy therefore already blocks port <code>80</code>; however, it also blocks DNS (port <code>53</code>), which is why <code>wget</code> fails with <code>bad address</code> before it even attempts an HTTP or HTTPS connection.</p>
</li>
<li><p>To achieve "allow <code>https</code>, deny <code>http</code>", keep the allow rule for port <code>443</code>/TCP and additionally allow DNS egress on port <code>53</code> (UDP and TCP). Port <code>80</code> then remains blocked by the default deny, so no separate "deny" rule is needed (NetworkPolicies cannot express deny rules anyway).</p>
</li>
</ul>
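<p>As an illustration, here is a minimal sketch of a policy that allows HTTPS plus DNS resolution and leaves everything else (including port 80) denied by default. The pod selector is taken from the question; the rest is an assumption about your environment:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-allow-https-only
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  # allow DNS resolution; without this, wget fails with "bad address"
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  # allow HTTPS; port 80 stays blocked by the default deny
  - ports:
    - port: 443
      protocol: TCP
</code></pre>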
|
<p>Using Kubernetes, we can control how many pods (app instances) are created or terminated simultaneously during a new deployment rollout. This is achievable using the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge" rel="nofollow noreferrer">max surge</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="nofollow noreferrer">max unavailable</a> properties. If we have tens of app instances, the default configuration will roll out multiple new instances while, at the same time, multiple instances are terminating.
The most preferable configuration for us is <code>max surge = 1</code> and <code>max unavailable = 0</code> (reason: to achieve smooth Kafka rebalancing). In that case, only a single app instance is started at a time, while up to a few instances may be terminating.</p>
<p>As I see it, the deployment strategy with <code>max surge = N</code> and <code>max unavailable = 0</code> is the following:</p>
<ul>
<li>step 1: N new pods (the first batch) starting</li>
<li>step 2: N pods terminating, and N new pods (the second batch) starting</li>
<li>step 3: N more pods terminating (and together with step 2, we could experience
up to 2N terminating pods), and N new pods (the third batch)
starting</li>
<li>step 4: same logic, and here we could experience more than 2N
pods still terminating</li>
<li>and so on...</li>
</ul>
<p>For our micro-service with a total of 30 pods, with <code>max surge = 1</code> and <code>max unavailable = 0</code> we have up to 3 terminating pods simultaneously, and with <code>max surge = 3</code> and <code>max unavailable = 0</code> up to 7 terminating pods.</p>
<p><strong>Is it possible to control the maximum number of simultaneously terminating pods during deployment rollout?</strong> Let's say I want to see at most one pod in a terminating state. So until pod will not be shut down completely, no new pods started.</p>
<p><code>kubectl version</code>:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"62876fc6d93e891aa7fbe19771e6a6c03773b0f7", GitTreeState:"clean", BuildDate:"2020-10-15T01:52:24Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"62876fc6d93e891aa7fbe19771e6a6c03773b0f7", GitTreeState:"clean", BuildDate:"2020-10-15T01:43:56Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p><code>maxUnavailable</code> and <code>maxSurge</code> is definitely the way to go. The steps you listed are on point, and it is how it's supposed to work.</p>
<p>What you are experiencing is most probably a bug. There is one similar issue on GitHub (<a href="https://github.com/kubernetes/kubernetes/issues/99513" rel="nofollow noreferrer">#99513</a>) about Kubernetes not respecting <code>maxUnavailable</code> and <code>maxSurge</code>.</p>
<p>If you can, create a new issue on GitHub, as yours is a bit different than the aforementioned one.</p>
<hr />
<p><strong>EDIT</strong></p>
<p>I found another similar issue (<a href="https://github.com/kubernetes/kubernetes/issues/95498" rel="nofollow noreferrer">#95498</a>) with a response stating it's working as intended. <sup>[<a href="https://github.com/kubernetes/kubernetes/issues/95498#issuecomment-712508628" rel="nofollow noreferrer">source</a>]</sup></p>
<blockquote>
<p>When the Deployment controller performs a rolling update on a Deployment, it scales up/down ReplicaSets based on the <code>.spec.replicas </code>of the ReplicaSets it controls. When a ReplicaSet is scaled down, the ReplicaSet's <code>.spec.replicas</code> will be reduced and some of its Pods will be terminating, and the Deployment controller simply ignores those terminating Pods when calculating maxSurge / maxUnavailable.</p>
<p>Similarly, ReplicaSet controller ignores terminating Pods while doing operations, such as calculating its <code>.status.availableReplicas</code></p>
</blockquote>
<p>but I'm having my doubts. Creating a new issue on GitHub and getting a developer to look into it is still a good option.</p>
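<p>For reference, the rollout strategy discussed above is configured on the Deployment spec like this (a minimal sketch of just the relevant fragment):</p>
<pre><code>spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
</code></pre>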
|
<p>I am trying to set up Envoy for k8s, but the Envoy service does not start and I see the following error in the log:</p>
<pre><code>"The v2 xDS major version is deprecated and disabled by default.
Support for v2 will be removed from Envoy at the start of Q1 2021.
You may make use of v2 in Q4 2020 by following the advice in https://www.envoyproxy.io/docs/envoy/latest/faq/api/transition "
</code></pre>
<p>I understand that I need to rewrite the configuration in v3. I'd appreciate some help, as I am not very good at this. Here is my config.</p>
<pre><code>static_resources:
  listeners:
  - name: k8s-controllers-listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 6443 }
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        config:
          stat_prefix: ingress_k8s_control
          cluster: k8s-controllers
  clusters:
  - name: k8s-controllers
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: round_robin
    http2_protocol_options: {}
    load_assignment:
      cluster_name: k8s-controllers
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER0_IP}, port_value: 6443 }
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER1_IP}, port_value: 6443 }
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER2_IP}, port_value: 6443 }
</code></pre>
| <p>This topic is covered by the <a href="https://www.envoyproxy.io/docs/envoy/latest/faq/api/envoy_v3#how-do-i-configure-envoy-to-use-the-v3-api" rel="nofollow noreferrer">official Envoy FAQ section</a>:</p>
<blockquote>
<p>All bootstrap files are expected to be v3.</p>
<p>For dynamic configuration, we have introduced two new fields to
<a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/config_source.proto#envoy-v3-api-msg-config-core-v3-configsource" rel="nofollow noreferrer">config sources</a>, transport API version and resource API version.
The distinction is as follows:</p>
<ul>
<li><p>The <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/config_source.proto#envoy-v3-api-field-config-core-v3-apiconfigsource-transport-api-version" rel="nofollow noreferrer">transport API</a> version indicates the API endpoint and version of <code>DiscoveryRequest</code>/<code>DiscoveryResponse</code> messages used.</p>
</li>
<li><p>The <a href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/config_source.proto#envoy-v3-api-field-config-core-v3-configsource-resource-api-version" rel="nofollow noreferrer">resource API</a> version indicates whether a v2 or v3 resource, e.g. v2 <code>RouteConfiguration</code> or v3 <code>RouteConfiguration</code>, is delivered.</p>
</li>
</ul>
<p>The API version must be set for both transport and resource API
versions.</p>
<p>If you see a warning or error with <code>V2 (and AUTO) xDS transport protocol versions are deprecated</code>, it is likely that you are missing
explicit V3 configuration of the transport API version.</p>
</blockquote>
<ul>
<li><p>Check out <a href="https://pjausovec.medium.com/the-v2-xds-major-version-is-deprecated-and-disabled-by-default-envoy-60672b1968cb" rel="nofollow noreferrer">this source</a> for an example.</p>
</li>
<li><p>There is <a href="https://www.getenvoy.io/" rel="nofollow noreferrer">this open source tool</a> that makes it easy to install and upgrade Envoy.</p>
</li>
</ul>
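<p>As an illustration, the config from the question could look roughly like this in v3 (a sketch, not verified against your deployment): the filter gets its v3 name <code>envoy.filters.network.tcp_proxy</code>, and the deprecated inline <code>config</code> field becomes a <code>typed_config</code> with the v3 type URL:</p>
<pre><code>static_resources:
  listeners:
  - name: k8s-controllers-listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 6443 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_k8s_control
          cluster: k8s-controllers
  clusters:
  - name: k8s-controllers
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: k8s-controllers
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER0_IP}, port_value: 6443 }
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER1_IP}, port_value: 6443 }
        - endpoint:
            address:
              socket_address: { address: ${CONTROLLER2_IP}, port_value: 6443 }
</code></pre>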
<hr />
<p>Also, you can still use v2 by:</p>
<ul>
<li><p>Setting <a href="https://www.envoyproxy.io/docs/envoy/v1.17.0/operations/cli#cmdoption-bootstrap-version" rel="nofollow noreferrer">--bootstrap-version</a> 2 on the CLI for a v2 bootstrap file.</p>
</li>
<li><p>Enabling the runtime <code>envoy.reloadable_features.enable_deprecated_v2_api</code> feature. This is implicitly enabled if a v2 <a href="https://www.envoyproxy.io/docs/envoy/v1.17.0/operations/cli#cmdoption-bootstrap-version" rel="nofollow noreferrer">--bootstrap-version</a> is set.</p>
</li>
</ul>
|
<p>I've scanned through all resources, still cannot find a way to change <code>extraPortMappings</code> in Kind cluster without deleting and creating again.</p>
<p>Is it possible and how?</p>
| <p>It's not said explicitly in the official docs, but I found some references that confirm your thoughts are correct: changing <code>extraPortMappings</code> (as well as other cluster settings) is only possible by recreating the kind cluster.</p>
<blockquote>
<p>if you use extraPortMappings in your config, they are "fixed" and
cannot be modified, unless you recreate the cluster.</p>
</blockquote>
<p><a href="https://blog.devgenius.io/updating-kind-kubernetes-api-certificate-after-reboot-1521b43f7574" rel="nofollow noreferrer">Source - Issues I Came Across</a></p>
<blockquote>
<p>Note that the cluster configuration cannot be changed. The only
workaround is to delete the cluster (see below) and create another one
with the new configuration.</p>
</blockquote>
<p><a href="https://10clouds.com/blog/kubernetes-environment-devs/" rel="nofollow noreferrer">Source - Kind Installation</a></p>
<blockquote>
<p>However, there are obvious disadvantages in the configuration and
update of the cluster, and the cluster can only be configured and
updated by recreating the cluster. So you need to consider
configuration options when you initialize the cluster.</p>
</blockquote>
<p><a href="https://www.programmersought.com/article/12347371148/" rel="nofollow noreferrer">Source - Restrictions</a></p>
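<p>In practice, the workflow is to put the desired <code>extraPortMappings</code> in a config file and recreate the cluster from it. A sketch, where the file name and port numbers are just examples:</p>
<pre><code># kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8080
    protocol: TCP
</code></pre>
<p>Then run <code>kind delete cluster</code> followed by <code>kind create cluster --config kind-config.yaml</code>.</p>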
|
<p>I'm having trouble in GitLab CI. When I execute <code>terraform apply</code> locally, everything is OK (kubectl works correctly both in the GitLab CI container and locally), but executing the same script in GitLab CI throws the error shown below.</p>
<p>terraform version locally <code>v0.12.24</code></p>
<p>terraform version in gitlab ci container <code>v0.12.25</code></p>
<p>main.tf</p>
<pre><code>provider "google" {
project = "profiline-russia"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_container_cluster" "primary" {
name = "main-cluster"
location = "europe-west3"
remove_default_node_pool = true
initial_node_count = 1
}
resource "google_container_node_pool" "primary_nodes" {
name = "node-pool"
location = "europe-west3"
cluster = google_container_cluster.primary.name
node_count = 1
node_config {
machine_type = "n1-standard-1"
}
}
# dashboard ui
# module "kubernetes_dashboard" {
# source = "cookielab/dashboard/kubernetes"
# version = "0.9.0"
# kubernetes_namespace_create = true
# kubernetes_dashboard_csrf = "random-string"
# }
# deployment server
resource "kubernetes_deployment" "deployment-server" {
metadata {
name = var.data-deployment-server.metadata.name
labels = {
App = var.data-deployment-server.labels.App
}
}
spec {
replicas = 1
selector {
match_labels = {
App = var.data-deployment-server.labels.App
}
}
template {
metadata {
labels = {
App = var.data-deployment-server.labels.App
}
}
spec {
container {
image = var.data-deployment-server.image.name # for passing this i made gcr public
name = var.data-deployment-server.container.name
command = var.data-deployment-server.container.command
port {
container_port = var.data-deployment-server.container.port
}
env {
name = "ENV"
value = "production"
}
env {
name = "DB_USERNAME"
value_from {
secret_key_ref {
name = kubernetes_secret.secret-db.metadata.0.name
key = "db_username"
}
}
}
env {
name = "DB_PASSWORD"
value_from {
secret_key_ref {
name = kubernetes_secret.secret-db.metadata.0.name
key = "db_password"
}
}
}
env {
name = "DB_NAME"
value_from {
secret_key_ref {
name = kubernetes_secret.secret-db.metadata.0.name
key = "db_name"
}
}
}
env {
name = "DEFAULT_BUCKET_NAME"
value = var.default-bucket-name
}
env {
name = "DATABASE_ClOUD_SQL_NAME"
value = var.database-cloud-sql-name
}
env {
name = "PROJECT_GCP_ID"
value = var.project-gcp-id
}
env {
name = "K8S_SA_CLOUD_STORAGE"
value_from {
secret_key_ref {
name = kubernetes_secret.secret-sa-cloud-storage.metadata.0.name
key = "sa-cloud-storage.json"
}
}
}
env {
name = "GOOGLE_APPLICATION_CREDENTIALS"
value = "/app/secrets/sa-cloud-storage.json"
}
liveness_probe {
http_get {
path = "/swagger"
port = var.data-deployment-server.container.port
}
initial_delay_seconds = 10
period_seconds = 10
}
}
container {
image = var.data-cloud-sql-proxy.image.name
name = var.data-cloud-sql-proxy.container.name
command = var.data-cloud-sql-proxy.container.command
volume_mount {
name = var.data-cloud-sql-proxy.volume.name
mount_path = "/secrets/"
read_only = true
}
}
volume {
name = var.data-cloud-sql-proxy.volume.name
secret {
secret_name = kubernetes_secret.secret-gsa.metadata.0.name
}
}
}
}
}
}
resource "kubernetes_service" "service-server" { # wget http://name-service-server:8000/swagger
metadata {
name = var.data-deployment-server.service.name
}
spec {
selector = {
App = var.data-deployment-server.labels.App
}
port {
port = var.data-deployment-server.container.port
}
type = var.data-deployment-server.service.type
}
}
# deployment client-web
resource "kubernetes_deployment" "deployment-client-web" {
metadata {
name = var.data-deployment-client-web.metadata.name
labels = {
App = var.data-deployment-client-web.labels.App
}
}
spec {
replicas = 1
selector {
match_labels = {
App = var.data-deployment-client-web.labels.App
}
}
template {
metadata {
labels = {
App = var.data-deployment-client-web.labels.App
}
}
spec {
container {
image = var.data-deployment-client-web.image.name
command = var.data-deployment-client-web.container.command
name = var.data-deployment-client-web.container.name
port {
container_port = var.data-deployment-client-web.container.port
}
liveness_probe {
http_get {
path = "/"
port = var.data-deployment-client-web.container.port
}
initial_delay_seconds = 300
period_seconds = 10
}
}
}
}
}
}
resource "kubernetes_service" "service-client-web" { # wget http://name-service-server:8000/swagger
metadata {
name = var.data-deployment-client-web.service.name
}
spec {
selector = {
App = var.data-deployment-client-web.labels.App
}
port {
port = var.data-deployment-client-web.container.port
}
type = var.data-deployment-client-web.service.type
}
}
# database
resource "google_sql_database" "database" {
name = "database-profiline-russia"
instance = google_sql_database_instance.db-instance.name
}
resource "google_sql_database_instance" "db-instance" {
name = "db-master-instance"
region = "europe-west3"
database_version = "POSTGRES_11"
settings {
tier = "db-f1-micro"
}
}
resource "google_sql_user" "db-user" {
name = "..."
instance = google_sql_database_instance.db-instance.name
password = "..."
}
resource "kubernetes_secret" "secret-db" {
metadata {
name = "name-secret-db"
}
data = {
db_username = google_sql_user.db-user.name
db_password = google_sql_user.db-user.password
db_name = google_sql_database.database.name
}
type = "Opaque"
}
resource "kubernetes_secret" "secret-gsa" {
metadata {
name = "name-secret-gsa"
}
data = {
"service_account.json" = file(var.cred-sa-default)
}
type = "Opaque"
}
resource "kubernetes_secret" "secret-sa-cloud-storage" {
metadata {
name = "name-secret-sa-cloud-storage"
}
data = {
"sa-cloud-storage.json" = file(var.cred-sa-cloud-storage)
}
type = "Opaque"
}
</code></pre>
<p>vars.tf</p>
<pre><code>variable "default-bucket-name" {
type = string
description = "default bucket name (bucket is not recreated; it was created previously by hand)"
}
variable "database-cloud-sql-name" {
type = string
description = "full database name"
}
variable "project-gcp-id" {
type = string
description = "gcp project id"
}
variable "cred-sa-default" {
type = string
description = "default service account credentials file"
}
variable "cred-sa-cloud-storage" {
type = string
description = "cloud storage service account credentials file"
}
variable "data-deployment-server" {
type = object({
metadata = object({
name = string
})
image = object({
name = string
})
labels = object({
App = string
})
container = object({
name = string
command = list(string)
port = number
})
service = object({
name = string
type = string
})
})
}
variable "data-cloud-sql-proxy" {
type = object({
image = object({
name = string
})
container = object({
name = string
command = list(string)
})
volume = object({
name = string
})
})
}
variable "data-deployment-client-web" {
type = object({
metadata = object({
name = string
})
image = object({
name = string
})
labels = object({
App = string
})
container = object({
name = string
command = list(string)
port = number
})
service = object({
name = string
type = string
})
})
}
</code></pre>
<p>terraform.tfvars has values of private vars</p>
<p>error in gitlab ci container:</p>
<pre><code> $ terraform apply -auto-approve
kubernetes_secret.secret-sa-cloud-storage: Refreshing state... [id=default/name-secret-sa-cloud-storage]
kubernetes_secret.secret-gsa: Refreshing state... [id=default/name-secret-gsa]
module.kubernetes_dashboard.kubernetes_secret.kubernetes_dashboard_certs: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard-certs]
module.kubernetes_dashboard.kubernetes_namespace.kubernetes_dashboard[0]: Refreshing state... [id=kubernetes-dashboard]
module.kubernetes_dashboard.kubernetes_service.kubernetes_dashboard: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard]
module.kubernetes_dashboard.kubernetes_service_account.kubernetes_dashboard: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard]
module.kubernetes_dashboard.kubernetes_cluster_role.kubernetes_dashboard: Refreshing state... [id=kubernetes-dashboard]
module.kubernetes_dashboard.kubernetes_cluster_role_binding.kubernetes_dashboard: Refreshing state... [id=kubernetes-dashboard]
module.kubernetes_dashboard.kubernetes_role.kubernetes_dashboard: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard]
module.kubernetes_dashboard.kubernetes_secret.kubernetes_dashboard_csrf: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard-csrf]
module.kubernetes_dashboard.kubernetes_config_map.kubernetes_dashboard_settings: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard-settings]
google_container_cluster.primary: Refreshing state... [id=projects/profiline-russia/locations/europe-west3/clusters/main-cluster]
module.kubernetes_dashboard.kubernetes_service.kubernetes_metrics_scraper: Refreshing state... [id=kubernetes-dashboard/dashboard-metrics-scraper]
kubernetes_service.service-server: Refreshing state... [id=default/name-service-server]
google_sql_database_instance.db-instance: Refreshing state... [id=db-master-instance]
kubernetes_service.service-client-web: Refreshing state... [id=default/name-service-client-web]
module.kubernetes_dashboard.kubernetes_role_binding.kubernetes_dashboard: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard]
module.kubernetes_dashboard.kubernetes_secret.kubernetes_dashboard_key_holder: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard-key-holder]
google_sql_user.db-user: Refreshing state... [id=username//db-master-instance]
google_sql_database.database: Refreshing state... [id=projects/profiline-russia/instances/db-master-instance/databases/database-profiline-russia]
module.kubernetes_dashboard.kubernetes_deployment.kubernetes_dashboard: Refreshing state... [id=kubernetes-dashboard/kubernetes-dashboard]
module.kubernetes_dashboard.kubernetes_deployment.kubernetes_metrics_scraper: Refreshing state... [id=kubernetes-dashboard/kubernetes-metrics-scraper]
kubernetes_deployment.deployment-client-web: Refreshing state... [id=default/deployment-client-web]
google_container_node_pool.primary_nodes: Refreshing state... [id=projects/profiline-russia/locations/europe-west3/clusters/main-cluster/nodePools/node-pool]
kubernetes_secret.secret-db: Refreshing state... [id=default/name-secret-db]
Error: Get "http://localhost/api/v1/namespaces/kubernetes-dashboard/serviceaccounts/kubernetes-dashboard": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/kubernetes-dashboard/services/dashboard-metrics-scraper": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/apis/apps/v1/namespaces/kubernetes-dashboard/deployments/kubernetes-dashboard": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/apis/apps/v1/namespaces/default/deployments/deployment-client-web": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-key-holder": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/default/services/name-service-client-web": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/apis/apps/v1/namespaces/kubernetes-dashboard/deployments/kubernetes-metrics-scraper": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/default/secrets/name-secret-gsa": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubernetes-dashboard": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterroles/kubernetes-dashboard": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/namespaces/kubernetes-dashboard/roles/kubernetes-dashboard": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-certs": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/default/services/name-service-server": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/kubernetes-dashboard": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/default/secrets/name-secret-sa-cloud-storage": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/namespaces/kubernetes-dashboard/rolebindings/kubernetes-dashboard": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/default/secrets/name-secret-db": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/kubernetes-dashboard/configmaps/kubernetes-dashboard-settings": dial tcp [::1]:80: connect: connection refused
Running after_script
00:01
Uploading artifacts for failed job
00:02
ERROR: Job failed: exit code 1
</code></pre>
| <p>Make sure that <code>load_config_file</code> is set to <code>false</code> in the kubernetes provider configuration if you don't use a local config file. This fixed the error in my case.</p>
<pre class="lang-yaml prettyprint-override"><code>load_config_file = false # when you wish not to load the local config file
</code></pre>
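<p>The <code>dial tcp [::1]:80: connect: connection refused</code> errors mean the kubernetes provider fell back to its default (localhost) because no kubeconfig was available inside the CI container. A sketch of an explicit provider configuration that avoids relying on a local config file (attribute names are for kubernetes provider versions before 2.0, where <code>load_config_file</code> still exists; the <code>google_container_cluster.primary</code> reference matches the cluster resource refreshed in the log above):</p>
<pre><code>data "google_client_config" "default" {}

provider "kubernetes" {
  load_config_file       = false
  host                   = "https://${google_container_cluster.primary.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
}
</code></pre>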
|
<p>Goal is to terminate the pod after completion of Job.
This is my yaml file. Currently, my pod status is <strong>completed</strong> after running the job.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
# Unique key of the Job instance
name: example-job
spec:
template:
metadata:
name: example-job
spec:
containers:
- name: container-name
image: my-img
command: ["python", "main.py"]
# Do not restart containers after they exit
restartPolicy: Never
# of retries before marking as failed.
backoffLimit: 4
</code></pre>
| <p>You can configure Kubernetes to remove Jobs once they are complete.</p>
<p>If the Job is created by a <strong>CronJob</strong>, you can limit how many finished Jobs (and their Pods) are kept:</p>
<pre><code>successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
</code></pre>
<p>You can set these <strong>history limits</strong> in the CronJob spec as shown above.</p>
<blockquote>
<p>The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit
fields are optional. These fields specify how many completed and
failed jobs should be kept. By default, they are set to 3 and 1
respectively. Setting a limit to 0 corresponds to keeping none of the
corresponding kind of jobs after they finish.</p>
</blockquote>
<p><code>backoffLimit: 4</code> retries the Job up to 4 times before marking it as failed.</p>
<p>Read more at : <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits</a></p>
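<p>For a plain <code>Job</code> like the one in the question (not managed by a CronJob), there is also <code>ttlSecondsAfterFinished</code>, which tells Kubernetes to delete the Job and its Pods some time after it finishes. A sketch based on the question's manifest (the feature is stable in recent Kubernetes versions):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  # delete the Job and its pods 60 seconds after completion
  ttlSecondsAfterFinished: 60
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: container-name
        image: my-img
        command: ["python", "main.py"]
      restartPolicy: Never
</code></pre>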
|
<p>I am searching for a tutorial or a good reference to perform docker container live migration in Kubernetes between two hosts (embedded devices - arm64 architecture).</p>
<p>As far as I have searched internet resources, I could not find complete documentation about it. I am a newbie, and it would be really helpful if someone could provide good reference material so that I can improve.</p>
| <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>As @David Maze said, in terms of containers and pods it's not really a live migration. Usually pods are managed by <code>deployments</code>, which own <code>replicasets</code> that control pod state: pods are created and kept at the requested replica count. Any change in the number of pods (e.g. deleting one) or in the image used will trigger pod recreation.</p>
<p>This can also be used to reschedule pods onto different nodes when, for instance, you need to perform maintenance on a node or remove/add one.</p>
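<p>For node maintenance specifically, the usual commands are as follows (a rough sketch; replace the node name):</p>
<pre><code># evict pods from the node so their controllers recreate them elsewhere
kubectl drain <node-name> --ignore-daemonsets
# allow scheduling on the node again once maintenance is done
kubectl uncordon <node-name>
</code></pre>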
<hr />
<p>As for your question in the comments, it doesn't necessarily have to be the same volume, though I suppose there can be a short downtime.</p>
<p>Sharing volumes between kubernetes clusters on premise (cloud may differ) is not a built-in feature. You may want to look at an <code>NFS</code> server deployed in your network:</p>
<p><a href="https://www.linuxtechi.com/configure-nfs-persistent-volume-kubernetes/" rel="nofollow noreferrer">Mounting external NFS share to pods</a></p>
|
<p>I have a simple web app that uses <code>volume</code>/<code>persistent volume claim</code> to serve static data from there. Pods got scheduled only on the first worker node where volume resides.
How to deal with the <code>shared</code> volume between nodes in case I want to scale pods and have them allocated across all the available nodes?</p>
<pre><code> - name: app-nginx
image: nginx:1.20.0
command: ["/usr/sbin/nginx", "-g", "daemon off;"]
volumeMounts:
- name: static-volume
mountPath: "/opt/static"
volumes:
- name: static-volume
persistentVolumeClaim:
claimName: pvc-static
</code></pre>
<p>One option is to use <code>NFS</code> and not physically allocate volume on an EC2 instance.
Another way is to duplicate the static files for each pod (populating them into the proper directory with an init container), but that requires extra time and is not feasible for a lot of static data.</p>
<p>What's the proper way to deal with such a case within Kubernetes? Is there a way to declare deployment which will be using logically same volume but physically different instances located on the different nodes?</p>
| <p>What you are looking for is a volume provider that supports the <code>ReadOnlyMany</code>
or <code>ReadWriteMany</code> <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">Access Mode</a>.</p>
<p>Follow the documentation link to get a list of the officially supported ones.</p>
<p>If you are on <code>AWS</code> than probably using <code>EFS</code> through the <code>NFS</code> plugin will be the easiest solution, but please take into account the fact it is an NFS-based solution and you might hit a performance penalty.</p>
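<p>For illustration, a claim requesting a many-node access mode could look like the following (the storage class name is an assumption for an EFS-backed provisioner):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-static
spec:
  accessModes:
    - ReadWriteMany   # or ReadOnlyMany; must be supported by the volume plugin
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi
</code></pre>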
<p>As a side note, what you are trying to do smells like an anti-pattern to me.
Docker images for an application should be self-contained. In your case, a container serving a static website should contain all the static files it needs in order to be fully portable. This would remove the need for an external volume with the data entirely.</p>
|
<p>I have an EBS volume that I am using that is not encrypted and has a bunch on data on it.
I want to take a snapshot of the data on that volume, create a new EBS volume from that snapshot but encrypt it, and then and use it in my EKS cluster.
I know how to create a persistent volume, persistent volume claim and then mount it in a pod for an unencrypted EBS volume. How do I do this with an encrypted EBS volume?
I did try the above, restored the snapshot and selected to use encryption with the default key and successfully mounted the encrypted EBS volume to the pod and I could see the files but when I opened the files they were indeed unreadable and therefore encrypted. I assume I need to apply the key somewhere somehow to allow me to properly read the files in the pod?</p>
<p>Here is the code to create the persistent volume :</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: existing-volume-2
annotations:
#pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: standard-rwo
claimRef:
name: my-pvc
namespace: default
awsElasticBlockStore:
volumeID: "vol-xxx82072b1bd3a222"
fsType: ext4
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- us-east-1a
</code></pre>
<p>Here is the code for the persistent volume claim -</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
storageClassName: standard-rwo
volumeName: existing-volume-2
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>Here is the code to bring up the pod that will use the PVC:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: web-server
spec:
containers:
- name: web-server
image: alpine:latest
command:
- /bin/sh
- "-c"
- "sleep 60m"
volumeMounts:
- mountPath: /tmp
name: data
volumes:
- name: data
persistentVolumeClaim:
claimName: my-pvc
</code></pre>
| <p>I figured it out -
(from <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html</a>)</p>
<p>1 - Create a snapshot of the original unencrypted volume.</p>
<p>2 - Create a copy of the snapshot you just took and check the option to make it encrypted.</p>
<p>3 - Create the new encrypted volume by restoring the copy that you just encrypted. That volume will be encrypted by default.</p>
<p>4 - Create the persistent volume using the new encrypted volume.</p>
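<p>Steps 1-3 can also be done with the AWS CLI; a rough sketch (the IDs are placeholders, and a volume restored from an encrypted snapshot is encrypted automatically):</p>
<pre><code># 1 - snapshot the unencrypted volume
aws ec2 create-snapshot --volume-id vol-xxxxxxxx
# 2 - copy the snapshot with encryption enabled
aws ec2 copy-snapshot --source-snapshot-id snap-xxxxxxxx \
  --source-region us-east-1 --encrypted
# 3 - restore the encrypted copy into a new volume
aws ec2 create-volume --snapshot-id snap-yyyyyyyy \
  --availability-zone us-east-1a
</code></pre>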
|
<p>I accidentally deleted the config file from ~/.kube/config. Every kubectl command fails due to config missing.</p>
<p>Example:</p>
<pre><code>kubectl get nodes
</code></pre>
<blockquote>
<p>The connection to the server localhost:8080 was refused - did you
specify the right host or port?</p>
</blockquote>
<p>I have already install k3s using:</p>
<pre><code>export K3S_KUBECONFIG_MODE="644"
curl -sfL https://get.k3s.io | sh -s - --docker
</code></pre>
<p>and kubectl using:</p>
<pre><code>snap install kubectl --classic
</code></pre>
<p>Does anyone know how to fix this?</p>
| <p>The master copy is available at /etc/rancher/k3s/k3s.yaml. So, copy it back to ~/.kube/config</p>
<pre><code>cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
</code></pre>
<p>Reference: <a href="https://rancher.com/docs/k3s/latest/en/cluster-access/" rel="nofollow noreferrer">https://rancher.com/docs/k3s/latest/en/cluster-access/</a></p>
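<p>Alternatively, since the file was installed with mode 644, you can point kubectl at the k3s config directly instead of copying it:</p>
<pre><code>export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
</code></pre>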
|
| <p>I have the following command, which works on yq 3, but it fails when I try to upgrade to yq 4:</p>
<p><code>yq w -i dep.yaml 'spec.spec.image' $(MY_VAL)</code></p>
<p>On yq 4 I get an error that it doesn't know <code>w</code>. How can I make this work?
I didn't find any matching example that applies to my case.</p>
<p><a href="https://mikefarah.gitbook.io/yq/upgrading-from-v3" rel="nofollow noreferrer">https://mikefarah.gitbook.io/yq/upgrading-from-v3</a></p>
| <p>Take a look at the section 'Updating / writing documents' of the <a href="https://mikefarah.gitbook.io/yq/upgrading-from-v3" rel="nofollow noreferrer">migration guide</a>.</p>
<p>The following command should work for your task with version 4 of <a href="https://mikefarah.gitbook.io/yq/" rel="nofollow noreferrer">yq</a>:</p>
<p><code>dep.yaml</code> before execution</p>
<pre class="lang-yaml prettyprint-override"><code>a:
b: 1
spec:
spec:
image: image_old.jpg
c:
d: 2
</code></pre>
<p><code>MY_VAL="image_new.jpg" yq -i e '.spec.spec.image = strenv(MY_VAL)' dep.yaml</code></p>
<p><code>dep.yaml</code> after execution</p>
<pre class="lang-yaml prettyprint-override"><code>a:
b: 1
spec:
spec:
image: image_new.jpg
c:
d: 2
</code></pre>
|
<p>I have a very simple docker-compose app with nginx and php-fpm services.</p>
<p>I would like to create a Helm chart that would deploy this exact same thing as a unit (a POD I guess?) but I would like to be able to scale the php-fpm service.</p>
<p>How can I achieve that?</p>
<p>My idea is to have these containers in a single deployment so I don't have a lot of deployments scattered.</p>
<p>i.e</p>
<pre><code>App1:
- Container 1 (php-fpm autoscaled or manually scaled)
- Container 2 (nginx)
- Container 3 (Redis)
- Container 4 (something else that this app needs)
App2
- Container 1
- Container 2
- Container 3 etc...
</code></pre>
<p>I could do different deployments but then it would be like</p>
<pre><code>app1_php-fpm
app1_nginx
app1_redis
app2_container1
app2_container2
app2_container3
</code></pre>
<p>Instead of a single pod.</p>
| <blockquote>
<p>The "one-container-per-Pod" model is the most common Kubernetes use
case; in this case, you can think of a Pod as a wrapper around a
single container; Kubernetes manages Pods rather than managing the
containers directly.</p>
</blockquote>
<p>Nginx and Redis should run as separate services so that they can scale independently of your application, in a decoupled manner.</p>
<p>Nginx can reverse proxy multiple applications so you don't need to have a separate Nginx instance for every app.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/#using-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/#using-pods</a></p>
|
<p>I have a very simple docker-compose app with nginx and php-fpm services.</p>
<p>I would like to create a Helm chart that would deploy this exact same thing as a unit (a POD I guess?) but I would like to be able to scale the php-fpm service.</p>
<p>How can I achieve that?</p>
<p>My idea is to have these containers in a single deployment so I don't have a lot of deployments scattered.</p>
<p>i.e</p>
<pre><code>App1:
- Container 1 (php-fpm autoscaled or manually scaled)
- Container 2 (nginx)
- Container 3 (Redis)
- Container 4 (something else that this app needs)
App2
- Container 1
- Container 2
- Container 3 etc...
</code></pre>
<p>I could do different deployments but then it would be like</p>
<pre><code>app1_php-fpm
app1_nginx
app1_redis
app2_container1
app2_container2
app2_container3
</code></pre>
<p>Instead of a single pod.</p>
| <p>It's best practice to keep a single container inside a pod, although running multiple containers is also possible.</p>
<p>Keep the <strong>Redis</strong> and <strong>Nginx</strong> containers as separate Deployments, and add an HPA to each so they can scale up and down based on load.</p>
<p>Keeping the Deployments separate also helps when you need to do maintenance or roll out a deployment without downtime.</p>
<p>However, if you don't have the option to keep <strong>Nginx</strong> aside, you can run it alongside the <strong>fpm</strong> container, as it's also a <strong>lightweight</strong> container.</p>
<p>For example here I have one fpm and Nginx containers running inside single POD : <a href="https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx/blob/master/wordpress-deployment.yaml" rel="nofollow noreferrer">https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx/blob/master/wordpress-deployment.yaml</a></p>
<p>If possible, split each application into its own Deployment so that deployments, maintenance, and scaling stay easy to manage.</p>
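<p>For the php-fpm Deployment specifically, a minimal HPA sketch (the deployment name and thresholds are placeholders, and metrics-server must be running in the cluster):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app1-php-fpm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app1-php-fpm
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
</code></pre>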
|
| <p>I have different sets of <code>environment</code> variables per deployment/microservice, and the <code>value</code> for each environment (dev/test/qa) is different.</p>
<p>Do I need an <code>overlay</code> file for each deployment/microservice against each environment (dev/test/qa), or can I manage with a single overlay per environment?</p>
<p>Deployment - <code>app-1.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-1
spec:
  template:
    spec:
      containers:
      - name: example-1
        image: example:1.0
        env:
        - name: "A1"
          value: "B1"
        - name: "D1"
          value: "E1"
</code></pre>
<p>Deployment - <code>app-2.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-2
spec:
  template:
    spec:
      containers:
      - name: example-2
        image: example:2.0
        env:
        - name: "X1"
          value: "Y1"
        - name: "P1"
          value: "Q1"
</code></pre>
| <p>You can keep everything inside a single YAML file and divide the YAML as needed.</p>
<p>You can use <code>---</code> to merge multiple YAML documents into one file, as in the example below.</p>
<p>In a single YAML file you can add everything as required: Secret, Deployment, Service, etc.</p>
<p>However, if you have a different cluster to manage for each environment, applying a single YAML file might create issues; in that case it's better to keep the files separate.</p>
<p>If you are planning to set up CI/CD and deployment automation, I would suggest keeping a single deployment file with a variable approach.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: DEPLOYMENT_NAME
spec:
  template:
    spec:
      containers:
      - name: example-2
        image: IMAGE_NAME
        env:
        - name: "A1"
          value: "B1"
</code></pre>
<p>Using the <code>sed</code> command you can replace the values of <code>IMAGE_NAME</code>, <code>DEPLOYMENT_NAME</code>, <code>A1</code>, etc. at deploy time based on the environment, and as soon as the file is ready you can apply it from the CI/CD server.</p>
<p>Single merged file :</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-1
spec:
  template:
    spec:
      containers:
      - name: example-1
        image: example:1.0
        env:
        - name: "A1"
          value: "B1"
        - name: "D1"
          value: "E1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-2
spec:
  template:
    spec:
      containers:
      - name: example-2
        image: example:2.0
        env:
        - name: "X1"
          value: "Y1"
        - name: "P1"
          value: "Q1"
</code></pre>
<p><strong>EDIT</strong></p>
<p>If managing environment variables is the only concern, you can also use a <strong>Secret</strong> or <strong>ConfigMap</strong>.</p>
<p><strong>secret.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: dev-env-sample
type: Opaque
data:
  extra: YmFyCg==
stringData:
  A1: "B1"
  D1: "E1"
</code></pre>
<p>This is a single Secret file storing all the dev environment variables; note that values under <code>data</code> must be base64-encoded.</p>
<p>To inject all the variables into the dev deployment, add the config below to the deployment's container spec, so that every variable inside that <code>Secret</code> (or <code>ConfigMap</code>) gets injected into the deployment.</p>
<pre><code>envFrom:
- secretRef:
    name: dev-env-sample
</code></pre>
<p><a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p>
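<p>You can also create the same Secret imperatively, which takes care of the base64 encoding for you (the name and values match the example above):</p>
<pre><code>kubectl create secret generic dev-env-sample \
  --from-literal=A1=B1 \
  --from-literal=D1=E1
</code></pre>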
<p><strong>configmap.yaml</strong></p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: dev-configmap
data:
extra: YmFyCg==
  A1: "B1"
  D1: "E1"
</code></pre>
<p>And you can inject the ConfigMap into the deployment using:</p>
<pre><code>envFrom:
- configMapRef:
name: dev-configmap
</code></pre>
<p>The difference between a <strong>Secret</strong> and a <strong>ConfigMap</strong> is that a Secret stores its <code>data</code> values in <strong>base64</strong>-encoded format, while a ConfigMap stores them in plain text.</p>
<p>You can also merge the multiple <code>secrets</code> or <code>config map</code> in single YAML file</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: dev-env-sample
type: Opaque
data:
  extra: YmFyCg==
stringData:
  A1: "B1"
  D1: "E1"
---
apiVersion: v1
kind: Secret
metadata:
  name: stag-env-sample
type: Opaque
data:
  extra: YmFyCg==
stringData:
  A1: "B1"
  D1: "E1"
---
apiVersion: v1
kind: Secret
metadata:
  name: prod-env-sample
type: Opaque
data:
  extra: YmFyCg==
stringData:
  A1: "B1"
  D1: "E1"
</code></pre>
<p>Inject the appropriate <strong>Secret</strong> into the deployment per environment.</p>
<p><code>dev-env-sample</code>, for example, can be referenced from the deployment file for the <strong>dev</strong> environment.</p>
|
| <p>How do I set the image name/tag for container images specified in CRDs through the <code>kustomization.yaml</code> <code>images</code> field?</p>
<p>The <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/images/" rel="nofollow noreferrer">images</a> field works well when the container images are specified in either a <code>Deployment</code> or a <code>StatefulSet</code>, but it does not transform a CRD resource from:</p>
<pre><code>apiVersion: foo.example.com/v1alpha1
kind: Application
spec:
image: xxx
</code></pre>
<p>To:</p>
<pre><code>apiVersion: foo.example.com/v1alpha1
kind: Application
spec:
image: new-image:tag
</code></pre>
| <p>Your task can be solved easily using <code>yq</code>. The command depends on the <code>yq</code> implementation you are using:</p>
<h3><a href="https://mikefarah.gitbook.io/yq/" rel="nofollow noreferrer">mikefarah/yq - version 4</a></h3>
<p><code>IMAGE="new-image:tag" yq e '.spec.image = strenv(IMAGE)'</code></p>
<h3><a href="https://github.com/kislyuk/yq" rel="nofollow noreferrer">kislyuk/yq</a></h3>
<p><code>yq -y --arg IMAGE "new-image:tag" '.spec.image |= $IMAGE'</code></p>
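<p>For example, to update the CRD manifest in place before running <code>kustomize build</code> (a sketch assuming the mikefarah v4 binary and that the manifest is named <code>application.yaml</code>):</p>
<pre><code>export IMAGE="new-image:tag"
yq e -i '.spec.image = strenv(IMAGE)' application.yaml
kustomize build .
</code></pre>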
|
| <p>I have different sets of <code>environment</code> variables per deployment/microservice, and the <code>value</code> for each environment (dev/test/qa) is different.</p>
<p>Do I need an <code>overlay</code> file for each deployment/microservice against each environment (dev/test/qa), or can I manage with a single overlay per environment?</p>
<p>Deployment - <code>app-1.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-1
spec:
  template:
    spec:
      containers:
      - name: example-1
        image: example:1.0
        env:
        - name: "A1"
          value: "B1"
        - name: "D1"
          value: "E1"
</code></pre>
<p>Deployment - <code>app-2.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-2
spec:
  template:
    spec:
      containers:
      - name: example-2
        image: example:2.0
        env:
        - name: "X1"
          value: "Y1"
        - name: "P1"
          value: "Q1"
</code></pre>
| <p>You can use variables in the env value fields as below.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-1
spec:
  template:
    spec:
      containers:
      - name: example-1
        image: example:1.0
        env:
        - name: "A1"
          value: ${A1_VALUE}
        - name: "D1"
          value: ${D1_VALUE}
</code></pre>
<p>Then, on <code>dev</code> env you can do the following.</p>
<pre><code>export A1_VALUE=<your dev env value for A1>
export D1_VALUE=<your dev env value for D1>
envsubst < example-1.yaml | kubectl apply -f -
</code></pre>
<p>Then, on <code>qa</code> env you can do the following.</p>
<pre><code>export A1_VALUE=<your qa env value for A1>
export D1_VALUE=<your qa env value for D1>
envsubst < example-1.yaml | kubectl apply -f -
</code></pre>
<p>You can also put those env variables in a file. For example, you can have the following two env files.</p>
<p><code>dev.env</code> file</p>
<pre><code>A1_VALUE=a1_dev
D1_VALUE=b1_dev
</code></pre>
<p><code>qa.env</code> file</p>
<pre><code>A1_VALUE=a1_qa
D1_VALUE=b1_qa
</code></pre>
<p>Then, on <code>dev</code> environment, just run:</p>
<pre><code>$ source dev.env
$ envsubst < example-1.yaml | kubectl apply -f -
</code></pre>
<p>On <code>qa</code> environment, just run:</p>
<pre><code>$ source qa.env
$ envsubst < example-1.yaml | kubectl apply -f -
</code></pre>
<blockquote>
<p>Note that you have to install <code>envsubst</code> in your machine.</p>
</blockquote>
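<p>On Debian/Ubuntu, <code>envsubst</code> ships with the <code>gettext-base</code> package; on macOS it comes with Homebrew's <code>gettext</code>:</p>
<pre><code>apt-get install gettext-base    # Debian/Ubuntu
brew install gettext            # macOS
</code></pre>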
|
<p>Given below is my command to install bitnami keycloak on my kubernetes cluster</p>
<pre><code>helm install kc --set auth.adminPassword=admin,auth.adminUser=admin,service.httpPort=8180 bitnami/keycloak -n my-namespace
</code></pre>
<p>I want to import realms (containing users, groups, clients and roles) into my Keycloak, but before I do that I need to enable the upload-scripts flag. As most of you might already know, on a standalone Keycloak installation we can do that using standalone.sh as given below:</p>
<pre><code>bin/standalone.bat -Djboss.socket.binding.port-offset=10 -Dkeycloak.profile.featur
e.upload_scripts=enabled
</code></pre>
<p>Can someone help me do this with the <code>helm install</code> command by passing flags, just as I am doing for <code>auth.adminPassword=admin,auth.adminUser=admin,service.httpPort=8180</code>?</p>
<p>thanks in advance</p>
| <p>In your Keycloak yaml file you need to add the field <code>extraEnvVars</code> and set the <code>KEYCLOAK_EXTRA_ARGS</code> environment variable as shown in the example below:</p>
<pre><code>keycloak:
enabled: true
auth:
adminUser: admin
adminPassword: secret
extraEnvVars:
- name: KEYCLOAK_EXTRA_ARGS
value: -Dkeycloak.profile.feature.upload_scripts=enabled
extraVolumeMounts:
...
</code></pre>
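<p>If you prefer passing this on the command line like your other values, helm's <code>--set</code> syntax supports list indices; something like the following should be equivalent to the values file above (an untested sketch):</p>
<pre><code>helm install kc bitnami/keycloak -n my-namespace \
  --set auth.adminUser=admin \
  --set auth.adminPassword=admin \
  --set "extraEnvVars[0].name=KEYCLOAK_EXTRA_ARGS" \
  --set "extraEnvVars[0].value=-Dkeycloak.profile.feature.upload_scripts=enabled"
</code></pre>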
<p>Bear in mind, however, that the <code>upload_scripts</code> feature will be removed from Keycloak in the future.</p>
<p>From <a href="https://www.keycloak.org/docs/latest/server_development/#using-keycloak-administration-console-to-upload-scripts" rel="nofollow noreferrer">Keycloak Documentation</a>:</p>
<blockquote>
<p>Ability to upload scripts through the admin console is deprecated and
will be removed in a future version of Keycloak</p>
</blockquote>
|
<p>I am trying to implement blue/green deployment for my application. I am using istio <code>VirtuaService</code> for navigating to blue environment or green environment based on clientId in request header. Backends are working fine.</p>
<p>My concern is the frontend. How can I implement blue/green for the Angular UI at the frontend? Since it's a single-page application, the entire UI loads up during the initial load.</p>
<p>What should be the strategy for angular blue/green deployment?</p>
| <p>It's hard to get an unambiguous answer to the question</p>
<blockquote>
<p>What should be the strategy for angular blue/green deployment?</p>
</blockquote>
<p>It may all depend on how you set up the cluster, what your application configuration looks like, what your network settings are, and much more. However, there are many guides on how to correctly create a blue/green deployment:</p>
<ul>
<li><a href="https://medium.com/geekculture/simple-yet-scalable-blue-green-deployments-in-aws-eks-87815aa37c03" rel="nofollow noreferrer">simple blue/green deployment in AWS EKS</a></li>
<li><a href="https://semaphoreci.com/blog/continuous-blue-green-deployments-with-kubernetes" rel="nofollow noreferrer">continuous blue/green deployments</a></li>
<li><a href="http://blog.itaysk.com/2017/11/20/deployment-strategies-defined" rel="nofollow noreferrer">blue/green deployment strategies defined</a></li>
<li><a href="https://codefresh.io/blue-green-deployments-kubernetes/" rel="nofollow noreferrer">how work blue/green deployment</a></li>
</ul>
<p>One more point to consider. You will need two separate complete environments to be able to create a blue / green update. Look at the <a href="https://stackoverflow.com/questions/42358118/blue-green-deployments-vs-rolling-deployments">differences</a> between blue/green deployments and <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">rolling update</a>:</p>
<blockquote>
<p>In <strong>Blue Green Deployment</strong>, you have <strong>TWO</strong> complete environments.
One is Blue environment which is running and the Green environment to which you want to upgrade. Once you swap the environment from blue to green, the traffic is directed to your new green environment. You can delete or save your old blue environment for backup until the green environment is stable.
In <strong>Rolling Deployment</strong>, you have only <strong>ONE</strong> complete environment.
Once you start upgrading your environment. The code is deployed in the subset of instances of the same environment and moves to another subset after completion</p>
</blockquote>
<p>So if you decide on a blue/green update, you need to create two separate, equivalent environments, then modify the environment with the new Angular UI, and then switch over.</p>
|
<p>I'm getting an error when using Terraform to provision a node group on AWS EKS.
Error: error waiting for EKS Node Group (xxx) creation: <code>NodeCreationFailure: Unhealthy nodes in the kubernetes cluster.</code></p>
<p>I went to the console and inspected the node. There is a message: <code>"runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker network plugin is not ready: cni config uninitialized"</code>.</p>
<p>I have 5 private subnets and connect to Internet via NAT.</p>
<p>Is someone able to give me some hint on how to debug this?</p>
<p>Here are some details on my env.</p>
<pre><code>Kubernetes version: 1.18
Platform version: eks.3
AMI type: AL2_x86_64
AMI release version: 1.18.9-20201211
Instance types: m5.xlarge
</code></pre>
<p>There are three workloads set up in the cluster.</p>
<pre><code>coredns, STATUS (2 Desired, 0 Available, 0 Ready)
aws-node STATUS (5 Desired, 5 Scheduled, 0 Available, 0 Ready)
kube-proxy STATUS (5 Desired, 5 Scheduled, 5 Available, 5 Ready)
</code></pre>
<p>Going inside <code>coredns</code>, both pods are in a pending state, and the conditions show <code>"Available=False, Deployment does not have minimum availability"</code> and <code>"Progress=False, ReplicaSet xxx has timed out progressing"</code>.
Going inside one of the pods in <code>aws-node</code>, the status shows <code>"Waiting - CrashLoopBackOff"</code>.</p>
| <p>The <code>cni config uninitialized</code> message means no pod network add-on is initialized on the nodes, which is why the pods never become ready. Add a pod network add-on, for example Flannel:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
</code></pre>
|
<p>I am using a baremetal cluster of 1 master and 2 nodes on premise in my home lab with istio, metallb and calico.</p>
<p>I want to create a DNS server in kubernetes that translates IPs for the hosts on the LAN.</p>
<p>Is it possible to use the coreDNS already installed in k8s?</p>
| <p>Yes, it's possible but there are some points to consider when doing that. Most of them are described in the Stackoverflow answer below:</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/55834721/how-to-expose-kubernetes-dns-externally">Stackoverflow.com: Questions: How to expose Kubernetes DNS externally</a></em></li>
</ul>
<p>For example: The DNS server would be resolving the queries that are internal to the Kubernetes cluster (like <code>nslookup kubernetes.default.svc.cluster.local</code>).</p>
<hr />
<p>I've included the example on how you can expose your <code>CoreDNS</code> to external sources and add a <code>Service</code> that would be pointing to some IP address</p>
<p>Steps:</p>
<ul>
<li>Modify the <code>CoreDNS</code> <code>Service</code> to be available outside.</li>
<li>Modify the <code>configMap</code> of your <code>CoreDNS</code> accordingly to:
<ul>
<li><em><a href="https://coredns.io/plugins/k8s_external/" rel="nofollow noreferrer">CoreDNS.io: Plugins: K8s_external</a></em></li>
</ul>
</li>
<li>Create a <code>Service</code> that is pointing to external device.</li>
<li>Test</li>
</ul>
<hr />
<h3>Modify the <code>CoreDNS</code> <code>Service</code> to be available outside.</h3>
<p>You are probably aware of how <code>Services</code> work and which types can be made available outside. You will need to change your <code>CoreDNS</code> <code>Service</code> from <code>ClusterIP</code> to either <code>NodePort</code> or <code>LoadBalancer</code> (I'd reckon <code>LoadBalancer</code> would be a better idea considering <code>metallb</code> is used and you will access the <code>DNS</code> server on port <code>53</code>)</p>
<ul>
<li><code>$ kubectl edit --namespace=kube-system service/coredns</code> (or <code>kube-dns</code>)</li>
</ul>
<blockquote>
<p>A side note!</p>
<p><code>CoreDNS</code> is using <code>TCP</code> and <code>UDP</code> simultaneously, it could be an issue when creating a <code>LoadBalancer</code>. Here you can find more information on it:</p>
<ul>
<li><a href="https://metallb.universe.tf/usage/" rel="nofollow noreferrer">Metallb.universe.tf: Usage</a> (at the bottom)</li>
</ul>
</blockquote>
<hr />
<h3>Modify the <code>configMap</code> of your <code>CoreDNS</code></h3>
<p>If you would like to resolve domain like for example: <code>example.org</code> you will need to edit the <code>configMap</code> of <code>CoreDNS</code> in a following way:</p>
<ul>
<li><code>$ kubectl edit configmap --namespace=kube-system coredns</code></li>
</ul>
<p>Add the line to the <code>Corefile</code>:</p>
<pre><code> k8s_external example.org
</code></pre>
<blockquote>
<p>This plugin allows an additional zone to resolve the external IP address(es) of a Kubernetes service. This plugin is only useful if the kubernetes plugin is also loaded.</p>
<p>The plugin uses an external zone to resolve in-cluster IP addresses. It only handles queries for A, AAAA and SRV records; all others result in NODATA responses. To make it a proper DNS zone, it handles SOA and NS queries for the apex of the zone.</p>
<p>-- <em><a href="https://coredns.io/plugins/k8s_external/" rel="nofollow noreferrer">CoreDNS.io: Plugins: K8s_external</a></em></p>
</blockquote>
<hr />
<h3>Create a <code>Service</code> that is pointing to external device.</h3>
<p>Following on the link that I've included, you can now create a <code>Service</code> that will point to an IP address:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: test
namespace: default
spec:
clusterIP: None
externalIPs:
- 192.168.200.123
type: ClusterIP
</code></pre>
<hr />
<h3>Test</h3>
<p>I've used <code>minikube</code> with <code>--driver=docker</code> (with <code>NodePort</code>) but I'd reckon your can use the <code>ExternalIP</code> of your <code>LoadBalancer</code> to check it:</p>
<ul>
<li><code>dig @192.168.49.2 test.default.example.org -p 32261 +short</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>192.168.200.123
</code></pre>
<p>where:</p>
<ul>
<li><code>@192.168.49.2</code> - IP address of <code>minikube</code></li>
<li><code>test.default.example.org</code> - service-name.namespace.k8s_external_domain</li>
<li><code>-p 32261</code> - <code>NodePort</code> port</li>
<li><code>+short</code> - to limit the output</li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://linux.die.net/man/1/dig" rel="nofollow noreferrer">Linux.die.net: Man: Dig</a></em></li>
</ul>
|
<p>I am trying to list the CPU and memory usage of all the nodes in Kubernetes and echo "load exceeded" if the memory or CPU usage exceeds some threshold. I am listing the CPU and memory with this command, but how do I apply the logic to echo <code>load exceeded</code>?</p>
<p><code>kubectl describe nodes | grep -A 3 "Resource .*Requests .*Limits"</code></p>
<p>Output:</p>
<pre><code>Resource Requests Limits
-------- -------- ------
cpu 360m (18%) 13 (673%)
memory 2800Mi (84%) 9Gi (276%)
--
Resource Requests Limits
-------- -------- ------
cpu 1430m (74%) 22300m (1155%)
memory 2037758592 (58%) 15426805504 (441%)
--
Resource Requests Limits
-------- -------- ------
cpu 240m (12%) 5 (259%)
memory 692Mi (20%) 3Gi (92%)
--
Resource Requests Limits
-------- -------- ------
cpu 930m (48%) 3100m (160%)
memory 1971Mi (59%) 3412Mi (102%)
--
Resource Requests Limits
-------- -------- ------
cpu 270m (13%) 7 (362%)
memory 922Mi (27%) 4Gi (122%)
--
Resource Requests Limits
-------- -------- ------
cpu 530m (27%) 5 (259%)
memory 1360Mi (40%) 3Gi (92%)
--
Resource Requests Limits
-------- -------- ------
cpu 440m (22%) 5250m (272%)
memory 1020Mi (30%) 3884Mi (116%)
</code></pre>
| <p>Try this to extract the attributes you want by matching a regex pattern</p>
<pre class="lang-sh prettyprint-override"><code>kubectl describe nodes | grep -E -A 3 "Resource|Requests|Limits"
</code></pre>
<p>You can extend it like this to extract <code>CPU</code> or <code>MEMORY</code> values</p>
<pre><code>grep -E -A 3 "Resource|Requests|Limits" | awk '/cpu/{print $2}'
</code></pre>
<p><strong>EDIT</strong>
To print if the limit is exceeded or not (example for cpu exceeding 1),</p>
<pre><code>grep -E -A 3 "Limits" | awk '/cpu/{if($2 > 1) print "Limit Exceeded"; else print "Within Limits";}'
</code></pre>
<p>You will have to do</p>
<pre><code>| awk '/memory/{print $2}' | awk -vFS="" '{print $1}'
</code></pre>
<p>to extract the number from the memory value before applying the condition, since the value comes with a unit suffix (e.g. <code>Gi</code>) attached to the number.</p>
<p><strong>EDIT 2</strong></p>
<p>This can give you the ratio based on the provided output of your grep command.</p>
<p>CPU</p>
<pre><code>| awk '/cpu/{print $1,$2,$4}' | awk '{if($3 ~ /[0-9]$/) {print $1,$2/($3*1000)} else {print $1,$2/$3}}'
</code></pre>
<p>Output</p>
<pre><code>cpu 0.0276923
cpu 0.0641256
cpu 0.048
cpu 0.3
cpu 0.0385714
cpu 0.106
cpu 0.0838095
</code></pre>
<p>MEMORY</p>
<pre><code>| awk '/memory/{print $1,$2,$4}' | awk '{if($3 ~ /Gi$/) {print $1,$2/($3*1024)} else {print $1,$2/$3}}'
</code></pre>
<p>Output</p>
<pre><code>memory 0.303819
memory 0.132092
memory 0.22526
memory 0.577667
memory 0.225098
memory 0.442708
memory 0.262616
</code></pre>
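<p>Putting the pieces together: the check can be wrapped in one small filter that prints <code>load exceeded</code> when the CPU request/limit ratio passes a threshold (the <code>check_load</code> name and the 0.1 threshold are just illustrative):</p>

```shell
# Reads lines like "cpu 360m (18%) 13 (673%)" on stdin and compares the
# request/limit ratio against the threshold passed as $1 (a ratio, 0-1).
check_load() {
  awk -v t="$1" '/cpu/ {
    req = $2; lim = $4               # awk coerces "360m" -> 360, "13" -> 13
    if (lim ~ /[0-9]$/) lim *= 1000  # bare number = whole cores -> millicores
    if (req / lim > t) print "load exceeded"
    else               print "within limits"
  }'
}

# Two sample rows from the question's output:
printf 'cpu 360m (18%%) 13 (673%%)\n' | check_load 0.1    # prints: within limits
printf 'cpu 930m (48%%) 3100m (160%%)\n' | check_load 0.1 # prints: load exceeded
```

<p>Against a live cluster, replace the <code>printf</code> with <code>kubectl describe nodes | grep -A 3 "Resource .*Requests .*Limits"</code>; memory rows need their unit suffixes (<code>Mi</code>/<code>Gi</code>) normalized the same way before dividing.</p>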
|
<p>I have Airflow deployed in Kubernetes, using the persistent volume method for DAG deployment. I am trying to write a script (using GitHub Actions for CI/CD) for the deployment of my Airflow DAGs, which is somewhat like -</p>
<blockquote>
<pre><code>DAGS=(./*.py)
for dag in ${DAGS[@]};
do
kubectl cp "${dag}" --namespace=${NAMESPACE} ${WEB_SERVER_POD_NAME}:/path/to/dags/folder
done
</code></pre>
</blockquote>
<p>I can successfully deploy new dags and even update them.</p>
<p>But the problem is, I am unable to remove old DAGs (which I used for testing purposes) that are still present in the dags folder of Airflow.</p>
<p>Is there a way I can do it?</p>
<p><em>P.S.</em> I cannot use the below command as it would delete any running dags -</p>
<pre><code>kubectl exec --namespace=${NAMESPACE} ${WEB_SERVER_POD_NAME} -- bash -c "rm -rf /path/to/dags/folder/*"
</code></pre>
| <p>I don't think this was an option when you originally posted, but for others:</p>
<p>Github Actions lets you create workflows that are manually triggered, and accept input values. <a href="https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/" rel="nofollow noreferrer">https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/</a></p>
<p>The command would be something like:</p>
<pre><code>kubectl exec --namespace=${NAMESPACE} ${WEB_SERVER_POD_NAME} -- bash -c "rm /path/to/dags/folder/${USER_INPUT_DAG_FILE_NAME}"
</code></pre>
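<p>Another option, without manual input: compute which deployed DAG files are stale (present in the pod but gone from the repo) and delete only those. The helper below is a sketch with made-up names; <code>kubectl exec</code> is then only the transport:</p>

```shell
# Prints files that appear in the remote list but not in the local list.
stale_files() {  # usage: stale_files "<local files>" "<remote files>"
  for f in $2; do
    case " $1 " in
      *" $f "*) ;;        # still in the repo -> keep
      *) echo "$f" ;;     # gone from the repo -> stale
    esac
  done
}

stale_files "a.py b.py" "a.py b.py old_test.py"   # prints: old_test.py
```

<p>In the CI script, the local list would come from <code>ls ./*.py</code>, the remote list from <code>kubectl exec --namespace=${NAMESPACE} ${WEB_SERVER_POD_NAME} -- ls /path/to/dags/folder</code>, and each stale file would then be removed with a single-file <code>rm</code> instead of <code>rm -rf .../*</code>.</p>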
|
<p>My aim is to gracefully terminate my REST application pods. I have 2 pods running, and when 1 pod is deleted, requests should be gracefully migrated to the other pod. I am using Minikube (v1.20.0) for testing, with a JMeter script running 100 API calls. I have also added a delay of 3 seconds inside my API.<br />
I tried setting the value of <code>terminationGracePeriodSeconds</code> to 60 but learnt that the pod was getting deleted within 30 seconds, and there were some failures in my JMeter test requests.
I tried reducing the value of <code>terminationGracePeriodSeconds</code> to 10 seconds and saw that the pod gets deleted within 10 seconds.</p>
<p>Next I read about the preStop hook and added one with a sleep time of 15 seconds. With this change, my requests started gracefully migrating to other pods even if I deleted the pod which was serving requests.</p>
<p>I would like to know why the pod is terminated after 30 seconds when <code>terminationGracePeriodSeconds</code> is set to 60 seconds. Are there any other settings I need to change?</p>
| <p>This is working as expected! When you kill/delete a pod, a SIGTERM signal is sent to it, and Kubernetes waits up to <code>terminationGracePeriodSeconds</code> for the pod to shut down normally after receiving the SIGTERM.</p>
<p>Only if the grace period passes and the pod has <em>not</em> shut itself down does Kubernetes send a SIGKILL signal, killing the pod without waiting any longer for a graceful shutdown.</p>
<p>So if your pod is terminated after 30 seconds while you have 60 seconds configured for <code>terminationGracePeriodSeconds</code>, that just means the pod shut down its own processes after 30 seconds.</p>
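<p>You can see the SIGTERM side of this without a cluster. A well-behaved entrypoint traps SIGTERM and exits cleanly within the grace period; the sketch below simulates that (if the process ignored SIGTERM instead, it would run until the SIGKILL arrives):</p>

```shell
# Minimal entrypoint sketch: trap SIGTERM, clean up, exit 0.
run_server() {
  trap 'echo "SIGTERM received, draining"; exit 0' TERM
  while :; do sleep 1; done
}

run_server &
pid=$!
sleep 1              # let the "server" install its trap
kill -TERM "$pid"    # what the kubelet sends on pod deletion
wait "$pid"          # returns 0 because the trap handled the signal
```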
|
<p>I created a simple Kubernetes cluster with a demo app.
When creating the cluster I installed prometheus-stack and nginx-ingress-controller with Helm (default values files).</p>
<p>After the cluster is set up I create Ingress object to expose prometheus, grafana and alertmanager with:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: monitoring-ingress
namespace: monitoring
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- http:
paths:
- path: /prometheus(/|$)(.*)
pathType: Prefix
backend:
service:
name: prometheus-operated
port:
number: 9090
- path: /alertmanager(/|$)(.*)
pathType: Prefix
backend:
service:
name: prometheus-stack-kube-prom-alertmanager
port:
number: 9093
- path: /grafana(/|$)(.*)
pathType: Prefix
backend:
service:
name: prometheus-stack-grafana
port:
number: 80
</code></pre>
<p>When I try to access prometheus via <code><ingress-controller's external IP>/prometheus</code> it resolves to <code><ingress-controller's external IP>/graph</code> and displays 404 - page not found.</p>
<p>If I use for example <code>kubectl port-forward svc/prometheus-operated 9090:9090 -n monitoring</code> I can reach prometheus without problem.</p>
<p>I can reach alertmanager through <code><ingress-controller's external IP>/alertmanager</code>. The path is resolved to <code><ingress-controller's external IP>/alertmanager/#/alerts</code></p>
<p>I suspect there is something wrong with the path rewriting but can't figure out what.</p>
<p>Please help...</p>
| <p>In the end I was able to find 2 solutions to this problem.</p>
<p><strong>Option 1</strong></p>
<p>I had DNS zone in azure (where my cluster lives as well) and there I added subdomains for grafana, prometheus and alertmanager pointing to the ingress-controller external IP.</p>
<p>When deploying kube-prometheus-stack with the Helm chart, I provided the default root path for Prometheus in the values.yaml file with the following configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>prometheus:
prometheusSpec:
externalUrl: http://prometheus.mydomainname.something
</code></pre>
<p>The Ingress manifest (for example, for Prometheus) then needs to include the host address and contain only the root path "/".</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: prometheus-new
namespace: monitoring
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: prometheus.mydomainname.something
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: prometheus-stack-kube-prom-prometheus
port:
number: 9090
</code></pre>
<p><strong>Option 2</strong></p>
<p>You can also provide default paths for each of the applications from kube-prometheus-stack (prometheus, grafana, alertmanager) as:</p>
<pre class="lang-yaml prettyprint-override"><code>prometheus:
prometheusSpec:
externalUrl: http://mydomainname.something/prometheus
</code></pre>
<p>and make Ingress manifest to redirect based on the path with:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: prometheus-new
namespace: monitoring
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /prometheus
pathType: Prefix
backend:
service:
name: prometheus-stack-kube-prom-prometheus
port:
number: 9090
</code></pre>
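<p>As a side note on why the original setup returned 404: with <code>rewrite-target: /$2</code> the <code>/prometheus</code> prefix is stripped before the request reaches Prometheus, but Prometheus (not knowing its external URL) then redirects the browser to <code>/graph</code>, which no longer matches any Ingress rule. The prefix stripping can be illustrated with <code>sed</code> standing in for the nginx regex:</p>

```shell
# Simulates the rule: path /prometheus(/|$)(.*) with rewrite-target /$2
rewrite() { printf '%s\n' "$1" | sed -E 's#^/prometheus(/(.*))?$#/\2#'; }

rewrite /prometheus/graph          # prints: /graph
rewrite /prometheus/api/v1/query   # prints: /api/v1/query
```

<p>Setting <code>externalUrl</code> (as in Option 2) makes Prometheus generate redirects that include the prefix, which is why it fixes the problem.</p>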
|
<p>I have deployed my application into a Kubernetes pod along with a fluent-bit sidecar container that collects logs from the sample application.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-flb-sidecar
namespace: default
labels:
app.kubernetes.io/name: default
helm.sh/chart: default-0.1.0
app.kubernetes.io/instance: flb-sidecar
app.kubernetes.io/version: "1.0"
app.kubernetes.io/managed-by: Tiller
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: default
app.kubernetes.io/instance: flb-sidecar
template:
metadata:
labels:
app.kubernetes.io/name: default
app.kubernetes.io/instance: flb-sidecar
spec:
containers:
- name: default
image: "nginx:stable"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
volumeMounts:
- name: log-volume
mountPath: var/log/nginx
- name: default-fluentbit
image: "fluent/fluent-bit:1.3-debug"
imagePullPolicy: IfNotPresent
ports:
- name: metrics
containerPort: 2020
protocol: TCP
volumeMounts:
- name: config-volume
mountPath: /fluent-bit/etc/
- name: log-volume
mountPath: var/log/nginx
volumes:
- name: log-volume
emptyDir: {}
- name: config-volume
configMap:
name: nginx-flb-sidecar
</code></pre>
<p>and my fluent-bit is configured to tail logs from <code>/var/log/ngnix/access.log</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-flb-sidecar
namespace: default
labels:
app.kubernetes.io/name: default
helm.sh/chart: default-0.1.0
app.kubernetes.io/instance: flb-sidecar
app.kubernetes.io/version: "1.0"
app.kubernetes.io/managed-by: Tiller
data:
# Configuration files: server, input, filters and output
# ======================================================
fluent-bit.conf: |
[SERVICE]
Flush 5
Log_Level info
Daemon off
Parsers_File parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
[INPUT]
Name tail
Tag nginx.access
Parser nginx
Path /var/log/nginx/access.log
[INPUT]
Name tail
Tag nginx.error
Parser nginx
Path /var/log/nginx/error.log
[OUTPUT]
Name stdout
Match *
[OUTPUT]
Name forward
Match *
Host test-l-LoadB-2zC78B5KYFQJC-13137e1aac9bf29c.elb.us-east-2.amazonaws.com
Port 24224
parsers.conf: |
[PARSER]
Name apache
Format regex
Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name apache2
Format regex
Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name apache_error
Format regex
Regex ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
[PARSER]
Name nginx
Format regex
Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*))" "(?<agent>[^\"]*)"(?: "(?<target>[^\"]*))"$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name json
Format json
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
[PARSER]
Name syslog
Format regex
Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
Time_Key time
Time_Format %b %d %H:%M:%S
</code></pre>
<p>If I do not have the volume mounts the logs from my application are routed to stdout/stderr.</p>
<p>I need to enable fluent-bit to read from stdout/stderr. How can I acheive this?</p>
<p>Thanks</p>
| <p>To be clear, there's no way to get access to stdout/stderr directly in fluentbit running in kubernetes. You'll need your logs to be written to disk somewhere. In fact, even though it seems to be a bit wasteful, I find that writing to both stdout AND a location on disk is actually better because you get tighter control over the log format and don't have to jump through as many hoops in fluentbit to massage log line into something that works for you (this is great for application logs using a logging provider like log4net or Serilog).</p>
<p>In any case, I thought I'd leave this blurb here since it seems like an approachable solution if you can get logs to stdout AND a location on disk.</p>
<p>At the time of this writing, AWS EKS on Fargate was a little bit "bleeding edge" so we decided to go the sidecar approach since it's a little more feature-rich. Specifically, there's no support for multi-line log messages (which is common when logging exceptions) and there's no support for adding K8s info like pod name, etc through the Kubernetes filter.</p>
<p>In any case, here's a simplified example of my deployment.yml (replace anything surrounded by angle brackets with your stuff.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: <appName>
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
app: <appName>
spec:
containers:
- image: <imageName>
imagePullPolicy: IfNotPresent
name: <appName>
volumeMounts:
- name: logs
mountPath: /logs
- image: public.ecr.aws/aws-observability/aws-for-fluent-bit:2.12.0
name: fluentbit
imagePullPolicy: IfNotPresent
env:
- name: APP_NAME
valueFrom:
fieldRef:
fieldPath: metadata.labels['app']
volumeMounts:
- name: fluent-bit-config
mountPath: /fluent-bit/etc/
- name: logs
mountPath: /logs
volumes:
- name: fluent-bit-config
configMap:
name: fluent-bit-config
- name: logs
emptyDir: {}
</code></pre>
<p>And a simplified version of my configmap.yml (you can create this by writing the <code>fluent-bit.conf</code> and <code>parsers.conf</code> files and running <code>kubectl create configmap fluent-bit-config --from-file=fluent-bit.conf --from-file=parsers.conf --dry-run=client -o yaml > configmap.yml</code>). These files end up getting mounted as files under <code>/fluent-bit/etc/</code> on the running container (which is why I'm referencing parsers.conf in <code>/fluent-bit/etc</code>).</p>
<pre><code>apiVersion: v1
data:
fluent-bit.conf: |-
[SERVICE]
Parsers_File /fluent-bit/etc/parsers.conf
[INPUT]
Name tail
Tag logs.*
Path /logs/*.log
DB /logs/flb_kube.db
Parser read_firstline
Mem_Buf_Limit 100MB
Skip_Long_Lines On
Refresh_Interval 5
[FILTER]
Name modify
Match logs.*
RENAME log event
SET source ${HOSTNAME}
SET sourcetype ${APP_NAME}
SET host ${KUBERNETES_SERVICE_HOST}
[OUTPUT]
Name stdout
parsers.conf: |-
[PARSER]
Name apache
Format regex
Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name apache2
Format regex
Regex ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name apache_error
Format regex
Regex ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
[PARSER]
Name nginx
Format regex
Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name json
Format json
Time_Key time
Time_Format %d/%b/%Y:%H:%M:%S %z
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
[PARSER]
# http://rubular.com/r/tjUt3Awgg4
Name cri
Format regex
Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
[PARSER]
Name syslog
Format regex
Regex ^\<(?<pri>[0-9]+)\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
Time_Key time
Time_Format %b %d %H:%M:%S
kind: ConfigMap
metadata:
creationTimestamp: null
name: fluent-bit-config
</code></pre>
<p>Note, one of the clunky parts of this is that any changes to fluentbit configuration require you to force a deployment of your applications because you need the fluentbit sidecar to pick up the new config (you can do so using an annotation with a DateTime or a commit hash, or you could probably even get clever with a readiness probe).</p>
<p>Also note the <code>[FILTER]</code> section. This is where the magic happens with respect to getting kubernetes-contextual-info from the runtime environment (HOSTNAME and KUBERNETES_SERVICE_HOST are provided from K8s and you're injecting the label in your metadata section as APP_NAME). Injecting labels was only added to K8s DownwardAPI in 1.19, so you'll need to be on a newish version.</p>
|
<p>I want to run a pod that listens for updates to endpoint lists (I'm not yet ready to adopt the alpha-level feature of endpoint sets, but I'll expand to that eventually.)</p>
<p>I have this code:</p>
<pre><code>package main
import (
"fmt"
"os"
"os/signal"
"sync"
"syscall"
"k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
)
func ReadKubeConfig() (*rest.Config, *kubernetes.Clientset, error) {
config, err := rest.InClusterConfig()
if err != nil {
return nil, nil, err
}
clients, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, nil, err
}
return config, clients, nil
}
func main() {
_, cs, err := ReadKubeConfig()
if err != nil {
fmt.Printf("could not create Clientset: %s\n", err)
os.Exit(1)
}
factory := informers.NewSharedInformerFactory(cs, 0)
ifmr := factory.Core().V1().Endpoints().Informer()
stop := make(chan struct{})
ifmr.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(next interface{}) {
fmt.Printf("AddFunc(%v)\n", next)
},
UpdateFunc: func(prev, next interface{}) {
fmt.Printf("UpdateFunc(%v, %v)\n", prev, next)
},
DeleteFunc: func(prev interface{}) {
fmt.Printf("DeleteFunc(%v)\n", prev)
},
})
wg := &sync.WaitGroup{}
wg.Add(1)
go func() {
defer runtime.HandleCrash()
ifmr.Run(stop)
wg.Done()
}()
ch := make(chan os.Signal, 1)
signal.Notify(ch, os.Interrupt)
signal.Notify(ch, os.Signal(syscall.SIGTERM))
signal.Notify(ch, os.Signal(syscall.SIGHUP))
sig := <-ch
fmt.Printf("Received signal %s\n", sig)
close(stop)
wg.Wait()
}
</code></pre>
<p>I get this error when deploying and running:</p>
<pre><code>kubeendpointwatcher.go:55: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:eng:default" cannot list resource "endpoints" in API group "" at the cluster scope
</code></pre>
<p>I have the following role and role binding defined and deployed to the "eng" namespace:</p>
<pre><code>watch_endpoints$ kubectl -n eng get role mesh-endpoint-read -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: "2021-07-08T19:59:20Z"
name: mesh-endpoint-read
namespace: eng
resourceVersion: "182975428"
selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/eng/roles/mesh-endpoint-read
uid: fcadcc2a-19d0-4d6e-bee1-78413f51b91b
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- list
- watch
</code></pre>
<p>I have the following rolebinding:</p>
<pre><code>watch_endpoints$ kubectl -n eng get rolebinding mesh-endpoint-read -o yaml | sed -e 's/^/ /g'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: "2021-07-08T19:59:20Z"
name: mesh-endpoint-read
namespace: eng
resourceVersion: "182977845"
selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/eng/rolebindings/mesh-endpoint-read
uid: 705a3e50-2a73-47ed-aa62-0ea48f3493ee
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: mesh-endpoint-read
subjects:
- kind: ServiceAccount
name: default
namespace: default
</code></pre>
<p>You will note that I apply it to both the <code>default</code> namespace and the <code>eng</code> namespace serviceaccount named <code>default</code> although the error message seems to indicate that it is indeed running in the <code>default</code> serviceaccount in the <code>eng</code> namespace.</p>
<p>I have previously used Role and RoleBinding and ServiceAccount objects that work as expected, so I don't understand why this doesn't work. What am I missing?</p>
<p>For testing/reproduction purposes, I run this program by doing <code>kubectl cp</code> of a built binary (cgo off) into a container created with <code>kubectl -n eng create deplpoy</code> with a vanilla <code>ubuntu</code> image running <code>/bin/sh -c sleep 999999999</code>, and then executing a /bin/bash shell in that pod-container.</p>
| <p>You have created a <code>role</code> and <code>rolebinding</code> for the <code>eng</code> namespace. However, as per the error message:</p>
<pre><code>kubeendpointwatcher.go:55: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:eng:default" cannot list resource "endpoints" in API group "" at the cluster scope
</code></pre>
<p>your query for endpoints is at the "<code>cluster</code>" scope. Either limit your query to the <code>eng</code> namespace or use a <code>ClusterRole</code>/<code>ClusterRoleBinding</code>.</p>
<p>The error message gives a hint (<code>system:serviceaccount:eng:default</code>) that the <code>serviceaccount</code> named <code>default</code>, running in the <code>eng</code> namespace, does not have permission to query endpoints at cluster scope.</p>
<p><strong>To validate this</strong>, <code>exec</code> into a pod using the same <code>sa</code> and make two <code>curl</code> calls: first list endpoints in the <code>eng</code> namespace (allowed by the role), then list them at cluster scope (forbidden):</p>
<pre><code>curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc/api/v1/namespaces/eng/endpoints
curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc/api/v1/endpoints
</code></pre>
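<p>If the program really does need to watch endpoints across all namespaces, a minimal sketch of the cluster-scoped equivalent would be (names are illustrative, reusing the ones above):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mesh-endpoint-read
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mesh-endpoint-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mesh-endpoint-read
subjects:
- kind: ServiceAccount
  name: default
  namespace: eng
</code></pre>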
|
<p>What I am trying to do, is to deploy an API on Kubernetes and, using Google-managed SSL certificates, redirect it to point on my domain on HTTPS protocol.</p>
<p>I have already spent some time on it and done a lot of debugging, but there is one thing that I can't succeed to fix.</p>
<p>What is already done and works:</p>
<ul>
<li>Static IP is reserved</li>
<li>Google-managed SSL certificate is Active and verified</li>
<li>Both Ingress and Service NodePort are deployed using <strong>443 HTTPS protocol</strong>.</li>
<li>I managed to put the Health Checks on HTTPS as well.</li>
</ul>
<p>Problem:</p>
<ul>
<li>I cannot change the default configuration for loadbalancer backend service. It is always on HTTP.</li>
</ul>
<p><a href="https://i.stack.imgur.com/vFFXy.png" rel="nofollow noreferrer">Problematic place</a></p>
<p><strong>BUT</strong> if I change it manually to HTTPS, <strong>API works as expected on my domain</strong> <em>api.mydomain.com</em>. The problem is that in 5 minutes, the default configurations are sync with the current configuration in K8s, and the protocol changes to HTTP automatically.</p>
<p>My question: how can I set the backend service's default configuration to HTTPS so that it is not overwritten afterward?</p>
<p>Here is the guide that I partially followed:</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#console" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs#console</a></p>
<p>And my configurations for Ingress, Service and Health Check</p>
<p><strong>healthcheck.yaml</strong></p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: api-default-config
spec:
healthCheck:
checkIntervalSec: 60
timeoutSec: 60
healthyThreshold: 1
unhealthyThreshold: 10
type: HTTPS
requestPath: /
port: 31303
</code></pre>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: management-api-test-service
annotations:
cloud.google.com/backend-config: '{"default": "api-default-config"}'
spec:
type: NodePort
selector:
app: management-api-test
environment: test
ports:
- protocol: TCP
port: 443
targetPort: 5000
nodePort: 31303
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: api-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: api-test2
networking.gke.io/managed-certificates: test-cert
kubernetes.io/ingress.class: "gce"
kubernetes.io/ingress.allow-http: "false"
spec:
defaultBackend:
service:
name: management-api-test-service
port:
number: 443
rules:
- host: api.mydomain.com
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: management-api-test-service
port:
number: 443
</code></pre>
<p><strong>kubectl describe svc management-api-test-service -n web-application</strong></p>
<pre><code>Name: management-api-test-service
Namespace: web-application
Labels: <none>
Annotations: cloud.google.com/backend-config: {"default": "api-default-config"}
Selector: app=management-api-test,environment=test
Type: NodePort
IP Families: <none>
IP: **.***.**.130
IPs: <none>
Port: <unset> 443/TCP
TargetPort: 5000/TCP
NodePort: <unset> 31303/TCP
Endpoints: **.***.*.13:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p><strong>kubectl describe ingress api-ingress -n web-application</strong></p>
<pre><code>Name: api-ingress
Namespace: web-application
Address: **.***.***.196
Default backend: management-api-test-service:443 (***.**.**.13:5000)
Rules:
Host Path Backends
---- ---- --------
api.mydomain.com
/* management-api-test-service:443 (***.**.**.13:5000)
Annotations: ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-blablabla
ingress.kubernetes.io/backends: {"k8s-be-31303--efb221b572e568cb":"HEALTHY"}
ingress.kubernetes.io/https-forwarding-rule: k8s2-fs-1xka7p8q-web-application-api-ingress-5jc6y1ty
ingress.kubernetes.io/https-target-proxy: k8s2-ts-1xka7p8q-web-application-api-ingress-5jc6y1ty
ingress.kubernetes.io/ssl-cert: mcrt-blablabla
ingress.kubernetes.io/url-map: k8s2-um-1xka7p8q-web-application-api-ingress-5jc6y1ty
kubernetes.io/ingress.allow-http: false
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.global-static-ip-name: api-test2
networking.gke.io/managed-certificates: test-cert
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 5m2s (x22 over 155m) loadbalancer-controller Scheduled for sync
</code></pre>
<p><strong>I TRIED</strong>:</p>
<ul>
<li><code>kubernetes.io/ingress.allow-http: false</code> changes nothing</li>
<li>some nginx configuration where http was set to "false", but I cannot find it anymore and it did not work.</li>
</ul>
<p><strong>Thanks in advance!</strong></p>
| <p>Found it!!!</p>
<p>In <strong>service.yaml</strong>, I had to add another annotation and give my port a name. Here is the new config:</p>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: management-api-test-service
annotations:
cloud.google.com/backend-config: '{"default": "api-default-config"}'
cloud.google.com/app-protocols: '{"my-https-port":"HTTPS"}' # new line
spec:
type: NodePort
selector:
app: management-api-test
environment: test
ports:
- name: my-https-port # add port name
protocol: TCP
port: 443
targetPort: 5000
nodePort: 31303
</code></pre>
|
<p>I am preparing for the CKAD exam and I am doing practice with the questions provide <a href="https://github.com/dgkanatsios/CKAD-exercises/blob/master/c.pod_design.md" rel="nofollow noreferrer">here</a></p>
<p>I have a doubt over this two different way of executing commands. Here the example provided is with a job but I think the question might be more general and extendable to all containers.</p>
<p>Under the job exercises there are two requests:</p>
<p><a href="https://github.com/dgkanatsios/CKAD-exercises/blob/master/c.pod_design.md#create-a-job-named-pi-with-image-perl-that-runs-the-command-with-arguments-perl--mbignumbpi--wle-print-bpi2000" rel="nofollow noreferrer"><em>Create a job named pi with image perl that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"</em></a></p>
<pre><code>kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
</code></pre>
<p><a href="https://github.com/dgkanatsios/CKAD-exercises/blob/master/c.pod_design.md#create-a-job-with-the-image-busybox-that-executes-the-command-echo-hellosleep-30echo-world" rel="nofollow noreferrer"><em>Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world</em></a></p>
<pre><code>kubectl create job busybox --image=busybox -- /bin/sh -c 'echo hello;sleep 30;echo world'
</code></pre>
<p>Why in the second command I need to provide /bin/sh -c as well?</p>
<p>How could I understand when to use it and when not?</p>
| <p>Because in the first example</p>
<pre><code>kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'
</code></pre>
<p>you invoke the <code>perl</code> interpreter directly: <code>perl -Mbignum=bpi -wle 'print bpi(2000)'</code></p>
<p>and in the second example you run a shell command line, <code>echo hello;sleep 30;echo world</code>, which needs a shell to interpret the <code>;</code> separators; therefore <code>/bin/sh -c</code>:</p>
<pre><code>kubectl create job busybox --image=busybox -- /bin/sh -c 'echo hello;sleep 30;echo world'
</code></pre>
<p>Everything after the <code>--</code> is the command that is executed inside the container.</p>
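<p>A quick way to see the difference locally, no cluster needed:</p>

```shell
# The ';'-separated string is only meaningful to a shell. sh parses the
# separators and runs each command in turn:
/bin/sh -c 'echo hello;echo world'   # prints "hello" then "world"

# Without '/bin/sh -c', Kubernetes would try to exec a single program
# literally named "echo hello;sleep 30;echo world", which does not exist,
# so the container would fail to start.
```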
|
<p>I want to customize the certificate validity duration and renewal throughout the cluster. Iguess doing that with ClusterIssuer is feasible. Is there a way to do so ?</p>
| <p>You can configure this using the following two fields:</p>
<pre><code>duration: 2160h # 90d
renewBefore: 360h # 15d
</code></pre>
<p>Things to take care of:</p>
<p>The <code>renewBefore</code> and <code>duration</code> fields must be specified using Go's duration string format, which does not allow the <strong>d</strong> (<strong>days</strong>) suffix.</p>
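<p>Since the <strong>d</strong> suffix is not accepted, durations given in days have to be converted to hours first:</p>

```shell
# Convert day counts to the hour values used in the Certificate spec:
echo "$((90*24))h"   # duration: 90 days -> 2160h
echo "$((15*24))h"   # renewBefore: 15 days -> 360h
```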
<p>Example certificate</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: example-com
namespace: default
spec:
secretName: example-com-tls
duration: 2160h # 90d
renewBefore: 360h # 15d
commonName: example.com
dnsNames:
- example.com
- www.example.com
uriSANs:
- spiffe://cluster.local/ns/sandbox/sa/example
issuerRef:
name: ca-issuer
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
</code></pre>
|
<p>I am trying to deploy a rest api application in kubernetes with helm. Some of the configuration files have credentials in them and I would like to replace the variables inside the helm templates during the deployment with Kubernetes secrets.</p>
<p>Does anyone have a pointer to a documentation where I can explore this please ?</p>
| <p>If you only need a few environment variables, you can set them directly in the deployment file; however, best practice is to create a <strong>secret</strong> and inject it into the <strong>deployment</strong>.</p>
<p>Here is a direct example of injecting environment variables into the deployment:</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: "{{ .Chart.Name }}-deployment"
labels:
chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
replicas: {{ .Values.replicaCount }}
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
selector:
matchLabels:
app: "{{ .Chart.Name }}-selector"
version: "current"
revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
template:
metadata:
labels:
app: "{{ .Chart.Name }}-selector"
version: "current"
spec:
containers:
- name: "{{ .Chart.Name }}"
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.servicePort}}
resources:
requests:
cpu: "{{ .Values.image.resources.requests.cpu }}"
memory: "{{ .Values.image.resources.requests.memory }}"
env:
- name: PORT
value : "{{ .Values.service.servicePort }}"
{{- if .Values.image.livenessProbe }}
livenessProbe:
{{ toYaml .Values.image.livenessProbe | indent 10 }}
{{- end }}
{{- if .Values.image.readinessProbe }}
readinessProbe:
{{ toYaml .Values.image.readinessProbe | indent 10 }}
{{- end }}
</code></pre>
<p><strong>values.yaml</strong></p>
<pre><code>image:
repository: nodeserver
tag: 1.0.0
pullPolicy: IfNotPresent
resources:
requests:
cpu: 200m
memory: 300Mi
readinessProbe: {}
# Example (replace readinessProbe: {} with the following):
# readinessProbe:
# httpGet:
# path: /ready
# port: 3000
# initialDelaySeconds: 3
# periodSeconds: 5
livenessProbe: {}
# Example (replace livenessProbe: {} with the following)::
# livenessProbe:
# httpGet:
# path: /live
# port: 3000
# initialDelaySeconds: 40
# periodSeconds: 10
service:
name: Node
type: NodePort
servicePort: 3000
</code></pre>
<p>You can see this inside the <code>deployment.yaml</code> code block:</p>
<pre><code>env:
- name: PORT
value : "{{ .Values.service.servicePort }}"
</code></pre>
<p>It fetches the value from the <code>values.yaml</code> file:</p>
<pre><code>service:
name: Node
type: NodePort
servicePort: 3000
</code></pre>
<p>If you don't want to update the <code>values.yaml</code> file, you can also override the value on the command line:</p>
<pre><code>helm install my-release my-chart -n namespace-name --set service.servicePort=5000
</code></pre>
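<p>Since the question is specifically about credentials, here is a minimal sketch of referencing an existing Kubernetes Secret from a Helm template (the secret name <code>my-app-credentials</code>, its key <code>db-password</code>, and the <code>existingSecret</code> value are assumptions for illustration):</p>
<pre><code># values.yaml
existingSecret: my-app-credentials

# deployment.yaml, inside the container spec
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Values.existingSecret }}
        key: db-password
</code></pre>
<p>This keeps the credential itself out of the chart; only the secret's name is templated, and the secret can be created separately with <code>kubectl create secret</code>.</p>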
|
<p>I want to run a kuberenetes cronjob, but my command for the cronjob relies on environment variables to be defined or it will not work. When I set the env variables in the cronjob yaml it mentions that this is invalid YAML, with this message:</p>
<blockquote>
<p>error: error parsing mapping_rake.yaml: error converting YAML to JSON:
yaml: line 77: did not find expected key</p>
</blockquote>
<p>Line 77 is the line defining the command (command: ["rake", "mapping:check"]). I am not sure why inclusion of env variables would invalidate/make impossible the command to be passed to the pod that will be instantiated to execute the cronjob. Here is my yaml body:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: bob-mapping-rake-cron
spec:
schedule: "57 * * * *"
jobTemplate:
spec:
template:
spec:
volumes:
- name: nfs-volume
nfs:
server: vcsnas.pvt # Change this!
path: /ifs/ifs/AssetFlow
containers:
- env:
- name: ORIGIN_PATH
value: /mnt/Content/BobBarkerTesting/est_pricing
- name: FINAL_PATH
value: /mnt/Content/BobBarkerTesting/est_pricing/final
- name: SCANNED_PATH
value: /mnt/Content/BobBarkerTesting/est_pricing/scanned
- name: EST_BUNDLES
value: /mnt/Content/BobBarkerTesting/est_bundles
- name: SCANNED_BUNDLES
value: /mnt/Content/BobBarkerTesting/incoming_est_bundles/scanned
- name: INCOMING_BUNDLES
value: /mnt/Content/BobBarkerTesting/incoming_est_bundles
- name: INCOMING_SWAPS
value: /mnt/Content/BobBarkerTesting/locker_swap/ingest
- name: OUTPUT_FOLDER
value: /mnt/Content/BobBarkerTesting/locker_swap/output
- name: PROCESSED_FOLDER
value: /mnt/Content/BobBarkerTesting/locker_swap/processed
- name: FAILED_FOLDER
value: /mnt/Content/BobBarkerTesting/locker_swap/failed
- name: LDAP_HOST
value: 172.17.157.21
- name: LDAP_PORT
value: '636'
- name: LDAP_ATTRIBUTE
value: uid
- name: LDAP_BASE
value: ou=people,dc=cox,dc=net
- name: LDAP_USER
valueFrom:
secretKeyRef:
name: ldap
key: username
- name: LDAP_PASSWORD
valueFrom:
secretKeyRef:
name: ldap
key: password
- name: LDAP_SSL
value: simple_tls
- name: DB_HOST
value: mysql.bb.svc.cluster.local
- name: DB_USER
valueFrom:
secretKeyRef:
name: db
key: username
- name: DB_PW
valueFrom:
secretKeyRef:
name: db
key: password
- name: DB
value: pesto_prod
volumeMounts:
- name: nfs-volume
mountPath: /mnt
name: bob-barker-mapping-rake-cron
image: repo.corp.cox.com:5005/vodcontent/bob-barker-mapping-rake:v2
command: ["rake", "mapping:check"]
restartPolicy: Never
concurrencyPolicy: Replace
</code></pre>
<p>Are you allowed to define env variables for the containers that will execute a Kubernetes cronjob? Why or why not? Is there an alternative to do this by way of cronjob? If not I can employ a different way, but this is more idiomatic for my use case so I wanted to try it.</p>
| <p>Your CronJob YAML has some indentation issues. Here is the correctly indented YAML:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: bob-mapping-rake-cron
spec:
schedule: "57 * * * *"
concurrencyPolicy: Replace
jobTemplate:
spec:
template:
spec:
volumes:
- name: nfs-volume
nfs:
server: vcsnas.pvt # Change this!
path: /ifs/ifs/AssetFlow
containers:
- name: bob-barker-mapping-rake-cron
image: repo.corp.cox.com:5005/vodcontent/bob-barker-mapping-rake:v2
command: ["rake", "mapping:check"]
volumeMounts:
- name: nfs-volume
mountPath: /mnt
env:
- name: ORIGIN_PATH
value: /mnt/Content/BobBarkerTesting/est_pricing
- name: FINAL_PATH
value: /mnt/Content/BobBarkerTesting/est_pricing/final
- name: SCANNED_PATH
value: /mnt/Content/BobBarkerTesting/est_pricing/scanned
- name: EST_BUNDLES
value: /mnt/Content/BobBarkerTesting/est_bundles
- name: SCANNED_BUNDLES
value: /mnt/Content/BobBarkerTesting/incoming_est_bundles/scanned
- name: INCOMING_BUNDLES
value: /mnt/Content/BobBarkerTesting/incoming_est_bundles
- name: INCOMING_SWAPS
value: /mnt/Content/BobBarkerTesting/locker_swap/ingest
- name: OUTPUT_FOLDER
value: /mnt/Content/BobBarkerTesting/locker_swap/output
- name: PROCESSED_FOLDER
value: /mnt/Content/BobBarkerTesting/locker_swap/processed
- name: FAILED_FOLDER
value: /mnt/Content/BobBarkerTesting/locker_swap/failed
- name: LDAP_HOST
value: 172.17.157.21
- name: LDAP_PORT
value: '636'
- name: LDAP_ATTRIBUTE
value: uid
- name: LDAP_BASE
value: ou=people,dc=cox,dc=net
            - name: LDAP_USER
              valueFrom:
                secretKeyRef:
                  name: ldap
                  key: username
            - name: LDAP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: ldap
                  key: password
            - name: LDAP_SSL
              value: simple_tls
            - name: DB_HOST
              value: mysql.bb.svc.cluster.local
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: db
                  key: username
            - name: DB_PW
              valueFrom:
                secretKeyRef:
                  name: db
                  key: password
            - name: DB
              value: pesto_prod
restartPolicy: Never
</code></pre>
|
<p>I want to customize the certificate validity duration and renewal throughout the cluster. Iguess doing that with ClusterIssuer is feasible. Is there a way to do so ?</p>
| <p>You can specify the duration of a self signed certificate by specifying the <code>duration</code> field in the <code>Certificate</code> CR:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: example
spec:
duration: 24h
...
</code></pre>
<p>You can control how long before the certificate expires it gets renewed using the <code>renewBefore</code> field:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: example
spec:
renewBefore: 12h
...
</code></pre>
<p>Details in <a href="https://docs.cert-manager.io/en/release-0.8/reference/certificates.html#certificate-duration-and-renewal-window" rel="nofollow noreferrer">the documentation</a>.</p>
|
| <p>I have a funny situation with my fluxcd workload on k8s. I am trying to configure the fluxcd workload on my k8s cluster (on EKS) to deploy an app from my repo. The log shows that it was able to access GitHub and found newly released k8s object files, but all subsequent accesses return this error:</p>
<pre><code>ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
</code></pre>
<p>K8s version = 1.18
flux version = 1.21.0</p>
<p>Any suggestions on diagnosing this issue?
Here is the full log.</p>
<pre><code>ts=2021-01-22T07:11:41.660683058Z caller=main.go:259 version=1.21.0
ts=2021-01-22T07:11:41.660729123Z caller=main.go:412 msg="using kube config: \"/root/.kube/config\" to connect to the cluster"
ts=2021-01-22T07:11:41.694067171Z caller=main.go:492 component=cluster identity=/etc/fluxd/ssh/identity
ts=2021-01-22T07:11:41.694102094Z caller=main.go:493 component=cluster identity.pub="ssh-rsa xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx fluxcd"
ts=2021-01-22T07:11:41.694126615Z caller=main.go:498 host=https://172.20.0.1:443 version=kubernetes-v1.18.9-eks-d1db3c
ts=2021-01-22T07:11:41.694166271Z caller=main.go:510 kubectl=/usr/local/bin/kubectl
ts=2021-01-22T07:11:41.695265432Z caller=main.go:527 ping=true
ts=2021-01-22T07:11:41.695536819Z caller=main.go:666 url=ssh://git@github.com/xxx/xxx.git user=xxx email=xxx@xxx.com signing-key= verify-signatures-mode=none sync-tag=build-flux-sync state=git readonly=false registry-disable-scanning=true notes-ref=build-flux-sync set-author=false git-secret=false sops=false
ts=2021-01-22T07:11:41.695576145Z caller=main.go:751 component=upstream URL=ws://fluxcloud
ts=2021-01-22T07:11:41.696562798Z caller=upstream.go:133 component=upstream connecting=true
ts=2021-01-22T07:11:41.696893569Z caller=main.go:795 addr=:3030
ts=2021-01-22T07:11:41.696977557Z caller=loop.go:67 component=sync-loop info="Registry scanning is disabled; no image updates will be attempted"
ts=2021-01-22T07:11:41.697063581Z caller=sync.go:51 component=daemon warning="failed to load last-synced resources. sync event may be inaccurate" err="git repo not ready: git repo has not been cloned yet"
ts=2021-01-22T07:11:41.697104186Z caller=loop.go:108 component=sync-loop err="git repo not ready: git repo has not been cloned yet"
ts=2021-01-22T07:11:41.699018988Z caller=upstream.go:147 component=upstream connected=true
ts=2021-01-22T07:11:42.174047585Z caller=checkpoint.go:24 component=checkpoint msg="up to date" latest=1.20.1
ts=2021-01-22T07:11:52.629446199Z caller=loop.go:134 component=sync-loop event=refreshed url=ssh://git@github.com/xxx/xxx.git branch=xxx HEAD=ad71e60d5b61fb17c85646c5ef3af010f33ca2ec
ts=2021-01-22T07:13:29.206177525Z caller=sync.go:61 component=daemon info="trying to sync git changes to the cluster" old=3d3df47da9826002698e9d6faef603347d71607d new=ad71e60d5b61fb17c85646c5ef3af010f33ca2ec
ts=2021-01-22T07:14:38.430957604Z caller=sync.go:79 method=Sync info="not applying resource; ignore annotation in file" resource=flux:deployment/flux source=fluxcd/deployment-flux.yaml
ts=2021-01-22T07:14:38.431682477Z caller=sync.go:540 method=Sync cmd=apply args= count=67
ts=2021-01-22T07:14:40.412253941Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=1.980475048s err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found\nError from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output="namespace/admin unchanged\nnamespace/backstage unchanged\nnamespace/cert-manager unchanged\nnamespace/default unchanged\nnamespace/flux unchanged\nnamespace/kube-node-lease unchanged\nnamespace/kube-public unchanged\nnamespace/kube-system unchanged\nnamespace/kubernetes-dashboard unchanged\nserviceaccount/efs-provisioner unchanged\nclusterrole.rbac.authorization.k8s.io/efs-provisioner-runner unchanged\nserviceaccount/fluent-bit created\nclusterrole.rbac.authorization.k8s.io/fluent-bit-read created\nserviceaccount/flux configured\nclusterrole.rbac.authorization.k8s.io/flux configured\nservice/fluxcloud 
configured\nclusterrole.rbac.authorization.k8s.io/ingress-nginx created\nclusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created\nrole.rbac.authorization.k8s.io/leader-locking-efs-provisioner unchanged\nservice/memcached configured\nserviceaccount/metrics-server unchanged\nservice/metrics-server unchanged\nservice/sealed-secrets-controller unchanged\nserviceaccount/sealed-secrets-controller unchanged\nrole.rbac.authorization.k8s.io/sealed-secrets-key-admin unchanged\nrole.rbac.authorization.k8s.io/sealed-secrets-service-proxier unchanged\ncustomresourcedefinition.apiextensions.k8s.io/sealedsecrets.bitnami.com unchanged\nclusterrole.rbac.authorization.k8s.io/secrets-unsealer unchanged\nclusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged\nclusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged\nconfigmap/fluent-bit-config created\nclusterrolebinding.rbac.authorization.k8s.io/fluent-bit-read created\nclusterrolebinding.rbac.authorization.k8s.io/flux configured\npersistentvolumeclaim/flux-tmp configured\nclusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created\nclusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created\nrolebinding.rbac.authorization.k8s.io/leader-locking-efs-provisioner unchanged\nrolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged\nclusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged\nclusterrolebinding.rbac.authorization.k8s.io/run-efs-provisioner unchanged\nrolebinding.rbac.authorization.k8s.io/sealed-secrets-controller unchanged\nclusterrolebinding.rbac.authorization.k8s.io/sealed-secrets-controller unchanged\nrolebinding.rbac.authorization.k8s.io/sealed-secrets-service-proxier unchanged\nclusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged\ndeployment.apps/efs-provisioner configured\ndaemonset.apps/fluent-bit created\ndeployment.apps/fluxcloud 
configured\ndeployment.apps/memcached configured\ndeployment.apps/metrics-server unchanged\ndeployment.apps/sealed-secrets-controller configured\nstorageclass.storage.k8s.io/aws-efs-standard unchanged\nstorageclass.storage.k8s.io/ebs-csi-gp2 unchanged\nsealedsecret.bitnami.com/git-ssh-key configured\nvalidatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created\napiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged"
ts=2021-01-22T07:14:40.811174903Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=398.82547ms err=null output="namespace/admin unchanged"
ts=2021-01-22T07:14:41.394748916Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=583.515075ms err=null output="namespace/backstage unchanged"
ts=2021-01-22T07:14:41.900681885Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=505.881649ms err=null output="namespace/cert-manager unchanged"
ts=2021-01-22T07:14:42.397868741Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=497.106578ms err=null output="namespace/default unchanged"
ts=2021-01-22T07:14:42.891058742Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=493.137805ms err=null output="namespace/flux unchanged"
ts=2021-01-22T07:14:43.413384242Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=522.267484ms err=null output="namespace/kube-node-lease unchanged"
ts=2021-01-22T07:14:43.913865089Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=500.425913ms err=null output="namespace/kube-public unchanged"
ts=2021-01-22T07:14:44.408359028Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=494.417052ms err=null output="namespace/kube-system unchanged"
ts=2021-01-22T07:14:44.99409166Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=585.68475ms err=null output="namespace/kubernetes-dashboard unchanged"
ts=2021-01-22T07:14:45.487302521Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=493.154343ms err=null output="serviceaccount/efs-provisioner unchanged"
ts=2021-01-22T07:14:45.986967413Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=499.613885ms err=null output="clusterrole.rbac.authorization.k8s.io/efs-provisioner-runner unchanged"
ts=2021-01-22T07:14:46.507919018Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=520.893135ms err=null output="serviceaccount/fluent-bit unchanged"
ts=2021-01-22T07:14:47.023825463Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=515.859875ms err=null output="clusterrole.rbac.authorization.k8s.io/fluent-bit-read unchanged"
ts=2021-01-22T07:14:47.4935634Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=469.689953ms err=null output="serviceaccount/flux unchanged"
ts=2021-01-22T07:14:47.912133528Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=418.525693ms err=null output="clusterrole.rbac.authorization.k8s.io/flux unchanged"
ts=2021-01-22T07:14:48.387499012Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=475.318357ms err=null output="service/fluxcloud unchanged"
ts=2021-01-22T07:14:48.871262811Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=483.705164ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:14:49.397015116Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=525.697104ms err=null output="clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged"
ts=2021-01-22T07:14:49.87199228Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=474.919539ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:14:50.355258006Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=483.211989ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:14:50.812340209Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=457.025093ms err=null output="clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged"
ts=2021-01-22T07:14:51.355618608Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=543.227442ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:14:51.872033727Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=516.367722ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:14:52.371578968Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=499.49742ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:14:52.798805188Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=427.180173ms err=null output="role.rbac.authorization.k8s.io/leader-locking-efs-provisioner unchanged"
ts=2021-01-22T07:14:53.292518268Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=493.664061ms err=null output="service/memcached unchanged"
ts=2021-01-22T07:14:53.695930907Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=403.364685ms err=null output="serviceaccount/metrics-server unchanged"
ts=2021-01-22T07:14:54.193472131Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=497.485626ms err=null output="service/metrics-server unchanged"
ts=2021-01-22T07:14:54.70293546Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=509.415046ms err=null output="service/sealed-secrets-controller unchanged"
ts=2021-01-22T07:14:55.205694885Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=502.704135ms err=null output="serviceaccount/sealed-secrets-controller unchanged"
ts=2021-01-22T07:14:55.694254797Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=488.509092ms err=null output="role.rbac.authorization.k8s.io/sealed-secrets-key-admin unchanged"
ts=2021-01-22T07:14:56.208222605Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=513.920697ms err=null output="role.rbac.authorization.k8s.io/sealed-secrets-service-proxier unchanged"
ts=2021-01-22T07:14:56.694122349Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=485.841913ms err=null output="customresourcedefinition.apiextensions.k8s.io/sealedsecrets.bitnami.com unchanged"
ts=2021-01-22T07:14:57.211083636Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=516.911932ms err=null output="clusterrole.rbac.authorization.k8s.io/secrets-unsealer unchanged"
ts=2021-01-22T07:14:57.692955699Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=481.822011ms err=null output="clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged"
ts=2021-01-22T07:14:58.105620938Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=412.616954ms err=null output="clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged"
ts=2021-01-22T07:14:58.597035893Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=491.365636ms err=null output="configmap/fluent-bit-config unchanged"
ts=2021-01-22T07:14:59.096279092Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=499.193705ms err=null output="clusterrolebinding.rbac.authorization.k8s.io/fluent-bit-read unchanged"
ts=2021-01-22T07:14:59.609137559Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=512.808678ms err=null output="clusterrolebinding.rbac.authorization.k8s.io/flux unchanged"
ts=2021-01-22T07:15:00.113543907Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=504.353462ms err=null output="persistentvolumeclaim/flux-tmp unchanged"
ts=2021-01-22T07:15:00.606935711Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=493.330415ms err=null output="clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged"
ts=2021-01-22T07:15:01.174866675Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=567.880665ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:15:01.607053543Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=432.139535ms err=null output="clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged"
ts=2021-01-22T07:15:02.159227347Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=552.117123ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:15:02.671269693Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=511.980415ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:15:03.110989126Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=439.656913ms err=null output="rolebinding.rbac.authorization.k8s.io/leader-locking-efs-provisioner unchanged"
ts=2021-01-22T07:15:03.597787026Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=486.74369ms err=null output="rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged"
ts=2021-01-22T07:15:04.102917786Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=505.084409ms err=null output="clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged"
ts=2021-01-22T07:15:04.597446493Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=494.473142ms err=null output="clusterrolebinding.rbac.authorization.k8s.io/run-efs-provisioner unchanged"
ts=2021-01-22T07:15:05.105545977Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=508.051542ms err=null output="rolebinding.rbac.authorization.k8s.io/sealed-secrets-controller unchanged"
ts=2021-01-22T07:15:05.693874047Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=588.276311ms err=null output="clusterrolebinding.rbac.authorization.k8s.io/sealed-secrets-controller unchanged"
ts=2021-01-22T07:15:06.198681789Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=504.759864ms err=null output="rolebinding.rbac.authorization.k8s.io/sealed-secrets-service-proxier unchanged"
ts=2021-01-22T07:15:06.707216771Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=508.485939ms err=null output="clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged"
ts=2021-01-22T07:15:07.20203917Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=494.775224ms err=null output="deployment.apps/efs-provisioner configured"
ts=2021-01-22T07:15:07.687488657Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=485.401142ms err=null output="daemonset.apps/fluent-bit configured"
ts=2021-01-22T07:15:08.197431353Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=509.895359ms err=null output="deployment.apps/fluxcloud unchanged"
ts=2021-01-22T07:15:08.686987288Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=489.509786ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:15:09.258416008Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=571.385297ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:15:09.7778928Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=519.425596ms err="running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found" output=
ts=2021-01-22T07:15:10.2221443Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=444.199377ms err=null output="deployment.apps/memcached configured"
ts=2021-01-22T07:15:10.695917417Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=473.723335ms err=null output="deployment.apps/metrics-server unchanged"
ts=2021-01-22T07:15:11.224662658Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=528.694215ms err=null output="deployment.apps/sealed-secrets-controller configured"
ts=2021-01-22T07:15:11.787295286Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=562.582572ms err=null output="storageclass.storage.k8s.io/aws-efs-standard unchanged"
ts=2021-01-22T07:15:12.206831282Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=419.488021ms err=null output="storageclass.storage.k8s.io/ebs-csi-gp2 unchanged"
ts=2021-01-22T07:15:12.696859048Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=489.978989ms err=null output="sealedsecret.bitnami.com/git-ssh-key unchanged"
ts=2021-01-22T07:15:13.216985598Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=520.054356ms err=null output="validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured"
ts=2021-01-22T07:15:13.69546204Z caller=sync.go:606 method=Sync cmd="kubectl apply -f -" took=478.426535ms err=null output="apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged"
ts=2021-01-22T07:15:14.294910311Z caller=sync.go:231 component=daemon err="ingress-nginx:serviceaccount/ingress-nginx: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:role/ingress-nginx: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:role/ingress-nginx-admission: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:serviceaccount/ingress-nginx-admission: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:service/ingress-nginx-controller: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:service/ingress-nginx-controller-admission: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:rolebinding/ingress-nginx: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:rolebinding/ingress-nginx-admission: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:configmap/ingress-nginx-controller: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:job/ingress-nginx-admission-create: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; 
ingress-nginx:job/ingress-nginx-admission-patch: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found; ingress-nginx:deployment/ingress-nginx-controller: running kubectl: exit status 1, stderr: Error from server (NotFound): error when creating \"STDIN\": namespaces \"ingress-nginx\" not found"
ts=2021-01-22T07:15:14.316975589Z caller=daemon.go:704 component=daemon event="Sync: ad71e60, <cluster>:clusterrole/fluent-bit-read, <cluster>:clusterrole/ingress-nginx, <cluster>:clusterrole/ingress-nginx-admission, <cluster>:clusterrolebinding/fluent-bit-read, <cluster>:clusterrolebinding/ingress-nginx, <cluster>:clusterrolebinding/ingress-nginx-admission, <cluster>:validatingwebhookconfiguration/ingress-nginx-admission, admin:configmap/fluent-bit-config, admin:daemonset/fluent-bit, admin:serviceaccount/fluent-bit, ingress-nginx:configmap/ingress-nginx-controller, ingress-nginx:deployment/ingress-nginx-controller, ingress-nginx:job/ingress-nginx-admission-create, ingress-nginx:job/ingress-nginx-admission-patch, ingress-nginx:role/ingress-nginx, ingress-nginx:role/ingress-nginx-admission, ingress-nginx:rolebinding/ingress-nginx, ingress-nginx:rolebinding/ingress-nginx-admission, ingress-nginx:service/ingress-nginx-controller, ingress-nginx:service/ingress-nginx-controller-admission, ingress-nginx:serviceaccount/ingress-nginx, ingress-nginx:serviceaccount/ingress-nginx-admission" logupstream=true
ts=2021-01-22T07:17:02.44292057Z caller=loop.go:108 component=sync-loop err="pushing tag to origin: fatal: Could not read from remote repository., full output:\n ERROR: Repository not found.\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"
ts=2021-01-22T07:17:02.443021657Z caller=loop.go:127 component=sync-loop url=ssh://git@github.com/xxx/xxx.git err="git repo not ready: git clone --mirror: fatal: Could not read from remote repository., full output:\n Cloning into bare repository '/tmp/flux-gitclone159237772'...\nERROR: Repository not found.\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"
ts=2021-01-22T07:32:02.443091782Z caller=sync.go:51 component=daemon warning="failed to load last-synced resources. sync event may be inaccurate" err="git repo not ready: git clone --mirror: fatal: Could not read from remote repository., full output:\n Cloning into bare repository '/tmp/flux-gitclone422736558'...\nERROR: Repository not found.\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"
ts=2021-01-22T07:32:02.443152229Z caller=loop.go:108 component=sync-loop err="git repo not ready: git clone --mirror: fatal: Could not read from remote repository., full output:\n Cloning into bare repository '/tmp/flux-gitclone422736558'...\nERROR: Repository not found.\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n"
</code></pre>
| <p>For FluxCD to be able to access private GitHub repositories, it needs to be able to authenticate to GitHub.
You could create a dedicated Flux user, or use your own.</p>
<p>This is done using a Kubernetes Secret, as described here: <a href="https://fluxcd.io/docs/components/source/gitrepositories/#ssh-authentication" rel="nofollow noreferrer">https://fluxcd.io/docs/components/source/gitrepositories/#ssh-authentication</a></p>
<p>You can see the various Git authentication methods supported here: <a href="https://fluxcd.io/docs/cmd/flux_create_secret_git/#examples" rel="nofollow noreferrer">https://fluxcd.io/docs/cmd/flux_create_secret_git/#examples</a></p>
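<p>As a concrete illustration of the first link: for SSH authentication, Flux expects a Secret with <code>identity</code>, <code>identity.pub</code> and <code>known_hosts</code> keys (the key names come from the linked docs; the secret name and namespace below are placeholders):</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: flux-git-auth      # placeholder; reference it from your Git source
  namespace: flux-system
type: Opaque
stringData:
  identity: |
    <contents of the private key, e.g. ~/.ssh/id_ed25519>
  identity.pub: |
    <contents of the matching public key>
  known_hosts: |
    <output of ssh-keyscan github.com>
```

<p>The public key (yours, or that of the dedicated Flux user) must also be registered on the GitHub repository as a deploy key or on the user's account; otherwise you will keep seeing the <code>Repository not found</code> errors shown in your logs.</p>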
|
<p>I have a single page app. It is served by (and talks to) an API server running on a Kubernetes deployment with 2 replicas. I have added an <code>X-API-Version</code> header that my API sends on every request, which my client can compare against to figure out if it needs to inform the user their client code is outdated.</p>
<p>One issue I am facing, however, is that when I deploy, I want to ensure only ever one version of the API is serving requests. I do not want a situation where a client can be refreshed many times in a loop as it receives different API versions.</p>
<p>I basically want it to go from 2 replicas running version A, to 2 replicas running version A plus 2 running version B, then switch the traffic to version B once health checks pass, then tear down the old version A replicas.</p>
<p>Does Kubernetes support this using the <code>RollingUpdate</code> strategy?</p>
| <p>For blue-green deployments in Kubernetes, I recommend using a third-party solution such as <a href="https://argoproj.github.io/argo-rollouts/" rel="nofollow noreferrer">Argo Rollouts</a>, <a href="https://docs.nginx.com/nginx-service-mesh/tutorials/trafficsplit-deployments/" rel="nofollow noreferrer">NGINX</a>, or <a href="https://docs.harness.io/article/1qfb4gh9e8-set-up-kubernetes-traffic-splitting" rel="nofollow noreferrer">Istio</a>. They let you split traffic between the versions of your application.</p>
<p>However, Kubernetes is introducing the <a href="https://kubernetes.io/blog/2021/04/22/evolving-kubernetes-networking-with-the-gateway-api/" rel="nofollow noreferrer">Gateway API</a>, which has built-in support for <a href="https://gateway-api.sigs.k8s.io/guides/traffic-splitting/" rel="nofollow noreferrer">traffic splitting</a>.</p>
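<p>As a rough sketch of what this looks like with Argo Rollouts (all names below are illustrative; the Rollouts controller must be installed, and the two referenced Services must exist), a <code>Rollout</code> with the <code>blueGreen</code> strategy replaces your Deployment:</p>

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/api:v2   # pushing version B here triggers the rollout
  strategy:
    blueGreen:
      activeService: api-active     # Service currently receiving live traffic
      previewService: api-preview   # Service pointing at the new ReplicaSet
      autoPromotionEnabled: false   # switch traffic manually once health checks pass
```

<p>With <code>autoPromotionEnabled: false</code>, both versions run side by side (2 + 2 pods) until the new one is promoted, which matches the behaviour you described.</p>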
|
<p>I know that with Azure AKS, master components are fully managed by the service. But I'm a little confused when it comes to picking the node pools. I understand that there are two kinds of pools, system and user, where the user node pools host my application pods. I read in the official documentation that <strong>System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and tunnelfront.</strong> And I'm aware that we can rely on system nodes alone to create and run our Kubernetes cluster.</p>
<p>My question here: by the system node, do they mean the <strong>MASTER node</strong>? If so, why do we then have the option not to create user nodes (worker nodes, by analogy)? Because, as we know, in an on-prem Kubernetes solution we cannot create a Kubernetes cluster with master nodes only.</p>
<p><a href="https://i.stack.imgur.com/IHNyo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IHNyo.png" alt="enter image description here" /></a></p>
<p>I'd appreciate any help.</p>
| <p>System node pools in AKS do not contain master nodes. Master nodes in AKS are 100% managed by Azure and live outside your VNet. A system node pool contains worker nodes on which AKS automatically assigns the label <code>kubernetes.azure.com/mode: system</code>; that's about it. AKS then uses that label to deploy critical pods like <code>tunnelfront</code>, which is used to create a secure communication channel from your nodes to the control plane. You need at least one system node pool per cluster, and they have the <a href="https://learn.microsoft.com/en-us/azure/aks/use-system-pools#system-and-user-node-pools" rel="nofollow noreferrer">following restrictions</a>:</p>
<ul>
<li>System pools osType must be Linux.</li>
<li>System pools must contain at least one node, and user node pools may contain zero or more nodes.</li>
<li>System node pools require a VM SKU of at least 2 vCPUs and 4 GB memory; burstable VMs (B series) are not recommended.</li>
<li>System node pools must support at least 30 pods as described by the minimum and maximum value formula for pods.</li>
</ul>
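<p>Since both pool types consist of ordinary worker nodes, you can steer your application pods onto a user pool with a node selector on that same mode label (a sketch: the deployment name and image are placeholders, and AKS labels user-pool nodes with <code>kubernetes.azure.com/mode: user</code>):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        kubernetes.azure.com/mode: user   # keep application pods off the system pool
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
```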
|
<p>Through Terraform, I am trying to create a VPC-Native GKE cluster in a single zone (europe-north1-b), with a separate node-pool, with the GKE cluster and node-pool in their own VPC Network.</p>
<p>My code looks like the following:</p>
<pre><code>resource "google_container_cluster" "gke_cluster" {
description = "GKE Cluster for personal projects"
initial_node_count = 1
location = "europe-north1-b"
name = "prod"
network = google_compute_network.gke.self_link
remove_default_node_pool = true
subnetwork = google_compute_subnetwork.gke.self_link
ip_allocation_policy {
cluster_secondary_range_name = local.cluster_secondary_range_name
services_secondary_range_name = local.services_secondary_range_name
}
}
resource "google_compute_network" "gke" {
auto_create_subnetworks = false
delete_default_routes_on_create = false
description = "Compute Network for GKE nodes"
name = "${terraform.workspace}-gke"
routing_mode = "GLOBAL"
}
resource "google_compute_subnetwork" "gke" {
name = "prod-gke-subnetwork"
ip_cidr_range = "10.255.0.0/16"
region = "europe-north1"
network = google_compute_network.gke.id
secondary_ip_range {
range_name = local.cluster_secondary_range_name
ip_cidr_range = "10.0.0.0/10"
}
secondary_ip_range {
range_name = local.services_secondary_range_name
ip_cidr_range = "10.64.0.0/10"
}
}
locals {
cluster_secondary_range_name = "cluster-secondary-range"
services_secondary_range_name = "services-secondary-range"
}
resource "google_container_node_pool" "gke_node_pool" {
cluster = google_container_cluster.gke_cluster.name
location = "europe-north1-b"
name = terraform.workspace
node_count = 1
node_locations = [
"europe-north1-b"
]
node_config {
disk_size_gb = 100
disk_type = "pd-standard"
image_type = "cos_containerd"
local_ssd_count = 0
machine_type = "g1-small"
preemptible = false
service_account = google_service_account.gke_node_pool.email
}
}
resource "google_service_account" "gke_node_pool" {
account_id = "${terraform.workspace}-node-pool"
description = "The default service account for pods to use in ${terraform.workspace}"
display_name = "GKE Node Pool ${terraform.workspace} Service Account"
}
resource "google_project_iam_member" "gke_node_pool" {
member = "serviceAccount:${google_service_account.gke_node_pool.email}"
role = "roles/viewer"
}
</code></pre>
<p>However, whenever I apply this Terraform code, I receive the following error:</p>
<pre><code>google_container_cluster.gke_cluster: Still creating... [24m30s elapsed]
google_container_cluster.gke_cluster: Still creating... [24m40s elapsed]
╷
│ Error: Error waiting for creating GKE cluster: All cluster resources were brought up, but: component "kube-apiserver" from endpoint "gke-xxxxxxxxxxxxxxxxxxxx-yyyy" is unhealthy.
│
│   with google_container_cluster.gke_cluster,
│   on gke.tf line 1, in resource "google_container_cluster" "gke_cluster":
│    1: resource "google_container_cluster" "gke_cluster" {
│
╵
</code></pre>
<p>My cluster is then auto-deleted.</p>
<p>I can find no problem with my Terraform code/syntax, and have searched through Google Cloud Logging to find a more detailed error message with no luck.</p>
<p>So, how do I create a HEALTHY VPC-Native GKE cluster with Terraform?</p>
| <p>Turns out, the issue seemed to be the overly large subnetwork secondary ranges.</p>
<p>As shown in the question, I had ranges:</p>
<ul>
<li><code>10.0.0.0/10</code> for the <code>cluster_secondary_range</code>.</li>
<li><code>10.64.0.0/10</code> for the <code>services_secondary_range</code>.</li>
</ul>
<p>These <code>/10</code> CIDRs cover <code>4194304</code> IP addresses each, which I figured might be too large for Google/GKE to handle(?) - especially since all of the GKE documentation uses CIDRs covering much smaller ranges for the cluster & services.</p>
<p>I decided to shrink these CIDR ranges to see if would help:</p>
<ul>
<li><code>10.0.0.0/12</code> for the <code>cluster_secondary_range</code>.</li>
<li><code>10.16.0.0/12</code> for the <code>services_secondary_range</code>.</li>
</ul>
<p>These <code>/12</code> CIDRs cover <code>1048576</code> IP addresses each.<br />
My cluster was created successfully after this change:</p>
<pre><code>google_container_cluster.gke_cluster: Creation complete after 5m40s
</code></pre>
<p>Not sure WHY Google / GKE can't handle larger CIDR ranges for the cluster & services, but <code>/12</code> is good enough for me and allows for successful creation of the cluster.</p>
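<p>Concretely, the only change to the Terraform configuration was the two secondary ranges on the subnetwork:</p>

```hcl
  secondary_ip_range {
    range_name    = local.cluster_secondary_range_name
    ip_cidr_range = "10.0.0.0/12"    # was 10.0.0.0/10
  }
  secondary_ip_range {
    range_name    = local.services_secondary_range_name
    ip_cidr_range = "10.16.0.0/12"   # was 10.64.0.0/10
  }
```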
|
<p>I'm using the following tech:</p>
<ul>
<li>helm</li>
<li>argocd</li>
<li>k8s</li>
</ul>
<p>I created a secret:</p>
<pre><code>$ kubectl create secret generic my-secret --from-file=my-secret=/Users/superduper/project/src/main/resources/config-file.json --dry-run=client -o yaml
apiVersion: v1
data:
my-secret: <content>
kind: Secret
metadata:
creationTimestamp: null
name: my-secret
</code></pre>
<p>I then added the secret to my pod via a volume mount:</p>
<pre><code>volumeMounts:
- mountPath: "/etc/config"
name: config
readOnly: true
</code></pre>
<pre><code>volumes:
- name: config
secret:
secretName: my-secret
</code></pre>
<p>But the problem is that when I view the <code>/etc/config</code> directory, the contents show <code>my-secret</code> under a timestamped directory:</p>
<pre><code>directory:/etc/config/..2021_07_10_20_14_55.980073047
file:/etc/config/..2021_07_10_20_14_55.980073047/my-secret
</code></pre>
<p>Is this normal? Is there any way I can get rid of that timestamp so I can programmatically grab the config secret?</p>
| <p>This is the way Kubernetes mounts Secrets and ConfigMaps by default in order to propagate changes downward to those volume mounts if an upstream change occurs. If you would rather not use a symlink and want to forfeit that ability, use the <code>subPath</code> directive and your mount will appear as you wish.</p>
<pre class="lang-yaml prettyprint-override"><code> volumeMounts:
- mountPath: /etc/config/my-secret
name: config
subPath: my-secret
readOnly: true
volumes:
- name: config
secret:
secretName: my-secret
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ k exec alpine -it -- /bin/ash
/ # ls -lah /etc/config/
total 12K
drwxr-xr-x 2 root root 4.0K Jul 10 22:58 .
drwxr-xr-x 1 root root 4.0K Jul 10 22:58 ..
-rw-r--r-- 1 root root 9 Jul 10 22:58 my-secret
/ # cat /etc/config/my-secret
hi there
</code></pre>
|
<p>I'm a bit confused about how TCP probes are handled by Kubernetes. The documentation says:</p>
<blockquote>
<p>A third type of liveness probe uses a TCP socket. With this
configuration, the kubelet will attempt to open a socket to your
container on the specified port. If it can establish a connection, the
container is considered healthy, if it can't it is considered a
failure.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">source</a></p>
<p>But as far as I know, the socket client is <em>connected</em> before the server performs <code>accept</code> on the socket. This TCP handshake is managed by the OS... So how does Kubernetes "know" the state of the socket?</p>
<p>To give a little bit of context: I'm trying to write a unit test in my application (C++) and I cannot figure out how K8s handles this, but in K8s it does work as expected (I mean that if I do not accept the connection, it will declare my container <em>not alive</em>).</p>
<p>Thank you for your time and consideration!</p>
<h1>Edit 1</h1>
<p>Sorry @Steffen Ullrich it take me some time but here a sample of code: <a href="https://github.com/quentingodeau/k8s-probe" rel="nofollow noreferrer">https://github.com/quentingodeau/k8s-probe</a></p>
<p>And then the traces that I get:</p>
<pre><code>$ kubectl logs -f $(kubectl get pods | egrep -o 'sample-deployment-[^ ]*')
[2021-07-10 18:46:22.837] [info] Server acccept the client...
[2021-07-10 18:46:23.838] [info] Server acccept the client...
[2021-07-10 18:46:24.840] [info] Server acccept the client...
[2021-07-10 18:46:25.837] [info] Server acccept the client...
[2021-07-10 18:46:26.836] [info] Server acccept the client...
[2021-07-10 18:46:27.839] [info] Server acccept the client...
[2021-07-10 18:46:28.840] [info] Server acccept the client...
[2021-07-10 18:46:29.836] [info] Server acccept the client...
[2021-07-10 18:46:30.843] [info] Server acccept the client...
[2021-07-10 18:46:31.028] [info] Send SIGUSR1
[2021-07-10 18:46:31.836] [info] Server acccept the client...
[2021-07-10 18:46:31.836] [info] Start to not procssing incoming connection
[2021-07-10 18:46:35.855] [info] End of application (signal=15)
</code></pre>
<h1>Edit 2</h1>
<p><a href="https://github.com/kubernetes/kubernetes/issues/103632" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/103632</a></p>
| <blockquote>
<p>But as far as I know, the socket client is connected before the server performs accept on the socket</p>
</blockquote>
<p>While it is true that the connection might be established in the OS before <code>accept</code> is called, it is only established after <code>listen</code> is called on the socket. If the application is not running (failed to start, crashed) then there is no listening socket so any connection to it will fail. If the listen queue is full since the application fails to handle new connections in time, then the connection will fail too.</p>
<p>This kind of cheap probe is sufficient in many cases, but it surely does not handle every case, like making sure that the application responds correctly and within the expected time. If such checks are needed, more elaborate and maybe even application-specific probes must be used.</p>
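<p>For your unit test, the key point is that the handshake completes once <code>listen</code> has been called, with no <code>accept</code> needed. A minimal sketch (Python here for brevity; the kernel behaviour is identical for a C++ server using the same syscalls):</p>

```python
import socket

# Server side: bind and listen, but deliberately never call accept().
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(5)                # the kernel now completes handshakes into the backlog
port = srv.getsockname()[1]

# Client side (what the kubelet's TCP probe effectively does):
# connect() succeeds because the OS finished the handshake on the
# server's behalf, even though accept() was never called.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.settimeout(2)
cli.connect(("127.0.0.1", port))
peer = cli.getpeername()     # proves the connection is established

cli.close()
srv.close()
```

<p>So to make the probe fail, it is not enough to stop calling <code>accept</code>: connections only start failing once the listen backlog is full, or once the listening socket is closed (which is what happens when the process dies), which matches the delayed failure you see in your logs.</p>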
|
<p>Installed PostgreSQL in AWS Eks through Helm <a href="https://bitnami.com/stack/postgresql-ha/helm" rel="nofollow noreferrer">https://bitnami.com/stack/postgresql-ha/helm</a></p>
<p>I need to perform some tasks in the deployment with root rights, but when I run</p>
<pre><code>su -
</code></pre>
<p>it asks for a password that I don't know and don't know where to find. I also cannot access the folders I need, such as <code>/opt/bitnami/postgresql/</code>:</p>
<p>Error: Permission denied</p>
<p>How to get the necessary rights or what password?</p>
<p>Image attached: <a href="https://i.stack.imgur.com/MUxVQ.jpg" rel="nofollow noreferrer">bitnami root error</a></p>
| <blockquote>
<p>I need [...] to place the .so libraries I need for postgresql in [...] <code>/opt/bitnami/postgresql/lib</code></p>
</blockquote>
<p>I'd consider this "extending" rather than "configuring" PostgreSQL; it's not a task you can do with a Helm chart alone. On a standalone server it's not something you could configure with only a text editor, for example, and while <a href="https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha" rel="nofollow noreferrer">the Bitnami PostgreSQL-HA chart</a> has a pretty wide swath of configuration options, none of them allow providing extra binary libraries.</p>
<p>The first step to doing this is to create a custom Docker image that includes the shared library. That can start <code>FROM</code> the Bitnami PostgreSQL image this chart uses:</p>
<pre class="lang-sh prettyprint-override"><code>ARG postgresql_tag=11.12.0-debian-10-r44
FROM bitnami/postgresql:${postgresql_tag}
# assumes the shared library is in the same directory as
# the Dockerfile
COPY whatever.so /opt/bitnami/postgresql/lib
# or RUN curl ..., or RUN apt-get, or ...
#
# You do not need EXPOSE, ENTRYPOINT, CMD, etc.
# These come from the base image
</code></pre>
<p>Build this image and push it to a Docker registry, the same way you do for your application code. (In a purely local context you might be able to <code>docker build</code> the image in minikube's context.)</p>
<p>When you deploy the chart, it has options to override the image it runs, so you can point it at your own custom image. Your Helm values could look like:</p>
<pre class="lang-yaml prettyprint-override"><code>postgresqlImage:
registry: registry.example.com:5000
repository: infra/postgresql
tag: 11.12.0-debian-10-r44
# `docker run registry.example.com:5000/infra/postgresql:11.12.0-debian-10-r44`
</code></pre>
<p>and then you can provide this file via the <code>helm install -f</code> option when you deploy the chart.</p>
<p>You should almost never try to manually configure a Kubernetes pod by logging into it with <code>kubectl exec</code>. It is extremely routine to delete pods, and in many cases Kubernetes does this automatically (if the image tag in a Deployment or StatefulSet changes; if a HorizontalPodAutoscaler scales down; if a Node is taken offline); in these cases your manual changes will be lost. If there are multiple replicas of a pod (with an HA database setup there almost certainly will be) you also need to make identical changes in every replica.</p>
|
<p>Is it possible for a pod/deployment/statefulset to be moved to, or recreated on, another node automatically if the first node fails? The pod in question is set to 1 replica. So is it possible to configure some sort of failover for Kubernetes pods? I've tried out pod affinity settings, but nothing is moved automatically and it has been around 10 minutes.</p>
<p>The YAML for the pod in question is below:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ceph-rbd-sc-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: ceph-rbd-sc
---
apiVersion: v1
kind: Pod
metadata:
name: ceph-rbd-pod-pvc-sc
labels:
app: ceph-rbd-pod-pvc-sc
spec:
containers:
- name: ceph-rbd-pod-pvc-sc
image: busybox
command: ["sleep", "infinity"]
volumeMounts:
- mountPath: /mnt/ceph_rbd
name: volume
nodeSelector:
etiket: worker
volumes:
- name: volume
persistentVolumeClaim:
claimName: ceph-rbd-sc-pvc
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
name: ceph-rbd-pod-pvc-sc
topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>Edit:</p>
<p>I managed to get it to work. But now I have another problem: the newly created pod on the other node is stuck in "container creating" and the old pod is stuck in "terminating". I also get <code>Multi-Attach error for volume</code>, stating that the PV is still in use by the old pod. The situation is the same for any deployment/statefulset with a PV attached; the problem is resolved only when the failed node comes back online. Is there a solution for this?</p>
| <p>The answer from coderanger remains valid regarding Pods. To answer your last edit:</p>
<p>Your issue is with CSI.</p>
<ul>
<li><p>When your Pod uses a PersistentVolume whose accessModes is RWO.</p>
</li>
<li><p>And when the Node hosting your Pod gets unreachable, prompting Kubernetes scheduler to Terminate the current Pod and create a new one on another Node</p>
</li>
</ul>
<p>Your PersistentVolume can not be attached to the new Node.</p>
<p>The reason for this is that CSI introduced some kind of "lease", marking a volume as bound.</p>
<p>With previous CSI spec & implementations, this lock was not visible, in terms of Kubernetes API. If your ceph-csi deployment is recent enough, you should find a corresponding "VolumeAttachment" object that could be deleted, to fix your issue:</p>
<pre><code># kubectl get volumeattachments -n ci
NAME ATTACHER PV NODE ATTACHED AGE
csi-194d3cfefe24d5f22616fabd3d2fb2ce5f79b16bdca75088476c2902e7751794 rbd.csi.ceph.com pvc-902c3925-11e2-4f7f-aac0-59b1edc5acf4 melpomene.xxx.com true 14d
csi-24847171efa99218448afac58918b6e0bb7b111d4d4497166ff2c4e37f18f047 rbd.csi.ceph.com pvc-b37722f7-0176-412f-b6dc-08900e4b210d clio.xxx.com true 90d
....
kubectl delete -n ci volumeattachment csi-xxxyyyzzz
</code></pre>
<p>Those VolumeAttachments are created by your CSI provisioner, before the device mapper attaches a volume.</p>
<p>They would be deleted only once the corresponding PV has been released from a given Node, according to its device mapper - which needs to be running, with kubelet up and the Node marked as Ready according to the API. Until then, other Nodes can't map it. There's no timeout: should a Node become unreachable due to network issues or an abrupt shutdown/force-off/reset, its RWO PVs are stuck.</p>
<p>See: <a href="https://github.com/ceph/ceph-csi/issues/740" rel="nofollow noreferrer">https://github.com/ceph/ceph-csi/issues/740</a></p>
<p>One workaround for this would be not to use CSI, and rather stick with legacy StorageClasses, in your case installing rbd on your nodes.</p>
<p>Though last I checked -- k8s 1.19.x -- I couldn't manage to get that working, and I can't recall what was wrong. CSI tends to be "the way" to do it nowadays, despite sadly not being suitable for production use -- unless you run in an IAAS with auto-scale groups deleting Nodes from the Kubernetes API (eventually evicting the corresponding VolumeAttachments), or use some kind of MachineHealthChecks like OpenShift 4 implements.</p>
|
<p>I have a simple task, but it is not solved even after studying dozens of articles.</p>
<p>There is a simple AWS EKS cluster created from a demo template using eksctl, an Elastic IP, and the nginx ingress controller installed without changes from <a href="https://bitnami.com/stack/nginx-ingress-controller/helm" rel="nofollow noreferrer">https://bitnami.com/stack/nginx-ingress-controller/helm</a></p>
<p>There is a domain <a href="https://stage.mydomain.com" rel="nofollow noreferrer">https://stage.mydomain.com</a> which I want to point, using a DNS A record, at the AWS EKS nginx ingress controller endpoint (1234567890.eu-central-1.elb.amazonaws.com), so that all the services of my cluster are available at this Elastic IP address.</p>
<p>I tried through Load Balancer and Network Balancer, but it doesn't work.</p>
<p>Is there a proven article or sequence of actions for solving this problem and with this set of services?</p>
| <p>Yes, that is expected: as AWS's own articles show, the NLB endpoint is provided as a DNS name in exactly this way.</p>
<p><a href="https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/</a></p>
<p>The article above installs the NGINX controller with an NLB as its backend, which exposes its endpoint the same way.</p>
<p>Once your ingress controller setup is done, you will get an <code>LB</code> endpoint; add it to <code>DNS</code> as an <code>A</code> (alias) record or a <code>CNAME</code>, and it will forward requests to the cluster.</p>
<p>Now, inside the cluster, you have to create the Ingress by applying the following YAML:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
</code></pre>
<p>For an <code>NLB</code>, you can add the annotation <code>service.beta.kubernetes.io/aws-load-balancer-type: nlb</code> to the ingress controller's service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
</code></pre>
|
<p>Given a database that is part of a StatefulSet and behind a headless service, how can I use a local client (outside of the cluster) to access the database? Is it possible to create a separate service that targets a specific pod by its stable ID?</p>
| <p>There are multiple ways you can connect to this database service.</p>
<p>You can use</p>
<p>Port-forward : <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</a></p>
<p>Service as LoadBalancer : <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer</a></p>
<p>Service as Nodeport : <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a></p>
<p>Example MySQL database running on K8s : <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
</code></pre>
<p>The easiest way is to try <code>port-forwarding</code>:</p>
<pre><code>kubectl port-forward -n <NAMESPACE Name> <POD name> 3306:3306
</code></pre>
<p>Using the above command you create a proxy from your local machine to the K8s cluster, and can then test against <code>localhost:3306</code>.</p>
<p>This is not a method for a <strong>Prod</strong> use case; it can be used for <strong>debugging</strong>.</p>
<p>NodePort: exposes the port on the worker node IPs, so if a worker gets removed during autoscaling, the IP may change over time.</p>
<p>I would recommend creating a new service with the respective <strong>label</strong> and type as <code>LoadBalancer</code>.</p>
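<p>A sketch of such a service (the name <code>mysql-external</code> is made up; the selector matches the example Deployment above):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql-external   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: mysql           # matches the Deployment's pod label above
  ports:
    - port: 3306
      targetPort: 3306
</code></pre>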
|
<p>I have a business case where I want to access a clustered Redis cache from one account (let's say account A) to an account B.</p>
<p>I have used the solution mentioned in the below link and for the most part, it works <a href="https://medium.com/@programmerohit/accessing-aws-elasticache-redis-from-multiple-aws-accounts-via-aws-privatelink-8938891bec6c" rel="nofollow noreferrer">Base Solution</a></p>
<p>The base solution works fine if I am trying to access the clustered Redis via <code>redis-py</code> however if I try to use it with <code>redis-py-cluster</code> it fails.</p>
<p>I am testing all this in a staging environment where the Redis cluster has only one node but in the production environment, it has two nodes, so the <code>redis-py</code> approach will not work for me.</p>
<p>Below is my sample code</p>
<pre><code>redis = "3.5.3"
redis-py-cluster = "2.1.3"
==============================
from redis import Redis
from rediscluster import RedisCluster

respCluster = 'error'
respRegular = 'error'
host = "vpce-XXX.us-east-1.vpce.amazonaws.com"
port = "6379"

try:
    ru = RedisCluster(startup_nodes=[{"host": host, "port": port}], decode_responses=True, skip_full_coverage_check=True)
    respCluster = ru.get('ABC')
except Exception as e:
    print(e)

try:
    ru = Redis(host=host, port=port, decode_responses=True)
    respRegular = ru.get('ABC')
except Exception as e:
    print(e)

return {"respCluster": respCluster, "respRegular": respRegular}
</code></pre>
<p>The above code works perfectly in account A but in account B the output that I got was</p>
<pre><code>{'respCluster': 'error', 'respRegular': '123456789'}
</code></pre>
<p>And the error that I am getting is</p>
<pre><code>rediscluster.exceptions.ClusterError: TTL exhausted
</code></pre>
<p>In account A we are using AWS ECS + EC2 + docker to run this and</p>
<p>In account B we are running the code in an AWS EKS Kubernetes pod.</p>
<p>What should I do to make the <code>redis-py-cluster</code> work in this case? or is there an alternative to <code>redis-py-cluster</code> in python to access a multinode Redis cluster?</p>
<p>I know this is a highly specific case, any help is appreciated.</p>
<p><strong>EDIT 1</strong>: Upon further research, it seems that TTL exhausted is a general error; in the logs, the initial error is</p>
<pre><code>redis.exceptions.ConnectionError:
Error 101 connecting to XX.XXX.XX.XXX:6379. Network is unreachable
</code></pre>
<p>Here the XXXX is the IP of the Redis cluster in Account A.
This is strange, since <code>redis-py</code> also connects to the same IP and port, so
this error should not exist.</p>
| <p>So turns out the issue was due to how <code>redis-py-cluster</code> manages host and port.</p>
<p>When a new <code>redis-py-cluster</code> object is created, it gets a list of host IPs from the Redis server (i.e. the Redis cluster host IPs from account A), after which the client tries to connect to those new hosts and ports.</p>
<p>In normal cases this works, as the initial host and the IP from the response are one and the same (i.e. the host and port added at the time of object creation).</p>
<p>In our case, the object creation host and port are obtained from the DNS name from the Endpoint service of Account B.</p>
<p>It leads to the code trying to access the actual IP from account A instead of the DNS name from account B.</p>
<p>The issue was resolved using <a href="https://github.com/Grokzen/redis-py-cluster/blob/3b68c18810c2e8cea20d7e900064b1f8ec811260/docs/client.rst#host-port-remapping" rel="nofollow noreferrer">host-port remapping</a>: we bound the IPs returned by the Redis server in Account A to Account B's endpoint service DNS name.</p>
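<p>One way to apply that remapping is the client's <code>host_port_remap</code> option. The sketch below is hedged: the IPs and the VPC endpoint hostname are placeholders, and the <code>build_remap</code> helper is my own illustration, not part of the library.</p>

```python
# Build a host/port remap so that node IPs advertised by the cluster in
# account A get rewritten to account B's VPC endpoint DNS name.
def build_remap(account_a_ips, endpoint_host, port=6379):
    # Keys follow redis-py-cluster's host-port remapping documentation.
    return [
        {
            "from_host": ip,
            "from_port": port,
            "to_host": endpoint_host,
            "to_port": port,
        }
        for ip in account_a_ips
    ]

# Placeholder node IPs from account A and the endpoint DNS name in account B.
remap = build_remap(["10.1.2.3", "10.1.2.4"],
                    "vpce-XXX.us-east-1.vpce.amazonaws.com")

# The client call itself is left commented out, since it opens a connection:
# from rediscluster import RedisCluster
# ru = RedisCluster(
#     startup_nodes=[{"host": "vpce-XXX.us-east-1.vpce.amazonaws.com", "port": "6379"}],
#     host_port_remap=remap,
#     decode_responses=True,
#     skip_full_coverage_check=True,
# )
```

<p>With the remap in place, the client rewrites the account-A IPs it learns from the cluster back to the endpoint name that is actually reachable from account B.</p>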
|
| <p>In my k8s environment, where spring-boot applications run, I checked the log locations <code>/var/log</code> and <code>/var/lib</code>, but both are empty. Then I found the logs in <code>/tmp/spring.log</code>. It seems this is the default log location. My problems are:</p>
<ol>
<li>How does <code>kubectl logs</code> know it should read logs from the <code>/tmp</code> location? I get log output with the <code>kubectl logs</code> command.</li>
<li>I have fluent-bit configured with the following input:</li>
</ol>
<pre><code>[INPUT]
    Name  tail
    Tag   kube.dev.*
    Path  /var/log/containers/*dev*.log
    DB    /var/log/flb_kube_dev.db
</code></pre>
<p>This suggests it should read logs from <code>/var/log/containers/</code>, but that directory does not contain the logs. However, I am getting fluent-bit's own logs successfully. What am I missing here?</p>
| <p>Docker logs only contain the logs that are dumped on STDOUT by your container's process with PID 1 (your container's <code>entrypoint</code> or <code>cmd</code> process).</p>
<p>If you want to see the logs via <code>kubectl logs</code> or <code>docker logs</code>, you should redirect your application logs to STDOUT instead of file <code>/tmp/spring.log</code>. <a href="https://serverfault.com/a/634296">Here's</a> an excellent example of how this can achieved with minimal effort.</p>
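<p>Since this is a Spring Boot application, a likely reason the logs end up in <code>/tmp/spring.log</code> is that a <code>logging.file.path</code> (or the legacy <code>logging.path</code>) property is set; Spring Boot's default is console-only logging, which is exactly what <code>kubectl logs</code> captures. A hedged sketch of the relevant <code>application.properties</code> change (the path is an example):</p>
<pre><code># When set, Spring Boot writes a spring.log file under that path
# in addition to the console:
# logging.file.path=/tmp        remove or comment out this line

# With no file logging configured, logs go to STDOUT only,
# which is what 'kubectl logs' reads.
</code></pre>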
<hr />
<p>Alternatively, you can also use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volumeMount. This way, you can directly access the log from the path on the host.</p>
<h5>Warning when using hostPath volumeMount</h5>
<p>If the pod is shifted to another host for some reason, your logs will not move along with it. A new log file will be created on the new host at the same path.</p>
|
<p>I have a Docker Desktop Kubernetes cluster set up on my local machine and it is working fine.
Now I'm trying to deploy a .NET Core gRPC server and a .NET Core console load generator to my cluster.</p>
<p>I'm using Visual Studio (2019)'s default template for the gRPC application.</p>
<p><strong>Server:</strong></p>
<p>proto file</p>
<pre><code>syntax = "proto3";
option csharp_namespace = "KubernetesLoadSample";
package greet;
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply);
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings.
message HelloReply {
string message = 1;
}
</code></pre>
<p>.net core gRPC application</p>
<pre><code>public class GreeterService : Greeter.GreeterBase
{
    private readonly ILogger<GreeterService> _logger;

    public GreeterService(ILogger<GreeterService> logger)
    {
        _logger = logger;
    }

    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        _logger.LogInformation("Compute started");
        double result = 0;
        for (int i = 0; i < 10000; i++)
        {
            for (int j = 0; j < i; j++)
            {
                result += Math.Sqrt(i) + Math.Sqrt(j);
            }
        }
        return Task.FromResult(new HelloReply
        {
            Message = "Completed"
        });
    }
}
</code></pre>
<p>and DockerFile for this project as follows,</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["KubernetesLoadSample.csproj", "KubernetesLoadSample/"]
RUN dotnet restore "KubernetesLoadSample/KubernetesLoadSample.csproj"
WORKDIR "/src/KubernetesLoadSample"
COPY . .
RUN dotnet build "KubernetesLoadSample.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "KubernetesLoadSample.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "KubernetesLoadSample.dll"]
</code></pre>
<p>I was able to verify that this image works locally using:</p>
<pre><code>PS C:\Users\user> docker run -it -p 8000:80 kubernetesloadsample:latest
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
info: KubernetesLoadSample.GreeterService[0]
Compute started // called from BloomRPC Client
</code></pre>
<p><strong>Client</strong></p>
<p>Client is a .net console application, that calls server in a loop</p>
<pre><code>static async Task Main(string[] args)
{
    var grpcServer = Environment.GetEnvironmentVariable("GRPC_SERVER");
    Channel channel = new Channel($"{grpcServer}", ChannelCredentials.Insecure);
    Console.WriteLine($"Sending load to port {grpcServer}");
    while (true)
    {
        try
        {
            var client = new Greeter.GreeterClient(channel);
            var reply = await client.SayHelloAsync(
                new HelloRequest { Name = "GreeterClient" });
            Console.WriteLine("result: " + reply.Message);
            await Task.Delay(1000);
        }
        catch (Exception ex)
        {
            Console.WriteLine($"{DateTime.UtcNow} : tried to connect : {grpcServer} Crashed : {ex.Message}");
        }
    }
}
</code></pre>
<p>Docker file for client:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["GrpcClientConsole.csproj", "GrpcClientConsole/"]
RUN dotnet restore "GrpcClientConsole/GrpcClientConsole.csproj"
WORKDIR "/src/GrpcClientConsole"
COPY . .
RUN dotnet build "GrpcClientConsole.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "GrpcClientConsole.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "GrpcClientConsole.dll"]
</code></pre>
<p>and deployment file as follows,</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
  name: core-load
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: compute-server
  namespace: core-load
spec:
  replicas: 4
  selector:
    matchLabels:
      app: compute-server-svc
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: compute-server-svc
    spec:
      containers:
        - env: []
          image: kubernetesloadsample:latest
          imagePullPolicy: Never
          name: compute-server-svc
          ports:
            - containerPort: 80
              name: grpc
          resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: compute-server-svc
  namespace: core-load
spec:
  clusterIP: None
  ports:
    - name: grpc
      port: 5000
      targetPort: 80
      protocol: TCP
  selector:
    app: compute-server-svc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: compute-client
  namespace: core-load
spec:
  replicas: 1
  selector:
    matchLabels:
      app: compute-client
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: compute-client
    spec:
      containers:
        - env:
            - name: GRPC_SERVER
              value: compute-server-svc.core-load.svc.cluster.local:5000
          image: grpc-client-console:latest
          imagePullPolicy: Never
          name: compute-client
          resources: {}
status: {}
---
</code></pre>
<p><strong>Problem</strong></p>
<p>The client is not able to connect to the gRPC server with the name <code>compute-server-svc.core-load.svc.cluster.local:5000</code>. I tried <code>compute-server-svc.core-load</code> as well, but I'm facing the issue below:</p>
<pre><code>PS E:\study\core\k8sgrpc\KubernetesLoadSample> k get pods -n core-load
NAME READY STATUS RESTARTS AGE
compute-client-bff5f666-cjwf5 1/1 Running 0 15s
compute-server-545567f589-5blkv 1/1 Running 0 15s
compute-server-545567f589-bv4r2 1/1 Running 0 15s
compute-server-545567f589-mdp2x 1/1 Running 0 15s
compute-server-545567f589-wdff5 1/1 Running 0 15s
PS E:\study\core\k8sgrpc\KubernetesLoadSample> k logs compute-client-bff5f666-cjwf5 -n core-load --tail 5
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
07/09/2021 17:18:35 : tried to connect : compute-server-svc.core-load.svc.cluster.local:5000 Crashed : Status(StatusCode=Unavailable, Detail="failed to connect to all addresses")
</code></pre>
<p>I didn't find a solution in the similar Stack Overflow questions, so I'm creating this one.</p>
<p>Can anyone please let me know what i've missed or doing wrong?</p>
<p>TIA</p>
| <p>You defined your service with the:</p>
<pre><code>clusterIP: None
</code></pre>
<p>which is used to create a headless service. This may be the cause of the problem, so removing it could resolve your error.</p>
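<p>With that line removed, the service from the question becomes a regular ClusterIP service (same names and ports as in the question's manifest):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: compute-server-svc
  namespace: core-load
spec:
  ports:
    - name: grpc
      port: 5000
      targetPort: 80
      protocol: TCP
  selector:
    app: compute-server-svc
</code></pre>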
<hr />
<p>When you create a <code>ClusterIP</code> type service (which is the default type) Kubernetes automatically assign the service a virtual IP (called also cluster IP, as the type suggests) which is then used to proxy communication towards the Pods selected by the service in question.</p>
<p>This means that there is a "new" IP address (visible only from inside the cluster), different from the various IP assigned to the Pods (or single Pod) behind the service, which then routes the traffic with a sort of load balancing to the Pods standing behind.</p>
<p>If you specify</p>
<pre><code>clusterIP: None
</code></pre>
<p>you create a headless service. You are basically telling Kubernetes that you don't want a virtual IP to be assigned to the service. There is no load balancing by the proxy, as there is no IP to load balance.</p>
<p>Instead, the DNS configuration will return A records (the IP addresses) for each of the Pods selected by the service.</p>
<p>This can be useful if your application needs to discover each Pod behind the service and then do whatever they want with the IP address on their own.</p>
<p>Maybe to load balance with an internal implementation, maybe because different Pods (behind the same service) are used for different things... or maybe because each one of those Pods wants to discover the other Pods (think of multi-instance primary applications such as Kafka or ZooKeeper, for example).</p>
<hr />
<p>I'm not sure what exactly your problem is; it may depend on how the hostname is resolved by that particular app. But you shouldn't use a headless service unless you need to decide which of the Pods selected by the svc you want to contact.</p>
<p>Using DNS round robin to load balance is also (almost always) not a good idea compared to a virtual IP, as applications may cache the DNS resolution, and if Pods then change IP address (since Pods are ephemeral, they change IP address whenever they restart, for example), there could be network problems in reaching them... and more.</p>
<p>There's a huge amount of info in the docs:
<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
|
<p>I set up a local Kubernetes cluster using Kind, and then I run Apache-Airflow on it using Helm.</p>
<p>To actually create the pods and run Airflow, I use the command:</p>
<pre><code>helm upgrade -f k8s/values.yaml airflow bitnami/airflow
</code></pre>
<p>which uses the chart <code>airflow</code> from the <code>bitnami/airflow</code> repo, and "feeds" it with the configuration of <code>values.yaml</code>.
The file <code>values.yaml</code> looks something like:</p>
<pre><code>web:
  extraVolumeMounts:
    - name: functions
      mountPath: /dir/functions/
  extraVolumes:
    - name: functions
      hostPath:
        path: /dir/functions/
        type: Directory
</code></pre>
<p>where <code>web</code> is one component of Airflow (and one of the pods on my setup), and the directory <code>/dir/functions/</code> is successfully mapped from the cluster inside the pod. However, I fail to do the same for a single, specific file, instead of a whole directory.</p>
<p>Does anyone know the syntax for that? Or have an idea for an alternative way to map the file into the pod (its whole directory is successfully mapped into the cluster)?</p>
| <p>There is a <code>File</code> type for <code>hostPath</code> which should behave like you desire, as it states in the <a href="https://v1-17.docs.kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>File: A file must exist at the given path</p>
</blockquote>
<p>which you can then use with the precise file path in <code>mountPath</code>. Example:</p>
<pre><code>web:
  extraVolumeMounts:
    - name: singlefile
      mountPath: /path/to/mount/the/file.txt
  extraVolumes:
    - name: singlefile
      hostPath:
        path: /path/on/the/host/to/the/file.txt
        type: File
</code></pre>
<p>Or if it's not a problem, you could mount the whole directory containing it at the expected path.</p>
<hr />
<p>With this said, I want to point out that using <code>hostPath</code> is almost never a good idea.</p>
<p>If you have a cluster with more than one node, mounting a <code>hostPath</code> in your Pod doesn't restrict it to run on a specific host (even though you can enforce that with <code>nodeSelectors</code> and so on), which means that if the Pod starts on a different node, it may behave differently, not finding the directory and/or file it was expecting.</p>
<p>But even if you restrict the application to run on a specific node, you need to be OK with the idea that, if that node becomes unavailable, the Pod will not be rescheduled somewhere else on its own... meaning you'll need manual intervention to recover from a single-node failure (unless the application is multi-instance and can survive one instance going down).</p>
<hr />
<p>To conclude:</p>
<ul>
<li>if you want to mount a path on a particular host, for whatever reason, I would go for <a href="https://v1-17.docs.kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">local</a> volumes.. or at least use hostPath and restrict the Pod to run on the specific node it needs to run on.</li>
<li>if you want to mount small, textual files, you could consider mounting them from <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-files-from-a-pod" rel="nofollow noreferrer">ConfigMaps</a></li>
<li>if you want to configure an application, providing a set of files at a certain path when the app starts, you could go for an init container <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="nofollow noreferrer">which prepares files for the main container in an emptyDir volume</a></li>
</ul>
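<p>For the ConfigMap option, a single key can be projected as exactly one file using <code>subPath</code>. The sketch below assumes the chart's <code>extraVolumes</code>/<code>extraVolumeMounts</code> accept standard volume definitions; the ConfigMap name and file name are placeholders:</p>
<pre><code>web:
  extraVolumeMounts:
    - name: functions-cm
      mountPath: /dir/functions/helper.txt  # mount point for the single file
      subPath: helper.txt                   # project only this key
  extraVolumes:
    - name: functions-cm
      configMap:
        name: functions                     # hypothetical ConfigMap name
</code></pre>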
|
<p>Any idea how to go about this? I can't find much clear info on Google about measuring errors (4xx and 5xx) on my service endpoints. My services are up, and when I delete pods just as a test, I can see in the Blackbox metrics that Prometheus records an error, but it isn't broken down by type, like 4xx or 5xx.</p>
<p>Edit 1:</p>
<ul>
<li>Yes, I have set up my cluster; at this stage it is experimental, running on VirtualBox + Vagrant + K3s. I have created two simple services, one frontend and one backend, and configured Prometheus jobs to discover the services and probe their uptime via the Blackbox exporter. My goal is to get metrics on a Grafana dashboard that measure the number of 4xx or 5xx errors for all requests to these services within a period of time. Currently what's on my mind is measuring the number of 2xx responses and reporting only non-2xx status codes, but that would include more statuses than just 4xx and 5xx.</li>
</ul>
<p>Prometheus is deployed as a helm stack, same with the Blackbox monitor. Everything is deployed on the default namespace, because at this stage is just for testing on how to achieve this goal.</p>
| <p>Based on <a href="https://groups.google.com/g/prometheus-users/c/QUY9NsLPsZk" rel="nofollow noreferrer">this topic</a>:</p>
<blockquote>
<p>Services in Kubernetes are kind of like load-balancers - they just route requests to underlying pods. The pods themselves actually contain the application that does the work and returns the status code.
You don't monitor kubernetes services <em>per-se</em> for 4xx or 5xx errors, you need to monitor the underlying application itself.</p>
</blockquote>
<p>So, you need to create an architecture to monitor your application. Prometheus only collects metrics and makes graphs out of them; it does not process anything by itself. Metrics must be exposed by the application. <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus/" rel="nofollow noreferrer">Here</a> you can find the guide - Kubernetes monitoring with Prometheus, the ultimate guide. It is very comprehensive and explains perfectly how to monitor an application. For you, the most interesting part should be <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus/#services" rel="nofollow noreferrer">How to monitor a Kubernetes service with Prometheus</a>. You can also find there a <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus-operator-part3/" rel="nofollow noreferrer">Prometheus Operator Tutorial</a>. It could help you automate the deployment of Prometheus, Alertmanager and Grafana.</p>
<p>Once you've installed everything, you'll be able to collect metrics. It is good practice to use <a href="https://prometheus.io/docs/practices/instrumentation/#use-labels" rel="nofollow noreferrer">labels</a>. This allows you to easily distinguish between different response codes from your application.</p>
<blockquote>
<p>For example, rather than <code>http_responses_500_total</code> and <code>http_responses_403_total</code>, create a single metric called <code>http_responses_total</code> with a <code>code</code> label for the HTTP response code. You can then process the entire metric as one in rules and graphs.</p>
</blockquote>
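<p>With such a labelled counter in place, a Grafana panel can chart the 4xx/5xx error rate with a single PromQL query. The metric name <code>http_responses_total</code> and the <code>code</code> label are taken from the example above; they are assumptions about how your application exposes its metrics:</p>
<pre><code># per-status-code rate of 4xx and 5xx responses over the last 5 minutes
sum by (code) (rate(http_responses_total{code=~"4..|5.."}[5m]))
</code></pre>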
|
<p>Is it possible to promote a Kubernetes worker node to master to quickly recover from the loss of a master (1 of 3) and restore safety to the cluster? Preferably without disrupting all the pods already running on it. Bare metal deployment. Tx.</p>
| <p>It doesn't look like a worker node can be promoted to master in general. However it is easy to sort out for a specific case:</p>
<ul>
<li>Control plane node disappears from the network</li>
<li>Node is manually drained and deleted: <code>k drain node2.example.com --ignore-daemonsets --delete-local-data && k delete node node2.example.com</code></li>
<li>Some time later it reboots and rejoins the cluster</li>
</ul>
<p>Check that it has rejoined the etcd cluster:</p>
<pre><code># k exec -it etcd-node1.example.com -n kube-system -- /bin/sh
# etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key \
member list
506801cdae97607b, started, node1.example.com, https://65.21.128.36:2380, https://xxx:2379, false
8051adea81dc4c6a, started, node2.example.com, https://95.217.56.177:2380, https://xxx:2379, false
ccd32aaf544c8ef9, started, node3.example.com, https://65.21.121.254:2380, https://xxx:2379, false
</code></pre>
<p>If it is part of the cluster then re-label it:</p>
<pre><code>k label node node2.example.com node-role.kubernetes.io/control-plane=
k label node node2.example.com node-role.kubernetes.io/master=
</code></pre>
|
<p>I have a kubernetes pod running which has two containers, say A and B. They share a common volume of type <code>emptyDir</code>. I have a use case in which the application running in container A takes files uploaded by a client and places them at the mount point. It then signals container B to process the file. Container B then sends a response back to A (the response is instantaneous, not batched). Once a file is processed it can be deleted (it must be, due to obvious storage constraints). Deleting files one by one as they are processed could be a lot of deletions (or not?), so I thought of batching deletions periodically.</p>
<ol>
<li>What is the best way to delete this file?</li>
<li>Is it good to use this volume type for this usecase? If not what?</li>
</ol>
<pre><code># deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: FileProcessor
  labels:
    app: FileProcessor
spec:
  selector:
    matchLabels:
      app: FileProcessor
  template:
    metadata:
      labels:
        app: FileProcessor
    spec:
      containers:
        - image: processor:0.2
          name: Container_B
          ports:
            - containerPort: 80
              name: Container_B
          volumeMounts:
            - name: upload-files
              mountPath: /app/upload-files
        - image: sample:0.1
          name: Container_A
          ports:
            - containerPort: 8000
              name: Container_A
          volumeMounts:
            - name: upload-files
              mountPath: /app/uploads
      volumes:
        - name: upload-files
          emptyDir: {}
</code></pre>
<p>PS: This is to be deployed on GKE.
Update 1: Still looking for a better solution</p>
| <p>As you are using <code>emptyDir</code>, the files must be deleted from one of the containers in the pod. Let's look at your options here:</p>
<ol>
<li>Container A or B can delete the files after processing them.</li>
<li>Container A or B deletes the files once they reach a certain total size (say 1Gi).</li>
<li>Add another container C which periodically cleans up the files.</li>
</ol>
<p>Now, let's weigh the advantages and disadvantages of these solutions:</p>
<ul>
<li><p>If you go with solution 1, container A or B will have to do a little extra work after processing each file. If the files are not large, this extra time shouldn't be significant.</p>
</li>
<li><p>If you go with solution 2, you might save the extra work after each file. However, after a certain period container A or B will require a relatively long time to clean up those files. Furthermore, you have to add logic deciding when to clean up. If you can do it intelligently, say when your containers are idle, then this solution should fit best.</p>
</li>
<li><p>Now, if you go with solution 3, you have to ensure that your container C does not delete files that are being processed by container B.</p>
</li>
</ul>
<p>In case you want to use a different type of volume, one which can be mounted from an external pod, you can have a CronJob periodically clean up the data. The same constraint as solution 3 applies in this case.</p>
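<p>For illustration, a cleanup sidecar for option 3 could be sketched as below. The image, the 10-minute age threshold and the sleep interval are assumptions to adjust; the age check is what keeps the sidecar from deleting files that container B is still processing:</p>
<pre><code># hypothetical container C, added to the containers list of the pod spec
- image: busybox:1.35
  name: cleaner
  command:
  - /bin/sh
  - -c
  # delete files older than 10 minutes, checking every 5 minutes
  - while true; do find /app/uploads -type f -mmin +10 -delete; sleep 300; done
  volumeMounts:
  - name: upload-files
    mountPath: /app/uploads
</code></pre>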
|
<p>I have a dedicated server build of a in-development realtime multiplayer game that I'd like to run on a Kubernetes cluster. The game uses WebRTC <code>RTCDataChannel</code>s for network communication as it targets browsers. When running the game server on a Kubernetes node players can connect directly to it if I configure my Pods with <code>hostNetwork: true</code>. In the absence of <code>hostNetwork: true</code> players can still connect directly but only if they're behind well behaving NATs, if they're not then the only option for a successful connection is to introduce a TURN server to relay the traffic.</p>
<p>The reason for this appears to be that the game server runs on a Kubernetes node behind what is essentially a symmetric NAT i.e. the NATs mapping behaviour is address and port dependent. I've confirmed this by firing two STUN messages at different STUN servers from the same UDP socket within the container - the binding responses have the same public IP but different ports.</p>
<p>This NAT behaviour impacts WebRTC by reducing the success rate of players connecting directly to the game server in the absence of using a TURN server to relay traffic. Now I'd very much like to avoid relaying traffic through a TURN server where possible due to the added latency - it's not overly desirable for a realtime multiplayer game.</p>
<p>I'd have liked to just set <code>hostNetwork: true</code> and be done with it but I'm considering using <a href="https://agones.dev/site/" rel="nofollow noreferrer">Agones</a> which doesn't support it (as it takes away the ability to securely do sidecars).</p>
<p>So I'm wondering if I have any other options (ideally without introducing yet another server to relay traffic through) to tweak this NAT behaviour as I don't believe it's feasible to try to forward the full range of random UDP ports that WebRTC is going to try to use for communication?</p>
<hr />
<p><strong>Update</strong>: Reading through the <a href="https://www.rfc-editor.org/rfc/rfc8656.html" rel="nofollow noreferrer">TURN RFC</a> and I'm now thinking the below diagrammed setup may be possible using a TURN server running in the same container as the dedicated game server (thus latency introduction should be minimal).</p>
<p><a href="https://i.stack.imgur.com/nSLZm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nSLZm.png" alt="Game Server + TURN Server + Kubernetes" /></a></p>
<p>The game clients will act as the TURN Clients i.e. create the allocation on the TURN server, etc.</p>
<p>The Game Server will act as the peer with which the clients want to communicate. The Game Server itself wouldn't need to be a TURN client and if my understanding is correct not even need to know that the TURN server is a TURN server.</p>
| <p>Indeed, a TURN server can be deployed in the backend to allow exposing only a single UDP and/or TCP port to WebRTC clients. It's not the classic usage, but it's something I've seen in production a couple of times. Only the client requires the TURN server to be set up; the game server will communicate directly with the TURN server, since the relay is on the local network address. Additionally, you should force ICE relay on the client.</p>
<p>Another option could be to set a port range in the WebRTC agent configuration and forward the port range on the NAT. In that case, instead of setting a STUN server, you can manually override emitted host candidates with the external address to optimize connection establishment.</p>
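<p>For reference, forcing ICE relay on the client is a one-line change in the <code>RTCPeerConnection</code> configuration. This browser-side sketch uses a placeholder TURN URL and credentials:</p>
<pre><code>// placeholders: substitute your TURN server address and credentials
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: "turn:turn.example.com:3478?transport=udp",
    username: "player",
    credential: "secret",
  }],
  // only gather/use relay candidates, skipping host and srflx candidates
  iceTransportPolicy: "relay",
});
</code></pre>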
|
<p>I have a pod <code>egress-operator-controller-manager</code> created from <a href="https://github.com/monzo/egress-operator/blob/master/Makefile" rel="nofollow noreferrer">makefile</a> by command <code>make deploy IMG=my_azure_repo/egress-operator:v0.1</code>.
<a href="https://i.stack.imgur.com/eT2Sm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eT2Sm.png" alt="enter image description here" /></a></p>
<p>This pod was showing <code>unexpected status: 401 Unauthorized</code> error in description, so I created <code>imagePullSecrets</code> and trying to update this pod with secret by creating pod's deployment.yaml [<code>egress-operator-manager.yaml</code>] file. <br/>But when I am applying this yaml file its giving below error:</p>
<pre><code>root@Ubuntu18-VM:~/egress-operator# kubectl apply -f /home/user/egress-operator-manager.yaml
The Deployment "egress-operator-controller-manager" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"moduleId":"egress-operator"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
</code></pre>
<p><em><strong>egress-operator-manager.yaml</strong></em></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: egress-operator-controller-manager
namespace: egress-operator-system
labels:
moduleId: egress-operator
spec:
replicas: 1
selector:
matchLabels:
moduleId: egress-operator
strategy:
type: Recreate
template:
metadata:
labels:
moduleId: egress-operator
spec:
containers:
- image: my_azure_repo/egress-operator:v0.1
name: egress-operator
imagePullSecrets:
- name: mysecret
</code></pre>
<p>Can somene let me know that how can I update this pod's deployment.yaml ?</p>
| <p>Delete the deployment once and try applying the YAML again.</p>
<p>This is because Kubernetes won't allow a <strong>rolling update</strong> of the selector: once deployed, the <strong>label selectors</strong> of a <strong>Deployment</strong> are immutable and cannot be updated until you delete the existing <strong>deployment</strong></p>
<pre><code>Changing selectors leads to undefined behaviors - users are not expected to change the selectors
</code></pre>
<p><a href="https://github.com/kubernetes/kubernetes/issues/50808" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/50808</a></p>
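<p>Concretely, using the namespace and file path from the question, that means:</p>
<pre><code># the selector change is rejected, so remove the old Deployment first
kubectl delete deployment egress-operator-controller-manager -n egress-operator-system

# then apply the manifest containing the new selector/labels
kubectl apply -f /home/user/egress-operator-manager.yaml
</code></pre>
<p>Note that deleting the Deployment also deletes its pods, so expect a short downtime.</p>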
|
<p>I have created a Nginx Ingress and Service with the following code:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myservice
spec:
type: ClusterIP
selector:
name: my-app
ports:
- port: 8000
targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myingress
annotations:
kubernetes.io/ingress.class: nginx
labels:
name: myingress
spec:
rules:
- host: mydomain.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: my-service
port:
number: 8000
</code></pre>
<p>Nginx ingress installed with:
<code>helm install ingress-nginx ingress-nginx/ingress-nginx</code>.</p>
<p>I have also enabled proxy protocols for ELB. But in nginx logs I don't see the real client ip for X-Forwarded-For and X-Real-IP headers. This is the final headers I see in my app logs:</p>
<pre><code>X-Forwarded-For:[192.168.21.145] X-Forwarded-Port:[80] X-Forwarded-Proto:[http] X-Forwarded-Scheme:[http] X-Real-Ip:[192.168.21.145] X-Request-Id:[1bc14871ebc2bfbd9b2b6f31] X-Scheme:[http]
</code></pre>
<p>How do I get the real client ip instead of the ingress pod IP? Also is there a way to know which headers the ELB is sending to the ingress?</p>
| <p>One solution is to use <code>externalTrafficPolicy</code>: <code>Local</code> (see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="noreferrer">documentation</a>).</p>
<p>In fact, according to the <code>kubernetes documentation</code>:</p>
<blockquote>
<p>Due to the implementation of this feature, the source IP seen in the target container is not the original source IP of the client.
...
service.spec.externalTrafficPolicy - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: Cluster (default) and Local. Cluster obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type services, but risks potentially imbalanced traffic spreading.</p>
</blockquote>
<p>If you want to follow this route, update your <code>nginx ingress controller</code> <code>Service</code> and add the <code>externalTrafficPolicy</code> field:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress-controller
spec:
...
externalTrafficPolicy: Local
</code></pre>
<p>A possible alternative could be to use <a href="https://www.nginx.com/resources/admin-guide/proxy-protocol/" rel="noreferrer">Proxy protocol</a> (see <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-proxy-protocol" rel="noreferrer">documentation</a>)</p>
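<p>If you go with the Proxy protocol route instead, it has to be enabled on both ends: on the ELB via a Service annotation, and in the ingress-nginx ConfigMap. A sketch, assuming your Helm release created a ConfigMap named <code>ingress-nginx-controller</code> (the name may differ in your setup):</p>
<pre><code># ELB side: ask AWS to wrap connections in the PROXY protocol
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
---
# nginx side: parse the PROXY protocol header to recover the client IP
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
data:
  use-proxy-protocol: "true"
</code></pre>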
|
<p>I deployed a PVC, which dynamically created a PV.
After that I deleted the PVC and now my PV looks like below:</p>
<pre><code>PS Kubernetes> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-1b59942c-eb26-4603-b78e-7054d9418da6 2G RWX Retain Released default/db-pvc hostpath 26h
</code></pre>
<p>When I recreate my PVC, that creates a new PV.
Is there a way to reattach the existing PV to my PVC ?
Is there a way to do it automatically ?</p>
<p>I tried to attach the PV with my PVC using "volumeName" option, but it did not work.</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db-pvc Pending pvc-1b59942c-eb26-4603-b78e-7054d9418da6 0 hostpath 77s
</code></pre>
| <p>When a PVC is deleted, the PV stays in the "Released" state with the claimRef uid of the deleted PVC.</p>
<p>To reuse a PV, you need to delete the claimRef to make it go to the "Available" state.</p>
<p>You may either edit the PV and manually delete the claimRef section, or run the following patch command:</p>
<pre><code>kubectl patch pv pvc-1b59942c-eb26-4603-b78e-7054d9418da6 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
</code></pre>
<p>Subsequently, you recreate the PVC.</p>
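<p>Once the PV is back in the "Available" state, the new PVC can pin to it explicitly via <code>spec.volumeName</code>, matching the capacity, access modes and storage class of your PV:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc
spec:
  accessModes:
  - ReadWriteMany          # matches the RWX mode of the PV
  storageClassName: hostpath
  volumeName: pvc-1b59942c-eb26-4603-b78e-7054d9418da6
  resources:
    requests:
      storage: 2G
</code></pre>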
|
<p>I am new to Kubernetes. I have a Kubernetes secret yaml file:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque
data:
API_KEY: 123409uhttt
SECRET_KEY: yu676jfjehfuehfu02
</code></pre>
<p>that I have encoded using gpg encryption:</p>
<pre><code> gpg -a --symmetric --cipher-algo AES256 -o "secrets.yaml.gpg" "secrets.yaml"
</code></pre>
<p>and decrypting it in github action's workflow like this:</p>
<pre><code>gpg -q --batch --yes --decrypt --passphrase=$GPG_SECRET my/location/to/secrets.yaml.gpg | kubectl apply -n $NAMESPACE -f -
</code></pre>
<p>When I run:</p>
<pre><code>kubectl get secret my-secret -n my-namespace -o yaml
</code></pre>
<p>I get yaml showing correct values set for API_KEY and SECRET_KEY, like this:</p>
<pre><code>apiVersion: v1
data:
API_KEY: 123409uhttt
SECRET_KEY: yu676jfjehfuehfu02
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"API_KEY":"123409uhttt","SECRET_KEY":"yu676jfjehfuehfu02"},"kind":"Secret","metadata":{"annotations":{},"name":"my-secret","namespace":"my-namespace"},"type":"Opaque"}
creationTimestamp: "2021-07-12T23:28:56Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:API_KEY: {}
f:SECRET_KEY: {}
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:type: {}
manager: kubectl-client-side-apply
operation: Update
time: "2021-07-10T23:28:56Z"
name: my-secret
namespace: my-namespace
resourceVersion: "29813715"
uid: 89a34b6d-914eded509
type: Opaque
</code></pre>
<p>But when application requests using SECRET_KEY and API_KEY, it shows these values in broken encoding. I get these values printed When I log them:</p>
<pre><code>Api_Key - Ñ¢¹ï¿½ï¿½4yΓΒ·ΓΒΓ―ΒΏΒ½ΓΒ―uΓ―ΒΏΒ½Γ―ΒΏ8
Secret_Key - Γ―ΒΏΒ½VΓ―ΒΏΒ½sΓ―ΒΏΒ½Γ―ΒΏΒ½Γ[Γ―ΒΆΓΒΏzoΓ―Β½9sΓ―ΒΏΒ½Γ―ΒΏΒ½{Γ―ΒΏΒ½Γ―ΒΏ
</code></pre>
<p>When I don't use Api_Key and Secret_Key from secrets.yaml (as a hardcoded value in application) then it works as expected.</p>
<p>I need help to access secret data (Api_Key and Secret_Key) with correct values in container running node js application.</p>
| <p>It appears as though the values of your secrets are not base64 encoded.
Either change the <code>data</code> field to <code>stringData</code>, which does not need to be base64 encoded, or encode the values of your secrets first.</p>
<p>e.g. <code>echo -n "$SECRET_KEY" | base64</code> and use this value in your secrets.
The problem you describe happens because the values of the secret are base64 decoded when injected into your pods.</p>
<p>However, when you try to decode the values you supplied by</p>
<p><code>echo "123409uhttt" | base64 -d</code>
you get garbage bytes followed by <code>base64: invalid input</code></p>
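<p>For example, encoding the API key from the question (<code>printf</code> avoids encoding the trailing newline that <code>echo</code> would add):</p>

```shell
# encode the raw value for use under `data:` in the Secret
printf '%s' '123409uhttt' | base64
# -> MTIzNDA5dWh0dHQ=

# round-trip check: decoding yields the original value
printf '%s' 'MTIzNDA5dWh0dHQ=' | base64 -d
# -> 123409uhttt
```

<p>The encoded strings are what belongs under <code>data:</code>; the raw ones work under <code>stringData:</code>.</p>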
|
<p>I'm a novice k8s engineer.
Until now, I have managed the k8s certificate by manually renewing it once a year.</p>
<p>However, I became curious about how to set the certificates below for more than 1 year in the first place.</p>
<pre><code>[root@master ~]# kubeadm alpha certs check-expiration
CERTIFICATE EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
admin.conf May 21, 2022 01:29 UTC 311d no
apiserver May 21, 2022 01:29 UTC 311d no
apiserver-etcd-client May 21, 2022 01:29 UTC 311d no
apiserver-kubelet-client May 21, 2022 01:29 UTC 311d no
controller-manager.conf May 21, 2022 01:29 UTC 311d no
etcd-healthcheck-client May 21, 2022 01:29 UTC 311d no
etcd-peer May 21, 2022 01:29 UTC 311d no
etcd-server May 21, 2022 01:29 UTC 311d no
front-proxy-client May 21, 2022 01:29 UTC 311d no
scheduler.conf May 21, 2022 01:29 UTC 311d no
</code></pre>
<p>Can these certificates be extended by more than a year? (nearly 10 years..?)</p>
<p>k8s version is 1.16</p>
<p>I can use a private certificate. It doesn't matter which way.
(There is a limit to searching because I am not good at English.)</p>
<p>Please tell me how to renew the K8s certificate(api..) only once every 10 years!</p>
| <p>It is not recommended to have certificates valid for more than one year. Kubernetes provides a hassle-free way to create and renew certs every year.
<a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/" rel="nofollow noreferrer">Kubernetes Certificates</a>
Since you need it for some special requirement:</p>
<ul>
<li>Check certs expiration</li>
<li>Back up the existing Kubernetes certificates. Copy all the certs in
the pki dir to somewhere safe with controlled access.</li>
<li>Back up the existing and necessary configuration files</li>
<li>Add the cluster signing duration flag to kube-controller-manager (named <code>--experimental-cluster-signing-duration</code> in Kubernetes 1.16; renamed to <code>--cluster-signing-duration</code> later). <a href="https://kubernetes.io/docs/tasks/tls/certificate-rotation/" rel="nofollow noreferrer">kubernetes doc for signing duration</a></li>
</ul>
<p>Edit /etc/kubernetes/manifests/kube-controller-manager.yaml</p>
<pre><code> apiVersion: v1
kind: Pod
metadata:
...
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
...
- --experimental-cluster-signing-duration=87600h
...
...
</code></pre>
<p>87600h ~ 10 years</p>
<ul>
<li><p>Renew all certs: <code>kubeadm alpha certs renew all --config /etc/kubernetes/kubeadm-config.yaml</code></p>
</li>
<li><p>Follow the CSR request and approve method.</p>
</li>
<li><p>Restart the necessary components like etcd, kube-apiserver, kube-scheduler, kube-controller-manager and kubelet</p>
</li>
<li><p>Check the new cert expiry</p>
</li>
</ul>
<p>Please try this in a test lab scenario before doing it on any production environments while the clusters are running</p>
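<p>To double-check a specific certificate after renewal, <code>openssl</code> can read the expiry date straight from the file, e.g. for the API server cert:</p>
<pre><code># print the notAfter date of the renewed API server certificate
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt

# or re-run the kubeadm overview shown in the question
kubeadm alpha certs check-expiration
</code></pre>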
|
<p>I have a statefulset and I need to know what is the current replica count from inside the pod. To do so, I tried:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: sample-mariadb
namespace: demo
spec:
replicas: 3
template:
spec:
containers:
- env:
- name: REPLICAS
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.replicas
</code></pre>
<p>and got this error:</p>
<blockquote>
<p>Warning FailedCreate 4m15s (x17 over 9m43s) statefulset-controller create Pod sample-mariadb-0 in StatefulSet sample-mariadb failed error: Pod "sample-mariadb-0" is invalid: spec.containers[1].env[3].valueFrom.fieldRef.fieldPath: Invalid value: "spec.replicas": error converting fieldPath: field label not supported: spec.replicas</p>
</blockquote>
<p>How can I get current replica count from inside the pod?</p>
| <p>You can only expose fields that are part of the Pod specification through the downward API. The <code>spec.replicas</code> field is part of the StatefulSet specification, not the underlying pod's. The <code>template</code> part of a StatefulSet is the pod specification. Hence, you are getting this error.</p>
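<p>If the pod really needs the replica count, one workaround (outside the downward API) is to query the API server from inside the pod using the ServiceAccount token. This sketch assumes RBAC permitting <code>get</code> on statefulsets and that <code>curl</code> and <code>jq</code> are available in the image:</p>
<pre><code>SA=/var/run/secrets/kubernetes.io/serviceaccount
curl -s --cacert "$SA/ca.crt" \
  -H "Authorization: Bearer $(cat "$SA/token")" \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/demo/statefulsets/sample-mariadb \
  | jq '.spec.replicas'
</code></pre>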
|
<p>Is there any way to access Google cloud Kubernetes persistent volume data without using pod. I cannot start pod due to data corruption in persistent volume. Have any command line tool or any other way.</p>
| <p>If you have concerns about running a pod with your specific application, you can instead run a plain <strong>Ubuntu</strong> pod, attach it to the <strong>PVC</strong>, and access the data from there.</p>
<p>Another option is to clone the PV and PVC, perform the testing on the newly created PV and PVC, and keep the old ones as a backup.</p>
<p>For cloning PV and PVC you can also use the tool : <a href="https://velero.io/" rel="nofollow noreferrer">https://velero.io/</a></p>
<p>You can also attach the PVC to the POD in <strong>read-only</strong> mode and try accessing the data.</p>
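<p>A minimal inspection pod could look like this; <code>my-pvc</code> is a placeholder for your claim name, and the <code>sleep</code> keeps the container alive so you can <code>kubectl exec</code> into it:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - name: inspector
    image: ubuntu:22.04
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true       # read-only, to avoid touching corrupted data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc    # placeholder: your PVC name
</code></pre>
<p>Then <code>kubectl exec -it pvc-inspector -- bash</code> and browse <code>/data</code>.</p>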
|
<p>Hiii!!!</p>
<p>I have deployed Keyrock, Apache and MySQL to Kubernetes.</p>
<p>After I used the HPA and my stateful database scaled up, I can't log in to my simple site.
This is my MySQL code:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 1
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7.21
imagePullPolicy: Always
resources:
requests:
memory: 50Mi #50
cpu: 50m
limits:
memory: 500Mi #220?
cpu: 400m #65
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-storage
mountPath: /var/lib/mysql
subPath: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret # MARK P
key: password
- name: MYSQL_ROOT_HOST
valueFrom:
secretKeyRef:
name: mysql-secret # MARK P
key: host
volumeClaimTemplates:
- metadata:
name: mysql-storage
spec:
accessModes:
- ReadWriteOnce
storageClassName: standard #?manual
resources:
requests:
storage: 5Gi
</code></pre>
<p>And it's headless service:</p>
<pre><code># Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql #x-app #
spec:
ports:
- name: mysql
port: 3306
clusterIP: None
selector:
app: mysql #x-app
</code></pre>
<p>Anyone can help me?</p>
<p>I'm using gke.. Keyrock and apache are deployments and mysql is statefulset..</p>
<p>Thank you!!</p>
| <p>You can't just scale up a standalone database. Using HPA for stateless applications works, but not for stateful applications like a database.</p>
<p>Increasing the replica count of your StatefulSet will just create another pod with a new MySQL instance. This new replica isn't aware of the data in your old replica. Basically, you now have two completely different databases. That's why you can't log in after scaling up: when your request gets routed to the new replica, that instance does not have the user info that you created in the old replica.</p>
<p>In this case, you should deploy your database in clustered mode. Then, you can take advantage of horizontal scaling.</p>
<p>I recommend to use a database operator like <a href="https://github.com/mysql/mysql-operator" rel="nofollow noreferrer">mysql/mysql-operator</a>, <a href="https://github.com/presslabs/mysql-operator" rel="nofollow noreferrer">presslabs/mysql-operator</a>, or <a href="https://kubedb.com/" rel="nofollow noreferrer">KubeDB</a> to manage your database in Kubernetes. Out of these operators, KubeDB has autoscaling feature. I am not sure that other operators provide this feature.</p>
<p><strong>Disclosure:</strong> I am one of the developer of KubeDB operator.</p>
|
<p>From my understanding, we can use Ingress class annotation to use multiple Nginx ingress controllers within a cluster. But I have a use case where I need to use multiple ingress controllers within the same namespace to expose different services in the same namespace using the respective ingress rules created.
I follow <a href="https://kubernetes.github.io/ingress-nginx/deploy/#azure" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#azure</a> to create a sample ingress controller.
What all params should I modify if I want to have multiple Nginx ingress controllers within the same namespace.</p>
<p>Thanks in advance</p>
| <p>It's not clear from your post if you intend to deploy multiple nginx-ingress controllers or different ingress controllers. However, both can be deployed in the same namespace.</p>
<p>In the case of deploying different ingress controllers, it should be easy enough to deploy in the same namespace and use class annotations to specify which ingress rule is processed by which Ingress-controller.
However, if you want to deploy multiple nginx-ingress-controllers in the same namespace, you would have to update the names/labels or other identifiers so they don't clash.</p>
<p>E.g - The link you mentioned, <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml</a>
, would need to be updated as -</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-internal
app.kubernetes.io/instance: ingress-nginx-internal
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-internal
namespace: ingress-nginx
automountServiceAccountToken: true
</code></pre>
<p>assuming we call the 2nd nginx-ingress-controller as <em>ingress-nginx-internal</em>; Likewise, all resources created in your link need to be modified and to deploy them in the same namespace.</p>
<p>In addition, you would have to update the deployment args to specify the ingress.class, your controllers would target -</p>
<pre><code>spec:
template:
spec:
containers:
- name: nginx-ingress-internal-controller
args:
- /nginx-ingress-controller
- '--ingress-class=nginx-internal'
- '--configmap=ingress/nginx-ingress-internal-controller'
</code></pre>
<p>The link <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/</a> explains how to control multiple ingress controllers.</p>
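<p>An Ingress rule meant for the second controller would then carry the matching class annotation; the host and service name below are placeholders:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
  annotations:
    # must match the --ingress-class argument of the second controller
    kubernetes.io/ingress.class: nginx-internal
spec:
  rules:
  - host: internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-app
            port:
              number: 80
</code></pre>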
|
| <p>I have deployed Keycloak on a Kubernetes cluster and I want to access it with an ingress path URL, but I am getting 503 Service Unavailable when trying to access it. With the cluster IP I am able to access Keycloak. With /auth I am able to access the main page of Keycloak, i.e. <a href="https://my-server.com/keycloak-development/auth/" rel="nofollow noreferrer">https://my-server.com/keycloak-development/auth/</a>, but when I try to access the admin console it goes to a 503 error.</p>
<p>deployment.yaml</p>
<pre><code>---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "keycloak-development"
namespace: "development"
spec:
selector:
matchLabels:
app: "keycloak-development"
replicas: 1
strategy:
type: "RollingUpdate"
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template:
metadata:
labels:
app: "keycloak-development"
spec:
containers:
-
name: "keycloak-development"
image: "mykeycloak-image:latest"
imagePullPolicy: "Always"
env:
-
name: "NODE_ENV"
value: "development"
-
name: "PROXY_ADDRESS_FORWARDING"
value: "true"
-
name: "KEYCLOAK_URL"
value: "https://my-server.com/keycloak-development/"
ports:
-
containerPort: 53582
imagePullSecrets:
-
name: "keycloak"
</code></pre>
<p>service.yaml</p>
<pre><code>--
apiVersion: "v1"
kind: "Service"
metadata:
name: "keycloak-development"
namespace: "development"
labels:
app: "keycloak-development"
spec:
ports:
-
port: 53582
targetPort: 8080
selector:
app: "keycloak-development"
</code></pre>
<p>ingress.yaml</p>
<pre><code>---
apiVersion: "networking.k8s.io/v1beta1"
kind: "Ingress"
metadata:
name: "keycloak-development-ingress"
namespace: "development"
annotations:
nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
rules:
-
host: "my-server.com"
http:
paths:
-
backend:
serviceName: "keycloak-development"
servicePort: 53582
path: "/keycloak-development/(.*)"
</code></pre>
<p>dockerfile</p>
<pre><code>FROM registry.access.redhat.com/ubi8-minimal
ENV KEYCLOAK_VERSION 12.0.1
ENV JDBC_POSTGRES_VERSION 42.2.5
ENV JDBC_MYSQL_VERSION 8.0.22
ENV JDBC_MARIADB_VERSION 2.5.4
ENV JDBC_MSSQL_VERSION 8.2.2.jre11
ENV LAUNCH_JBOSS_IN_BACKGROUND 1
ENV PROXY_ADDRESS_FORWARDING false
ENV JBOSS_HOME /opt/jboss/keycloak
ENV LANG en_US.UTF-8
ARG GIT_REPO
ARG GIT_BRANCH
ARG KEYCLOAK_DIST=https://github.com/keycloak/keycloak/releases/download/$KEYCLOAK_VERSION/keycloak-$KEYCLOAK_VERSION.tar.gz
USER root
RUN microdnf update -y && microdnf install -y glibc-langpack-en gzip hostname java-11-openjdk-headless openssl tar which && microdnf clean all
ADD tools /opt/jboss/tools
ENV KEYCLOAK_USER admin
ENV KEYCLOAK_PASSWORD admin
RUN /opt/jboss/tools/build-keycloak.sh
USER 1000
EXPOSE 8080
EXPOSE 8443
ENTRYPOINT [ "/opt/jboss/tools/docker-entrypoint.sh" ]
CMD ["-b", "0.0.0.0"]
</code></pre>
<p>Note:- I am able to access keycloak and admin page with cluster-ip</p>
| <p>After a lot of searching I found the solution: we need to add these env variables to our deployment.yaml file for it to work</p>
<ol>
<li>KEYCLOAK_USER</li>
<li>KEYCLOAK_PASSWORD</li>
<li>PROXY_ADDRESS_FORWARDING (value:"true")</li>
<li>KEYCLOAK_FRONTEND_URL (In my case it was something like this:- <a href="https://my-server.com/keycloak-development/auth/" rel="nofollow noreferrer">https://my-server.com/keycloak-development/auth/</a> )</li>
<li>KEYCLOAK_ADMIN_URL (In my case value for it was something like this:- <a href="https://my-server.com/keycloak-development/auth/realms/master/admin/" rel="nofollow noreferrer">https://my-server.com/keycloak-development/auth/realms/master/admin/</a>)</li>
</ol>
<p>For Docker image you can use (quay.io/keycloak/keycloak:8.0.2)</p>
<p>While accessing the Keycloak application, if you are using ingress-based routing you need to add /auth/ to your ingress path URL (something like this: <a href="https://my-server.com/keycloak-development/auth/" rel="nofollow noreferrer">https://my-server.com/keycloak-development/auth/</a> )</p>
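<p>Put together, the relevant <code>env</code> section of the deployment looks roughly like this (URLs taken from my setup above; adjust to your domain, and prefer a Secret for the credentials):</p>
<pre><code>env:
- name: KEYCLOAK_USER
  value: "admin"
- name: KEYCLOAK_PASSWORD
  value: "admin"            # use a Secret in real deployments
- name: PROXY_ADDRESS_FORWARDING
  value: "true"
- name: KEYCLOAK_FRONTEND_URL
  value: "https://my-server.com/keycloak-development/auth/"
- name: KEYCLOAK_ADMIN_URL
  value: "https://my-server.com/keycloak-development/auth/realms/master/admin/"
</code></pre>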
|
| <p>Is there a way to find the immutable fields in a workload's spec? I could see there are a few fields mentioned as immutable in some of the workload resource documentation, but for example in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> it is not clear which fields are immutable. Is there a better way to find out?</p>
<p>Sorry, I am not so familiar with reading the Kubernetes API spec yet, so I couldn't figure it out. Is there a better way?</p>
<p>Thanks in advance, Naga</p>
| <p>Welcome to the community.</p>
<p>Unfortunately there's no such list with all <code>immutable fields</code> combined in one place.</p>
<p>There are two options:</p>
<ol>
<li>As you started through reading documentation and see if this is specified explicitly.</li>
<li>Start with <code>kubernetes API</code> description. You can find it here: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#-strong-api-overview-strong-" rel="nofollow noreferrer">Kubernetes API</a>. This is also available in more human-readable form <a href="https://kubernetes.io/docs/reference/kubernetes-api/" rel="nofollow noreferrer">here</a>. Same applies here - it's not specified explicitly whether field is immutable or not.</li>
</ol>
<p>For instance all objects and fields for <code>statefulset</code> can be found <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#statefulset-v1-apps" rel="nofollow noreferrer">here</a>.</p>
<p>(I will update it if I find a better way)</p>
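<p>A practical trick is to let the API server tell you: submit the modified manifest with a server-side dry run, and any immutable-field violation is reported without changing anything:</p>
<pre><code># validates against the live API server without persisting;
# an immutable field produces an error such as "field is immutable"
kubectl apply --dry-run=server -f statefulset.yaml
</code></pre>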
|
| <p>I'm aware of the concept "provisioner", but I do not understand what the in-tree EBS driver means.
Is ebs.csi.aws.com the CSI driver maintained by AWS, and the other maintained by Kubernetes itself?
Is one better than the other?</p>
| <p>As per <a href="https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta/#why-are-we-migrating-in-tree-plugins-to-csi" rel="noreferrer">the official documentation</a>:</p>
<blockquote>
<p>Prior to CSI, Kubernetes provided a powerful volume plugin system. These volume plugins were βin-treeβ meaning their code was part of the core Kubernetes code and shipped with the core Kubernetes binaries. However, adding support for new volume plugins to Kubernetes was challenging. Vendors that wanted to add support for their storage system to Kubernetes (or even fix a bug in an existing volume plugin) were forced to align with the Kubernetes release process. In addition, third-party storage code caused reliability and security issues in core Kubernetes binaries and the code was often difficult (and in some cases impossible) for Kubernetes maintainers to test and maintain. Using the Container Storage Interface in Kubernetes resolves these major issues.</p>
</blockquote>
<blockquote>
<p>As more CSI Drivers were created and became production ready, we wanted all Kubernetes users to reap the benefits of the CSI model. However, we did not want to force users into making workload/configuration changes by breaking the existing generally available storage APIs. The way forward was clear - we would have to replace the backend of the βin-tree pluginβ APIs with CSI.</p>
</blockquote>
<p>So answering your question - yes, ebs.csi.aws.com is maintained by AWS while the in-tree plugin is maintained by Kubernetes but it seems like they've stopped implementing new features as per <a href="https://grepmymind.com/its-all-about-the-data-a-journey-into-kubernetes-csi-on-aws-f2b998676ce9" rel="noreferrer">this article</a>:</p>
<blockquote>
<p>The idea of this journey started picking up steam when I realized that the in-tree storage plugins were deprecated and no new enhancements were being made to them starting with Kubernetes 1.20. When I discovered that simply switching from gp2 to gp3 volumes meant I had to start using the AWS CSI Driver I realized I was behind the times.</p>
</blockquote>
<p>Answering your last question it's probably better to use ebs.csi.aws.com as per <a href="https://aws.amazon.com/about-aws/whats-new/2021/05/amazon-ebs-container-storage-interface-driver-is-now-generally-available/" rel="noreferrer">this note</a>:</p>
<blockquote>
<p>The existing in-tree EBS plugin is still supported, but by using a CSI
driver, you benefit from the decoupling between the Kubernetes
upstream release cycle and the CSI driver release cycle.</p>
</blockquote>
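<p>To make the difference concrete, here is a sketch of two <code>StorageClass</code> definitions - one using the in-tree provisioner and one using the CSI driver. The class names and volume types below are illustrative:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-intree
provisioner: kubernetes.io/aws-ebs   # in-tree plugin, shipped with core Kubernetes
parameters:
  type: gp2
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi
provisioner: ebs.csi.aws.com         # CSI driver, maintained by AWS
parameters:
  type: gp3                          # gp3 volumes require the CSI driver
</code></pre>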
|
<p>I have some questions regarding my minikube cluster, specifically why there needs to be a tunnel, what the tunnel actually is, and where the port numbers come from.</p>
<h2>Background</h2>
<p>I'm obviously a total kubernetes beginner...and don't have a ton of networking experience.</p>
<p>Ok. I have the following docker image which I pushed to docker hub. It's a hello express app that just prints out "Hello world" at the <code>/</code> route.</p>
<p>DockerFile:</p>
<pre><code>FROM node:lts-slim
RUN mkdir /code
COPY package*.json server.js /code/
WORKDIR /code
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>
<p>I have the following pod spec:</p>
<p>web-pod.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: web-pod
spec:
containers:
- name: web
image: kahunacohen/hello-kube:latest
ports:
- containerPort: 3000
</code></pre>
<p>The following service:</p>
<p>web-service.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
type: NodePort
selector:
app: web-pod
ports:
- port: 8080
targetPort: 3000
protocol: TCP
name: http
</code></pre>
<p>And the following deployment:</p>
<p>web-deployment.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-deployment
spec:
replicas: 2
selector:
matchLabels:
app: web-pod
service: web-service
template:
metadata:
labels:
app: web-pod
service: web-service
spec:
containers:
- name: web
image: kahunacohen/hello-kube:latest
ports:
- containerPort: 3000
protocol: TCP
</code></pre>
<p>All the objects are up and running and look good after I create them with kubectl.</p>
<p>I do this:</p>
<pre><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h5m
web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
</code></pre>
<ol start="4">
<li>Then, as per a book I'm reading, if I do:</li>
</ol>
<pre><code>$ curl $(minikube ip):8080 # or :32177, # or :3000
</code></pre>
<p>I get no response.</p>
<p>I found, however, that when I do the following, I can access the app by going to <code>http://127.0.0.1:52650/</code>:</p>
<pre><code>$ minikube service web-service
|-----------|-------------|-------------|---------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|---------------------------|
| default | web-service | http/8080 | http://192.168.49.2:32177 |
|-----------|-------------|-------------|---------------------------|
* Starting tunnel for service web-service.
|-----------|-------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-------------|-------------|------------------------|
| default | web-service | | http://127.0.0.1:52472 |
|-----------|-------------|-------------|------------------------|
</code></pre>
<h2>Questions</h2>
<ol>
<li>What is this "tunnel" and why do we need it?</li>
<li>What is the targetPort (8080) for?</li>
<li>What does this line mean when I do <code>kubectl get services</code>:</li>
</ol>
<pre><code>web-service NodePort 10.104.15.61 <none> 8080:32177/TCP 25m
</code></pre>
<p>Specifically, what does that port mapping mean, and where does <code>32177</code> come from?</p>
<ol start="4">
<li>Is there some kind of problem with simply mapping the internal port to the same port number externally, e.g. 3000:3000? If so, do we specifically have to provide this mapping?</li>
</ol>
| <p>Let me answer all of your questions.</p>
<p>0 - There's no need to create pods separately (unless it's something to test); this should be done by creating deployments (or statefulsets, depending on the app and needs), which will create a <code>replicaset</code> that is responsible for keeping the right number of pods in operational condition (you can get familiar with <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments in Kubernetes</a>).</p>
<hr />
<p>1 - The <a href="https://minikube.sigs.k8s.io/docs/commands/tunnel/" rel="nofollow noreferrer">tunnel</a> is used to expose the service from inside the VM where minikube is running to the host machine's network. It works with the <code>LoadBalancer</code> service type. Please refer to <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">access applications in minikube</a>.</p>
<p>1.1 - The reason why the application is not accessible on <code>localhost:NodePort</code> is that the NodePort is exposed within the VM where <code>minikube</code> is running, not on your local machine.</p>
<p>You can find the minikube VM's IP by running <code>minikube ip</code> and then <code>curl $MINIKUBE_IP:$NODE_PORT</code>. You should get a response from your app.</p>
<hr />
<p>2 - <code>targetPort</code> indicates the port on the Pod/container to which the service forwards connections. Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">define the service</a>.</p>
<p>In <code>minikube</code> it may be confusing, since the output points to the <code>service port</code>, not to the <code>targetPort</code> which is defined within the service. I think the idea was to indicate on which port the <code>service</code> is accessible within the cluster.</p>
<hr />
<p>3 - As for this question, there are column headers presented; you can treat them literally. For instance:</p>
<pre><code>$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
web-service NodePort 10.106.206.158 <none> 80:30001/TCP 21m app=web-pod
</code></pre>
<p>The <code>NodePort</code> comes from your <code>web-service.yaml</code> for the <code>service</code> object. The <code>type</code> is explicitly specified and therefore a <code>NodePort</code> is allocated. If you don't specify the <code>type</code> of service, it will be created as the <code>ClusterIP</code> type and will be accessible only within the kubernetes cluster. Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Publishing Services (ServiceTypes)</a>.</p>
<p>When a service is created with the <code>ClusterIP</code> type, there won't be a <code>NodePort</code> in the output. E.g.</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web-service ClusterIP 10.106.206.158 <none> 80/TCP 23m
</code></pre>
<p>An <code>External-IP</code> will pop up when the <code>LoadBalancer</code> service type is used. Additionally, for <code>minikube</code> the address will appear once you run <code>minikube tunnel</code> in a different shell. After that, your service will be accessible on your host machine at <code>External-IP</code> + <code>service port</code>.</p>
<hr />
<p>4 - There are no issues with such a mapping. Moreover, this is the default behaviour for kubernetes:</p>
<blockquote>
<p>Note: A Service can map any incoming port to a targetPort. By default
and for convenience, the targetPort is set to the same value as the
port field.</p>
</blockquote>
<p>Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">define a service</a></p>
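<p>For completeness, here is a sketch of a service that pins the <code>NodePort</code> explicitly instead of letting Kubernetes allocate a random one from the 30000-32767 range (the port numbers below are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-pod
  ports:
  - port: 8080        # port of the service inside the cluster
    targetPort: 3000  # port the container listens on
    nodePort: 30080   # fixed port exposed on every node (must be in 30000-32767)
</code></pre>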
<hr />
<p>Edit:</p>
<p>Depending on the <a href="https://minikube.sigs.k8s.io/docs/drivers/" rel="nofollow noreferrer">driver</a> of <code>minikube</code> (usually this is <code>virtual box</code> or <code>docker</code> - can be checked on a Linux VM in <code>.minikube/profiles/minikube/config.json</code>), <code>minikube</code> can have different port forwardings. E.g. I have a <code>minikube</code> based on the <code>docker</code> driver and I can see some mappings:</p>
<pre><code>$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ebcbc898b557 gcr.io/k8s-minikube/kicbase:v0.0.23 "/usr/local/bin/entr…" 5 days ago Up 5 days 127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp minikube
</code></pre>
<p>For instance, port 22 is forwarded for ssh into the <code>minikube VM</code>. This may be the answer to why you got a response from <code>http://127.0.0.1:52650/</code></p>
|
<p>I've recently started a proof-of-concept to extend our Airflow to use the KubernetesPodOperator to spin up a pod in our kubernetes environment, that also hosts our airflow. This all works; however I've noticed that the logs that we get have the Info for running the task instance and the task instance success; however the stdout from the container is not captured in the log files.</p>
<p>I can access this information if I set the KubernetesPodOperator to leave the pod and then I can do a kubectl logs from the container and get the stdout information.</p>
<p>Example Log Output:</p>
<pre><code>[2020-11-17 03:09:16,604] {{taskinstance.py:670}} INFO - Dependencies all met for <TaskInstance: alex_kube_test.passing-task 2020-11-17T02:50:00+00:00 [queued]>
[2020-11-17 03:09:16,632] {{taskinstance.py:670}} INFO - Dependencies all met for <TaskInstance: alex_kube_test.passing-task 2020-11-17T02:50:00+00:00 [queued]>
[2020-11-17 03:09:16,632] {{taskinstance.py:880}} INFO -
--------------------------------------------------------------------------------
[2020-11-17 03:09:16,632] {{taskinstance.py:881}} INFO - Starting attempt 2 of 3
[2020-11-17 03:09:16,632] {{taskinstance.py:882}} INFO -
--------------------------------------------------------------------------------
[2020-11-17 03:09:16,650] {{taskinstance.py:901}} INFO - Executing <Task(KubernetesPodOperator): passing-task> on 2020-11-17T02:50:00+00:00
[2020-11-17 03:09:16,652] {{standard_task_runner.py:54}} INFO - Started process 1380 to run task
[2020-11-17 03:09:16,669] {{standard_task_runner.py:77}} INFO - Running: ['airflow', 'run', 'alex_kube_test', 'passing-task', '2020-11-17T02:50:00+00:00', '--job_id', '113975', '--pool', 'default_pool', '--raw', '-sd', 'DAGS_FOLDER/alex_kube_test.py', '--cfg_path', '/tmp/tmpmgyu498h']
[2020-11-17 03:09:16,670] {{standard_task_runner.py:78}} INFO - Job 113975: Subtask passing-task
[2020-11-17 03:09:16,745] {{logging_mixin.py:112}} INFO - Running %s on host %s <TaskInstance: alex_kube_test.passing-task 2020-11-17T02:50:00+00:00 [running]> airflow-worker-686849bf86-bpq4w
[2020-11-17 03:09:16,839] {{logging_mixin.py:112}} WARNING - /usr/local/lib/python3.6/site-packages/urllib3/connection.py:395: SubjectAltNameWarning: Certificate for us-east-1-services-kubernetes-private.vevodev.com has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/urllib3/urllib3/issues/497 for details.)
SubjectAltNameWarning,
[2020-11-17 03:09:16,851] {{logging_mixin.py:112}} WARNING - /usr/local/lib/python3.6/site-packages/airflow/kubernetes/pod_launcher.py:330: DeprecationWarning: Using `airflow.contrib.kubernetes.pod.Pod` is deprecated. Please use `k8s.V1Pod`.
security_context=_extract_security_context(pod.spec.security_context)
[2020-11-17 03:09:16,851] {{logging_mixin.py:112}} WARNING - /usr/local/lib/python3.6/site-packages/airflow/kubernetes/pod_launcher.py:77: DeprecationWarning: Using `airflow.contrib.kubernetes.pod.Pod` is deprecated. Please use `k8s.V1Pod` instead.
pod = self._mutate_pod_backcompat(pod)
[2020-11-17 03:09:18,960] {{taskinstance.py:1070}} INFO - Marking task as SUCCESS.dag_id=alex_kube_test, task_id=passing-task, execution_date=20201117T025000, start_date=20201117T030916, end_date=20201117T030918
</code></pre>
<p>What the KubeCtl Logs output returns:</p>
<pre><code>uptime from procps-ng 3.3.10
</code></pre>
<p>Shouldn't this stdout be in the log if I have get_logs=True? How do I make sure that the logs capture the stdout of the container?</p>
| <p>I felt I had the same issue... but maybe not as you didn't mention if you were using a subdag (I'm using dag factories methodology). I was clicking on the dag task -> view logs in the UI. Since I was using a subdag for the first time I didn't realize I needed to zoom into it to view the logs.</p>
<p><img src="https://i.stack.imgur.com/HOL1j.png" alt="subdag zoom" /></p>
|
<p>I have a simple ASP.NET core Web API. It works locally. I deployed it in Azure AKS using the following yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sa-be
spec:
selector:
matchLabels:
name: sa-be
template:
metadata:
labels:
name: sa-be
spec:
nodeSelector:
kubernetes.io/os: linux
containers:
- name: sa-be
image: francotiveron/anapi:latest
resources:
limits:
memory: "64Mi"
cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
name: sa-be-s
spec:
type: LoadBalancer
selector:
name: sa-be
ports:
- port: 8080
targetPort: 80
</code></pre>
<p>The result is:</p>
<pre><code>> kubectl get service sa-be-s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sa-be-s LoadBalancer 10.0.157.200 20.53.188.247 8080:32533/TCP 4h55m
> kubectl describe service sa-be-s
Name: sa-be-s
Namespace: default
Labels: <none>
Annotations: <none>
Selector: name=sa-be
Type: LoadBalancer
IP Families: <none>
IP: 10.0.157.200
IPs: <none>
LoadBalancer Ingress: 20.53.188.247
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 32533/TCP
Endpoints: 10.244.2.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>I expected to reach the Web API at <a href="http://20.53.188.247:32533/" rel="nofollow noreferrer">http://20.53.188.247:32533/</a>; instead, it is reachable only at <a href="http://20.53.188.247:8080/" rel="nofollow noreferrer">http://20.53.188.247:8080/</a></p>
<p>Can someone explain</p>
<ul>
<li>Is this the expected behaviour?</li>
<li>If yes, what is the use of the NodePort (32533)?</li>
</ul>
| <p>Yes, this is expected.
<a href="https://nigelpoulton.com/explained-kubernetes-service-ports/" rel="nofollow noreferrer">Explained: Kubernetes Service Ports</a> - please read the full article to understand what is going on in the background.</p>
<p>Loadbalancer part:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sa-be-s
spec:
type: LoadBalancer
selector:
name: sa-be
ports:
- port: 8080
targetPort: 80
</code></pre>
<blockquote>
<p><strong>port</strong> is the port the cloud load balancer will listen on (8080 in our
example) and <strong>targetPort</strong> is the port the application is listening on in
the Pods/containers. Kubernetes works with your cloudβs APIs to create
a load balancer and everything needed to get traffic hitting the load
balancer on port 8080 all the way back to the Pods/containers in your
cluster listening on targetPort 80.</p>
</blockquote>
<p>Now the main point:
<strong>Behind the scenes, many implementations create NodePorts to glue the cloud load balancer to the cluster</strong>. The traffic flow is usually like this:</p>
<p><a href="https://i.stack.imgur.com/FW9tw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FW9tw.png" alt="enter image description here" /></a></p>
|
<p>I am trying to use the kubectl run command to create a Pod that uses a custom serviceaccount "svcacct1" instead of the default serviceaccount. There seems to be no switch for providing a specific serviceaccount within the run command, so I am leveraging the --overrides switch to provide JSON as shown below. </p>
<pre><code>kubectl run ng2 --image=nginx --namespace=test --overrides='{ "apiVersion": "apps/v1", "spec": { "serviceAccount": "svcacct1" , "serviceAccountName": "svcacct1" } }' -o yaml
</code></pre>
<p>This does create the Pod (and a deployment) successfully, but it does not use "svcacct1" and instead uses "default", as shown below (only the relevant part of the output is shown):</p>
<pre><code>> kubectl get po ng2-569d45c6b5-c9zhp -o yaml -n test
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: ng2
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-cpwjr
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: minikube
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
</code></pre>
<p>Also, the account does exist in the correct namespace.</p>
<pre><code>> kubectl get serviceaccount svcacct1 -o yaml -n test
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2019-05-01T11:56:52Z"
name: svcacct1
namespace: test
resourceVersion: "632061"
selfLink: /api/v1/namespaces/test/serviceaccounts/svcacct1
uid: 3573ffc6-6c08-11e9-9c81-0800270172ea
secrets:
- name: svcacct1-token-q9ksc
</code></pre>
<p>It is not clear what is missing for this to work?</p>
| <p>The <code>--serviceaccount</code> flag isn't present in current versions (1.21) and got deprecated as mentioned in the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md" rel="noreferrer">changelog</a>.</p>
<p>But with a slight modification to the command mentioned in the question, it works:</p>
<pre><code>kubectl run -ti debug --image=praqma/network-multitool --overrides='{ "spec": { "serviceAccount": "your-sa-here" } }' sh
</code></pre>
|
<p>All,</p>
<p>Is there a way to find the nodes associated with Persistent Volume Claim.</p>
<p><strong>> kubectl get pvc -n namespace</strong></p>
<p>gives me the list of Persistent Volume Claims. But I need the node also to which each of the Claim is associated with.</p>
<p>Even describing the PVC does not give me the node</p>
<blockquote>
<p><strong>kubectl describe pvc pvcname -n namespace</strong></p>
</blockquote>
<p>Thanks,<br />
grajee</p>
| <ul>
<li>Whenever a PVC is used by a Pod, Kubernetes creates an object called <code>volumeattachment</code> which contains the information about the node where the PVC is attached.</li>
</ul>
<pre><code>kubectl get volumeattachments | grep <pv name>   # to get the volumeattachment name
kubectl describe volumeattachment <volumeattachment name from above command> | grep -i 'node name'
</code></pre>
|
<p>As the title says, I'm trying to mount a secret, as a volume, into a deployment.</p>
<p>I found out I can do it this way with <code>kind: Pod</code> but couldn't replicate it with <code>kind: Deployment</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
volumeMounts:
- name: certs-vol
mountPath: "/certs"
readOnly: true
volumes:
- name: certs-vol
secret:
secretName: certs-secret
</code></pre>
<p>the error shows as follows <code>ValidationError(Deployment.spec.template.spec.volumes[1]): unknown field "secretName" in io.k8s.api.core.v1.Volume, ValidationError(Deployment.spec.template.spec.volumes[2]</code></p>
<p>Is there a way to do this exclusively on a deployment?</p>
| <p>As <a href="https://stackoverflow.com/users/10008173/david-maze">David Maze</a> mentioned in the comment:</p>
<blockquote>
<p>Does <code>secretName:</code> need to be indented one step further (a child of <code>secret:</code>)?</p>
</blockquote>
<p>Your yaml file should be as follows:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
volumeMounts:
- name: certs-vol
mountPath: "/certs"
readOnly: true
volumes:
- name: certs-vol
secret:
secretName: certs-secret
</code></pre>
<p>You can read more about <a href="https://medium.com/avmconsulting-blog/secrets-management-in-kubernetes-378cbf8171d0" rel="nofollow noreferrer">mounting secret as a file</a>. This could be the most interesing part:</p>
<blockquote>
<p>It is possible to create <code>Secret</code> and pass it as a <strong>file</strong> or multiple <strong>files</strong> to <code>Pods</code>.<br />
I've created a simple example for you to illustrate how it works. Below you can see a sample <code>Secret</code> manifest file and <code>Deployment</code> that uses this Secret:<br />
<strong>NOTE:</strong> I used <code>subPath</code> with <code>Secrets</code> and it works as expected.</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: my-secret
data:
secret.file1: |
c2VjcmV0RmlsZTEK
secret.file2: |
c2VjcmV0RmlsZTIK
---
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- name: secrets-files
mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
subPath: secret.file1
- name: secrets-files
mountPath: "/mnt/secret.file2" # "secret.file2" file will be created in "/mnt" directory
subPath: secret.file2
volumes:
- name: secrets-files
secret:
secretName: my-secret # name of the Secret
</code></pre>
<blockquote>
<p><strong>Note:</strong> <code>Secret</code> should be created before <code>Deployment</code>.</p>
</blockquote>
|
<p>I have a running cluster with single master. A load balancer(kube.company.com) is configured to accept traffic at 443 and forwards it to the k8s master 6443.</p>
<p>I tried to change my ~/.kube/config <code>server</code> field definition from $masterIP:6443 to kube.company.com:443.</p>
<p>It throws the error x509: certificate signed by unknown authority.</p>
<p>I guess there should be some configuration that should be done to make this work, I just can't find it in the official docs</p>
<p>This is a bare metal setup using k8s version 1.21.2, containerd in RHEL env. The load balancer is nginx. Cluster is installed via kubeadm</p>
| <p>When using <code>kubeadm</code> to deploy a cluster, if you want to use a custom name to access the <code>Kubernetes API Server</code>, you need to specify the <code>--apiserver-cert-extra-sans</code> flag of <code>kubeadm init</code>.</p>
<blockquote>
<p>Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.</p>
</blockquote>
<p>This is untested, but theoretically, if you want to do this on an existing cluster, you should be able to log in to <code>every master node</code> and run this:</p>
<pre><code># remove current apiserver certificates
sudo rm /etc/kubernetes/pki/apiserver.*
# generate new certificates
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=<your custom dns name here>
</code></pre>
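<p>If you drive <code>kubeadm</code> with a configuration file instead of flags, the equivalent setting is <code>certSANs</code> under <code>apiServer</code> in the <code>ClusterConfiguration</code> (the DNS name below is an example placeholder):</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs:
  - "kube.company.com"   # your load balancer's DNS name
</code></pre>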
|
<p>I am new to the kubernetes world and I am currently stuck with figuring out how to enable endpoints for <code>kube-controller-manager</code> & <code>kube-scheduler</code>. In some future, I'll be using the helm <code>kube-prometheus-stack</code> to scrape those endpoints for metrics. However, for now what would be the right approach to set up those endpoints?</p>
<pre><code>$ kubectl get ep -n kube-system
NAME ENDPOINTS AGE
kube-controller-manager <none> 105d
kube-scheduler <none> 105d
</code></pre>
| <ul>
<li><p>No need to create endpoints for <code>kube-controller-manager</code> and <code>kube-scheduler</code> because they use <code>hostNetwork</code> and use ports <code>10257</code> and <code>10259</code> respectively.</p>
</li>
<li><p>You can verify this by checking the manifests in "/etc/kubernetes/manifests/" and running netstat -nltp or ss -nltp on the master node:</p>
</li>
</ul>
<pre><code>ss -nltp | grep kube
LISTEN 0 128 127.0.0.1:10257 0.0.0.0:* users:(("kube-controller",pid=50301,fd=7))
LISTEN 0 128 127.0.0.1:10259 0.0.0.0:* users:(("kube-scheduler",pid=50400,fd=7))
</code></pre>
<ul>
<li>so they should be accessible over <code>masternodeip:10257</code> / <code>masternodeip:10259</code></li>
</ul>
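<p>For the future <code>kube-prometheus-stack</code> scraping, one common pattern is a headless <code>Service</code> plus a manually maintained <code>Endpoints</code> object pointing at the masters' host IPs. The sketch below assumes the components are reachable on the node IP (the address is a placeholder); note that if they bind to <code>127.0.0.1</code>, as in the ss output above, you would also need to change their <code>--bind-address</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kube-controller-manager
  namespace: kube-system
  labels:
    k8s-app: kube-controller-manager
spec:
  clusterIP: None          # headless; endpoints are supplied manually below
  ports:
  - name: https-metrics
    port: 10257
    targetPort: 10257
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-controller-manager   # must match the Service name
  namespace: kube-system
subsets:
- addresses:
  - ip: 192.168.1.10       # master node IP - replace with yours
  ports:
  - name: https-metrics
    port: 10257
</code></pre>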
|
<p>I've been using <strong>Docker Desktop for Windows</strong> for a while and recently I updated to the latest version (<em>3.5.1</em>) but now I'm having problems with Kubernetes because it updated the <strong>client version</strong> (<em>1.21.2</em>) but the <strong>server version</strong> was not updated and continues on the version (<em>1.19.7</em>).</p>
<p>How can I update the server version to avoid the conflicts that K8s faces when the versions between client and server are more than 1 version different?</p>
| <p>To try to solve this problem I decided to install Minikube instead; after that, all those problems were solved. Thanks everyone</p>
|
<p>I'm trying to deploy a MongoDB replica set by using the MongoDB Community Kubernetes Operator in Minikube.<br />
I followed the instructions on the official GitHub, so:</p>
<ul>
<li>Install the CRD</li>
<li>Install the necessary roles and role-bindings</li>
<li>Install the Operator</li>
<li>Deploy the Replicaset</li>
</ul>
<p>By default, the operator will create three pods, each of them automatically linked to a new persistent volume claim bound to a new persistent volume also created by the operator (so far so good).</p>
<p>However, I would like the data to be saved in a specific volume, mounted at a specific host path. So I would need to create three persistent volumes, each mounted at a specific host path, and then configure the replica set so that each pod connects to its respective persistent volume (perhaps using the matchLabels selector).
So I created three volumes by applying the following file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-pv-00
namespace: $NAMESPACE
labels:
type: local
service: mongo
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mongodata/00"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-pv-01
namespace: $NAMESPACE
labels:
type: local
service: mongo
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mongodata/01"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-pv-02
namespace: $NAMESPACE
labels:
type: local
service: mongo
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/mongodata/02"
</code></pre>
<p>and then I set up the replica set configuration file in the following way, but it still fails to connect the pods to the volumes:</p>
<pre><code>apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: mongo-rs
namespace: $NAMESPACE
spec:
members: 3
type: ReplicaSet
version: "4.4.0"
persistent: true
podSpec:
persistence:
single:
labelSelector:
matchLabels:
type: local
service: mongo
storage: 5Gi
storageClass: manual
statefulSet:
spec:
volumeClaimTemplates:
- metadata:
name: data-volume
spec:
accessModes: [ "ReadWriteOnce", "ReadWriteMany" ]
resources:
requests:
storage: 5Gi
selector:
matchLabels:
type: local
service: mongo
storageClassName: manual
security:
authentication:
modes: ["SCRAM"]
users:
- ...
additionalMongodConfig:
storage.wiredTiger.engineConfig.journalCompressor: zlib
</code></pre>
<p>I can't find any documentation online, except the <a href="https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_custom_volume_cr.yaml" rel="nofollow noreferrer">mongodb.com_v1_custom_volume_cr.yaml</a> sample. Has anyone faced this problem before? How could I make it work?</p>
| <p>I think you could be interested into using local type of volumes. It works, like this:</p>
<p><strong>First</strong>, you create a storage class for the local volumes. Something like the following:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Since it has <code>no-provisioner</code>, it will be usable only if you manually create local PVs. <code>WaitForFirstConsumer</code>, instead, will prevent binding a PV to a PVC of a Pod which cannot be scheduled on the host on which the PV is available.</p>
<p>Second, you create the local PVs. Similarly to how you created them in your example, something like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /path/on/the/host
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- the-node-hostname-on-which-the-storage-is-located
</code></pre>
<p>Notice the definition: it tells the path on the host and the capacity, and then it declares on which node of the cluster such a PV can be used (with the nodeAffinity). It also links the PV to the storage class we created earlier, so that if someone (a claim template) requires storage with that class, it will now find this PV.</p>
<p>You can create 3 PVs on 3 different nodes, or 3 PVs on the same node at different paths; you can organize things as you desire.</p>
<p><strong>Third</strong>, you can now use the <code>local-storage</code> class in claim template. The claim template could be something similar to this:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: the-name-of-the-pvc
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 5Gi
</code></pre>
<p>And each Pod of the StatefulSet will try to be scheduled on a node with a <code>local-storage</code> PV available.</p>
<hr />
<p>Remember that with local storage or, in general, with volumes that utilize host paths, you may want to spread the various Pods of your app on different nodes, so that the app may resist the failure of a single node on its own.</p>
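<p>A minimal sketch of how such spreading could look in the Pod template, using <code>podAntiAffinity</code> (the label key and value are assumptions; match them to your own Pod labels):</p>
<pre><code>affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: mongo             # assumed label on the MongoDB Pods
      topologyKey: kubernetes.io/hostname   # at most one such Pod per node
</code></pre>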
<hr />
<p>In case you want to be able to decide which Pod links to which volume, the easiest way is to create one PV at a time, then wait for the Pod to become <code>Bound</code> to it before creating the next one. It's not optimal, but it's the easiest way.</p>
|
<p>I have an application that runs health checks on pods. Given the health check, I am attempting to patch a pod's label selector from being active: true to active: false. The following is the code for the iteration of pods to change each pod's labels.</p>
<pre><code>CoreV1Api corev1Api = new CoreV1Api();
for (V1Pod pod : fetchPodsByNamespaceAndLabel.getItems()) {
String jsonPatchBody = "[{\"op\":\"replace\",\"path\":\"/spec/template/metadata/labels/active\",\"value\":\"true\"}]";
V1Patch patch = new V1Patch(jsonPatchBody);
corev1Api.patchNamespacedPodCall(pod.getMetadata().getName(), namespace, patch, null, null, null, null, null);
}
</code></pre>
<p>I have adapted the jsonPatchBody from the <a href="https://github.com/kubernetes-client/java/blob/master/examples/examples-release-12/src/main/java/io/kubernetes/client/examples/PatchExample.java" rel="nofollow noreferrer">Patch Example</a> on the Kubernetes documentation section for examples.</p>
<p>The output of the run spits out no errors. The expected behavior is for the labels of these pods for active to all be set to true. These changes are not reflected. I believe the issue to be caused by the syntax provided by the body of the patch. Is the above the correct syntax for accessing labels in a pod?</p>
| <p>After researching the current implementation further, I found that the client provides the <a href="https://github.com/kubernetes-client/java/blob/master/util/src/main/java/io/kubernetes/client/util/PatchUtils.java" rel="nofollow noreferrer">PatchUtils</a> API, which allows me to build a patch of a given type.</p>
<pre><code>CoreV1Api coreV1Api = new CoreV1Api();
String body = "{\"metadata\":{\"labels\":{\"active\":\"true\"}}}";
V1Pod patch =
PatchUtils.patch(
V1Pod.class,
() ->
coreV1Api.patchNamespacedPodCall(
Objects.requireNonNull(pod.getMetadata().getName()),
namespace,
new V1Patch(body),
null,
null,
null,
null,
null),
V1Patch.PATCH_FORMAT_STRATEGIC_MERGE_PATCH,
coreV1Api.getApiClient());
System.out.println("Pod name: " + Objects.requireNonNull(pod.getMetadata()).getName() + "Patched by json-patched: " + body);
</code></pre>
<p>I wanted to ensure that the patch updated the current values for a property in my labels selector, so I implemented a <code>PATCH_FORMAT_STRATEGIC_MERGE_PATCH</code> from the <code>V1Patch</code> api. I referenced the Kubernetes <a href="https://github.com/kubernetes-client/java/blob/master/examples/examples-release-15/src/main/java/io/kubernetes/client/examples/PatchExample.java" rel="nofollow noreferrer">Patch Example</a> to build the structure of the Patch.</p>
|
<p>There are many questions like this on the internet, but none of the solutions provided worked.</p>
<p>I am using <code>jboss/keycloak:14.0.0</code> docker image. The following properties are set in my <code>ConfigMap</code>:</p>
<pre><code>KEYCLOAK_FRONTEND_URL: /mycontext/access-management
PROXY_ADDRESS_FORWARDING: "true"
</code></pre>
<p>Please note that changing <code>KEYCLOAK_FRONTEND_URL</code> to an absolute URL like <code>https://mycompany.com/mycontext/access-management</code> makes no difference.</p>
<p>Now the ingress has been defined as below:</p>
<pre><code>Path: /mycontext/access-management(/|$)(.*)
Rewrite To: /$2
Annotations:
ingress.kubernetes.io/ssl-redirect: "False"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header X-Request-ID $request_id;
proxy_set_header X-Trace-ID $request_id;
gzip off;
nginx.ingress.kubernetes.io/proxy-connect-timeout: "180"
nginx.ingress.kubernetes.io/proxy-read-timeout: "180"
nginx.ingress.kubernetes.io/proxy-send-timeout: "180"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/server-snippet: |
add_header X-Request-ID $request_id;
add_header X-Trace-ID $request_id;
nginx.org/redirect-to-https: "True"
</code></pre>
<p>What happens is very strange. See below it shows you where it takes me when I hit a URL:</p>
<pre><code>Go to [server]/mycontext/access-management =takes you to=> [server]/auth
Go to [server]/mycontext/access-management/auth =takes you to=> [server]/mycontext/access-management/auth (works fine)
</code></pre>
<p>As you can see the second link works fine and you can see the Keycloak Welcome page with a number of links in it. One of the links is <code>Administration Console</code> that is broken. If you hover your mouse the link is <code>[server]/mycontext/access-management/admin</code> instead of <code>[server]/mycontext/access-management/auth/admin</code> (comparing it with my local Keycloak server). Now if we ignore the link and put the right path in the address bar as <code>[server]/mycontext/access-management/auth/admin</code> another strange thing happens and that changes the URL to <code>[server]/auth/mycontext/access-management/admin/master/console/</code>.</p>
<p>I really don't understand what is happening here. Setting <code>KEYCLOAK_FRONTEND_URL</code> on my local also breaks the links.</p>
<p>I have tried to change the <code>rewrite</code> annotation of the ingress to <code>/mycontext/access-management/$2</code> but this configuration doesn't work at all.</p>
<p>In the Keycloak documentation <a href="https://www.keycloak.org/docs/latest/server_installation/#_hostname" rel="nofollow noreferrer">here</a> it talks about a property named <code>adminUrl</code>, however, setting <code>-DadminUrl</code> or <code>-Dkeycloak.adminUrl</code> seems to be ignored completely by the docker image when using <code>JAVA_OPTS_APPEND</code> according to JBoss documentation page.</p>
<p>What have I missed here? Is there anything that I have missed in my configuration?</p>
<p>Please note that, we have no choice other than exposing it under a context-path followed by the name (i.e. <code>/mycontext/access-management</code>). This is because of both our client requirements as well as the fact that we have many micro-services deployed under <code>/mycontext</code> each of which has its own ingress configuration.</p>
<p>Any help is appreciated.</p>
| <p>Okay I managed to get this working by gathering all the solutions that are mentioned out there.</p>
<p>So basically, <code>web-context</code> needs to be set, and that's something that isn't mentioned in any documentation; it only circulates by word of mouth.</p>
<p>To set that, you can write a <code>cli</code> script:</p>
<pre><code>set CONTEXT=${env.KEYCLOAK_WEB_CONTEXT}
echo $CONTEXT
embed-server --server-config=standalone-ha.xml --std-out=echo
/subsystem=keycloak-server/:write-attribute(name=web-context,value=$CONTEXT)
stop-embedded-server
embed-server --server-config=standalone.xml --std-out=echo
/subsystem=keycloak-server/:write-attribute(name=web-context,value=$CONTEXT)
stop-embedded-server
</code></pre>
<p>This is mentioned <a href="https://stackoverflow.com/questions/46892613/keycloak-undertow-jboss-cli-set-web-context-from-environment-variable">here</a>, but the important points are: (1) you need to start the embedded server, otherwise there is no server to connect to, and (2) you should change both <code>standalone.xml</code> and <code>standalone-ha.xml</code> unless you know exactly which one you are running.</p>
<p>See this answer <a href="https://stackoverflow.com/a/58929616/988738">here</a> on how to copy custom scripts to your docker image.</p>
<p><strong>However, there is one important point here! Although Keycloak documentation says that you can change the front-end URL to whatever you want, you actually HAVE TO add <code>/auth</code> at the end of your new URL. It seems many pages in Keycloak are hardcoded to that path</strong></p>
<p>Now you need to set these two properties in your ConfigMap:</p>
<pre><code>#ConfigMap
# Has to end with /auth and has to be absolute URL
KEYCLOAK_FRONTEND_URL: https://your.website.address/mycontext/access-management/auth
PROXY_ADDRESS_FORWARDING: "true"
# The following is our own env var that will be used by the above cli
# Note that it SHOULD NOT start with / and it must END with /auth
KEYCLOAK_WEB_CONTEXT: mycontext/access-management/auth
</code></pre>
<p>This is a bit annoying because <code>KEYCLOAK_FRONTEND_URL</code> cannot be a relative path; it has to be a full absolute URL, and having to construct it yourself makes things less elegant. Luckily we have a <code>host</code> property in <code>global</code> that is shared across the Helm subcharts, so I could use it, but it still makes the design a bit messy.</p>
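<p>In the chart templates this ends up being stitched together from that shared value; a ConfigMap fragment as a sketch (the <code>global.host</code> value name is from our own chart, adjust to yours):</p>
<pre><code># ConfigMap template fragment
KEYCLOAK_FRONTEND_URL: "https://{{ .Values.global.host }}/mycontext/access-management/auth"
KEYCLOAK_WEB_CONTEXT: "mycontext/access-management/auth"
PROXY_ADDRESS_FORWARDING: "true"
</code></pre>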
<p>But that's not all: because of these settings, you now also have to change your <code>liveness</code> and <code>readiness</code> probes to <code>GET /mycontext/access-management</code>.
And even worse, if you have micro-services (nearly 20 in our case), you'll need to change all the auth server URLs that were previously as simple as <code>http://access-management:8080/auth</code> to <code>http://access-management:8080/mycontext/access-management/auth</code>.</p>
<p>Now make sure your ingress also includes this new path plus another important property, <code>proxy-buffer-size</code>. If the buffer size is too small, Keycloak requests may not work and you will get a <code>Bad Gateway</code> error:</p>
<pre><code>Path: /mycontext/access-management(/|$)(.*)
Rewrite To: /mycontext/access-management/$2
nginx.ingress.kubernetes.io/proxy-buffer-size: 8k
</code></pre>
<p>I really wish we could manage this with ingress alone without having to touch all these things, but it seems impossible. I hope <code>Keycloak.X</code> will fix all of this.</p>
|
<p>I'm seeking the answer regarding how to use the Kubernetes Python API to get cluster information (<code>kubectl get clusters</code>).</p>
<pre><code>~$ kubectl -n <namespace> get clusters
NAME AGE
cluster-1 6d17h
cluster-2 6d17h
</code></pre>
| <p>This may help you; it is adapted from the official Python client's <a href="https://github.com/kubernetes-client/python/blob/d8f283e7483848647804eab345645106b6fb357d/examples/remote_cluster.py#L24" rel="nofollow noreferrer">remote_cluster.py example</a>:</p>
<pre><code>from pick import pick # install pick using `pip install pick`
from kubernetes import client, config
from kubernetes.client import configuration
def main():
contexts, active_context = config.list_kube_config_contexts()
if not contexts:
print("Cannot find any context in kube-config file.")
return
contexts = [context['name'] for context in contexts]
active_index = contexts.index(active_context['name'])
cluster1, first_index = pick(contexts, title="Pick the first context",
default_index=active_index)
cluster2, _ = pick(contexts, title="Pick the second context",
default_index=first_index)
client1 = client.CoreV1Api(
api_client=config.new_client_from_config(context=cluster1))
client2 = client.CoreV1Api(
api_client=config.new_client_from_config(context=cluster2))
print("\nList of pods on %s:" % cluster1)
for i in client1.list_pod_for_all_namespaces().items:
print("%s\t%s\t%s" %
(i.status.pod_ip, i.metadata.namespace, i.metadata.name))
print("\n\nList of pods on %s:" % cluster2)
for i in client2.list_pod_for_all_namespaces().items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

if __name__ == '__main__':
    main()
</code></pre>
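<p>If the <code>clusters</code> in your question is a custom resource (a CRD, as <code>kubectl get clusters</code> suggests), the Python client's <code>CustomObjectsApi</code> is the usual route. Below is a sketch: the helper is plain Python, and the commented part shows the live call (the <code>group</code>/<code>version</code>/<code>plural</code> values are assumptions; check <code>kubectl api-resources | grep clusters</code> for your CRD):</p>

```python
def cluster_names(response):
    """Extract resource names from a CustomObjectsApi list response dict."""
    return [item["metadata"]["name"] for item in response.get("items", [])]


# Live usage (requires the `kubernetes` package and cluster access):
#
# from kubernetes import client, config
# config.load_kube_config()
# api = client.CustomObjectsApi()
# response = api.list_namespaced_custom_object(
#     group="cluster.x-k8s.io",  # assumption: depends on which CRD backs `clusters`
#     version="v1beta1",
#     namespace="<namespace>",
#     plural="clusters",
# )
# for name in cluster_names(response):
#     print(name)
```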
|
<p>I want to use <a href="https://github.com/baarde/cert-manager-webhook-ovh" rel="nofollow noreferrer">cert-manager OVH's webhook</a> in order to issue an HTTPS wildcard certificate, but I still can't figure out why cert-manager can't access the OVH credentials secret (required to create a DNS entry in OVH)</p>
<p>The <code>ovh-credentials</code> secret has been created on the <code>default</code> namespace</p>
<p>The cert-manager is on a <code>cert-manager</code> namespace and the <code>cert-manager-webhook-ovh</code> is on the <code>default</code> namespace</p>
<p>I have a <code>ClusterIssuer</code> which is calling the <code>cert-manager-webhook-ovh</code></p>
<p>I have defined a <code>cert-manager-webhook-ovh:secret-reader</code> <code>ClusterRole</code></p>
<p>And a <code>ClusterRole</code> binding between <code>cert-manager-webhook-ovh</code> service account and</p>
<p>the <code>cert-manager-webhook-ovh:secret-reader</code> <code>ClusterRole</code></p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cert-manager-webhook-ovh:secret-reader
rules:
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["ovh-credentials"]
verbs: ["get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cert-manager-webhook-ovh:secret-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cert-manager-webhook-ovh:secret-reader
subjects:
- apiGroup: ""
kind: ServiceAccount
name: cert-manager-webhook-ovh
namespace: cert-manager
</code></pre>
<p>here are my pods</p>
<pre><code>$ kubectl get pods --namespace=cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-6588898cb4-nfn7p 1/1 Running 1 27d
cert-manager-cainjector-7bcbdbd99f-tgllj 1/1 Running 5 27d
cert-manager-webhook-5fd9f9dd86-csjzq 1/1 Running 0 27d
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cert-manager-webhook-ovh-77f9c7cc7b-xfdfx 1/1 Running 4 15d
express-59df84c766-8wjxw 1/1 Running 0 27d
express-59df84c766-l4cf8 1/1 Running 0 27d
express-59df84c766-w6hbt 1/1 Running 0 27d
ngingress-ingress-nginx-controller-5586599f89-5fsgn 1/1 Running 0 28d
</code></pre>
<p>When creating the challenge I have this error</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 9s cert-manager Challenge scheduled for processing
Warning PresentError 5s (x3 over 7s) cert-manager Error presenting challenge: secrets "ovh-credentials" is forbidden: User "system:serviceaccount:default:cert-manager-webhook-ovh" cannot get resource "secrets" in API group "" in the namespace "cert-manager"
</code></pre>
| <p>The error message points to the ServiceAccount <code>cert-manager-webhook-ovh</code> in the <code>default</code> namespace, while you granted the permission to a ServiceAccount in the <code>cert-manager</code> namespace:</p>
<pre><code>secrets "ovh-credentials" is forbidden: User "system:serviceaccount:default:cert-manager-webhook-ovh" cannot get resource "secrets" in API group "" in the namespace "cert-manager"
</code></pre>
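<p>The minimal fix is to point the binding's subject at the namespace the ServiceAccount actually lives in (<code>default</code>, per the error message):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cert-manager-webhook-ovh:secret-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cert-manager-webhook-ovh:secret-reader
subjects:
- apiGroup: ""
  kind: ServiceAccount
  name: cert-manager-webhook-ovh
  namespace: default
</code></pre>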
|
<p>I am trying to limit the resource usage of each container in a pod dynamically. For Docker, I used <code>--cgroup-parent</code> to put containers in a specific cgroup directory. However, in Kubernetes I haven't found any option that lets me do this.</p>
| <p>You can use the <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#using-the-cgroupfs-driver" rel="nofollow noreferrer">cgroupfs</a> driver supported by Kubernetes.</p>
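<p>For reference, the kubelet's cgroup driver is selected in its <code>KubeletConfiguration</code>; a minimal fragment:</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
</code></pre>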
<p>But perhaps you just want to <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">set the <code>Pod</code> and <code>Container</code> resources</a> alone? Kubernetes allows setting limits per container <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory" rel="nofollow noreferrer">as well</a>, so in the example below each container requests <code>64Mi</code> RAM and <code>250m</code> CPU (<code>requests</code>) but may grow to <code>128Mi</code> and <code>500m</code> if necessary (<code>limits</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
containers:
- name: app
image: images.my-company.example/app:v4
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
- name: log-aggregator
image: images.my-company.example/log-aggregator:v6
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
</code></pre>
<hr />
<p>Update:</p>
<p>As mentioned by <a href="https://stackoverflow.com/users/10008173">David Maze</a>, you can also utilize autoscalers such as <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="nofollow noreferrer">Vertical Pod Autoscaler</a> which will adjust the Pod <em>resources</em>. For that however a <a href="https://kubernetes.io/docs/concepts/api-extension/custom-resources/" rel="nofollow noreferrer"><code>CustomResourceDefinition</code></a> is necessary, therefore even the appropriate permissions to the cluster for creating CRDs and deploying it (thus may not work if you have limited access and you would need to contact the cluster admin).</p>
<blockquote>
<p>... it will set the requests automatically based on usage ... It will also maintain ratios between limits and requests that were specified in initial containers configuration.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
name: my-app-vpa
spec:
targetRef:
apiVersion: "apps/v1"
kind: Deployment
name: my-app
updatePolicy:
updateMode: "Auto"
</code></pre>
<p>or even <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal autoscaling</a> in case you need separate <em>instances</em> of the application (it manages the <code>Pod</code> <em>count</em> rather than the resources).</p>
|
<pre><code> api.name: spark-history-server
file.upload.path: x
gcp.server.property.file.path: x
git.files.update.path: x
onprem.server.property.file.path: x
preferred.id.deployment.file.path: x
preferred.id.file.path: x
server.error.whitelabel.enabled: "false"
server.port: "18080"
server.property.file.path: x
server.servlet.context-path: /
spark.history.fs.cleaner.enabled: "true"
spark.history.fs.cleaner.interval: "1h"
spark.history.fs.cleaner.maxAge: "12h"
spring.thymeleaf.prefix: classpath:/templates/dev/
spring.thymeleaf.view-names: index,devForm,error
temp.repo.location: x
</code></pre>
<p>I am trying to clean up the logs of my Spark history server, which I have deployed in Kubernetes, using the three cleaner parameters shown above. I found the approach here: <a href="https://stackoverflow.com/questions/42817924/cleaning-up-spark-history-logs">Cleaning up Spark history logs</a></p>
<p>It works when I restart the pods manually and deletes logs older than 12 hours, but over time it starts picking up old logs again, and the Spark history server takes 1-2 hours to restart. Is there another way to do this so I don't have to keep restarting the pods manually?</p>
<p>I asked around and found that it may be because I am using shared storage like NFS.</p>
| <p>The problem was that I was trying to add these parameters to the <code>ConfigMap.yaml</code> file instead of the <code>Deployment.yaml</code> file. Just add these parameters to <code>SPARK_HISTORY_OPTS</code>.</p>
<h1 id="example-ipk8">Example</h1>
<pre><code>- name: SPARK_HISTORY_OPTS
  value: "-Dspark.history.fs.logDirectory=/FS/YOU/CREATED/ABOVE -Dspark.history.fs.cleaner.enabled=true -Dspark.history.fs.cleaner.interval=1d -Dspark.history.fs.cleaner.maxAge=7d"
</code></pre>
<p>This article helped me
<a href="https://wbassler23.medium.com/spark-history-server-on-dc-os-516fb71523a5" rel="nofollow noreferrer">https://wbassler23.medium.com/spark-history-server-on-dc-os-516fb71523a5</a></p>
|
<p>I'm developing a Kubernetes controller. The desired state for this controller is captured in CRD-A and then it creates a deployment and statefulset to achieve the actual state. Currently I'm using server side apply to create/update these deployment and statefulsets.</p>
<p>The controller establishes a watch on CRD-A as well as on the deployments and statefulsets. This is to ensure that if there is a change in the deployment/statefulset, reconcile() is notified and takes action to fix it. Currently reconcile() always calls server-side apply to create/update, and this leads to another watch event (the resourceVersion changes on every server-side apply), resulting in repeated/infinite calls to reconcile()</p>
<p>One approach I've been thinking about is to leverage 'generation' on deployment/statefulset i.e. the controller will maintain a in-memory map of (k8s object -> generation) and on reconcile() compare the value in this map to what is present in the indexed informer cache; do you see any concerns with this approach? And are there better alternatives to prevent repeated/infinite reconcile() calls?</p>
| <p>Ideally, if the object you provide in the server-side apply has not changed, neither the generation nor the resourceVersion of the object should change.</p>
<p>But sometimes that's not the case, see this github issue:<a href="https://github.com/kubernetes/kubernetes/issues/95460" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/95460</a></p>
<p>Fortunately, the generation always stays the same, so yes, you can leverage this field to avoid the reconcile dead loop by adding a <code>GenerationChangedPredicate</code> filter to your controller, which skips reconciling when the generation does not change. It is often used in conjunction with the <code>LabelChangedPredicate</code>, which filters out events when the object's labels do not change.</p>
<p>Here's how you will set up your controller with these two predicates:</p>
<pre><code>ctrl.NewControllerManagedBy(mgr).
For(&Object{}).
Owns(&appsv1.StatefulSet{}).
Owns(&appsv1.Deployment{}).
// together with Server Side Apply
    // these predicates prevent meaningless reconciliations from being triggered
WithEventFilter(predicate.Or(predicate.GenerationChangedPredicate{}, predicate.LabelChangedPredicate{})).
Complete(r)
</code></pre>
|
<p>I'm currently working on an environment fully managed by Terraform, but I have reached some limitations and I would like to know if there is a feature or a workaround to do what I want:</p>
<p>I have an environment described like that :</p>
<ul>
<li>A VPC with public and private subnets</li>
<li>An EKS cluster with some workers</li>
<li>Some kubernetes configurations (using the kubernetes provider)</li>
<li>Some helm configurations (using the helm provider)</li>
</ul>
<p>Everything works, but when I want to do a full clean-up by running <code>terraform destroy</code>, I have to destroy some resources one by one with the <code>-target</code> option and then manually delete some references in the state file.</p>
<p>Two reasons why I would like to do that:</p>
<ol>
<li><p><em>Just because it fails</em></p>
<p>Don't know why, but I was unable to destroy some subnets either with Terraform or manually (impossible to detach the internet gateway), yet I was able to destroy them by simply destroying the VPC.</p>
<p>In this case, I would like to tell to Terraform to only delete the VPC, by doing that, AWS automatically destroy related components such as Subnets, Internet Gateway, Route table, Networks ACL, etc.</p></li>
<li><p><em>Because it would be faster</em></p>
<p>I'm using Terraform with Kubernetes (and Helm) provider to define my Kubernetes configuration, but when I run <code>terraform destroy</code> it will delete all the Kubernetes resources, then the Cluster (and workers).</p>
<p>Only deleting the Cluster would be really faster.</p></li>
</ol>
<hr>
<p>So here is my question :
<strong>Is it possible to exclude resources to the destroy process in the Terraform configuration files ?</strong></p>
| <p>This allows you to create a plan that destroys all resources except the module "module.exclude.me"</p>
<pre><code>terraform plan -destroy $(for r in `terraform state list | fgrep -v module.exclude.me` ; do printf " -target ${r} "; done) -out destroy.plan
</code></pre>
<p>Credits go to cmacrae from the corresponding github issue:<a href="https://github.com/hashicorp/terraform/issues/2253" rel="nofollow noreferrer">https://github.com/hashicorp/terraform/issues/2253</a></p>
|
<p>How can I trigger the update (redeploy) of a deployment through the k8s golang client?</p>
<p>At the moment, I use these libraries to get information about pods and namespaces:</p>
<pre><code>v1 "k8s.io/api/core/v1
k8s.io/apimachinery/pkg/apis/meta/v1
k8s.io/client-go/kubernetes
k8s.io/client-go/rest
</code></pre>
<p>Maybe there is another library or it can be done through linux signals</p>
| <p>The standard way to trigger a rolling restart is set/update an annotation in the pod spec with the current timestamp. The change itself does nothing but that changes the pod template hash which triggers the Deployment controller to do its thang. You can use <code>client-go</code> to do this, though maybe work in a language you're more comfortable with if that's not Go.</p>
|
<p>I have deployed GitLab with <code>helm upgrade --install gitlab gitlab/gitlab --timeout 600s -f gitlab.yaml</code></p>
<p>gitlab.yaml is shown below; <code>ip</code> is the output of <code>minikube ip</code>.</p>
<pre class="lang-yaml prettyprint-override"><code># values-minikube.yaml
# This example intended as baseline to use Minikube for the deployment of GitLab
# - Services that are not compatible with how Minikube runs are disabled
# - Configured to use 192.168.99.100, and nip.io for the domain
# Minimal settings
global:
ingress:
configureCertmanager: false
class: "nginx"
hosts:
domain: "${ip}.nip.io"
externalIP: "${ip}"
rails:
bootsnap:
enabled: false
shell:
# Configure the clone link in the UI to include the high-numbered NodePort
# value from below (gitlab.gitlab-shell.service.nodePort)
port: 32022
psql:
host: ${POSTGRES_K8S_SERVICE}
database: postgres
username: postgres
password:
secret: ${POSTGRES_K8S_SERVICE}
key: postgresql-password
# Don't use certmanager, we'll self-sign
certmanager:
install: false
# Use the "ingress" addon, not our Ingress (can't map 22/80/443)
nginx-ingress:
enabled: false
# Save resources, only 3 CPU
prometheus:
install: false
gitlab-runner:
install: false
# Reduce replica counts, reducing CPU & memory requirements
gitlab:
webservice:
minReplicas: 1
maxReplicas: 1
sidekiq:
minReplicas: 1
maxReplicas: 1
gitlab-shell:
minReplicas: 1
maxReplicas: 1
# Map gitlab-shell to a high-numbered NodePort to support cloning over SSH since
# Minikube takes port 22.
service:
type: NodePort
nodePort: 32022
registry:
hpa:
minReplicas: 1
maxReplicas: 1
</code></pre>
<p>After deploying, it generates several ingresses, but <strong>we cannot access them from an external machine</strong>.
<a href="https://i.stack.imgur.com/B4M8j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4M8j.png" alt="enter image description here" /></a></p>
<p>So I try to forward them by</p>
<pre><code>kubectl port-forward --namespace default svc/gitlab-webservice-default 9000:8080 --address 0.0.0.0
kubectl port-forward --namespace default svc/gitlab-webservice-default 9001:8081 --address 0.0.0.0
</code></pre>
<p>Port 8080 comes from <code>ingress/gitlab-webservice-default</code>. Port <strong>9001</strong> cannot be accessed, which means I cannot access HTTPS.</p>
<pre><code> rules:
- host: gitlab.192.168.49.2.nip.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: gitlab-webservice-default
servicePort: 8181
- path: /admin/sidekiq
pathType: ImplementationSpecific
backend:
serviceName: gitlab-webservice-default
servicePort: 8080
</code></pre>
<p>But it does not seem to work when I try to log in.</p>
<blockquote>
<p>422</p>
<p>The change you requested was rejected. Make sure you have access to the thing you tried to change.</p>
<p>Please contact your GitLab administrator if you think this is a mistake.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/4fWAz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4fWAz.png" alt="enter image description here" /></a></p>
| <p>This issue is poorly documented via gitlab itself and the below 'answer' is for any googlers (hint, it's not really an answer):</p>
<p>The GitLab minikube setup spins up its own ingress controllers, which is where TLS would normally be terminated, but they are being bypassed here. The '422' error is legitimate because <code>localhost:8080</code> does not provide a valid authenticity token that can be processed by the host <code>gitlab.192.168.49.2.nip.io</code></p>
<p>You can confirm this by tailing the logs of the webserver container (it outputs all logging to stdout).</p>
<p>So you can either disable the CSRF token check in the <code>omniauth.rb</code> file inside <code>config/initializers</code> and restart the Rails instance, or move away from minikube (we used KinD to get this working), since ingress appeared to be broken with the GitLab setup.</p>
|
<p>Two of my cluster nodes sometimes get <code>Kubelet stopped posting node status</code> in <code>kubectl describe node</code>. In the logs of those nodes I see this:</p>
<pre><code>Dec 11 12:01:03 alma-kube1 kubelet[946]: E1211 06:01:03.166998 946 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: Get https://192.168.151.52:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/alma-kube1?timeout=10s: read tcp 192.168.170.7:46824->192.168.151.52:6443: use of closed network connection
Dec 11 12:01:03 alma-kube1 kubelet[946]: W1211 06:01:03.167045 946 reflector.go:289] object-"kube-public"/"myregistrykey": watch of *v1.Secret ended with: very short watch: object-"kube-public"/"myregistrykey": Unexpected watch close - watch lasted less than a second and no items received
Dec 11 12:01:03 alma-kube1 kubelet[946]: W1211 06:01:03.167356 946 reflector.go:289] object-"kube-system"/"kube-router-token-bfzkn": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"kube-router-token-bfzkn": Unexpected watch close - watch lasted less than a second and no items received
Dec 11 12:01:03 alma-kube1 kubelet[946]: W1211 06:01:03.167418 946 reflector.go:289] object-"kube-public"/"default-token-kcnfl": watch of *v1.Secret ended with: very short watch: object-"kube-public"/"default-token-kcnfl": Unexpected watch close - watch lasted less than a second and no items received
Dec 11 12:01:13 alma-kube1 kubelet[946]: E1211 06:01:13.329262 946 kubelet_node_status.go:385] Error updating node status, will retry: failed to patch status "{\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"MemoryPressure\"},{\"type\":\"DiskPressure\"},{\"type\":\"PIDPressure\"},{\"type\":\"Ready\"}],\"conditions\":[{\"lastHeartbeatTime\":\"2019-12-11T06:01:03Z\",\"type\":\"MemoryPressure\"},{\"lastHeartbeatTime\":\"2019-12-11T06:01:03Z\",\"type\":\"DiskPressure\"},{\"lastHeartbeatTime\":\"2019-12-11T06:01:03Z\",\"type\":\"PIDPressure\"},{\"lastHeartbeatTime\":\"2019-12-11T06:01:03Z\",\"type\":\"Ready\"}]}}" for node "alma-kube1": Patch https://192.168.151.52:6443/api/v1/nodes/alma-kube1/status?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
</code></pre>
| <p>TL;DR</p>
<pre><code>ssh <failing node>
sudo systemctl restart kubelet
</code></pre>
<p>I should have asked myself the magical question, "Have you tried turning it off and on again?" I don't know what was causing my <code>kubelet</code> to fail, but I just <code>ssh</code>'d into the VM and restarted the <code>kubelet</code> service, and everything started working again.</p>
|
<p>I have already setup a service in a k3s cluster using:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myservice
namespace: mynamespace
labels:
app: myapp
spec:
type: LoadBalancer
selector:
app: myapp
ports:
- port: 9012
targetPort: 9011
protocol: TCP
</code></pre>
<blockquote>
<p>kubectl get svc -n mynamespace</p>
</blockquote>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio ClusterIP None <none> 9011/TCP 42m
minio-service LoadBalancer 10.32.178.112 192.168.40.74,192.168.40.88,192.168.40.170 9012:32296/TCP 42m
</code></pre>
<blockquote>
<p>kubectl describe svc myservice -n mynamespace</p>
</blockquote>
<pre><code>Name: myservice
Namespace: mynamespace
Labels: app=myapp
Annotations: <none>
Selector: app=myapp
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.32.178.112
IPs: 10.32.178.112
LoadBalancer Ingress: 192.168.40.74, 192.168.40.88, 192.168.40.170
Port: <unset> 9012/TCP
TargetPort: 9011/TCP
NodePort: <unset> 32296/TCP
Endpoints: 10.42.10.43:9011,10.42.10.44:9011
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>I assume from the above that I should be able to access the MinIO console from:
<a href="http://192.168.40.74:9012" rel="nofollow noreferrer">http://192.168.40.74:9012</a> but it is not possible.</p>
<p>Error message:</p>
<blockquote>
<p>curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection
timed out</p>
</blockquote>
<p>Furthemore, If I execute</p>
<blockquote>
<p>kubectl get node -o wide -n mynamespace</p>
</blockquote>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
antonis-dell Ready control-plane,master 6d v1.21.2+k3s1 192.168.40.74 <none> Ubuntu 18.04.1 LTS 4.15.0-147-generic containerd://1.4.4-k3s2
knodeb Ready worker 5d23h v1.21.2+k3s1 192.168.40.88 <none> Raspbian GNU/Linux 10 (buster) 5.4.51-v7l+ containerd://1.4.4-k3s2
knodea Ready worker 5d23h v1.21.2+k3s1 192.168.40.170 <none> Raspbian GNU/Linux 10 (buster) 5.10.17-v7l+ containerd://1.4.4-k3s2
</code></pre>
<p>As it is shown above the INTERNAL-IPs of nodes are the same as the EXTERNAL-IPs of Load Balancer. Am I doing something wrong here?</p>
| <h2 id="k3s-cluster-initial-configuration">K3S cluster initial configuration</h2>
<p>To reproduce the environment, I created a two-node <code>k3s</code> cluster with the following steps:</p>
<ol>
<li><p>Install k3s control-plane on required host:</p>
<pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -
</code></pre>
</li>
<li><p>Verify it works:</p>
<pre><code>k8s kubectl get nodes -o wide
</code></pre>
</li>
<li><p>To add a worker node, this command should be run on a worker node:</p>
<pre><code>curl -sfL https://get.k3s.io | K3S_URL=https://control-plane:6443 K3S_TOKEN=mynodetoken sh -
</code></pre>
</li>
</ol>
<p>Where <code>K3S_URL</code> is a control-plane URL (with IP or DNS)</p>
<p><code>K3S_TOKEN</code> can be got by:</p>
<pre><code>sudo cat /var/lib/rancher/k3s/server/node-token
</code></pre>
<p>You should have a running cluster:</p>
<pre><code>$ k3s kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3s-cluster Ready control-plane,master 27m v1.21.2+k3s1 10.186.0.17 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-gcp containerd://1.4.4-k3s2
k3s-worker-1 Ready <none> 18m v1.21.2+k3s1 10.186.0.18 <none> Ubuntu 18.04.5 LTS 5.4.0-1046-gcp containerd://1.4.4-k3s2
</code></pre>
<h2 id="reproduction-and-testing">Reproduction and testing</h2>
<p>I created a simple deployment based on <code>nginx</code> image by:</p>
<pre><code>$ k3s kubectl create deploy nginx --image=nginx
</code></pre>
<p>And exposed it:</p>
<pre><code>$ k3s kubectl expose deploy nginx --type=LoadBalancer --port=8080 --target-port=80
</code></pre>
<p>This means that <code>nginx</code> container in pod is listening to port <code>80</code> and <code>service</code> is accessible on port <code>8080</code> within the cluster:</p>
<pre><code>$ k3s kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 29m <none>
nginx LoadBalancer 10.43.169.6 10.186.0.17,10.186.0.18 8080:31762/TCP 25m app=nginx
</code></pre>
<p>The service is accessible on either node IP (or <code>localhost</code>) on port <code>8080</code>, as well as via the <code>NodePort</code>.</p>
<p>Also, taking into account the error you get (<code>curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out</code>), the service is configured but does not respond properly (it is not a 404 from the ingress or a <code>connection refused</code>).</p>
<h2 id="answer-on-second-question-loadbalancer">Answer on second question - Loadbalancer</h2>
<p>From <a href="https://rancher.com/docs/k3s/latest/en/networking/#how-the-service-lb-works" rel="noreferrer">rancher k3s official documentation about LoadBalancer</a>, <a href="https://github.com/k3s-io/klipper-lb" rel="noreferrer">Klipper Load Balancer</a> is used. From their github repo:</p>
<blockquote>
<p>This is the runtime image for the integrated service load balancer in
klipper. This works by using a host port for each service load
balancer and setting up iptables to forward the request to the cluster
IP.</p>
</blockquote>
<p>From <a href="https://rancher.com/docs/k3s/latest/en/networking/#how-the-service-lb-works" rel="noreferrer">how the service loadbalancer works</a>:</p>
<blockquote>
<p>K3s creates a controller that creates a Pod for the service load
balancer, which is a Kubernetes object of kind Service.</p>
<p>For each service load balancer, a DaemonSet is created. The DaemonSet
creates a pod with the svc prefix on each node.</p>
<p>The Service LB controller listens for other Kubernetes Services. After
it finds a Service, it creates a proxy Pod for the service using a
DaemonSet on all of the nodes. This Pod becomes a proxy to the other
Service, so that for example, requests coming to port 8000 on a node
could be routed to your workload on port 8888.</p>
<p>If the Service LB runs on a node that has an external IP, it uses the
external IP.</p>
</blockquote>
<p>In other words, yes, it is expected that the load balancer has the same IP addresses as the hosts' <code>internal-IP</code>s. Every service of type LoadBalancer in a k3s cluster gets its own <code>daemonSet</code> on each node to serve direct traffic to the original service.</p>
<p>For example, I created a second deployment <code>nginx-two</code> and exposed it on port <code>8090</code>. You can see that there are two pods from the two different deployments AND four pods which act as load balancers (note the <code>svclb</code> prefix in their names):</p>
<pre><code>$ k3s kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-6799fc88d8-7m4v4 1/1 Running 0 47m 10.42.0.9 k3s-cluster <none> <none>
svclb-nginx-jc4rz 1/1 Running 0 45m 10.42.0.10 k3s-cluster <none> <none>
svclb-nginx-qqmvk 1/1 Running 0 39m 10.42.1.3 k3s-worker-1 <none> <none>
nginx-two-6fb6885597-8bv2w 1/1 Running 0 38s 10.42.1.4 k3s-worker-1 <none> <none>
svclb-nginx-two-rm594 1/1 Running 0 2s 10.42.0.11 k3s-cluster <none> <none>
svclb-nginx-two-hbdc7 1/1 Running 0 2s 10.42.1.5 k3s-worker-1 <none> <none>
</code></pre>
<p>Both services have the same <code>EXTERNAL-IP</code>s:</p>
<pre><code>$ k3s kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.43.169.6 10.186.0.17,10.186.0.18 8080:31762/TCP 50m
nginx-two LoadBalancer 10.43.118.82 10.186.0.17,10.186.0.18 8090:31780/TCP 4m44s
</code></pre>
|
<p>I'm testing Kubernetes behavior when a pod gets an error.</p>
<p>I now have a pod in CrashLoopBackOff status caused by a failed liveness probe. From what I can see in the Kubernetes events, the pod turns into CrashLoopBackOff after 3 tries and begins to back off restarting, but the related "Liveness probe failed" events won't update?</p>
<pre><code>β ~ kubectl describe pods/my-nginx-liveness-err-59fb55cf4d-c6p8l
Name: my-nginx-liveness-err-59fb55cf4d-c6p8l
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Thu, 15 Jul 2021 12:29:16 +0800
Labels: pod-template-hash=59fb55cf4d
run=my-nginx-liveness-err
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Controlled By: ReplicaSet/my-nginx-liveness-err-59fb55cf4d
Containers:
my-nginx-liveness-err:
Container ID: docker://edc363b76811fdb1ccacdc553d8de77e9d7455bb0d0fb3cff43eafcd12ee8a92
Image: nginx
Image ID: docker-pullable://nginx@sha256:353c20f74d9b6aee359f30e8e4f69c3d7eaea2f610681c4a95849a2fd7c497f9
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 15 Jul 2021 13:01:36 +0800
Finished: Thu, 15 Jul 2021 13:02:06 +0800
Ready: False
Restart Count: 15
Liveness: http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r7mh4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-r7mh4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 37m default-scheduler Successfully assigned default/my-nginx-liveness-err-59fb55cf4d-c6p8l to minikube
Normal Created 35m (x4 over 37m) kubelet Created container my-nginx-liveness-err
Normal Started 35m (x4 over 37m) kubelet Started container my-nginx-liveness-err
Normal Killing 35m (x3 over 36m) kubelet Container my-nginx-liveness-err failed liveness probe, will be restarted
Normal Pulled 31m (x7 over 37m) kubelet Container image "nginx" already present on machine
Warning Unhealthy 16m (x32 over 36m) kubelet Liveness probe failed: Get "http://172.17.0.3:8080/": dial tcp 172.17.0.3:8080: connect: connection refused
Warning BackOff 118s (x134 over 34m) kubelet Back-off restarting failed container
</code></pre>
<p>The BackOff event was updated 118s ago, but the Unhealthy event was updated 16m ago?</p>
<p>And why am I getting a Restart Count of only 15 while there are 134 BackOff events?</p>
<p>I'm using minikube and my deployment is like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx-liveness-err
spec:
selector:
matchLabels:
run: my-nginx-liveness-err
replicas: 1
template:
metadata:
labels:
run: my-nginx-liveness-err
spec:
containers:
- name: my-nginx-liveness-err
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /
port: 8080
</code></pre>
<p>I think you might be confusing Status Conditions and Events. Events don't "update", they just exist. It's a stream of event data from the controllers for debugging or alerting on. The <code>Age</code> column is the relative timestamp to the most recent instance of that event type, and you can see it does some basic de-duplication. Events also age out after a few hours to keep the database from exploding.</p>
<p>So your issue has nothing to do with the liveness probe; your container is crashing on startup.</p>
|
<p>This is a pretty basic question so I figure I must be missing something obvious, Does openshift service uses round-robin to load balance between pods? Or does it forward requests to the pod with the greatest amount of available resources? Or is it totally random?</p>
<p>My service configuration looks like that:</p>
<pre><code>kind: service
metadata:
name: temp
labels:
app: temp
spec:
port:
targetPort: temp-port
to:
kind: Service
name: temp
</code></pre>
| <p>In Kubernetes (OpenShift is just a Kubernetes distribution), Services result in iptables rules. That means for a Service with more than one Pods, traffic is distributed / redirected via iptables to the different Pods selected by the Service.</p>
<p>So for example if we have three Pods selected by a Service, we can see the following resulting iptables entries with <code>--probability</code> on the underlying Worker Nodes:</p>
<pre><code>-A KUBE-SVC-C5D5TE7O3IX6LYPU -m comment --comment "openshift-logging/fluentd:metrics" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-K7BWKR3YFNRALYRO
-A KUBE-SVC-C5D5TE7O3IX6LYPU -m comment --comment "openshift-logging/fluentd:metrics" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-SLOSD6E2CHTNQQZ7
-A KUBE-SVC-C5D5TE7O3IX6LYPU -m comment --comment "openshift-logging/fluentd:metrics" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-I2MJAF47DZ7EPTNC
-A KUBE-SVC-C5D5TE7O3IX6LYPU -m comment --comment "openshift-logging/fluentd:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-QCINKYOFNQTK2FRX
-A KUBE-SVC-C5D5TE7O3IX6LYPU -m comment --comment "openshift-logging/fluentd:metrics" -j KUBE-SEP-RWL5ZKQM57XO3TAF
</code></pre>
<p>So the answer to your question is that <strong>traffic distribution via a Service is random</strong>.</p>
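<p>The decreasing <code>--probability</code> values above are not arbitrary: iptables evaluates these rules sequentially, so probabilities of 1/5, 1/4, 1/3, 1/2 (plus a final catch-all rule) give each of the five endpoints an equal 20% share. A small sketch of that arithmetic (illustrative only, not part of Kubernetes):</p>

```python
# Why sequential rules with probabilities 1/n, 1/(n-1), ..., 1/2, 1
# split traffic evenly across n endpoints.
def shares(n):
    remaining = 1.0
    out = []
    for k in range(n, 0, -1):
        p = 1.0 / k              # probability attached to this rule
        out.append(remaining * p)
        remaining *= (1 - p)     # traffic that falls through to the next rule
    return out

print([round(s, 3) for s in shares(5)])  # → [0.2, 0.2, 0.2, 0.2, 0.2]
```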
<p>On the other hand, an OpenShift Route provides more control over how the traffic is distributed to the Pods. You can choose different load-balancing algorithms. Available options are <code>source</code>, <code>roundrobin</code>, and <code>leastconn</code>.
You can find more options in the <a href="https://docs.openshift.com/container-platform/4.7/networking/routes/route-configuration.html" rel="nofollow noreferrer">documentation about OpenShift Routes</a>.</p>
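<p>For illustration, with a Route the algorithm can be selected per route via an annotation; a minimal sketch using the service name from the question (annotation key taken from the OpenShift docs):</p>

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: temp
  annotations:
    haproxy.router.openshift.io/balance: roundrobin  # or: source, leastconn
spec:
  to:
    kind: Service
    name: temp
```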
|
<p>Trying to figure out how to authenticate with the storage API from within a GKE cluster.</p>
<p>Code:</p>
<pre><code>Storage storage = StorageOptions.newBuilder()
.setCredentials(ServiceAccountCredentials.getApplicationDefault())
.setProjectId(gcpProjectId)
.build().getService();
</code></pre>
<p><code>getApplicationDefault()</code> is documented to use these means to authenticate with the API:</p>
<ol>
<li>Credentials file pointed to by the {@code GOOGLE_APPLICATION_CREDENTIALS} environment variable</li>
<li>Credentials provided by the Google Cloud SDK {@code gcloud auth application-default login} command</li>
<li>Google App Engine built-in credentials</li>
<li>Google Cloud Shell built-in credentials</li>
<li>Google Compute Engine built-in credentials</li>
</ol>
<p>The application is using the GCP workload identity feature, so the application (in-cluster) service account is annotated with:</p>
<pre><code>serviceAccount.annotations.iam.gke.io/gcp-service-account: my-service-account@my-project.iam.gserviceaccount.com
</code></pre>
<p>Now the call to the storage account fails with the following error:</p>
<pre><code>{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Primary: /namespaces/my-project.svc.id.goog with additional claims does not have storage.objects.create access to the Google Cloud Storage object.",
"reason" : "forbidden"
} ],
"message" : "Primary: /namespaces/my-project.svc.id.goog with additional claims does not have storage.objects.create access to the Google Cloud Storage object."
}
</code></pre>
<p>This makes me think that workload identity is not working correctly. I would expect the error message to reference my annotated service account, not the default one.</p>
<p>Is there anything else I should have been doing?</p>
| <p>The answer, in part, aside from the annotation syntax, is that, just like me, you probably didn't look closely enough at this part in the documentation:</p>
<pre><code> gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]" \
GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
</code></pre>
<p>Notice the <code>PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]</code> piece. They give no examples of its syntax, but it looks like this in my Terraform:</p>
<pre><code>resource "google_project_iam_member" "app-binding-2" {
role = "roles/iam.workloadIdentityUser"
member = "serviceAccount:${local.ws_vars["project-id"]}.svc.id.goog[mynamespace/myk8ssaname]"
}
</code></pre>
<p>Weirdly, I didn't know you could bind an IAM policy to a k8s service account; even more weirdly, you can apply this binding in Terraform even if the namespace, much less the service account, doesn't exist yet. So you can run this before any deployments.</p>
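<p>For completeness, the Kubernetes side of the pairing is a plain annotation on the ServiceAccount; a hedged sketch using the placeholder names from the Terraform above:</p>

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myk8ssaname
  namespace: mynamespace
  annotations:
    iam.gke.io/gcp-service-account: my-service-account@my-project.iam.gserviceaccount.com
```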
<p>I truly wish Google would provide better documentation and support, this took me several hours to figure out.</p>
|
<p>I am new to Kubernetes. Setting up nginx-ingress in a test cluster. One of our senior people rolled by and noticed the following.</p>
<pre><code># kubectl get services
...
ingress-ingress-nginx-controller-admission ClusterIP xx.xxx.xxx.xxx <none> 443/TCP
...
</code></pre>
<p>What's that, he asked. Get rid of it if you don't need it.</p>
<p>Before I rip it out and maybe cripple my test cluster... what <em>is</em> ingress-nginx-controller-admission and why do I need it?</p>
| <p>It's the service for the validating webhook that ingress-nginx includes. If you remove it, you'll be unable to create or update Ingress objects unless you also remove the webhook configuration.</p>
<p>tl;dr it's important, no touchy</p>
|
<p>I have a .NET 5.0 App that connects to a SQL Server database. If I host the App in Azure App service and the database in Azure SQL database, all is fine.</p>
<p>Now I put the App in a Docker container and deploy it in AKS. It doesn't work anymore (can't connect to the Azure SQL database).</p>
<p>How should I configure my AKS deployment to have it working?</p>
| <p>Did you check about the ingress and firewall rules ?</p>
<p><a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/firewall-configure" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-sql/database/firewall-configure</a></p>
<p>Open the firewall or edit ingress rules so that AKS PODs can connect to the database or else you have to the VPC peering.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/private-endpoint-overview" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-sql/database/private-endpoint-overview</a></p>
|
<p>Kustomize directory structure</p>
<pre><code>βββ base
βΒ Β βββ deployment.yaml
βΒ Β βββ kustomization.yaml
βββ overlays
βββ prod
βββ kustomization.yaml
βββ namespace-a
βΒ Β βββ deployment-a1
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ patch.yaml
βΒ Β βββ deployment-a2
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ patch.yaml
βΒ Β βββ kustomization.yaml
βΒ Β βββ namespace.yaml
βββ namespace-b
βΒ Β βββ deployment-b1
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ patch.yaml
βΒ Β βββ deployment-b2
βΒ Β βΒ Β βββ kustomization.yaml
βΒ Β βΒ Β βββ patch.yaml
βΒ Β βββ kustomization.yaml
βΒ Β βββ namespace.yaml
βββ namespace-c
</code></pre>
<p>As you can see above, I have a <code>prod</code> environment with <code>namespace-a</code>, <code>namespace-b</code>, and a few more.
To create deployments for all of them, I can simply run the below command:</p>
<pre><code> > kustomize overlays/prod
</code></pre>
<p>Which works flawlessly, both namespaces are created along with other deployment files for all deployments.</p>
<p>To create a deployment for only namespace-a:</p>
<pre><code> > kustomize overlays/prod/namespace-a
</code></pre>
<p>That also works. :)</p>
<p>But that's not where the story ends for me, at least.</p>
<p>I would like to keep the current functionality and also be able to deploy <code>deployment-a1, deployment-a2 ...</code></p>
<pre><code> > kustomize overlays/prod/namespace-a/deployment-a1
</code></pre>
<p>If I put the namespace.yaml inside the <code>deployment-a1</code> folder and add it to <code>kustomization.yaml</code>,
then the above command works, but the previous 2 fail with an error because now we have 2 namespace files with the same name.</p>
<p>I have 2 queries.</p>
<ol>
<li>Can this directory structure be improved?</li>
<li>How can I create a namespace with a single deployment without breaking the other functionality?</li>
</ol>
<p>Full code can be seen <a href="https://github.com/deepak-gc/kustomize-namespace-issue" rel="nofollow noreferrer">here</a></p>
| <p>In your particular case, in the most ideal scenario, all the required namespaces should already be created before running the <code>kustomize</code> command.
However, I know that you would like to create namespaces dynamically as needed.</p>
<p>Using a Bash script as some kind of wrapper can definitely help with this approach, but I'm not sure if you want to use this.</p>
<p>Below, I'll show you how this can work, and you can choose if it's right for you.</p>
<hr />
<p>First, I created a <code>kustomize-wrapper</code> script that requires two arguments:</p>
<ol>
<li>The name of the Namespace you want to use.</li>
<li>Path to the directory containing the <code>kustomization.yaml</code> file.</li>
</ol>
<p><strong>kustomize-wrapper.sh</strong></p>
<pre><code>$ cat kustomize-wrapper.sh
#!/bin/bash
if [ -z "$1" ] || [ -z "$2" ]; then
echo "Pass required arguments !"
echo "Usage: $0 NAMESPACE KUSTOMIZE_PATH"
exit 1
else
NAMESPACE=$1
KUSTOMIZE_PATH=$2
fi
echo "Creating namespace"
sed -i "s/name:.*/name: ${NAMESPACE}/" namespace.yaml
kubectl apply -f namespace.yaml
echo "Setting namespace: ${NAMESPACE} in the kustomization.yaml file"
sed -i "s/namespace:.*/namespace: ${NAMESPACE}/" base/kustomization.yaml
echo "Deploying resources in the ${NAMESPACE}"
kustomize build ${KUSTOMIZE_PATH} | kubectl apply -f -
</code></pre>
<p>As you can see, this script creates a namespace using the <code>namespace.yaml</code> file as the template. It then sets the same namespace in the <code>base/kustomization.yaml</code> file and finally runs the <code>kustomize</code> command with the path you provided as the second argument.</p>
<p><strong>namespace.yaml</strong></p>
<pre><code>$ cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name:
</code></pre>
<p><strong>base/kustomization.yaml</strong></p>
<pre><code>$ cat base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace:
resources:
- deployment.yaml
</code></pre>
<p><strong>Directory structure</strong></p>
<pre><code>$ tree
.
βββ base
β βββ deployment.yaml
β βββ kustomization.yaml
βββ kustomize-wrapper.sh
βββ namespace.yaml
βββ overlays
βββ prod
βββ deployment-a1
β βββ kustomization.yaml
β βββ patch.yaml
βββ deployment-a2
β βββ kustomization.yaml
β βββ patch.yaml
βββ kustomization.yaml
</code></pre>
<p>We can check if it works as expected.</p>
<p>Creating the <code>namespace-a</code> Namespace along with <code>app-deployment-a1</code> and <code>app-deployment-a2</code> Deployments:</p>
<pre><code>$ ./kustomize-wrapper.sh namespace-a overlays/prod
Creating namespace
namespace/namespace-a created
Setting namespace: namespace-a in the kustomization.yaml file
deployment.apps/app-deployment-a1 created
deployment.apps/app-deployment-a2 created
</code></pre>
<p>Creating only the <code>namespace-a</code> Namespace and <code>app-deployment-a1</code> Deployment:</p>
<pre><code>$ ./kustomize-wrapper.sh namespace-a overlays/prod/deployment-a1
Creating namespace
namespace/namespace-a created
Setting namespace: namespace-a in the kustomization.yaml file
deployment.apps/app-deployment-a1 created
</code></pre>
|
<p>I am trying to follow this tutorial to learn to use <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">kind</a>. The version I just installed using brew is:
kind version 0.11.1</p>
<p>The config file looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30080
hostPort: 80
listenAddress: "0.0.0.0"
protocol: TCP
</code></pre>
<p>Apparently that version is wrong because I get an error <code>ERROR: failed to create cluster: unknown apiVersion: kind.sigs.k8s.io/v1alpha3</code> when I try to create the cluster: <code>$ kind create cluster --name mycluster --config config/kind.config.yaml --wait 5m</code>.</p>
<p>I found an example of some other version string, but when trying to add the <code>spec</code> block in that same tutorial I get a configuration error. I assume this means because the API broke between the version and the yaml I am using.</p>
<p>Why do I get the original "failed to create cluster" error, and where can I find documentation associating kind versions with yaml syntax?</p>
| <p>The version needs to be set to <code>apiVersion: kind.x-k8s.io/v1alpha4</code></p>
<p>Notice the change from <code>kind.sigs.k8s.io</code> to <code>kind.x-k8s.io</code> in addition to changing to <code>v1alpha4</code>.</p>
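<p>With that change, the config from the question would look like this (everything else unchanged):</p>

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    listenAddress: "0.0.0.0"
    protocol: TCP
```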
|
<p>I'm trying to follow <a href="https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/74#issuecomment-678073133" rel="nofollow noreferrer">this suggestion</a> to use a <a href="https://github.com/hetznercloud/hcloud-cloud-controller-manager/blob/master/docs/load_balancers.md" rel="nofollow noreferrer">Hetzner load balancer</a> as an Nginx ingress controller.</p>
<pre><code>controller:
config:
use-proxy-protocol: "true"
replicaCount: 3
service:
type: LoadBalancer
annotations:
load-balancer.hetzner.cloud/name: nginx-controller-new
load-balancer.hetzner.cloud/location: hel1
load-balancer.hetzner.cloud/use-private-ip: true
load-balancer.hetzner.cloud/algorithm-type: least_connections
load-balancer.hetzner.cloud/uses-proxyprotocol: true
load-balancer.hetzner.cloud/hostname: somehost.com
</code></pre>
<p>Deployments:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: echo1
spec:
selector:
matchLabels:
app: echo1
replicas: 3
template:
metadata:
labels:
app: echo1
spec:
containers:
- name: echo1
image: hashicorp/http-echo
args:
- "-text=echo1"
ports:
- containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
name: echo-service
spec:
selector:
app: echo1
ports:
- name: http
protocol: TCP
port: 80
targetPort: 5678
ipFamilyPolicy: PreferDualStack
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx
spec:
ingressClassName: nginx
rules:
- http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: echo-service
port:
number: 80
host: somehost.com
</code></pre>
<p>After installation, a Hetzner load balancer is successfully provisioned, however, it isn't able to detect the services:</p>
<p><a href="https://i.stack.imgur.com/zwQOi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zwQOi.png" alt="enter image description here" /></a></p>
<p>I'm at a loss here. How can I connect the echo1 app to the ingress-nginx-controller service? I checked <a href="https://github.com/kubernetes/ingress-nginx/blob/master/charts/ingress-nginx/values.yaml" rel="nofollow noreferrer">all of the available helm values</a> but I cannot find something like <code>service.selector</code> to target echo1's service and make it publicly available. Can someone help me? Are there any alternatives?</p>
<p>I am not a Kubernetes master (more of a noob), but I got it working with an L4 load balancer.</p>
<p>An annotation has to be set to your Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
</code></pre>
<p>And to use it without nginx-ingress, this worked for me (not tested with your labels):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "nginx-hello-world"
labels:
app: "hello-world"
spec:
selector:
matchLabels:
app: "hello-world"
strategy:
type: "Recreate"
template:
metadata:
labels:
app: "hello-world"
spec:
containers:
- image: "rancher/hello-world"
name: "nginx-hello-world"
imagePullPolicy: "Always"
ports:
- containerPort: 80
name: "http"
</code></pre>
<p><strong>Service</strong>
for HTTP (to test your deployment)</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: "v1"
kind: Service
metadata:
name: "nginx-hello-world"
labels:
app: "hello-world"
annotations:
load-balancer.hetzner.cloud/name: lb-development
load-balancer.hetzner.cloud/hostname: somehost.com
load-balancer.hetzner.cloud/protocol: http
load-balancer.hetzner.cloud/health-check-port: 10254
spec:
type: LoadBalancer
selector:
app: "hello-world"
ports:
- name: "http"
port: 80
targetPort: 80
</code></pre>
<p>for SSL</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: nginx-hello-world
labels:
app: hello-world
annotations:
load-balancer.hetzner.cloud/hostname: somehost.com
load-balancer.hetzner.cloud/http-certificates: managed-certificate-1-wildcard-somehost.com
load-balancer.hetzner.cloud/name: lb-development
load-balancer.hetzner.cloud/protocol: https
spec:
ports:
- name: https
nodePort: 32725
port: 443
protocol: TCP
targetPort: 80
selector:
app: hello-world
type: LoadBalancer
</code></pre>
|
<p>I have created a Kubernetes cluster on 2 Raspberry Pis (Model 3 and 3B+) to use as a Kubernetes playground.</p>
<p>I have deployed a PostgreSQL database and a Spring Boot app (called meal-planer) to play around with.
The meal-planer should read and write data from and to the PostgreSQL database.</p>
<p>However, the app can't reach the database.</p>
<p>Here is the deployment-descriptor of the postgresql:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: postgres
namespace: home
labels:
app: postgres
spec:
selector:
app: postgres
ports:
- port: 5432
targetPort: 5432
name: postgres
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: postgres
namespace: home
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: dev-db-secret
key: username
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dev-db-secret
key: password
- name: POSTGRES_DB
value: home
ports:
- containerPort: 5432
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-data
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-pv-claim
---
</code></pre>
<p>Here is the deployments-descriptor of the meal-planer</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: meal-planner
namespace: home
labels:
app: meal-planner
spec:
type: ClusterIP
selector:
app: meal-planner
ports:
- port: 8080
name: meal-planner
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: meal-planner
namespace: home
spec:
replicas: 1
selector:
matchLabels:
app: meal-planner
template:
metadata:
labels:
app: meal-planner
spec:
containers:
- name: meal-planner
image: 08021986/meal-planner:v1
imagePullPolicy: Always
ports:
- containerPort: 8080
---
</code></pre>
<p>The meal-planer image is an arm32v7 image running a jar file.
Inside the cluster, the meal-planer uses the connection string <code>jdbc:postgresql://postgres:5432/home</code> to connect to the DB.</p>
<p>I am absolutely sure, that the DB-credentials are correct, since i can access the DB when i port-forward the service.</p>
<p>When deploying both applications, I can <code>kubectl exec -it <<podname>> -n home -- bin/sh</code> into it. If I call <code>wget -O- postgres</code> or <code>wget -O- postgres.home</code> from there, I always get <code>Connecting to postgres (postgres)|10.43.62.32|:80... failed: Network is unreachable</code>.</p>
<p>I don't know, why the network is unreachable and I don't know what I can do about it.</p>
<p>First of all, don't use Deployment workloads for applications that require saving state. This could get you into trouble and even data loss.
For that purpose, you should use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></p>
<blockquote>
<p>StatefulSet is the workload API object used to manage stateful
applications.</p>
<p>Manages the deployment and scaling of a set of Pods, and provides
guarantees about the ordering and uniqueness of these Pods.</p>
<p>Like a Deployment, a StatefulSet manages Pods that are based on an
identical container spec. Unlike a Deployment, a StatefulSet maintains
a sticky identity for each of their Pods. These pods are created from
the same spec, but are not interchangeable: each has a persistent
identifier that it maintains across any rescheduling.</p>
</blockquote>
<p>Also, for databases, the storage should be as close to the engine as possible (due to latency), most preferably a <code>hostPath</code> storageClass with <code>ReadWriteOnce</code>.</p>
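<p>As a rough sketch (the path and size are illustrative, the claim name and namespace are taken from your manifests), a <code>hostPath</code> PersistentVolume plus claim could look like:</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/postgres   # local directory on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  namespace: home
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```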
<p>Now, regarding your issue: my guess is that it's either a problem with how you connect to the DB in your application, or the remote connection is refused by the definitions in pg_hba.conf.</p>
<p>Here is a minimal working example that'll help you get started:</p>
<pre><code>kind: Namespace
apiVersion: v1
metadata:
name: test
labels:
name: test
---
kind: Service
apiVersion: v1
metadata:
name: postgres-so-test
namespace: test
labels:
app: postgres-so-test
spec:
selector:
app: postgres-so-test
ports:
- port: 5432
targetPort: 5432
name: postgres-so-test
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
namespace: test
name: postgres-so-test
spec:
replicas: 1
serviceName: postgres-so-test
selector:
matchLabels:
app: postgres-so-test
template:
metadata:
labels:
app: postgres-so-test
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
env:
- name: POSTGRES_USER
value: johndoe
- name: POSTGRES_PASSWORD
value: thisisntthepasswordyourelokingfor
- name: POSTGRES_DB
value: home
ports:
- containerPort: 5432
</code></pre>
<p>Now let's test this. NOTE: I'll also create a deployment from the Postgres image just to have a pod in this namespace with the <a href="https://www.postgresql.org/docs/9.3/app-pg-isready.html" rel="nofollow noreferrer">pg_isready</a> binary, in order to test the connection to the created DB.</p>
<pre class="lang-bash prettyprint-override"><code>pi@rsdev-pi-master:~/test $ kubectl apply -f test_db.yml
namespace/test created
service/postgres-so-test created
statefulset.apps/postgres-so-test created
pi@rsdev-pi-master:~/test $ kubectl apply -f test_container.yml
deployment.apps/test-container created
pi@rsdev-pi-master:~/test $ kubectl get pods -n test
NAME                             READY   STATUS    RESTARTS   AGE
postgres-so-test-0               1/1     Running   0          19s
test-container-d77d75d78-cgjhc   1/1     Running   0          12s
pi@rsdev-pi-master:~/test $ sudo kubectl get all -n test
NAME                                 READY   STATUS    RESTARTS   AGE
pod/postgres-so-test-0               1/1     Running   0          26s
pod/test-container-d77d75d78-cgjhc   1/1     Running   0          19s

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/postgres-so-test   ClusterIP   10.43.242.51   <none>        5432/TCP   30s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/test-container   1/1     1            1           19s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/test-container-d77d75d78   1         1         1       19s

NAME                                READY   AGE
statefulset.apps/postgres-so-test   1/1     27s
pi@rsdev-pi-master:~/test $ kubectl exec -it test-container-d77d75d78-cgjhc -n test -- /bin/bash
root@test-container-d77d75d78-cgjhc:/# pg_isready -d home -h postgres-so-test -p 5432 -U johndoe
postgres-so-test:5432 - accepting connections
</code></pre>
<p>If you still have trouble connecting to the DB, please attach the output of the following:</p>
<ol>
<li><code>kubectl describe pod <<postgres_pod_name>></code></li>
<li><code>kubectl logs <<postgres_pod_name>></code> — ideally after you've tried to connect to it</li>
<li><code>kubectl exec -it <<postgres_pod_name>> -- cat /var/lib/postgresql/data/pg_hba.conf</code></li>
</ol>
<p>Also research the topic of K8s operators. They are useful for deploying more complex, production-ready application stacks (e.g. a database with a master + replicas + LB).</p>
|
<p>I am setting up HPA on a custom metric - basically the number of threads of a deployment.</p>
<p>I have created a PrometheusRule to record the average number of threads (over a 5-minute window). I am putting continuous load on the container to increase the thread count, and the averaged value is going up as expected.</p>
<p>I started with 2 replicas, and even though the current value has crossed the target value, I am not seeing my deployment scale out.</p>
<p>As you can see, I have set the target to 44 and the current value has been 51.55 for more than 10 minutes, but there is still no scale-up.
<a href="https://i.stack.imgur.com/Uo6No.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uo6No.png" alt="enter image description here" /></a></p>
<p><strong>Version Info</strong></p>
<ul>
<li>Kubernetes (AKS) : 1.19.11</li>
<li>Prometheus : 2.22.1</li>
<li>Setup done via prometheus-operator (0.7)</li>
<li>Autoscaling api version : autoscaling/v2beta2</li>
</ul>
<p><strong>Prometheus Rule</strong></p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: rdp-rest
  namespace: default
  labels:
    app.kubernetes.io/name: node-exporter
    app.kubernetes.io/version: 1.0.1
    prometheus: k8s
    role: alert-rules
    run: rdp-rest
    app: rdp-rest
spec:
  groups:
    - name: hpa-rdp-rest
      interval: 10s
      rules:
        - expr: 'avg_over_time(container_threads{container="rdp-rest"}[5m])'
          record: hpa_custom_metrics_container_threads_rdp_rest
          labels:
            service: rdp-rest
</code></pre>
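<p>For reference, an HPA consuming this recorded metric through the Prometheus adapter would look roughly like this (a sketch using <code>autoscaling/v2beta2</code>; the deployment name, replica bounds, and target value are assumptions based on the screenshot, not taken from the original manifests):</p>

```yaml
# Sketch: HPA scaling on the recorded custom metric.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: rdp-rest
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rdp-rest            # assumed deployment name
  minReplicas: 2
  maxReplicas: 10             # assumed upper bound
  metrics:
    - type: Pods
      pods:
        metric:
          name: hpa_custom_metrics_container_threads_rdp_rest
        target:
          type: AverageValue
          averageValue: "44"  # target shown in the screenshot
```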
<p><strong>Manifests</strong> - <a href="https://github.com/prometheus-operator/kube-prometheus/tree/release-0.7/manifests" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus/tree/release-0.7/manifests</a></p>
<p><strong>Update (6th July)</strong> - HPA with custom metrics is working fine for other technologies like Node.js/nginx, etc., but not for a Netty API.</p>
<p>Any thoughts?</p>
| <p>Finally, after a week, I found the root cause.</p>
<p>The issue was with the labels. I had 2 deployments with the same label, so internally the HPA was fetching stats for all pods carrying that label and scaling up/down based on the aggregate. As soon as I corrected the labels, the HPA worked as expected.</p>
<p>Oddly, the Prometheus UI shows stats for only one type of pod, so this looks like an internal bug or quirk: even though the HPA is given a deployment name, it fetches stats based on labels.</p>
<p>Point to remember: always double-check your labels.</p>
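<p>To illustrate the fix (deployment names are hypothetical, and the container specs are omitted for brevity): give each deployment its own unique label value, so the metric query and the HPA only match the intended pods:</p>

```yaml
# Hypothetical fix: each deployment gets a distinct "app" label,
# so the custom metric and the HPA no longer mix pods from both.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rdp-rest
spec:
  selector:
    matchLabels:
      app: rdp-rest           # was shared with the other deployment
  template:
    metadata:
      labels:
        app: rdp-rest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rdp-rest-worker       # hypothetical second deployment
spec:
  selector:
    matchLabels:
      app: rdp-rest-worker    # now distinct
  template:
    metadata:
      labels:
        app: rdp-rest-worker
```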
|