| prompt | response |
|---|---|
<p>I have a parent chart which contains 4 subcharts. Out of these, I want to deploy 1 specific subchart to a different namespace, and all the template files in that subchart refer to <code>{{ .Release.Namespace }}</code>. Is there any way to modify the <code>.Release.Namespace</code> of a subchart from the parent chart?</p>
| <p>I don't believe this is possible using vanilla Helm and charts you don't control.</p>
<p>When a chart depends on a subchart, there's fairly little that it's possible to customize. The parent chart can provide a default set of values for the subchart, but nothing computed, and those can be overridden by the person running <code>helm install</code>.</p>
<p>If, and only if, the subchart is specifically written to deploy into an alternate namespace</p>
<pre class="lang-yaml prettyprint-override"><code># Every object in the subchart must have this configuration
metadata:
  namespace: {{ .Values.namespace | default .Release.Namespace }}
</code></pre>
<p>then you could supply that value to the subchart; but this isn't a default configuration.</p>
<p>My general experience has been that Helm "umbrella charts" are inflexible in a couple of important ways. There are higher-level tools like <a href="https://github.com/roboll/helmfile" rel="noreferrer">Helmfile</a> and <a href="https://github.com/Praqma/helmsman" rel="noreferrer">Helmsman</a> that provide a single-command installation of multiple Helm charts with a full set of options (Helmsman is simpler, Helmfile allows Helm-style templating almost everywhere which is both more powerful and more complex). If you need to install four charts, three into one namespace and one into another, these tools might work better.</p>
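<p>For illustration, a minimal <code>helmfile.yaml</code> sketch (the repository, chart names, and namespaces below are placeholders, not taken from the question) showing releases installed into different namespaces with one command:</p>
<pre class="lang-yaml prettyprint-override"><code># helmfile.yaml -- a sketch only; adapt names to your charts
repositories:
  - name: mycharts
    url: https://charts.example.com

releases:
  - name: app-core
    namespace: team-a        # three charts into one namespace
    chart: mycharts/core
  - name: app-api
    namespace: team-a
    chart: mycharts/api
  - name: app-worker
    namespace: team-a
    chart: mycharts/worker
  - name: app-metrics
    namespace: monitoring    # the fourth chart into another namespace
    chart: mycharts/metrics
</code></pre>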
|
<p>I want to collect Docker container logs. By default, log files are deleted when a container is removed, which causes some logs to be lost each time I update my service. How can I keep log files after removing containers?
Or is there another way to collect all logs from containers without losing any?</p>
| <p>There will be two situations:</p>
<ul>
<li>If your logs go to stdout or stderr, you can save them before removing the container:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>docker logs CONTAINER_ID > container.log
</code></pre>
<ul>
<li>If your logs are stored in files, you can copy them out, or mount a host directory for them when running the container:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code># Copy the logs out to the host
docker cp CONTAINER_ID:/path/to/your/log_file /host/path/to/store
# Mount a directory for them
docker run -d \
  -v /host/path/to/store/logs:/container/path/stored/logs \
  your-image
</code></pre>
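<p>Another option, if your host runs systemd, is to use a logging driver that stores the stdout/stderr output outside the container's lifecycle. A small sketch (the image and container name are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code># Send container output to the host's journal; it survives `docker rm`
docker run -d --log-driver=journald --name my-service your-image
# Read the logs back, even after the container has been removed
journalctl CONTAINER_NAME=my-service
</code></pre>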
|
<p>we successfully configured the <a href="https://learn.microsoft.com/en-us/azure/key-vault/general/key-vault-integrate-kubernetes" rel="nofollow noreferrer">CSI Driver for Azure Kubernetes Services</a>. This allows us to pull in secrets from Azure Key Vault as "files" in our Pods (in our AKS Kubernetes cluster). The <strong>name</strong> of the file is the name of the secret. The <strong>contents</strong> of the file is the value of the secret. I can of course read these files to get the secrets. But I am wondering if there is some kind of IConfigurationProvider/IConfigurationSource.... already implemented that allows me to access these secrets using the standard .Net core IConfiguration object. That would allow me to do something like :</p>
<pre><code>builder.AddSecretsFromCSI(someRootPathForTheFiles);
</code></pre>
<p>in the <code>Program.cs</code> file, just to avoid having to write this myself. (I don't want to use the Key Vault provider inside my .NET Core code, to avoid that dependency in my code; files work everywhere.)</p>
| <p>It turns out .NET Core has exactly what is needed for this (although a little underdocumented, IMHO):</p>
<pre><code>config.AddKeyPerFile("/app/secrets", optional: true);
</code></pre>
<p>(see also : <a href="https://learn.microsoft.com/en-us/dotnet/core/extensions/configuration-providers#key-per-file-configuration-provider" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/dotnet/core/extensions/configuration-providers#key-per-file-configuration-provider</a>)</p>
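<p>A minimal sketch of how this could look in a .NET 6 <code>Program.cs</code>; the mount path here is an assumption and must match the <code>volumeMounts</code> path of the CSI secrets volume in your Pod spec:</p>
<pre><code>var builder = WebApplication.CreateBuilder(args);

// Each file under the mount path becomes a configuration entry:
// the file name is the key, the file content is the value.
builder.Configuration.AddKeyPerFile(directoryPath: "/mnt/secrets-store", optional: true);

var app = builder.Build();
</code></pre>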
|
<p>In my project I have to create a kubernetes cluster on my GCP with an External Load Balancer service for my django app. I create it with this <code>yaml</code> file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mydjango
  namespace: test1
  labels:
    app: mydjango
spec:
  ports:
    - name: http
      port: 8000
      targetPort: 8000
  selector:
    app: mydjango
  type: LoadBalancer
</code></pre>
<p>I apply it and everything works on my cluster, except for the fact that Kubernetes creates a load balancer using <code>http</code>.</p>
<p>How can I modify my <code>yaml</code> to create the same load balancer using <code>https</code> instead of <code>http</code>, using my Google-managed certs?</p>
<p>So many thanks in advance
Manuel</p>
| <p>I wholeheartedly agree with the answer provided by @guillaume blaquiere.</p>
<p>You should use the following guide to get an <code>HTTPS</code> connection to your Django app.</p>
<p>I would also like to add some additional information/resources to the whole question.</p>
<hr />
<p>Addressing the following statement:</p>
<blockquote>
<p>I apply it and all work done on my cluster except for the fact that kubernetes create a Load balancer using http.</p>
</blockquote>
<p><strong>In fact you are creating a network load balancer</strong> (layer 4), (<strong><code>TCP</code></strong>/<code>UDP</code>):</p>
<blockquote>
<p>When you create a Service of type LoadBalancer, a Google Cloud controller wakes up and configures a network load balancer in your project. The load balancer has a stable IP address that is accessible from outside of your project.</p>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service#services_of_type_loadbalancer" rel="nofollow noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Service: Service of type LoadBalancer</a></em></p>
</blockquote>
<p>This type of a load balancer will forward the packets to its destination but it won't be able to accomplish things like path based routing or SSL termination.</p>
<p>To have the ability to connect to your Django app with HTTPS you can:</p>
<ul>
<li>Use <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE Ingress for HTTP(S) Load Balancing</a> (as pointed by guillaume blaquiere)</li>
</ul>
<p>In the whole process you will be using an <code>Ingress</code> resource to forward the traffic to the specific backend. Your <code>Ingress</code> controller will also be responsible for handling <code>SSL</code>.</p>
<blockquote>
<p>A side note!</p>
<p>I'd reckon you could change the <code>Service</code> of type <code>LoadBalancer</code> to a <code>Service</code> of type <code>NodePort</code>.</p>
</blockquote>
<p>Your final <code>Ingress</code> definition will look similar to the one below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: INGRESS_NAME
  namespace: test1
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ADDRESS_NAME
    networking.gke.io/managed-certificates: CERTIFICATE_NAME
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: mydjango
      port:
        number: 8000
</code></pre>
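<p>The <code>CERTIFICATE_NAME</code> referenced in the annotation points to a <code>ManagedCertificate</code> resource. A minimal sketch (the domain is a placeholder):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: CERTIFICATE_NAME
  namespace: test1
spec:
  domains:
    - your-domain.example.com
</code></pre>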
<p><strong>Alternatively</strong> you can:</p>
<ul>
<li>Use a different <code>Ingress</code> controller like <code>nginx-ingress</code> and add a certificate to handle <code>HTTPS</code> (this will not use a Google-managed certificate), for example by following:
<ul>
<li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="nofollow noreferrer">Kubernetes.github.io: Ingress nginx: User guide: TLS</a></li>
<li><a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">Cert-manager.io: Docs: Installation: Kubernetes</a></li>
</ul>
</li>
</ul>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Services networking: Ingress</a></em></li>
</ul>
<p>I'd reckon you could also take a look at this answer (on how the communication happens with <code>nginx-ingress</code>):</p>
<ul>
<li><em><a href="https://stackoverflow.com/questions/64647258/how-nginx-ingress-controller-back-end-protocol-annotation-works-in-path-based-ro/64662822#64662822">Stackoverflow.com: Questions: How NGINX Ingress controller back-end protocol annotation works in path based routing?</a></em></li>
</ul>
|
<p>I heard that a StatefulSet is suitable for databases.
But a StatefulSet will create a different PVC for each pod.
If I set replicas=3, then I get 3 pods and 3 different PVCs with different data.
Database users only want one database, not 3 databases.
So it seems clear we should not use a StatefulSet in this situation.
But when should we use a StatefulSet?</p>
| <p>A StatefulSet does three big things differently from a Deployment:</p>
<ol>
<li>It creates a new PersistentVolumeClaim for each replica;</li>
<li>It gives the pods sequential names, starting with <code>statefulsetname-0</code>; and</li>
<li>It starts the pods in a specific order (ascending numerically).</li>
</ol>
<p>This is useful when the database itself knows how to replicate data between different copies of itself. In Elasticsearch, for example, indexes are broken up into shards. There are by default two copies of each shard. If you have five Pods running Elasticsearch, each one will have a different fraction of the data, but internally the database system knows how to route a request to the specific server that has the datum in question.</p>
<p>I'd recommend using a StatefulSet in preference to manually creating a PersistentVolumeClaim. For database workloads that can't be replicated, you can't set <code>replicas:</code> greater than 1 in either case, but the PVC management is valuable. You usually can't have multiple databases pointing at the same physical storage, containers or otherwise, and most types of Volumes can't be shared across Pods.</p>
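<p>For illustration, a minimal StatefulSet sketch showing the per-replica PVC creation via <code>volumeClaimTemplates</code> (the image, names, and storage size are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:13
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  # One PVC per replica is created from this template (data-db-0, data-db-1, ...)
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
</code></pre>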
|
<pre><code><hz:properties>
  .
  .
  <hz:property name="service-label-name">
    ${com.app.hazelcast.service.label.name:app}
  </hz:property>
  <hz:property name="service-label-value">
    ${com.app.hazelcast.service.label.value:app}
  </hz:property>
</hz:properties>
<hz:join>
  <hz:multicast enabled="false" />
  <hz:tcp-ip enabled="false">
    <hz:members>${com.test.app.hazelcast.network.members:127.0.0.1}</hz:members>
  </hz:tcp-ip>
  <hz:kubernetes enabled="true" />
</hz:join>
</code></pre>
<p>When I run the pod I get the error below:</p>
<pre><code>28-Jun-2021 07:37:43.404 WARNING [hz.distributedCacheInstance.IO.thread-in-0] com.hazelcast.internal.server.tcp.TcpServerConnection.null [10.84.0.139]:5701 [cxth-cache-runtime] [4.2] Connection[id=51, /10.84.0.139:54029->/10.84.0.108:8080, qualifier=null, endpoint=[10.84.0.108]:8080, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=51, /10.84.0.139:54029->/10.84.0.108:8080, qualifier=null, endpoint=[10.84.0.108]:8080, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.distributedCacheInstance.IO.thread-in-0
java.lang.IllegalStateException: Unknown protocol: HTT
at com.hazelcast.internal.server.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:117)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294)
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
28-Jun-2021 07:37:43.804 WARNING [hz.distributedCacheInstance.IO.thread-in-1] com.hazelcast.internal.server.tcp.TcpServerConnection.null [10.84.0.139]:5701 [cxth-cache-runtime] [4.2] Connection[id=54, /10.84.0.139:35779->/10.84.0.108:8080, qualifier=null, endpoint=[10.84.0.108]:8080, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=54, /10.84.0.139:35779->/10.84.0.108:8080, qualifier=null, endpoint=[10.84.0.108]:8080, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.distributedCacheInstance.IO.thread-in-1
java.lang.IllegalStateException: Unknown protocol: HTT
at com.hazelcast.internal.server.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:117)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294)
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
28-Jun-2021 07:37:43.804 WARNING [hz.distributedCacheInstance.IO.thread-in-2] com.hazelcast.internal.server.tcp.TcpServerConnection.null [10.84.0.139]:5701 [cxth-cache-runtime] [4.2] Connection[id=53, /10.84.0.139:54853->/10.84.0.109:8080, qualifier=null, endpoint=[10.84.0.109]:8080, alive=false, connectionType=NONE, planeIndex=-1] closed. Reason: Exception in Connection[id=53, /10.84.0.139:54853->/10.84.0.109:8080, qualifier=null, endpoint=[10.84.0.109]:8080, alive=true, connectionType=NONE, planeIndex=-1], thread=hz.distributedCacheInstance.IO.thread-in-2
java.lang.IllegalStateException: Unknown protocol: HTT
at com.hazelcast.internal.server.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:117)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:383)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:368)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:294)
at com.hazelcast.internal.networking.nio.NioThread.executeRun(NioThread.java:249)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:102)
</code></pre>
<p>Why is my pod trying to communicate on port 8080, and how can I resolve this issue?
I have added the <strong>service-label-name</strong> and <strong>service-label-value</strong> properties, but still without success.</p>
<p>Is there any problem with my configuration?</p>
| <p>Please set <code>service-port</code> to <code>5701</code> in order to avoid Hazelcast communicating with <code>8080</code>.</p>
<p>Read more at <a href="https://github.com/hazelcast/hazelcast-kubernetes#hazelcast-configuration" rel="nofollow noreferrer">Hazelcast Kubernetes: Hazelcast Configuration</a>.</p>
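<p>A sketch of how that could look in the same property style as your existing configuration (this mirrors your current config; verify the exact property name against the Hazelcast/plugin version you are running):</p>
<pre><code><hz:property name="service-port">5701</hz:property>
</code></pre>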
|
<p>How to do a canary upgrade to existing istio customised setup.</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>We have an existing customised setup of Istio 1.7.3 (installed using the istioctl method, with no revision set) on AKS 1.18.14.</li>
<li>Now we need to upgrade to Istio 1.8 with no (or minimal) downtime.</li>
<li>The upgrade should be safe and must not break our prod environment in any way.</li>
</ul>
<p>How we installed the current istio customised environment:</p>
<ol>
<li>created manifest.</li>
</ol>
<pre class="lang-text prettyprint-override"><code>istioctl manifest generate --set profile=default -f /manifests/overlay/overlay.yaml > $HOME/generated-manifest.yaml
</code></pre>
<ol start="2">
<li>installed istio.</li>
</ol>
<pre class="lang-text prettyprint-override"><code>istioctl install --set profile=default -f /manifests/overlay/overlay.yaml
</code></pre>
<ol start="3">
<li>Verified istio against the deployed manifest.</li>
</ol>
<pre class="lang-text prettyprint-override"><code>istioctl verify-install -f $HOME/generated-manifest.yaml
</code></pre>
<h3>Planned upgrade process <sup><a href="https://istio.io/latest/docs/setup/upgrade/canary/" rel="nofollow noreferrer">Reference</a></sup></h3>
<ol>
<li>Precheck for upgrade.</li>
</ol>
<pre class="lang-text prettyprint-override"><code>istioctl x precheck
</code></pre>
<ol start="2">
<li>export the current used configuration of istio using below command to a yaml file.</li>
</ol>
<pre class="lang-text prettyprint-override"><code>kubectl -n istio-system get iop installed-state-install -o yaml > /tmp/iop.yaml
</code></pre>
<ol start="3">
<li>Download istio 1.8 binary and extract the directory and navigate the directory to where we have the 1.8 version istioctl binary.</li>
</ol>
<pre class="lang-text prettyprint-override"><code>cd istio1.8\istioctl1.8
</code></pre>
<ol start="4">
<li>from the new version istio directory, create a new controlplane for istio(1.8) with proper revision set and use the previously exported installed-state "iop.yaml".</li>
</ol>
<pre class="lang-text prettyprint-override"><code>./istioctl1.8 install --set revision=1-8 --set profile=default -f /tmp/iop.yaml
</code></pre>
<p>Expect that it will create a new control plane with the existing
customised configuration, and now we will have two control plane
deployments and services running side-by-side:</p>
<pre class="lang-text prettyprint-override"><code>kubectl get pods -n istio-system -l app=istiod
NAME READY STATUS RESTARTS AGE
istiod-786779888b-p9s5n 1/1 Running 0 114m
istiod-1-7-6956db645c-vwhsk 1/1 Running 0 1m
</code></pre>
<ol start="5">
<li>After this, we need to change the existing label of all our cluster namespaces where we need to inject the istio proxy containers. Need to remove the old "istio-injection" label, and add the istio.io/rev label to point to the canary revision 1-8.</li>
</ol>
<pre class="lang-text prettyprint-override"><code>kubectl label namespace test-ns istio-injection- istio.io/rev=1-8
</code></pre>
<p>Hopefully, at this point the environment is still stable with the old Istio configuration, and we can decide which app pods to restart to pick up the new control plane according to our downtime window. At this point it is also fine to run some apps with the older control plane and others with the new control plane config, e.g.:</p>
<pre class="lang-text prettyprint-override"><code>kubectl rollout restart deployment -n test-ns (first)
kubectl rollout restart deployment -n test-ns2 (later)
kubectl rollout restart deployment -n test-ns3 (again after sometieme later)
</code></pre>
<ol start="6">
<li>Once we have planned the downtime and restarted the deployments as decided, confirm that all the pods are now using the version 1.8 data plane proxy only:</li>
</ol>
<pre class="lang-text prettyprint-override"><code>kubectl get pods -n test-ns -l istio.io/rev=1-8
</code></pre>
<ol start="7">
<li>To verify that the new pods in the test-ns namespace are using the istiod-canary service corresponding to the canary revision</li>
</ol>
<pre class="lang-text prettyprint-override"><code>istioctl proxy-status | grep ${pod_name} | awk '{print $7}'
</code></pre>
<ol start="8">
<li>After upgrading both the control plane and data plane, can uninstall the old control plane</li>
</ol>
<pre class="lang-text prettyprint-override"><code>istioctl x uninstall -f /tmp/iop.yaml.
</code></pre>
<h3>Need to clear below points before upgrade.</h3>
<ol>
<li><p>Are all the upgrade steps prepared above good to proceed with for a highly used prod environment?</p>
</li>
<li><p>Is exporting the installed-state IOP enough to carry over all customisations for the canary upgrade, or is there any chance of breaking the upgrade or missing any settings?</p>
</li>
<li><p>Will step 4 above create the 1.8 Istio control plane with all the customizations we already have, without breaking or missing anything?</p>
</li>
<li><p>After step 4, do we need any extra configuration related to the istiod service? The referenced document is not clear about that.</p>
</li>
<li><p>For step 5 above, how can we identify all the namespaces where istio-injection is enabled, so that we modify only those namespaces and leave the others as they were before?</p>
</li>
<li><p>For step 8 above, how do we ensure we are uninstalling only the old control plane? Do we have to get the binary for the old control plane (1.7 in my case) and use that binary with the same exported <code>/tmp/iop.yaml</code>?</p>
</li>
<li><p>We have no idea how to roll back if any issues happen in between, before or after the old control plane is deleted.</p>
</li>
</ol>
| <ol>
<li><p>No. You should go through the <a href="https://istio.io/latest/news/releases/1.8.x/announcing-1.8/change-notes/" rel="nofollow noreferrer">changelog</a> and <a href="https://istio.io/latest/news/releases/1.8.x/announcing-1.8/upgrade-notes/" rel="nofollow noreferrer">upgrade notes</a>. See what's new, what's changed, what's deprecated, etc. Adjust your configs accordingly.</p>
</li>
<li><p>In theory - yes, in practice - no. See above. That's why you should always check the upgrade notes/changelog and plan accordingly. There is always a slim chance something will go wrong.</p>
</li>
</li>
<li><p>It should, but again, be prepared that something may break (One more time - go through changelog/upgrade notes, this is important).</p>
</li>
<li><p>No.</p>
</li>
<li><p>You can find all namespaces with Istio injection enabled with:</p>
</li>
</ol>
<pre class="lang-text prettyprint-override"><code>kubectl get namespaces -l=istio-injection=enabled
</code></pre>
<p>Istio upgrade process should only modify namespaces with injection enabled (and <code>istio-system</code> namespace).</p>
<ol start="6">
<li>If your old control plane does not have a <code>revision</code> label, you have to uninstall it using its original installation options (old yaml file)</li>
</ol>
<pre class="lang-text prettyprint-override"><code>istioctl x uninstall -f /path/to/old/config.yaml
</code></pre>
<p>If it does have <code>revision</code> label:</p>
<pre class="lang-text prettyprint-override"><code>istioctl x uninstall --revision <revision>
</code></pre>
<ol start="7">
<li>You can just uninstall new control plane with</li>
</ol>
<pre class="lang-text prettyprint-override"><code>istioctl x uninstall revision=1-8
</code></pre>
<p>This will revert to the old control plane, assuming you have not yet uninstalled it. However, you will have to reinstall gateways for the old version manually, as the uninstall command does not revert them automatically.</p>
<hr />
<p>I would strongly recommend creating a temporary test environment, recreating the existing cluster there, performing the upgrade on it, and adjusting the process to meet your needs.<br />
This way you will avoid catastrophic failures on your production environment.</p>
|
<p>Is it possible to use pipe output as input for grep or git grep? The data I'm trying to pass to grep/git grep is the following:</p>
<pre><code> kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/"
argocd
default
kube-node-lease
kube-public
kube-system
nsx-system
pks-system
</code></pre>
<p>I've tried to extent the command but this results in an error:</p>
<pre><code> kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/" | xargs git grep -i
fatal: ambiguous argument 'default': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
</code></pre>
<p>Using just grep results in:</p>
<pre><code> kubectl get namespace -o name -l app.kubernetes.io/instance!=applications | cut -f2 -d "/" | xargs grep -i
grep: default: No such file or directory
grep: kube-node-lease: No such file or directory
grep: kube-public: No such file or directory
grep: kube-system: No such file or directory
grep: nsx-system: No such file or directory
grep: pks-system: No such file or directory
</code></pre>
<p>The issue I'm facing with grep in general in this particular case is that even if I solely use grep within my directory, it takes ages until it's done, whereas git grep finishes within seconds. Unless I'm doing something terribly wrong that would explain the slow grep results, getting git grep to work would be preferred.</p>
<p>I've found this other Stackoverflow <a href="https://stackoverflow.com/questions/9754236/git-grep-gives-unknown-revision-or-path-not-in-the-working-tree">Question</a> that somewhat explains what the issue is, but I don't know how to "process" the output into git grep properly.</p>
| <p>The problem is that (as your screenshot shows) the result is multiple terms which I'm guessing you want to be <em>OR</em>-ed together, and not searching for the first term in the files identified by the last terms (which is what the current xargs command does)</p>
<p>Since OR in regex is via the <code>|</code> character, you can use <code>xargs echo</code> to fold the vertical list into a space delimited horizontal list then replace the spaces with <code>|</code> and be pretty close to what you want</p>
<pre class="lang-sh prettyprint-override"><code>printf 'alpha\nbeta\ncharlie\n' | xargs echo | tr ' ' '|' | xargs git grep -i
</code></pre>
<p>although due to the folding operation, that command is an xargs of one line, and thus would be conceptually easier to reason about using just normal <code>$()</code> interpolation:</p>
<pre class="lang-sh prettyprint-override"><code>git grep -i $(printf 'alpha\nbeta\ncharlie\n' | xargs echo | tr ' ' '|')
</code></pre>
<p>The less "whaaa" shell pipeline would be to use <code>kubectl get -o go-template=</code> to actually emit a pipe-delimited list and feed that right into xargs (or <code>$()</code>), bypassing the need to massage the output text first</p>
|
<p>I am building an application which should execute tasks in separate containers/pods.
This application runs in a specific namespace, and the new pods must be created in the same namespace as well.</p>
<p>I understand we can do something similar via a custom CRD and Operators, but I found that overly complicated, and it requires Golang knowledge as well.</p>
<p>Is there any way this could be achieved without having to learn Operators and Golang?</p>
<p>I am OK with using <code>kubectl</code> or the <code>api</code> within my container, connecting to the cluster from there, and staying within the same namespace.</p>
| <p>Yes, this is certainly possible using a <code>ServiceAccount</code> and then connecting to the API from within the Pod.</p>
<ul>
<li><p>First, create a <code>ServiceAccount</code> in your namespace using</p>
<pre><code>kubectl create serviceaccount my-service-account
</code></pre>
</li>
<li><p>For your newly created <code>ServiceAccount</code>, give it the permissions you want using <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer"><code>Roles</code> and <code>RoleBindings</code></a>. The subject would be something like this (a fuller sketch follows this list):</p>
<pre><code>subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: my-namespace
</code></pre>
</li>
<li><p>Then, add the <code>ServiceAccount</code> to the Pod from where you want to create other Pods from (see <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">documentation</a>). Credentials are automatically mounted inside the Pod using <code>automountServiceAccountToken</code>.</p>
</li>
<li><p>Now from inside the Pod you can either use <code>kubectl</code> or call the API using the credentials inside the Pod. There are libraries for a lot of programming languages to talk to Kubernetes, use those.</p>
</li>
</ul>
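<p>Putting the pieces from the second bullet together, a minimal Role/RoleBinding sketch; the names, verbs, and resources below are placeholders to adapt to what your application actually needs:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator-binding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-creator
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: my-namespace
</code></pre>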
|
<p>I understand kubectl gets the kubeconfig file in the order</p>
<ol>
<li>command line option <code>--kubeconfig</code></li>
<li>environment variable <code>KUBECONFIG=</code></li>
<li>default path <code>~/.kube/config</code></li>
</ol>
<p>But is there a way to get from kubectl the path/details of the kubeconfig file currently being used?</p>
<p>Something like <code>kubectl config path</code></p>
| <p><strong>Question: But is there a way to get from kubectl the path/details of the kubeconfig file currently being used?</strong></p>
<p>Yes, you can run any <code>kubectl</code> command with <code>verbose</code> level 6+ to see the <code>kubeconfig</code> in use.</p>
<pre><code>kubectl get pod -v6
I0629 04:48:25.050954 14444 loader.go:379] Config loaded from file: /home/ps/.kube/config
I0629 04:48:25.146072 14444 round_trippers.go:445] GET https://kubemaster:6443/api/v1/namespaces/default/pods?limit=500 200 OK in 10 milliseconds
No resources found in default namespace.
</code></pre>
<p>Few examples demonstrating the same:</p>
<pre><code>kubectl get pod -v6 2>&1 |awk '/Config loaded from file:/{print $NF}'
/home/ps/.kube/config
</code></pre>
<p>Changed the <code>kubeconfig</code> to <code>/tmp/config</code>:</p>
<pre><code>export KUBECONFIG=/tmp/config
kubectl get pod -v6 2>&1 |awk '/Config loaded from file:/{print $NF}'
/tmp/config
</code></pre>
<p>Remove the <code>awk</code> command to see the whole output.</p>
<p>Windows output:</p>
<p><a href="https://i.stack.imgur.com/5GLxT.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/5GLxT.jpg" alt="enter image description here" /></a></p>
|
<p>From the <a href="https://github.com/bazelbuild/rules_docker/pull/310/files" rel="nofollow noreferrer">PR</a> that implemented empty_dirs, it seems there's support for defining dir owners (with the names argument) and mode into the add_empty_dir method of TarFile class.</p>
<p>But the container_image rule (and container_layer) supports only mode.</p>
<p>This works:</p>
<pre><code>container_image(
name = "with_empty_dirs",
empty_dirs = [
"etc",
"foo",
"bar",
],
mode = "0o777",
)
</code></pre>
<p>But this returns an error: "ERROR: (...) no such attribute 'names' in 'container_image_' rule":</p>
<pre><code>container_image(
name = "with_empty_dirs",
empty_dirs = [
"etc",
"foo",
"bar",
],
names = "nginx",
)
</code></pre>
<p>Do we need to “<a href="https://github.com/bazelbuild/rules_docker/blob/master/docs/container.md#imageimplementation" rel="nofollow noreferrer">write a customized container_image</a>” if we want to add support for owner of empty_dirs?</p>
| <p>In a BUILD file, the attribute you're looking for is <code>ownername</code>. See the <a href="https://github.com/bazelbuild/rules_pkg/blob/main/pkg/docs/reference.md#pkg_tar" rel="nofollow noreferrer">pkg_tar reference documentation</a> for more details. Also, I don't think you can pass it directly to <code>container_image</code>; you have to create a separate <code>pkg_tar</code> first. Like this:</p>
<pre class="lang-py prettyprint-override"><code>pkg_tar(
name = "with_empty_dirs_tar",
empty_dirs = [
"etc",
"foo",
"bar",
],
ownername = "nginx.nginx",
)
container_image(
name = "with_empty_dirs",
tars = [":with_empty_dirs_tar"],
)
</code></pre>
<p>In general, <code>container_image</code> has a subset of <code>pkg_tar</code> as direct attributes to make simple forms of adding files, but for complex use cases you should create a <code>pkg_tar</code> yourself for full access to all of its features for adding/arranging files and setting their ownership.</p>
<p>The <code>names</code> you see in that PR is a variable in a Python helper tool which the BUILD file rules use as part of the implementation. There's a layer of abstraction between what you write in a BUILD file and that Python code.</p>
|
<p>I have a helm chart and I want to add it to my gitlab repository. But when I run:</p>
<pre><code>helm repo add repo_name url
</code></pre>
<p>I am getting the following error:</p>
<pre><code>Error: looks like "https://gitlab.<domain>.com/group/infra/repo/helm/charts/" is not a valid chart repository or cannot be reached: error converting YAML to JSON: yaml: line 3: mapping values are not allowed in this context
</code></pre>
<p>Linter shows it is a valid chart.</p>
<p>Here is <code>index.yaml</code>:</p>
<pre><code>apiVersion: v1
entries:
  helloworld:
    - apiVersion: v2
      appVersion: 1.0.0
      created: "2021-06-28T14:05:53.974207+01:00"
      description: This Helm chart will be used to create hello world
      digest: f290432f0280fe3f66b126c28a0bb21263d64fd8f73a16808ac2070b874619e7
      name: helloworld
      type: application
      urls:
        - https://gitlab.<domain>.com/group/infra/repo/helm/charts/helloworld-0.1.0.tgz
      version: 0.1.0
generated: "2021-06-28T14:05:53.973549+01:00"
</code></pre>
<p>Not sure what is missing here.</p>
| <p>It looks like you want to use a Helm chart that is hosted on GitLab. Unfortunately, it won't work the way you want it to. As <a href="https://stackoverflow.com/users/1518100/lei-yang">Lei Yang</a> rightly mentioned in the comment:</p>
<blockquote>
<p><code>helm</code> repo and <code>git</code> repo are different things.</p>
</blockquote>
<p>In the official documentation of Helm, you can find <a href="https://helm.sh/docs/topics/chart_repository/#create-a-chart-repository" rel="nofollow noreferrer">The Chart Repository Guide</a>.
There you can also find a guide on <a href="https://helm.sh/docs/topics/chart_repository/#create-a-chart-repository" rel="nofollow noreferrer">how to create a chart repository</a>:</p>
<blockquote>
<p>A <em>chart repository</em> is an HTTP server that houses an <code>index.yaml</code> file and optionally some packaged charts. When you're ready to share your charts, the preferred way to do so is by uploading them to a chart repository.</p>
</blockquote>
<p>Here you can find a section on how to properly <a href="https://helm.sh/docs/topics/chart_repository/#hosting-chart-repositories" rel="nofollow noreferrer">host chart repos</a>. There are several ways to do this - for example you can use a Google Cloud Storage (GCS) bucket, an Amazon S3 bucket, GitHub Pages, or even create your own web server.</p>
<p>You can also use the <a href="https://chartmuseum.com/docs/#using-with-local-filesystem-storage" rel="nofollow noreferrer">ChartMuseum</a> server to host a chart repository from a local file system.</p>
<blockquote>
<p>ChartMuseum is an open-source Helm Chart Repository server written in Go (Golang), with support for cloud storage backends, including <a href="https://cloud.google.com/storage/" rel="nofollow noreferrer">Google Cloud Storage</a>, <a href="https://aws.amazon.com/s3/" rel="nofollow noreferrer">Amazon S3</a>, <a href="https://azure.microsoft.com/en-us/services/storage/blobs/" rel="nofollow noreferrer">Microsoft Azure Blob Storage</a>, <a href="https://www.alibabacloud.com/product/oss" rel="nofollow noreferrer">Alibaba Cloud OSS Storage</a>, <a href="https://developer.openstack.org/api-ref/object-store/" rel="nofollow noreferrer">Openstack Object Storage</a>, <a href="https://cloud.oracle.com/storage" rel="nofollow noreferrer">Oracle Cloud Infrastructure Object Storage</a>, <a href="https://cloud.baidu.com/product/bos.html" rel="nofollow noreferrer">Baidu Cloud BOS Storage</a>, <a href="https://intl.cloud.tencent.com/product/cos" rel="nofollow noreferrer">Tencent Cloud Object Storage</a>, <a href="https://www.digitalocean.com/products/spaces/" rel="nofollow noreferrer">DigitalOcean Spaces</a>, <a href="https://min.io/" rel="nofollow noreferrer">Minio</a>, and <a href="https://etcd.io/" rel="nofollow noreferrer">etcd</a>.</p>
</blockquote>
<p>Alternatively it could be also possible to <a href="https://jfrog.com/blog/host-your-helm-chart-in-chartcenter-directly-from-source/" rel="nofollow noreferrer">host helm charts in JFrog</a>.</p>
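<p>For the static-hosting options above, a small shell sketch (the chart path, output directory, and URL below are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code># Package the chart and (re)generate index.yaml for a plain HTTP host
helm package ./helloworld -d ./public
helm repo index ./public --url https://example.com/charts
# Upload the contents of ./public (the .tgz files plus index.yaml)
# to your web server, object storage bucket, or GitHub Pages
</code></pre>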
|
<p>I have a PostgreSQL cluster with three nodes with Patroni. The cluster manages a very high workload and for this reason, it runs in production on bare metal machines.
We need to migrate this infrastructure to Kubernetes (for several reasons) and I am doing some performance tests executed with PgBench. First I compared Baremetal vs Virtual Machine and I got very small degradation. Then I compared VSI vs Kubernetes to understand the overhead added by K8s.</p>
<p>Now I am trying to fine-tune CPU and memory. K8s runs on Worker nodes with 48 vCPU and 192 Gb. However, once PostgreSQL was deployed I still see:</p>
<pre><code>NAME CPU(cores) MEMORY(bytes)
postgresql-deployment-5c98f5c949-q758d 2m 243Mi
</code></pre>
<p>even if I allocated the following to the PostgreSQL container:</p>
<pre><code>resources:
  requests:
    memory: 64Gi
  limits:
    memory: 64Gi
</code></pre>
<p>if I run:</p>
<p><code>kubectl top pod <pod name> -n <namespace></code></p>
<p>I got the following:</p>
<pre><code>NAME CPU(cores) MEMORY(bytes)
postgresql-deployment-5c98f5c949-q758d 2m 244Mi
</code></pre>
<p>the same appears from K8s dashboard even if the result of:</p>
<pre><code>kubectl describe pod <pod name> -n <namespace>
</code></pre>
<p>show that the Pod runs with a Guarantee QoS and 64Gi of RAM for requested and limit.</p>
<p><strong>How is this supposed to work?</strong></p>
<p>Another thing I don't understand is the CPU <code>limit</code> and <code>requested</code>. I expect to enter something like this:</p>
<pre><code>resources:
  requests:
    cpu: 40
    memory: 64Gi
  limits:
    cpu: 40
    memory: 64Gi
</code></pre>
<p>I expected to reserve 40 vCPU to my container but during the deployment, I see insufficient CPU on the node when I run <code>kubectl describe pod <pod name> -n <namespace></code>. The max value I can use is 1.</p>
<p><strong>How is this supposed to work?</strong></p>
<p>Obviously, I read the documentation and searched for different examples, but when I put things in practice I see test results different from the theory. I know I am missing something.</p>
| <p>This is a great question, and it also took me some time earlier this year to figure it out by experience.</p>
<p>It is important to understand that <strong>request have no actual effect on the resource usage of containers</strong>. You can check by connecting to your Server and running <code>htop</code> or <code>kubectl top</code> like you did, and you see that even though you defined <code>requests: memory: 64Gi</code> only 244Mi are used.</p>
<p>The main purpose of requests is to influence scheduling behavior. When the Kubernetes Scheduler looks for a fitting Node to place a new Pod on it, it checks for the currently requested CPU and Memory of the Nodes. You can check current status of nodes yourself by running the following command.</p>
<pre><code>$ kubectl describe node worker01
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 200m (10%) 1100m (55%)
memory 506Mi (13%) 2098Mi (54%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
</code></pre>
<p>If the requests (of either CPU or memory) would exceed 100%, then the Pod can't be scheduled and goes into a Pending state.</p>
<p>Setting the correct requests can be quite tricky: if you set them too high, you won't use the resources of your node efficiently, as you can't schedule that many pods; if you set them too low, you are in danger of having applications constantly crash or get throttled during performance peaks.</p>
<p>The main purpose of limits is to control the max Resource usage of Pods.</p>
<blockquote>
<p>Because CPU can be compressed, Kubernetes will make sure your
containers get the CPU they requested and will throttle the rest.
Memory cannot be compressed, so Kubernetes needs to start making
decisions on what containers to terminate if the Node runs out of
memory[1]</p>
</blockquote>
<p>So if a Container exceeds its Limit, it gets terminated or throttled. This led to the best practice in my company to not put limits on databases in our Cluster.</p>
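<p>Following that practice, a sketch of how the database container's resources could be declared (the values are placeholders, not a recommendation sized for your workload):</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
  requests:
    cpu: "8"
    memory: 64Gi
  # intentionally no limits block, per the practice described above
</code></pre>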
<p>The referenced blog posts helped me to get some good insights:<br />
[1] <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits" rel="noreferrer">https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits</a><br />
[2] <a href="https://sysdig.com/blog/kubernetes-limits-requests/" rel="noreferrer">https://sysdig.com/blog/kubernetes-limits-requests/</a></p>
|
<p>My current setup involves Helm charts and Kubernetes.</p>
<p>I have a requirement where I have to replace a property in the configMap.yaml file with an environment variable declared in the deployment.yaml file.</p>
<p>Here is a section of my configMap.yaml which declares a properties file:</p>
<pre><code>data:
  rest.properties: |
    map.dirs=/data/maps
    catalog.dir=/data/catalog
    work.dir=/data/tmp
    map.file.extension={{ .Values.rest.mapFileExtension }}
    unload.time=1
    max.flow.threads=10
    max.map.threads=50
    trace.level=ERROR
    run.mode={{ .Values.runMode }}
    {{- if eq .Values.cache.redis.location "external" }}
    redis.host={{ .Values.cache.redis.host }}
    {{- else if eq .Values.cache.redis.location "internal" }}
    redis.host=localhost
    {{- end }}
    redis.port={{ .Values.cache.redis.port }}
    redis.stem={{ .Values.cache.redis.stem }}
    redis.database={{ .Values.cache.redis.database }}
    redis.logfile=redis.log
    redis.loglevel=notice
    exec.log.dir=/data/logs
    exec.log.file.count=5
    exec.log.file.size=100
    exec.log.level=all
    synchronous.timeout=300
    {{- if .Values.global.linkIntegration.enabled }}
    authentication.enabled=false
    authentication.server=https://{{ .Release.Name }}-product-design-server:443
    config.dir=/opt/runtime/config
    {{- end }}
    {{- if .Values.keycloak.enabled }}
    authentication.keycloak.enabled={{ .Values.keycloak.enabled }}
    authentication.keycloak.serverUrl={{ .Values.keycloak.serverUrl }}
    authentication.keycloak.realmId={{ .Values.keycloak.realmId }}
    authentication.keycloak.clientId={{ .Values.keycloak.clientId }}
    authentication.keycloak.clientSecret=${HIP_KEYCLOAK_CLIENT_SECRET}
    {{- end }}
</code></pre>
<p>I need to replace ${HIP_KEYCLOAK_CLIENT_SECRET}, which is defined in the deployment.yaml file as shown below:</p>
<pre><code>containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    env:
      {{- if .Values.keycloak.enabled }}
      - name: HIP_KEYCLOAK_CLIENT_SECRET
        valueFrom:
          secretKeyRef:
            name: {{ .Values.keycloak.secret }}
            key: clientSecret
      {{ end }}
</code></pre>
<p>The idea is to have the properties file in the deployed pod under /opt/runtime/rest.properties.</p>
<p>Here is my complete deployment.yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "lnk-service.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "lnk-service.name" . }}
    helm.sh/chart: {{ include "lnk-service.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "lnk-service.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "lnk-service.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      {{- if .Values.global.hImagePullSecret }}
      imagePullSecrets:
        - name: {{ .Values.global.hImagePullSecret }}
      {{- end }}
      securityContext:
        runAsUser: 998
        runAsGroup: 997
        fsGroup: 997
      volumes:
        - name: configuration
          configMap:
            name: {{ include "lnk-service.fullname" . }}-server-config
        - name: core-configuration
          configMap:
            name: {{ include "lnk-service.fullname" . }}-server-core-config
        - name: hch-configuration
          configMap:
            name: {{ include "lnk-service.fullname" . }}-hch-config
        - name: data
          {{- if .Values.persistence.enabled }}
          persistentVolumeClaim:
            {{- if .Values.global.linkIntegration.enabled }}
            claimName: lnk-shared-px
            {{- else }}
            claimName: {{ include "pvc.name" . }}
            {{- end }}
          {{- else }}
          emptyDir: {}
          {{- end }}
        - name: hch-data
          {{- if .Values.global.linkIntegration.enabled }}
          persistentVolumeClaim:
            claimName: {{ include "unicapvc.fullname" . }}
          {{- else }}
          emptyDir: {}
          {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
          #command: ['/bin/sh']
          #args: ['-c', 'echo $HIP_KEYCLOAK_CLIENT_SECRET']
          #command: [ "/bin/sh", "-c", "export" ]
          #command: [ "/bin/sh", "-ce", "export" ]
          command: [ "/bin/sh", "-c", "export --;trap : TERM INT; sleep infinity & wait" ]
          #command: ['sh', '-c', 'sed -i "s/REPLACEME/$HIP_KEYCLOAK_CLIENT_SECRET/g" /opt/runtime/rest.properties']
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: "HIP_CLOUD_LICENSE_SERVER_URL"
              value: {{ include "license.url" . | quote }}
            - name: "HIP_CLOUD_LICENSE_SERVER_ID"
              value: {{ include "license.id" . | quote }}
            {{- if .Values.keycloak.enabled }}
            - name: HIP_KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.keycloak.secret }}
                  key: clientSecret
            {{ end }}
          envFrom:
            - configMapRef:
                name: {{ include "lnk-service.fullname" . }}-server-env
            {{- if .Values.rest.extraEnvConfigMap }}
            - configMapRef:
                name: {{ .Values.rest.extraEnvConfigMap }}
            {{- end }}
            {{- if .Values.rest.extraEnvSecret }}
            - secretRef:
                name: {{ .Values.rest.extraEnvSecret }}
            {{- end }}
          ports:
            - name: http
              containerPort: {{ .Values.image.port }}
              protocol: TCP
            - name: https
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: configuration
              mountPath: /opt/runtime/rest.properties
              subPath: rest.properties
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
</code></pre>
<p>I have tried init containers and replacing the string in rest.properties, which works; however, it involves creating volumes with emptyDir.</p>
<p>Can someone kindly tell me if there is a simpler way to do this?</p>
| <p>confd can give you a solution here: you can point it at the file rendered from the ConfigMap and have it replace the environment variable placeholders the file expects with the env values that have been set.</p>
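<p>A rough sketch of what that could look like with confd's <code>env</code> backend (the file names and key layout here are assumptions, not from your chart; with the env backend the variable <code>HIP_KEYCLOAK_CLIENT_SECRET</code> maps to the key <code>/hip/keycloak/client/secret</code>):</p>
<pre class="lang-sh prettyprint-override"><code># /etc/confd/conf.d/rest-properties.toml would contain something like:
#   [template]
#   src  = "rest.properties.tmpl"
#   dest = "/opt/runtime/rest.properties"
#   keys = ["/hip/keycloak/client/secret"]
#
# and in rest.properties.tmpl the placeholder line becomes:
#   authentication.keycloak.clientSecret={{getv "/hip/keycloak/client/secret"}}

# Render the file once at container start, e.g. from your entrypoint:
confd -onetime -backend env
</code></pre>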
|
<p>I have two <code>kind: Deployment</code> objects in my <code>yaml</code> file.
The main one:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts-management-service
  labels:
    app: accounts-management-service
spec:
  replicas: $($env:WEB_REPLICAS)
  selector:
    matchLabels:
      app: accounts-management-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 20%
      maxUnavailable: 10%
  progressDeadlineSeconds: 3600
  template:
    metadata:
      labels:
        app: accounts-management-service
    spec:
      containers:
        - image: registry$(GetHash).azurecr.io/$(GetContext 'ApplicationContainerName')
          name: accounts-management-service
          command: ["npm"]
          args: ["run", "start:production:web"]
          resources:
            requests:
              memory: "500Mi"
              cpu: "1000m"
            limits:
              memory: "4096Mi"
              cpu: "1001m"
          env:
            - name: CONFIG_DEPLOYMENT_UNIT
              value: $(GetContext 'DeploymentUnit')
            - name: NODE_ENV
              value: $(GetContext 'DeploymentUnit')
            - name: TENANT
              value: "$(GetContext 'DeploymentUnit' | Format -NoHyphens)$(GetContext 'Cluster')"
            - name: ROLE
              value: $(GetContext 'Cluster')
          ports:
            - containerPort: 1337
              protocol: TCP
          volumeMounts:
            - name: secret-agent
              mountPath: /var/run/secret-agent
          readinessProbe:
            httpGet:
              path: /v0.1/status
              port: 1337
            successThreshold: 2
            failureThreshold: 3
            initialDelaySeconds: 60
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /v0.1/status_without_db
              port: 1337
            failureThreshold: 3
            initialDelaySeconds: 60
            periodSeconds: 30
      volumes:
        - name: secret-agent
          hostPath:
            path: /var/run/secret-agent
            type: DirectoryOrCreate
</code></pre>
<p>and the second one</p>
<pre><code># second
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts-management-service-second
  labels:
    app: accounts-management-service-second
spec:
  replicas: $($env:second_REPLICAS)
  selector:
    matchLabels:
      app: accounts-management-service-second
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 10%
  template:
    metadata:
      labels:
        app: accounts-management-service-second
    spec:
      containers:
        - image: registry$(GetHash).azurecr.io/$(GetContext 'ApplicationContainerName')
          name: accounts-management-service-second
          command: ["npm"]
          args: ["run", "start:production:second"]
          resources:
            requests:
              memory: "500Mi"
              cpu: "250m"
            limits:
              memory: "8192Mi"
              cpu: "1001m"
          env:
            - name: CONFIG_DEPLOYMENT_UNIT
              value: $(GetContext 'DeploymentUnit')
            - name: NODE_ENV
              value: $(GetContext 'DeploymentUnit')
            - name: TENANT
              value: "$(GetContext 'DeploymentUnit' | Format -NoHyphens)$(GetContext 'Cluster')"
            - name: ROLE
              value: $(GetContext 'Cluster')
          ports:
            - containerPort: 1337
              protocol: TCP
          volumeMounts:
            - name: secret-agent
              mountPath: /var/run/secret-agent
          readinessProbe:
            httpGet:
              path: /status
              port: 1337
            initialDelaySeconds: 60
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /status
              port: 1337
            initialDelaySeconds: 60
            periodSeconds: 10
      volumes:
        - name: secret-agent
          hostPath:
            path: /var/run/secret-agent
            type: DirectoryOrCreate
</code></pre>
<p>They both point to the same volume path. I am new to Kubernetes, and I am trying to understand the relation between pod creation and the two <code>kind: Deployment</code> objects. It would be nice if someone could explain this. I hope this falls into the SO allowed questions category.</p>
| <p>A Deployment basically manages the replica count for whatever workload is configured.</p>
<p>In the background, a Deployment uses <code>ReplicaSets</code> (or, historically, a <code>ReplicationController</code>).</p>
<p>So if in the Deployment you have a desired replica count of 1, the Deployment will keep it at 1.</p>
<p>The Deployment continuously checks the <strong>desired</strong> replicas, and the scaled replicas if scaling is in place.</p>
<p>It's suggested to implement scaling on top of the Deployment: a scaling <strong>HPA</strong> will tell the Deployment to scale the <strong>replicas</strong> to <code>3-4</code> etc.</p>
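<p>For example, a minimal HPA sketch targeting the first Deployment from the question (the thresholds and replica counts below are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: accounts-management-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: accounts-management-service
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
</code></pre>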
<blockquote>
<p>You describe a desired state in a Deployment, and the Deployment
Controller changes the actual state to the desired state at a
controlled rate. You can define Deployments to create new ReplicaSets,
or to remove existing Deployments and adopt all their resources with
new Deployments.</p>
</blockquote>
<p>Also, in a Deployment you configure the update policy: when you update the Pods or containers, ideally they go down one by one and new ones come up.</p>
<p>So less downtime occurs during the process; this policy is managed and configurable at the Deployment level.</p>
<p>Example</p>
<pre><code>strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 20%
    maxUnavailable: 10%
</code></pre>
<p>Also, two Deployments means two different applications or microservices (workloads) are configured in the YAML, one Deployment per microservice configuration.</p>
<p>Feel free to add questions in a comment if you have any.</p>
|
<p>I'm running Jenkins on Kubernetes with a dynamic slave pod provisioning setup.</p>
<p>It used to work well for a long time, but it somehow started giving this issue yesterday.</p>
<p>This is the log that I'm getting from the slave pods</p>
<pre><code>Warning: SECRET is defined twice in command-line arguments and the environment variable
Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main createEngine
INFO: Setting up agent: default-jxrsk
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener <init>
INFO: Jenkins agent is running in headless mode.
Jun 26, 2021 10:48:16 AM hudson.remoting.Engine startEngine
INFO: Using Remoting version: 3.40
Jun 26, 2021 10:48:16 AM org.jenkinsci.remoting.engine.WorkDirManager initializeWorkDir
INFO: Using /home/jenkins/remoting as a remoting work directory
Jun 26, 2021 10:48:16 AM org.jenkinsci.remoting.engine.WorkDirManager setupLogging
INFO: Both error and output logs will be printed to /home/jenkins/remoting
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Locating server among [http://jenkins.jenkins:8080]
Jun 26, 2021 10:48:16 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
INFO: Remoting server accepts the following protocols: [JNLP4-connect, Ping]
Jun 26, 2021 10:48:16 AM org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver resolve
INFO: Remoting TCP connection tunneling is enabled. Skipping the TCP Agent Listener Port availability check
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Agent discovery successful
Agent address: jenkins-agent
Agent port: 50000
Identity: 04:b1:b3:70:fe:e4:89:1b:30:62:4d:c9:6a:53:8a:3b
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Handshaking
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Connecting to jenkins-agent:50000
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Trying protocol: JNLP4-connect
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Remote identity confirmed: 04:b1:b3:70:fe:e4:89:1b:30:62:4d:c9:6a:53:8a:3b
Jun 26, 2021 10:48:16 AM org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer onRecv
INFO: [JNLP4-connect connection to jenkins-agent/10.245.115.232:50000] Local headers refused by remote: Unknown client name: default-jxrsk
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener status
INFO: Protocol JNLP4-connect encountered an unexpected exception
java.util.concurrent.ExecutionException: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Unknown client name: default-jxrsk
at org.jenkinsci.remoting.util.SettableFuture.get(SettableFuture.java:223)
at hudson.remoting.Engine.innerRun(Engine.java:579)
at hudson.remoting.Engine.run(Engine.java:490)
Caused by: org.jenkinsci.remoting.protocol.impl.ConnectionRefusalException: Unknown client name: default-jxrsk
at org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer.newAbortCause(ConnectionHeadersFilterLayer.java:378)
at org.jenkinsci.remoting.protocol.impl.ConnectionHeadersFilterLayer.onRecvClosed(ConnectionHeadersFilterLayer.java:433)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:172)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer.access$1500(BIONetworkLayer.java:48)
at org.jenkinsci.remoting.protocol.impl.BIONetworkLayer$Reader.run(BIONetworkLayer.java:247)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:98)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.nio.channels.ClosedChannelException
... 7 more
Jun 26, 2021 10:48:16 AM hudson.remoting.jnlp.Main$CuiListener error
SEVERE: The server rejected the connection: None of the protocols were accepted
java.lang.Exception: The server rejected the connection: None of the protocols were accepted
at hudson.remoting.Engine.onConnectionRejected(Engine.java:664)
at hudson.remoting.Engine.innerRun(Engine.java:604)
at hudson.remoting.Engine.run(Engine.java:490)
</code></pre>
<p>And getting this from the master Jenkins</p>
<pre><code>2021-06-26 10:47:57.232+0000 [id=73] INFO o.j.r.p.i.ConnectionHeadersFilterLayer#onRecv: [JNLP4-connect connection from 10.244.75.115/10.244.75.115:48028] Refusing headers from remote: Unknown client name: default-t09rx
2021-06-26 10:48:01.652+0000 [id=29] INFO o.c.j.p.k.KubernetesCloud#provision: Excess workload after pending Kubernetes agents: 1
2021-06-26 10:48:01.653+0000 [id=29] INFO o.c.j.p.k.KubernetesCloud#provision: Template for label null: default
2021-06-26 10:48:11.659+0000 [id=73] INFO o.c.j.p.k.KubernetesLauncher#launch: Created Pod: jenkins/default-jxrsk
2021-06-26 10:48:11.660+0000 [id=73] WARNING o.c.j.p.k.KubernetesLauncher#launch: Error in provisioning; agent=KubernetesSlave name: default-jxrsk, template=PodTemplate{inheritFrom='', name='default', namespace='', hostNetwork=false, label='jenkins-jenkins-slave', serviceAccount='default', nodeSelector='doks.digitalocean.com/node-pool=k8s-generic', nodeUsageMode=NORMAL, workspaceVolume=EmptyDirWorkspaceVolume [memory=false], volumes=[HostPathVolume [mountPath=/var/run/docker.sock, hostPath=/var/run/docker.sock]], containers=[ContainerTemplate{name='jnlp', image='myrepo/myimage:jenkins_jnlp-3.40-1-helm3', alwaysPullImage=true, workingDir='/home/jenkins', command='', args='${computer.jnlpmac} ${computer.name}', resourceRequestCpu='1000m', resourceRequestMemory='4096Mi', resourceLimitCpu='1000m', resourceLimitMemory='4096Mi', envVars=[ContainerEnvVar [getValue()=http://jenkins.jenkins:8080, getKey()=JENKINS_URL], KeyValueEnvVar [getValue()=jenkins-agent:50000, getKey()=JENKINS_TUNNEL]], livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@2e38f1da}], imagePullSecrets=[org.csanchez.jenkins.plugins.kubernetes.PodImagePullSecret@5a6fc800]}
java.lang.NoSuchMethodError: io.fabric8.kubernetes.client.dsl.PodResource.watch(Ljava/lang/Object;)Ljava/lang/Object;
at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:134)
at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:294)
at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-06-26 10:48:11.661+0000 [id=73] INFO o.c.j.p.k.KubernetesSlave#_terminate: Terminating Kubernetes instance for agent default-jxrsk
</code></pre>
<p>Any idea? I was trying to Google it for a whole day, but didn't find anything really helpful.</p>
<p>Need your help guys. Thanks.</p>
| <p>Self reply:</p>
<p>I was able to provision slave pods again after I upgraded my Jenkins and Kubernetes plugin to the latest.</p>
<p>The version I'm at is Jenkins 2.289.1, and Kubernetes plugin 1.30.0.</p>
|
<p>By default most people seem to avoid running anything on the masters. These nodes are less likely to be re-provisioned or moved around than the rest of the cluster. It would make them a perfect fit for ingress controllers.</p>
<p>Is there any security and/or management implications/risks in using the masters as ingress nodes?</p>
| <p>As always, the risk is that if your Ingress Controller eats all your IOPS (or memory or CPU, but in this case it would probably be IOPS) then your control plane can become unavailable, leaving you fewer ways to fix the problem.</p>
|
<p>I have two <code>kind: Deployment</code> in my <code>yaml</code> file
The main one</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: accounts-management-service
labels:
app: accounts-management-service
spec:
replicas: $($env:WEB_REPLICAS)
selector:
matchLabels:
app: accounts-management-service
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 20%
maxUnavailable: 10%
progressDeadlineSeconds: 3600
template:
metadata:
labels:
app: accounts-management-service
spec:
containers:
- image: registry$(GetHash).azurecr.io/$(GetContext 'ApplicationContainerName')
name: accounts-management-service
command: ["npm"]
args: ["run", "start:production:web"]
resources:
requests:
memory: "500Mi"
cpu: "1000m"
limits:
memory: "4096Mi"
cpu: "1001m"
env:
- name: CONFIG_DEPLOYMENT_UNIT
value: $(GetContext 'DeploymentUnit')
- name: NODE_ENV
value: $(GetContext 'DeploymentUnit')
- name: TENANT
value: "$(GetContext 'DeploymentUnit' | Format -NoHyphens)$(GetContext 'Cluster')"
- name: ROLE
value: $(GetContext 'Cluster')
ports:
- containerPort: 1337
protocol: TCP
volumeMounts:
- name: secret-agent
mountPath: /var/run/secret-agent
readinessProbe:
httpGet:
path: /v0.1/status
port: 1337
successThreshold: 2
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
livenessProbe:
httpGet:
path: /v0.1/status_without_db
port: 1337
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 30
volumes:
- name: secret-agent
hostPath:
path: /var/run/secret-agent
type: DirectoryOrCreate
</code></pre>
<p>and the second one</p>
<pre><code># second
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: accounts-management-service-second
labels:
app: accounts-management-service-second
spec:
replicas: $($env:second_REPLICAS)
selector:
matchLabels:
app: accounts-management-service-second
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 10%
maxUnavailable: 10%
template:
metadata:
labels:
app: accounts-management-service-second
spec:
containers:
- image: registry$(GetHash).azurecr.io/$(GetContext 'ApplicationContainerName')
name: accounts-management-service-second
command: ["npm"]
args: ["run", "start:production:second"]
resources:
requests:
memory: "500Mi"
cpu: "250m"
limits:
memory: "8192Mi"
cpu: "1001m"
env:
- name: CONFIG_DEPLOYMENT_UNIT
value: $(GetContext 'DeploymentUnit')
- name: NODE_ENV
value: $(GetContext 'DeploymentUnit')
- name: TENANT
value: "$(GetContext 'DeploymentUnit' | Format -NoHyphens)$(GetContext 'Cluster')"
- name: ROLE
value: $(GetContext 'Cluster')
ports:
- containerPort: 1337
protocol: TCP
volumeMounts:
- name: secret-agent
mountPath: /var/run/secret-agent
readinessProbe:
httpGet:
path: /status
port: 1337
initialDelaySeconds: 60
periodSeconds: 10
livenessProbe:
httpGet:
path: /status
port: 1337
initialDelaySeconds: 60
periodSeconds: 10
volumes:
- name: secret-agent
hostPath:
path: /var/run/secret-agent
type: DirectoryOrCreate
</code></pre>
<p>They both point to the same volume path. I am new to Kubernetes and I am trying to understand the relation between pod creation and the two <code>kind: Deployment</code>s. It would be nice if someone could explain this. I hope that this falls into the SO allowed questions category.</p>
| <p>Basically, Deployments figure out which pods they manage via labels: the selector defined under the Deployment's <code>.spec.selector</code> must match the pod labels defined under <code>.spec.template.metadata.labels</code>.</p>
<p>Deployments check for running pods that match the supplied labels while figuring out whether the available replica count equals the desired replica count.</p>
<p>If desired count > available count, the DeploymentController creates a new pod based on the template section (strictly speaking, the Deployment controls a ReplicaSet which in turn controls the pods, but this is additional knowledge not really needed for your understanding). However, Deployments do not only create and delete pods based on the template: if there are already pods running in the Deployment's namespace that match the labels specified in its selector, they are counted as part of your Deployment and towards the available replicas.</p>
<p>Hope this clarifies your understanding of deployments a bit.</p>
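<p>If you want to see this relationship on your own cluster, you can inspect the ReplicaSet each Deployment controls and the owner of a given pod. A small sketch using the label values from your manifests (the pod name is a placeholder):</p>
<pre><code># ReplicaSets created by each Deployment, matched via the pod labels
kubectl get replicaset -l app=accounts-management-service
kubectl get replicaset -l app=accounts-management-service-second

# ownerReferences shows which ReplicaSet (and hence Deployment) owns a pod
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].name}'
</code></pre>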
|
<p>I need to understand why pods are automounting the service account's secret.</p>
<p>If we disable the automount of the service account, will this affect any operation of our application that already has a service account specified in the pod spec, but where automounting of the service account is not disabled?</p>
| <p>How to disable automounting of the service account is explained in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="nofollow noreferrer">linked documentation</a>:</p>
<blockquote>
<p>In version 1.6+, you can opt out of automounting API credentials for a
service account by setting <code>automountServiceAccountToken: false</code> on
the service account.</p>
<p>In version 1.6+, you can also opt out of automounting API credentials
for a particular pod.</p>
</blockquote>
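<p>For illustration, a minimal sketch of both options (the resource names are made up):</p>
<pre><code># Opt out at the ServiceAccount level
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
automountServiceAccountToken: false
---
# ...or per Pod; the Pod-level setting takes precedence over the ServiceAccount one
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app
  automountServiceAccountToken: false
  containers:
  - name: my-app
    image: my-app:1.0
</code></pre>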
<p>There are also some solutions suggested to mitigate the security issue:</p>
<ul>
<li><p><a href="https://github.com/kubernetes/kubernetes/issues/57601#issuecomment-353824159" rel="nofollow noreferrer">Using RBAC</a></p>
</li>
<li><p><a href="https://github.com/kubernetes/kubernetes/issues/57601#issuecomment-659304477" rel="nofollow noreferrer">Using mutating webhooks</a></p>
</li>
</ul>
<hr />
<blockquote>
<p>If we disable the automount of the service account, will this affect any operation of our application that already has a service account specified in the pod spec</p>
</blockquote>
<p>If you disable automounting of the SA secret, the Pod won't be able to access the K8s API server or do any other operation that requires authenticating as a Service Account. It's hard to tell if that would impact your workload or not, only you can tell. A web server or a worker Pod that only talks to other user-defined services might do fine without SA access, but if they want e.g. to spawn K8s Jobs from an application Pod they would need the SA.</p>
<hr />
<blockquote>
<p>But I would like to understand why the secret of the Service Account is getting mounted to the pods even though it's a security escalation.</p>
</blockquote>
<p>The point seems to be, as often in computer security, that we need to weigh convenience vs security. Automatically mounting SA secrets into a Pod makes it easy (=&gt; goes to convenience) to use the K8s API. Disabling this by default is more secure but also less convenient, as you need to explicitly mark those Pods that need access to the K8s API. Whether this is too much of a burden depends very much on the workload, and there's likely no default answer that fits everyone.</p>
<hr />
<blockquote>
<p>Why was it not changed to the more secure default?</p>
</blockquote>
<p>This was answered <a href="https://github.com/kubernetes/kubernetes/issues/57601#issuecomment-494986292" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>disabling by default is not backwards compatible, so is not a
realistic option until (if) a v2 Pod API is made</p>
</blockquote>
<p>and <a href="https://github.com/kubernetes/kubernetes/issues/57601#issuecomment-522209979" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>I'm not saying that it's unreasonable, just that it's going to be a
hard pill to swallow for GA distributions of Kubernetes. I could see
this happening in the v2 pod API.</p>
</blockquote>
|
<p>I have two <code>kind: Deployment</code> in my <code>yaml</code> file
The main one</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: accounts-management-service
labels:
app: accounts-management-service
spec:
replicas: $($env:WEB_REPLICAS)
selector:
matchLabels:
app: accounts-management-service
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 20%
maxUnavailable: 10%
progressDeadlineSeconds: 3600
template:
metadata:
labels:
app: accounts-management-service
spec:
containers:
- image: registry$(GetHash).azurecr.io/$(GetContext 'ApplicationContainerName')
name: accounts-management-service
command: ["npm"]
args: ["run", "start:production:web"]
resources:
requests:
memory: "500Mi"
cpu: "1000m"
limits:
memory: "4096Mi"
cpu: "1001m"
env:
- name: CONFIG_DEPLOYMENT_UNIT
value: $(GetContext 'DeploymentUnit')
- name: NODE_ENV
value: $(GetContext 'DeploymentUnit')
- name: TENANT
value: "$(GetContext 'DeploymentUnit' | Format -NoHyphens)$(GetContext 'Cluster')"
- name: ROLE
value: $(GetContext 'Cluster')
ports:
- containerPort: 1337
protocol: TCP
volumeMounts:
- name: secret-agent
mountPath: /var/run/secret-agent
readinessProbe:
httpGet:
path: /v0.1/status
port: 1337
successThreshold: 2
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
livenessProbe:
httpGet:
path: /v0.1/status_without_db
port: 1337
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 30
volumes:
- name: secret-agent
hostPath:
path: /var/run/secret-agent
type: DirectoryOrCreate
</code></pre>
<p>and the second one</p>
<pre><code># second
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: accounts-management-service-second
labels:
app: accounts-management-service-second
spec:
replicas: $($env:second_REPLICAS)
selector:
matchLabels:
app: accounts-management-service-second
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 10%
maxUnavailable: 10%
template:
metadata:
labels:
app: accounts-management-service-second
spec:
containers:
- image: registry$(GetHash).azurecr.io/$(GetContext 'ApplicationContainerName')
name: accounts-management-service-second
command: ["npm"]
args: ["run", "start:production:second"]
resources:
requests:
memory: "500Mi"
cpu: "250m"
limits:
memory: "8192Mi"
cpu: "1001m"
env:
- name: CONFIG_DEPLOYMENT_UNIT
value: $(GetContext 'DeploymentUnit')
- name: NODE_ENV
value: $(GetContext 'DeploymentUnit')
- name: TENANT
value: "$(GetContext 'DeploymentUnit' | Format -NoHyphens)$(GetContext 'Cluster')"
- name: ROLE
value: $(GetContext 'Cluster')
ports:
- containerPort: 1337
protocol: TCP
volumeMounts:
- name: secret-agent
mountPath: /var/run/secret-agent
readinessProbe:
httpGet:
path: /status
port: 1337
initialDelaySeconds: 60
periodSeconds: 10
livenessProbe:
httpGet:
path: /status
port: 1337
initialDelaySeconds: 60
periodSeconds: 10
volumes:
- name: secret-agent
hostPath:
path: /var/run/secret-agent
type: DirectoryOrCreate
</code></pre>
<p>They both point to the same volume path. I am new to Kubernetes and I am trying to understand the relation between pod creation and the two <code>kind: Deployment</code>s. It would be nice if someone could explain this. I hope that this falls into the SO allowed questions category.</p>
| <p>If you want to figure out which pods were created by a specific Deployment, you can use the <code>kubectl get pods</code> command with the <code>--selector</code> option to filter these pods.</p>
<p>The labels you defined in deployment templates were <code>app=accounts-management-service</code> and <code>app=accounts-management-service-second</code>, you could figure out these pods by:</p>
<pre><code>$ kubectl get pods --selector=app=accounts-management-service
$ kubectl get pods --selector=app=accounts-management-service-second
</code></pre>
|
<p>I'm trying to encode a database string using base64 on the command line in linux.</p>
<p>Once I do, I add the value to a secret in Kubernetes, but my application fails to connect to the database because the db string is not accepted. There seems to be a newline getting added when I check the value in Lens, and this is not there in the same secret in a similar cluster.</p>
<p>jdbc:postgresql://test.xxxxxxxx.eu-west-2.rds.amazonaws.com/test</p>
<pre><code>deirdre$ echo jdbc:postgresql://test.xxxxxxxx.eu-west-2.rds.amazonaws.com/test | base64 | tr -d "\n"
amRiYzpwb3N0Z3Jlc3FsOi8vdGVzdC54eHh4eHh4eC5ldS13ZXN0LTIucmRzLmFtYXpvbmF3cy5jb20vdGVzdAo=
</code></pre>
<p>Is there something I am doing wrong? Or is there an issue with the <code>/</code>?</p>
| <p>You can fix this easily with</p>
<pre><code>echo -n "string" | base64
</code></pre>
<p>"echo -n" removes the trailing newline character.</p>
<p>You can also see the answer I gave to the following question:
<a href="https://stackoverflow.com/questions/68032810/kubernetes-secrets-as-environment-variable-add-space-character/68033302#68033302">Kubernetes secrets as environment variable add space character</a></p>
|
<p>I am working in a microservices architecture. Every service has a few long-running tasks (data processing, report generation) which can take up to 1-2 hours. We are using Kafka for the queue.
How do I handle cases where a pod restart or a deployment happens just before the completion of a task? The task will start again and take that much time again. Is there any way to run these tasks independently of the application pod?</p>
| <p>You can use Kubernetes Jobs for these types of tasks; once a task is done, Kubernetes can also clean up the finished pods automatically (for example via <code>ttlSecondsAfterFinished</code>).</p>
<p>Jobs are configurable and run standalone, independently of your application pods, so redeploying your service does not interrupt them; when you create a new Job it fetches its work from Kafka and starts fresh.</p>
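<p>A minimal sketch of such a Job (the image and arguments are hypothetical):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: report-generation
spec:
  backoffLimit: 3               # retry the task up to 3 times on failure
  ttlSecondsAfterFinished: 300  # clean up the finished Job and its pods automatically
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: my-registry/report-worker:latest    # hypothetical worker image
        args: ["--kafka-topic", "report-requests"] # hypothetical arguments
</code></pre>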
|
<p>I'm writing a program that can deploy to Kubernetes.
The main problem that I'm facing is "offline mode": when I disconnect the computer from the router, Kubernetes stops working because it needs a default route on the network interfaces.</p>
<p>Does anyone know how to set up Kubernetes so it will work without the default network interface?</p>
<p>I tried Minikube and MicroK8S without success.</p>
| <p>Few Kubernetes installers support air-gapped installation and doing it yourself is way out of scope for a new user. If this is for work, you'll want to talk to some of the major commercial distros (OpenShift I'm pretty sure has an air-gap installer, probably also Tanzu) but for new-user home use you should consider this not an option.</p>
|
<p>I have a pod and NodePort service running on GKE.</p>
<p>In the Dockerfile for the container in my pod, I'm using <code>gosu</code> to run a command as a specific user:</p>
<p>startup.sh</p>
<pre><code>exec /usr/local/bin/gosu mytestuser "$@"
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM ${DOCKER_HUB_PUBLIC}/opensuse/leap:latest
# Download and verify gosu
RUN gpg --batch --keyserver-options http-proxy=${env.HTTP_PROXY} --keyserver hkps://keys.openpgp.org \
--recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && \
curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64" && \
curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64.asc" && \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && \
chmod +x /usr/local/bin/gosu
# Add tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
# Add mytestuser
RUN useradd mytestuser
# Run startup.sh which will use gosu to execute following `CMD` as `mytestuser`
RUN /startup/startup.sh
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/helloworld.jar"]
</code></pre>
<p>I've just noticed that when I log into the container on GKE and look at the processes running, the java process that I would expect to be running as <code>mytestuser</code> is actually running as <code>chronos</code>:</p>
<pre><code>me@gke-cluster-1-default-ool-1234 ~ $ ps aux | grep java
root 9551 0.0 0.0 4296 780 ? Ss 09:43 0:00 /tini -- /startup/startup.sh java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
chronos 9566 0.6 3.5 3308988 144636 ? Sl 09:43 0:12 java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
</code></pre>
<p>Can anyone explain what's happening, i.e. who is the <code>chronos</code> user, and why my process is not running as <code>mytestuser</code>?</p>
| <p>When you run a useradd inside the container (or as part of the image build), it adds an entry to the <code>/etc/passwd</code> <em>inside the container</em>. The uid/gid will be in a shared namespace with the host, unless you enable user namespaces. However, the mapping of those ids to names is specific to the filesystem namespace where the process doing the lookup is running. Therefore in this scenario, the uid of mytestuser inside the container happens to be the same uid as chronos on the host.</p>
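<p>A quick way to see the mapping for yourself (the uid value is simply whatever <code>useradd</code> assigned in your image):</p>
<pre><code># Inside the container: the name resolves via the container's /etc/passwd
kubectl exec <pod-name> -- id mytestuser

# On the GKE node: the same numeric uid resolves via the host's /etc/passwd,
# to whatever name the host has for it (chronos in your case)
getent passwd <uid-from-previous-command>
</code></pre>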
|
<p>I have a pod and NodePort service running on GKE.</p>
<p>In the Dockerfile for the container in my pod, I'm using <code>gosu</code> to run a command as a specific user:</p>
<p>startup.sh</p>
<pre><code>exec /usr/local/bin/gosu mytestuser "$@"
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM ${DOCKER_HUB_PUBLIC}/opensuse/leap:latest
# Download and verify gosu
RUN gpg --batch --keyserver-options http-proxy=${env.HTTP_PROXY} --keyserver hkps://keys.openpgp.org \
--recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && \
curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64" && \
curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.12/gosu-amd64.asc" && \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && \
chmod +x /usr/local/bin/gosu
# Add tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/startup/startup.sh"]
# Add mytestuser
RUN useradd mytestuser
# Run startup.sh which will use gosu to execute following `CMD` as `mytestuser`
RUN /startup/startup.sh
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/helloworld.jar"]
</code></pre>
<p>I've just noticed that when I log into the container on GKE and look at the processes running, the java process that I would expect to be running as <code>mytestuser</code> is actually running as <code>chronos</code>:</p>
<pre><code>me@gke-cluster-1-default-ool-1234 ~ $ ps aux | grep java
root 9551 0.0 0.0 4296 780 ? Ss 09:43 0:00 /tini -- /startup/startup.sh java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
chronos 9566 0.6 3.5 3308988 144636 ? Sl 09:43 0:12 java -Djava.security.egd=file:/dev/./urandom -jar /helloworld.jar
</code></pre>
<p>Can anyone explain what's happening, i.e. who is the <code>chronos</code> user, and why my process is not running as <code>mytestuser</code>?</p>
| <p>When you <code>RUN useradd</code>, it assigns a user ID in the <em>image's</em> <code>/etc/passwd</code> file. Your script launches the process using that numeric user ID. When you subsequently run <code>ps</code> from the host, though, it looks up that user ID in the <em>host's</em> <code>/etc/passwd</code> file and gets something different.</p>
<p>This difference doesn't usually matter. Only the numeric user ID matters for things like filesystem permissions, if you're bind-mounting a directory from the host. For security purposes it's important that the numeric user ID not be 0, but that's pretty universally named <code>root</code>.</p>
|
<p>I defined labels in a template at the top of the same deployment.yml file:</p>
<pre><code>{{- define "chart.labels" }}
version: v1.0
method: http
internet: enabled
{{- end }}
</code></pre>
<p>I have deployment.yml file in template folder-</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app1-deployment
namespace: {{ .Values.global.namespace }}
labels:
app: app1
type: microservice1
spec:
replicas: 3
selector:
matchLabels:
app: app1
type: microservice1
strategy:
type: {{ .Values.global.strategytype }}
template:
metadata:
labels:
app: app1
type: microservice1
{{- template "chart.labels" }}
</code></pre>
<p>I am trying to call the template in two ways: one using the <em>template</em> keyword (the last line of the code above), and the second one using the <em>include</em> keyword:</p>
<pre><code>{{include "chart.labels" . | indent 8 }}
</code></pre>
<ul>
<li>I am getting this <strong>error</strong> when I use the <em>template</em> keyword to call the template:</li>
</ul>
<blockquote>
<p>Error: YAML parse error on chart/templates/deployment.yml: error
converting YAML to JSON: yaml: line 27: did not find expected key
helm.go:81: [debug] error converting YAML to JSON: yaml: line 27: did
not find expected key YAML parse error on
chart/templates/deployment.yml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:146 helm.sh/helm/v3/pkg/releaseutil.SortManifests
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:106 helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
helm.sh/helm/v3/pkg/action/action.go:165 helm.sh/helm/v3/pkg/action.(*Install).Run
helm.sh/helm/v3/pkg/action/install.go:247</p>
</blockquote>
<ul>
<li>I am getting another <strong>error</strong> when I use the <em>include</em> keyword to call the template:</li>
</ul>
<blockquote>
<p>Error: YAML parse error on chart/templates/deployment.yml: error
converting YAML to JSON: yaml: line 29: mapping values are not allowed
in this context helm.go:81: [debug] error converting YAML to JSON:
yaml: line 29: mapping values are not allowed in this context YAML
parse error on chart/templates/deployment.yml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:146 helm.sh/helm/v3/pkg/releaseutil.SortManifests
helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:106 helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
helm.sh/helm/v3/pkg/action/action.go:165 helm.sh/helm/v3/pkg/action.(*Install).Run
helm.sh/helm/v3/pkg/action/install.go:247 main.runInstall</p>
</blockquote>
<p>What am I missing here?</p>
| <p>You need to follow sane indentation. You have:</p>
<pre><code>{{- define "chart.labels" }}
version: v1.0
method: http
internet: enabled
{{- end }}
</code></pre>
<p>Note there is no double space in chart.labels definition below.</p>
<p>The below works:</p>
<pre><code>{{- define "chart.labels" }}
version: v1.0
method: http
internet: enabled
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "test.fullname" . }}
labels:
{{- include "test.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "test.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "test.selectorLabels" . | nindent 8 }}
{{include "chart.labels" . | nindent 8 }}
</code></pre>
<p>Edit: Or only change the nindent to match chart.labels in the template meta as below:</p>
<pre><code>{{include "chart.labels" . | nindent 6 }}
</code></pre>
|
<p>I am trying to connect a pod which is running in <strong>Kind</strong> with a local Postgres database which runs in a Docker container. I tried to add the following service but the pod still cannot connect when using the DNS name <code>postgres.dev.svc</code>.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: postgres
namespace: dev
spec:
type: ExternalName
externalName: 10.0.2.2
</code></pre>
<p>Is there another way to connect these two components?</p>
| <p>First of all, this is not the correct usage of the <code>ExternalName</code> service type. Although putting an IP address in the <code>externalName</code> field is perfectly feasible, i.e. the resource will be created and you won't get any complaint from the kubernetes API server, ❗<strong>this value is treated as a DNS name comprised of digits, not as an IP address</strong>. You can read about it in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">the official kubernetes docs</a>:</p>
<blockquote>
<p><strong>Note:</strong> ExternalName accepts an IPv4 address string, but as a DNS names comprised of digits, not as an IP address. ExternalNames that
resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx
because ExternalName is intended to specify a canonical DNS name. To
hardcode an IP address, consider using <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless
Services</a>.</p>
</blockquote>
<p>So what you really need here is <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">Service without a selector</a>:</p>
<blockquote>
<p>Services most commonly abstract access to Kubernetes Pods, but they
can also abstract other kinds of backends. For example:</p>
<ul>
<li>You want to have an external database cluster in production, but in your test environment you use your own databases.</li>
<li>You want to point your Service to a Service in a different <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces" rel="nofollow noreferrer">Namespace</a>
or on another cluster.</li>
<li>You are migrating a workload to Kubernetes. While evaluating the approach, you run only a portion of your backends in Kubernetes.</li>
</ul>
<p>In any of these scenarios you can define a Service <em>without</em> a Pod
selector. For example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9376
</code></pre>
<p>Because this Service has no selector, the corresponding Endpoints
object is not created automatically. You can manually map the Service
to the network address and port where it's running, by adding an
Endpoints object manually:</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: my-service
subsets:
- addresses:
- ip: 192.0.2.42
ports:
- port: 9376
</code></pre>
</blockquote>
<p>In your particular case your <code>Service</code> definition may look as follows:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
ports:
- protocol: TCP
port: 5432
targetPort: 5432
</code></pre>
<p>and the corresponding <code>Endpoints</code> object may look like this:</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: postgres
subsets:
- addresses:
- ip: 10.0.2.2
ports:
- port: 5432
</code></pre>
<p>Of course the IP address <code>10.0.2.2</code> must be reachable from within your kubernetes cluster.</p>
|
<p>I have some microservices deployed on Google Kubernetes Engine composed of 3 "e2-medium" nodes with 10GB of disk each.</p>
<p>Every time I perform a new deployment of the services I get the following load shedding event on the nodes:</p>
<pre><code>Stopping container alice
The node was low on resource: ephemeral-storage. Container alice was using 236831, which exceeds its request of 0.
</code></pre>
<p>The node disk does not appear to be full, and the eviction occurs even when I deploy instances that are not related to the one that is evicted.</p>
<p>How could ephemeral-storage be increased?</p>
| <p>In order to get a sense of your ephemeral-storage usage, you may check the following steps:</p>
<p>Menu>Monitoring>Metrics Explorer></p>
<p>Resource type: kubernetes node</p>
<p>Metric: Ephemeral Storage</p>
<p>Also have a look at the graph in the following <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#local_ephemeral_storage" rel="nofollow noreferrer">link</a>. Ephemeral storage is backed by the node boot disk, so increasing the boot disk size will give you more ephemeral storage.</p>
<p>If you need to set requests and limits on ephemeral storage, please have a look at this <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">example</a>.</p>
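<p>A minimal sketch of such a container spec (the values are illustrative):</p>
<pre><code>containers:
- name: alice
  image: my-registry/alice:latest   # hypothetical image
  resources:
    requests:
      ephemeral-storage: "2Gi"
    limits:
      ephemeral-storage: "4Gi"
</code></pre>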
|
<p>When I try to create an Ingress resource for my Kubernetes cluster (the ingress controller is already created), the Ingress resource/rules are created and I am able to see them in <code>kubectl get ing</code>.
But when I do <code>kubectl describe</code>, I am seeing an error:</p>
<blockquote>
<p>Default backend: default-http-backend:80 (<error: endpoints
“default-http-backend” not found>)</p>
</blockquote>
<p>Is this expected? I am not even able to connect to my application using the DNS name (<code>hotel.example.com</code>) which I defined in the Ingress resource. Is it due to this http-backend error?
If not, any suggestions to make the application reachable?</p>
<pre><code>[dockuser@hostname]$ kubectl describe ing hotel-ingress -n hotel
Name: hotel-ingress
Namespace: hotel
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
hotel.example.com
/ hotel-svc:80 (10.36.0.2:80,10.44.0.2:80)
Annotations:
Events:
</code></pre>
<p>deployment files:
namespaces.yaml</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: hotel
</code></pre>
<p>ingress.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hotel-ingress
namespace: hotel
spec:
rules:
- host: hotel.example.com
http:
paths:
- path: /
backend:
serviceName: hotel-svc
servicePort: 80
</code></pre>
<p>deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hotel
namespace: hotel
spec:
replicas: 2
selector:
matchLabels:
app: hotel
template:
metadata:
labels:
app: hotel
spec:
containers:
- name: hotel
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
</code></pre>
<p>service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hotel-svc
namespace: hotel
spec:
selector:
app: hotel
ports:
- port: 80
targetPort: 80
</code></pre>
| <p>You may want to add a <code>defaultBackend</code> as part of your Ingress definition, like so:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: default-backend-ingress-example
spec:
defaultBackend:
service:
name: hotel-svc
port:
number: 80
</code></pre>
<p>Environment</p>
<pre><code>minikube version: v1.21.0
kubectl version: v1.20.7
</code></pre>
|
<p>Assuming I have set <code>resource.limits.ephemeral-storage</code> for containers in a Kubernetes cluster (using Docker), and the following Docker daemon.json logging configuration on the worker nodes:</p>
<pre><code>{
"log-driver": "json-file",
"log-opts": {
"max-size": "100m",
"max-file": "10",
}
}
</code></pre>
<p>My understanding is that all log files (even the rotated log files) will count towards this ephemeral storage limit. This means that to determine the value for <code>resource.limits.ephemeral-storage</code>, I have to factor the maximum allowed log size (here 10*100MB) into the calculation.</p>
<p>Is there a way to "exclude" log files from counting towards the container's ephemeral-storage limit?</p>
<p>Since log handling is done "outside" of Kubernetes, I want to avoid that the resource limits for Kubernetes workloads depend on the Docker log configuration. Otherwise any change to the rotation settings (e.g. increase to 10*200MB) could cause pods to be evicted, if one would forget to adjust the limit for each and every container.</p>
| <p>Based on the function <a href="https://github.com/kubernetes/kubernetes/blob/d88fadbd65c5e8bde22630d251766a634c7613b0/pkg/kubelet/stats/helper.go#L344" rel="nofollow noreferrer">calcEphemeralStorage</a> from <a href="https://github.com/kubernetes/kubernetes/tree/d88fadbd65c5e8bde22630d251766a634c7613b0" rel="nofollow noreferrer">release 1.17.16 source code</a>, if you want to exclude logs from calculation you can comment or remove those lines and rebuild kubelet:</p>
<pre><code>if podLogStats != nil {
result.UsedBytes = addUsage(result.UsedBytes, podLogStats.UsedBytes)
result.InodesUsed = addUsage(result.InodesUsed, podLogStats.InodesUsed)
result.Time = maxUpdateTime(&result.Time, &podLogStats.Time)
}
</code></pre>
<p>This part of the code is responsible for counting log files towards ephemeral storage usage. Removing that part of the code may also require adjusting some test files which expect the log usage to be part of the calculation.
The per-pod storage statistics are then assembled in <a href="https://github.com/kubernetes/kubernetes/blob/ebcb4a2d88c83096e6068aa56e9a5281976e1fec/pkg/kubelet/stats/cri_stats_provider.go#L392" rel="nofollow noreferrer">this function</a>:</p>
<pre><code>func (p *criStatsProvider) makePodStorageStats(s *statsapi.PodStats, rootFsInfo *cadvisorapiv2.FsInfo) {
podNs := s.PodRef.Namespace
podName := s.PodRef.Name
podUID := types.UID(s.PodRef.UID)
vstats, found := p.resourceAnalyzer.GetPodVolumeStats(podUID)
if !found {
return
}
logStats, err := p.hostStatsProvider.getPodLogStats(podNs, podName, podUID, rootFsInfo)
if err != nil {
klog.ErrorS(err, "Unable to fetch pod log stats", "pod", klog.KRef(podNs, podName))
// If people do in-place upgrade, there might be pods still using
// the old log path. For those pods, no pod log stats is returned.
// We should continue generating other stats in that case.
// calcEphemeralStorage tolerants logStats == nil.
}
etcHostsStats, err := p.hostStatsProvider.getPodEtcHostsStats(podUID, rootFsInfo)
if err != nil {
klog.ErrorS(err, "Unable to fetch pod etc hosts stats", "pod", klog.KRef(podNs, podName))
}
ephemeralStats := make([]statsapi.VolumeStats, len(vstats.EphemeralVolumes))
copy(ephemeralStats, vstats.EphemeralVolumes)
s.VolumeStats = append(append([]statsapi.VolumeStats{}, vstats.EphemeralVolumes...), vstats.PersistentVolumes...)
s.EphemeralStorage = calcEphemeralStorage(s.Containers, ephemeralStats, rootFsInfo, logStats, etcHostsStats, true)
}
</code></pre>
<p>In the last line you can find a usage of <code>calcEphemeralStorage</code>.</p>
<p>In the recent version the <a href="https://github.com/kubernetes/kubernetes/blob/ebcb4a2d88c83096e6068aa56e9a5281976e1fec/pkg/kubelet/stats/helper.go#L374" rel="nofollow noreferrer">mentioned code</a> includes the same log calculation section, so the solution should work for the <a href="https://github.com/kubernetes/kubernetes/tree/092fbfbf53427de67cac1e9fa54aaa09a28371d7" rel="nofollow noreferrer">latest release</a> too.</p>
<p>See also:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/d88fadbd65c5e8bde22630d251766a634c7613b0/pkg/kubelet/eviction/eviction_manager.go#L504" rel="nofollow noreferrer">variable with total usage of ephemeral storage</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/tree/d88fadbd65c5e8bde22630d251766a634c7613b0" rel="nofollow noreferrer">commit of your version</a></li>
<li><a href="https://developer.ibm.com/components/kubernetes/articles/setup-guide-for-kubernetes-developers/" rel="nofollow noreferrer">Setup guide for Kubernetes developers</a></li>
<li><a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/kubernetes/compiling-kubernetes-binaries" rel="nofollow noreferrer">Compiling Kubernetes binaries</a></li>
</ul>
|
<p>We are running a Spring Boot app in a k8s pod that is hosted behind an NGINX ingress with a EC2 load balancer. Our app occasionally needs to send a very large file (10/20 GB). We have observed that this operation occasionally times out when querying through the ingress, but does not timeout when queried directly. To more easily reproduce this, we created a simple endpoint to request a file of arbitrary size (<code>/files/SIZE</code>). That is what you can see below.</p>
<p>When a request times out, the ingress controller does not seem to post any logs. From the HTTP client, when the request times out, here is what we are given:</p>
<pre><code>{ [3744 bytes data]
100 16.4G 0 16.4G 0 0 22.7M 0 --:--:-- 0:12:23 --:--:-- 23.9M* TLSv1.2 (IN), TLS alert, close notify (256):
{ [2 bytes data]
100 16.5G 0 16.5G 0 0 22.7M 0 --:--:-- 0:12:23 --:--:-- 23.6M
* Connection #0 to host INGRESS_URL left intact
* Closing connection 0
curl INGRESS_URL/files/21474836480 -v 31.47s user 26.92s system 7% cpu 12:23.81 total
</code></pre>
<p>Here is the configuration of our ingress:</p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: USER
namespace: NAMESPACE
selfLink: /apis/extensions/v1beta1/namespaces/NAMESPACE/ingresses/USER
uid: d84f3ab2-7f2c-42c1-a44f-c6a7d432f03e
resourceVersion: '658287365'
generation: 1
creationTimestamp: '2021-06-29T13:21:45Z'
labels:
app.kubernetes.io/instance: USER
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: APP
helm.sh/chart: CHART
annotations:
kubernetes.io/ingress.class: nginx-l4-ext
meta.helm.sh/release-name: USER
meta.helm.sh/release-namespace: NAMESPACE
nginx.ingress.kubernetes.io/client-max-body-size: '0'
nginx.ingress.kubernetes.io/proxy-body-size: '0'
nginx.ingress.kubernetes.io/proxy-buffering: 'off'
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: '0'
nginx.ingress.kubernetes.io/proxy-read-timeout: '1800'
nginx.ingress.kubernetes.io/proxy-send-timeout: '1800'
nginx.ingress.kubernetes.io/websocket-services: core-service
nginx.org/websocket-services: core-service
managedFields:
- manager: Go-http-client
operation: Update
apiVersion: networking.k8s.io/v1beta1
time: '2021-06-29T13:21:45Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
.: {}
'f:kubernetes.io/ingress.class': {}
'f:meta.helm.sh/release-name': {}
'f:meta.helm.sh/release-namespace': {}
'f:nginx.ingress.kubernetes.io/client-max-body-size': {}
'f:nginx.ingress.kubernetes.io/proxy-body-size': {}
'f:nginx.ingress.kubernetes.io/proxy-buffering': {}
'f:nginx.ingress.kubernetes.io/proxy-max-temp-file-size': {}
'f:nginx.ingress.kubernetes.io/proxy-read-timeout': {}
'f:nginx.ingress.kubernetes.io/proxy-send-timeout': {}
'f:nginx.ingress.kubernetes.io/websocket-services': {}
'f:nginx.org/websocket-services': {}
'f:labels':
.: {}
'f:app.kubernetes.io/instance': {}
'f:app.kubernetes.io/managed-by': {}
'f:app.kubernetes.io/name': {}
'f:helm.sh/chart': {}
'f:spec':
'f:rules': {}
- manager: nginx-ingress-controller
operation: Update
apiVersion: networking.k8s.io/v1beta1
time: '2021-06-29T13:21:59Z'
fieldsType: FieldsV1
fieldsV1:
'f:status':
'f:loadBalancer':
'f:ingress': {}
spec:
rules:
- host: HOST_URL.com
http:
paths:
- path: /
pathType: Prefix
backend:
serviceName: SERVICE_NAME
servicePort: 9081
status:
loadBalancer:
ingress:
- hostname: LOAD_BALANCER_URL
</code></pre>
<p>We are running <code>ingress-nginx</code>@<code>v0.46.0</code></p>
<p>If anyone has any suggestions for why our large downloads are timing out, that would be great!</p>
<p>Testing Already Done:</p>
<ol>
<li>Verified the params are actually appearing in the generated nginx.conf</li>
<li>Tried changing client-body-timeout - this had no effect.</li>
<li>Recreated the whole environment on my local minikube instance. The application works there. Is it possible this is an Amazon ELB issue?</li>
<li>Changing <code>spring.mvc.async.request-timeout</code> does not fix the issue.</li>
<li>The issue only occurs when making HTTPS calls. HTTP calls run totally fine</li>
</ol>
| <p>I had a similar issue with one of my Spring Boot apps, and the cause was the Spring Boot configuration in the application configuration file (the snippet below is in <code>application.yml</code> format):</p>
<pre><code>spring:
mvc:
async:
request-timeout: 3600000
</code></pre>
<p>Reference: <a href="https://stackoverflow.com/a/43496244/2777988">https://stackoverflow.com/a/43496244/2777988</a></p>
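<p>If you use the classic <code>application.properties</code> format instead, the equivalent setting would be:</p>
<pre><code>spring.mvc.async.request-timeout=3600000
</code></pre>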
|
<p>I am reaching out for ideas on the following topic. Imagine that we have one application. This application needs to execute an interval (cron) job. However, it doesn't make sense to execute it on all instances of the app. It needs to elect just one of X and execute the task, then on the next execution it would again pick one (like load-balancing).</p>
<p>I know that Kubernetes offers <code>Jobs</code> and <code>CronJob</code>s, but those serve only a single purpose of executing a task and then turning off, which would add more complexity to the solution.</p>
<p>An option is of course to have an external coordinator doing this, but I was wondering what is the Kubernetes way to handle such scenario.</p>
<p>Any ideas welcome, thanks.</p>
| <p>Job pattern is what you are looking for. There are many examples in official <a href="https://kubernetes.io/docs/tasks/job/" rel="nofollow noreferrer">Kubernetes Documentation/Tasks/Run Jobs</a></p>
<p>Your need seems to match the work-queue pattern. In this example, as each pod is created, it picks up one unit of work from a task queue, completes it, deletes it from the queue, and exits (a minimal sketch of such a Job follows the steps below).</p>
<ol>
<li><strong>Start a message queue service</strong>. In this example, we use RabbitMQ, but you could use another one. In practice you would set up a message queue service once and reuse it for many jobs.</li>
<li><strong>Create a queue, and fill it with messages.</strong> Each message represents one task to be done. In this example, a message is an integer that we will do a lengthy computation on.</li>
<li><strong>Start a Job that works on tasks from the queue</strong>. The Job starts several pods. Each pod takes one task from the message queue, processes it, and repeats until the end of the queue is reached.</li>
</ol>
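<p>A minimal sketch of such a work-queue Job (the image and queue endpoint are hypothetical):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: process-queue
spec:
  parallelism: 2        # number of worker pods pulling from the queue
  # .spec.completions is left unset: each worker exits once the queue is empty
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: my-registry/queue-worker:latest   # hypothetical worker image
        env:
        - name: BROKER_URL
          value: amqp://rabbitmq-service:5672    # hypothetical queue endpoint
</code></pre>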
|
<p>Asked in <a href="https://stackoverflow.com/questions/65358738/is-there-any-way-to-configure-skaffold-to-build-images-on-my-local-docker-daemon/65395223?noredirect=1#comment120508501_65395223">a different question</a>:</p>
<blockquote>
<p>why does <code>skaffold</code> need two tags to the same image?</p>
</blockquote>
| <p>During deployment, Skaffold rewrites the image references in the Kubernetes manifests being deployed to ensure that the cluster pulls the newly-built images and doesn't use stale copies (read about <code>imagePullPolicy</code> and some of the issues that it attempts to address). Skaffold can't just use the computed image tag as many tag conventions do not produce unique tags and the tag can be overwritten by another developer and point to a different image. It's not unusual for a team of devs, or parallel tests, to push images into the same image repository and encounter tag clashes. For example, <code>latest</code> will be overwritten by the next build, and the default <code>gitCommit</code> tagger generates tags like <code>v1.17.1-38-g1c6517887</code> which uses the most recent version tag and the current commit SHA and so isn't unique across uncommitted source changes.</p>
<p>When pushing to a registry, Skaffold can use the image's <em>digest</em>, the portion after the <code>@</code> in <code>gcr.io/my-project/image:latest@sha256:xxx</code>. This digest is the hash of the image configuration and layers and uniquely identifies a specific image. A container runtime ignores the tag (<code>latest</code> here) when there is a digest.</p>
<p>When loading an image to a Docker daemon, as happens when deploying to minikube, the Docker daemon does not maintain image digests. So Skaffold instead tags the image with a second tag using a <em>computed digest</em>. It's extremely unlikely that two <em>different</em> images will have the same computed digest, unless they're the same image.</p>
<p>Tags are cheap: they're like symlinks, pointing to an image identifier.</p>
|
<p>I have a container running in a GKE autopilot K8s cluster. I have the following in my deployment manifest (only relevant parts included):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
resources:
requests:
memory: "250Mi"
cpu: "512m"
</code></pre>
<p>So I've requested the minimum resources that <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#allowable_resource_ranges" rel="nofollow noreferrer">GKE autopilot allows for normal pods</a>. Note that I have not specified any <code>limits</code>.</p>
<p>However, having applied the manifest and looking at the yaml I see that it does not match what's in the manifest I applied:</p>
<pre><code> resources:
limits:
cpu: 750m
ephemeral-storage: 1Gi
memory: 768Mi
requests:
cpu: 750m
ephemeral-storage: 1Gi
memory: 768Mi
</code></pre>
<p>Any idea what's going on here? Why has GKE scaled up the resources? This is costing me more money unnecessarily.</p>
<p>Interestingly it was working as intended until recently. This behaviour only seemed to start in the past few days.</p>
| <p>If the resources that you've requested are following:</p>
<pre><code> memory: "250Mi"
cpu: "512m"
</code></pre>
<p>Then they are not compliant with the minimal amount of resources that <code>GKE Autopilot</code> will assign. Please take a look on the documentation:</p>
<blockquote>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>NAME</th>
<th>Normal Pods</th>
</tr>
</thead>
<tbody>
<tr>
<td>CPU</td>
<td>250 mCPU</td>
</tr>
<tr>
<td>Memory</td>
<td><strong>512 MiB</strong></td>
</tr>
<tr>
<td>Ephemeral storage</td>
<td>10 MiB (per container)</td>
</tr>
</tbody>
</table>
</div>
<p>-- <em><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#allowable_resource_ranges" rel="noreferrer">Cloud.google.com: Kubernetes Engine: Docs: Concepts: Autopilot overview: Allowable resource ranges</a></em></p>
</blockquote>
<p><strong>As you can see the amount of memory you've requested was too small</strong> and that's why you saw the following message (and the manifest was modified to increase the <code>requests</code>/<code>limits</code>):</p>
<pre class="lang-sh prettyprint-override"><code>Warning: Autopilot increased resource requests for Deployment default/XYZ to meet requirements. See http://g.co/gke/autopilot-resources.
</code></pre>
<p>To fix that, you will need to request resources that are within the allowable ranges from the documentation I've linked above.</p>
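<p>A sketch of requests that Autopilot should accept unchanged; note that Autopilot also rounds CPU requests up to the nearest 250 mCPU increment and sets limits equal to requests, which would explain your <code>512m</code> becoming <code>750m</code>:</p>
<pre><code>resources:
  requests:
    memory: "512Mi"   # the Autopilot minimum for normal pods
    cpu: "500m"       # a multiple of 250 mCPU, at least 250m
</code></pre>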
|
<p>I have a deployment (starterservice) that deploys a single pod with a persistent volume claim. This works. However restart fails:</p>
<pre><code>kubectl rollout restart deploy starterservice
</code></pre>
<p>The new pod is started before the old one has terminated and it cannot attach the volume (Multi-Attach error for volume "pvc-..."). I can work around this by scaling to zero and then back up to 1 instead:</p>
<pre><code>kubectl scale --replicas=0 deployment/starterservice
kubectl scale --replicas=1 deployment/starterservice
</code></pre>
<p>I was wondering if there was a way to get <code>kubectl rollout restart</code> to wait for the old pod to terminate before starting a new one? Tx.</p>
| <p>You need to set the deployment strategy to <code>Recreate</code>:</p>
<pre><code>spec:
strategy:
type: Recreate
</code></pre>
<p>The difference between the <code>Recreate</code> strategy compared to <code>RollingUpdate</code> (default) is that <code>Recreate</code> will terminate the old pod before creating new one while <code>RollingUpdate</code> will create new pod before terminating the old one.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment</a></p>
|
<p>The issue I have is that <code>kubeadm</code> will never fully initialize. The output:</p>
<pre><code>...
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
...
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
...
</code></pre>
<p>and <code>journalctl -xeu kubelet</code> shows the following interesting info:</p>
<pre><code>Dec 03 17:54:08 ip-10-83-62-10.ec2.internal kubelet[14709]: W1203 17:54:08.017925 14709 plugins.go:105] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated. The AWS provider is deprecated and will be removed in a future release
Dec 03 17:54:08 ip-10-83-62-10.ec2.internal kubelet[14709]: I1203 17:54:08.018044 14709 aws.go:1235] Building AWS cloudprovider
Dec 03 17:54:08 ip-10-83-62-10.ec2.internal kubelet[14709]: I1203 17:54:08.018112 14709 aws.go:1195] Zone not specified in configuration file; querying AWS metadata service
Dec 03 17:56:08 ip-10-83-62-10.ec2.internal kubelet[14709]: F1203 17:56:08.332951 14709 server.go:265] failed to run Kubelet: could not init cloud provider "aws": error finding instance i-03e00e9192370ca0d: "error listing AWS instances: \"RequestError: send request failed\\ncaused by: Post \\\"https://ec2.us-east-1.amazonaws.com/\\\": dial tcp 10.83.60.11:443: i/o timeout
</code></pre>
<p>The context is: it's a fully private AWS VPC. There is a proxy that is propagated to k8s manifests.</p>
<p>the kubeadm.yaml config is pretty innocent and looks like this</p>
<pre><code>---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
extraArgs:
cloud-provider: aws
clusterName: cdspidr
controlPlaneEndpoint: ip-10-83-62-10.ec2.internal
controllerManager:
extraArgs:
cloud-provider: aws
configure-cloud-routes: "false"
kubernetesVersion: stable
networking:
dnsDomain: cluster.local
podSubnet: 10.83.62.0/24
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
name: ip-10-83-62-10.ec2.internal
kubeletExtraArgs:
cloud-provider: was
</code></pre>
<p>I'm looking for help to figure out a couple of things here:</p>
<ol>
<li><p>why does kubeadm use this address (<a href="https://ec2.us-east-1.amazonaws.com" rel="nofollow noreferrer">https://ec2.us-east-1.amazonaws.com</a>) to retrieve availability zones? It does not look correct. IMO, it should be something like <code>http://169.254.169.254/latest/dynamic/instance-identity/document</code></p>
</li>
<li><p>why does it fail? With the same proxy settings, a curl request from the terminal returns the web page.</p>
</li>
<li><p>To work around it, how can I specify availability zones on my own in kubeadm.yaml or via a command-line flag for kubeadm?</p>
</li>
</ol>
<p>I would appreciate any help or thoughts.</p>
| <p>You can create a VPC endpoint for accessing EC2 (service name <code>com.amazonaws.us-east-1.ec2</code>). This will allow the kubelet to talk to EC2 without internet access and fetch the required info.</p>
<p>While creating the VPC endpoint, please make sure to enable the private DNS resolution option.</p>
<p>Also, from the error it looks like the kubelet is trying to fetch the instance details, not just the availability zone: ("aws": error finding instance i-03e00e9192370ca0d: "error listing AWS instances).</p>
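<p>A sketch of creating such an endpoint with the AWS CLI (the VPC, subnet and security group IDs are placeholders):</p>
<pre><code>aws ec2 create-vpc-endpoint \
  --vpc-id vpc-xxxxxxxx \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ec2 \
  --subnet-ids subnet-xxxxxxxx \
  --security-group-ids sg-xxxxxxxx \
  --private-dns-enabled
</code></pre>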
|
<p>I am trying to deploy a <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="nofollow noreferrer">PodDisruptionBudget</a> for my deployment, but when I deploy this example</p>
<pre><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: example-pdb
spec:
minAvailable: 1
selector:
matchLabels:
app: example-deployment
</code></pre>
<p>with this deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deployment
spec:
replicas: 1
selector:
matchLabels:
app: example-deployment-app
template:
metadata:
labels:
app: example-deployment-app
spec:
...
</code></pre>
<p>I get the response</p>
<pre><code>$ kubectl get pdb
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
example-pdb 1 N/A 0 7s
</code></pre>
<p>What does it mean for "ALLOWED DISRUPTIONS" to be 0?</p>
| <p>As mentioned by <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget" rel="noreferrer">Specifying a PodDisruptionBudget</a>:</p>
<blockquote>
<p>A <code>PodDisruptionBudget</code> has three fields:</p>
<ul>
<li><p>A label selector <code>.spec.selector</code> to specify the set of pods to which it applies. This field is required.</p>
</li>
<li><p><code>.spec.minAvailable</code> which is a description of the number of pods from that set that must still be available after the eviction, even in
the absence of the evicted pod. <code>minAvailable</code> can be either an
absolute number or a percentage.</p>
</li>
<li><p><code>.spec.maxUnavailable</code> (available in Kubernetes 1.7 and higher) which is a description of the number of pods from that set that can be
unavailable after the eviction. It can be either an absolute number or
a percentage.</p>
</li>
</ul>
</blockquote>
<p>In your case the <code>.spec.minAvailable</code> is set to <code>1</code>, so <code>1</code> Pod must always be available, even during a disruption.</p>
<p>Your Deployment's <code>.spec.replicas</code> is set to <code>1</code>, which in combination with <code>.spec.minAvailable: 1</code> means that there are no disruptions allowed for that configuration.</p>
<p>Take a look at the <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/#check-the-status-of-the-pdb" rel="noreferrer">official example</a>:</p>
<blockquote>
<p>Use <code>kubectl</code> to check that your PDB is created.</p>
<p>Assuming you don't actually have pods matching <code>app: zookeeper</code> in
your namespace, then you'll see something like this:</p>
<pre><code>kubectl get poddisruptionbudgets
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
zk-pdb 2 N/A 0 7s
</code></pre>
<p>If there are matching pods (say, 3), then you would see something like
this:</p>
<pre><code>kubectl get poddisruptionbudgets
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
zk-pdb 2 N/A 1 7s
</code></pre>
<p>The non-zero value for <code>ALLOWED DISRUPTIONS</code> means that the disruption
controller has seen the pods, counted the matching pods, and updated
the status of the PDB.</p>
<p>You can get more information about the status of a PDB with this
command:</p>
<pre><code>kubectl get poddisruptionbudgets zk-pdb -o yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
annotations:
…
creationTimestamp: "2020-03-04T04:22:56Z"
generation: 1
name: zk-pdb
…
status:
currentHealthy: 3
desiredHealthy: 2
disruptionsAllowed: 1
expectedPods: 3
observedGeneration: 1
</code></pre>
</blockquote>
<p>You can see that if the <code>.spec.minAvailable</code> is set to 2 and there are 3 running Pods, then the <code>disruptionsAllowed</code> is actually <code>1</code>. You can check the same with your use case.</p>
|
<p>I'm using the <a href="https://github.com/kubernetes-client/javascript" rel="nofollow noreferrer">Javascript Kubernetes Client</a> and I'm trying to read all resources from a custom resource definition. In particular I want to run <code>kubectl get prometheusrule</code> (prometheusrule is my CRD).</p>
<p>I couldn't find a way to do this yet. I can read resources like this:</p>
<pre><code>const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
k8sApi.listNamespacedPod('default').then((res) => {
res.body.items.forEach(pod => console.log(pod.metadata.name));
});
</code></pre>
<p>But it does not provide a method for reading CRDs.</p>
<p>I also tried</p>
<pre><code>const k8Client = k8s.KubernetesObjectApi.makeApiClient(kc);
k8Client.read({ kind: "service"}).then(res => console.log(res));
</code></pre>
<p>But this way I get the error <code>UnhandledPromiseRejectionWarning: Error: Unrecognized API version and kind: v1 service</code></p>
<p>Any idea how I can achieve this?</p>
| <p>You can use the <a href="https://github.com/kubernetes-client/javascript/blob/master/src/gen/api/customObjectsApi.ts#L1491" rel="nofollow noreferrer">listNamespacedCustomObject</a> function. This function has four required arguments as described below:</p>
<ul>
<li><strong>group</strong> - the custom resource's group name</li>
<li><strong>version</strong> - the custom resource's version</li>
<li><strong>namespace</strong> - the custom resource's namespace</li>
<li><strong>plural</strong> - the custom resource's plural name.</li>
</ul>
<hr />
<p>I've created a sample script that lists all <code>PrometheusRules</code> to illustrate how it works:</p>
<pre><code>$ cat list_rules.js
const k8s = require('@kubernetes/client-node')
const kc = new k8s.KubeConfig()
kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.CustomObjectsApi)
k8sApi.listNamespacedCustomObject('monitoring.coreos.com','v1','default', 'prometheusrules').then((res) => {
res.body.items.forEach(rule => console.log(rule.metadata.name));
});
</code></pre>
<p>We can check if it works as expected:</p>
<pre><code>$ node list_rules.js
prometheus-kube-prometheus-alertmanager.rules
prometheus-kube-prometheus-etcd
prometheus-kube-prometheus-general.rules
prometheus-kube-prometheus-k8s.rules
...
</code></pre>
|
<p>I am using argo events/sensors to create a Kubernetes Job , the sensor gets triggered correctly but it is giving me error "the server could not find the requested resource"</p>
<p>Here is my sensor.yaml</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: exp-webhook
spec:
template:
serviceAccountName: crypto-job-sa
dependencies:
- name: payload
eventSourceName: webhook
eventName: devops-toolkit
triggers:
- template:
name: sample-job
k8s:
group: batch
version: v1
resource: Job
operation: create
source:
resource:
apiVersion: batch/v1
kind: Job
metadata:
name: exp-job-crypto
# annotations:
# argocd.argoproj.io/hook: PreSync
# argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
ttlSecondsAfterFinished: 100
template:
spec:
restartPolicy: OnFailure
containers:
- name: crypto-job
image: docker.artifactory.xyz.com/xyz/pqr:master-b1b347a
</code></pre>
<p>And here is the error details.</p>
<pre><code> {"level":"error","ts":1624900390.6760905,"logger":"argo-events.sensor","caller":"sensors/listener.go:271","msg":"failed to execute a trigger","sensorName":"exp-webhook","error":"failed to execute trigger: timed out waiting for the condition: the server could not find the requested resource",
"errorVerbose":"timed out waiting for the condition: the server could not find the requested resource\nfailed to execute trigger\ngithub.com/argoproj/argo-events/sensors.
(*SensorContext).triggerOne\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:328\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).triggerActions\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:269\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).listenEvents.func1.3\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:181\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357","triggerName":"sample-job","triggeredBy":["payload"],"triggeredByEvents":["32396264373063382d306336312d343039322d616536652d623965383531346666373234"],"stacktrace":"github.com/argoproj/argo-events/sensors.
(*SensorContext).triggerActions\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:271\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).listenEvents.func1.3\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:181"}
</code></pre>
<p>But it does not tell what resource is not found. Can someone please help? I don't know what is the mistake here.</p>
| <p>So the error was: instead of</p>
<pre><code>resource: Job
</code></pre>
<p>it should be</p>
<pre><code>resource: jobs
</code></pre>
<p>That fixed this issue.</p>
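<p>For reference, this is how the trigger block from the sensor above looks with only that field changed (everything else stays exactly as in the original YAML):</p>
<pre><code>triggers:
  - template:
      name: sample-job
      k8s:
        group: batch
        version: v1
        resource: jobs      # lowercase plural resource name
        operation: create
</code></pre>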
|
<p>I got these error messages in my pod, using this command:</p>
<pre><code>kubectl create deploy fastapi-helloworld --image=juanb3r/fastapi-multi:latest
</code></pre>
<p>I don't know why the container can't be created.</p>
<p><a href="https://i.stack.imgur.com/qXV6r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qXV6r.png" alt="enter image description here" /></a></p>
| <p>I just needed to install this:</p>
<pre><code>zypper install apparmor-parser
</code></pre>
<p>on my vagrant</p>
|
<h3>Take this scenario:</h3>
<p><a href="https://i.stack.imgur.com/wSqHg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wSqHg.png" alt="enter image description here" /></a></p>
<p>I want to delete every <strong>running</strong> pod <strong>automatically</strong> using the <strong>Commandline</strong> without having to type <code>kubectl delete pod <pod_name> -n <namespace></code> for each pod.</p>
| <p>You can use <code>awk</code> to filter pod names based on their <code>STATUS</code> column being <code>Running</code>. The code below will delete all pods in the Running state from the <code>$NAMESPACE</code> namespace.</p>
<pre><code> kubectl get pod -n $NAMESPACE|awk '$3=="Running"{print $1}'
</code></pre>
<p>Example:</p>
<pre><code>for pod in $(kubectl get pod -n $NAMESPACE |awk '$3=="Running"{print $1}'); do
kubectl delete pod -n $NAMESPACE $pod
done
</code></pre>
<p><strong>OR</strong></p>
<p>You may use <code>jsonpath</code>,</p>
<pre><code>NAMESPACE=mynamespace
for pod in $(kubectl get pod -n $NAMESPACE -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}{"\n"}'); do
kubectl delete pod -n $NAMESPACE "$pod"
done
</code></pre>
<p>NOTE: The above code will delete all Running pods in the namespace given by the <code>$NAMESPACE</code> variable.</p>
<p><strong>Example:</strong></p>
<pre><code>kubectl get pod -n mynamespace
NAME READY STATUS RESTARTS AGE
foo-mh6j7 0/1 Completed 0 5d3h
nginx 1/1 Running 2 7d10h
mongo 2/2 Running 12 57d
busybox 1/1 Running 187 61d
</code></pre>
<p><code>jsonpath</code> query to print all pods in <code>Running</code> state:</p>
<pre><code>kubectl get pod -n mynamespace -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}{"\n"}'
nginx mongo busybox
</code></pre>
<p>Although you have not asked about the <code>ready</code> state, the following query can be used to <strong>list pods in the ready state</strong>.</p>
<pre><code>kubectl get pod -n mynamespace -o jsonpath='{range .items[*]}{.status.containerStatuses[*].ready.true}{.metadata.name}{ "\n"}{end}'
foo-mh6j7
nginx
mongo
busybox
</code></pre>
<p>Similarly, this can be done via <code>grep</code>:</p>
<pre><code>kubectl get pod -n $NAMESPACE |grep -P '\s+([1-9]+)\/\1\s+'
</code></pre>
<p><strong>NOTE:</strong> Neither solution will prevent <code>pods</code> from being respawned <strong>if</strong> they are created via a <code>replicaset</code>, <code>deployment</code>, <code>statefulset</code>, etc. This means they will get deleted and respawned.</p>
|
<p>I created a cluster on an Ubuntu server using this command:</p>
<pre><code>> kubeadm init --cri-socket /var/run/dockershim.sock --control-plane-endpoint servername.local --apiserver-cert-extra-sans servername.local
</code></pre>
<p>I added Calico like this:</p>
<pre><code>> curl https://docs.projectcalico.org/manifests/calico.yaml -o calico.yaml
> kubectl apply -f calico.yaml
</code></pre>
<p>The Calico pod prints errors:</p>
<pre><code>> kubectl --namespace kube-system logs calico-node-2cg7x
2021-01-05 16:34:46.846 [INFO][8] startup/startup.go 379: Early log level set to info
2021-01-05 16:34:46.846 [INFO][8] startup/startup.go 395: Using NODENAME environment for node name
2021-01-05 16:34:46.846 [INFO][8] startup/startup.go 407: Determined node name: servername
2021-01-05 16:34:46.847 [INFO][8] startup/startup.go 439: Checking datastore connection
2021-01-05 16:34:46.853 [INFO][8] startup/startup.go 454: Hit error connecting to datastore - retry error=Get "https://10.96.0.1:443/api/v1/nodes/foo": dial tcp 10.96.0.1:443: connect: connection refused
2021-01-05 16:34:47.859 [INFO][8] startup/startup.go 454: Hit error connecting to datastore - retry error=Get "https://10.96.0.1:443/api/v1/nodes/foo": dial tcp 10.96.0.1:443: connect: connection refused
2021-01-05 16:34:48.866 [INFO][8] startup/startup.go 454: Hit error connecting to datastore - retry error=Get "https://10.96.0.1:443/api/v1/nodes/foo": dial tcp 10.96.0.1:443: connect: connection refused
2021-01-05 16:34:49.872 [INFO][8] startup/startup.go 454: Hit error connecting to datastore - retry error=Get "https://10.96.0.1:443/api/v1/nodes/foo": dial tcp 10.96.0.1:443: connect: connection refused
2021-01-05 16:34:50.878 [INFO][8] startup/startup.go 454: Hit error connecting to datastore - retry error=Get "https://10.96.0.1:443/api/v1/nodes/foo": dial tcp 10.96.0.1:443: connect: connection refused
2021-01-05 16:34:51.884 [INFO][8] startup/startup.go 454: Hit error connecting to datastore - retry error=Get "https://10.96.0.1:443/api/v1/nodes/foo": dial tcp 10.96.0.1:443: connect: connection refused
2021-01-05 16:34:52.890 [INFO][8] startup/startup.go 454: Hit error connecting to datastore - retry error=Get "https://10.96.0.1:443/api/v1/nodes/foo": dial tcp 10.96.0.1:443: connect: connection refused
2021-01-05 16:34:53.896 [INFO][8] startup/startup.go 454: Hit error connecting to datastore - retry error=Get "https://10.96.0.1:443/api/v1/nodes/foo": dial tcp 10.96.0.1:443: connect: connection refused
</code></pre>
<p>I don't know what <code>10.96.0.1</code> is. It doesn't have any ports open:</p>
<pre><code>> ping 10.96.0.1 -c 1
PING 10.96.0.1 (10.96.0.1) 56(84) bytes of data.
64 bytes from 10.96.0.1: icmp_seq=1 ttl=248 time=5.62 ms
--- 10.96.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.621/5.621/5.621/0.000 ms
> nmap 10.96.0.1
Starting Nmap 7.60 ( https://nmap.org ) at 2021-01-05 17:37 CET
Nmap scan report for 10.96.0.1
Host is up (0.018s latency).
All 1000 scanned ports on 10.96.0.1 are closed
Nmap done: 1 IP address (1 host up) scanned in 1.62 seconds
</code></pre>
<p>The pod actually has IP <code>192.168.1.19</code>.</p>
<p>What am I doing wrong?</p>
| <p>I had the same issue, in my case adding <code>--apiserver-advertise-address=<server-address></code> parameter was the solution.</p>
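<p>Applied to the <code>kubeadm init</code> command from the question, that would look roughly like this (where <code><node-ip></code> is a placeholder for the control-plane node's own IP address):</p>
<pre><code>kubeadm init --cri-socket /var/run/dockershim.sock \
  --control-plane-endpoint servername.local \
  --apiserver-cert-extra-sans servername.local \
  --apiserver-advertise-address=<node-ip>
</code></pre>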
|
<p>I used 2 pods and service on my Kubernetes cluster. They are Redis and my nodeJs application. Also I used scaffold for dev environment.</p>
<p>I want to connect Redis from my nodeJs application. I set environment variable on my nodeJs_app.yaml file for connection to Redis.</p>
<p>My nodeJs_app.yaml file look like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nodeJsApp
spec:
replicas: 1
selector:
matchLabels:
app: no
template:
metadata:
labels:
app: nodeJsApp
spec:
containers:
- name: nodeJsApp
image: nodeJsApp
env:
- name: REDISCONNECTIONDEV
value: "redis://redis-srv:6379/"
---
apiVersion: v1
kind: Service
metadata:
name: nodeJsApp
spec:
type: NodePort
selector:
app: nodeJsApp
ports:
- name: nodeJsApp
protocol: TCP
port: 3000
nodePort: 30001
</code></pre>
<p>Also my Redis.yaml file like that:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-depl
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis
---
apiVersion: v1
kind: Service
metadata:
name: redis-srv
spec:
selector:
app: redis
ports:
- name: redis
protocol: TCP
port: 6379
targetPort: 6379
</code></pre>
<p>I checked with <strong>kubectl get pods</strong> and <strong>kubectl get services</strong>, and each service and port is running correctly.</p>
<p>On my NodeJs_app/index.js file I tried to connect Redis service with:</p>
<pre><code>let REDIS_URL = process.env.REDISCONNECTIONDEV || production_redis_url;
let redisQueue = new Queue('my queue', REDIS_URL);
redisQueue('global:completed', (jobId, result) => {
// console.log(`Job completed with result ${result}`);
});
</code></pre>
<p>But this situation return an error like that:</p>
<pre><code>/app/node_modules/redis-parser/lib/parser.js:179
return new ReplyError(string)
^
ReplyError: ERR value is not an integer or out of range
at parseError (/app/node_modules/redis-parser/lib/parser.js:179:12)
at parseType (/app/node_modules/redis-parser/lib/parser.js:302:14) {
command: { name: 'select', args: [ 'NaN' ] }
}
</code></pre>
<p>Also, I'm new to Redis. Why is this problem occurring? How can I solve this?</p>
| <p>The error occurs because you are trying to connect to a Redis database with NaN instead of a number (the number selects the database, indexed 0–15 by default). Check your code, because somewhere you are passing NaN instead of a number when connecting to the Redis database.</p>
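<p>Assuming you are using Bull (which the <code>new Queue('my queue', REDIS_URL)</code> call suggests), the database index is the last path segment of the connection URL, so a bare trailing <code>/</code> with nothing after it would explain the <code>select NaN</code> in your error. A sketch of a valid connection using database <code>0</code>:</p>
<pre><code>// Sketch: give the URL an explicit database index instead of a bare trailing "/"
const Queue = require('bull');

const REDIS_URL = process.env.REDISCONNECTIONDEV || 'redis://redis-srv:6379/0';
const redisQueue = new Queue('my queue', REDIS_URL);
</code></pre>
<p>The same applies to the <code>REDISCONNECTIONDEV</code> value in the Deployment: use <code>redis://redis-srv:6379/0</code> (or simply <code>redis://redis-srv:6379</code>) instead of <code>redis://redis-srv:6379/</code>.</p>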
<p><a href="https://stackoverflow.com/questions/35621324/how-to-create-own-database-in-redis">How to create own database in redis?</a></p>
|
<p>I tried to do a deployment for some applications in Spring Data Flow.
Routinely each deployment takes a few minutes and either succeeds or fails.
But this time the deployment took longer than usual. At one point I pressed "undeploy"
since the system did not respond.
Under Streams, everything flickers in UNKNOWN mode.
It is not possible to redeploy.
When I try to perform a deploy I get the error <code>Failed to upload the package. Package [test-orders:1.0.0] in Repository [local] already exists.</code> from the UI</p>
<p>When I request the status of the pods I get 2 pods with <code>CrashLoopBackOff</code> status</p>
<ul>
<li>I rebooted all pods <code>kubectl -n **** rollout restart deploy</code></li>
<li>I try to run <code>dataflow:>stream undeploy --name test-orders</code></li>
<li>I deleted the new docker image from EKS</li>
<li>Changed skipper_status from FAILED to DELETED</li>
</ul>
<p>The problem still exists.</p>
<p>I'm really at a loss.</p>
| <p>OK,
I seem to have been able to solve the problem.</p>
<p>Due to the CrashLoopBackOff status I realized that the system is unable to pull the image or the image is corrupt.</p>
<p>I have overwritten all the images in EKS that are associated with the project.</p>
<p>I changed the problematic <code>skipper_status.status_code</code> to <code>DELETED</code>(<code>update skipper_status set status_code = 'DELETED' where id =***</code>).</p>
<p>In the skipper_release table I added</p>
<pre><code>backoffLimit: 6
completions: 1
parallelism: 1
</code></pre>
<p>So a crash of the system after several attempts will result in the end of a run.
I did a reset for all the pods.</p>
<p>And then in the UI interface I pressed the undeploy button.</p>
<p><strong>Edit 1</strong></p>
<p>I noticed that there were pods left that did not close.
I closed them like this:
<code>kubectl -n foobar delete deployment foo-bar-v1</code></p>
|
<p>I have a Spring Boot application (a set of services) deployed in a k8s cluster. For Micrometer metrics I need to dynamically get the namespace this application runs in, so that I can add it as a tag in the metrics. Following is my custom tag creation bean:</p>
<pre><code>@Bean
public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
return registry -> {
String hostname = getHostName();
registry.config().commonTags(TAG_NAME_INSTANCE,hostname);
final String applicationName = environment.getProperty(SPRING_BOOT_APPLICATION_NAME_PROPERTY_KEY);
registry.config().commonTags(TAG_NAME_APPLICATION, applicationName);
registry.config().commonTags(TAG_NAME_APP, applicationName);
registry.config().commonTags(TAG_NAME_KUBE_NAME_SPACE, getKubeNamespace());
        };
    }
</code></pre>
<p>Currently <code>getKubeNamespace</code> returns a hard-coded value. How can I get the namespace this pod is running in?</p>
| <p>Kubernetes has a feature for accessing such fields, it's called the <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">Downward API</a> .</p>
<p>With the help of the Downward API the pod's namespace can be accessed by defining an environment variable in the pod's definition like this:</p>
<pre><code>- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
</code></pre>
<p>After that, the environment variable can be read by Spring:</p>
<pre><code>@Value("${NAMESPACE}")
private String namespace;
</code></pre>
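<p>With that environment variable in place, <code>getKubeNamespace()</code> can be a plain environment lookup. A hedged sketch (the fallback to the service account file is an extra option, not something your setup requires):</p>
<pre><code>private String getKubeNamespace() {
    // Injected via the Downward API env var defined in the pod spec above
    String namespace = System.getenv("NAMESPACE");
    if (namespace != null && !namespace.isEmpty()) {
        return namespace;
    }
    // Optional fallback: pods with a mounted service account token also expose this file
    try {
        return new String(java.nio.file.Files.readAllBytes(
                java.nio.file.Paths.get("/var/run/secrets/kubernetes.io/serviceaccount/namespace"))).trim();
    } catch (java.io.IOException e) {
        return "unknown";
    }
}
</code></pre>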
|
<p>I have a deployment that deploys a single pod with a persistent volume claim. If I switch off the node it is running on, after a while k8s terminates the pod and tries to spin it up elsewhere. However the new pod cannot attach the volume (Multi-Attach error for volume "pvc-...").</p>
<p>I can manually delete the old 'Terminating' pod with <code>kubectl delete pod <PODNAME> --grace-period=0 --force</code> and then things recover.</p>
<p>Is there a way to get Kubernetes to force delete the 'Terminating' pods after a timeout or something? Tx.</p>
| <p>According to <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="nofollow noreferrer">the docs</a>:</p>
<blockquote>
<p>A Pod is not deleted automatically when a node is unreachable. The
Pods running on an unreachable Node enter the 'Terminating' or
'Unknown' state after a timeout. Pods may also enter these states when
the user attempts graceful deletion of a Pod on an unreachable Node.
The only ways in which a Pod in such a state can be removed from the
apiserver are as follows:</p>
<ul>
<li>The Node object is deleted (either by you, or by the Node Controller).</li>
<li>The kubelet on the unresponsive Node starts responding, kills the Pod and removes the entry from the apiserver.</li>
<li>Force deletion of the Pod by the user.</li>
</ul>
</blockquote>
<p>So I assume you are not deleting nor draining the node that is being shut down.
In general I'd advise ensuring any broken nodes are deleted from the node list; that should make the <code>Terminating</code> pods get deleted by the controller manager.
Node deletion normally happens automatically, at least on Kubernetes clusters running on the main cloud providers, but if that's not happening for you then you need a way to remove nodes that are not healthy.</p>
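<p>If you need a stop-gap while the node cleanup is being sorted out, a small script (run from a CronJob or externally) can force-delete pods stuck in <code>Terminating</code>. This is only a sketch automating the same manual command you already use, not an officially supported mechanism:</p>
<pre><code>#!/bin/sh
# Force-delete every pod currently reported as Terminating in the namespace
NAMESPACE=default
for pod in $(kubectl get pods -n "$NAMESPACE" --no-headers | awk '$3=="Terminating"{print $1}'); do
  kubectl delete pod "$pod" -n "$NAMESPACE" --grace-period=0 --force
done
</code></pre>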
|
<p>I often run tasks like:</p>
<ul>
<li>Read the log of the service <code>X</code>
or</li>
<li>Attach a shell inside the service <code>Y</code></li>
</ul>
<p>I always use something in my history like:</p>
<pre><code>kubectl logs `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep <partial_name>`
</code></pre>
<p>or</p>
<pre><code>kubectl exec -it `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep <partial_name>` bash
</code></pre>
<p>Do you know if <code>kubectl</code> has already something in place for this? Or should I create my own set of aliases?</p>
| <p>added to my <code>.zshconfig</code></p>
<pre><code>sshpod () {
kubectl exec --stdin --tty `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep ${1} | head -n 1` -- /bin/bash
}
</code></pre>
<p>usage</p>
<p><code>sshpod podname</code></p>
<p>this</p>
<ol>
<li>finds all pods</li>
<li>greps for the needed name</li>
<li>picks the first match</li>
<li>opens a shell in the pod (via <code>kubectl exec</code>)</li>
</ol>
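<p>The same trick covers the log case from the question; a sketch of a companion function:</p>
<pre><code>logpod () {
  kubectl logs -f `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep ${1} | head -n 1`
}
</code></pre>
<p>usage: <code>logpod podname</code></p>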
|
<p>I need to block pods communication to each other but I failed to do it.<br />
I installed <a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/" rel="nofollow noreferrer">weave plug-in</a> on my <strong>Minikube</strong> (v1.21.0), and started two pods in the same namespace:</p>
<pre><code> kubectl run nginx1 --image=nginx -n ns1
kubectl run nginx2 --image=nginx -n ns2
</code></pre>
<p><strong>The pods IPs:</strong><br />
<code>nginx1</code> with IP: <code>172.17.0.3</code><br />
<code>nginx2</code> with IP: <code>172.17.0.4</code></p>
<p>I can ping <code>nginx1</code> from <code>nginx2</code> and vice vesra.<br />
I wanted to try to deny it, so I firstly tried to deny all the network with this network policy:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netpol1
namespace: earth
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress: []
egress: []
EOF
</code></pre>
<p>I still had ping, so I tried this one too:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netpol1
namespace: earth
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
ingress:
- from:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 172.17.0.0/16
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 172.17.0.0/16
EOF
</code></pre>
<p>I still can ping each pod from within the other pods in the same namespace.<br />
I verified that <code>weave</code> is installed:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-74ff55c5b-2r5z8 1/1 Running 0 2d3h
etcd-ip-172-31-37-46 1/1 Running 0 2d3h
kube-apiserver-ip-172-31-37-46 1/1 Running 0 2d3h
kube-controller-manager-ip-172-31-37-46 1/1 Running 0 2d3h
kube-proxy-787pj 1/1 Running 0 2d3h
kube-scheduler-ip-172-31-37-46 1/1 Running 0 2d3h
storage-provisioner 1/1 Running 0 2d3h
weave-net-wd52r 2/2 Running 1 23m
</code></pre>
<p>I also tried to restart kubelet but I still have access from each pod to the other one.<br />
What can be the reason?</p>
| <p>When you specify the Egress and Ingress rules, you do not specify the network protocol. In the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">kubernetes docs</a> you can see that the protocol can be specified too. Your Kubernetes cluster applies a default protocol to your Egress and Ingress rules if you do not specify one.</p>
<p>If you block all TCP or UDP networking, you will find that ping still works just fine. This is because ping uses the ICMP network protocol, not TCP or UDP.</p>
<p>The actual configuration you need depends on your networking plugin. I do not know how to configure Weave to block ICMP.<br />
If you were using Calico, their <a href="https://docs.tigera.io/calico/latest/network-policy/policy-rules/icmp-ping" rel="nofollow noreferrer">docs</a> explain how to handle the ICMP protocol.</p>
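<p>For comparison only: on Calico the linked docs match ICMP with its own <code>protocol: ICMP</code> rules. A rough sketch of such a policy (Calico-specific API, it will not work with Weave):</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: block-icmp
spec:
  selector: all()
  types:
    - Ingress
    - Egress
  ingress:
    - action: Deny
      protocol: ICMP
  egress:
    - action: Deny
      protocol: ICMP
</code></pre>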
|
<p>When we are trying to extract a large table from a sql server, we are getting an error:</p>
<pre><code>Containerized process terminated by signal 119.
</code></pre>
<p>As per my understanding, Kubernetes containers have a limit on how much memory is allocated to each pod.
So suppose we have a limitation on memory and the table size is expected to be larger; what options do we have?</p>
| <blockquote>
<p>A Container can exceed its memory request if the Node has memory available. But a Container is not allowed to use more than its memory limit. If a Container allocates more memory than its limit, the Container becomes a candidate for termination. If the Container continues to consume memory beyond its limit, the Container is terminated. If a terminated Container can be restarted, the kubelet restarts it, as with any other type of runtime failure.
<sup>[<a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#exceed-a-container-s-memory-limit" rel="nofollow noreferrer">source</a>]</sup></p>
</blockquote>
<p>There are two possible reasons:</p>
<ul>
<li>Your container exceeds it's memory limits set in <code>spec.containers[].resources.limits.memory</code> field; or</li>
<li>Your container exceeds node available memory.</li>
</ul>
<p>In the first case you can increase memory limit by changing <code>spec.containers[].resources.limits.memory</code> value.</p>
<p>In the second case you can either increase node's resources or make sure the pod is being scheduled on a node with more available memory.</p>
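<p>For the first case, a minimal sketch of what raising the limit looks like in the pod spec (the container name and values are placeholders to adjust to your workload):</p>
<pre><code>spec:
  containers:
    - name: extractor
      resources:
        requests:
          memory: "1Gi"
        limits:
          memory: "4Gi"   # raise this if the extraction genuinely needs more memory
</code></pre>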
|
<p>So in my environment I am using Docker and Kubernetes, and now I have to import a certificate into the Pod's Java keystore.
But my container runs as a <code>NON-ROOT</code> user. When I try to import the certificate into the Java keystore it fails, because accessing the Java keystore requires the root user.</p>
<pre><code>keytool -importcert -alias keycloakTest -keystore $JAVA_HOME/lib/security/cacerts -file $WSO2_SERVER_HOME/keycloak.crt -storepass changeit -noprompt
</code></pre>
<p>Now how can I import the certificate into the Java truststore? Currently I am doing this manually, but later I have to do this through a <code>Dockerfile</code>.</p>
<pre><code>COPY /carFiles/keycloak.crt $WSO2_SERVER_HOME/
CMD keytool -importcert -alias keycloakTest -keystore $JAVA_HOME/lib/security/cacerts -file $WSO2_SERVER_HOME/keycloak.crt -storepass changeit -noprompt
</code></pre>
<p>In Kubernetes Pods, does Java run as the <code>root</code> user only?</p>
<p>Edit</p>
<p>I made changes in the Dockerfile, something like this:</p>
<pre><code>COPY /carFiles/keycloak.crt /opt
CMD keytool -importcert -alias keycloakTest -keystore $JAVA_HOME/lib/security/cacerts -file /opt/keycloak.crt -storepass changeit -noprompt
</code></pre>
<p>So from the Pod's shell I am able to see the certificate copied to the <code>/opt</code> directory, but the next <code>CMD</code> command is not able to add the certificate into the Java truststore.</p>
<p><a href="https://i.stack.imgur.com/fqmMI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fqmMI.png" alt="enter image description here" /></a></p>
<pre><code>spec:
volumes:
- name: certs
emptyDir: {}
initContainers:
- name: {{ .Chart.Name }}-create-keystore-truststore
securityContext:
runAsNonRoot: true
runAsUser: 100
image: >-
<HOST>/foapi-tools:20.0.1
command:
- sh
- '-c'
- >
# Import Keycloak server certificate into truststore
# Extract host name from ckey URL
KEYCLOAK_HOST=$(echo
"https://<HOST>/auth" | sed
's|[^/]*//\([^/]*\)/.*|\1|')
echo "Importing server cert of '${KEYCLOAK_HOST}'..."
openssl s_client -connect ${KEYCLOAK_HOST}:443 2>/dev/null
</dev/null | openssl x509 > keycloak.crt
ls -lat /certs/truststore;
workingDir: /certs/truststore
resources:
limits:
cpu: 200m
memory: 200Mi
requests:
cpu: 200m
memory: 100Mi
volumeMounts:
- name: certs
mountPath: /certs/truststore
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
</code></pre>
| <p>As per the shared information, you are using WSO2 MI and trying to generate a token from the Mediation sequence with Keycloak. In such a case, it is not required to add the Keycloak's cert to the <code>cacerts</code> of the JDK. You can make use of the WSO2 MI's <code>client-truststore.jks</code> to import the certs.</p>
<p>To access the <code>/opt</code> path, the user needs root permission. As WSO2 MI itself contains a truststore and not requires any root permissions to import the certs, you can simply import the Keycloak's TLS public cert in there. Given is a sample <code>Dockerfile</code> to import the certs</p>
<pre><code>COPY /carFiles/keycloak.crt $WSO2_SERVER_HOME/repository/resources/security/
RUN keytool -importcert -alias keycloakTest -keystore $WSO2_SERVER_HOME/repository/resources/security/client-truststore.jks -file $WSO2_SERVER_HOME/repository/resources/security/keycloak.crt -storepass changeit -noprompt
</code></pre>
<blockquote>
<p>Note: Replace the <code>CMD</code> with the <code>RUN</code> command, as you are trying to import the certs to the truststore as part of building the images. And not setting that command as an entrypoint to start the containers.</p>
</blockquote>
<p>Further, if you have a certificate chain, you have to include all of it (root, intermediate, and leaf certs) in the truststore. Importing only one of them will again result in an SSL handshake error.</p>
|
<p>I'm not sure if I have a configuration error or I just don't know where to look but I can't seem to find where my files are being stored with these configurations</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: tmp-storage
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
hostPath:
path: /tmp/netes
type: Directory
</code></pre>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: tmp-storage-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
</code></pre>
<p>Deployment</p>
<pre><code>spec:
volumes:
- name: abcd
persistentVolumeClaim:
claimName: tmp-storage-claim
containers:
...
volumeMounts:
- mountPath: '/tmp'
name: abcd
</code></pre>
<p>I've tried accessing the docker desktop vm with <code>docker run -it --rm --privileged --pid=host justincormack/nsenter1</code> but there is nothing inside <code>/tmp</code></p>
<p>There's also nothing on my computer inside <code>/tmp</code></p>
<p>I've also tried looking in <code>/Users/username/.docker/Volumes/</code> but I have no volumes directory in that location</p>
| <p><strong>Here is how I found it on my Mac:</strong></p>
<p><strong>1.</strong> Create PV, PVC, Deployment as you've mentioned. I've just change the PV <code>spec.hostPath.type</code> to <code>DirectoryOrCreate</code></p>
<p><strong>2.</strong> Create a file on the volume using pod shell:</p>
<pre><code>kubeclt exec -ti the-deployment-pod-name-here -- touch /tmp/somefile2.txt
</code></pre>
<p><strong>3.</strong> Run <code>nsenter</code> pod:</p>
<pre><code>docker run -it --rm --privileged --pid=host alpine:edge nsenter -t 1 -m -u -n -i sh
</code></pre>
<p><em>(on the recent DockerDesktop 3.5.1<br />
<code>screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty</code><br />
doesn't show me a VM shell prompt any more)</em></p>
<p><strong>4.</strong> Find the file using <code>find</code> in the <code>nsenter</code> container filesystem:</p>
<pre><code>/ # find / -name somefile2.txt
/var/lib/mount-docker-cache/entries/docker.tar/a08ee16361132af943875aee79aae165cdbfaf0d203ee97fe9d61d29e307aceb/containers/services/docker/tmp/upper/tmp/netes/somefile2.txt
/var/lib/mount-docker-cache/entries/docker.tar/a08ee16361132af943875aee79aae165cdbfaf0d203ee97fe9d61d29e307aceb/containers/services/docker/rootfs/tmp/netes/somefile2.txt
/var/lib/mount-docker-cache/entries/services.tar/bbec751ae945082378b0b2d4a7c6e32f9c35164315587914a95acc4bee8df1ff/containers/services/docker/tmp/upper/tmp/netes/somefile2.txt
/var/lib/mount-docker-cache/entries/services.tar/bbec751ae945082378b0b2d4a7c6e32f9c35164315587914a95acc4bee8df1ff/containers/services/docker/rootfs/tmp/netes/somefile2.txt
/containers/services/docker/tmp/upper/tmp/netes/somefile2.txt
/containers/services/docker/rootfs/tmp/netes/somefile2.txt
</code></pre>
<p><strong>5.</strong> Most promising paths that should work for most cases are:</p>
<pre><code>/containers/services/docker/tmp/upper
/containers/services/docker/rootfs
+ PV hostPath: /tmp/netes
+ filename: somefile2.txt
</code></pre>
<p><strong>Note:</strong> <code>HostPath</code> PV files are located in DockerVM filesystem. I haven't found a way to share Mac folder to PV in DockerDesktop Kubernetes Pod <em>for now</em>.</p>
<p><strong>Note2:</strong> Without specifying <code>StorageClass</code> in PV and PVC, DockerDesktop Kubernetes assigns a default storage class which in my case was <code>hostpath</code> and generates a folder for the PV in the DockerVM temporary location:</p>
<pre><code>/var/lib/k8s-pvs/tmp-storage-claim/pvc-dafbcdf6-a3e8-47cc-af1a-cf740d8ffcd0/somefile2.txt
</code></pre>
|
<p>I'm trying to configure Kubernetes Dashboard using NGINX INGRESS but for some reason I'm getting a 503 error.</p>
<p>I'm running Kubernetes locally in my macbook with docker desktop.</p>
<p>First thing I did was apply/install NGINX INGRESS CONTROLLER</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>Second step was to apply/install kubernetes dashboard YML File</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
</code></pre>
<p>Third Step was to apply the ingress service</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/configuration-snippet: |-
proxy_ssl_server_name on;
proxy_ssl_name $host;
spec:
rules:
- http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: kubernetes-dashboard
port:
number: 433
</code></pre>
<p>When I try to access http://localhost and/or https://localhost I get a 503 Service Temporarily Unavailable Error from nginx</p>
<p>Not sure what I'm doing wrong.</p>
<p>Here is part of the log from the NGINX POD</p>
<pre><code>I0630 23:36:42.049398 10 main.go:112] "successfully validated configuration, accepting" ingress="dashboard-ingress/kubernetes-dashboard"
I0630 23:36:42.055306 10 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"85e7bd9e-308d-4848-8b70-4a3591415464", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"47868", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0630 23:36:42.056435 10 controller.go:146] "Configuration changes detected, backend reload required"
I0630 23:36:42.124850 10 controller.go:163] "Backend successfully reloaded"
I0630 23:36:42.125333 10 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5b74bc9868-gplcq", UID:"bbd70716-b843-403b-a8f9-2add0f63f63f", APIVersion:"v1", ResourceVersion:"46315", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
192.168.65.3 - - [30/Jun/2021:23:36:44 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.003 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.002 400 395aec46af3b21e79cd650f2f86722f3
2021/06/30 23:36:44 [error] 1222#1222: *17477 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
2021/06/30 23:36:45 [error] 1222#1222: *17512 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [30/Jun/2021:23:36:45 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.002 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.001 400 a15e1e48987948cb93503b494d188654
2021/07/01 00:09:31 [error] 1224#1224: *49299 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [01/Jul/2021:00:09:31 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.002 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.001 400 ac6b88ca52b73358c39371cb4422761d
2021/07/01 00:09:32 [error] 1221#1221: *49336 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [01/Jul/2021:00:09:32 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.001 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.001 400 2c5cd2d9403a8e50a77fdc897c694792
2021/07/01 00:09:33 [error] 1221#1221: *49338 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [01/Jul/2021:00:09:33 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.001 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.000 400 f1f630c886d20b9b9c59bd9e0e0e3860
2021/07/01 00:09:33 [error] 1224#1224: *49344 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
192.168.65.3 - - [01/Jul/2021:00:09:33 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.001 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.001 400 2ab6774dec6e2a89599c4745d24b9661
192.168.65.3 - - [01/Jul/2021:00:09:33 +0000] "GET / HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.001 [kubernetes-dashboard-kubernetes-dashboard-80] [] 10.1.0.25:8443 48 0.000 400 c9147e08203d9ec8e7b0d0debab8d556
2021/07/01 00:09:33 [error] 1222#1222: *49360 recv() failed (104: Connection reset by peer) while sending to client, client: 192.168.65.3, server: _, request: "GET / HTTP/1.1", upstream: "http://10.1.0.25:8443/", host: "localhost"
I0701 00:10:19.024220 10 main.go:112] "successfully validated configuration, accepting" ingress="dashboard-ingress/kubernetes-dashboard"
I0701 00:10:19.026772 10 controller.go:146] "Configuration changes detected, backend reload required"
I0701 00:10:19.027392 10 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"85e7bd9e-308d-4848-8b70-4a3591415464", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"50637", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0701 00:10:19.102759 10 controller.go:163] "Backend successfully reloaded"
I0701 00:10:19.103246 10 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5b74bc9868-gplcq", UID:"bbd70716-b843-403b-a8f9-2add0f63f63f", APIVersion:"v1", ResourceVersion:"46315", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
192.168.65.3 - - [01/Jul/2021:00:11:27 +0000] "GET / HTTP/1.1" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - c449f6e8082761ddc3432f956f4701f2
192.168.65.3 - - [01/Jul/2021:00:11:29 +0000] "GET / HTTP/1.1" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 657 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - 3a41974b01c5e63e734fce6e37b98e4c
192.168.65.3 - - [01/Jul/2021:00:11:56 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 408 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - c01f7bec83d3be6b26703b8808f9922a
192.168.65.3 - - [01/Jul/2021:00:11:58 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 24 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - dc39bcddd4ecfdefe931bf16fe3c1557
192.168.65.3 - - [01/Jul/2021:00:16:36 +0000] "GET / HTTP/1.1" 503 190 "-" "curl/7.64.1" 73 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - 82aad4321afbccb3fc54ac75d96b66ee
192.168.65.3 - - [01/Jul/2021:00:31:47 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 417 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - c4ab3d2f272be4d38df62c0ffd50bfe9
I0701 00:48:02.059067 10 main.go:112] "successfully validated configuration, accepting" ingress="dashboard-ingress/kubernetes-dashboard"
I0701 00:48:02.062292 10 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"85e7bd9e-308d-4848-8b70-4a3591415464", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"53737", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0701 00:48:02.062876 10 controller.go:146] "Configuration changes detected, backend reload required"
I0701 00:48:02.131494 10 controller.go:163] "Backend successfully reloaded"
I0701 00:48:02.131787 10 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5b74bc9868-gplcq", UID:"bbd70716-b843-403b-a8f9-2add0f63f63f", APIVersion:"v1", ResourceVersion:"46315", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
192.168.65.3 - - [01/Jul/2021:00:48:12 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 417 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - d50e3bb0db3a5fa7581c405b8c50d5c8
192.168.65.3 - - [01/Jul/2021:00:48:14 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 15 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - c8d8752fb4d79d5bc084839ef9a767b2
I0701 00:49:50.908720 10 main.go:112] "successfully validated configuration, accepting" ingress="dashboard-ingress/kubernetes-dashboard"
I0701 00:49:50.911044 10 controller.go:146] "Configuration changes detected, backend reload required"
I0701 00:49:50.911350 10 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kubernetes-dashboard", Name:"dashboard-ingress", UID:"85e7bd9e-308d-4848-8b70-4a3591415464", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"53896", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0701 00:49:50.979935 10 controller.go:163] "Backend successfully reloaded"
I0701 00:49:50.980213 10 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-5b74bc9868-gplcq", UID:"bbd70716-b843-403b-a8f9-2add0f63f63f", APIVersion:"v1", ResourceVersion:"46315", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
192.168.65.3 - - [01/Jul/2021:00:50:55 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 417 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - d62a8012bc23bbc35a47621d54d68a62
192.168.65.3 - - [01/Jul/2021:00:51:00 +0000] "GET / HTTP/2.0" 503 592 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36" 15 0.000 [kubernetes-dashboard-kubernetes-dashboard-433] [] - - - - 0cbfd2274ad687fc1aaff76dbc483659
</code></pre>
<p>Here is the log for the Kubernete Dashboard Pod</p>
<pre><code>kubectl logs kubernetes-dashboard-78c79f97b4-w5pw9 -n kubernetes-dashboard ✔ docker-desktop ⎈
2021/06/30 23:01:40 Starting overwatch
2021/06/30 23:01:40 Using namespace: kubernetes-dashboard
2021/06/30 23:01:40 Using in-cluster config to connect to apiserver
2021/06/30 23:01:40 Using secret token for csrf signing
2021/06/30 23:01:40 Initializing csrf token from kubernetes-dashboard-csrf secret
2021/06/30 23:01:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2021/06/30 23:01:40 Successful initial request to the apiserver, version: v1.21.1
2021/06/30 23:01:40 Generating JWE encryption key
2021/06/30 23:01:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2021/06/30 23:01:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2021/06/30 23:01:41 Initializing JWE encryption key from synchronized object
2021/06/30 23:01:41 Creating in-cluster Sidecar client
2021/06/30 23:01:41 Auto-generating certificates
2021/06/30 23:01:41 Successful request to sidecar
2021/06/30 23:01:41 Successfully created certificates
2021/06/30 23:01:41 Serving securely on HTTPS port: 8443
</code></pre>
<p>Here are the endpoints for the kubernetes-dashboard namespace</p>
<pre><code>kubectl get ep -n kubernetes-dashboard
NAME ENDPOINTS AGE
dashboard-metrics-scraper 10.1.0.24:8000 11h
kubernetes-dashboard 10.1.0.25:8443 11h
</code></pre>
<p>Any help would be greatly appreciated.</p>
| <p>I was able to fix this issue.
In my Ingress YAML file I had a typo: the port number was set to <code>433</code> instead of <code>443</code>.</p>
<p>As soon as I made and applied that change, I was able to access the dashboard login page with: https://localhost and http://localhost</p>
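<p>For completeness, the corrected backend section of the Ingress (only the port number differs from the manifest in the question):</p>
<pre><code>backend:
  service:
    name: kubernetes-dashboard
    port:
      number: 443
</code></pre>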
|
<p>I have a use case where a telecom application will run within several pods (every pod will host a configured billing service for a specific client), and this requires the service to store state, so the obvious choice is a StatefulSet.
Now the problem is that I need to use iSCSI as the backend storage for these pods. Can you please point me to some reference for such a use case? I am looking for the YAML for the PV, PVC, and StatefulSet and how to link them. These PV/PVCs shall use iSCSI as the storage option.</p>
| <p>Yes, you are right, a <code>statefulset</code> is the option here; however, you could also use a <code>deployment</code>.</p>
<p>You have not mentioned which <code>cloud provider</code> you will be using, but one <strong>note</strong>: <code>iscsi storage</code> is not optimized for K8s nodes running <strong>GKE Container-Optimized OS</strong>, so if you are on <strong>GCP GKE</strong> change the node OS; I would suggest using the <strong>Ubuntu</strong> image first.</p>
<p>You can then start the <code>iscsi</code> service on the <code>Ubuntu</code> nodes.</p>
<p>You can also use <code>Minio</code> or <code>OpenEBS</code> for the storage option.</p>
<p>Sharing the details for OpenEBS:</p>
<p>Create GCP disks to attach to the nodes as a mount, or dynamically provision them using the YAML as needed.</p>
<pre><code>apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
name: disk-pool
annotations:
cas.openebs.io/config: |
- name: PoolResourceRequests
value: |-
memory: 2Gi
- name: PoolResourceLimits
value: |-
memory: 4Gi
spec:
name: disk-pool
type: disk
poolSpec:
poolType: striped
blockDevices:
blockDeviceList:
- blockdevice-<Number>
- blockdevice-<Number>
- blockdevice-<Number>
</code></pre>
<p>Stoage class</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: openebs-sc-rep1
annotations:
openebs.io/cas-type: cstor
cas.openebs.io/config: |
- name: StoragePoolClaim
value: "disk-pool"
- name: ReplicaCount
value: "1"
provisioner: openebs.io/provisioner-iscsi
</code></pre>
<p>Application workload</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
storageClassName: openebs-sc-rep1
</code></pre>
<p>If you are on <code>AWS</code>, <strong>EBS</strong> can be used with <code>iscsi</code>.
For testing you can also check out:</p>
<p><a href="https://cloud.ibm.com/catalog/content/minio" rel="nofollow noreferrer">https://cloud.ibm.com/catalog/content/minio</a></p>
<p>Few links :</p>
<ul>
<li><a href="https://support.zadarastorage.com/hc/en-us/articles/360039289891-Kubernetes-CSI-Getting-started-with-Zadara-CSI-on-GKE-Google-Kubernetes-Engine-" rel="nofollow noreferrer">https://support.zadarastorage.com/hc/en-us/articles/360039289891-Kubernetes-CSI-Getting-started-with-Zadara-CSI-on-GKE-Google-Kubernetes-Engine-</a></li>
</ul>
|
<p>I want to get all events that occurred in a Kubernetes cluster into a Python dictionary, using some API to extract data from events that occurred in the past. I found on the internet that it is possible by storing all data from kube-watch in Prometheus and accessing it later. I am unable to figure out how to set it up and see all past pod events in Python. Any alternative solutions to access past events are also appreciated. Thanks!</p>
| <p>I'll describe a solution that is not complicated and I think meets all your requirements.
There are tools such as <a href="https://github.com/heptiolabs/eventrouter" rel="nofollow noreferrer">Eventrouter</a> that take Kubernetes events and push them to a user specified sink. However, as you mentioned, you only need Pods events, so I suggest a slightly different approach.</p>
<p>In short, you can run the <code>kubectl get events --watch</code> command from within a Pod and collect the output from that command using a log aggregation system like <a href="https://grafana.com/oss/loki/" rel="nofollow noreferrer">Loki</a>.</p>
<p>Below, I will provide a detailed step-by-step explanation.</p>
<h3>1. Running kubectl command from within a Pod</h3>
<p>To display only Pod events, you can use:</p>
<pre><code>$ kubectl get events --watch --field-selector involvedObject.kind=Pod
</code></pre>
<p>We want to run this command from within a Pod. For security reasons, I've created a separate <code>events-collector</code> ServiceAccount with the <code>view</code> Role assigned and our Pod will run under this ServiceAccount.<br />
<strong>NOTE:</strong> I've created a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> instead of a single Pod.</p>
<pre><code>$ cat all-in-one.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: events-collector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: events-collector-binding
subjects:
- kind: ServiceAccount
name: events-collector
namespace: default
roleRef:
kind: ClusterRole
name: view
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: events-collector
name: events-collector
spec:
selector:
matchLabels:
app: events-collector
template:
metadata:
labels:
app: events-collector
spec:
serviceAccountName: events-collector
containers:
- image: bitnami/kubectl
name: test
command: ["kubectl"]
args: ["get","events", "--watch", "--field-selector", "involvedObject.kind=Pod"]
</code></pre>
<p>After applying the above manifest, the <code>event-collector</code> was created and collects Pod events as expected:</p>
<pre><code>$ kubectl apply -f all-in-one.yml
serviceaccount/events-collector created
clusterrolebinding.rbac.authorization.k8s.io/events-collector-binding created
deployment.apps/events-collector created
$ kubectl get deploy,pod | grep events-collector
deployment.apps/events-collector 1/1 1 1 14s
pod/events-collector-d98d6c5c-xrltj 1/1 Running 0 14s
$ kubectl logs -f events-collector-d98d6c5c-xrltj
LAST SEEN TYPE REASON OBJECT MESSAGE
77s Normal Scheduled pod/app-1-5d9ccdb595-m9d5n Successfully assigned default/app-1-5d9ccdb595-m9d5n to gke-cluster-2-default-pool-8505743b-brmx
76s Normal Pulling pod/app-1-5d9ccdb595-m9d5n Pulling image "nginx"
71s Normal Pulled pod/app-1-5d9ccdb595-m9d5n Successfully pulled image "nginx" in 4.727842954s
70s Normal Created pod/app-1-5d9ccdb595-m9d5n Created container nginx
70s Normal Started pod/app-1-5d9ccdb595-m9d5n Started container nginx
73s Normal Scheduled pod/app-2-7747dcb588-h8j4q Successfully assigned default/app-2-7747dcb588-h8j4q to gke-cluster-2-default-pool-8505743b-p7qt
72s Normal Pulling pod/app-2-7747dcb588-h8j4q Pulling image "nginx"
67s Normal Pulled pod/app-2-7747dcb588-h8j4q Successfully pulled image "nginx" in 4.476795932s
66s Normal Created pod/app-2-7747dcb588-h8j4q Created container nginx
66s Normal Started pod/app-2-7747dcb588-h8j4q Started container nginx
</code></pre>
<h3>2. Installing Loki</h3>
<p>You can install <a href="https://github.com/grafana/loki" rel="nofollow noreferrer">Loki</a> to store logs and process queries. Loki is like Prometheus, but for logs :). The easiest way to install Loki is to use the <a href="https://github.com/grafana/helm-charts/tree/main/charts/loki-stack" rel="nofollow noreferrer">grafana/loki-stack</a> Helm chart:</p>
<pre><code>$ helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
$ helm repo update
...
Update Complete. ⎈Happy Helming!⎈
$ helm upgrade --install loki grafana/loki-stack
$ kubectl get pods | grep loki
loki-0 1/1 Running 0 76s
loki-promtail-hm8kn 1/1 Running 0 76s
loki-promtail-nkv4p 1/1 Running 0 76s
loki-promtail-qfrcr 1/1 Running 0 76s
</code></pre>
<h3>3. Querying Loki with LogCLI</h3>
<p>You can use the <a href="https://grafana.com/docs/loki/latest/getting-started/logcli/" rel="nofollow noreferrer">LogCLI</a> tool to run LogQL queries against a Loki server. Detailed information on installing and using this tool can be found in the <a href="https://grafana.com/docs/loki/latest/getting-started/logcli/#installation" rel="nofollow noreferrer">LogCLI documentation</a>. I'll demonstrate how to install it on Linux:</p>
<pre><code>$ wget https://github.com/grafana/loki/releases/download/v2.2.1/logcli-linux-amd64.zip
$ unzip logcli-linux-amd64.zip
Archive: logcli-linux-amd64.zip
inflating: logcli-linux-amd64
$ mv logcli-linux-amd64 logcli
$ sudo cp logcli /bin/
$ whereis logcli
logcli: /bin/logcli
</code></pre>
<p>To query the Loki server from outside the Kubernetes cluster, you may need to expose it using the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource:</p>
<pre><code>$ cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
name: loki-ingress
spec:
rules:
- http:
paths:
- backend:
serviceName: loki
servicePort: 3100
path: /
$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/loki-ingress created
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
loki-ingress <none> * <PUBLIC_IP> 80 19s
</code></pre>
<p>Finally, I've created a simple python script that we can use to query the Loki server:<br />
<strong>NOTE:</strong> We need to set the <code>LOKI_ADDR</code> environment variable as described in the <a href="https://grafana.com/docs/loki/latest/getting-started/logcli/#example" rel="nofollow noreferrer">documentation</a>. You need to replace the <code><PUBLIC_IP></code> with your Ingress IP.</p>
<pre><code>$ cat query_loki.py
#!/usr/bin/env python3
import os
os.environ['LOKI_ADDR'] = "http://<PUBLIC_IP>"
os.system("logcli query '{app=\"events-collector\"}'")
$ ./query_loki.py
...
2021-07-02T10:33:01Z {} 2021-07-02T10:33:01.626763464Z stdout F 0s Normal Pulling pod/backend-app-5d99cf4b-c9km4 Pulling image "nginx"
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.836755152Z stdout F 0s Normal Scheduled pod/backend-app-5d99cf4b-c9km4 Successfully assigned default/backend-app-5d99cf4b-c9km4 to gke-cluster-1-default-pool-328bd2b1-288w
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.649954267Z stdout F 0s Normal Started pod/web-app-6fcf9bb7b8-jbrr9 Started container nginx2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.54819851Z stdout F 0s Normal Created pod/web-app-6fcf9bb7b8-jbrr9 Created container nginx
2021-07-02T10:32:59Z {} 2021-07-02T10:32:59.414571562Z stdout F 0s Normal Pulled pod/web-app-6fcf9bb7b8-jbrr9 Successfully pulled image "nginx" in 4.228468876s
...
</code></pre>
|
<p>I'm learning Kubernetes over Minikube.
My demo consists of a Flask API and a MySQL Database.
I made all the <em>.yaml</em> files but something strange happens with services of the deployments...</p>
<p>I cannot communicate with the API <strong>externally</strong> (neither with Postman, Curl, browser...)</p>
<p>By "externally" I mean "from outside the cluster" (on the same machine, ex: from the browser, postman...)</p>
<p><strong>This is the Deployment+Service for the API:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-dip-api-deployment
labels:
app: api-dip-api
spec:
replicas: 1
selector:
matchLabels:
app: api-dip-api
template:
metadata:
labels:
app: api-dip-api
spec:
containers:
- name: api-dip-api
image: myregistry.com
ports:
- containerPort: 5000
env:
- name: DATABASE_USER
valueFrom:
secretKeyRef:
name: api-secret
key: api-db-user
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: api-secret
key: api-db-password
- name: DATABASE_HOST
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-url
- name: DATABASE_NAME
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-name
- name: DATABASE_PORT
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-port
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
name: api-service
spec:
selector:
app: api-dip-api
ports:
- port: 5000
protocol: TCP
targetPort: 5000
nodePort: 30000
type: LoadBalancer
</code></pre>
<p><strong>Dockerfile API:</strong></p>
<pre><code>FROM python:latest
# create a dir for app
WORKDIR /app
# intall dependecies
COPY requirements.txt .
RUN pip install -r requirements.txt
# source code
COPY /app .
EXPOSE 5000
# run the application
CMD ["python", "main.py"]
</code></pre>
<p>Since i'm using Minikube the correct IP for the service is displayed with</p>
<pre><code>minikube service <service_name>
</code></pre>
<p>I already tried looking the minikube context, as suggested in another post, but it shows like:</p>
<pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube default
</code></pre>
<p>so it should be ok.</p>
<p>I don't know what to try now... the ports are mapped correctly I think.</p>
| <p>I <strong>did not</strong> find any solution to my problem.
I run Kubernetes with Minikube on VMware Fusion on my Mac with Big Sur.</p>
<p>I found out that the SAME EXACT deployment works on a machine with Ubuntu installed, OR on a virtual machine made with VirtualBox.</p>
<p>It actually seems that this is a known issue:</p>
<ul>
<li><a href="https://github.com/kubernetes/minikube/issues/11577" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/11577</a></li>
<li><a href="https://github.com/kubernetes/minikube/issues/11193" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/11193</a></li>
<li><a href="https://github.com/kubernetes/minikube/issues/4027" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/4027</a></li>
</ul>
|
<p>I am trying to setup RabbitMQ Cluster on K8S Cluster (CentOS 7) but it errored with</p>
<pre><code>2021-07-01 16:30:10.048 [error] <0.273.0> Failed to fetch a list of nodes from Kubernetes API: {failed_connect,[{to_address,{"kubernetes.default",443}}, {inet,[inet],nxdomain}]}
</code></pre>
<p>I tried debugging the issue but I am not sure what's missing</p>
<p><strong>nslookup output</strong></p>
<pre><code>[root@re-devk8s-control01 tmp]# nslookup kubernetes.default
Server: 174.25.1.14
Address: 174.25.1.14#53
** server can't find kubernetes.default: NXDOMAIN
</code></pre>
<p><strong>dnsutils output</strong></p>
<pre><code>[root@re-devk8s-control01 tmp]# kubectl exec -i -t dnsutils -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local awstruewind.local
options ndots:5
</code></pre>
<p><strong>/etc/resolv.conf</strong></p>
<pre><code>[root@re-devk8s-control01 tmp]# cat /etc/resolv.conf
# Generated by NetworkManager
search ourdomain.local
nameserver 174.25.1.14
</code></pre>
<p><strong>KubeDNS log</strong> and I see few errors in it</p>
<pre><code>[root@re-devk8s-control01 tmp]# kubectl logs --namespace=kube-system -l k8s-app=kube-dns
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
[ERROR] plugin/errors: 2 610092700101479936.1385205460227937641. HINFO: read udp 10.244.0.31:47645->174.25.1.14:53: read: no route to host
[ERROR] plugin/errors: 2 610092700101479936.1385205460227937641. HINFO: read udp 10.244.0.31:60675->174.25.1.14:53: read: no route to host
</code></pre>
<p>Though there were many similar posts in SO, I was unable to find out the root cause</p>
| <p>I found the following link from SO which helped me to resolve the issue <a href="https://stackoverflow.com/questions/45805483/kubernetes-pods-cant-resolve-hostnames">KubeDNS Masquerade</a></p>
<pre><code>firewall-cmd --list-all
...
masquerade: yes
</code></pre>
<p>Enable it if it is "no":</p>
<pre><code>firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
</code></pre>
|
<p>I am following the <code>Installation Instructions</code> from <a href="https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server" rel="nofollow noreferrer">https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server</a>
and even though the service type has been changed to <code>LoadBalancer</code> I cannot manage to log in.</p>
<p>The information I have is:</p>
<pre><code>$ oc describe svc argocd-server
Name: argocd-server
Namespace: argocd
Labels: app.kubernetes.io/component=server
app.kubernetes.io/name=argocd-server
app.kubernetes.io/part-of=argocd
Annotations: <none>
Selector: app.kubernetes.io/name=argocd-server
Type: LoadBalancer
IP: 172.30.70.178
LoadBalancer Ingress: a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30942/TCP
Endpoints: 10.128.3.91:8080
Port: https 443/TCP
TargetPort: 8080/TCP
NodePort: https 30734/TCP
Endpoints: 10.128.3.91:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>If I do:</p>
<pre><code>$ oc login https://a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
The server is using a certificate that does not match its hostname: x509: certificate is valid for localhost, argocd-server, argocd-server.argocd, argocd-server.argocd.svc, argocd-server.argocd.svc.cluster.local, not a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
error: Seems you passed an HTML page (console?) instead of server URL.
Verify provided address and try again.
</code></pre>
| <p>I managed to successfully log in to the argocd-server with the following:</p>
<pre><code>kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
argoPass=$(kubectl -n argocd get secret argocd-initial-admin-secret \
-o jsonpath="{.data.password}" | base64 -d)
argocd login --insecure --grpc-web k3s_master:32761 --username admin \
--password $argoPass
</code></pre>
|
<p>I just installed the bitnami/wordpress image using helm. Is it possible to sync it with git, so that when I change some files in that git repository the kubernetes pods get updated?</p>
<p>I mean updating the WordPress source code, because I'm modifying plugins in the wp-content/plugins dir.</p>
| <p>You can use <a href="https://argo-cd.readthedocs.io/en/stable/" rel="nofollow noreferrer">ArgoCD</a> or <a href="https://fluxcd.io/" rel="nofollow noreferrer">Flux</a> to automate this type of GitOps workflow. Check their documentation. They are pretty powerful and popular for GitOps in Kubernetes. A rough sketch with ArgoCD is shown below.</p>
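<p>As an illustration only, here is a minimal sketch of an ArgoCD <code>Application</code> that watches a Git repository and keeps the cluster in sync with it. The repository URL, path and namespaces below are placeholders, not taken from the question:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wordpress
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/wordpress-deploy.git   # placeholder repo holding your chart/manifests
    targetRevision: main
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: wordpress
  syncPolicy:
    automated:          # re-sync automatically whenever the repo changes
      prune: true
      selfHeal: true
</code></pre>
<p>Note that this keeps Kubernetes manifests or a chart in sync from Git; changes to the plugin files themselves would typically still need to be baked into an image or mounted from a volume.</p>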
|
<p>I have a simple restful django project that return 'hello world' when you GET localhost:8000.
I want to run it using microk8s.
Using following instruction (<a href="https://microk8s.io/docs/registry-images" rel="nofollow noreferrer">https://microk8s.io/docs/registry-images</a>) i created the image and apply my yaml file</p>
<p>yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-deployment
labels:
app: hello
spec:
selector:
matchLabels:
app: hello
template:
metadata:
labels:
app: hello
spec:
containers:
- name: hello
image: hello1:local
imagePullPolicy: Never
ports:
- containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
name: hello
spec:
selector:
app: hello
ports:
- name: hello
port: 8000
protocol: TCP
targetPort: 8000
</code></pre>
<p>The pod is running properly.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/hello-deployment-6567ccc4f7-7j6nj 1/1 Running 0 61m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello ClusterIP 10.152.183.241 <none> 8000/TCP 59m
</code></pre>
<p>but when i want to access the 10.152.183.241:8000 nothing works.</p>
<p>I also connect to pod by following command</p>
<pre><code>microk8s kubectl exec -it hello-deployment-6567ccc4f7-7j6nj -- bash
</code></pre>
<p>then install curl and test these:</p>
<pre><code>root@hello-deployment-6567ccc4f7-7j6nj:/code# curl localhost:8000
Hello World
root@hello-deployment-6567ccc4f7-7j6nj:/code# curl 10.152.183.241:8000
curl: (7) Failed to connect to 10.152.183.241 port 8000: Connection refused
</code></pre>
<p>Obviously the program is running but there is a problem with the kubernetes service. What's wrong with my yaml file?</p>
<p><strong>Update 1:</strong>
Found that the image has a problem; the setup works with other images.
Still don't know what the problem is.</p>
<p>my Dockerfile:</p>
<pre><code>FROM python:3
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
CMD python3 manage.py runserver
EXPOSE 8000
</code></pre>
| <p>This is because you are using a <strong>ClusterIP</strong> service, which is only accessible internally from within the cluster.</p>
<p>Here you need an <strong>Nginx</strong> ingress controller to expose the service, or you can try connecting to the service through the host IP.</p>
<p>You can also try the command</p>
<pre><code>kubectl port-forward svc/hello 8000:8000
</code></pre>
<p>Once the port is forwarded you can hit <code>curl</code> on <strong>localhost:8000</strong>.</p>
<p>However, for production use cases it's always suggested to use <code>ingress</code> for managing the cluster traffic.</p>
<p>Ingress basically works as a proxy, much like Nginx itself.</p>
<p><strong>Ingress</strong> is a configuration object which is managed by the <code>Ingress controller</code>. When you enable ingress in minikube the controller is installed for you; for other clusters such as <strong>GKE & EKS</strong> you have to set it up manually.</p>
<p>Here is good example for the implementation of the ingress in minikube :</p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/</a></p>
<p>Ingress example:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
spec:
rules:
- host: hello-world.info
http:
paths:
- path: /
backend:
service:
name: hello
              port:
                number: 8000
</code></pre>
|
<p>I am trying to setup RabbitMQ Operator and RabbitMQ Cluster on K8S cluster on bare metal using this <a href="https://www.rabbitmq.com/kubernetes/operator/using-operator.html#create" rel="nofollow noreferrer">link</a></p>
<p>K8S Cluster has got 1 master and 1 worker node</p>
<p>RabbitMQ Cluster pod log</p>
<p>[root@re-ctrl01 containers]# kubectl logs definition-server-0 -n rabbitmq-system</p>
<pre><code>BOOT FAILED (Tailored output)
===========
ERROR: epmd error for host definition-server-0.definition-nodes.rabbitmq-system: nxdomain (non-existing domain)
11:51:13.733 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {epmd_error,"definition-server-0.definition-nodes.rabbitmq-system",nxdomain} in context start_error. Crash dump is being written to: erl_crash.dump...
</code></pre>
<p>[root@re-ctrl01 containers]# kubectl describe pod definition-server-0 -n rabbitmq-system</p>
<pre><code>Name: definition-server-0
Namespace: rabbitmq-system
Priority: 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned rabbitmq-system/definition-server-0 to re-ctrl01.local
Normal Pulled 43s kubelet Container image "rabbitmq:3.8.16-management" already present on machine
Normal Created 43s kubelet Created container setup-container
Normal Started 43s kubelet Started container setup-container
Normal Pulled 42s kubelet Container image "rabbitmq:3.8.16-management" already present on machine
Normal Created 42s kubelet Created container rabbitmq
Normal Started 42s kubelet Started container rabbitmq
Warning Unhealthy 4s (x3 over 24s) kubelet Readiness probe failed: dial tcp 10.244.0.xxx:5672: connect: connection refused
</code></pre>
<p>I added the following entries to <code>/etc/hosts</code> file of <code>worker node</code> because I am NOT sure whether the entry has to be added to <code>master or worker</code></p>
<pre><code>[root@re-worker01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 re-worker01.local re-worker01 definition-server-0.definition-nodes.rabbitmq-system
</code></pre>
<p>I am stuck with this issue for almost 2 days. I googled and found similar issues but none resolved my issue</p>
<p>I see multiple issues in pod logs and describe output and I am unable to find out the root cause</p>
<ol>
<li>Where can I find <code>erl_crash.dump</code> file on K8S ?</li>
<li>Is this really a hostname related issue ?</li>
<li>10.244.0.xxx:5672: connect: connection refused - Is this issue is because of <code>'epmd'</code> or something else ?</li>
</ol>
| <p>I managed to resolve the issue after spending a lot of time on it.</p>
<p>I added the host <code>definition-server-0.definition-nodes.rabbitmq-system</code> to the <code>/etc/hosts</code> file of the RabbitMQ Cluster pod using <strong>hostAliases</strong>.</p>
<p>YAML to add <strong>hostAliases</strong> is given below</p>
<pre><code>apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: definition
namespace: rabbitmq-system
spec:
replicas: 1
override:
statefulSet:
spec:
template:
spec:
containers: []
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "definition-server-0"
- "definition-server-0.definition-nodes.rabbitmq-system"
</code></pre>
|
<p>Sometimes when I use helm charts, not all things I want to modify are usable with the given values. Is it practical to use kustomize to modify the rendered helm chart?<br />
So like this:<br />
chart -> helm template -> kustomize -> kubectl deploy</p>
| <p>We do use it sometimes. You can use Helm directly in Kustomize with the helmCharts plugin. In my example, <code>values-prod.yaml</code> has to be in the same directory as the <code>kustomization.yaml</code>, and <code>namespace: custom-metallb</code> will override <code>namespace: metallb</code>.</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: custom-metallb
helmCharts:
  - name: metallb
    namespace: metallb
    releaseName: metallb
    repo: https://metallb.github.io/metallb
    version: 0.10.2
    valuesFile: values-prod.yaml
</code></pre>
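<p>As a usage note (not part of the original answer): depending on your kustomize version, chart inflation may need to be enabled explicitly when building, and the <code>helm</code> binary must be available on your PATH.</p>
<pre><code># Sketch of the build step; recent kustomize versions require the --enable-helm flag
kustomize build --enable-helm . | kubectl apply -f -
</code></pre>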
<p>To be honest, the documentation is not that great, but you can find the arguments here:
<a href="https://github.com/kubernetes-sigs/kustomize/blob/master/api/types/helmchartargs.go" rel="noreferrer">https://github.com/kubernetes-sigs/kustomize/blob/master/api/types/helmchartargs.go</a></p>
<p>Documentation from kustomize:
<a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md" rel="noreferrer">https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md</a></p>
|
<p>I'm trying to capture some logs that are file-based in an application pod on GKE and view them from Google Cloud Logging.</p>
<p>For various reasons, these application logs are not sent to STDOUT or STDERR (since those logs are automatically sent to Cloud Logging). I have been suggested to implement a scripting solution that tails the logs and sends them to STDOUT. However I was hoping in a side-car approach with Fluentd (or Fluentbit) logging agent that'll tail the logs and output them to Cloud Logging.</p>
<p>Using the sidecar image <code>"k8s.gcr.io/fluentd-gcp:1.30"</code>, I've tried out the below YAML file (containing the fluentd configmap and deployment) below:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: app-log-config
data:
fluentd.conf: |
<source>
type tail
format none
path /var/log/execution*.log
pos_file /var/log/execution.pos
tag app.*
</source>
<match **>
type google_cloud
</match>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app.kubernetes.io/name: app
app.kubernetes.io/instance: app
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: app
app.kubernetes.io/instance: app
template:
metadata:
labels:
app.kubernetes.io/name: app
app.kubernetes.io/instance: app
spec:
serviceAccountName: app
volumes:
- name: executionlogs
emptyDir: {}
- name: fluentdconfig
configMap:
name: app-log-config
containers:
- name: app
image: appimage:version
imagePullPolicy: IfNotPresent
volumeMounts:
- name: executionlogs
mountPath: /tmp/executionLogs
ports:
- name: http
containerPort: 8080
protocol: TCP
- name: log-agent
image: "k8s.gcr.io/fluentd-gcp:1.30"
imagePullPolicy: IfNotPresent
env:
- name: FLUENTD_ARGS
value: "-c /etc/fluentd-config/fluentd.conf"
volumeMounts:
- name: executionlogs
mountPath: /var/log
- name: fluentdconfig
mountPath: /etc/fluentd-config
</code></pre>
<p>Initially, the sidecar logs were throwing a 403 error, since I hadn't given the service account the requisite permissions (I was using GKE workload identity and the corresponding GCP IAM service account required logWriter permissions to be added). After fixing the error, I got the following logs:</p>
<pre><code>2021-06-27 12:49:09 +0000 [info]: fluent/supervisor.rb:471:read_config: reading config file path="/etc/fluentd-config/fluentd.conf"
2021-06-27 12:49:09 +0000 [info]: fluent/supervisor.rb:337:supervise: starting fluentd-0.12.29
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-mixin-config-placeholders' version '0.4.0'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-google-cloud' version '0.5.2'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-kafka' version '0.3.1'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-mongo' version '0.7.15'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-record-reformer' version '0.8.2'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.5'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-s3' version '0.7.1'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-scribe' version '0.10.14'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-systemd' version '0.0.5'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-td' version '0.10.29'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-td-monitoring' version '0.2.2'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluent-plugin-webhdfs' version '0.4.2'
2021-06-27 12:49:09 +0000 [info]: fluent/engine.rb:126:block in configure: gem 'fluentd' version '0.12.29'
2021-06-27 12:49:09 +0000 [info]: fluent/agent.rb:129:add_match: adding match pattern="**" type="google_cloud"
2021-06-27 12:49:10 +0000 [info]: plugin/out_google_cloud.rb:519:block in detect_platform: Detected GCE platform
2021-06-27 12:49:10 +0000 [info]: plugin/out_google_cloud.rb:290:configure: Logs viewer address: https://console.developers.google.com/
project/projectname/logs?service=compute.googleapis.com&key1=instance&key2=9071465168741286442
2021-06-27 12:49:10 +0000 [info]: fluent/root_agent.rb:147:add_source: adding source type="tail"
2021-06-27 12:49:10 +0000 [info]: fluent/engine.rb:133:configure: using configuration file: <ROOT>
<source>
type tail
format none
path /var/log/execution*.log
pos_file /var/log/execution.pos
tag app.*
</source>
<match **>
type google_cloud
</match>
</ROOT>
2021-06-27 12:52:10 +0000 [info]: plugin/in_tail.rb:557:initialize: following tail of /var/log/execution1.log
2021-06-27 12:53:10 +0000 [info]: plugin/out_google_cloud.rb:451:block in write: Successfully sent to Google Cloud Logging API.
</code></pre>
<p>Despite the successful message, I don't see anything on the Cloud Logging end.</p>
<p>So, here are my questions:</p>
<ol>
<li>Is there a better solution for my use-case?</li>
<li>Is this the sidecar image I should be using? I wasn't able to find any other fluentd images and the one I'm using is 3 years old. I'd prefer to use something recommended by Google rather than create my own.</li>
<li>What do I additionally need to do to see the logs on Cloud Logging? How do I debug this further?</li>
</ol>
<p>Thanks!</p>
| <p>I've tried implementing the configuration you described and faced the same issue. I then configured all sources to stream their output to STDOUT and was able to view the logs on the Cloud Logging dashboard.</p>
<p>Below are the sample configurations I’ve used.</p>
<p>Sample_map-config.yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config
data:
fluentd.conf: |
<source>
type tail
format none
path /var/log/1.log
pos_file /var/log/1.log.pos
tag count.format1
</source>
<source>
type tail
format none
path /var/log/2.log
pos_file /var/log/2.log.pos
tag count.format2
</source>
<match **>
type stdout
</match>
</code></pre>
<p>Sample-pod.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: busybox
args:
- /bin/sh
- -c
- >
i=0;
while true;
do
echo "$i: $(date)" >> /var/log/1.log;
echo "$(date) INFO $i" >> /var/log/2.log;
i=$((i+1));
sleep 1;
done
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-agent
image: k8s.gcr.io/fluentd-gcp:1.30
env:
- name: FLUENTD_ARGS
value: -c /etc/fluentd-config/fluentd.conf
volumeMounts:
- name: varlog
mountPath: /var/log
- name: config-volume
mountPath: /etc/fluentd-config
volumes:
- name: varlog
emptyDir: {}
- name: config-volume
configMap:
name: fluentd-config
</code></pre>
|
| <p>The goal is to terminate the pod after completion of the Job.
This is my yaml file. Currently, my pod status is <strong>Completed</strong> after running the job.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
# Unique key of the Job instance
name: example-job
spec:
template:
metadata:
name: example-job
spec:
containers:
- name: container-name
image: my-img
command: ["python", "main.py"]
# Do not restart containers after they exit
restartPolicy: Never
# of retries before marking as failed.
backoffLimit: 4
</code></pre>
| <p>A Job's pod basically terminates itself after the main container of that pod finishes successfully. If it returns a failure exit code it will retry as many times as you specified in your backoffLimit.</p>
<p>So it seems as if your container does not terminate after it finishes whatever job it is supposed to do. Without knowing anything about your job image I cannot tell you exactly what you need to do.
However, it seems as if you need to adapt your main.py to exit properly after it has done what it is supposed to do, as sketched below.</p>
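<p>As an illustration only (the image and script from the question aren't shown, so the file name and contents here are assumptions): a Job-friendly <code>main.py</code> does its work and then returns, rather than blocking in a server loop.</p>
<pre><code># main.py - hypothetical sketch of a script whose container terminates when done
import sys

def do_work() -> int:
    print("doing the batch work...")  # replace with the real task
    return 0                          # 0 = success; non-zero triggers Job retries

if __name__ == "__main__":
    sys.exit(do_work())               # process exits, container stops, Job completes
</code></pre>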
|
<p>I am trying to configure an aws alb ingress for an existing cluster which already has nginx ingress configured. This is the resource file I'm using in nginx ingress which works fine.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "stage-ingress-1"
namespace: "teststage"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/certificate-arn: <cert_arn>
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/scheme: internet-facing
labels:
app: test-ingress
spec:
rules:
- http:
paths:
- path: /(.*)
backend:
serviceName: ssl-redirect
servicePort: use-annotation
- path: /(.*)
backend:
serviceName: "test-service"
servicePort: 80
host: test.abc.com
</code></pre>
<p>However, I get an error saying</p>
<p><code>"msg"="Reconciler error" "error"="failed to reconcile listeners due to failed to reconcile rules due to failed modifying rule 2 on aws alb due to ValidationError: Condition value '/(.*)' contains a character that is not valid</code></p>
<p>I understand it has something to do with the path specified, but I do not understand how this has to be configured. This is the path which contains all my data.</p>
| <p>It is not a regex. The path matching the ALB supports is much simpler than what
you want.</p>
<pre><code>spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: ssl-redirect
servicePort: use-annotation
- path: /*
backend:
serviceName: "test-service"
servicePort: 80
host: test.abc.com
</code></pre>
|
<p>I am using a Nginx Ingress Controller in a Kubernetes Cluster. I've got an application within the cluster, which was available over the internet. Now I'm using the Ingress Controller to access the application, with the intent of showing some custom errors.</p>
<p>If i access the application (which is not written by myself, therefore I can't change things there), it receives the IP address of the <code>nginx-ingress-controller-pod</code>. The logs of the <code>nginx-ingress-controller-pod</code> indicate that the remote address is a different one.</p>
<p>I've already tried things like <code>use-proxy-protocol</code>, so that I would be able to use <code>$remote_addr</code> and get the right IP. But as I mentioned I am not able to change my application, so I have to "trick" the ingress controller into using the <code>$remote_addr</code> as its own.
How can I configure the ingress so that the application gets the request from the remote IP and not from the <code>nginx-ingress-controller-pod</code> IP? Is there a way to do this?</p>
<p>Edit: I'm using a bare metal kubernetes installation with kubernetes v1.19.2 and the nginx chart <code>ingress-nginx-3.29.0</code>.</p>
| <p><strong>This is not achievable with a layer 7 ingress controller.</strong></p>
<p>If the Ingress preserved the source IP, the response would go directly from the app pod to the client, so the client would get a response from an IP:port different from the one it connected to. Or even worse - the client's NAT would drop the response completely because it doesn't match any existing connection.</p>
<p>You can take a look at this <a href="https://stackoverflow.com/questions/63836681/preserve-source-ip-on-kubernetes-bare-metal-with-ingress-nginx-iptables-and-met">similar question</a> on Stack Overflow with an accepted answer:</p>
<blockquote>
<p>As ingress is above-layer-4 proxy. There is no way you can preserve SRC IP in layer 3 IP protocol. The best is and I think Nginx Ingress already been set by default that they put the "X-Forwarded-For" header in any HTTP forward.
Your app supposes to log the X-Forwarded-For header</p>
</blockquote>
<p>You can try to work around this by following <a href="https://docs.ovh.com/gb/en/kubernetes/getting-source-ip-behind-loadbalancer/" rel="nofollow noreferrer">this article</a>. It could help you preserve the client IP.</p>
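<p>As a sketch of the usual knob such articles rely on (the Service name and namespace below are assumptions - adjust them to your ingress-nginx installation): setting <code>externalTrafficPolicy: Local</code> on the controller's Service keeps the original client IP instead of SNAT-ing it to a node IP.</p>
<pre><code># Hypothetical names; the externalTrafficPolicy field itself is standard Kubernetes
kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>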
<p>I also recommend this <a href="https://blog.envoyproxy.io/introduction-to-modern-network-load-balancing-and-proxying-a57f6ff80236" rel="nofollow noreferrer">very good article</a> about load balancing and proxying. You will also learn a bit about load balancing on L7:</p>
<blockquote>
<p>L7 load balancing and the OSI model
As I said above in the section on L4 load balancing, using the OSI model for describing load balancing features is problematic. The reason is that L7, at least as described by the OSI model, itself encompasses multiple discrete layers of load balancing abstraction. e.g., for HTTP traffic consider the following sublayers:</p>
<ul>
<li>Optional Transport Layer Security (TLS). Note that networking people argue about which OSI layer TLS falls into. For the sake of this discussion we will consider TLS L7.</li>
<li>Physical HTTP protocol (HTTP/1 or HTTP/2).</li>
<li>Logical HTTP protocol (headers, body data, and trailers).</li>
<li>Messaging protocol (gRPC, REST, etc.).</li>
</ul>
</blockquote>
|
| <p>I have to set up a monitoring environment for my EKS cluster.
Prometheus is running on an external node and I am trying to use the node exporter daemonset for getting metrics.
But when I look at the targets in Prometheus I am not able to see any target apart from localhost.</p>
<p><strong>Kubernetes_sd_config block</strong></p>
<pre><code>global:
scrape_interval: 15s
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 15s
static_configs:
- targets: ['localhost:9100']
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name]
action: keep
regex: default;kubernetes;https
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- job_name: 'kubernetes-kube-state'
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
kubernetes_sd_configs:
- role: pod
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
scheme: https
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
- source_labels: [__meta_kubernetes_pod_label_grafanak8sapp]
regex: .*true.*
action: keep
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- source_labels: ['__meta_kubernetes_pod_label_daemon', '__meta_kubernetes_pod_node_name']
regex: 'node-exporter;(.*)'
action: replace
target_label: nodename
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_name]
regex: (.+);(.+)
target_label: __metrics_path__
replacement: /api/v1/namespaces/${1}/pods/${2}/proxy/metrics
###################################################################################
# Scrape config for nodes (kubelet). #
# #
# Rather than connecting directly to the node, the scrape is proxied though the #
# Kubernetes apiserver. This means it will work if Prometheus is running out of #
# cluster, or can't connect to nodes for some other reason (e.g. because of #
# firewalling). #
###################################################################################
- job_name: 'kubernetes-kubelet'
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
kubernetes_sd_configs:
- role: node
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
kubernetes_sd_configs:
- role: node
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
###################################################################################
# Example scrape config for service endpoints. #
# #
# The relabeling allows the actual service scrape endpoint to be configured #
# for all or only some endpoints. #
###################################################################################
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
#########################################################################################
# Example scrape config for probing services via the Blackbox Exporter. #
# #
# The relabeling allows the actual service scrape endpoint to be configured #
# for all or only some services. #
#########################################################################################
- job_name: 'kubernetes-services'
kubernetes_sd_configs:
- role: service
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name]
regex: (.+);(.+)
target_label: __metrics_path__
replacement: /api/v1/namespaces/$1/services/$2/proxy/metrics
##################################################################################
# Example scrape config for pods #
# #
# The relabeling allows the actual pod scrape to be configured #
# for all the declared ports (or port-free target if none is declared) #
# or only some ports. #
##################################################################################
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- source_labels: [__address__, __meta_kubernetes_pod_annotation_example_io_scrape_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pods
- job_name: 'kubernetes-service-endpoints-e'
kubernetes_sd_configs:
- role: endpoints
api_server: https://{{ kubernetes_api_server_addr }}
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
scheme: https
tls_config:
insecure_skip_verify: true
bearer_token_file: /etc/prometheus/token
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: (\d+)
target_label: __meta_kubernetes_pod_container_port_number
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
regex: ()
target_label: __meta_kubernetes_service_annotation_prometheus_io_path
replacement: /metrics
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_pod_container_port_number, __meta_kubernetes_service_annotation_prometheus_io_path]
target_label: __metrics_path__
regex: (.+);(.+);(.+);(.+)
replacement: /api/v1/namespaces/$1/services/$2:$3/proxy$4
- target_label: __address__
replacement: {{ kubernetes_api_server_addr }}
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- source_labels: [__meta_kubernetes_pod_node_name]
action: replace
target_label: instance
</code></pre>
<p>This is the Prometheus.yml file that I have on my prometheus instance.</p>
<p><strong>Prometheus instance logs /var/log/messages</strong></p>
<pre><code>Jul 1 15:18:53 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:53.655Z caller=log.go:124 component=k8s_client_runtime level=debug func=Verbose.Infof msg="Listing and watching *v1.Endpoints from pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167"
Jul 1 15:18:53 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:53.676Z caller=log.go:124 component=k8s_client_runtime level=debug func=Infof msg="GET https://XXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api/v1/endpoints?limit=500&resourceVersion=0 in 20 milliseconds"
Jul 1 15:18:53 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:53.676Z caller=log.go:124 component=k8s_client_runtime level=error func=ErrorDepth msg="pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \"https://XXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api/v1/endpoints?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority"
Jul 1 15:18:56 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:56.445Z caller=log.go:124 component=k8s_client_runtime level=debug func=Verbose.Infof msg="Listing and watching *v1.Pod from pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167"
Jul 1 15:18:56 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:56.445Z caller=log.go:124 component=k8s_client_runtime level=debug func=Infof msg="GET https://XXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api/v1/pods?limit=500&resourceVersion=0 in 0 milliseconds"
Jul 1 15:18:56 ip-XXXXXXXXXXX prometheus: ts=2021-07-01T15:18:56.445Z caller=log.go:124 component=k8s_client_runtime level=error func=ErrorDepth msg="pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://XXXXXXXXXXXXXXXXXXXXXXX.eks.amazonaws.com/api/v1/pods?limit=500&resourceVersion=0\": unable to read authorization credentials file /etc/prometheus/token: open /etc/prometheus/token: no such file or directory"
</code></pre>
| <p>The logs you shared point to the problem:</p>
<pre><code>... unable to read authorization credentials file /etc/prometheus/token: open /etc/prometheus/token: no such file or directory"
</code></pre>
<p>The token file for in-cluster workloads is by default mounted at <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code>, but since you mentioned Prometheus is running "on an external node" (it is not entirely clear what you mean by this) that default path may or may not apply to your setup.</p>
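<p>As a sketch of one way to provide that file (the ServiceAccount and namespace names are assumptions; the account must be bound to a ClusterRole allowed to list/watch nodes, pods, services and endpoints):</p>
<pre><code># On a machine with kubectl access to the EKS cluster (pre-1.24 style token secret):
SECRET=$(kubectl -n monitoring get sa prometheus -o jsonpath='{.secrets[0].name}')
kubectl -n monitoring get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d > token

# Copy it to the external Prometheus host at the path referenced in prometheus.yml:
scp token prometheus-host:/etc/prometheus/token
</code></pre>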
|
<p>I followed this instruction to set up a cert-manager on my EKS cluster <a href="https://cert-manager.io/docs/tutorials/acme/ingress/" rel="nofollow noreferrer">https://cert-manager.io/docs/tutorials/acme/ingress/</a>.</p>
<p>here is my ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- '*.test.com'
secretName: test-tls
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: test-service
port:
number: 80
</code></pre>
<p>Here is the issuer. I just copied the config from the instruction</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: info@test.com
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>After deployment, I found the certificate ready state is false</p>
<pre><code>kubectl get certificate
NAME READY SECRET AGE
test-tls False test-tls 2m45s
</code></pre>
<p>Then I followed this to troubleshoot <a href="https://cert-manager.io/docs/faq/troubleshooting/" rel="nofollow noreferrer">https://cert-manager.io/docs/faq/troubleshooting/</a></p>
<p>I ran <code>kubectl describe certificaterequest <request name></code>, found error <code>Waiting on certificate issuance from order test-tls-xxx: "pending"</code></p>
<p>then ran <code>kubectl describe order test-tls-xxx</code>, found error
<code> Warning Solver 20m cert-manager Failed to determine a valid solver configuration for the set of domains on the Order: no configured challenge solvers can be used for this challenge</code>.</p>
<p>Any idea why it couldn't determine a valid solver? how do I test if solver is working?</p>
| <p>It's not working because you are using the staging URL in the <code>cluster issuer</code> to verify the domain.</p>
<p>Please try with the production URL.</p>
<p>Here is a simple, working example of ClusterIssuer and Ingress YAML (do note you were trying the staging API <a href="https://acme-staging-v02.api.letsencrypt.org/directory" rel="nofollow noreferrer">https://acme-staging-v02.api.letsencrypt.org/directory</a>; if possible use the production server address so it works properly with all browsers):</p>
<p>Example:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: cluster-issuer-name
namespace: development
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: harsh@example.com
privateKeySecretRef:
name: secret-name
solvers:
- http01:
ingress:
class: nginx-class-name
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx-class-name
cert-manager.io/cluster-issuer: cluster-issuer-name
nginx.ingress.kubernetes.io/rewrite-target: /
name: example-ingress
spec:
rules:
- host: sub.example.com
http:
paths:
- path: /api
backend:
serviceName: service-name
servicePort: 80
tls:
- hosts:
- sub.example.com
secretName: secret-name
</code></pre>
<p><strong>Note</strong>: When trying again, please delete the old objects like the Ingress and ClusterIssuer first.</p>
<p><strong>Issuer vs ClusterIssuer</strong></p>
<blockquote>
<p>An Issuer is a namespaced resource, and it is not possible to issue
certificates from an Issuer in a different namespace. This means you
will need to create an Issuer in each namespace you wish to obtain
Certificates in.</p>
<p>If you want to create a single Issuer that can be consumed in multiple
namespaces, you should consider creating a ClusterIssuer resource.
This is almost identical to the Issuer resource, however is
non-namespaced so it can be used to issue Certificates across all
namespaces.</p>
</blockquote>
<p>Ref : <a href="https://cert-manager.io/docs/concepts/issuer/" rel="nofollow noreferrer">https://cert-manager.io/docs/concepts/issuer/</a></p>
<p><strong>Wildcard cert</strong></p>
<p>Use whichever fits your requirement; if you are using an <strong>Issuer</strong> you can update the <strong>ingress</strong> <strong>annotation</strong> line like</p>
<pre><code>cert-manager.io/issuer: issuer-name
</code></pre>
<p>If you are trying to get a <strong>wildcard</strong> <code>*</code> certificate, you won't be able to get it using the <strong>HTTP-01</strong> challenge method</p>
<pre><code>solvers:
- http01:
ingress:
class: nginx-class-name
</code></pre>
<p>Instead, you have to use the <strong>DNS-01</strong> challenge method for a <strong>wildcard</strong> cert.</p>
<pre><code>solvers:
- dns01:
cloudDNS:
project: my-project
serviceAccountSecretRef:
name: prod-clouddns-svc-acct-secret
key: service-account.json
</code></pre>
<p>Read more at : <a href="https://cert-manager.io/docs/configuration/acme/dns01/" rel="nofollow noreferrer">https://cert-manager.io/docs/configuration/acme/dns01/</a></p>
<p>Ref article to get the wildcard cert : <a href="https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2</a></p>
|
<p>I'm not sure if there is a ready condition in DaemonSet. By that, I mean all pods owned by that DaemonSet are ready.</p>
<p>I'm aware of <code>kubectl wait</code>, but it seems can not check the readiness of DaemonSet.</p>
| <p>A simpler method, from <a href="https://starkandwayne.com/blog/silly-kubectl-trick-5-waiting-for-things-to-finish-up-2/" rel="nofollow noreferrer">https://starkandwayne.com/blog/silly-kubectl-trick-5-waiting-for-things-to-finish-up-2/</a>:</p>
<pre><code>kubectl rollout status daemonset \
rke2-ingress-nginx-controller \
-n kube-system \
--timeout 60s
</code></pre>
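<p>If you specifically want something closer to <code>kubectl wait</code>, a rough alternative (a sketch, not from the original blog post) is to compare the DaemonSet's status counters yourself:</p>
<pre><code># Poll until every scheduled pod of the DaemonSet reports Ready.
# DaemonSet name and namespace are placeholders - replace with your own.
while true; do
  desired=$(kubectl -n kube-system get daemonset rke2-ingress-nginx-controller \
    -o jsonpath='{.status.desiredNumberScheduled}')
  ready=$(kubectl -n kube-system get daemonset rke2-ingress-nginx-controller \
    -o jsonpath='{.status.numberReady}')
  [ -n "$desired" ] && [ "$desired" = "$ready" ] && break
  sleep 2
done
</code></pre>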
|
<p>I have a <strong>module</strong> definition as below:</p>
<p>===</p>
<p><strong>providers.tf</strong></p>
<pre><code>provider "kubernetes" {
#load_config_file = "false"
host = azurerm_kubernetes_cluster.aks.kube_config.0.host
username = azurerm_kubernetes_cluster.aks.kube_config.0.username
password = azurerm_kubernetes_cluster.aks.kube_config.0.password
client_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate)
client_key = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.client_key)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)
}
</code></pre>
<p><strong>outputs.tf</strong></p>
<pre><code>output "node_resource_group" {
value = azurerm_kubernetes_cluster.aks.node_resource_group
description = "The name of resource group where the AKS Nodes are created"
}
output "kubeConfig" {
value = azurerm_kubernetes_cluster.aks.kube_config_raw
description = "Kubeconfig of AKS Cluster"
}
output "host" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.host
}
output "client_key" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.client_key
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate
}
output "kube_config" {
value = azurerm_kubernetes_cluster.aks.kube_config_raw
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate
}
</code></pre>
<p><strong>main.tf</strong></p>
<pre><code>resource "azurerm_log_analytics_workspace" "law" {
name = "${var.tla}-la-${local.lookup_result}-${var.identifier}"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
sku = var.la_sku
retention_in_days = 30
}
resource "azurerm_kubernetes_cluster" "aks" {
name = "${var.tla}-aks-${local.lookup_result}-${var.identifier}"
location = data.azurerm_resource_group.rg.location
resource_group_name = data.azurerm_resource_group.rg.name
dns_prefix = var.dns_prefix
kubernetes_version = var.kubernetes_version
sku_tier = var.sku_tier
private_cluster_enabled = var.enable_private_cluster
#api_server_authorized_ip_ranges = ""
default_node_pool {
name = "syspool001"
orchestrator_version = var.orchestrator_version
availability_zones = var.agents_availability_zones
enable_auto_scaling = true
node_count = var.default_pool_node_count
max_count = var.default_pool_max_node_count
min_count = var.default_pool_min_node_count
max_pods = var.default_pool_max_pod_count
vm_size = var.agents_size
enable_node_public_ip = false
os_disk_size_gb = var.default_pool_os_disk_size_gb
type = "VirtualMachineScaleSets"
vnet_subnet_id = var.vnet_subnet_id
node_labels = var.agents_labels
tags = merge(local.tags, var.agents_tags)
}
network_profile {
network_plugin = var.network_plugin
network_policy = var.network_policy
dns_service_ip = var.net_profile_dns_service_ip
docker_bridge_cidr = var.net_profile_docker_bridge_cidr
service_cidr = var.net_profile_service_cidr
}
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
admin_group_object_ids = var.rbac_aad_admin_group_object_ids
}
}
identity {
type = "SystemAssigned"
}
addon_profile {
azure_policy {
enabled = true
}
http_application_routing {
enabled = false
}
oms_agent {
enabled = true
log_analytics_workspace_id = data.azurerm_log_analytics_workspace.log_analytics.id
}
}
tags = local.tags
lifecycle {
ignore_changes = [
default_node_pool
]
}
}
resource "azurerm_kubernetes_cluster_node_pool" "aksnp" {
lifecycle {
ignore_changes = [
node_count
]
}
for_each = var.additional_node_pools
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks.id
name = each.value.node_os == "Windows" ? substr(each.key, 0, 6) : substr(each.key, 0, 12)
node_count = each.value.node_count
vm_size = each.value.vm_size
availability_zones = each.value.zones
max_pods = each.value.max_pods
os_disk_size_gb = each.value.os_disk_size_gb
os_type = each.value.node_os
vnet_subnet_id = var.vnet_subnet_id
node_taints = each.value.taints
enable_auto_scaling = each.value.cluster_auto_scaling
min_count = each.value.cluster_auto_scaling_min_count
max_count = each.value.cluster_auto_scaling_max_count
}
resource "kubernetes_namespace" "aks-namespace" {
metadata {
name = var.namespace
}
}
</code></pre>
<p><strong>data.tf</strong></p>
<pre><code>data "azurerm_resource_group" "rg" {
name = var.resource_group_name
}
</code></pre>
<p><strong>lookups.tf</strong></p>
<pre><code>locals {
environment_lookup = {
dev = "d"
test = "t"
int = "i"
prod = "p"
prd = "p"
uat = "a"
poc = "d"
dr = "r"
lab = "l"
}
lookup_result = lookup(local.environment_lookup, var.environment)
tags = merge(
data.azurerm_resource_group.rg.tags, {
Directory = "tectcompany.com",
PrivateDNSZone = var.private_dns_zone,
Immutable = "False",
ManagedOS = "True",
}
)
}
data "azurerm_log_analytics_workspace" "log_analytics" {
name = "abc-az-lad2"
resource_group_name = "abc-dev-aae"
}
</code></pre>
<p><strong>variables.tf</strong></p>
<pre><code>variable "secondary_region" {
description = "Is this resource being deployed into the secondary (pair) region?"
default = false
type = bool
}
variable "override_log_analytics_workspace" {
description = "Override the vm log analytics workspace"
type = string
default = null
}
variable "override_log_analytics_resource_group_name" {
description = "Overrides the log analytics resource group name"
type = string
default = null
}
variable "environment" {
description = "The name of environment for the AKS Cluster"
type = string
default = "dev"
}
variable "identifier" {
description = "The identifier for the AKS Cluster"
type = number
default = "001"
}
variable "kubernetes_version" {
description = "Specify which Kubernetes release to use. The default used is the latest Kubernetes version available in the region"
type = string
default = "1.19.9"
}
variable "dns_prefix" {
description = "The dns prefix for the AKS Cluster"
type = string
default = "odessa-sandpit"
}
variable "orchestrator_version" {
description = "Specify which Kubernetes release to use for the orchestration layer. The default used is the latest Kubernetes version available in the region"
type = string
default = null
}
variable "agents_availability_zones" {
description = "(Optional) A list of Availability Zones across which the Node Pool should be spread. Changing this forces a new resource to be created."
type = list(string)
default = null
}
variable "agents_size" {
default = "Standard_D4s_v3"
description = "The default virtual machine size for the Kubernetes agents"
type = string
}
variable "vnet_subnet_id" {
description = "(Optional) The ID of a Subnet where the Kubernetes Node Pool should exist. Changing this forces a new resource to be created."
type = string
default = null
}
variable "agents_labels" {
description = "(Optional) A map of Kubernetes labels which should be applied to nodes in the Default Node Pool. Changing this forces a new resource to be created."
type = map(string)
default = {}
}
variable "agents_tags" {
description = "(Optional) A mapping of tags to assign to the Node Pool."
type = map(string)
default = {}
}
variable "net_profile_dns_service_ip" {
description = "(Optional) IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Changing this forces a new resource to be created."
type = string
default = null
}
variable "net_profile_docker_bridge_cidr" {
description = "(Optional) IP address (in CIDR notation) used as the Docker bridge IP address on nodes. Changing this forces a new resource to be created."
type = string
default = null
}
variable "net_profile_service_cidr" {
description = "(Optional) The Network Range used by the Kubernetes service. Changing this forces a new resource to be created."
type = string
default = null
}
variable "rbac_aad_admin_group_object_ids" {
description = "Object ID of groups with admin access."
type = list(string)
default = null
}
variable "network_policy" {
description = "(Optional) The Network Policy to be used by the network profile of Azure Kubernetes Cluster."
type = string
default = "azure"
}
variable "network_plugin" {
description = "(Optional) The Network Plugin to be used by the network profile of Azure Kubernetes Cluster."
type = string
default = "azure"
}
variable "enable_private_cluster" {
description = "(Optional) Set this variable to true if you want Azure Kubernetes Cluster to be private."
default = true
}
variable "default_pool_node_count" {
description = "(Optional) The initial node count for the default pool of AKS Cluster"
type = number
default = 3
}
variable "default_pool_max_node_count" {
description = "(Optional) The max node count for the default pool of AKS Cluster"
type = number
default = 6
}
variable "default_pool_min_node_count" {
description = "(Optional) The min node count for the default pool of AKS Cluster"
type = number
default = 3
}
variable "default_pool_max_pod_count" {
description = "(Optional) The max pod count for the default pool of AKS Cluster"
type = number
default = 13
}
variable "default_pool_os_disk_size_gb" {
description = "(Optional) The size of os disk in gb for the nodes from default pool of AKS Cluster"
type = string
default = "64"
}
variable "additional_node_pools" {
type = map(object({
node_count = number
max_pods = number
os_disk_size_gb = number
vm_size = string
zones = list(string)
node_os = string
taints = list(string)
cluster_auto_scaling = bool
cluster_auto_scaling_min_count = number
cluster_auto_scaling_max_count = number
}))
}
variable "sku_tier" {
description = "(Optional)The SKU Tier that should be used for this Kubernetes Cluster, possible values Free or Paid"
type = string
default = "Paid"
validation {
condition = contains(["Free", "Paid"], var.sku_tier)
error_message = "SKU_TIER can only be either Paid or Free."
}
}
variable "la_sku" {
description = "(Optional)The SKU Tier that should be used for Log Analytics. Multiple values are possible."
type = string
default = "PerGB2018"
validation {
condition = contains(["Free", "PerNode", "Premium", "Standard", "Standalone", "Unlimited", "CapacityReservation", "PerGB2018"], var.la_sku)
error_message = "SKU_TIER for Log Analytics can be can only be either of Free, PerNode, Premium, Standard, Standalone, Unlimited, CapacityReservation and PerGB2018(Default Value)."
}
}
variable "resource_group_name" {
description = "Resource Group for deploying AKS Cluster"
type = string
}
variable "private_dns_zone" {
description = "DNS prefix for AKS Cluster"
type = string
default = "testcluster"
}
variable "tla" {
description = "Three Level acronym - three letter abbreviation for application"
type = string
default = ""
validation {
condition = length(var.tla) == 3
error_message = "The TLA should be precisely three characters."
}
}
variable "namespace"{
description = "AKS Namespace"
type = string
}
</code></pre>
<p><strong>Finally</strong>, I am calling my module below to create the AKS cluster, LA, and Namespace for the AKS Cluster:</p>
<pre><code>provider "azurerm" {
features {}
#version = "~> 2.53.0"
}
module "aks-cluster1" {
source = "../../"
resource_group_name = "pst-aks-sandpit-dev-1"
tla = "pqr"
additional_node_pools = {
pool1 = {
node_count = "1"
max_pods = "110"
os_disk_size_gb = "30"
vm_size = "Standard_D8s_v3"
zones = ["1","2","3"]
node_os = "Linux"
taints = ["kubernetes.io/os=windows:NoSchedule"]
cluster_auto_scaling = true
cluster_auto_scaling_min_count = "2"
cluster_auto_scaling_max_count = "4"
}
}
namespace = "sample-ns"
}
</code></pre>
<p><strong>Problem:</strong>
I get a "no such host" error when terraform attempts to create resources in the cluster.</p>
<p>I think it is not able to connect to the cluster, but I could be wrong; I do not know how Terraform handles this internally.</p>
<pre><code>Error: Post "https://testdns-05885a32.145f13c0-25ce-43e4-ae46-8cbef448ecf3.privatelink.australiaeast.azmk8s.io:443/api/v1/namespaces": dial tcp: lookup testdns-05885a32.145f13c0-25ce-43e4-ae46-8cbef448ecf3.privatelink.australiaeast.azmk8s.io: no such host
</code></pre>
| <p>I'm one of the maintainers of the Terraform Kubernetes provider, and I see this particular issue pretty often. As a former devops person myself, I empathize with the struggle I keep seeing in this area. It's something I would really love to fix in the provider, if it were possible.</p>
<p>The issue you're facing is a limitation in Terraform core when <a href="https://www.terraform.io/docs/language/providers/configuration.html#provider-configuration-1" rel="noreferrer">passing an unknown value to a provider configuration block</a>. To quote their docs:</p>
<pre><code>You can use expressions in the values of these configuration arguments,
but can only reference values that are known before the configuration is applied.
</code></pre>
<p>When you make a change to the underlying infrastructure, such as the AKS cluster in this case, you're passing an unknown value into the Kubernetes provider configuration block, since the full scope of the cluster infrastructure is not known until after the change has been applied to the AKS cluster.</p>
<p>Although I did write the initial guide to show that <a href="https://github.com/hashicorp/terraform-provider-kubernetes/tree/614b5ed8ab2e962df855c0c8d4ca9c7ec50e88fe/_examples/aks#replacing-the-aks-cluster-and-re-creating-the-kubernetes--helm-resources" rel="noreferrer">it can be possible to work around some of these issues</a>, as you've found from experience, there are many edge cases that make it an unreliable and unintuitive process, to get the Kubernetes provider working alongside the underlying infrastructure. This is due to a long-standing <a href="https://github.com/hashicorp/terraform/issues/4149" rel="noreferrer">limitation in Terraform</a>, that can't be fixed in any provider, but we do have plans to smooth out the bumps a little by adding <a href="https://github.com/hashicorp/terraform-provider-kubernetes/pull/1141" rel="noreferrer">better error messages upfront</a>, which would have saved you some headache in this case.</p>
<p>To solve this particular type of problem, the cluster infrastructure needs to be kept in a state separate from the Kubernetes and Helm provider resources. I have an example here which builds an AKS cluster in one apply and then manages the Kubernetes/Helm resources in a second apply. You can use this approach to build the most robust configuration for your particular use case:</p>
<p><a href="https://github.com/hashicorp/terraform-provider-kubernetes/tree/e058e225e621f06e393bcb6407e7737fd43817bd/_examples/aks" rel="noreferrer">https://github.com/hashicorp/terraform-provider-kubernetes/tree/e058e225e621f06e393bcb6407e7737fd43817bd/_examples/aks</a></p>
<p>I know this two-apply approach is inconvenient, which is why we continue to try and accommodate users in single-apply scenarios, and scenarios which contain the Kubernetes and cluster resources in the same Terraform state. However, until upstream Terraform can add support for this, the single-apply workflow will remain buggy and less reliable than separating cluster infrastructure from Kubernetes resources.</p>
<p>Most cases can be worked around using <code>depends_on</code> (to ensure the cluster is created before the Kubernetes resource), or by moving the cluster infrastructure into a separate module and running <code>terraform state rm module.kubernetes-config</code> or <code>terraform apply -target=module.aks-cluster</code>. But I think encouraging this kind of work-around will cause more headaches in the long run, as it puts the user in charge of figuring out when to use special one-off apply commands, rather than setting up Terraform to behave reliably and predictably from the start. Plus it can have unintended side-effects, like <a href="https://github.com/hashicorp/terraform/issues/27728#issuecomment-840093064" rel="noreferrer">orphaning cloud resources</a>.</p>
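<p>To make that concrete, here is a minimal sketch of the targeted-apply workaround mentioned above (with all the caveats about one-off commands), reusing the example module names from this answer, <code>module.aks-cluster</code> and <code>module.kubernetes-config</code>:</p>
<pre class="lang-sh prettyprint-override"><code># First pass: create only the cluster module, so its outputs (host, CA cert,
# credentials) become known values before any Kubernetes provider resource is planned.
terraform apply -target=module.aks-cluster

# Second pass: a plain apply now sees a fully known provider configuration and
# creates the resources in module.kubernetes-config.
terraform apply
</code></pre>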
|
<p>I have been getting more and more into k8s and I am trying some stuff on a personal VPS I have.
I have created a Deployment of a POD that uses another Service internally.
I would love to validate that these two services are being somewhat load-balanced.</p>
<p>Here was my attempt to create this:
I have a simple service which I called <strong>metric-test</strong> which only has one endpoint that counts how many times it was called, logs it and returns this information.
For this I used the microframework <strong>jooby</strong> since I had familiarity with it and could get a fast start.</p>
<p>The code of this simple app can be <a href="https://github.com/Flavsditz/k8s-balance-test" rel="nofollow noreferrer">found on github</a></p>
<p>Also on the repository I have added the <strong>deployment.yaml</strong> file with which I use to push it to my local version of <strong>minikube</strong> (which simulates my k8s env).</p>
<p>Steps taken:</p>
<ol>
<li>Use this cmd to compile docker image into minikube repo: <code>eval $(minikube docker-env)</code></li>
<li>I now build the project's docker image with <code>docker build . -t metric-test1</code></li>
<li>I then apply the deployment file with <code>kubectl apply -f deployment.yaml</code> (the file is also in the GitHub repo linked above)</li>
</ol>
<p>This gives me a Service of type <strong>ClusterIP</strong> (which is what I want since it should not be accessible from outside) and 2 PODS containing the jooby code. Here is the <code>deployment.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: metric-test
labels:
run: metric-test
spec:
ports:
- port: 3000
protocol: TCP
selector:
run: metric-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metric-test
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 3
selector:
matchLabels:
run: metric-test
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
run: metric-test
spec:
containers:
- image: metric-test1
imagePullPolicy: Never
name: data-api
ports:
- containerPort: 3000
protocol: TCP
name: 3000tcp
restartPolicy: Always
schedulerName: default-scheduler
terminationGracePeriodSeconds: 30
</code></pre>
<p>Alright everything working! I just set up a port-forwarding so I can access the service:
<code>kubectl port-forward service/metric-test 3000:3000</code></p>
<p>And use this script to fire a lot of requests at the service:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
target=${1:-http://localhost:3000}
while true # loop forever, until ctrl+c pressed.
do
    for i in $(seq 100) # perform the inner command 100 times.
do
curl $target > /dev/null & # send out a curl request, the & indicates not to wait for the response.
done
wait # after 100 requests are sent out, wait for their processes to finish before the next iteration.
done
</code></pre>
<p>I am now seeing <strong>ALL</strong> of the requests being handled by only one of the pods while the other is just sitting there idle.</p>
<p>I went through the documentation (<a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">here</a>) but to be honest I came out actually with more questions than answers. That is why I tried to create a simplified scenario to test these things out.</p>
<p><strong>Questions</strong></p>
<ol>
<li>Can anyone help me out?</li>
<li>What am I missing? How can I achieve this load-balancing without exposing the service through the internet? I just want it to be available for other pods in the cluster.</li>
<li>(Bonus) The front-end services that are being served by an Ingress should all be properly load-balanced (right?)</li>
</ol>
<p>Note: As far as I understood, using <code>LoadBalancer</code> and <code>Ingress</code> you can actually achieve load-balancing; however, it also exposes the service to the outside.</p>
<h2><strong>EDIT 1</strong></h2>
<p>More info on the deployment:
Result of <code>kubectl get po</code></p>
<pre><code>NAME READY STATUS RESTARTS AGE
metric-test-f89bfbf86-ccrj8 1/1 Running 0 16h
metric-test-f89bfbf86-kl7qg 1/1 Running 0 16h
</code></pre>
<p>Here is a picture of the logs of both after running the curl script for a bit:
<a href="https://i.stack.imgur.com/fRoUv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fRoUv.png" alt="Logs of both PODS" /></a></p>
<h2><strong>EDIT FOR SOLUTION</strong></h2>
<p>As m303945 said, the load balancing doesn't work out when we are using the port-forwarding.</p>
<p>In order to validate this, and any future tests I might want to do I did the following:</p>
<p>I've ran the following command in my terminal:</p>
<pre><code>kubectl run -it --rm --restart=Never --image=alpine handytools -n ${1:-default} -- /bin/ash
</code></pre>
<p>which creates an alpine-based container and gives me shell access. At that point however, I cannot use curl since it is not installed. So for that I ran:</p>
<ul>
<li><code>apk update</code></li>
<li><code>apk add curl</code></li>
</ul>
<p>once I had that I've modified my previous bash script from above to run on this pod and try to hit the service I set up:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/ash
target=${1:-http://metric-test:3000}
for i in $(seq 5) # loop 5 times to generate 500 calls.
do
    for i in $(seq 100) # perform the inner command 100 times.
do
curl $target > /dev/null & # send out a curl request, the & indicates not to wait for the response.
done
wait # after 100 requests are sent out, wait for their processes to finish before the next iteration.
done
</code></pre>
<p>The modifications included pointing to the service name instead of <code>localhost</code>, so no port-forwarding is needed. Also, Alpine uses the <code>ash</code> shell and not <code>bash</code>. I also ran 500 requests instead of an infinite loop.</p>
<p>As you can see, running the above (which you can create using <code>vi</code> in alpine) I get a nice even load distribution!!!
<a href="https://i.stack.imgur.com/oCBst.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oCBst.png" alt="even distribution of calls" /></a></p>
<p>Thanks again to user m303945 for pointing me in the right direction.</p>
| <p>If I remember correctly, TCP load balancing was not working when using port forwarding. Try to run the script from a container inside k8s instead of doing port forwarding.</p>
|
<p>I want to connect to the Neo4j database using my creds. I am tunneling into a machine and once that is done, I open my browser at the port: <code>localhost:7474</code>.
I tried with both neo4j and bolt scheme to connect at the url:
<code>bolt://<node_ip>:7687</code> and <code>neo4j://<node_ip>:7687</code> but the connection times out.</p>
<p>I tried checking the logs but only found that bolt scheme is enabled:</p>
<pre><code>bash-4.2$ tail -f /logs/debug.log
2021-07-02 21:26:03.323+0000 WARN [o.n.k.a.p.GlobalProcedures] Failed to load `org.apache.commons.logging.impl.LogKitLogger` from plugin jar `/home/sandbox/neo/plugins/apoc-4.2.0.2-all.jar`: org/apache/log/Logger
2021-07-02 21:26:03.946+0000 INFO [c.n.m.g.GlobalMetricsExtension] Sending metrics to CSV file at /home/sandbox/neo/metrics
2021-07-02 21:26:03.973+0000 INFO [o.n.b.BoltServer] Bolt enabled on 0.0.0.0:7687.
2021-07-02 21:26:03.974+0000 INFO [o.n.b.BoltServer] Bolt (Routing) enabled on 0.0.0.0:7688.
2021-07-02 21:26:03.974+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Starting web server
2021-07-02 21:26:04.001+0000 INFO [o.n.s.m.ThirdPartyJAXRSModule] Mounted unmanaged extension [n10s.endpoint] at [/rdf]
2021-07-02 21:26:05.341+0000 INFO [c.n.s.e.EnterpriseNeoWebServer] Remote interface available at http://<node_ip>:7474/
2021-07-02 21:26:05.341+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Web server started.
2021-07-02 21:35:34.565+0000 INFO [c.n.c.c.c.l.s.Segments] [system/00000000] Pruning SegmentFile{path=raft.log.0, header=SegmentHeader{formatVersion=2, recordOffset=56, prevFileLastIndex=-1, segmentNumber=0, prevIndex=-1, prevTerm=-1}}
2021-07-02 21:35:46.079+0000 INFO [c.n.c.c.c.l.s.Segments] [neo4j/32f6599b] Pruning SegmentFile{path=raft.log.0, header=SegmentHeader{formatVersion=2, recordOffset=56, prevFileLastIndex=-1, segmentNumber=0, prevIndex=-1, prevTerm=-1}}
</code></pre>
<p>The query log is empty, as I could not execute any query:</p>
<pre><code>bash-4.2$ tail -f query.log
2021-07-02 21:25:52.510+0000 INFO Query started: id:1 - 1009 ms: 0 B - embedded-session neo4j - - call db.clearQueryCaches() - {} - runtime=pipelined - {}
2021-07-02 21:25:52.580+0000 INFO id:1 - 1080 ms: 112 B - embedded-session neo4j - - call db.clearQueryCaches() - {} - runtime=pipelined - {}
</code></pre>
<p>The other articles or answers that I read were mostly about misconfiguration: wrong ports but I don't think that is the case with me since I checked from <code>debug.log</code> file that my ports are alright.</p>
<p>FWIW, I am using 3 replicas for my Neo4j and right now, connecting to just one pod.</p>
<p>I am tunnelling both the ports:</p>
<pre><code>ssh -L 7687:$IP:7687 -L 7474:$IP:7474 domain_name.com -N```
</code></pre>
| <p>Perhaps you've already checked this, but if not, can you ensure that port <code>7687</code> is also forwarded. When I tunnelled via browser, my expectation was that <code>7474</code> would be sufficient, but it turned out that forwarding <code>7687</code> is also necessary.</p>
|
<p>I have a situation where I have an application for which I would like to run several sets of instances configured differently. I guess, from reading online, people usually have several versions of the same application in their clusters.</p>
<p>But let me describe the use case at a high level. The application is a component that takes as configuration a dataset and a set of instructions stating how to process the dataset. The dataset is actually a datasource.</p>
<p>So, in the same namespace, we would like, for instance, to process 2 datasets.</p>
<p>So it is like having two deployments for the same application. Each dataset has different requirements, hence we should be able to have deployment 1 scale to 10 instances and deployment 2 scale to 5 instances.</p>
<p>The thing is, it is the same application, and so far it is the same Helm chart and deployment definition.</p>
<p>The question is: what are the different options that exist to handle that at this time?</p>
<p>Examples, pointers, and articles are welcome.</p>
<p>So far I found the following article as the most promising:</p>
<p><a href="https://itnext.io/support-multiple-versions-of-a-service-in-kubernetes-using-helm-ce26adcb516d" rel="nofollow noreferrer">https://itnext.io/support-multiple-versions-of-a-service-in-kubernetes-using-helm-ce26adcb516d</a></p>
<p>Another thing I thought about is duplicating the deployment chart into 2 sub-charts whose folder names differ.</p>
| <p>Helm supports this pretty straightforwardly.</p>
<p>In Helm terminology, you would write a <em>chart</em> that describes how to install one copy of your application. This creates Kubernetes Deployments and other manifests; but it has templating that allows parts of the application to be filled in at deploy time. One copy of the installation is a <em>release</em>, but you can have multiple releases, in the same or different Kubernetes namespaces.</p>
<p>For example, say you have a YAML template for a Kubernetes deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-processor
spec:
replicas: {{ .Values.replicas }}
template:
spec:
containers:
- env:
- name: DATASET_NAME
value: {{ .Values.dataset }}
# and the other things that usually go into a container spec
</code></pre>
<p>When you go to deploy this, you can create a <em>values file</em>:</p>
<pre class="lang-yaml prettyprint-override"><code># a.yaml
replicas: 10
dataset: dataset-1
</code></pre>
<p>And you can deploy it:</p>
<pre class="lang-sh prettyprint-override"><code>helm install \
one \ # release name
. \ # chart location
-f a.yaml # additional values to use
</code></pre>
<p>If you use <code>kubectl get deployment</code>, you will see <code>one-processor</code>, and if you look at it in detail, you will see it has 10 replicas and its environment variable is set to <code>dataset-1</code>.</p>
<p>You can create a second deployment with different settings in the same namespace:</p>
<pre class="lang-yaml prettyprint-override"><code># b.yaml
replicas: 5
dataset: dataset-2
</code></pre>
<pre class="lang-sh prettyprint-override"><code>helm install two . -f b.yaml
</code></pre>
<p>Or in a different namespace:</p>
<pre class="lang-sh prettyprint-override"><code>helm install three . -n other-namespace -f c.yaml
</code></pre>
<p>It's theoretically possible to have a chart that only installs other subcharts (an <em>umbrella chart</em>), but there are some practical issues with it, most notably that Helm will want to install only one copy of a given chart no matter where it appears in the chart hierarchy. There are other higher-level tools like <a href="https://github.com/Praqma/helmsman" rel="noreferrer">Helmsman</a> and <a href="https://github.com/roboll/helmfile" rel="noreferrer">Helmfile</a> that would allow you to basically describe these multiple <code>helm install</code> commands in a single file.</p>
|
<p>I feel a bit like an idiot but I cannot seem to get the logging working on persistent volumes when using the KubernetesExecutor and the freshly released official Helm chart.</p>
<p>After creating a simple <a href="https://github.com/marclamberti/webinar-airflow-chart/blob/master/pv.yaml" rel="nofollow noreferrer">PV</a> and <a href="https://github.com/marclamberti/webinar-airflow-chart/blob/master/pvc.yaml" rel="nofollow noreferrer">PVC</a> manually, I changed the following on the bottom of values.yaml file:</p>
<pre><code>logs:
persistence:
# Enable persistent volume for storing logs
enabled: true
# Volume size for logs
size: 100Gi
# If using a custom storageClass, pass name here
storageClassName:
## the name of an existing PVC to use
existingClaim: airflow-logs
</code></pre>
<p>This process is partly described in the <a href="https://airflow.apache.org/docs/helm-chart/stable/manage-logs.html" rel="nofollow noreferrer">official Helm documentation</a>.
Still, the airflow-scheduler pod crashes due to permission errors, as it cannot write to the mounted logs folder: <a href="https://pastebin.com/ek3iH4EZ" rel="nofollow noreferrer">logs here</a>.</p>
<p>When the persistent logging is turned off, all is working, except for task logging as these are deleted when the worker-pod is deleted.</p>
<p>Any help towards a solution would be greatly appreciated!</p>
| <p>I assumed that using the standard persistent volume approach was the easiest (I am still a k8s novice); I did not expect that setting one up using an azure-file storage class (SC) would be this easy. These mounts can be set up with 777 rights from the SC yaml file. I am not sure if this is the sole cure, as I also set the uid/gid in the SC mount options. Anyhow, all seems to be working perfectly.</p>
<p>As a reference for others, here is my azure-file-sc.yaml:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: airflow-logs
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=50000
- gid=0
- mfsymlinks
- cache=strict
- actimeo=30
parameters:
skuName: Standard_LRS
</code></pre>
<p>My azure-file-pvc.yaml:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: airflow-logs
namespace: airflow
labels:
app: airflow-logs
spec:
accessModes:
- ReadWriteMany
storageClassName: airflow-logs
resources:
requests:
storage: 20Gi
</code></pre>
<p>The values.yaml is unchanged.</p>
<p>With this, the persistent logging works like a charm in Azure Kubernetes Service (AKS).</p>
<p>Hope this helps others!
Dennis</p>
|
<p>I am trying to set up the GitLab runner to connect to Artifactory and pull images. My yml file to set up the runner looks like below:</p>
<p>gitlabUrl: <a href="https://gitlab.bayer.com/" rel="nofollow noreferrer">https://gitlab.bayer.com/</a><br />
runnerRegistrationToken: r*******-<br />
rbac:<br />
create: false<br />
serviceAccountName: iz-sai-s<br />
serviceAccount.name: iz-sai-s</p>
<p>runners:<br />
privileged: true</p>
<p>resources:<br />
limits:<br />
memory: 32000Mi<br />
cpu: 4000m<br />
requests:<br />
memory: 32000Mi<br />
cpu: 2000m</p>
<p>What changes are needed to configure my runner properly to connect to the Artifactory URL and pull images from there?</p>
| <p>This is an example where my runner runs as a docker container with an image that has the Artifactory (jfrog) CLI configured in it, so in your case your runner should have the jfrog CLI configured as well. Next, it needs an API key to access Artifactory, which you'll generate in Artifactory and store in GitLab as in the picture below; the exact path is your repo - Settings - CI/CD - Variables.</p>
<p>First it authenticates, then it uploads:</p>
<pre><code>publish_job:
stage: publish_artifact
image: xxxxxplan/jfrog-cli
variables:
ARTIFACTORY_BASE_URL: https://xxxxx.com/artifactory
REPO_NAME: my-rep
ARTIFACT_NAME: my-artifact
script:
- jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
- jfrog rt u "target/demo-0.0.1-SNAPSHOT.jar" "$REPO_NAME"/"$ARTIFACT_NAME"_"$CI_PIPELINE_ID.jar" --recursive=false
</code></pre>
<p>Mark the answer as accepted if it fulfils your requirement.
Also, make sure to use proper indentation in your question; it is currently missing.
<a href="https://i.stack.imgur.com/xukah.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xukah.png" alt="enter image description here" /></a></p>
<p>Edit 1 : Adding the whole gitlab_ci.yml</p>
<pre><code>stages:
- build_unittest
- static_code_review
- publish_artifact
image: maven:3.6.1-jdk-8-alpine
cache:
paths:
- .m2/repository
- target/
variables:
MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
build_unittest_job:
stage: build_unittest
script: 'mvn clean install'
tags:
- my-docker
artifacts:
paths:
- target/*.jar
expire_in: 20 minutes
when: manual
code_review_job:
stage: static_code_review
variables:
SONARQUBE_BASE_URL: https://xxxxxx.com
script:
- mvn sonar:sonar -Dsonar.projectKey=xxxxxx -Dsonar.host.url=https://xxxxx -Dsonar.login=xxxxx
tags:
- my-docker
cache:
paths:
- /root/.sonar/cache
- target/
- .m2/repository
when: manual
publish_job:
stage: publish_artifact
image: plan/jfrog-cli
variables:
ARTIFACTORY_BASE_URL: https://xxxx/artifactory
REPO_NAME: maven
ARTIFACT_NAME: myart
script:
- jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
- jfrog rt u "target/demo-SNAPSHOT.jar" "$REPO_NAME"/"$ARTIFACT_NAME"_"$CI_PIPELINE_ID.jar" --recursive=false
tags:
- my-docker
when: manual
</code></pre>
|
<p>I am pulling my hair out here. I deployed my template, deleted it, and then I go to deploy it again without any changes and am getting the following error:</p>
<p>The Deployment "blog" is invalid: spec.template.metadata.labels: Invalid value: map[string]string(nil): <code>selector</code> does not match template <code>labels</code></p>
<p>My deployment yaml is below and as you can see the metadata and selector labels are both web, so I have no idea what the error is trying to tell me:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blog
labels:
app: web
spec:
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
replicas: 1
template:
spec:
containers:
- env:
image: test_blog:latest
imagePullPolicy: Always
name: blog
ports:
- containerPort: 8080
</code></pre>
| <p>You have two <code>template</code> blocks. I think that's the problem. Try this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: blog
labels:
app: web
spec:
selector:
matchLabels:
app: web
replicas: 1
template:
metadata:
labels:
app: web
spec:
containers:
- env:
image: test_blog:latest
imagePullPolicy: Always
name: blog
ports:
- containerPort: 8080
</code></pre>
|
<p>I am using the following kubectl command to get the Ingress host IP address after my Seldon Deployment is available.</p>
<pre><code>kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
</code></pre>
<p>I would like to run the same command from the Kubernetes Python API but cannot find any information in the documentation on how to do so.</p>
<p>Could you please help me out?</p>
<p>Thanks in advance.</p>
| <p>There's a separate client method for each REST operation on each Kubernetes object type (Services live on <code>CoreV1Api</code>). Once you have the returned object, you can navigate it like any other Python object.</p>
<pre class="lang-py prettyprint-override"><code>service = kubernetes.client.read_namespaced_service('istio-ingressgateway', 'istio-system')
print(service.status.load_balancer.ingress[0].ip)
# Note the capitalization ^^^^ of load_balancer, not loadBalancer
</code></pre>
<p>The <code>service</code> object is a <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Service.md" rel="nofollow noreferrer">V1Service</a> object. More generally, the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md" rel="nofollow noreferrer">API documentation</a> includes every method call and documentation for all of the request and return types.</p>
<p>A Service is a standard Kubernetes object. If you were dealing with some of the Istio-specific objects like VirtualServices, there is <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md" rel="nofollow noreferrer">a separate API for custom resources</a>.</p>
|
<p>I have been trying to set up Kubeflow on bare metal (on prem etc.) on a shared server, i.e. not my laptop. I followed the <a href="https://www.kubeflow.org/docs/started/k8s/kfctl-k8s-istio/" rel="nofollow noreferrer">Kubeflow Deployment with kfctl_k8s_istio</a> setup instructions, which all went well.</p>
<p>Under "Access Kubeflow Dashboard" it says</p>
<blockquote>
<p>Refer Ingress Gateway guide.</p>
</blockquote>
<p>which just leads to more questions I don't know the answer to, because I didn't write the setup, i.e.</p>
<ol>
<li>What is the ingress port for the UI? <code>kubectl get svc istio-ingressgateway -n istio-system</code> returns a huge list??</li>
<li>What do I do if the external IP is <code><none></code>? The server has an IP on the local network i.e. 192.168.1.69</li>
<li>I'm assuming <code>kfctl</code> didn't set up an external load balancer?</li>
<li>What's the container that hosts the web UI? What should the <code>Gateway</code> and <code>VirtualService</code> yaml look like?</li>
</ol>
<p>I want to use Kubeflow and have to learn how Istio works? Why?</p>
| <p>So, in the end I went with k3s, as it is a one-liner to set up:</p>
<pre class="lang-sh prettyprint-override"><code>curl -sfL https://get.k3s.io | sh -
</code></pre>
<p>and there are <a href="https://rancher.com/docs/k3s/latest/en/installation/install-options/" rel="nofollow noreferrer">many options</a> which you can set with environmental vars.</p>
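<p>For example (purely illustrative; the specific flags below are assumptions, not something your setup necessarily needs), options can be passed at install time through those environment variables:</p>
<pre class="lang-sh prettyprint-override"><code># K3S_KUBECONFIG_MODE makes the generated kubeconfig readable by non-root users;
# INSTALL_K3S_EXEC appends extra flags to the k3s server invocation.
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--disable traefik" sh -
</code></pre>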
<p>We were using GPUs and so needed to set up the <a href="https://github.com/NVIDIA/k8s-device-plugin" rel="nofollow noreferrer">NVIDIA device plugin for Kubernetes</a>.</p>
<p>We do all this now with Ansible scripts as we have a fleet of machines to manage.</p>
<p>Kubeflow is, like most Google projects, too bloated, and we're looking at <a href="https://dagster.io/" rel="nofollow noreferrer">Dagster</a> now as it's easy to develop with on your local setup.</p>
|
<p>I want to connect to the Neo4j database using my creds. I am tunneling into a machine and once that is done, I open my browser at the port: <code>localhost:7474</code>.
I tried with both neo4j and bolt scheme to connect at the url:
<code>bolt://<node_ip>:7687</code> and <code>neo4j://<node_ip>:7687</code> but the connection times out.</p>
<p>I tried checking the logs but only found that bolt scheme is enabled:</p>
<pre><code>bash-4.2$ tail -f /logs/debug.log
2021-07-02 21:26:03.323+0000 WARN [o.n.k.a.p.GlobalProcedures] Failed to load `org.apache.commons.logging.impl.LogKitLogger` from plugin jar `/home/sandbox/neo/plugins/apoc-4.2.0.2-all.jar`: org/apache/log/Logger
2021-07-02 21:26:03.946+0000 INFO [c.n.m.g.GlobalMetricsExtension] Sending metrics to CSV file at /home/sandbox/neo/metrics
2021-07-02 21:26:03.973+0000 INFO [o.n.b.BoltServer] Bolt enabled on 0.0.0.0:7687.
2021-07-02 21:26:03.974+0000 INFO [o.n.b.BoltServer] Bolt (Routing) enabled on 0.0.0.0:7688.
2021-07-02 21:26:03.974+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Starting web server
2021-07-02 21:26:04.001+0000 INFO [o.n.s.m.ThirdPartyJAXRSModule] Mounted unmanaged extension [n10s.endpoint] at [/rdf]
2021-07-02 21:26:05.341+0000 INFO [c.n.s.e.EnterpriseNeoWebServer] Remote interface available at http://<node_ip>:7474/
2021-07-02 21:26:05.341+0000 INFO [o.n.s.AbstractNeoWebServer$ServerComponentsLifecycleAdapter] Web server started.
2021-07-02 21:35:34.565+0000 INFO [c.n.c.c.c.l.s.Segments] [system/00000000] Pruning SegmentFile{path=raft.log.0, header=SegmentHeader{formatVersion=2, recordOffset=56, prevFileLastIndex=-1, segmentNumber=0, prevIndex=-1, prevTerm=-1}}
2021-07-02 21:35:46.079+0000 INFO [c.n.c.c.c.l.s.Segments] [neo4j/32f6599b] Pruning SegmentFile{path=raft.log.0, header=SegmentHeader{formatVersion=2, recordOffset=56, prevFileLastIndex=-1, segmentNumber=0, prevIndex=-1, prevTerm=-1}}
</code></pre>
<p>The query log is empty, as I could not execute any query:</p>
<pre><code>bash-4.2$ tail -f query.log
2021-07-02 21:25:52.510+0000 INFO Query started: id:1 - 1009 ms: 0 B - embedded-session neo4j - - call db.clearQueryCaches() - {} - runtime=pipelined - {}
2021-07-02 21:25:52.580+0000 INFO id:1 - 1080 ms: 112 B - embedded-session neo4j - - call db.clearQueryCaches() - {} - runtime=pipelined - {}
</code></pre>
<p>The other articles or answers that I read were mostly about misconfiguration: wrong ports but I don't think that is the case with me since I checked from <code>debug.log</code> file that my ports are alright.</p>
<p>FWIW, I am using 3 replicas for my Neo4j and right now, connecting to just one pod.</p>
<p>I am tunnelling both the ports:</p>
<pre><code>ssh -L 7687:$IP:7687 -L 7474:$IP:7474 domain_name.com -N```
</code></pre>
| <p>So, instead of providing localhost in the connection string, I made a silly mistake of writing down the actual IP and that was the reason for connection timeout.</p>
|
<p>Digging into the web, I didn't find an answer:
I want to know how I can use cert-manager with haproxy-ingress and Let's Encrypt.</p>
<p>Any documentation / guidelines?</p>
| <ol>
<li><p><strong>Deploy Certmanager with</strong>:</p>
<p><code>kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.yaml</code></p>
<p><code>kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.4.0/cert-manager.crds.yaml</code></p>
</li>
<li><p><strong>Deploy a ClusterIssuer</strong> (cluster issuers are namespace agnostic)</p>
</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>cat > prod-issuer.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: email@example.com
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
class: haproxy
EOF
</code></pre>
<p>Apply the cluster issuer with <code>kubectl apply -f prod-issuer.yaml</code></p>
<ol start="3">
<li><strong>Create an Ingress Resource</strong> (namespace-scoped; this example is using an Nginx backend service)</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>cat > nginx-ingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-router
namespace: production
annotations:
kubernetes.io/ingress.class: "haproxy"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
tls:
- hosts:
- "example.com"
secretName: nginx-tls-secret
rules:
- host: example.com
http:
paths:
- pathType: ImplementationSpecific
path: "/"
backend:
service:
name: nginx
port:
number: 80
- host: www.example.com
http:
paths:
- pathType: ImplementationSpecific
path: "/"
backend:
service:
name: nginx
port:
number: 80
EOF
</code></pre>
<p>Apply the ingress recourse with <code>kubectl apply -f nginx-ingress.yaml</code></p>
<p>The important piece of information here is that the Haproxy controller does NOT need the annotation <code>acme.cert-manager.io/http01-edit-in-place: "true"</code>
that the nginx-ingress controller does. It works as expected without any extra annotations.
When you apply the Ingress Resource to the cluster, the certificate will be issued in 1-2 minutes tops.
Use <code>kubectl describe certificate nginx-tls-secret -n production</code> to check the status of the certificate, and look at the events to see when the certificate has been issued.</p>
<p>For more debugging info in case something goes wrong, refer here: <a href="https://cert-manager.io/docs/faq/acme/" rel="nofollow noreferrer">https://cert-manager.io/docs/faq/acme/</a></p>
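<p>As a rough sketch of how to walk that debugging chain if the certificate stays pending (adjust the namespace to yours):</p>
<pre class="lang-sh prettyprint-override"><code># cert-manager creates a chain of resources per certificate; inspecting each one
# usually shows where issuance got stuck.
kubectl get certificate,certificaterequest,order,challenge -n production
kubectl describe challenge -n production
kubectl logs -n cert-manager deploy/cert-manager
</code></pre>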
|
<p>How do I <code>docker run</code> a private image from Container Registry in GCP using <code>--privileged</code>?
Running this locally works fine:</p>
<pre><code>docker run -it --privileged --entrypoint /bin/bash ${GKE_APP}
</code></pre>
<p>I followed <a href="https://stackoverflow.com/questions/59001647/kubernetespodoperator-privileged-security-context-in-airflow">this</a> but the error still persisted.</p>
<p>This is my DAG:</p>
<pre><code>def process_backup_data(data_to_backup):
secret_file = Secret('volume', '/etc/secrets', 'my-secret')
    task_env_vars = {'GOOGLE_APPLICATION_CREDENTIALS': "/etc/secrets/my-sa.json"}
security_context = {
'privileged': True,
'allow_privilege_escalation': True,
'capabilities': {'add': ['SYS_ADMIN']}
}
return KubernetesPodOperator(image="asia.gcr.io/my-image:latest",
env_vars=task_env_vars,
cmds=["sh", "run-vpn-script.sh"],
arguments=[data_to_backup],
labels={"project": "my-project"},
name="my-project-backup-datasets",
task_id=f"backup_{task_id}",
dag=dag,
secrets=[secret_file],
image_pull_policy='Always',
security_context=security_context # HERE
)
</code></pre>
| <p>Have you checked whether you are missing some sort of capability?</p>
<p>I see in the example you posted they also add this to the <code>security_context</code></p>
<pre class="lang-py prettyprint-override"><code>'capabilities': {'add': ['SYS_ADMIN']}
</code></pre>
|
<p>Is there any way we can migrate Redis Cluster data between 2 different Kubernetes clusters? How can we communicate between Redis stateful pods which are running on two different Kubernetes clusters?</p>
<p>We have two Redis Clusters which are running on two different Kubernetes clusters, X & Y. I want to transfer data from the redis-X to the redis-Y cluster. How can we establish a connection between the redis-X and redis-Y clusters so that we can migrate data?</p>
<p>Any help or hint is appreciated.</p>
| <p>There are two possible approaches to establish connection between clusters:</p>
<ol>
<li>Built-in solutions</li>
<li>3rd party solution</li>
</ol>
<h3>Built-in solutions</h3>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> - Exposes the service on each Node’s IP at a static port (the <code>NodePort</code>). A <code>ClusterIP</code> service, to which the <code>NodePort</code> service routes, is automatically created. You’ll be able to contact the <code>NodePort</code> service, from outside the cluster, by requesting <code><NodeIP></code>:<code><NodePort></code></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a> - Exposes the Service externally using a cloud provider’s load balancer. <code>NodePort</code> and <code>ClusterIP</code> services, to which the external load balancer routes, are automatically created.</li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>ingress</code></a> (both 1st and 3rd party implementations) - more flexible then previous two, but only works with HTTP/HTTPS.</li>
</ul>
<p>Read more: <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernets services</a>, <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX ingress</a></p>
<h3>3rd party solution</h3>
<p><a href="https://istio.io/latest/" rel="nofollow noreferrer">Istio</a> supports <a href="https://istio.io/latest/docs/ops/deployment/deployment-models/#multiple-clusters" rel="nofollow noreferrer">multi-cluster deployment</a> model. However, if you don't have service mesh deployed, doing so may be too much for single task use.</p>
<hr />
<p>Once you have connection established between clusters you can migrate Redis using <a href="https://redis.io/commands/migrate" rel="nofollow noreferrer"><code>MIGRATE</code></a> command, or <a href="https://github.com/vipshop/redis-migrate-tool" rel="nofollow noreferrer"><code>redis-migrate-tool</code></a> proposed in comments.</p>
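<p>For example, a rough sketch with <code>MIGRATE</code> once the clusters can reach each other (host names, port, timeout and key names below are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code># Run against a node of redis-X; it pushes the listed keys to redis-Y over whatever
# path you exposed above (NodePort, LoadBalancer, mesh). The timeout is in milliseconds.
redis-cli -h redis-x-node -p 6379 \
  MIGRATE redis-y-endpoint 6379 "" 0 5000 COPY REPLACE KEYS key1 key2 key3
</code></pre>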
|
<p>I want to limit the maximum number of restarts for a pod. I have restartPolicy set to Always and I don't want to change it to Never, but I want to limit the number of restarts. Is it possible to do so somehow?</p>
| <blockquote>
<p>I just don't want the service to restart like 900 or 1000 times. In few restarts, it should stop trying...</p>
</blockquote>
<p>Say you want to do something (via the program that runs inside the pod) which requires a couple of retries, but you also don't want it to retry an infinite number of times.</p>
<p>In such a case, I will recommend you to handle it from inside the program instead of handling it from the pod's restart.</p>
<p>Wrap your task inside a retries loop with a timeout/maxRetries:</p>
<pre><code>Algo:
 - Retries until timeout/maxRetries:
- Do your job
- end loop
</code></pre>
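<p>A minimal shell sketch of that idea (<code>do-your-job</code> is a placeholder for whatever your program actually runs):</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/sh
max_retries=5
for attempt in $(seq 1 "$max_retries"); do
  do-your-job && exit 0            # stop retrying as soon as the task succeeds
  echo "attempt $attempt failed" >&2
  sleep 10
done
exit 1                             # give up after max_retries attempts
</code></pre>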
<p><strong>NB:</strong> If you want your pod to terminate after completing the task, use k8s <code>Job</code> instead of Pod.</p>
|
<p>Can anybody explain to me how MetalLB gets IP addresses in a Kubernetes environment? I have installed a Kubernetes cluster on GCP compute engines. I have provided a range of internal IP addresses in the MetalLB ConfigMap.</p>
<pre><code>NAME STATUS INTERNAL-IP EXTERNAL-IP
instance-1 Ready 10.140.0.20 56.169.53.26
instance-2 Ready 10.140.0.21 57.11.92.241
instance-3 Ready 10.140.0.22 54.7.255.253
</code></pre>
<p>In my case the IP range which I gave in the CM was 10.140.0.30-10.140.0.40</p>
<p>It works as expected, but I want to know how MetalLB gets IP addresses.</p>
| <p>to summarize my comments:</p>
<p>MetalLB in layer 2 mode is deploying on each node a <em>Speaker</em> Pod which responds to ARP(IPv4) and NDP(IPv6) requests.</p>
<p>If you now connect to the IP which your Kubernetes Service with <code>type: LoadBalancer</code> got from the range you have defined in the MetalLB configuration, your client will send out an arp-request <code>who-has <IP-Service>, tell <IP-Client></code> to the network.
Since the <em>Speaker</em> Pods are listening for arp-requests, they'll answer with <code>reply <IP-Service> is-at <node-MAC-address-of-the-leader></code>.</p>
<p>This does not mean that your Pod is running on the node whose MAC address is returned; it only means the MetalLB "leader" is running on that node. Your request will then be passed on to kube-proxy, which knows where your Pod lives.</p>
<p>Also keep in mind:</p>
<blockquote>
<p>In that sense, layer 2 does not implement a load-balancer. Rather, it implements a failover mechanism so that a different node can take over should the current leader node fail for some reason.</p>
</blockquote>
<p><a href="https://metallb.universe.tf/concepts/layer2/#load-balancing-behavior" rel="nofollow noreferrer">https://metallb.universe.tf/concepts/layer2/#load-balancing-behavior</a></p>
|
<h3>Background</h3>
<p>We're hosting a proprietary web application, which we would like to customize in a rudimentary way to our CD (mainly colors). Since the application doesn't support this and I don't have access to the source, I'd like to create a custom css stylesheet and include it in the app by manipulating its ingress (= injecting css/stylesheets into the DOM).</p>
<h2><code>http_sub_module</code> of the Nginx Ingress Controller for Kubernetes</h2>
<p>The <a href="http://nginx.org/en/docs/http/ngx_http_sub_module.html" rel="nofollow noreferrer"><code>http_sub_module</code></a> seems similar to <a href="https://httpd.apache.org/docs/2.4/mod/mod_substitute.html" rel="nofollow noreferrer">Apaches <code>mod_substitute</code></a>. When I exec <code>nginx -V</code> inside the nginx ingress pod, there is <code>--with-http_sub_module</code> listed in the configure arguments, so it must be avaliable in the currently deployed 1.19.2 version.</p>
<p>I found <a href="https://serverfault.com/questions/648135/change-nginx-response-body-with-reverse-proxy">a similar question</a> using <code>subs_filter</code> (instead of <code>sub_filter</code>). It seems that <a href="https://docs.nginx.com/nginx/admin-guide/dynamic-modules/http-substitutions-filter/" rel="nofollow noreferrer">the one with s is from nginx plus</a>, also documented <a href="https://www.nginx.com/products/nginx/modules/http-substitutions-filter/" rel="nofollow noreferrer">here</a>. All linked examples there use <code>subs_</code>, where the <a href="http://nginx.org/en/docs/http/ngx_http_sub_module.html" rel="nofollow noreferrer">regular community documentation</a> uses <code>sub_filter</code>. Both of them worked without an error, I guess the plus one is an alias if no plus subscription is avaliable.</p>
<p>Since <code>sub_filter</code> doesn't seem to work, I tried both of them without success:</p>
<pre><code>annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/configuration-snippet: |
#subs_filter_once on;
#subs_filter_types 'text/html';
# subs_filter 'en' 'de';
# subs_filter '</body>' '<h1>TEST</h1></body>';
sub_filter '</body>' '<h1>TEST</h1></body>';
</code></pre>
<h2>Further things I've checked/tried</h2>
<p>The configuration snippet <em>is</em> applied. I looked into the nginx ingress pod, the server vhost block for my application has the <code>sub_filter</code>/<code>subs_filter</code> directive inside the <code>/</code> location block.</p>
<p>The default type for filtering is <code>sub_filter_types text/html</code>. The website returns <code>Content-Type: text/html</code> so it should match. To be sure, I also set <code>sub_filter_types "text/html";</code> in the snippet. And even <code>*</code> which matches any mime type according to the docs, both without any difference.</p>
<p>Doesn't this module work with <code>mod_proxy</code> used by k8s? I can't imagine that since the module is relatively old and I see no reason why it shouldn't work when nginx acts as reverse proxy, since it needs to have access to the HTML header/body too.</p>
| <p>Found out that the problem was something completely different: The application supports gzip, so this compression was enabled. But the <code>http_sub_module</code> doesn't support gzip, as <a href="https://github.com/yaoweibin/ngx_http_substitutions_filter_module/blob/master/README" rel="nofollow noreferrer">documented here</a>. It only works with plain text responses. If the response is compressed, the module just does nothing. This explains why it didn't work and why there was no error.</p>
<p>Luckily, the compression could be easily disabled without any modifications on the application by setting the following header:</p>
<pre><code>proxy_set_header Accept-Encoding "";
</code></pre>
<p>If this is added to the <code>nginx.ingress.kubernetes.io/configuration-snippet</code> section, the ingress accepts only plain text, and every application which is compliant with the specification will respect that.</p>
<p>But it doesn't mean that we can't use any compression. The gzip filter can be used from the ingress to the user; it's only not supported from the ingress to the application.</p>
<h2>Full working example annotation snippet</h2>
<pre><code> ingres:
# ...
annotations:
kubernetes.io/ingress.class: "nginx"
# http://nginx.org/en/docs/http/ngx_http_sub_module.html
nginx.ingress.kubernetes.io/configuration-snippet: |
sub_filter "</body>" "<style>body{ color: red !important; }</style></body>";
# The http_sub_module doesn't support compression from the ingress to the backend application
proxy_set_header Accept-Encoding "";
</code></pre>
<p>This would apply the style block before the closing body tag like this:</p>
<pre><code><style>body{ color: red !important; }</style></body></html>
</code></pre>
<p>For productive usage, we can include a link to a custom css file here, which overrides the applications design to our needs.</p>
|
<p>I'm confused by one behavior of a pod in k8s. I pulled and ran my alpine container and it works fine when I check with the <code>docker ps -a</code> command, but when I run it through k8s, the output of <code>kubectl get pod</code> shows Completed, although in the Dockerfile I typed
<code>CMD ["sleep", "3600"]</code>; it does not go to sleep in k8s. I can send it to sleep mode with <code>kubectl run myalpine --image=myalpine -- sleep infinity</code> and the pod works fine, but I don't want to use that command. I expect that when I clearly put the sleep command in the Dockerfile and build it, k8s should run it as well.
I would really appreciate it if someone could explain this behavior of the pod.</p>
| <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">The documentation</a> has some useful explanations:</p>
<blockquote>
<p>Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.</p>
</blockquote>
<p>Think of them as the unit of “deployment”. And excuse the abuse of terminology here, since <em>deployment</em> itself is a very well defined and precise concept in k8s as well, namely another type of workload. Pods are classified as a workload.</p>
<blockquote>
<p>A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.</p>
</blockquote>
<p>So that’s one part of the answer you’re looking for: a pod models a “logical host” where you can assemble fairly-fully-functional applications (“microservices” if you want, that do 1 basic thing, but do it well) out of one or several containers together.</p>
<p>I like to think of this as some form of composition but brought up to the application level via patterns such as <em>sidecar</em> and <em>adapter</em>. Similar to how you implement cross-cutting concerns in DDD, such as logging, by abstracting them and providing a generic implementation that ought to work the same when used by any class (“attached” to any main container) albeit some wiring work being required. It is precisely that PodSpec that wires these containers up.</p>
<blockquote>
<p>As well as application containers, a Pod can contain init containers that run during Pod startup. You can also inject ephemeral containers for debugging if your cluster offers this.</p>
</blockquote>
<p>Continues supporting the analogy I gave above to composition in OOP/DDD.</p>
<p>The official documentation continues adding more explanations along those lines.</p>
<blockquote>
<p>In terms of Docker concepts, a Pod is similar to a group of Docker containers with shared namespaces and shared filesystem volumes.</p>
</blockquote>
<p>Lastly, when you want to run a container in Kubernetes (necessarily through a pod) via <code>kubectl run</code>, be careful that you are not overriding the container entrypoint/command-args:</p>
<blockquote>
<p>When you override the default Entrypoint and Cmd, these rules apply:</p>
<ul>
<li>If you do not supply <code>command</code> or <code>args</code> for a Container, the defaults defined in the Docker image are used.</li>
<li>If you supply a <code>command</code> but no <code>args</code> for a Container, only the supplied <code>command</code> is used. The default EntryPoint and the default Cmd
defined in the Docker image are ignored.</li>
<li>If you supply only <code>args</code> for a Container, the default Entrypoint defined in the Docker image is run with the <code>args</code> that you supplied.</li>
<li>If you supply a <code>command</code> and <code>args</code>, the default Entrypoint and the default Cmd defined in the Docker image are ignored. Your <code>command</code> is
run with your <code>args</code>.</li>
</ul>
</blockquote>
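<p>A small sketch of those rules using the image name from your question (assuming <code>myalpine</code> has <code>CMD ["sleep", "3600"]</code> baked in):</p>
<pre class="lang-sh prettyprint-override"><code># No command/args supplied: the image's own ENTRYPOINT/CMD (sleep 3600) is used.
kubectl run myalpine --image=myalpine

# Args after "--" replace the image's CMD (any default ENTRYPOINT still runs with them).
kubectl run myalpine-args --image=myalpine -- sleep infinity

# --command turns the words after "--" into the container's command (Entrypoint),
# ignoring both the image's ENTRYPOINT and CMD.
kubectl run myalpine-cmd --image=myalpine --command -- sleep 3600
</code></pre>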
|
<p>How do you get around waiting on resources not yet created?</p>
<p>In script I get:</p>
<pre><code>kubectl wait --for=condition=ready --timeout=60s -n <some namespace> --all pods
error: no matching resources found
</code></pre>
| <p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As documented:</p>
<blockquote>
<p>Experimental: Wait for a specific condition on one or many resources.</p>
<p>The command takes multiple resources and waits until the specified
condition is seen in the Status field of every given resource.</p>
<p>Alternatively, the command can wait for the given set of resources to
be deleted by providing the "delete" keyword as the value to the --for
flag.</p>
<p>A successful message will be printed to stdout indicating when the
specified condition has been met. One can use -o option to change to
output destination.</p>
</blockquote>
<p>This command will not work for resources that haven't been created yet. @EmruzHossain has posted two valid points:</p>
<ul>
<li><p>Make sure you have provided a valid namespace.</p>
</li>
<li><p>First wait for the resource to get created, probably with a loop running <code>kubectl get</code> periodically. When the desired resource is found, break the loop. Then, run <code>kubectl wait</code> to wait for the resource to be ready. A rough sketch of this follows below.</p>
</li>
</ul>
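<p>A minimal sketch of that poll-then-wait idea (the namespace name is a placeholder):</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/sh
ns=some-namespace
# Poll until at least one pod exists in the namespace, then hand over to kubectl wait.
until kubectl get pods -n "$ns" --no-headers 2>/dev/null | grep -q .; do
  echo "no pods found in $ns yet, retrying..." >&2
  sleep 5
done
kubectl wait --for=condition=ready --timeout=60s -n "$ns" --all pods
</code></pre>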
<p>Also, there is this open thread: <a href="https://github.com/kubernetes/kubernetes/issues/83242" rel="nofollow noreferrer">kubectl wait for un-existed resource. #83242</a> which is still waiting (no pun intended) to be implemented.</p>
|
<h2>Problem</h2>
<p>Deploying OpenLibertyApplication rewrites the /config/server.xml specified in the container image</p>
<h3>What did you do?</h3>
<ol>
<li>Deployed OpenLibertyApplication with initial configuration in src/main/liberty/config/server.xml:</li>
</ol>
<pre><code> <httpEndpoint id="defaultHttpEndpointss"
host="*"
httpPort="8080"
httpsPort="9443" />
</code></pre>
<ol start="2">
<li>Changed port above to</li>
</ol>
<pre><code> <httpEndpoint id="defaultHttpEndpointssssssssss"
host="*"
httpPort="8080"
httpsPort="8083" />
</code></pre>
<ol start="3">
<li>Docker build copies this server.xml to /config/server.xml (Docker pull-ing the image to my local shows the correct updated /config/server.xml)</li>
<li>Deployed the new image using the OpenLibertyApplication supported by Open Liberty Operator from OperatorHub version 0.7.0 using oc apply -f openlibertyapplication.yaml:</li>
</ol>
<pre><code>apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
name: vehicle
labels:
app: vehicle
name: vehicle
service: vehicle
version: v2
spec:
applicationImage: quay.io/xxxx/xxxxx:latest
createAppDefinition: false
version: v2
service:
port: 8080
portName: http
expose: true
</code></pre>
<p>server.xml:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<server description="new server">
<featureManager>
<feature>servlet-4.0</feature>
<feature>springBoot-2.0</feature>
</featureManager>
<httpEndpoint id="defaultHttpEndpointssssssssss"
host="*"
httpPort="8080"
httpsPort="8083" />
<springBootApplication id="vehicle"
location="thin-vehicle-0.0.1-SNAPSHOT.jar"
name="vehicle" />
</server>
</code></pre>
<h3>What did you expect to see?</h3>
<p>/config/server.xml in the Openshift pod showing the correct updated port for httpEndpoint http: 8080, https: 8083</p>
<h3>What did you see instead? Under which circumstances?</h3>
<p>Instead, I'm seeing the old port values on the Pod /config/server.xml, http: 8080, https: 9443. Id remains the old value too: id="defaultHttpEndpointss"</p>
<h3>Environment</h3>
<ul>
<li>Kubernetes version information:</li>
</ul>
<pre><code>Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0+2817867", GitCommit:"2817867655bb7b68215b4e77873a8facf82bee06", GitTreeState:"clean", BuildDate:"2021-06-02T22:14:22Z", GoVersion:"go1.15.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<ul>
<li>OpenShift version information (if applicable):</li>
</ul>
<pre><code>Client Version: 4.7.0-202104250659.p0-95881af
Server Version: 4.7.16
Kubernetes Version: v1.20.0+2817867
</code></pre>
<h3>Thanks in advance!</h3>
| <p>The problem in this case was a cached image with the <code>latest</code> tag. If the application is created without specifying a policy, it is set to <code>pullPolicy: IfNotPresent</code> to optimize container creation. Since the OP first created the image with incorrect settings and then overwrote the image with the same tag, OCP used the cached version.</p>
<p>To ensure that you are using the correct image, either:</p>
<ul>
<li>change the tag name</li>
<li>set <code>pullPolicy: Always</code> in your application definition (see the sketch below)</li>
</ul>
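<p>For the second option, a rough sketch of patching the existing resource in place (assuming the operator's CRD is installed and using the resource name from the question):</p>
<pre class="lang-sh prettyprint-override"><code># Sets spec.pullPolicy so OCP always pulls the image instead of reusing a cached
# copy of the "latest" tag.
oc patch openlibertyapplication vehicle --type merge -p '{"spec":{"pullPolicy":"Always"}}'
</code></pre>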
|
<p>Create a one-liner (imperative way) command in Kubernetes:</p>
<pre><code>kubectl run test --image=ubuntu:latest --limits="cpu=200m,memory=512Mi" --requests="cpu=200m,memory=512Mi" --privileged=false
</code></pre>
<p>I also need to set <code>securityContext</code> in the one-liner; is it possible? Basically, I need to run the container as a user set via <code>securityContext/runAsUser</code>, not as the <code>root</code> account.</p>
<p>Yes, the declarative way works, but I'm looking for an imperative way.</p>
| <p>Posting this answer as a community wiki to highlight the fact that the solution was posted in the comments (a link to another answer):</p>
<blockquote>
<p>Hi, check this answer: <a href="https://stackoverflow.com/a/37621761/5747959">stackoverflow.com/a/37621761/5747959</a> you can solve this with --overrides – CLNRMN 2 days ago</p>
</blockquote>
<p>Feel free to edit/expand.</p>
<hr />
<p>Citing <code>$ kubectl run --help</code>:</p>
<blockquote>
<pre><code> --overrides='': An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.
</code></pre>
</blockquote>
<p>Following on <code>--overrides</code> example that have additionals field included and to be more specific to this particular question (<code>securityContext</code> wise):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -it ubuntu --rm --overrides='
{
"apiVersion": "v1",
"spec": {
"securityContext": {
"runAsNonRoot": true,
"runAsUser": 1000,
"runAsGroup": 1000,
"fsGroup": 1000
},
"containers": [
{
"name": "ubuntu",
"image": "ubuntu",
"stdin": true,
"stdinOnce": true,
"tty": true,
"securityContext": {
"allowPrivilegeEscalation": false
}
}
]
}
}
' --image=ubuntu --restart=Never -- bash
</code></pre>
<p>By above override you will use a <code>securityContext</code> to constrain your workload.</p>
<blockquote>
<p>Side notes!</p>
<ul>
<li>The example above is specific to running a <code>Pod</code> that you will exec into (<code>bash</code>)</li>
<li>The <code>--overrides</code> will override the other specified parameters outside of it (for example: <code>image</code>)</li>
</ul>
</blockquote>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">Kubernetes.io: Docs: Tasks: Configure pod container: Security context</a></em></li>
<li><em><a href="https://kubernetes.io/docs/concepts/security/pod-security-standards/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Security: Pod security standards</a></em></li>
</ul>
|
<p>I'm trying to make sense of container_memory_rss or container_memory_working_set_bytes with respect to <strong>node_memory_used</strong>, i.e. (<em>node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes</em>).</p>
<p>Here what I meant</p>
<p><strong>PROMQL 1:</strong></p>
<pre><code>sum(container_memory_rss) by (instance) / 1024 / 1024 / 1024
{instance="172.19.51.8:10250"} 7.537441253662109
</code></pre>
<p><strong>PROMQL 2:</strong></p>
<pre><code>sum(node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) by (instance) / 1024 / 1024 / 1024
{instance="172.19.51.8:9100"} 2.2688369750976562
</code></pre>
<p><strong>PROMQL 3:</strong></p>
<pre><code>sum(container_memory_working_set_bytes) by (instance) / 1024 / 1024 / 1024
{instance="172.19.51.8:10250"} 9.285114288330078
</code></pre>
<p><strong>PROMQL 4:</strong></p>
<pre><code>sum(node_memory_MemAvailable_bytes) by (instance) / 1024 / 1024 / 1024
{instance="172.19.51.8:9100"} 13.356605529785156
</code></pre>
<p>Since a Pod always runs on a Node, I fail to understand why <strong>container_memory_rss</strong> or <strong>container_memory_working_set_bytes</strong> is more than <strong>node_memory_used</strong>,</p>
<p>i.e. the values for PROMQL 1 and PROMQL 3 are way more than the value of PROMQL 2, i.e. the memory used by the node.</p>
<p>I don't know if I'm correct, but shouldn't the pod/container RSS always be <= the node's memory used (even if no default resource limit is set)?</p>
| <h2>tl;dr</h2>
<p>Use container name filter (<code>container!=""</code>) to exclude totals:</p>
<pre><code>sum(container_memory_rss{container!=""}) by (instance) / 2^30
</code></pre>
<h2>Explanation</h2>
<p>If you ran the first query grouping results by container name, you would have noticed that most of the usage comes from a container without a name:</p>
<pre><code>sort_desc(sum(container_memory_rss{instance="ip-192-168-104-46"}) by (name)) / 2^30
{} 3.9971389770507812
{name="prometheus"} 0.6084518432617188
{name="cluster-autoscaler"} 0.04230499267578125
</code></pre>
<p>Actually there are several entries without name but they all have an <code>id</code>:</p>
<pre><code>sort_desc(sum(container_memory_rss{instance="ip-192-168-104-46"}) by (id)) / 2^30
# these do not have a container name
{id="/"} 1.1889266967773438
{id="/kubepods"} 0.900482177734375
{id="/kubepods/burstable"} 0.6727218627929688
{id="/system.slice/docker.service"} 0.07495498657226562
{id="/system.slice/kubelet.service"} 0.060611724853515625
# and this is an example id of a real container which has a name label
{id="/kubepods/burstable/pod562495f9-afa6-427e-8435-016c2b500c74/e73975d90b66772e2e17ab14c473a2d058c0b9ffecc505739ee1a94032728a78"} 0.6027107238769531
</code></pre>
<p>These are accumulated values for each <code>cgroup</code>. <code>cAdvisor</code> takes the stats from <code>cgroups</code>, and if you look at them, you will find familiar entities:</p>
<pre><code># systemd-cgls -a
├─kubepods
│ ├─podc7dfcc4e-74fc-4469-ad56-c13fe5a9e7d8
│ │ ├─61a1a58e47968e7595f3458a6ded74f9088789a865bda2be431b8c8b07da1c6e
│ │ └─d47601e38a96076dd6e0205f57b0c365d4473cb6051eb0f0e995afb31143279b
│ ├─podfde9b8ca-ce80-4467-ba05-03f02a14d569
│ │ ├─9d3783df65085d54028e2303ccb2e143fecddfb85d7df4467996e82691892176
│ │ └─47702b7977bed65ddc86de92475be8f93b50b06ae8bd99bae9710f0b6f63d8f6
│ ├─burstable
│ │ ├─pod9ff634a5-fd2a-42e2-be27-7e1028e96b67
│ │ │ ├─5fa225aad10bdc1be372859697f53d5517ad28c565c6f1536501543a071cdefc
│ │ │ └─27402fed2e4bb650a6fc41ba073f9994a3fc24782ee366fb8b93a6fd939ba4d3
</code></pre>
<p>If you sum up all direct children of, say, <code>kubepods</code>, you will get the same value <code>kubepods</code> has. Because of these totals, <code>sum(container_memory_rss) by (instance)</code> reports several times the actual resource utilisation.</p>
<p>The solution is just to filter out any values without a container name. You can either do that when querying, as in the example at the top, or configure Prometheus with <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config" rel="nofollow noreferrer">relabel_config</a> to drop such metrics at the scrape time.</p>
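<p>As a sketch of the scrape-time approach (the job name and surrounding config are assumptions about a typical cAdvisor scrape job, not taken from the question), dropping series with an empty <code>container</code> label removes the per-cgroup totals:</p>
<pre class="lang-yaml prettyprint-override"><code>scrape_configs:
- job_name: kubernetes-cadvisor          # assumed job name
  # ...kubernetes_sd_configs, tls_config, etc. as in your existing setup...
  metric_relabel_configs:
  - source_labels: [container]
    regex: "^$"                          # empty container label = cgroup/pod total
    action: drop
</code></pre>
<p>Note that this also drops the pod-level aggregate series, so the query-time filter shown at the top may be the safer choice if you still want those.</p>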
|
<pre><code>apiVersion: apps/v1
kind: deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLebels:
app: mongodb
template:
metadata:
lebels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAE
valueFrom:
secretKeyref:
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyref:
name: mongodb-secret
key: mongo-root-password
</code></pre>
<p><a href="https://i.stack.imgur.com/0WWqj.jpg" rel="nofollow noreferrer">enter image description here</a></p>
| <ul>
<li>There are multiple typos in the YAML you have provided in the question.</li>
<li>I have corrected them as follows; use the following YAML and check:</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment #corrected typo deployment to Deployment
metadata:
name: mongodb-deployment
labels:
app: mongodb
spec:
replicas: 1
selector:
matchLabels: #corrected typo matchLebels to matchLabels
app: mongodb
template:
metadata:
labels: #corrected typo lebels to labels
app: mongodb
spec:
containers:
- name: mongodb
image: mongo
ports:
- containerPort: 27017
env:
        - name: MONGO_INITDB_ROOT_USERNAME #corrected typo MONGO_INITDB_ROOT_USERNAE to MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef: #corrected typo secretKeyref to secretKeyRef
name: mongodb-secret
key: mongo-root-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef: #corrected typo secretKeyref to secretKeyRef
name: mongodb-secret
key: mongo-root-password
</code></pre>
|
<p>I am deploying Elasticsearch 7.10.1 to AWS EKS Fargate but I got below error when running them:</p>
<pre><code>ERROR: [2] bootstrap checks failed
[1]: max number of threads [1024] for user [elasticsearch] is too low, increase to at least [4096]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
</code></pre>
<p>I found solutions for them: <a href="https://stackoverflow.com/questions/40683285/max-file-descriptors-4096-for-elasticsearch-process-is-too-low-increase-to-at">max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]</a> and <a href="https://stackoverflow.com/questions/51445846/elasticsearch-max-virtual-memory-areas-vm-max-map-count-65530-is-too-low-inc">Elasticsearch: Max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]</a>.</p>
<p>But both require a change on the host machine. I am using EKS Fargate, which means I don't have access to the Kubernetes cluster host machines. What else should I do to solve this issue?</p>
| <p>Your best bet is to set these via privileged init containers within your Elasticsearch pod/deployment/statefulset, for example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: elasticsearch-node
spec:
initContainers:
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
containers:
- name: elasticsearch-node
...
</code></pre>
<p>You could also do this through DaemonSets, although DaemonSets aren't very well suited to one-time tasks (it is possible to hack around this, though).
The init container approach, however, guarantees that your expected settings are in effect right before an Elasticsearch container is launched.</p>
|
<p>An existing Pod(<code>P</code>) is running 3 containers for API.</p>
<p>To scale Pod <code>P</code> horizontally,</p>
<p>Is it possible to add one(or n) more container to an existing Pod(running 3 containers)?</p>
<p>or</p>
<p>Is Pod replica set concept supposed to be applied for this scenario(to scale horizontally)?</p>
| <p>No, you don't use multi-container Pods for scaling. Pods with multiple containers are for cases where you need multiple daemons running together (on the same hardware) for a single "instance". That's pretty rare for new users so you almost certainly want 3 replicas of a Pod with one container.</p>
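<p>As a sketch of what that looks like (the names and image are illustrative), horizontal scaling is expressed through the Deployment's <code>replicas</code> field, or imperatively with <code>kubectl scale deployment api --replicas=3</code>, rather than by adding containers to the Pod:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                 # illustrative name
spec:
  replicas: 3               # three identical Pods, each with a single container
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: registry.example.com/api:1.0   # illustrative image
        ports:
        - containerPort: 8080
</code></pre>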
|
<p>I'm trying to figure out which tools from GKE stack I should apply to my use case which is a dynamic deployment of stateful application with dynamic HTTP endpoints.</p>
<p>Stateful in my case means that I don't want any replicas and load-balancing (because the app doesn't scale horizontally at all). I understand though that in k8s/gke nomenclature I'm still going to be using a 'load-balancer' even though it'll act as a reverse proxy and not actually balance any load.</p>
<p>The use case is as follows. I have some web app where I can request for a 'new instance' and in return I get a dynamically generated url (e.g. <a href="http://random-uuid-1.acme.io" rel="nofollow noreferrer">http://random-uuid-1.acme.io</a>). This domain should point to a newly spawned, single instance of a container (Pod) hosting some web application. Again, if I request another 'new instance', I'll get a <a href="http://random-uuid-2.acme.io" rel="nofollow noreferrer">http://random-uuid-2.acme.io</a> which will point to another (separate), newly spawned instance of the same app.</p>
<p>So far I figured out following setup. Every time I request a 'new instance' I do the following:</p>
<ul>
<li>create a new Pod with dynamic name <code>app-${uuid}</code> that exposes HTTP port</li>
<li>create a new Service with NodePort that "exposes" the Pod's HTTP port to the Cluster</li>
<li>create or update (if exists) Ingress by adding a new http rule where I specify that domain X should point at NodePort X</li>
</ul>
<p>The Ingress mentioned above uses a LoadBalancer as its controller, which is automated process in GKE.</p>
<p>A few issues that I've already encountered which you might be able to help me out with:</p>
<ol>
<li>While Pod and NodePort are separate resources per each app, Ingress is shared. I am thus not able to just create/delete a resource but I'm also forced to keep track of what has been added to the Ingress to be then able to append/delete from the yaml which is definitely not the way to do that (i.e. editing yamls). Instead I'd probably want to have something like an Ingress to monitor a specific namespace and create rules automatically based on Pod labels. Say I have 3 pods with labels, <code>app-1</code>, <code>app-2</code> and <code>app-3</code> and I want Ingress to automatically monitor all Pods in my namespace and create rules based on the labels of these pods (i.e. app-1.acme.io -> reverse proxy to Pod <code>app-1</code>).</li>
<li>Updating Ingress with a new HTTP rule takes around a minute to allow traffic into the Pod, until then I keep getting 404 even though both Ingress and LoadBalancer look as 'ready'. I can't figure out what I should watch/wait for to get a clear message that the Ingress Controller is ready for accepting traffic for newly spawned app.</li>
<li>What would be the good practice of managing such cluster where you can't strictly define Pods/Services manifests because you are creating them dynamically (with different names, endpoints or rules). You surely don't want to create bunch of yaml-s for every application you spawn to maintain. I would imagine something similar to <a href="https://github.com/hashicorp/consul-template" rel="nofollow noreferrer">consul templates</a> in case of Consul but for k8s?</li>
</ol>
| <p>I participated in a similar project and our decision was to use <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">Kubernetes Client Library</a> to spawn instances. The instances were managed by a simple web application, which took some customisation parameters, saved them into its database, then created an instance. Because of the database, there was no problem with keeping track of what have been created so far. By querying the database we were able to tell if such deployment was already created or update/delete any associated resources.</p>
<p>Each instance consisted of:</p>
<ul>
<li>a deployment (single or multi-replica, depending on the instance);</li>
<li>a <code>ClusterIp</code> service (no reason to reserve machine port with <code>NodePort</code>);</li>
<li>an ingress object for shared ingress controller;</li>
<li>and some shared configMaps.</li>
</ul>
<p>And we also used <a href="https://github.com/kubernetes-sigs/external-dns" rel="nofollow noreferrer">external DNS</a> and <a href="https://cert-manager.io/docs/installation/kubernetes/" rel="nofollow noreferrer">cert manager</a>, one to manage DNS records and another to issue SSL certificates for the ingress. With this setup it took about 10 minutes to deploy a new instance. The pod and ingress controller were ready in seconds, but we had to wait for the certificate, and its readiness depended on whether the issuer's DNS had picked up our new record. This problem might be avoided by using a wildcard domain, but we had to use many different domains so it wasn't an option in our case.</p>
<p>Other than that you might consider writing a <a href="https://helm.sh/" rel="nofollow noreferrer">Helm chart</a> and make use of <code>helm list</code> <a href="https://helm.sh/docs/helm/helm_list/" rel="nofollow noreferrer">command</a> to find existing instances and manage them. Though, this is a rather 'manual' solution. If you want this functionality to be a part of your application - better use a client library for Kubernetes.</p>
|
<p>I have a Kubernetes <code>Job</code>, <code>job.yaml</code> :</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
name: my-job
namespace: my-namespace
spec:
template:
spec:
containers:
- name: my-container
image: gcr.io/project-id/my-image:latest
command: ["sh", "run-vpn-script.sh", "/to/download/this"] # need to run this multiple times
securityContext:
privileged: true
allowPrivilegeEscalation: true
restartPolicy: Never
</code></pre>
<p>I need to run the <code>command</code> with different parameters. I have about 30 parameters to run. I'm not sure what the best solution is here. I'm thinking of creating containers in a loop to run all the parameters. How can I do this? I want to run the <code>commands</code> or containers all simultaneously.</p>
| <p>In simpler terms, you want to run multiple commands. The following is a sample format to execute multiple commands in a pod:</p>
<pre><code>command: ["/bin/bash","-c","touch /foo && echo 'here' && ls /"]
</code></pre>
<p>When we apply this logic to your requirement for two different operations:</p>
<pre><code> command: ["sh", "-c", "run-vpn-script.sh /to/download/this && run-vpn-script.sh /to/download/another"]
</code></pre>
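<p>Keep in mind that <code>&&</code> runs the invocations sequentially inside one container. If the roughly 30 invocations should really run simultaneously, one option (a sketch, assuming the script takes the target as an argument exactly as in your Job) is to declare several containers in the Job's pod template, one per parameter, or to generate one Job per parameter from a template:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
      - name: download-1
        image: gcr.io/project-id/my-image:latest
        command: ["sh", "run-vpn-script.sh", "/to/download/this"]
      - name: download-2
        image: gcr.io/project-id/my-image:latest
        command: ["sh", "run-vpn-script.sh", "/to/download/another"]
      # ...one container per parameter; all containers in the Pod start together
      restartPolicy: Never
</code></pre>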
|
<p>I want to install Selenium Grid 4 in Kubernetes. I am new to this. Could anyone share helm charts or manifests or installation steps or anything. I could not find anything.</p>
<p>Thanks.</p>
| <p>You can find the Selenium Docker Hub image at: <a href="https://hub.docker.com/layers/selenium/hub/4.0.0-alpha-6-20200730" rel="nofollow noreferrer">https://hub.docker.com/layers/selenium/hub/4.0.0-alpha-6-20200730</a></p>
<p>YAML example</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: selenium-hub
spec:
selector:
matchLabels:
app: selenium-hub
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: selenium-hub
spec:
containers:
- name: selenium-hub
image: selenium/hub:3.141.59-20200515
resources:
limits:
memory: "1000Mi"
cpu: "500m"
ports:
- containerPort: 4444
livenessProbe:
httpGet:
path: /wd/hub/status
port: 4444
initialDelaySeconds: 30
timeoutSeconds: 5
</code></pre>
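<p>A Service is also needed so nodes (and your tests) can reach the hub. A minimal sketch reusing the label and port from the Deployment above (the Service name is an assumption):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
spec:
  selector:
    app: selenium-hub
  ports:
  - name: hub
    port: 4444
    targetPort: 4444
</code></pre>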
<p>you can read more at : <a href="https://www.swtestacademy.com/selenium-kubernetes-scalable-parallel-tests/" rel="nofollow noreferrer">https://www.swtestacademy.com/selenium-kubernetes-scalable-parallel-tests/</a></p>
|
<p>The Kubernetes <code>vertical pod autoscaler</code> (which autoscales the memory and CPU resources of pods) necessitates a restart of the pod to be able to use the newly assigned resources, which might add a small window of unavailability.</p>
<p>My question is: if the pod's deployment is running a <code>rolling update</code>, would that ensure zero downtime and zero window of unavailability when the VPA recommendation is applied?</p>
<p>Thank you.</p>
| <p>From the <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p><strong>Rolling updates</strong> allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.</p>
</blockquote>
<p>In this documentation, you will find a very good rolling update overview:</p>
<blockquote>
<p>Rolling updates allow the following actions:</p>
<ul>
<li>Promote an application from one environment to another (via container image updates)</li>
<li>Rollback to previous versions</li>
<li>Continuous Integration and Continuous Delivery of applications with zero downtime</li>
</ul>
</blockquote>
<p>Here you can find information about <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling update deployment</a>:</p>
<blockquote>
<p>The Deployment updates Pods in a rolling update fashion when <code>.spec.strategy.type==RollingUpdate</code>. You can specify <code>maxUnavailable</code> and <code>maxSurge</code> to control the rolling update process.</p>
</blockquote>
<p>Additionally, you can add another 2 fields: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="nofollow noreferrer">Max Unavailable</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge" rel="nofollow noreferrer">Max Surge</a>.</p>
<blockquote>
<p><code>.spec.strategy.rollingUpdate.maxUnavailable</code> is an optional field that specifies the maximum number of Pods that can be unavailable during the update process.</p>
</blockquote>
<blockquote>
<p><code>.spec.strategy.rollingUpdate.maxSurge</code> is an optional field that specifies the maximum number of Pods that can be created over the desired number of Pods.</p>
</blockquote>
<p>Now it's up to you how you set these values. Here are some options (a minimal strategy snippet follows the list):</p>
<ul>
<li><strong>Deploy by adding a Pod, then remove an old one:</strong> <code>maxSurge</code> = 1, <code>maxUnavailable</code> = 0. With this configuration, Kubernetes will spin up an additional Pod, then stop an “old” one.</li>
<li><strong>Deploy by removing a Pod, then add a new one:</strong> <code>maxSurge</code> = 0, <code>maxUnavailable</code> = 1. In that case, Kubernetes will first stop a Pod before starting up a new one.</li>
<li><strong>Deploy by updating pods as fast as possible:</strong> <code>maxSurge</code> = 1, <code>maxUnavailable</code> = 1. This configuration drastically reduces the time needed to switch between application versions, but combines the cons of both the previous ones.</li>
</ul>
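<p>For reference, a minimal Deployment strategy snippet for the first option (a sketch; only the relevant fields are shown) looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # add one extra Pod first...
      maxUnavailable: 0  # ...and never drop below the desired replica count
</code></pre>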
<p>See also:</p>
<ul>
<li><a href="https://www.exoscale.com/syslog/kubernetes-zero-downtime-deployment/" rel="nofollow noreferrer">good article about zero downtime</a></li>
<li><a href="https://medium.com/platformer-blog/enable-rolling-updates-in-kubernetes-with-zero-downtime-31d7ec388c81" rel="nofollow noreferrer">guide with examples</a></li>
</ul>
|
<p>I have an application running in K3s and want to implement a network policy based on namespace only.
<br/><br/> Let's assume that currently I have three namespaces A, B and C. I want to allow egress (external calls to the internet from pods) for <code>namespace-A</code>, and egress calls from the remaining namespaces <code>[B & C]</code> should be blocked/denied.<br/><br/> Is this possible with Kubernetes network policy (and not Calico or Cilium)?</p>
| <p>You can define a <code>deny all egress</code> policy like described in <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-egress-traffic" rel="nofollow noreferrer">the documentation</a>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-egress
  namespace: your-namespace
spec:
podSelector: {}
policyTypes:
- Egress
</code></pre>
<p>This policy will be applied to all pods in the namespace because the pod selector is empty and that means (quoting <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource" rel="nofollow noreferrer">documentation</a>):</p>
<blockquote>
<p>An empty podSelector selects all pods in the namespace.</p>
</blockquote>
<p>The policy will block all egress traffic because it has <code>Egress</code> as policy type but it doesn't have any <code>egress</code> section.</p>
<p>If you want to allow in-cluster egress you might want to add an <code>egress</code> section in the policy, like for example:</p>
<pre><code> egress:
- to:
- namespaceSelector:
matchLabels:
networking/namespace: kube-system
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: TCP
port: 53
- protocol: UDP
port: 53
</code></pre>
<p>This allows all traffic from the namespace where you create the network policy to pods labeled with <code>k8s-app: kube-dns</code> in namespace <code>kube-system</code> on port 53 (TCP and UDP).</p>
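<p>Putting the two pieces together, a complete policy for one of the namespaces to be restricted could look like the sketch below (the namespace name is illustrative, and the <code>networking/namespace</code> label is the same assumption as in the snippet above, i.e. that <code>kube-system</code> is labelled accordingly; create one such policy in namespace B and one in namespace C):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-allow-dns
  namespace: namespace-b          # illustrative; repeat for namespace C
spec:
  podSelector: {}                 # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          networking/namespace: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
</code></pre>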
|
<p>I have a single node Kubernetes instance from <a href="https://microk8s.io/" rel="noreferrer">microk8s</a>. It is installed on a Ubuntu Server 20.20 running on Raspberry Pi 4.</p>
<p>I am trying to set up an ingress resource which I cannot get working.</p>
<p>When I run <code>kubectl describe ingress my-ingress</code> I get this output</p>
<pre><code>Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
</code></pre>
<p>From what I found on the internet, <code>default-http-backend</code> is something that should have been there by default, but when I run <code>kubectl get pods -n kube-system</code> I don't see it.</p>
<p><strong>Question:</strong> How do I enable <code>default-http-backend</code> in microk8s? Or, more generally, how do I make ingress work?</p>
<p>Note: Ingress and DNS addons are enabled.</p>
| <p>The <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules" rel="noreferrer">default backend</a> is a fallback for when the ingress controller cannot match any of the rules.</p>
<h2><code>apiVersion: networking.k8s.io/v1</code></h2>
<pre class="lang-yaml prettyprint-override"><code>spec:
defaultBackend:
service:
name: tea-svc
port:
number: 80
</code></pre>
<p>Here is a complete example using <code>v1</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
spec:
defaultBackend:
service:
name: tea-svc
port:
number: 80
rules:
- host: cafe.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: tea-svc
port:
number: 80
</code></pre>
<h2><code>apiVersion: networking.k8s.io/v1beta1</code></h2>
<p>Depending on the <code>apiVersion</code> of your yaml file, the default backend is specified in a different format. It looks like you are using the beta format.</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
backend:
serviceName: tea-svc
servicePort: 80
</code></pre>
<p>The NGINX Ingress Controller complains about <code>v1beta1</code>. So far it works in Kubernetes 1.21.2, but as the warning says, it soon won't:</p>
<pre><code>networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
</code></pre>
|
| <p>I am trying my hand at creating my own Kubernetes operator by following this <a href="https://sdk.operatorframework.io/docs/building-operators/golang/tutorial/" rel="nofollow noreferrer">link</a>. In the Reconcile function, I need to create multiple Deployments, each varying in some attributes (like the name, for example), and the configuration is huge. Instead of creating the deployment by using appsv1.Deployment and setting each attribute within it (like the code below), is there a way I can provide a YAML template file and read this file to obtain the appsv1.Deployment object?</p>
<pre><code> dep := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: customName,
Namespace: m.Namespace,
},
Spec: appsv1.DeploymentSpec{
Strategy: appsv1.DeploymentStrategy{
Type: "RollingUpdate",
},
... and so on
</code></pre>
<p>Instead of above, can something like below possible with some handy util functions?</p>
<pre><code>dep := utils.parseYaml(deploymentYamlFile)
</code></pre>
| <p>Yes, you can have your <code>Deployment</code> in a yaml file and read it in code.</p>
<p>Given this file structure:</p>
<pre><code>example.go
manifests/deployment.yaml
</code></pre>
<p>You would have something like this in <code>example.go</code>:</p>
<pre class="lang-golang prettyprint-override"><code>import (
"io/ioutil"
appsv1 "k8s.io/api/apps/v1"
"sigs.k8s.io/yaml"
)
func example() {
var bs []byte
{
bs, err = ioutil.ReadFile("manifests/deployment.yaml")
if err != nil {
// handle err
}
}
var deployment appsv1.Deployment
err = yaml.Unmarshal(bs, &deployment)
if err != nil {
// handle err
}
// now you have your deployment load into `deployment` var
}
</code></pre>
|
<p>I'm new to the Kubernetes and Docker world :)</p>
<p>I'm trying to deploy our application in Docker on Kubernetes, but I can't connect to the external MySQL database. My steps:</p>
<ol>
<li>Install Kubernetes with kubeadm on our new server.</li>
<li>Create a Docker image from our application with <code>mvn spring-boot:build-image</code>.</li>
<li>Create a deployment and service YAML to use the image.</li>
</ol>
<p>Deployment YAML:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
labels:
app: demo-app
name: demo-app
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: demo-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: demo-app
spec:
containers:
- image: demo/demo-app:0.1.05-SNAPSHOT
imagePullPolicy: IfNotPresent
name: demo-app-service
env:
- name: SPRING_DATASOURCE_URL
value: jdbc:mysql://mysqldatabase/DBDEV?serverTimezone=Europe/Budapest&useLegacyDatetimeCode=false
ports:
- containerPort: 4000
volumeMounts:
- name: uploads
mountPath: /uploads
- name: ssl-dir
mountPath: /ssl
volumes:
- name: ssl-dir
hostPath:
path: /var/www/dev.hu/backend/ssl
- name: uploads
hostPath:
path: /var/www/dev.hu/backend/uploads
restartPolicy: Always
</code></pre>
<p>Service YAML:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: demo-app
name: demo-app
namespace: default
spec:
ports:
- port: 4000
name: spring
protocol: TCP
targetPort: 4000
selector:
app: demo-app
sessionAffinity: None
type: LoadBalancer
</code></pre>
<p>4. Create an Endpoints and Service YAML to communicate with the outside:</p>
<pre><code>kind: Endpoints
apiVersion: v1
metadata:
name: mysqldatabase
subsets:
- addresses:
- ip: 10.10.0.42
ports:
- port: 3306
---
kind: Service
apiVersion: v1
metadata:
name: mysqldatabase
spec:
type: ClusterIP
ports:
- port: 3306
targetPort: 3306
</code></pre>
<p>But it's not working; when I look at the logs I see Spring can't connect to the database.</p>
<pre><code>Caused by: java.net.UnknownHostException: mysqldatabase
at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
at java.net.InetAddress.getAllByName(InetAddress.java:1193)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:132)
at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:63)
</code></pre>
<p>Thanks for any help.</p>
| <p>Hold on. You don't create Endpoints yourself. Endpoints are registered by Kubernetes when a Service has matching Pods. Right now, you have deployed your application and exposed it via a Service.</p>
<p>If you want to connect to your MySQL database via a Service, it needs to be deployed on Kubernetes as well. If it is not hosted on Kubernetes, you will need the hostname or the IP address of the database and must adapt your SPRING_DATASOURCE_URL accordingly!</p>
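<p>As a sketch of that last suggestion (the address and port are simply the ones from the question's Endpoints object, used here for illustration), the datasource URL would point straight at the external host instead of at a Service name:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: SPRING_DATASOURCE_URL
  # point directly at the external MySQL host (illustrative address)
  value: jdbc:mysql://10.10.0.42:3306/DBDEV?serverTimezone=Europe/Budapest&useLegacyDatetimeCode=false
</code></pre>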
|