<p>I have a Kubernetes cluster in GKE. Inside the cluster there is a private Docker registry service. A certificate for this service is generated inside a Docker image by running:</p>
<pre class="lang-sh prettyprint-override"><code>openssl req -x509 -newkey rsa:4096 -days 365 -nodes -sha256 -keyout /certs/tls.key -out /certs/tls.crt -subj "/CN=registry-proxy"
</code></pre>
<p>When any pod that uses an image from this private registry tries to pull the image I get an error:</p>
<pre><code>x509: certificate signed by unknown authority
</code></pre>
<p>Is there any way to put the self-signed certificate on all GKE nodes in the cluster to resolve the problem?</p>
<p>UPDATE</p>
<p>I put the CA certificate on each GKE node as @ArmandoCuevas recommended in his comment, but it doesn't help; I'm still getting the error <code>x509: certificate signed by unknown authority</code>. What could cause it? How are Docker images pulled into pods?</p>
| <p>TL;DR: Almost all modifications you need to perform to nodes in GKE, such as adding trusted root certificates, can be done using DaemonSets.</p>
<p>There is an amazing guide that the user <a href="https://github.com/samos123" rel="nofollow noreferrer">Sam Stoelinga</a> created about how to do exactly what you are looking for. It can be found <a href="https://github.com/samos123/gke-node-customizations#2-deploy-daemonset-to-insert-ca-on-gke-nodes" rel="nofollow noreferrer">here</a>.</p>
<p>In summary, Sam proposes distributing the cert to each of the nodes using a DaemonSet. Since a DaemonSet guarantees that there is always one pod on each node, that pod is in charge of adding your certificate to the node so you can pull your images from the private registry.</p>
<p>Modifying the nodes on your own will normally not work, because if GKE needs to recreate a node your change will be lost. Using a DaemonSet guarantees that even if the node is recreated, the DaemonSet will schedule one of these "overhaul" pods on it, so the cert will always be in place.</p>
<p>The steps that Sam proposed are very simple:</p>
<ol>
<li>Create an image with the commands needed to distribute the certificate. This step differs depending on whether you are using Ubuntu nodes or COS nodes. If you are using COS nodes, the commands your pod needs to run are outlined by Sam:</li>
</ol>
<pre><code>cp /myCA.pem /mnt/etc/ssl/certs
nsenter --target 1 --mount update-ca-certificates
nsenter --target 1 --mount systemctl restart docker
</code></pre>
<p>If you are running Ubuntu nodes, the commands are outlined in several posts in Ask Ubuntu like <a href="https://askubuntu.com/questions/73287/how-do-i-install-a-root-certificate">this one</a>.</p>
<ol start="2">
<li><p>Move the image to a container registry that your nodes currently have access like <a href="https://cloud.google.com/container-registry" rel="nofollow noreferrer">GCR</a>.</p>
</li>
<li><p>Deploy the DaemonSet using the image that adds the cert, as privileged with the NET_ADMIN capability (needed to perform this operation), and mount the host's <code>/etc</code> folder inside the pod. Sam added an example of doing this that may help, but you can use your own definition.
If you face problems while trying to deploy a privileged pod, it may be worth taking a look at the GKE documentation about <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/pod-security-policies" rel="nofollow noreferrer">Using PodSecurityPolicies</a>.</p>
</li>
</ol>
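<p>A minimal sketch of such a DaemonSet (the image name and namespace are placeholders; the image is assumed to be the one built in step 1, running the commands above):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ca-cert-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: ca-cert-installer
  template:
    metadata:
      labels:
        app: ca-cert-installer
    spec:
      hostPID: true                  # lets nsenter reach the node's mount namespace
      containers:
      - name: installer
        image: gcr.io/YOUR_PROJECT/ca-cert-installer:latest  # placeholder
        securityContext:
          privileged: true
        volumeMounts:
        - name: etc
          mountPath: /mnt/etc        # host /etc mounted so the cert can be copied
      volumes:
      - name: etc
        hostPath:
          path: /etc
</code></pre>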
|
<p>I am trying to set up <code>MongoDB</code> on a specific node on <code>GKE</code>. I upgraded my current cluster using</p>
<pre><code> gcloud container node-pools create mongo --cluster drogo --num-nodes 1 --region us-east1
</code></pre>
<p>It created a new node-pool in the cluster with the name <code>mongo</code>. I have the following <code>Deployment</code>, <code>Volume</code> and <code>Service</code> file.</p>
<h1>Deployment.YAML</h1>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
spec:
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
nodeSelector:
cloud.google.com/gke-nodepool: mongo
containers:
- name: mongo
image: mongo:3.6.17-xenial
ports:
- containerPort: 27017
volumeMounts:
- name: storage
mountPath: /data/db
volumes:
- name: storage
persistentVolumeClaim:
claimName: mongo-pvc
</code></pre>
<p>In the above file, I provided the <code>nodeSelector</code> cloud.google.com/gke-nodepool: mongo (as mentioned <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools#deploy" rel="nofollow noreferrer">here</a>)</p>
<h1>Volume.YAML</h1>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongo-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<h1>Service.YAML</h1>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongo
spec:
selector:
app: mongo
ports:
- port: 27017
targetPort: 27017
</code></pre>
<p>When I check my pod error it says:</p>
<pre><code>Can’t schedule this Pod because no nodes match the nodeSelector.
Cannot schedule pods: node(s) had volume node affinity conflict.
</code></pre>
<p>What am I doing wrong here? Any help would be appreciated.</p>
<p>This is how I set up <code>Kubernetes Label</code> into a node pool</p>
<p><a href="https://i.stack.imgur.com/mBvx5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mBvx5.png" alt="enter image description here" /></a></p>
<p>I ran <code> gcloud container node-pools describe node --cluster=drogo --region us-east1</code> and in the response I can see:</p>
<pre><code>autoscaling: {}
config:
diskSizeGb: 20
diskType: pd-standard
imageType: COS_CONTAINERD
labels:
mongo: mongo
machineType: e2-medium
</code></pre>
| <p>There were two issues with the deployment setup:</p>
<p>The <code>nodeSelector</code> specified in the Deployment manifest was using the wrong label:</p>
<pre><code> nodeSelector:
cloud.google.com/gke-nodepool: mongo
</code></pre>
<p>Whereas the created node pool had the label pair <code>mongo: mongo</code>. Either changing the node label to <code>cloud.google.com/gke-nodepool: mongo</code> or changing the deployment's <code>nodeSelector</code> to <code>mongo: mongo</code> works.</p>
<p>The second issue was that the available <code>persistentVolume</code> lived in AZ <code>us-east1-c</code> whereas the available node was in <code>us-east1-d</code>, so the Kubernetes scheduler couldn't satisfy both the requested <code>nodeSelector</code> and the <code>PersistentVolume</code>'s zone at the same time. The issue was solved by adding a new node with the same configuration in AZ <code>us-east1-c</code>.</p>
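<p>With the node pool labeled <code>mongo: mongo</code> as shown in the question, the matching pod spec would look like this (a sketch of the relevant fragment only):</p>
<pre><code>spec:
  template:
    spec:
      nodeSelector:
        mongo: mongo   # matches the custom label set on the node pool
</code></pre>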
|
<h1>Suspending & resuming my virtual machine breaks the k8s deployment</h1>
<p>When I suspend with <code>minikube stop</code> and then resume the Virtual Machine with <code>minikube start</code>, Minikube re-deploys my app from scratch.</p>
<p>I see this behaviour with Minikube versions newer than <em>v1.18</em> (I run <em>v1.19</em>).</p>
<hr />
<h1>The setup:</h1>
<ul>
<li>The <em>Kubernetes</em> deployment mounts a volume with the source code from my host machine, via <code>hostPath</code>.</li>
<li>I also have an <code>initContainers</code> container that sets up the application.</li>
</ul>
<p>Since the new <em>"redeploy behaviour on resume"</em> happens, the init-container <strong>breaks my deploy <em>if</em> I have work-in-progress code on my host machine</strong>.</p>
<h1>The issue:</h1>
<p>Now, if I have temporary/non-perfectly-running code, I cannot suspend the machine with unfinished work anymore, between working days; because every time I resume it <strong>Minikube will try to deploy again but with broken code</strong> and fail with an <code>Init:CrashLoopBackOff</code>.</p>
<h1>The workaround:</h1>
<p>For now, each time I resume the machine I need to</p>
<ol>
<li>stash/commit my WIP code</li>
<li>checkout the last commit with working deployment</li>
<li>run the deployment & wait for it to complete the initialization (minutes...)</li>
<li>checkout/stash-pop the code saved at point <em>1)</em>.</li>
</ol>
<p>I can survive, but the workflow is terrible.</p>
<h1>How do I restore the old behaviour?</h1>
<ul>
<li><em>How do I make my deploys stay untouched when suspending the VM, as expected, instead of being re-deployed every time I resume?</em></li>
</ul>
| <p>In short, there are two ways to achieve what you want:</p>
<ul>
<li>On current versions of <code>minikube</code> and <code>virtualbox</code> you can use the <code>save state</code> option in VirtualBox directly.</li>
<li>Move the initContainer's code to a separate <code>job</code>.</li>
</ul>
<p><strong>More details about minikube + VirtualBox</strong></p>
<p>I have an environment with minikube version 1.20, VirtualBox 6.1.22 (from yesterday) and macOS, with the minikube driver set to <code>virtualbox</code>.</p>
<p>First, <code>minikube</code> + <code>VirtualBox</code>, under different scenarios:</p>
<p><code>minikube stop</code> does following:</p>
<blockquote>
<p>Stops a local Kubernetes cluster. This command stops the underlying VM
or container, but keeps user data intact.</p>
</blockquote>
<p>What happens is that the virtual machine where minikube is set up stops entirely. <code>minikube start</code> starts the VM and all processes in it. All containers are started as well, so if your pod has an init-container, it will run first anyway.</p>
<p><code>minikube pause</code> pauses all processes and frees up CPU resources, while memory stays allocated. <code>minikube unpause</code> brings back CPU resources and continues executing containers from the state in which they were paused.</p>
<p>Based on the different scenarios I tried with <code>minikube</code>, this is not achievable using only minikube commands. To avoid any state loss in your <code>minikube</code> environment due to a host restart or the necessity to stop a VM to free up resources, you can use the <code>save state</code> feature in VirtualBox, from the UI or the CLI. Below is what it does:</p>
<blockquote>
<p><strong>VBoxManage controlvm savestate</strong>: Saves the current state of the VM to disk and then stops the VM.</p>
</blockquote>
<p>VirtualBox creates something like a snapshot, with all memory content included in it. When the virtual machine is restarted, VirtualBox restores the VM to the state it was in when it was saved.</p>
<p>One more note: if this works the same way in v1.20, it is expected behaviour and not a bug (otherwise it would have been fixed already).</p>
<p><strong>Init-container and jobs</strong></p>
<p>You may consider moving your init-container's code to a separate <code>job</code> so you avoid unintended pod restarts breaking the deployment of your main container. It's also advised to make init-container code idempotent.
Here's a quote from official documentation:</p>
<blockquote>
<p>Because init containers can be restarted, retried, or re-executed,
init container code should be idempotent. In particular, code that
writes to files on <code>EmptyDirs</code> should be prepared for the possibility
that an output file already exists.</p>
</blockquote>
<p>This can be achieved by using <code>jobs</code> in Kubernetes, which you can run manually when you need to.
To keep the workflow safe, you can add a check to the deployment pod's init container, either for <code>Job completion</code> or for a specific file on a data volume, to indicate that the code is working and the deployment will be fine.</p>
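<p>A minimal sketch of such a Job, assuming the former init-container logic is packaged in an image called <code>myapp-setup</code> (a placeholder):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-setup
spec:
  backoffLimit: 2              # don't retry forever if the WIP code is broken
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: setup
        image: myapp-setup:latest   # placeholder image holding the setup code
</code></pre>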
<p>Links with more information:</p>
<ul>
<li><p><a href="https://www.virtualbox.org/manual/ch08.html#vboxmanage-controlvm" rel="nofollow noreferrer">VirtualBox <code>save state</code></a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior" rel="nofollow noreferrer">initContainers</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">kubernetes jobs</a></p>
</li>
</ul>
|
<p>I have an OpenShift/Tekton pipeline which in <code>Task A</code> deploys an application to a test environment. In <code>Task B</code>, the application's test suite is run. If all tests pass, then the application is deployed to another environment in <code>Task C</code>.</p>
<p>The problem is that <code>Task A</code>'s pod is deployed (with <code>oc apply -f <deployment></code>), and before the pod is actually ready to receive requests, <code>Task B</code> starts running the test suite, and all the tests fail (because it can't reach the endpoints defined in the test cases).</p>
<p>Is there an elegant way to make sure the pod from <code>Task A</code> is ready to receive requests before starting the execution of <code>Task B</code>? One solution I have seen is to do HTTP GET requests against a health endpoint until you get an HTTP 200 response. We have quite a few applications which do not expose HTTP endpoints, so is there a more "generic" way to make sure the pod is ready? Can I, for example, query for a specific record in <code>Task A</code>'s log? There is a log statement which always shows when the pod is ready to receive traffic.</p>
<p>If it's of any interest, here is the definition for <code>Task A</code>:</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: create-and-deploy-integration-server
spec:
params:
- name: release-name
type: string
description: The release name of the application
- name: image-name
type: string
description: The name of the container image to use
- name: namespace
type: string
description: The namespace (OCP project) the application resides in
- name: commit-id
type: string
description: The commit hash identifier for the current HEAD commit
steps:
- name: create-is-manifest
image: image-registry.openshift-image-registry.svc:5000/openshift/origin-cli
script: |
echo "Creating IntegrationServer manifest"
cat << EOF > integrationserver.yaml
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
name: $(params.release-name)
namespace: $(params.namespace)
spec:
license:
accept: true
license: L-KSBM-BZWEAT
use: CloudPakForIntegrationNonProduction
pod:
containers:
runtime:
image: image-registry.openshift-image-registry.svc:5000/$(params.namespace)/$(params.image-name)-$(params.commit-id)
imagePullPolicy: Always
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 300m
memory: 300Mi
adminServerSecure: true
router:
timeout: 120s
designerFlowsOperationMode: disabled
service:
endpointType: http
version: 11.0.0.11-r2
replicas: 1
barURL: ''
EOF
- name: deploy-is-manifest
image: image-registry.openshift-image-registry.svc:5000/openshift/origin-cli
script: |
echo "Applying IntegrationServer manifest to OpenShift cluster"
oc apply -f integrationserver.yaml
</code></pre>
| <p>After your <em>step</em> that does <code>oc apply</code>, you can add a step to wait for the deployment to become "available". This is for <code>kubectl</code> but should work the same way with <code>oc</code>:</p>
<pre><code>kubectl wait --for=condition=available --timeout=60s deployment/myapp
</code></pre>
<p>Then the next Task can depend on this Task with <code>runAfter: ["create-and-deploy-integration-server"]</code></p>
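<p>As a sketch, such a step could be appended to the Task above (using <code>$(params.release-name)</code> as the deployment name is an assumption; the IntegrationServer operator may name the underlying deployment differently):</p>
<pre><code>- name: wait-for-deployment
  image: image-registry.openshift-image-registry.svc:5000/openshift/origin-cli
  script: |
    echo "Waiting for the deployment to become available"
    oc wait --for=condition=available --timeout=120s \
      deployment/$(params.release-name) -n $(params.namespace)
</code></pre>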
|
<p>I use <code>kubectl exec -it</code> for logging into a single Kubernetes pod.</p>
<p>Is there any way to log in to multiple pods in a cluster, at the same time with a single command (just like <code>csshX</code>)?</p>
| <p>There is a plugin that could help you with this. It's called <a href="https://github.com/predatorray/kubectl-tmux-exec" rel="nofollow noreferrer">kubectl-tmux-exec</a>:</p>
<blockquote>
<p>A kubectl plugin that controls multiple pods simultaneously using
<a href="https://github.com/tmux/tmux" rel="nofollow noreferrer">Tmux</a>.</p>
<p>It is to <code>kubectl exec</code> as <code>csshX</code> or <code>pssh</code> is to <code>ssh</code>.</p>
<p>Instead of exec bash into multiple pod's containers one-at-a-time,
like <code>kubectl exec pod{N} /bin/bash</code>.</p>
<p>You can now use</p>
<pre><code>kubectl tmux-exec -l app=nginx /bin/bash
</code></pre>
</blockquote>
<p>All necessary details regarding <a href="https://github.com/predatorray/kubectl-tmux-exec#installation" rel="nofollow noreferrer">Installation</a> and <a href="https://github.com/predatorray/kubectl-tmux-exec#usage" rel="nofollow noreferrer">Usage</a> can be found in the linked docs.</p>
|
<p>We are running an application on k8s cluster on GKE.</p>
<p>We are using an <code>nginx-ingress-controller</code> as an external load-balancer Service which is reachable on, let's say, <a href="https://12.345.67.98" rel="nofollow noreferrer">https://12.345.67.98</a>. We are facing the issue that when we directly access the load balancer at the mentioned URL, we get a certificate warning because a self-signed "Kubernetes Ingress Controller Fake Certificate" is used.</p>
<p>We only have Ingress objects that are mapping our domains (e.g. app.our-company.com) to Kubernetes services. The nginx load-balancer is a Kubernetes Service with load-balancer type. For SSL/TLS for our domains <code>cert-manager</code> is used. There is no issue when accessing these domains, only when we directly access the load-balancer on the IP-Address.</p>
<p>Is there a way to somehow replace the certificate on the load-balancer, so it's not using the default fake certificate anymore?</p>
| <p>You need to define a secret with your CA signed certificate and the private key. These will have to be base64 encoded in the secret. You will then use this secret in the "tls" section of the ingress manifest.</p>
<p>Ensure that the certificate chain (cert -> intermediate CA -> root CA) is established in the certificate above.</p>
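<p>For illustration, such a secret might look like the following (the name <code>tls-secret</code> matches the Ingress below; the data values are placeholders for the base64-encoded PEM contents):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: BASE64_ENCODED_CERTIFICATE_CHAIN   # cert -> intermediate CA -> root CA
  tls.key: BASE64_ENCODED_PRIVATE_KEY
</code></pre>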
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: nginx-test
spec:
tls:
- hosts:
- foo.bar.com
# This assumes tls-secret exists and the SSL
# certificate contains a CN for foo.bar.com
secretName: tls-secret
rules:
- host: foo.bar.com
http:
paths:
- path: /
backend:
# This assumes http-svc exists and routes to healthy endpoints
serviceName: http-svc
servicePort: 80
</code></pre>
<p><a href="https://kubernetes.github.io/ingress-nginx/examples/tls-termination/" rel="nofollow noreferrer">References</a></p>
|
<p>I have a cluster of Artemis in Kubernetes with 3 master/slave groups:</p>
<pre><code>activemq-artemis-master-0 1/1 Running
activemq-artemis-master-1 1/1 Running
activemq-artemis-master-2 1/1 Running
activemq-artemis-slave-0 0/1 Running
activemq-artemis-slave-1 0/1 Running
activemq-artemis-slave-2 0/1 Running
</code></pre>
<p>I am using a Spring Boot JmsListener to consume messages sent to a wildcard queue, as follows.</p>
<pre class="lang-java prettyprint-override"><code> @Component
@Log4j2
public class QueueListener {
@Autowired
private ListenerControl listenerControl;
@JmsListener(id = "queueListener0", destination = "QUEUE.service2.*.*.*.notification")
public void add(String message, @Header("sentBy") String sentBy, @Header("sentFrom") String sentFrom, @Header("sentAt") Long sentAt) throws InterruptedException {
log.info("---QUEUE[notification]: message={}, sentBy={}, sentFrom={}, sentAt={}",
message, sentBy, sentFrom, sentAt);
TimeUnit.MILLISECONDS.sleep(listenerControl.getDuration());
}
}
</code></pre>
<p>There were 20 messages sent to the queue, and master-1 was the delivering node. When 5 messages had been consumed, I killed the master-1 node to simulate a crash. I saw slave-1 start running and then yield back to master-1 after Kubernetes respawned it. The listener threw a <code>JMSException</code> that the connection was lost, and it tried to reconnect. Then I saw it successfully connect to master-0 (I saw the queue created and the consumer count > 0). However, the queue on master-0 was empty, while the same queue on master-1 still had 15 messages and no consumer attached to it. I waited for a while but the 15 messages were never delivered. I am not sure why redistribution did not kick in.</p>
<p>The attributes of the wildcard queue on master-1 looked like this when it came back online after the crash (I manually replaced the value of the <strong>accessToken</strong> field since it contains sensitive info):</p>
<pre><code>Attribute Value
Acknowledge attempts 0
Address QUEUE.service2.*.*.*.notification
Configuration managed false
Consumer count 0
Consumers before dispatch 0
Dead letter address DLQ
Delay before dispatch -1
Delivering count 0
Delivering size 0
Durable true
Durable delivering count 0
Durable delivering size 0
Durable message count 15
Durable persistent size 47705
Durable scheduled count 0
Durable scheduled size 0
Enabled true
Exclusive false
Expiry address ExpiryQueue
Filter
First message age 523996
First message as json [{"JMSType":"service2","address":"QUEUE.service2.tech-drive2.188100000059.thai.notification","messageID":68026,"sentAt":1621957145988,"accessToken":"REMOVED","type":3,"priority":4,"userID":"ID:56c7b509-bd6f-11eb-a348-de0dacf99072","_AMQ_GROUP_ID":"tech-drive2-188100000059-thai","sentBy":"user@email.com","durable":true,"JMSReplyTo":"queue://QUEUE.service2.tech-drive2.188100000059.thai.notification","__AMQ_CID":"e4469ea3-bd62-11eb-a348-de0dacf99072","sentFrom":"service2","originalDestination":"QUEUE.service2.tech-drive2.188100000059.thai.notification","_AMQ_ROUTING_TYPE":1,"JMSCorrelationID":"c329c733-1170-440a-9080-992a009d87a9","expiration":0,"timestamp":1621957145988}]
First message timestamp 1621957145988
Group buckets -1
Group count 0
Group first key
Group rebalance false
Group rebalance pause dispatch false
Id 119
Last value false
Last value key
Max consumers -1
Message count 15
Messages acknowledged 0
Messages added 15
Messages expired 0
Messages killed 0
Name QUEUE.service2.*.*.*.notification
Object Name org.apache.activemq.artemis:broker="activemq-artemis-master-1",component=addresses,address="QUEUE.service2.\*.\*.\*.notification",subcomponent=queues,routing-type="anycast",queue="QUEUE.service2.\*.\*.\*.notification"
Paused false
Persistent size 47705
Prepared transaction message count 0
Purge on no consumers false
Retroactive resource false
Ring size -1
Routing type ANYCAST
Scheduled count 0
Scheduled size 0
Temporary false
User f7bcdaed-8c0c-4bb5-ad03-ec06382cb557
</code></pre>
<p>The attributes of the wildcard queue on master-0 are like this:</p>
<pre><code>Attribute Value
Acknowledge attempts 0
Address QUEUE.service2.*.*.*.notification
Configuration managed false
Consumer count 3
Consumers before dispatch 0
Dead letter address DLQ
Delay before dispatch -1
Delivering count 0
Delivering size 0
Durable true
Durable delivering count 0
Durable delivering size 0
Durable message count 0
Durable persistent size 0
Durable scheduled count 0
Durable scheduled size 0
Enabled true
Exclusive false
Expiry address ExpiryQueue
Filter
First message age
First message as json [{}]
First message timestamp
Group buckets -1
Group count 0
Group first key
Group rebalance false
Group rebalance pause dispatch false
Id 119
Last value false
Last value key
Max consumers -1
Message count 0
Messages acknowledged 0
Messages added 0
Messages expired 0
Messages killed 0
Name QUEUE.service2.*.*.*.notification
Object Name org.apache.activemq.artemis:broker="activemq-artemis-master-0",component=addresses,address="QUEUE.service2.\*.\*.\*.notification",subcomponent=queues,routing-type="anycast",queue="QUEUE.service2.\*.\*.\*.notification"
Paused false
Persistent size 0
Prepared transaction message count 0
Purge on no consumers false
Retroactive resource false
Ring size -1
Routing type ANYCAST
Scheduled count 0
Scheduled size 0
Temporary false
User f7bcdaed-8c0c-4bb5-ad03-ec06382cb557
</code></pre>
<p>The Artemis version in use is 2.17.0. Here is my cluster config in master-0's <code>broker.xml</code>. The configs are the same for the other brokers, except the <code>connector-ref</code> is changed to match each broker:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0"?>
<configuration xmlns="urn:activemq" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
<name>activemq-artemis-master-0</name>
<persistence-enabled>true</persistence-enabled>
<journal-type>ASYNCIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<journal-buffer-timeout>100000</journal-buffer-timeout>
<journal-max-io>4096</journal-max-io>
<disk-scan-period>5000</disk-scan-period>
<max-disk-usage>90</max-disk-usage>
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>2244000</page-sync-timeout>
<acceptors>
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redistribution-delay>60000</redistribution-delay>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ"/>
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue"/>
</anycast>
</address>
</addresses>
<cluster-user>clusterUser</cluster-user>
<cluster-password>aShortclusterPassword</cluster-password>
<connectors>
<connector name="activemq-artemis-master-0">tcp://activemq-artemis-master-0.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-0">tcp://activemq-artemis-slave-0.activemq-artemis-slave.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-master-1">tcp://activemq-artemis-master-1.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-1">tcp://activemq-artemis-slave-1.activemq-artemis-slave.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-master-2">tcp://activemq-artemis-master-2.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-2">tcp://activemq-artemis-slave-2.activemq-artemis-slave.svc.cluster.local:61616</connector>
</connectors>
<cluster-connections>
<cluster-connection name="activemq-artemis">
<connector-ref>activemq-artemis-master-0</connector-ref>
<retry-interval>500</retry-interval>
<retry-interval-multiplier>1.1</retry-interval-multiplier>
<max-retry-interval>5000</max-retry-interval>
<initial-connect-attempts>-1</initial-connect-attempts>
<reconnect-attempts>-1</reconnect-attempts>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<!-- scale-down>true</scale-down -->
<static-connectors>
<connector-ref>activemq-artemis-master-0</connector-ref>
<connector-ref>activemq-artemis-slave-0</connector-ref>
<connector-ref>activemq-artemis-master-1</connector-ref>
<connector-ref>activemq-artemis-slave-1</connector-ref>
<connector-ref>activemq-artemis-master-2</connector-ref>
<connector-ref>activemq-artemis-slave-2</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master>
<group-name>activemq-artemis-0</group-name>
<quorum-vote-wait>12</quorum-vote-wait>
<vote-on-replication-failure>true</vote-on-replication-failure>
<!--we need this for auto failback-->
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>
</core>
<core xmlns="urn:activemq:core">
<jmx-management-enabled>true</jmx-management-enabled>
</core>
</configuration>
</code></pre>
<p>From another answer on Stack Overflow, I understand that my topology for high availability is redundant, and I am planning to remove the slaves. However, I don't think the slaves are the cause of message redistribution not working. Is there a config I am missing to handle an Artemis node crash?</p>
<p>Update 1:
As Justin suggested, I tried a cluster of 2 Artemis nodes without HA.</p>
<pre><code>activemq-artemis-master-0 1/1 Running 0 27m
activemq-artemis-master-1 1/1 Running 0 74s
</code></pre>
<p>The following is the <code>broker.xml</code> of the 2 Artemis nodes. The only difference between them is the node name and <code>journal-buffer-timeout</code>:</p>
<pre><code><?xml version="1.0"?>
<configuration xmlns="urn:activemq" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
<name>activemq-artemis-master-0</name>
<persistence-enabled>true</persistence-enabled>
<journal-type>ASYNCIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<journal-buffer-timeout>100000</journal-buffer-timeout>
<journal-max-io>4096</journal-max-io>
<disk-scan-period>5000</disk-scan-period>
<max-disk-usage>90</max-disk-usage>
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>2244000</page-sync-timeout>
<acceptors>
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<cluster-user>ClusterUser</cluster-user>
<cluster-password>longClusterPassword</cluster-password>
<connectors>
<connector name="activemq-artemis-master-0">tcp://activemq-artemis-master-0.activemq-artemis-master.ncp-stack-testing.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-master-1">tcp://activemq-artemis-master-1.activemq-artemis-master.ncp-stack-testing.svc.cluster.local:61616</connector>
</connectors>
<cluster-connections>
<cluster-connection name="activemq-artemis">
<connector-ref>activemq-artemis-master-0</connector-ref>
<retry-interval>500</retry-interval>
<retry-interval-multiplier>1.1</retry-interval-multiplier>
<max-retry-interval>5000</max-retry-interval>
<initial-connect-attempts>-1</initial-connect-attempts>
<reconnect-attempts>-1</reconnect-attempts>
<use-duplicate-detection>true</use-duplicate-detection>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<static-connectors>
<connector-ref>activemq-artemis-master-0</connector-ref>
<connector-ref>activemq-artemis-master-1</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<address-settings>
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redistribution-delay>60000</redistribution-delay>
<redelivery-delay>0</redelivery-delay>
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ"/>
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue"/>
</anycast>
</address>
</addresses>
</core>
<core xmlns="urn:activemq:core">
<jmx-management-enabled>true</jmx-management-enabled>
</core>
</configuration>
</code></pre>
<p>With this setup, I still got the same result: after the Artemis node crashed and came back, the leftover messages were not moved to the other node.</p>
<p>Update 2:
I tried using a non-wildcard queue as Justin suggested but still got the same behavior. One difference I noticed is that with the non-wildcard queue the consumer count is only 1, compared to 3 in the case of the wildcard queue. Here are the attributes of the old queue after the crash:</p>
<pre><code>Acknowledge attempts 0
Address QUEUE.service2.tech-drive2.188100000059.thai.notification
Configuration managed false
Consumer count 0
Consumers before dispatch 0
Dead letter address DLQ
Delay before dispatch -1
Delivering count 0
Delivering size 0
Durable true
Durable delivering count 0
Durable delivering size 0
Durable message count 15
Durable persistent size 102245
Durable scheduled count 0
Durable scheduled size 0
Enabled true
Exclusive false
Expiry address ExpiryQueue
Filter
First message age 840031
First message as json [{"JMSType":"service2","address":"QUEUE.service2.tech-drive2.188100000059.thai.notification","messageID":8739,"sentAt":1621969900922,"accessToken":"DONOTDISPLAY","type":3,"priority":4,"userID":"ID:09502dc0-bd8d-11eb-b75c-c6609f1332c9","_AMQ_GROUP_ID":"tech-drive2-188100000059-thai","sentBy":"user@email.com","durable":true,"JMSReplyTo":"queue://QUEUE.service2.tech-drive2.188100000059.thai.notification","__AMQ_CID":"c292b418-bd8b-11eb-b75c-c6609f1332c9","sentFrom":"service2","originalDestination":"QUEUE.service2.tech-drive2.188100000059.thai.notification","_AMQ_ROUTING_TYPE":1,"JMSCorrelationID":"90b783d0-d9cc-4188-9c9e-3453786b2105","expiration":0,"timestamp":1621969900922}]
First message timestamp 1621969900922
Group buckets -1
Group count 0
Group first key
Group rebalance false
Group rebalance pause dispatch false
Id 606
Last value false
Last value key
Max consumers -1
Message count 15
Messages acknowledged 0
Messages added 15
Messages expired 0
Messages killed 0
Name QUEUE.service2.tech-drive2.188100000059.thai.notification
Object Name org.apache.activemq.artemis:broker="activemq-artemis-master-0",component=addresses,address="QUEUE.service2.tech-drive2.188100000059.thai.notification",subcomponent=queues,routing-type="anycast",queue="QUEUE.service2.tech-drive2.188100000059.thai.notification"
Paused false
Persistent size 102245
Prepared transaction message count 0
Purge on no consumers false
Retroactive resource false
Ring size -1
Routing type ANYCAST
Scheduled count 0
Scheduled size 0
Temporary false
User 6e25e08b-9587-40a3-b7e9-146360539258
</code></pre>
<p>and here are the attributes of the new queue:</p>
<pre><code>Attribute Value
Acknowledge attempts 0
Address QUEUE.service2.tech-drive2.188100000059.thai.notification
Configuration managed false
Consumer count 1
Consumers before dispatch 0
Dead letter address DLQ
Delay before dispatch -1
Delivering count 0
Delivering size 0
Durable true
Durable delivering count 0
Durable delivering size 0
Durable message count 0
Durable persistent size 0
Durable scheduled count 0
Durable scheduled size 0
Enabled true
Exclusive false
Expiry address ExpiryQueue
Filter
First message age
First message as json [{}]
First message timestamp
Group buckets -1
Group count 0
Group first key
Group rebalance false
Group rebalance pause dispatch false
Id 866
Last value false
Last value key
Max consumers -1
Message count 0
Messages acknowledged 0
Messages added 0
Messages expired 0
Messages killed 0
Name QUEUE.service2.tech-drive2.188100000059.thai.notification
Object Name org.apache.activemq.artemis:broker="activemq-artemis-master-1",component=addresses,address="QUEUE.service2.tech-drive2.188100000059.thai.notification",subcomponent=queues,routing-type="anycast",queue="QUEUE.service2.tech-drive2.188100000059.thai.notification"
Paused false
Persistent size 0
Prepared transaction message count 0
Purge on no consumers false
Retroactive resource false
Ring size -1
Routing type ANYCAST
Scheduled count 0
Scheduled size 0
Temporary false
User 6e25e08b-9587-40a3-b7e9-146360539258
</code></pre>
| <p>I've taken your simplified configuration with just 2 nodes using a non-wildcard queue with <code>redistribution-delay</code> of <code>0</code>, and I reproduced the behavior you're seeing on my local machine (i.e. without Kubernetes). I believe I see <em>why</em> the behavior is such, but in order to understand the current behavior you first must understand how redistribution works in the first place.</p>
<p>In a cluster every time a consumer is created the node on which the consumer is created notifies every other node in the cluster about the consumer. If other nodes in the cluster have messages in their corresponding queue but don't have any consumers then those other nodes <em>redistribute</em> their messages to the node with the consumer (assuming the <code>message-load-balancing</code> is <code>ON_DEMAND</code> and the <code>redistribution-delay</code> is >= <code>0</code>).</p>
<p>In your case however, the node with the messages is actually <em>down</em> when the consumer is created on the other node so it never actually receives the notification about the consumer. Therefore, once that node restarts it doesn't know about the other consumer and does not redistribute its messages.</p>
<p>I see you've opened <a href="https://issues.apache.org/jira/browse/ARTEMIS-3321" rel="nofollow noreferrer">ARTEMIS-3321</a> to enhance the broker to deal with this situation. However, that will take time to develop and release (assuming the change is approved). My recommendation to you in the mean-time would be to configure your client reconnection which is discussed in <a href="https://activemq.apache.org/components/artemis/documentation/latest/client-reconnection.html" rel="nofollow noreferrer">the documentation</a>, e.g.:</p>
<pre><code>tcp://127.0.0.1:61616?reconnectAttempts=30
</code></pre>
<p>Given the default <code>retryInterval</code> of <code>2000</code> milliseconds, that will give the broker to which the client was originally connected 1 minute to come back up before the client gives up trying to reconnect and throws an exception, at which point the application can completely re-initialize its connection as it currently does.</p>
<p>Since you're using Spring Boot be sure to use version 2.5.0 as it contains <a href="https://github.com/spring-projects/spring-boot/commit/99b43cb690e70afd39cc69783e1155a7a7b4e186" rel="nofollow noreferrer">this change</a> which will allow you to specify the broker URL rather than just host and port.</p>
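<p>As a sketch, assuming the <code>spring.artemis.broker-url</code> property introduced in Spring Boot 2.5 (the host names below are placeholders for your Artemis services), the reconnect options can be carried on the URL itself:</p>

```yaml
# application.yml - sketch assuming Spring Boot 2.5's spring.artemis.broker-url
# property; host names are placeholders for your Artemis services.
spring:
  artemis:
    mode: native
    broker-url: (tcp://activemq-artemis-master-0:61616,tcp://activemq-artemis-master-1:61616)?reconnectAttempts=30
```

<p>With this, the client library handles the transparent reconnect attempts itself before surfacing a failure to the application.</p>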
<p>Lastly, keep in mind that shutting the node down <em>gracefully</em> will short-circuit the client's reconnect and trigger your application to re-initialize the connection, which is not what we want here. Be sure to kill the node ungracefully (e.g. using <code>kill -9 &lt;pid&gt;</code>).</p>
|
<p>I'm building a little k8s controller based on the <a href="https://github.com/kubernetes/sample-controller" rel="nofollow noreferrer">sample-controller</a>.</p>
<p>I'm listening for ServiceAccount events with the following event handler:</p>
<pre><code>...
serviceAccountInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: controller.enqueueServiceAccount,
DeleteFunc: controller.enqueueServiceAccount,
})
...
func (c *Controller) enqueueServiceAccount(obj interface{}) {
var key string
var err error
if key, err = cache.MetaNamespaceKeyFunc(obj); err != nil {
utilruntime.HandleError(err)
return
}
c.workqueue.Add(key)
}
</code></pre>
<p>This is working fine; my events are coming in and the <code>enqueueServiceAccount()</code> function is getting called.</p>
<p>This is my first foray into Golang and I can't figure out how to get the object's Kubernetes annotations from the <code>obj</code>.</p>
<p>I dumped the object with <code>go-spew</code> and can confirm it's got an <code>ObjectMeta</code>. I'm just not sure how to cast this into some object where I can access the <code>ObjectMeta</code> - and from there it should be easy to get the annotations (in this case the object doesn't have any; it's one of the <code>&lt;nil&gt;</code> values).</p>
<pre><code>(*v1.ServiceAccount)(0xc0002c1010)(&ServiceAccount{ObjectMeta:{kube-proxy kube-system /api/v1/namespaces/kube-system/serviceaccounts/kube-proxy d2013421-92c8-44ae-b6d8-202231ea557c 234 0 2021-04-29 18:40:20 +0100 BST <nil> <nil> map[eks.amazonaws.com/component:kube-proxy k8s-app:kube-proxy] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"labels":{"eks.amazonaws.com/component":"kube-proxy","k8s-app":"kube-proxy"},"name":"kube-proxy","namespace":"kube-system"}}
</code></pre>
<p><strong>How can I access this object's annotations?</strong></p>
| <p>You can use a <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/meta/interfaces.go#L44" rel="noreferrer">MetaAccessor</a>:</p>
<pre class="lang-golang prettyprint-override"><code>import (
    "k8s.io/apimachinery/pkg/api/meta"
    "k8s.io/apimachinery/pkg/runtime"
    utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)

var metaAccessor = meta.NewAccessor()

func (c *Controller) enqueueServiceAccount(obj interface{}) {
    if typed, ok := obj.(runtime.Object); ok {
        // Annotations reads metadata.annotations from any runtime.Object.
        annotations, err := metaAccessor.Annotations(typed)
        if err != nil {
            utilruntime.HandleError(err)
            return
        }
        _ = annotations // map[string]string with the object's annotations
    }
}
</code></pre>
<p>Alternatively, <code>meta.Accessor(obj)</code> from the same package returns a <code>metav1.Object</code> whose <code>GetAnnotations()</code> method gives you the map directly. For larger controllers, people often use controller-runtime instead, which handles this plumbing for you.</p>
|
<p>My certificates for rancher server expired and now I can not log in to UI anymore to manage my k8s clusters.</p>
<p>Error:</p>
<pre><code>2021-05-26 00:57:52.437334 I | http: TLS handshake error from 127.0.0.1:43238: remote error: tls: bad certificate
2021/05/26 00:57:52 [INFO] Waiting for server to become available: Get https://127.0.0.1:6443/version?timeout=30s: x509: certificate has expired or is not yet valid
</code></pre>
<p>So what I did was roll back the date on the RancherOS machine that is running the Rancher Server container. After that I restarted the container and it refreshed the certificates. I checked with:</p>
<pre><code>for i in `ls /var/lib/rancher/k3s/server/tls/*.crt`; do echo $i; openssl x509 -enddate -noout -in $i; done
</code></pre>
<p>Since now I was able to log into the UI I forced a certificate rotation on the k8s cluster.</p>
<p>But I still get the same error once the date is reset to current and I can not log in to the Rancher Server UI.</p>
<p>What am I missing here?</p>
| <p>This was the missing piece: <a href="https://github.com/rancher/rancher/issues/26984#issuecomment-818770519" rel="nofollow noreferrer">https://github.com/rancher/rancher/issues/26984#issuecomment-818770519</a></p>
<p>Deleting the <code>dynamic-cert.json</code> file, deleting the serving secret with <code>kubectl delete secret</code>, and then restarting Rancher forces the certificate to be regenerated.</p>
|
<p>When I run this CronJob in Kubernetes, the cron reports success but I am not getting the desired result.</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ $.Values.appName }}
namespace: {{ $.Values.appName }}
spec:
schedule: "* * * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
containers:
- name: test
image: image
command: ["/bin/bash"]
args: [ "test.sh" ]
restartPolicy: OnFailure
</code></pre>
<p>Also, I am sharing <code>test.sh</code>:</p>
<pre><code>#!/bin/sh
rm -rf /tmp/*.*
echo "remove done"
</code></pre>
<p>The cronjob runs successfully, but when I check the container the files in the <strong>/tmp</strong> directory are not deleted.</p>
| <p>A CronJob runs in its own container (POD); removing files from another container's filesystem this way won't work.</p>
<p>Your main container runs under a Deployment, while your Job or CronJob, when triggered, creates a <strong>new container</strong> <strong>(POD)</strong> which has a separate file system and mounts.</p>
<p>If you want to achieve this scenario you have to use a <strong>PVC</strong> with the <strong>ReadWriteMany</strong> access mode, so that <strong>multiple containers</strong> (PODs) can connect to your <strong>single PVC</strong> and share the file system.</p>
<p>This way your <strong>cronjob container</strong> (POD) starts with the existing PVC file system and you can remove the directory using a Job or CronJob.</p>
<p>Mount the same <strong>PVC</strong> in both the cronjob container and the main container and it will work.</p>
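<p>A minimal sketch of that setup (all names are placeholders, and the storage class must actually support <code>ReadWriteMany</code>, e.g. NFS- or Filestore-backed storage):</p>

```yaml
# pvc-shared-tmp.yaml - placeholder names; the storage class must support RWX
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-tmp
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
---
# In BOTH the Deployment's and the CronJob's pod templates:
#   volumes:
#   - name: shared-tmp
#     persistentVolumeClaim:
#       claimName: shared-tmp
#   containers:
#   - ...
#     volumeMounts:
#     - name: shared-tmp
#       mountPath: /tmp
```

<p>With both pod templates mounting the same claim at the same path, the cleanup script in the CronJob operates on the files the main container wrote.</p>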
|
<p>I am trying to set up my Helm chart to be able to deploy a <code>VirtualService</code>. My deploy user has the <code>Edit</code> ClusterRole bound to it. But I realized that because Istio is not part of the core Kubernetes distro, the <code>Edit</code> ClusterRole does not have permissions to add a <code>VirtualService</code> (or even look at them).</p>
<p>I can, of course, make my own Roles and ClusterRoles if needed. But I figured I would see if Istio has a recommended Role or ClusterRole for that.</p>
<p>But all the docs that I can find for Istio Roles and ClusterRoles are for old versions of Istio.</p>
<p><strong>Does Istio not recommend using Roles and ClusterRoles anymore? If not, what do they recommend? If they do, where are the docs for it?</strong></p>
| <p>I ended up using these ClusterRoles. They aggregate into the standard Kubernetes admin, edit and view roles. (My edit role only allows access to the VirtualService because that fits my situation.)</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-admin
labels:
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-edit
labels:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
resources: ["virtualservices"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-view
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
</code></pre>
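<p>Because of the aggregation labels above, no separate binding to these ClusterRoles is needed: any subject already bound to the built-in <code>edit</code> role picks up the Istio permissions automatically once these ClusterRoles are applied. For completeness, a minimal binding sketch (subject name and namespace are placeholders):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-user-edit   # placeholder
  namespace: my-app        # placeholder
subjects:
- kind: User
  name: deploy-user        # placeholder
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```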
|
<p>Major features of service meshes are</p>
<ol>
<li>Service Discovery</li>
<li>Configuration management</li>
</ol>
<p>both of them are provided by Kubernetes.<br />
<strong>Why do we need a service mesh then?</strong></p>
<p>*I understand that for more complex tasks e.g. zoning, security, complex load balancing and routing a service mesh is the right tool.</p>
| <p>In short, a service mesh such as Istio makes it easier to establish and manage communication between services (microservices), especially when you have a large number of them, and it also provides security and other features such as traffic management and observability. If you only have a couple of services, you might not need one.</p>
|
<p>When I upgrade Kubernetes from version 1.20.x to 1.21.1, all related containers are up to date, but the pause container is still in use and I cannot force-update it to the latest version.</p>
<pre><code># docker ps
XXX/pause:3.2
# docker images
XXX/pause:3.2
XXX/pause:3.4.1
# docker rmi -f XXX/pause:3.2
Error response from daemon: conflict: unable to delete XXX/pause:3.2 (cannot be forced) - image is being used by running container
</code></pre>
| <p>When you upgrade the cluster using <code>kubeadm</code> you will probably get the notification about the <code>kubelet</code> manual upgrade requirement:</p>
<pre><code>Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT TARGET
kubelet 1 x v1.20.7 v1.21.1
</code></pre>
<p>I've managed to create a <code>kubeadm</code> cluster at version <code>1.20.7-00</code> and then upgraded the cluster to the newest version available at the time: <code>1.21.1-00</code>. After the upgrade was complete, the pause container stayed at version <code>3.2.0</code> even after upgrading <code>kubelet</code>.</p>
<p><strong>One of the ways</strong> to update <code>kubelet</code> to use a specific <code>pause</code> container version is by:</p>
<ul>
<li>modifying the following file:
<ul>
<li><code>/var/lib/kubelet/kubeadm-flags.env</code> (change for example to <code>k8s.gcr.io/pause:3.3</code>)</li>
</ul>
</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"
</code></pre>
<ul>
<li>restarting kubelet (depending on the OS)
<ul>
<li><code>$ systemctl restart kubelet</code></li>
</ul>
</li>
</ul>
<p>After these steps you should see the new version of the <code>pause</code> container passed to <code>kubelet</code>.</p>
<ul>
<li><code>$ systemctl status kubelet</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>
kruk@ubuntu:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu 2021-05-27 13:28:12 UTC; 7h ago
Docs: https://kubernetes.io/docs/home/
Main PID: 724 (kubelet)
Tasks: 18 (limit: 9442)
Memory: 128.6M
CGroup: /system.slice/kubelet.service
└─724 /usr/bin/kubelet <-SKIPPED-> --pod-infra-container-image=k8s.gcr.io/pause:3.3
May 27 13:29:12 ubuntu kubelet[724]: 2021-05-27 13:29:12.125 [INFO][5164] ipam.go 1068: Successfully claimed IPs: [172.16.243.205/26] block=172.16.243.192/26 handle="k8s-pod-network.1638a3ba44d1a46f6ad7eadb1519a42cdda98fafd0c94a7b67881f38213a5032" host="ubuntu"
May 27 13:29:12 ubuntu kubelet[724]: 2021-05-27 13:29:12.125 [INFO][5164] ipam.go 722: Auto-assigned 1 out of 1 IPv4s: [172.16.243.205/26] handle="k8s-pod-network.1638a3ba44d1a46f6ad7eadb1519a42cdda98fafd0c94a7b67881f38213a5032" host="ubuntu"
May 27 13:29:12 ubuntu kubelet[724]: time="2021-05-27T13:29:12Z" level=info msg="Released host-wide IPAM lock." source="ipam_plugin.go:369"
</code></pre>
<p>In my testing, the old containers that were present were not updated to the new <code>pause</code> container; they stayed at version <code>3.2</code>. Each newly spawned workload, for example an <code>nginx</code> <code>Deployment</code>, was using the new <code>pause</code> container version:</p>
<ul>
<li><code>$ docker ps</code></li>
</ul>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cc215019335 nginx "/docker-entrypoint.…" 7 hours ago Up 8 hours k8s_nginx_nginx-6799fc88d8-lhh48_default_58580cf2-ac6c-4d55-9c08-608ce2018fce_1
1638a3ba44d1 k8s.gcr.io/pause:3.3 "/pause" 7 hours ago Up 8 hours k8s_POD_nginx-6799fc88d8-lhh48_default_58580cf2-ac6c-4d55-9c08-608ce2018fce_1
</code></pre>
<hr />
<p>Additional resources/reference on the topic:</p>
<ul>
<li><em><a href="https://www.ianlewis.org/en/almighty-pause-container" rel="nofollow noreferrer">Ianlewis.org: Almighty pause container</a></em></li>
<li><em><a href="https://github.com/kubernetes/kubernetes/issues/98765" rel="nofollow noreferrer">Github.com: Kubernetes: Isuses: Handling Deprecation of pod-infra-container-image</a></em></li>
<li><em><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#workflow-when-using-kubeadm-init" rel="nofollow noreferrer">Kubernetes.io: Docs: Setup: Production environment: Tools: Kubeadm: Kubelet integration: Workflow when using kubeadm init</a></em></li>
</ul>
|
<blockquote>
<p>prometheus-prometheus-kube-prometheus-prometheus-0 0/2 Terminating 0 4s
alertmanager-prometheus-kube-prometheus-alertmanager-0 0/2 Terminating 0 10s</p>
</blockquote>
<p>After updating the EKS cluster from 1.15 to 1.16, everything works fine except these two pods: they keep terminating and are unable to initialise. Hence, Prometheus monitoring does not work. I am getting the below errors while describing the pods.</p>
<pre><code>Error: failed to start container "prometheus": Error response from daemon: OCI runtime create failed: container_linux.go:362: creating new parent process caused: container_linux.go:1941: running lstat on namespace path "/proc/29271/ns/ipc" caused: lstat /proc/29271/ns/ipc: no such file or directory: unknown
Error: failed to start container "config-reloader": Error response from daemon: cannot join network of a non running container: 7e139521980afd13dad0162d6859352b0b2c855773d6d4062ee3e2f7f822a0b3
Error: cannot find volume "config" to mount into container "config-reloader"
Error: cannot find volume "config" to mount into container "prometheus"
</code></pre>
<p>here is my yaml file for the deployment:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: eks.privileged
creationTimestamp: "2021-04-30T16:39:14Z"
deletionGracePeriodSeconds: 600
deletionTimestamp: "2021-04-30T16:49:14Z"
generateName: prometheus-prometheus-kube-prometheus-prometheus-
labels:
app: prometheus
app.kubernetes.io/instance: prometheus-kube-prometheus-prometheus
app.kubernetes.io/managed-by: prometheus-operator
app.kubernetes.io/name: prometheus
app.kubernetes.io/version: 2.26.0
controller-revision-hash: prometheus-prometheus-kube-prometheus-prometheus-56d9fcf57
operator.prometheus.io/name: prometheus-kube-prometheus-prometheus
operator.prometheus.io/shard: "0"
prometheus: prometheus-kube-prometheus-prometheus
statefulset.kubernetes.io/pod-name: prometheus-prometheus-kube-prometheus-prometheus-0
name: prometheus-prometheus-kube-prometheus-prometheus-0
namespace: mo
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: prometheus-prometheus-kube-prometheus-prometheus
uid: 326a09f2-319c-449d-904a-1dd0019c6d80
resourceVersion: "9337443"
selfLink: /api/v1/namespaces/monitoring/pods/prometheus-prometheus-kube-prometheus-prometheus-0
uid: e2be062f-749d-488e-a6cc-42ef1396851b
spec:
containers:
- args:
- --web.console.templates=/etc/prometheus/consoles
- --web.console.libraries=/etc/prometheus/console_libraries
- --config.file=/etc/prometheus/config_out/prometheus.env.yaml
- --storage.tsdb.path=/prometheus
- --storage.tsdb.retention.time=10d
- --web.enable-lifecycle
- --storage.tsdb.no-lockfile
- --web.external-url=http://prometheus-kube-prometheus-prometheus.monitoring:9090
- --web.route-prefix=/
image: quay.io/prometheus/prometheus:v2.26.0
imagePullPolicy: IfNotPresent
name: prometheus
ports:
- containerPort: 9090
name: web
protocol: TCP
readinessProbe:
failureThreshold: 120
httpGet:
path: /-/ready
port: web
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 3
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /etc/prometheus/config_out
name: config-out
readOnly: true
- mountPath: /etc/prometheus/certs
name: tls-assets
readOnly: true
- mountPath: /prometheus
name: prometheus-prometheus-kube-prometheus-prometheus-db
- mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-kube-prometheus-prometheus-token-mh66q
readOnly: true
- args:
- --listen-address=:8080
- --reload-url=http://localhost:9090/-/reload
- --config-file=/etc/prometheus/config/prometheus.yaml.gz
- --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
- --watched-dir=/etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
command:
- /bin/prometheus-config-reloader
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: SHARD
value: "0"
image: quay.io/prometheus-operator/prometheus-config-reloader:v0.47.0
imagePullPolicy: IfNotPresent
name: config-reloader
ports:
- containerPort: 8080
name: reloader-web
protocol: TCP
resources:
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /etc/prometheus/config
name: config
- mountPath: /etc/prometheus/config_out
name: config-out
- mountPath: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: prometheus-kube-prometheus-prometheus-token-mh66q
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: prometheus-prometheus-kube-prometheus-prometheus-0
nodeName: ip-10-1-49-45.ec2.internal
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 2000
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
serviceAccount: prometheus-kube-prometheus-prometheus
serviceAccountName: prometheus-kube-prometheus-prometheus
subdomain: prometheus-operated
terminationGracePeriodSeconds: 600
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: config
secret:
defaultMode: 420
secretName: prometheus-prometheus-kube-prometheus-prometheus
- name: tls-assets
secret:
defaultMode: 420
secretName: prometheus-prometheus-kube-prometheus-prometheus-tls-assets
- emptyDir: {}
name: config-out
- configMap:
defaultMode: 420
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
name: prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0
- emptyDir: {}
name: prometheus-prometheus-kube-prometheus-prometheus-db
- name: prometheus-kube-prometheus-prometheus-token-mh66q
secret:
defaultMode: 420
secretName: prometheus-kube-prometheus-prometheus-token-mh66q
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-04-30T16:39:14Z"
status: "True"
type: PodScheduled
phase: Pending
qosClass: Burstable
</code></pre>
| <p>If someone needs to know the answer: in my case (the situation above) there were 2 Prometheus operators running in different namespaces, one in the default namespace and another in the monitoring namespace. I removed the one from the default namespace and it resolved my pod-crashing issue.</p>
|
<p>I have a workflow on Github action that builds, tests, and pushes a container to GKE.
I followed the steps outlined in <a href="https://docs.github.com/en/actions/guides/deploying-to-google-kubernetes-engine" rel="nofollow noreferrer">https://docs.github.com/en/actions/guides/deploying-to-google-kubernetes-engine</a> but my build keeps on failing.
The failure comes from the Kustomization stage of the build process.</p>
<p>This is what the error looks like:</p>
<pre><code>Run ./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA
./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA
./kustomize build . | kubectl apply -f -
kubectl rollout status deployment/$DEPLOYMENT_NAME
kubectl get services -o wide
shell: /usr/bin/bash -e ***0***
env:
PROJECT_ID: ***
GKE_CLUSTER: codematictest
GKE_ZONE: us-east1-b
DEPLOYMENT_NAME: codematictest
IMAGE: codematictest
CLOUDSDK_METRICS_ENVIRONMENT: github-actions-setup-gcloud
KUBECONFIG: /home/runner/work/codematic-test/codematic-test/fb7d2ebb-4c82-4d43-af10-5b0b62bab1fd
Error: Missing kustomization file 'kustomization.yaml'.
Usage:
kustomize edit set image [flags]
Examples:
The command
set image postgres=eu.gcr.io/my-project/postgres:latest my-app=my-registry/my-app@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
will add
images:
- name: postgres
newName: eu.gcr.io/my-project/postgres
newTag: latest
- digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
name: my-app
newName: my-registry/my-app
to the kustomization file if it doesn't exist,
and overwrite the previous ones if the image name exists.
The command
set image node:8.15.0 mysql=mariadb alpine@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
will add
images:
- name: node
newTag: 8.15.0
- name: mysql
newName: mariadb
- digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
name: alpine
to the kustomization file if it doesn't exist,
and overwrite the previous ones if the image name exists.
Flags:
-h, --help help for image
Error: Process completed with exit code 1.
</code></pre>
<p>My GitHub workflow file looks like this:</p>
<pre><code>name: gke
on: push
env:
PROJECT_ID: ${{ secrets.GKE_PROJECT }}
GKE_CLUSTER: codematictest
GKE_ZONE: us-east1-b
DEPLOYMENT_NAME: codematictest
IMAGE: codematictest
jobs:
setup-build-publish-deploy:
name: Setup, Build, Publish, and Deploy
defaults:
run:
working-directory: api
runs-on: ubuntu-latest
environment: production
steps:
- name: Checkout
uses: actions/checkout@v2
# Setup gcloud CLI
- uses: google-github-actions/setup-gcloud@v0.2.0
with:
service_account_key: ${{ secrets.GKE_SA_KEY }}
project_id: ${{ secrets.GKE_PROJECT }}
# Configure Docker to use the gcloud command-line tool as a credential
# helper for authentication
- run: |-
gcloud --quiet auth configure-docker
# Get the GKE credentials so we can deploy to the cluster
- uses: google-github-actions/get-gke-credentials@v0.2.1
with:
cluster_name: ${{ env.GKE_CLUSTER }}
location: ${{ env.GKE_ZONE }}
credentials: ${{ secrets.GKE_SA_KEY }}
# Build the Docker image
- name: Build
run: |-
docker build \
--tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
--build-arg GITHUB_SHA="$GITHUB_SHA" \
--build-arg GITHUB_REF="$GITHUB_REF" \
.
# Push the Docker image to Google Container Registry
- name: Publish
run: |-
docker push "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA"
# Set up kustomize
- name: Set up Kustomize
run: |-
curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
chmod u+x ./kustomize
# Deploy the Docker image to the GKE cluster
- name: Deploy
run: |-
./kustomize edit set image gcr.io/PROJECT_ID/IMAGE:TAG=gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA
./kustomize build . | kubectl apply -f -
kubectl rollout status deployment/$DEPLOYMENT_NAME
kubectl get services -o wide
</code></pre>
<p>The kustomization file, as explained in its <a href="https://github.com/kubernetes-sigs/kustomize#usage" rel="nofollow noreferrer">repository</a>, must exist in the directory you run <code>kustomize</code> from, alongside the resources it references:</p>
<pre><code>~/someApp
├── deployment.yaml
├── kustomization.yaml
└── service.yaml
</code></pre>
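<p>For illustration, a minimal <code>kustomization.yaml</code> for that layout might look like the sketch below; the resource file names follow the tree above and are placeholders for your actual manifests:</p>

```yaml
# kustomization.yaml (placed next to deployment.yaml and service.yaml)
resources:
  - deployment.yaml
  - service.yaml
```

<p>Also note that the workflow in the question runs with <code>working-directory: api</code>, so <code>kustomize edit</code> and <code>kustomize build</code> look for the file inside the <code>api</code> directory of the repository, not the repository root.</p>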
|
<p>I'm trying to deploy the ELK stack to my development Kubernetes cluster. It seems that I do everything as described in the tutorials; however, the pods keep failing with Java errors (see below). I will describe the whole process, from installing the cluster until the error happens.</p>
<p>Step 1: Installing the cluster</p>
<pre><code># Apply sysctl params without reboot
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Setup required sysctl params, these persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
#update and install apt https stuff
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
# add docker repo for containerd and install it
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y containerd.io
# copy config
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1 // somewhat redundant
net.bridge.bridge-nf-call-iptables = 1 // somewhat redundant
EOF
sudo sysctl --system
#install kubernetes binaries
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
#disable swap and comment swap in fstab
sudo swapoff -v /dev/mapper/main-swap
sudo nano /etc/fstab
#init cluster
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
#make user to kubectl admin
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#install calico
kubectl apply -f
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
#untaint master node that pods can run on it
kubectl taint nodes --all node-role.kubernetes.io/master-
#install helm
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
</code></pre>
<p>Step 2: Install ECK (<a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-install-helm.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-install-helm.html</a>) and elasticsearch (<a href="https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md#installing" rel="nofollow noreferrer">https://github.com/elastic/helm-charts/blob/master/elasticsearch/README.md#installing</a>)</p>
<pre><code># add helm repo
helm repo add elastic https://helm.elastic.co
helm repo update
# install eck
#### omitted as suggested in the comment section: helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
helm install elasticsearch elastic/elasticsearch
</code></pre>
<p>Step 3: Add PersistentVolume</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolume
metadata:
name: elk-data1
labels:
type: local
spec:
capacity:
storage: 30Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: elk-data2
labels:
type: local
spec:
capacity:
storage: 30Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: elk-data3
labels:
type: local
spec:
capacity:
storage: 30Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data3"
</code></pre>
<p>apply it</p>
<pre><code>sudo mkdir /mnt/data1
sudo mkdir /mnt/data2
sudo mkdir /mnt/data3
kubectl apply -f storage.yaml
</code></pre>
<p>Now the pods (or at least one) should run, but I keep getting STATUS CrashLoopBackOff with Java errors in the log.</p>
<pre><code>kubectl get pv,pvc,pods
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/elk-data1 30Gi RWO Retain Bound default/elasticsearch-master-elasticsearch-master-1 140m
persistentvolume/elk-data2 30Gi RWO Retain Bound default/elasticsearch-master-elasticsearch-master-2 140m
persistentvolume/elk-data3 30Gi RWO Retain Bound default/elasticsearch-master-elasticsearch-master-0 140m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0 Bound elk-data3 30Gi RWO 141m
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1 Bound elk-data1 30Gi RWO 141m
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 Bound elk-data2 30Gi RWO 141m
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-master-0 0/1 CrashLoopBackOff 32 141m
pod/elasticsearch-master-1 0/1 Pending 0 141m
pod/elasticsearch-master-2 0/1 Pending 0 141m
</code></pre>
<p>Logs and Error:</p>
<pre><code>kubectl logs pod/elasticsearch-master-2
Exception in thread "main" java.lang.InternalError: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:65)
at java.base/jdk.internal.platform.Container.metrics(Container.java:43)
at jdk.management/com.sun.management.internal.OperatingSystemImpl.<init>(OperatingSystemImpl.java:48)
at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl.getOperatingSystemMXBean(PlatformMBeanProviderImpl.java:279)
at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl$3.nameToMBeanMap(PlatformMBeanProviderImpl.java:198)
at java.management/java.lang.management.ManagementFactory.lambda$getPlatformMBeanServer$0(ManagementFactory.java:487)
at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1766)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at java.management/java.lang.management.ManagementFactory.getPlatformMBeanServer(ManagementFactory.java:488)
at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:558)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:263)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:207)
at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:220)
at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:197)
at org.elasticsearch.common.logging.LogConfigurator.configureStatusLogger(LogConfigurator.java:248)
at org.elasticsearch.common.logging.LogConfigurator.configureWithoutConfig(LogConfigurator.java:95)
at org.elasticsearch.cli.CommandLoggingConfigurator.configureLoggingWithoutConfig(CommandLoggingConfigurator.java:29)
at org.elasticsearch.cli.Command.main(Command.java:76)
at org.elasticsearch.common.settings.KeyStoreCli.main(KeyStoreCli.java:32)
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:61)
... 26 more
Caused by: java.lang.ExceptionInInitializerError
at java.base/jdk.internal.platform.CgroupSubsystemFactory.create(CgroupSubsystemFactory.java:107)
at java.base/jdk.internal.platform.CgroupMetrics.getInstance(CgroupMetrics.java:167)
... 31 more
Caused by: java.lang.NullPointerException
at java.base/java.util.Objects.requireNonNull(Objects.java:208)
at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:260)
at java.base/java.nio.file.Path.of(Path.java:147)
at java.base/java.nio.file.Paths.get(Paths.java:69)
at java.base/jdk.internal.platform.CgroupUtil.lambda$readStringValue$1(CgroupUtil.java:66)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:554)
at java.base/jdk.internal.platform.CgroupUtil.readStringValue(CgroupUtil.java:68)
at java.base/jdk.internal.platform.CgroupSubsystemController.getStringValue(CgroupSubsystemController.java:65)
at java.base/jdk.internal.platform.CgroupSubsystemController.getLongValue(CgroupSubsystemController.java:124)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getLongValue(CgroupV1Subsystem.java:272)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getHierarchical(CgroupV1Subsystem.java:218)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.setPath(CgroupV1Subsystem.java:201)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.setSubSystemControllerPath(CgroupV1Subsystem.java:173)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.lambda$initSubSystem$5(CgroupV1Subsystem.java:113)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.initSubSystem(CgroupV1Subsystem.java:113)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.<clinit>(CgroupV1Subsystem.java:47)
... 33 more
Exception in thread "main" java.lang.InternalError: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:65)
at java.base/jdk.internal.platform.Container.metrics(Container.java:43)
at jdk.management/com.sun.management.internal.OperatingSystemImpl.<init>(OperatingSystemImpl.java:48)
at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl.getOperatingSystemMXBean(PlatformMBeanProviderImpl.java:279)
at jdk.management/com.sun.management.internal.PlatformMBeanProviderImpl$3.nameToMBeanMap(PlatformMBeanProviderImpl.java:198)
at java.management/sun.management.spi.PlatformMBeanProvider$PlatformComponent.getMBeans(PlatformMBeanProvider.java:195)
at java.management/java.lang.management.ManagementFactory.getPlatformMXBean(ManagementFactory.java:686)
at java.management/java.lang.management.ManagementFactory.getOperatingSystemMXBean(ManagementFactory.java:388)
at org.elasticsearch.tools.launchers.DefaultSystemMemoryInfo.<init>(DefaultSystemMemoryInfo.java:28)
at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:125)
at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:86)
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at java.base/jdk.internal.platform.Metrics.systemMetrics(Metrics.java:61)
... 10 more
Caused by: java.lang.ExceptionInInitializerError
at java.base/jdk.internal.platform.CgroupSubsystemFactory.create(CgroupSubsystemFactory.java:107)
at java.base/jdk.internal.platform.CgroupMetrics.getInstance(CgroupMetrics.java:167)
... 15 more
Caused by: java.lang.NullPointerException
at java.base/java.util.Objects.requireNonNull(Objects.java:208)
at java.base/sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:260)
at java.base/java.nio.file.Path.of(Path.java:147)
at java.base/java.nio.file.Paths.get(Paths.java:69)
at java.base/jdk.internal.platform.CgroupUtil.lambda$readStringValue$1(CgroupUtil.java:66)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:554)
at java.base/jdk.internal.platform.CgroupUtil.readStringValue(CgroupUtil.java:68)
at java.base/jdk.internal.platform.CgroupSubsystemController.getStringValue(CgroupSubsystemController.java:65)
at java.base/jdk.internal.platform.CgroupSubsystemController.getLongValue(CgroupSubsystemController.java:124)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getLongValue(CgroupV1Subsystem.java:272)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.getHierarchical(CgroupV1Subsystem.java:218)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.setPath(CgroupV1Subsystem.java:201)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.setSubSystemControllerPath(CgroupV1Subsystem.java:173)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.lambda$initSubSystem$5(CgroupV1Subsystem.java:113)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.initSubSystem(CgroupV1Subsystem.java:113)
at java.base/jdk.internal.platform.cgroupv1.CgroupV1Subsystem.<clinit>(CgroupV1Subsystem.java:47)
... 17 more
</code></pre>
<p>values.yaml from helm chart</p>
<pre><code>---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
master: "true"
ingest: "true"
data: "true"
remote_cluster_client: "true"
ml: "true"
replicas: 3
minimumMasterNodes: 2
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# - name: MY_ENVIRONMENT_VAR
# value: the_value_goes_here
# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
# name: env-secret
# - configMapRef:
# name: config-map
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
# defaultMode: 0755
hostAliases: []
#- ip: "127.0.0.1"
# hostnames:
# - "foo.local"
# - "bar.local"
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.12.1"
imagePullPolicy: "IfNotPresent"
podAnnotations: {}
# iam.amazonaws.com/role: es-cluster
# additionals labels
labels: {}
esJavaOpts: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "1000m"
memory: "2Gi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
sidecarResources: {}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 30Gi
rbac:
create: false
serviceAccountAnnotations: {}
serviceAccountName: ""
podSecurityPolicy:
create: false
name: ""
spec:
privileged: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
- emptyDir
persistence:
enabled: true
labels:
# Add default labels for the volumeClaimTemplate of the StatefulSet
enabled: false
annotations: {}
extraVolumes: []
# - name: extras
# emptyDir: {}
extraVolumeMounts: []
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
extraInitContainers: []
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true
protocol: http
httpPort: 9200
transportPort: 9300
service:
labels: {}
labelsHeadless: {}
type: ClusterIP
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/7.12/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publically expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
# https://github.com/elastic/helm-charts/issues/63
masterTerminationFix: false
lifecycle: {}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command:
# - bash
# - -c
# - |
# #!/bin/bash
# # Add a template to adjust number of shards/replicas
# TEMPLATE_NAME=my_template
# INDEX_PATTERN="logstash-*"
# SHARD_COUNT=8
# REPLICA_COUNT=1
# ES_URL=http://localhost:9200
# while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
# curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
sysctlInitContainer:
enabled: true
keystore: []
networkPolicy:
## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
## In order for a Pod to access Elasticsearch, it needs to have the following label:
## {{ template "uname" . }}-client: "true"
## Example for default configuration to access HTTP port:
## elasticsearch-master-http-client: "true"
## Example for default configuration to access transport port:
## elasticsearch-master-transport-client: "true"
http:
enabled: false
## if explicitNamespacesSelector is not set or set to {}, only client Pods being in the networkPolicy's namespace
## and matching all criteria can reach the DB.
## But sometimes, we want the Pods to be accessible to clients from other namespaces, in this case, we can use this
## parameter to select these namespaces
##
# explicitNamespacesSelector:
# # Accept from namespaces with all those different rules (only from whitelisted Pods)
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
##
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
transport:
## Note that all Elasticsearch Pods can talks to themselves using transport port even if enabled.
enabled: false
# explicitNamespacesSelector:
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""
</code></pre>
| <p>What you are experiencing is not an issue related to Elasticsearch. It is a problem resulting from the cgroup configuration for the version of containerd you are using. I haven't unpacked the specifics, but the exception in the Elasticsearch logs relates to the JDK failing when attempting to retrieve the required cgroup information.</p>
<p>I had the same issue and resolved it by executing the following steps, before installing Kubernetes, to install a later version of containerd and configure it to use cgroups with systemd:</p>
<ol>
<li>Add the GPG key for the official Docker repository.</li>
</ol>
<pre><code>curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
</code></pre>
<ol start="2">
<li>Add the Docker repository to APT sources.</li>
</ol>
<pre><code>sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
</code></pre>
<ol start="3">
<li>Install the latest containerd.io package instead of the containerd package from Ubuntu.</li>
</ol>
<pre><code>apt-get -y install containerd.io
</code></pre>
<ol start="4">
<li>Generate the default containerd configuration.</li>
</ol>
<pre><code>containerd config default > /etc/containerd/config.toml
</code></pre>
<ol start="5">
<li>Configure containerd to use systemd to manage cgroups: in <code>/etc/containerd/config.toml</code>, under the runc runtime options, set <code>SystemdCgroup = true</code>.</li>
</ol>
<pre><code> [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
runtime_engine = ""
runtime_root = ""
privileged_without_host_devices = false
base_runtime_spec = ""
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
</code></pre>
<ol start="6">
<li>Restart the containerd service.</li>
</ol>
<pre><code>systemctl restart containerd
</code></pre>
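<p>If you prefer to script step 5 rather than edit the file by hand, a <code>sed</code> one-liner can flip the flag, assuming the generated config contains <code>SystemdCgroup = false</code> (which is what recent versions of <code>containerd config default</code> emit). The demonstration below works on a scratch copy; on a real node point it at <code>/etc/containerd/config.toml</code>:</p>

```shell
# Demonstration on a scratch copy; use /etc/containerd/config.toml on a real node.
cat > /tmp/containerd-config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Flip the flag, then print the line to confirm the change took effect.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-config.toml
grep 'SystemdCgroup' /tmp/containerd-config.toml
```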
|
<p>I'm trying to do a loop with a range in Helm using 2 variables. What I have:</p>
<p><strong>values.yaml</strong></p>
<pre><code>master:
slave1:
- slave1value1
- slave1value2
slave2:
- slave2value1
- slave2value2
</code></pre>
<p>My actual loop.</p>
<pre><code>{{- range .Values.master.slave1 }}
name: http://slave1-{{ . }}
{{- end }}
{{- range .Values.master.slave2 }}
name: http://slave2-{{ . }}
{{- end }}
</code></pre>
<p>This is actually doing what I need; the output will be like this:</p>
<p>looping on <code>.Values.master.slave1</code></p>
<pre><code>name: http://slave1-slave1value1
name: http://slave1-slave1value2
</code></pre>
<p>looping on <code>.Values.master.slave2</code></p>
<pre><code>name: http://slave2-slave2value1
name: http://slave2-slave2value2
</code></pre>
<p>This is fully working for now. The question is: can I achieve the same result using just one loop block? I tried this:</p>
<pre><code>{{ alias := .Values.master }}
{{- range $alias }}
name: http://{{ . }}-{{ $alias.name }}
{{- end }}
</code></pre>
<p>But the output is not what I'm expecting, thanks in advance.</p>
| <p>Almost...you need a nested loop to do this. The top-level data structure is a map, where the keys are the worker names and the values are the list of values. So you can iterate through the top-level map, then for each item iterate through the value list.</p>
<pre><code>{{- range $key, $values := .Values.master }}
{{- range $value := $values }}
name: http://{{ $key }}-{{ $value }}
{{- end }}
{{- end }}
</code></pre>
<p>Note that we've assigned the <code>range</code> variables to locals to avoid ambiguity around what <code>.</code> refers to: inside each <code>range</code> block, <code>.</code> is rebound to the current element of the innermost loop.</p>
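<p>With the <code>values.yaml</code> from the question, the nested loop should render the following fragment (Go templates iterate map keys in sorted order, so <code>slave1</code> comes before <code>slave2</code>):</p>

```yaml
name: http://slave1-slave1value1
name: http://slave1-slave1value2
name: http://slave2-slave2value1
name: http://slave2-slave2value2
```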
|
<p>I want to use a kind other than Deployment for autoscaling in Kubernetes. Is that possible? The reason I don't want to use kind: Deployment is the restart policy; as per the k8s documentation the only valid value for restartPolicy is "Always", and if I put "Never" I get an error.</p>
<p>In my scenario I have an external monitoring UI which I use to shut down the service when required, but what happens now is that the pods terminate and new pods get created. What should I do? Please note that I cannot run it as kind: Pod, since I want to auto-scale the services and autoscaling of kind: Pod is not valid!</p>
<p>Please share your suggestions and view on this ! thanks in advance.</p>
| <p>HPA can be used with the following resources: <code>ReplicationController</code>, <code>Deployment</code>, <code>ReplicaSet</code> or <code>StatefulSet</code>. However HPA doesn't support scaling to 0.</p>
<p>There are some serverless frameworks that support scalability to zero in kubernetes such as <a href="https://knative.dev/" rel="nofollow noreferrer">Knative</a> and <a href="https://keda.sh/" rel="nofollow noreferrer">Keda</a>.</p>
<p>Your use case sounds much simpler though, as you're looking to scale to zero based on a manual action. You can achieve this by setting the number of replicas of your deployment to 0.</p>
<pre><code>kubectl scale --replicas=0 deployment/{deploymentName}
</code></pre>
<p>And then if you want to re-activate the service then increase the replicas back again.</p>
<pre><code>kubectl scale --replicas=1 deployment/{deploymentName}
</code></pre>
|
<p>I am trying to install docker inside an openshift pod like below.</p>
<pre><code>sh-4.2$ yum install docker
Loaded plugins: ovl, product-id, search-disabled-repos, subscription-manager
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/.dbenv.lock'
You need to be root to perform this command.
sh-4.2$ id
uid=1001(1001) gid=0(root) groups=0(root)
sh-4.2$
</code></pre>
<p>I tried applying the following:</p>
<pre><code>oc adm policy add-scc-to-user anyuid -z default
</code></pre>
<p>Could you please help.</p>
<p>You should specify "0" using "runAsUser" as follows, because "anyuid" uses the UID configured when the image was built if you do not specify a UID in your container. Your image appears to have been built with UID 1001, as far as I can tell from the output above.</p>
<pre><code> containers:
- name: YOURCONTAINERNAME
:
securityContext:
runAsUser: 0
</code></pre>
|
<p>I am trying to set up an LXC container (debian) as a Kubernetes node.
I have gotten far enough that the only thing in the way is the kubeadm init script...</p>
<pre class="lang-sh prettyprint-override"><code>error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.4.44-2-pve/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/5.4.44-2-pve\n", err: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>After some research I figured out that I probably need to add the following: <code>linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay</code>
But adding this to <code>/etc/pve/lxc/107.conf</code> doesn't do anything.</p>
<p><strong>Does anybody have a clue how to add the linux kernel modules?</strong></p>
<p>To allow loading arbitrary modules with <code>modprobe</code> inside a privileged Proxmox LXC container, add these options to the container config:</p>
<pre><code>lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: proc:rw sys:rw
lxc.mount.entry: /lib/modules lib/modules none bind 0 0
</code></pre>
<p>Before that, you must first create the <code>/lib/modules</code> folder inside the container.</p>
|
<p>I am deploying a web app on Kubernetes and I want to set a liveness probe for this application.
When I configure my deployment with a liveness probe, the kubelet starts health checking. I defined httpGet with the scheme parameter set to "HTTP", but the kubelet randomly uses the https scheme.</p>
<p>This is my liveness probe configuration:</p>
<pre><code>livenessProbe:
failureThreshold: 4
httpGet:
path: /
port: 80
scheme: HTTP
initialDelaySeconds: 40
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 2
</code></pre>
<p>This is result from kubelet:</p>
<p>kubectl describe pod greenlight-7877dc58b7-6s78l</p>
<p>output:</p>
<blockquote>
<p>Warning Unhealthy 31s (x4 over 46s) kubelet Liveness
probe failed: Get "https://10.244.4.182/": dial tcp 10.244.4.182:443:
connect: connection refused</p>
</blockquote>
<p>Kubernetes version: v1.19.9</p>
<p>Thanks for the help!</p>
| <p>Since you are explicitly stating livenessProbe to use HTTP, it's probably your application that redirects traffic to HTTPS. Make sure that your application returns a <code>200 OK</code> on basepath <code>/</code>, and not a redirection (any of <code>3xx</code> codes).</p>
<p>You can either fix that, or use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">TCP probe</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: goproxy
labels:
app: goproxy
spec:
containers:
- name: goproxy
image: k8s.gcr.io/goproxy:0.1
ports:
- containerPort: 8080
readinessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 15
periodSeconds: 20
</code></pre>
|
<p>I'm trying to do a loop with a range in Helm using 2 variables. What I have:</p>
<p><strong>values.yaml</strong></p>
<pre><code>master:
slave1:
- slave1value1
- slave1value2
slave2:
- slave2value1
- slave2value2
</code></pre>
<p>My actual loop.</p>
<pre><code>{{- range .Values.master.slave1 }}
name: http://slave1-{{ . }}
{{- end }}
{{- range .Values.master.slave2 }}
name: http://slave2-{{ . }}
{{- end }}
</code></pre>
<p>This is actually doing what i need, the output will be like this...</p>
<p>looping on <code>.Values.master.slave1</code></p>
<pre><code>name: http://slave1-slave1value1
name: http://slave1-slave1value2
</code></pre>
<p>looping on <code>.Values.master.slave2</code></p>
<pre><code>name: http://slave2-slave1value1
name: http://slave2-slave1value2
</code></pre>
<p>This is fully working for now. The question is: can I achieve the same result using just one loop block? I tried this.</p>
<pre><code>{{ alias := .Values.master }}
{{- range $alias }}
name: http://{{ . }}-{{ $alias.name }}
{{- end }}
</code></pre>
<p>But the output is not what I'm expecting. Thanks in advance.</p>
<p>Hi @DavidMaze, I made it work by changing the order of the "range" in the loop.</p>
<p>This doesn't work.</p>
<pre><code>{{- $key, $values := range .Values.master -}}
{{- $value := range $values -}}
name: http://{{ $key }}-{{ $value }}
{{ end -}}
{{- end -}}
</code></pre>
<p>This work as expected :)</p>
<pre><code>{{- range $key, $values := .Values.master -}}
{{- range $value := $values -}}
name: http://{{ $key }}-{{ $value }}
{{ end -}}
{{- end -}}
</code></pre>
|
<p>I frequently encounter issues will kubectl port forwarding and want to write a bash script to auto-reconnect. How do you grep for something on a stream and "fail" when it's found? I've found questions like <a href="https://stackoverflow.com/questions/31816132/how-to-grep-a-continuous-stream-from-the-cli-and-exit-if-string-found">this</a> for "tailing a file" but I don't want to write the output of kubectl to a log file and use a separate process to tail - that feels too complicated. Also I'm aware that grep -q doesn't work on steams e.g. see <a href="https://stackoverflow.com/questions/7178888/grep-q-not-exiting-with-tail-f">here</a>.</p>
<p>So far I have tried this:</p>
<pre><code>kubectl port-forward deployment/haproxy 8080:8080 | grep -q --line-buffered error
</code></pre>
<p>Which correctly prints</p>
<pre><code>E0528 21:30:14.643696 95553 portforward.go:400] an error occurred forwarding 8080 -> 8080: error forwarding port 8080 to pod 7aa1822dab53ddd732bc8fdc9575e1820ab476be2be097a472eb838f5eec22b2, uid : exit status 1: 2021/05/29 01:30:14 socat[13287] E connect(5, AF=2 127.0.0.1:1883, 16): Connection refused
</code></pre>
<p>But how do I return after error so I can do something like this:</p>
<pre><code>while :
do
kubectl port-forward deployment/haproxy 8080:8080 | grep -q --line-buffered error | return on error
echo "port forwarding failed trying again!"
done
</code></pre>
<p>How about:</p>
<pre><code>#!/usr/bin/env bash
if kubectl port-forward deployment/haproxy 8080:8080 2>&1 |
grep -Fq --line-buffered error; then
printf >&2 "port forwarding failed trying again!\n" &&
exit
fi
</code></pre>
<p>If it is inside a loop, replace <code>exit</code> with <code>break</code>.</p>
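<p>Putting that together with the reconnect loop from the question, one way to structure it (a sketch; the <code>error</code> string matched here is whatever appears in kubectl's log output, so adjust as needed) is to factor the grep pipeline into a small function:</p>
<pre class="lang-sh prettyprint-override"><code>#!/usr/bin/env bash
# Succeeds (exit 0) when the given command's combined output contains "error".
stream_has_error() {
  "$@" 2>&1 | grep -Fq --line-buffered error
}

# Re-establish the port-forward every time an error line is seen.
port_forward_forever() {
  while stream_has_error kubectl port-forward deployment/haproxy 8080:8080; do
    echo "port forwarding failed, trying again!" >&2
    sleep 1
  done
}
</code></pre>
<p>Because <code>stream_has_error</code> takes the command as arguments, the grep logic can be exercised against any stream producer before wiring it to <code>kubectl</code>.</p>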
|
<p>Istio can route traffic based off headers and such. There are <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">great examples</a> of how to do this in the Istio docs.</p>
<p>Istio can also validate your JWT. The Istio docs also <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/" rel="nofollow noreferrer">cover that</a>.</p>
<p>But I can't seem to find a way to get my JWT Validated, then use the user claim found in the JWT Json to route traffic. The example I linked to just expects the user to be plain text in a header.</p>
<p>How can an Istio Virtual Service be set up to route based on a claim in a JWT (preferably one it validated)?</p>
<p>You can implement this using an <strong>Istio authorization policy</strong>. I did something similar with Keycloak and Kong to restrict user traffic at the API gateway level when the required claim or roles were not present.</p>
<p><a href="https://github.com/binc75/istio-jwt#inspect-jtw-token" rel="nofollow noreferrer">Here</a> is one nice example of JWT auth with istio:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: backend
namespace: default
spec:
selector:
matchLabels:
app: backend
jwtRules:
- issuer: "${KEYCLOAK_URL}/auth/realms/istio"
jwksUri: "${KEYCLOAK_URL}/auth/realms/istio/protocol/openid-connect/certs"
---
# To allow only requests with a valid token, create an authorization policy
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: backend
namespace: default
spec:
selector:
matchLabels:
app: backend
action: ALLOW
rules:
  - when:
- key: request.auth.claims[preferred_username]
values: ["testuser"]
</code></pre>
<p>Example link : <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/</a></p>
<p>Another nice example with OIDC : <a href="https://www.jetstack.io/blog/istio-oidc" rel="nofollow noreferrer">https://www.jetstack.io/blog/istio-oidc</a></p>
<p>RBAC and group list checks : <a href="https://istio.io/v1.4/docs/tasks/security/authorization/rbac-groups/" rel="nofollow noreferrer">https://istio.io/v1.4/docs/tasks/security/authorization/rbac-groups/</a></p>
|
<p>I am trying to create a helm chart of kind ConfigMap that will replace the following command from kubernates.</p>
<pre><code>kubectl create configmap my-config -n $namespace --from-file=./my-directory
</code></pre>
<p><code>my-directory</code> contains around 5 files, 2 of them are properties file and 2 of them jpg file. I see the following result for <code>kubectl get cm</code>, I can see <code>4</code> DATA files in configMap</p>
<pre><code>[admin@cluster ~]$ kubectl get cm
NAME DATA AGE
warm-up-config 4 41m
</code></pre>
<p>I created a template as follows; it works if I include only the properties files, but if I add the jpg files it doesn't work at all:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
{{ (.Files.Glob "resources/*").AsConfig | nindent 2 }}
</code></pre>
<p>Does anyone know how I make this work.</p>
<p>JPG files are binary, and should be added as such, under <code>binaryData</code> instead of <code>data</code>:</p>
<pre><code>binaryData:
  file.jpg: {{ .Files.Get "/path/to/file.jpg" }}
</code></pre>
<p>Files in <code>binaryData</code> field must be encoded with base64, so:</p>
<pre><code>{{ .Files.Get "/path/to/file.jpg" | b64enc }}
</code></pre>
<p>Don't forget proper indentation:</p>
<pre><code>{{ .Files.Get "/path/to/file.jpg" | b64enc | nindent 4 }}
</code></pre>
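<p>If you want to keep the single-glob style from the question, one option (a sketch, assuming the text and binary files can be separated by extension) is to split the directory into a <code>data</code> and a <code>binaryData</code> section; Helm's <code>.AsSecrets</code> base64-encodes file contents, which is exactly what <code>binaryData</code> expects:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
{{ (.Files.Glob "resources/*.properties").AsConfig | nindent 2 }}
binaryData:
{{ (.Files.Glob "resources/*.jpg").AsSecrets | nindent 2 }}
</code></pre>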
|
<p>We are just getting started with k8s (bare metal on Ubuntu 20.04). Is it possible for ingress traffic arriving at a host for a load balanced service to go to a pod running on that host (if one is available)?</p>
<p>We have some apps that use client side consistent hashing (using customer ID) to select a service instance to call. The service instances are stateless but maintain in memory ML models for each customer. So it is useful (but not essential) to have repeated requests for a given customer go to the same service. Then we can just use antiAffinity to have one pod per host.</p>
<p>Our existing service discovery mechanism lets the clients find all the instances of the service and the nodes they are running on. All our k8s nodes are running the Nginx ingress controller.</p>
| <p>I finally got this figured out. This was way harder than it should be IMO! <strong>Update: It's not working. Traffic frequently goes to the wrong pod.</strong></p>
<p>The service needs <code>externalTrafficPolicy: Local</code> (see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">docs</a>).</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: starterservice
spec:
type: LoadBalancer
selector:
app: starterservice
ports:
- port: 8168
externalTrafficPolicy: Local
</code></pre>
<p>The Ingress needs <code>nginx.ingress.kubernetes.io/service-upstream: "true"</code> (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#service-upstream" rel="nofollow noreferrer">service-upstream docs</a>).</p>
<p>The <code>nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com"</code> bit is because our service discovery updates DNS so each instance of the service includes the name of the host it is running on in its DNS name.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: starterservice
namespace: default
annotations:
nginx.ingress.kubernetes.io/server-alias: "~^starterservice-[a-z0-9]+\\.example\\.com"
nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
rules:
- host: starterservice.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: starterservice
port:
number: 8168
</code></pre>
<p>So now a call <code>https://starterservice-foo.example.com</code> will go to the instance running on k8s host foo.</p>
|
<p>I have Django Channels (with Redis) served by Daphne, running behind Nginx ingress controller, proxying behind a LB, all setup in Kubernetes. The Websocket is upgraded and everything runs fine... for a few minutes. After between 5-15min (varies), my daphne logs (set in -v 2 to debug) show:</p>
<pre><code>WARNING dropping connection to peer tcp4:10.2.0.163:43320 with abort=True: WebSocket ping timeout (peer did not respond with pong in time)
</code></pre>
<p>10.2.0.163 is the cluster IP address of my Nginx pod. Immediately after, Nginx logs the following:</p>
<pre><code>[error] 39#39: *18644 recv() failed (104: Connection reset by peer) while proxying upgraded connection [... + client real IP]
</code></pre>
<p>After this, the websocket connection gets weird: the client can still send messages to the backend, but the same websocket connection in Django Channels no longer receives group messages, as if the channel had unsubscribed from the group. I know my code works since everything runs smoothly until the error gets logged, but I'm guessing there is a configuration error somewhere that causes the problem. I'm sadly all out of ideas. Here is my nginx ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
acme.cert-manager.io/http01-edit-in-place: "true"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.org/websocket-services: "daphne-svc"
name: ingress
namespace: default
spec:
tls:
- hosts:
- mydomain
secretName: letsencrypt-secret
rules:
- host: mydomain
http:
paths:
- path: /
backend:
service:
name: uwsgi-svc
port:
number: 80
pathType: Prefix
- path: /ws
backend:
service:
name: daphne-svc
port:
number: 80
pathType: Prefix
</code></pre>
<p>Configured according to <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/websocket" rel="nofollow noreferrer">this</a> and <a href="https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/" rel="nofollow noreferrer">this</a>. Installation with helm:</p>
<pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ngingress ingress-nginx/ingress-nginx
</code></pre>
<p>Here is my Django Channels consumer:</p>
<pre><code>class ChatConsumer(AsyncWebsocketConsumer):
async def connect(self):
user = self.scope['user']
if user.is_authenticated:
self.inbox_group_name = "inbox-%s" % user.id
device = self.scope.get('device', None)
added = False
if device:
added = await register_active_device(user, device)
if added:
# Join inbox group
await self.channel_layer.group_add(
self.inbox_group_name,
self.channel_name
)
await self.accept()
else:
await self.close()
else:
await self.close()
async def disconnect(self, close_code):
user = self.scope['user']
device = self.scope.get('device', None)
if device:
await unregister_active_device(user, device)
# Leave room group
if hasattr(self, 'inbox_group_name'):
await self.channel_layer.group_discard(
self.inbox_group_name,
self.channel_name
)
"""
Receive message from room group; forward it to client
"""
async def group_message(self, event):
message = event['message']
# Send message to WebSocket
await self.send(text_data=json.dumps(message))
async def forward_message_to_other_members(self, chat, message, notification_fallback=False):
user = self.scope['user']
other_members = await get_other_chat_members(chat, user)
for member in other_members:
if member.active_devices_count > 0:
#this will send the message to the user inbox; each consumer will handle it with the group_message method
await self.channel_layer.group_send(
member.inbox.group_name,
{
'type': 'group_message',
'message': message
}
)
else:
#no connection for this user, send a notification instead
if notification_fallback:
await ChatNotificationHandler().send_chat_notification(chat, message, recipient=member, author=user)
</code></pre>
<p>I ended up adding a ping interval on the client and increasing the nginx timeout to 1 day, which changed the problem but also shows it's probably not an nginx/daphne configuration problem.</p>
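<p>Concretely, the 1-day timeout corresponds to bumping the proxy-timeout annotations already present on the ingress in the question (86400 seconds = 24 hours):</p>
<pre><code>nginx.ingress.kubernetes.io/proxy-read-timeout: "86400"
nginx.ingress.kubernetes.io/proxy-send-timeout: "86400"
</code></pre>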
|
<p>I am working on an angular application and deployed it in kubernetes. I can access my application through Nginx Ingress.</p>
<p>I am using angular router to enable navigation through different components in my app.</p>
<p><em>Using the deployed application I tried to navigate through the different components; when I click refresh on the browser or directly access a specific URL path, I get a 404 Not Found page.</em></p>
<p>For example, if one accesses URL <code>mycompany.domain.com</code>, it shows the home component. In my angular router I have a <code>/user</code> path that points to user component.</p>
<p>Upon navigating to user menu, my new URL will now be <code>mycompany.domain.com/user</code> - and it is all working as expected.
However if I refresh the current page, it will become 404 Not Found Page, which is the problem.</p>
<p><strong>My few thoughts:</strong></p>
<ol>
<li>The router is part of the SPA, and of course will be loaded once the SPA is loaded.</li>
<li>The URL path <code>/user</code> is only known by the router in the SPA - so when we try to access mycompany.domain.com/user directly, the server does not find any matching resource.</li>
<li>The only one who can understand the <code>/user</code> url path is my SPA - which is not loaded yet because the server already decided that the resource is not found.</li>
</ol>
<p>So I concluded (but still to try) the problem can occur anywhere I deploy my SPA regardless my ingress or server configuration.</p>
<p><strong>My solution</strong> is to use the Angular router's <code>useHash</code> option. It means my navigation path comes after a <code>#</code> and is treated as a URL fragment, like this: mycompany.domain.com/#/user. In this case the server will not try to interpret the fragment, as it is meant to be understood by the page. I was inspired to do so by the Vue.js router.</p>
<p><strong>My questions are:</strong></p>
<ol>
<li>Is my understanding (and conclusion) correct?</li>
<li>Is there any other solution? Angular doesn't use the hash by default, and I am sure there is a reason for that, because it wouldn't make sense if it didn't work when deployed.</li>
<li>Can URL rewriting help me? I have tried to look into it myself, but its usage does not match my conclusions.</li>
</ol>
<p>I am not an SPA expert, I am just starting, and I would appreciate it if someone would correct and answer me.</p>
<p><strong>Save this code as <code>web.config</code>, then paste the file into the dist folder</strong></p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.web>
<compilation targetFramework="4.0" />
<customErrors mode="On" redirectMode="ResponseRewrite">
<error statusCode="404" redirect="/index.html" />
</customErrors>
</system.web>
<system.webServer>
<httpErrors errorMode="Custom">
<remove statusCode="404"/>
<error statusCode="404" path="/index.html" responseMode="ExecuteURL"/>
</httpErrors>
</system.webServer>
</configuration>
</code></pre>
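<p>The same idea applies for anyone serving the <code>dist</code> folder with nginx instead of IIS (the more common setup behind an Nginx ingress): fall back to <code>index.html</code> for any path that doesn't match a real file, so the Angular router can handle it (paths here are an assumption based on the stock nginx image):</p>
<pre><code>location / {
    root /usr/share/nginx/html;
    try_files $uri $uri/ /index.html;
}
</code></pre>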
|
<p>I'm running a MySQL image in my one-node cluster for local testing purposes only.</p>
<p>I would like to be able to delete the database when needed to have the image build a new database from scratch, but I can't seem to find where or how I can do that easily.</p>
<p>I am on Windows, using Docker Desktop to manage my Docker images and Kubernetes cluster with WSL2. The pod uses a persistent volume/claim which can be seen below.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 3Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/MySQLTemp"
type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p>The volume part of my deployment looks like:</p>
<pre class="lang-yaml prettyprint-override"><code> volumeMounts:
- name: mysql-persistent
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
<p>Is there a command I can use to either see where this database is stored on my Windows or WSL2 machine so I can delete it manually, or delete it from the command line through <code>docker</code> or <code>kubectl</code>?</p>
| <p>For anyone looking in the future for this solution, and doesn't want to dredge through deep github discussions, my solution was this:</p>
<p>Change <code>hostPath:</code> to <code>local:</code> when defining the path. hostPath is apparently meant for when your Kubernetes node provider has external persistent disks, like GCE or AWS.</p>
<p>Second, the path pointing to the symlink to your local machine from Docker Desktop can apparently be found at <code>/run/desktop/mnt/host/c</code> for your C drive. I set my path to <code>/run/desktop/mnt/host/c/MySQLTemp</code>, created a <code>MySQLTemp</code> folder in the root of my C drive, and it works.</p>
<p>Third, a <code>local:</code> path definition requires a nodeAffinity. You can tell it to use Docker Desktop like this:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
</code></pre>
|
<pre><code>[root@kubemaster ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod1deployment-c8b9c74cb-hkxmq 1/1 Running 0 12s 192.168.90.1 kubeworker1 <none> <none>
[root@kubemaster ~]# kubectl logs pod1deployment-c8b9c74cb-hkxmq
2020/05/16 23:29:56 Server listening on port 8080
[root@kubemaster ~]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m <none>
pod1service ClusterIP 10.101.174.159 <none> 80/TCP 16s creator=sai
</code></pre>
<p>Curl on master node:</p>
<pre><code>[root@kubemaster ~]# curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
</code></pre>
<p>Curl on worker node 1 is sucessfull for cluster IP ( this is the node where pod is running )</p>
<pre><code>[root@kubemaster ~]# ssh kubeworker1 curl -m 2 -v -s http://10.101.174.159:80
Hello, world!
Version: 1.0.0
Hostname: pod1deployment-c8b9c74cb-hkxmq
</code></pre>
<p>Curl fails on other worker node as well : </p>
<pre><code>[root@kubemaster ~]# ssh kubeworker2 curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
* Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
</code></pre>
| <p><strong>I was facing the same issue so this is what I did and it worked:</strong></p>
<p><strong>Brief:</strong> I am running 2 VMs for a 2-node cluster: 1 master node and 1 worker node. A deployment is running on the worker node. I wanted to curl from the master node so that I could get a response from my application running inside a pod on the worker node. For that I deployed a service on the worker node, which then exposed that set of pods inside the cluster.</p>
<p><strong>Issue:</strong> After deploying the service and doing <code>kubectl get service</code>, it provided me with the <code>ClusterIP</code> of that service and a port (BTW I used <code>NodePort</code> instead of ClusterIP when writing the service.yaml). But when curling that IP address and port it was just hanging, and after some time giving a timeout.</p>
<p><strong>Solution:</strong> Then I looked at the hierarchy. First I need to contact the node on which the service's pod is located, on the port given by the NodePort (i.e. the one between 30000-32767). So first I did <code>kubectl get nodes -o wide</code> to get the internal IP address of the required node (mine was 10.0.1.4), then <code>kubectl get service -o wide</code> to get the port (the one between 30000-32767), and curled it. So my curl command was <code>curl http://10.0.1.4:30669</code> and I was able to get the output.</p>
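<p>For reference, the service manifest for this approach looks roughly like the following (selector, service port, and container port are taken from the question's setup; <code>nodePort</code> can also be left out and auto-assigned from the 30000-32767 range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: pod1service
spec:
  type: NodePort
  selector:
    creator: sai
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30669
</code></pre>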
|
<p>We have recently set up a AKS cluster with a NGINX controller.</p>
<p>Access <em>seemed</em> ok, but then we found that occasional requests are unable to connect.</p>
<p>To demonstrate the problem we use a short powershell script which makes repeated requests to the URL, writes out the response's status code, waits 0.5 seconds, then repeats.</p>
<pre><code>$url = "https://staging.[...].[...]"
$i = 0
while($true)
{
$statusCode = "ERROR"
try{
$statusCode = (invoke-webrequest $url).statuscode
}
catch{
$statusCode = $_
}
$i = $i + 1
write-host "$i Checking $url --> $statusCode"
start-sleep -seconds 0.5
}
</code></pre>
<p>When I run this script it can run for about 200 requests, each time returning a <code>200 (OK)</code> response, then it will pause for about 30 seconds (which I assume to be the timeout period of the <code>Invoke-WebRequest</code> method) then write "Unable to connect to the remote server".</p>
<p><a href="https://i.stack.imgur.com/8E6cI.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8E6cI.jpg" alt="enter image description here" /></a></p>
<p>To debug the problem we enabled port-forwarding to bypass the load balancer, thus addressing the pod directly (with the host header being added): No problem; the powershell script consistently shows <code>200</code> responses over at least 10 minutes.</p>
<p>We have also enabled port-forwarding to the NGINX controller and repeated the test: Again, consistent <code>200</code> responses.</p>
<p>But without port-forwarding enabled, requests to the URL - now going through the load balancer - show intermittent connection problems.</p>
<p>Strangely, when I run the script these connection problems happen every 200 or 201 requests, yet when a colleague ran the same script he got no response for every 2nd or 3rd request. I can repeat this and continue to see these connection problems at consistent intervals.</p>
<p>UPDATE:<br />
The load balancer looks like this...</p>
<p><a href="https://i.stack.imgur.com/EPiMK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EPiMK.png" alt="enter image description here" /></a></p>
| <p>I can't explain <em>why</em> but we found out that if we changed the VMs in our node pool from burstable VMs to non-burstable (from 'B' class to 'D' class) then the problem went away.</p>
|
<p>We <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#cos" rel="noreferrer">followed this guide</a> to use GPU-enabled nodes in our existing cluster, but when we try to schedule pods we get a <code>2 Insufficient nvidia.com/gpu</code> error.</p>
<p><strong>Details:</strong></p>
<p>We are trying to use GPU in our existing cluster and for that we're able to successfully create a NodePool with a single node having GPU enabled.</p>
<p>Then as a next step according to the guide above we've to create a daemonset and we're also able to run the DS successfully.</p>
<p>But now when we are trying to schedule the Pod using the following resource section the pod becomes un-schedulable with this error <code>2 insufficient nvidia.com/gpu</code></p>
<pre><code> resources:
limits:
nvidia.com/gpu: "1"
requests:
cpu: 200m
memory: 3Gi
</code></pre>
<p><strong>Specs:</strong></p>
<pre><code>Node version - v1.18.17-gke.700 (+ v1.17.17-gke.6000) tried on both
Instance type - n1-standard-4
image - cos
GPU - NVIDIA Tesla T4
</code></pre>
<p>Any help or pointers to debug this further will be highly appreciated.</p>
<p>TIA,</p>
<hr />
<p>output of <code>kubectl get node <gpu-node> -o yaml</code> [Redacted]</p>
<pre><code>apiVersion: v1
kind: Node
metadata:
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/instance-type: n1-standard-4
beta.kubernetes.io/os: linux
cloud.google.com/gke-accelerator: nvidia-tesla-t4
cloud.google.com/gke-boot-disk: pd-standard
cloud.google.com/gke-container-runtime: docker
cloud.google.com/gke-nodepool: gpu-node
cloud.google.com/gke-os-distribution: cos
cloud.google.com/machine-family: n1
failure-domain.beta.kubernetes.io/region: us-central1
failure-domain.beta.kubernetes.io/zone: us-central1-b
kubernetes.io/arch: amd64
kubernetes.io/os: linux
node.kubernetes.io/instance-type: n1-standard-4
topology.kubernetes.io/region: us-central1
topology.kubernetes.io/zone: us-central1-b
name: gke-gpu-node-d6ddf1f6-0d7j
spec:
taints:
- effect: NoSchedule
key: nvidia.com/gpu
value: present
status:
...
allocatable:
attachable-volumes-gce-pd: "127"
cpu: 3920m
ephemeral-storage: "133948343114"
hugepages-2Mi: "0"
memory: 12670032Ki
pods: "110"
capacity:
attachable-volumes-gce-pd: "127"
cpu: "4"
ephemeral-storage: 253696108Ki
hugepages-2Mi: "0"
memory: 15369296Ki
pods: "110"
conditions:
...
nodeInfo:
architecture: amd64
containerRuntimeVersion: docker://19.3.14
kernelVersion: 5.4.89+
kubeProxyVersion: v1.18.17-gke.700
kubeletVersion: v1.18.17-gke.700
operatingSystem: linux
osImage: Container-Optimized OS from Google
</code></pre>
<p>Tolerations from the deployments</p>
<pre><code> tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
</code></pre>
| <p>The <code>nvidia-gpu-device-plugin</code> should be installed in the GPU node as well. You should see <code>nvidia-gpu-device-plugin</code> DaemonSet in your <code>kube-system</code> namespace.</p>
<p>It should be automatically deployed by Google, but if you want to deploy it on your own, run the following command: <code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml</code></p>
<p>It will install the GPU plugin in the node and afterwards your pods will be able to consume it.</p>
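<p>Once the plugin is running, a pod can only use the GPU if it tolerates the taint and explicitly requests the resource. A minimal sketch (the pod name and image are just examples):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test                   # example name
spec:
  tolerations:
  - key: nvidia.com/gpu            # matches the taint shown on the GPU node
    operator: Exists
    effect: NoSchedule
  containers:
  - name: cuda
    image: nvidia/cuda:11.0-base   # example image
    resources:
      limits:
        nvidia.com/gpu: 1          # extended resource exposed by the device plugin
```

<p>Without the <code>nvidia.com/gpu</code> limit the container may be scheduled but gets no GPU devices.</p>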
|
<p>When I try to retrieve logs from my pods, I note that K8s does not print all the logs, and I know that because I observe that logs about microservice initialization are not present in the head of logs.</p>
<p>Considering that my pods print a lot of logs in a long observation period, does someone know if K8s has a limit in showing all logs?</p>
<p>I also tried to set <code>--since</code> parameter in the <code>kubectl logs</code> command to get all logs in a specific time range, but it seems to have no effect.</p>
<p>Thanks.</p>
| <p>Pod logs are managed by the container runtime on each node, and most runtimes rotate log files once they reach a size limit, so <code>kubectl logs</code> typically returns only the most recent retained files. Check the log-rotation settings of the runtime engine in use.</p>
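<p>For example, if the node runtime is Docker with its default <code>json-file</code> logging driver, rotation is configured in <code>/etc/docker/daemon.json</code> on each node (the values below are illustrative):</p>

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
```

<p><code>kubectl logs</code> only returns what is still present in the retained files, so older entries disappear once rotation kicks in. On containerd-based nodes the equivalent knobs are the kubelet's <code>containerLogMaxSize</code> and <code>containerLogMaxFiles</code> settings.</p>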
|
<p>My pod has a volume as:</p>
<pre class="lang-json prettyprint-override"><code> "volumes": [
{
"name": "configs",
"secret": {
"defaultMode": 420,
"secretName": "some_secret"
}
},
....]
</code></pre>
<p>I want to be able to read it using Python as <code>V1Volume</code>.</p>
<p>Tried to do:</p>
<pre class="lang-py prettyprint-override"><code> from kubernetes import config
config.load_incluster_config()
spec = client.V1PodSpec()
</code></pre>
<p>But I'm stuck as it gives me</p>
<pre class="lang-py prettyprint-override"><code> raise ValueError("Invalid value for `containers`, must not be `None`")
</code></pre>
<p>and I'm not sure how to continue. How can I get the volumes from the <code>V1PodSpec</code>?</p>
| <p>It gives you the error because you initialise <code>V1PodSpec</code> without any arguments. <code>V1PodSpec</code> is used to create pods, not to read them.</p>
<p>To read pod <code>spec</code> from Kubernetes:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client,config
config.load_kube_config()
# or
# config.load_incluster_config()
core_api = client.CoreV1Api()
response = core_api.read_namespaced_pod(name="debug-pod", namespace='dev')
# access volumes in the returned response
type(response.spec.volumes[0])
# returns:
# <class 'kubernetes.client.models.v1_volume.V1Volume'>
</code></pre>
|
<p>I am new to K8s autoscaling. I have a stateful application I am trying to find out which autoscaling method works for me. According to the documentation:</p>
<blockquote>
<p>if pods don't have the correct resources set, the Updater component
of VPA kills them so that they can be recreated by their controllers
with the updated requests.</p>
</blockquote>
<p>I want to know the downtime between killing the existing pods and creating the new ones. Or at least, how can I measure it for my application?</p>
<p>I am comparing the HPA and VPA approaches for my application.</p>
<p>The follow-up question is: how long does it take HPA to create a new pod when scaling up?</p>
| <p>There are a few things to clear up here:</p>
<ul>
<li><p>VPA does not create nodes, <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#cluster-autoscaler" rel="nofollow noreferrer">Cluster Autoscaler</a> is used for that. Vertical Pod Autoscaler allocates more (or less) CPUs and memory to existing pods and CA scales your node clusters based on the number of pending pods.</p>
</li>
<li><p>Whether to use HPA, VPA, CA, or some combination, depends on the needs of your application. Experimentation is the most reliable way to find which option works best for you, so it might take a few tries to find the right setup. HPA and VPA depend on metrics and some historic data. CA is recommended if you have a good understanding of your pods and containers needs.</p>
</li>
<li><p>HPA and VPA should not be used together to evaluate CPU/Memory. However, VPA can be used to evaluate CPU or Memory whereas HPA can be used to evaluate external metrics (like the number of HTTP requests or the number of active users, etc). Also, you can use VPA together with CA.</p>
</li>
<li><p>It's hard to evaluate the exact time needed for VPA to adjust and restart pods, or for HPA to scale up. The difference between the best-case and the worst-case scenario depends on many factors and can amount to a significant gap in time. You need to rely on metrics and observations in order to evaluate it.</p>
</li>
<li><p><a href="https://github.com/kubernetes-sigs/metrics-server#kubernetes-metrics-server" rel="nofollow noreferrer">Kubernetes Metrics Server</a> collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler.</p>
</li>
</ul>
<p>Below are some useful sources that would help you understand and choose the right solution for you:</p>
<ul>
<li><p><a href="https://medium.com/nerd-for-tech/autoscaling-in-kubernetes-hpa-vpa-ab61a2177950" rel="nofollow noreferrer">AutoScaling in Kubernetes ( HPA / VPA )</a></p>
</li>
<li><p><a href="https://www.replex.io/blog/kubernetes-in-production-best-practices-for-cluster-autoscaler-hpa-and-vpa" rel="nofollow noreferrer">Kubernetes Autoscaling in Production: Best Practices for Cluster Autoscaler, HPA and VPA</a></p>
</li>
<li><p><a href="https://platform9.com/blog/kubernetes-autoscaling-options-horizontal-pod-autoscaler-vertical-pod-autoscaler-and-cluster-autoscaler/" rel="nofollow noreferrer">Kubernetes Autoscaling Options: Horizontal Pod Autoscaler, Vertical Pod Autoscaler and Cluster Autoscaler</a></p>
</li>
</ul>
<p><strong>EDIT:</strong></p>
<p>Scaling up is a time sensitive operation. You should consider the average time it can take your pods to scale up. Two example scenarios:</p>
<ol>
<li>Best case scenario - 4 minutes:</li>
</ol>
<ul>
<li>30 seconds: Target metrics values updated</li>
<li>30 seconds: HPA checks the metrics values</li>
<li>< 2 seconds: pods are created and go into pending state</li>
<li>< 2 seconds: CA sees the pending pods and fires the calls to provision nodes</li>
<li>3 minutes: Cloud provider provisions the nodes and Kubernetes waits for them until they are ready</li>
</ul>
<ol start="2">
<li>(Reasonable) Worst case scenario - 12 minutes:</li>
</ol>
<ul>
<li>60 seconds: Target metrics values updated</li>
<li>30 seconds: HPA checks the metrics values</li>
<li>< 2 seconds: pods are created and go into pending state</li>
<li>< 2 seconds: CA sees the pending pods and fires the calls to provision nodes</li>
<li>10 minutes: Cloud provider provisions the nodes and Kubernetes waits for them until they are ready (depends on multiple factors, such as provider latency, OS latency, bootstrapping tools, etc.)</li>
</ul>
<p>Again, it is hard to estimate the exact time it would take so observation and metrics are the key here.</p>
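<p>For reference, a minimal HPA manifest of the kind discussed above (names and thresholds are illustrative; it requires the Metrics Server to be running):</p>

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa          # example name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # example target workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU crosses 70%
```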
|
<p>I attempted it, but there is an error. I also see <strong>'kubectl run --help' for usage.</strong>, but I can't fix it:</p>
<pre><code>kubectl run pod pod4 --image=aamirpinger/helloworld:latest --port=80 --annotaions=createdBy="Muhammad Shahbaz" --restart=Never
</code></pre>
<p><strong>Error: unknown flag: --annotaions</strong></p>
| <p><code>kubectl run</code> supports specifying annotations via the <code>--annotations</code> flag that can be specified multiple times to apply multiple annotations.</p>
<p>For example:</p>
<pre><code>$ kubectl run --image myimage --annotations="foo=bar" --annotations="another=one" mypod
</code></pre>
<p>results in the following:</p>
<pre><code>$ kubectl get pod mypod -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
foo: bar
another: one
[...]
</code></pre>
|
<p>I have a RabbitMQ pod and I configured it to use persistent storage in case of pod restart/deletion, by mounting a volume.</p>
<p>I configured everything but not able to get through this error:</p>
<pre><code>/usr/lib/rabbitmq/bin/rabbitmq-server: 42:
/usr/lib/rabbitmq/bin/rabbitmq-server:
cannot create /var/lib/rabbitmq/mnesia/rabbit@reana-message-broker-5f45f797ff-cs79m.pid:
Permission denied
</code></pre>
<p>Here're my config file and deployment app for kubernetes</p>
<ol>
<li><code>Dockerfile</code></li>
</ol>
<pre><code>FROM ubuntu:16.04
# hadolint ignore=DL3009
RUN apt-get update
# hadolint ignore=DL3008
RUN apt-get -y install --no-install-recommends rabbitmq-server
RUN apt-get -y autoremove && apt-get -y clean
# hadolint ignore=DL3001
RUN service rabbitmq-server start
COPY start.sh /start.sh
RUN chmod 755 ./start.sh
EXPOSE 5672
EXPOSE 15672
CMD ["/start.sh", "test", "1234"]
</code></pre>
<ol start="2">
<li><code>start.sh</code></li>
</ol>
<pre><code>#!/bin/sh
cat > /etc/rabbitmq/rabbitmq.conf <<EOF
listeners.tcp.default = 5672
default_user = <<"$1">>
default_pass = <<"$2">>
EOF
rabbitmq-server
</code></pre>
<ol start="3">
<li><code>rabbitmq.yaml</code></li>
</ol>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: message-broker
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 5672
targetPort: 5672
name: "tcp"
protocol: TCP
- port: 15672
targetPort: 15672
name: "management"
protocol: TCP
selector:
app: message-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: message-broker
namespace: {{ .Release.Namespace }}
spec:
replicas: 1
selector:
matchLabels:
app: message-broker
template:
metadata:
labels:
app: message-broker
spec:
containers:
- name: message-broker
image: {{ .Values.message_broker.image }}
imagePullPolicy: {{ .Values.components.message_broker.imagePullPolicy }}
ports:
- containerPort: 5672
name: tcp
- containerPort: 15672
name: management
volumeMounts:
- name: data
mountPath: /var/lib/rabbitmq/mnesia
env:
- name: RABBITMQ_DEFAULT_PASS
valueFrom:
secretKeyRef:
name: rabbitmq-secrets
key: password # password = root
- name: RABBITMQ_DEFAULT_USER
valueFrom:
secretKeyRef:
name: rabbitmq-secrets
key: user # user = root
...
nodeSelector:
....
volumes:
- name: data
hostPath:
path: /var/test/rabbitmq
</code></pre>
<p>Let me know what I might be missing. :)</p>
| <p>The volume you mounted in <code>/var/lib/rabbitmq/mnesia</code> is owned by root.</p>
<p>The rabbitmq process is running as <code>rabbitmq</code> user and doesn't have write access to this directory.</p>
<p>In your <code>start.sh</code> add:</p>
<pre><code>chown rabbitmq:rabbitmq /var/lib/rabbitmq/mnesia
</code></pre>
<p>before starting the rabbitmq-server process.</p>
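<p>If you prefer not to touch <code>start.sh</code>, an alternative sketch is an init container that fixes ownership before RabbitMQ starts. The UID/GID <code>999:999</code> below is an assumption; verify what the <code>rabbitmq</code> user maps to in your image (e.g. with <code>id rabbitmq</code>):</p>

```yaml
      initContainers:
      - name: fix-permissions
        image: busybox:1.34          # example helper image
        # chown the mounted hostPath so the rabbitmq user can write to it;
        # 999:999 is an assumed UID/GID - check your image
        command: ["sh", "-c", "chown -R 999:999 /var/lib/rabbitmq/mnesia"]
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq/mnesia
```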
|
<p>I am trying to create cluster by using <a href="https://gridscale.io/en/community/tutorials/kubernetes-cluster-with-kubeadm/" rel="nofollow noreferrer">this article</a> in my WSl Ubuntu. But It returns some errors.</p>
<p>Errors:</p>
<pre><code>yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo systemctl daemon-reload
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo systemctl restart kubelet
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.1
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>I don't understand the reason when I use <code>sudo systemctl restart kubelet</code>. Error like this occurs:</p>
<pre><code>docker service is not enabled, please run 'systemctl enable docker.service'
</code></pre>
<p>When I use:</p>
<pre><code>yusuf@DESKTOP-QK5VI8R:~/aws/kubs2$ systemctl enable docker.service
Failed to enable unit, unit docker.service does not exist.
</code></pre>
<p>But I have docker images still runnig:</p>
<p><a href="https://i.stack.imgur.com/Zs5eQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zs5eQ.png" alt="enter image description here" /></a></p>
<p>What is wrong while creating a Kubernetes cluster in WSL? Is there any good tutorial for creating a cluster in WSL?</p>
| <p>The tutorial you're following is designed for cloud virtual machines with a Linux OS on them (this is important since WSL works a bit differently).
For example, systemd is not present in WSL; the behaviour you're facing is currently <a href="https://github.com/MicrosoftDocs/WSL/issues/457" rel="nofollow noreferrer">in the development phase</a>.</p>
<p>What you need is to follow a tutorial designed for WSL (WSL2 in this case). Also make sure Docker is set up on the Windows machine and shares its features with WSL via the WSL integration. Please see the <a href="https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/" rel="nofollow noreferrer">Kubernetes on Windows desktop tutorial</a> (it uses KinD or minikube, which is enough for development and testing).</p>
<p>There is also a <a href="https://kubernetes.io/blog/2020/05/21/wsl-docker-kubernetes-on-the-windows-desktop/#minikube-enabling-systemd" rel="nofollow noreferrer">part about enabling systemd</a> which can potentially resolve your issue from the state you are in (I didn't test this, as I don't have a Windows machine).</p>
|
<p>Need help on how to configure TLS/SSL on k8s cluster for internal pod to pod communication over https. Able to curl http://servicename:port over http but for https i am ending up with NSS error on client pod.</p>
<p>I generated a self signed cert with CN=*.svc.cluster.local (As all the services in k8s end with this) and i am stuck on how to configure it on k8s.</p>
<p>Note: i exposed the main svc on 8443 port and i am doing this in my local docker desktop setup on windows machine.</p>
<ol>
<li>No Ingress --> Because communication happens within the cluster itself.</li>
<li>Without any CRD(custom resource definition) cert-manager</li>
</ol>
| <p>You can store your self-signed certificate in a Kubernetes Secret and mount it as a volume in the pod.</p>
<p>If you don't want to use a CRD or cert-manager, you can use the native Kubernetes certificates API to get a certificate issued for your service:</p>
<p><a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/</a></p>
<p>Managing a self-signed certificate across all pods and services might be hard, so I would suggest using a service mesh. A service mesh encrypts the network traffic using <code>mTLS</code>.</p>
<p><a href="https://linkerd.io/2.10/features/automatic-mtls/#:%7E:text=By%20default%2C%20Linkerd%20automatically%20enables,TLS%20connections%20between%20Linkerd%20proxies" rel="nofollow noreferrer">https://linkerd.io/2.10/features/automatic-mtls/#:~:text=By%20default%2C%20Linkerd%20automatically%20enables,TLS%20connections%20between%20Linkerd%20proxies</a>.</p>
<p>With a service mesh, mutual <strong>TLS</strong> for service-to-service communication is managed by the sidecar containers.</p>
<p><a href="https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/</a></p>
<p>In this case, no Ingress and no <strong>cert-manager</strong> are required.</p>
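<p>As a sketch of the Secret-plus-volume approach (the secret and mount names are examples), create a TLS secret from the generated files and mount it into the server pod:</p>

```yaml
# created beforehand with:
#   kubectl create secret tls internal-tls --cert=tls.crt --key=tls.key
spec:
  containers:
  - name: server
    image: my-server:latest      # example image
    volumeMounts:
    - name: tls
      mountPath: /etc/tls
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: internal-tls   # example secret name
```

<p>Client pods then need the CA certificate (mounted the same way) in their trust store so the handshake succeeds.</p>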
|
| <p>My natural thought is that even if nginx is just a daemon process on the k8s node, and not a pod (container) in the k8s cluster, it looks like it could still fulfill the ingress controller's job, because:
as a process on the k8s node, it can still talk to the apiserver to fetch service backend pod information, like IP addresses, so it can still be used as an HTTP proxy server to direct traffic to different services.</p>
<p>so 2 questions,</p>
<ol>
<li>why nginx ingress controller has to be a pod?</li>
<li>why nginx ingress controller only got 1 replica? and on which node? if nginx controller pod is dead, things will go unstable.</li>
</ol>
<p>Thanks!</p>
| <blockquote>
<p>why Nginx ingress controller has to be a pod?</p>
</blockquote>
<p>It is possible to run the Nginx controller as a DaemonSet in Kubernetes; running it as a plain daemon process directly on the node, however, is questionable.</p>
<p>Managing pods through a Kubernetes DaemonSet or Deployment is easy compared to managing a process on the node.</p>
<p>By default, an Nginx daemon process is not part of any Kubernetes node; if your cluster autoscales, will you install the Nginx process manually on each new node?</p>
<p>If you are thinking of creating your own AMI with the Nginx process inside, using it in the node pool, and scaling that pool, it's possible, but what about OS patching and maintenance?</p>
<blockquote>
<p>why nginx ingress controller only got 1 replica? and on which node? if
nginx controller pod is dead, things will go unstable.</p>
</blockquote>
<p>Running 1 replica is the default configuration, but you can implement HPA and increase the replicas as needed. Nginx is lightweight, so handling a large volume of traffic does not require many replicas.</p>
<p>Still, as needed, you can run multiple replicas with HPA, or increase the replicas manually to get high availability.</p>
|
<p>I am using EKS Fargate and created a fargate profile based on this doc: <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/fargate-profile.html</a>.</p>
<p>In the doc it says Fargate is used to allocate what pods are deployed to Fargate instead of Nodegroup or EC2. So my question is should I always have one Fargate profile in one cluster? Is there any reason to have more than 1?</p>
| <p>Consider that today Fargate profiles are immutable and they do not support wildcards. So in practice you end up using more than one Fargate profile when you have a rather dynamic environment and you want to selectively include/exclude namespaces, pod labels, etc. There are other reasons why you <em>may</em> want to use different Fargate profiles for different namespaces: the Fargate profile determines the network(s) your pods get deployed to and the execution IAM role of your pod (which you <em>could</em> use to selectively determine which ECR private repo the pod can pull from). This role is not to be confused with the <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">IAM roles for SA</a>.</p>
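<p>As an illustration of how multiple profiles arise, an eksctl-style config (cluster name, role ARN and labels are placeholders) can declare one profile per namespace, each with its own selectors and pod execution role:</p>

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster              # example
  region: us-east-1
fargateProfiles:
- name: fp-default
  selectors:
  - namespace: default
- name: fp-batch
  podExecutionRoleARN: arn:aws:iam::123456789012:role/batch-pod-role   # placeholder
  selectors:
  - namespace: batch
    labels:
      workload: nightly         # only pods carrying this label match
```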
|
<p>I am new to kubernetes and trying to create a deployment. So first I created a replicaset named rs.yml shown below.</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: web
labels:
env: dev
role: web
spec:
replicas: 4
selector:
matchLabels:
role: web
template:
metadata:
labels:
role: web
spec:
containers:
- name: nginx
image: nginx
</code></pre>
<p>and applied it using</p>
<blockquote>
<p>kubectl apply -f rs.yml</p>
</blockquote>
<p>Now, instead of rewriting all this in the deployment, I just want to reference this 'rs.yml' file or service inside my deployment.yml file.</p>
| <p>You can create a single YAML file and add both things, the workload and the service, inside it.</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: web
labels:
env: dev
role: web
spec:
replicas: 4
selector:
matchLabels:
role: web
template:
metadata:
labels:
role: web
spec:
containers:
- name: nginx
image: nginx
---
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
</code></pre>
<p>You can divide the resources using <code>---</code> to make a single YAML file.</p>
<p>Also, another suggestion: don't use <strong>ReplicaSets</strong> directly; by default a <strong>Deployment</strong> creates the <strong>ReplicaSet</strong> in the background.</p>
<p>You can use <strong>kind: Deployment</strong> and check with <code>kubectl get rs</code>; the ReplicaSet will still be there. The Deployment creates it in the background and manages it.</p>
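<p>Concretely, the ReplicaSet from the question can be replaced by a Deployment with the same pod template; essentially only <code>kind</code> changes:</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    env: dev
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
```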
|
<p>We are trying to analyze a specific requirement for a container implementation and would like to know the maximum number of labels that can be created for a given pod in Kubernetes.
Does such a limit exist, or is it not defined?</p>
<p>Thanks in advance.</p>
| <p>Based on the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">official Kubernetes documentation</a>, there is no defined limit on the number of labels per pod. But you should know that each valid label:</p>
<blockquote>
<ul>
<li>must be 63 characters or less (can be empty),</li>
<li>unless empty, must begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>),</li>
<li>could contain dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.</li>
</ul>
</blockquote>
<p>If you want to know where the 63-character limit comes from, I recommend <a href="https://stackoverflow.com/questions/50412837/">this thread</a>, <a href="https://datatracker.ietf.org/doc/html/rfc1123" rel="nofollow noreferrer">RFC 1123</a> and this <a href="https://stackoverflow.com/questions/32290167/what-is-the-maximum-length-of-a-dns-name#32294443">explanation</a>.</p>
<p>And as <a href="https://stackoverflow.com/users/2525872/leroy">Leroy</a> rightly mentioned in the comment:</p>
<blockquote>
<p>keep in mind that all this data is being retrieved by an api.</p>
</blockquote>
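<p>The value rules quoted above can be checked programmatically. A small sketch (the regex mirrors the documented rules for label <em>values</em>; it is not Kubernetes' internal validation code):</p>

```python
import re

# A label value: empty, or up to 63 chars, alphanumeric at both ends,
# with dashes, underscores and dots allowed in between.
_LABEL_VALUE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")

def is_valid_label_value(value: str) -> bool:
    return len(value) <= 63 and _LABEL_VALUE.match(value) is not None

print(is_valid_label_value("dev"))      # True
print(is_valid_label_value(""))         # True - empty values are allowed
print(is_valid_label_value("-bad"))     # False - must start alphanumeric
print(is_valid_label_value("a" * 64))   # False - too long
```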
|
<p>When I deploy the new release of the Kubernetes app I got that error</p>
<pre><code>Error: secret "env" not found
</code></pre>
<p><a href="https://i.stack.imgur.com/7TbF4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7TbF4.png" alt="enter image description here" /></a></p>
<p>even I have env in <strong>Custom Resource Definitions</strong> --> <strong>sealedsecrets.bitnami.com</strong></p>
<p><a href="https://i.stack.imgur.com/BMtg4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BMtg4.png" alt="enter image description here" /></a></p>
<p><strong>env.yaml</strong></p>
<pre><code>apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: env
namespace: api
spec:
encryptedData:
AUTH_COGNITO: AgCIxZX0Zv6gcK2p ----
template:
metadata:
creationTimestamp: null
name: env
namespace: api
type: Opaque
</code></pre>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
labels:
app: {{ .Release.Name }}
spec:
revisionHistoryLimit: 2
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
containers:
- name: {{ .Release.Name }}
image: "{{ .Values.imageRepository }}:{{ .Values.tag }}"
env:
{{- include "api.env" . | nindent 12 }}
resources:
limits:
memory: {{ .Values.memoryLimit }}
cpu: {{ .Values.cpuLimit }}
requests:
memory: {{ .Values.memoryRequest }}
cpu: {{ .Values.cpuRequest }}
{{- if .Values.healthCheck }}
livenessProbe:
httpGet:
path: /healthcheck
port: 4000
initialDelaySeconds: 3
periodSeconds: 3
timeoutSeconds: 3
{{- end }}
imagePullSecrets:
- name: {{ .Values.imagePullSecret }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
</code></pre>
<p><strong>UPDATE to my question</strong>:
in my secrets I don't have a secret called <code>env</code></p>
<p>plus that error in <code>regcred</code> inside <code>Sealedsecrets.bitnami.com</code></p>
<pre><code>Failed to unseal: no key could decrypt secret (.dockerconfigjson)
</code></pre>
<p><a href="https://i.stack.imgur.com/zfiAo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zfiAo.png" alt="enter image description here" /></a></p>
| <p>You ran <code>kubeseal</code> against the wrong Kubernetes cluster, or you tried to edit the name or namespace after encrypting without enabling those in the sealing scope. The first is more likely.</p>
|
<p>I have a Spring Boot application for which I have now generated a Helm chart. I am using a ConfigMap from k8s to create the application properties. But when I inspect the pod I see the error below:</p>
<blockquote>
<p>2021-05-31 09:39:31.815 WARN 1 --- [ost-startStop-1]
o.s.b.a.orm.jpa.DatabaseLookup : Unable to determine jdbc
url from datasource</p>
<p>org.springframework.jdbc.support.MetaDataAccessException: Could not
get Connection for extracting meta-data; nested exception is
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to
obtain JDBC Connection; nested exception is
org.postgresql.util.PSQLException: The connection attempt failed.</p>
</blockquote>
<p>I wrote the application properties as a ConfigMap:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: myconfigmap
data:
application.properties: |-
server.port = 8080
spring.datasource.url={{ .Values.database.url }}
spring.datasource.username={{ .Values.database.username }}
spring.datasource.password={{ .Values.database.password }}
</code></pre>
<p>In deployment.yaml I call this using it:</p>
<pre><code> envFrom:
- configMapRef:
name: myconfigmap
</code></pre>
<p>I overwrite the .Values.database... values using the Key Vault from Azure. What is the best way to make this file available on my k8s cluster?</p>
<p>I have the variables overwritten with this command:</p>
<blockquote>
<p>helm upgrade --namespace namescpace --install --set
"database.url=database_url,database.username=username,database.password=password"
name_application chartname</p>
</blockquote>
<p>main class:</p>
<pre><code>@SpringBootApplication
@Configuration
@EnableScheduling
</code></pre>
<p>public class Application extends SpringBootServletInitializer {</p>
<pre><code>/**
* Main method.
*
* @param args
* args passed to the Spring Boot App. Can be used to set the
* active profile.
*/
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
/**
* Configure method for enabling deployment in external tomcat.
*
* {@inheritDoc}
*/
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(Application.class);
}
</code></pre>
<p>}</p>
| <p>You can't inject files as env-var. Only simple key=value entries.</p>
<p>If you want to keep your configMap as-is you should instead mount it as a volume inside your container.</p>
<pre><code> volumeMounts:
- name: application-config
mountPath: "/config"
readOnly: true
volumes:
- name: application-config
configMap:
name: myconfigmap
items:
- key: application.properties
path: application.properties
</code></pre>
<p>application.properties will now be available under the /config directory.</p>
<p>Spring will load the mounted file at startup, as described in the documentation: <a href="https://docs.spring.io/spring-boot/docs/2.1.8.RELEASE/reference/html/boot-features-external-config.html#boot-features-external-config-application-property-files" rel="nofollow noreferrer">https://docs.spring.io/spring-boot/docs/2.1.8.RELEASE/reference/html/boot-features-external-config.html#boot-features-external-config-application-property-files</a></p>
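<p>Note that Spring Boot's default search path is <code>./config/</code> relative to the working directory, so the pod may need to be told about <code>/config</code> explicitly. One option (a sketch, using the relaxed environment-variable binding of <code>spring.config.additional-location</code> from Spring Boot 2.x) is an extra environment variable in the deployment:</p>

```yaml
          env:
          - name: SPRING_CONFIG_ADDITIONAL_LOCATION
            value: "file:/config/"
```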
|
<p>I have been trying to deploy a hyperledger fabric model with 3 CAs 1 orderer and 2 peer nodes. I am able to create the channel with OSADMIN command of fabric but when I try to join the channel with peer node, I get Error: <code>error getting endorser client for channel: endorser client failed to connect to peer-govt:7051: failed to create new connection: context......</code> .</p>
<p>Here are the logs from terminal (local host machine):</p>
<pre><code>2021-06-01 06:38:54.509 UTC [common.tools.configtxgen] main -> INFO 001 Loading configuration
2021-06-01 06:38:54.522 UTC [common.tools.configtxgen.localconfig] completeInitialization -> INFO 002 orderer type: etcdraft
2021-06-01 06:38:54.522 UTC [common.tools.configtxgen.localconfig] completeInitialization -> INFO 003 Orderer.EtcdRaft.Options unset, setting to tick_interval:"500ms" election_tick:10 heartbeat_tick:1 max_inflight_blocks:5 snapshot_interval_size:16777216
2021-06-01 06:38:54.522 UTC [common.tools.configtxgen.localconfig] Load -> INFO 004 Loaded configuration: /etc/hyperledger/clipod/configtx/configtx.yaml
2021-06-01 06:38:54.712 UTC [common.tools.configtxgen] doOutputBlock -> INFO 005 Generating genesis block
2021-06-01 06:38:54.712 UTC [common.tools.configtxgen] doOutputBlock -> INFO 006 Creating application channel genesis block
2021-06-01 06:38:54.712 UTC [common.tools.configtxgen] doOutputBlock -> INFO 007 Writing genesis block
cli-dd4cc5fbf-pdcgb
Status: 201
{
"name": "commonchannel",
"url": "/participation/v1/channels/commonchannel",
"consensusRelation": "consenter",
"status": "active",
"height": 1
}
cli-dd4cc5fbf-pdcgb
Error: error getting endorser client for channel: endorser client failed to connect to peer-govt:7051: failed to create new connection: context deadline exceeded
command terminated with exit code 1
Error: error getting endorser client for channel: endorser client failed to connect to peer-general:9051: failed to create new connection: context deadline exceeded
command terminated with exit code 1
</code></pre>
<p>One thing to note down here is I am using Kubernetes and service CLUSTER_IP for all the PODS.</p>
<p>here are logs from one of the PEER POD (same for other)</p>
<pre><code>2021-06-01 06:38:42.180 UTC [nodeCmd] registerDiscoveryService -> INFO 01b Discovery service activated
2021-06-01 06:38:42.180 UTC [nodeCmd] serve -> INFO 01c Starting peer with ID=[peer-govt], network ID=[dev], address=[peer-govt:7051]
2021-06-01 06:38:42.180 UTC [nodeCmd] func6 -> INFO 01d Starting profiling server with listenAddress = 0.0.0.0:6060
2021-06-01 06:38:42.180 UTC [nodeCmd] serve -> INFO 01e Started peer with ID=[peer-govt], network ID=[dev], address=[peer-govt:7051]
2021-06-01 06:38:42.181 UTC [kvledger] LoadPreResetHeight -> INFO 01f Loading prereset height from path [/var/hyperledger/production/ledgersData/chains]
2021-06-01 06:38:42.181 UTC [blkstorage] preResetHtFiles -> INFO 020 No active channels passed
2021-06-01 06:38:56.006 UTC [core.comm] ServerHandshake -> ERRO 021 Server TLS handshake failed in 24.669µs with error tls: first record does not look like a TLS handshake server=PeerServer remoteaddress=172.17.0.1:13258
2021-06-01 06:38:57.007 UTC [core.comm] ServerHandshake -> ERRO 022 Server TLS handshake failed in 17.772µs with error tls: first record does not look like a TLS handshake server=PeerServer remoteaddress=172.17.0.1:29568
2021-06-01 06:38:58.903 UTC [core.comm] ServerHandshake -> ERRO 023 Server TLS handshake failed in 13.581µs with error tls: first record does not look like a TLS handshake server=PeerServer remoteaddress=172.17.0.1:32615
</code></pre>
<p>To overcome this issue, I tried disabling the TLS by setting <code>CORE_PEER_TLS_ENABLED</code> to <code>FALSE</code></p>
<p>then the proposal gets submitted but the orderer POD throws the same error of <code>TLS handshake failed.........</code></p>
<p>Here are the commands I am using to join the channel from cli pod:</p>
<p><code>kubectl -n hyperledger -it exec $CLI_POD -- sh -c "export FABRIC_CFG_PATH=/etc/hyperledger/clipod/config && export CORE_PEER_LOCALMSPID=GeneralMSP && export CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/clipod/organizations/peerOrganizations/general.example.com/peers/peer0.general.example.com/tls/ca.crt && export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/clipod/organizations/peerOrganizations/general.example.com/users/Admin@general.example.com/msp && export CORE_PEER_ADDRESS=peer-general:9051 && peer channel join -b /etc/hyperledger/clipod/channel-artifacts/$CHANNEL_NAME.block -o orderer:7050 --tls --cafile /etc/hyperledger/clipod/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem"</code></p>
<p>I am stuck on this problem, any help will be appreciated.
Thank you</p>
| <p>I have fixed it. The issue I was facing was that I had not set <code>CORE_PEER_TLS_ENABLED=true</code> for the CLI pod.</p>
<p>One thing I have learned from this whole exercise: whenever you see a TLS issue, the first thing to check is the <code>CORE_PEER_TLS_ENABLED</code> variable. Make sure you have set it for all the pods or containers you are trying to interact with. It can be <code>false</code> (no TLS) or <code>true</code> (use TLS), depending on your deployment.
The other thing to keep in mind is to use the correct Fabric variables, including <code>FABRIC_CFG_PATH</code>, <code>CORE_PEER_LOCALMSPID</code>, <code>CORE_PEER_TLS_ROOTCERT_FILE</code>, <code>CORE_PEER_MSPCONFIGPATH</code> and some others, depending on your command.</p>
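<p>For illustration, the fix amounts to exporting one more variable in the same command from the question; everything else (paths, MSP ID, flags) stays exactly as it was:</p>
<pre><code>kubectl -n hyperledger -it exec $CLI_POD -- sh -c \
  "export CORE_PEER_TLS_ENABLED=true && \
   export FABRIC_CFG_PATH=/etc/hyperledger/clipod/config && \
   ... \
   peer channel join -b /etc/hyperledger/clipod/channel-artifacts/$CHANNEL_NAME.block -o orderer:7050 --tls --cafile <orderer TLS CA cert>"
</code></pre>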
|
<p>I'm using Azure Container Insights for an AKS cluster and want to filter some logs using Log Analytics and Kusto Query Language. I do it to provide a convenient dashboard and alerts.</p>
<p>What I'm trying to achieve is list only not ready pods. Listing the ones not Running is not enough. This can be easily filtered using kubectl e.g. following this post <a href="https://stackoverflow.com/questions/58992774/how-to-get-list-of-pods-which-are-ready">How to get list of pods which are "ready"?</a>
However this data is not available when querying in Log Analytics with Kusto, as <code>containerStatuses</code> seems to be only a string
<a href="https://i.stack.imgur.com/EP95b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EP95b.png" alt="enter image description here" /></a></p>
<p>It should be somehow possible because Container Insights allow this filtering in Metrics section. However it's not fully satisfying because with metrics my filtering capabilities are much smaller.</p>
| <p>You can list the pods that were not in the <code>Running</code> state during the last hour like this:</p>
<pre><code>let endDateTime = now();
let startDateTime = ago(1h);
KubePodInventory
| where TimeGenerated < endDateTime
| where TimeGenerated >= startDateTime
| where PodStatus != "Running"
| distinct Computer, PodUid, TimeGenerated, PodStatus
</code></pre>
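<p>If you need per-container state rather than the pod phase, <code>KubePodInventory</code> also exposes a <code>ContainerStatus</code> column (<code>running</code>, <code>waiting</code>, <code>terminated</code>). Assuming that column is populated in your workspace, a variation like this lists containers that are not running:</p>
<pre><code>KubePodInventory
| where TimeGenerated >= ago(1h)
| where ContainerStatus != "running"
| distinct Computer, Name, PodUid, ContainerStatus
</code></pre>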
|
<p>Now I read this config in kubernetes:</p>
<pre><code> containers:
- name: canal-admin-stable
image: 'dolphinjiang/canal-admin:v1.1.5'
ports:
- name: http
containerPort: 8089
protocol: TCP
resources:
limits:
cpu: '2'
memory: 2Gi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: admin-conf
mountPath: /home/canal/conf/application.yml
subPath: application.yml
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
</code></pre>
<p>The volume mount path (<code>/home/canal/conf/application.yml</code>) is already the full file path, so I would think Kubernetes only overwrites the file <code>application.yml</code>. Why still specify the subPath <code>application.yml</code>? Why not write it like this:</p>
<pre><code>volumeMounts:
- name: admin-conf
mountPath: /home/canal/conf/
subPath: application.yml
</code></pre>
| <p>I was using init containers to pass a config file to the main container, and that is where the distinction between <code>mountPath</code> and <code>subPath</code> finally clicked for me, as it was confusing to me too at first. The <code>mountPath</code> is always the destination inside the Pod where a volume gets mounted. However, I discovered that if you mount a volume directly onto a directory that already contains other config files, that directory is shadowed by the mount and the only files left visible are the ones in the volume.</p>
<p>I only needed to mount part of the volume, a single file, so I used <code>subPath</code> to specify which part of the volume to mount at the <code>mountPath</code>.</p>
<p>Sometimes using just a <code>mountPath</code> is fine, but here I also had to use a <code>subPath</code> to preserve the other config files in the directory.</p>
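<p>As an illustration (the names are made up), the difference between the two mounts looks like this: the first shadows the whole directory with the volume's contents, while the second overlays only the single file:</p>
<pre><code># shadows everything in /etc/app/conf with the volume's contents
volumeMounts:
- name: app-conf
  mountPath: /etc/app/conf

# mounts only application.yml, leaving the rest of /etc/app/conf intact
volumeMounts:
- name: app-conf
  mountPath: /etc/app/conf/application.yml
  subPath: application.yml
</code></pre>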
|
<p>tl;dr: I have a server that handles WebSocket connections. The nature of the workload is that it is necessarily stateful (i.e., each connection has long-running state). Each connection can last ~20m-4h. Currently, I only deploy new revisions of this service at off hours to avoid interrupting users too much.</p>
<p>I'd like to move to a new model where deploys happen whenever, and the services gracefully drain connections over the course of ~30 minutes (typically the frontend can find a "good" time to make that switch over within 30 minutes, and if not, we just forcibly disconnect them). I can do that pretty easily with K8s by setting gracePeriodSeconds.
However, what's less clear is how to do rollouts such that new connections only go to the most recent deployment. Suppose I have five replicas running. Normal deploys have an undesirable mode where a client is on R1 (replica 1) and then K8s deploys R1' (upgraded version) and terminates R1; frontend then reconnects and gets routed to R2; R2 terminates, frontend reconnects, gets routed to R3.</p>
<p>Is there any easy way to ensure that after the upgrade starts, new clients get routed only to the upgraded versions? I'm already running Istio (though not using very many of its features), so I could imagine doing something complicated with some custom deployment infrastructure (currently just using Helm) that spins up a new deployment, cuts over new connections to the new deployment, and gracefully drains the old deployment... but I'd rather keep it simple (just Helm running in CI) if possible.</p>
<p>Any thoughts on this?</p>
| <p>This is already how things work with normal Services. Once a pod is terminating, it has already been removed from the Endpoints. You'll probably need to tune the <code>maxSurge</code> setting in the rolling update strategy of the Deployment up to 100%, so that it will spawn all the new pods at once and only then start the shutdown process on all the old ones.</p>
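<p>A sketch of the relevant Deployment settings, assuming a 30-minute drain window (the numbers are illustrative):</p>
<pre><code>spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 100%      # spawn all new pods at once
      maxUnavailable: 0   # keep old pods serving until the new ones are ready
  template:
    spec:
      terminationGracePeriodSeconds: 1800   # allow up to 30m to drain connections
</code></pre>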
|
<p>Currently I am using a script to renew Kubernetes certificates before they expire. But this is a manual process. I have to monitor expiration dates carefully and run this script beforehand. What's the recommended way to update all control plane certificates automatically without updating control plane? Do kubelet's --rotate* flags rotate all components (e.g. controller) or it is just for kubelet? PS: Kubernetes cluster was created with kubeadm.</p>
| <p>Answering the following question:</p>
<blockquote>
<p>What's the recommended way to update all control plane certificates automatically without updating control plane</p>
</blockquote>
<p>According to the k8s docs and best practices the best practice is to use "Automatic certificate renewal" with control plane upgrade:</p>
<blockquote>
<h3>Automatic certificate renewal</h3>
<p>This feature is designed for addressing the simplest use cases; if you don't have specific requirements on certificate renewal and perform Kubernetes version upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping your cluster up to date and reasonably secure.</p>
<p><strong>Note:</strong> It is a best practice to upgrade your cluster frequently in order to stay secure.</p>
<p>-- <em><a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#automatic-certificate-renewal" rel="noreferrer">Kubernetes.io: Administer cluster: Kubeadm certs: Automatic certificate renewal</a></em></p>
</blockquote>
<p>Why this is the recommended way:</p>
<p>From the best practices standpoint you should be upgrading your <code>control-plane</code> to patch vulnerabilities, add features and use the version that is currently supported.</p>
<p>Each <code>control-plane</code> upgrade will renew the certificates as described (defaults to <code>true</code>):</p>
<ul>
<li><code>$ kubeadm upgrade apply --help</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>--certificate-renewal Perform the renewal of certificates used by component changed during upgrades. (default true)
</code></pre>
<p>You can also check the expiration of the <code>control-plane</code> certificates by running:</p>
<ul>
<li><code>$ kubeadm certs check-expiration</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf May 30, 2022 13:36 UTC 364d no
apiserver May 30, 2022 13:36 UTC 364d ca no
apiserver-etcd-client May 30, 2022 13:36 UTC 364d etcd-ca no
apiserver-kubelet-client May 30, 2022 13:36 UTC 364d ca no
controller-manager.conf May 30, 2022 13:36 UTC 364d no
etcd-healthcheck-client May 30, 2022 13:36 UTC 364d etcd-ca no
etcd-peer May 30, 2022 13:36 UTC 364d etcd-ca no
etcd-server May 30, 2022 13:36 UTC 364d etcd-ca no
front-proxy-client May 30, 2022 13:36 UTC 364d front-proxy-ca no
scheduler.conf May 30, 2022 13:36 UTC 364d no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca May 28, 2031 13:36 UTC 9y no
etcd-ca May 28, 2031 13:36 UTC 9y no
front-proxy-ca May 28, 2031 13:36 UTC 9y no
</code></pre>
<blockquote>
<p><strong>A side note!</strong></p>
<p><code>kubelet.conf</code> is not included in the list above because <code>kubeadm</code> configures <code>kubelet</code> for automatic certificate renewal.</p>
</blockquote>
<p>As can be seen, by default:</p>
<blockquote>
<ul>
<li>Client certificates generated by <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/" rel="noreferrer">kubeadm</a> expire after 1 year.</li>
<li>CA created by <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/" rel="noreferrer">kubeadm</a> are set to expire after 10 years.</li>
</ul>
</blockquote>
<hr />
<p>There are other features that allow you to rotate the certificates in a "semi-automatic" way.</p>
<p>You can opt for manual certificate renewal with:</p>
<ul>
<li><code>$ kubeadm certs renew</code></li>
</ul>
<p>where you can automatically (with the command) renew the specified (or all) certificates:</p>
<ul>
<li><code>$ kubeadm certs renew all</code></li>
</ul>
<pre><code>[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
</code></pre>
<p>Please take a close look at this part of the output:</p>
<pre><code>You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
</code></pre>
<p>As pointed out, you will need to restart the components of your <code>control-plane</code> to use the new certificates, but remember:</p>
<ul>
<li><code>$ kubectl delete pod -n kube-system kube-scheduler-ubuntu</code> <strong>will not work</strong>.</li>
</ul>
<p>You will need to restart the docker container responsible for the component:</p>
<ul>
<li><code>$ docker ps | grep -i "scheduler"</code></li>
<li><code>$ docker restart 8c361562701b</code> (example)</li>
</ul>
<pre><code>8c361562701b 38f903b54010 "kube-scheduler --au…" 11 minutes ago Up 11 minutes k8s_kube-scheduler_kube-scheduler-ubuntu_kube-system_dbb97c1c9c802fa7cf2ad7d07938bae9_5
b709e8fb5e6c k8s.gcr.io/pause:3.4.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-ubuntu_kube-system_dbb97c1c9c802fa7cf2ad7d07938bae9_0
</code></pre>
<hr />
<p>As pointed out in the links below, <code>kubelet</code> can automatically renew its certificate (<code>kubeadm</code> configures the cluster in a way that enables this option):</p>
<ul>
<li><em><a href="https://kubernetes.io/docs/tasks/tls/certificate-rotation/" rel="noreferrer">Kubernetes.io: Configure Certificate Rotation for the Kubelet</a></em></li>
<li><em><a href="https://github.com/kubernetes/kubeadm/issues/2185#issuecomment-644260417" rel="noreferrer">Github.com: Kubernetes: Kubeadm: Issues: --certificate-renewal true doesn't renew kubelet.conf</a></em></li>
</ul>
<p>Depending on the version used in your environment, this may be disabled. To my knowledge, in the newest versions of k8s managed by <code>kubeadm</code> this option is enabled by default.</p>
<hr />
<p>Please keep in mind that before you start any kubernetes node/control-plane update/upgrade, you should <strong>read the "Urgent Upgrade Notes" specific to your k8s version</strong> (example):</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#urgent-upgrade-notes" rel="noreferrer">Github.com: Kubernetes: CHANGELOG: 1.21: Urgent upgrade notes</a></li>
</ul>
<hr />
<p>Defining a fully automatic way of rotating the certificates could go either way, but you can use the commands mentioned above to automate this process. You would need to put the script (which you already have) into cron so that it fires periodically and renews the certificates.</p>
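<p>As an illustrative sketch (the schedule and restart mechanics are assumptions; adjust for your runtime, for example containerd), such a cron script could look like this:</p>
<pre><code>#!/bin/sh
# e.g. installed as /etc/cron.monthly/renew-k8s-certs on the control-plane node
kubeadm certs renew all
# restart the static-pod containers so they pick up the new certificates
for c in kube-apiserver kube-controller-manager kube-scheduler etcd; do
    docker ps -q --filter "name=k8s_${c}" | xargs -r docker restart
done
</code></pre>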
|
<p>My natural thought is that if nginx were just a daemon process on the k8s node, rather than a pod (container) in the k8s cluster, it looks like it could still fulfill the ingress controller's job, because:
as a process on the k8s node, it can still talk to the apiserver to fetch service backend pod information, like IP addresses, so it can still be used as an HTTP proxy server to direct traffic to different services.</p>
<p>so 2 questions,</p>
<ol>
<li>Why does the nginx ingress controller have to be a pod?</li>
<li>Why does the nginx ingress controller get only 1 replica, and on which node? If the nginx controller pod dies, things become unstable.</li>
</ol>
<p>Thanks!</p>
| <p>Because Pods are how you run daemon processes (or really, all processes) inside Kubernetes. That's just how you run stuff. I suppose there is nothing stopping you from running it outside the cluster, manually setting up API configuration and authentication, doing all the needed networking bits yourself. But ... why?</p>
<p>As for replicas, you should indeed generally have more than one across multiple physical nodes for redundancy. A lot of the tutorials show it with <code>replicas: 1</code> because either it's for a single-node dev cluster like Minikube or it's only an example.</p>
|
<p>Now my kubernetes (<code>v1.15.x</code>) deployment keeps restarting all the time. From the log ouput with kubernetes dashboard I could not see anything useful. Now I want to log into the pod and check the log from log dir of my service. But the pod keeps restarting all the time and I have no chance to log into the pod.</p>
<p>Is there any way to log into the restarting pod, or to dump or inspect files inside it? I want to find out why the pod restarts all the time.</p>
<p><a href="https://i.stack.imgur.com/noGBd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/noGBd.png" alt="enter image description here" /></a></p>
| <p>If you are running on <strong>GKE</strong> and <strong>logging</strong> is enabled, all container logs are collected by default into the Stackdriver Logging dashboard.</p>
<p>As a first step, run <code>kubectl describe pod <pod name></code> and check the exit code of the container that exited. The exit code is helpful for understanding the reason for the restart: was it an <strong>Error</strong>, or was the container <strong>OOM killed</strong>?</p>
<p>You can also use the <code>--previous</code> flag to get the logs of the previous (restarted) instance of the pod.</p>
<p>Example :</p>
<pre><code>kubectl logs <POD name> --previous
</code></pre>
<p>Note that for <code>--previous</code> to work, the pod object still needs to exist inside the cluster.</p>
|
<p>While evaluating network security using nmap against a Kubernetes server, we noticed the warning below.</p>
<p>~]# nmap xxx.xx.xx.xx -p 6443 -sVC --script=ssl*</p>
<pre><code>.
.
.
ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| compressors:
| NULL
| cipher preference: server
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
</code></pre>
<p>With a bit of research I learned that the <strong>TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C</strong> cipher suite supports 64-bit block SSL/TLS handshakes, and the suggested solution is to disable this cipher option in Kubernetes etcd. Please help me figure out how to do that.</p>
<p>Other views on this are much appreciated; please let me know the best way to secure the environment.</p>
| <p>You can use the <code>--cipher-suites</code> CLI option to etcd. See <a href="https://etcd.io/docs/v3.4/op-guide/security/" rel="nofollow noreferrer">https://etcd.io/docs/v3.4/op-guide/security/</a> for a summary of all of its TLS config options. The default cipher list is based on the version of Go used to compile etcd.</p>
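<p>For example, on a <code>kubeadm</code> cluster you could add the flag to the etcd static pod manifest (<code>/etc/kubernetes/manifests/etcd.yaml</code>). The cipher list below is only an illustration; pick the suites your clients actually support:</p>
<pre><code>spec:
  containers:
  - command:
    - etcd
    - --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    # ...the existing etcd flags stay as they are...
</code></pre>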
|
<p><strong>Story</strong>: in my java code i have a few ScheduledFuture's that i need to run everyday on specific time (15:00 for example), the only available thing that i have is database, my current application and openshift with multiple pods. I can't move this code out of my application and must run it from there.</p>
<p><strong>Problem</strong>: ScheduledFuture works on every pod, but i need to run it only once a day. I have a few ideas, but i don't know how to implement them.</p>
<p><strong>Idea #1</strong>:
Set environment variable to specific pod, then i will be able to check if this variable exists (and its value), read it and run schedule task if required. I know that i have a risk of hovered pods, but that's better not to run scheduled task at all than to run it multiple times.</p>
<p><strong>Idea #2</strong>:
Determine a leader pod somehow, this seems to be a bad idea in my case since it always have "split-brain" problem.</p>
<p><strong>Idea #3 (a bit offtopic)</strong>:
Create my own synchronization algorithm thru database. To be fair, it's the simplest way to me since i'm a programmer and not SRE. I understand that this is not the best one tho.</p>
<p><strong>Idea #4 (a bit offtopic)</strong>:
Just use quartz schedule library. I personally don't really like that and would prefer one of the first two ideas (if i will able to implement them), but at the moment it seems like my only valid choice.</p>
<p>UPD. May be you have some other suggestions or a warning that i shouldn't ever do that?</p>
| <p>I would suggest to use a ready-to-use solution. Getting those things right, especially covering all possible corner-cases wrt. reliability, is hard. If you do not want to use quartz, I would at least suggest to use a database-backed solution. Postgres, for example, has <a href="https://www.postgresql.org/docs/current/sql-select.html" rel="nofollow noreferrer"><code>SELECT ... FOR UPDATE SKIP LOCKED;</code> (scroll down to the section "The Locking Clause")</a> which may be used to implement one-time only scheduling.</p>
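<p>A minimal sketch of that pattern (the table and column names are made up): every pod runs the same transaction, but only the instance that wins the row lock executes the job, while the others simply see no row and do nothing:</p>
<pre><code>BEGIN;
SELECT id FROM scheduled_jobs
 WHERE run_at <= now() AND done = false
 FOR UPDATE SKIP LOCKED;
-- if a row was returned, this instance owns the job: run it, then
UPDATE scheduled_jobs SET done = true WHERE id = :claimed_id;
COMMIT;
</code></pre>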
|
<p>I am trying to add custom alert-routing config to my alertmanager, deployed as a part of kube-prometheus-stack. But prometheus-operator pod, while trying to generate the alertmanager configmap, fails due to the following error:</p>
<pre><code>level=error ts=2021-05-31T06:29:38.883470881Z caller=klog.go:96 component=k8s_client_runtime func=ErrorDepth msg="Sync \"infra-services/prometheus-operator-kube-p-alertmanager\" failed: provision alertmanager configuration: base config from Secret could not be parsed: yaml: unmarshal errors:\n line 19: field matchers not found in type config.plain"
</code></pre>
<p>I also validated the same using amtool inside alertmanager container, which gives the same error. Here is my alertmanager.yml file:</p>
<pre><code>global:
resolve_timeout: 5m
slack_api_url: https://hooks.slack.com/services/xxxxxx/yyyyy/zzzzzzzzzzz
receivers:
- name: slack-notifications
slack_configs:
- channel: '#alerts'
send_resolved: true
text: '{{ template "slack.myorg.text" . }}'
- name: blackhole-receiver
route:
group_by:
- alertname
group_interval: 5m
group_wait: 30s
receiver: blackhole-receiver
repeat_interval: 12h
routes:
- matchers:
- severity=~"warning|critical"
receiver: slack-notifications
templates:
- /etc/alertmanager/config/*.tmpl
</code></pre>
<p>I have followed <a href="https://prometheus.io/docs/alerting/latest/configuration/" rel="noreferrer">https://prometheus.io/docs/alerting/latest/configuration/</a> and <a href="https://github.com/prometheus/alertmanager/blob/master/doc/examples/simple.yml" rel="noreferrer">https://github.com/prometheus/alertmanager/blob/master/doc/examples/simple.yml</a> to write my simple alertmanager config.</p>
| <p>The list-style <code>matchers</code> field is only understood by Alertmanager v0.22 and newer; older versions expect <code>match</code>/<code>match_re</code>. Try to change from:</p>
<pre><code> routes:
- matchers:
- severity=~"warning|critical"
receiver: slack-notifications
</code></pre>
<p>To:</p>
<pre><code> routes:
- match_re:
severity: "warning|critical"
receiver: slack-notifications
</code></pre>
|
<p>I'm stepping through Kubernetes in Action to get more than just familiarity with Kubernetes.</p>
<p>I already had a Docker Hub account that I've been using for Docker-specific experiments.</p>
<p>As described in chapter 2 of the book, I built the toy "kubia" image, and I was able to push it to Docker Hub. I verified this again by logging into Docker Hub and seeing the image.</p>
<p>I'm doing this on Centos7.</p>
<p>I then run the following to create the replication controller and pod running my image:</p>
<pre><code>kubectl run kubia --image=davidmichaelkarr/kubia --port=8080 --generator=run/v1
</code></pre>
<p>I waited a while for statuses to change, but it never finishes downloading the image, when I describe the pod, I see something like this:</p>
<pre><code> Normal Scheduled 24m default-scheduler Successfully assigned kubia-25th5 to minikube
Normal SuccessfulMountVolume 24m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-x5nl4"
Normal Pulling 22m (x4 over 24m) kubelet, minikube pulling image "davidmichaelkarr/kubia"
Warning Failed 22m (x4 over 24m) kubelet, minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>So I then constructed the following command:</p>
<pre><code>curl -v -u 'davidmichaelkarr:**' 'https://registry-1.docker.io/v2/'
</code></pre>
<p>Which uses the same password I use for Docker Hub (they should be the same, right?).</p>
<p>This gives me the following:</p>
<pre><code>* About to connect() to proxy *** port 8080 (#0)
* Trying **.**.**.**...
* Connected to *** (**.**.**.**) port 8080 (#0)
* Establish HTTP proxy tunnel to registry-1.docker.io:443
* Server auth using Basic with user 'davidmichaelkarr'
> CONNECT registry-1.docker.io:443 HTTP/1.1
> Host: registry-1.docker.io:443
> User-Agent: curl/7.29.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 Connection established
<
* Proxy replied OK to CONNECT request
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN=*.docker.io
* start date: Aug 02 00:00:00 2017 GMT
* expire date: Sep 02 12:00:00 2018 GMT
* common name: *.docker.io
* issuer: CN=Amazon,OU=Server CA 1B,O=Amazon,C=US
* Server auth using Basic with user 'davidmichaelkarr'
> GET /v2/ HTTP/1.1
> Authorization: Basic ***
> User-Agent: curl/7.29.0
> Host: registry-1.docker.io
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< Www-Authenticate: Bearer realm="https://auth.docker.io/token",service="registry.docker.io"
< Date: Wed, 24 Jan 2018 18:34:39 GMT
< Content-Length: 87
< Strict-Transport-Security: max-age=31536000
<
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
* Connection #0 to host *** left intact
</code></pre>
<p>I don't understand why this is failing auth.</p>
<p><strong>Update</strong>:</p>
<p>Based on the first answer and the info I got from this <a href="https://stackoverflow.com/questions/40288077/how-to-pass-image-pull-secret-while-using-kubectl-run-command">other question</a>, I edited the description of the service account, adding the "imagePullSecrets" key, then I deleted the replicationcontroller again and recreated it. The result appeared to be identical.</p>
<p>This is the command I ran to create the secret:</p>
<pre><code>kubectl create secret docker-registry regsecret --docker-server=registry-1.docker.io --docker-username=davidmichaelkarr --docker-password=** --docker-email=**
</code></pre>
<p>Then I obtained the yaml for the serviceaccount, added the key reference for the secret, then set that yaml as the settings for the serviceaccount.</p>
<p>This are the current settings for the service account:</p>
<pre><code>$ kubectl get serviceaccount default -o yaml
apiVersion: v1
imagePullSecrets:
- name: regsecret
kind: ServiceAccount
metadata:
creationTimestamp: 2018-01-24T00:05:01Z
name: default
namespace: default
resourceVersion: "81492"
selfLink: /api/v1/namespaces/default/serviceaccounts/default
uid: 38e2882c-009a-11e8-bf43-080027ae527b
secrets:
- name: default-token-x5nl4
</code></pre>
<p>Here's the updated events list from the describe of the pod after doing this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m default-scheduler Successfully assigned kubia-f56th to minikube
Normal SuccessfulMountVolume 7m kubelet, minikube MountVolume.SetUp succeeded for volume "default-token-x5nl4"
Normal Pulling 5m (x4 over 7m) kubelet, minikube pulling image "davidmichaelkarr/kubia"
Warning Failed 5m (x4 over 7m) kubelet, minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal BackOff 4m (x6 over 7m) kubelet, minikube Back-off pulling image "davidmichaelkarr/kubia"
Warning FailedSync 2m (x18 over 7m) kubelet, minikube Error syncing pod
</code></pre>
<p>What else might I be doing wrong?</p>
<p><strong>Update</strong>:</p>
<p>I think it's likely that all these issues with authentication are unrelated to the real issue. The key point is what I see in the pod description (breaking into multiple lines to make it easier to see):</p>
<pre><code>Warning Failed 22m (x4 over 24m) kubelet,
minikube Failed to pull image "davidmichaelkarr/kubia": rpc error: code =
Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/:
net/http: request canceled while waiting for connection
(Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>The last line seems like the most important piece of information at this point. It's not failing authentication, it's timing out the connection. In my experience, something like this is usually caused by issues getting through a firewall/proxy. We do have an internal proxy, and I have those environment variables set in my environment, but what about the "serviceaccount" that kubectl is using to make this connection? Do I have to somehow set a proxy configuration in the serviceaccount description?</p>
| <p>I have faced the same issue a couple of times.
Updating here, as it might be useful for someone.</p>
<p>First, describe the pod (<code>kubectl describe pod <pod_name></code>).</p>
<p><strong>1. If you see <em>access denied/repository does not exist</em> errors like</strong></p>
<blockquote>
<p>Error response from daemon: pull access denied for test/nginx,
repository does not exist or may require 'docker login': denied:
requested access to the resource is denied</p>
</blockquote>
<p><strong>Solution:</strong></p>
<ul>
<li>If it is a local K8s cluster, you need to log in to the Docker registry first, <strong>OR</strong></li>
<li>if it is a Kubernetes cluster in the cloud, create a secret for the registry and add an <code>imagePullSecrets</code> entry
with the secret name</li>
</ul>
<hr />
<p><strong>2. If you get timeout error,</strong></p>
<blockquote>
<p><strong>Error:</strong> Get <a href="https://registry-1.docker.io/v2/" rel="nofollow noreferrer">https://registry-1.docker.io/v2/</a>: net/http: request canceled while waiting for connection (Client.Timeout exceeded while
awaiting headers)</p>
</blockquote>
<p><strong>Solution:</strong></p>
<ul>
<li>Check that the node has network connectivity and is able to reach the private/public registry.</li>
<li>If it is an AWS EKS cluster, you need to enable auto-assign IP on the subnet where the EC2 nodes are running.</li>
</ul>
|
<p>When I put the application into production on a pod-managed Kubernetes architecture that has the possibility of scaling (today it has two servers running the same application), Hangfire recognizes both but returns a 500 error:</p>
<pre><code>Unable to refresh the statistics: the server responded with 500 (error). Try reloading the page manually, or wait for automatic reload that will happen in a minute.
</code></pre>
<p>But when I deploy to staging, the test environment where there is only one server, Hangfire works normally.</p>
<p>Hangfire Configuration:</p>
<pre class="lang-cs prettyprint-override"><code>Startup.cs
services.AddHangfire(x => x.UsePostgreSqlStorage(Configuration.GetConnectionString("DefaultConnection")));
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] { new AuthorizationFilterHangfire() }
});
app.UseHangfireServer();
</code></pre>
<p><a href="https://i.stack.imgur.com/Z7w6s.png" rel="nofollow noreferrer">Error</a></p>
| <p>You can now add <code>IgnoreAntiforgeryToken</code> to your service which should resolve this issue.</p>
<p>According to <a href="https://github.com/HangfireIO/Hangfire/issues/1248#issuecomment-517357213" rel="nofollow noreferrer">this github post</a>, the issue occurred when you had multiple servers running the dashboard: due to load balancing, when your request went to a different server from the one you originally got the page from, you'd see the error.</p>
<p>Adding <code>IgnoreAntiforgeryToken = true</code> to the dashboard should resolve the issue.</p>
<p>Excerpt Taken from <a href="https://github.com/HangfireIO/Hangfire/issues/1248#issuecomment-517357213" rel="nofollow noreferrer">here</a></p>
<pre><code>app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] {new HangfireAuthFilter()},
IgnoreAntiforgeryToken = true // <--This
});
</code></pre>
|
<p>I am trying to write a high-level CDK construct that can be used to deploy Django applications with EKS. I have most of the k8s manifests defined for the application, but I am struggling with the Ingress part. Looking into different options, I have decided to try installing the AWS Load Balancer Controller (<a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/</a>). Their documentation has instructions for installing the controller using the AWS CLI and the eksctl CLI tool, so I'm working on trying to translate these into CDK code. Here's what I have so far:</p>
<pre class="lang-js prettyprint-override"><code>import * as ec2 from '@aws-cdk/aws-ec2';
import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';
import * as cdk from '@aws-cdk/core';
import { ApplicationVpc } from './vpc';
var request = require('sync-request');
export interface DjangoEksProps {
readonly vpc: ec2.IVpc;
}
export class DjangoEks extends cdk.Construct {
public vpc: ec2.IVpc;
public cluster: eks.Cluster;
constructor(scope: cdk.Construct, id: string, props: DjangoEksProps) {
super(scope, id);
this.vpc = props.vpc;
// allow all account users to assume this role in order to admin the cluster
const mastersRole = new iam.Role(this, 'AdminRole', {
assumedBy: new iam.AccountRootPrincipal(),
});
this.cluster = new eks.Cluster(this, "MyEksCluster", {
version: eks.KubernetesVersion.V1_19,
vpc: this.vpc,
mastersRole,
defaultCapacity: 2,
});
// Adopted from comments in this issue: https://github.com/aws/aws-cdk/issues/8836
const albServiceAccount = this.cluster.addServiceAccount('aws-alb-ingress-controller-sa', {
name: 'aws-load-balancer-controller',
namespace: 'kube-system',
});
const awsAlbControllerPolicyUrl = 'https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.2.0/docs/install/iam_policy.json';
const policyJson = request('GET', awsAlbControllerPolicyUrl).getBody('utf8');
((JSON.parse(policyJson))['Statement'] as any[]).forEach(statement => {
albServiceAccount.addToPrincipalPolicy(iam.PolicyStatement.fromJson(statement))
})
// This is where I am stuck
// https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/#add-controller-to-cluster
// I tried running this
// kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
this.cluster.addHelmChart('aws-load-balancer-controller-helm-chart', {
repository: 'https://aws.github.io/eks-charts',
chart: 'eks/aws-load-balancer-controller',
release: 'aws-load-balancer-controller',
version: '2.2.0',
namespace: 'kube-system',
values: {
clusterName: this.cluster.clusterName,
serviceAccount: {
create: false,
name: 'aws-load-balancer-controller',
},
},
});
}
}
</code></pre>
<p>Here are the errors I am seeing in CDK when I do <code>cdk deploy</code>:</p>
<pre><code>Received response status [FAILED] from custom resource. Message returned: Error: b'WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /tmp/kubeconfig\nWARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /tmp/kubeconfig\nRelease "aws-load-balancer-controller" does not exist. Installing it now.\nError: chart "eks/aws-load-balancer-controller" version "2.2.0" not found in https://aws.github.io/eks-charts repository\n' Logs: /aws/lambda/DjangoEks-awscdkawseksKubectlProvi-Handler886CB40B-6yld0A8rw9hp at invokeUserFunction (/var/task/framework.js:95:19) at processTicksAndRejections (internal/process/task_queues.js:93:5) at async onEvent (/var/task/framework.js:19:27) at async Runtime.handler (/var/task/cfn-response.js:48:13) (RequestId: ec066bb2-4cc1-48f6-8a88-c6062c27ed0f)
</code></pre>
<p>and a related error:</p>
<pre><code>Received response status [FAILED] from custom resource. Message returned: Error: b'error: no objects passed to create\n' Logs: /aws/lambda/DjangoEks-awscdkawseksKubectlProvi-Handler886CB40B-6yld0A8rw9hp at invokeUserFunction (/var/task/framework.js:95:19) at processTicksAndRejections (internal/process/task_queues.js:93:5) at async onEvent (/var/task/framework.js:19:27) at async Runtime.handler (/var/task/cfn-response.js:48:13) (RequestId: fe2c4c04-4de9-4a71-b18a-ab5bc91d180a)
</code></pre>
<p>The CDK EKS docs say that <code>addHelmChart</code> will install the provided Helm Chart with <code>helm upgrade --install</code>.</p>
<p>The AWS Load Balancer Controller installation instructions also say:</p>
<blockquote>
<p>Install the TargetGroupBinding CRDs if upgrading the chart via helm upgrade.</p>
<pre><code>kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master"
</code></pre>
</blockquote>
<p>I'm not sure how I can do this part in CDK. The link to <code>github.com/aws/eks-charts/stable/aws-load-balancer-controller//crds?ref=master</code> gives a 404, but that command does work when I run it against my EKS cluster, and I can install those CRDs. Running the deploy command after manually installing those CRDs also fails with the same message.</p>
<p>I think the error in my code comes from the <code>HelmChartOptions</code> that I pass to the <code>addHelmChart</code> command, and I have tried several options, and referenced similar CDK projects that install Helm charts from the same repo, but I keep getting failures.</p>
<p>Is anyone else installing the AWS Load Balancer Controller with CDK like I am trying to do here? One project that I have been trying to reference is <a href="https://github.com/neilkuan/cdk8s-aws-load-balancer-controller" rel="nofollow noreferrer">https://github.com/neilkuan/cdk8s-aws-load-balancer-controller</a>.</p>
<p>There is also discussion in this GitHub issue: <a href="https://github.com/aws/aws-cdk/issues/8836" rel="nofollow noreferrer">https://github.com/aws/aws-cdk/issues/8836</a> that might help as well, but a lot of the discussion is around cert manager manager which doesn't seem to be relevant for what I'm trying to do.</p>
| <p>I got some help on the CDK.dev slack channel. I had the wrong <code>version</code>. It should be 1.2.0, the version of the Helm Chart. 2.2.0 is the version of the Controller.</p>
<p>Here's an official example of how to do this from the <code>aws-samples</code> GitHub repo: <a href="https://github.com/aws-samples/nexus-oss-on-aws/blob/d3a092d72041b65ca1c09d174818b513594d3e11/src/lib/sonatype-nexus3-stack.ts#L207-L242" rel="nofollow noreferrer">https://github.com/aws-samples/nexus-oss-on-aws/blob/d3a092d72041b65ca1c09d174818b513594d3e11/src/lib/sonatype-nexus3-stack.ts#L207-L242</a></p>
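<p>For reference, the corrected call would look something like this (a sketch based on the snippet in the question and the linked sample; note the chart name is given without the <code>eks/</code> prefix, since the repository is already specified separately):</p>
<pre class="lang-js prettyprint-override"><code>this.cluster.addHelmChart('aws-load-balancer-controller-helm-chart', {
  repository: 'https://aws.github.io/eks-charts',
  chart: 'aws-load-balancer-controller',
  release: 'aws-load-balancer-controller',
  version: '1.2.0', // the chart version, not the 2.2.0 controller version
  namespace: 'kube-system',
  values: {
    clusterName: this.cluster.clusterName,
    serviceAccount: {
      create: false,
      name: 'aws-load-balancer-controller',
    },
  },
});
</code></pre>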
|
<p>I have a cluster on GKE currently on version v1.19.9-gke.1400. According to the Kubernetes release notes, dockershim is deprecated as of 1.20. My cluster is configured to auto-upgrade, and in one specific application I use the Docker socket mapped into the application, where I run containers directly through the Docker API.</p>
<p>My question: in a hypothetical upgrade of the cluster to Kubernetes 1.20, will the Docker socket become unavailable immediately? Or does the deprecation flag only mean that it will be removed in the future?</p>
| <p>Yes, if you use the non-containerd images. In the node pool config you can choose which image type you want and COS vs COS_Containerd are separate choices there. At some point later in 2021 we may (if all goes according to plan) remove Docker support in Kubernetes itself for 1.23. However Google may choose to remove support one version earlier in 1.22 or continue it later via the out-of-tree Docker CRI that Mirantis is working on.</p>
<p>I am running 1.20 in the Rapid channel and can confirm that Docker is still there and happy. Also FWIW if you need to run <code>dockerd</code> yourself via a DaemonSet it takes like 30 seconds to set up, really not a huge deal either way.</p>
|
<p>I have multiple Testcafe scripts (<code>script1.js</code>, <code>script2.js</code>) that are working fine. I have Dockerized this code into a Dockerfile and it works fine when I run the Docker Image. Next, I want to invoke this Docker Image as a CronJob in Kubernetes. Given below is my <code>manifest.yaml</code> file.</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: application-automation-framework
namespace: development
labels:
team: development
spec:
schedule: "*/1 * * * *"
jobTemplate:
metadata:
labels:
team: development
spec:
ttlSecondsAfterFinished: 120
backoffLimit: 3
template:
metadata:
labels:
team: development
spec:
containers:
- name: script1-job
image: testcafe-minikube
imagePullPolicy: Never
args: ["chromium:headless", "script1.js"]
- name: script2-job
image: testcafe-minikube
imagePullPolicy: Never
args: [ "chromium:headless", "script2.js"]
restartPolicy: OnFailure
</code></pre>
<p>As seen above, this manifest has two containers running. When I apply this manifest to Kubernetes, the first container (<code>script1-job</code>), runs well. But the second container (<code>script2-job</code>) gives me the following error.</p>
<pre><code>ERROR The specified 1337 port is already in use by another program.
</code></pre>
<p>If I run this with one container, it works perfectly. I also tried changing the args of the containers to the following.</p>
<pre><code>args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
args: ["chromium:headless", "script2.js", "--ports 1234,1235"]
</code></pre>
<p>Still, I get the same error saying 1337 port already in use. (I wonder whether the <code>--ports</code> argument is working at all in Docker).</p>
<p>This is my <code>Dockerfile</code> for reference.</p>
<pre><code>FROM testcafe/testcafe
COPY . ./
USER root
RUN npm install
</code></pre>
<p>Could someone please help me with this? I want to run multiple containers as Cronjobs in Kubernetes, where I can run multiple Testcafe scripts in each job invocation?</p>
| <p>Adding the <code>containerPort</code> configuration to your Kubernetes resource should do the trick.</p>
<p>For example:</p>
<pre><code> spec:
containers:
- name: script1-job
image: testcafe-minikube
imagePullPolicy: Never
args: ["chromium:headless", "script1.js", "--ports 12345,12346"]
ports:
- containerPort: 12346
</code></pre>
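<p>If the <code>--ports</code> flag still appears to be ignored, note that each element of a Kubernetes <code>args</code> array is passed to the container as one argument, so the flag and its value may need to be separate elements (the port numbers here are arbitrary examples):</p>
<pre><code>args: ["chromium:headless", "script2.js", "--ports", "1338,1339"]
</code></pre>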
|
<p>I am using Azure Kubernetes Service (AKS) and want to make sure pods inside a specific namespace can only receive ingress traffic from other pods in the same namespace.</p>
<p>I found this network policy to achieve this namespace isolation (from <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/04-deny-traffic-from-other-namespaces.md" rel="nofollow noreferrer">here</a>):</p>
<pre class="lang-yaml prettyprint-override"><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
namespace: my-namespace
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
</code></pre>
<p>After I create this network policy, it successfully blocks traffic between pods in "my-namespace" and another namespace, while communication between the pods in "my-namespace" is still possible. However, this is only true if both pods are scheduled on the same node. If both pods are in "my-namespace" but run on different nodes, then the connection between them no longer works. As soon as I delete the above network policy, the connection works again.
I would think that this is not the intended behavior, as the pods are in the same namespace and ingress traffic should therefore be allowed.
Does anybody know what could cause this issue?</p>
<p>I am running Kubernetes version 1.19.6 with kubenet and calico network policies.</p>
| <p>Looks like you hit a known problem in AKS clusters v1.19+ around "Pod IP SNAT/Masquerade behavior".</p>
<p>How it affects clusters using Calico's plugin for Network Policies was explained there by other users:</p>
<blockquote>
<p>Just for information of other users, this issue causes problem for a NetworkPolicy with podSelector configs. Since the policy will be set based on the ipset of the pods in the IPtables by Calico, but the source IP of the packet is set to the node IP and even the packets that are supposed to be allowed will be dropped.</p>
</blockquote>
<p>Please read more about this problem in github issue <a href="https://github.com/Azure/AKS/issues/2031" rel="nofollow noreferrer">#2031</a>, along with the hard fix (node image upgrade) or workaround (run Daemonset creating SNAT exemption in iptables).</p>
|
<p>I have an Kubernetes Cluster with a working Ingress config for one REST API. Now I want to add a port forward to my mqtt adapter to this config, but I have problems finding a way to add an TCP rule to the config. The Kubernetes docs only show a HTTP example. <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p>I'm pretty new to Kubernetes and I have problems adapting other configs, because whatever I find looks totally different from that what I found in the Kubernetes Docs.</p>
<p>I have used a regular nginx webserver with letsencrypt to secure TCP connections. I hope this works with the ingress controller, too.</p>
<p>My goal is to send messages via MQTT with TLS to my cluster.
Does someone have the right docs for this or knows how to add the config?</p>
<p>My config looks like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ratings-web-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- example.com
secretName: ratings-web-cert
rules:
- host: example.com
http:
paths:
- backend:
serviceName: test-api
servicePort: 8080
path: /
</code></pre>
| <p>the Ingress system only handles HTTP traffic in general. A few Ingress Controllers support custom extensions for non-HTTP packet handling but it's different for each. <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a> shows how to do this specifically for ingress-nginx, as shown there you configure it entirely out of band via some ConfigMaps, not via the Ingress object(s).</p>
<p>What you probably actually want is a LoadBalancer type Service object instead.</p>
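<p>As a rough sketch of the ingress-nginx approach, TCP exposure is configured through a <code>tcp-services</code> ConfigMap mapping an external port to a <code>namespace/service:port</code> backend (the namespace, service name, and port 8883 below are assumptions for an MQTT-over-TLS broker):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "8883": "default/mqtt-broker:8883"
</code></pre>
<p>The controller's Service must also expose that port so traffic can reach it.</p>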
|
<p>I am trying to create some persistent space for my Microk8s kubernetes project, but without success so far.</p>
<p>What I've done so far is:</p>
<p>1st. I have created a PV with the following yaml:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: dev-pv-0001
labels:
name: dev-pv-0001
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data/dev
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- asfweb
</code></pre>
<p>After applying it, Kubernetes shows:</p>
<pre><code>NAME  CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS
dev-pv-0001 10Gi RWO Retain Available local-storage
Name: dev-pv-0001
Labels: name=dev-pv-0001
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [asfweb]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /data/dev
Events: <none>
</code></pre>
<p>And here is my deployment yaml:</p>
<pre><code>apiVersion: "v1"
kind: PersistentVolumeClaim
metadata:
name: "dev-pvc-0001"
spec:
storageClassName: "local-storage"
accessModes:
- "ReadWriteMany"
resources:
requests:
storage: "10Gi"
selector:
matchLabels:
name: "dev-pv-0001"
---
# Source: server/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: RELEASE-NAME-server
labels:
helm.sh/chart: server-0.1.0
app.kubernetes.io/name: server
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "0.1.0"
app.kubernetes.io/managed-by: Helm
spec:
type: ClusterIP
ports:
- port: 4000
selector:
app.kubernetes.io/name: server
app.kubernetes.io/instance: RELEASE-NAME
---
# Source: server/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: RELEASE-NAME-server
labels:
helm.sh/chart: server-0.1.0
app.kubernetes.io/name: server
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "0.1.0"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: server
app.kubernetes.io/instance: RELEASE-NAME
template:
metadata:
labels:
app.kubernetes.io/name: server
app.kubernetes.io/instance: RELEASE-NAME
spec:
imagePullSecrets:
- name: gitlab-auth
serviceAccountName: default
securityContext:
{}
containers:
- name: server
securityContext:
{}
image: "registry.gitlab.com/asfweb/asfk8s/server:latest"
imagePullPolicy: Always
resources:
{}
volumeMounts:
- mountPath: /data/db
name: server-pvc-0001
volumes:
- name: server-pvc-0001
persistentVolumeClaim:
claimName: dev-pvc-0001
---
# Source: server/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: RELEASE-NAME-server
labels:
helm.sh/chart: server-0.1.0
app.kubernetes.io/name: server
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "0.1.0"
app.kubernetes.io/managed-by: Helm
annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- "dev.domain.com"
secretName: dev.domain.com
rules:
- host: "dev.domain.com"
http:
paths:
- path: /?(.*)
pathType: Prefix
backend:
service:
name: RELEASE-NAME-server
port:
number: 4000
</code></pre>
<p>Everything else is working apart from the persistent volume claim.
Here is some more info if that can help:</p>
<p><code>kubectl get pvc -A</code></p>
<pre><code>NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
controller-micro storage-controller-0 Bound pvc-f0f97686-c59f-4209-b349-cacf3cd0f126 20Gi RWO microk8s-hostpath 69d
gitlab-managed-apps prometheus-prometheus-server Bound pvc-abc7ea42-8c74-4698-9b40-db2005edcb42 8Gi RWO microk8s-hostpath 69d
asfk8s-25398156-development dev-pvc-0001 Pending local-storage 28m
</code></pre>
<p><code>kubectl describe pvc dev-pvc-0001 -n asfk8s-25398156-development</code></p>
<pre><code>Name: dev-pvc-0001
Namespace: asfk8s-25398156-development
StorageClass: local-storage
Status: Pending
Volume:
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: asfk8s
meta.helm.sh/release-namespace: asfk8s-25398156-development
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: asfk8s-server-6c6bc89c7b-hn44d
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 31m (x2 over 31m) persistentvolume-controller waiting for first consumer to be created before binding
Normal WaitForPodScheduled 30m persistentvolume-controller waiting for pod asfk8s-server-6c6bc89c7b-hn44d to be scheduled
Normal WaitForPodScheduled 12s (x121 over 30m) persistentvolume-controller waiting for pod asfk8s-server-6c6bc89c7b-hn44d to be scheduled
</code></pre>
<p><code>kubectl describe pod asfk8s-server-6c6bc89c7b-hn44d -n asfk8s-25398156-development</code></p>
<pre><code>Name: asfk8s-server-6c6bc89c7b-hn44d
Namespace: asfk8s-25398156-development
Priority: 0
Node: <none>
Labels: app.kubernetes.io/instance=asfk8s
app.kubernetes.io/name=server
pod-template-hash=6c6bc89c7b
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/asfk8s-server-6c6bc89c7b
Containers:
server:
Image: registry.gitlab.com/asfweb/asfk8s/server:3751bf19e3f495ac804ae91f5ad417829202261d
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/data/db from server-pvc-0001 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lh7dl (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
server-pvc-0001:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: dev-pvc-0001
ReadOnly: false
default-token-lh7dl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lh7dl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 33m default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
Warning FailedScheduling 32m default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
</code></pre>
<p>Can somebody please help me to fix this problem? Thanks in advance.</p>
| <p>The issue is that you are using a <strong>node affinity</strong> rule while creating the PV.</p>
<p>Node affinity essentially tells Kubernetes "this disk will only be available on this type of node". Because of that affinity, your PV can only be bound on that one specific node.</p>
<p>When you deploy the workload (POD), it is not guaranteed to be scheduled on that specific node, so the POD cannot get the PV or PVC.</p>
<p><strong>Simple words:</strong></p>
<p>In simple words, if you add a <strong>node affinity</strong> rule to the <strong>PV</strong>, add it to the <strong>deployment</strong> as well, so that both the <strong>PVC</strong> and the POD get scheduled on the <strong>same</strong> node.</p>
<p><strong>To resolve this issue</strong></p>
<p>make sure both the POD and the PVC are scheduled on the same node by adding the node affinity rule to the deployment too,</p>
<p><strong>or else</strong></p>
<p>remove the node affinity rule from the PV, then create a new PV and PVC and use them.</p>
<p>Here is the place where you have defined the <strong>node affinity</strong> rule:</p>
<pre><code>nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- asfweb
</code></pre>
<p>I can see that there is no such rule in your deployment, so your POD can get scheduled anywhere in the cluster.</p>
<p>here you can see the simple example to create the PV and PVC and use it for MySQL Db : <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a></p>
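<p>As a concrete sketch, pinning the POD to the same node can be done with a <code>nodeSelector</code> in the deployment's pod template (using the <code>asfweb</code> hostname from the PV in the question):</p>
<pre><code>spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: asfweb
</code></pre>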
|
<p>I am new to skaffold, k8s, docker set and I've been having trouble building my application on a cluster locally.</p>
<p>I have a code repository that is trying to pull a private NPM package but when building it loses the .npmrc file or the npm secret.</p>
<pre><code>npm ERR! code E404
npm ERR! 404 Not Found - GET https://registry.npmjs.org/@sh1ba%2fcommon - Not found
npm ERR! 404
npm ERR! 404 '@sh1ba/common@^1.0.3' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it (or use the name yourself!)
npm ERR! 404
npm ERR! 404 Note that you can also install from a
npm ERR! 404 tarball, folder, http url, or git url.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-06-02T06_08_57_246Z-debug.log
unable to stream build output: The command '/bin/sh -c npm install' returned a non-zero code: 1. Please fix the Dockerfile and try again..
</code></pre>
<p>Ideally I'd like to avoid hard coding the secret into the file and use a k8s environment variable to pass in the key to docker as a secret. I am able to (kind of) do it with the docker build command:</p>
<ul>
<li>with "--build-args" with npm secret (the not safe way)</li>
<li>with "--secret" with npm secret (the better way)</li>
<li>copying the .npmrc file directly, <code>npm install</code>ing and deleting it right after</li>
</ul>
<p>The issue arises when I try to build it using kubernetes/skaffold. After running, it doesn't seem like any of the args, env variables, or even the .npmrc file is found. When checking in the dockerfile for clues I was able to identify that nothing was being passed over from the manifest (args defined, .npmrc file, etc) to the dockerfile.</p>
<p>Below is the manifest for the application:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: auth
env:
- name: NPM_SECRET
valueFrom:
secretKeyRef:
name: npm-secret
key: NPM_SECRET
args: ["--no-cache", "--progress=plain", "--secret", "id=npmrc,src=.npmrc"]
</code></pre>
<p>Here's the code in the dockerfile:</p>
<pre><code># syntax=docker/dockerfile:1.2
# --------------> The build image
FROM node:alpine AS build
WORKDIR /app
COPY package*.json .
RUN --mount=type=secret,mode=0644,id=npmrc,target=/app/.npmrc \
npm install
# --------------> The production image
FROM node:alpine
WORKDIR /app
COPY package.json .
COPY tsconfig.json .
COPY src .
COPY prisma .
COPY --chown=node:node --from=build /app/node_modules /app/node_modules
COPY --chown=node:node . /app
RUN npm run build
CMD ["npm", "start"]
</code></pre>
<p>And also the skaffold file:</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
- ./infra/k8s-dev/*
build:
local:
push: false
artifacts:
- image: auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
</code></pre>
<p>A few notes:</p>
<ul>
<li>I can't locate the .npmrc file regardless of where I copy and paste it (in auth, in the manifest, in skaffold and in the ~/ directories)</li>
<li>I would like to also make it semi-usable (pretty reusable) in production too so that I don't need to do a complete overhaul if possible (but if this is bad practice I'd like to hear more about it)</li>
<li>I've been able to make it work with buildArgs in the skaffold.yaml file but I'm not sure how that would translate into a production environment as I can't pass build args from kubernetes to docker (and I read that it isn't safe and that secrets should be used)</li>
<li>The args in the manifest are throwing this error too (server runs if the args are commented out):</li>
</ul>
<pre><code> - deployment/auth-depl: container auth terminated with exit code 9
- pod/auth-depl-85fb8975d8-4rh9r: container auth terminated with exit code 9
> [auth-depl-85fb8975d8-4rh9r auth] node: bad option: --progress=plain
> [auth-depl-85fb8975d8-4rh9r auth] node: bad option: --secret
- deployment/auth-depl failed. Error: container auth terminated with exit code 9.
</code></pre>
<p>Any insight would be amazing, I've been fiddling with this for far too long now.</p>
<p>Thank you!</p>
| <p>Building and deploying an image to Kubernetes is at three levels:</p>
<ol>
<li>Your local system where you initiate the building of an image</li>
<li>The Docker build that populates the image and then stores that image somewhere</li>
<li>The Kubernetes cluster that loads and starts running that image</li>
</ol>
<p>Docker is not involved in #3. <em>(This is only partially true, since some clusters use Docker to run the containers too, but that's a hidden detail and is also changing.)</em></p>
<p>There are two places where you might communicate secrets:</p>
<ul>
<li>at image build time (steps #1 to #2): you can use Docker <code>--build-args</code> or mounting secrets with <code>--secret</code> (both require Buildkit)</li>
<li>at deployment time (step #3): you use Kubernetes secrets or config maps, which are configured separately from the image build</li>
</ul>
<p>Skaffold supports passing build-time secrets, like your npm password, with Docker's <code>--build-args</code> and <code>--secret</code> flags, though they are slightly renamed.</p>
<p><a href="https://skaffold.dev/docs/references/yaml/#build-artifacts-docker-buildArgs" rel="noreferrer"><code>buildArgs</code></a> supports Go-style templating, so you can reference environment variables like <code>MYSECRET</code> as <code>{{.MYSECRET}}</code>:</p>
<pre><code>build:
local:
useBuildkit: true
artifacts:
- image: auth
context: auth
docker:
buildArgs:
MYSECRET: "{{.MYSECRET}}"
</code></pre>
<p>Then you can reference <code>MYSECRET</code> within your <code>Dockerfile</code>:</p>
<pre><code>ARG MYSECRET
RUN echo MYSECRET=${MYSECRET}
</code></pre>
<p>Note that build-args are not propagated into your container unless you explicitly assign it via an <code>ENV MYSECRET=${MYSECRET}</code>.</p>
<p>If the secret is in a local file, you can use the <a href="https://skaffold.dev/docs/references/yaml/#build-artifacts-docker-secret" rel="noreferrer"><code>secret</code></a> field in the <code>skaffold.yaml</code>:</p>
<pre><code>build:
local:
useBuildkit: true
artifacts:
- image: auth
context: auth
docker:
secret:
id: npmrc
src: /path/to/.npmrc
</code></pre>
<p>and you'd then reference the secret as you are in your <code>Dockerfile</code>:</p>
<pre><code>RUN --mount=type=secret,mode=0644,id=npmrc,target=/app/.npmrc \
npm install
</code></pre>
<hr />
<p>Now in your <code>Deployment</code>, you're attempting to setting <code>args</code> for your container:</p>
<pre><code> args: ["--no-cache", "--progress=plain", "--secret", "id=npmrc,src=.npmrc"]
</code></pre>
<p>The <code>args</code> field overrides the <code>CMD</code> directive set in your image. This field is used to provide command-line arguments provided to your image's entrypoint, which is likely <code>node</code>. If you want to reference a secret in a running container on a cluster, you'd use a <code>Secret</code> or <code>ConfigMap</code>.</p>
|
| <p>When I run the cronjob in Kubernetes, the cron reports success, but I am not getting the desired result.</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ $.Values.appName }}
namespace: {{ $.Values.appName }}
spec:
schedule: "* * * * *"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
spec:
containers:
- name: test
image: image
command: ["/bin/bash"]
args: [ "test.sh" ]
restartPolicy: OnFailure
</code></pre>
<p>also, I am sharing test.sh</p>
<pre><code>#!/bin/sh
rm -rf /tmp/*.*
echo "remove done"
</code></pre>
<p>The cronjob runs successfully, but
when I check the container the files are not deleted from the <strong>/tmp</strong> directory.</p>
| <p>You need to have a persistent volume attached to both your pod and your CronJob: the CronJob runs in its own container with its own filesystem, so deleting files under its <code>/tmp</code> does not remove anything from the application container's <code>/tmp</code>. Mount the shared volume in both and make the path in your script point at that mount. For adding Kubernetes CronJobs kindly go through this <a href="https://stackoverflow.com/questions/46578331/kubernetes-is-it-possible-to-mount-volumes-to-a-container-running-as-a-cronjob">link</a></p>
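<p>As a sketch, a shared volume mounted into the CronJob container could look like this (the <code>claimName</code> is a placeholder; the application pod would mount the same claim at the path it writes to):</p>
<pre><code>spec:
  containers:
  - name: test
    image: image
    command: ["/bin/bash"]
    args: ["test.sh"]
    volumeMounts:
    - name: shared-tmp
      mountPath: /tmp
  volumes:
  - name: shared-tmp
    persistentVolumeClaim:
      claimName: shared-tmp-pvc
  restartPolicy: OnFailure
</code></pre>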
|
<p>I have successfully configured my service monitor to monitor an API that provides metrics and runs in a Kubernetes pod. However, I would also like to add an external service to my service monitor targets. This external service is the ArangoDB Oasis exporter metrics (<a href="https://www.youtube.com/watch?v=c8i7K4HUPF4&t=554s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=c8i7K4HUPF4&t=554s</a>), and it is not running in a Kubernetes container. Here are the configuration files concerned:</p>
<ol>
<li><code>/helm/charts/prometheus-xxx/templates/service_monitor.tpl</code></li>
</ol>
<pre><code>---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "jobs-manager-servicemonitor.fullname" . }}
# Change this to the namespace the jobs-manager-servicemonitor instance is running in
namespace: {{ .Values.serviceMonitor.namespace }}
labels:
serviceapp: {{ template "jobs-manager-servicemonitor.name" . }}
release: "{{ .Release.Name }}"
spec:
selector:
matchLabels:
# Targets jobs-manager service
app.kubernetes.io/instance: {{ .Values.instance.name }}
endpoints:
- port: {{ .Values.service.metricsPort.name }}
interval: {{ .Values.serviceMonitor.interval }}
{{- if .Values.serviceMonitor.scrapeTimeout }}
scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
{{- end }}
namespaceSelector:
matchNames:
- {{ .Values.Namespace }}
</code></pre>
<ol start="2">
<li><code>/helm/charts/prometheus-xxx/Chart.yaml</code></li>
</ol>
<pre><code>apiVersion: v1
appVersion: "1.0.0"
description: Prometheus Service monitor, customized
name: jobs-manager-servicemonitor
version: 1.0.1
</code></pre>
<ol start="3">
<li><code>/helm/charts/prometheus-xxx/templates/_helpers.tpl</code></li>
</ol>
<pre><code>{{/*
Expand the name of the chart.
*/}}
{{- define "jobs-manager-servicemonitor.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "jobs-manager-servicemonitor.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
</code></pre>
<ol start="4">
<li><code>/helm/charts/prometheus-xxx/values.yaml</code></li>
</ol>
<pre><code>serviceMonitor:
enabled: false
namespace: prometheus
interval: 10s
scrapeTimeout: 10s
service:
metricsPort:
name: http
instance:
name: jobs-manager
Namespace: test1
</code></pre>
<p>Do you have any suggestions on how to add an external service that is not running in a Kubernetes pod as a target of the service monitor? Thank you very much in advance.</p>
<p><strong>----------UPDATE POST----------</strong></p>
<p>Here are my new config files in the charts template <code>arangodb-servicemonitor</code>:</p>
<ol>
<li><code>/helm/charts/arangodb-servicemonitor/templates/service.yaml</code></li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: arangodb
namespace: prometheus
labels:
app: arangodb
release: prometheus
spec:
type: ClusterIP
externalName: xxxxx.arangodb.cloud:xxxx
ports:
- name: metrics
port: 9000
targetPort: 9000
protocol: TCP
- bearer_token: [ARANGODB_TOKEN]
type: ExternalName
</code></pre>
<ol start="2">
<li><code>/helm/charts/arangodb-servicemonitor/templates/endpoints.yaml</code></li>
</ol>
<pre><code>kind: Endpoints
apiVersion: v1
metadata:
name: arangodb
labels:
app: arangodb
subsets:
- addresses:
- ip: xxxxx.arangodb.cloud:xxxx
ports:
- name: metrics
port: 9000
protocol: TCP
</code></pre>
<ol start="3">
<li><code>/helm/charts/arangodb-servicemonitor/templates/service_monitor.tpl</code></li>
</ol>
<pre><code>---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: arangodb-servicemonitor
# Change this to the namespace the arangodb-servicemonitor instance is running in
namespace: prometheus
labels:
serviceapp: arangodb-servicemonitor
release: prometheus
spec:
selector:
# Targets arangodb service
app: arangodb
endpoints:
# TO DO: use an array (List) of endpoints to monitor many endpoints
- port: metrics
interval: 30s
namespaceSelector:
matchNames:
# TO DO: use an array (List) of endpoints to monitor many endpoints
- default
</code></pre>
<p>But when I do <code>terraform apply</code>, I get this error message regarding the <code>bearer_token</code> that I added:</p>
<p><a href="https://i.stack.imgur.com/6PggK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6PggK.png" alt="makeapplyERR" /></a></p>
<p>For info, I need to add this token to connect to ArangoDB external service:</p>
<p><a href="https://i.stack.imgur.com/XuX3A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XuX3A.png" alt="connectArangoDB" /></a></p>
| <p>Create a plain <strong>Kubernetes</strong> Service (without selectors) plus an <strong>Endpoints</strong> object pointing at the external database, then let the Prometheus <strong>ServiceMonitor</strong> target that Service the same way it targets in-cluster services.</p>
<p>First, create the Kubernetes Service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: arangodb
spec:
type: ClusterIP
ports:
- name: metrics
port: 9000
targetPort: 9000
</code></pre>
<p>Then create the <strong>Endpoints</strong> object for that database. Prometheus must be able to reach the database IP from inside the cluster:</p>
<pre><code>kind: Endpoints
apiVersion: v1
metadata:
  name: arangodb
subsets:
- addresses:
- ip: IP of Database
ports:
- name: metrics
port: 9000
</code></pre>
<p>The ServiceMonitor will select the Kubernetes Service, the Service resolves through the Endpoints object to the database, and Prometheus scrapes the metrics from the external service through it.</p>
<p>Details on creating Services without selectors: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors</a></p>
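<p>Regarding the <code>bearer_token</code> error in the update: a token is not a valid field of a Service or Endpoints object, which is why <code>terraform apply</code> rejects it. With the Prometheus Operator, a scrape token is normally supplied on the ServiceMonitor's <code>spec.endpoints</code> through a Secret, roughly like this (the Secret name and key below are assumptions, adjust them to your setup):</p>
<pre><code>endpoints:
  - port: metrics
    interval: 30s
    bearerTokenSecret:
      name: arangodb-token   # hypothetical Secret holding the ArangoDB token
      key: token
</code></pre>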
|
<p>I have created a new GKE Cluster in the region 'us-west-1' and gave full access to the cloud services. I want to deploy the kubeflow pipeline on the cluster. I am getting the following error when I click the deploy button...</p>
<p>Error: Failed to create CustomResourceDefinition.</p>
<pre><code>{"metadata":{},"status":"Failure","message":"CustomResourceDefinition.apiextensions.k8s.io \"applications.app.k8s.io\" is invalid: [spec.versions[0].schema.openAPIV3Schema: Required value: schemas are required, spec.versions: Invalid value: []apiextensions.CustomResourceDefinitionVersion{apiextensions.CustomResourceDefinitionVersion{Name:\"v1beta1\", Served:false, Storage:false, Schema:(*apiextensions.CustomResourceValidation)(nil), Subresources:(*apiextensions.CustomResourceSubresources)(nil), AdditionalPrinterColumns:[]apiextensions.CustomResourceColumnDefinition(nil)}}: must have exactly one version marked as storage version, status.storedVersions: Invalid value: []string(nil): must have at least one stored version, metadata.annotations[api-approved.kubernetes.io]: Required value: protected groups must have approval annotation \"api-approved.kubernetes.io\", see https://github.com/kubernetes/enhancements/pull/1111]","reason":"Invalid","details":{"name":"applications.app.k8s.io","group":"apiextensions.k8s.io","kind":"CustomResourceDefinition","causes":[{"reason":"FieldValueRequired","message":"Required value: schemas are required","field":"spec.versions[0].schema.openAPIV3Schema"},{"reason":"FieldValueInvalid","message":"Invalid value: []apiextensions.CustomResourceDefinitionVersion{apiextensions.CustomResourceDefinitionVersion{Name:\"v1beta1\", Served:false, Storage:false, Schema:(*apiextensions.CustomResourceValidation)(nil), Subresources:(*apiextensions.CustomResourceSubresources)(nil), AdditionalPrinterColumns:[]apiextensions.CustomResourceColumnDefinition(nil)}}: must have exactly one version marked as storage version","field":"spec.versions"},{"reason":"FieldValueInvalid","message":"Invalid value: []string(nil): must have at least one stored version","field":"status.storedVersions"},{"reason":"FieldValueRequired","message":"Required value: protected groups must have approval annotation \"api-approved.kubernetes.io\", see 
https://github.com/kubernetes/enhancements/pull/1111","field":"metadata.annotations[api-approved.kubernetes.io]"}]},"code":422}
</code></pre>
<p>I have left all the other fields default and not accessing any database.</p>
<p>This is my first project using GCP. Any help is much appreciated.</p>
| <p>Which version of <strong>Kubernetes</strong> are you running in your cluster?</p>
<p>The error you are getting is caused by a wrong API version in the CRD files.</p>
<p>You can check the API versions supported by your Kubernetes cluster by running:</p>
<pre><code>kubectl api-resources
</code></pre>
<p><a href="https://i.stack.imgur.com/viNRj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/viNRj.png" alt="enter image description here" /></a></p>
<p>Check the supported API version for CRDs:</p>
<pre><code>apiextensions.k8s.io
</code></pre>
<p>Update the files to the supported API version and try applying them again.</p>
<p><strong>Kubeflow</strong> also provides a way to validate all the necessary APIs first;</p>
<p>you can check it at: <a href="https://github.com/kubeflow/crd-validation" rel="nofollow noreferrer">https://github.com/kubeflow/crd-validation</a></p>
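<p>As the error message states, <code>apiextensions.k8s.io/v1</code> (the only CRD version served on Kubernetes 1.16+) requires a structural schema and exactly one storage version. A minimal valid skeleton for the CRD named in the error would look roughly like this (the schema is a permissive placeholder, not the real Application schema, and the annotation URL is a placeholder):</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: applications.app.k8s.io
  annotations:
    # required because the group ends in k8s.io (a protected group)
    api-approved.kubernetes.io: "https://github.com/kubernetes-sigs/application"
spec:
  group: app.k8s.io
  names:
    kind: Application
    plural: applications
  scope: Namespaced
  versions:
    - name: v1beta1
      served: true
      storage: true          # exactly one version must be the storage version
      schema:
        openAPIV3Schema:     # a structural schema is mandatory in v1
          type: object
          x-kubernetes-preserve-unknown-fields: true
</code></pre>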
|
<p>I have taken up the challenge of automating the deployment of my company's Django-based application that is done with AKS but I am very new to it. My initial idea is to accomplish it by upgrading the steps in a GitHub workflow that acts on the release of a new version.</p>
<p>I have structured it with three jobs. <code>build</code>, <code>migrate</code> and <code>deploy</code>:</p>
<ol>
<li><code>build</code>: Simply build the Docker image and push it to the container registry on DockerHub - this step is successfully done.</li>
<li><code>migrate</code>: Run the migrations in the production database from <code>python manage.py migrate</code> - here lies the problem.</li>
<li><code>deploy</code>: Deploy the image to the Kubernetes cluster - successfully done.</li>
</ol>
<p>Step 2 is the problem because we store the Postgres database credentials inside the Kubernetes cluster and to run the migrations I need those secrets to pass them as environment variables before I call the <code>migrate</code> command. So I am stuck on how I can pull those secrets from Kubernetes and use them to run a command in a step in GitHub action like this:</p>
<pre class="lang-yaml prettyprint-override"><code>migrate:
needs: build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.8
uses: actions/setup-python@v2
with:
python-version: 3.8
- name: psycopg2 prerequisites
run: sudo apt-get install python3-dev libpq-dev
- name: Install dependencies
run: |
python3 -m pip install --upgrade pip
python3 -m pip install -r requirements.txt
- name: Run migrations
run: |
POSTGRES_HOST={{ secret_host }} POSTGRES_USER={{ secret_user }} POSTGRES_PASSWORD={{ secret_password }} python manage.py showmigrations --settings settings_production
POSTGRES_HOST={{ secret_host }} POSTGRES_USER={{ secret_user }} POSTGRES_PASSWORD={{ secret_password }} python manage.py migrate --settings settings_production
</code></pre>
<p>Question is, is this even possible? If so, how can I do it? If not, what is another option to run the migrations in production before finishing the deployment?</p>
| <p>You can run the DB migrations from the Kubernetes cluster itself:</p>
<ol>
<li>Create a Kubernetes Job that runs the DB migration, and</li>
<li>Deploy an init container in front of the main (application) container that periodically checks whether the migration Job has completed.</li>
</ol>
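<p>A minimal sketch of such a migration Job (the image, Secret name, and key names below are assumptions, adjust them to your setup):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: django-migrate
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: docker/myapp:latest   # hypothetical image pushed by the build job
          command: ["python", "manage.py", "migrate", "--settings", "settings_production"]
          envFrom:
            - secretRef:
                name: postgres-credentials   # hypothetical Secret holding the POSTGRES_* variables
</code></pre>
<p>This way the database credentials never leave the cluster; the GitHub workflow only needs permission to apply the Job and wait for it, e.g. with <code>kubectl wait --for=condition=complete job/django-migrate</code>.</p>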
|
<h1>Question:</h1>
<p>I am trying to create a Kubernetes cluster, namespace, and secrets via Terraform.
The cluster is created, but the resources that build upon the cluster fail to be created.</p>
<h1>Background information</h1>
<h2>Error message:</h2>
<p>This is the error message thrown by terraform after creation of the kubernetes cluster, when the namespace is to be created:</p>
<pre><code>azurerm_kubernetes_cluster_node_pool.mypool: Creation complete after 6m4s [id=/subscriptions/aaabcde1-abcd-abcd-abcd-aaaaaaabdce/resourcegroups/myrg/providers/Microsoft.ContainerService/managedClusters/my-aks/agentPools/win]
Error: Post https://my-aks-abcde123.hcp.australiaeast.azmk8s.io:443/api/v1/namespaces: dial tcp: lookup my-aks-abcde123.hcp.australiaeast.azmk8s.io on 10.128.10.5:53: no such host
on mytf.tf line 114, in resource "kubernetes_namespace" "my":
114: resource "kubernetes_namespace" "my" {
</code></pre>
<h2>Manual workaround:</h2>
<p>I can resolve this by manually authenticating against the kubernetes cluster via the command line and applying the outstanding terraform changes via another <code>terraform apply</code>:</p>
<pre><code>az aks get-credentials -g myrg -n my-aks --overwrite-existing
</code></pre>
<h2>Automated workaround attempt:</h2>
<p>My attempt to automate this authentication step failed. I have tried with a local exec provisioner inside the definition of the kubernetes cluster, without success:</p>
<pre><code>resource "azurerm_kubernetes_cluster" "myCluster" {
name = "my-aks"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
dns_prefix = "my-aks"
network_profile {
network_plugin = "azure"
}
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_B2s"
}
service_principal {
client_id = azuread_service_principal.tfapp.application_id
client_secret = azuread_service_principal_password.tfapp.value
}
tags = {
Environment = "demo"
}
windows_profile {
admin_username = "myself"
admin_password = random_string.password.result
}
provisioner "local-exec" {
command="az aks get-credentials -g myrg -n my-aks --overwrite-existing"
}
}
</code></pre>
<p>This is an example of a resource that fails to be created:</p>
<pre><code>resource "kubernetes_namespace" "my" {
metadata {
name = "my-namespace"
}
}
</code></pre>
<p>Is there a way to fully automate the creation of my resources, including those that are based on the kubernetes cluster, without manual authentication?</p>
| <p>Eventually I got this to work, without any requirement to use <code>az login</code>, <code>az aks get-credentials</code>, a <code>null_resource</code>, or <code>local-exec</code> provisioners as suggested above.</p>
<p>Instead, I used a data block in main.tf to obtain the AKS cluster (output from the AKS module), and used the <code>kube_admin_config</code> from that data source as credentials for the Kubernetes provider block. See below:</p>
<pre><code>data "azurerm_kubernetes_cluster" "aks" {
  name                = local.aks_cluster_name
  resource_group_name = module.infra_resource_group.rg.name
  depends_on = [
    module.aks.aks_cluster
  ]
}
provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.aks.kube_admin_config.0.host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate)
}
</code></pre>
<p>NOTE: I found that using <code>kube_config</code> in the Kubernetes provider block did not work; it needed higher permissions, which is why the <code>kube_admin_config</code> attribute is used instead.</p>
<p>Using:</p>
<ul>
<li>Terraform v0.15.0</li>
<li>AzureRM Provider 2.60.0</li>
<li>Kubernetes Provider 2.2.0</li>
</ul>
|
<p>I'm new to Kubernetes and wondering, if there's a <code>kubectl</code> command to figure out what namespace I'm currently working in?</p>
<p>Running the <code>kubectl get ns</code> command prints out all the namespaces but doesn't show which one I'm in at present.</p>
| <p>You want to inspect the local <code>kubectl</code> config and see the current context. This shows your current context along with its namespace.</p>
<pre><code>kubectl config get-contexts
</code></pre>
<p>Example output (there can be multiple contexts, but only one is current):</p>
<pre><code>$kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube default
</code></pre>
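<p>If you only want the namespace itself, this one-liner reads it from the current context (empty output means the <code>default</code> namespace):</p>
<pre><code>kubectl config view --minify --output 'jsonpath={..namespace}'
</code></pre>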
|
<p>In my scenario, our GitLab package already has a pipeline implemented that deploys a Kubernetes pod on a remote host server. If I want to edit/open a folder in the container of a Kubernetes pod using VS Code, is that doable?</p>
| <p>Official Visual Studio Code documentation <a href="https://code.visualstudio.com/docs/remote/attach-container#_attach-to-a-container-in-a-kubernetes-cluster" rel="nofollow noreferrer">shows</a> how to attach to a running container, also <a href="https://code.visualstudio.com/docs/remote/attach-container#_attach-to-a-container-in-a-kubernetes-cluster" rel="nofollow noreferrer">inside Kubernetes cluster</a>.</p>
<hr />
<p>However, using <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persisent Volumes</a> may be a better approach.</p>
<p>Files modified in a "local" directory are simultaneously modified in the attached volume (the mechanics are a bit more involved, but that mental model works for this example), and they persist between pod restarts.</p>
|
<p>We run Airflow on K8s on DigitalOcean using Helm Chart. Tasks are written using <code>airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator</code>. On a regular basis we see that pods are failing with the following messages (but the issue is not constant, so the majority of time it works fine):</p>
<p>We are using <code>KubernetesExecutor</code>.</p>
<p>Helm Chart info - <code>airflow-stable/airflow version 7.16.0</code></p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31s default-scheduler Successfully assigned airflow/... to k8s-sai-pool-8c13r-9b9-8br6p
Normal Pulling 18s (x2 over 31s) kubelet Pulling image "registry.digitalocean.com/...:455eebb0"
Warning Failed 18s (x2 over 30s) kubelet Failed to pull image "registry.digitalocean.com/...:455eebb0": rpc error: code = Unknown desc = Error response from daemon: Get https://{URL}: unauthorized: authentication required
Warning Failed 18s (x2 over 30s) kubelet Error: ErrImagePull
Normal SandboxChanged 18s (x7 over 30s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal BackOff 16s (x6 over 29s) kubelet Back-off pulling image "registry.digitalocean.com/...:455eebb0"
Warning Failed 16s (x6 over 29s) kubelet Error: ImagePullBackOff
</code></pre>
| <p>That is most likely a transient DigitalOcean Registry issue.
I suggest trying Docker Hub instead.</p>
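<p>Since the events show <code>unauthorized: authentication required</code>, it is also worth checking that the registry pull secret still exists and is referenced by the pods. On DigitalOcean the secret can be regenerated with <code>doctl registry kubernetes-manifest | kubectl apply -f -</code> and referenced in the pod spec like this (the secret name below is an assumption):</p>
<pre><code>spec:
  imagePullSecrets:
    - name: registry-my-registry   # hypothetical secret created by doctl
  containers:
    - name: app
      image: registry.digitalocean.com/my-registry/app:455eebb0
</code></pre>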
|
<p>As part of writing logic for listing pods within a given k8s node, I have the following API call:</p>
<pre><code>func ListRunningPodsByNodeName(kubeClient kubernetes.Interface, nodeName string) (*v1.PodList, error) {
return kubeClient.
CoreV1().
Pods("").
List(context.TODO(), metav1.ListOptions{
FieldSelector: "spec.nodeName=" + nodeName,
})
}
</code></pre>
<p>In order to test <strong>ListRunningPodsByNodeName</strong> using the fake client provided by k8s, I came up with the following test initialization:</p>
<pre><code>func TestListRunningPodsByNodeName(t *testing.T) {
// happy path
kubeClient := fake.NewSimpleClientset(&v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod1",
Namespace: "default",
Annotations: map[string]string{},
},
Spec: v1.PodSpec{
NodeName: "foo",
},
}, &v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod2",
Namespace: "default",
Annotations: map[string]string{},
},
Spec: v1.PodSpec{
NodeName: "bar",
},
})
got, _ := ListRunningPodsByNodeName(kubeClient, "foo")
for i, pod := range got.Items {
fmt.Println(fmt.Sprintf("[%2d] %s", i, pod.GetName()))
}
t.Errorf("Error, expecting only one pod")
}
</code></pre>
<p>When debugging, I got both the <strong>pod1</strong> and <strong>pod2</strong> Pods returned, despite filtering by those running within the <strong>foo</strong> node. Using this same approach to filter by certain metadata works like a charm, but I can't make it work when filtering by nodeName. Does anyone know why? I suspect it might be a limitation of the fake client's capabilities, but I'm not sure enough yet to open an issue.</p>
<p>Thanks in advance</p>
| <p>The fake k8s client does not support filtering by field selector (see <a href="https://github.com/kubernetes/client-go/issues/326#issuecomment-412993326" rel="nofollow noreferrer">this comment</a>). When unit testing with the fake k8s client, it's best to assume that the k8s client will work as expected in the real world (return the correct pods based on your field selector query). In your test, provide the pods to the fake k8s client that <strong>your application</strong> expects and test your own logic, rather than also testing the query logic of the k8s client.</p>
<p>If it's absolutely critical that the fake client perform the filtering for you, you may be able to use the fake client reactors to inject this custom behavior into the fake client. It just means more boilerplate code.</p>
<blockquote>
<p>Anything non-generic (like field selection behavior) can be injected in your tests by adding reactors that deal with specific types of actions, use additional info in the action (in this case, ListAction#GetListRestrictions().Fields), and customize the data returned</p>
</blockquote>
<p>I haven't tested this at all but hopefully it gives you something to start with.</p>
<pre><code>// k8stesting is "k8s.io/client-go/testing"
client := fake.NewSimpleClientset(pods...)
client.PrependReactor("list", "pods", func(action k8stesting.Action) (handled bool, ret runtime.Object, err error) {
    fields := action.(k8stesting.ListAction).GetListRestrictions().Fields
    // Filter your fixture pods against `fields` here and return the result.
    return true, &v1.PodList{Items: filteredPods}, nil
})
</code></pre>
|
<p>I am using the WordCountProg from the tutorial on <a href="https://www.tutorialspoint.com/apache_flink/apache_flink_creating_application.htm" rel="nofollow noreferrer">https://www.tutorialspoint.com/apache_flink/apache_flink_creating_application.htm</a> . The code is as follows:</p>
<p><strong>WordCountProg.java</strong></p>
<pre><code>package main.java.spendreport;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.util.Collector;
public class WordCountProg {
// *************************************************************************
// PROGRAM
// *************************************************************************
public static void main(String[] args) throws Exception {
final ParameterTool params = ParameterTool.fromArgs(args);
// set up the execution environment
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// make parameters available in the web interface
env.getConfig().setGlobalJobParameters(params);
// get input data
DataSet<String> text = env.readTextFile(params.get("input"));
DataSet<Tuple2<String, Integer>> counts =
// split up the lines in pairs (2-tuples) containing: (word,1)
text.flatMap(new Tokenizer())
// group by the tuple field "0" and sum up tuple field "1"
.groupBy(0)
.sum(1);
// emit result
if (params.has("output")) {
counts.writeAsCsv(params.get("output"), "\n", " ");
// execute program
env.execute("WordCount Example");
} else {
System.out.println("Printing result to stdout. Use --output to specify output path.");
counts.print();
}
}
// *************************************************************************
// USER FUNCTIONS
// *************************************************************************
public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
// normalize and split the line
String[] tokens = value.toLowerCase().split("\\W+");
// emit the pairs
for (String token : tokens) {
if (token.length() > 0) {
out.collect(new Tuple2<>(token, 1));
}
}
}
}
}
</code></pre>
<p>This example takes in a text file as input, provides a count for how many times a word appears on the document, and writes the results to an output file.</p>
<p>I am creating my Job Image using the following Dockerfile:</p>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM flink:1.13.0-scala_2.11
WORKDIR /opt/flink/usrlib
# Create Directory for Input/Output
RUN mkdir /opt/flink/resources
COPY target/wordcount-0.0.1-SNAPSHOT.jar /opt/flink/usrlib/wordcount.jar
</code></pre>
<p>Then the yaml for my job looks as follows:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: flink-jobmanager
spec:
template:
metadata:
labels:
app: flink
component: jobmanager
spec:
restartPolicy: OnFailure
containers:
- name: jobmanager
image: docker/wordcount:latest
imagePullPolicy: Never
env:
#command: ["ls"]
args: ["standalone-job", "--job-classname", "main.java.spendreport.WordCountProg", "-input", "/opt/flink/resources/READ.txt", "-output", "/opt/flink/resources/results.txt"] #, <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
#args: ["standalone-job", "--job-classname", "org.sense.flink.examples.stream.tpch.TPCHQuery03"] #, <optional arguments>, <job arguments>] # optional arguments: ["--job-id", "<job id>", "--fromSavepoint", "/path/to/savepoint", "--allowNonRestoredState"]
ports:
- containerPort: 6123
name: rpc
- containerPort: 6124
name: blob-server
- containerPort: 8081
name: webui
livenessProbe:
tcpSocket:
port: 6123
initialDelaySeconds: 30
periodSeconds: 60
volumeMounts:
- name: job-artifacts-volume
mountPath: /opt/flink/resources
- name: flink-config-volume
mountPath: /opt/flink/conf
securityContext:
runAsUser: 9999 # refers to user _flink_ from official flink image, change if necessary
volumes:
- name: flink-config-volume
configMap:
name: flink-config
items:
- key: flink-conf.yaml
path: flink-conf.yaml
- key: log4j-console.properties
path: log4j-console.properties
- name: job-artifacts-volume
hostPath:
# directory location on host
path: /Users/my-user/Documents/semafor/apache_flink/PV
</code></pre>
<p>The goal is to mount /Users/my-user/Documents/semafor/apache_flink/PV where there is a READ.txt file into the pod that serves as input to the job. But when the job tries to execute, I get the following error:</p>
<pre><code>java.io.FileNotFoundException: File /opt/flink/resources/READ.txt does not exist or the user running Flink ('flink') has insufficient permissions to access it.
</code></pre>
<p>I have tried to run:</p>
<pre><code>sudo chown -R 9999:9999 /Users/my-user/Documents/semafor/apache_flink/PV
</code></pre>
<p>Also ran chmod 777... but I get the same error.</p>
<p>I also tried copying the jar to where the READ.txt file is: <em>/Users/my-user/Documents/semafor/apache_flink/PV</em> on my local directory and mount that to /opt/flink/usrlib instead, but then I got:</p>
<pre><code>org.apache.flink.util.FlinkException: Could not find the provided job class (main.java.spendreport.WordCountProg) in the user lib directory (/opt/flink/usrlib).
</code></pre>
<p>I am not that experienced in Kubernetes or Flink, so I'm not sure if I am mounting incorrectly or if I'm doing something wrong. If you have any suggestions, please lmk. Thanks in advance.</p>
| <p>If you are using minikube, you first need to mount the volume into the minikube VM using:</p>
<pre><code>minikube mount /Users/my-user/Documents/semafor/apache_flink/PV:/tmp/PV
</code></pre>
<p>Then use <code>/tmp/PV</code> in your <code>hostPath</code> configuration in the volumes section.</p>
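<p>That is, the volume definition from the question would become:</p>
<pre><code>volumes:
  - name: job-artifacts-volume
    hostPath:
      # path inside the minikube VM, created by `minikube mount`
      path: /tmp/PV
</code></pre>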
<p><strong>Refer to these threads:</strong>
<a href="https://stackoverflow.com/questions/60479594/minikube-volume-write-permissions">Minikube volume write permissions?</a></p>
<p><a href="https://stackoverflow.com/questions/38682114/hostpath-with-minikube-kubernetes">HostPath with minikube - Kubernetes</a></p>
|
<p>I noticed that one of AKS services is in the failed state. When I went to diagnostics, I found out that current version is not supported anymore. So I tried to follow instructions stated here: <a href="https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster</a></p>
<p>I ran first the command:</p>
<pre><code>az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
</code></pre>
<p>and then:</p>
<pre><code>az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version new_version
</code></pre>
<p>and that would produce an error:</p>
<blockquote>
<p>Operation failed with status: 'Conflict'. Details: Upgrades are
disallowed while cluster is in a failed state. For resolution steps
visit <a href="https://aka.ms/aks-cluster-failed" rel="nofollow noreferrer">https://aka.ms/aks-cluster-failed</a> to troubleshoot why the
cluster state may have failed and steps to fix cluster state.</p>
</blockquote>
<p>So, the state was failed due to the old version, and the version could not be updated due to the failed state...
I checked <a href="https://stackoverflow.com/questions/54631309/this-container-service-is-in-a-failed-state">This container service is in a failed state</a> but that was not our problem; we had plenty of resources to go around (which we checked with <code>az aks show --resource-group myResourceGroup --name myAKSCluster --query agentPoolProfiles</code>)</p>
<p>Deleting and recreating AKS is not an option.</p>
| <p>So after hours of trying different solutions and failing, I found the fix among the answers here: <a href="https://github.com/Azure/AKS/issues/542" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/542</a></p>
<p>In order to fix the failed state caused by the outdated version, I simply had to do the following:</p>
<p>Upgrade AKS to the version that is already installed. My version was 1.14.8, so I simply ran:</p>
<pre><code>az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.14.8
</code></pre>
<p>which fixed the failed state of the cluster!</p>
<p>After this I just ran upgrade to the correct next version (1.18.19 in my case):</p>
<pre><code>az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.18.19
</code></pre>
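<p>You can confirm the cluster has left the failed state after each upgrade by checking its provisioning state:</p>
<pre><code>az aks show --resource-group myResourceGroup --name myAKSCluster --query provisioningState --output tsv
</code></pre>
<p>It should report <code>Succeeded</code> once the upgrade has been applied.</p>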
<p>I hope that this will save someone hours of frustrations :)</p>
|
<p>I got this error after upgrading my ingress from <code>networking.k8s.io/v1beta1</code> to <code>networking.k8s.io/v1</code>:</p>
<pre><code>ComparisonError: failed to convert *unstructured.Unstructured to *v1beta1.Ingress: no kind "Ingress" is registered for version "networking.k8s.io/v1" in scheme "pkg/runtime/scheme.go:101"
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.org/websocket-services: "{{ .Release.Name }}-service"
nginx.org/listen-ports: "80,[::]:80,8443"
nginx.org/client-max-body-size: "50m"
nginx.org/location-snippets: |
proxy_set_header Bit-default-date-format $http_bit_default_date_format;
proxy_set_header Bit-timezone $http_bit_timezone;
nginx.org/proxy-pass-headers: "X-Real-IP,X-Forwarded-For,X-Forwarded-Host,X-Forwarded-Port,X-Forwarded-Proto"
spec:
rules:
- host: "myHost"
http:
paths:
- pathType: ImplementationSpecific
backend:
service:
name: {{ .Release.Name }}-service
port:
number: 80
</code></pre>
<p>Any hint on what I am missing?</p>
<p><strong>Kubernetes</strong> version <strong>1.20</strong></p>
<p><strong>Note</strong>: this was an old project and I just updated these variables without running <code>helm upgrade</code>.</p>
| <p>My bad: I updated the ingress file without running <code>helm upgrade</code>.
I have reverted it back to <code>v1beta1</code> for now.</p>
|
<p>I have a GKE Ingress that creates a L7 Load Balancer. I'd like to use a Cloud Tasks Queue to manage asynchronous tasks for one of the web applications running behind the GKE Ingress. This documentation says it is possible to use Cloud Tasks with GKE <a href="https://cloud.google.com/tasks/docs/creating-http-target-tasks#java" rel="nofollow noreferrer">https://cloud.google.com/tasks/docs/creating-http-target-tasks#java</a>.</p>
<p>I'm connecting the dots here, I'd really appreciate it if someone can help answer these questions.</p>
<ul>
<li>What HTTP endpoint should I configure for the Cloud Tasks queue?</li>
</ul>
<p>Is it better to create a separate Internal HTTP load balancer to target the Kubernetes Services?</p>
| <p>The HTTP endpoint is the public URL that you want Cloud Tasks to call to run your async task. Use the public IP/FQDN of your L7 load balancer, followed by the correct path to reach your service and trigger the right endpoint on it.</p>
<p>You can't use an internal HTTP load balancer (even though it would be an appealing way to improve security and block external/unwanted calls). Cloud Tasks (like Cloud Scheduler, Pub/Sub, and others) can, for now, only reach public URLs, not private/VPC-internal IPs.</p>
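<p>For reference, a minimal sketch of enqueuing such a task with the Cloud Tasks Python client (the project, location, queue name, and URL path below are placeholders):</p>
<pre><code>from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")

task = {
    "http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        # public URL of the service behind the GKE Ingress / L7 load balancer
        "url": "https://my.domain.com/tasks/process",
        "body": b'{"job": "example"}',
        "headers": {"Content-Type": "application/json"},
    }
}
client.create_task(request={"parent": parent, "task": task})
</code></pre>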
|
<p>Following the instructions on the Keycloak docs site below, I'm trying to set up Keycloak to run in a Kubernetes cluster. I have an Ingress Controller set up which successfully works for a simple test page. Cloudflare points the domain to the ingress controllers IP.</p>
<p>Keycloak deploys successfully (<code>Admin console listening on http://127.0.0.1:9990</code>), but when going to the domain I get a message from NGINX: <code>503 Service Temporarily Unavailable</code>.</p>
<p><a href="https://www.keycloak.org/getting-started/getting-started-kube" rel="nofollow noreferrer">https://www.keycloak.org/getting-started/getting-started-kube</a></p>
<p>Here's the Kubernetes config:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keycloak-cip
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 8080
selector:
name: keycloak
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: nginx
service.beta.kubernetes.io/linode-loadbalancer-default-protocol: https
service.beta.kubernetes.io/linode-loadbalancer-port-443: '{ "tls-secret-name": "my-secret", "protocol": "https" }'
spec:
rules:
- host: my.domain.com
http:
paths:
- backend:
serviceName: keycloak-cip
servicePort: 8080
tls:
- hosts:
- my.domain.com
secretName: my-secret
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
namespace: default
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:12.0.3
env:
- name: KEYCLOAK_USER
value: "admin"
- name: KEYCLOAK_PASSWORD
value: "admin"
- name: PROXY_ADDRESS_FORWARDING
value: "true"
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
initialDelaySeconds: 90
periodSeconds: 5
failureThreshold: 30
successThreshold: 1
revisionHistoryLimit: 1
</code></pre>
<hr />
<p>Edit:</p>
<p>TLS should be handled by the ingress controller.</p>
<p>--</p>
<p>Edit 2:</p>
<p>If I go into the controller using kubectl exec, I can do <code>curl -L http://127.0.0.1:8080/auth</code> which successfully retrieves the page:
<code><title>Welcome to Keycloak</title></code>. So I'm sure that keycloak is running. It's just that either traffic doesn't reach the pod, or keycloak doesn't respond.</p>
<p>If I use the ClusterIP instead but otherwise keep the call above the same, I get a <code>Connection timed out</code>. I tried both ports 80 and 8080 with the same result.</p>
| <p>The following configuration is required to run <strong>keycloak</strong> behind <strong>ingress controller</strong>:</p>
<pre><code>- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: KEYCLOAK_HOSTNAME
value: "my.domain.com"
</code></pre>
<p>So I think adding the correct <strong>KEYCLOAK_HOSTNAME</strong> value should solve your issue.</p>
<p>I had a similar issue with Traefik Ingress Controller:
<strong><a href="https://stackoverflow.com/questions/67828817/cant-expose-keycloak-server-on-aws-with-traefik-ingress-controller-and-aws-http">Can't expose Keycloak Server on AWS with Traefik Ingress Controller and AWS HTTPS Load Balancer</a></strong></p>
<p>You can find the full code of my configuration here:
<strong><a href="https://github.com/skyglass-examples/user-management-keycloak" rel="nofollow noreferrer">https://github.com/skyglass-examples/user-management-keycloak</a></strong></p>
|
<p>I've an operator which runs a reconcile on some object changes. Now I want to add the ability to reconcile when a specific <code>configmap</code> changes (my operator is <strong>not</strong> responsible for this CM, it just needs to listen to it and read it on changes...). From the docs I think I need to use <code>Owns(&corev1.Configmap{})</code>, but I'm not sure how to do it and provide a specific configmap name to watch,</p>
<p>How should I refer to specific configmap <code>name: foo</code> in <code>namespace=bar</code></p>
<p><a href="https://sdk.operatorframework.io/docs/building-operators/golang/references/event-filtering/#using-predicates" rel="nofollow noreferrer">https://sdk.operatorframework.io/docs/building-operators/golang/references/event-filtering/#using-predicates</a></p>
| <p>I haven't used this specific operator framework, but the concepts are familiar. Create a predicate function like this and use it when you are creating a controller by passing it into the SDK's <code>WithEventFilter</code> function:</p>
<pre><code>import (
    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
)

func specificConfigMap(name, namespace string) predicate.Predicate {
    return predicate.Funcs{
        UpdateFunc: func(e event.UpdateEvent) bool {
            configmap, ok := e.ObjectNew.(*corev1.ConfigMap)
            if !ok {
                return false
            }
            return configmap.Name == name && configmap.Namespace == namespace
        },
    }
}
</code></pre>
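<p>As for wiring it up: note that <code>WithEventFilter</code> applies the predicate to <em>every</em> watched type, so it is often better to scope the predicate to just the ConfigMap watch with <code>builder.WithPredicates</code>. The sketch below is untested; <code>MyReconciler</code> and <code>myv1.MyApp</code> are placeholders for your own reconciler and primary resource:</p>
<pre><code>// Hedged sketch: watch a specific ConfigMap alongside the primary resource.
func (r *MyReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&myv1.MyApp{}). // placeholder: your operator's primary resource
        Watches(&source.Kind{Type: &corev1.ConfigMap{}},
            &handler.EnqueueRequestForObject{},
            // only updates to foo/bar trigger Reconcile for this watch
            builder.WithPredicates(specificConfigMap("foo", "bar"))).
        Complete(r)
}
</code></pre>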
|
<p>Previously I was using the <code>extensions/v1beta1</code> api to create ALB on Amazon EKS. After upgrading the EKS to <code>v1.19</code> I started getting warnings:</p>
<pre><code>Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
</code></pre>
<p>So I started to update my ingress configuration accordingly and deployed the ALB but the ALB is not launching in AWS and also not getting the ALB address.</p>
<p>Ingress configuration --></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "pub-dev-alb"
namespace: "dev-env"
annotations:
kubernetes.io/ingress.class: "alb"
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- host: "dev.test.net"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: "dev-test-tg"
port:
number: 80
</code></pre>
<p>Node port configuration --></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: "dev-test-tg"
namespace: "dev-env"
spec:
ports:
- port: 80
targetPort: 3001
protocol: TCP
type: NodePort
selector:
app: "dev-test-server"
</code></pre>
<p>Results ---></p>
<p><a href="https://i.stack.imgur.com/tc2rp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tc2rp.png" alt="enter image description here" /></a></p>
<p>Used <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-setup/" rel="nofollow noreferrer">this documentation</a> to create ALB ingress controller.</p>
<p>Could anyone help me on here?</p>
| <p>Your Ingress should work fine even though you see this warning. The warning only indicates that a newer version of the API is available; you don't have to worry about it.</p>
<p>Here is the <a href="https://github.com/kubernetes/kubernetes/issues/94761" rel="nofollow noreferrer">explanation</a> why this warning occurs, even if you you use <code>apiVersion: networking.k8s.io/v1</code>:</p>
<blockquote>
<p>This is working as expected. When you create an ingress object, it can be read via any version (the server handles converting into the requested version). <code>kubectl get ingress</code> is an ambiguous request, since it does not indicate what version is desired to be read.</p>
<p>When an ambiguous request is made, kubectl searches the discovery docs returned by the server to find the first group/version that contains the specified resource.</p>
<p>For compatibility reasons, <code>extensions/v1beta1</code> has historically been preferred over all other api versions. Now that ingress is the only resource remaining in that group, and is deprecated and has a GA replacement, 1.20 will drop it in priority so that <code>kubectl get ingress</code> would read from <code>networking.k8s.io/v1</code>, but a 1.19 server will still follow the historical priority.</p>
<p>If you want to read a specific version, you can qualify the get request (like <code>kubectl get ingresses.v1.networking.k8s.io</code> ...) or can pass in a manifest file to request the same version specified in the file (<code>kubectl get -f ing.yaml -o yaml</code>)</p>
</blockquote>
<p>You can also see a <a href="https://stackoverflow.com/questions/66080909">similar question</a>.</p>
|
<p>Below is our current application architecture, we are using CA Workload Automation ESP for batch scheduling</p>
<p><a href="https://i.stack.imgur.com/1o42O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1o42O.png" alt="enter image description here" /></a></p>
<p>and we are moving to this application to Azure Kubernetes</p>
<p>Application is built on Java tech stack. How do I configure the CA Workload Automation ESP Agent on the docker Image so that ESP Server can connect and execute the Jobs?</p>
| <p>You'll need to build a custom docker image with CA Workload Automation ESP Agent installed on it before adding your Java application.</p>
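<p>As a rough sketch, the image build could look like the Dockerfile below. The installer file name, response file, paths and install command are all placeholders: the actual silent-install procedure comes from the CA/Broadcom agent documentation, so treat every line as an assumption to adapt:</p>
<pre><code>FROM eclipse-temurin:11-jre

# Placeholder: copy the ESP/System Agent installer shipped by CA/Broadcom
COPY wa_agent_installer.bin /tmp/wa_agent_installer.bin

# Placeholder: run the vendor's silent install (consult the agent docs
# for the real command, response file and target directory)
RUN /tmp/wa_agent_installer.bin -silent -responseFile /tmp/response.txt \
 && rm /tmp/wa_agent_installer.bin

# Your Java application on top
COPY app.jar /opt/app/app.jar

# An entrypoint script would normally start the agent daemon first,
# then exec the JVM so both run in the container
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
</code></pre>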
|
<p>I was installing Keycloak using the <a href="https://operatorhub.io/operator/keycloak-operator" rel="nofollow noreferrer">Operator</a> (version 13.0.0). The updated code in the <a href="https://github.com/keycloak/keycloak-operator" rel="nofollow noreferrer">github repository</a> has theme-related support and handles custom theme integration quite well. All we need is a URL where the custom <code>theme</code> is located. I tried it and it worked flawlessly.</p>
<p>However, what if we have themes in some local directory rather than at a public URL? How are we supposed to upload the <code>theme</code> to Keycloak then?</p>
<p>I've tried using the File URL and file paths as well but didn't work for me.</p>
<p>The <code>Keycloak.yaml</code></p>
<pre><code>apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
name: keycloak-test
labels:
app: keycloak-test
spec:
instances: 1
extensions:
- https://SOME-PUBLIC-URL/keycloak-themes.jar
externalAccess:
enabled: False
podDisruptionBudget:
enabled: True
</code></pre>
| <p><strong>We can add custom keycloak themes in keycloak operator (v13.0.0) using the below steps:</strong></p>
<ol>
<li>Create a jar file for your custom theme using the steps shown here: <a href="https://www.keycloak.org/docs/latest/server_development/#deploying-themes" rel="nofollow noreferrer">Deploying Keycloak Themes</a></li>
<li>Create a kubernetes configmap of the jar using the following command</li>
</ol>
<pre><code>kubectl create cm customtheme --from-file customtheme.jar
</code></pre>
<ol start="3">
<li>To use above configmap update <code>Keycloak.yaml</code> and add the following code block</li>
</ol>
<pre><code> keycloakDeploymentSpec:
experimental:
volumes:
defaultMode: 0777
items:
- name: customtheme
mountPath: /opt/jboss/keycloak/standalone/deployments/custom-themes
subPath: customtheme.jar
configMaps:
- customtheme
</code></pre>
<p><strong>Note:</strong> Make sure the size of theme is less than 1MB.</p>
|
<p>I need to create a deployment descriptor "A" yaml in which I can find the endpoint IP address of a pod (that belongs to a deployment "B") . There is an option to use Downward API but I don't know if I can use it in that case.</p>
| <p>What you are looking for is a <code>Headless service</code> (see the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">documentation</a>).</p>
<p>With a headless service, the service will not have its own IP address. If you specify a selector for the service, the DNS service will return the Pods' IP addresses when you query the service's name.</p>
<p>Quoting the documentation:</p>
<blockquote>
<p>For headless Services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return A records (IP addresses) that point directly
to the Pods backing the Service.</p>
</blockquote>
<p>In order to create an headless service, simply set the <code>.spec.clusterIP</code> to <code>None</code> and specify the selector as you would normally do with a traditional service.</p>
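<p>A minimal manifest for such a headless service (names and labels below are illustrative) would look like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: b-headless
spec:
  clusterIP: None        # makes the service headless
  selector:
    app: deployment-b    # must match the pod labels of deployment "B"
  ports:
    - port: 80
      targetPort: 8080
</code></pre>
<p>Pods from deployment "A" can then resolve <code>b-headless.&lt;namespace&gt;.svc.cluster.local</code>, which returns the A records (IP addresses) of deployment "B"'s pods.</p>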
|
<p>According to the [documentation][1] Kubernetes variables are expanded using the previous defined environment variables in the container using the syntax $(VAR_NAME). The variable can be used in the container's entrypoint.</p>
<p>For example:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: MESSAGE
value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
</code></pre>
<p>Is this possible though to use bash expansion aka <code>${Var1:-${Var2}}</code> inside the container's entrypoint for the kubernetes environment variables E.g.</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: Var1
value: "hello world"
- name: Var2
value: "no hello"
command: ['bash', '-c', "echo ${Var1:-$Var2}"]
</code></pre>
| <blockquote>
<p>Is this possible though to use bash expansion aka <code>${Var1:-${Var2}}</code> inside the container's entrypoint ?</p>
</blockquote>
<p>Yes, by using</p>
<pre><code>command:
- /bin/bash
- "-c"
- "echo ${Var1:-${Var2}}"
</code></pre>
<p>but not otherwise -- kubernetes is not a wrapper for bash; it uses the Linux <code>exec</code> system call to launch programs inside the container, so the only way to get bash behavior is to launch bash</p>
<p>That's also why they chose the <code>$()</code> syntax for their environment interpolation, so it would be different from the <code>${}</code> style that a shell would use -- although this question comes up so often that one might wish they had not gone with <code>$</code> anything, to avoid further confusing folks</p>
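<p>The <code>${Var1:-$Var2}</code> fallback itself is plain bash parameter expansion, so you can verify the behavior locally before putting it in the manifest:</p>

```shell
# Fallback expansion: use Var1 if it is set and non-empty, otherwise Var2
Var2="no hello"
unset Var1
echo "${Var1:-$Var2}"    # prints: no hello

Var1="hello world"
echo "${Var1:-$Var2}"    # prints: hello world
```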
|
<p>I need to download chart which is located external OCI repository, when I download it using click on the link of the chart and version and provide user and password it works but not with the following code, this is what I tried and get an error</p>
<p><strong>failed to download "https://fdr.cdn.repositories.amp/artifactory/control-1.0.0.tgz" at version "1.0.0" (hint: running <code>helm repo update</code> may help)</strong>. If I click on the above link it asks for a user and password (in the browser), and when I provide them (the same as in the code) the chart <strong>is downloaded</strong>. Any idea why it's not working with the code?</p>
<p>This is what I tried</p>
<pre><code> package main
import (
"fmt"
"os"
"helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/cli"
"helm.sh/helm/v3/pkg/repo"
)
var config *cli.EnvSettings
func main() {
config = cli.New()
re := repo.Entry{
Name: "control",
URL: "https://fdr.cdn.repositories.amp/artifactory/control",
Username: "myuser",
Password: "mypass",
}
file, err := repo.LoadFile(config.RepositoryConfig)
if err != nil {
fmt.Println(err.Error())
}
file.Update(&re)
file.WriteFile(config.RepositoryConfig, os.ModeAppend)
co := action.ChartPathOptions{
InsecureSkipTLSverify: false,
RepoURL: "https://fdr.cdn.repositories.amp/artifactory/control",
Username: "myuser",
Password: "mypass",
Version: "1.0.0",
}
fp, err := co.LocateChart("control", config)
if err != nil {
fmt.Println(err.Error())
}
fmt.Println(fp)
}
</code></pre>
<p>While <strong>debugging</strong> the code I found <strong>where the error is coming from</strong>: <a href="https://github.com/helm/helm/blob/release-3.6/pkg/downloader/chart_downloader.go#L352" rel="nofollow noreferrer">https://github.com/helm/helm/blob/release-3.6/pkg/downloader/chart_downloader.go#L352</a>.
It is trying to find a cache which doesn't exist on my laptop. How can I disable it, or is there some other solution to make it work?</p>
| <p>I think you need to update your repository index before locating the chart.</p>
<p><a href="https://github.com/helm/helm/blob/main/cmd/helm/repo_update.go#L64-L89" rel="nofollow noreferrer">This</a> is the code the CLI uses to update the repositories.</p>
<p>And <a href="https://github.com/helm/helm/blob/main/cmd/helm/repo_update.go#L64-L89" rel="nofollow noreferrer">this</a> is the function that performs the update on the repositories.</p>
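<p>In code, that means downloading the repository index before calling <code>LocateChart</code>, so the cache the downloader looks for exists locally. A hedged, untested sketch against the helm v3 SDK (to be placed after <code>file.WriteFile(...)</code> in your program; it reuses your <code>re</code> and <code>config</code> variables and needs an extra import of <code>helm.sh/helm/v3/pkg/getter</code>):</p>
<pre><code>// Download the repo index so LocateChart can find the chart in the
// local cache (this is roughly what `helm repo update` does for one repo).
providers := getter.All(config)
chartRepo, err := repo.NewChartRepository(&re, providers)
if err != nil {
    fmt.Println(err.Error())
    return
}
if _, err := chartRepo.DownloadIndexFile(); err != nil {
    fmt.Println(err.Error())
    return
}
// now co.LocateChart("control", config) should resolve version "1.0.0"
</code></pre>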
|
<p>We are running a Spark Streaming application on a Kubernetes cluster using spark 2.4.5.
The application is receiving massive amounts of data through a Kafka topic (one message each 3ms). 4 executors and 4 kafka partitions are being used.</p>
<p>While running, the memory of the driver pod keeps increasing until it is getting killed by K8s with an 'OOMKilled' status. The memory of executors is not facing any issues.</p>
<p>When checking the driver pod resources using this command :</p>
<pre><code>kubectl top pod podName
</code></pre>
<p>We can see that the memory increases until it reaches 1.4GB, and the pod is getting killed.</p>
<p>However, when checking the storage memory of the driver on Spark UI, we can see that the storage memory is not fully used (50.3 KB / 434 MB). Is there any difference between the <strong>storage memory of the driver</strong>, and the <strong>memory of the pod containing the driver</strong> ?</p>
<p>Has anyone had experience with a similar issue before?</p>
<p>Any help would be appreciated.</p>
<p>Here are few more details about the app :</p>
<ul>
<li>Kubernetes version : 1.18</li>
<li>Spark version : 2.4.5</li>
<li>Batch interval of spark streaming context : 5 sec</li>
<li>Rate of input data : 1 kafka message each 3 ms</li>
<li>Scala language</li>
</ul>
| <p>In brief, the Spark memory consists of three parts:</p>
<ul>
<li>Reserved memory (300MB)</li>
<li>User memory ((all - 300MB)*0.4), used for data processing logic.</li>
<li>Spark memory ((all-300MB)*0.6(<code>spark.memory.fraction</code>)), used for cache and shuffle in Spark.</li>
</ul>
<p>Besides this, there is also <code>max(executor memory * 0.1, 384MB)</code> (<code>0.1</code> is <code>spark.kubernetes.memoryOverheadFactor</code>) of extra, non-JVM (off-heap) memory used in K8s.</p>
<p>Increasing the pod memory limit to account for this memory overhead should fix the OOM.</p>
<p>You can also decrease <code>spark.memory.fraction</code> to allocate more RAM to user memory.</p>
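<p>For example, when submitting the job you could reserve extra off-heap room for the driver (the values below are illustrative, not tuned; the pod's memory request becomes roughly <code>spark.driver.memory</code> + <code>spark.driver.memoryOverhead</code>, so size your K8s limit accordingly):</p>
<pre><code>spark-submit \
  --conf spark.driver.memory=2g \
  --conf spark.driver.memoryOverhead=1g \
  --conf spark.kubernetes.memoryOverheadFactor=0.2 \
  --conf spark.memory.fraction=0.5 \
  ...
</code></pre>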
|
<p>How do I make the <code>celery -A app worker</code> command to consume only a single task and then exit.</p>
<p>I want to run celery workers as a kubernetes Job that finishes after handling a single task.</p>
<p>I'm using KEDA for autoscaling workers according to queue messages.
I want to run celery workers as jobs for long running tasks, as suggested in the documentation:
<a href="https://keda.sh/docs/1.5/concepts/scaling-deployments/#long-running-executions" rel="nofollow noreferrer">KEDA long running execution</a></p>
| <p>There's not really anything specific for this. You would have to hack in your own driver program, probably via a custom concurrency module. Are you trying to use Keda ScaledJobs or something? You would just use a ScaledObject instead.</p>
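<p>One commonly used workaround (an untested sketch, not an official Celery feature) is to have the task ask its own worker to shut down once it finishes, so the Kubernetes Job completes after a single task. Run the worker with <code>--concurrency=1</code> so it handles one task at a time; the broker URL and <code>do_work</code> are placeholders:</p>
<pre><code>from celery import Celery

app = Celery("app", broker="redis://redis:6379/0")  # broker URL is an assumption

@app.task(bind=True)
def long_running_task(self, payload):
    try:
        do_work(payload)  # placeholder for your actual task logic
    finally:
        # Broadcast a shutdown to this worker only; it finishes the
        # current task, exits, and the Job is marked complete.
        app.control.shutdown(destination=[self.request.hostname])
</code></pre>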
|
<p>I was trying readiness probe in kubernetes with a springboot app. After the app starts, lets say after 60 seconds I fire <code>ReadinessState.REFUSING_TRAFFIC</code> app event.</p>
<p>I use port-forward for kubernetes service(Cluster-Ip) and checked /actuator/health/readiness and see
<code>"status":"OUT_OF_SERVICE"</code> after 60 seconds.</p>
<p>I then fire some GET/POST requests to the service.</p>
<p>Expected:
Service unavailable message</p>
<p>Actual:
GET/POST endpoints return data as usual</p>
<p>Is the behavior expected. Please comment.</p>
<p>Sample liveness/readiness probe yaml</p>
<pre><code> livenessProbe:
failureThreshold: 3
httpGet:
httpHeaders:
- name: Authorization
value: Basic xxxxxxxxxxxxxx
path: /actuator/health/liveness
port: http
scheme: HTTP
initialDelaySeconds: 180
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 10
name: sample-app
ports:
- containerPort: 8080
name: http
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
httpHeaders:
- name: Authorization
value: Basic xxxxxxxxxxxxxx
path: /actuator/health/readiness
port: http
scheme: HTTP
initialDelaySeconds: 140
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 10
</code></pre>
| <p>This is expected behavior as:</p>
<ul>
<li><code>$ kubectl port-forward service/SERVICE_NAME LOCAL_PORT:TARGET_PORT</code></li>
</ul>
<p>does not consider the state of the <code>Pod</code> when port-forwarding (it connects directly to a <code>Pod</code>).</p>
<hr />
<h3>Explanation</h3>
<p>There is already a great answer which pointed me on further investigation here:</p>
<ul>
<li><em><a href="https://stackoverflow.com/a/59941521/12257134">Stackoverflow.com: Answer: Does kubectl port-forward ignore loadBalance services?</a></em></li>
</ul>
<p>Let's assume that you have a <code>Deployment</code> with a <code>readinessProbe</code> (in this example probe will never succeed):</p>
<ul>
<li><code>$ kubectl get pods, svc</code> (redacted not needed part)</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-64ff4d8749-7khht 0/1 Running 0 97m
pod/nginx-deployment-64ff4d8749-bklnf 0/1 Running 0 97m
pod/nginx-deployment-64ff4d8749-gsmml 0/1 Running 0 97m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx ClusterIP 10.32.31.105 <none> 80/TCP 97m
</code></pre>
<ul>
<li><code>$ kubectl describe endpoints nginx</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>Name: nginx
Namespace: default
Labels: <none>
Annotations: <none>
Subsets:
Addresses: <none>
NotReadyAddresses: 10.36.0.62,10.36.0.63,10.36.0.64 # <-- IMPORTANT
Ports:
Name Port Protocol
---- ---- --------
<unset> 80 TCP
Events: <none>
</code></pre>
<p>As you can see all of the <code>Pods</code> are not in <code>Ready</code> state and the <code>Service</code> will not send the traffic to it. This can be seen in a following scenario ( create a test <code>Pod</code> that will try to <code>curl</code> the <code>Service</code>):</p>
<ul>
<li><code>$ kubectl run -it --rm nginx-check --image=nginx -- /bin/bash</code></li>
<li><code>$ curl nginx.default.svc.cluster.local</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code>curl: (7) Failed to connect to nginx.default.svc.cluster.local port 80: Connection refused
</code></pre>
<p>Using <code>kubectl port-forward</code>:</p>
<ul>
<li><code>$ kubectl port-forward service/nginx 8080:80</code></li>
<li><code>$ curl localhost:8080</code></li>
</ul>
<pre class="lang-sh prettyprint-override"><code><-- REDACTED -->
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<-- REDACTED -->
</code></pre>
<p>More light can be shed on why it happened by getting more verbose output from the command:</p>
<ul>
<li><code>kubectl port-forward service/nginx 8080:80 -v=6</code> (the number can be higher)</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>I0606 21:29:24.986382 7556 loader.go:375] Config loaded from file: /SOME_PATH/.kube/config
I0606 21:29:25.041784 7556 round_trippers.go:444] GET https://API_IP/api/v1/namespaces/default/services/nginx 200 OK in 51 milliseconds
I0606 21:29:25.061334 7556 round_trippers.go:444] GET https://API_IP/api/v1/namespaces/default/pods?labelSelector=app%3Dnginx 200 OK in 18 milliseconds
I0606 21:29:25.098363 7556 round_trippers.go:444] GET https://API_IP/api/v1/namespaces/default/pods/nginx-deployment-64ff4d8749-7khht 200 OK in 18 milliseconds
I0606 21:29:25.164402 7556 round_trippers.go:444] POST https://API_IP/api/v1/namespaces/default/pods/nginx-deployment-64ff4d8749-7khht/portforward 101 Switching Protocols in 62 milliseconds
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
</code></pre>
<p>What happened:</p>
<ul>
<li><code>kubectl</code> requested the information about the <code>Service</code>: <code>nginx</code></li>
<li><code>kubectl</code> used the <code>selector</code> associated with the <code>Service</code> and looked for <code>Pods</code> with the same <code>selector</code> (<code>nginx</code>)</li>
<li><code>kubectl</code> chose a single <code>Pod</code> and port-forwarded to it.</li>
</ul>
<p>The <code>Nginx</code> welcome page showed as the <code>port-forward</code> connected directly to a <code>Pod</code> and not to a <code>Service</code>.</p>
<hr />
<p>Additional reference:</p>
<ul>
<li><p><em><a href="https://github.com/kubernetes/kubernetes/issues/15180" rel="nofollow noreferrer">Github.com: Kubernetes: Issues: kubectl port-forward should allow forwarding to a Service</a></em></p>
</li>
<li><p><code>$ kubectl port-forward --help</code></p>
</li>
</ul>
<blockquote>
<p># Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a <strong>pod selected by
the service</strong></p>
<blockquote>
<p><code>kubectl port-forward service/myservice 8443:https</code></p>
</blockquote>
</blockquote>
|
<p>Given the following appliction.conf :</p>
<pre><code>akka {
loglevel = debug
actor {
provider = cluster
serialization-bindings {
"sample.cluster.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
roles= ["testrole1" , "testrole2"]
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
}
}
</code></pre>
<p>To discern between the roles within an Actor I use :</p>
<pre><code>void register(Member member) {
if (member.hasRole("testrole1")) {
//start actor a1
}
else if (member.hasRole("testrole2")) {
//start actor a2
}
}
</code></pre>
<p>edited from src (<a href="https://doc.akka.io/docs/akka/current/cluster-usage.html" rel="nofollow noreferrer">https://doc.akka.io/docs/akka/current/cluster-usage.html</a>)</p>
<p>To enable role for a node I use the following config :</p>
<p>Within application.conf I configure the array for the roles, but this appears to be at the cluster level rather than the node level. In other words, it does not seem possible to configure application.conf such that the Akka cluster is instructed to start actor a1 on node n1 and actor a2 on node n2? Should node details be specified at the level of akka.cluster in application.conf?</p>
<p>For each node, is it required to specify multiple application.conf configuration files?</p>
<p>For example, application.conf for testrole1</p>
<pre><code>akka {
loglevel = debug
actor {
provider = cluster
serialization-bindings {
"sample.cluster.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
roles= ["testrole1"]
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
}
}
</code></pre>
<p>application.conf for testrole2 :</p>
<pre><code>akka {
loglevel = debug
actor {
provider = cluster
serialization-bindings {
"sample.cluster.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
roles= ["testrole2"]
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
}
}
</code></pre>
<p>The difference between each application.conf defined above is the value of <code>akka.cluster.roles</code> is either "<code>testrole1"</code> or <code>"testrole2"</code>.</p>
<p>How should application.conf be configured such that the Akka cluster is instructed to start actor a1 on node n1 and actor a2 on node n2? Should node details be specified at the level of akka.cluster in application.conf ?</p>
<p>Update:</p>
<p>Another option is to pass the rolename via an environment variable? I've just noticed this is explicitly stated here: <a href="https://doc.akka.io/docs/akka/current/typed/cluster.html" rel="nofollow noreferrer">https://doc.akka.io/docs/akka/current/typed/cluster.html</a> "The node roles are defined in the configuration property named akka.cluster.roles and typically defined in the start script as a system property or environment variable." In this scenario, use the same application.conf file for all nodes but each node uses an environment variable. For example, an updated appliction.conf (note addition of "ENV_VARIABLE")</p>
<pre><code>akka {
loglevel = debug
actor {
provider = cluster
serialization-bindings {
"sample.cluster.CborSerializable" = jackson-cbor
}
}
remote {
artery {
canonical.hostname = "127.0.0.1"
canonical.port = 0
}
}
cluster {
roles= ["ENV_VARIABLE"]
seed-nodes = [
"akka://ClusterSystem@127.0.0.1:25251",
"akka://ClusterSystem@127.0.0.1:25252"]
downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
}
}
</code></pre>
<p>Cluster startup scripts determine the role for each node via the <code>ENV_VARIABLE</code> parameter, is this a viable solution?</p>
| <p>If you're going to assign different roles to different nodes, those nodes cannot use the same configuration. The easiest way to accomplish this is through n1 having <code>"testRole1"</code> in its <code>akka.cluster.roles</code> list and n2 having <code>"testRole2"</code> in its <code>akka.cluster.roles</code> list.</p>
<p>Everything in <code>akka.cluster</code> config is only configuring that node for participation in the cluster (it's configuring the cluster component on that node). A few of the settings have to be the same across the nodes of a cluster (e.g. the SBR settings), but a setting on n1 doesn't affect a setting on n2.</p>
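<p>To make the environment-variable approach from the question's update concrete: HOCON supports optional substitutions, so a single application.conf can read the role from the environment, or you can override the array element with a JVM system property at startup. Both forms below are standard Typesafe Config features (the variable name and jar are placeholders):</p>
<pre><code># Option 1: same application.conf everywhere, role from an env variable
#   in application.conf:  roles = [${?NODE_ROLE}]
NODE_ROLE=testrole1 java -jar node.jar

# Option 2: override the roles array per node with a system property
java -Dakka.cluster.roles.0=testrole2 -jar node.jar
</code></pre>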
|
<p>I am trying to map kubernetes secret value to a environment variable . My secret is as shown below</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: test-secret
type: opaque
data:
tls.crt: {{ required "A valid value is required for tls.crt" .Values.tlscrt }}
</code></pre>
<p>Mapped the key to environment variable in the deployment yaml</p>
<pre><code> env:
- name: TEST_VALUE
valueFrom:
secretKeyRef:
name: test-secret
key: tls.crt
</code></pre>
<p>The value gets mapped when i do helm install. However when i do helm upgrade , the changed value is not reflected in the environment variable , it still has the old value. Can anyone please help here ?</p>
| <h2>If the configMap is used to store pod environment variables</h2>
<blockquote>
<p><strong>ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.</strong></p>
</blockquote>
<p>There are ways to automate pod restarts on configMap changes, see here for example: <a href="https://stackoverflow.com/questions/52658839/helm-chart-restart-pods-when-configmap-changes">Helm chart restart pods when configmap changes</a></p>
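<p>The most common concrete form of that automation, from the Helm "tips and tricks" documentation, is a checksum annotation on the pod template, so that a changed secret rolls the deployment on every <code>helm upgrade</code> (the template path <code>/secret.yaml</code> is an assumption; adjust it to your chart):</p>
<pre><code># in your deployment template, on the pod template metadata
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
</code></pre>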
<h2>If the configMap is mounted as a file</h2>
<p>In this case pod should see an update, although with some delay:</p>
<blockquote>
<p>When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the ConfigMap. The type of the cache is configurable using the <code>configMapAndSecretChangeDetectionStrategy</code> field in the <a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/" rel="nofollow noreferrer">KubeletConfiguration struct</a>. A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting all requests directly to the API server. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero correspondingly).</p>
</blockquote>
<p>Quotes taken from <a href="https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically</a> (thanks to @jeremysprofile)</p>
|
<p>I need to create a deployment descriptor "A" yaml in which I can find the endpoint IP address of a pod (that belongs to a deployment "B") . There is an option to use Downward API but I don't know if I can use it in that case.</p>
| <p>If I understand correctly, you want to map the <code>test.api.com</code> hostname to the IP address of a specific Pod.<br />
As <strong>@whites11</strong> rightly pointed out, you can use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#with-selectors" rel="nofollow noreferrer">Headless Services with selectors</a>:</p>
<blockquote>
<p>For headless Services that define selectors, the endpoints controller creates Endpoints records in the API, and modifies the DNS configuration to return A records (IP addresses) that point directly to the Pods backing the Service.</p>
</blockquote>
<p>In this case, it may be difficult to properly configure the <code>/etc/hosts</code> file inside a Pod, but it is possible to configure the Kubernetes cluster DNS to achieve this goal.</p>
<p>If you are using <code>CoreDNS</code> as a DNS server, you can configure <code>CoreDNS</code> to map one domain (<code>test.api.com</code>) to another domain (headless service DNS name) by adding a <code>rewrite</code> rule.</p>
<p>I will provide an example to illustrate how it works.</p>
<hr />
<p>First, I prepared a sample <code>web</code> Pod with an associated <code>web</code> Headless Service:</p>
<pre><code># kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/web 1/1 Running 0 66m 10.32.0.2 kworker <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/web ClusterIP None <none> 80/TCP 65m run=web
</code></pre>
<p>We can check if the <code>web</code> headless Service returns A record (IP address) that points directly to the <code>web</code> Pod:</p>
<pre><code># kubectl exec -i -t dnsutils -- nslookup web.default.svc
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: web.default.svc.cluster.local
Address: 10.32.0.2
</code></pre>
<p>Next, we need to configure <code>CoreDNS</code> to map <code>test.api.com</code> -> <code>web.default.svc.cluster.local</code>.</p>
<p>Configuration of <code>CoreDNS</code> is stored in the <code>coredns</code> <code>ConfigMap</code> in the <code>kube-system</code> namespace. You can edit it using:</p>
<pre><code># kubectl edit cm coredns -n kube-system
</code></pre>
<p>Just add one <code>rewrite</code> rule, like in the example below:</p>
<pre><code>apiVersion: v1
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
rewrite name test.api.com web.default.svc.cluster.local # mapping test.api.com to web.default.svc.cluster.local
...
</code></pre>
<p>To reload CoreDNS, delete the <code>coredns</code> Pods or run <code>kubectl rollout restart deployment coredns -n kube-system</code>; since <code>coredns</code> is deployed as a Deployment, new Pods will be created automatically.</p>
<p>Finally, we can check how it works:</p>
<pre><code># kubectl exec -i -t dnsutils -- nslookup test.api.com
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: test.api.com
Address: 10.32.0.2
</code></pre>
<p>As you can see, the <code>test.api.com</code> domain also returns the IP address of the <code>web</code> Pod.</p>
<p>For more information on the <code>rewrite</code> plugin, see the <a href="https://coredns.io/plugins/rewrite/" rel="nofollow noreferrer">Coredns rewrite documentation</a>.</p>
|
<p>I read about Knative private and public services. A private service always points to the actual deployment's endpoints, while the public service can point either where the private service points or to the activator.</p>
<p>In my case, however, the public service always points to the activator (no matter whether we are in serve mode or proxy mode), yet things work fine. Please check the images below; 10.24.3.16:8012 is the activator endpoint:</p>
<p>In scaled down mode (pod count is zero), please check the helloworld-go-00001
<a href="https://i.stack.imgur.com/AtTYR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AtTYR.jpg" alt="enter image description here" /></a></p>
<p>In scaled up mode (serve mode) when pod count is more than 0.</p>
<p><a href="https://i.stack.imgur.com/RegZs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RegZs.jpg" alt="enter image description here" /></a></p>
<p>Please let me understand what am I missing.</p>
| <p>You're noticing an optimization added last year -- in the case of small amounts of traffic (basically, less than 10-15 pods), the activator can often perform better request-weighted load balancing than the typical ingress in terms of queueing and managing <code>concurrencyCount</code> for existing pods and routing delayed requests to new pods or existing pods which have become available.</p>
<p>If your serving scales up to 20 or 30 pods, you should see the activator stop being in the traffic path; I believe the cutover point is <code>trafficBurstCapacity / ( (1.0-targetCapacity) * concurrencyCount)</code> pods, but I may be mistaken. If I recall correctly, this works out to something like <code>200 / (0.3 * 80) > 8</code>, but I haven't looked in a while.</p>
<p>The way this is implemented in the apiserver is that the Knative autoscaler manages the endpoints for the <code>helloworld-go-00001</code> service directly, using metrics from the activator and queue-proxy for details.</p>
|
| <p>I am learning how to use an Ingress to expose my application on GKE v1.19.
I followed the tutorial in the GKE docs for Service, Ingress, and BackendConfig to get to the following setup. However, my backend services still become UNHEALTHY after some time. My aim is to overwrite the default "/" health check path for the ingress controller.</p>
<p>I have the same health checks defined in my deployment.yaml file under livenessProbe and readinessProbe and they seem to work fine since the Pod enters running stage. I have also tried to curl the endpoint and it returns a 200 status.</p>
<p>I have no clue why my service is marked as unhealthy despite being directly accessible through the NodePort service I defined. Any advice or help would be appreciated. Thank you.</p>
<p>I will add my yaml files below:</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>....
livenessProbe:
httpGet:
path: /api
port: 3100
initialDelaySeconds: 180
readinessProbe:
httpGet:
path: /api
port: 3100
initialDelaySeconds: 180
.....
</code></pre>
<p><strong>backendconfig.yaml</strong></p>
<pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: backend-config
namespace: ns1
spec:
healthCheck:
checkIntervalSec: 30
port: 3100
type: HTTP #case-sensitive
requestPath: /api
</code></pre>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/backend-config: '{"default": "backend-config"}'
name: service-ns1
namespace: ns1
labels:
app: service-ns1
spec:
type: NodePort
ports:
- protocol: TCP
port: 3100
targetPort: 3100
selector:
app: service-ns1
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ns1-ingress
namespace: ns1
annotations:
kubernetes.io/ingress.global-static-ip-name: ns1-ip
networking.gke.io/managed-certificates: ns1-cert
kubernetes.io/ingress.allow-http: "false"
spec:
rules:
- http:
paths:
- path: /api/*
backend:
serviceName: service-ns1
servicePort: 3100
</code></pre>
| <p>Use a <code>BackendConfig</code> CRD to define <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="nofollow noreferrer">health check</a> parameters when the serving Pods for your Service contain multiple containers, when you are using the Anthos Ingress controller, or when you need control over the port used for the load balancer's health checks; see <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="nofollow noreferrer">[1]</a>.</p>
<p>When a backend service's health check parameters are inferred from a serving Pod's <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#interpreted_hc" rel="nofollow noreferrer">readiness probe</a>, GKE does not keep the readiness probe and the health check synchronized. Hence any changes you make to the readiness probe will not be copied to the health check of the corresponding backend service on the load balancer, per <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#interpreted_hc" rel="nofollow noreferrer">[2]</a>.</p>
<p>In your scenario, the backend is healthy when the health check uses the path <code>/</code> but unhealthy when it uses <code>/api</code>, so there may be a misconfiguration in your Ingress.</p>
<p>I would suggest adding the annotation <code>ingress.kubernetes.io/rewrite-target: /api</code>, so the path in <code>spec.path</code> is rewritten to <code>/api</code> before the request is sent to the backend service.</p>
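<p>As a sketch, the suggested annotation would sit alongside the existing ones in the Ingress from the question. Note this is an illustration rather than a verified fix: whether a rewrite annotation is honored depends on which ingress controller actually serves the traffic.</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ns1-ingress
  namespace: ns1
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ns1-ip
    networking.gke.io/managed-certificates: ns1-cert
    kubernetes.io/ingress.allow-http: "false"
    # Suggested addition: rewrite the matched path to /api
    # before the request reaches the backend service.
    ingress.kubernetes.io/rewrite-target: /api
</code></pre>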
|
<p>I have obtained a cert from name.com.</p>
<pre class="lang-sh prettyprint-override"><code>➜ tree .
.
├── ca.crt
├── vpk.crt
├── vpk.csr
└── vpk.key
</code></pre>
<p><strong>How I created the secrets</strong></p>
<p>I added ca.crt content at the end of vpk.crt file.</p>
<pre class="lang-sh prettyprint-override"><code>(⎈ | vpk-dev-eks:argocd)
➜ k create secret tls tls-secret --cert=vpk.crt --key=vpk.key --dry-run -o yaml | kubectl apply -f -
(⎈ | vpk-dev-eks:argocd)
➜ kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt --dry-run -o yaml | kubectl apply -f -
</code></pre>
<p><strong>This is my ingress:</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: websockets-ingress
namespace: development
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
# Enable client certificate authentication
nginx.ingress.kubernetes.io/auth-tls-verify-client: "optional_no_ca"
# Create the secret containing the trusted ca certificates
nginx.ingress.kubernetes.io/auth-tls-secret: "development/ca-secret"
# Specify the verification depth in the client certificates chain
nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
# Specify if certificates are passed to upstream server
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
argocd.argoproj.io/sync-wave: "10"
spec:
tls:
- hosts:
- backend-dev.project.com
secretName: tls-secret
rules:
- host: backend-dev.project.com
http:
paths:
- path: /ws/
backend:
serviceName: websockets-service
servicePort: 443
</code></pre>
<p>The cert is properly validated, I can connect via various CLI WebSocket clients and <a href="https://www.ssllabs.com/ssltest" rel="nofollow noreferrer">https://www.ssllabs.com/ssltest</a> gives me "A+"</p>
<p>However if I set</p>
<p><code>nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"</code></p>
<p>then everything stops working and I get 400 error on the nginx ingress controller side (POD logs).</p>
<p><strong>I am confused from the official docs:</strong></p>
<p>The optional_no_ca parameter (1.3.8, 1.2.5) requests the client certificate but does not require it to be signed by a trusted CA certificate. This is intended for the use in cases when a service that is external to nginx performs the actual certificate verification. The contents of the certificate is accessible through the $ssl_client_cert variable.</p>
<h3>So what exactly "optional_no_ca" is doing and why "on" fails the requests?</h3>
| <p><strong>optional_no_ca</strong> performs optional client certificate validation: the request does not fail when the client certificate is not signed by one of the CAs from <strong>auth-tls-secret</strong>. The client certificate is still requested, but it is not required to be signed by a trusted CA. As the nginx documentation <a href="http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate" rel="nofollow noreferrer">[1]</a> explains, this mode is intended for cases where the actual certificate verification is performed by a service external to nginx.</p>
<p>When you set <strong>nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"</strong>, nginx requires a client certificate that is signed by a certificate included in the <strong>ca.crt</strong> key of the secret specified by <strong>nginx.ingress.kubernetes.io/auth-tls-secret: secretName</strong>.</p>
<p>If the client certificate is missing or not signed by that CA, verification fails and the request is rejected with status code 400 (Bad Request). Check <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#client-certificate-authentication" rel="nofollow noreferrer">this</a> for further information.</p>
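<p>Before switching <code>auth-tls-verify-client</code> to <code>on</code>, it can help to check locally that a client certificate actually chains to the CA stored in <code>ca-secret</code>, since that is the verification nginx will perform. A minimal sketch using throwaway certificates (all file names and CN values are placeholders, not the certificates from the question):</p>

```shell
# Create a throwaway CA, issue a client certificate signed by it,
# then verify the chain -- the same check nginx does against ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj "/CN=demo-client"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out client.crt -days 1
# Prints "client.crt: OK" when the client cert chains to the CA.
openssl verify -CAfile ca.crt client.crt
```

<p>If this check fails for your real client certificate against the <code>ca.crt</code> you stored in <code>ca-secret</code>, the 400 responses with <code>verify-client: "on"</code> are expected.</p>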
|
<p>I need to get the metadata->labels->app.kubernetes.io/version value from my pods. But I can't seem to find the <code>jsonpath</code> that will allow the label key to have the slash and periods.</p>
<p>I have a basic command that is working: <code>kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{end}" -A</code>. This successfully returns name of each pods.</p>
<p>I have modified this to try to get the version. Here are the permutations that I have attempted (each has failed):</p>
<pre><code>kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.labels.app.kubernetes.io/version}{end}" -A
</code></pre>
<p>and</p>
<pre><code>kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.labels.'app.kubernetes.io/version'}{end}" -A
</code></pre>
<p>and</p>
<pre><code>kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.labels.`"app.kubernetes.io`/version`"}{end}" -A
</code></pre>
<p><strong>How can I get the version using <code>jsonpath</code>?</strong></p>
<p>I am running in Windows PowerShell if that matters</p>
| <p>Escape dots in key name:</p>
<pre><code>.metadata.labels.app\.kubernetes\.io/version
</code></pre>
<pre><code>kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.labels.app\.kubernetes\.io/version}{end}" -A
</code></pre>
|
<p>I have implemented Liveness Probe in C# based Kubernetes Application. When I am removing InitialDelaySeconds in my <em>livenessconfig</em> (at code level using <a href="https://github.com/kubernetes-client/csharp/blob/225bb1f59b650e6849858f52608ef917b2484ddb/src/KubernetesClient/generated/Models/V1Probe.cs#L66" rel="nofollow noreferrer">Kubernetes Client's V1Probe</a>):</p>
<pre><code>IList<string> command = new List<string>();
V1Probe livenessconfig = null;
command.Add("curl");
command.Add("- f");
command.Add("http://localhost:5000/checkhealth/");
V1ExecAction execommand = new V1ExecAction(command);
livenessconfig = new V1Probe { Exec = execommand, PeriodSeconds = 10};
</code></pre>
<p>then I am seeing Liveness Probe failing in Pod description(no reason is mentioned though):</p>
<pre><code> Normal Created 26s kubelet Created container App-prox
Normal Started 26s kubelet Started container App-prox
Warning Unhealthy 6s (x2 over 16s) kubelet Liveness probe failed:
Normal Killing 6s kubelet Stopping container App-prox
</code></pre>
<p><strong>I don't want to give any initialdelay to my probe and want my probe to start executing command as soon as container is up.</strong> How should I manage this?</p>
| <p>The <code>initialDelaySeconds</code> parameter gives your container enough time to start up before the first probe runs. If you want the probe to execute its command as soon as the container is up, the application inside must already be able to answer at that moment; if it is not, the first probe fails.</p>
<p>If a container is restarted and the <code>initialDelaySeconds</code> parameter is missing or not long enough, the probe can fail before the application has started listening, which kills the container again. You need <code>initialDelaySeconds</code> so that containers can be restarted reliably; otherwise you risk a restart loop in which the application never finishes starting.</p>
<p>The <code>initialDelaySeconds</code> parameter should be longer than the maximum initialization time of your container.</p>
<h1>Update</h1>
<p>The <code>kubelet</code> uses liveness probes to know when to restart a container. Two of its parameters matter here: <code>periodSeconds</code> specifies that the <code>kubelet</code> should perform a liveness probe every <code>periodSeconds</code> seconds, and <code>initialDelaySeconds</code> tells the <code>kubelet</code> to wait <code>initialDelaySeconds</code> seconds before performing the first probe.</p>
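<p>For reference, the same parameters expressed as a plain Kubernetes probe manifest. An <code>httpGet</code> probe is used here instead of the question's <code>curl</code> exec command, since it is performed by the kubelet and does not require curl inside the image; the timing values are placeholders to tune:</p>
<pre><code class="language-yaml">livenessProbe:
  httpGet:
    path: /checkhealth/
    port: 5000
  # Should exceed the container's maximum initialization time.
  initialDelaySeconds: 30
  # Probe every 10 seconds after the initial delay.
  periodSeconds: 10
</code></pre>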
|
<p>We are developing a simulation software which is deployed and scaled between multiple pods using kubernetes.
When a user makes a simulation request, a pod is selected which starts doing the job and is considered as busy.
When another user makes a simulation request, it should be routed to the next free pod.
Currently, a busy pod is often selected (even though there are free ones) as kubernetes does not know which pods are busy/free.</p>
<p>Is it possible to balance requests in such way that a free pod is always selected?
(Assuming that each app instance inside a pod exposes an HTTP endpoint which tells it's current busy/free status)</p>
| <p>I think you can make use of <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer">readiness probes</a>:</p>
<blockquote>
<p>Sometimes, applications are temporarily unable to serve traffic. For example, an application might need to load large data or configuration files during startup, or depend on external services after startup. In such cases, you don't want to kill the application, but you don't want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.</p>
</blockquote>
<p>You can make the application to respond to probe requests with non-200 return code. It will be noted and no new requests will pass in until readiness probe succeed again. There are downsides though:</p>
<ul>
<li>when all pods are busy you'll get a 502 error;</li>
<li>users will not be able to submit subsequent requests to their pod (because the pod will be busy);</li>
<li>changing readiness status take some time so if you receive a lot of requests (more than the number of pods) during a short interval (probe interval), some pods may take more than one request.</li>
</ul>
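<p>A minimal sketch of such a readiness probe, assuming a hypothetical <code>/status</code> endpoint (not part of the question) that returns 200 when the pod is free and a non-200 code while it is busy:</p>
<pre><code class="language-yaml">readinessProbe:
  httpGet:
    # Hypothetical endpoint: 200 = free, e.g. 503 = busy.
    path: /status
    port: 8080
  # Short interval so busy/free changes propagate quickly; there is
  # still a window in which a busy pod may receive a request.
  periodSeconds: 2
  failureThreshold: 1
</code></pre>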
|
<p>I've setup the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#using-helm" rel="nofollow noreferrer">ingress-nginx</a> Helm chart to setup ingress controllers on my cluster, however by default it only runs a single pod instance.</p>
<p>Since we're running on Digital Ocean's k8s cluster, we're running with <code>externalTrafficPolicy: Local</code> to allow cert-manager to access other pods internally, and also so we have less network hops for requests.</p>
<p>For resilience we've configured our backend services to run on at least 2 nodes, so it makes sense that we have ingress controllers on each of the nodes that have a backend pod running on it, to avoid unnecessary inter-node traffic.</p>
<p>How would we go about configuring the ingress controller setup to ensure that we have a controller pod on each of the nodes that the backend pods are running on?</p>
| <p>If you want to run a Pod on each node, you can use a <strong>DaemonSet</strong>.</p>
<p>DaemonSet example: <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/helm-chart/templates/controller-daemonset.yaml" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/helm-chart/templates/controller-daemonset.yaml</a></p>
<p>If you also want to make sure the NGINX ingress controller Pods run only on nodes on which your backend service is running, you can use affinity and anti-affinity.</p>
<p>Affinity example:</p>
<pre><code>affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: role
operator: In
values:
- app-1
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: component
operator: In
values:
- nginx-ms
      topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>You can read more and find example at : <a href="https://github.com/infracloudio/kubernetes-scheduling-examples/blob/master/podAffinity/README.md" rel="nofollow noreferrer">https://github.com/infracloudio/kubernetes-scheduling-examples/blob/master/podAffinity/README.md</a></p>
|
<p>I have the <code>minikube</code> environment as the following: -</p>
<ul>
<li>Host OS: <code>CentOS Linux release 7.7.1908 (Core)</code></li>
<li>Docker: <code>Docker Engine - Community 20.10.7</code></li>
<li>minikube: <code>minikube version: v1.20.0</code></li>
</ul>
<p>I would like to add some additional host mappings (5+ IP/name pairs) to the <code>/etc/hosts</code> inside the <code>minikube</code> container. I used <code>minikube ssh</code> to enter the shell and tried <code>echo "172.17.x.x my.some.host" >> /etc/hosts</code>, which fails with <code>-bash: /etc/hosts: Permission denied</code>, since the user logged into this shell is <code>docker</code>, not <code>root</code>.</p>
<p>I also found that on the host machine there is a docker container named <code>minikube</code> running (seen via <code>docker container ls</code>). I can even enter this container as <code>root</code> using <code>docker exec -it -u root minikube /bin/bash</code>. I understand that this is a tweak and may be bad practice; it is also too much manual work.</p>
<p><code>docker</code> and <code>docker-compose</code> provide <code>--add-host</code> and <code>extra_hosts</code> respectively to add hostname mappings. Does <code>minikube</code> provide something similar? Is there a good practice to achieve this within <code>minikube</code>, from a system administrator's point of view?</p>
<h4>Edit 1</h4>
<p>After <code>echo 172.17.x.x my.some.host > ~/.minikube/files/etc/hosts</code> and start the <code>minikube</code>, there are some error as the following: -</p>
<pre class="lang-sh prettyprint-override"><code>[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 8.8.8.8:53: no such host.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
</code></pre>
<p>Then I use the <code>vi</code> to create a whole <code>hosts</code> file at <code>~/.minikube/files/etc/hosts</code> as the following: -</p>
<pre><code>127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.x.x my.some.host1
172.17.x.y my.some.host2
...
</code></pre>
<p>At this time the <code>minikube</code> is started properly.</p>
| <p>Minikube has a <a href="https://minikube.sigs.k8s.io/docs/handbook/filesync/" rel="nofollow noreferrer">built-in sync mechanism</a> that can deploy a desired <code>/etc/hosts</code>, for example:</p>
<pre><code>mkdir -p ~/.minikube/files/etc
echo 127.0.0.1 localhost > ~/.minikube/files/etc/hosts
minikube start
</code></pre>
<p>Note that the synced file replaces <code>/etc/hosts</code> inside the container entirely, so it must contain the standard entries (<code>127.0.0.1 localhost</code> and so on) in addition to your custom mappings, as shown in Edit 1 of the question.</p>
<p>Then go and check if it's working:</p>
<pre><code>minikube ssh
</code></pre>
<p>And once you are inside the container:</p>
<pre><code>cat /etc/hosts
</code></pre>
|
<p>I am trying to install <code>bitnami/mongodb-sharded</code> on my Rancher (RKE) Kubernetes cluster, but I couldn't create a valid PV for this helm chart.</p>
<p>The error that I am getting:</p>
<pre><code>no persistent volumes available for this claim and no storage class is set
</code></pre>
<p>This is the helm chart documentation section about PersistenceVolume: <a href="https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded/#persistence" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/mongodb-sharded/#persistence</a></p>
<p>This is the StorageClass and PersistentVolume yamls that I created for this helm chart PVCs':</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ssd-nfs-storage
provisioner: nope
parameters:
archiveOnDelete: "false"
----------
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
labels:
name: db-nfs
spec:
storageClassName: ssd-nfs-storage # same storage class as pvc
capacity:
storage: 100Gi
accessModes:
- ReadWriteOnce
nfs:
server: 142.251.33.78 # ip addres of nfs server
path: "/bitnami/mongodb" # path to directory
</code></pre>
<p>This is the PVC yaml that created by the helm chart:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: "2021-06-06T17:50:40Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app.kubernetes.io/component: shardsvr
app.kubernetes.io/instance: sam-db
app.kubernetes.io/name: mongodb-sharded
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app.kubernetes.io/component: {}
f:app.kubernetes.io/instance: {}
f:app.kubernetes.io/name: {}
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:volumeMode: {}
f:status:
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2021-06-06T17:50:40Z"
name: datadir-sam-db-mongodb-sharded-shard1-data-0
namespace: default
resourceVersion: "960381"
uid: c4313ed9-cc99-42e9-a64f-82bea8196629
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
volumeMode: Filesystem
status:
phase: Pending
</code></pre>
<p>Can you tell me what I am missing?</p>
| <p>Below are installation instructions for <code>bitnami/mongodb-sharded</code> with an NFS server on Rancher (v2.5.8).</p>
<p>I have three CentOS 8 VMs: one NFS server (let's say 1.1.1.1) and two k8s nodes (let's say 8.8.8.8 and 9.9.9.9) in the k8s cluster. I am using RKE (Rancher Kubernetes Engine).</p>
<ol>
<li>We will create a NFS server</li>
<li>We will bind the nodes to the NFS server</li>
<li>We will add <code>nfs-subdir-external-provisioner</code> HELM repository to the Rancher Chart Repositories</li>
<li>We will install <code>nfs-subdir-external-provisioner</code> via rancher charts</li>
<li>We will add <code>bitnami</code> HELM repo to the Rancher Chart Repositories</li>
<li>We will install <code>mongodb-sharded</code> via Rancher charts</li>
</ol>
<hr />
<ol>
<li>Create a NFS server</li>
</ol>
<pre><code># nfs server install
dnf install nfs-utils -y
systemctl start nfs-server.service
systemctl enable nfs-server.service
systemctl status nfs-server.service
# you can verify the version
rpcinfo -p | grep nfs
# nfs deamon config: /etc/nfs.conf
# nfs mount config: /etc/nfsmount.conf
mkdir /mnt/storage
# allows creation from client
# for mongodb-sharded: /mnt/storage
chown -R nobody: /mnt/storage
chmod -R 777 /mnt/storage
# restart service again
systemctl restart nfs-utils.service
# grant access to the client
vi /etc/exports
/mnt/storage 8.8.8.8(rw,sync,no_all_squash,root_squash)
/mnt/storage 9.9.9.9(rw,sync,no_all_squash,root_squash)
# check exporting
exportfs -arv
exportfs -s
# exporting 8.8.8.8:/mnt/storage
# exporting 9.9.9.9:/mnt/storage
</code></pre>
<hr />
<ol start="2">
<li>Bind the k8s nodes to the NFS server</li>
</ol>
<pre><code># nfs client install
dnf install nfs-utils nfs4-acl-tools -y
# see from the client shared folder
showmount -e 1.1.1.1
# create mounting folder for client
mkdir /mnt/cstorage
# mount server folder to the client folder
mount -t nfs 1.1.1.1:/mnt/storage /mnt/cstorage
# check mounted folder vis nfs
mount | grep -i nfs
# mount persistent upon a reboot
vi /etc/fstab
# add following codes
1.1.1.1:/mnt/storage /mnt/cstorage nfs defaults 0 0
# all done
</code></pre>
<p><strong>Bonus:</strong> Unbind nodes.</p>
<pre><code># un mount and delete from client
umount -f -l /mnt/cstorage
rm -rf /mnt/cstorage
# delete added volume from fstab
vi /etc/fstab
</code></pre>
<hr />
<ol start="3">
<li>Add nfs-subdir-external-provisioner helm repository</li>
</ol>
<p><strong>Helm Repository URL:</strong> <code>https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/</code></p>
<ul>
<li>Rancher --></li>
<li>Cluster Explorer --></li>
<li>Apps & Marketplace</li>
<li>Chart Repositories --></li>
<li>Create --></li>
<li>Add the URL like in <a href="https://i.stack.imgur.com/WQFAa.png" rel="nofollow noreferrer">this screenshot</a> --></li>
<li>Save --></li>
</ul>
<hr />
<ol start="4">
<li>Install <code>nfs-subdir-external-provisioner</code> via Charts</li>
</ol>
<ul>
<li>Rancher --></li>
<li>Cluster Explorer --></li>
<li>Apps & Marketplace</li>
<li>Charts --></li>
<li><a href="https://i.stack.imgur.com/5gNLH.png" rel="nofollow noreferrer">find nfs-subdir-external-provisioner chart</a> --></li>
<li>Select --></li>
<li>Give a name(like nfs-pr) --></li>
<li>Select Values YAML --></li>
<li><a href="https://i.stack.imgur.com/zKOvQ.png" rel="nofollow noreferrer">set path, server ip and StorageClass name(we will use this class name later)</a> --></li>
<li>Install --></li>
</ul>
<hr />
<ol start="5">
<li>Add <code>bitnami</code> HELM repo to the Rancher Chart Repositories</li>
</ol>
<p>Bitnami HELM URL: <code>https://charts.bitnami.com/bitnami</code></p>
<ul>
<li>Rancher --></li>
<li>Cluster Explorer --></li>
<li>Apps & Marketplace</li>
<li>Chart Repositories --></li>
<li>Create --></li>
<li>Add the URL like in step 3's screenshot --></li>
<li>Save --></li>
</ul>
<hr />
<ol start="6">
<li>Install <code>mongodb-sharded</code> via Rancher Charts</li>
</ol>
<ul>
<li><p>Rancher --></p>
</li>
<li><p>Cluster Explorer --></p>
</li>
<li><p>Apps & Marketplace</p>
</li>
<li><p>Charts --></p>
</li>
<li><p>Find <code>mongodb-sharded</code> --></p>
</li>
<li><p>Select --></p>
</li>
<li><p>Give a name(my-db) --></p>
</li>
<li><p>Select Values YAML --></p>
</li>
<li><p><a href="https://i.stack.imgur.com/IpCKF.png" rel="nofollow noreferrer">Add global.storageClassname: nfs-client</a>(we set this value step 5) --></p>
</li>
<li><p>Install</p>
</li>
</ul>
|