<p>I am following the book Kubernetes for Developers, and it seems the book may be heavily outdated now.
Recently I have been trying to get Prometheus up and running on Kubernetes following the instructions from the book, which suggested installing Helm and using it to get Prometheus and Grafana up and running:</p>
<pre><code> helm install monitor stable/prometheus --namespace monitoring
</code></pre>
<p>This resulted in:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
monitor-kube-state-metrics-578cdbb5b7-pdjzw 0/1 CrashLoopBackOff 14 36m 192.168.23.1 kube-worker-vm3 <none> <none>
monitor-prometheus-alertmanager-7b4c476678-gr4s6 0/2 Pending 0 35m <none> <none> <none> <none>
monitor-prometheus-node-exporter-5kz8x 1/1 Running 0 14h 192.168.1.13 rockpro64 <none> <none>
monitor-prometheus-node-exporter-jjrjh 1/1 Running 1 14h 192.168.1.35 osboxes <none> <none>
monitor-prometheus-node-exporter-k62fn 1/1 Running 1 14h 192.168.1.37 kube-worker-vm3 <none> <none>
monitor-prometheus-node-exporter-wcg2k 1/1 Running 1 14h 192.168.1.36 kube-worker-vm2 <none> <none>
monitor-prometheus-pushgateway-6898f8475b-sk4dz 1/1 Running 0 36m 192.168.90.200 osboxes <none> <none>
monitor-prometheus-server-74d7dc5d4c-vlqmm 0/2 Pending 0 14h <none> <none> <none> <none>
</code></pre>
<p>For the Prometheus server, I checked why it is Pending:</p>
<pre><code># kubectl describe pod monitor-prometheus-server-74d7dc5d4c-vlqmm -n monitoring
Name: monitor-prometheus-server-74d7dc5d4c-vlqmm
Namespace: monitoring
Priority: 0
Node: <none>
Labels: app=prometheus
chart=prometheus-13.8.0
component=server
heritage=Helm
pod-template-hash=74d7dc5d4c
release=monitor
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/monitor-prometheus-server-74d7dc5d4c
Containers:
prometheus-server-configmap-reload:
Image: jimmidyson/configmap-reload:v0.4.0
Port: <none>
Host Port: <none>
Args:
--volume-dir=/etc/config
--webhook-url=http://127.0.0.1:9090/-/reload
Environment: <none>
Mounts:
/etc/config from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-server-token-n49ls (ro)
prometheus-server:
Image: prom/prometheus:v2.20.1
Port: 9090/TCP
Host Port: 0/TCP
Args:
--storage.tsdb.retention.time=15d
--config.file=/etc/config/prometheus.yml
--storage.tsdb.path=/data
--web.console.libraries=/etc/prometheus/console_libraries
--web.console.templates=/etc/prometheus/consoles
--web.enable-lifecycle
Liveness: http-get http://:9090/-/healthy delay=30s timeout=30s period=15s #success=1 #failure=3
Readiness: http-get http://:9090/-/ready delay=30s timeout=30s period=5s #success=1 #failure=3
Environment: <none>
Mounts:
/data from storage-volume (rw)
/etc/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-server-token-n49ls (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: monitor-prometheus-server
Optional: false
storage-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: monitor-prometheus-server
ReadOnly: false
monitor-prometheus-server-token-n49ls:
Type: Secret (a volume populated by a Secret)
SecretName: monitor-prometheus-server-token-n49ls
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 28m (x734 over 14h) default-scheduler 0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 3m5s (x23 over 24m) default-scheduler 0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.
</code></pre>
<p>However, this message I am seeing, <code>0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.</code>, also came up with all the other Node.js StatefulSets and RabbitMQ Deployments I have tried to create. For RabbitMQ and Node.js I figured out that I needed to create a PersistentVolume and a StorageClass, whose name I had to specify in the PV and the PVC; after that it all worked. Now that I am deploying the Prometheus server, do I have to do the same for Prometheus as well? Why is this not taken care of by the Helm chart?</p>
<p>Has something changed in the Kubernetes API recently, such that I always have to create a PV and StorageClass explicitly for a PVC?</p>
| <p>Unless you configure your cluster with <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">dynamic volume provisioning</a>, you will have to create the PV manually each time. Even if you are not on a cloud, you can set up a dynamic storage provisioner. There are a number of options, and you can find many <a href="https://landscape.cncf.io/card-mode?category=cloud-native-storage&grouping=category" rel="nofollow noreferrer">here</a>; Ceph and MinIO are popular providers.</p>
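<p>For a quick illustration (not taken from the chart documentation; the size, path, and default PVC request below are assumptions): without a dynamic provisioner, a manually created <code>ReadWriteOnce</code> PV that is large enough will let the chart's <code>monitor-prometheus-server</code> PVC bind:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server-pv
spec:
  capacity:
    storage: 8Gi                   # must cover the PVC request (assumed chart default)
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/prometheus         # assumed path on one of your nodes
</code></pre>
<p>Alternatively, many chart versions let you disable persistence entirely, e.g. <code>--set server.persistentVolume.enabled=false</code>; check your chart's values to confirm the exact flag.</p>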
|
<p>I have an application which consists of a backend (Spring Boot) and a search engine (Elasticsearch). After I deployed it into OCP, I initially tested the connection between the two by running <code>curl</code> against the elasticsearch service <code>(https://service-name.namespace.svc.cluster.local:9200)</code> from the backend pod, and it worked. Here's the picture:
<a href="https://i.stack.imgur.com/QEbfY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QEbfY.png" alt="enter image description here" /></a></p>
<p>However, when I try to access elasticsearch from within the deployed backend application, an error message appears as below:
<a href="https://i.stack.imgur.com/heWjN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/heWjN.png" alt="enter image description here" /></a></p>
<p>And here are my configuration in Spring Boot to connect with Elasticsearch that I did:</p>
<pre><code>package com.siolbca.config;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.RestClients;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate;
import org.springframework.data.elasticsearch.repository.config.EnableElasticsearchRepositories;
@Configuration
@EnableElasticsearchRepositories(basePackages = "com.siolbca.repository")
@ComponentScan(basePackages = "com.siolbca.services")
public class Config {
@Bean
public RestHighLevelClient client() {
ClientConfiguration clientConfiguration
= ClientConfiguration.builder()
.connectedTo("https://elasticsearch-siol-es-http.siolbca-dev.svc.cluster.local:9200")
.usingSsl()
.withBasicAuth("elastic","G0D1g6TurJ79pcxr1065pU0U")
.build();
return RestClients.create(clientConfiguration).rest();
}
@Bean
public ElasticsearchOperations elasticsearchTemplate() {
return new ElasticsearchRestTemplate(client());
}
}
</code></pre>
<p>Some of the things I have done are:</p>
<ol>
<li>Use the elasticsearch service IP address directly in the backend configuration</li>
</ol>
<blockquote>
<p>https://elasticsearch-service-ipaddress:9200</p>
</blockquote>
<ol start="2">
<li>Expose a route from elasticsearch service and put it in the backend configuration</li>
</ol>
<blockquote>
<p>https://elasticsearch-route:443</p>
</blockquote>
<ol start="3">
<li>Change the service url into</li>
</ol>
<blockquote>
<p><a href="https://service-name.namespace.svc:9200" rel="nofollow noreferrer">https://service-name.namespace.svc:9200</a></p>
</blockquote>
<p>Does anyone know why my backend app can't communicate with the elasticsearch service even though the two pods are able to connect? Any answer would be very helpful. Thank you.</p>
<hr />
<p><strong>EDIT</strong></p>
<p>Here's my pom.xml</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.4.3</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.siolbca</groupId>
<artifactId>siolbca</artifactId>
<version>1.0</version>
<name>siolbca</name>
<description>Backend project for SIOLBCA</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-elasticsearch</artifactId>
<version>4.0.0.RELEASE</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
</code></pre>
| <p>I think the elasticsearch builder <code>.connectedTo</code> method requires the format <code>host:port</code>, i.e. without the protocol.
So try:</p>
<pre><code>.connectedTo("elasticsearch-siol-es-http.siolbca-dev.svc.cluster.local:9200")
</code></pre>
|
<p>Hi folks, I'm using a Vagrant box provisioned with Ansible, with Oracle VirtualBox as the provider, and it was working fine for me.
But one day I installed Android Studio with its emulator, and Minikube with KVM.</p>
<p>Afterwards, Vagrant with VirtualBox just stopped working. Now whenever I run <code>vagrant up</code> I get the error below.</p>
<pre class="lang-sh prettyprint-override"><code>Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ubuntu/bionic64' version '20200416.0.0' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'gurumeditation' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
The primary issue for this error is that the provider you're using
is not properly configured. This is very rarely a Vagrant issue.
</code></pre>
<p><strong>I need to run all three on Ubuntu. How can I fix this?</strong></p>
| <p>I had a gurumeditation issue with a fresh debian/buster64 box, and a reboot of my laptop fixed it (VirtualBox was only telling me in the log that the state was invalid).</p>
<p>Some time lost for nothing; mentioning it in case it helps someone.</p>
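<p>A hedged diagnostic, beyond what the answer above covers: VirtualBox and KVM both need exclusive use of the CPU's VT-x/AMD-V extensions, so a loaded KVM module (pulled in by Minikube or the Android emulator) can leave VirtualBox guests in the <code>gurumeditation</code> state. A quick check:</p>

```shell
# List loaded kernel modules and look for kvm/kvm_intel/kvm_amd,
# which conflict with VirtualBox's use of hardware virtualization.
mods=$(lsmod 2>/dev/null | awk '{print $1}')
case "$mods" in
  *kvm*)
    echo "KVM modules are loaded; unload them before 'vagrant up':"
    echo "  sudo modprobe -r kvm_intel kvm   # use kvm_amd instead on AMD CPUs"
    ;;
  *)
    echo "No KVM modules loaded; the conflict is likely elsewhere"
    ;;
esac
```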
|
<p>Today, while trying to run my pod, I discovered this error, which you can see in the describe events:</p>
<pre><code># kubectl describe pod monitor-prometheus-alertmanager-c94f7b6b7-tg6vc -n monitoring
Name: monitor-prometheus-alertmanager-c94f7b6b7-tg6vc
Namespace: monitoring
Priority: 0
Node: kube-worker-vm2/192.168.1.36
Start Time: Sun, 09 May 2021 20:42:57 +0100
Labels: app=prometheus
chart=prometheus-13.8.0
component=alertmanager
heritage=Helm
pod-template-hash=c94f7b6b7
release=monitor
Annotations: cni.projectcalico.org/podIP: 192.168.222.51/32
cni.projectcalico.org/podIPs: 192.168.222.51/32
Status: Running
IP: 192.168.222.51
IPs:
IP: 192.168.222.51
Controlled By: ReplicaSet/monitor-prometheus-alertmanager-c94f7b6b7
Containers:
prometheus-alertmanager:
Container ID: docker://0ce55357c5f32c6c66cdec3fe0aaaa06811a0a392d0329c989ac6f15426891ad
Image: prom/alertmanager:v0.21.0
Image ID: docker-pullable://prom/alertmanager@sha256:24a5204b418e8fa0214cfb628486749003b039c279c56b5bddb5b10cd100d926
Port: 9093/TCP
Host Port: 0/TCP
Args:
--config.file=/etc/config/alertmanager.yml
--storage.path=/data
--cluster.advertise-address=[$(POD_IP)]:6783
--web.external-url=http://localhost:9093
State: Running
Started: Sun, 09 May 2021 20:52:33 +0100
Ready: False
Restart Count: 0
Readiness: http-get http://:9093/-/ready delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:
POD_IP: (v1:status.podIP)
Mounts:
/data from storage-volume (rw)
/etc/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-alertmanager-token-kspg6 (ro)
prometheus-alertmanager-configmap-reload:
Container ID: docker://eb86ea355b820ddc578333f357666156dc5c5a3a53c63220ca00b98ffada5531
Image: jimmidyson/configmap-reload:v0.4.0
Image ID: docker-pullable://jimmidyson/configmap-reload@sha256:17d34fd73f9e8a78ba7da269d96822ce8972391c2838e08d92a990136adb8e4a
Port: <none>
Host Port: <none>
Args:
--volume-dir=/etc/config
--webhook-url=http://127.0.0.1:9093/-/reload
State: Running
Started: Sun, 09 May 2021 20:44:59 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/etc/config from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-alertmanager-token-kspg6 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: monitor-prometheus-alertmanager
Optional: false
storage-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: prometheus-pv-claim
ReadOnly: false
monitor-prometheus-alertmanager-token-kspg6:
Type: Secret (a volume populated by a Secret)
SecretName: monitor-prometheus-alertmanager-token-kspg6
Optional: false
QoS Class: BestEffort
Node-Selectors: boardType=x86vm
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m54s default-scheduler Successfully assigned monitoring/monitor-prometheus-alertmanager-c94f7b6b7-tg6vc to kube-worker-vm2
Normal Pulled 7m53s kubelet Container image "jimmidyson/configmap-reload:v0.4.0" already present on machine
Normal Created 7m52s kubelet Created container prometheus-alertmanager-configmap-reload
Normal Started 7m52s kubelet Started container prometheus-alertmanager-configmap-reload
Warning Failed 6m27s (x2 over 7m53s) kubelet Failed to pull image "prom/alertmanager:v0.21.0": rpc error: code = Unknown desc = context canceled
Warning Failed 5m47s (x3 over 7m53s) kubelet Error: ErrImagePull
Warning Failed 5m47s kubelet Failed to pull image "prom/alertmanager:v0.21.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Normal BackOff 5m11s (x6 over 7m51s) kubelet Back-off pulling image "prom/alertmanager:v0.21.0"
Warning Failed 5m11s (x6 over 7m51s) kubelet Error: ImagePullBackOff
Normal Pulling 4m56s (x4 over 9m47s) kubelet Pulling image "prom/alertmanager:v0.21.0"
Normal Pulled 19s kubelet Successfully pulled image "prom/alertmanager:v0.21.0" in 4m36.445692759s
</code></pre>
<p>Then I first tried pinging google.com; since that was working, I wanted to check <a href="https://registry-1.docker.io/v2/" rel="nofollow noreferrer">https://registry-1.docker.io/v2/</a>, so I tried to ping docker.io, but I get no ping response. What is causing this?</p>
<pre><code>osboxes@kube-worker-vm2:~$ ping google.com
PING google.com (142.250.200.14) 56(84) bytes of data.
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=10 ttl=117 time=35.8 ms
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=11 ttl=117 time=11.9 ms
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=12 ttl=117 time=9.16 ms
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=13 ttl=117 time=11.2 ms
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=14 ttl=117 time=12.1 ms
^C
--- google.com ping statistics ---
14 packets transmitted, 5 received, 64% packet loss, time 13203ms
rtt min/avg/max/mdev = 9.163/16.080/35.886/9.959 ms
osboxes@kube-worker-vm2:~$ ping docker.io
PING docker.io (35.169.217.170) 56(84) bytes of data.
</code></pre>
| <p>Because <code>docker.io</code> does not respond to pings, from anywhere.</p>
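<p>Since ICMP tells you nothing here, a sketch of a more meaningful connectivity check (the timeout value is arbitrary): probe the registry endpoint over HTTPS. Any HTTP status, even the 401 Unauthorized that <code>/v2/</code> normally returns without credentials, means the network path works; <code>000</code> means curl could not connect at all.</p>

```shell
# Probe the Docker registry over HTTPS instead of ping.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
  https://registry-1.docker.io/v2/ || true)
if [ -z "$status" ] || [ "$status" = "000" ]; then
  echo "registry unreachable: check DNS, proxy, and firewall rules on the node"
else
  echo "registry reachable, HTTP $status"
fi
```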
|
<p>After a pod crashes and restarts, is it possible to retrieve the IP address of the pod prior to the crash?</p>
| <p>This is a very broad question, as it is not possible to identify exactly where and why the pod crashes.
However, I'll show what's possible to do in different scenarios.</p>
<ul>
<li>Pod's container crashes and then restarts itself:</li>
</ul>
<p>In this case the pod will keep its IP address. The easiest way to check is to run
<code>kubectl get pods -o wide</code></p>
<p>Output will be like</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-699f9b54d-g2w97 0/1 CrashLoopBackOff 2 55s 10.244.1.53 worker1 <none> <none>
</code></pre>
<p>As you can see, even if the container crashes, the pod still has an IP address assigned.</p>
<p>It's also possible to add <code>initContainers</code> with a command which will get the IP address of the pod (depending on the image you use, there are different options like <code>ip a</code>, <code>ifconfig -a</code>, etc.).</p>
<p>Here's a simple example how it can be added:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
initContainers: # here is where to add initContainer section
- name: init-container
image: busybox
args: [/bin/sh, -c, "echo IP ADDRESS ; ifconfig -a"]
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
command: ["/sh", "-c", "nginx --version"] #this is to make nginx image failing
</code></pre>
<p>Before your main container starts, this <code>init-container</code> will run an <code>ifconfig -a</code> command and will put its results into logs.</p>
<p>You can check it with:</p>
<p><code>kubectl logs %pod_name% -c init-container</code></p>
<p>Output will be:</p>
<pre><code>IP ADDRESS
eth0 Link encap:Ethernet HWaddr F6:CB:AD:D0:7E:7E inet addr:10.244.1.52 Bcast:10.244.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1410 Metric:1 RX packets:5 errors:0 dropped:0 overruns:0 frame:0 TX packets:1 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:398 (398.0 B) TX bytes:42 (42.0 B)
lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
</code></pre>
<p>Also you can check logs for previous running version of pod by adding <code>--previous</code> to the command above.</p>
<ul>
<li>Pod crashes and then is recreated:
in this case a new pod is created, which means the local logs are gone. You will need to think about saving them separately from the pods. For this you can use <code>volumes</code>; e.g. <code>hostPath</code> will store logs on the node where the pod runs, while <code>nfs</code> can be attached to different pods and accessed from each.</li>
<li>Control plane crashes while pods are still running:
you can't access logs using the control plane and <code>kubectl</code>; however, your containers will still be running on the nodes. To get logs directly from the nodes where your containers are running, use <code>docker</code> or <code>crictl</code>, depending on your runtime.</li>
</ul>
<p>The ideal solution for such cases is to use monitoring systems such as <code>prometheus</code> or <code>elasticsearch</code>.
This will require additional setup of <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="nofollow noreferrer">fluentd</a> or <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a>.</p>
|
<p>I like <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/" rel="nofollow noreferrer">configMapGenerator</a> with hash suffixes because it forces redeployment of the pods that consume a particular config. But the diff output after changing the config is just a delete and a create, which is less than ideal. Is there a way to get a more intelligent diff of the ConfigMaps produced by configMapGenerator with hash suffixes?</p>
<h2>Edit:</h2>
<p>For example if I have kustomization.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>generatorOptions:
disableNameSuffixHash: false
configMapGenerator:
- name: nginx-conf
files:
- nginx.conf=config/nginx.conf
</code></pre>
<ol>
<li><p>Lets assume that for first time <code>kubectl apply -k</code> generates <code>nginx-conf-aaaa</code> config map.</p>
</li>
<li><p>Edit <code>config/nginx.conf</code>.</p>
</li>
<li><p>Lets assume that <code>kubectl apply -k</code> will generate <code>nginx-config-bbbb</code>.</p>
</li>
</ol>
<p>Is there a way to diff <code>nginx-config-aaaa</code> and <code>nginx-config-bbbb</code> before applying changes?</p>
| <p>You can do something like this</p>
<ul>
<li><p>Get the current version of the ConfigMap and write it to a file <code>current.yaml</code></p>
<p><code>kubectl get configmap nginx-conf-aaaa -o=yaml > ./current.yaml</code></p>
</li>
<li><p>After making the changes get the new version of the ConfigMap in <code>new.yaml</code></p>
<p><code>kubectl kustomize . > ./new.yaml</code></p>
</li>
<li><p>Then perform a <code>git diff</code></p>
<p><code>git diff --no-index ./current.yaml ./new.yaml</code></p>
</li>
</ul>
<p>If you are happy with the diff, go ahead and apply the changes.</p>
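<p>A hedged refinement of the diff above (the resource name <code>nginx-conf</code> and file names come from the question; the suffix pattern is an assumption): stripping the generated hash suffix before diffing keeps the name change itself out of the diff, so only real content changes show up.</p>

```shell
# Strip the kustomize hash suffix (e.g. nginx-conf-aaaa -> nginx-conf)
# from a rendered manifest so two versions can be diffed cleanly.
normalize() {
  sed -E 's/(nginx-conf)-[a-z0-9]+/\1/g' "$1"
}
# Usage (git diff exits non-zero when the files differ):
#   normalize ./current.yaml > /tmp/current.norm.yaml
#   normalize ./new.yaml     > /tmp/new.norm.yaml
#   git diff --no-index /tmp/current.norm.yaml /tmp/new.norm.yaml
```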
|
<p>I have an OpenShift Route of this type:</p>
<pre><code>- apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: <Route name>
labels:
app.kubernetes.io/name: <name>
spec:
host: <hostname>
port:
targetPort: http
tls:
termination: passthrough
to:
kind: Service
name: <serviceName>
</code></pre>
<p>I want to convert it into an Ingress object, as there are no Routes in bare Kubernetes. I couldn't find any reference to <code>passthrough</code> TLS termination in the Ingress documentation. Can someone please help me convert this to an Ingress object?</p>
| <p>TLS passthrough is not officially part of the Ingress spec. Some specific ingress controllers support it, usually via non-standard TCP proxy modes. But what you probably want is a LoadBalancer-type service.</p>
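<p>A minimal sketch of such a Service, reusing the placeholder names from the Route above (the port mapping is an assumption based on the Route's <code>targetPort</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: <serviceName>
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: <name>
  ports:
    - name: https
      port: 443
      targetPort: http   # the container port serving TLS; termination stays in the pod
</code></pre>
<p>If you specifically need an Ingress, note that ingress-nginx supports passthrough via the <code>nginx.ingress.kubernetes.io/ssl-passthrough: "true"</code> annotation, but only when the controller is started with <code>--enable-ssl-passthrough</code>; other controllers differ.</p>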
|
<p>Our application uses RabbitMQ with only a single node. It is run in a single Kubernetes pod.</p>
<p>We use durable/persistent queues, but any time that our cloud instance is brought down and back up, and the RabbitMQ pod is restarted, our existing durable/persistent queues are gone.</p>
<p>At first, I thought that it was an issue with the volume that the queues were stored on not being persistent, but that turned out not to be the case.</p>
<p>It appears that the queue data is stored in <code>/var/lib/rabbitmq/mnesia/<user@hostname></code>. Since the pod's hostname changes each time, it creates a new set of data for the new hostname and loses access to the previously persisted queue. I have many sets of files built up in the mnesia folder, all from previous restarts.</p>
<p>How can I prevent this behavior?</p>
<p>The closest answer that I could find is in <a href="https://stackoverflow.com/questions/46892531/messages-dont-survive-pod-restarts-in-rabbitmq-autocluster-kubernetes-installat">this question</a>, but if I'm reading it correctly, this would only work if you have multiple nodes in a cluster simultaneously, sharing queue data. I'm not sure it would work with a single node. Or would it?</p>
| <p>What helped in our case was to set <code>hostname: <static-host-value></code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
replicas: 1
...
template:
metadata:
labels:
app: rabbitmq
spec:
...
containers:
- name: rabbitmq
image: rabbitmq:3-management
...
hostname: rmq-host
</code></pre>
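<p>An alternative worth considering (a sketch, not something the answer above tested): a single-replica <code>StatefulSet</code> gives the pod a stable name (<code>rabbitmq-0</code>), and therefore a stable mnesia directory, without hard-coding a hostname:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
spec:
  serviceName: rabbitmq       # requires a headless Service with this name
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3-management
</code></pre>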
|
<p>I have defined the <code>values.yaml</code> like the following:</p>
<pre><code>name: custom-streams
image: streams-docker-images
imagePullPolicy: Always
restartPolicy: Always
replicas: 1
port: 8080
nodeSelector:
nodetype: free
configHocon: |-
streams {
monitoring {
custom {
uri = ${?URI}
method = ${?METHOD}
}
}
}
</code></pre>
<p>And <code>configmap.yaml</code> like the following:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: custom-streams-configmap
data:
config.hocon: {{ .Values.configHocon | indent 4}}
</code></pre>
<p>Lastly, I have defined the <code>deployment.yaml</code> like the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configmap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
</code></pre>
<p>When I run the container via:</p>
<pre><code>helm install --name custom-streams custom-streams -f values.yaml --debug --namespace streaming
</code></pre>
<p>Then the pods are running fine, but I cannot see the <code>config.hocon</code> file in the container:</p>
<pre><code>$ kubectl exec -it custom-streams-55b45b7756-fb292 sh -n streaming
/ # ls
...
config
...
/ # cd config/
/config # ls
/config #
</code></pre>
<p>I need the <code>config.hocon</code> written in the <code>/config</code> folder. Can anyone let me know what is wrong with the configurations?</p>
| <p>I was able to resolve the issue. The issue was using <code>configmap</code> in place of <code>configMap</code> in <code>deployment.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Values.name }}
spec:
replicas: {{ default 1 .Values.replicas }}
strategy: {}
template:
spec:
containers:
- env:
{{- range $key, $value := .Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
image: {{ .Values.image }}
name: {{ .Values.name }}
volumeMounts:
- name: config-hocon
mountPath: /config
ports:
- containerPort: {{ .Values.port }}
restartPolicy: {{ .Values.restartPolicy }}
volumes:
- name: config-hocon
configMap:
name: custom-streams-configmap
items:
- key: config.hocon
path: config.hocon
status: {}
</code></pre>
|
<p>I have created a Docker image (a Java web application), created a Kubernetes cluster with 1 master and 1 worker, and created a Deployment and a Service. All the resources seem to run fine, as I have checked with <code>kubectl describe</code> on each resource. Finally, I used an Ingress in order to expose the service outside the cluster. The Ingress resource seems to work fine, as there are no errors while describing the Ingress object. However, on accessing the host in a browser from another machine, I get a "Your connection is not private" error. I am pretty new to Kubernetes and I am unable to debug the cause of this.</p>
<p>Below are service/deployment yaml files, ingress file contents and the status of resources.</p>
<p>Service and Deployment YAML:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: hotelapplication
labels:
name: hotelapplication
spec:
ports:
- name: appport
port: 8080
targetPort: 8080
selector:
app: hotelapplication
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hotelapplication
spec:
selector:
matchLabels:
app: hotelapplication
replicas: 1
template:
metadata:
labels:
app: hotelapplication
spec:
containers:
- name: hotelapplication
image: myname/hotelapplication:2.0
imagePullPolicy: Always
ports:
- containerPort: 8080
env: # Setting Enviornmental Variables
- name: DB_HOST # Setting Database host address from configMap
valueFrom:
configMapKeyRef:
name: db-config # name of configMap
key: host
- name: DB_NAME # Setting Database name from configMap
valueFrom:
configMapKeyRef:
name: db-config
key: name
- name: DB_USERNAME # Setting Database username from Secret
valueFrom:
secretKeyRef:
name: db-user # Secret Name
key: username
- name: DB_PASSWORD # Setting Database password from Secret
valueFrom:
secretKeyRef:
name: db-user
key: password
</code></pre>
<p>Below is the ingress yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: springboot-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: testing.mydomain.dev
http:
paths:
- backend:
serviceName: hotelapplication
servicePort: 8080
</code></pre>
<p>All the resources - pods, deployments, services, endpoints seem to work fine.</p>
<p>Ingress:</p>
<pre><code>Name: springboot-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
testing.mydomain.dev
hotelapplication:8080 (192.168.254.51:8080)
Annotations: ingress.kubernetes.io/rewrite-target: /
Events: <none>
</code></pre>
<p>Services:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hotelapplication ClusterIP 10.109.220.90 <none> 8080/TCP 37m
</code></pre>
<p>Deployments:</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
hotelapplication 1/1 1 1 5h55m
mysql-hotelapplication 1/1 1 1 22h
nfs-client-provisioner 1/1 1 1 23h
</code></pre>
<p>Pods object:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
hotelapplication-596f65488f-cnhlc 1/1 Running 0 149m
mysql-hotelapplication-65587cb8c8-crx4v 1/1 Running 0 22h
nfs-client-provisioner-64f4fb59d8-cb6hd 1/1 Running 0 23h
</code></pre>
<p>I have deleted services/deployments/pods and retried, all in vain. Please help me to fix this.</p>
<p>Edit 1:</p>
<p>I have added <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> to the Ingress definition, but I am facing the same issue: on accessing the public IP of the host, I get a 502 Bad Gateway error.</p>
<p>In the ingress controller logs, I found the error below:</p>
<pre><code>P/1.1", upstream: "http://192.168.254.56:8081/", host: "myip"
2021/05/06 06:01:33 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:33 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:34 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:34 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:34 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:35 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:35 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:35 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET / HTTP/1.1", upstream: "http://192.168.254.56:8081/", host: "<myhostipaddress>"
2021/05/06 06:01:36 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:36 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
2021/05/06 06:01:36 [error] 115#115: *272 connect() failed (111: Connection refused) while connecting to upstream, client: <clientipaddress>, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://192.168.254.56:8081/favicon.ico", host: "<myhostipaddress>", referrer: "http://<myhostipaddress>/"
W0506 06:06:46.328727 6 controller.go:391] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
W0506 06:09:06.921564 6 controller.go:391] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
</code></pre>
| <p>Apparently, I had configured an incorrect containerPort in the deployment. There was nothing wrong with the ingress configuration, but Kubernetes did not show any errors in the logs, which made debugging pretty difficult.</p>
<p>Just a tip for beginners: before trying to expose your services through an ingress, test each service by setting its <code>type</code> to <code>NodePort</code> in the service definition. That way you can verify the service itself is configured correctly, simply by accessing it from outside the cluster.</p>
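<p>For illustration, a minimal <code>NodePort</code> service sketch. The app label and port numbers here are placeholders borrowed from the question; the important detail is that <code>targetPort</code> must match the <code>containerPort</code> declared in the deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: hotelapplication
spec:
  type: NodePort          # temporarily expose the service on every node for testing
  selector:
    app: hotelapplication
  ports:
    - port: 8081
      targetPort: 8081    # must match the containerPort of the pod
      nodePort: 30081     # reachable at http://&lt;any-node-ip&gt;:30081
</code></pre>
<p>Once <code>http://&lt;node-ip&gt;:30081</code> responds, the pod and service are fine, and any remaining problem is in the ingress layer.</p>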
|
<p>I see there are 2 separate metrics <code>ApproximateNumberOfMessagesVisible</code> and <code>ApproximateNumberOfMessagesNotVisible</code>.</p>
<p>Using the number of visible messages causes processing pods to be marked for termination immediately after they pick up a message from the queue, since the message is no longer visible. If I use the number of messages not visible, it will not scale up.</p>
<p>I'm trying to scale a Kubernetes service using a Horizontal Pod Autoscaler and an external metric from SQS. Here is the <code>ExternalMetric</code> template:</p>
<pre><code>apiVersion: metrics.aws/v1alpha1
kind: ExternalMetric
metadata:
  name: metric-name
spec:
  name: metric-name
  queries:
    - id: metric_name
      metricStat:
        metric:
          namespace: "AWS/SQS"
          metricName: "ApproximateNumberOfMessagesVisible"
          dimensions:
            - name: QueueName
              value: "queue_name"
        period: 60
        stat: Average
        unit: Count
      returnData: true
</code></pre>
<p>Here is the HPA template:</p>
<pre><code>kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: hpa-name
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: deployment-name
  minReplicas: 1
  maxReplicas: 50
  metrics:
    - type: External
      external:
        metricName: metric-name
        targetAverageValue: 1
</code></pre>
<p>The problem will be solved if I can define another custom metric that is a sum of these two metrics, how else can I solve this problem?</p>
| <p>We used a Lambda to fetch the two metrics and publish a custom metric that is the sum of in-flight and waiting messages, and triggered this Lambda with a CloudWatch Events rule at whatever frequency you want: <a href="https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#rules:action=create" rel="nofollow noreferrer">https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#rules:action=create</a></p>
<p>Here is the Lambda code for reference:</p>
<pre><code>const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({region: ''}); // fill region here
const sqs = new AWS.SQS();

const SQS_URL = ''; // fill queue url here

async function getSqsMetric(queueUrl) {
  var params = {
    QueueUrl: queueUrl,
    AttributeNames: ['All']
  };
  return new Promise((res, rej) => {
    sqs.getQueueAttributes(params, function(err, data) {
      if (err) rej(err);
      else res(data);
    });
  });
}

function buildMetric(numMessages) {
  return {
    Namespace: 'yourcompany-custom-metrics',
    MetricData: [{
      MetricName: 'mymetric',
      Dimensions: [{
        Name: 'env',
        Value: 'prod'
      }],
      Timestamp: new Date(),
      Unit: 'Count',
      Value: numMessages
    }]
  };
}

async function pushMetrics(metrics) {
  await new Promise((res) => cloudwatch.putMetricData(metrics, (err, data) => {
    if (err) {
      console.log('err', err, err.stack); // an error occurred
      res(err);
    } else {
      console.log('response', data); // successful response
      res(data);
    }
  }));
}

exports.handler = async (event) => {
  console.log('Started');
  const sqsMetrics = await getSqsMetric(SQS_URL).catch(console.error);
  var queueSize = null;
  if (sqsMetrics) {
    console.log('Got sqsMetrics', sqsMetrics);
    if (sqsMetrics.Attributes) {
      queueSize = parseInt(sqsMetrics.Attributes.ApproximateNumberOfMessages) +
        parseInt(sqsMetrics.Attributes.ApproximateNumberOfMessagesNotVisible);
      console.log('Pushing', queueSize);
      await pushMetrics(buildMetric(queueSize));
    }
  } else {
    console.log('Failed fetching sqsMetrics');
  }
  const response = {
    statusCode: 200,
    body: JSON.stringify('Pushed ' + queueSize),
  };
  return response;
};
</code></pre>
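<p>With the custom metric published, the <code>ExternalMetric</code> from the question can simply point at it instead of the built-in SQS metric. A sketch, assuming the namespace, metric name, and dimension used in the Lambda above:</p>
<pre><code>apiVersion: metrics.aws/v1alpha1
kind: ExternalMetric
metadata:
  name: queue-total-messages
spec:
  name: queue-total-messages
  queries:
    - id: queue_total_messages
      metricStat:
        metric:
          namespace: "yourcompany-custom-metrics"
          metricName: "mymetric"
          dimensions:
            - name: env
              value: "prod"
        period: 60
        stat: Average
        unit: Count
      returnData: true
</code></pre>
<p>The HPA then scales on the sum of visible and in-flight messages, so pods are not targeted for termination while they are still processing.</p>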
|
<p>I am running JMeter on Azure Kubernetes for load testing. JMeter uses an HTTP sampler to call an Azure Function (HTTP endpoint) to generate load.
I am getting the error message <strong>Response code:Non HTTP response code: java.net.UnknownHostException Response message:Non HTTP response message: abcd.azurewebsites.net</strong> on Kubernetes.</p>
<p>I ran the same JMeter test on an Azure virtual machine and did not get this error.
I am using the same test plan (jmx file) for both Kubernetes and the virtual machine, and the same Azure Function in both cases.</p>
<p>Please let me know why I am getting the error <strong>Response code:Non HTTP response code: java.net.UnknownHostException Response message:Non HTTP response message: abcd.azurewebsites.net</strong> when running on Kubernetes.</p>
<p>I am using the following configuration:</p>
<ol>
<li>JMeter - 5.2.1</li>
<li>Kubernetes - 1.19.9</li>
<li>JMX file with 150 threads</li>
</ol>
<p>Regards,
Amit Agrawal</p>
| <p>I doubt that this is a JMeter problem; most probably your pod doesn't have Internet access for some reason. You might want to get familiar with the following materials:</p>
<ul>
<li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports" rel="nofollow noreferrer">Check required ports</a></li>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">Debugging DNS Resolution</a></li>
</ul>
<p>However, if JMeter is the only piece of software having this problem, you might want to <a href="https://www.blazemeter.com/blog/how-to-configure-jmeter-logging" rel="nofollow noreferrer">increase JMeter's logging verbosity</a> for the HTTP protocol by adding the following lines to the <em>log4j2.xml</em> file:</p>
<pre><code><Logger name="org.apache.http" level="debug" />
<Logger name="org.apache.http.wire" level="error" />
<Logger name="org.apache.jmeter.protocol.http" level="debug" />
</code></pre>
<p>One possible reason is that locally you <a href="https://jmeter.apache.org/usermanual/get-started.html#proxy_server" rel="nofollow noreferrer">configured JMeter to use your corporate proxy for accessing the Internet</a> and by accident forgot to remove this setting from your k8s deployment.</p>
|
<p>I am trying to run a local development Kubernetes cluster in the Docker Desktop context, but it just keeps getting the following taint: <code>node.kubernetes.io/not-ready:NoSchedule</code>.</p>
<p>Manually removing the taint, i.e. <code>kubectl taint nodes --all node.kubernetes.io/not-ready-</code>, doesn't help, because it comes back right away.</p>
<p>The output of <code>kubectl describe node</code> is:</p>
<pre><code>Name: docker-desktop
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=docker-desktop
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 07 May 2021 11:00:31 +0100
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: docker-desktop
AcquireTime: <unset>
RenewTime: Fri, 07 May 2021 16:14:19 +0100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 07 May 2021 16:14:05 +0100 Fri, 07 May 2021 11:00:31 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 07 May 2021 16:14:05 +0100 Fri, 07 May 2021 11:00:31 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 07 May 2021 16:14:05 +0100 Fri, 07 May 2021 11:00:31 +0100 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Fri, 07 May 2021 16:14:05 +0100 Fri, 07 May 2021 16:11:05 +0100 KubeletNotReady PLEG is not healthy: pleg was last seen active 6m22.485400578s ago; threshold is 3m0s
Addresses:
InternalIP: 192.168.65.4
Hostname: docker-desktop
Capacity:
cpu: 5
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 18954344Ki
pods: 110
Allocatable:
cpu: 5
ephemeral-storage: 56453061334
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 18851944Ki
pods: 110
System Info:
Machine ID: f4da8f67-6e48-47f4-94f7-0a827259b845
System UUID: d07e4b6a-0000-0000-b65f-2398524d39c2
Boot ID: 431e1681-fdef-43db-9924-cb019ff53848
Kernel Version: 5.10.25-linuxkit
OS Image: Docker Desktop
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.6
Kubelet Version: v1.19.7
Kube-Proxy Version: v1.19.7
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1160m (23%) 1260m (25%)
memory 1301775360 (6%) 13288969216 (68%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeNotReady 86m (x2 over 90m) kubelet Node docker-desktop status is now: NodeNotReady
Normal NodeReady 85m (x3 over 5h13m) kubelet Node docker-desktop status is now: NodeReady
Normal Starting 61m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 61m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 61m (x8 over 61m) kubelet Node docker-desktop status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 61m (x7 over 61m) kubelet Node docker-desktop status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 61m (x8 over 61m) kubelet Node docker-desktop status is now: NodeHasSufficientPID
Normal Starting 60m kube-proxy Starting kube-proxy.
Normal NodeNotReady 55m kubelet Node docker-desktop status is now: NodeNotReady
Normal Starting 49m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 49m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 49m (x7 over 49m) kubelet Node docker-desktop status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 49m (x8 over 49m) kubelet Node docker-desktop status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 49m (x8 over 49m) kubelet Node docker-desktop status is now: NodeHasNoDiskPressure
Normal Starting 48m kube-proxy Starting kube-proxy.
Normal NodeNotReady 41m kubelet Node docker-desktop status is now: NodeNotReady
Normal Starting 37m kubelet Starting kubelet.
Normal NodeAllocatableEnforced 37m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 37m (x7 over 37m) kubelet Node docker-desktop status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 37m (x8 over 37m) kubelet Node docker-desktop status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 37m (x8 over 37m) kubelet Node docker-desktop status is now: NodeHasSufficientMemory
Normal Starting 36m kube-proxy Starting kube-proxy.
Normal NodeAllocatableEnforced 21m kubelet Updated Node Allocatable limit across pods
Normal Starting 21m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 21m (x8 over 21m) kubelet Node docker-desktop status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 21m (x7 over 21m) kubelet Node docker-desktop status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 21m (x8 over 21m) kubelet Node docker-desktop status is now: NodeHasNoDiskPressure
Normal Starting 21m kube-proxy Starting kube-proxy.
Normal NodeReady 6m16s (x2 over 14m) kubelet Node docker-desktop status is now: NodeReady
Normal NodeNotReady 3m16s (x3 over 15m) kubelet Node docker-desktop status is now: NodeNotReady
</code></pre>
<p>The allocated resources are quite significant, because the cluster is large as well:</p>
<pre><code>CPUs: 5
Memory: 18GB
SWAP: 1GB
Disk Image: 60GB
</code></pre>
<p>Machine: Mac Core i7, 32GB RAM, 512 GB SSD</p>
<p>I can see that the problem is with PLEG, but I need to understand what caused the Pod Lifecycle Event Generator to report an error: whether it's insufficient allocatable node resources or something else.</p>
<p>Any ideas?</p>
| <p>In my case the problem was some extremely resource-hungry pods, so I had to scale down some deployments to get a stable environment.</p>
|
<p>I am trying to set two <code>env</code> variables of mongo, namely <code>MONGO_INITDB_ROOT_USERNAME</code> and <code>MONGO_INITDB_ROOT_PASSWORD</code>, using a Kubernetes <strong>ConfigMap</strong> and <strong>Secret</strong>, as follows.</p>
<p>When I don't use the ConfigMap and Secret, i.e. I hardcode the username and password, it works; but when I try to replace them with the ConfigMap and Secret, it says</p>
<blockquote>
<p>'Authentication failed.'</p>
</blockquote>
<p>My username and password are the same: <code>admin</code>.</p>
<p>Here's the <code>yaml</code> definition for these objects. Can someone tell me what is wrong?</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-username
data:
  username: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-password
data:
  password: YWRtaW4K
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
        - name: mongodbtest
          image: mongo:3
          # env:
          #   - name: MONGO_INITDB_ROOT_USERNAME
          #     value: admin
          #   - name: MONGO_INITDB_ROOT_PASSWORD
          #     value: admin
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: mongodb-username
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-password
                  key: password
</code></pre>
| <p>Finally, after hours, I was able to find the solution. It was nothing I did on the Kubernetes side; the problem was how I did the <code>base64</code> encoding.</p>
<p>The correct way to encode is with the following command:</p>
<pre><code>echo -n 'admin' | base64
</code></pre>
<p>Without the <code>-n</code> flag, <code>echo</code> appends a trailing newline, so the encoded value was <code>YWRtaW4K</code> ("admin\n") instead of <code>YWRtaW4=</code> ("admin"), and that was the issue in my case.</p>
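<p>To see the difference, compare the two encodings (a quick sketch you can run in any shell):</p>

```shell
# Plain echo appends a newline, which base64 dutifully encodes:
echo 'admin' | base64      # YWRtaW4K  -> decodes to "admin\n" (breaks auth)
# With -n the newline is suppressed, giving the value you actually want:
echo -n 'admin' | base64   # YWRtaW4=  -> decodes to "admin"
```

<p>The <code>YWRtaW4K</code> value in the Secret above therefore set MongoDB's root password to <code>admin</code> followed by a newline, which is why authentication failed.</p>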
|
<p>I am creating a fresh private-node cluster in GCloud, where I have a deployment.yml with:</p>
<pre><code>...
containers:
  - name: print-logs
    image: busybox
    command: ["sleep", "infinity"]
</code></pre>
<p>When I review the corresponding pod, I always get this error: "failed to do request: Head <a href="https://registry-1.docker.io/.." rel="nofollow noreferrer">https://registry-1.docker.io/..</a>. timeout"</p>
<p>Full logs:</p>
<pre><code># kubectl describe pod <my_pod>
Warning Failed 9s kubelet Failed to pull image "docker.io/library/busybox:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:latest": failed to resolve reference "docker.io/library/busybox:latest": failed to do request: Head https://registry-1.docker.io/v2/library/busybox/manifests/latest: dial tcp 3.220.36.210:443: i/o timeout
Warning Failed 9s kubelet Error: ErrImagePull
</code></pre>
<p>Cluster settings:</p>
<pre><code>gcloud container clusters create test-cluster \
    --preemptible \
    --enable-ip-alias \
    --enable-private-nodes \
    --machine-type n1-standard-2 \
    --zone europe-west4-a \
    --enable-cloud-logging \
    --enable-cloud-monitoring \
    --create-subnetwork name=main-subnet \
    --master-ipv4-cidr 172.16.0.32/28 \
    --no-enable-master-authorized-networks \
    --image-type COS_CONTAINERD
</code></pre>
<p>Please help me.</p>
| <p>First connect to the cluster using:</p>
<pre><code>gcloud container clusters get-credentials NAME [--internal-ip] [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
</code></pre>
<p>Then try to pull the Docker image manually.</p>
<p>For more information you can refer to <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials" rel="nofollow noreferrer">this reference</a> and <a href="https://cloud.google.com/build/docs/building/build-containers" rel="nofollow noreferrer">this guide</a> (which explains building container images).</p>
<p>For troubleshooting common Container Registry and Docker issues you can refer to <a href="https://cloud.google.com/container-registry/docs/troubleshooting" rel="nofollow noreferrer">this</a> doc.</p>
|
<p>When I run the following command:</p>
<pre class="lang-sh prettyprint-override"><code>minikube addons enable ingress
</code></pre>
<p>I get the following error:</p>
<pre class="lang-sh prettyprint-override"><code>▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
❌ Exiting due to MK_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: Process exited with status 1
stdout:
namespace/ingress-nginx unchanged
configmap/ingress-nginx-controller unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
serviceaccount/ingress-nginx-admission unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
service/ingress-nginx-controller-admission unchanged
service/ingress-nginx-controller unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
stderr:
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-controller\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"minReadySeconds\":0,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"}},\"strategy\":{\"rollingUpdate\":{\"maxUnavailable\":1},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"controller\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\",\"gcp-auth-skip-secret\":\"true\"}},\"spec\":{\"containers\":[{\"args\":[\"/nginx-ingress-controller\",\"--ingress-class=nginx\",\"--configmap=$(POD_NAMESPACE)/ingress-nginx-controller\",\"--report-node-internal-ip-address\",\"--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services\",\"--udp-services-configmap=$(POD_NAMESPACE)/udp-services\",\"--validating-webhook=:8443\",\"--validating-webhook-certificate=/usr/local/certificates/cert\",\"--validating-webhook-key=/usr/local/certificates/key\"],\"env\":[{\"name\":\"POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}},{\"name\":\"LD_PRELOAD\",\"value\":\"/usr/local/lib/libmimalloc.so\"}],\"image\":\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\",\"imagePullPolicy\":\"IfNotPresent\",\"lifecycle\":{\"preStop\":{\"exe
c\":{\"command\":[\"/wait-shutdown\"]}}},\"livenessProbe\":{\"failureThreshold\":5,\"httpGet\":{\"path\":\"/healthz\",\"port\":10254,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":10,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":1},\"name\":\"controller\",\"ports\":[{\"containerPort\":80,\"hostPort\":80,\"name\":\"http\",\"protocol\":\"TCP\"},{\"containerPort\":443,\"hostPort\":443,\"name\":\"https\",\"protocol\":\"TCP\"},{\"containerPort\":8443,\"name\":\"webhook\",\"protocol\":\"TCP\"}],\"readinessProbe\":{\"failureThreshold\":3,\"httpGet\":{\"path\":\"/healthz\",\"port\":10254,\"scheme\":\"HTTP\"},\"initialDelaySeconds\":10,\"periodSeconds\":10,\"successThreshold\":1,\"timeoutSeconds\":1},\"resources\":{\"requests\":{\"cpu\":\"100m\",\"memory\":\"90Mi\"}},\"securityContext\":{\"allowPrivilegeEscalation\":true,\"capabilities\":{\"add\":[\"NET_BIND_SERVICE\"],\"drop\":[\"ALL\"]},\"runAsUser\":101},\"volumeMounts\":[{\"mountPath\":\"/usr/local/certificates/\",\"name\":\"webhook-cert\",\"readOnly\":true}]}],\"dnsPolicy\":\"ClusterFirst\",\"serviceAccountName\":\"ingress-nginx\",\"volumes\":[{\"name\":\"webhook-cert\",\"secret\":{\"secretName\":\"ingress-nginx-admission\"}}]}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"minReadySeconds":0,"selector":{"matchLabels":{"addonmanager.kubernetes.io/mode":"Reconcile"}},"strategy":{"$retainKeys":["rollingUpdate","type"],"rollingUpdate":{"maxUnavailable":1}},"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","gcp-auth-skip-secret":"true"}},"spec":{"$setElementOrder/containers":[{"name":"controller"}],"containers":[{"$setElementOrder/ports":[{"containerPort":80},{"containerPort":443},{"containerPort":8443}],"args":["/nginx-ingress-controller","--ingress-class=nginx","--configmap=$(POD_NAMESPACE)/ingress-nginx-controller","--report-node-internal-ip-address","--tcp-serv
ices-configmap=$(POD_NAMESPACE)/tcp-services","--udp-services-configmap=$(POD_NAMESPACE)/udp-services","--validating-webhook=:8443","--validating-webhook-certificate=/usr/local/certificates/cert","--validating-webhook-key=/usr/local/certificates/key"],"image":"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a","name":"controller","ports":[{"containerPort":80,"hostPort":80},{"containerPort":443,"hostPort":443}]}],"nodeSelector":null,"terminationGracePeriodSeconds":null}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "ingress-nginx-controller", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Deployment.apps "ingress-nginx-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"controller", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"helm.sh/hook":null,"helm.sh/hook-delete-policy":null,"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\"},\"spec\":{\"containers\":[{\"args\":[\"create\",\"--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc\",\"--namespace=$(POD_NAMESPACE)\",\"--secret-name=ingress-nginx-admission\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"create\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"$setElementOrder/containers":[{"name":"create"}],"containers":[{"image":"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
","name":"create"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-create", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"d33a74a3-101c-4e82-a2b7-45b46068f189", "job-name":"ingress-nginx-admission-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7", Command:[]string(nil), Args:[]string{"create", "--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc", "--namespace=$(POD_NAMESPACE)", "--secret-name=ingress-nginx-admission"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00a79dea0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc003184dc0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc010b3d980), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"helm.sh/hook":null,"helm.sh/hook-delete-policy":null,"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\"},\"spec\":{\"containers\":[{\"args\":[\"patch\",\"--webhook-name=ingress-nginx-admission\",\"--namespace=$(POD_NAMESPACE)\",\"--patch-mutating=false\",\"--secret-name=ingress-nginx-admission\",\"--patch-failure-policy=Fail\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"patch\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"template":{"metadata":{"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app.kubernetes.io/managed-by":null,"app.kubernetes.io/version":null,"helm.sh/chart":null}},"spec":{"$setElementOrder/containers":[{"name":"patch"}],"containers":[{"image":"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7","na
me":"patch"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-patch", Namespace: "ingress-nginx"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-patch" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-patch", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"addonmanager.kubernetes.io/mode":"Reconcile", "app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"ef303f40-b52d-49c5-ab80-8330379fed36", "job-name":"ingress-nginx-admission-patch"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"patch", Image:"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7", Command:[]string(nil), Args:[]string{"patch", "--webhook-name=ingress-nginx-admission", "--namespace=$(POD_NAMESPACE)", "--patch-mutating=false", "--secret-name=ingress-nginx-admission", "--patch-failure-policy=Fail"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc00fd798a0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc00573d190), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00d7d9100), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
]
😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose
</code></pre>
<p>I had an issue on my PC, so I reinstalled minikube. After that, <code>minikube start</code> worked fine, but as soon as I enabled the ingress addon the error above appeared.</p>
<p>And when I run <code>skaffold dev</code>, the following error appears:</p>
<pre class="lang-sh prettyprint-override"><code>Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
- Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": an error on the server ("") has prevented the request from succeeding
exiting dev mode because first deploy failed: kubectl apply: exit status 1
</code></pre>
| <p>As <strong>@Brian de Alwis</strong> pointed out in the comments section, this PR <a href="https://github.com/kubernetes/minikube/pull/11189" rel="nofollow noreferrer">#11189</a> should resolve the above issue.</p>
<p>You can try the <a href="https://github.com/kubernetes/minikube/releases/tag/v1.20.0-beta.0" rel="nofollow noreferrer">v1.20.0-beta.0</a> release with this fix. Additionally, a stable <a href="https://github.com/kubernetes/minikube/releases/tag/v1.20.0" rel="nofollow noreferrer">v1.20.0</a> version is now available.</p>
|
<p>I'm running Kubeflow on a local machine that I deployed with multipass using <a href="https://yann-leguilly.gitlab.io/post/2020-03-04-kubeflow-on-laptop/" rel="nofollow noreferrer">these steps</a>, but when I try to run my pipeline it gets stuck with the status <code>ContainerCreating</code>. When I ran <code>kubectl describe pod train-pipeline-msmwc-1648946763 -n kubeflow</code> I found this in the Events section of the output:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 7m12s (x51 over 120m) kubelet, kubeflow-vm Unable to mount volumes for pod "train-pipeline-msmwc-1648946763_kubeflow(45889c06-87cf-4467-8cfa-3673c7633518)": timeout expired waiting for volumes to attach or mount for pod "kubeflow"/"train-pipeline-msmwc-1648946763". list of unmounted volumes=[docker-sock]. list of unattached volumes=[podmetadata docker-sock mlpipeline-minio-artifact pipeline-runner-token-dkvps]
Warning FailedMount 2m22s (x67 over 122m) kubelet, kubeflow-vm MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file
</code></pre>
<p>It looks to me like there is a problem with my deployment, but I'm new to Kubernetes and can't figure out what I'm supposed to do. I don't know if it helps, but I'm pulling the containers from a private Docker registry and I've set up the secret according to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">this</a>.</p>
| <p>You don't need to use Docker. The problem is actually with the <code>workflow-controller-configmap</code> in the <code>kubeflow</code> namespace. You can edit it with</p>
<pre><code>kubectl edit configmap workflow-controller-configmap -n kubeflow
</code></pre>
<p>and change <code>containerRuntimeExecutor: docker</code> to <code>containerRuntimeExecutor: pns</code>. You can also change some of the steps and install Kubeflow 1.3 in multipass (1.21 rather than 1.15). Do not use the kubeflow add-on (at least it didn't work for me). You need <code>kustomize 3.2</code> to create the manifests, as mentioned in <a href="https://github.com/kubeflow/manifests#installation" rel="noreferrer">https://github.com/kubeflow/manifests#installation</a>.</p>
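<p>For reference, here is a hypothetical sketch of what the edited ConfigMap could look like. The exact nesting of <code>containerRuntimeExecutor</code> varies between Kubeflow/Argo releases, so verify it against what <code>kubectl edit</code> actually shows you:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: kubeflow
data:
  # In some releases this key lives inside a multi-line "config" entry instead
  containerRuntimeExecutor: pns
```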
|
<p>I'm using the Helm <a href="https://pkg.go.dev/helm.sh/helm/v3" rel="nofollow noreferrer">SDK</a> and it works great; for my testing I'm using the fake option (for kubeconfig), which works.</p>
<p>Now, when I point the <code>kubeconfig</code> at my cluster, I notice that during installation the chart gets <strong>stuck in pending status</strong>,
and it <strong>stays in that state forever</strong> until I manually delete and install it again.
My question is how to solve this issue with the
<strong>Helm SDK</strong> (via code only) <a href="https://pkg.go.dev/helm.sh/helm/v3" rel="nofollow noreferrer">https://pkg.go.dev/helm.sh/helm/v3</a>:</p>
<p>i.e. wait for a while, and if the status is still pending after 3 minutes, delete and reinstall it, or try an upgrade first.</p>
<p>This is the code:</p>
<pre><code> kubeConfigPath, err := findKubeConfig()
if err != nil {
fmt.Println(err)
}
actionConfig := &action.Configuration{
}
cfg := cli.New()
clientGetter := genericclioptions.NewConfigFlags(false)
clientGetter.KubeConfig = &kubeConfigPath
err = actionConfig.Init(clientGetter, "def", "memory", log.Printf)
if err != nil {
fmt.Println(err)
}
chart, err := installation.InstallChart(cfg, "test", "chart1", "./charts/dns", nil, actionConfig)
if err != nil {
fmt.Println(err)
}
fmt.Println(chart)
}
func findKubeConfig() (string, error) {
env := os.Getenv("KUBECONFIG")
if env != "" {
return env, nil
}
path, err := homedir.Expand("~/.kube/config")
if err != nil {
return "", err
}
return path, nil
}
</code></pre>
| <p>Looking at the example, I don't know what the <code>installation</code> package is, but I suspect you need to use a <code>Loader</code> (maybe you are already using one in that package).</p>
<p>From a quick search of the <a href="https://github.com/helm/helm/issues/6910#issuecomment-557106092" rel="nofollow noreferrer">GitHub issues</a> I found someone with a similar problem, and they got the same suggestion. Here is a derived example:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
    "fmt"
    "os"
    "time"

    "helm.sh/helm/v3/pkg/action"
    "helm.sh/helm/v3/pkg/chart"
    "helm.sh/helm/v3/pkg/chart/loader"
    "helm.sh/helm/v3/pkg/kube"
    "helm.sh/helm/v3/pkg/release"

    _ "k8s.io/client-go/plugin/pkg/client/auth"
)

func main() {
    chartPath := "./charts/dns"

    chrt, err := loader.Load(chartPath)
    if err != nil {
        panic(err)
    }

    kubeconfigPath, err := findKubeConfig() // as defined in your question
    if err != nil {
        panic(err)
    }
    releaseName := "test"
    releaseNamespace := "default"

    actionConfig := new(action.Configuration)
    if err := actionConfig.Init(kube.GetConfig(kubeconfigPath, "", releaseNamespace), releaseNamespace, os.Getenv("HELM_DRIVER"), func(format string, v ...interface{}) {
        fmt.Printf(format+"\n", v...)
    }); err != nil {
        panic(err)
    }

    iCli := action.NewInstall(actionConfig)
    iCli.Namespace = releaseNamespace
    iCli.ReleaseName = releaseName

    rel, err := iCli.Run(chrt, nil)
    if err != nil {
        panic(err)
    }
    fmt.Println("Successfully installed release:", rel.Name)

    // Poll the release status; feel free to run this in a goroutine
    // alongside the deletion logic.
    upCli := action.NewUpgrade(actionConfig)
    upgradedRel, err := pollAndUpdate(rel, upCli, chrt) // see if it's better to just run that code here directly :shrug:
    if err != nil {
        panic(err)
    }

    // If it is still pending after polling, delete it.
    if upgradedRel.Info.Status.IsPending() {
        unCli := action.NewUninstall(actionConfig)
        if _, err := unCli.Run(upgradedRel.Name); err != nil {
            panic(err)
        }
    }
}

// pollAndUpdate retries an upgrade every 10s while the release is pending,
// giving up after 3 minutes.
func pollAndUpdate(originalRel *release.Release, upgradeCli *action.Upgrade, chrt *chart.Chart) (*release.Release, error) {
    rel := originalRel
    deadline := time.Now().Add(3 * time.Minute)

    // https://pkg.go.dev/helm.sh/helm/v3@v3.5.4/pkg/release#Status.IsPending
    for rel.Info.Status.IsPending() {
        if time.Now().After(deadline) {
            return rel, nil // still pending; the caller decides what to do
        }
        time.Sleep(10 * time.Second)

        // run the upgrade: https://github.com/helm/helm/blob/main/pkg/action/upgrade.go
        upgraded, err := upgradeCli.Run(rel.Name, chrt, nil)
        if err != nil {
            return rel, err
        }
        rel = upgraded
    }
    return rel, nil
}
</code></pre>
|
<p>It looks like Helm 3 is making this more difficult: <a href="https://github.com/databus23/helm-diff/issues/176" rel="nofollow noreferrer">https://github.com/databus23/helm-diff/issues/176</a></p>
<p>But I'm finding that, whether I use the helm-diff plugin OR just run <code>helm template releaseName chart | kubectl diff -f - | bat -l diff -</code>, I see ALL resources as new with "+" next to them. Why is this?</p>
<p>I'm running these commands:</p>
<pre><code># upgrade
helm upgrade --install --create-namespace \
--namespace derps -f helm/deploy-values.yaml \
--set 'parentChart.param1=sdfsdfsdfdsf' \
--set 'parentChart.param2=sdfsdfsdfdsf' \
--set 'parentChart.param3=sdfsdfsdfdsf' \
--set 'parentChart.param4=sdfsdfsdfdsf' \
--set 'parentChart.param5=sdfsdfsdfdsf' \
myapp helm/mychart
# make no changes and try to diff
helm template \
--namespace derps -f helm/deploy-values.yaml \
--set 'parentChart.param1=sdfsdfsdfdsf' \
--set 'parentChart.param2=sdfsdfsdfdsf' \
--set 'parentChart.param3=sdfsdfsdfdsf' \
--set 'parentChart.param4=sdfsdfsdfdsf' \
--set 'parentChart.param5=sdfsdfsdfdsf' \
myapp helm/mychart | kubectl diff -f - | bat -l diff -
</code></pre>
<p>I get output showing the ENTIRE manifest as new. Why is this?</p>
| <p>did you try:</p>
<pre class="lang-sh prettyprint-override"><code>helm template \
--namespace derps --no-hooks --skip-tests \
-f helm/deploy-values.yaml \
--set 'parentChart.param1=sdfsdfsdfdsf' \
--set 'parentChart.param2=sdfsdfsdfdsf' \
--set 'parentChart.param3=sdfsdfsdfdsf' \
--set 'parentChart.param4=sdfsdfsdfdsf' \
--set 'parentChart.param5=sdfsdfsdfdsf' \
myapp helm/mychart | kubectl diff --namespace derps -f - | bat -l diff -
</code></pre>
|
<p>I am trying to build a Docker image with Postgres data included. I am following the link below.</p>
<pre><code>https://sharmank.medium.com/build-postgres-docker-image-with-data-included-489bd58a1f9e
</code></pre>
<p>The image is built with the data included, as shown below.</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
db_image_with_data latest 48b5d776e7d6 2 hours ago 1.67GB
</code></pre>
<p>I can see the data is available in the image when I run it with the command below:</p>
<pre><code>
docker run --env PGDATA=postgres -p 5432:5432 -I db_image_with_data
</code></pre>
<p>I have pushed the same image to Docker Hub and configured the same image in Kubernetes.</p>
<p>But only the database was created; no data was populated:</p>
<pre><code>kubectl exec my-database-7c4bb7bdd7-m8sjd -n dev-app -- sh -c 'psql -U "postgres" -d "devapp" -c "select * from execution_groups"'
id | groupname | grouptype | user_id | tag | created_at | updated_at | group_id
----+-----------+-----------+---------+--------+------------+------------+----------
(0 rows)
</code></pre>
<p>Is there anything I am missing here?</p>
| <p>In the many years since that post came out, it looks like the <code>postgres</code> community container image has been tweaked to automatically provision a volume for the data when running under Docker (via the <code>VOLUME</code> directive). This means your content was stored outside the running container and thus is not part of the saved image. AFAIK this cannot be disabled so you'll have to build your own base image (or find another one to use).</p>
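<p>If you go the route of building your own image, one possible sketch (untested, and the <code>PGDATA</code> path here is an arbitrary choice) is to point <code>PGDATA</code> at a directory that the base image does <em>not</em> declare as a <code>VOLUME</code>, so that when you later initialize and commit the data (as in the article's approach), it is captured in an image layer rather than in an anonymous volume:</p>

```dockerfile
# Sketch only: the official image declares VOLUME /var/lib/postgresql/data,
# so point PGDATA outside that path before initializing/committing data.
FROM postgres:13
ENV PGDATA=/var/lib/postgresql/pgdata-in-image
```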
|
<p>What is the default cpu and memory allocated to a Pod in a custom (not default) namespace when NO limit OR request is specified?</p>
<p>Thanks</p>
| <p>I think <a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">Kubernetes Limit Ranges</a> docs answers your question</p>
<blockquote>
<p><em>"By default, containers run with <strong>unbounded</strong> compute resources on a Kubernetes cluster..."</em></p>
</blockquote>
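<p>If you want containers in a namespace to get defaults instead, you can create a <code>LimitRange</code> in that namespace. A minimal sketch (the namespace name and values are arbitrary examples):</p>

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: my-namespace
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container sets no request
      cpu: 100m
      memory: 128Mi
    default:               # applied when a container sets no limit
      cpu: 500m
      memory: 256Mi
```

<p>With this in place, a Pod created without requests/limits in <code>my-namespace</code> gets these values injected automatically.</p>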
|
<p>I try to run my web application with two backend containers.</p>
<ul>
<li>/ should be routed to the frontend container</li>
<li>everything starting with /backend/ should go to the backend container.</li>
</ul>
<p>So fare, so good, but now the css & js files from the /backend are not loaded because the files are referenced in the HTML file like "/bundles/css/style.css" and now ingress controller route this request to the frontend container instead of to the backend.</p>
<p>How can I fix this issue?</p>
<ul>
<li>Can I fix that with a smart Ingress rule?</li>
<li>Do I need to update the app root of the backend container?</li>
</ul>
<p>Here my Ingress resource</p>
<pre><code>apiVersion: networking.k8s.io/v1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
name: example
namespace: example
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
tls:
- hosts:
- www.example.ch
secretName: tls-example.ch
rules:
- host: www.example.ch
http:
paths:
- path: /backend(/|$)(.*)
pathType: Prefix
backend:
service:
name: example-backend-svc
port:
number: 8081
- path: /
pathType: Prefix
backend:
service:
name: example-frontend-svc
port:
number: 8080
</code></pre>
| <p>You can add another path if all the files are located under the <code>/bundles/*</code> path.
An example manifest is given below.</p>
<pre><code>apiVersion: networking.k8s.io/v1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
name: example
namespace: example
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
tls:
- hosts:
- www.example.ch
secretName: tls-example.ch
rules:
- host: www.example.ch
http:
paths:
- path: /backend(/|$)(.*)
pathType: Prefix
backend:
service:
name: example-backend-svc
port:
number: 8081
- path: /bundles
pathType: Prefix
backend:
service:
name: example-backend-svc
port:
number: 8081
- path: /
pathType: Prefix
backend:
service:
name: example-frontend-svc
port:
number: 8080
</code></pre>
|
<p>I can create ingress with basic auth. I followed the template from kubernetes/ingress-nginx:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-with-auth
annotations:
# type of authentication
nginx.ingress.kubernetes.io/auth-type: basic
# name of the secret that contains the user/password definitions
nginx.ingress.kubernetes.io/auth-secret: basic-auth
# message to display with an appropriate context why the authentication is required
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /
backend:
serviceName: http-svc
servicePort: 80
</code></pre>
<p>It works fine, but I need to allow 'OPTIONS' method without basic auth for pre-flight requests. Any pointers on how to do it will be very helpful.</p>
| <p>I just encountered the same problem. I solved it by using a configuration-snippet.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-cors-auth-ingress
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
# fix cors issues of ingress when using external auth service
if ($request_method = OPTIONS) {
add_header Content-Length 0;
add_header Content-Type text/plain;
return 204;
}
more_set_headers "Access-Control-Allow-Credentials: true";
more_set_headers "Access-Control-Allow-Methods: GET, POST, PUT, PATCH, DELETE, OPTIONS";
more_set_headers "Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization";
more_set_headers "Access-Control-Allow-Origin: $http_origin";
more_set_headers "Access-Control-Max-Age: 600";
nginx.ingress.kubernetes.io/auth-url: "http://auth-service.default.svc.cluster.local:80"
</code></pre>
|
<p>I am trying to set a TCP idleTimeout via an EnvoyFilter, so that outbound connections to the external domain <code>some.app.com</code> are terminated if they are idle for 5s:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: listener-timeout-tcp
namespace: istio-system
spec:
configPatches:
- applyTo: NETWORK_FILTER
match:
context: SIDECAR_OUTBOUND
listener:
filterChain:
sni: some.app.com
filter:
name: envoy.filters.network.tcp_proxy
patch:
operation: MERGE
value:
name: envoy.filters.network.tcp_proxy
typed_config:
'@type': type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy
idle_timeout: 5s
</code></pre>
<p>However, when I try to apply this filter I get the following error:</p>
<pre><code>Error from server: error when creating "filter.yaml": admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: envoy filter: missing filters
</code></pre>
<p>So, I realised that the EnvoyFilter configuration above is not supported by <code>istio 1.2.5</code>, so I modified the configuration to work with the old version:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: tcp-idle-timeout
spec:
workloadSelector:
labels:
app: mecha-dev
filters:
- listenerMatch:
listenerType: SIDECAR_OUTBOUND
listenerProtocol: TCP
filterName: envoy.tcp_proxy
filterType: NETWORK
filterConfig:
idle_timeout: 5s
</code></pre>
<p>After this modification the EnvoyFilter was created, but it does not seem to have any effect on outbound requests. Also, I couldn't find a way to restrict this filter to only outbound requests going to the external service <code>some.app.com</code>.</p>
<p>Is there something missing in my EnvoyFilter configuration? Also, can we restrict this filter to just <code>some.app.com</code>? There's <code>address</code> option under <code>listenerMatch</code> but what if the IP address of the external service keeps on changing?</p>
<p>Istio and EnvoyProxy version used:</p>
<pre><code>ISTIO_VERSION=1.2.5
ENVOY_VERSION=1.11.0-dev
</code></pre>
| <p>This is a community wiki answer. Feel free to expand it.</p>
<p>As already discussed in the comments, the <code>EnvoyFilter</code> was not yet supported in Istio version 1.2 and actually that version is no longer in support since Dec 2019.</p>
<p>I strongly recommend upgrading to the latest Istio and Envoy versions. Also, after you upgrade please notice that the filter name you want to use was <a href="https://www.envoyproxy.io/docs/envoy/latest/version_history/v1.14.0#deprecated" rel="nofollow noreferrer">deprecated and replaced</a>. You should now use <code>envoy.filters.network.tcp_proxy</code> instead of <code>envoy.tcp_proxy</code>.</p>
<p>Please remember that things are getting deprecated for a reason and keeping the old versions will sooner or later bring you more trouble. Try to keep things more up-to-date.</p>
<p>More details can be found in the <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">latest docs</a>.</p>
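<p>For illustration, on a current Istio/Envoy the original filter would look roughly like this (a sketch, untested against your mesh; note the new filter name and the v3 typed config):</p>

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: listener-timeout-tcp
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        filterChain:
          sni: some.app.com
          filter:
            name: envoy.filters.network.tcp_proxy
    patch:
      operation: MERGE
      value:
        name: envoy.filters.network.tcp_proxy
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          idle_timeout: 5s
```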
|
<p>I have mounted two tar files as secrets. I would like to mount them to my container and then unpack the contents. The commands that created the secrets are as follows:</p>
<pre><code>kubectl create secret generic orderer-genesis-block --from-file=./channel-artifacts/genesis.block
kubectl create secret generic crypto-config --from-file=crypto-config.tar
kubectl create secret generic channel-artifacts --from-file=channel-artifacts.tar
</code></pre>
<p>The following is what I <code>kubectl apply</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: fabric-orderer-01
spec:
selector:
matchLabels:
app: fabric-orderer-01
replicas: 1
template:
metadata:
labels:
app: fabric-orderer-01
spec:
initContainers:
- name: init-channel-artifacts
image: busybox
volumeMounts:
- name: channel-artifacts
mountPath: /hlf/channel-artifacts
command: ['sh', '-c', 'tar -xf /hlf/channel-artifacts/channel-artifacts.tar']
containers:
- name: fabric-orderer-01
image: hyperledger/fabric-orderer:1.4.9
env:
- name: ORDERER_CFG_PATH
value: /hlf/
- name: CONFIGTX_ORDERER_ADDRESSES
value: "orderer.example.com:7050"
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_LISTENPORT
value: "7050"
- name: ORDERER_GENERAL_LOGLEVEL
value: debug
- name: ORDERER_GENERAL_LOCALMSPID
value: OrdererMSP
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_GENESISFILE
value: /hlf/genesis.block
imagePullPolicy: Always
ports:
- containerPort: 8080
volumeMounts:
- name: fabricfiles-01
mountPath: /fabric
- name: orderer-genesis-block
mountPath: /hlf/
readOnly: true
- name: crypto-config
mountPath: /hlf/crypto-config
readOnly: true
- name: channel-artifacts
mountPath: /hlf/channel-artifacts
readOnly: true
volumes:
- name: orderer-genesis-block
secret:
secretName: orderer-genesis-block
- name: crypto-config
secret:
secretName: crypto-config
- name: channel-artifacts
secret:
secretName: channel-artifacts
- name: fabricfiles-01
persistentVolumeClaim:
claimName: fabric-pvc-01
</code></pre>
<p>My deployment succeeds, but when I <code>bash</code> into my pod, I don't see my tar files being extracted. I only see my tar files <code>/hlf/channel-artifacts/channel-artifacts.tar</code> and <code>/hlf/crypto-config/crypto-config.tar</code>. How should I go about extracting their contents?</p>
| <p>When you create an initContainer and execute this command:</p>
<p><code>command: ['sh', '-c', 'tar -xvf /hlf/channel-artifacts/channel-artifacts.tar']</code></p>
<p>it runs in the container's default working directory.
I checked this by adding the <code>pwd</code> and <code>ls -l</code> commands.</p>
<p>The whole line is:</p>
<p><code>command: ['sh', '-c', 'tar -xvf /hlf/channel-artifacts/channel-artifacts.tar ; pwd ; ls -l']</code></p>
<p>You can get logs from an initContainer with:</p>
<p><code>kubectl logs fabric-orderer-01-xxxxxx -c init-channel-artifacts</code></p>
<p>Output was:</p>
<pre><code>channel-artifacts.txt # first line for -v option so tar was untared indeed
/ # working directory
total 44
drwxr-xr-x 2 root root 12288 May 3 21:57 bin
-rw-rw-r-- 1 1001 1002 32 May 10 14:15 channel-artifacts.txt # file which was in tar
drwxr-xr-x 5 root root 360 May 11 08:41 dev
drwxr-xr-x 1 root root 4096 May 11 08:41 etc
drwxr-xr-x 4 root root 4096 May 11 08:41 hlf
drwxr-xr-x 2 nobody nobody 4096 May 3 21:57 home
dr-xr-xr-x 225 root root 0 May 11 08:41 proc
drwx------ 2 root root 4096 May 3 21:57 root
dr-xr-xr-x 13 root root 0 May 11 08:41 sys
drwxrwxrwt 2 root root 4096 May 3 21:57 tmp
drwxr-xr-x 3 root root 4096 May 3 21:57 usr
drwxr-xr-x 1 root root 4096 May 11 08:41 var
</code></pre>
<p>As you can see your file is stored in <code>/</code> path of the container which means when this container is terminated, its filesystem is terminated as well and your file is gone.</p>
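<p>You can reproduce this behaviour locally, outside Kubernetes (the paths below are arbitrary): <code>tar -xf</code> extracts into the current working directory unless <code>-C</code> points it somewhere else.</p>

```shell
# Build a small archive, then extract it two ways.
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/out
echo hello > /tmp/tar-demo/src/genesis.block
tar -cf /tmp/tar-demo/artifacts.tar -C /tmp/tar-demo/src genesis.block

cd /tmp/tar-demo/out
tar -xf /tmp/tar-demo/artifacts.tar                  # lands in the CWD, /tmp/tar-demo/out
tar -xf /tmp/tar-demo/artifacts.tar -C /tmp/tar-demo # lands in /tmp/tar-demo instead
ls /tmp/tar-demo/out/genesis.block /tmp/tar-demo/genesis.block
```

<p>This is why the initContainer needs an explicit <code>-C</code> (or a <code>workingDir</code>) pointing at a volume that outlives it.</p>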
<p>Once we know what happened, it's time to work around it.
First and most important: secrets are read-only and should be used in their prepared form; you can't write a file into a secret as you wanted to do in your example.</p>
<p>Instead, one option is to untar your secrets onto a persistent volume:</p>
<p><code>command: ['sh', '-c', 'tar -xvf /hlf/channel-artifacts/channel-artifacts.tar -C /hlf/fabric']</code></p>
<p>Then use a <code>postStart hook</code> for the main container, where you can e.g. copy your files to the desired locations or create symlinks; that way you won't need to mount your secrets into the main container.</p>
<p>Simple example of <code>postStart hook</code> (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">reference</a>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
</code></pre>
<p>One small caveat:</p>
<blockquote>
<p>Kubernetes sends the postStart event immediately after the Container
is created. There is no guarantee, however, that the postStart handler
is called before the Container's entrypoint is called.</p>
</blockquote>
<p>To work around it, you can add <code>sleep 5</code> before the entrypoint in your main container. Here's an example of the beginning of a container section with the nginx image (it'll be different for your image):</p>
<pre><code>containers:
- name: main-container
image: nginx
command: ["bash", "-c", 'sleep 5 ; echo "daemon off;" >> /etc/nginx/nginx.conf ; nginx']
</code></pre>
<p>This will fix your issue. You can also use this approach to untar your files, and then you won't even need an <code>initContainer</code>.</p>
<p>It's not clear why you want to use <code>tar</code> for this purpose, since you can store small files in <code>secrets</code> or <code>configmaps</code> and mount them directly where they are needed using <code>subPath</code>, without the extra steps (you can read about it and find an example <a href="https://dev.to/joshduffney/kubernetes-using-configmap-subpaths-to-mount-files-3a1i" rel="nofollow noreferrer">here</a>).</p>
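<p>A hypothetical sketch of that <code>subPath</code> approach, mounting a single key of a secret straight where the orderer expects it (names taken from the question; this is a fragment of a Pod spec, not a complete manifest):</p>

```yaml
# container spec fragment
volumeMounts:
- name: orderer-genesis-block
  mountPath: /hlf/genesis.block
  subPath: genesis.block        # the key inside the secret
  readOnly: true
# pod spec fragment
volumes:
- name: orderer-genesis-block
  secret:
    secretName: orderer-genesis-block
```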
<p>To use secrets securely, you should consider e.g. <code>HashiCorp Vault</code> (<a href="https://www.hashicorp.com/products/vault/kubernetes" rel="nofollow noreferrer">Vault with kubernetes</a>)</p>
|
<p>Does anyone know how to use SSL on Spring Boot application to connect with ElasticSearch which is deployed at Openshift in the form of https? I have a config.java in my Spring Boot application like the following:</p>
<pre><code>@Configuration
@EnableElasticsearchRepositories(basePackages = "com.siolbca.repository")
@ComponentScan(basePackages = "com.siolbca.services")
public class Config {
@Bean
public RestHighLevelClient client() {
ClientConfiguration clientConfiguration
= ClientConfiguration.builder()
.connectedTo("elasticsearch-siol-es-http.siolbca-dev.svc.cluster.local")
.usingSsl()
.withBasicAuth("elastic","G0D1g6TurJ79pcxr1065pU0U")
.build();
return RestClients.create(clientConfiguration).rest();
}
@Bean
public ElasticsearchOperations elasticsearchTemplate() {
return new ElasticsearchRestTemplate(client());
}
}
</code></pre>
<p>However, when I call Elasticsearch through it with Postman, an error like this appears:</p>
<pre><code>javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
</code></pre>
<p>I've seen some tutorials on the internet saying it's a certificate issue, but I can't figure out how to handle it in my code because I'm a beginner with Java and Spring Boot.
<a href="https://stackoverflow.com/questions/47334476/using-elasticsearch-java-rest-api-with-self-signed-certificates">using-elasticsearch-java-rest-api-with-self-signed-certificates</a>
<a href="https://stackoverflow.com/questions/56598798/how-to-connect-spring-boot-2-1-with-elasticsearch-6-6-with-cluster-node-https">how-to-connect-spring-boot-2-1-with-elasticsearch-6-6-with-cluster-node-https</a></p>
<p>And here’s my configuration for elasticsearch.yml:</p>
<pre><code>cluster:
name: elasticsearch-siol
routing:
allocation:
awareness:
attributes: k8s_node_name
discovery:
seed_providers: file
http:
publish_host: ${POD_NAME}.${HEADLESS_SERVICE_NAME}.${NAMESPACE}.svc
network:
host: "0"
publish_host: ${POD_IP}
node:
attr:
attr_name: attr_value
k8s_node_name: ${NODE_NAME}
name: ${POD_NAME}
roles:
- master
- data
store:
allow_mmap: false
path:
data: /usr/share/elasticsearch/data
logs: /usr/share/elasticsearch/logs
xpack:
license:
upload:
types:
- trial
- enterprise
security:
authc:
realms:
file:
file1:
order: -100
native:
native1:
order: -99
reserved_realm:
enabled: "false"
enabled: "true"
http:
ssl:
certificate: /usr/share/elasticsearch/config/http-certs/tls.crt
certificate_authorities: /usr/share/elasticsearch/config/http-certs/ca.crt
enabled: true
key: /usr/share/elasticsearch/config/http-certs/tls.key
transport:
ssl:
certificate: /usr/share/elasticsearch/config/node-transport-cert/transport.tls.crt
certificate_authorities:
- /usr/share/elasticsearch/config/transport-certs/ca.crt
- /usr/share/elasticsearch/config/transport-remote-certs/ca.crt
enabled: "true"
key: /usr/share/elasticsearch/config/node-transport-cert/transport.tls.key
verification_mode: certificate
</code></pre>
<p>Does anyone know how to use the provided certificate in my Spring Boot application? Thank you.</p>
| <p>I solved my problem by ignoring SSL certificate verification when connecting to Elasticsearch from my backend (Spring Boot). I followed the instructions from the site below:</p>
<p><a href="https://stackoverflow.com/questions/62270799/ignore-ssl-certificate-verfication-while-connecting-to-elasticsearch-from-spring">Ignore SSL Certificate Verification</a></p>
<p>I also modified the code by adding basic authentication as follows:</p>
<pre><code>@Configuration
@EnableElasticsearchRepositories(basePackages = "com.siolbca.repository")
@ComponentScan(basePackages = "com.siolbca.services")
public class Config {
@Bean
public RestHighLevelClient createSimpleElasticClient() throws Exception {
try {
final CredentialsProvider credentialsProvider =
new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
new UsernamePasswordCredentials("elastic","G0D1g6TurJ79pcxr1065pU0U"));
SSLContextBuilder sslBuilder = SSLContexts.custom()
.loadTrustMaterial(null, (x509Certificates, s) -> true);
final SSLContext sslContext = sslBuilder.build();
RestHighLevelClient client = new RestHighLevelClient(RestClient
.builder(new HttpHost("elasticsearch-siol-es-http.siolbca-dev.svc.cluster.local", 9200, "https"))
        // connecting on port 9200 (Elasticsearch's default HTTP port) with the https scheme
.setHttpClientConfigCallback(new HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder
.setSSLContext(sslContext)
.setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE)
.setDefaultCredentialsProvider(credentialsProvider);
}
})
.setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() {
@Override
public RequestConfig.Builder customizeRequestConfig(
RequestConfig.Builder requestConfigBuilder) {
return requestConfigBuilder.setConnectTimeout(5000)
.setSocketTimeout(120000);
}
}));
System.out.println("elasticsearch client created");
return client;
} catch (Exception e) {
System.out.println(e);
throw new Exception("Could not create an elasticsearch client!!");
}
}
}
</code></pre>
|
<h3>What happened:</h3>
<p>When I performed a stress test on nginx, the nginx deployment scaled up, but the newly created nginx pods received no load. If I stop the stress test for two minutes, all pods start working normally. As shown in the picture below:
<a href="https://i.stack.imgur.com/bWVsg.png" rel="nofollow noreferrer">image</a></p>
<h3>What you expected to happen:</h3>
<p>Once the pod is created and running through hpa, it can participate in load balancing normally.</p>
<h3>How to reproduce it (as minimally and precisely as possible):</h3>
<p>Create bitnami/nginx use helm:</p>
<pre><code># helm get values nginx -ntest-p1
USER-SUPPLIED VALUES:
autoscaling:
enabled: false
maxReplicas: 40
minReplicas: 1
targetCPU: 30
targetMemory: 30
resources:
limits:
cpu: 200m
memory: 128Mi
requests:
cpu: 200m
memory: 128Mi
</code></pre>
<h3>Anything else we need to know?:</h3>
<p>My test tool: http_load.
If I start a stress test during scale-down, it reports a "no route to host" error.</p>
<h3>Environment:</h3>
<p>Kubernetes version (use kubectl version): v1.18.8-aliyun.1</p>
<p>Cloud provider or hardware configuration: aliyun</p>
<pre><code>OS (e.g: cat /etc/os-release): Alibaba Cloud Linux (Aliyun Linux) 2.1903 LTS (Hunting Beagle)
Kernel (e.g. uname -a): Linux 4.19.91-23.al7.x86_64 #1 SMP Tue Mar 23 18:02:34 CST 2021 x86_64 x86_64 x86_64 GNU/Linux
Network plugin and version (if this is a network-related bug): flannel:v0.11.0.2-g6e46593e-aliyun
</code></pre>
<p>Others:</p>
<p>kube-proxy mode is ipvs, and other config is default.</p>
<p>same issue on github: <a href="https://github.com/kubernetes/kubernetes/issues/101887" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/101887</a></p>
<p>I am not familiar with http_load and its documentation is quite sparse.
From your observation I assume that http_load uses HTTP keep-alive and therefore reuses TCP connections. Kubernetes load-balances at the TCP connection level, so only new connections will reach the added replicas.</p>
<p>You can either configure nginx not to offer keepalive, which reduces efficiency for regular use cases, or start additional http_load instances once the scale-up has occurred to observe the effect.</p>
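<p>To illustrate the first option, here is a minimal sketch of an nginx <code>http</code> block with keepalive disabled. The directive is standard nginx; where exactly you place it depends on how the bitnami chart renders its configuration:</p>

```nginx
http {
    # A keepalive_timeout of 0 disables persistent connections, so every
    # request opens a new TCP connection that kube-proxy can spread
    # across all current endpoints, including freshly scaled-up pods.
    keepalive_timeout 0;
}
```

<p>Expect lower throughput per client with this setting, since the TCP handshake cost is paid on every request.</p>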
|
<p>Running <code>kubectl get all</code> returns Throttling request errors</p>
<p>How can I debug and fix this issue?</p>
<pre><code>I0223 10:28:04.717522 44883 request.go:655] Throttling request took 1.1688991s, request: GET:https://192.168.64.2:8443/apis/apps/v1?timeout=32s
I0223 10:28:14.913541 44883 request.go:655] Throttling request took 5.79656704s, request: GET:https://192.168.64.2:8443/apis/authorization.k8s.io/v1?timeout=32s
I0223 10:28:24.914386 44883 request.go:655] Throttling request took 7.394979677s, request: GET:https://192.168.64.2:8443/apis/cert-manager.io/v1alpha2?timeout=32s
I0223 10:28:35.513643 44883 request.go:655] Throttling request took 1.196992376s, request: GET:https://192.168.64.2:8443/api/v1?timeout=32s
I0223 10:28:45.516586 44883 request.go:655] Throttling request took 2.79962307s, request: GET:https://192.168.64.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0223 10:28:55.716699 44883 request.go:655] Throttling request took 4.600430975s, request: GET:https://192.168.64.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
I0223 10:29:05.717707 44883 request.go:655] Throttling request took 6.196503125s, request: GET:https://192.168.64.2:8443/apis/storage.k8s.io/v1?timeout=32s
I0223 10:29:15.914744 44883 request.go:655] Throttling request took 7.99827047s, request: GET:https://192.168.64.2:8443/apis/acme.cert-manager.io/v1alpha2?timeout=32s
</code></pre>
<p>To diagnose kubectl commands, you can select a verbosity level when running the command.
If you run <code>kubectl get all -v=9</code> you'll get a large amount of debug output.</p>
<p>In that output you may find that the permissions on the cache directory inside <code>.kube</code> are invalid, e.g.:</p>
<pre><code>I0511 09:28:13.431116 260204 cached_discovery.go:87] failed to write cache to /home/$USER/.kube/cache/discovery/CLUSTER_NAME/security.istio.io/v1beta1/serverresources.json due to mkdir /home/$USER/.kube/cache/discovery: permission denied
</code></pre>
<p>To resolve this I simply set the permissions to allow the cache data to be written.</p>
<pre><code>chmod 755 -R ~/.kube/cache
</code></pre>
<p>This resolved the problem for me - hope it helps others.</p>
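<p>The failure mode and the fix can be reproduced on a scratch directory standing in for <code>~/.kube/cache</code> (the path and file name here are only illustrative):</p>

```shell
# Simulate a kubectl discovery cache whose permissions block writes,
# then apply the chmod fix from above and confirm writes succeed again.
cache="$(mktemp -d)/cache"
mkdir -p "$cache/discovery"
chmod 000 "$cache/discovery"   # broken state: discovery cache unwritable
chmod -R 755 "$cache"          # the fix: restore read/write permissions
touch "$cache/discovery/serverresources.json" && echo "cache writable"
```

<p>A root-owned <code>~/.kube/cache</code> (e.g. from a <code>sudo kubectl</code> run) behaves the same way; fixing ownership with <code>chown</code> is then the equivalent remedy.</p>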
|
<p>I am using an external TCP/UDP network load balancer (Fortigate), Kubernetes 1.20.6 and Istio 1.9.4.
I have set <code>externalTrafficPolicy: Local</code> and need to run an ingress gateway on every node (as described <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#source-ip-address-of-the-original-client" rel="nofollow noreferrer">here</a> in the network load balancer tab). How do I do that?</p>
<p>This is my ingress gateway service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: istio-ingressgateway
namespace: istio-system
uid: d1a86f50-ad14-415f-9c1e-d186fd72cb31
resourceVersion: '1063961'
creationTimestamp: '2021-04-28T19:25:37Z'
labels:
app: istio-ingressgateway
install.operator.istio.io/owning-resource: unknown
install.operator.istio.io/owning-resource-namespace: istio-system
istio: ingressgateway
istio.io/rev: default
operator.istio.io/component: IngressGateways
operator.istio.io/managed: Reconcile
operator.istio.io/version: 1.9.4
release: istio
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"istio-ingressgateway","install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio":"ingressgateway","istio.io/rev":"default","operator.istio.io/component":"IngressGateways","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.9.4","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"status-port","port":15021,"protocol":"TCP","targetPort":15021},{"name":"http2","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443},{"name":"tcp-istiod","port":15012,"protocol":"TCP","targetPort":15012},{"name":"tls","port":15443,"protocol":"TCP","targetPort":15443}],"selector":{"app":"istio-ingressgateway","istio":"ingressgateway"},"type":"LoadBalancer"}}
managedFields:
- manager: istio-operator
      operation: Apply
apiVersion: v1
time: '2021-05-04T18:02:38Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:kubectl.kubernetes.io/last-applied-configuration': {}
'f:labels':
'f:app': {}
'f:install.operator.istio.io/owning-resource': {}
'f:install.operator.istio.io/owning-resource-namespace': {}
'f:istio': {}
'f:istio.io/rev': {}
'f:operator.istio.io/component': {}
'f:operator.istio.io/managed': {}
'f:operator.istio.io/version': {}
'f:release': {}
'f:spec':
'f:ports':
'k:{"port":80,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":443,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":15012,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":15021,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":15443,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'f:selector':
'f:app': {}
'f:istio': {}
'f:type': {}
- manager: kubectl-patch
operation: Update
apiVersion: v1
time: '2021-05-04T18:01:23Z'
fieldsType: FieldsV1
fieldsV1:
'f:spec':
'f:externalIPs': {}
'f:externalTrafficPolicy': {}
'f:type': {}
selfLink: /api/v1/namespaces/istio-system/services/istio-ingressgateway
spec:
ports:
- name: status-port
protocol: TCP
port: 15021
targetPort: 15021
nodePort: 30036
- name: http2
protocol: TCP
port: 80
targetPort: 8080
nodePort: 32415
- name: https
protocol: TCP
port: 443
targetPort: 8443
nodePort: 32418
- name: tcp-istiod
protocol: TCP
port: 15012
targetPort: 15012
nodePort: 31529
- name: tls
protocol: TCP
port: 15443
targetPort: 15443
nodePort: 30478
selector:
app: istio-ingressgateway
istio: ingressgateway
clusterIP: 10.103.72.212
clusterIPs:
- 10.103.72.212
type: LoadBalancer
externalIPs:
- 10.43.34.38
- 10.43.34.77
sessionAffinity: None
externalTrafficPolicy: Local
healthCheckNodePort: 30788
status:
loadBalancer: {}
</code></pre>
<p>The firewall has these two addresses, 10.43.34.38 and 10.43.34.77, and relays requests to two K8S nodes on ports 32415 (http) and 32418 (https).</p>
<p>As brgsousa mentioned in the comment, the solution was to redeploy the ingress gateway as a DaemonSet.</p>
<p>Here is working yaml file:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
k8s:
overlays:
- apiVersion: apps/v1
kind: Deployment
name: istio-ingressgateway
patches:
- path: kind
value: DaemonSet
- path: spec.strategy
- path: spec.updateStrategy
value:
rollingUpdate:
maxUnavailable: 50%
type: RollingUpdate
egressGateways:
- name: istio-egressgateway
enabled: true
</code></pre>
|
<p>AWS seems to be hiding my NVMe SSD when an r6gd instance is deployed in Kubernetes, created via the config below.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code># eksctl create cluster -f spot04test00.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: tidb-arm-dev #replace with your cluster name
region: ap-southeast-1 #replace with your preferred AWS region
nodeGroups:
- name: tiflash-1a
desiredCapacity: 1
availabilityZones: ["ap-southeast-1a"]
instancesDistribution:
instanceTypes: ["r6gd.medium"]
privateNetworking: true
labels:
dedicated: tiflash</code></pre>
</div>
</div>
</p>
<p>The running instance has an 80 GiB EBS gp3 block and ZERO NVMe SSD storage as shown in Figure 1.</p>
<p><a href="https://i.stack.imgur.com/x3r94.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x3r94.png" alt="Figure 1.The 59 GiB NVMe SSD for r6gd instance is swapped out for a 80 GiB gp3 EBS block. What happended to my NVMe SSD?" /></a></p>
<p>Why did Amazon swap out the 59 GiB NVMe for an 80 GiB EBS gp3 volume?</p>
<p>Where has my NVMe disk gone?</p>
<ol start="2">
<li><p>Even if I pre-allocate ephemeral-storage using non-managed nodeGroups, it still showed an 80 GiB EBS storage (Figure 1).</p>
</li>
<li><p>If I use the AWS Web UI to start a new r6gd instance, it clearly shows the attached NVMe SSD (Figure 2)</p>
</li>
</ol>
<p><a href="https://i.stack.imgur.com/Hyacv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hyacv.png" alt="Figure 2. 59 GiB NVMe for r6gd instance created via AWS Web Console. " /></a></p>
<p>After further experimentation, it was found that the 80 GiB EBS volume is attached to r6gd.medium, r6g.medium, r6gd.large and r6g.large instances as an 'ephemeral' resource, regardless of instance size.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>kubectl describe nodes:
Capacity:
attachable-volumes-aws-ebs: 39
cpu: 2
ephemeral-storage: 83864556Ki
hugepages-2Mi: 0
memory: 16307140Ki
pods: 29
Allocatable:
attachable-volumes-aws-ebs: 39
cpu: 2
ephemeral-storage: 77289574682
hugepages-2Mi: 0
memory: 16204740Ki
pods: 29
Capacity:
attachable-volumes-aws-ebs: 39
cpu: 2
ephemeral-storage: 83864556Ki
hugepages-2Mi: 0
memory: 16307140Ki
pods: 29
Allocatable:
attachable-volumes-aws-ebs: 39
cpu: 2
ephemeral-storage: 77289574682
hugepages-2Mi: 0
memory: 16204740Ki
pods: 29</code></pre>
</div>
</div>
</p>
<p>Awaiting enlightenment from folks who have successfully utilized NVMe SSD in Kubernetes.</p>
| <p>Solved my issue, here are my learnings:</p>
<ol>
<li><p>The NVMe will not show up in the instance by default (either in the AWS web console or in the VM's terminal), but it is accessible as /dev/nvme1. You need to format and mount it. For a single VM that is straightforward, but for k8s you need to deliberately format it before you can use it.</p>
</li>
<li><p>the 80GB can be overridden with settings on the kubernetes config file</p>
</li>
<li><p>To utilize the VM-attached NVMe in k8s, you need to run these 2 additional Kubernetes services while setting up the k8s nodes. Remember to modify the yaml files of the 2 services to use ARM64 images if you are using ARM64 VMs:</p>
<p>a. <a href="https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner" rel="noreferrer">sig-storage-local-static-provisioner</a></p>
<ul>
<li>ARM64 image: jasonxh/local-volume-provisioner:latest</li>
</ul>
<p>b. <a href="https://github.com/brunsgaard/eks-nvme-ssd-provisioner" rel="noreferrer">eks-nvme-ssd-provisioner</a></p>
<ul>
<li>ARM64 image: zhangguiyu/eks-nvme-ssd-provisioner</li>
</ul>
</li>
<li><p>The NVMe will <em>never</em> show up as part of the ephemeral storage of your k8s clusters. That ephemeral storage describes the EBS volume you have attached to each VM. I have since restricted mine to 20GB EBS.</p>
</li>
<li><p>The PVs will show up when you type kubectl get pv:</p>
</li>
<li><p>Copies of TiDB node config files below for reference:</p>
</li>
</ol>
<ul>
<li><p>kubectl get pv</p>
<pre><code> guiyu@mi:~/dst/bin$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-1a3321d4 107Gi RWO Retain Bound tidb-cluster-dev/tikv-tidb-arm-dev-tikv-2 local-storage 9d
local-pv-82e9e739 107Gi RWO Retain Bound tidb-cluster-dev/pd-tidb-arm-dev-pd-1 local-storage 9d
local-pv-b9556b9b 107Gi RWO Retain Bound tidb-cluster-dev/data0-tidb-arm-dev-tiflash-2 local-storage 6d8h
local-pv-ce6f61f2 107Gi RWO Retain Bound tidb-cluster-dev/pd-tidb-arm-dev-pd-2 local-storage 9d
local-pv-da670e42 107Gi RWO Retain Bound tidb-cluster-dev/tikv-tidb-arm-dev-tikv-3 local-storage 6d8h
local-pv-f09b19f4 107Gi RWO Retain Bound tidb-cluster-dev/pd-tidb-arm-dev-pd-0 local-storage 9d
local-pv-f337849f 107Gi RWO Retain Bound tidb-cluster-dev/data0-tidb-arm-dev-tiflash-0 local-storage 9d
local-pv-ff2f11c6 107Gi RWO Retain Bound tidb-cluster-dev/tikv-tidb-arm-dev-tikv-0 local-storage 9d
</code></pre>
</li>
<li><p>pods.yaml</p>
<pre><code>tiflash:
baseImage: pingcap/tiflash-arm64
maxFailoverCount: 3
replicas: 2
nodeSelector:
dedicated: tiflash
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: tiflash
storageClaims:
- resources:
requests:
storage: "100Gi"
storageClassName: local-storage
</code></pre>
</li>
<li><p>eks-setup.yaml</p>
<pre><code>- name: tiflash-1a
desiredCapacity: 1
instanceTypes: ["r6gd.large"]
privateNetworking: true
availabilityZones: ["ap-southeast-1a"]
spot: false
volumeSize: 20 # GiB EBS gp3 3000 IOPS
volumeType: gp3
ssh:
allow: true
publicKeyPath: '~/dst/etc/data-platform-dev.pub'
labels:
dedicated: tiflash
</code></pre>
</li>
</ul>
|
<p>I'm just getting started with kubebuilder and Golang to extend our Kubernetes-cluster with a custom resource. I would love to do different things in the reconciler-function based on the event, that actually called it.</p>
<p>Was the resource created? Was it updated? Was it deleted?</p>
<p>Each of those events triggers the controller, however, I can't seem to find a possibility to see, which of those events actually happened. I can work around this issue by writing a reconciler like this:</p>
<pre class="lang-golang prettyprint-override"><code>func (r *ServiceDescriptorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    service := &batchv1.ServiceDescriptor{}
    if err := r.Get(ctx, req.NamespacedName, service); err != nil && errors.IsNotFound(err) {
        fmt.Println("Resource was not found -> must have been deleted")
    } else {
        fmt.Println("No errors found -> Resource must have been created or updated")
    }
    return ctrl.Result{}, nil
}
</code></pre>
<p>However, this feels oddly implicit and kinda hacky.</p>
<p>Is there a clean (possibly native) way of getting the event-type of the reconciler-call?</p>
<p>You won't be able to do that, because the system was designed as level-based: reconciliation is not triggered by individual event changes but rather by the actual cluster state that is fetched from the apiserver.</p>
<p>Looking at <code>reconcile.go</code> you will notice that line <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/b704f447ea7c8f7059c6665143a4aa1f6da28328/pkg/reconcile/reconcile.go#L84-L87" rel="noreferrer">#84</a> carries this comment about it:</p>
<blockquote>
<p>Reconciliation is level-based, meaning action <strong>isn't driven off changes
in individual Events</strong>, but instead is driven by actual cluster state
read from the apiserver or a local cache. For example if responding to
a Pod Delete Event, the Request won't contain that a Pod was
deleted,instead the reconcile function observes this when reading the
cluster state and seeing the Pod as missing.</p>
</blockquote>
<p>And in line <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/reconcile/reconcile.go#L44-L46" rel="noreferrer">#44</a>:</p>
<blockquote>
<p>Request contains the information necessary to reconcile a
Kubernetes object. This includes the information to uniquely
identify the object - its Name and Namespace. <strong>It does NOT contain
information about any specific Event or the object contents itself</strong>.</p>
</blockquote>
|
<p>We are looking to use OPA gatekeeper to audit K8s PodDisruptionBudget (PDB) objects. In particular, we are looking to audit the number of <code>disruptionsAllowed</code> within the <code>status</code> field.</p>
<p>I believe this field will not be available at point of admission since it is calculated and added by the apiserver once the PDB has been applied to the cluster.</p>
<p>It appears that for e.g Pods, the <code>status</code> field is passed as part of the <code>AdmissionReview</code> object [1]. In that particular example it appears that only the pre-admission status fields make it into the <code>AdmissionReview</code> object.</p>
<p>1.) Is it possible to audit on the current in-cluster status fields in the case of PDBs?</p>
<p>2.) Given the intended use of OPA Gatekeeper as an admission controller, would this be considered an anti-pattern?</p>
<p>[1] <a href="https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/" rel="nofollow noreferrer">https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/</a></p>
| <p>This is actually quite reasonable, and is one of the use cases of <a href="https://open-policy-agent.github.io/gatekeeper/website/docs/audit" rel="nofollow noreferrer">Audit</a>. You just need to make sure audit is enabled and <code>spec.enforcementAction: dryrun</code> is set in the Constraint.</p>
<p>Here is an example of what the ConstratintTemplate's Rego would look like. <a href="https://play.openpolicyagent.org/p/nioYvNyN4N" rel="nofollow noreferrer">OPA Playground</a>.</p>
<pre><code>deny[msg] {
value := input.request.object.status.disruptionsAllowed
value > maxDisruptionsAllowed
msg := sprintf("status.disruptionsAllowed must be <%v> or fewer; found <%v>", [maxDisruptionsAllowed, value])
}
</code></pre>
<p>In the specific Constraint, make sure to set <code>enforcementAction</code> to <code>dryrun</code> so the Constraint does not prevent k8s from updating the status field. For example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedPodDisruptions
metadata:
name: max-disruptions
spec:
enforcementAction: dryrun
match:
kinds:
    - apiGroups: ["policy"]
kinds: ["PodDisruptionBudget"]
namespaces:
- "default"
parameters:
    maxDisruptionsAllowed: 10
</code></pre>
<p>If you forget to set <code>enforcementAction</code>, k8s will be unable to update the status field of the PodDisruptionBudget.</p>
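<p>For completeness, the Rego above would ship inside a ConstraintTemplate that defines the <code>K8sAllowedPodDisruptions</code> kind. A sketch of that wrapper follows; it assumes the parameter is supplied as a single integer, and note that the Gatekeeper template format uses <code>violation</code> rules with <code>input.review.object</code> and <code>input.parameters</code> rather than the plain <code>deny</code>/<code>input.request</code> form shown in the playground:</p>

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sallowedpoddisruptions
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedPodDisruptions
      validation:
        openAPIV3Schema:
          type: object
          properties:
            maxDisruptionsAllowed:
              type: integer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedpoddisruptions

        violation[{"msg": msg}] {
          value := input.review.object.status.disruptionsAllowed
          value > input.parameters.maxDisruptionsAllowed
          msg := sprintf("status.disruptionsAllowed must be <%v> or fewer; found <%v>",
                         [input.parameters.maxDisruptionsAllowed, value])
        }
```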
|
<p>I would like <code>kubectl config get-contexts</code> to show all, or any arbitrary subset, of the columns shown in default output.</p>
<p>Currently, <code>kubectl config get-contexts</code> shows <code>CURRENT NAME CLUSTER AUTHINFO</code> and <code>NAMESPACE</code>. On my terminal, that's a total of 221 columns, with <code>NAME</code>, <code>CLUSTER</code>, and <code>AUTHINFO</code> being identical for all contexts.</p>
<p><code>kubectl config get-contexts</code> documentation shows only one output option: <code>-o=name</code>. Attempts to override this with <code>-o=custom-columns="CURRENT:.metadata.current,NAME:.metadata.name"</code> (for example) result in an error.</p>
<p>Am I doing something wrong or is the <code>custom-columns</code> option that is common to <code>kubectl get</code> a missing feature?</p>
<p><strong>Update:</strong> maintainers decided that there was no clean way of implementing output options; see <a href="https://github.com/kubernetes/kubectl/issues/1052" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/issues/1052</a></p>
| <p>As indicated by the error message:</p>
<pre><code>error: output must be one of '' or 'name'
</code></pre>
<p>and described in <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong-" rel="nofollow noreferrer">the docs</a>:</p>
<pre><code>-o, --output   Output format. One of: name
</code></pre>
<p>the only supported output format for <code>kubectl config get-contexts</code> is <code>name</code>; the <a href="https://kubernetes.io/docs/reference/kubectl/overview/#custom-columns" rel="nofollow noreferrer">custom-columns</a> option is not available for this command.</p>
<p>The other option that you have left is to list the current context with:</p>
<pre><code>kubectl config current-context
</code></pre>
|
<p>The following error is being prompted when it is tried to add a new cluster in 'CMAK' in the K8s cluster.</p>
<pre><code>Yikes! KeeperErrorCode = Unimplemented for /kafka-manager/mutex Try again.
</code></pre>
<p>My cluster configurations are as follows,</p>
<pre><code>zookeeper: wurstmeister/zookeeper
kafka-manager: kafkamanager/kafka-manager:3.0.0.4
kafka: wurstmeister/kafka:2.12-2.4.1
</code></pre>
<p>I resolved it by following these steps.</p>
<ol>
<li><p>Connect to the 'zookeeper' container in k8s</p>
<p>k exec -it podid -- bash</p>
</li>
<li><p>Connect with zookeeper cli,</p>
<p>./bin/zkCli.sh</p>
</li>
<li><p>Make sure that the 'kafka-manager' path has already been created. If it does not exist, then try to create a cluster in 'kafka-manager' first.</p>
<p>ls /kafka-manager</p>
</li>
<li><p>Run the following commands to create the required paths,</p>
<p>create /kafka-manager/mutex ""</p>
<p>create /kafka-manager/mutex/locks ""</p>
<p>create /kafka-manager/mutex/leases ""</p>
</li>
<li><p>Now try to create the cluster again.</p>
</li>
</ol>
<p>The output would be like this,</p>
<pre><code>WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /kafka-manager
[configs, deleteClusters, clusters]
[zk: localhost:2181(CONNECTED) 1] create /kafka-manager/mutex ""
Created /kafka-manager/mutex
[zk: localhost:2181(CONNECTED) 2] create /kafka-manager/mutex/locks ""
Created /kafka-manager/mutex/locks
[zk: localhost:2181(CONNECTED) 3] create /kafka-manager/mutex/leases ""
Created /kafka-manager/mutex/leases
[zk: localhost:2181(CONNECTED) 4]
</code></pre>
<p>The original answer is mentioned here,
<a href="https://github.com/yahoo/CMAK/issues/731#issuecomment-643880544" rel="noreferrer">https://github.com/yahoo/CMAK/issues/731#issuecomment-643880544</a></p>
|
<p>I am unable to scale my AKS cluster vertically.
Currently I have 3 nodes in my cluster with 2 cores and 8 GB RAM each, and I am trying to upgrade them to 16 cores and 64 GB RAM. How do I do that?
I tried scaling the VM scale set; the Azure portal shows it as scaled, but when I do "kubectl get nodes -o wide" it still shows the old size.</p>
<p>Any leads will be helpful.
Thanks,
Abhishek</p>
| <p>Vertical scaling or changing the node pool VM size is not supported. You need to create a new node pool and schedule your pods on the new nodes.</p>
<p><a href="https://github.com/Azure/AKS/issues/1556#issuecomment-615390245" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/1556#issuecomment-615390245</a></p>
<blockquote>
<p>this UX issues is due to how the VMSS is managed by AKS. Since AKS is
a managed service, we don't support operations done outside of the AKS
API to the infrastructure resources. In this example you are using the
VMSS portal to resize, which uses VMSS APIs to resize the resource and
as a result has unexpected changes.</p>
<p>AKS nodepools don't support resize in place, so the supported way to
do this is to create a new nodepool with a new target and delete the
previous one. This needs to be done through the AKS portal UX. This
maintains the goal state of the AKS node pool, as at the moment the
portal is showing the VMSize AKS knows you have because that is what
was originally requested.</p>
</blockquote>
|
<p>I am setting up a cluster of Artemis in Kubernetes with 3 group of master/slave:</p>
<pre><code>activemq-artemis-master-0 1/1 Running
activemq-artemis-master-1 1/1 Running
activemq-artemis-master-2 1/1 Running
activemq-artemis-slave-0 0/1 Running
activemq-artemis-slave-1 0/1 Running
activemq-artemis-slave-2 0/1 Running
</code></pre>
<p>The Artemis version is 2.17.0. Here is my cluster config in master-0 <code>broker.xml</code>. The configs are the same for other brokers except the <code>connector-ref</code> is changed to match the broker:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
<name>activemq-artemis-master-0</name>
<persistence-enabled>true</persistence-enabled>
<!-- this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<!--
This value was determined through a calculation.
Your system could perform 1.1 writes per millisecond
on the current journal configuration.
That translates as a sync write every 911999 nanoseconds.
Note: If you specify 0 the system will perform writes directly to the disk.
We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
-->
<journal-buffer-timeout>100000</journal-buffer-timeout>
<!--
When using ASYNCIO, this will determine the writing queue depth for libaio.
-->
<journal-max-io>4096</journal-max-io>
<!--
You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
<network-check-NIC>theNicName</network-check-NIC>
-->
<!--
Use this to use an HTTP server to validate the network
<network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
<!-- <network-check-period>10000</network-check-period> -->
<!-- <network-check-timeout>1000</network-check-timeout> -->
<!-- this is a comma separated list, no spaces, just DNS or IPs
it should accept IPV6
Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
Using IPs that could eventually disappear or be partially visible may defeat the purpose.
You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
<!-- <network-check-list>10.0.0.1</network-check-list> -->
<!-- use this to customize the ping used for ipv4 addresses -->
<!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
<!-- use this to customize the ping used for ipv6 addresses -->
<!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols
that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>2244000</page-sync-timeout>
<!-- the system will enter into page mode once you hit this limit.
This is an estimate in bytes of how much the messages are using in memory
The system will use half of the available memory (-Xmx) by default for the global-max-size.
You may specify a different value here if you need to customize it to your needs.
<global-max-size>100Mb</global-max-size>
-->
<acceptors>
<!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
<!-- amqpCredits: The number of credits sent to AMQP producers -->
<!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
<!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
as duplicate detection requires applicationProperties to be parsed on the server. -->
<!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
default: 102400, -1 would mean to disable large message control -->
<!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic.-->
<acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
<!-- STOMP Acceptor. -->
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
<acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<!-- MQTT Acceptor -->
<acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ"/>
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue"/>
</anycast>
</address>
</addresses>
<!-- Uncomment the following if you want to use the standard LoggingActiveMQServerPlugin plugin to log events
<broker-plugins>
<broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
<property key="LOG_ALL_EVENTS" value="true"/>
<property key="LOG_CONNECTION_EVENTS" value="true"/>
<property key="LOG_SESSION_EVENTS" value="true"/>
<property key="LOG_CONSUMER_EVENTS" value="true"/>
<property key="LOG_DELIVERING_EVENTS" value="true"/>
<property key="LOG_SENDING_EVENTS" value="true"/>
<property key="LOG_INTERNAL_EVENTS" value="true"/>
</broker-plugin>
</broker-plugins>
-->
<cluster-user>clusterUser</cluster-user>
<cluster-password>aShortclusterPassword</cluster-password>
<connectors>
<connector name="activemq-artemis-master-0">tcp://activemq-artemis-master-0.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-0">tcp://activemq-artemis-slave-0.activemq-artemis-slave.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-master-1">tcp://activemq-artemis-master-1.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-1">tcp://activemq-artemis-slave-1.activemq-artemis-slave.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-master-2">tcp://activemq-artemis-master-2.activemq-artemis-master.svc.cluster.local:61616</connector>
<connector name="activemq-artemis-slave-2">tcp://activemq-artemis-slave-2.activemq-artemis-slave.svc.cluster.local:61616</connector>
</connectors>
<cluster-connections>
<cluster-connection name="activemq-artemis">
<connector-ref>activemq-artemis-master-0</connector-ref>
<retry-interval>500</retry-interval>
<retry-interval-multiplier>1.1</retry-interval-multiplier>
<max-retry-interval>5000</max-retry-interval>
<initial-connect-attempts>-1</initial-connect-attempts>
<reconnect-attempts>-1</reconnect-attempts>
<message-load-balancing>ON_DEMAND</message-load-balancing>
<max-hops>1</max-hops>
<!-- scale-down>true</scale-down -->
<static-connectors>
<connector-ref>activemq-artemis-master-0</connector-ref>
<connector-ref>activemq-artemis-slave-0</connector-ref>
<connector-ref>activemq-artemis-master-1</connector-ref>
<connector-ref>activemq-artemis-slave-1</connector-ref>
<connector-ref>activemq-artemis-master-2</connector-ref>
<connector-ref>activemq-artemis-slave-2</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<replication>
<master>
<group-name>activemq-artemis-0</group-name>
<quorum-vote-wait>12</quorum-vote-wait>
<vote-on-replication-failure>true</vote-on-replication-failure>
<!--we need this for auto failback-->
<check-for-live-server>true</check-for-live-server>
</master>
</replication>
</ha-policy>
</core>
<core xmlns="urn:activemq:core">
<jmx-management-enabled>true</jmx-management-enabled>
</core>
</configuration>
</code></pre>
<p>My consumer is defined as a <code>JmsListener</code> in a Spring Boot app. While it was consuming messages from a queue, the Spring Boot app crashed, which caused Kubernetes to delete the pod and recreate a new one. However, I noticed that the new pod did not connect to the same Artemis node, so the leftover messages from the previous connection were never consumed.</p>
<p>I thought the whole point of using the cluster is to have all the Artemis nodes act as one unit to deliver messages to the consumer regardless of which node it connects to. Am I wrong? If the cluster cannot reroute the consumer connection to the correct node (the one holding the leftover messages from the previous consumer), what is the recommended way to deal with this situation?</p>
| <p>First, it's important to note that there's no feature to make a client reconnect to the broker from which it disconnected after the client crashes/restarts. Generally speaking the client shouldn't really care about what broker it connects to; that's one of the main goals of horizontal scalability.</p>
<p>It's also worth noting that if the number of messages on the brokers and the number of connected clients is low enough that this condition arises frequently, that almost certainly means you have too many brokers in your cluster.</p>
<p>That said, I believe the reason your client isn't getting the messages it expects is that you're using the default <code>redistribution-delay</code> (i.e. <code>-1</code>), which means messages will <em>not</em> be redistributed to other nodes in the cluster. If you want to enable redistribution (which it seems you do) then you should set it to &gt;= 0, e.g.:</p>
<pre class="lang-xml prettyprint-override"><code> <address-setting match="#">
...
<redistribution-delay>0</redistribution-delay>
...
</address-setting>
</code></pre>
<p>You can read more about redistribution in <a href="https://activemq.apache.org/components/artemis/documentation/latest/clusters.html#message-redistribution" rel="nofollow noreferrer">the documentation</a>.</p>
<hr />
<p>Aside from that, you may want to reconsider your topology in general. Typically if you're in a cloud-like environment (e.g. one using Kubernetes) where the infrastructure itself will restart failed pods, then you wouldn't use a master/slave configuration. You'd simply mount the journal on persistent storage outside the pod (e.g. using NFSv4) so that when a node fails it is restarted and then reconnects back to its persistent storage. This effectively provides broker high availability (which is what master/slave is designed for outside of cloud environments).</p>
<p>Also, a single instance of ActiveMQ Artemis can handle millions of messages per second depending on the use-case so you may not actually need 3 live nodes for your expected load.</p>
<p>Note, these are general recommendations about your overall architecture and are not directly related to your question.</p>
|
<p>I have a recurring problem where containers in different pods can't communicate with each other.
To keep things simple, I created a cluster with only 2 containers in different pods:</p>
<ol>
<li>app that does only one thing: connecting to redis server.</li>
<li>redis-server container</li>
</ol>
<p>To make a long story short: I keep getting 'connection refused' when trying to connect from the app to redis:</p>
<pre><code>$ kubectl logs app-deployment-86f848b46f-n7672
> app@1.0.0 start
> node ./app.js
LATEST
Error: connect ECONNREFUSED 10.104.95.63:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1133:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '10.104.95.63',
port: 6379
}
</code></pre>
<p>The app resolves the redis-service successfully but fails to connect:</p>
<pre><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-service ClusterIP 10.107.18.112 <none> 4000/TCP 2m42s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29h
redis-service ClusterIP 10.104.95.63 <none> 6379/TCP 29h
</code></pre>
<p>the app code:</p>
<pre><code>const redis = require("redis");
const bluebird = require("bluebird");
bluebird.promisifyAll(redis);
console.log('LATEST');
const host = process.env.HOST;
const port = process.env.PORT;
const client = redis.createClient({ host, port });
client.on("error", function (error) {
console.error(error);
});
</code></pre>
<p>app's docker file:</p>
<pre><code>FROM node
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
<p>For the redis server I tried the default redis image, and when that didn't work, I used a custom-made image without a bind to a specific IP and with protected-mode disabled.</p>
<p>redis dockerfile:</p>
<pre><code>FROM redis:latest
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
</code></pre>
<p>Finally, I created 2 deployments with respective ClusterIP services:</p>
<p>app deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app-deployment
spec:
replicas: 1
selector:
matchLabels:
component: app
template:
metadata:
labels:
component: app
spec:
containers:
- name: app
image: user/redis-app:latest
ports:
- containerPort: 4000
env:
- name: HOST
valueFrom:
configMapKeyRef:
name: app-env
key: HOST
- name: PORT
valueFrom:
configMapKeyRef:
name: app-env
key: PORT
</code></pre>
<p>app service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-service
spec:
type: ClusterIP
selector:
component: app
ports:
- port: 4000
targetPort: 4000
</code></pre>
<p>env file:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-env
data:
PORT: "6379"
HOST: "redis-service.default"
</code></pre>
<p>redis deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-deployment
spec:
replicas: 1
selector:
matchLabels:
db: redis
template:
metadata:
labels:
db: redis
spec:
containers:
- name: redis
image: user/custome-redis:latest
ports:
- containerPort: 6379
</code></pre>
<p>redis service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: redis-service
spec:
type: ClusterIP
selector:
component: redis
ports:
- protocol: TCP
port: 6379
targetPort: 6379
</code></pre>
<p>Originally, I used a Windows environment with WSL2 and Kubernetes running over Docker with Docker Desktop installed. When that failed, I provisioned a CentOS 8 VM over VirtualBox and installed Kubernetes with minikube - and got the same results.</p>
<p>any ideas?....</p>
| <p>Posting an answer out of comments since David Maze found the issue (added as a community wiki, feel free to edit)</p>
<p>It's very important to match labels between pods, deployments, services and other elements.</p>
<p>In the example above, different labels are used for the <code>redis</code> Deployment and Service:</p>
<p>The Service selects <code>component: redis</code> while the Pods are labelled <code>db: redis</code>, which caused this issue.</p>
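<p>For reference, a Service selects Pods by a subset match: every key/value pair in the Service selector must be present in the Pod's labels. A minimal sketch of that rule (a hypothetical helper, not a Kubernetes API), applied to the manifests in the question:</p>

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    # A Service selects a Pod only if every selector key/value
    # is present in the Pod's labels (subset match).
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Labels from the manifests in the question:
service_selector = {"component": "redis"}   # redis-service selector
pod_labels = {"db": "redis"}                # redis-deployment pod template labels

print(selector_matches(service_selector, pod_labels))  # False: no endpoints
print(selector_matches({"db": "redis"}, pod_labels))   # True once they match
```

<p>Changing the Service selector to <code>db: redis</code> (or the Pod labels to <code>component: redis</code>) gives the Service endpoints again and resolves the <code>ECONNREFUSED</code>.</p>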
|
<p>I'm trying to run an automation job in Python that restarts a deployment in a Kubernetes cluster. I cannot install <code>kubectl</code> on the box due to limited permissions. Does anyone have a suggestion or solution for this?</p>
<p>Thank you.</p>
| <p>For configuration follow - <a href="https://github.com/kubernetes-client/python/blob/master/examples/remote_cluster.py" rel="noreferrer">https://github.com/kubernetes-client/python/blob/master/examples/remote_cluster.py</a></p>
<pre><code># This is equivalent to `kubectl rollout restart deployment/dashboard-kubernetes-dashboard -n default`
from kubernetes import client, config
from kubernetes.client.rest import ApiException
import datetime
def restart_deployment(v1_apps, deployment, namespace):
now = datetime.datetime.utcnow()
now = str(now.isoformat("T") + "Z")
body = {
'spec': {
'template':{
'metadata': {
'annotations': {
'kubectl.kubernetes.io/restartedAt': now
}
}
}
}
}
try:
v1_apps.patch_namespaced_deployment(deployment, namespace, body, pretty='true')
except ApiException as e:
print("Exception when calling AppsV1Api->read_namespaced_deployment_status: %s\n" % e)
def main():
config.load_kube_config(context="minikube")
# Enter name of deployment and "namespace"
deployment = "dashboard-kubernetes-dashboard"
namespace = "default"
v1_apps = client.AppsV1Api()
restart_deployment(v1_apps, deployment, namespace)
if __name__ == '__main__':
main()
</code></pre>
|
<p>I'm trying to deploy a Quarkus app to a Kubernetes cluster, but I got the following stacktrace:</p>
<pre><code>exec java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -XX:+ExitOnOutOfMemoryError -cp . -jar /deployments/quarkus-run.jar
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2021-05-11 16:47:19,455 ERROR [io.qua.run.Application] (main) Failed to start application (with profile prod): java.lang.NumberFormatException: SRCFG00029: Expected an integer value, got "tcp://10.233.12.82:80"
at io.smallrye.config.Converters.lambda$static$60db1e39$1(Converters.java:104)
at io.smallrye.config.Converters$EmptyValueConverter.convert(Converters.java:949)
at io.smallrye.config.Converters$TrimmingConverter.convert(Converters.java:970)
at io.smallrye.config.Converters$BuiltInConverter.convert(Converters.java:872)
at io.smallrye.config.Converters$OptionalConverter.convert(Converters.java:790)
at io.smallrye.config.Converters$OptionalConverter.convert(Converters.java:771)
at io.smallrye.config.SmallRyeConfig.getValue(SmallRyeConfig.java:225)
at io.smallrye.config.SmallRyeConfig.getOptionalValue(SmallRyeConfig.java:270)
at io.quarkus.arc.runtime.ConfigRecorder.validateConfigProperties(ConfigRecorder.java:37)
at io.quarkus.deployment.steps.ConfigBuildStep$validateConfigProperties1249763973.deploy_0(ConfigBuildStep$validateConfigProperties1249763973.zig:328)
at io.quarkus.deployment.steps.ConfigBuildStep$validateConfigProperties1249763973.deploy(ConfigBuildStep$validateConfigProperties1249763973.zig:40)
at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:576)
at io.quarkus.runtime.Application.start(Application.java:90)
at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:100)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:66)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:42)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:119)
at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.quarkus.bootstrap.runner.QuarkusEntryPoint.doRun(QuarkusEntryPoint.java:48)
at io.quarkus.bootstrap.runner.QuarkusEntryPoint.main(QuarkusEntryPoint.java:25)
</code></pre>
<p>I built the Docker image with the <a href="https://github.com/quarkusio/quarkus-quickstarts/blob/main/getting-started/src/main/docker/Dockerfile.jvm" rel="nofollow noreferrer">default Dockerfile</a>, and my quarkus-related dependencies are the following:</p>
<pre class="lang-kotlin prettyprint-override"><code>dependencies {
implementation(enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}"))
implementation("org.optaplanner:optaplanner-quarkus")
implementation("io.quarkus:quarkus-resteasy")
implementation("io.quarkus:quarkus-vertx")
implementation("io.quarkus:quarkus-resteasy-jackson")
implementation("io.quarkus:quarkus-undertow-websockets")
implementation("io.quarkus:quarkus-smallrye-health")
}
</code></pre>
<p>I'm using Quarkus 1.13.3.Final, and I've written a helm chart for my deployment by hand. The deployed dockerfile runs fine on my machine, and the kubernetes deployment descriptor does not have that IP address in it. I think that IP is a ClusterIP of the cluster.</p>
<p>Any idea? Thanks</p>
| <p>It's due to the <a href="https://github.com/kubernetes/kubernetes/blob/v1.20.0/pkg/kubelet/envvars/envvars.go#L87-L90" rel="noreferrer">docker link variables</a> that Kubernetes mimics for every <code>Service</code> name in scope. It bites people a lot when they have generically named services such as <code>{ apiVersion: v1, kind: Service, metadata: { name: http }, ...</code> because the kubelet will cheerfully produce environment variables of the form <code>HTTP_PORT=tcp://10.233.12.82:80</code> in the Pod, and frameworks such as Spring Boot or (evidently) Quarkus, which coerce env-vars into configuration overrides, then produce the exact outcome you're experiencing</p>
<p>The solution is (a) don't name <code>Services</code> with bland names (b) "mask off" the offensive env-vars for the Pod:</p>
<pre class="lang-yaml prettyprint-override"><code>...
containers:
- ...
env:
- name: HTTP_PORT
# it doesn't need a value:, it just needs the name to be specified
# so it hides the injected version
- ... any remaining env-vars you really want
</code></pre>
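<p>To see why a generically named <code>Service</code> is so hazardous, here is a sketch (a hypothetical helper, not part of any library) of the docker-link-style naming scheme used for the injected variables:</p>

```python
def service_link_env(name: str, cluster_ip: str, port: int, proto: str = "tcp") -> dict:
    """Mimic the docker-link-style env vars kubelet injects for each Service."""
    prefix = name.upper().replace("-", "_")
    url = f"{proto}://{cluster_ip}:{port}"
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
        f"{prefix}_PORT": url,  # the variable that shadows <NAME>_PORT config values
        f"{prefix}_PORT_{port}_{proto.upper()}": url,
        f"{prefix}_PORT_{port}_{proto.upper()}_ADDR": cluster_ip,
        f"{prefix}_PORT_{port}_{proto.upper()}_PORT": str(port),
        f"{prefix}_PORT_{port}_{proto.upper()}_PROTO": proto,
    }

# A Service named "http" on port 80 produces HTTP_PORT=tcp://10.233.12.82:80,
# which SmallRye Config then tries (and fails) to parse as an integer.
print(service_link_env("http", "10.233.12.82", 80)["HTTP_PORT"])  # tcp://10.233.12.82:80
```

<p>Any env-var whose name collides with a configuration key your framework reads should either be masked off as shown above, or avoided by giving the Service a less generic name.</p>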
|
<p>I am running on-prem Kubernetes. I have a release that is running with 3 pods. At some point (I assume) I deployed the helm chart with 3 replicas, but I have since deployed an update that has 2 replicas.</p>
<p>When I run <code>helm get manifest my-release-name -n my-namespace</code>, it shows that the deployment yaml has replicas set to 2.</p>
<p>But it still has 3 pods when I run <code>kubectl get pods -n my-namespace</code>.</p>
<p><strong>What is needed (from a helm point of view) to get the number of replicas down to the limit I set?</strong></p>
<p><strong>Update</strong><br />
I noticed this while I was debugging a crash loop backoff for the release.</p>
<p>This is an example of what a <code>kubectl describe pod</code> looks like on one of the three pods.</p>
<pre>
Name: my-helm-release-7679dc8c79-knd9x
Namespace: my-namespace
Priority: 0
Node: my-kube-cluster-b178d4-k8s-worker-1/10.1.2.3
Start Time: Wed, 05 May 2021 21:27:36 -0600
Labels: app.kubernetes.io/instance=my-helm-release
app.kubernetes.io/name=my-helm-release
pod-template-hash=7679dc8c79
Annotations:
Status: Running
IP: 10.1.2.4
IPs:
IP: 10.1.2.4
Controlled By: ReplicaSet/my-helm-release-7679dc8c79
Containers:
my-helm-release:
Container ID: docker://9a9f213efa63ba8fd5a9e0fad84eb0615996c768c236ae0045d1e7bec012eb02
Image: dockerrespository.mydomain.com/repository/runtime/my-helm-release:1.9.0-build.166
Image ID: docker-pullable://dockerrespository.mydomain.com/repository/runtime/my-helm-release@sha256:a11179795e7ebe3b9e57a35b0b27ec9577c5c3cd473cc0ecc393a874f03eed92
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Tue, 11 May 2021 12:24:04 -0600
Finished: Tue, 11 May 2021 12:24:15 -0600
Ready: False
Restart Count: 2509
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-82gnm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-82gnm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-82gnm
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 10m (x3758 over 5d15h) kubelet Readiness probe failed: Get http://10.1.2.4:80/: dial tcp 10.1.2.4:80: connect: connection refused
Warning BackOff 15s (x35328 over 5d14h) kubelet Back-off restarting failed container</pre>
| <blockquote>
<p>What is needed (from a helm point of view) to get the number of replicas down to the limit I set?</p>
</blockquote>
<p>Your pods need to be in a "healthy" state before the rollout proceeds; only then will the Deployment converge to your desired number of replicas.</p>
<p>First, you deployed 3 replicas. This is managed by a ReplicaSet.</p>
<p>Then you deployed a new revision, with 2 replicas, so a "rolling deployment" is performed. Pods with your new revision are created first, but replicas of your old ReplicaSet are only scaled down once there are healthy instances of the new revision. Since your new pod is stuck in <code>CrashLoopBackOff</code> and never becomes ready, the old replicas are never removed.</p>
|
<p>With the following code, I'm able to fetch all the Pods running in a cluster. How can I find the Pod Controller (Deployment/DaemonSet) using the Kubernetes go-client library?</p>
<pre class="lang-golang prettyprint-override"><code>var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// create the kubeClient
kubeClient, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
pods, err := kubeClient.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
for _, pod := range pods.Items {
fmt.Println(pod.Name)
// how can I get the Pod controller? (Deployment/DaemonSet)
// e.g. fmt.Println(pod.Controller.Name)
}
</code></pre>
| <p>By following @Jonas's suggestion I was able to get the Pod's manager. Here's a fully working sample:</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"context"
"flag"
"fmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
"path/filepath"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// create the kubeClient
kubeClient, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
pods, err := kubeClient.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
for _, pod := range pods.Items {
if len(pod.OwnerReferences) == 0 {
fmt.Printf("Pod %s has no owner\n", pod.Name)
continue
}
var ownerName, ownerKind string
switch pod.OwnerReferences[0].Kind {
case "ReplicaSet":
replica, repErr := kubeClient.AppsV1().ReplicaSets(pod.Namespace).Get(context.TODO(), pod.OwnerReferences[0].Name, metav1.GetOptions{})
if repErr != nil {
panic(repErr.Error())
}
ownerName = replica.OwnerReferences[0].Name
ownerKind = "Deployment"
case "DaemonSet", "StatefulSet":
ownerName = pod.OwnerReferences[0].Name
ownerKind = pod.OwnerReferences[0].Kind
default:
fmt.Printf("Could not find resource manager for type %s\n", pod.OwnerReferences[0].Kind)
continue
}
fmt.Printf("POD %s is managed by %s %s\n", pod.Name, ownerName, ownerKind)
}
}
</code></pre>
|
<p>As per the kubectl documentation, kubectl apply can read from a file or from stdin. My use case is that service/deployment JSON strings will exist at runtime, and I have to deploy those to clusters using Node.js. Of course, I could create files and just do <code>kubectl apply -f thefilename</code>. But I don't want to create files. Is there an approach where I can do something like the below?</p>
<pre><code>kubectl apply '{"apiVersion": "extensions/v1beta1","kind": "Ingress"...}'
</code></pre>
<p>For the record, I am using node_ssh library.</p>
| <pre><code>echo 'your manifest' | kubectl apply -f -
</code></pre>
<p>Reference:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply</a></li>
</ul>
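<p>The same stdin pattern works from any language. For instance, a minimal Python sketch (the <code>ConfigMap</code> used here is purely illustrative) that pipes an in-memory manifest to <code>kubectl apply -f -</code> without ever touching disk:</p>

```python
import json
import subprocess

def manifest_bytes(manifest: dict) -> bytes:
    # kubectl accepts JSON on stdin as well as YAML
    return json.dumps(manifest).encode()

def apply_manifest(manifest: dict) -> subprocess.CompletedProcess:
    """Equivalent of `echo '<manifest>' | kubectl apply -f -`, no file on disk."""
    return subprocess.run(["kubectl", "apply", "-f", "-"],
                          input=manifest_bytes(manifest), capture_output=True)

manifest = {"apiVersion": "v1", "kind": "ConfigMap",
            "metadata": {"name": "demo"}, "data": {"greeting": "hello"}}
# apply_manifest(manifest)  # uncomment on a machine with cluster access
```

<p>From Node.js over SSH the idea is identical: write the manifest string to the stdin of the remote <code>kubectl apply -f -</code> process.</p>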
|
<p>I would like to route traffic to pods based on headers - with a fallback.</p>
<p>The desired result would be a k8s cluster where multiple versions of the same service could be deployed and routed to using header values.</p>
<ul>
<li>svcA</li>
<li>svcB</li>
<li>svcC</li>
</ul>
<p>Each of these services (the main branch of the git repo) would be deployed either to the default namespace or labelled 'main'. Any feature branch of each service can also be deployed, either into its own namespace or labelled with the branch name.</p>
<p>Ideally, by setting a header <code>X-svcA</code> to a value matching a branch name, we would route any traffic to the pod in the matching namespace or with the matching label. If there is no such namespace or label, route the traffic to the default (main) pod.</p>
<pre><code>if HEADERX && svcX:label
route->svcX:label
else
route->svcX
</code></pre>
<p>The first question - is this (or something like it) even possible with Istio or Linkerd?</p>
| <p>You can do that using Istio <code>VirtualService</code></p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
hosts:
- reviews
http:
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
</code></pre>
<p>Read more <a href="https://istio.io/latest/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity" rel="nofollow noreferrer">here</a>.</p>
|
<p>I'm using an AKS cluster with version 1.19, and I found that this version of K8s uses <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd" rel="nofollow noreferrer">containerd</a> instead of Dockershim as the container runtime.
I also use Fluentd to collect logs from my Spring apps; with k8s version 1.18 it works okay, but with k8s version 1.19 I can't collect logs from my Spring app.
I use <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-azureblob.yaml" rel="nofollow noreferrer">this file</a> for my Fluentd DaemonSet.
I wonder whether the log files of my applications no longer live in /var/log/containers - is this correct?</p>
| <p>I found a solution here: <a href="https://github.com/fluent/fluentd-kubernetes-daemonset#use-cri-parser-for-containerdcri-o-logs" rel="nofollow noreferrer">use-cri-parser-for-containerdcri-o-logs</a></p>
<blockquote>
<p>By default, these images use json parser for /var/log/containers/
files because docker generates json formatted logs. On the other hand,
containerd/cri-o use different log format. To parse such logs, you
need to use cri parser instead.</p>
</blockquote>
<p>We need to build a new Fluentd image that uses the cri parser; that works for me.</p>
|
<p>Can anybody please tell me how to use a claim as a volume in Kubernetes?</p>
<p>Does a volume need to be created?</p>
<p>Documentation does not give much information about it:
<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes</a></p>
| <p>A <code>PersistentVolumeClaim</code> allows binding to an existing <code>PersistentVolume</code>. A <code>PersistentVolume</code> is a representation of a "real" storage device.</p>
<p>You have the detailed lookup algorithm on the following page, section <code>Matching and binding</code>: <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/persistent-storage.md#matching-and-binding" rel="nofollow noreferrer">https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/persistent-storage.md</a></p>
<p>Since it is not very practical to declare each <code>PersistentVolume</code> manually, there is an option to use a <code>StorageClass</code> that allows a <code>PersistentVolume</code> to be created dynamically.
You can either set the <code>StorageClass</code> in the <code>PersistentVolumeClaim</code> or define a default <code>StorageClass</code> for your cluster.</p>
<p>So when a Pod uses a <code>PersistentVolumeClaim</code> as a volume, a matching <code>PersistentVolume</code> is searched for first. If no matching PV can be found and a <code>StorageClass</code> is defined in the claim (or a default <code>StorageClass</code> exists), then a volume will be dynamically created.</p>
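<p>For completeness, a Pod consumes the claim through a <code>persistentVolumeClaim</code> volume source. A minimal sketch of such a manifest, expressed here as a plain Python dict (the claim name <code>my-claim</code> and the image are illustrative):</p>

```python
# Pod spec mounting an existing PersistentVolumeClaim as a volume
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pvc-demo"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "nginx",
            "volumeMounts": [{"name": "data", "mountPath": "/usr/share/nginx/html"}],
        }],
        "volumes": [{
            "name": "data",  # must match the volumeMounts entry above
            "persistentVolumeClaim": {"claimName": "my-claim"},
        }],
    },
}
```

<p>Serialized to YAML or JSON, this is exactly the manifest shape shown in the "claims as volumes" documentation linked in the question.</p>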
|
<p><code>kubectl logs -f <pod-name></code></p>
<p>This command shows the logs from the container log file.</p>
<p>Basically, I want to check the difference between "what is generated by the container" and "what is written to the log file".
I see some unusual binary logs, so I just want to find out if the container is creating those binary logs or the logs are not properly getting written to the log file.</p>
<p>"Unusual logs":</p>
<p><code>\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\</code></p>
| <p>Usually, containerized applications do not write to log files but send messages to <code>stdout</code>/<code>stderr</code>; there is no point in storing log files inside containers, as they will be deleted when the pod is deleted.<br />
What you see when running</p>
<pre><code>kubectl logs -f <pod-name>
</code></pre>
<p>are the messages sent to <code>stdout</code>/<code>stderr</code>. There are no container-specific logs here, only application logs.</p>
<hr />
<p>If, for some reason, your application does write to a log file, you can check it by <code>exec</code>ing into the pod with e.g.</p>
<pre><code>kubectl exec -it <pod-name> -- /bin/bash
</code></pre>
<p>and reading the logs as you would in a shell.</p>
<h1>Edit</h1>
<h2>Application logs</h2>
<p>A container engine handles and redirects any output generated to a containerized application's <code>stdout</code> and <code>stderr</code> streams. For example, the Docker container engine redirects those two streams to a <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">logging driver</a>, which is configured in Kubernetes to write to a file in JSON format.<br />
Those logs are also saved to</p>
<ul>
<li><code>/var/log/containers/</code></li>
<li><code>/var/log/pods/</code></li>
</ul>
<p>By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs.</p>
<p>Everything you see by issuing the command</p>
<pre><code>kubectl logs <pod-name>
</code></pre>
<p>is what the application sent to <code>stdout</code>/<code>stderr</code>, or what was redirected to <code>stdout</code>/<code>stderr</code>. For example, <code>nginx</code>:</p>
<blockquote>
<p>The official nginx image creates a symbolic link from <code>/var/log/nginx/access.log</code> to <code>/dev/stdout</code>, and creates another symbolic link from <code>/var/log/nginx/error.log</code> to <code>/dev/stderr</code>, overwriting the log files and causing logs to be sent to the relevant special device instead.</p>
</blockquote>
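<p>You can reproduce that trick in a few lines of shell. This is only an illustration (the directory <code>/tmp/demo-logs</code> is made up): once the "log file" is a symlink to the stdout device, anything written to it goes to the process's <code>stdout</code> instead of being stored on disk:</p>

```shell
# Create a log path that is really a symlink to the stdout device,
# the same trick the official nginx image uses.
mkdir -p /tmp/demo-logs
ln -sf /dev/stdout /tmp/demo-logs/access.log

# "Writing to the log file" now goes to stdout instead of to disk,
# so a container engine can capture it.
echo 'GET / 200' > /tmp/demo-logs/access.log
```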
<h2>Node logs</h2>
<p>Components that do not run inside containers (e.g. the kubelet or the container runtime) write to <code>journald</code> on systemd-based systems; otherwise, they write to <code>.log</code> files inside the <code>/var/log/</code> directory.</p>
<p><em>Excerpt from <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs" rel="nofollow noreferrer">official documentation</a>:</em></p>
<p>For now, digging deeper into the cluster requires logging into the relevant machines. Here are the locations of the relevant log files. (note that on systemd-based systems, you may need to use journalctl instead)</p>
<p><strong>Master</strong></p>
<ul>
<li><code>/var/log/kube-apiserver.log</code> - API Server, responsible for serving the API</li>
<li><code>/var/log/kube-scheduler.log</code> - Scheduler, responsible for making scheduling decisions</li>
<li><code>/var/log/kube-controller-manager.log</code> - Controller that manages replication controllers</li>
</ul>
<p><strong>Worker Nodes</strong></p>
<ul>
<li><code>/var/log/kubelet.log</code> - Kubelet, responsible for running containers on the node</li>
<li><code>/var/log/kube-proxy.log</code> - Kube Proxy, responsible for service load balancing</li>
</ul>
|
<p>It seems that we cannot make the Snowplow container (snowplow/scala-stream-collector-kinesis) use the service account we provide. It always uses the <code>shared-eks-node-role</code> but not the provided service account. The config is set to <code>default</code> for both the <code>accessKey</code> and the <code>secretKey</code>.</p>
<p>This is the service account part we use:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: thijs-service-account
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::123:role/thijs-eks-service-account-role-snowplow
</code></pre>
<p>And when I inspect the pod I can see the account:</p>
<pre><code>AWS_ROLE_ARN: arn:aws:iam::123:role/thijs-eks-service-account-role-snowplow
</code></pre>
<p>The error then shows the wrong account.</p>
<pre><code>Exception in thread "main" com.amazonaws.services.kinesis.model.AmazonKinesisException: User: arn:aws:sts::123:assumed-role/shared-eks-node-role/i-123 is not authorized to perform: kinesis:DescribeStream on resource: arn:aws:kinesis:eu-west-1:123:stream/snowplow-good (Service: AmazonKinesis; Status Code: 400; Error Code: AccessDeniedException; Request ID: 123-123-123; Proxy: null)
</code></pre>
| <p>The collector itself doesn't do any role swapping. It only needs to receive credentials via one of three methods:</p>
<ul>
<li>the default creds provider chain</li>
<li>a specific IAM role</li>
<li>environment variables</li>
</ul>
<p>The most popular deployment is on an EC2 instance, in which case the default EC2 role can be used to access other resources in the account.</p>
<p>It looks like when you are deploying it on EKS things are not as straightforward. The collector seems to work with this assumed role: <code>arn:aws:sts::123:assumed-role/shared-eks-node-role/i-123</code> but it is not authorised with Kinesis permissions. Do you know what process creates that role? Perhaps you could add the missing Kinesis policies there?</p>
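<p>One detail that is easy to miss with IAM roles for service accounts: the pod spec must actually reference the service account, otherwise the pod falls back to the node role. A sketch of the relevant part of the deployment, assuming a layout like this (the deployment and container names are made up; the service account name is taken from your snippet):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: snowplow-collector          # hypothetical name
spec:
  template:
    spec:
      # Without this line the pod uses the node's role
      # (shared-eks-node-role) instead of the IRSA role.
      serviceAccountName: thijs-service-account
      containers:
        - name: collector
          image: snowplow/scala-stream-collector-kinesis
```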
|
<p>I'm new to Kubernetes and to supporting a particular website hosted in Kubernetes. I'm trying to figure out why cert-manager did not renew the certificate in the QA environment a few weeks back.</p>
<p>Looking at the details of various certificate-related resources, the problem seems to be that the challenge failed:</p>
<pre><code>State: invalid, Reason: Error accepting authorization: acme: authorization error for [DOMAIN]: 400 urn:ietf:params:acme:error:connection: Fetching http://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING]: Timeout during connect (likely firewall problem)
</code></pre>
<p>I assume that error means Let's Encrypt wasn't able to access the challenge file at http://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING]</p>
<p>(Domain and challenge token string redacted)</p>
<p>I've tried connecting to the URL via PowerShell:</p>
<p><code>PS C:\Users\Simon> invoke-webrequest -uri http://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING] -SkipCertificateCheck</code></p>
<p>and it returns a 200 OK.</p>
<p>However, PowerShell follows redirects automatically and checking with WireShark the Nginx web server is performing a 308 permanent redirect to https://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING]</p>
<p>(same URL but just redirecting HTTP to HTTPS)</p>
<p>I understand that Let's Encrypt should be able to handle HTTP to HTTPS redirects.</p>
<p>Given that the URL Let's Encrypt was trying to reach is accessible from the internet I'm at a loss as to what the next step should be in investigating this issue. Could anyone provide any advice?</p>
<p>Here is the full output of the kubectl cert-manager plugin, checking the status of the certificate and associated resources:</p>
<pre><code>PS C:\Users\Simon> kubectl cert-manager status certificate -n qa containers-tls-secret
Name: containers-tls-secret
Namespace: qa
Created at: 2020-10-16T08:40:14+13:00
Conditions:
Ready: False, Reason: Expired, Message: Certificate expired on Sun, 14 Mar 2021 17:41:12 UTC
Issuing: False, Reason: Failed, Message: The certificate request has failed to complete and will be retried: Failed to wait for order resource "containers-tls-secret-q2cwr-3223066309" to become ready: order is in "invalid" state:
DNS Names:
- [DOMAIN]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 31s (x236 over 9d) cert-manager Renewing certificate as renewal was scheduled at 2021-02-12 17:41:12 +0000 UTC
Normal Reused 31s (x236 over 9d) cert-manager Reusing private key stored in existing Secret resource "containers-tls-secret"
Warning Failed 31s (x236 over 9d) cert-manager The certificate request has failed to complete and will be retried: Failed to wait for order resource "containers-tls-secret-q2cwr-3223066309" to become ready: order is in "invalid" state:
Issuer:
Name: letsencrypt
Kind: ClusterIssuer
Conditions:
Ready: True, Reason: ACMEAccountRegistered, Message: The ACME account was registered with the ACME server
Events: <none>
Secret:
Name: containers-tls-secret
Issuer Country: US
Issuer Organisation: Let's Encrypt
Issuer Common Name: R3
Key Usage: Digital Signature, Key Encipherment
Extended Key Usages: Server Authentication, Client Authentication
Public Key Algorithm: RSA
Signature Algorithm: SHA256-RSA
Subject Key ID: dadf29869b58d05e980c390fdc8783f52369228d
Authority Key ID: 142eb317b75856cbae500940e61faf9d8b14c2c6
Serial Number: 04f7356add94a7909afab94f0847a3457765
Events: <none>
Not Before: 2020-12-15T06:41:12+13:00
Not After: 2021-03-15T06:41:12+13:00
Renewal Time: 2021-02-13T06:41:12+13:00
CertificateRequest:
Name: containers-tls-secret-q2cwr
Namespace: qa
Conditions:
Ready: False, Reason: Failed, Message: Failed to wait for order resource "containers-tls-secret-q2cwr-3223066309" to become ready: order is in "invalid" state:
Events: <none>
Order:
Name: containers-tls-secret-q2cwr-3223066309
State: invalid, Reason:
Authorizations:
URL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/10810339315, Identifier: [DOMAIN], Initial State: pending, Wildcard: false
FailureTime: 2021-02-13T06:41:59+13:00
Challenges:
- Name: containers-tls-secret-q2cwr-3223066309-2302286353, Type: HTTP-01, Token: [CHALLENGE TOKEN STRING], Key: [CHALLENGE TOKEN STRING].8b00cc-ysOWGQ8vtmpOJobWOFa2cEQUe4Sun5NUKCws, State: invalid, Reason: Error accepting authorization: acme: authorization error for [DOMAIN]: 400 urn:ietf:params:acme:error:connection: Fetching http://[DOMAIN]/.well-known/acme-challenge/[CHALLENGE TOKEN STRING]: Timeout during connect (likely firewall problem), Processing: false, Presented: false
</code></pre>
<p>By the way, the invoke-webrequest results show an HTML page was returned:</p>
<pre><code><!doctype html><html lang="en"><head><meta charset="utf-8"><title>Containers</title><base href="./"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" href="favicon.ico…
</code></pre>
<p>Could that be the issue? I don't know what Let's Encrypt expects to find at the URL of the HTTP01 challenge. Is a web page allowed or is it expecting something different?</p>
<p><strong>EDIT:</strong> I now suspect the HTML page returned by invoke-webrequest is not normal, since I understand the file should include the Let's Encrypt token and a key. Here is the full HTML page:</p>
<pre><code><!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Wineworks</title>
<base href="./">
<meta name="viewport" content="width=device-width,initial-scale=1">
<link rel="icon" href="favicon.ico">
<link rel="apple-touch-icon-precomposed" href="favicon-152.png">
<meta name="msapplication-TileColor" content="#FFFFFF">
<meta name="msapplication-TileImage" content="favicon-152.png">
<script src="https://secure.aadcdn.microsoftonline-p.com/lib/1.0.16/js/adal.min.js"/>
<link href="styles.025a840d59ecfcfe427e.bundle.css" rel="stylesheet"/>
</head>
<body>
<app-root/>
<script type="text/javascript" src="inline.ce954cfcbe723b5986e6.bundle.js"/>
<script type="text/javascript" src="polyfills.7edc676f7558876c179d.bundle.js"/>
<script type="text/javascript" src="main.da3590aac44ee76e7b3a.bundle.js"/>
</body>
</html>
</code></pre>
<p>Any idea what might cause cert-manager to drop the wrong kind of file at the challenge location?</p>
| <p>In the end I was unable to determine the cause of the certificate renewal failure. However, events on one of the certificate-related resources suggested previous renewals had worked. So I thought it was possible whatever the problem was might have been transient or a one-off, and that trying again to renew the certificate may work.</p>
<p>Reading various articles and blog posts it appeared that deleting the CertificateRequest object would prompt cert-manager to create a new one, which should result in a certificate renewal. Also, deleting the CertificateRequest object would automatically delete the associated ACME Order and Challenge objects as well, so it wouldn't be necessary to delete them manually.</p>
<p>Deleting the CertificateRequest object did work: The certificate was renewed successfully. However, it didn't renew straight away. Further reading suggests it may take an hour for the certificate renewal (I didn't check the exact time it took so can't verify this).</p>
<p>To delete a CertificateRequest:</p>
<pre><code>kubectl delete certificaterequest <certificateRequest name>
</code></pre>
<p>For example:</p>
<pre><code>kubectl delete certificaterequest my-certificate-zrt6p -n qa
</code></pre>
<p>If you wish to force an immediate renewal rather than waiting an hour, then after deleting the CertificateRequest object and once cert-manager has created a new one, run the following command (assuming you have the <strong>kubectl cert-manager plugin</strong> installed):</p>
<pre><code>kubectl cert-manager renew <certificate name>
</code></pre>
<p>For example, to renew certificate my-certificate in namespace qa:</p>
<pre><code>kubectl cert-manager renew my-certificate -n qa
</code></pre>
<p><strong>NOTE:</strong> The easiest way to install the kubectl cert-manager plugin is via the <strong>Krew</strong> plugin manager:</p>
<pre><code>kubectl krew install cert-manager
</code></pre>
<p>See <a href="https://krew.sigs.k8s.io/docs/user-guide/setup/install/" rel="nofollow noreferrer">https://krew.sigs.k8s.io/docs/user-guide/setup/install/</a> for details of how to install Krew (which is useful for all kubectl plugins, not just cert-manager).</p>
<p>One further thing I found from researching this is that sometimes the old certificate secret can get "stuck", preventing a new secret from being created. You can delete the certificate secret to avoid this problem. For example:</p>
<pre><code>kubectl delete secret my-certificate -n qa
</code></pre>
<p>I assume, however, that without a certificate secret your website will have no certificate, which may prevent browsers from accessing it. So I would only delete the existing secret as a last resort.</p>
|
<p>Previously my kubernetes pod was running as root and I was running <code>ulimit -l <MLOCK_AMOUNT></code> in its startup script before starting its main program in foreground. However, I can no more run my pod as root, would you please know how can I set this limit now?</p>
<p>To be able to set it for a specific <code>Pod</code>, the way you did it before, you unfortunately need privilege escalation, i.e. running your container as root.</p>
<p>As far as I understand, you're interested in setting it only for a specific <code>Pod</code>, not globally, right? Globally it can be done by changing the Docker configuration on a specific Kubernetes node.</p>
<p>This topic has already been raised in <a href="https://stackoverflow.com/questions/33649192/how-do-i-set-ulimit-for-containers-in-kubernetes">another thread</a> and as you may read in <a href="https://stackoverflow.com/a/33666728/11714114">James Brown's answer</a>:</p>
<blockquote>
<p>It appears that you can't currently set a ulimit but it is an open
issue: <a href="https://github.com/kubernetes/kubernetes/issues/3595" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/3595</a></p>
</blockquote>
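<p>If the ulimit is needed for <code>mlock</code> specifically, one workaround that avoids running as full root might be to grant only the <code>IPC_LOCK</code> capability to the container. A sketch (pod name and image are placeholders; whether this is sufficient depends on your runtime's default <code>RLIMIT_MEMLOCK</code> handling):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mlock-demo                  # hypothetical name
spec:
  containers:
    - name: app
      image: your-app:latest        # placeholder image
      securityContext:
        capabilities:
          # CAP_IPC_LOCK lets the process lock memory without
          # being bound by the RLIMIT_MEMLOCK ulimit.
          add: ["IPC_LOCK"]
```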
|
<p>I have multiple openshift routes of type:</p>
<pre><code>apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: <name>
labels:
app.kubernetes.io/name: <app-name>
spec:
host: <host>
port:
targetPort: <targetPort>
tls:
termination: reencrypt
destinationCACertificate: |-
-----BEGIN CERTIFICATE-----
MIIDejCCAmICCQCNHBN8tj/FwzANBgkqhkiG9w0BAQsFADB/MQswCQYDVQQGEwJV
UzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28xDzANBgNVBAoM
BlNwbHVuazEXMBUGA1UEAwwOU3BsdW5rQ29tbW9uQ0ExITAfBgkqhkiG9w0BCQEW
EnN1cHBvcnRAc3BsdW5rLmNvbTAeFw0xNzAxMzAyMDI2NTRaFw0yNzAxMjgyMDI2
NTRaMH8xCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwNU2FuIEZy
YW5jaXNjbzEPMA0GA1UECgwGU3BsdW5rMRcwFQYDVQQDDA5TcGx1bmtDb21tb25D
QTEhMB8GCSqGSIb3DQEJARYSc3VwcG9ydEBzcGx1bmsuY29tMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzB9ltVEGk73QvPlxXtA0qMW/SLDQlQMFJ/C/
tXRVJdQsmcW4WsaETteeWZh8AgozO1LqOa3I6UmrWLcv4LmUAh/T3iZWXzHLIqFN
WLSVU+2g0Xkn43xSgQEPSvEK1NqZRZv1SWvx3+oGHgu03AZrqTj0HyLujqUDARFX
sRvBPW/VfDkomHj9b8IuK3qOUwQtIOUr+oKx1tM1J7VNN5NflLw9NdHtlfblw0Ys
5xI5Qxu3rcCxkKQuwzdask4iijOIRMAKX28pbakxU9Nk38Ac3PNadgIk0s7R829k
980sqGWkd06+C17OxgjpQbvLOR20FtmQybttUsXGR7Bp07YStwIDAQABMA0GCSqG
SIb3DQEBCwUAA4IBAQCxhQd6KXP2VzK2cwAqdK74bGwl5WnvsyqdPWkdANiKksr4
ZybJZNfdfRso3fA2oK1R8i5Ca8LK3V/UuAsXvG6/ikJtWsJ9jf+eYLou8lS6NVJO
xDN/gxPcHrhToGqi1wfPwDQrNVofZcuQNklcdgZ1+XVuotfTCOXHrRoNmZX+HgkY
gEtPG+r1VwSFowfYqyFXQ5CUeRa3JB7/ObF15WfGUYplbd3wQz/M3PLNKLvz5a1z
LMNXDwN5Pvyb2epyO8LPJu4dGTB4jOGpYLUjG1UUqJo9Oa6D99rv6sId+8qjERtl
ZZc1oaC0PKSzBmq+TpbR27B8Zra3gpoA+gavdRZj
-----END CERTIFICATE-----
to:
kind: Service
name: <ServiceName>
</code></pre>
<p>I want to convert it into a Ingress Object as there are no routes in bare k8s. I see we don't have definition of termination type in Ingress Object, so can anyone recommend what is the optimal way to achieve this same functionality of openshift route using k8s ingress?</p>
<p>Thanks in advance</p>
<p>The <code>reencrypt</code> option is not available in the NGINX ingress controller. With a bare-metal ingress, the TLS certificate is simply stored in a secret and TLS termination takes place at the controller, which is similar to OpenShift's edge termination. So it is not possible to reproduce the <code>reencrypt</code> behaviour of OpenShift's route with a bare Kubernetes ingress alone. You can achieve it using <a href="https://istio.io/latest/docs/concepts/security/#mutual-tls-authentication" rel="nofollow noreferrer">istio</a>. Here is a <a href="https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/" rel="nofollow noreferrer">tutorial</a> on how to set up mutual TLS migration.</p>
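<p>For completeness, the closest plain-ingress equivalent is edge-style termination with the certificate stored in a secret. A sketch (the host, secret and service names are placeholders, not taken from your route):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                  # hypothetical name
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls            # secret holding tls.crt / tls.key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service    # placeholder service name
                port:
                  number: 443
```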
|
<p>Following a tutorial on Kubernetes and got stuck after the logs look fine, but the port exposed doesn't work : "Connection Refused" using Chrome / curl.</p>
<p>Used a yaml file to power up the service via NodePort / ClusterIP.</p>
<p>posts-srv.yaml - Updated</p>
<pre class="lang-js prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: posts-srv
spec:
type: NodePort
selector:
app: posts
ports:
- name: posts
protocol: TCP
port: 4000
targetPort: 4000
nodePort: 32140
</code></pre>
<p>posts-depl.yaml - Updated</p>
<pre class="lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: posts-depl
spec:
replicas: 1
selector:
matchLabels:
app: posts
template:
metadata:
labels:
app: posts
spec:
containers:
- name: posts
image: suraniadi/posts
ports:
- containerPort: 4000
</code></pre>
<pre><code>$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
posts-depl 1/1 1 1 27m
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27h
posts-srv NodePort 10.111.64.122 <none> 4000:32140/TCP 21m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
posts-depl-79b6889f89-rxdv2 1/1 Running 0 26m
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>It's better to specify the <code>nodePort</code> explicitly in your service YAML configuration file; otherwise Kubernetes will allocate one randomly from the default NodePort range (30000-32767).
The <code>ports</code> section is a list of ports; in your case there is no need to specify a name. Check the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort docs</a> for more info.
This should work for you:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: posts-srv
spec:
type: NodePort
selector:
app: posts
ports:
- port: 4000
targetPort: 4000
nodePort: 32140
protocol: TCP
</code></pre>
<p>To connect to the NodePort service, check whether a firewall is running and, if so, make sure the port is open on your VMs (CentOS example):</p>
<pre><code>sudo firewall-cmd --permanent --add-port=32140/tcp
</code></pre>
<p>Finally, connect to the service using any <strong>node IP</strong> address (not the ClusterIP, which is an internal IP and is not reachable from outside the cluster) and the nodePort: <code>&lt;node_public_IP&gt;:32140</code></p>
|
<p>Hope you are all well,</p>
<p>I am currently trying to rollout the <a href="https://github.com/ansible/awx-operator" rel="noreferrer">awx-operator</a> on to a Kubernetes Cluster and I am running into a few issues with going to the service from outside of the cluster.</p>
<p>Currently I have the following services set up:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx NodePort 10.102.30.6 <none> 8080:32155/TCP 110m
awx-operator NodePort 10.110.147.152 <none> 80:31867/TCP 125m
awx-operator-metrics ClusterIP 10.105.190.155 <none> 8383/TCP,8686/TCP 3h17m
awx-postgres ClusterIP None <none> 5432/TCP 3h16m
awx-service ClusterIP 10.102.86.14 <none> 80/TCP 121m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17h
</code></pre>
<p>I did set up a <code>NodePort</code> which is called <code>awx-operator</code>. I did attempt to create an ingress to the application. You can see that below:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: awx-ingress
spec:
rules:
- host: awx.mycompany.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: awx
port:
number: 80
</code></pre>
<p>When I create the ingress, and then run <code>kubectl describe ingress</code>, I get the following output:</p>
<pre><code>Name: awx-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
awx.mycompany.com
/ awx:80 (10.244.1.8:8080)
Annotations: <none>
Events: <none>
</code></pre>
<p>Now I am not too sure whether the <code>default-http-backend:80</code> error is a red herring, as I have seen it in a number of places where people don't seem too worried about it, but please correct me if I am wrong.</p>
<p>Please let me know whether there is anything else I can do to troubleshoot this, and I will get back to you as soon as I can.</p>
| <p>You are right and the blank address is the issue here. In traditional <em>cloud</em> environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster.</p>
<p><em>Bare-metal</em> environments on the other hand lack this option, requiring from you a slightly different setup to offer the same kind of access to external consumers:</p>
<p><img src="https://kubernetes.github.io/ingress-nginx/images/baremetal/baremetal_overview.jpg" alt="Bare-metal environment" /></p>
<p>This means you have to do some additional gymnastics to make the ingress work. And you have basically two main options here (all well described <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#bare-metal-considerations" rel="nofollow noreferrer">here</a>):</p>
<ul>
<li>A pure software solution: <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb" rel="nofollow noreferrer">MetalLB</a></li>
<li>Over the <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service" rel="nofollow noreferrer">NodePort</a> service.</li>
</ul>
<p>What happens here is that you basically create a Service of type <code>NodePort</code> with a selector that matches your ingress controller pod, and it then routes the traffic according to your ingress object:</p>
<pre class="lang-yaml prettyprint-override"><code># Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
labels:
helm.sh/chart: ingress-nginx-3.30.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.46.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
</code></pre>
<p>The full nginx deployment that contains this service can be found <a href="https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal" rel="nofollow noreferrer">here</a>.</p>
<p>If you wish to skip the ingress, you can just use the NodePort service <code>awx</code> and reach it directly.</p>
|
<p>I use k8s nginx ingress controller before my backend, having two instances. When web-cleint sends quite big request to ingress - nginx for POST/PUT/GET request always returns 400 Bad request. Btw, no another explanation or details for this error, or at least it is logged in some place where I cannot find it or identify that my problem is.</p>
<p>After googling I figured out the recipe to fix this: apply <code>large_client_header_buffers</code> with increased value: that's exactly what I did - now my buffer size is <code>4 256k</code>. But no effect , I still get this error.</p>
<p>Pls give me any idea how to procede this problem</p>
| <p>So the answer is: nginx is not to blame for the described behaviour.
After a thorough investigation of the logs of the Java app behind nginx, this exception was noticed:</p>
<pre><code>[INFO ] 2021-05-10 16:20:56.354 --- [io-10104-exec-4] org.apache.juli.logging.DirectJDKLog : Error parsing HTTP request header
Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Request header is too large
</code></pre>
<p>And because of this detail (<code>Note: further occurrences of HTTP request parsing errors will be logged at DEBUG level.</code>) the error was easy to miss when skimming the log.</p>
<p>Summing up, the solution was to increase the Spring Boot property <code>server.max-http-header-size</code> to a more appropriate value. The default value was 8 KB.</p>
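<p>For example, in <code>application.properties</code> (the 16 KB value is just an illustration; pick whatever fits your largest expected headers):</p>

```properties
# Default is 8KB; raise it so large request headers are accepted
# instead of triggering "Request header is too large".
server.max-http-header-size=16KB
```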
|
<p>I would like to extend the default "service port range" in <a href="https://docs.k0sproject.io/v1.21.0+k0s.0/" rel="nofollow noreferrer">K0s Kubernetes distro</a>.</p>
<p>I know that in kubernetes, setting <code>--service-node-port-range</code> option in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> will do the trick.</p>
<p>But, how to do so or where is that option in the <code>K0s</code> distro?</p>
<p>It looks like you could use <code>spec.api.extraArgs</code> to pass the <code>service-node-port-range</code> parameter to the api-server process.</p>
<p><a href="https://docs.k0sproject.io/v1.21.0+k0s.0/configuration/#specapi" rel="nofollow noreferrer">Spec api</a>:</p>
<blockquote>
<p><strong>extraArgs</strong>: Map of key-values (strings) for any extra arguments you wish to pass down to Kubernetes api-server process</p>
</blockquote>
<p>Example:</p>
<pre><code>apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
name: k0s
spec:
api:
extraArgs:
service-node-port-range: 30000-32767
</code></pre>
|
<p>Say I have some special nodes in my cluster, and I want to be able to identify with a pod label all pods running on these nodes.</p>
<p>Taints & tolerations are close to this, but afaik tolerations aren’t labels, they’re their own thing, and can’t be referenced in places where you expect a label selector, right? Is there any way to require all pods running on these nodes to identify themselves in a way that can then be queried via label selectors?</p>
| <p>There are two parts of the problem to solve:<br />
<strong>1. Ensure that pods with specific labels are scheduled to specific set of nodes.</strong><br />
This can be done by using custom scheduler or a plugin for default scheduler (more <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" rel="nofollow noreferrer">here</a>, <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/scheduling-framework/" rel="nofollow noreferrer">here</a>, <a href="https://developer.ibm.com/technologies/containers/articles/creating-a-custom-kube-scheduler/" rel="nofollow noreferrer">here</a>, <a href="https://github.com/IBM/k8s-custom-scheduler" rel="nofollow noreferrer">here</a> and <a href="https://github.com/kubernetes-sigs/scheduler-plugins" rel="nofollow noreferrer">here</a>).</p>
<p><strong>2. Prevent other schedulers to schedule pods to that nodes</strong><br />
This can by achieved by using <code>nodeAffinity</code> or node taints (more <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-isolation-restriction" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">here</a>)</p>
<p>A complete solution may require an additional apiserver admission controller that sets all those properties on any pod that is supposed to carry the specific label.<br />
Unfortunately, there is no easy built-in solution as of yet. Hopefully the above will point you in the right direction.</p>
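<p>For part 2, a sketch of combining a node taint with a matching toleration and <code>nodeAffinity</code> in the pod spec (the node name, taint key, label and pod name are all made up for illustration):</p>

```yaml
# First taint the special nodes so ordinary pods are kept away, e.g.:
#   kubectl taint nodes special-node-1 dedicated=special:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: special-workload            # hypothetical name
  labels:
    runs-on: special                # the label you want to query later
spec:
  # Tolerate the taint so this pod is allowed onto the special nodes...
  tolerations:
    - key: dedicated
      operator: Equal
      value: special
      effect: NoSchedule
  # ...and require them, so it cannot land anywhere else.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: dedicated
                operator: In
                values: ["special"]
```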
|
<p>I installed the Kubernetes and performed kubeadm init and join from the worker too. But when i run kubectl get nodes it gives the following response</p>
<p>the server doesn't have a resource type "nodes"</p>
<p>What might be the problem here? COuld not see anything in the /var/log/messages </p>
<p>Any hints here?</p>
| <p>In my case, I wanted to see the description of my pods.</p>
<p>When I used <code>kubectl describe postgres-deployment-866647ff76-72kwf</code>, the error said <strong>error: the server doesn't have a resource type "postgres-deployment-866647ff76-72kwf"</strong>.</p>
<p>I corrected it by adding <code>pod</code>, before the pod name, as follows:</p>
<pre><code>kubectl describe pod postgres-deployment-866647ff76-72kwf
</code></pre>
|
<p>I am learning about highly available distributed systems and some of the concepts that keep coming up are load balancing (Nginx) and container orchestration (Kubernetes). Right now my simplified understanding of them is as so:</p>
<h3>Nginx</h3>
<ul>
<li>Web server that handles Http requests</li>
<li>Performs load balancing via reverse proxy to other servers (usually done in a round robin manner)</li>
<li>Maps a single IP (the IP of the Nginx server) to many IPs (nodes which we are load balancing over).</li>
</ul>
<h3>Kubernetes</h3>
<ul>
<li>Container orchestration tool which keeps a defined state of a container cluster.</li>
<li>Maps a single IP (the IP of the control plane?) to many IPs (nodes which have a container instance running on them).</li>
</ul>
<p>So my question is, do we use both of these tools in conjunction? It seems like there is some overlap?</p>
<p>For example, if I was creating a NodeJS app to act as a microservice which exposes a REST API, would I just simply deploy my app in a Docker container, then let Kubernetes manage it? I would not need a load balancer like Nginx in front of my Kubernetes cluster?</p>
| <blockquote>
<p>So my question is, do we use both of these tools in conjunction? It seems like there is some overlap?</p>
</blockquote>
<p>You seem to have mixed a few concepts. Don't look to much on the number of IP addresses, but more on the <strong>role</strong> of the different components.</p>
<h2>Load Balancer / Gateway / Nginx</h2>
<p>You probably want some form of Gateway or reverse proxy with a <strong>static known IP address</strong> (and DNS name) so that traffic from Internet can find its way to your services in the cluster. When using Kubernetes, it is common that your services run in a local network, but the <strong>Gateway</strong> or reverse proxy is typically the way into your cluster.</p>
<h2>Kubernetes API / Control Plane</h2>
<p>This is an API for <strong>managing</strong> Kubernetes resources, e.g. deploy a new version of your apps. This API is only for management / administration. Your customer traffic does not use this API. You want to use strong authentication for this, only usable by you and your team. Pods in your cluster <em>can</em> use this API, but they need a <em>Service Account</em> and proper <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC Authorization</a>.</p>
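<p>As a rough sketch (the hostname and service name below are hypothetical), the "way into your cluster" is typically expressed as an Ingress resource that the gateway/reverse proxy serves, mapping a public DNS name to an internal Service:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - host: example.com        # public DNS name pointing at the gateway's static IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service   # internal Service, not directly reachable from outside
            port:
              number: 80
</code></pre>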
|
<p>I have an application deployed to Kubernetes that depends on an outside application. Sometimes the connection between these 2 goes to an invalid state, and that can only be fixed by restarting my application.</p>
<p>To do automatic restarts, I have configured a liveness probe that will verify the connection.</p>
<p>This has been working great, however, I'm afraid that if that outside application goes down (such that the connection error isn't just due to an invalid pod state), all of my pods will immediately restart, and my application will become completely unavailable. I want it to remain running so that functionality not depending on the bad service can continue.</p>
<p>I'm wondering if a pod disruption budget would prevent this scenario, as it limits the # of pods down due to a "voluntary" disruption. However, the K8s docs don't state whether liveness probe failures are a voluntary disruption. Are they?</p>
| <p>I would say, accordingly to the documentation:</p>
<blockquote>
<h3>Voluntary and involuntary disruptions</h3>
<p>Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error.</p>
<p>We call these unavoidable cases <em>involuntary disruptions</em> to an application. Examples are:</p>
<ul>
<li>a hardware failure of the physical machine backing the node</li>
<li>cluster administrator deletes VM (instance) by mistake</li>
<li>cloud provider or hypervisor failure makes VM disappear</li>
<li>a kernel panic</li>
<li>the node disappears from the cluster due to cluster network partition</li>
<li>eviction of a pod due to the node being <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">out-of-resources</a>.</li>
</ul>
<p>Except for the out-of-resources condition, all these conditions should be familiar to most users; they are not specific to Kubernetes.</p>
<p>We call other cases <em>voluntary disruptions</em>. These include both actions initiated by the application owner and those initiated by a Cluster Administrator. Typical application owner actions include:</p>
<ul>
<li>deleting the deployment or other controller that manages the pod</li>
<li>updating a deployment's pod template causing a restart</li>
<li>directly deleting a pod (e.g. by accident)</li>
</ul>
<p>Cluster administrator actions include:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">Draining a node</a> for repair or upgrade.</li>
<li>Draining a node from a cluster to scale the cluster down (learn about <a href="https://github.com/kubernetes/autoscaler/#readme" rel="nofollow noreferrer">Cluster Autoscaling</a> ).</li>
<li>Removing a pod from a node to permit something else to fit on that node.</li>
</ul>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Disruptions</a></em></p>
</blockquote>
<p><strong>So your example is quite different, and according to my knowledge it's neither a voluntary nor an involuntary disruption.</strong></p>
<hr />
<p>Also taking a look on another Kubernetes documentation:</p>
<blockquote>
<h3>Pod lifetime</h3>
<p>Like individual application containers, Pods are considered to be relatively ephemeral (rather than durable) entities. Pods are created, assigned a unique ID (<a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids" rel="nofollow noreferrer">UID</a>), and scheduled to nodes where they remain until termination (according to restart policy) or deletion. If a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node</a> dies, the Pods scheduled to that node are <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection" rel="nofollow noreferrer">scheduled for deletion</a> after a timeout period.</p>
<p>Pods do not, by themselves, self-heal. If a Pod is scheduled to a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">node</a> that then fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">controller</a>, that handles the work of managing the relatively disposable Pod instances.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Pod lifecycle: Pod lifetime</a></em></p>
</blockquote>
<blockquote>
<h3>Container probes</h3>
<p>The kubelet can optionally perform and react to three kinds of probes on running containers (focusing on a <code>livenessProbe</code>):</p>
<ul>
<li><code>livenessProbe</code>: Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restart policy</a>. If a Container does not provide a liveness probe, the default state is <code>Success</code>.</li>
</ul>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Pod lifecycle: Container probes</a></em></p>
<h3>When should you use a liveness probe?</h3>
<p>If the process in your container is able to crash on its own whenever it encounters an issue or becomes unhealthy, you do not necessarily need a liveness probe; the kubelet will automatically perform the correct action in accordance with the Pod's <code>restartPolicy</code>.</p>
<p>If you'd like your container to be killed and restarted if a probe fails, then specify a liveness probe, and specify a <code>restartPolicy</code> of Always or OnFailure.</p>
<p>-- <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#when-should-you-use-a-startup-probe" rel="nofollow noreferrer">Kubernetes.io: Docs: Concepts: Workloads: Pods: Pod lifecycle: When should you use a startup probe</a></em></p>
</blockquote>
<p>According to this information, it would be better to create a custom liveness probe that treats internal process health checks separately from the external dependency (liveness) health check. In the first scenario (an unhealthy internal process) your container should stop/terminate its process, unlike the second case involving the external dependency.</p>
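<p>For illustration, here is a sketch of how that separation could look in the Pod spec (the endpoint paths are hypothetical): the liveness probe checks only in-process health, while the external dependency is checked by a readiness probe, which on failure only removes the Pod from Service endpoints instead of restarting it:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz        # hypothetical endpoint: internal process health only
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready          # hypothetical endpoint: also checks the external dependency
    port: 8080
  periodSeconds: 10
</code></pre>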
<p>Answering following question:</p>
<blockquote>
<p>I'm wondering if a pod disruption budget would prevent this scenario.</p>
</blockquote>
<p><strong>In this particular scenario PDB will not help.</strong></p>
<hr />
<p>To give more visibility to the comment I've made, here are additional resources on the matter that could prove useful to other community members:</p>
<ul>
<li><em><a href="https://blog.risingstack.com/designing-microservices-architecture-for-failure/" rel="nofollow noreferrer">Blog.risingstack.com: Designing microservices architecture for failure</a></em></li>
<li><em><a href="https://loft.sh/blog/kubernetes-readiness-probes-examples-common-pitfalls/#external-dependencies" rel="nofollow noreferrer">Loft.sh: Blog: Kubernetes readiness probles examples common pitfalls: External depenedencies</a></em></li>
<li><em><a href="https://cloud.google.com/architecture/scalable-and-resilient-apps#resilience_designing_to_withstand_failures" rel="nofollow noreferrer">Cloud.google.com: Archiecture: Scalable and resilient apps: Resilience designing to withstand failures</a></em></li>
</ul>
|
<p>We have a use case to monitor kubernetes clusters and I am trying to find the list of exceptions thrown by kubernetes to reflect the status of the k8s server (in a namespace) while trying to submit a job on the UI.</p>
<p>Example: if k8s server throws <code>ClusterNotFound</code> exception that means we cannot submit any more jobs to that api server.</p>
<p>Is there such a comprehensive list?</p>
<p>I came across <a href="https://github.com/kubernetes/apimachinery/blob/master/pkg/api/errors/errors.go" rel="nofollow noreferrer">this</a> in Golang. Will this be it? Does Java have something like this?</p>
| <p>The file you are referencing is part of a Kubernetes library used by many Kubernetes components for API request field validation. As all Kubernetes components are written in Go and I couldn't find any plans to port Kubernetes to Java, a Java version of that file is unlikely to exist.</p>
<p>However, there is an officially supported Kubernetes client library written in Java, so you can check for the proper modules to validate API requests and process API responses in the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">java-client repository</a> or on the <a href="https://javadoc.io/doc/io.kubernetes/client-java-api/latest/index.html" rel="nofollow noreferrer">javadoc site</a>.</p>
<p>For example, objects that are used to contain proper or improper HTTP replies from Kubernetes apiserver: <a href="https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/models/V1Status.html" rel="nofollow noreferrer">V1Status</a> and <a href="https://javadoc.io/doc/io.kubernetes/client-java-api/latest/io/kubernetes/client/openapi/ApiException.html" rel="nofollow noreferrer">ApiExceptions</a>, <a href="https://github.com/kubernetes-client/java/blob/8b08ec39ab12542a3fed1f4a92d67b7e7a393e14/kubernetes/src/main/java/io/kubernetes/client/openapi/ApiException.java" rel="nofollow noreferrer">(repository link)</a></p>
<p>Please consider checking the java-client usage <a href="https://github.com/kubernetes-client/java/wiki/3.-Code-Examples" rel="nofollow noreferrer">examples</a> for better understanding.</p>
<p>Detailed Kubernetes RESTful API reference could be found on the <a href="https://kubernetes.io/docs/reference/kubernetes-api/" rel="nofollow noreferrer">official page</a><br />
For example: <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/#create-create-a-deployment" rel="nofollow noreferrer">Deployment create request</a></p>
<p>If you are really interested in Kubernetes cluster monitoring and logging aspects, please consider reading the following articles first:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/" rel="nofollow noreferrer">Metrics For Kubernetes System Components</a></li>
<li><a href="https://www.datadoghq.com/blog/kubernetes-control-plane-monitoring/" rel="nofollow noreferrer">Kubernetes Control Plane monitoring with Datadog</a></li>
<li><a href="https://sysdig.com/blog/monitor-kubernetes-control-plane/" rel="nofollow noreferrer">How to monitor Kubernetes control plane</a></li>
<li><a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">Logging Architecture</a></li>
<li><a href="https://logz.io/blog/a-practical-guide-to-kubernetes-logging/" rel="nofollow noreferrer">A Practical Guide to Kubernetes Logging</a></li>
</ul>
|
<p>I have a kubernetes deployment managed by a helm chart that I am planning an upgrade of. The app has 2 persistent volumes attached which are are EBS volumes in AWS. If the deployment goes wrong and needs rolling back I might also need to roll back the EBS volumes. How would one manage that in K8s? I can easily create the volume manually in AWS from my snapshot I've taken pre deployment but for the deployment to use it would I need to edit the pv yaml file to point to my new volume ID? Or would I need to create a new PV using the volume ID and a new PVC and then edit my deployment to use that claim name?</p>
| <p>First, you need to define a StorageClass with <code>reclaimPolicy: Delete</code>.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/storage-classes/</a></p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
</code></pre>
<p>Then, in your Helm chart, you need to use that storage class. When you delete the Helm release, the persistent volume claim will be deleted, and because the storage class uses <code>reclaimPolicy: Delete</code>, the corresponding persistent volume will also be deleted.</p>
<p>Be careful though. Once PV is deleted, you will not be able to recover that volume's data. There is no "recycle bin".</p>
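<p>If you do need to roll back the data to a pre-deployment snapshot, one approach (the volume ID and names below are hypothetical) is to pre-create a PersistentVolume that points at the EBS volume restored from your snapshot, bind a new claim to it, and edit the deployment to use that claim:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the restored volume if the claim goes away
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0       # hypothetical: EBS volume created from the snapshot
    fsType: ext4
</code></pre>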
|
<p>I use Kubernetes which v1.19.7, when I run the CronJob sample</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: express-learn-cronjob
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
<p>I get <code>unable to recognize "app-cronjob.yml": no matches for kind "CronJob" in version "batch/v1"</code></p>
<p>I can get the batch API versions by running <code>kubectl api-versions | grep batch</code>:</p>
<pre class="lang-sh prettyprint-override"><code>batch/v1
batch/v1beta1
</code></pre>
<p>Is there anything I missed? How can I fix it?</p>
| <p>For Kubernetes version 1.19.x you need to use <code>batch/v1beta1</code> as apiVersion for your CronJob.</p>
<p>That is documented in the doc version 1-19:</p>
<p><a href="https://v1-19.docs.kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="noreferrer">https://v1-19.docs.kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a></p>
<p>The <code>batch/v1</code> API version for CronJob is stable only since Kubernetes 1.21.</p>
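<p>A sketch of the fix: only the <code>apiVersion</code> line of your manifest needs to change on 1.19; the rest of the spec stays the same.</p>
<pre><code>apiVersion: batch/v1beta1   # batch/v1 for CronJob requires Kubernetes >= 1.21
kind: CronJob
metadata:
  name: express-learn-cronjob
spec:
  schedule: "*/1 * * * *"
  # ... jobTemplate unchanged ...
</code></pre>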
|
<p>I am learning about highly available distributed systems and some of the concepts that keep coming up are load balancing (Nginx) and container orchestration (Kubernetes). Right now my simplified understanding of them is as so:</p>
<h3>Nginx</h3>
<ul>
<li>Web server that handles Http requests</li>
<li>Performs load balancing via reverse proxy to other servers (usually done in a round robin manner)</li>
<li>Maps a single IP (the IP of the Nginx server) to many IPs (nodes which we are load balancing over).</li>
</ul>
<h3>Kubernetes</h3>
<ul>
<li>Container orchestration tool which keeps a defined state of a container cluster.</li>
<li>Maps a single IP (the IP of the control plane?) to many IPs (nodes which have a container instance running on them).</li>
</ul>
<p>So my question is, do we use both of these tools in conjunction? It seems like there is some overlap?</p>
<p>For example, if I was creating a NodeJS app to act as a microservice which exposes a REST API, would I just simply deploy my app in a Docker container, then let Kubernetes manage it? I would not need a load balancer like Nginx in front of my Kubernetes cluster?</p>
| <p>Kubernetes gives you a self-contained/sandboxed environment where your services are safe from the outside world running on private non-routable subnets. Because pods are ephemeral, their IPs can change anytime. Hence, Kubernetes has a "Service" concept. Different micro-services interact with each other using servicename:port so that they don't have to worry about the POD IPs.</p>
<p>However, if you want to access your application from outside (the internet), you need to configure an ingress controller. This ingress controller can be implemented using Nginx.</p>
<p>So, your ingress controller (Nginx) will receive requests and send them to the Service, which load balances them across the Pods to meet the desired state.</p>
<p>In large systems, the Nginx ingress controllers may need to scale as well to serve the incoming requests.</p>
<p>So, long story short, you will need both if you want to scale and for routing purposes.</p>
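<p>As a rough sketch (all names below are hypothetical), the layering for the NodeJS microservice would look like this: a Service that load balances across the app's Pods, plus an Nginx ingress rule in front of it for outside traffic:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: node-api
spec:
  selector:
    app: node-api          # matches the labels on the NodeJS Pods
  ports:
  - port: 80
    targetPort: 3000       # the port the NodeJS app listens on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-api-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: node-api
            port:
              number: 80
</code></pre>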
|
<p>I've created a simple K8s deployment with the <code>kubectl create</code> command</p>
<pre><code>kubectl create -f k8-deployment.yaml
</code></pre>
<p>My <code>k8-deployment.yaml</code> file looks like this</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mage-di
name: mage-di
spec:
replicas: 1
selector:
matchLabels:
app: mage-di
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mage-di
spec:
containers:
- image: astorm/mage-di
name: mage-di
imagePullPolicy: Never
resources: {}
status: {}
</code></pre>
<p>This results in a single pod being started.</p>
<p>I want to tell my k8 cluster that more pods are needed to handle an expected traffic spike.</p>
<p>How should I do this? If I look at <code>kubectl help</code> I see there's an <code>edit</code> command that allows me to edit a deployment object's configuration, but this requires an interactive editor. Also, I'm new to K8s and I'm unsure if editing a deployment in place and updating its replica count is enough to trigger the <em>proper</em> creation of new pods. If I look at other <code>kubectl</code> commands I see there's also a <code>rollout</code>, <code>apply</code> and <code>patch</code> command that might do what I want.</p>
<p>Is there a canonically accepted way to do what I want, or is K8s the sort of tech where I just need to experiment and hope for the best?</p>
| <p>You can do this in two ways. Either <em>imperative</em> - a quick command Or <em>declarative</em> - good for a production environment where you store your Deployment-manifest in Git.</p>
<p><strong>Imperative way</strong>: (this will then diverge from what you have in your yaml-file)</p>
<pre><code>kubectl scale deployment mage-di --replicas=2
</code></pre>
<p><strong>Declarative way</strong>, edit this line in your Yaml file:</p>
<pre><code>replicas: 2
</code></pre>
<p>then apply it to the cluster with:</p>
<pre><code>kubectl apply -f k8-deployment.yaml
</code></pre>
<p>See also:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="noreferrer">Declarative config management</a></li>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/" rel="noreferrer">Imperative config management with commands</a></li>
</ul>
|
<p>I have a microservice that is working on my laptop. However, I am using docker compose. I am working to deploy to a kubernetes cluster which I have already set up. I am stuck on making data persistent. E.g here is my mongodb in docker-compose</p>
<pre><code>systemdb:
container_name: system-db
image: mongo:4.4.1
restart: always
ports:
- '9000:27017'
volumes:
- ./system_db:/data/db
networks:
- backend
</code></pre>
<p>Since it is an on premise solution, I went with an NFS server. I have created a Persistent Volume and Persistent Volume Claim (pvc-nfs-pv1) which seem to work well when testing with nginx. However, I don't know how to deploy a mongodb statefulset to use the pvc. I am not implementing a replicaset.</p>
<p>Here is my yaml:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongod
spec:
serviceName: mongodb-service
replicas: 1
selector:
matchLabels:
role: mongo
template:
metadata:
labels:
role: mongo
environment: test
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongod-container
image: mongo
resources:
requests:
cpu: "0.2"
memory: 200Mi
ports:
- containerPort: 27017
volumeMounts:
- name: pvc-nfs-pv1
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: pvc-nfs-pv1
annotations:
volume.beta.kubernetes.io/storage-class: "standard"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 500Mi
</code></pre>
<p>How do i do it?</p>
| <p>Ok, I have a solution. It works by selecting the volume using a <code>matchLabels</code> selector.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-data-volume
labels:
app: moderetic
type: mongodb
role: data
spec:
storageClassName: hcloud-volumes
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
csi:
volumeHandle: "11099996"
driver: csi.hetzner.cloud
fsType: ext4
---
---
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
name: system-mongodb
labels:
app: moderetic
type: mongodb
spec:
members: 1
type: ReplicaSet
version: "4.2.6"
logLevel: INFO
security:
authentication:
modes: ["SCRAM"]
users:
- name: moderetic
db: moderetic
passwordSecretRef:
name: mongodb-secret
roles:
- name: clusterAdmin
db: moderetic
- name: userAdminAnyDatabase
db: moderetic
scramCredentialsSecretName: moderetic-scram-secret
additionalMongodConfig:
storage.wiredTiger.engineConfig.journalCompressor: zlib
persistent: true
statefulSet:
spec:
template:
spec:
containers:
- name: mongod
resources:
requests:
cpu: 1
memory: 1Gi
limits:
memory: 8Gi
- name: mongodb-agent
resources:
requests:
memory: 50Mi
limits:
cpu: 500m
memory: 256Mi
volumeClaimTemplates:
- metadata:
name: data-volume
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: hcloud-volumes
resources:
requests:
storage: 10Gi
selector:
matchLabels:
app: moderetic
type: mongodb
role: data
- metadata:
name: logs-volume
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: hcloud-volumes
resources:
requests:
storage: 10Gi
selector:
matchLabels:
app: moderetic
type: mongodb
role: logs
</code></pre>
|
<p>I am looking for a simple method to get the storage used and storage allocated for a persistent volume dynamically claimed by PVC for any pod. Is there any rest API or <code>oc</code> command for the same?</p>
<p>I am new to OpenShift/Kubernetes. As per my investigation, I could not find any such command. <code>oc adm top</code> command describes the usage statistics for nodes and pods only.</p>
| <p>You can run <code>oc rsh podname</code> to access the pod's command line, and then <code>du -c /path/to/pv</code> or <code>du -shc /path/to/pv</code> to see the storage used on the persistent volume.</p>
|
<p>Is it possible to trigger Pub/Sub event or Google Cloud Function in the GCloud, when my Node pool is auto-scaling under high performance conditions?</p>
<p>Or is there any other analytics event which can be used to trigger Cloud Function?</p>
| <p>You can be notified (in Pub/Sub) when a node pool is scaled up by setting a <a href="https://cloud.google.com/logging/docs/export" rel="nofollow noreferrer">sink on Stackdriver logs</a>, with a Pub/Sub topic as destination.</p>
<p>You have to listen to specific logs corresponding to the scale up of your node pool. There are 2 filters you can use for your sink. Any message matching one of these filters will be sent to a Pub/Sub topic, meaning that you can then have a Cloud Function triggered when a message is published to that topic.</p>
<h3>Filter on instance group</h3>
<p>You can use this filter on the instance group (a GKE node pool is in fact a managed instance group of Compute Engine VMs):</p>
<pre><code>resource.type="gce_instance_group_manager" AND
resource.labels.instance_group_manager_name:"gke-<cluster_name>-<node_pool_name>" AND
protoPayload.methodName="v1.compute.instanceGroupManagers.resize" AND
operation.last="true"
</code></pre>
<p>(please replace <code><cluster_name></code> with the name of your cluster, and <code><node_pool_name></code> with the node pool name, like <code>default-pool</code>).</p>
<h3>Filter on cluster autoscaler logs</h3>
<p>You can also use this filter :</p>
<pre><code>resource.type="k8s_cluster" AND
logName="projects/<project_id>/logs/container.googleapis.com%2Fcluster-autoscaler-visibility" AND
jsonPayload.decision.scaleUp.increasedMigs.mig.nodepool="<node_pool_name>"
</code></pre>
<p>(please replace <code><project_id></code> with the id of your project, and <code><node_pool_name></code> with the node pool name, like <code>default-pool</code>).</p>
<p>What's interesting with this filter is that you can know which pod caused the scale up, and how many nodes have been added, by looking inside the <code>jsonPayload</code>:</p>
<pre><code>jsonPayload: {
decision: {
eventId: "41ddc559-c616-4068-8ba2-2f26eadcc7bd"
decideTime: "1620897027"
scaleUp: {
increasedMigs: [
0: {
mig: {
name: "gke-<cluster_name>-<node_pool_name>-xxxxxxxx-grp"
nodepool: "<node_pool_name>"
zone: "<zone>"
}
requestedNodes: 1
}
]
triggeringPods: [
0: {
name: "<pod_name_causing_the_scale_up>"
namespace: "<pod_namespace>"
}
]
triggeringPodsTotalCount: 1
}
}
}
</code></pre>
|
<p>I am new to Kubernetes and this is my first time deploying a react-django web app to Kubernetes cluster.</p>
<p>I have created:</p>
<ol>
<li>frontend.yaml # to run npm server</li>
<li>backend.yaml # to run django server</li>
<li>backend-service.yaml # to make django server accessible for react.</li>
</ol>
<p>In my frontend.yaml file I am passing <code>REACT_APP_HOST</code> and <code>REACT_APP_PORT</code> as env variables and changed URLs in my react app to:</p>
<pre><code>axios.get('http://'+`${process.env.REACT_APP_HOST}`+':'+`${process.env.REACT_APP_PORT}`+'/todolist/api/bucket/').then(res => {
setBuckets(res.data);
setReload(false);
}).catch(err => {
console.log(err);
})
</code></pre>
<p>and my URL becomes <code>http://backend-service:8000/todolist/api/bucket/</code></p>
<p>here <code>backend-service</code> is name of backend-service that I am passing using env variable <code>REACT_APP_HOST</code>.</p>
<p>I am not getting any errors, but when I used <code>kubectl port-forward <frontend-pod-name> 3000:3000</code> and accessed <code>localhost:3000</code> I saw my react app page but it did not hit any django apis.</p>
<p>On chrome, I am getting error:</p>
<pre><code>net::ERR_NAME_NOT_RESOLVED
</code></pre>
<p>and in Mozilla:</p>
<pre><code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend-service:8000/todolist/api/bucket/. (Reason: CORS request did not succeed).
</code></pre>
<p>Please help with this issue; I have spent 3 days on it without getting any ideas.</p>
<p><strong>frontend.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: frontend
name: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: frontend
spec:
containers:
- image: 1234567890/todolist:frontend-v13
name: react-todolist
env:
- name: REACT_APP_HOST
value: "backend-service"
- name: REACT_APP_PORT
value: "8000"
ports:
- containerPort: 3000
volumeMounts:
- mountPath: /var/log/
name: frontend
command: ["/bin/sh", "-c"]
args:
- npm start;
volumes:
- name: frontend
hostPath:
path: /var/log/
</code></pre>
<p><strong>backend.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: backend
name: backend
spec:
replicas: 1
selector:
matchLabels:
app: backend
template:
metadata:
creationTimestamp: null
labels:
app: backend
spec:
serviceAccountName: backend-sva
containers:
- image: 1234567890/todolist:backend-v11
name: todolist
env:
- name: DB_NAME
value: "todolist"
- name: MYSQL_HOST
value: "mysql-service"
- name: MYSQL_USER
value: "root"
- name: MYSQL_PORT
value: "3306"
- name: MYSQL_PASSWORD
value: "mysql123"
ports:
- containerPort: 8000
volumeMounts:
- mountPath: /var/log/
name: backend
command: ["/bin/sh", "-c"]
args:
- apt-get update;
apt-get -y install vim;
python manage.py makemigrations bucket;
python manage.py migrate;
python manage.py runserver 0.0.0.0:8000
volumes:
- name: backend
hostPath:
path: /var/log/
</code></pre>
<p><strong>backend-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: backend
name: backend-service
spec:
ports:
- port: 8000
targetPort: 8000
selector:
app: backend
status:
loadBalancer: {}
</code></pre>
<p><strong>frontend docker file</strong></p>
<pre><code>FROM node:14.16.1-alpine
COPY package.json /app/react-todolist/react-todolist/
WORKDIR /app/react-todolist/react-todolist/
RUN npm install
COPY . /app/react-todolist/react-todolist/
EXPOSE 3000
</code></pre>
<p><strong>backend docker file</strong></p>
<pre><code>FROM python:3.6
COPY requirements.txt ./app/todolist/
WORKDIR /app/todolist/
RUN pip install -r requirements.txt
COPY . /app/todolist/
</code></pre>
<p><strong>django settings</strong></p>
<pre><code>CORS_ORIGIN_ALLOW_ALL=True
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# Rest Frame Work
'rest_framework',
# CORS
'corsheaders',
# Apps
'bucket',
]
MIDDLEWARE = [
'corsheaders.middleware.CorsMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todolist-ingress
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /
backend:
serviceName: frontend-service
servicePort: 3000
- path: /
backend:
serviceName: backend-service
servicePort: 8000
</code></pre>
<p><strong>react axios api</strong></p>
<pre><code>useEffect(() => {
axios.get('http://'+`${process.env.REACT_APP_HOST}`+':'+`${process.env.REACT_APP_PORT}`+'/todolist/api/bucket/', {
headers: {"Access-Control-Allow-Origin": "*"}
}).then(res => {
setBuckets(res.data);
setReload(false);
}).catch(err => {
console.log(err);
})
}, [reload])
</code></pre>
<p><strong>web app github link</strong> <a href="https://github.com/vgautam99/ToDoList" rel="nofollow noreferrer">https://github.com/vgautam99/ToDoList</a></p>
| <p>Welcome to the community!</p>
<p>I reproduced your example and made it work fine.
I forked your repository, made some changes to the js files and package.json, and added Dockerfiles (you can see this commit <a href="https://github.com/fivecatscats/ToDoList/commit/74790836659232284832688beb2e1779660d7615" rel="nofollow noreferrer">here</a>).</p>
<p>Since I didn't change the database settings in <code>settings.py</code>, I attached it as a <code>configMap</code> to the backend deployment (see <a href="https://github.com/fivecatscats/ToDoList/blob/master/backend-deploy.yaml#L37-L39" rel="nofollow noreferrer">here</a> how it's done). The config map was created by this command:</p>
<p><code>kubectl create cm django1 --from-file=settings.py</code></p>
<p>The trickiest part here is to use your domain name <code>kubernetes.docker.internal</code> and add your port with <code>/backend</code> path to environment variables you're passing to your frontend application (see <a href="https://github.com/fivecatscats/ToDoList/blob/master/frontend-deploy.yaml#L21-L24" rel="nofollow noreferrer">here</a>)</p>
<p>Once this is done, it's time to set up an ingress controller (this one uses the apiVersion <code>extensions/v1beta1</code>, as in your example; however, it'll be deprecated soon, so it's advised to use <code>networking.k8s.io/v1</code>; an example of the newer apiVersion is <a href="https://github.com/fivecatscats/ToDoList/blob/master/ingress-after-1-22.yaml" rel="nofollow noreferrer">here</a>):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todolist-backend-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /backend(/|$)(.*)
backend:
serviceName: backend-service
servicePort: 8000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todolist-frontend-ingress
annotations:
spec:
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /
backend:
serviceName: frontend-service
servicePort: 3000
</code></pre>
<p>I set it up in two different ingresses because there are some issues with <code>rewrite-target</code> and <code>regex</code> using root path <code>/</code>. As you can see we use <code>rewrite-target</code> here because requests are supposed to hit <code>/todolist/api/bucket</code> path instead of <code>/backend/todolist/api/bucket</code> path.
Please see <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">nginx rewrite annotations</a></p>
<p>The next step is to find an IP address so you can test your application both from the node where kubernetes is running and from the web.
To find the IP addresses and ports, run <code>kubectl get svc</code> and look for <code>ingress-nginx-controller</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend-service ClusterIP 10.100.242.79 <none> 8000/TCP 21h
frontend-service ClusterIP 10.110.102.96 <none> 3000/TCP 21h
ingress-nginx-controller LoadBalancer 10.107.31.20 192.168.1.240 80:31323/TCP,443:32541/TCP 8d
</code></pre>
<p>There are two options: <code>CLUSTER-IP</code>, and <code>EXTERNAL-IP</code> if you have a load balancer set up.
On your kubernetes control plane you can run a simple check with the <code>curl</code> command using the <code>CLUSTER-IP</code> address. In my case it looks like:</p>
<p><code>curl http://kubernetes.docker.internal/ --resolve kubernetes.docker.internal:80:10.107.31.20</code></p>
<p>And next test is:</p>
<p><code>curl -I http://kubernetes.docker.internal/backend/todolist/api/bucket --resolve kubernetes.docker.internal:80:10.107.31.20</code></p>
<p>Output will be like:</p>
<pre><code>HTTP/1.1 301 Moved Permanently
Date: Fri, 14 May 2021 12:21:59 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Location: /todolist/api/bucket/
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Vary: Origin
</code></pre>
<p>The next step is to access your application via a web browser. You'll need to edit <code>/etc/hosts</code> on your local machine (Linux/macOS; on Windows the file location differs, but it's easy to find) to map the <code>kubernetes.docker.internal</code> domain to the proper IP address.</p>
<p>If you're using a <code>load balancer</code> then <code>EXTERNAL-IP</code> is the right address.
If you don't have a <code>load balancer</code> then it's possible to reach out to the node directly. You can find IP address in cloud console and add it to <code>/etc/hosts</code>. In this case you will need to use a different port. In my case it was <code>31323</code> (you can find it above in <code>kubectl get svc</code> output).</p>
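<p>For example, using the load balancer address from the <code>kubectl get svc</code> output above (substitute your own IP), the hosts entry would look like this:</p>
<pre><code># /etc/hosts
192.168.1.240  kubernetes.docker.internal
</code></pre>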
<p>When it's set up, I hit the application in my web-browser by <code>http://kubernetes.docker.internal:31323</code></p>
<p>(Repository is <a href="https://github.com/fivecatscats/ToDoList" rel="nofollow noreferrer">here</a> feel free to use everything you need from it)</p>
|
<p>I'm trying to provision emepheral environments via automation leveraging Kubernetes namespaces. My automation workers deployed in Kubernetes must be able to create Namespaces. So far my experimentation with this led me nowhere. Which binding do I need to attach to the Service Account to allow it to control Namespaces? Or is my approach wrong?</p>
<p>My code so far:</p>
<p><code>deployment.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: k8s-deployer
namespace: tooling
labels:
app: k8s-deployer
spec:
replicas: 1
selector:
matchLabels:
app: k8s-deployer
template:
metadata:
name: k8s-deployer
labels:
app: k8s-deployer
spec:
serviceAccountName: k8s-deployer
containers: ...
</code></pre>
<p><code>rbac.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: k8s-deployer
namespace: tooling
---
# this lets me view namespaces, but not write
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: administer-cluster
subjects:
- kind: ServiceAccount
name: k8s-deployer
namespace: tooling
roleRef:
kind: ClusterRole
name: admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
| <p>To give a pod control over something in Kubernetes you need at least four things:</p>
<ol>
<li>Create or select an existing <code>Role</code>/<code>ClusterRole</code> (you bound the built-in <code>admin</code> ClusterRole, which does not include permission to create namespaces).</li>
<li>Create or select existing <code>ServiceAccount</code> (you created <code>k8s-deployer</code> in namespace <code>tooling</code>).</li>
<li>Put the two together with <code>RoleBinding</code>/<code>ClusterRoleBinding</code>.</li>
<li>Assign the <code>ServiceAccount</code> to a pod.</li>
</ol>
<p>Here's an example that can manage namespaces:</p>
<pre class="lang-yaml prettyprint-override"><code># Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8s-deployer
namespace: tooling
---
# Create a cluster role that allowed to perform
# ["get", "list", "create", "delete", "patch"] over ["namespaces"]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k8s-deployer
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "create", "delete", "patch"]
---
# Associate the cluster role with the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: k8s-deployer
# make sure NOT to mention 'namespace' here or
# the permissions will only have effect in the
# given namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: k8s-deployer
subjects:
- kind: ServiceAccount
name: k8s-deployer
namespace: tooling
</code></pre>
<p>After that you need to mention the service account name in pod <code>spec</code> as you already did. More info about RBAC in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">documentation</a>.</p>
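<p>Assuming the manifest above was applied, one way to verify the binding works is to impersonate the service account with <code>kubectl auth can-i</code>:</p>
<pre><code># should print "yes" once the ClusterRoleBinding is in effect
kubectl auth can-i create namespaces \
  --as=system:serviceaccount:tooling:k8s-deployer
</code></pre>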
|
<p>After restarting the VM, microk8s stopped running.</p>
<pre><code>Executing this command outputs the following:
microk8s status:
microk8s is not running. use microk8s inspect for a deeper inspection.
microk8s kubectl get pod:
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
</code></pre>
<p>I have practically tried all the methods, but there is no result.</p>
| <p>Run <code>ps -ef | grep kubelet</code></p>
<p>Output will be somewhat like this:</p>
<pre><code>root 934 1 5 08:17 ? 00:20:40 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2
</code></pre>
<p>Look for the path of <code>kubelet.conf</code> and then fix the port in that file; kubelet uses this port to connect to kube-apiserver. Once it's fixed, the output of <code>kubectl cluster-info</code> should be somewhat like this:</p>
<pre><code>Kubernetes control plane is running at https://192.168.56.2:6443
KubeDNS is running at https://192.168.56.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
|
<p>In kubernetes POD I have an option to mount a secret or a configmap as a volume mounted to the POD. It would be difficult to access these files as environment variables. So why should I be doing it instead of using them as environment variables?</p>
| <p>This depends on how the application expect to load the secret.</p>
<p>E.g. if the application expects to load an <a href="https://www.tutorialsteacher.com/https/ssl-certificate-format" rel="nofollow noreferrer">SSL certificate file</a>, it is possible to have the certificate stored as a file in a Secret and mount the Secret so that the application can read it as a file.</p>
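<p>A minimal sketch of such a mount (the secret name <code>my-tls</code>, image and paths are made up for illustration). A side benefit of volumes over environment variables: files from a mounted Secret are refreshed automatically when the Secret changes, while environment variables are fixed at container start:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: my-app:v1
    volumeMounts:
    - name: tls
      mountPath: /etc/tls
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: my-tls   # the app reads e.g. /etc/tls/tls.crt as a regular file
</code></pre>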
|
<p>I have been trying to setup Jenkins to utilize Kubernetes as in the tutorials. I have everything working really well, but I have been trying to add some custom images using the Kubernetes Jenkins plugin. It seems that any public images work just fine, but when I create an image and put it in my private Container Registry, the Jenkins slave fails miserably. </p>
<p>I wanted to find out how best to utilize the images in my Container Registry within Jenkins. I found this tutorial (<a href="https://cloud.google.com/solutions/jenkins-on-container-engine#customizing_the_docker_image" rel="nofollow noreferrer">https://cloud.google.com/solutions/jenkins-on-container-engine#customizing_the_docker_image</a>). When I tried those steps by building the jenkins-slave image and pushing it to my repo, it did not work. Every time it complains the slave is offline and is unable to be reached.</p>
| <p>When a container build agent (old term was <code>slave</code>) shows as <em>Offline</em> within Jenkins, this usually means there was an error in the Jenkinsfile if using Pipeline, or there was a problem pulling the agent container image.</p>
<p>If you're using the Kubernetes Plugin, you'll see the error on the Kubernetes pod, and in the build logs in newer versions of the plugin. For docker plugin containers, the error may not be visible in the build log, but would be in the docker logs.</p>
<p>This is usually a problem of authentication or access for pulling the image from the registry.</p>
<p>If you're using the <strong><a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">Jenkins Kubernetes plugin</a></strong>, the Pod YAML or Pod Template must contain the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer"><code>imagePullSecrets</code></a> for the Container Registry from which you wish to pull the container image from.</p>
<p><em>Jenkins Scripted Pipeline:</em></p>
<pre><code>podTemplate(cloud: 'kubernetes', containers: [
    containerTemplate(
      name: 'mine',
      image: 'my-image:v1.2',
      ttyEnabled: true,
    ),
  ],
  imagePullSecrets: [ 'my-credentials-id' ]) {
  node(POD_LABEL) {
    container('mine') {
      sh 'run command'
    }
  }
}
</code></pre>
<p><em>Jenkins Declarative Pipeline:</em></p>
<pre><code>pipeline {
agent {
kubernetes {
cloud 'kubernetes'
defaultContainer 'mine'
yaml '''
apiVersion: v1
kind: Pod
metadata:
labels:
build: my-app
spec:
containers:
- name: mine
image: my-image:v1.2
command:
- cat
tty: true
imagePullSecrets:
- name: my-image-pull-cred
'''
}
}
stages {
stage('Run Build') {
steps {
sh 'mvn -version'
}
}
}
}
</code></pre>
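<p>The pull secret referenced above must exist in the namespace where the agent pods are started. It can be created with <code>kubectl create secret docker-registry</code>; the registry URL, credentials and namespace below are placeholders for your own values:</p>
<pre><code>kubectl create secret docker-registry my-image-pull-cred \
  --docker-server=my-registry.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat keyfile.json)" \
  -n jenkins-agents
</code></pre>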
<p>If you're using the <strong><a href="https://plugins.jenkins.io/docker-plugin/" rel="nofollow noreferrer">Docker plugin</a></strong>, you'll need to define the docker agent <code>withRegistry</code> which allows you to pass a Jenkins credential for the registry from which you wish to pull the container image from.</p>
<p><em>Jenkins Scripted Pipeline:</em></p>
<pre><code>node {
docker.withRegistry('https://my-registry.gcr.io', 'my-credentials-id') {
docker.image('my-image:v1.2').inside {
sh 'run command'
}
}
}
</code></pre>
<p><em>Jenkins Declarative Pipeline:</em></p>
<pre><code>pipeline {
agent {
docker {
image 'my-image:v1.0'
registryUrl 'https://my-registry.gcr.io'
registryCredentialsId 'my-credentials-id'
}
}
stages {
stage('Build') {
steps {
sh 'run command'
}
}
}
}
</code></pre>
|
<p>Having:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: example-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
</code></pre>
<p>And rolebinding:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: example-rolebinding
namespace: default
subjects:
- kind: User
name: example-user
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: example-role
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>How can I get the secret token?</p>
<pre><code>token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
</code></pre>
<p>But there is no secret for the user only the "default-token-xxx".</p>
<p>Do I need to bind a services account or is the token added to default-token-xxx?</p>
| <p>All Kubernetes clusters have two categories of users: service accounts, which are managed by Kubernetes, and normal users (groups exist as a third subject kind). Kubernetes does not have objects (kinds) which represent normal user accounts, and normal users cannot be added to a cluster through an API call. Normal users are typically managed and authenticated through integrations with other authentication protocols such as LDAP, SAML, Azure Active Directory or Kerberos. You can leverage an external identity provider like OIDC to authenticate through a token.</p>
<p>For Service Accounts, as you've correctly noticed, if you don't explicitly create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Kubernetes Service Account</a> in your namespace, you'll only have access to the default service account, which will be <code>default-token-<hash></code>.</p>
<p>A token is not automatically created for a "Normal User", but is automatically created for a Service Account. Service accounts are users managed by the Kubernetes API. They are bound to specific namespaces, and created automatically by the API server or manually through API calls. Service accounts are tied to a set of credentials stored as Secrets.</p>
<p>Kubernetes uses client certificates, bearer tokens, an authenticating proxy, or HTTP basic auth to authenticate API requests through authentication plugins. As HTTP requests are made to the API server, plugins attempt to associate the following attributes with the request:</p>
<ul>
<li>Username: a string which identifies the end user. Common values might be kube-admin or jane@example.com.</li>
<li>UID: a string which identifies the end user and attempts to be more consistent and unique than username.</li>
</ul>
<p>The subject of user authentication is easier answered if we know what the user authentication integration being used is.</p>
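<p>So if the goal is simply a token, create a ServiceAccount, bind your Role to it (use <code>kind: ServiceAccount</code> instead of <code>kind: User</code> in the RoleBinding subjects), and read the token from its secret. A sketch, assuming a service account named <code>example-sa</code> in <code>default</code>:</p>
<pre><code>kubectl create serviceaccount example-sa -n default
secret=$(kubectl get serviceaccount example-sa -n default -o jsonpath='{.secrets[0].name}')
token=$(kubectl get secret "$secret" -n default -o jsonpath='{.data.token}' | base64 --decode)
</code></pre>
<p>Note that on Kubernetes 1.24+ the token secret is no longer created automatically; there you would use <code>kubectl create token example-sa</code> instead.</p>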
|
<p>I have a deployment.yaml which has a readiness probe on the container. (The readiness probe is intended to fail here)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: nginx
name: my-nginx-deployment
spec:
replicas: 2
selector:
matchLabels:
app: nginx
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
readinessProbe:
exec:
command:
- cat
- /server/xyz.txt
initialDelaySeconds: 50
periodSeconds: 10
resources: {}
status: {}
</code></pre>
<p>The pods in the deployment are served using a service of type ClusterIP.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: nginx-service
name: nginx-service
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 80
selector:
app: nginx
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>after applying these yamls using <code>kubectl apply</code>, container in the pods is never ready as the readiness probe is failing, which is expected.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-nginx-deployment-6b788b89c6-f69j7 0/1 Running 0 9m50s
my-nginx-deployment-6b788b89c6-m5qf6 0/1 Running 0 9m50s
</code></pre>
<p>So since these pods are not ready, they should not serve the traffic but when I do</p>
<pre><code>kubectl port-forward services/nginx-service 8086:8080
</code></pre>
<p>I am able to get 200 response and nginx home page on <code>http://127.0.0.1:8086/</code> and I can also see pods logs about serving traffic.</p>
<p>The question is: why are the pods serving traffic when the readiness probe is failing?</p>
<p>PS: I have created cluster on my machine using Kind</p>
| <p>The <code>port-forward</code> <a href="https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#create-connect-portforward-pod-v1-core" rel="nofollow noreferrer">api</a> is a Pod API. The <code>kubectl port-forward</code> command accepts a <em>service</em> name only for convenience: it resolves the service to one of its pods and forwards your port directly to that pod, bypassing the service - so the <em>readiness</em> status is not applicable.</p>
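<p>You can confirm that the service itself would not route to the unready pods by listing its endpoints; while the readiness probe fails the list stays empty even though <code>port-forward</code> still works:</p>
<pre><code>kubectl get endpoints nginx-service
# the ENDPOINTS column stays empty while the readiness probe is failing
</code></pre>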
|
<p>I have a resource yaml file in a folder structure given below</p>
<blockquote>
<p>base</p>
<p>---- first.yaml</p>
<p>main</p>
<p>---- kustomization.yaml</p>
</blockquote>
<p>In kustomization.yaml I am referring the first.yaml as</p>
<blockquote>
<p>resources:</p>
<ul>
<li>../base/first.yaml</li>
</ul>
</blockquote>
<p>But I am getting an error when I run <code>kubectl apply -f kustomization.yaml</code>:</p>
<pre><code>accumulating resources: accumulating resources from '../base/first.yaml': security; file '../base/first.yaml' is not in or below '../base'
</code></pre>
<p>How can i call the first.yaml resource from the folder base to the kustomization in main folder?</p>
| <p>Kustomize cannot refer to individual resources in parent directories, it can only refer to resources in current or child directories, but it can refer to other Kustomize directories.</p>
<p>The following would be a valid configuration for what you have:</p>
<pre><code>.
├── base
│ ├── main
│ │ ├── kustomization.yaml
│ │ └── resource.yaml
│ └── stuff
│ ├── first.yaml
│ └── kustomization.yaml
└── cluster
└── kustomization.yaml
</code></pre>
<p>Contents of <code>base/main/kustomization.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
</code></pre>
<p>Contents of <code>base/stuff/kustomization.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- first.yaml
</code></pre>
<p>Contents of <code>cluster/kustomization.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/main
- ../base/stuff
</code></pre>
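<p>With this layout, the entry point is the <code>cluster</code> directory, which you can apply with the built-in kustomize support in kubectl:</p>
<pre><code>kubectl apply -k cluster/
# or render the manifests without applying them:
kubectl kustomize cluster/
</code></pre>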
|
<p>I have a problem with switching context in a terminal and Docker desktop bar panel on Mac:</p>
<pre><code>kubectl config use-context master-cluster
error: open /Users/sacherus/.kube/config.lock: file exists
ls -lh /Users/sacherus/.kube/config.lock
---------- 1 sacherus staff 0B Jun 9 00:01
</code></pre>
<p>I think it could be: Pycharm's kubernetes plugin, Docker desktop bar panel or some plugin to zsh. </p>
<p>Of course, I can delete config.lock, but this file is being created every few hours.</p>
| <p>Community wiki answer based on the discussion in the comments.</p>
<p>Most probably the root cause is duplicate entries in the <code>KUBECONFIG</code> environment variable.</p>
<p>To fix the problem, export a proper <code>KUBECONFIG</code>. This was also discussed <a href="https://stackoverflow.com/questions/58985141/getting-kubectl-config-use-context-error-when-trying-to-switch-context">here</a>.</p>
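<p>A quick way to check for the duplicates and reset the variable (the path below is the default; adjust to your setup):</p>
<pre><code>echo "$KUBECONFIG"            # look for the same path listed more than once
export KUBECONFIG="$HOME/.kube/config"
kubectl config use-context master-cluster
</code></pre>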
|
<p>I'm trying to use the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">Kubernetes client-go</a> to access pod details in a cluster.</p>
<p>I want to use it to get the details of pods running in one particular namespace, similar to <code>kubectl get pods -n <my namespace></code>.</p>
<p>The details I want are the <code>name</code>, <code>status</code>, <code>ready</code>, <code>restarts</code> and <code>age</code> of the pod.</p>
<p>How can I get those data?</p>
| <p>So, I wrote a function that takes in a Kubernetes client (here wrapped in a custom <code>meshkitkube.Client</code> type; refer to the client-go docs for details on creating one) and a namespace, and returns all the pods available-</p>
<pre class="lang-golang prettyprint-override"><code>func GetPods(client *meshkitkube.Client, namespace string) (*v1core.PodList, error) {
// Create a pod interface for the given namespace
podInterface := client.KubeClient.CoreV1().Pods(namespace)
// List the pods in the given namespace
podList, err := podInterface.List(context.TODO(), v1.ListOptions{})
if err != nil {
return nil, err
}
return podList, nil
}
</code></pre>
<p>After getting all the pods, I used a loop to run through all the pods and containers within each pod and manually got all the data I required-</p>
<pre class="lang-golang prettyprint-override"><code>// List all the pods similar to kubectl get pods -n <my namespace>
for _, pod := range podList.Items {
// Calculate the age of the pod
podCreationTime := pod.GetCreationTimestamp()
age := time.Since(podCreationTime.Time).Round(time.Second)
// Get the status of each of the pods
podStatus := pod.Status
var containerRestarts int32
var containerReady int
var totalContainers int
// If a pod has multiple containers, get the status from all
for container := range pod.Spec.Containers {
containerRestarts += podStatus.ContainerStatuses[container].RestartCount
if podStatus.ContainerStatuses[container].Ready {
containerReady++
}
totalContainers++
}
// Get the values from the pod status
name := pod.GetName()
ready := fmt.Sprintf("%v/%v", containerReady, totalContainers)
status := fmt.Sprintf("%v", podStatus.Phase)
restarts := fmt.Sprintf("%v", containerRestarts)
ageS := age.String()
    // Append this to data (a [][]string declared elsewhere) to be printed in a table
data = append(data, []string{name, ready, status, restarts, ageS})
}
</code></pre>
<p>This will result in the exact same data as you would get when running <code>kubectl get pods -n <my namespace></code>.</p>
|
<p>I can install bitnami/redis with this helm command:</p>
<pre><code>helm upgrade --install "my-release" bitnami/redis \
--set auth.existingSecret=redis-key \
--set metrics.enabled=true \
--set metrics.podAnnotations.release=prom \
--set master.podAnnotations."linkerd\.io/inject"=enabled \
--set replica.podAnnotations."linkerd\.io/inject"=enabled
</code></pre>
<p>Now I want to install it using <a href="https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/" rel="nofollow noreferrer">ArgoCD Manifest</a>.</p>
<pre><code>project: default
source:
repoURL: 'https://charts.bitnami.com/bitnami'
targetRevision: 14.1.1
helm:
valueFiles:
- values.yaml
parameters:
- name: metrics.enabled
value: 'true'
- name: metrics.podAnnotations.release
value: 'prom'
- name: master.podAnnotations.linkerd.io/inject
value: enabled
- name: replica.podAnnotations.linkerd.io/inject
value: enabled
- name: auth.existingSecret
value: redis-key
chart: redis
destination:
server: 'https://kubernetes.default.svc'
namespace: default
syncPolicy: {}
</code></pre>
<p>But I'm getting validation error because of <code>master.podAnnotations.linkerd.io/inject</code> and <code>replica.podAnnotations.linkerd.io/inject</code></p>
<pre><code>error validating data: ValidationError(StatefulSet.spec.template.metadata.annotations."linkerd): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string"
error validating data: ValidationError(StatefulSet.spec.template.metadata.annotations."linkerd): invalid type for io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta.annotations: got "map", expected "string"
</code></pre>
<p>If I remove these two annotation settings the app can be installed.
I've tried <code>master.podAnnotations."linkerd.io\/inject"</code>, but it doesn't work. I guess it has something to do with the "." or "/". Can anyone help me solve this issue?</p>
| <p>Look at <a href="https://argoproj.github.io/argo-cd/operator-manual/application.yaml" rel="nofollow noreferrer">this example</a>: parameter names containing dots need to have those dots escaped.</p>
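<p>Concretely, for the manifest in the question, the dots inside the annotation key (but not the path-separator dots) should be escaped with a backslash; a sketch of the two affected parameters:</p>
<pre><code>parameters:
  - name: master.podAnnotations.linkerd\.io/inject
    value: enabled
  - name: replica.podAnnotations.linkerd\.io/inject
    value: enabled
</code></pre>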
|
<p>i've found two similar posts here but one hasn't been answered and the other was about android. I have a spring boot project and I want to access GCP Storage files within my application.</p>
<p>Locally everything works fine I can access my bucket and read as well as store files in my storage. But when i upload it to gcp kubernetes I get the following exception:</p>
<pre><code>java.nio.file.FileSystemNotFoundException: Provider "gs" not installed
    at java.nio.file.Paths.get(Paths.java:147) ~[na:1.8.0_212]
    at xx.xx.StorageService.saveFile(StorageService.java:64) ~[classes!/:0.3.20-SNAPSHOT]
</code></pre>
<p>My line of code where it appears is like follows:</p>
<pre><code>public void saveFile(MultipartFile multipartFile, String path) {
String completePath = filesBasePath + path;
Path filePath = Paths.get(URI.create(completePath)); // <- exception appears here
Files.createDirectories(filePath);
multipartFile.transferTo(filePath);
}
</code></pre>
<p>The completePath could result in something like "gs://my-storage/path/to/file/image.jpg"</p>
<p>I have the following dependencies:</p>
<pre><code><dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-gcp-starter-storage</artifactId>
<version>1.2.6.RELEASE</version>
</dependency>
<dependency>
<groupId>com.google.cloud</groupId>
<artifactId>google-cloud-nio</artifactId>
<version>0.122.5</version>
</dependency>
</code></pre>
<p>Does anyone have a clue where to look?
The only real difference, apart from the infrastructure, is that I don't explicitly use authentication on Kubernetes, as it is not required according to the documentation:</p>
<blockquote>
<p>When using Google Cloud libraries from a Google Cloud Platform
environment such as Compute Engine, Kubernetes Engine, or App Engine,
no additional authentication steps are necessary.</p>
</blockquote>
| <p>It looks like the conventional Spring boot packaging here isn't packaging the dependency in the needed way. Usually you'll see something like:</p>
<pre><code><plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>2.4.5</version>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
</code></pre>
<p>However, for the "gs" provider to be accessible it needs to be in the 'lib/' folder. You can package it manually by copying the dependencies and then creating the JAR (this is based on the <a href="https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/appengine-java11/springboot-helloworld" rel="nofollow noreferrer">springboot-helloworld</a> sample):</p>
<pre class="lang-xml prettyprint-override"><code><plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>copy-dependencies</id>
<phase>prepare-package</phase>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<outputDirectory>
${project.build.directory}/lib
</outputDirectory>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<configuration>
<archive>
<manifest>
<addClasspath>true</addClasspath>
<classpathPrefix>lib/</classpathPrefix>
<mainClass>
com.example.appengine.springboot.SpringbootApplication
</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
</code></pre>
<p>Originally <a href="https://github.com/googleapis/java-storage-nio/issues/547" rel="nofollow noreferrer">posted on GitHub</a>.</p>
|
<p>I am working to configure Istio in my on prem Kubernetes cluster. As part of this I have to coordinate with my System Admins to setup DNS and load balancer resources.</p>
<p>I have found, while learning and setting up Istio, that I sometimes need to fully uninstall and re-install it. <em>When I do that, Istio will pick a new port for the Ingress Gateway.</em> This then necessitates me coordinating updates with the System Admins.</p>
<p>It would be convenient if I could force Istio to just keep using the same port.</p>
<p>I am using the Istio Operator to manage Istio. <strong>Is there a way to set an Ingress Gateway's NodePort with the Istio Operator?</strong></p>
| <p>In your Istio operator yaml you can define/override the ingress gateway settings in the <code>k8s</code> section of an ingress gateway definition:</p>
<p><a href="https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#KubernetesResourcesSpec" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/istio.operator.v1alpha1/#KubernetesResourcesSpec</a></p>
<p>for example :</p>
<pre><code>components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
k8s:
service:
ports:
- name: status-port
port: 15021
- name: tls-istiod
port: 15012
- name: tls
port: 15443
nodePort: 31371
- name: http2
port: 80
nodePort: 31381
targetPort: 8280
- name: https
port: 443
nodePort: 31391
targetPort: 8243
</code></pre>
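<p>After applying the operator spec, you can verify that the gateway service kept the pinned node ports (assuming the default <code>istio-system</code> namespace and gateway name):</p>
<pre><code>kubectl -n istio-system get svc istio-ingressgateway
# the PORT(S) column should show 80:31381/TCP and 443:31391/TCP
</code></pre>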
|
<p>I'm trying to add kubectl provider for terraform module and I follow the docs from <a href="https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs" rel="nofollow noreferrer">Terraform kubectl</a>. I run <code>terraform init</code> and provider is installed with success but when I try to add a sample config, for ex: ( or thers from <a href="https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs/resources/kubectl_manifest" rel="nofollow noreferrer">here</a> )</p>
<pre><code>resource "kubectl_server_version" "current" {}
</code></pre>
<p>and run <code>terraform plan</code> I got the following msg:</p>
<pre><code>Error: Could not load plugin
Failed to instantiate provider "registry.terraform.io/hashicorp/kubectl" to
obtain schema: unknown provider "registry.terraform.io/hashicorp/kubectl"
</code></pre>
<p>and when I tun <code>terraform init</code> ( with the resource in place in module k8s )</p>
<pre><code>Error: Failed to install provider
Error while installing hashicorp/kubectl: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/kubectl
</code></pre>
<p>some outputs:</p>
<pre><code>$terraform plugins
├── provider[registry.terraform.io/hashicorp/kubernetes] 1.13.2
├── provider[registry.terraform.io/gavinbunney/kubectl] 1.9.1
├── module.k8s
│ ├── provider[registry.terraform.io/hashicorp/kubectl]
│ └── provider[registry.terraform.io/hashicorp/kubernetes]
$terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/kubernetes v1.13.2
- Using previously-installed gavinbunney/kubectl v1.9.1
$terraform -v
Terraform v0.13.4
+ provider registry.terraform.io/gavinbunney/kubectl v1.9.1
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.2
....
</code></pre>
<p>some config files:</p>
<p>terraform.tf</p>
<pre><code>terraform {
required_version = "0.13.4"
backend "gcs" {
...
}
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "1.13.2"
}
kubectl = {
source = "gavinbunney/kubectl"
version = "1.9.1"
}
....
</code></pre>
<p>terraform successfully initializes the <code>gavinbunney/kubectl</code> provider, but when I add <code>resource "kubectl_manifest" ...</code> in the k8s module, terraform tries to load the <code>hashicorp/kubectl</code> provider.</p>
<p>What am I missing? :)</p>
<p><strong>Update</strong>
with terraform 1.4.0</p>
<pre><code>required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.18.1"
}
}
provider "kubernetes" {
host = module.k8s.cluster_endpoint
cluster_ca_certificate = base64decode(module.k8s.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.k8s.cluster_name]
}
}
resource "kubernetes_namespace" "velero" {
metadata {
name = var.velero_ns
}
}
</code></pre>
| <p>In my case it was due to referencing kubectl resources in a child module: the module itself needs the provider added to its own <code>required_providers</code> block:</p>
<pre><code>terraform {
required_providers {
kubectl = {
source = "gavinbunney/kubectl"
version = "x.x.x"
}
}
}
</code></pre>
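<p>If the root module should also hand its own configuration of that provider down explicitly, it can do so with a <code>providers</code> map on the module block (the module name and source path here are assumptions based on the question):</p>
<pre><code>module "k8s" {
  source = "./modules/k8s"

  providers = {
    kubectl = kubectl   # pass the root gavinbunney/kubectl configuration into the module
  }
}
</code></pre>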
|
<p>I'm new to Kubernetes (K8s). It's my understanding that in order to "do things" in a Kubernetes cluster, we interact with a Kubernetes REST API endpoint and create/update/delete objects. When these objects are created/updated/deleted, K8s will see those changes and take steps to bring the system in line with the state of your objects.</p>
<p>In other words, you tell K8s you want a "deployment object" with container image <code>foo/bar</code> and 10 replicas and K8s will create 10 running pods with the <code>foo/bar</code> image. If you update the deployment to say you want 20 replicas, K8s will start more pods.</p>
<p>My Question: Is there a canonical description of all the possible configuration fields for these objects? That is -- tutorials like <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">this one</a> do a good job of describing the simplest possible configuration to get an object like a deployment working, but now I'm curious what else it's possible to do with deployments beyond these hello-world examples.</p>
| <blockquote>
<p>Is there a canonical description of all the possible configuration fields for these objects?</p>
</blockquote>
<p>Yes, there is the <a href="https://kubernetes.io/docs/reference/kubernetes-api/" rel="nofollow noreferrer">Kubernetes API reference</a> e.g. for <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/" rel="nofollow noreferrer">Deployment</a>.</p>
<p>But when developing, the easiest way is to use <code>kubectl explain &lt;resource&gt;</code> and navigate deeper, e.g.:</p>
<pre><code>kubectl explain Deployment.spec
</code></pre>
<p>and then deeper, e.g.:</p>
<pre><code>kubectl explain Deployment.spec.template
</code></pre>
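<p>To print the entire nested field tree in one go instead of navigating level by level, <code>kubectl explain</code> also supports a recursive flag:</p>
<pre><code>kubectl explain deployment.spec --recursive
</code></pre>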
|
<p>As per these two issues on <code>ingress-nginx</code> Github, it seems that the only way to get grpc/http2 working on port 80 without TLS is by using a custom config template:</p>
<ol>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/6313" rel="nofollow noreferrer">ingress does not supporting http2 at port 80 without tls #6313</a></li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/6736" rel="nofollow noreferrer">Add new annotation to support listen 80 http2 #6736</a></li>
</ol>
<p>Unfortunately I could not find any straightforward examples on how to set up a custom nginx-ingress config. Here are the links I tried:</p>
<ol>
<li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/" rel="nofollow noreferrer">Custom NGINX template</a></li>
<li><a href="https://github.com/nginxinc/kubernetes-ingress/tree/v1.11.1/examples/custom-templates" rel="nofollow noreferrer">Custom Templates</a></li>
</ol>
<p>Can anyone help me with the exact steps and config on how to get grpc/http2 working with nginx-ingress on port 80 <strong>without TLS</strong>?</p>
| <p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As already mentioned in the comments, the steps to make it work are as follows:</p>
<ol>
<li><p>Launch a separate nginx controller in an empty namespace to avoid issues with the main controller.</p>
</li>
<li><p>Create custom templates, using <a href="https://github.com/nginxinc/kubernetes-ingress/tree/v1.11.1/internal/configs/version1" rel="nofollow noreferrer">these</a> as a reference.</p>
</li>
<li><p>Put them in a <code>ConfigMap</code> like <a href="https://github.com/nginxinc/kubernetes-ingress/tree/v1.11.1/examples/custom-templates#example" rel="nofollow noreferrer">this</a>.</p>
</li>
<li><p>Mount the templates into the controller pod as in this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/#custom-nginx-template" rel="nofollow noreferrer">example</a>.</p>
</li>
</ol>
|
<p>I have a simple container that consists of OpenLDAP installed on Alpine. It's installed to run as a non-root user. I am able to run the container without any issues using my local Docker engine. However, when I deploy it to our Kubernetes system it is killed almost immediately as OOMKilled. I've tried increasing the memory without any change. I've also looked at the memory usage for the pod and don't see anything unusual.</p>
<p>The server is started as <code>slapd -d debug -h ldap://0.0.0.0:1389/ -u 1000 -g 1000</code>, where <code>1000</code> is the uid and gid, respectively.</p>
<p>The node trace shows this output:</p>
<pre><code>May 13 15:33:44 pu1axb-arcctl00 kernel: Task in /kubepods/burstable/podbac2e0ae-9e9c-420e-be4e-c5941a2d562f/7d71b550e2d37e5d8d78c73ba8c7ab5f7895d9c2473adf4443675b9872fb84a4 killed as a result of limit of /kubepods/burstable/podbac2e0ae-9e9c-420e-be4e-c5941a2d562f
May 13 15:33:44 pu1axb-arcctl00 kernel: memory: usage 512000kB, limit 512000kB, failcnt 71
May 13 15:33:44 pu1axb-arcctl00 kernel: memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
May 13 15:33:44 pu1axb-arcctl00 kernel: kmem: usage 7892kB, limit 9007199254740988kB, failcnt 0
May 13 15:33:44 pu1axb-arcctl00 kernel: Memory cgroup stats for /kubepods/burstable/podbac2e0ae-9e9c-420e-be4e-c5941a2d562f: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:
May 13 15:33:44 pu1axb-arcctl00 kernel: Memory cgroup stats for /kubepods/burstable/podbac2e0ae-9e9c-420e-be4e-c5941a2d562f/db65b4f82efd556a780db6eb2c3ddf4b594774e4e5f523a8ddb178fd3256bdda: cache:0KB rss:44KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB inactive_anon:
May 13 15:33:44 pu1axb-arcctl00 kernel: Memory cgroup stats for /kubepods/burstable/podbac2e0ae-9e9c-420e-be4e-c5941a2d562f/59f908d8492f3783da587beda7205c3db5ee78f0744d8cb49b0491bcbb95c4c7: cache:0KB rss:0KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB inactive_anon:0
May 13 15:33:44 pu1axb-arcctl00 kernel: Memory cgroup stats for /kubepods/burstable/podbac2e0ae-9e9c-420e-be4e-c5941a2d562f/7d71b550e2d37e5d8d78c73ba8c7ab5f7895d9c2473adf4443675b9872fb84a4: cache:4KB rss:504060KB rss_huge:0KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB inactive_a
May 13 15:33:44 pu1axb-arcctl00 kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
May 13 15:33:44 pu1axb-arcctl00 kernel: [69022] 0 69022 242 1 28672 0 -998 pause
May 13 15:33:44 pu1axb-arcctl00 kernel: [69436] 1000 69436 591 454 45056 0 969 docker-entrypoi
May 13 15:33:44 pu1axb-arcctl00 kernel: [69970] 1000 69970 401 2 45056 0 969 nc
May 13 15:33:44 pu1axb-arcctl00 kernel: [75537] 1000 75537 399 242 36864 0 969 sh
May 13 15:33:44 pu1axb-arcctl00 kernel: [75544] 1000 75544 648 577 45056 0 969 bash
May 13 15:33:44 pu1axb-arcctl00 kernel: [75966] 1000 75966 196457 126841 1069056 0 969 slapd
May 13 15:33:44 pu1axb-arcctl00 kernel: Memory cgroup out of memory: Kill process 75966 (slapd) score 1961 or sacrifice child
May 13 15:33:44 pu1axb-arcctl00 kernel: Killed process 75966 (slapd) total-vm:785828kB, anon-rss:503016kB, file-rss:4348kB, shmem-rss:0kB
</code></pre>
<p>I find it hard to believe it's really running out of memory. It's a simple LDAP container with only 8-10 elements in the directory tree and the pod is not showing memory issues on the dashboard (Lens). We have other Alpine images which don't have this issue.</p>
<p>I'm relatively new to Kubernetes, so I'm hoping the users on SO can give me some guidance on how to debug this. I can provide more info once I know what is helpful. As I mentioned, increasing the memory has no effect. I plan to switch from "burstable" to "guaranteed" deployment and see if that makes a difference.</p>
<p><strong>===== UPDATE - Is working now =====</strong></p>
<p>I believe I was confusing the meaning of resource "limits" vs "requests". I had been trying several variations on these before making the original post. After reading through the responses I now have the pod deployed with the following settings:</p>
<pre><code> resources:
limits:
cpu: 50m
memory: 1Gi
requests:
cpu: 50m
memory: 250Mi
</code></pre>
<p>Looking at the memory footprint in Lens, it's holding steady at around 715Mi of usage. This is higher than our other pods by at least 25%. Perhaps the LDAP server just needs more. Regardless, I thank you all for your timely help.</p>
| <p>Check your deployment or pod spec for resource limits.</p>
<p>If your application requires more memory than it is allowed, it will be OOMKilled by Kubernetes.</p>
<pre><code>...
resources:
limits:
memory: 200Mi
requests:
memory: 100Mi
...
</code></pre>
<p><strong>Equivalent Java JVM flags, to better understand this concept</strong></p>
<p>requests = -Xms</p>
<p>limits = -Xmx</p>
<p>Read more:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/</a></p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p>
|
<p>I have an NFS based PVC in a kubernetes cluster that I need to freeze to take a snapshot of. I tried fsfreeze, but I get "operation not supported". I assume because it is trying to freeze the entire nfs instead of just the mount. I have checked and I can freeze the filesystem on the side of the NFS server. Is there a different way that I can stop writes to the filesystem to properly sync everything?</p>
| <p>From <a href="https://github.com/vmware-tanzu/velero/issues/2042" rel="nofollow noreferrer">https://github.com/vmware-tanzu/velero/issues/2042</a> and some other quick poking around, fsfreeze doesn't support NFS mounts. In general it seems to mostly only work with real local volumes, which you'll almost never use with Kubernetes.</p>
|
<p>In GKE, the <code>Reclaim Policy</code> of my <code>PersistentVolume</code> is set to <code>Retain</code>, in order to prevent unintentional data removal. However, sometimes, after the deletion of some <code>PersistentVolumes</code>, I'd like to remove the associated <code>Google Persistent Disks</code> manually. Deleting the <code>Google Persistent Disks</code> using the web UI (i.e. Google Cloud Console) is time-consuming, that's why I'd like to use a <code>gcloud</code> command to remove all <code>Google Persistent Disks</code> that are not attached to a GCP VM instance. Could somebody please provide me this command?</p>
| <p>This one should work:</p>
<pre><code>gcloud compute disks delete $(gcloud compute disks list --filter="-users:*" --format "value(uri())")
</code></pre>
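<p>For scripted use it may be worth adding <code>--quiet</code> to skip the interactive confirmation prompt; since <code>uri()</code> emits full resource URIs that already carry each disk's zone, no <code>--zone</code> flag is needed:</p>
<pre><code>gcloud compute disks delete --quiet $(gcloud compute disks list --filter="-users:*" --format="value(uri())")
</code></pre>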
|
<p>Dear StackOverflow community!</p>
<p>I am trying to run the <a href="https://github.com/GoogleCloudPlatform/microservices-demo" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/microservices-demo</a> locally on minikube, so I am following their development guide: <a href="https://github.com/GoogleCloudPlatform/microservices-demo/blob/master/docs/development-guide.md" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/microservices-demo/blob/master/docs/development-guide.md</a></p>
<p>After I successfully set up minikube (using the virtualbox driver, but I also tried hyperkit with the same results) and execute <code>skaffold run</code>, after some time it ends up with the following error:</p>
<pre><code>Building [shippingservice]...
Sending build context to Docker daemon 127kB
Step 1/14 : FROM golang:1.15-alpine as builder
---> 6466dd056dc2
Step 2/14 : RUN apk add --no-cache ca-certificates git
---> Running in 0e6d2ab2a615
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: DNS lookup error
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: DNS lookup error
ERROR: unable to select packages:
git (no such package):
required by: world[git]
Building [recommendationservice]...
Building [cartservice]...
Building [emailservice]...
Building [productcatalogservice]...
Building [loadgenerator]...
Building [checkoutservice]...
Building [currencyservice]...
Building [frontend]...
Building [adservice]...
unable to stream build output: The command '/bin/sh -c apk add --no-cache ca-certificates git' returned a non-zero code: 1. Please fix the Dockerfile and try again..
</code></pre>
<p>The error message suggests that DNS does not work. I tried to add <code>8.8.8.8</code> to <code>/etc/resolv.conf</code> on the minikube VM, but it did not help. I've noticed that after I re-run <code>skaffold run</code> and it fails again, the content of <code>/etc/resolv.conf</code> returns to its original state, containing <code>10.0.2.3</code> as the only DNS entry. Reaching the outside internet and pinging <code>8.8.8.8</code> from within the minikube VM works.</p>
<p>Could you point me in a direction for how I can fix the problem, and for learning how DNS inside minikube/Kubernetes works? I've heard that DNS problems inside a Kubernetes cluster are something you frequently run into.</p>
<p>Thanks for your answers!</p>
<p>Best regards,
Richard</p>
| <p>Tried it with docker driver, i.e. <code>minikube start --driver=docker</code>, and it works. Thanks Brian!</p>
|
<p>Assume there is a system that accepts millions of simultaneous WebSocket connections from client applications. I was wondering if there is a way to route WebSocket connections to a specific instance behind a load balancer (or IP/Domain/etc) if clients provide some form of metadata, such as hash key, instance name, etc.</p>
<p>For instance, let's say each WebSocket client of the above system will always belong to a group (e.g. max group size of 100), and it will attempt to communicate with 99 other clients using the above system as a message gateway.</p>
<p>So the system's responsibility is to relay messages sent from clients in a group to other 99 clients in the same group. Clients won't ever need to communicate with other clients who belong to different groups.</p>
<p>Of course, one way to tackle this problem is to use Pubsub system, such that regardless of which instance clients are connected to, the server can simply publish the message to the Pubsub system with a group identifier and other clients can subscribe to the messages with a group identifier.</p>
<p>However, the Pubsub system can potentially encounter scaling challenges, excessive resource usage (a single message getting published to thousands of instances), management overhead, latency increases, cost, etc.</p>
<p>If it were possible to guarantee that the WebSocket clients in a group are all connected to the same instance behind the LB, we could skip the Pubsub system and make things simpler, lower latency, etc.</p>
<p>Would this be something that is possible to do, and if it isn't, what would be the best option?</p>
<p>(I am using Kubernetes in one of the cloud service providers if that matters.)</p>
| <p>Routing in HTTP is generally based on the hostname and/or URL path. Sometimes to a lesser degree on other headers like cookies. But in this case it would mean that each group should have it's own unique URL.</p>
<p>But that part is easy. What I think you're really asking is "given arbitrary URLs, how can I get consistent routing?", which is much, much more complicated. The base concept is "consistent hashing": you hash the URL and use that to pick which endpoint to talk to. But then how do you deal with adding or removing replicas without scrambling the mapping entirely? That usually means using a hash ring and assigning portions of the hash space to specific replicas. Unfortunately this is the point where off-the-shelf tools aren't enough. These kinds of systems require deep knowledge of your protocol and system specifics, so you'll probably need to rig this up yourself.</p>
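<p>To make the idea concrete, here is a tiny sketch of rendezvous (highest-random-weight) hashing, a close cousin of the hash-ring approach that needs no ring bookkeeping. The node names and the md5-based score are purely illustrative:</p>

```shell
# Rendezvous hashing: score every (node, key) pair, route the key to the
# highest-scoring node. Dropping a node that did not win for a key can
# never change that key's assignment.
pick_replica() {
  local key=$1 node h best="" besth=""
  shift
  for node in "$@"; do
    h=$(printf '%s:%s' "$node" "$key" | md5sum | cut -c1-8)
    # equal-length lowercase hex, so lexicographic compare matches numeric order
    if [ -z "$besth" ] || [[ "$h" > "$besth" ]]; then
      besth=$h
      best=$node
    fi
  done
  echo "$best"
}

pick_replica "group-42" pod-a pod-b pod-c   # stable: same key picks the same pod on every call
```

The stability on node removal is exactly the property a hash ring buys you, just computed per-request instead of from ring state.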
|
<p>I have the following ConfigMap in my Kubernetes cluster that contains the <code>web.config</code> for my application there is a different one per environment so I would like to volume mount the ConfigMap to <code>web.config</code> in the pod.</p>
<p><strong>ConfigMap</strong>:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
apiVersion: v1
metadata:
name: stars-website-data-config
namespace: stars-website
data:
config: |-
<?xml version="1.0"?>
<configuration>
...
</configuration>
</code></pre>
<p><strong>Deployment</strong>:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: stars-website-data
namespace: stars-website
spec:
template:
spec:
containers:
- name: stars-website-data
...
volumeMounts:
- name: config
mountPath: C:\STARS.Website.Data
volumes:
- name: config
configMap:
name: stars-website-data-config
items:
- key: config
path: web.config
</code></pre>
<p>This seems to do what I want it to do, but it replaces all the other files and folders in that directory.</p>
<pre><code>PS C:\STARS.Website.Data> ls
Directory: C:\STARS.Website.Data
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 5/14/2021 10:55 AM ..2021_05_14_09_55_42.624339347
d----l 5/14/2021 10:55 AM ..data
-a---l 5/14/2021 10:55 AM 0 web.config
PS C:\STARS.Website.Data> cat .\web.config
<?xml version="1.0"?>
<configuration>
<configSections>
</code></pre>
<p>If I try to use the <code>subPath</code> value like:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: stars-website-data
namespace: stars-website
spec:
template:
spec:
containers:
- name: stars-website-data
...
volumeMounts:
- name: config
mountPath: C:\STARS.Website.Data\web.config
subPath: web.config
volumes:
- name: config
configMap:
name: stars-website-data-config
items:
- key: config
path: web.config
</code></pre>
<p>I am getting the following error:</p>
<pre><code>Error: Error response from daemon: invalid volume specification: 'c:\var\lib\kubelet\pods\ddfe6b07-4bff-42fe-ad3b-a02add00fbbf\volumes\kubernetes.io~configmap\config\web.config:C:\STARS.Website.Data\web.config:ro': invalid mount config for type "bind": source path must be a directory
</code></pre>
| <p>When a ConfigMap is mounted as a volume, the mount hides all files that were previously present in the mount path; this is expected behaviour. See the caution section in this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap" rel="nofollow noreferrer">link</a>.</p>
<p>With respect to your subPath configuration, there is a limitation on bind-mounting individual files for Windows-based containers; see this <a href="https://github.com/moby/moby/issues/30555" rel="nofollow noreferrer">discussion</a>. The preferred approach for Windows-based containers is mounting a directory: you can create a separate empty directory and mount your ConfigMap into it, which is probably the quickest and most straightforward solution.</p>
<p>This <a href="https://artisticcheese.wordpress.com/2021/05/02/circumventing-windows-containers-limitation-on-volume-mounts/" rel="nofollow noreferrer">link</a> has a workaround that might also be useful for you.</p>
<p>One major disadvantage of either the subPath configuration or the workaround in the link above: whenever you update the ConfigMap, the mounted file is not updated automatically.</p>
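<p>A sketch of the directory-mount approach: mount the ConfigMap into its own, otherwise empty directory (the <code>C:\config</code> path is an assumption) and point the application at the file there:</p>
<pre><code>volumeMounts:
  - name: config
    mountPath: C:\config          # an empty directory dedicated to the ConfigMap
volumes:
  - name: config
    configMap:
      name: stars-website-data-config
      items:
        - key: config
          path: web.config        # ends up at C:\config\web.config
</code></pre>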
|
<p>I followed <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">this tutorial</a> to serve a basic application using the NGINX Ingrss Controller, and cert-manager with letsencrypt.</p>
<p>I am able to visit the website, but the SSL certificate is broken, saying <code>Issued By: (STAGING) Artificial Apricot R3</code>.</p>
<p>This is my <code>ClusterIssuer</code>:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-issuer
namespace: cert-manager
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: my-email@example.com
privateKeySecretRef:
name: letsencrypt-issuer
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>And the <code>Ingress</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-app-ingress-dev
namespace: my-app
annotations:
cert-manager.io/cluster-issuer: letsencrypt-issuer
spec:
tls:
- secretName: echo-tls
hosts:
- my-app.example.com
rules:
- host: my-app.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-app-dev
port:
number: 80
</code></pre>
| <p>LetsEncrypt staging is for testing and does not issue certificates that are trusted by browsers. Use the production LE URL instead: <code>https://acme-v02.api.letsencrypt.org/directory</code></p>
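<p>A production variant of the question's <code>ClusterIssuer</code> only changes the ACME server URL. One extra step worth noting: after switching, the staging certificate secret (<code>echo-tls</code> here) usually has to be deleted so cert-manager re-issues against production:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: my-email@example.com
    privateKeySecretRef:
      name: letsencrypt-issuer
    solvers:
      - http01:
          ingress:
            class: nginx
</code></pre>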
|
<p>I'm trying to use the kubernetes-alpha provider in Terraform, but I get a "Failed to construct REST client" error message.
I'm using tfk8s to convert my yaml file to Terraform code.</p>
<p>I declare this provider the same way as the kubernetes provider, and my kubernetes provider works correctly.</p>
<pre><code>provider "kubernetes-alpha" {
host = "https://${data.google_container_cluster.primary.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}
provider "kubernetes" {
host = "https://${data.google_container_cluster.primary.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.primary.master_auth[0].cluster_ca_certificate)
}
</code></pre>
<pre><code>resource "kubernetes_manifest" "exemple" {
provider = kubernetes-alpha
manifest = {
# result of tfk8s
}
}
</code></pre>
<p><a href="https://i.stack.imgur.com/rJoT9.png" rel="nofollow noreferrer">the error message</a></p>
<p>Can somebody help?</p>
| <p>After some digging, I found that this resource requires a running kubernetes instance and config before the terraform plan will work properly. Best stated in github here: <a href="https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/199#issuecomment-832614387" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-kubernetes-alpha/issues/199#issuecomment-832614387</a></p>
<p>Basically, you have to have two steps to first terraform apply your main configuration to stand up kubernetes in your cloud, and then secondly terraform apply the CRD resource once that cluster has been established.</p>
<p>EDIT: I'm still trying to learn good patterns/practices for managing terraform config and found this pretty helpful. <a href="https://stackoverflow.com/questions/47708338/how-to-give-a-tf-file-as-input-in-terraform-apply-command">How to give a .tf file as input in Terraform Apply command?</a>. I ended up just keeping the cert manager CRD as a standard kubernetes manifest yaml that I apply per-cluster with other application helm charts.</p>
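<p>The two-step flow can be driven with targeted applies; the resource address below is a placeholder for whatever stands up your cluster:</p>
<pre><code># step 1: create only the cluster
terraform apply -target=google_container_cluster.primary

# step 2: the API server now exists, so kubernetes_manifest can construct its REST client
terraform apply
</code></pre>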
|
<p>In my k8s cluster I have an NGINX ingress controller exposed as a LoadBalancer, and I access it through a DDNS address like hedehodo.ddns.net; this forwards web traffic on to another nginx port.
Now I have deployed another nginx that serves a Node.js app, but I cannot get the ingress controller to forward requests for port 3000 on to that second nginx.</p>
<p>here is the nginx ingress controller yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
namespace: default
spec:
rules:
- host: hedehodo.ddns.net
http:
paths:
- path: /
backend:
serviceName: my-nginx
servicePort: 80
- path: /
backend:
serviceName: helloapp-deployment
servicePort: 3000
</code></pre>
<p>The helloapp deployment works as a LoadBalancer and I can access it at IP:3000.</p>
<p>Could anybody help me?</p>
| <p>A host cannot have duplicate paths, so in your example a request to host <code>hedehodo.ddns.net</code> will always map to the first service listed: <code>my-nginx:80</code>.</p>
<p>To use another service, you have to specify a different path. That path can use any service that you want. Your ingress should always point to a service, and that service can point to a deployment.</p>
<p>You should also use HTTPS by default for your ingress.</p>
<p>Ingress example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- host: my.example.net
http:
paths:
- path: /
backend:
serviceName: my-nginx
servicePort: 80
- path: /hello
backend:
serviceName: helloapp-svc
servicePort: 3000
</code></pre>
<p>Service example:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
name: helloapp-svc
spec:
ports:
- port: 3000
name: app
protocol: TCP
targetPort: 3000
selector:
app: helloapp
type: NodePort
</code></pre>
<p>Deployment example:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloapp
labels:
app: helloapp
spec:
replicas: 1
selector:
matchLabels:
app: helloapp
template:
metadata:
labels:
app: helloapp
spec:
containers:
- name: node
image: my-node-img:v1
ports:
- name: web
containerPort: 3000
</code></pre>
|
<p>I want to deploy Helm charts, which are stored in a repository in AWS ECR, to the Kubernetes cluster using ArgoCD. But I am getting a 401 Unauthorized issue. I have pasted the entire error below.</p>
<pre><code>Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Unknown desc = `helm chart pull <aws account id>.dkr.ecr.<region>.amazonaws.com/testrepo:1.1.0` failed exit status 1: Error: unexpected status code [manifests 1.1.0]: 401 Unauthorized
</code></pre>
| <p>Yes, you can use ECR for storing helm charts (<a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html" rel="noreferrer">https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html</a>)</p>
<p>I have managed to add the repo to ArgoCD, but the token expires so it is not a complete solution.</p>
<pre><code>argocd repo add XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com --type helm --name some-helmreponame --enable-oci --username AWS --password $(aws ecr get-login-password --region us-east-1)
</code></pre>
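<p>The same repository can also be declared declaratively as a Secret that Argo CD picks up; a sketch based on Argo CD's declarative-setup format (verify the field names against your version; the ECR token still expires after about 12 hours, so something has to refresh it, e.g. a CronJob):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: ecr-helm-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: some-helmreponame
  url: XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com
  type: helm
  enableOCI: "true"
  username: AWS
  password: $ECR_TOKEN   # substitute the output of: aws ecr get-login-password --region us-east-1
</code></pre>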
|
<p>I'm trying to set up an Ingress rule for a service (Kibana) running in my microk8s cluster but I'm having some problems.</p>
<p>The first rule set up is</p>
<pre><code>Name: web-ingress
Namespace: default
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
k8s-ingress-tls terminates web10
Rules:
Host Path Backends
---- ---- --------
*
/ web-service:8080 (10.1.72.26:8080,10.1.72.27:8080)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
</code></pre>
<p>I'm trying to set Kibana service to get served on path /kibana</p>
<pre><code>Name: kibana
Namespace: default
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
k8s-ingress-tls terminates web10
Rules:
Host Path Backends
---- ---- --------
*
/kibana(/|$)(.*) kibana:5601 (10.1.72.39:5601)
Annotations: nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 17m nginx-ingress-controller Ingress default/kibana
Normal UPDATE 17m nginx-ingress-controller Ingress default/kibana
</code></pre>
<p>My problem is that the first thing Kibana does is return a 302 redirect to host/login?next=%2F, which gets resolved by the first Ingress rule, because now I've lost my /kibana path.</p>
<p>I tried adding <code>nginx.ingress.kubernetes.io/rewrite-target: /kibana/$2</code>, but the redirect then just looks like <code>host/login?next=%2Fkibana%2F</code>, which is not what I want at all.</p>
<p>If I delete the first rule, I just get a 404 once Kibana redirects to host/login?next=%2F</p>
| <p>Add the following annotation to the <code>kibana</code> ingress so that nginx-ingress interprets the <code>/kibana(/|$)(.*)</code> path using regex:</p>
<pre><code> nginx.ingress.kubernetes.io/use-regex: "true"
</code></pre>
<p>Additional detail:
To let Kibana know that it runs on the <code>/kibana</code> path, add the following env variable to the Kibana pod/deployment:</p>
<pre><code> - name: SERVER_BASEPATH
value: /kibana
</code></pre>
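<p>For completeness, a sketch of the full annotation set on the kibana ingress; this assumes Kibana's default <code>server.rewriteBasePath: false</code>, so the ingress strips the <code>/kibana</code> prefix while <code>SERVER_BASEPATH</code> makes Kibana emit <code>/kibana</code>-prefixed links:</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
</code></pre>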
|
<p>I am using Kubernetes and its resources like Secrets. During deployment one Secret was created (say test-secret) with some values inside it.
Now I need to rename this Secret (to dev-secret) within the same namespace.
How can I rename the Secret, or how can I copy test-secret's values to dev-secret?</p>
<p>Please let me know the correct approach for this.</p>
| <p>There is no specific way to do this. The Kubernetes API does not have "rename" as an operation. In this particular case you would <code>kubectl get secret test-secret -o yaml</code>, clean up the <code>metadata:</code> sections that no longer apply, edit the name, and <code>kubectl apply</code> it again.</p>
|
<p>I am trying to create a shell script that will validate that certain pods are ready by checking the READY heading showing 1/1. I have tried two ways.</p>
<p>1.</p>
<pre><code>ready=$(oc get pods | awk '{print $2}' | tail -n +2) # prints 1/1 or 0/1 for each pod
until [[ ${ready} == "1/1" ]]
do
echo "Waiting for pods to be ready."
sleep 3
done
</code></pre>
<p>Above script just keeps saying "Waiting for pods to be ready" even when the pods are ready and displaying 1/1 in the READY column. </p>
<p>2.</p>
<pre><code>while true ; do
for i in 1 2 3; do
ready=`oc get pods | awk '{print $2}' | tail -n +2 | head -n $i`
if [[ "${ready}" == "1/1" ]]; then
echo "pods are up and running"
else
echo "waiting for pods to be ready"
sleep 10
break
fi
done
done
</code></pre>
<p>Above script just continually prints waiting for pods to be ready and pods are up and running.</p>
<p>Any help would be appreciated; I am starting out with Bash and not quite sure what to do.</p>
| <p>I'm surprised no one so far has mentioned the experimental, yet official <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="noreferrer">kubectl wait</a>:<br />
<code>$ kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=delete|--for condition=available]</code></p>
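<p>On clusters where <code>kubectl wait</code> / <code>oc wait</code> is not available, the first script in the question can also be fixed in plain bash. Its bug is that <code>$ready</code> holds one line per pod, so comparing the whole multi-line string to <code>1/1</code> never matches (and the value is never refreshed inside the loop). A sketch; the helper only parses READY values, and the commented-out loop assumes <code>oc</code> is on the PATH:</p>

```shell
# Succeeds only when every input line looks like n/n, i.e. all containers ready.
# Caveat: empty input (no pods at all) also counts as "ready" in this sketch.
all_ready() {
  local line
  while read -r line; do
    [ "${line%%/*}" = "${line##*/}" ] || return 1
  done
}

# The loop it would replace (needs a live cluster, hence commented out):
#   until oc get pods --no-headers | awk '{print $2}' | all_ready; do
#     echo "Waiting for pods to be ready."
#     sleep 3
#   done

printf '1/1\n2/2\n' | all_ready && echo "all ready"
printf '1/1\n0/1\n' | all_ready || echo "not ready yet"
```

Re-running the check inside the loop condition, rather than capturing it once before the loop, is what makes the wait actually terminate.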
|
<p>In my Spring Boot microservice project, one of the microservices was upgraded to Java 11, and it is causing some issues in my existing functionality. I want to change the Java version from 11 to 8 in that microservice, but I am unable to do it as it's created as a Docker image automatically through Jenkins.</p>
<p>In the Gradle file I have set the Java version to 1.8, but somehow in the Kubernetes cluster, for this particular service only, Java is upgraded to 11.</p>
<p>I confirmed the Java version by running the following command on the server:
"<strong>docker exec {container_id} java -version</strong>"</p>
<p>Please suggest how I can change the Java version in the Docker image or containers.</p>
<p>Is it possible to change the Java version just by running a Linux command, or is it required to follow some other steps?</p>
| <p>@RakshithM yes, Jenkins must reference a <code>Dockerfile</code> to build the Docker container; if you find it, you can change it as suggested by @David. For reference: <a href="https://docs.docker.com/language/java/build-images/" rel="nofollow noreferrer">https://docs.docker.com/language/java/build-images/</a>. Every container is built from a base image, which provides the basic environment the application runs on; the base image is declared with <code>FROM</code> in the <code>Dockerfile</code>,
e.g.:</p>
<pre><code>FROM openjdk:16-alpine3.13
</code></pre>
<p>Here our container is based on the <code>openjdk:16</code> image, which is a <code>minimal linux environment with java 16</code>, on top of which the other tools defined in the <code>dockerfile</code> are layered. So in your case, if you find the <code>Dockerfile</code>, you can change it to the Java version you need.</p>
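<p>For instance, to pin the service to Java 8, you could swap the base image like this (the jar path is hypothetical and depends on how your Gradle build lays out its output):</p>
<pre><code># minimal Linux environment with Java 8 instead of Java 11/16
FROM openjdk:8-jdk-alpine
# copy the jar produced by the Gradle build (path is an assumption)
COPY build/libs/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
</code></pre>
<p>After editing the Dockerfile, the next Jenkins build will produce an image running Java 8.</p>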
|
<p>I've become familiar with Kubernetes only very recently. I want to deploy a project into my minikube cluster. Here is the yaml file:</p>
<p><em>deployment.yml</em></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: project
name: project
namespace: default
spec:
replicas: 1
selector:
matchLabels:
run: project
template:
metadata:
labels:
run: project
spec:
hostNetwork: true
containers:
- name: container1
image: image1
imagePullPolicy: Never
- name: container2
image: image2
imagePullPolicy: Never
</code></pre>
<p>I created the deployment successfully, and both containers in the pod are up and running, but these containers cannot connect to local resources like databases (e.g. Redis).</p>
<p>I get the logs via this command and the result is below:</p>
<p><code>kubectl logs -f project-5f5c6df6bc-q82s5 container1</code></p>
<blockquote>
<p><em>Error 111 connecting to 127.0.0.1:6379. Connection refused.</em></p>
</blockquote>
<p>I think you can use a Service of type <code>ExternalName</code> with the special hostname provided by Minikube:
<a href="https://minikube.sigs.k8s.io/docs/handbook/host-access/#hostminikubeinternal" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/host-access/#hostminikubeinternal</a>
This exposes the external host as a Service that can be consumed from inside Kubernetes.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: redis
spec:
type: ExternalName
externalName: host.minikube.internal
</code></pre>
<p>In the code that runs in the cluster, simply use the host <code>redis</code> so that Kubernetes can resolve the mapping to the desired host.</p>
|
<p>I have used some Bitnami charts in my Kubernetes app. In my pod, there is a file at the path /etc/settings/test.html, and I want to override it. When I searched, I figured out that I should mount my file by creating a ConfigMap. But how can I use the created ConfigMap with the <strong>existing pod</strong>? Many of the examples create a new pod and use the created ConfigMap there, but I don't want to create a new pod; I want to use the existing pod.</p>
<p>Thanks</p>
| <p>If not all then almost all pod specs are immutable, meaning that you can't change them without destroying the old pod and creating a new one with desired parameters. There is <em>no way</em> to edit pod volume list without recreating it.</p>
<p>The reason behind this is that pods aren't meant to be immortal. Pods are meant to be temporary units that can be spawned/destroyed according to scheduler needs. In general, you need a workload object that does pod management for you (a <code>Deployment</code>, <code>StatefulSet</code>, <code>Job</code>, or <code>DaemonSet</code>, depending on deployment strategy and application nature).</p>
<p>There are two ways to edit a file in an existing pod: either by using <code>kubectl exec</code> and console commands to edit the file in place, or <code>kubectl cp</code> to copy an already edited file into the pod. I advise you against <em>both of these</em>, because neither is permanent. Better to back up the necessary data, switch the workload to a <code>Deployment</code> with one replica, then go with mounting a <code>configMap</code> as you read on the Internet.</p>
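<p>As a sketch (all names here are hypothetical), the ConfigMap plus a <code>subPath</code> volume mount in a Deployment would override just the single file <code>/etc/settings/test.html</code> while leaving the rest of the directory intact:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: settings-override
data:
  test.html: |
    &lt;html&gt;&lt;body&gt;overridden content&lt;/body&gt;&lt;/html&gt;
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        volumeMounts:
        - name: settings
          # subPath mounts only this one file into /etc/settings
          mountPath: /etc/settings/test.html
          subPath: test.html
      volumes:
      - name: settings
        configMap:
          name: settings-override
</code></pre>
<p>Applying this manifest recreates the pod with the overridden file, which, as explained above, is unavoidable.</p>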
|
<p>I enabled AGIC in the Azure portal and then created <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">Fanout Ingress</a>. But it's not working. I checked Rules (ingress-appgateway > Rules > Path-based routing) and paths are targeting correct backend pool.</p>
<p>When I am testing health probe, it's failing ("MC_..." resource group > ingress-appgateway > Health probes > Click test) - showing error :</p>
<blockquote>
<p>One or more of your backend instances are unhealthy. It is recommended
to address this health issue first before attaching the probe</p>
</blockquote>
<p>I tried:</p>
<ul>
<li>disabling and enabling AGIC -> <strong>did not work</strong></li>
<li>using <code>pathType: ImplementationSpecific</code> (instead of <code>pathType: Prefix</code>) -> <strong>did not work</strong></li>
<li><code>nginx.ingress.kubernetes.io/rewrite-target: /$1</code> & <code>/foo(/|$)(.*)</code> -> <strong>did not work</strong></li>
</ul>
<p>At the top of the Overview page of "ingress-appgateway" (Azure portal), showing error:</p>
<blockquote>
<p>All the instances in one or more of your backend pools are unhealthy.
This will result in a 502 error when you try to access your
application hosted behind the Application Gateway. Please check the
backend health and resolve the issue.</p>
</blockquote>
<p>It works only If I remove paths (<code>/foo</code> & <code>/bar</code>) and target a single service.</p>
<p>FYI, I am using Azure CNI networking and existing VNet (dedicated subnet).</p>
<p><code>deployment.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: demo-web-app1
namespace: demo
spec:
selector:
app: demo-web-app1
type: ClusterIP
ports:
- protocol: TCP
port: 4200
targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: demo-web-app2
namespace: demo
spec:
selector:
app: demo-web-app2
type: ClusterIP
ports:
- protocol: TCP
port: 8080
targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-web-app1
namespace: demo
spec:
replicas: 2
selector:
matchLabels:
app: demo-web-app1
template:
metadata:
labels:
app: demo-web-app1
spec:
containers:
- name: demo-web-app1
image: myacr.azurecr.io/myacr6472:375
ports:
- containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-web-app2
namespace: demo
spec:
replicas: 3
selector:
matchLabels:
app: demo-web-app2
template:
metadata:
labels:
app: demo-web-app2
spec:
containers:
- name: demo-web-app2
image: myacr.azurecr.io/myacr6472:375
ports:
- containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-demo-web-app
namespace: demo
annotations:
kubernetes.io/ingress.class: azure/application-gateway
appgw.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /foo
pathType: Prefix
backend:
service:
name: demo-web-app1
port:
number: 4200
- path: /bar
pathType: Prefix
backend:
service:
name: demo-web-app2
port:
number: 8080
</code></pre>
| <p><code>appgw.ingress.kubernetes.io/backend-path-prefix: /</code> solved the problem as pointed out by @Matt in the comment section.</p>
<p>I can now target multiple backend pools using different paths i.e. <code>/api</code> for API service & <code>/app</code> for UI app.</p>
<p>I wrote <a href="https://hovermind.com/azure-kubernetes-service/application-gateway-ingress-controller#ingress-for-multiple-apis" rel="nofollow noreferrer">an article on my site about serving multiple APIs</a>.</p>
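<p>For reference, the annotation sits in the Ingress metadata alongside the existing ones (taken from the Ingress in the question):</p>
<pre><code>metadata:
  name: ingress-demo-web-app
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "false"
    # rewrite the matched path prefix (/foo, /bar) to / before
    # forwarding to the backend pool
    appgw.ingress.kubernetes.io/backend-path-prefix: "/"
</code></pre>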
|
<p>I have recently upgrade my AKS cluster from 1.16.x to 1.18.17 (it was a jump of two versions). I did the upgrade using the Azure Portal, not the CLI.</p>
<p>The upgrade itself has worked, I can see my cluster is now on version 1.18.17 and on first glance everything seems to be working as expected, but at the top of the Overview panel, this message is displayed:</p>
<p><em>This container service is in a failed state. Click here to go to diagnose and solve problems.</em></p>
<p>With the cluster in this state I can't scale, or upgrade, as I get an error telling me the operation isn't available whilst the cluster is upgrading, or in a failed state.</p>
<p>The supporting page the error links to doesn't give me any useful information. It doesn't even mention the fact my cluster is in a failed state.</p>
<p>I've seen this error once before when I was approaching the limit of our VM Compute quota. At the moment though, I am only using 10%, and I don't have enough pods and nodes to push it over. The only other quotas which are maxed are network watchers and I don't think that's related.</p>
<p>The scaling operation links to this support document: <a href="https://learn.microsoft.com/en-us/azure/aks/troubleshooting#im-receiving-errors-that-my-cluster-is-in-failed-state-and-upgrading-or-scaling-will-not-work-until-it-is-fixed" rel="nofollow noreferrer">aka.ms/aks-cluster-failed</a>, and the suggestion there is about quota sizes, which I have already tried.</p>
<p>I'm really scratching my head with this one; I can't find any useful support documents, blog posts, or other questions, so any help would be greatly appreciated!</p>
| <p>Answering my own question in the hope it can help others, or myself in the future.</p>
<p>I managed to get more information about the error by re-running the upgrade with the Azure CLI: <a href="https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster" rel="nofollow noreferrer">Upgrade an Azure Kubernetes Service (AKS) cluster</a>.</p>
<p>You can also use the cli to check for available updates. <a href="https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster#check-for-available-aks-cluster-upgrades" rel="nofollow noreferrer">Check for available AKS cluster upgrades</a>.</p>
<p>Using the cli seems to be a bit more informative when troubleshooting.</p>
|