<p>In the below yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx # Line 6
spec: # Line 7
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx # Line 11
  template:
    metadata:
      labels:
        app: my-nginx # Line 15
    spec: # Line 16
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi" # 128 MiB
            cpu: "200m" # 200 millicpu (0.2 CPU, i.e. 20% of one core)
</code></pre>
<hr />
<p>The Deployment is given a label (<code>app: my-nginx</code>) at Line 6.</p>
<p>The Deployment spec at Line 7 wraps the Pod template whose spec starts at Line 16.</p>
<hr />
<ol>
<li><p>What is the purpose of <code>selector</code> field with <code>matchLabels</code>?</p>
</li>
<li><p>What is the purpose of <code>template</code> field with <code>labels</code>?</p>
</li>
</ol>
| <p>Tried to add comments to explain the role of labels:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: my-nginx # LABEL-A: labels the Deployment object itself; can be used to filter/select the Deployment.
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx # LABEL-B: defines how the Deployment finds which Pods to manage.
  template:
    metadata:
      labels:
        app: my-nginx # LABEL-C: the label given to the Pods; must match LABEL-B.
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "128Mi" # 128 MiB
            cpu: "200m" # 200 millicpu (0.2 CPU, i.e. 20% of one core)
</code></pre>
<p><strong>LABEL-A:</strong> labels the Deployment object itself. It can be used to manage the Deployment, for example to filter by label:</p>
<pre><code>kubectl get deployments.apps -l app=my-nginx
</code></pre>
<p><strong>LABEL-B:</strong> there must be some place where we tell the controller which Pods to manage. This field defines how the Deployment (via its ReplicaSet) finds those Pods. Based on these labels, the controller ensures the desired number of matching Pods are ready.</p>
<p><strong>LABEL-C:</strong> the label applied to the Pods, which the selector (LABEL-B) uses to find and monitor them. It must match LABEL-B, otherwise the API server rejects the Deployment.</p>
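<p>To illustrate why LABEL-B and LABEL-C must match, here is a hypothetical manifest where they differ; the API server rejects it at creation time:</p>
<pre><code># Hypothetical: the selector does not match the Pod template labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: broken
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx          # LABEL-B
  template:
    metadata:
      labels:
        app: something-else  # LABEL-C differs from LABEL-B
    spec:
      containers:
      - name: my-nginx
        image: nginx:alpine
</code></pre>
<p>Applying this fails with an error along the lines of <code>spec.template.metadata.labels: Invalid value: ... selector does not match template labels</code>.</p>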
|
<p>I built a kubernetes cluster on CentOS 8 first. I followed the how-to found here: <a href="https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/" rel="nofollow noreferrer">https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/</a></p>
<p>And then I built an Ubuntu 18.04 VM and installed Rancher on it. I can access the Rancher website just fine and all appears to be working on the rancher side, except I can't add my kubernetes cluster to it.</p>
<p>When I use the "Add Cluster" feature, I chose the "Other Cluster" option, give it a name, and then click create. I then copy the insecure "Cluster Registration Command" to the master node. It appears to take the command just fine.</p>
<p>In troubleshooting, I've issued the following command: <code>kubectl -n cattle-system logs -l app=cattle-cluster-agent</code></p>
<p>The output I get is as follows:</p>
<pre><code>INFO: Environment: CATTLE_ADDRESS=10.42.0.1 CATTLE_CA_CHECKSUM=94ad10e756d390cdf8b25465f938c04344a396b16b4ff6c0922b9cd6b9fc454c CATTLE_CLUSTER=true CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES= CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7b9df685cf-9kr4p CATTLE_SERVER=https://192.168.188.189:8443
INFO: Using resolv.conf: nameserver 10.96.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: https://192.168.188.189:8443/ping is not accessible (Failed to connect to 192.168.188.189 port 8443: No route to host)
INFO: Environment: CATTLE_ADDRESS=10.40.0.0 CATTLE_CA_CHECKSUM=94ad10e756d390cdf8b25465f938c04344a396b16b4ff6c0922b9cd6b9fc454c CATTLE_CLUSTER=true CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES= CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7bc7687557-tkvzt CATTLE_SERVER=https://192.168.188.189:8443
INFO: Using resolv.conf: nameserver 10.96.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: https://192.168.188.189:8443/ping is not accessible (Failed to connect to 192.168.188.189 port 8443: No route to host)
[root@k8s-master ~]# ping 192.168.188.189
PING 192.168.188.189 (192.168.188.189) 56(84) bytes of data.
64 bytes from 192.168.188.189: icmp_seq=1 ttl=64 time=0.432 ms
64 bytes from 192.168.188.189: icmp_seq=2 ttl=64 time=0.400 ms
^C
--- 192.168.188.189 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.400/0.416/0.432/0.016 ms
</code></pre>
<p>As you can see I'm getting a "No route to host" error message. But, I can ping the rancher VM using its IP address.</p>
<p>It appears to be attempting to use resolv.conf inside the cluster and looking to use a 10.96.0.10 to resolve the ip address of 192.168.188.189 (my Rancher VM). But it appears to be failing to resolve it.</p>
<p>I'm thinking I have some sort of DNS issue that's preventing me from using hostnames, though I've edited the /etc/hosts file on the master and worker nodes to include entries for each of the devices. I can ping devices using their hostname, but I can't reach a pod using &lt;hostname&gt;:&lt;nodeport&gt;. I get a "No route to host" error message when I try that too. See here:</p>
<pre><code>[root@k8s-master ~]# ping k8s-worker1
PING k8s-worker1 (192.168.188.191) 56(84) bytes of data.
64 bytes from k8s-worker1 (192.168.188.191): icmp_seq=1 ttl=64 time=0.478 ms
64 bytes from k8s-worker1 (192.168.188.191): icmp_seq=2 ttl=64 time=0.449 ms
^C
--- k8s-worker1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.449/0.463/0.478/0.025 ms
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world NodePort 10.103.5.49 <none> 8080:30370/TCP 45m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h
nginx NodePort 10.97.172.245 <none> 80:30205/TCP 3h43m
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-world-7884c6997d-2dc9z 1/1 Running 0 28m 10.40.0.4 k8s-worker3 <none> <none>
hello-world-7884c6997d-562lh 1/1 Running 0 28m 10.35.0.8 k8s-worker2 <none> <none>
hello-world-7884c6997d-78dmm 1/1 Running 0 28m 10.36.0.3 k8s-worker1 <none> <none>
hello-world-7884c6997d-7vt4f 1/1 Running 0 28m 10.40.0.6 k8s-worker3 <none> <none>
hello-world-7884c6997d-bpq5g 1/1 Running 0 49m 10.36.0.2 k8s-worker1 <none> <none>
hello-world-7884c6997d-c529d 1/1 Running 0 28m 10.35.0.6 k8s-worker2 <none> <none>
hello-world-7884c6997d-ddk7k 1/1 Running 0 28m 10.36.0.5 k8s-worker1 <none> <none>
hello-world-7884c6997d-fq8hx 1/1 Running 0 28m 10.35.0.7 k8s-worker2 <none> <none>
hello-world-7884c6997d-g5lxs 1/1 Running 0 28m 10.40.0.3 k8s-worker3 <none> <none>
hello-world-7884c6997d-kjb7f 1/1 Running 0 49m 10.35.0.3 k8s-worker2 <none> <none>
hello-world-7884c6997d-nfdpc 1/1 Running 0 28m 10.40.0.5 k8s-worker3 <none> <none>
hello-world-7884c6997d-nnd6q 1/1 Running 0 28m 10.36.0.7 k8s-worker1 <none> <none>
hello-world-7884c6997d-p6gxh 1/1 Running 0 49m 10.40.0.1 k8s-worker3 <none> <none>
hello-world-7884c6997d-p7v4b 1/1 Running 0 28m 10.35.0.4 k8s-worker2 <none> <none>
hello-world-7884c6997d-pwpxr 1/1 Running 0 28m 10.36.0.4 k8s-worker1 <none> <none>
hello-world-7884c6997d-qlg9h 1/1 Running 0 28m 10.40.0.2 k8s-worker3 <none> <none>
hello-world-7884c6997d-s89c5 1/1 Running 0 28m 10.35.0.5 k8s-worker2 <none> <none>
hello-world-7884c6997d-vd8ch 1/1 Running 0 28m 10.40.0.7 k8s-worker3 <none> <none>
hello-world-7884c6997d-wvnh7 1/1 Running 0 28m 10.36.0.6 k8s-worker1 <none> <none>
hello-world-7884c6997d-z57kx 1/1 Running 0 49m 10.36.0.1 k8s-worker1 <none> <none>
nginx-6799fc88d8-gm5ls 1/1 Running 0 4h11m 10.35.0.1 k8s-worker2 <none> <none>
nginx-6799fc88d8-k2jtw 1/1 Running 0 4h11m 10.44.0.1 k8s-worker1 <none> <none>
nginx-6799fc88d8-mc5mz 1/1 Running 0 4h12m 10.36.0.0 k8s-worker1 <none> <none>
nginx-6799fc88d8-qn6mh 1/1 Running 0 4h11m 10.35.0.2 k8s-worker2 <none> <none>
[root@k8s-master ~]# curl k8s-worker1:30205
curl: (7) Failed to connect to k8s-worker1 port 30205: No route to host
</code></pre>
<p>I suspect this is the underlying reason why I can't join the cluster to rancher.</p>
<p>EDIT: I want to add additional details to this question. Each of my nodes (master & worker nodes) has the following ports open on the firewall:</p>
<pre><code>firewall-cmd --list-ports
6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 6783/tcp 6783/udp 6784/udp
</code></pre>
<p>For the CNI, the Kubernetes cluster is using Weavenet.</p>
<p>Each node (master & worker) is configured to use my main home DNS server (which is also an Active Directory domain controller) in its networking configuration. I've created A records for each node in the DNS server. The nodes are NOT joined to the domain. However, I've also edited each node's /etc/hosts file to contain the following records:</p>
<pre><code># more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.188.190 k8s-master
192.168.188.191 k8s-worker1
192.168.188.192 k8s-worker2
192.168.188.193 k8s-worker3
</code></pre>
<p>I've found that I CAN use "curl k8s-worker1.mydomain.com:30370" with about 33% success. But I would have thought that the /etc/hosts file would take precedence over using my home DNS server.</p>
<p>And finally, I noticed an additional anomaly. I've discovered that the cluster is not load balancing across the three worker nodes. As shown above, I'm running a deployment called "hello-world" based on the bashofmann/rancher-demo image with 20 replicas. I've also created a nodeport service for hello-world that maps nodeport 30370 to port 8080 on each respective pod.</p>
<p>If I open my web browser and go to <a href="http://192.168.188.191:30370" rel="nofollow noreferrer">http://192.168.188.191:30370</a> then it'll load the website but only served up by pods on k8s-worker1. It'll never load the website served up by any pods on any of the other worker nodes. This would explain why I only get ~33% success, as long as it's served up by the same worker node that I've specified in my url.</p>
| <p>OP confirmed that the issue was due to firewall rules. This was debugged by disabling the firewall, which allowed the desired operation (adding the cluster) to succeed.</p>
<p>For a NodePort service to work properly, the port range <code>30000-32767</code> must be reachable on all the nodes of the cluster.</p>
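<p>On CentOS with firewalld, opening that range could look like the following (a sketch; adjust the zone to your setup before relying on it):</p>
<pre><code># open the NodePort range on every node, then reload the rules
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
</code></pre>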
|
<p>We have an AKS cluster set up with a multiple availability zone node pool. Using the <code>default</code> storage class, if a Pod needs to move to another node and the only available node is in a different region, the Pod can't start up because the storage is stuck in the original region. Do any of the other built-in storage classes support the relocation of workloads across multi-zone pools?</p>
<pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
  namespace: $NAMESPACE
  labels:
    service: db
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 4Gi
</code></pre>
| <p>Yes, you can use a configuration like the one below, depending on your needs.</p>
<p><strong>Example StorageClass.yaml</strong></p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_ZRS
  location: eastus
  storageAccount: azure_storage_account_name
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - eastus2-1
          - eastus2-2
          - eastus2-3
</code></pre>
<p>The following <code>skuName</code> values are available in Azure:</p>
<ul>
<li><strong>Standard_LRS</strong> — standard locally redundant storage (LRS)</li>
<li><strong>Standard_GRS</strong> — standard geo-redundant storage (GRS)</li>
<li><strong>Standard_ZRS</strong> — standard zone redundant storage (ZRS)</li>
<li><strong>Standard_RAGRS</strong> — standard read-access geo-redundant storage (RA-GRS)</li>
<li><strong>Premium_LRS</strong> — premium locally redundant storage (LRS)</li>
<li><strong>Premium_ZRS</strong> — premium zone redundant storage (ZRS)</li>
</ul>
<p>References: <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#allowed-topologies" rel="nofollow noreferrer">K8s Allowed Topologies</a>, <a href="https://learn.microsoft.com/en-us/azure/aks/availability-zones" rel="nofollow noreferrer">AKS - Availability Zones</a>, <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-storage#storage-classes" rel="nofollow noreferrer">AKS - StorageClasses</a></p>
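<p>A PersistentVolumeClaim can then request the zone-redundant class by name. A minimal sketch, reusing the <code>azurefile</code> class defined above (the claim name is an assumption):</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteMany        # azure-file volumes support RWX
  storageClassName: azurefile
  resources:
    requests:
      storage: 4Gi
</code></pre>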
|
<p>I am using Spring Cloud Kubernetes and I am trying to make Feign send requests based on the names of the services present in Kubernetes, but I can't. When I try to make a request the following error occurs:</p>
<pre><code> "timestamp": "2019-12-06T15:37:50.285+0000",
"status": 500,
"error": "Internal Server Error",
"message": "com.netflix.client.ClientException: Load balancer does not have available server for client: poc-saldo",
"trace": "java.lang.RuntimeException: com.netflix.client.ClientException: Load balancer does not have available server for client: poc-saldo\n\tat org.springframework.cloud.openfeign.ribbon.LoadBalancerFeignClient.execute....
</code></pre>
<p>I tried to call other services within the cluster and the problem is the same for all of them. I did a test by going into the poc-deposito pod and curling poc-saldo directly, and it works normally, so the problem is apparently not with the poc-saldo service or with Kubernetes's service discovery.</p>
<p>The project has a public profile at:</p>
<pre><code>https://gitlab.com/viniciusxyz/spring-kubernetes-feign
</code></pre>
<p>For those who want more direct information:</p>
<p>My main class is as follows:</p>
<pre><code>@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class ServiceDiscoveryApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServiceDiscoveryApplication.class, args);
    }
}
</code></pre>
<p>My interface with feign is as follows:</p>
<pre><code>@FeignClient("poc-saldo")
public interface ProxyGenerico {

    @RequestMapping(method = RequestMethod.GET)
    String getHttpResponse();
}
</code></pre>
<p>I can list the services available in kubernetes within the application as follows:</p>
<pre><code>@RestController
public class RestTest {

    @Autowired
    private DiscoveryClient discoveryClient;

    @Autowired
    private ProxyGenerico proxyGenerico;

    @GetMapping("/services")
    public ResponseEntity<?> services() {
        return new ResponseEntity<Object>(discoveryClient.getServices(), HttpStatus.OK);
    }

    @GetMapping("/pocsaldo")
    public ResponseEntity<?> gitlab() {
        return new ResponseEntity<Object>(proxyGenerico.getHttpResponse(), HttpStatus.OK);
    }
}
</code></pre>
<p>And in this list I have several services, among them the service I want to access, called poc-saldo. The returned JSON looks like the following:</p>
<pre><code>[
  "poc-deposito",
  "poc-saldo",
  "sonarqube",
  "sql-server-sonar",
  "zookeeper",
  "gitlab"
]
</code></pre>
<p>To complement the list follow my dependencies:</p>
<pre><code><dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
</dependency>
</code></pre>
<p>The <code>discoveryClient.getInstances("poc-saldo")</code> command returns:</p>
<pre><code>[
  {
    "instanceId": "32a4db0d-0549-11ea-8850-e0d55ef66cf8",
    "serviceId": "poc-saldo",
    "secure": false,
    "metadata": {
      "helm.sh/chart": "spring-boot-app-0.1.23",
      "port.http": "8080",
      "app.kubernetes.io/managed-by": "Tiller",
      "app.kubernetes.io/name": "poc-saldo",
      "app.kubernetes.io/instance": "banco-digital-poc-saldo",
      "app.kubernetes.io/version": "1.0"
    },
    "port": 8080,
    "host": "10.42.0.60",
    "scheme": "http://",
    "uri": "http://10.42.0.60:8080"
  }
]
</code></pre>
<p>Can you think of where the problem might be?</p>
| <p>Long story short, <strong>Spring Cloud Feign works perfectly fine with Spring Cloud Kubernetes</strong> as of July 2021. I've moved a <em>Spring Cloud Feign</em> project from using <em>Spring Cloud Netflix</em> to <em>Spring Cloud Kubernetes</em> and no change in the Feign interfaces was required. I just removed the previous <em>Service Registry (Eureka)</em> dependency from my <code>build.gradle</code> file and added:</p>
<pre><code>implementation 'org.springframework.cloud:spring-cloud-starter-kubernetes-client-all'
</code></pre>
<p>The support for <em>Spring Cloud LoadBalancer</em> which is now used by <em>Spring Cloud Feign</em> was added to <em>Spring Cloud Kubernetes</em> and <em>Ribbon</em> was removed as of <strong>Spring Cloud 2020.0. (aka Ilford)</strong>, therefore there is no need to exclude it anymore.</p>
<p>Beyond that, the only change required in the whole codebase was annotating the Spring Boot application class with <code>@EnableDiscoveryClient</code> to enable K8s-native Service Discovery:</p>
<pre><code>@EnableFeignClients
@EnableDiscoveryClient
@SpringBootApplication
public class BookApplication {
</code></pre>
<p><em>Spring Cloud Feign</em> uses <em>Spring Cloud LoadBalancer</em> which when running on Kubernetes leverages <a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#discoveryclient-for-kubernetes" rel="noreferrer">Discovery Client for Kubernetes</a> to check for service instances. As a result, it only chooses from instances that are up and running. <strong>The only requirement is to align the Kubernetes service name with</strong> <code>spring.application.name</code> <strong>property</strong>. Just like that:</p>
<p><em>application.properties(yaml)</em>:</p>
<pre><code>spring.application.name=library-book-service
</code></pre>
<p>combined with the following Kubernetes configuration</p>
<p><em>kubectl get svc</em>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
library-book-service ClusterIP 10.100.200.235 <none> 8080/TCP 5d21h
</code></pre>
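<p>With the names aligned, a Feign client refers to the Kubernetes service purely by its name. A minimal sketch (the <code>/books</code> endpoint and the <code>BookDto</code> type are hypothetical, not part of the original answer):</p>
<pre><code>@FeignClient("library-book-service")   // must match the Kubernetes service name
public interface BookClient {

    @GetMapping("/books")              // hypothetical endpoint for illustration
    List<BookDto> findAll();
}
</code></pre>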
|
<p>I'm running Docker Desktop for MacOS and I don't know how to stop the Docker service. It runs all the time using up the MacBook battery.</p>
<img src="https://i.stack.imgur.com/buRf4.png" width="200" />
<p>On a simple search, there are docs showing how to stop the containers but not the docker service itself.</p>
<p>I might be missing something obvious, but is there a way to stop both Kubernetes and Docker service without having to kill the desktop app?</p>
| <p>The Docker Desktop app starts a QEMU VM, so the desktop app has no control over those PIDs.
To work around this, do the following:</p>
<ul>
<li><p>open the Terminal app</p>
</li>
<li><p>edit the file <code>~/.bash_profile</code></p>
</li>
<li><p>add the following lines</p>
</li>
</ul>
<pre>
# macro to kill the Docker Desktop app and the VM (excluding vmnetd -> it's a service)
function kdo() {
  ps ax | grep -i docker | egrep -iv 'grep|com.docker.vmnetd' | awk '{print $1}' | xargs kill
}
</pre>
<ul>
<li>save the file</li>
</ul>
<p>Quit the terminal app and open it again.</p>
<p>Type <code>kdo</code> to kill all the dependent apps (hypervisor, docker daemon, etc.)</p>
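<p>A gentler alternative (an assumption worth verifying on your own setup) is to ask macOS to quit the app cleanly, which also shuts down the VM:</p>
<pre><code># ask the Docker Desktop app to quit gracefully
osascript -e 'quit app "Docker"'
</code></pre>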
|
<p>I have an image that requires root privilege to start.</p>
<p>Now I'm trying to deploy it on OpenShift.</p>
<p>This is the deployment YAML I used to deploy it:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyz
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  selector:
    matchLabels:
      name: xyz
  template:
    metadata:
      labels:
        name: xyz
    spec:
      serviceAccount: runasanyuid
      serviceAccountName: runasanyuid
      hostNetwork: true
      containers:
      - name: xyz
        image: 172.30.1.1:5000/myproject/xyz@sha256:bf3d219941ec0de7f52f6babbca23e03cc2611d327552b08f530ead9ec627ec2
        imagePullPolicy: Always
        securityContext:
          capabilities:
            add:
            - ALL
          privileged: false
          allowPrivilegeEscalation: false
          runAsUser: 0
        resources:
          limits:
            memory: "12000Mi"
          requests:
            memory: "6000Mi"
        ports:
        - containerPort: 2102
        command:
        - /usr/sbin/sshd -D
</code></pre>
<p>Please note that I already created an SCC called 'scc-admin' to run the pods in the project I'm working on with any UID, as I know that OpenShift doesn't allow pods to start with root privileges by default.</p>
<pre><code>kind: SecurityContextConstraints
apiVersion: v1
metadata:
  name: scc-admin
allowPrivilegedContainer: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- developer
groups:
- developer
</code></pre>
<p>That's what I found on the internet as a solution for my issue, but I guess it didn't work either :(</p>
<pre><code>[root@centos72_base ~]# oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid true [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret]
hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret]
hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]
hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*]
restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk portworxVolume projected quobyte rbd scaleIO secret storageOS vsphere]
[root@centos72_base ~]#
</code></pre>
<p>Please also note that this image works fine with Docker using the below command:</p>
<pre><code>docker run -d --network host --privileged --cap-add=ALL --security-opt seccomp=unconfined --name xyz 172.30.1.1:5000/myproject/xyz /usr/sbin/sshd -D
</code></pre>
<pre><code>[root@centos72_base ~]# docker ps | grep xyz
793e339ff732 172.30.1.1:5000/myproject/xyz "/usr/sbin/sshd -D" About a minute ago Up About a minute xyz
</code></pre>
<p>And on OpenShift I get these errors with the deployment file I provided above:</p>
<blockquote>
<p>Error creating: pods "xyz-7966f58588-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000140000, 1000149999] capabilities.add: Invalid value: "ALL": capability may not be added]</p>
</blockquote>
<p>which means that I have to remove</p>
<pre><code> capabilities:
add:
- ALL
</code></pre>
<p>and</p>
<pre><code> runAsUser: 0
</code></pre>
<p>to start the pod</p>
<p>and when I remove them from the YAML file, the pod goes into a CrashLoopBackOff.</p>
<p>So, can anyone please help me with that?</p>
| <p>The SCC you created currently applies to the User <code>developer</code>, and the group <code>developer</code>.</p>
<p>Your deployment uses a ServiceAccount <code>runasanyuid</code>.</p>
<p>You need to edit your SecurityContextConstraint, allowing that ServiceAccount:</p>
<pre><code>[...]
users:
- developer
- system:serviceaccount:<namespace-for-your-deployment>:runasanyuid
groups:
- developer
</code></pre>
<p>As a side note, instead of creating your own SCC, if you just need to run a privileged container, OpenShift ships with the <code>anyuid</code> one that you could have re-used - just adding your SA into its users list.</p>
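<p>Adding a ServiceAccount to an SCC can also be done with the CLI instead of editing the users list by hand; a sketch (substitute your namespace):</p>
<pre><code># grant the built-in anyuid SCC to the runasanyuid ServiceAccount
oc adm policy add-scc-to-user anyuid -z runasanyuid -n <namespace>
</code></pre>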
<p>That said, the best practice is to avoid privileged containers. Unless there is a good reason, you should not run processes as root, which has been true since long before OCI.</p>
<p>If your application needs to write stuff some place, you could use some emptyDir volumes. If your application tries to bind on privileged ports, you should be able to reconfigure it. If your application complains about not being able to resolve an username for its UID, you could look into nsswrapper. Granting privileges with SecurityContextConstraints or PodSecurityPolicies isn't the norm, they're meant to be exceptions, carefully thought.</p>
<p>If your Pod crashes when running as non-root, try to check its logs. If that doesn't help, try to <code>oc debug -n <namespace> pod/<podname></code>, which should start a new Pod, opening a shell into it, where you would be able to execute its entrypoint yourself, maybe try another set of options, slightly change your configuration, ... until you would get it right. You could even try from your workstation, <code>docker run --user=12435679 xxx</code>: pass it a random UID and see how your container/application deals with it.</p>
|
<p>I have deployed an app and exposed it as a LoadBalancer service. I added the resource field in the YAML of the deployment to request 100m CPU. I defined an HPA to scale the app when the CPU goes above 50%. The app fails to autoscale and the CPU utilization always shows as unknown. <code>kubectl describe hpa</code> gives the following result:</p>
<pre><code>Name: storyexporter-hpa
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"storyexporter-hpa","namespace":"...
CreationTimestamp: Sat, 24 Oct 2020 18:23:46 +0530
Reference: Deployment/storyexporter-deployment
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 3
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: missing request for cpu
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 11s (x7 over 103s) horizontal-pod-autoscaler missing request for cpu
Warning FailedComputeMetricsReplicas 11s (x7 over 103s) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
</code></pre>
<p>Kubectl top node commands works. I deployed a demo wordpress app and attached HPA for the same and it shows CPU utilization instead of unknown.</p>
<p>Attaching my yaml for deployment and HPA.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: storyexporter
  labels:
    app: storyexporter
spec:
  ports:
  - port: 8080
  selector:
    app: storyexporter
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storyexporter-deployment
spec:
  selector:
    matchLabels:
      app: storyexporter
  replicas: 1
  template:
    metadata:
      labels:
        app: storyexporter
    spec:
      containers:
      - name: storyexporter
        image: <ImagePath>
        env:
        - name: STORYEXPORTER_MONGO_HOST
          value: storyexporter-mongodb
        - name: STORYEXPORTER_MONGO_USERNAME
          value: admin
        - name: STORYEXPORTER_MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-password
              key: password
        - name: STORYEXPORTER_RABBIT_HOST
          value: storyexporter-rabbitmq
        - name: STORYEXPORTER_RABBIT_USERNAME
          value: guest
        - name: STORYEXPORTER_RABBIT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rabbitmq-password
              key: password
        - name: EXPIRED_RESOURCES_TTL
          value: '3600000'
        - name: CHROMIUM_TIMEOUT_IN_SECONDS
          value: '900'
        - name: CHROMIUM_WINDOW_SIZE
          value: '1920,1020'
        - name: AVG_MB_PER_STORY
          value: '1000'
        - name: CHROMIUM_ATTEMPTS_BEFORE_FAIL
          value: '0'
        - name: JAVA_OPTS
          value: ''
        - name: SKIP_EQS_ROUTING
          value: 'false'
        - name: CHROMIUM_POOL_SIZE
          value: '4'
        - name: DEV
          value: 'true'
        - name: LOCAL
          value: 'true'
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "100m"
          limits:
            cpu: "200m"
        imagePullPolicy: Always
      imagePullSecrets:
      - name: regcred
</code></pre>
<p>HPA YAML</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: storyexporter-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: storyexporter-deployment
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
</code></pre>
| <p>If you are using a multi-container Pod, you must set resources (i.e. CPU and memory requests and limits) for all of the containers.</p>
<pre><code>+ resources:
+   requests:
+     cpu: 100m
+     memory: "256Mi"
+   limits:
+     cpu: 200m
+     memory: "512Mi"
</code></pre>
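<p>For example, with a sidecar (the container names and images below are hypothetical), every container needs its own resources block; if any container lacks a CPU request, the HPA keeps reporting <code>missing request for cpu</code>:</p>
<pre><code>spec:
  containers:
  - name: app
    image: myapp:latest          # hypothetical
    resources:
      requests:
        cpu: 100m
      limits:
        cpu: 200m
  - name: sidecar
    image: my-sidecar:latest     # hypothetical
    resources:                   # required on this container too
      requests:
        cpu: 50m
      limits:
        cpu: 100m
</code></pre>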
|
<p>I'd like to create an nginx ingress controller with an AWS internal NLB. The requirement is to fix the IP address of the NLB endpoint. For example, currently the NLB DNS name of the nginx ingress service is abc.elb.eu-central-1.amazonaws.com, which resolves to the IP address 192.168.1.10; if I delete and re-create the nginx ingress controller, the NLB DNS name must be the same as before.</p>
<p>Having a look at the Kubernetes service annotations, I did not see any way to re-use an existing NLB. However, I found the annotation service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses in this <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/" rel="nofollow noreferrer">link</a>. As far as I understand, it allows me to set the IP address of the NLB, but it does not work as I expected: every time I re-create the nginx controller, the IP address is different. Below is the K8s service YAML file.</p>
<pre><code># Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "10.136.103.251"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-00df069133b22"
  labels:
    helm.sh/chart: ingress-nginx-3.23.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
</code></pre>
<p>I know this requirement is weird; is it possible to do that?</p>
| <p>If your Kubernetes cluster runs on a VPC with more than one subnet (which is probably the case), you must provide a private IP address for each subnet.</p>
<p>I installed the <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">AWS Load Balancer Controller</a> with its Helm chart, then I installed the nginx ingress controller with this Helm chart:</p>
<pre><code>helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress -f internal-ingress-values.yaml
</code></pre>
<p>Here the content of internal-ingress-values.yaml</p>
<pre><code>controller:
  ingressClass: nginx
  service:
    enableHttp: false
    enableHttps: true
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: external
      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
      service.beta.kubernetes.io/aws-load-balancer-scheme: internal
      service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.136.103.251, 10.136.104.251
      service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-00a1a7f9949aa0ba1, subnet-12ea9f1df24aa332c
  ingressClassResource:
    enabled: true
    default: true
</code></pre>
<p>According to the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/#subnets" rel="nofollow noreferrer">documentation</a> the <strong>service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses</strong> annotation <em>length/order must match subnets</em></p>
<p>So you must provide the IP addresses and the subnets in the same order (don't mismatch them).
If you take my example above, you must make sure that:</p>
<ul>
<li>10.136.103.251 is included in subnet-00a1a7f9949aa0ba1</li>
<li>10.136.104.251 is included in subnet-12ea9f1df24aa332c</li>
</ul>
<p>It's a good idea to tag your subnets according to the <a href="https://aws.amazon.com/fr/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/" rel="nofollow noreferrer">documentation</a>:</p>
<p>Key: kubernetes.io/cluster/my-cluster-name
Value: shared</p>
<p>Key: kubernetes.io/role/internal-elb
Value: 1</p>
<p>I tested this on K8S 1.20 and it works for my project.
Don't provide <code>ingressClassResource</code> if you're on K8S <= 1.17.</p>
|
<p>I am trying to install Argo CLI by following this (<a href="https://github.com/argoproj/argo-workflows/releases" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/releases</a>) documentation.</p>
<pre><code># Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.1.3/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
mv ./argo-linux-amd64 /usr/local/bin/argo
# Test installation
argo version
</code></pre>
<p>The above instructions are not working. So, I followed the answer to this (<a href="https://stackoverflow.com/questions/64916480/how-to-update-argo-cli">How to update Argo CLI?</a>) question.</p>
<pre><code>curl -sLO https://github.com/argoproj/argo/releases/download/v2.12.0-rc2/argo-linux-amd64
chmod +x argo-linux-amd64
./argo-linux-amd64
</code></pre>
<p>But I am getting the following error:</p>
<pre><code>./argo-linux-amd64: line 1: Not: command not found
</code></pre>
<p>I also tried moving the <code>argo-linux-amd64</code> binary to <code>/usr/local/bin/argo</code> but still getting the same error (as expected).</p>
<p>Is there any solution to this?</p>
<p>Thank you.</p>
| <p>The download links on the Releases page are incorrect. Try this one:</p>
<pre><code>curl -sLO https://github.com/argoproj/argo-workflows/releases/download/v3.1.3/argo-linux-amd64.gz
</code></pre>
<p>I've submitted an <a href="https://github.com/argoproj/argo-workflows/issues/6440" rel="nofollow noreferrer">issue</a> to get the links fixed.</p>
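<p>The <code>./argo-linux-amd64: line 1: Not: command not found</code> error itself is a clue: with the wrong URL, the downloaded file is GitHub's "Not Found" text page rather than a binary, so the shell tried to execute the word <code>Not</code>. A quick sanity check before running any downloaded file (the failed download is simulated here with a local file; the file name is just the one from the question):</p>

```shell
# Simulate a failed download: a wrong GitHub URL returns the text "Not Found"
printf 'Not Found' > argo-linux-amd64
# Inspect the file content before trying to execute it
if grep -q '^Not Found' argo-linux-amd64; then
  result='bad download'
else
  result='looks like a real file'
fi
echo "$result"
rm -f argo-linux-amd64
```

If the check reports a bad download, fix the URL (as above) and download again instead of running the file.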
|
<p>After some struggling I got fluentd to forward OpenShift audit log files to Splunk. However, this resulted in a huge number of events, so I applied a filter to exclude "get" and "watch". I would still like to include the "get secrets" events.</p>
<p>My question: how do I change the filter to exclude "get" in general but include "get secret"?</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: splunk-kubernetes-audit
namespace: splunk-logging
labels:
app: splunk-kubernetes-audit
data:
fluent.conf: |-
<system>
log_level info
</system>
@include source.audit.conf
@include output.conf
output.conf: |-
<label @SPLUNK>
# = filters for non-container log files =
# extract sourcetype
<filter tail.file.**>
@type grep
<exclude>
key verb
pattern /watch/
</exclude>
<and>
<exclude>
key verb
pattern /get/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type jq_transformer
jq '.record.sourcetype = (.tag | ltrimstr("tail.file.")) | .record.cluster_name = "opcdev" | .record.splunk_index = "openshift_audit_n" | .record'
</filter>
# = custom filters specified by users =
# = output =
<match **>
@type splunk_hec
protocol https
hec_host "splunk-heavyforwarder.linux.rabobank.nl"
hec_port 8088
hec_token "#{ENV['SPLUNK_HEC_TOKEN']}"
index_key splunk_index
insecure_ssl false
ca_file /fluentd/etc/splunk/hec_ca_file
host "#{ENV['K8S_NODE_NAME']}"
source_key source
sourcetype_key sourcetype
<fields>
# currently CRI does not produce log paths with all the necessary
# metadata to parse out pod, namespace, container_name, container_id.
# this may be resolved in the future by this issue: https://github.com/kubernetes/kubernetes/issues/58638#issuecomment-385126031
pod
namespace
container_name
cluster_name
container_id
</fields>
app_name splunk-kubernetes-audit
app_version 1.4.7
<buffer>
@type memory
chunk_limit_records 100000
chunk_limit_size 10m
flush_interval 10s
flush_thread_count 1
overflow_action block
retry_max_times 5
retry_type exponential_backoff
retry_wait 2
retry_max_interval 300
total_limit_size 600m
</buffer>
<format>
@type "json"
</format>
</match>
</label>
source.audit.conf: |-
# This fluentd conf file contains sources for log files other than container logs.
<source>
@id tail.file.kube-api-audit
@type tail
@label @SPLUNK
tag tail.file.kube-api-audit
path /var/log/kube-apiserver/audit.log
pos_file /var/log/splunk-fluentd-audit-kube-api-audit.pos
read_from_head true
path_key source
<parse>
@type json
</parse>
</source>
<source>
@id tail.file.oauth-api-audit
@type tail
@label @SPLUNK
tag tail.file.oauth-api-audit
path /var/log/oauth-apiserver/audit.log
pos_file /var/log/splunk-fluentd-audit-oauth-api-audit.pos
read_from_head true
path_key source
<parse>
@type json
</parse>
</source>
<source>
@id tail.file.openshift-api-audit
@type tail
@label @SPLUNK
tag tail.file.openshift-api-audit
path /var/log/openshift-apiserver/audit.log
pos_file /var/log/splunk-fluentd-audit-openshift-api-audit.pos
read_from_head true
path_key source
<parse>
@type json
</parse>
</source>
</code></pre>
<p>the secrets</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
labels:
app: splunk-kubernetes-audit
name: splunk-kubernetes-audit
namespace: splunk-logging
type: Opaque
data:
hec_ca_file: {{ base64 encoded CA certificate }}
splunk_hec_token: {{ base64 encoded Get_token_for_index }}
</code></pre>
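<p>The <code>{{ }}</code> placeholders above must hold base64-encoded values. A sketch with a hypothetical token; using <code>printf</code> (rather than <code>echo</code>) avoids encoding a trailing newline, which would break the HEC token:</p>

```shell
# Base64-encode a hypothetical HEC token for the Secret manifest;
# printf writes no trailing newline, so nothing extra gets encoded
encoded=$(printf '%s' 'my-hec-token' | base64)
echo "$encoded"
```

The same command is used for the CA certificate value (<code>base64</code> on the PEM file).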
<p>and the daemonset</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
configmap.update: "1"
deprecated.daemonset.template.generation: "34"
generation: 34
labels:
app: splunk-kubernetes-audit
engine: fluentd
name: splunk-kubernetes-audit
namespace: splunk-logging
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app: splunk-kubernetes-audit
release: rabo-splunk
template:
metadata:
annotations:
checksum/config: 0574cfe32baa34dcb02d7e3293f7c5ac0379ffb45cf4b7e455eb6975e6102320
configmap.update.trigger: "1"
prometheus.io/port: "24231"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: splunk-kubernetes-audit
release: rabo-splunk
spec:
containers:
- env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: MY_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: SPLUNK_HEC_TOKEN
valueFrom:
secretKeyRef:
key: splunk_hec_token
name: splunk-kubernetes-audit
- name: SSL_CERT_FILE
value: /fluentd/etc/splunk/hec_ca_file
image: docker.io/splunk/fluentd-hec:1.2.6
imagePullPolicy: Always
name: splunk-fluentd-k8s-audit
ports:
- containerPort: 24231
name: metrics
protocol: TCP
resources:
requests:
cpu: 500m
memory: 600Mi
securityContext:
privileged: true
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/log
name: varlog
- mountPath: /var/log/kube-apiserver
name: varlogkube
readOnly: true
- mountPath: /var/log/oauth-apiserver
name: varlogoauth
readOnly: true
- mountPath: /var/log/openshift-apiserver
name: varlogopenshift
readOnly: true
- mountPath: /fluentd/etc
name: conf-configmap
- mountPath: /fluentd/etc/splunk
name: secrets
readOnly: true
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: acr-secret
nodeSelector:
node-role.kubernetes.io/master: ''
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: splunk-logging
serviceAccountName: splunk-logging
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
volumes:
- hostPath:
path: /var/log
type: ""
name: varlog
- hostPath:
path: /var/log/kube-apiserver
type: ""
name: varlogkube
- hostPath:
path: /var/log/oauth-apiserver
type: ""
name: varlogoauth
- hostPath:
path: /var/log/openshift-apiserver
type: ""
name: varlogopenshift
- configMap:
defaultMode: 420
name: splunk-kubernetes-audit
name: conf-configmap
- name: secrets
secret:
defaultMode: 420
secretName: splunk-kubernetes-audit
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
</code></pre>
| <p>Instead of excluding all get, list and watch actions except for secrets, I opted to exclude the objects that cause the most events, like namespaces, pods and configmaps. This resulted in the extra filters below and gives a reduction in Splunk events of about 65%. An OpenShift cluster at rest generates about 12 GB of audit logging a day without filtering.</p>
<pre><code> # reduce the number of events by removing get, watch and list api calls
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /list/
</exclude>
<exclude>
key $.objectRef.resource
pattern /namespaces/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /list/
</exclude>
<exclude>
key $.objectRef.resource
pattern /pods/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /watch/
</exclude>
<exclude>
key $.objectRef.resource
pattern /namespaces/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /watch/
</exclude>
<exclude>
key $.objectRef.resource
pattern /pods/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /watch/
</exclude>
<exclude>
key $.objectRef.resource
pattern /configmaps/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /get/
</exclude>
<exclude>
key $.objectRef.resource
pattern /configmaps/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /get/
</exclude>
<exclude>
key $.objectRef.resource
pattern /namespaces/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /get/
</exclude>
<exclude>
key $.objectRef.resource
pattern /clusterrolebindings/
</exclude>
</and>
</filter>
</code></pre>
|
<p>Recently I was searching for ways to reduce my cloud bill and came across a company named <code>CAST.AI</code>.</p>
<p>So to run a savings report you need to install their agent to your cluster and they claim it is <code>read-only</code>.</p>
<p>How do I check if this is true?</p>
<p>This comes from the <a href="https://pastebin.com/pLbYAEGP" rel="nofollow noreferrer">yaml file they provide</a> (too long to paste whole manifest here)</p>
| <h2 id="short-answer">Short answer</h2>
<p>Based on the <code>CAST.AI</code> manifest <strong>it is indeed <code>read-only</code>, and it is safe to say it won't mess up anything in the cluster</strong></p>
<h2 id="detailed-answer">Detailed answer</h2>
<p>In short, the manifest will create: a namespace, a service account, a ClusterRole with read-only permissions, a ClusterRoleBinding (where the mapping between the service account and the ClusterRole happens), a secret, and a deployment with a pod which will collect the cluster's data.</p>
<p><code>ClusterRole</code> means that service account linked to this <code>ClusterRole</code> will have access with given verbs within <strong>all namespaces</strong> (which is fine for resource audit).</p>
<p>Below is the <code>ClusterRole</code> from the manifest (I added several comments at the beginning; the structure is the same):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: castai-agent
labels:
"app.kubernetes.io/name": castai-agent
rules:
# ---
# Required for cost savings estimation features.
# ---
- apiGroups: # api group to look in
- ""
resources: # resources where this ClusterRole will have access to
- pods
- nodes
- replicationcontrollers
- persistentvolumeclaims
- persistentvolumes
- services
verbs: # what this cluster role is allowed to do
- get
- list
- watch
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- "apps"
resources:
- deployments
- replicasets
- daemonsets
- statefulsets
verbs: # what this cluster role is allowed to do with resources above
- get
- list
- watch
- apiGroups:
- "storage.k8s.io"
resources:
- storageclasses
- csinodes
verbs: # what this cluster role is allowed to do
- get
- list
- watch
- apiGroups:
- "batch"
resources:
- jobs
verbs: # what this cluster role is allowed to do
- get
- list
- watch
</code></pre>
<p>All actions that the <code>ClusterRole</code> is allowed to perform are <code>get</code>, <code>list</code> and <code>watch</code>, which are harmless.</p>
<p>Here is a list of all available verbs:</p>
<ul>
<li>get</li>
<li>list</li>
<li>create</li>
<li>update</li>
<li>patch</li>
<li>watch</li>
<li>delete</li>
<li>deletecollection</li>
</ul>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#review-your-request-attributes" rel="nofollow noreferrer">list of all available attributes, including verbs</a></p>
<h2 id="resources-and-limits">Resources and limits</h2>
<p>Worst case, the <code>CAST.AI</code> pod will consume resources up to its limit (this part of the deployment); however, with today's clusters it shouldn't be an issue:</p>
<pre><code> resources:
requests:
cpu: 100m
memory: 64Mi
limits:
cpu: 1000m
memory: 256Mi
</code></pre>
<p><strong>Requests</strong> is the amount of resources that must be available on a node for the scheduler to place the pod there.</p>
<p><strong>Limits</strong>, as the name says, caps the maximum resource allocation for the pod: CPU usage above the limit is throttled, and a container that exceeds its memory limit is terminated (OOM-killed) and restarted.</p>
<p><strong>Useful links:</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes RBAC - Role Base Access Control</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">Kubernetes autorization overview</a></li>
<li><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">Resources and limits</a></li>
</ul>
|
<p>I understand that an NGINX Ingress controller allows <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/" rel="nofollow noreferrer">custom header creation</a> using a <code>ConfigMap</code>. Is there either:</p>
<ol>
<li>a way to use NGINX for GKE or</li>
<li>to directly specify custom headers in the networking.gke.io
namespace?</li>
</ol>
<p>I am specifically interested in setting <a href="https://datatracker.ietf.org/doc/html/rfc6797" rel="nofollow noreferrer">HTTPS Strict Transport Security</a>, <a href="https://www.w3.org/TR/upgrade-insecure-requests/#preference" rel="nofollow noreferrer">Upgrade Insecure Requests</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP" rel="nofollow noreferrer">Content Security Policy</a> headers. I find it a little awkward that the <code>redirectToHttp</code> feature does not enable these by default, so I am desperate for any ideas.</p>
| <p>Taken from one of my Nginx ingress config values:</p>
<p>HSTS - under controller.config:</p>
<pre><code> hsts: "True" # default is "False". Enables HTTP Strict Transport Security (HSTS): the HSTS header is added to the responses from backends. See https://www.nginx.com/blog/http-strict-transport-security-hsts-and-nginx/
hsts-max-age: "31536000" # default is 2592000 (1 month).
hsts-include-subdomains: "True" # default is "False".
</code></pre>
<p>Redirect to HTTPS - under annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
</code></pre>
<p>CORS - under annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:8888/"
nginx.ingress.kubernetes.io/cors-max-age: "3600"
nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization,apikey,x-apikey,Accept-Language,impersonated,source"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, PATCH, OPTIONS"
</code></pre>
<p>So, an example would look like this:</p>
<p>❯ kubectl get configmaps -n ingress nginx-ingress-0-24-controller -o yaml</p>
<pre><code>apiVersion: v1
data:
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none';
style-src 'self' 'unsafe-inline'; frame-src 'self'
Referrer-Policy: 'Referrer-Policy: strict-origin-when-cross-origin'
X-API-Token: x
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Using-Nginx-Controller: "true"
X-XSS-Protection: 1; mode=block
client_body_buffer_size: 128k
client_max_body_size: 24M
enable-vts-status: "true"
hsts: "True"
hsts-include-subdomains: "True"
hsts-max-age: "31536000"
http-snippet: |
more_clear_headers 'Server';
log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr",
"x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$request_id", "remote_user":
"$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":
$status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri",
"request_query": "$args", "request_length": $request_length, "duration": $request_time,
"method": "$request_method", "http_referrer": "$http_referer", "http_user_agent":
"$http_user_agent"}'
proxy-hide-headers: Server, server, Access-Control-Allow-Origin, X-Using-Nginx-Controller
proxy-set-headers: ingress/nginx-ingress-0-24-custom-headers
server-tokens: "False"
ssl-ciphers: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
ssl-protocols: TLSv1.2
use-http2: "true"
kind: ConfigMap
metadata:
creationTimestamp: "2020-08-20T08:46:22Z"
labels:
app: nginx-ingress
chart: nginx-ingress-1.8.2
component: controller
heritage: Tiller
release: nginx-ingress-0-24
name: nginx-ingress-0-24-controller
namespace: ingress
resourceVersion: "205918413"
selfLink: /api/v1/namespaces/ingress/configmaps/nginx-ingress-0-24-controller
uid: 9fc20850-e2c1-11ea-87b8-42010af00186
</code></pre>
<p>While the annotations go in the ingress yaml:</p>
<p>❯ kubectl get ingresses -n system nginx-ingress-ingress-config protect-private -o yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx-0-24
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
creationTimestamp: "2021-03-05T13:03:49Z"
(...)
</code></pre>
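<p>Note that <code>proxy-set-headers</code> above points at a separate ConfigMap containing headers sent to the backends; for response headers such as HSTS or Content-Security-Policy, the controller uses the analogous <code>add-headers</code> option (<code>add-headers: ingress/<configmap-name></code>), pointing at a ConfigMap like the sketch below (name and values are illustrative, adjust them to your policy):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-0-24-response-headers
  namespace: ingress
data:
  Strict-Transport-Security: "max-age=31536000; includeSubDomains"
  Content-Security-Policy: "upgrade-insecure-requests; default-src 'self'"
</code></pre>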
|
<p>I have a set of kubernetes config files that work in one environment. I'm looking to deploy into another environment where I need to add an imagePullSecrets entry to all of the <code>Deployment</code> configs.</p>
<p>I can do:</p>
<p>regcred-1.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: deployment-1
spec:
template:
spec:
imagePullSecrets:
- name: regcred
</code></pre>
<p>kustomization.yaml:</p>
<pre><code>bases:
- ../base
patchesStrategicMerge:
- regcred-1.yaml
</code></pre>
<p>and that will patch only <code>deployment-1</code>.</p>
<p>Is there a way to apply the patch to all deployments?</p>
| <p>Using <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/inlinePatch.md" rel="noreferrer">Inline Patch</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
resources:
- ../../base
patches:
- target:
kind: Deployment
patch: |-
- op: add
path: /spec/template/spec/imagePullSecrets
value: [{ name: image-pull-secret }]
</code></pre>
<p>Reference: <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md" rel="noreferrer">Patching multiple resources at once.</a></p>
|
<p>I am very new to Kubernetes and am working with an EKS cluster. I am trying to pull images, and I have added a cert to /etc/docker/certs.d// and am able to pull fine after logging in. However, when I create a deployment to deploy apps to my pods, it seems I have to manually ssh into my EKS nodes and copy over the cert; otherwise I am left with an x509 certificate error. Additionally, if I terminate a node and new nodes are created, those new nodes obviously don't have the cert anymore, and I have to copy the cert over again. Is there a way to configure a secret or configmap so that new nodes will automatically have this cert? I know you can add a mount to a configmap, but it seems like this only works for pods.</p>
<p>Also, what is the best way I can replace these certs for cases where the certs expire (i.e. pulling images from ECR)?</p>
| <p>You can use a <strong>secret</strong> to pull from the <strong>Docker</strong> registry and store the cert at the <strong>Kubernetes</strong> level, but yes, you are right, that only works at the <strong>POD</strong> level. There is no way to manage or inject it at the <strong>node</strong> level.</p>
<p>The only option you are left with is to create a <strong>custom AMI</strong> and use it for the nodes in your <strong>EKS</strong> node group, so by default every <strong>node</strong> will already have that <strong>cert</strong> when you scale <strong>up</strong> or <strong>down</strong>.</p>
<p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/</a></p>
|
<p>I'm new to Kubernetes.
I would like to know how to pass a default.yaml stored in a Secret into a Deployment.
I tried it separately, but it saves the default empty, and I want to override it.
Thank you.</p>
| <p>You can store the file in Secret or Configmap</p>
<p>Example :</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: nginx-data
data:
default.yaml: |-
server {
server_name _;
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_certificate /etc/cert/tls.crt;
ssl_certificate_key /etc/cert/tls.key;
location / {
proxy_pass http://127.0.0.1:80;
}
}
</code></pre>
<p>and inject it into the POD as required. Update <code>default.yaml</code> as needed.</p>
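<p>A sketch of how the Secret above could be mounted as a file into a Deployment's pod spec (container name and mount path are illustrative):</p>
<pre><code>spec:
  containers:
    - name: nginx
      image: nginx:alpine
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
          readOnly: true
  volumes:
    - name: nginx-conf
      secret:
        secretName: nginx-data
</code></pre>
<p>Each key in the Secret (here <code>default.yaml</code>) becomes a file under the mount path, so updating the Secret overrides the default file content.</p>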
<p>You can create and store the YAML inside the Secret and inject it into the deployment of <strong>MongoDB</strong> the same way; it will work.</p>
<p>Configmap example : <a href="https://www.cloudytuts.com/guides/kubernetes/how-to-deploy-mongodb-on-kubernetes/" rel="nofollow noreferrer">https://www.cloudytuts.com/guides/kubernetes/how-to-deploy-mongodb-on-kubernetes/</a></p>
<p>For more : <a href="https://www.jeffgeerling.com/blog/2019/mounting-kubernetes-secret-single-file-inside-pod" rel="nofollow noreferrer">https://www.jeffgeerling.com/blog/2019/mounting-kubernetes-secret-single-file-inside-pod</a></p>
|
<p>I have 3 Kubernetes deployments and a service for each of the deployments (namespace = firstspace).
Each deployment is labelled app1, app2 and app3 respectively.</p>
<p>As an example, if I run the following command. I will get the first pod as the result.</p>
<pre><code>kubectl get pods -l app=app1 --namespace firstspace
</code></pre>
<p>My goal is to restrict the Ingress access of the third pod (app=app3) using the following Network Policy allowing traffic only from the second pod (app=app2) and any pods from another namespace called "secondspace".</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: ingress-app3
namespace: firstspace
spec:
podSelector:
matchLabels:
app: app3
ingress:
- from:
- namespaceSelector:
matchLabels:
name: secondspace
- podSelector:
matchExpressions:
- {key: app, operator: In, values: [app2]}
policyTypes:
- Ingress
</code></pre>
<p>However, when I deploy the network policy to the "firstspace" namespace, I can still curl (and get a sample JSON response) the service of the third pod (app=app3) using the first pod (app=app1).</p>
<p>Following is a sample command. Here, 10.100.150.0 is the ClusterIP of the service created for the third pod.</p>
<pre><code>kubectl exec app1-849b94c6df-rzdls --namespace firstspace-- curl -sL 10.100.150.0:8080/testendpoint
</code></pre>
<p>Can someone help me understand what I'm doing wrong here?</p>
| <p>After some trial and error, I observed the following.
According to the Kubernetes Network Policies <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">documentation</a>, deployed Network Policies are only effective if a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">network plugin</a> is installed in the Kubernetes cluster.</p>
<p>Since my local <a href="https://minikube.sigs.k8s.io/docs/drivers/hyperv/" rel="nofollow noreferrer">minikube</a> cluster did not have a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">network plugin</a>, the network policy I mentioned in the question was not effective.</p>
<p>After installing the <a href="https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/" rel="nofollow noreferrer">Cilium Network Plugin</a> in my <a href="https://minikube.sigs.k8s.io/docs/drivers/hyperv/" rel="nofollow noreferrer">minikube</a> cluster, the network policy worked as expected.</p>
<p><strong>Notes:</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/" rel="nofollow noreferrer">Cilium Network Plugin</a> installation was not successful on <a href="https://minikube.sigs.k8s.io/docs/drivers/hyperv/" rel="nofollow noreferrer">minikube</a> when using <code>docker</code> as the driver, but it worked when I selected <code>hyperv</code> as the driver.</li>
<li>I had to create an Egress policy as well for the pod with the app=app2 label to allow egress traffic to the pod with the app=app3 label (see the example below).</li>
</ul>
<hr />
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: egress-app2
namespace: firstspace
spec:
podSelector:
matchLabels:
app: app2
egress:
- to:
- podSelector:
matchLabels:
app: app3
policyTypes:
- Egress
</code></pre>
|
<p>I have a k8s environment where I am running 3 masters and 7 worker nodes. Daily, my pods end up in the Evicted state due to disk pressure.</p>
<p>I am getting the below error on my worker node.</p>
<pre><code>Message: The node was low on resource: ephemeral-storage.
</code></pre>
<pre><code>Status: Failed
Reason: Evicted
Message: Pod The node had condition: [DiskPressure].
</code></pre>
<p>But my worker node has enough resources to schedule pods.</p>
| <p>Having analysed the comments, it looks like pods go into the Evicted state when they use more resources than are available, relative to a particular pod's limit. A solution in that case might be manually deleting the evicted pods, since they're not using resources at that point. To read more about Node-pressure Eviction, see <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/" rel="nofollow noreferrer">the official documentation</a>.</p>
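<p>A sketch of how that cleanup could be scripted; the pod listing below is simulated with a here-doc, and in a real cluster you would pipe <code>kubectl get pods</code> output instead and pass each resulting name to <code>kubectl delete pod</code>:</p>

```shell
# Filter the names of Evicted pods from (simulated) 'kubectl get pods' output.
# Real usage:  kubectl get pods | awk '$3 == "Evicted" {print $1}'
evicted=$(awk 'NR > 1 && $3 == "Evicted" {print $1}' <<'EOF'
NAME                   READY   STATUS    RESTARTS   AGE
app-7d4b9c-abcde       1/1     Running   0          2d
app-7d4b9c-fghij       0/1     Evicted   0          2d
app-7d4b9c-klmno       0/1     Evicted   0          1d
EOF
)
echo "$evicted"
```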
|
<p>I'm trying to find some way to log audit events for each user, i.e. the kubectl commands the user executed and the commands users ran after logging into any of the pods in the cluster. These logs can be pushed to Elasticsearch or any other similar logging application for auditing of the cluster.</p>
<p>Can someone please help me here? Is there any tool that does this, or any way I can achieve this requirement?</p>
| <p>Audit records begin their lifecycle inside the kube-apiserver component. Each request on each stage of its execution generates an audit event, which is then pre-processed according to a certain policy and written to a backend. The policy determines what's recorded and the backends persist the records. Now, this backend can be of two types:</p>
<ol>
<li>WebHook Backend (send audit events to a remote web API)</li>
<li>Log backend (send events to log file)</li>
</ol>
<p>You need to pass the policy file to your kube-apiserver, with rules defined for your resources. Every event is audited, but with a policy in place you will only get the events you specified as rules in the policy YAML.</p>
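<p>A minimal audit policy sketch (the rules are illustrative; the first matching rule wins):</p>
<pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # log who read or changed Secrets, with request metadata only
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # ignore other routine read-only traffic
  - level: None
    verbs: ["get", "list", "watch"]
  # record everything else with full request/response bodies
  - level: RequestResponse
</code></pre>
<p>The file is then referenced by the kube-apiserver flags <code>--audit-policy-file</code> (the policy) and <code>--audit-log-path</code> (the log backend).</p>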
<p>More info is available in the k8s docs:</p>
<p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/audit/</a></p>
|
<p>I'm running a bare metal Kubernetes cluster with 1 master node and 3 worker Nodes. I have a bunch of services deployed inside with Istio as an Ingress-gateway.</p>
<p>Everything works fine since I can access my services from outside using the ingress-gateway NodePort.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.106.9.2 <pending> 15021:32402/TCP,80:31106/TCP,443:31791/TCP 2d23h
istiod ClusterIP 10.107.220.130 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 2d23h
</code></pre>
<p>In our case the port <code>31106</code>.</p>
<p>The issue is, I don't want my customers to access my service on port 31106; that's not user-friendly. So is there a way to expose port 80 to the outside?</p>
<p>In other words, instead of typing <a href="http://example.com:31106/" rel="nofollow noreferrer">http://example.com:31106/</a>, I want them to be able to type <a href="http://example.com/" rel="nofollow noreferrer">http://example.com/</a></p>
<p>Any solution could help.</p>
| <p>Based on <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>If the <code>EXTERNAL-IP</code> value is set, your environment has an external load balancer that you can use for the ingress gateway. If the <code>EXTERNAL-IP</code> value is <code><none></code> (or perpetually <code><pending></code>), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">node port</a>.</p>
</blockquote>
<p>This is in line with what <a href="https://stackoverflow.com/users/10008173/david-maze">David Maze</a> wrote in the comment:</p>
<blockquote>
<p>A LoadBalancer-type service would create that load balancer, but only if Kubernetes knows how; maybe look up <code>metallb</code> for an implementation of that. The <code>NodePort</code> port number will be stable unless the service gets deleted and recreated, which in this case would mean wholesale uninstalling and reinstalling Istio.</p>
</blockquote>
<p>In your situation you need to access the gateway using the <code>NodePort</code>. Then you can configure istio. Everything is described step by step in <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control" rel="nofollow noreferrer">this doc</a>. You need to choose the instructions corresponding to <code>NodePort</code> and then set the ingress IP depends on the cluster provider. You can also find sample yaml files in the documentation.</p>
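<p>If you go the MetalLB route mentioned above, a minimal layer-2 address pool could look like the sketch below. The address range is hypothetical and must come from your own network, and this is the ConfigMap-based configuration used by MetalLB before v0.13:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
</code></pre>
<p>With such a pool in place, the <code>istio-ingressgateway</code> service gets an <code>EXTERNAL-IP</code> from that range and serves port 80 directly, so no NodePort appears in the URL.</p>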
|
<p>I have a cron job that continues to run though I have no deployments or jobs. I am running minikube:</p>
<pre><code>$ kubectl get deployments
No resources found in default namespace.
$ kubectl delete pods --all && kubectl delete jobs --all && get deployments
pod "hello-27125612-lmcb5" deleted
pod "hello-27125613-w5ln9" deleted
pod "hello-27125614-fz84r" deleted
pod "hello-27125615-htf4z" deleted
pod "hello-27125616-k5czn" deleted
pod "hello-27125617-v79hx" deleted
pod "hello-27125618-bxg52" deleted
pod "hello-27125619-d6wps" deleted
pod "hello-27125620-66b65" deleted
pod "hello-27125621-cj8m9" deleted
pod "hello-27125622-vx5kp" deleted
pod "hello-27125623-xj7nj" deleted
job.batch "hello-27125612" deleted
job.batch "hello-27125613" deleted
job.batch "hello-27125614" deleted
...
$ kb get jobs
No resources found in default namespace.
$ kb get deployments
No resources found in default namespace.
$ kb get pods
No resources found in default namespace.
</code></pre>
<p>Yet a few seconds later:</p>
<pre><code>$ kb get jobs
NAME COMPLETIONS DURATION AGE
hello-27125624 0/1 79s 79s
hello-27125625 0/1 19s 19s
</code></pre>
<p>Get the job:</p>
<pre><code>$ kubectl get job hello-27125624 -oyaml
apiVersion: batch/v1
kind: Job
metadata:
creationTimestamp: "2021-07-29T05:44:00Z"
labels:
controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
job-name: hello-27125624
name: hello-27125624
namespace: default
ownerReferences:
- apiVersion: batch/v1
blockOwnerDeletion: true
controller: true
kind: CronJob
name: hello
uid: 32be2372-d827-4971-a659-129823de18e2
resourceVersion: "551585"
uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
spec:
backoffLimit: 6
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
template:
metadata:
creationTimestamp: null
labels:
controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
job-name: hello-27125624
spec:
containers:
- command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
image: kahunacohen/hello-kube:latest
imagePullPolicy: IfNotPresent
name: hello
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
active: 1
startTime: "2021-07-29T05:44:00Z"
</code></pre>
<p>I tried this:</p>
<pre><code>$ kubectl get ReplicationController
No resources found in default namespace.
</code></pre>
<p>Here is the pod running the job:</p>
<pre><code>$ kubectl get pod hello-27125624-kc9zw -oyaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-07-29T05:44:00Z"
generateName: hello-27125624-
labels:
controller-uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
job-name: hello-27125624
name: hello-27125624-kc9zw
namespace: default
ownerReferences:
- apiVersion: batch/v1
blockOwnerDeletion: true
controller: true
kind: Job
name: hello-27125624
uid: 26beb7de-1c60-4854-a70f-54b6d066c22c
resourceVersion: "551868"
uid: f0c10049-b3f9-4352-9201-774dbd91d7c3
spec:
containers:
- command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
image: kahunacohen/hello-kube:latest
imagePullPolicy: IfNotPresent
name: hello
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-7cw4q
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: minikube
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: OnFailure
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-7cw4q
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-07-29T05:44:00Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2021-07-29T05:44:00Z"
message: 'containers with unready status: [hello]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2021-07-29T05:44:00Z"
message: 'containers with unready status: [hello]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2021-07-29T05:44:00Z"
status: "True"
type: PodScheduled
containerStatuses:
- image: kahunacohen/hello-kube:latest
imageID: ""
lastState: {}
name: hello
ready: false
restartCount: 0
started: false
state:
waiting:
message: Back-off pulling image "kahunacohen/hello-kube:latest"
reason: ImagePullBackOff
hostIP: 192.168.49.2
phase: Pending
podIP: 172.17.0.2
podIPs:
- ip: 172.17.0.2
qosClass: BestEffort
startTime: "2021-07-29T05:44:00Z"
</code></pre>
<p>How do I track down who is spawning these jobs and how do I stop it?</p>
| <p>These Jobs (and their pods) are created by the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob controller</a>: the <code>ownerReferences</code> block in the Job you posted points to a CronJob named <code>hello</code>.</p>
<p>Use <code>kubectl get cronjobs</code> to list them; deleting the CronJob stops new Jobs from being spawned.</p>
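<p>For example, to stop the Jobs, delete the parent CronJob — per the <code>ownerReferences</code> in the question it is named <code>hello</code> and lives in the <code>default</code> namespace:</p>
<pre><code>kubectl get cronjobs            # should list "hello" with its schedule
kubectl delete cronjob hello    # removes the CronJob and stops new Jobs
</code></pre>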
|
<p>I am completely new in Kubernetes.I was trying to play with minikube in my local machine. I have installed kubectl and minikube locally. When I run</p>
<pre><code>minikube start --vm-driver=virtualbox # and then
minikube status
</code></pre>
<p>my output is </p>
<pre><code>E0512 23:33:45.964815 23423 status.go:233] kubeconfig endpoint: extract IP: "minikube"
does not appear in /.kube/config
m01
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
</code></pre>
<p>When I ran <code>minikube update-context</code> the output is </p>
<pre><code>panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x148ecb0]
..... and more
</code></pre>
<p>Can anybody help? Thanks in advance </p>
| <p>Try these commands in this order:</p>
<ol>
<li>minikube stop</li>
<li>minikube delete</li>
<li>minikube start</li>
</ol>
<p>This worked for me.</p>
|
<p>As part of a rolling update, the version 1 pod is replaced with a version 2 pod.</p>
<p>We need to review the logs of shutdown process of service in the pod (version one).</p>
<hr />
<ol>
<li><p>Does rolling update delete the version one pod?</p>
</li>
<li><p>If yes, can we review the logs of deleted pod (version one)? To verify the shutdown process of service in version one pod...</p>
</li>
</ol>
| <blockquote>
<ol>
<li>Does rolling update delete the version one pod?</li>
</ol>
</blockquote>
<p>The short answer is: Yes.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment" rel="nofollow noreferrer">Rolling Update Deployment</a>:</p>
<blockquote>
<p>The Deployment updates Pods in a rolling update fashion when
<code>.spec.strategy.type==RollingUpdate</code>. You can specify <code>maxUnavailable</code>
and <code>maxSurge</code> to control the rolling update process.</p>
</blockquote>
<p>See the examples below:</p>
<pre><code>spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
</code></pre>
<p>In this example there would be one additional Pod (<code>maxSurge: 1</code>) above the desired number of 2, and the number of available Pods cannot go lower than that number (<code>maxUnavailable: 0</code>).</p>
<p>Choosing this config, the Kubernetes will spin up an additional Pod, then stop an “old” one. If there’s another Node available to deploy this Pod, the system will be able to handle the same workload during deployment. If not, the Pod will be deployed on an already used Node at the cost of resources from other Pods hosted on the same Node.</p>
<p>You can also try something like this:</p>
<pre><code>spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
</code></pre>
<p>With the example above there would be no additional Pods (<code>maxSurge: 0</code>) and only a single Pod at a time will be unavailable (<code>maxUnavailable: 1</code>).</p>
<p>In this case, Kubernetes will first stop a Pod before starting up a new one. The advantage of that is that the infrastructure doesn’t need to scale up but the maximum workload will be lower.</p>
<hr />
<blockquote>
<ol start="2">
<li>if yes, can we review the logs of deleted pod(version one)? To verify the shutdown process of service in version one pod...</li>
</ol>
</blockquote>
<p>See the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/" rel="nofollow noreferrer">Debug Running Pods</a> docs. You can find several useful ways of checking logs/events such as:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods" rel="nofollow noreferrer">Debugging Pods</a> by executing <code>kubectl describe pods ${POD_NAME}</code> and checking the reason behind it's failure.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs" rel="nofollow noreferrer">Examining pod logs</a>: with <code>kubectl logs ${POD_NAME} ${CONTAINER_NAME}</code> or <code>kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}</code></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#container-exec" rel="nofollow noreferrer">Debugging with container exec</a>: by running commands inside a specific container with <code>kubectl exec</code></p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container" rel="nofollow noreferrer">Debugging with an ephemeral debug container</a>: Ephemeral containers are useful for interactive troubleshooting when <code>kubectl exec</code> is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with <a href="https://github.com/GoogleContainerTools/distroless" rel="nofollow noreferrer">distroless images</a>.</p>
</li>
<li><p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#node-shell-session" rel="nofollow noreferrer">Debugging via a shell on the node</a>: If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.</p>
</li>
</ul>
<hr />
<p>However, <code>--previous</code> flag works only if the previous container instance still exists in a Pod. Check out <a href="https://stackoverflow.com/a/57009702/11560878">this answer</a> for further options.</p>
<p>Also, see this topic: <a href="https://stackoverflow.com/questions/40636021/how-to-list-kubernetes-recently-deleted-pods">How to list Kubernetes recently deleted pods?</a></p>
|
<ul>
<li><p>I have an app that works by deployment service and communicates with the world with Nginx-Ingress-controller.</p>
</li>
<li><p>This app is connected to the MongoDB database.</p>
</li>
<li><p>I was asked to create a CronJob that purges all MongoDB logs once daily at midnight.</p>
</li>
</ul>
<p>"Add a CronJob to purge all MongoDB Logs once daily at midnight - we want to
start the day with clean logs!"</p>
<p>I used Helm to create the Nginx-INgress-controller, and for MongoDB as well.</p>
<p>Does someone know which commands I should add to the CronJob?</p>
<p>Thanks in advance. :)</p>
<p>The code:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
spec:
schedule: "0 0 * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
<p>For MongoDB I used Helm (chart from Artifact Hub) with the commands:</p>
<ul>
<li>helm repo add bitnami <a href="https://charts.bitnami.com/bitnami" rel="nofollow noreferrer">https://charts.bitnami.com/bitnami</a></li>
<li>helm install my-release bitnami/mongodb --values test-mongodb.yaml</li>
</ul>
<ul>
<li>link for the repo- <a href="https://artifacthub.io/packages/helm/bitnami/mongodb" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/mongodb</a>.</li>
</ul>
<p>The values I inserted into test-mongodb.yaml:</p>
<pre><code>architecture: replicaset
replicaCount: 3
persistence:
storageClass: "standard"
auth:
rootPassword: password
password: password
database: echo
username: koren
</code></pre>
| <p>Note that <strong>Kubernetes</strong> captures and auto-rotates the containers' <strong>stdout/stderr</strong> logs itself. However, if your <strong>application</strong> or <strong>MongoDB</strong> writes its own log files to a <strong>PVC</strong> or the container <strong>filesystem</strong>, you need to rotate those yourself, e.g. <strong>daily</strong>.</p>
<p>Instead of configuring a special job for deleting the logs, you can simply configure <code>logrotate</code> for the <strong>MongoDB</strong> log file:</p>
<pre><code>/var/log/mongodb/*.log {
daily
missingok
rotate 5
compress
dateext
delaycompress
copytruncate
notifempty
}
</code></pre>
<p>Example : <a href="https://www.percona.com/blog/2018/09/27/automating-mongodb-log-rotation/" rel="nofollow noreferrer">https://www.percona.com/blog/2018/09/27/automating-mongodb-log-rotation/</a></p>
<p>Still, if you wish to clear the logs on the <strong>PVC</strong> or <strong>filesystem</strong> with a job, you can also create a <strong>sidecar</strong> container for it.</p>
<p><strong>Update</strong></p>
<p>I have not tested this, but you can write a simple script (<strong>Node.js or any other language/shell</strong>), run it as a CronJob inside Kubernetes, connect to the MongoDB <code>admin</code> database and rotate the log:</p>
<pre><code>db.adminCommand( { logRotate: 1 } )
</code></pre>
<p>Connecting to the DB and rotating log files is documented here:</p>
<p><a href="https://docs.mongodb.com/manual/tutorial/rotate-log-files/" rel="nofollow noreferrer">https://docs.mongodb.com/manual/tutorial/rotate-log-files/</a></p>
<p>It only takes a simple program that makes a connection and runs the rotate command.</p>
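<p>An untested sketch of such a CronJob, reusing the midnight schedule from the question. The headless Service host and the Secret name/key are assumptions based on Bitnami chart conventions for a release called <code>my-release</code> — verify them with <code>kubectl get svc,secret</code> before using:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongodb-logrotate
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: logrotate
            image: bitnami/mongodb:latest   # provides the mongosh client
            env:
            - name: MONGODB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-release-mongodb     # assumed Secret name
                  key: mongodb-root-password   # assumed key
            command:
            - /bin/sh
            - -c
            - >
              mongosh --host my-release-mongodb-headless
              -u root -p "$MONGODB_ROOT_PASSWORD"
              --authenticationDatabase admin
              --eval 'db.adminCommand({ logRotate: 1 })'
          restartPolicy: OnFailure
</code></pre>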
|
<p>I am running a k8s cluster with 8 workers and 3 master nodes, and my pods are being evicted repeatedly with ephemeral storage issues.
Below is the error I am getting on evicted pods:</p>
<pre><code>Message: The node was low on resource: ephemeral-storage. Container xpaas-logger was using 30108Ki, which exceeds its request of 0. Container wso2am-gateway-am was using 406468Ki, which exceeds its request of 0.
</code></pre>
<p>To overcome the above error, I have added ephemeral storage limits and request to my namespace.</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: ephemeral-storage-limit-range
spec:
limits:
- default:
ephemeral-storage: 2Gi
defaultRequest:
ephemeral-storage: 130Mi
type: Container
</code></pre>
<p>Even after adding the above limits and requests to my namespace, my pod is reaching its limits and then evicting.</p>
<pre><code>Message: Pod ephemeral local storage usage exceeds the total limit of containers 2Gi.
</code></pre>
<p>How can I monitor my ephemeral storage, and where is it stored on my instance?
How can I set docker logrotate for my ephemeral storage based on size? Any suggestions?</p>
| <p>"Ephemeral storage" here refers to space being used in the container filesystem that's not in a volume. Something inside your process is using a lot of local disk space. In the abstract this is relatively easy to debug: use <code>kubectl exec</code> to get a shell in the pod, and then use normal Unix commands like <code>du</code> to find where the space is going. Since it's space inside the pod, it's not directly accessible from the nodes, and you probably can't use tools like <code>logrotate</code> to try to manage it.</p>
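<p>For example, a quick way to find where the space is going inside a running pod (the pod name is a placeholder, and this assumes <code>sh</code>, <code>du</code>, and <code>sort</code> exist in the image):</p>
<pre><code>kubectl exec -it the-pod-name -- sh -c \
  'du -x -d 2 / 2>/dev/null | sort -n | tail -n 20'
</code></pre>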
<p>One specific cause of this I've run into in the past is processes configured to log to a file. In Kubernetes you should generally set your logging setup to log to stdout instead. This avoids this specific ephemeral-storage problem, but also avoids a number of practical issues around actually getting the log file out of the pod. <code>kubectl logs</code> will show you these logs and you can set up cluster-level tooling to export them to another system.</p>
|
<p>I've deployed a small K3S cluster with a master and two workers:</p>
<p>The VMs were made with <code>Multipass</code>:</p>
<pre><code>$ multipass ls
Name State IPv4 Image
master-node Running 10.200.68.230 Ubuntu 20.04 LTS
10.42.0.0
10.42.0.1
worker01 Running 10.200.68.67 Ubuntu 20.04 LTS
10.42.1.0
10.42.1.1
worker02 Running 10.200.68.227 Ubuntu 20.04 LTS
10.42.2.0
10.42.2.1
</code></pre>
<p>The cluster was made with <code>k3sup</code>:</p>
<pre><code>$ kubectl get node
NAME STATUS ROLES AGE VERSION
master-node Ready control-plane,etcd,master 13m v1.21.3+k3s1
worker01 Ready <none> 10m v1.21.3+k3s1
worker02 Ready <none> 9m46s v1.21.3+k3s1
</code></pre>
<p>Workers are all labelled with <code>ols.role=worker</code>.</p>
<p>I'd like to install a <code>NodeRed</code> service on the workers nodes. I've used the following commands:</p>
<pre><code>helm repo add k8s-at-home https://k8s-at-home.com/charts/
helm repo update
helm install node-red k8s-at-home/node-red --set nodeSelector."ols\.role"=worker
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=node-red,app.kubernetes.io/instance=node-red" -o jsonpath="{.items[0].metadata.name}")
while [[ $(kubectl get node $POD_NAME -o 'jsonpath={..status.conditions[?(@.type=="Running")].status}') != "True" ]]; do echo "waiting for pod" && sleep 1; done
kubectl port-forward $POD_NAME 8080:1880&
</code></pre>
<p>The service is supposed to be running on port 8080.</p>
<p>Pod's logs look ok:</p>
<pre><code>$ kubectl logs $POD_NAME
> node-red-docker@1.3.5 start /usr/src/node-red
> node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"
29 Jul 08:20:12 - [info]
Welcome to Node-RED
===================
29 Jul 08:20:12 - [info] Node-RED version: v1.3.5
29 Jul 08:20:12 - [info] Node.js version: v10.24.1
29 Jul 08:20:12 - [info] Linux 5.4.0-80-generic x64 LE
29 Jul 08:20:12 - [info] Loading palette nodes
29 Jul 08:20:12 - [info] Settings file : /data/settings.js
29 Jul 08:20:12 - [info] Context store : 'default' [module=memory]
29 Jul 08:20:12 - [info] User directory : /data
29 Jul 08:20:12 - [warn] Projects disabled : editorTheme.projects.enabled=false
29 Jul 08:20:12 - [info] Flows file : /data/flows.json
29 Jul 08:20:12 - [warn]
---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.
If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.
You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------
29 Jul 08:20:12 - [info] Server now running at http://127.0.0.1:1880/
29 Jul 08:20:12 - [info] Starting flows
29 Jul 08:20:12 - [info] Started flows
</code></pre>
<p>When I try to reach the webpage (<code>http://192.168.1.14:8080</code> or even <code>http://127.0.0.1:1880/</code>), the server responds an error: <code>ERR_CONNECTION_REFUSED</code></p>
<p>The services should be running:</p>
<pre><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 32m
node-red ClusterIP 10.43.18.33 <none> 1880/TCP 26m
</code></pre>
<p>Is there something else to do to make it work ?</p>
| <p>Since your Service is of type <code>ClusterIP</code>, you cannot access it from outside the Kubernetes cluster.</p>
<p>You have to expose your Service as a <strong>NodePort</strong> or <strong>LoadBalancer</strong> instead.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<p>However, for testing and debugging locally you can use this command:</p>
<pre><code>kubectl port-forward svc/node-red -n <replace-namespace-name> 1880:1880
</code></pre>
<p>Once the command is running, open this URL in the browser:</p>
<pre><code>http://localhost:1880
</code></pre>
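<p>For a quick test you can also switch the existing Service to <code>NodePort</code> with a patch. Note this change will be reverted by the next <code>helm upgrade</code>; to make it permanent, set the chart's service type value instead (likely <code>service.main.type</code> in k8s-at-home charts, but check the chart's values — that path is an assumption):</p>
<pre><code>kubectl patch svc node-red -p '{"spec": {"type": "NodePort"}}'
kubectl get svc node-red    # note the assigned port, e.g. 1880:3xxxx/TCP
# then browse http://<any-node-ip>:<assigned-node-port>
</code></pre>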
|
<p>I am trying to set a postgres parameter (shared_buffers) into my postgres database pod. I am trying to set an init container to set the db variable, but it is not working because the init container runs as the root user.</p>
<p>What is the best way to edit the db variable on the pods? I do not have the ability to make the change within the image, because the variable needs to be different for different instances. If it helps, the command I need to run is a "postgres -c" command.</p>
<pre><code>"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for
more information on how to properly start the server.
</code></pre>
| <p>In my case, the <a href="https://stackoverflow.com/a/60783722/7856777">@Rico answer</a> didn't help me out of the box because I don't use postgres with a persistent storage mount, which means there is no /var/lib/postgresql/data folder and pre-existed database (so both proposed options have failed in my case).</p>
<p>To successfully apply postgres settings, I used only args (without <code>command</code> section).</p>
<p>In that case, k8s will pass these args to the default entrypoint defined in the docker image (<a href="https://kubernetes.io/docs/tasks/inject-data-application/_print/#notes" rel="noreferrer">docs</a>), and as for postgres entrypoint, it is made so that any options passed to the docker command will be passed along to the postgres server daemon (look section <em>Database Configuration</em> at: <a href="https://hub.docker.com/_/postgres" rel="noreferrer">https://hub.docker.com/_/postgres</a>)</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: postgres
spec:
containers:
- image: postgres:9.6.8
name: postgres
args: ["-c", "shared_buffers=256MB", "-c", "max_connections=207"]
</code></pre>
<p>To check that the settings applied:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl exec -it postgres -- bash
root@postgres:/# su postgres
$ psql -c 'show max_connections;'
max_connections
-----------------
207
(1 row)
</code></pre>
|
<p>I have configured Prometheus on one of the kubernetes cluster nodes using <a href="https://devopscube.com/install-configure-prometheus-linux/" rel="nofollow noreferrer">this guide</a>. After that I added the following <code>prometheus.yml</code> file. I can list nodes and apiservers, but all the pods show as down with the error:</p>
<pre><code>Get "https://xx.xx.xx:443/metrics": dial tcp xx.xx.xx:443: connect: connection refused, and for some pods the status is unknown.
</code></pre>
<p>Can someone point me what am I doing wrong here?</p>
<pre><code>cat prometheus.yml
global:
scrape_interval: 1m
scrape_configs:
- job_name: 'prometheus'
scrape_interval: 5s
static_configs:
- targets: ['localhost:9090']
# metrics for default/kubernetes api's from the kubernetes master
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
bearer_token_file: /dfgdjk/token
api_server: https://masterapi.com:3343
tls_config:
insecure_skip_verify: true
tls_config:
insecure_skip_verify: true
bearer_token_file: /dfgdjk/token
scheme: https
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
# metrics for default/kubernetes api's from the kubernetes master
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
api_server: https://masterapi.com:3343
bearer_token_file: /dfgdjk/token
tls_config:
insecure_skip_verify: true
tls_config:
insecure_skip_verify: true
bearer_token_file: /dfgdjk/token
scheme: https
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;https
</code></pre>
| <p>It's impossible to get metrics into an external Prometheus server without having any Prometheus components inside the Kubernetes cluster. This happens because the cluster network is isolated from the host's network, so it's not possible to scrape metrics from pods directly from outside the cluster.</p>
<p>Please refer to <a href="https://github.com/prometheus/prometheus/issues/4633" rel="nofollow noreferrer">Monitoring kubernetes with prometheus from outside of k8s cluster GitHub issue</a></p>
<p>There are several options:</p>
<ul>
<li>install prometheus inside the cluster using <code>prometheus operator</code> or manually - <a href="https://sysdig.com/blog/kubernetes-monitoring-prometheus/#install" rel="nofollow noreferrer">example</a></li>
<li>use proxy solutions, for example this one from the same thread on GitHub - <a href="https://github.com/americanexpress/k8s-prometheus-proxy" rel="nofollow noreferrer">k8s-prometheus-proxy</a></li>
<li>on top of the prometheus installed within the cluster, it's possible to have an external prometheus in <code>federation</code>, so all metrics are also stored outside of the cluster. Please refer to <a href="https://prometheus.io/docs/prometheus/latest/federation/" rel="nofollow noreferrer">prometheus federation</a>.</li>
</ul>
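<p>As a sketch, a federation job on the external Prometheus could look like this (shape taken from the federation docs; the target address and the <code>match[]</code> selector are placeholders you need to adapt):</p>
<pre><code>scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="kubernetes-pods"}'
    static_configs:
      - targets: ['in-cluster-prometheus.example.com:9090']
</code></pre>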
<p>An important part is that <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> should be installed in the Kubernetes cluster as well. <a href="https://devopscube.com/setup-kube-state-metrics/" rel="nofollow noreferrer">How to set it up</a>.</p>
<p><strong>Edit:</strong> you can also refer to <a href="https://stackoverflow.com/questions/41845307">another SO question/answer</a> which confirms that this only works with additional steps; the OP there resolved it with another proxy solution.</p>
|
<p>I'm creating my Kubernetes single node cluster using kubeadm. After applying the ingress ngnix controller, I'm getting the following error when I try to apply the ingress file.</p>
<p>I'm getting the following error while applying the ingress:</p>
<p><code>Error from server (InternalError): error when creating "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.101.247.233:443: connect: connection refused</code></p>
<p>I'm currently using the ingress nginx controller. How can I resolve this issue?</p>
| <p>This usually happens because there is a problem with the ValidatingWebhookConfiguration. When you deploy the nginx controller it creates multiple resources; one of them is this webhook, whose function is to validate every Ingress that you create later on. Sometimes there is a communication problem and you get that error even when the structure of your Ingress is correct.</p>
<p>My recommendations:</p>
<ul>
<li>Check if the structure of your ingress is correct</li>
<li>Back up the current ValidatingWebhookConfiguration and delete it, then check if the problem is solved (this might not be the best solution, but it will surely work).
You can achieve this by doing:</li>
</ul>
<pre><code> kubectl get ValidatingWebhookConfiguration -o yaml > ./validating-backup.yaml
kubectl delete ValidatingWebhookConfiguration <name of the resource>
</code></pre>
<p><strong>Be careful, since the ValidatingWebhookConfiguration is cluster-wide.</strong></p>
<p>Finally, if you want to keep the ValidatingWebhookConfiguration, the best option might be to redeploy the whole Ingress Controller using Helm, so you ensure it gets deployed correctly. Here is why that validation is used: <a href="https://kubernetes.github.io/ingress-nginx/how-it-works/#avoiding-outage-from-wrong-configuration" rel="nofollow noreferrer">link</a></p>
<p>Sources:</p>
<ul>
<li><p>Personal experience</p>
</li>
<li><p><a href="https://github.com/kubernetes/ingress-nginx/issues/5401" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/5401</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/</a></p>
</li>
</ul>
|
<p>I have a testing scenario to check if the API requests are being handled by another pod if one goes down. I know this is the default behaviour, but I want to simulate the following scenario.</p>
<p>Pod replicas - 2 (pod A and B)</p>
<p>During my API requests, I want to kill/stop only pod A.
During downtime of A, requests should be handled by B.</p>
<p>I am aware that we can restart the deployment and also scale replicas to 0 and again to 2, but this won't work for me.</p>
<p>Is there any way to kill/stop/crash only pod A?
Any help will be appreciated.</p>
| <p>If you want to simulate what happens if one of the pods just gets lost, you can scale down the deployment</p>
<pre class="lang-sh prettyprint-override"><code>kubectl scale deployment the-deployment-name --replicas=1
</code></pre>
<p>and Kubernetes will terminate all but one of the pods; you should almost immediately see all of the traffic going to the surviving pod.</p>
<p>But if instead you want to simulate what happens if one of the pods crashes and restarts, you can delete the pod</p>
<pre class="lang-sh prettyprint-override"><code># kubectl scale deployment the-deployment-name --replicas=2
kubectl get pods
kubectl delete pod the-deployment-name-12345-f7h9j
</code></pre>
<p>Once the pod starts getting deleted, the Kubernetes Service should route all of the traffic to the surviving pod(s) (those with <code>Running</code> status). However, the pod is managed by a ReplicaSet that wants there to be 2 replicas, so if one of the pods is deleted, the ReplicaSet will immediately create a new one. This is similar to what would happen if the pod crashes and restarts (in this scenario you'd get the same pod and the same node, if you delete the pod it might come back in a different place).</p>
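<p>If you want the pod object itself to survive (same name, same node) and only the container to crash, you can also kill the main process from inside the container; Kubernetes then restarts the container per the pod's restart policy and the <code>RESTARTS</code> count increments. This assumes the image contains a <code>kill</code> binary and that PID 1 handles <code>SIGTERM</code>:</p>
<pre><code>kubectl exec the-deployment-name-12345-f7h9j -- kill 1
kubectl get pod the-deployment-name-12345-f7h9j   # watch the RESTARTS column
</code></pre>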
|
<p>Memory and CPU resources of a container can be tracked using Prometheus. But can we track the I/O of a container? Are there any metrics available?</p>
| <p>If you are using Docker containers you can check the data with the <code>docker stats</code> command (as <a href="https://stackoverflow.com/users/6309601/p">P...</a> mentioned in the comment). <a href="https://docs.docker.com/engine/reference/commandline/stats/#extended-description" rel="nofollow noreferrer">Here</a> you can find more information about this command.</p>
<blockquote>
<p>If you want to check pods cpu/memory usage without installing any third party tool then you can get memory and cpu usage of pod from cgroup.</p>
<ol>
<li>Go to pod's exec mode <code>kubectl exec pod_name -- /bin/bash</code></li>
<li>Go to <code>cd /sys/fs/cgroup/cpu</code> for cpu usage run <code>cat cpuacct.usage</code></li>
<li>Go to <code>cd /sys/fs/cgroup/memory</code> for memory usage run <code>cat memory.usage_in_bytes</code></li>
</ol>
</blockquote>
<p>For more look at this <a href="https://stackoverflow.com/questions/54531646/checking-kubernetes-pod-cpu-and-memory">similar question</a>.
<a href="https://stackoverflow.com/questions/51641310/kubernetes-top-vs-linux-top/51656039#51656039">Here</a> you can find another interesting question. You should know that</p>
<blockquote>
<p>Containers inside pods partially share <code>/proc</code> with the host system, including the paths that expose memory and CPU information.</p>
</blockquote>
<p>See also this article about <a href="https://fabiokung.com/2014/03/13/memory-inside-linux-containers/" rel="nofollow noreferrer">Memory inside Linux containers</a>.</p>
|
<p>I have an Airflow application running in kubernetes along with Vault. I am already using Vault to manage certain Airflow Connections and it's working fine (which proves the connectivity with Vault is working fine).</p>
<p>I am now trying to configure Airflow's configs <code>core.sql_alchemy_conn</code>, <code>celery.result_backend</code> and <code>celery.broker_url</code> through Vault as mentioned <a href="https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html?highlight=_secret#envvar-AIRFLOW__-SECTION-__-KEY-_SECRET" rel="nofollow noreferrer">here</a>.</p>
<p>In my <code>values.yaml</code> I have</p>
<pre class="lang-yaml prettyprint-override"><code>secrets_backend_kwargs:
auth_type: 'kubernetes'
kubernetes_role: 'role-id'
connections_path: 'applications/secrets/airflow/connections/'
config_path: 'applications/secrets/airflow/config/'
config:
core:
sql_alchemy_conn_secret: "sql_alchemy_conn"
celery:
broker_url_secret: 'broker_url_conn'
result_backend_secret: 'result_backend_conn'
secrets:
backend: 'airflow.providers.hashicorp.secrets.vault.VaultBackend'
backend_kwargs: '{{ .Values.secrets_backend_kwargs | toJson }}'
</code></pre>
<p>In my vault, I have the proper encoded URI (<code>postgresql%2Bpsycopg2%3A%2F%2Fblahblahblah...</code>) stored at the right location:</p>
<p><a href="https://i.stack.imgur.com/V0R2G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V0R2G.png" alt="enter image description here" /></a></p>
<p>Now when I deploy the application, <code>run-airflow-migration</code> job fails with AirflowConfigException while validating the configs.</p>
<pre><code># AirflowConfigException
error: cannot use sqlite with the CeleryExecutor
</code></pre>
<p>Which means it is not picking up the Postgres database connection settings from Vault and is instead using the sqlite settings from the Airflow configs.</p>
<p>I also tried this by providing these configs in the variables for the containers.</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: VAULT_ADDR
value: 'https://path.to.vault/'
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN_SECRET
value: 'sql_alchemy_conn'
- name: AIRFLOW__CELERY__BROKER_URL_SECRET
value: 'broker_url_conn'
- name: AIRFLOW__CELERY__RESULT_BACKEND_SECRET
value: 'result_backend_conn'
</code></pre>
<p>I can see these environment variables in the container but the deployment fails because of the error in migration job.</p>
<p>I verified the Vault path and connectivity by changing the <code>config_path</code> in the secret backend kwargs; if I change that, I still reach Vault, but since the key/connection id doesn't exist at the incorrect path, I get an error from Vault. This means that when I set the <code>_secret</code> config, the application reaches Vault to fetch the value but does not update or use it.</p>
<p>I can't figure out what I am missing.</p>
| <p>Configs are stored like Variables in the secret backend, while I was storing them like a Connection, with <code>conn_uri</code> as the key.</p>
<p>This is not clear in the official documentation; I had to go through the code to find the fix.</p>
<p>So after putting my configs in correct way (where mount point is <code>secret</code> and <code>config_path</code> is <code>applications/secrets/airflow/config/</code>), it worked.</p>
<pre><code>vault kv put secret/applications/secrets/airflow/config/sql_alchemy_conn value=postgresql://user:pass@host:5432/db
</code></pre>
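<p>The percent-encoded URI shown in the question can be produced with Python's standard library. A small sketch (the host and credentials below are placeholders):</p>

```python
from urllib.parse import quote

conn_uri = "postgresql+psycopg2://airflow:secret@postgres-host:5432/airflow"
# safe="" forces every reserved character (+ : / @) to be percent-encoded
encoded = quote(conn_uri, safe="")
print(encoded)  # postgresql%2Bpsycopg2%3A%2F%2Fairflow%3Asecret%40postgres-host%3A5432%2Fairflow
```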
|
<p>Currently GCEPersistentDisk does not support ReadWriteMany natively. There are other options, like Filestore, which creates an NFS volume, but there are 2 problems here:</p>
<ol>
<li><p>a minimum size of 1 TB is needed at creation</p>
</li>
<li><p>secondly, it is a manual effort of creating PVs and then PVCs</p>
</li>
</ol>
<p>What's the best solution for dynamically allocating a claim request in GKE?</p>
| <p>If you have a requirement for dynamic provisioning, you can also try using <a href="https://www.gluster.org/" rel="nofollow noreferrer">GlusterFS</a> or <a href="https://min.io/" rel="nofollow noreferrer">MinIO</a>.</p>
<p>Here is one nice example using GlusterFS dynamic provisioning: <a href="https://docs.openshift.com/container-platform/3.9/install_config/storage_examples/gluster_dynamic_example.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.9/install_config/storage_examples/gluster_dynamic_example.html</a></p>
<p>You have to create the <strong>StorageClass</strong>:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://10.42.0.0:8080"
restauthenabled: "false"
</code></pre>
<p>and create the <strong>PVC</strong> referencing that specific <strong>StorageClass</strong>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gluster1
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 30Gi
storageClassName: glusterfs
</code></pre>
<p>Also, <strong>you can create Filestore file shares between 1 and 63.9 TiB</strong>, and Filestore supports <strong>dynamic provisioning</strong>: <a href="https://cloud.google.com/community/tutorials/gke-filestore-dynamic-provisioning" rel="nofollow noreferrer">https://cloud.google.com/community/tutorials/gke-filestore-dynamic-provisioning</a></p>
|
<p>We want to load <strong>example.xyz.com</strong> in <strong>site.abc.com</strong>. The best way is to redirect/rewrite all the requests from <strong>site.abc.com</strong> to <strong>example.xyz.com</strong>. However, we don't want the browser URL to be changed. <a href="https://serverfault.com/questions/927820/nginx-rewrite-url-without-changing-browser-address">From this similar SO problem</a> we understand that we need an Nginx location config as below</p>
<pre><code>server {
    server_name site.abc.com;
listen 80;
....
....
....
....
location / {
proxy_pass http://example.xyz.com;
rewrite /(.*)$ /$1 break;
}
}
</code></pre>
<p>However, I'm not sure how to create a similar rule in <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">Kubernetes ingress-nginx</a> as it adds proxy_pass for each rule, which prevents us from adding proxy_pass config in <code>nginx.ingress.kubernetes.io/configuration-snippet:</code> annotation.</p>
<p>Also providing <code>nginx.ingress.kubernetes.io/rewrite-target: http://example.xyz.com/$1</code> annotation in ingress as below, redirects to <strong>example.xyz.com</strong> instead of loading <strong>example.xyz.com</strong> in <strong>site.abc.com</strong>.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: http://example.xyz.com/$1
name: url-rewrite
namespace: default
spec:
rules:
- host: site.abc.com
http:
paths:
- backend:
service:
name: service
port:
number: 80
path: /(.*)
pathType: ImplementationSpecific
</code></pre>
<p>How can we load <strong>example.xyz.com</strong> in <strong>site.abc.com</strong> without any change in browser URL using ingress-nginx in this case?</p>
| <p>With <a href="https://stackoverflow.com/a/60354450/6663095">this solution</a> as a reference, pointed out by <a href="https://stackoverflow.com/users/11560878/wytrzyma%C5%82y-wiktor">@WytrzymałyWiktor</a>, I was able to make the changes and it worked.
Here is the updated ingress file.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
location ~ "^/(.*)" {
proxy_pass http://example.xyz.com;
rewrite /(.*)$ /$1 break;
}
name: url-rewrite
namespace: default
spec:
rules:
- host: site.abc.com
</code></pre>
<p>One remaining problem is making the SSL redirect work. In some cases the target (<strong><a href="http://example.xyz.com" rel="nofollow noreferrer">http://example.xyz.com</a></strong>) will return <code>302 /some/other/path</code>; in such cases <a href="http://site.abc.com" rel="nofollow noreferrer">http://site.abc.com</a> gets redirected to <a href="http://site.abc.com/some/other/path" rel="nofollow noreferrer">http://site.abc.com/some/other/path</a>. Not sure how to make it redirect to <a href="https://site.abc.com/some/other/path" rel="nofollow noreferrer">https://site.abc.com/some/other/path</a> instead.
Setting <code>nginx.ingress.kubernetes.io/ssl-redirect: "true"</code> and <code>nginx.ingress.kubernetes.io/force-ssl-redirect: "true"</code> doesn't seem to work.</p>
<p>Adding this as an answer for documentation, as it will be helpful for people with a similar problem. It is not a duplicate, as the referenced solution addresses adding <code>proxy_pass</code>, whereas this addresses a URL rewrite without changing the browser URL.</p>
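<p>For the 302 problem described above, one thing worth trying (a sketch, not verified in this setup) is nginx's <code>proxy_redirect</code> directive, which rewrites the <code>Location</code> header of upstream 3xx responses before they reach the browser. It would slot into the same server-snippet:</p>

```nginx
location ~ "^/(.*)" {
    proxy_pass http://example.xyz.com;
    rewrite /(.*)$ /$1 break;
    # Map upstream redirects back onto the site's own HTTPS host:
    # absolute Location headers pointing at the upstream...
    proxy_redirect http://example.xyz.com/ https://site.abc.com/;
    # ...and relative ones like "302 /some/other/path"
    proxy_redirect / https://site.abc.com/;
}
```

Whether the second mapping is needed depends on whether the upstream emits relative or absolute <code>Location</code> values.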
|
<p>I have an AzureKeyVaultSecret object YAML. I don't want to use the output tag to sync my secret from Azure Key Vault to Kubernetes.
How can I automatically get my pod/Deployment/StatefulSet to restart when the secret in AKV (Azure Key Vault) is changed/updated?</p>
| <p>This is going to be a multi-step scenario. Key Vault can be configured to publish an event through Event Grid when a secret changes, and that event can be routed to an Event Hub:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/key-vault/general/event-grid-overview" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/key-vault/general/event-grid-overview</a></p>
<p>You can configure a Function App or a Logic App to listen to the event hub and respond accordingly.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azure-event-hubs" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azure-event-hubs</a></p>
<p><a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-hubs" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-hubs</a></p>
<p>The responding app can then perform the actions that you need to have happen - restart the pod/deployment/statefulset using powershell/CLI/ARM or whatever the correct method is.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/start-stop-cluster" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/start-stop-cluster</a></p>
|
<p>I have two pods; after the first pod has restarted, the other should automatically restart after it. The pods are created by different deployments.</p>
| <p>You can write a script, in Python or any language, that watches for events on the <strong>Pod</strong> or <strong>Deployment</strong>; as soon as the first one gets restarted, your script <strong>rolls out</strong> the second deployment, so that deployment's Pods get <strong>restarted</strong>.</p>
<p>You can run the script inside a Pod, or as a Deployment that continuously monitors <strong>Deployment-1</strong> and restarts <strong>Deployment-2</strong> whenever a change occurs in <strong>Deployment-1</strong>.</p>
<p>Example python client</p>
<pre><code>import logging
import time

from kubernetes import client, config

logger = logging.getLogger('k8s_events')
logger.setLevel(logging.DEBUG)

# If running inside a pod
config.load_incluster_config()
# If running locally
# config.load_kube_config()

v1 = client.CoreV1Api()
apps_v1 = client.AppsV1Api()  # ExtensionsV1beta1Api is deprecated

restart_counts = {}  # remember the last seen restart count per pod

while True:
    # Poll the pods and inspect their container restart counts
    pod_list = v1.list_namespaced_pod("default")
    time.sleep(10)
</code></pre>
<p>Further, you can write the logic for checking the <strong>restart</strong> count.</p>
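<p>The restart-count check itself can be sketched as plain logic over the pod status structure (the dictionary below only mimics the shape of the API's container statuses; the names and numbers are made up for illustration):</p>

```python
def total_restarts(pod_status):
    """Sum restart counts across all containers in a pod status dict."""
    return sum(cs["restartCount"] for cs in pod_status.get("containerStatuses", []))

# Made-up status for a two-container pod
status = {"containerStatuses": [{"name": "app", "restartCount": 2},
                                {"name": "sidecar", "restartCount": 0}]}

previous = 1
current = total_restarts(status)
if current > previous:  # a restart happened since the last poll
    print("deployment-1 pod restarted; roll out deployment-2")
```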
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/</a></p>
<p>List of official libraries : <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries</a></p>
|
<p>I have a weird issue with envFrom:</p>
<pre><code> - name: template-api
envFrom:
- secretRef:
name: common-secrets
- secretRef:
name: template-api
</code></pre>
<p>in common-secrets I have variables like this:</p>
<pre><code>MAILHOST=smtp.gmail.com
MAILPORT=587
</code></pre>
<p>And template-api is like:</p>
<pre><code>MAIL_HOST=$MAILHOST
MAIL_PORT=$MAILPORT
</code></pre>
<p>It is set up like that because the pods use different variable names for the same info.
But when the container is running, the variables contain the literal <code>$VAR</code> text instead of the variable's value.
Maybe I'm using the wrong solution for this. Did somebody face the same issue?</p>
| <p>Kubernetes won't expand it that way. If you handle it with code or a script, the lookup works inside the code, e.g. <code>process.env.MAILHOST</code>.</p>
<p>Whatever you have put in the secret gets injected into the <strong>OS</strong> <strong>environment</strong>; if the same names are already set, they get <strong>overwritten</strong>.</p>
<p>Kubernetes injects the secret based on the <strong>YAML</strong> configuration, either into the <strong>file system</strong> or into the <strong>OS</strong> environment.</p>
<p>Kubernetes simply injects the values into the <strong>Pod</strong> exactly as set in the secret; it won't check <strong>whether</strong> a value references another environment variable, and it performs no substitution.</p>
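<p>If the application itself has to resolve the literal <code>$MAILHOST</code> text that Kubernetes injected, the lookup can be sketched like this (plain Python; the values are the ones from the question, set here to simulate what the container sees):</p>

```python
import os

# Simulate the injected environment: both secrets arrive verbatim
os.environ["MAILHOST"] = "smtp.gmail.com"
os.environ["MAIL_HOST"] = "$MAILHOST"  # the literal text, not the value

# os.path.expandvars resolves $VAR references against the environment
resolved = os.path.expandvars(os.environ["MAIL_HOST"])
print(resolved)  # smtp.gmail.com
```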
|
<p>I am testing istio 1.10.3 to add headers with minikube but I am not able to do so.</p>
<p><strong>Istio</strong> is installed in the <code>istio-system</code> namespace.
The namespace where the deployment is deployed is labeled with <code>istio-injection=enabled</code>.</p>
<p>In the <code>config_dump</code> I can see the LUA code only when the context is set to <code>ANY</code>. When I set it to <code>SIDECAR_OUTBOUND</code> the code is not listed:</p>
<pre><code>"name": "envoy.lua",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua",
"inline_code": "function envoy_on_request(request_handle)\n request_handle:headers():add(\"request-body-size\", request_handle:body():length())\nend\n\nfunction envoy_on_response(response_handle)\n response_handle:headers():add(\"response-body-size\", response_handle:body():length())\nend\n"
}
</code></pre>
<p>Can someone give me some tips?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: headers-envoy-filter
namespace: nginx-echo-headers
spec:
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_OUTBOUND
listener:
filterChain:
filter:
name: envoy.filters.network.http_connection_manager
subFilter:
name: envoy.filters.http.router
patch:
operation: INSERT_BEFORE
value:
name: envoy.lua
typed_config:
'@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
inline_code: |
function envoy_on_request(request_handle)
request_handle:headers():add("request-body-size", request_handle:body():length())
end
function envoy_on_response(response_handle)
response_handle:headers():add("response-body-size", response_handle:body():length())
end
workloadSelector:
labels:
app: nginx-echo-headers
version: v1
</code></pre>
<p>Below is my deployment and Istio configs:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-echo-headers-v1
namespace: nginx-echo-headers
labels:
version: v1
spec:
selector:
matchLabels:
app: nginx-echo-headers
version: v1
replicas: 2
template:
metadata:
labels:
app: nginx-echo-headers
version: v1
spec:
containers:
- name: nginx-echo-headers
image: brndnmtthws/nginx-echo-headers:latest
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginx-echo-headers-svc
namespace: nginx-echo-headers
labels:
version: v1
service: nginx-echo-headers-svc
spec:
type: ClusterIP
ports:
- name: http
port: 80
targetPort: 8080
selector:
app: nginx-echo-headers
version: v1
---
# ISTIO GATEWAY
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: nginx-echo-headers-gateway
namespace: istio-system
spec:
selector:
app: istio-ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "api.decchi.com.ar"
# ISTIO VIRTUAL SERVICE
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginx-echo-headers-virtual-service
namespace: nginx-echo-headers
spec:
hosts:
- 'api.decchi.com.ar'
gateways:
- istio-system/nginx-echo-headers-gateway
http:
- route:
- destination:
# k8s service name
host: nginx-echo-headers-svc
port:
# Services port
number: 80
# workload selector
subset: v1
## ISTIO DESTINATION RULE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: nginx-echo-headers-dest
namespace: nginx-echo-headers
spec:
host: nginx-echo-headers-svc
subsets:
- name: "v1"
labels:
app: nginx-echo-headers
version: v1
</code></pre>
<p>It is only working when I configure the context in <code>GATEWAY</code>. The <code>envoyFilter</code> is running in the <code>istio-system</code> namespace and the <code>workloadSelector</code> is configured like this:</p>
<pre><code>workloadSelector:
labels:
istio: ingressgateway
</code></pre>
<p>But my idea is to configure it in <code>SIDECAR_OUTBOUND</code>.</p>
| <blockquote>
<p>it is only working when I configure the context in <strong>GATEWAY</strong>, the envoyFilter is running in the <strong>istio-system</strong> namespace</p>
</blockquote>
<p>That's correct! You should apply your <code>EnvoyFilter</code> in the config root namespace, <code>istio-system</code> in your case.</p>
<p>And the most important part, just omit <code>context</code> field, when matching your <code>configPatches</code>, so that this applies to both sidecars and gateways. You can see the examples of usage in <a href="https://istio.io/latest/docs/reference/config/networking/envoy-filter/" rel="nofollow noreferrer">this Istio Doc</a>.</p>
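<p>Putting both points together, a sketch of the filter from the question, moved to the root namespace and with the <code>context</code> match omitted (abbreviated; the Lua body is unchanged):</p>

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: headers-envoy-filter
  namespace: istio-system   # config root namespace
spec:
  workloadSelector:
    labels:
      app: nginx-echo-headers
      version: v1
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      # no "context:" here, so the patch applies to sidecars and gateways alike
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inline_code: |
            -- same envoy_on_request / envoy_on_response functions as above
```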
|
<p>I'm trying to set up an ingress controller in Kubernetes that will give me strict alternation between two (or more) pods running in the same service.</p>
<p>My testing setup is a single Kubernetes node, with a deployment of two nginx pods.
The deployment is then exposed with a NodePort service.</p>
<p>I've then deployed an ingress contoller (I've tried both <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Kubernetes Nginx Ingress Controller</a> and <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Nginx Kubernetes Ingress Controller</a>, separately) and created an ingress rule for the NodePort service.</p>
<p>I edited index.html on each of the nginx pods, so that one shows "SERVER A" and the other "SERVER B", and ran a script that then <code>curl</code>s the NodePort service 100 times. It <code>grep</code>s "SERVER x" each time, appends it to an output file, and then tallies the number of each at the end.</p>
<p>As expected, curling the NodePort service itself (which uses kube-proxy), I got completely random results-- anything from 50:50 to 80:20 splits between the pods.</p>
<p>Curling the ingress controller, I consistently get something between 50:50 and 49:51 splits, which is great-- the default round-robin distribution is working well.</p>
<p><strong>However</strong>, looking at the results, I can see that I've curled the same server up to 4 times in a row, but I need to enforce a strict alternation A-B-A-B. I've spent quite a researching this and trying out different options, but I can't find a setting that will do this. Does anyone have any advice, please?</p>
<p>I'd prefer to stick with one of the ingress controllers I've tried, but I'm open to trying a different one, if it will do what I need.</p>
| <p>Nginx's default behavior is <strong>round-robin</strong> only. You can perform most of these tests on the Nginx ingress with different config <strong>tweaks</strong> if required.</p>
<p>There are also other options, such as the <strong>Istio</strong> <strong>service mesh</strong>.</p>
<p>You can load balance the traffic as required by changing only the config:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: bookinfo-ratings
spec:
host: ratings.prod.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: LEAST_CONN
subsets:
- name: testversion
labels:
version: v3
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
</code></pre>
<p>Read more at : <a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/destination-rule/</a></p>
<p>& <a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/destination-rule/#LoadBalancerSettings</a></p>
<p>However, I would suggest going with a <code>service mesh</code> only when there is a large cluster; if you are implementing this for 2-3 services, better to use the <strong>Nginx</strong> <strong>ingress</strong>; <strong>haproxy-ingress</strong> is also a good option.</p>
|
<p>I've upgraded from Flux V1 to V2. It all went fairly smoothly, but I can't seem to get the <code>ImageUpdateAutomation</code> to work. Flux knows I have images to update, but it doesn't change the container image in the <code>deployment.yaml</code> manifest and commit the changes to GitHub. I have no errors in my logs, so I'm at a bit of a loss as to what to do next.</p>
<p>I have an file structure that looks something like this:</p>
<pre><code>├── README.md
├── staging
│ ├── api
│ │ ├── deployment.yaml
│ │ ├── automation.yaml
│ │ └── service.yaml
│ ├── app
│ │ ├── deployment.yaml
│ │ ├── automation.yaml
│ │ └── service.yaml
│ ├── flux-system
│ │ ├── gotk-components.yaml
│ │ ├── gotk-sync.yaml
│ │ └── kustomization.yaml
│ ├── image_update_automation.yaml
</code></pre>
<p>My <code>staging/api/automation.yaml</code> is pretty straightforward:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
name: api
namespace: flux-system
spec:
image: xxx/api
interval: 1m0s
secretRef:
name: dockerhub
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
name: api
namespace: flux-system
spec:
imageRepositoryRef:
name: api
policy:
semver:
range: ">=1.0.0"
</code></pre>
<p>My <code>staging/image_update_automation.yaml</code> looks something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
name: flux-system
namespace: flux-system
spec:
git:
checkout:
ref:
branch: master
commit:
author:
email: fluxcdbot@users.noreply.github.com
name: fluxcdbot
messageTemplate: '{{range .Updated.Images}}{{println .}}{{end}}'
push:
branch: master
interval: 1m0s
sourceRef:
kind: GitRepository
name: flux-system
update:
path: ./staging
strategy: Setters
</code></pre>
<p>Everything seems to be ok here:</p>
<pre class="lang-sh prettyprint-override"><code>❯ flux get image repository
NAME READY MESSAGE LAST SCAN SUSPENDED
api True successful scan, found 23 tags 2021-07-28T17:11:02-06:00 False
app True successful scan, found 18 tags 2021-07-28T17:11:02-06:00 False
❯ flux get image policy
NAME READY MESSAGE LATEST IMAGE
api True Latest image tag for 'xxx/api' resolved to: 1.0.1 xxx/api:1.0.1
app True Latest image tag for 'xxx/app' resolved to: 3.2.1 xxx/app:3.2.1
</code></pre>
<p>As you can see from the policy output, the <code>LATEST IMAGE</code> for api is 1.0.1; however, when I view the current versions of my app and api, they have not been updated.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get deployment api -n xxx -o json | jq '.spec.template.spec.containers[0].image'
"xxx/api:0.1.5"
</code></pre>
<p>Any advice on this would be much appreciated.</p>
| <p>My issue was that I didn't add the comment after the image declaration in my deployment YAML. <a href="https://fluxcd.io/docs/guides/image-update/#configure-image-updates" rel="nofollow noreferrer">More details</a>. Honestly, I'm surprised this is a comment and not an <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/" rel="nofollow noreferrer">annotation</a>.</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
containers:
- image: docker.io/xxx/api:0.1.5 # {"$imagepolicy": "flux-system:api"}
</code></pre>
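<p>A quick way to sanity-check that a manifest line actually carries the marker is a small regex (a hypothetical helper, matching the comment format shown above):</p>

```python
import re

# image line followed by a {"$imagepolicy": "namespace:policy"} comment
MARKER = re.compile(r'image:\s*\S+\s*#\s*\{"\$imagepolicy":\s*"[\w-]+:[\w-]+"\}')

line = 'image: docker.io/xxx/api:0.1.5 # {"$imagepolicy": "flux-system:api"}'
print(bool(MARKER.search(line)))                                # True
print(bool(MARKER.search('image: docker.io/xxx/api:0.1.5')))    # False: no marker
```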
|
<p><em>*Cross-posted from k3d GitHub Discussion: <a href="https://github.com/rancher/k3d/discussions/690" rel="nofollow noreferrer">https://github.com/rancher/k3d/discussions/690</a></em></p>
<p>I am attempting to expose two services over two ports. As an alternative, I'd also love to know how to expose them over the same port using different routes. I've followed a few articles and tried a lot of configurations. Let me know where I'm going wrong with the networking of k3d + k3s / Kubernetes + Traefik (+ Klipper?)...</p>
<p>I posted an example:
<a href="https://github.com/ericis/k3d-networking" rel="nofollow noreferrer">https://github.com/ericis/k3d-networking</a></p>
<h3 id="the-goal-xa62">The goal:</h3>
<ul>
<li>Reach "app-1" on host over port 8080</li>
<li>Reach "app-2" on host over port 8091</li>
</ul>
<h3 id="steps-wa5i">Steps</h3>
<p><em>*See: <a href="https://github.com/ericis/k3d-networking" rel="nofollow noreferrer">files in repo</a></em></p>
<ol>
<li><p>Configure <code>k3d</code> cluster and expose app ports to load balancer</p>
<pre><code>ports:
# map localhost to loadbalancer
- port: 8080:80
nodeFilters:
- loadbalancer
# map localhost to loadbalancer
- port: 8091:80
nodeFilters:
- loadbalancer
</code></pre>
</li>
<li><p>Deploy apps with "deployment.yaml" in Kubernetes and expose container ports</p>
<pre><code>ports:
- containerPort: 80
</code></pre>
</li>
<li><p>Expose services within kubernetes. Here, I've tried two methods.</p>
<ul>
<li><p>Using CLI</p>
<pre><code>$ kubectl create service clusterip app-1 --tcp=8080:80
$ kubectl create service clusterip app-2 --tcp=8091:80
</code></pre>
</li>
<li><p>Using "service.yaml"</p>
<pre><code>spec:
ports:
- protocol: TCP
# expose internally
port: 8080
# map to app
targetPort: 80
selector:
run: app-1
</code></pre>
</li>
</ul>
</li>
<li><p>Expose the services outside of kubernetes using "ingress.yaml"</p>
<pre><code>backend:
service:
name: app-1
port:
# expose from kubernetes
number: 8080
</code></pre>
</li>
</ol>
| <p>You either have to use an ingress, or you have to open ports on each individual node (k3d runs on Docker, so you have to expose the Docker ports).</p>
<p><em>Without opening a port <em>during the creation of the k3d cluster</em>, a nodeport service will not expose your app</em></p>
<p><code>k3d cluster create mycluster -p 8080:30080@agent[0]</code></p>
<p>For example, this would open an "outside" port (on your localhost) 8080 and map it to 30080 on the node - then you can use a NodePort service to actually connect the traffic from that port to your app:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: some-service
spec:
ports:
- protocol: TCP
port: 80
targetPort: some-port
nodePort: 30080
selector:
app: pgadmin
type: NodePort
</code></pre>
<p>You can also open ports on the server node like:
<code>k3d cluster create mycluster -p 8080:30080@server[0]</code></p>
<p>Your apps can get scheduled onto any node. If you force a pod onto a specific node (say you open a certain port on agent[0] and set up your .yaml files to use that port), for some reason Rancher's local-path storage class just breaks and will not create a persistent volume for your claim. You kind of have to get lucky and have your pod scheduled where you need it. (If you find a way to schedule pods on specific nodes without the storage provisioner breaking, let me know.)</p>
<p>You can also map a whole range of ports, like:
<code>k3d cluster create mycluster --servers 1 --agents 1 -p "30000-30100:30000-30100@server[0]"</code>,
but be careful with the number of ports you open; if you open too many, k3d will crash.</p>
<p><em>Using a load balancer</em> is similar; you just have to open one port and map it to the load balancer.</p>
<p><code>k3d cluster create my-cluster --port 8080:80@loadbalancer</code></p>
<p>You then <em>have</em> to use an ingress (or the traffic won't reach your app):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: hello
port:
number: 80
</code></pre>
<p>I also think that ingress will only route HTTP &amp; HTTPS traffic. HTTPS should be done on port <code>443</code>; supposedly you can map both port <code>80</code> and port <code>443</code>, but I haven't been able to get that to work (I think certificates need to be set up as well).</p>
|
<p>I have a Kubernetes cluster with 1 master and 2 worker nodes.
I am running an application on the cluster, but pods running on the worker nodes are not able to connect to the internet.
I have also enabled an egress network policy for all pods.
This is my network policy:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
</code></pre>
| <p>Try this example</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-all-traffic
namespace: default
spec:
podSelector: {}
ingress:
- {}
</code></pre>
<p><a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/02a-allow-all-traffic-to-an-application.md" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/02a-allow-all-traffic-to-an-application.md</a></p>
|
<p>Need help on setting up the ingress in K8S. Thank you in advance.</p>
<p>I have configured, NGINX Controller and Cert-Manager on the AKS cluster using Helm.</p>
<p>Here is the svc, pods list.
<a href="https://i.stack.imgur.com/wb6AN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wb6AN.png" alt="enter image description here" /></a></p>
<p>Next - deployed two pods, and respected services here are those names and can be found in the above-attached image.</p>
<ul>
<li>pod/rpe-delivers-orders-dcs-api*</li>
<li>service/rpe-delivers-orders-dcs-api</li>
<li>pod/rpe-delivers-orders-proxy-api*</li>
<li>service/rpe-delivers-orders-proxy-api</li>
</ul>
<p>Next - when I try to set up the ingress like below for the above two services.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rpe-delivers-orders-dcs-ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: dcs-cluster-issuer
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- raag.westus2.cloudapp.azure.com
secretName: dcs-tls-secret
rules:
- host: raag.westus2.cloudapp.azure.com
http:
paths:
- path: /orders-dcs(/|$)(.*)
pathType: Prefix
backend:
service:
name: rpe-delivers-orders-dcs-api
port:
number: 80
- path: /orders-proxy(/|$)(.*)
pathType: Prefix
backend:
service:
name: rpe-delivers-orders-proxy-api
port:
number: 80
- path: /(.*)
pathType: Prefix
backend:
service:
name: rpe-delivers-orders-dcs-api
port:
number: 80
</code></pre>
<ul>
<li>Working as expected - raag.westus2.cloudapp.azure.com</li>
<li>Working as expected - raag.westus2.cloudapp.azure.com/rpe-delivers-orders-dcs-api</li>
<li><strong>Not working</strong> - raag.westus2.cloudapp.azure.com/rpe-delivers-orders-proxy-api</li>
</ul>
<p>My goal is for the URLs below to route to two different web pages:</p>
<ul>
<li>raag.westus2.cloudapp.azure.com/rpe-delivers-orders-dcs-api</li>
<li>raag.westus2.cloudapp.azure.com/rpe-delivers-orders-proxy-api</li>
</ul>
<p>Please let me know if I'm missing any information or didn't do the setup right.</p>
<p>Thank you.</p>
| <p>It looks like you are missing a block for <code>rpe-delivers-orders-proxy-api</code> in your ingress config:</p>
<pre><code>- path: /(.*)
pathType: Prefix
backend:
service:
name: rpe-delivers-orders-proxy-api
port:
number: 80
</code></pre>
<p>If you also have a path-rewrite requirement, you can split the configuration into two different Ingress resources.</p>
<p><strong>No 1.</strong> rewrite ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-first
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: dcs-cluster-issuer
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
tls:
- hosts:
- raag.westus2.cloudapp.azure.com
secretName: dcs-tls-secret
rules:
- host: raag.westus2.cloudapp.azure.com
http:
paths:
- path: /orders-dcs(/|$)(.*)
pathType: Prefix
backend:
service:
name: rpe-delivers-orders-dcs-api
port:
number: 80
- path: /orders-proxy(/|$)(.*)
pathType: Prefix
backend:
service:
name: rpe-delivers-orders-proxy-api
port:
number: 80
</code></pre>
<p><strong>No 2.</strong> a plain ingress, without rewrites, covering the rest of your requirement</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-second
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- raag.westus2.cloudapp.azure.com
secretName: dcs-tls-secret
rules:
- host: raag.westus2.cloudapp.azure.com
http:
paths:
- path: /
backend:
service:
name: rpe-delivers-orders-proxy-api
port:
number: 80
- path: /
backend:
service:
name: rpe-delivers-orders-dcs-api
port:
number: 80
</code></pre>
<p><strong>Update</strong> :</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-second
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- raag.westus2.cloudapp.azure.com
secretName: dcs-tls-secret
rules:
- host: raag.westus2.cloudapp.azure.com
http:
paths:
- path: /orders-proxy
pathType: Prefix
backend:
service:
name: rpe-delivers-orders-proxy-api
port:
number: 80
</code></pre>
|
<p>I understand the basic management and operation of containers on bare metal running the Docker engine and Kubernetes orchestration. I'm wondering how management and orchestration of containers on a virtual machine (VM) works. For anyone familiar with running containers on VMs: is it difficult to manage and orchestrate compared to containers on bare metal?</p>
<ol>
<li>Regarding the resources of a container on a VM: as I understand it, the VM instance itself is mapped to a specific flavor (e.g. 2 vCPU and 8G memory). Does that mean a container on the VM will be limited by the defined VM flavor?</li>
<li>How will K8s manage a container on a VM? Does it see the VM as a VM or as a POD?</li>
</ol>
<p>Thanks for sharing your comments and input. Please advise and enlighten me.</p>
| <p>There is no difference if you use a <strong>VM</strong> as a <strong>worker node</strong> of the <strong>Kubernetes</strong> cluster and manage the Pods (containers) on it. Kubernetes considers and manages the VM as a Node.</p>
<p>If you want to run standalone containers on top of a <strong>VM</strong> using plain <strong>docker</strong> without any <strong>orchestration</strong> tool, the following will be hard to manage:</p>
<ul>
<li>Deployment options</li>
<li>Scaling containers</li>
<li>Resource management</li>
<li>Load balancing the traffic across containers</li>
<li>Handling the routing</li>
<li>Monitor the health of containers and hosts</li>
</ul>
<p>If you still want to run containers on top of a <strong>VM</strong> only, there are a few managed services from <strong>AWS & GCP</strong>:</p>
<ul>
<li><strong>Cloud Run</strong></li>
<li><strong>ECS</strong></li>
</ul>
<p>Both of the above are managed container orchestration services; using them you can run containers on top of <strong>VM</strong>s.</p>
<p>If you want to run the containers yourself, you can also do it using plain <strong>docker</strong> or <strong>docker-compose</strong>, but you will very quickly face issues routing traffic across multiple containers.</p>
<blockquote>
<p>How K8s will manage container on VM, does it see the VM as a VM or as
a POD?</p>
</blockquote>
<p>It sees the VM as a Node: it first runs the necessary services on top of the VM and then manages it.</p>
|
<p>I need to create additional cluster in minikube. I searched for a while I haven't got any resource on this.How to create a cluster in minikube?</p>
| <p>Create first cluster named <code>cluster-1</code>:</p>
<pre><code>minikube start -p cluster-1
</code></pre>
<p>Create second cluster named <code>cluster-2</code>:</p>
<pre><code>minikube start -p cluster-2
</code></pre>
<p>Switch between the clusters:</p>
<pre><code># switch to cluster-1
minikube profile cluster-1
# switch to cluster-2
minikube profile cluster-2
</code></pre>
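<p>To keep track of the clusters and clean them up later, the profile commands below may help (a sketch; assumes a standard minikube install, and <code>cluster-2</code> is just the example name from above):</p>
<pre><code># list all profiles (clusters) and their status
minikube profile list

# stop or delete a specific cluster when finished
minikube stop -p cluster-2
minikube delete -p cluster-2
</code></pre>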
|
<p>First of all, what I want to build is right below.
<a href="https://i.stack.imgur.com/b5yKz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b5yKz.png" alt="enter image description here" /></a></p>
<p>As the diagram above shows, I want the Ingress to distribute traffic to a service in another namespace, <code>me</code>, in the same cluster (the Ingress is in the <code>main</code> namespace). But since an Ingress cannot point at a DNS name directly, I created an ExternalName Service that points to the <code>me-service</code> DNS name <code>me-service.me.svc.cluster.local</code>, and the Ingress points to that.</p>
<p>Yaml of it is</p>
<p>main.k8s.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: main
---
apiVersion: v1
kind: Service
metadata:
name: me-service
namespace: main
spec:
externalName: me-service.me.svc.cluster.local
type: ExternalName
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: gce
name: main-router
namespace: main
spec:
rules:
- host: some-domain.me
http:
paths:
- backend:
service:
name: me-service
port:
number: 80
path: /
pathType: ImplementationSpecific
</code></pre>
<p>me.k8s.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
labels:
stag: production
name: me
---
apiVersion: v1
kind: Service # <-- this is the service I want to point
metadata:
labels:
app: me
stag: production
name: me-service
namespace: me
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: me
stag: production
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: me
stag: production
name: me-deployment
namespace: me
spec:
replicas: 2
selector:
matchLabels:
app: me
stag: production
template:
metadata:
labels:
app: me
stag: production
spec:
containers:
- image: gcr.io/me:latest
name: me
ports:
- containerPort: 80
resources:
limits:
cpu: 300m
memory: 512M
requests:
cpu: 250m
memory: 512M
</code></pre>
<p>I checked that the DNS address resolves, but the Ingress object isn't created, with the following error message:</p>
<pre><code>me-service:80 (<error: endpoints "me-service" not found>)
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Translate 6m21s (x233 over 22h) loadbalancer-controller Translation failed: invalid ingress spec: could not find port "80" in service "main/me-service"
</code></pre>
<p>How can I make ingress work? If you need more information, please let me know. :pray:</p>
<p>GKE Engine: <strong>1.20.6-gke.1000</strong></p>
<p>HTTP Load Balancing: <strong>Enabled</strong></p>
<p>Network policy: <strong>Disabled</strong></p>
<p>Dataplane V2: <strong>Enabled</strong></p>
| <p><em>I'm posting it as an answer for better visibility. As I already mentioned in my comments:</em></p>
<p>As far as I know you cannot use <strong>GKE ingress</strong> with <code>ExternalName</code> Service type. The two supported types are <code>LoadBalancer</code> and <code>NodePort</code>. If nothing changed recently, you shouldn't be able to create an ingress resource even with a simple <code>ClusterIP</code>, only two above mentioned svc types so I don't believe that <code>ExternalName</code> would work. Well, you can actually use <code>ClusterIP</code> but only if you use <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/container-native-load-balancing" rel="nofollow noreferrer">container native load balancing</a> which requires your <strong>GKE</strong> cluster to be VPC-native.</p>
<p>You can still use <strong>GKE</strong>, but you don't have to use <strong>GCE ingress</strong> as the ingress controller at the same time. That said, I would first check whether it works with the mentioned container-native load balancing.</p>
<p>You can always deploy a <strong>different ingress controller</strong> on your <strong>GKE</strong> cluster, e.g. <strong>nginx-ingress</strong>. It can use <code>ClusterIP</code> services out of the box, but I'm not sure if it can handle <code>ExternalName</code>, so you would have to try this out.</p>
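<p>If you decide to try <strong>nginx-ingress</strong>, it can be installed with helm, for example (a sketch; the release and namespace names are just examples):</p>
<pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
</code></pre>
<p>Then reference it from your ingress with the <code>kubernetes.io/ingress.class: nginx</code> annotation instead of <code>gce</code>.</p>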
<p>OP confirmed that with <strong>nginx-ingress</strong> it was possible to distribute traffic to services located in different namespaces:</p>
<blockquote>
<p>@mario Thank you for your comment. I successfully distribute traffics
to other namespace svc using NGINX-ingress. – HyeonJunOh Jul 23 at
9:23</p>
</blockquote>
|
<p>We have an existing ALB Ingress controller deployment in EKS cluster having image as v1.1.9:</p>
<p>docker.io/amazon/aws-alb-ingress-controller:v1.1.9</p>
<p>Now we are in process of migrating from AWS ALB Ingress Controller (v1) to the new AWS Load Balancer Controller (v2).</p>
<p>Following this document : <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/upgrade/migrate_v1_v2/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/upgrade/migrate_v1_v2/</a></p>
<p>It mentions that <strong>first need to manually uninstall existing AWSALBIngressController</strong>, but missing details like how to uninstall it. Tried to find AWS EKS documentation on this but everywhere it is mentioned without commands or other details for actual manual uninstallation.</p>
<p>Any help will be appreciated.</p>
| <p>First, I would remove all the Ingress objects that use the ALB Ingress controller (this will cause downtime). To do so, run <code>kubectl get ingress -A</code>, go through each one of them, and see if it has the annotation -</p>
<pre><code>annotations:
kubernetes.io/ingress.class: alb
</code></pre>
<p>Then use <code>kubectl delete ingress <ingress-name></code> to delete the ingress objects.</p>
<p>To uninstall the controller itself, you need to figure out how it was installed.</p>
<ol>
<li>Check if helm was used - <code>helm ls -A</code>. If you find a release for ALB ingress controller, then simply delete the release using <code>helm delete <release-name></code> command.</li>
<li>If helm was not used, then go through the v1.1 installation docs - <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v1.1/guide/controller/setup/#installation" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v1.1/guide/controller/setup/#installation</a> and undo the steps. For example -</li>
</ol>
<pre><code>kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.9/docs/examples/rbac-role.yaml
kubectl delete -f alb-ingress-controller.yaml
</code></pre>
<p>Make sure to <code>kubectl get</code> before deleting the resources blindly and tweak the delete commands according to your setup, thanks.</p>
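<p>If you are unsure where the old controller lives, a quick way to locate its resources before deleting them (a sketch; the v1 controller is commonly, but not always, deployed in <code>kube-system</code>):</p>
<pre><code># find the controller deployment across all namespaces
kubectl get deployment -A | grep -i alb

# find the RBAC objects created for it
kubectl get clusterrole,clusterrolebinding | grep -i alb
</code></pre>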
|
<p>We are evaluating Dapr for our microservice framework. We will be using Kubernetes. One of the advertised selling points for Dapr is service invocation and service discovery. Doesn't K8s already offer service discovery out of the box?</p>
| <p>Short answer: <strong>Yes</strong> (<em>Kubernetes</em> offers Service Discovery)</p>
<p>While there may be several patterns (and tools being the implementation of these patterns) for Service Discovery, <em>Kubernetes</em> offers service discovery <em>at its core</em> through the <strong>Service</strong> objects avoiding any needs of using a particular technology or tool to achieve a basic Container-Managed runtime environment.</p>
<p>You can read <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">more on <em>Kubernetes Service</em>s in the official documentation</a>.</p>
<p>It is worth noting that <em>dapr</em> is a platform agnostic portable runtime that does not depend on Kubernetes and its core Service Discovery feature.</p>
<p>It offers more features than simply discovering your services (it is often compared to service-mesh tools, as they can appear to do the same thing):</p>
<ul>
<li>It offers transparent and secured service-to-service calls</li>
<li>It allows Publish-Subscribe communication style</li>
<li>It offers a way to register triggers and resource bindings (allowing for function-as-a-service development style)</li>
<li>It offers observability out-of-the-box</li>
<li>...</li>
</ul>
|
<p>I am trying to provision a <strong>private AKS cluster</strong> using terraform. I want to connect my private AKS cluster to <strong>an existing VNET</strong> that I have created using the Azure portal.</p>
<p>The Virtual network option is available in the Azure portal. Please find the below image.</p>
<p><a href="https://i.stack.imgur.com/9sueQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9sueQ.png" alt="enter image description here" /></a></p>
<p>However, the terraform documentation on <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster" rel="nofollow noreferrer">azurerm_kubernetes_cluster</a> has very limited information on how to achieve that.</p>
<p>Please find my <code>main.tf</code> below</p>
<pre><code>resource "azurerm_kubernetes_cluster" "kubernetes_cluster" {
name = var.cluster_name
location = var.location
resource_group_name = var.resource_group_name
private_cluster_enabled = true
default_node_pool {
name = "default"
node_count = var.node_count
vm_size = var.vm_size
max_pods = var.max_pods_count
}
kube_dashboard {
enabled = true
}
network_profile {
network_plugin = "azure"
}
}
</code></pre>
<p><strong>Please note that the VNET and the cluster that is to be created share the same location and resource group.</strong></p>
<p>Any help on how to provision a private AKS cluster to an existing VNET using Terraform would be much appreciated.</p>
| <p>I used existing code from GitHub with some changes: since the VNET already exists, I used a <code>data</code> block instead of a <code>resource</code> block to read it, and instead of the default subnet I created one subnet for AKS and another for the firewall.</p>
<pre><code>terraform {
required_version = ">= 0.14"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = ">=2.50.0"
}
}
}
provider "azurerm" {
features {}
}
#local vars
locals {
environment = "test"
resource_group = "AKS-test"
resource_group_location = "East US"
name_prefix = "private-aks"
aks_node_prefix = ["10.3.1.0/24"]
firewall_prefix = ["10.3.2.0/24"]
}
#Existing vnet with address space "10.3.0.0/16"
data "azurerm_virtual_network" "base" {
name = "existing-vnet"
resource_group_name = "AKS-test"
}
#subnets
resource "azurerm_subnet" "aks" {
name = "snet-${local.name_prefix}-${local.environment}"
resource_group_name = local.resource_group
address_prefixes = local.aks_node_prefix
virtual_network_name = data.azurerm_virtual_network.base.name
}
resource "azurerm_subnet" "firewall" {
name = "AzureFirewallSubnet"
resource_group_name = local.resource_group
virtual_network_name = data.azurerm_virtual_network.base.name
address_prefixes = local.firewall_prefix
}
#user assigned identity
resource "azurerm_user_assigned_identity" "base" {
resource_group_name = local.resource_group
location = local.resource_group_location
name = "mi-${local.name_prefix}-${local.environment}"
}
#role assignment
resource "azurerm_role_assignment" "base" {
scope = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AKS-test"
role_definition_name = "Network Contributor"
principal_id = azurerm_user_assigned_identity.base.principal_id
}
#route table
resource "azurerm_route_table" "base" {
name = "rt-${local.name_prefix}-${local.environment}"
location = data.azurerm_virtual_network.base.location
resource_group_name = local.resource_group
}
#route
resource "azurerm_route" "base" {
name = "dg-${local.environment}"
resource_group_name = local.resource_group
route_table_name = azurerm_route_table.base.name
address_prefix = "0.0.0.0/0"
next_hop_type = "VirtualAppliance"
next_hop_in_ip_address = azurerm_firewall.base.ip_configuration.0.private_ip_address
}
#route table association
resource "azurerm_subnet_route_table_association" "base" {
subnet_id = azurerm_subnet.aks.id
route_table_id = azurerm_route_table.base.id
}
#firewall
resource "azurerm_public_ip" "base" {
name = "pip-firewall"
location = data.azurerm_virtual_network.base.location
resource_group_name = local.resource_group
allocation_method = "Static"
sku = "Standard"
}
resource "azurerm_firewall" "base" {
name = "fw-${local.name_prefix}-${local.environment}"
location = data.azurerm_virtual_network.base.location
resource_group_name = local.resource_group
ip_configuration {
name = "ip-${local.name_prefix}-${local.environment}"
subnet_id = azurerm_subnet.firewall.id
public_ip_address_id = azurerm_public_ip.base.id
}
}
#kubernetes_cluster
resource "azurerm_kubernetes_cluster" "base" {
name = "${local.name_prefix}-${local.environment}"
location = local.resource_group_location
resource_group_name = local.resource_group
dns_prefix = "dns-${local.name_prefix}-${local.environment}"
private_cluster_enabled = true
network_profile {
network_plugin = "azure"
outbound_type = "userDefinedRouting"
}
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_D2_v2"
vnet_subnet_id = azurerm_subnet.aks.id
}
identity {
type = "UserAssigned"
user_assigned_identity_id = azurerm_user_assigned_identity.base.id
}
depends_on = [
azurerm_route.base,
azurerm_role_assignment.base
]
}
</code></pre>
<p><strong>Reference:</strong> <a href="https://github.com/kuhlman-labs/terraform_azurerm_aks_private_cluster" rel="nofollow noreferrer">Github</a></p>
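<p>The configuration above is applied with the usual Terraform workflow:</p>
<pre><code>terraform init      # download the azurerm provider
terraform plan      # preview the resources to be created
terraform apply     # create the private AKS cluster
</code></pre>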
<p><strong>Before Test:</strong></p>
<p><a href="https://i.stack.imgur.com/6RboM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6RboM.png" alt="enter image description here" /></a></p>
<p><strong>Doing a terraform Plan on the above code:</strong></p>
<p><a href="https://i.stack.imgur.com/FoTBv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FoTBv.png" alt="enter image description here" /></a></p>
<p><strong>After applying the code:</strong></p>
<p><a href="https://i.stack.imgur.com/U0DGX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U0DGX.png" alt="enter image description here" /></a></p>
<p><strong>After the deployment :</strong></p>
<p><a href="https://i.stack.imgur.com/oNYXq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oNYXq.png" alt="enter image description here" /></a></p>
|
<p>I have a k8 setup that looks like this</p>
<p><code>ingress -> headless service (k8 service with clusterIp: none) -> statefulsets ( 2pods)</code></p>
<p>Fqdn looks like this:</p>
<pre><code>nslookup my-service
Server: 100.4.0.10
Address: 100.4.0.10#53
Name: my-service.my-namespace.svc.cluster.local
Address: 100.2.2.8
Name: my-service.my-namespace.svc.cluster.local
Address: 100.1.4.2
</code></pre>
<p>I am trying to reach one of the pods directly via the service using the following fqdn, but I am not able to do so.</p>
<pre><code>curl -I my-pod-0.my-service.my-namespace.svc.cluster.local:8222
curl: (6) Could not resolve host: my-pod-0.my-service.my-namespace.svc.cluster.local
</code></pre>
<p>If I try to hit the service directly then it works correctly (as a loadbalancer)</p>
<pre><code>curl -I my-service.my-namespace.svc.cluster.local:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:24:42 GMT
Content-Length: 656
</code></pre>
<p>If I try to hit the pod directly using its IP, it also works fine</p>
<pre><code>curl -I 100.2.2.8:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:29:22 GMT
Content-Length: 656
Content-Type: text/html; charset=utf-8
</code></pre>
<p>But my use case requires me to be able to hit the statefulset pod using fqdn i.e <code>my-pod-0.my-service.my-namespace.svc.cluster.local</code> . What am I missing here?</p>
| <p>Here is an example StatefulSet called <code>foo</code> with the <code>nginx</code> image:</p>
<pre><code>k get statefulsets.apps
NAME READY AGE
foo 3/3 8m55s
</code></pre>
<p>This stateful set created following pods(<code>foo-0,foo-1,foo-2</code>):</p>
<pre><code>k get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 1 3h47m 10.1.198.71 ps-master <none> <none>
foo-0 1/1 Running 0 12m 10.1.198.121 ps-master <none> <none>
foo-1 1/1 Running 0 12m 10.1.198.77 ps-master <none> <none>
foo-2 1/1 Running 0 12m 10.1.198.111 ps-master <none> <none>
</code></pre>
<p>Now create a headless service (<code>clusterIP</code> is <code>None</code>) as follows (make sure the selector matches your StatefulSet's pod labels):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: foo
spec:
type: ClusterIP
clusterIP: None
ports:
- port: 80
name: web
selector:
app: foo
</code></pre>
<p>Now, run <code>nslookup</code> to check that <code>dns</code> resolution is working for the service (optional step):</p>
<pre><code>k exec -it busybox -- nslookup nginx.default.svc.cluster.local
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
Name: nginx.default.svc.cluster.local
Address 1: 10.1.198.77 foo-1.nginx.default.svc.cluster.local
Address 2: 10.1.198.111 foo-2.nginx.default.svc.cluster.local
Address 3: 10.1.198.121 foo-0.nginx.default.svc.cluster.local
</code></pre>
<p>Now validate that individual per-pod resolution is working:</p>
<pre><code>k exec -it busybox -- nslookup foo-1.nginx.default.svc.cluster.local
Server: 10.152.183.10
Address 1: 10.152.183.10 kube-dns.kube-system.svc.cluster.local
Name: foo-1.nginx.default.svc.cluster.local
Address 1: 10.1.198.77 foo-1.nginx.default.svc.cluster.local
</code></pre>
<p>More info: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">Here</a></p>
<p>Note: In this case the OP had an incorrect mapping between the <code>headless</code> service and the <code>statefulset</code>; this can be verified with the command below (note the misspelled <code>nignx</code>, which does not match the service name <code>nginx</code>):</p>
<pre><code>k get statefulsets.apps foo -o jsonpath="{.spec.serviceName}{'\n'}"
nignx
</code></pre>
<p>Ensure that the StatefulSet's <code>spec.serviceName</code> exactly matches the headless Service's name.</p>
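<p>In other words, the fix is to make <code>spec.serviceName</code> in the StatefulSet identical to the headless Service's name (a minimal sketch using the example names from above):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo
spec:
  serviceName: nginx   # must be identical to the headless Service's metadata.name
  replicas: 3
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: nginx
        image: nginx
</code></pre>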
|
<p>I have questions about my basic understanding about the setup of my k8s cluster.</p>
<p>I have a K8s running on Hetzner-cloud and allocated a "physical" <code>Loadbalancer</code> (which can be controlled via <code>annotations</code> on a Service.)
I use a nginx (or traefik) as my <code>ingress-controller</code>.</p>
<p>Please correct me if I am wrong:</p>
<ol>
<li>I create the service <code>Loadbalancer</code> with the <code>annotations</code> in the same <code>namespace</code> of my <code>ingress-controller</code> right?</li>
<li>Then I create an <code>ingress</code> with label <code>kubernetes.io/ingress-controller=nginx</code> in my <code>default</code> namespace with the settings to point to my services in the <code>default</code> namespace (one for frontend, one for backend)</li>
</ol>
<p><strong>Is this the correct way to set this up?</strong></p>
| <p>1.- No. The Ingress Controller and your workload don't have to be in the same namespace. In fact, you will typically have the Ingress Controller running in a separate namespace from your workload.</p>
<p>2.- Yes. Generally speaking, your <code>Ingress</code> rules (that is, your <code>Ingress</code> object, your <code>Ingress</code> yaml) and your <code>Service</code> must be in the same namespace; an <code>Ingress</code> can't cross namespaces.</p>
<p><strong>Note: There is a way to have an <code>Ingress</code> object to send trafffic to a <code>Service</code> in a different namespace.</strong></p>
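<p>For example (all names below are assumptions), an <code>ExternalName</code> Service in the ingress's namespace can forward to a Service in another namespace:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend-proxy        # referenced by the Ingress in this namespace
  namespace: ingress-ns
spec:
  type: ExternalName
  # points at Service "backend-svc" in namespace "apps"
  externalName: backend-svc.apps.svc.cluster.local
</code></pre>
<p>Whether this works depends on the ingress controller; not all controllers support <code>ExternalName</code> backends.</p>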
|
<p>I currently have a GitHub repository that contains my application code and a Kubernetes deployment configuration file (called <code>deployment.yml</code>). I.e., my repository has the following structure:</p>
<pre><code>repository
+- ...application code...
+- Dockerfile
\- deployment.yml
</code></pre>
<p>When a change is pushed to this GitHub repository, a series of GitHub Actions are executed that containerize my application into a Docker image and publish that image to Docker Hub.</p>
<p>On a development machine, I have a Kubernetes cluster running. I pull the <code>deployment.yml</code> file from the repository and either apply that configuration using <code>kubectl apply -f deployment.yml</code> or perform a rolling-update using <code>kubectl rollout restart deployment/<name></code>.</p>
<p>My <code>deployment.yml</code> is configured as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
replicas: 1
...
template:
...
spec:
containers:
- name: <name>
image: foo/bar:v1.0.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: <port>
imagePullSecrets:
- name: <creds-id>
</code></pre>
<p>The version of the image (i.e., <code>v1.0.0</code> in <code>foo/bar:v1.0.0</code>) originates from the tags I use in Git. I.e., if I tag a commit as <code>v1.0.0</code>, a new build is run for that tag and a new Docker image is published to Docker Hub with the tag <code>v1.0.0</code>.</p>
<p>My problem is that I am storing my Kubernetes configuration (<code>deployment.yml</code>) in the same repository that is being tagged. This means that I am tagging a commit (i.e., <code>v2.0.0</code>) that contains an image of <code>foo/bar:v1.0.0</code> in the <code>deployment.yml</code>. I.e., I make changes to my code and then decide once those changes are sufficient, that a specific commit will be tagged. Since I want the cluster on my development machine to use the latest, approved (i.e., tagged) code, I then go an update the <code>deployment.yml</code> and commit, but that new commit is after the tagged commit.</p>
<p>To resolve this, I would have to change the <code>deployment.yml</code> file for the commit I know will be tagged. I.e., knowing that the next commit I make will be tagged as <code>v2.0.0</code>, I will have to change the <code>deployment.yml</code> to use the image <code>foo/bar:v2.0.0</code> and add that change to the commit (i.e., the one to be tagged). In that case, the commit tagged as <code>v2.0.0</code> will have an image of <code>foo/bar:v2.0.0</code> in its <code>deployment.yml</code>.</p>
<p>Is there a technique or best practice (such as templating or another practice) that can solve this issue?</p>
<p>Thank you.</p>
| <p><strong>Helm</strong> templating is also a good option; however, if your project is basic and doesn't need much more, you can use this simple method and keep the <strong>deployment.yaml</strong> in the repo.</p>
<p>Ideally, keep your <strong>deployment</strong> file as dynamic as possible rather than hard-coding values into it.</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: test-image
labels:
app: test-image
spec:
selector:
matchLabels:
app: test-image
tier: frontend
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: test-image
tier: frontend
spec:
containers:
- image: TEST_IMAGE_NAME
name: test-image
ports:
- containerPort: 8080
name: http
- containerPort: 443
name: https
</code></pre>
<p>In the <strong>YAML</strong> <strong>CI</strong> config we then replace the <strong>IMAGE URL</strong> in the deployment.yaml file as needed.</p>
<p>The following is a <code>Google Cloud Build</code> CI file example; you can write or update the same logic in your own <strong>YAML</strong> CI:</p>
<pre><code>steps:
- id: 'set test core image in yamls'
name: 'ubuntu'
args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
</code></pre>
<p>During the CI run, the placeholder string in the <strong>YAML</strong> file is replaced with the image <strong>URL</strong> you want to deploy.</p>
<p><strong>Git</strong> will give you the necessary variables, like the <strong>COMMIT HASH</strong> or the <strong>TAGGED VERSION</strong>.</p>
<p>Example <strong>CI</strong> <strong>YAML</strong> file:</p>
<pre><code>steps:
- id: 'build test core image'
name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA', '.']
- id: 'push test image'
name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA']
- id: 'set test core image in yamls'
name: 'ubuntu'
args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
- name: 'gcr.io/cloud-builders/kubectl'
args: ['apply', '-f', 'deployment.yaml']
</code></pre>
<p>Inside your <code>deployment.yaml</code> there is a single placeholder string, <code>TEST_IMAGE_NAME</code>, which gets replaced during the <code>CI</code> run</p>
<p>using the simple ubuntu command <strong>sed</strong>: <code>sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml</code></p>
<p>Variables like <code>$SHORT_SHA</code> are injected automatically by the CI system, so on your <strong>CI server</strong> you end up with a <code>deployment.yaml</code> containing the <code>Image-URL</code> of the image you pushed to Docker Hub.</p>
<p>We have replaced the <code>TEST_IMAGE_NAME</code> string in <code>deployment.yaml</code> dynamically during CI.</p>
<p>Now you can apply this <code>deployment.yaml</code> from the CI server, and it will create or update the deployment in the <strong>K8s</strong> cluster, wherever it's running.</p>
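<p>The substitution step itself is easy to verify locally (the file content and image URL below are made up for the demo):</p>
<pre><code># deployment.yaml contains the placeholder
printf 'image: TEST_IMAGE_NAME\n' > deployment.yaml

# replace the placeholder with the freshly pushed image URL
sed -i "s,TEST_IMAGE_NAME,gcr.io/my-project/app:abc1234," deployment.yaml

cat deployment.yaml   # image: gcr.io/my-project/app:abc1234
</code></pre>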
<p><strong>Note</strong>:</p>
<p>If you also want to store the rendered config in the repo, you can commit the file back from the CI server. Keep a separate deployment file per environment (e.g. <code>dev-deployment.yaml</code>, <code>stag-deployment.yaml</code>); whenever a branch changes, commit the updated file back from the CI server, and your YAML config will also be saved in the <strong>repo</strong>.</p>
|
<p>I need some help from the community; I'm still new to K8 and Spring Boot. Thanks all in advance.<br />
What I need is to have 4 K8 pods running in a K8 environment, each with a slightly different configuration. For example, I have a property in one of my Java classes called regions that extracts its value from Application.yml, like</p>
<p><strong>@Value("${regions}")<br />
Private String regions;</strong></p>
<p>Now, after deploying it to the K8 env, I want to have 4 pods running (I can configure this in the helm file), and in each pod the <strong>regions</strong> field should have a different value.
Is this achievable? Can anyone please give some advice?</p>
| <p>If you want to run <strong>4</strong> different pods with different configurations, you have to create <strong>4</strong> different Deployments in <strong>kubernetes</strong>.</p>
<p>You can create a different <code>configmap</code> for each as needed, storing either the whole <code>Application.yaml</code> file or just environment variables, and inject it into the <strong>different</strong> <strong>deployments</strong>.</p>
<p>Here is how to store the whole <code>application.yaml</code> inside a <code>config map</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: yaml-region-first
data:
application.yaml: |-
data: test,
region: first-region
</code></pre>
<p>In the same way, you can create the config map for the second deployment:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: yaml-region-second
data:
application.yaml: |-
data: test,
region: second-region
</code></pre>
<p>You can inject this <code>configmap</code> into each deployment.</p>
<p>example :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hello-app
name: hello-app
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: hello-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: hello-app
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /etc/nginx/app.yaml
name: yaml-file
readOnly: true
volumes:
- configMap:
name: yaml-region-second
optional: false
name: yaml-file
</code></pre>
<p>Accordingly, you can also create the helm chart.</p>
<p>If you just want to pass a single environment variable instead of storing the whole file inside the <code>configmap</code>, you can add the value directly to the deployment.</p>
<p>Example :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: print-greeting
spec:
containers:
- name: env-print-demo
image: bash
env:
- name: REGION
value: "one"
- name: HONORIFIC
value: "The Most Honorable"
- name: NAME
value: "Kubernetes"
command: ["echo"]
    args: ["$(REGION) $(HONORIFIC) $(NAME)"]
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/</a></p>
<p>For each deployment your environment will be different, and in helm you can also <code>dynamically</code> <code>update</code> or <code>overwrite</code> it using a CLI command.</p>
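<p>For example, with Helm each environment could be installed as its own release with the value overridden from the CLI (a sketch — the chart path and the <code>region</code> value name are assumptions about your chart):</p>

```shell
helm upgrade --install app-region-one ./mychart --set region=first-region
helm upgrade --install app-region-two ./mychart --set region=second-region
```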
|
<p>I followed this tutorial for deploying vault into a minikube cluster: <a href="https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes</a>.</p>
<p>I don't understand, however, how this is reproducible. These seem like a lot of manual steps. Is there a way to easily deploy these pods again if I destroy them? Would I need to script this or can I somehow get the consul and vault pods, output them as yaml and use that to recreate them?</p>
| <p>I found this blog post on hashicorp's site which seems to address configuration once you are up and running: <a href="https://www.hashicorp.com/blog/codifying-vault-policies-and-configuration" rel="nofollow noreferrer">https://www.hashicorp.com/blog/codifying-vault-policies-and-configuration</a>.</p>
<p>There's also this: <a href="https://kubevault.com/docs/v2021.08.02/welcome/" rel="nofollow noreferrer">https://kubevault.com/docs/v2021.08.02/welcome/</a></p>
<p>Setting it up before the API is running seems to require either manual steps or a pretty simple shell script.</p>
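<p>A minimal sketch of such a script, assuming the <code>vault-0</code> pod name from the tutorial (illustrative only — in practice the unseal keys should be stored somewhere safe, not in a local file):</p>

```shell
# Initialize Vault and capture the keys as JSON
kubectl exec vault-0 -- vault operator init \
  -key-shares=1 -key-threshold=1 -format=json > init-keys.json

# Unseal using the key from the init output
UNSEAL_KEY=$(jq -r '.unseal_keys_b64[0]' init-keys.json)
kubectl exec vault-0 -- vault operator unseal "$UNSEAL_KEY"
```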
|
<p>I need to start by saying that I have no experience using Cassandra and I am not the one who created this deployment.</p>
<p>I have Cassandra running in a cluster in AKS. The PVC as configured in the statefulset is 1000Gi. Currently the pods are out of storage and are in a constant unhealthy state.</p>
<p>I am looking to expand the volumes attached to the pods. The problem I am facing is that I cannot scale down the statefulset because the statefulsets only scale down when <em>all</em> their pods are healthy.</p>
<p>I even tried deleting the statefulset and then recreating it with a larger PVC (as recommended <a href="https://adamrushuk.github.io/increasing-the-volumeclaimtemplates-disk-size-in-a-statefulset-on-aks" rel="nofollow noreferrer">here</a>).</p>
<p>However, I can't seem to delete the statefulset. It looks to me like the CassandraDatacenter CRD keeps recreating the statefulset as soon as I delete it, giving me no time to change anything.</p>
<p>My questions are as follows:</p>
<ol>
<li>Is there a standard way to expand the volume without losing data?</li>
<li>What would happen if I scale down the replicas in the CassandraDatacenter? Will it delete the PVC or keep it?</li>
<li>If there is no standard, does anyone have any ideas on how to accomplish expanding the volume size without losing storage?</li>
</ol>
| <p>Ordinarily in a Cassandra cluster, the best practice is to scale horizontally (not vertically). You want more Cassandra nodes to spread the load out to achieve maximum throughput.</p>
<p>The equivalent in Kubernetes is to scale up your deployment. As you increase the node count, the amount of data on each individual Cassandra node will decrease proportionally.</p>
<p>If you really want to resize the PVC, you will only be able to do it dynamically if you have enabled <code>allowVolumeExpansion</code>. You won't lose data as you do this.</p>
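<p>Assuming the StorageClass allows it, the resize itself is just an edit of the PVC's storage request, sketched below (the PVC name here is a placeholder — use the names from <code>kubectl get pvc</code>):</p>

```shell
# Check whether the StorageClass permits expansion
kubectl get storageclass -o custom-columns='NAME:.metadata.name,EXPANSION:.allowVolumeExpansion'

# Bump the requested size on one claim (placeholder name)
kubectl patch pvc server-data-cassandra-dc1-sts-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"1500Gi"}}}}'
```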
<p>Deleting a STS isn't going to work because by design it will be automatically replaced as you already know. You also won't be able to scale down because there isn't enough capacity (disk space) in your cluster if you do. Cheers!</p>
|
<p>I am using Prometheus version 2.28. I enabled basic authentication on the Prometheus server, and it seems that after enabling it, the <code>/-/healthy</code> and <code>/-/ready</code> endpoints are also secured. I am using these endpoints in Kubernetes for the liveness and readiness probes. I have the option to pass an Authorization header as part of the YAML as follows. But is there any other way we can securely pass the username and password, as it's only base64-encoded? Or is it possible to disable security on the Prometheus health endpoints?</p>
<pre><code>livenessProbe:
httpGet:
path: /-/healthy
port: 9090
httpHeaders:
- name: Authorization
value: Basic dXNlcjpwYXNz
</code></pre>
| <p>Unfortunately <strong>no</strong>. What you have is probably the only solution that works, as of today.</p>
<p>Back in 2015 there was a GitHub issue (<a href="https://github.com/kubernetes/kubernetes/issues/16910" rel="nofollow noreferrer">#16910</a>) to implement authentication support for HTTP liveness probes, but it was ultimately scrapped.</p>
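<p>For reference, the <code>Basic</code> value in the probe header is just the base64 encoding of <code>user:password</code>, so you can regenerate it whenever the credentials change:</p>

```shell
echo -n 'user:pass' | base64   # prints dXNlcjpwYXNz
```

<p>Note this is encoding, not encryption, so anyone who can read the Pod spec can recover the password.</p>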
|
<p>I have an Azure Kubernetes cluster with Velero installed. A Service Principal was created for Velero, per <a href="https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/tree/master#option-1-create-service-principal" rel="nofollow noreferrer">option 1 of the instructions</a>.</p>
<p>Velero was working fine until the credentials for the Service Principal were reset. Now the scheduled backups are failing.</p>
<pre><code>NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
daily-entire-cluster-20210727030055 Failed 0 0 2021-07-26 23:00:55 -0000 13d default <none>
</code></pre>
<p>How can I update the secret for Velero?</p>
| <h1>1. Update credentials file</h1>
<p>First, update your credentials file (for most providers, this is <code>credentials-velero</code> and the contents are described in the plugin installation instructions: <a href="https://github.com/vmware-tanzu/velero-plugin-for-aws#set-permissions-for-velero" rel="nofollow noreferrer">AWS</a>, <a href="https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure#create-service-principal" rel="nofollow noreferrer">Azure</a>, <a href="https://github.com/vmware-tanzu/velero-plugin-for-gcp#option-1-set-permissions-with-a-service-account" rel="nofollow noreferrer">GCP</a>)</p>
<h1>2. Update secret</h1>
<p>Now update the velero secret. On linux:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl patch -n velero secret cloud-credentials -p '{"data": {"cloud": "'$(base64 -w 0 credentials-velero)'"}}'
</code></pre>
<ul>
<li><code>patch</code> tells <code>kubectl</code> to update a resource by merging the provided data</li>
<li><code>-n velero</code> tells <code>kubectl</code> to use the <code>velero</code> namespace</li>
<li><code>secret</code> is the resource type</li>
<li><code>cloud-credentials</code> is the name of the secret used by Velero to store credentials</li>
<li><code>-p </code> specifies that the next word is the patch data. It's more common to patch using JSON rather than YAML</li>
<li><code>'{"data": {"cloud": "<your-base64-encoded-secret-will-go-here>"}}'</code> this is the JSON data that matches the existing structure of the Velero secret in Kubernetes. <code><your-base64-encoded-secret-will-go-here></code> is a placeholder for the command we'll insert.</li>
<li><code>$(base64 -w 0 credentials-velero)</code> reads the file <code>credentials-velero</code> in the current directory, turns off word wrapping of the output (<code>-w 0</code>), BASE64-encodes the contents of the file, and inserts the result in the data.</li>
</ul>
|
<p>Is there a way to combine <code>kubectl top pod</code> and <code>kubectl top nodes</code>?</p>
<p>Basically I want to know pods sorted by cpu/memory usage <strong>BY</strong> node.</p>
<p>I can only get pods sorted by memory/cpu for whole cluster with <code>kubectl top pod</code> or directly memory/cpu usage per whole node with <code>kubectl top nodes</code>.</p>
<p>I have been checking the documentation but couldn't find the exact command.</p>
| <p>There is no built-in solution to achieve your expectations. <code>kubectl top pod</code> and <code>kubectl top node</code> are different commands and cannot be mixed with each other. It is possible to <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods" rel="nofollow noreferrer">sort results</a> from the <code>kubectl top pod</code> command only by <code>cpu</code> or <code>memory</code>:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'
</code></pre>
<p>If you want to "combine" <code>kubectl top pod</code> and <code>kubectl top node</code> you need to write custom solution. For example script in Bash based on <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">this commands</a>.</p>
|
<p>I am trying to create a Service: NodePort for one of the pods I have deployed,</p>
<p><strong>Below is my service definition</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: voting-service
labels:
name: voting-service
app: demo-voting-app
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 30004
selector:
name: voting-app-pod
app: demo-voting-app
</code></pre>
<p>I am deploying this service with command below</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create -f voting-app-service.yaml
</code></pre>
<p><strong>Here is the Error</strong></p>
<pre><code>The Service "voting-service" is invalid: spec.ports[0].nodePort: Invalid value: 30004: provided port is already allocated
</code></pre>
<p>So I tried to find services that are using port 30004 with the netstat and lsof commands but couldn't find any service using that port.</p>
<pre class="lang-sh prettyprint-override"><code>➜ Voting-app kubectl create -f voting-app-service.yaml
The Service "voting-service" is invalid: spec.ports[0].nodePort: Invalid value: 30004: provided port is already allocated
➜ Voting-app sudo netstat -lntp | grep 30004
➜ Voting-app lsof -i :30004
➜ Voting-app
</code></pre>
<p>minikube version: v1.22.0<br />
kubectl version: v1.21</p>
| <p>As @HarshManvar mentioned, you can change the port in the service file to one that isn't allocated.</p>
<p>Later you found that port <code>30004</code> was already allocated because there was a service using that port:</p>
<blockquote>
<p>kubectl get svc | grep 30004</p>
</blockquote>
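<p>Since minikube runs inside its own VM or container, the NodePort is allocated on the cluster node rather than on your machine, which is why netstat/lsof on your host show nothing. A way to ask the API server directly which Service holds the port (a sketch):</p>

```shell
kubectl get svc -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.ports[*].nodePort}{"\n"}{end}' \
  | grep 30004
```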
|
<p>I have two services: one for serving static files and the other for serving APIs. I have created a single ingress controller for both.</p>
<p>I want to serve <code>/</code> from service1 and <code>/api</code> from service2. My services are running fine,
but I am getting a 404 for the <code>/api</code> path.</p>
<p>Below is my kubernetes yaml</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: "myapp"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: "service2"
namespace: "myapp"
labels:
project: "myapp"
env: "prod"
spec:
replicas: 2
selector:
matchLabels:
project: "myapp"
run: "service2"
matchExpressions:
- {key: project, operator: In, values: ["myapp"]}
template:
metadata:
labels:
project: "myapp"
env: "prod"
run: "service2"
spec:
securityContext:
sysctls:
- name: net.ipv4.ip_local_port_range
value: "1024 65535"
imagePullSecrets:
- name: tildtr
containers:
- name: "node-container"
image: "images2"
imagePullPolicy: Always
ports:
- containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: "service1"
namespace: "myapp"
labels:
project: "myapp"
env: "prod"
spec:
replicas: 2
selector:
matchLabels:
project: "myapp"
run: "service1"
matchExpressions:
- {key: project, operator: In, values: ["myapp"]}
template:
metadata:
labels:
project: "myapp"
env: "prod"
run: "service1"
spec:
securityContext:
sysctls:
- name: net.ipv4.ip_local_port_range
value: "1024 65535"
imagePullSecrets:
- name: tildtr
containers:
- name: "nginx-container"
image: "image1"
imagePullPolicy: Always
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: "service1"
namespace: "myapp"
labels:
project: "myapp"
env: "prod"
run: "service1"
spec:
selector:
project: "myapp"
type: ClusterIP
ports:
- name: "service1"
port: 80
targetPort: 80
selector:
run: "service1"
---
apiVersion: v1
kind: Service
metadata:
name: "service2"
namespace: "myapp"
labels:
project: "myapp"
env: "prod"
run: "service2"
spec:
selector:
project: "myapp"
type: ClusterIP
ports:
- name: "service2"
port: 80
targetPort: 3000
selector:
run: "service2"
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: "myapp"
namespace: "myapp"
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/server-alias: "*.xyz.in"
nginx.ingress.kubernetes.io/server-snippet: |
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 50;
keepalive_requests 100000;
reset_timedout_connection on;
client_body_timeout 20;
send_timeout 2;
types_hash_max_size 2048;
client_max_body_size 20M;
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml application/json;
gzip_disable "MSIE [1-6]\.";
spec:
rules:
- host: "myhost.in"
http:
paths:
- path: /api
backend:
serviceName: "service2"
servicePort: 80
- path: /
backend:
serviceName: "service1"
servicePort: 80
</code></pre>
<p>And this is my ingress description:</p>
<pre><code>
Name: myapp
Namespace: myapp
Address: 10.100.160.106
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
ds-vaccination.timesinternet.in
/api service2:3000 (10.0.1.113:3000,10.0.2.123:3000)
/ service1:80 (10.0.1.37:80,10.0.2.59:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/server-alias: *.xyz.in
nginx.ingress.kubernetes.io/server-snippet:
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 50;
keepalive_requests 100000;
reset_timedout_connection on;
client_body_timeout 20;
send_timeout 2;
types_hash_max_size 2048;
client_max_body_size 20M;
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml application/json;
gzip_disable "MSIE [1-6]\.";
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 6s (x5 over 31m) nginx-ingress-controller Scheduled for sync
Normal Sync 6s (x5 over 31m) nginx-ingress-controller Scheduled for sync
</code></pre>
| <p>Remove this annotation and try again:</p>
<pre><code>annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>If your cluster supports the old <strong>API</strong> (extensions/v1beta1):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: service-ingress
spec:
rules:
- host: service1.example.com
http:
paths:
- backend:
serviceName: service1
servicePort: 80
- host: service2.example.com
http:
paths:
- backend:
serviceName: service2
servicePort: 80
</code></pre>
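<p>On clusters with the current API (<code>networking.k8s.io/v1</code>), a path-based equivalent for your two services would look roughly like this (depending on your controller setup you may also need an <code>ingressClassName</code>):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
spec:
  rules:
  - host: "myhost.in"
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
```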
|
<p>I am using a Kubernetes cluster provided by the infra team (not minikube). I have created a Traefik ingress controller with all the configuration and the ingress container, and our applications are running in the cluster. Now I want to access the application using a domain name or IP address. For this, I have created an Ingress resource as below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
rules:
- host: dummy.domain.com
http:
paths:
- path: /
backend:
serviceName: app-web-ui
servicePort: web
</code></pre>
<p>What I am not getting here is the host name. Since the applications are running in the cluster, should this host name be the cluster master node name? What should the hostname or IP address be in the actual prod environment?</p>
<p>Note: we are not using any cloud providers; it's just a plain Docker with Kubernetes cluster environment (the cluster is configured in Mirantis Kubernetes and the client is Windows).</p>
| <p>If you are using any load balancer behind the ingress, or any ingress controller with a load balancer, you have to use the load balancer IP everywhere.</p>
<p>So you have to map the <code>Loadbalancer IP</code> into the <code>DNS</code> for <code>dummy.domain.com</code>; otherwise, if you are not using a load balancer, you have to use the master IP.</p>
<p>If you are testing from the <code>local machine</code>, you can add an entry mapping the <code>domain</code> to the IP in the <code>/etc/hosts</code> file and check.</p>
|
<p>Very new to K3s and I'm attempting to just practice by creating a deployment with 3 replicas of an nginx pod. It creates on 2 of my worker nodes, but one of the pods tried to start on my master node and I got a CreateContainerError.
After digging further I found the following error: Error: failed to get sandbox container task: no running task found: task e2829c0383965aa4556c9eecf1ed72feb145211d23f714bdc0962b188572f849 not found: not found.</p>
<p>Any help would be greatly appreciated</p>
<p>After running <code>kubectl describe node</code> and checking the taints for the master node, it shows <code><none></code></p>
| <p>So all it needed was a fresh install and that seems to have solved everything. Probably should have tried that first.</p>
|
<p>I want to run pods with <code>runsc</code> as the default on my k8s nodes, but <code>kube-proxy</code> and other system pods can't be run via <code>runsc</code>. So I want to automate the process in such a way that after Kubernetes starts, every new pod will be run via <code>runsc</code>.</p>
| <p>According to the <a href="https://kubernetes.io/docs/concepts/containers/runtime-class/" rel="nofollow noreferrer">official documentation</a>, this can be done with the below steps:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/containers/runtime-class/#1-configure-the-cri-implementation-on-nodes" rel="nofollow noreferrer">Configure the CRI implementation on nodes</a></p>
</li>
<li><p><a href="https://kubernetes.io/docs/concepts/containers/runtime-class/#2-create-the-corresponding-runtimeclass-resources" rel="nofollow noreferrer">Create the corresponding RuntimeClass resources</a></p>
</li>
<li><p>Specify a <code>runtimeClassName</code> in the Pod spec.</p>
</li>
</ul>
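<p>The last two steps can be sketched as follows, assuming the nodes' container runtime has already been configured with a <code>runsc</code> handler (the class name <code>gvisor</code> is just a convention):</p>

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # must match the CRI handler configured on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
```

<p>Note that <code>runtimeClassName</code> is opt-in per pod; making <code>runsc</code> the node-wide default would instead be done in the container runtime's own configuration (e.g. the default runtime setting in containerd).</p>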
<p>A step by step guide alongside all necessary details can be found in the linked docs.</p>
<p>Notice that:</p>
<ul>
<li><p>like low-level resources such as <code>nodes</code> and <code>persistentVolumes</code>, which are not in any namespace, <code>RuntimeClass</code> is a non-namespaced resource</p>
</li>
<li><p><code>RuntimeClass</code> assumes a homogeneous node configuration across the cluster by default (which means that all nodes are configured the same way with respect to container runtimes). To support heterogeneous node configurations, see the <a href="https://kubernetes.io/docs/concepts/containers/runtime-class/#scheduling" rel="nofollow noreferrer">Scheduling</a> section of the docs.</p>
</li>
</ul>
|
<p>I have a simple Node.js chat example which uses socket.io. I have tested this deployment with a Docker container locally, and a web client successfully connects to the websocket (socket.io). The problem arises when I try to deploy this on a Kubernetes cluster with an ingress controller.</p>
<p>My question is, is there anything <em><strong>special</strong></em> socket.io specifically needs when using an ingress controller?</p>
<p><strong>The error message I get is</strong> <code><<browser name>> Can't estabilish a connection to the server at ws://<<address>>/socket.io/EIO=3&transport=websocket</code></p>
<p>Below is the simple implementation of the <strong>server side</strong></p>
<pre><code>var io = socket(server);
io.on('connection', (socket)=>{
    console.log('made socket connection');
    socket.on('chat', (message)=>{
        console.log(message.value);
    });
});
</code></pre>
<p>Below is the client:</p>
<pre><code>var socket = io.connect("http://localhost:3002", {
upgrade: false,
transports: ['websocket'],
  secure: true
});
</code></pre>
<p>My <strong>ingress</strong> yaml file looks like this</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ws-ingress
namespace: websocket-ns
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/websocket-services: "ws-service"
nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
nginx.ingress.kubernetes.io/server-snippet: |
http {
server {
listen 3002;
location = / {
proxy_set_header Upgrade "websocket";
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_set_header Connection "upgrade";
proxy_cache_bypass $http_upgrade;
}
upstream nodes {
hash $remote_addr consistent;
server app01:3002;
}
}
spec:
rules:
- host: "$host_address"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ws-service
port:
number: 80
</code></pre>
<p>Service yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ws-service
namespace: websocket-ns
labels:
app: ws-service
spec:
type: LoadBalancer
selector:
app: ws-app
ports:
- port: 80
protocol: TCP
targetPort: 3002
</code></pre>
<p>Server deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ws-app
namespace: websocket-ns
labels:
app: ws-app
spec:
replicas: 1
selector:
matchLabels:
app: ws-app
template:
metadata:
labels:
app: ws-app
spec:
containers:
- name: ws-app
image: themuchy/socketexample
ports:
- containerPort: 3002
name: ws-app
</code></pre>
| <p>This one is working for me:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
certmanager.k8s.io/cluster-issuer: core-prod
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/websocket-services: ws-service
nginx.org/websocket-services: ws-service
name: core-ingress
spec:
rules:
- host: test.io
http:
paths:
- backend:
serviceName: ws-service
servicePort: 80
tls:
- hosts:
- test.io
secretName: core-prod
</code></pre>
<p>Nginx itself will upgrade the HTTP request and convert it to a WebSocket connection; you don't have to add any extra annotation for that.</p>
<p>Your <code>Deployment</code> and <code>service</code> <code>YAML</code> configurations look good; however, try removing the <code>server-snippet</code> annotation. This way the traffic flow will be something like</p>
<pre><code>LB > ingress (Connection upgrade) > service > pods
</code></pre>
|
<p>I am trying to use the Nginx ingress to access the Kubernetes dashboard on my local PC. The steps I followed are:</p>
<p>Getting nginx ingress</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>Getting kubernetes dashboard</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
</code></pre>
<p>Applying this ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: "kubernetes.docker.internal"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: kubernetes-dashboard
port:
number: 443
</code></pre>
<p>Checking that my host file has this line</p>
<pre><code>127.0.0.1 kubernetes.docker.internal
</code></pre>
<p>If I try to open <a href="http://kubernetes.docker.internal/" rel="nofollow noreferrer">http://kubernetes.docker.internal/</a> on my browser I get "Http Error 400 this page isn't working", while on postman I get an error 400 with message "Client sent an HTTP request to an HTTPS server."</p>
<p>How can I resolve this?</p>
| <p>I resolved it by adding an annotations section to the ingress YAML.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
rules:
- host: "kubernetes.docker.internal"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: kubernetes-dashboard
port:
number: 443
</code></pre>
|
<p>I'm creating a custom resource definition (CRD) with an associated controller using <a href="https://github.com/kubernetes-sigs/kubebuilder" rel="nofollow noreferrer">kubebuilder</a>. My controller's reconcile loop creates a Deployment sub-resource and parents it to the custom resource using <code>controllerutil.SetControllerReference(&myResource, deployment, r.Scheme)</code>. I've also configured my reconciler to "own" the sub-resource, as follows:</p>
<pre class="lang-golang prettyprint-override"><code>// SetupWithManager sets up the controller with the Manager.
func (r *MyResourceReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&mygroupv1alpha1.MyResource{}).
Owns(&appsv1.Deployment{}).
Complete(r)
}
</code></pre>
<p>However, when I run my controller locally using <code>make run</code>, I noticed that deleting the my CR (the root object) doesn't cause the Deployment sub-resource to get garbage collected. I also noticed that deleting the Deployment sub-resource doesn't trigger my reconciler to run. Why is this? Is there something I'm not doing or is this possibly a limitation of local development/testing?</p>
| <p>Using @coderanger's hint, I could see that the <code>metadata.ownerReferences</code> weren't being set correctly when running the following command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get deployments sample-deployment -o yaml
</code></pre>
<p>The problem was my controller's reconcile code. I was calling <code>controllerutil.SetControllerReference(&myResource, deployment, r.Scheme)</code> only after I'd already created and persisted the Deployment.</p>
<p><strong>Buggy code</strong></p>
<pre class="lang-golang prettyprint-override"><code>log.Info("Creating a deployment")
deployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: deploymentName,
Namespace: myResource.Namespace,
},
Spec: deploymentSpec,
}
if err = r.Create(ctx, deployment); err != nil {
log.Error(err, "Failed to create deployment")
if errors.IsInvalid(err) {
// Don't retry on validation errors
err = nil
}
return ctrl.Result{}, err
}
// Establish the parent-child relationship between my resource and the deployment
log.Info("Making my resource a parent of the deployment")
if err = controllerutil.SetControllerReference(&myResource, deployment, r.Scheme); err != nil {
log.Error(err, "Failed to set deployment controller reference")
return ctrl.Result{}, err
}
</code></pre>
<p>To fix it, I needed to swap the order of the call to <code>r.Create</code> and <code>controllerutil.SetControllerReference</code>:</p>
<p><strong>Working code</strong></p>
<pre class="lang-golang prettyprint-override"><code>log.Info("Creating a deployment")
deployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: deploymentName,
Namespace: myResource.Namespace,
},
Spec: deploymentSpec,
}
// Establish the parent-child relationship between my resource and the deployment
log.Info("Making my resource a parent of the deployment")
if err = controllerutil.SetControllerReference(&myResource, deployment, r.Scheme); err != nil {
log.Error(err, "Failed to set deployment controller reference")
return ctrl.Result{}, err
}
// Create the deployment with the parent/child relationship configured
if err = r.Create(ctx, deployment); err != nil {
log.Error(err, "Failed to create deployment")
if errors.IsInvalid(err) {
// Don't retry on validation errors
err = nil
}
return ctrl.Result{}, err
}
</code></pre>
<p>I was able to confirm that this worked by looking at the <code>metadata.ownerReferences</code> YAML data for my created deployment (using the command referenced above).</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2021-08-02T16:22:04Z"
generation: 1
name: sample-deployment
namespace: default
ownerReferences:
- apiVersion: resources.mydomain.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: MyResource
name: myresource-sample
uid: 6ebb146c-afc7-4601-bd75-58efc29beac9
resourceVersion: "569913"
uid: d9a4496f-7418-4831-ab87-4804dcd1f8aa
</code></pre>
|
<p>I've created a cluster on Google Kubernetes Engine :</p>
<pre><code>gcloud container clusters create training-cluster --num-nodes=1 --zone=us-central1-a --machine-type="n1-highmem-2" --scopes="gke-default,storage-rw"
</code></pre>
<p>I get the credentials:</p>
<pre><code>gcloud container clusters get-credentials --zone us-central1-a training-cluster
</code></pre>
<p>and apply my yaml file:</p>
<pre><code>kubectl apply -f pod.yaml
</code></pre>
<p>The yaml file looks like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: gke-training-pod
spec:
containers:
- name: my-custom-container
image: gcr.io/xyz/object-classification:gpu
args:
{I put my container arguments here}
</code></pre>
<p>I can see in the logs that training starts and eventually reaches the end. The problem is that the pod restarts each time unless I delete my cluster. Is there an argument that I should add to avoid such behavior?</p>
| <p>If you are running a bare <strong>Pod</strong>, you can also set <code>restartPolicy: Never</code>.</p>
<p>The spec of a Pod has a <code>restartPolicy</code> field with possible values <strong>Always</strong>, <strong>OnFailure</strong>, and <strong>Never</strong>. The default value is Always.</p>
<blockquote>
<p>The restartPolicy applies to all containers in the Pod. restartPolicy
only refers to restarts of the containers by the kubelet on the same
node. After containers in a Pod exit, the kubelet restarts them with
an exponential back-off delay (10s, 20s, 40s, …), that is capped at
five minutes. Once a container has executed for 10 minutes without any
problems, the kubelet resets the restart backoff timer for that
container.</p>
</blockquote>
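<p>Applied to the manifest from the question, that would be:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gke-training-pod
spec:
  restartPolicy: Never   # don't restart the container once training exits
  containers:
  - name: my-custom-container
    image: gcr.io/xyz/object-classification:gpu
    # args: your training arguments here
```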
<p><strong>OR else</strong></p>
<p>You can use the <strong>cronjobs</strong> or <strong>jobs</strong> in <strong>Kubernetes</strong> which will come and go once the job or container ends the <strong>process</strong>.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a></p>
<p>If you use a <strong>CronJob</strong>, you can add <strong>successfulJobsHistoryLimit: 0</strong> so that once a job finishes, the job is removed and its pod is automatically <strong>deleted</strong> from the <strong>cluster</strong>:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
</code></pre>
|
<p>In Nginx, we can set up basic auth by adding annotation in the ingress.</p>
<pre><code>nginx.ingress.kubernetes.io/auth-realm: Authentication Required
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
</code></pre>
<p>But if we are using azure application gateway instead of Nginx how can we set up basic auth.
<a href="https://i.stack.imgur.com/Qiynp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qiynp.png" alt="enter image description here" /></a></p>
| <p>Unfortunately <strong>Azure Application Gateway</strong> doesn't support <strong>basic auth</strong> and I would say using an <strong>ingress controller</strong> like <strong>nginx-ingress</strong> is the proper choice in a scenario where you need this feature. The lack of support for authentication in <strong>Azure Application Gateway</strong> was already reported in <a href="https://feedback.azure.com/forums/217313-networking/suggestions/19473664-authentication-support-for-application-gateway" rel="nofollow noreferrer">this thread</a>.</p>
|
<p>I've created a cluster on Google Kubernetes Engine :</p>
<pre><code>gcloud container clusters create training-cluster --num-nodes=1 --zone=us-central1-a --machine-type="n1-highmem-2" --scopes="gke-default,storage-rw"
</code></pre>
<p>I get the credentials:</p>
<pre><code>gcloud container clusters get-credentials --zone us-central1-a training-cluster
</code></pre>
<p>and apply my yaml file:</p>
<pre><code>kubectl apply -f pod.yaml
</code></pre>
<p>The yaml file looks like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gke-training-pod
spec:
  containers:
  - name: my-custom-container
    image: gcr.io/xyz/object-classification:gpu
    args:
      {I put my container arguments here}
</code></pre>
<p>I can see in the logs that training starts and eventually reaches the end. Problem is that it restarts each time unless I delete my cluster. Is there an argument that I should add to avoid such a behavior?</p>
| <p>If you want to run the pod only once (stop the lifecycle as soon as the code/training is done), then you have to change the restartPolicy to Never or OnFailure in your pod yaml definition file.</p>
<pre><code>spec:
  containers:
  - name: my-custom-container
    image: gcr.io/xyz/object-classification:gpu
    args:
      {I put my container arguments here}
  restartPolicy: Never
</code></pre>
<p>Note that <code>restartPolicy</code> is a Pod-level field: it sits alongside <code>containers</code> in the Pod spec, not inside a container.</p>
<p><strong>Always</strong> means that the container will be restarted even if it exited with a zero exit code (i.e. successfully). This is useful when you don't care why the container exited, you just want to make sure that it is always running (e.g. a web server). This is the default.</p>
<p><strong>OnFailure</strong> means that the container will only be restarted if it exited with a non-zero exit code (i.e. something went wrong). This is useful when you want to accomplish a certain task with the pod and ensure that it completes successfully - if it doesn't, it will be restarted until it does.</p>
<p><strong>Never</strong> means that the container will not be restarted regardless of why it exited.</p>
<p>Now, if you want to run this workload repeatedly or on a schedule, the best approach is Kubernetes CronJobs/Jobs, as Harsh mentioned. That would be the best fit in this case.</p>
|
<p>I am creating a GitHub Actions workflow to deploy the build into an integration cluster. As a first step, I wanted to create a namespace using kubectl, but I am getting an error: <code>error: You must be logged in to the server (Unauthorized)</code></p>
<p>Here is my action.yaml snippet</p>
<pre><code> uses: cancue/eks-action@v0.0.2
env:
aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws_region: $AWS_REGION
cluster_name: $EKS_CLUSTER_NAME
kubeconfig: ${{ secrets.INTEGRATION_CLUSTER_SECRETS }}
with:
args: |
kubectl create ns namespace:pr#${{ github.sha }}
</code></pre>
<p>I am new to writing action workflows.
Thanks in advance.</p>
| <p>You can try the following:</p>
<pre><code>- name: verify deployment
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  with:
    args: create ns <namespace-name>
</code></pre>
<p>Your config otherwise looks good, but <strong>make sure</strong> you have added the proper data for the <strong>kubeconfig</strong>.</p>
<p>Also, check that the <code>access_key</code> and <code>secret_key</code> you are using have access to the <strong>EKS</strong> cluster.</p>
<p>Example : <a href="https://github.com/marketplace/actions/kubectl-aws-eks" rel="nofollow noreferrer">https://github.com/marketplace/actions/kubectl-aws-eks</a></p>
<p><strong>Quick fix</strong>:</p>
<p>Once your access key and secret key are set in the environment, use the following command to fetch the config from AWS; then you no longer have to store the kubeconfig file as a GitHub Actions secret:</p>
<pre><code>aws eks update-kubeconfig --region ap-south-1 --name <Cluster name>
</code></pre>
<p><strong>Secrets</strong></p>
<p><strong>KUBE_CONFIG_DATA</strong> – required: A base64-encoded <strong>kubeconfig</strong> file with credentials for Kubernetes to access the cluster. You can get it by running the following command:</p>
<pre><code>cat $HOME/.kube/config | base64
</code></pre>
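<p>GNU base64 wraps its output at 76 columns by default, which can break the secret if an action expects it on a single line; <code>base64 -w 0</code> disables the wrapping (macOS uses <code>base64 -b 0</code>). A quick local sketch of the round trip, with a sample string standing in for the real kubeconfig:</p>

```shell
# encode without line wraps, then decode again to verify the round trip
sample='kubeconfig contents go here'
encoded=$(printf '%s' "$sample" | base64 -w 0)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints: kubeconfig contents go here
```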
|
<p><strong>Background</strong></p>
<p>On the Google Kubernetes Engine we've been using Cloud Endpoints, and the Extensible Service Proxy (v2) for service-to-service authentication.</p>
<p>The services authenticate themselves by including the bearer JWT token in the <code>Authorization</code> header of the HTTP requests.</p>
<p>The identity of the services has been maintained with GCP Service Accounts, and during deployment, the Json Service Account key is mounted to the container at a predefined location, and that location is set as the value of the <code>GOOGLE_APPLICATION_CREDENTIALS</code> env var.</p>
<p>The services are implemented in C# with ASP.NET Core, and to generate the actual JWT token, we use the Google Cloud SDK (<a href="https://github.com/googleapis/google-cloud-dotnet" rel="nofollow noreferrer">https://github.com/googleapis/google-cloud-dotnet</a>, and <a href="https://github.com/googleapis/google-api-dotnet-client" rel="nofollow noreferrer">https://github.com/googleapis/google-api-dotnet-client</a>), where we call the following method:</p>
<pre class="lang-cs prettyprint-override"><code>var credentials = GoogleCredential.GetApplicationDefault();
</code></pre>
<p>If the <code>GOOGLE_APPLICATION_CREDENTIALS</code> is correctly set to the path of the Service Account key, then this returns a <code>ServiceAccountCredential</code> object, on which we can call the <code>GetAccessTokenForRequestAsync()</code> method, which returns the actual JWT token.</p>
<pre class="lang-cs prettyprint-override"><code>var jwtToken = await credentials.GetAccessTokenForRequestAsync("https://other-service.example.com/");
var authHeader = $"Bearer {jwtToken}";
</code></pre>
<p>This process has been working correctly without any issues.</p>
<p>The situation is that we are in the process of migrating from using the manually maintained Service Account keys to using <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">Workload Identity</a> instead, and I cannot figure out how to correctly use the Google Cloud SDK to generate the necessary JWT tokens in this case.</p>
<p><strong>The problem</strong></p>
<p>When we enable Workload Identity in the container, and don't mount the Service Account key file, nor set the <code>GOOGLE_APPLICATION_CREDENTIALS</code> env var, then the <code>GoogleCredential.GetApplicationDefault()</code> call returns a <code>ComputeCredential</code> instead of a <code>ServiceAccountCredential</code>.<br />
And if we call the <code>GetAccessTokenForRequestAsync()</code> method, that returns a token which is not in the JWT format.</p>
<p>I checked the implementation, and the token seems to be retrieved from the Metadata server, of which the expected response format seems to be the standard OAuth 2.0 model (represented in <a href="https://github.com/googleapis/google-api-dotnet-client/blob/master/Src/Support/Google.Apis.Auth/OAuth2/Responses/TokenResponse.cs" rel="nofollow noreferrer">this model class</a>):</p>
<pre><code>{
"access_token": "foo",
"id_token": "bar",
"token_type": "Bearer",
...
}
</code></pre>
<p>And the <code>GetAccessTokenForRequestAsync()</code> method returns the value of <code>access_token</code>. But as far as I understand, that's not a JWT token, and indeed when I tried using it to authenticate against ESP, it responded with</p>
<pre><code>{
"code": 16,
"message": "JWT validation failed: Bad JWT format: Invalid JSON in header",
..
}
</code></pre>
<p>As far as I understand, normally the <code>id_token</code> contains the JWT token, which should be accessible via the <code>IdToken</code> property of the <code>TokenResponse</code> object, which is also accessible via the SDK, I tried accessing it like this:</p>
<pre><code>var jwtToken = ((ComputeCredential)creds.UnderlyingCredential).Token.IdToken;
</code></pre>
<p>But this returns <code>null</code>, so apparently the metadata server does not return anything in the <code>id_token</code> field.</p>
<p><strong>Question</strong></p>
<p>What would be the correct way to get the JWT token with the .NET Google Cloud SDK for accessing ESP, when using Workload Identity in GKE?</p>
| <p>To get an IdToken for the attached service account, you can use <code>GoogleCredential.GetApplicationDefault().GetOidcTokenAsync(...)</code>.</p>
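<p>A minimal sketch of that call, using the <code>Google.Apis.Auth.OAuth2</code> types already referenced in the question; the target audience URL is a placeholder for the receiving service:</p>
<pre class="lang-cs prettyprint-override"><code>var credential = await GoogleCredential.GetApplicationDefaultAsync();
// Ask for an OIDC (JWT) token scoped to the audience of the target service
var oidcToken = await credential.GetOidcTokenAsync(
    OidcTokenOptions.FromTargetAudience("https://other-service.example.com/"));
var jwt = await oidcToken.GetAccessTokenAsync();
var authHeader = $"Bearer {jwt}";
</code></pre>
<p>Under Workload Identity this should be served by the metadata server's identity endpoint, so no mounted key file is needed.</p>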
|
<p>I know each pod gets a (unique) UUID, but is it possible to also give it, automatically, a low id number, that can be reusable as long as no two pods use them at the same time? So if pod 4 dies, the next time a pod is started gets 4. There can be gaps from time to time.</p>
<p>My goal is to use this number as the worker in a twitter-snowflake-like algorithm.</p>
<p>I'd like to achieve that without using StatefulSet, since it comes with a bunch of limitations and complexity that I'd rather not add to a big deployment.</p>
| <p>There is no built-in way to do this.</p>
<p>The most straightforward workaround would be an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">initcontainer</a>. On pod start, the initcontainer runs before the original containers, determines the id, and attaches it to the pod as a label. This initcontainer would run the following steps (in bash or the language of your choice):</p>
<ol>
<li><p><a href="https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/" rel="noreferrer">Connect to the Kubernetes API</a> and select the <code>ownerReference</code> of the current pod</p>
</li>
<li><p>Count the number of pods matching the <code>ownerReference</code> of the current pod, e.g.,</p>
<pre><code>kubectl get pods -o jsonpath='{range .items[?(.metadata.ownerReferences.uid=)]}{.metadata.name}{end}'
</code></pre>
</li>
<li><p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#label" rel="noreferrer">Set the label</a> for your id on the current pod to <code>number of previously selected pods + 1</code></p>
</li>
</ol>
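<p>Wired together, the steps above could look roughly like the init container below. Everything here is illustrative — the image, the <code>app=myapp</code> selector, the <code>worker-id</code> label name, and the counting logic are assumptions, and the pod's service account needs RBAC permission to list and label pods:</p>
<pre><code>initContainers:
- name: assign-worker-id
  image: bitnami/kubectl    # any image with kubectl and a shell works
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  command:
  - sh
  - -c
  - |
    # count sibling pods and label this pod with the next number (sketch)
    n=$(kubectl get pods -l app=myapp -o name | wc -l)
    kubectl label pod "$POD_NAME" worker-id="$n"
</code></pre>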
|
<p>We are using AKS version 1.19.11.</p>
<p>We have noticed that whenever a new rollout takes place for our deployments, or a new pod gets created as part of the HPA settings, or a pod gets restarted, we get high CPU usage alerts.</p>
<p>For example, if a new pod gets created as part of any of the above activities, will it take up more CPU than the allowed threshold? [The "maximum limit" of 1 core is specified in the deployment spec, and the apps are lightweight and don't need that much CPU anyway.] This in turn causes a sudden spike in Azure Monitor for a short time, after which usage returns to normal.</p>
<p>Why do the pods take more CPU during startup or creation?
And if the pods otherwise don't use that much CPU, what could be the reason for this recurring issue?</p>
<p>hpa settings as below</p>
<pre><code>Name: myapp
Namespace: myapp
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: myapp
meta.helm.sh/release-namespace: myapp
CreationTimestamp: Mon, 26 Apr 2021 07:02:32 +0000
Reference: Deployment/myapp
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 5% (17m) / 75%
Min replicas: 5
Max replicas: 12
Deployment pods: 1 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
</code></pre>
<p>Adding the events captured when a new rollout was placed.</p>
<p>As per the events captured from the "myapp" namespace, there was a new deployment rolled out for myapp, as below.</p>
<p>During the creation of the new pods we see CPU spikes, and we get an alert from Azure Monitor that usage exceeds the threshold of 80% [of the "maximum limit" of 1 core specified in the deployment spec].</p>
<pre><code>30m Normal SuccessfulDelete replicaset/myapp-1a2b3c4d5e Deleted pod: myapp-1a2b3c4d5e-9fmrk
30m Normal SuccessfulDelete replicaset/myapp-1a2b3c4d5e Deleted pod: myapp-1a2b3c4d5e-hfr8w
29m Normal SuccessfulDelete replicaset/myapp-1a2b3c4d5e Deleted pod: myapp-1a2b3c4d5e-l2pnd
31m Normal ScalingReplicaSet deployment/myapp Scaled up replica set myapp-5ddc98fb69 to 1
30m Normal ScalingReplicaSet deployment/myapp Scaled down replica set myapp-1a2b3c4d5e to 2
30m Normal ScalingReplicaSet deployment/myapp Scaled up replica set myapp-5ddc98fb69 to 2
30m Normal ScalingReplicaSet deployment/myapp Scaled down replica set myapp-1a2b3c4d5e to 1
30m Normal ScalingReplicaSet deployment/myapp Scaled up replica set myapp-5ddc98fb69 to 3
29m Normal ScalingReplicaSet deployment/myapp Scaled down replica set myapp-1a2b3c4d5e to 0
</code></pre>
<p>Alert settings</p>
<pre><code>Period Over the last 15 mins
Value 100.274747
Operator GreaterThan
Threshold 80
</code></pre>
| <p>I am not sure which metric you are looking at in AKS monitoring specifically, as you have not mentioned it, but it is possible that when you deploy the Pod, or when the HPA scales the replicas, AKS is showing the total resource usage of all replicas combined.</p>
<p>During a <strong>deployment</strong> rollout, it is also possible that at a certain stage both old and new Pods are running and consuming resources at the same time.</p>
<blockquote>
<p>Are you checking the resource usage of one single Pod, and is that
what is going above the threshold?</p>
</blockquote>
<p>You have mentioned the application is lightweight; however, it may still take extra resources to start its process. In that case, you might have to profile the application to see where the resources go.</p>
|
<p>I've installed Jenkins on GKE using <a href="https://artifacthub.io/packages/helm/bitnami/jenkins" rel="nofollow noreferrer">Bitnami Chart</a> and it is online.</p>
<p>When I want to adjust it using <code>helm upgrade</code>, Kubernetes brings up a new instance while leaving the other running (as expected), but the new instance fails to come up with</p>
<pre><code> Warning FailedAttachVolume 18m attachdetach-controller Multi-Attach error for volume "pvc-b3d609b3-ec10-4966-8713-595702220c40" Volume is already used by pod(s) jenkins-9ddcc795c-vflvm
Warning FailedMount 11m kubelet Unable to attach or mount volumes: unmounted volumes=[jenkins-data[], unattached volumes=[default-token-2qsvk jenkins-data]: timed out waiting for the condition
</code></pre>
<p>This error makes sense - two instances can't share storage.</p>
<p>If I take down the first instance, then it comes right back. If I <code>helm uninstall</code>, both instances are deleted including the storage.</p>
<p>What is the proper process to upgrade versions/update chart settings?</p>
| <p>You can delete the Jenkins <strong>Deployment</strong> first. If you delete only the Deployment, the other components remain, along with the storage disk, which can then be reattached to the new Deployment:</p>
<pre><code>kubectl delete deployments.apps jenkins
</code></pre>
<p><a href="https://artifacthub.io/packages/helm/bitnami/jenkins#upgrading" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/jenkins#upgrading</a></p>
<p>Then run the command to upgrade the Helm chart, either by updating the <strong>values</strong> file or by overriding values with <strong>--set</strong>:</p>
<pre><code>helm upgrade jenkins bitnami/jenkins --set jenkinsPassword=$JENKINS_PASSWORD --set jenkinsHome=/bitnami/jenkins/jenkins_home
</code></pre>
|
<p>I have set up a k8s cluster (a 1-node cluster) using DOKS. The service is running fine with the NodePort config. However, I am unable to access it using <code>http://${NodeIP}:${NodePort}</code> from browsers. I have even tried to add a firewall rule, but I am getting <code>error response from backend</code> while trying to add a new inbound TCP rule. Not a useful error message!</p>
<p>Curl and Telnet are failing as well.</p>
<p>Please find below my dockerfile, deployment and service yaml files.</p>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM nginx:1.21.1
COPY build/ /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p><strong>Service YAML file</strong></p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: int
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 31000
  selector:
    app: int
</code></pre>
<p><strong>Deployment YAML</strong></p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
  name: int
spec:
  replicas: 2
  selector:
    matchLabels:
      app: int
  template:
    metadata:
      labels:
        app: int
    spec:
      containers:
      - name: int
        image: registry.digitalocean.com/xxxxx/int:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      restartPolicy: Always
</code></pre>
<p><strong>Kubectl get pods output</strong></p>
<pre><code>root@ast-a1:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
int-7cc5445c7-hnwvp 1/1 Running 0 3h14m
int-7cc5445c7-qtr6n 1/1 Running 0 3h14m
</code></pre>
<p><strong>Kubectl get svc output</strong></p>
<pre><code>root@ast-a1:~# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
int NodePort 10.xxx.xx.xx <none> 80:31000/TCP 152m
kubernetes ClusterIP 10.xxx.x.x <none> 443/TCP 3d3h
</code></pre>
<p><strong>The response</strong></p>
<p><a href="https://i.stack.imgur.com/ACyqQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ACyqQ.png" alt="enter image description here" /></a></p>
<p>Am I making a mistake somewhere? I am just trying out DOKS.</p>
<p>EDIT:</p>
<p>Added tracert output.</p>
<pre><code>C:\Users\ck5>tracert 1xx.xx.xx.xxx
Tracing route to 1xx.xx.xx.xxx over a maximum of 30 hops
1 * * * Request timed out.
2 * * * Request timed out.
3 4 ms 2 ms 3 ms 1x.1xx.xx.xx.static-hydrabad.vsnl.net.in [1x.1xx.xx.xx]
4 * * * Request timed out.
5 49 ms 52 ms 12 ms 2xx.6x.xxx.xxx.static-bangalore.vsnl.net.in [2xx.xx.xxx.xxx]
6 13 ms 12 ms 110 ms 1xx.1xx.2xx.15
7 * * * Request timed out.
8 * * * Request timed out.
9 * * * Request timed out.
10 * * * Request timed out.
11 * * * Request timed out.
12 * * * Request timed out.
13 * * * Request timed out.
14 * * * Request timed out.
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
19 * * * Request timed out.
20 * * * Request timed out.
21 * * * Request timed out.
22 * * * Request timed out.
23 * * * Request timed out.
24 * * * Request timed out.
25 * * * Request timed out.
26 * * * Request timed out.
27 * * * Request timed out.
28 * * * Request timed out.
29 * * * Request timed out.
30 * * * Request timed out.
Trace complete.
</code></pre>
| <p>Looks like a security group or a firewall issue. Run a traceroute to the destination IP from the machine with your browser.</p>
<p>If it stops at the last hop, it is most likely the security group not allowing connections to your port from the source subnet.</p>
<p>If the traceroute stops in the middle it is more likely a firewall issue.</p>
|
<p>I'm running prometheus and grafana under k3s, accessible (respectively) at <a href="http://monitoring.internal/prometheus" rel="nofollow noreferrer">http://monitoring.internal/prometheus</a> and <a href="http://monitoring.internal/grafana" rel="nofollow noreferrer">http://monitoring.internal/grafana</a>. The grafana Ingress object, for example, looks like:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: monitoring.internal
    http:
      paths:
      - path: /grafana
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
</code></pre>
<p>This works fine, except that if you land at
<a href="http://monitoring.internal/" rel="nofollow noreferrer">http://monitoring.internal/</a>, you get a 404 error. I would like
requests for <a href="http://monitoring.internal/" rel="nofollow noreferrer">http://monitoring.internal/</a> to redirect to
<a href="http://monitoring.internal/grafana" rel="nofollow noreferrer">http://monitoring.internal/grafana</a>. I could perhaps create another
service that runs something like <code>darkhttpd ... --forward-all http://monitoring.internal/grafana</code>, and create an Ingress object
that would map <code>/</code> to that service, but it seems like there ought to
be a way to do this with Traefik itself.</p>
<p>It looks like I'm running Traefik 2.4.8 locally:</p>
<pre><code>$ kubectl -n kube-system exec -it deployment/traefik -- traefik version
Version: 2.4.8
Codename: livarot
Go version: go1.16.2
Built: 2021-03-23T15:48:39Z
OS/Arch: linux/amd64
</code></pre>
<p>I've found <a href="https://doc.traefik.io/traefik/v1.7/configuration/backends/kubernetes/" rel="nofollow noreferrer">this documentation for 1.7</a> that suggests there is an annotation for exactly this purpose:</p>
<ul>
<li><code>traefik.ingress.kubernetes.io/app-root: "/index.html"</code>: Redirects
all requests for / to the defined path.</li>
</ul>
<p>But setting that on the grafana ingress object doesn't appear to have
any impact, and I haven't been able to find similar docs for 2.x
(I've looked around
<a href="https://github.com/traefik/traefik/tree/master/docs" rel="nofollow noreferrer">here</a>, for
example).</p>
<p>What's the right way to set up this sort of redirect?</p>
| <p>Since I haven't been able to figure out Traefik yet, I thought I'd post my solution here in case anyone else runs into the same situation. I am hoping someone comes along who knows The Right Way to do this, and if I figure it out I'll update this answer.</p>
<p>I added a new deployment that runs <a href="https://github.com/emikulic/darkhttpd" rel="nofollow noreferrer">darkhttpd</a> as a simple redirector:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: redirector
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: redirector
        image: docker.io/alpinelinux/darkhttpd
        ports:
        - containerPort: 8080
        args:
        - --forward-all
        - http://monitoring.internal/grafana
</code></pre>
<p>A corresponding Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: redirector
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
</code></pre>
<p>And the following Ingress object:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redirector
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: monitoring.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: redirector
            port:
              number: 8080
</code></pre>
<p>These are all deployed with <a href="https://kustomize.io/" rel="nofollow noreferrer">kustomize</a>, which takes care of
adding labels and selectors in the appropriate places. The
<code>kustomization.yaml</code> looks like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- service.yaml
commonLabels:
  component: redirector
</code></pre>
<p>With all this in place, requests to <code>http://monitoring.internal/</code> hit the redirector pod.</p>
|
<p>I have a couple of helm charts in a <code>myapp/templates/</code> directory, and they deploy as expected with <code>helm install myapp</code>.</p>
<p>These two templates are for example:</p>
<ul>
<li>database.yaml</li>
<li>cronjob.yaml</li>
</ul>
<p>I'd like for the <code>cronjob.yaml</code> to only run after the <code>database.yaml</code> is in a running state. I currently have an issue where <code>database.yaml</code> fairly regularly fails in a way we half expect (it's not ideal, but it is what it is).</p>
<p>I've found hooks, but I think I'm either using them incorrectly, or they don't determine whether the pod is in Running, Pending, some state of crashed, etc...</p>
<p>There are no changes I've made to database.yaml in order to use hooks, but my cronjob.yaml which I only want to run if database.yaml is in a running state, I added the annotations as follows:</p>
<p><strong>cronjob.yaml</strong></p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: database
  annotations:
    "helm.sh/hook": "post-install"
  labels:
    app: database
    service: database
spec:
  schedule: "* * * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: customtask
            image: "{{ .Values.myimage }}"
            command:
            - /bin/sh
            - -c
            - supercooltask.sh
          restartPolicy: Never
</code></pre>
<p>How can I change this hook configuration to allow <code>cronjob.yaml</code> to only run if <code>database.yaml</code> deploys and runs successfully?</p>
| <p>Use an init container in the Pod spec of the CronJob to check that the DB is up and running before the main container starts.
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podspec-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podspec-v1-core</a></p>
<p>Example:</p>
<pre><code>spec:
  template:
    spec:
      initContainers:
      - name: wait-for-db
        image: busybox
        # illustrative check: "database" and 5432 are placeholders for your DB service host/port
        command: ['sh', '-c', 'until nc -z database 5432; do echo waiting for db; sleep 2; done']
      containers:
        ..
      restartPolicy: OnFailure
</code></pre>
|
<p>I have tried to read all the questions and answers on Stack Overflow, done a lot of googling, and asked some of the Kubernetes gurus around me, but to no avail... This problem is driving me crazy...</p>
<p>Here is my problem: we have several environments, with different tenants, and each of them has an NFS server (on AIX, Solaris, Linux, Windows, ... depending on the tenant). We want to mount the NFS share into a specific pod of our Kubernetes deployment.</p>
<p>So far that works: we can mount the NFS share with NFS v4, and that for every one of our external NFS servers.</p>
<p>I am using that Kubernetes Provisioner (<a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner</a>) and that works.</p>
<p>Here are my configuration to make it work:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-xy-provisioner
  labels:
    app: nfs-xy-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-xy-provisioner
  template:
    metadata:
      labels:
        app: nfs-xy-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-xy-provisioner
        image: XYZ/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-xxx-xy-provisioner
        - name: NFS_SERVER
          value: myServer.example.com
        - name: NFS_PATH
          value: /my/path/xy_eingang
      volumes:
      - name: nfs-client-root
        nfs:
          server: myServer.example.com
          path: /my/path/xy_eingang
</code></pre>
<p>With the following StorageClass:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-xxx-xy-nfs-storage
provisioner: k8s-sigs.io/nfs-xxx-xy-provisioner
parameters:
  pathPattern: ""
  archiveOnDelete: "false"
reclaimPolicy: Retain
</code></pre>
<p>with the following pv claim</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-xy-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-xxx-xy-nfs-storage"
spec:
  storageClassName: "managed-xxx-xy-nfs-storage"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
</code></pre>
<p>The mount in pod:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  template:
    spec:
      volumes:
      - name: app-xy
        persistentVolumeClaim:
          claimName: app-xy-pvc
      containers:
      - name: app
        volumeMounts:
        - name: app-xy
          mountPath: /my/pod/path/xy
</code></pre>
<p>Here is the mount</p>
<pre><code>myServer.example.com:/my/path/xy_eingang on /my/pod/path/xy type nfs4 (rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=1.2.3.4,local_lock=none,addr=2.3.4.5)
</code></pre>
<p>Now when I am on the mounted path, I can see the following:</p>
<pre><code>drwxrws--- 3 65534 4294967294 73728 Jul 2 07:52 33
drwxrws--- 2 65534 4294967294 69632 Jul 2 07:52 44
drwxrws--- 2 65534 4294967294 90112 Jul 2 07:52 55
-rwxrws--- 1 65534 4294967294 630793 Oct 20 2014 5905001_00001.ZIP
</code></pre>
<p>So we have UID=65534 and GID=4294967294.
I've tried to change <em>fsGroup</em> or <em>supplementalGroups</em> to 4294967294, but Kubernetes complains that it can <em>only</em> use numbers from 0 to 2147483647 inclusive.</p>
<p>On our NFS Server (in that example), we have the following user/group:</p>
<ul>
<li>User: usernfs (UID=56008)</li>
<li>Group: usernfs (GID=56001)</li>
</ul>
<p>This NFS ID mapping is not applied, and in the pod, since only the application is running, idmapd is not started.
From what I've understood, the mount is done on the node, and the pod then only sees the mount from the node.</p>
<p>And we are not owner of the kubernetes installation, we are simple users, we have no possibilities to change anything on the Kubernetes configuration/nodes, etc... We are simple "users" using the Kubernetes functionalities to deploy our application. We cannot use Helm, the only thing we can use is Kustomize.</p>
<p>We cannot change the permissions on the NFS server to 777/644/744/666 or such things for security reasons. So all the advice to change the permission on the share disk are not working for us.</p>
<p>I've tried to change to NFS V3, but also there from a security point of view, our security team doesn't want to use such an old protocol, so we must use NFS V4.</p>
<p>I know that for NFS v4 we need to have idmapd running, but I have no idea where it needs to run: on the node, in the pod, somewhere else? I am quite new to Kubernetes, and things I could previously do in minutes are taking me weeks (like this problem), and I cannot find a way to solve it.</p>
<p>So any help would be welcome to solve that permission problem...</p>
<p>The version of Kubernetes is the following:</p>
<pre><code>Client Version: version.Info{
Major:"1",
Minor:"18",
GitVersion:"v1.18.12",
GitCommit:"7cd5e9086de8ae25d6a1514d0c87bac67ca4a481",
GitTreeState:"clean",
BuildDate:"2020-11-12T09:18:55Z",
GoVersion:"go1.13.15",
Compiler:"gc",
Platform:"linux/amd64"
}
Server Version: version.Info{
Major:"1",
Minor:"19",
GitVersion:"v1.19.9+vmware.1",
GitCommit:"f856d899461199c512c21d0fdc67d49cc70a8963",
GitTreeState:"clean", BuildDate:"2021-03-19T23:57:11Z",
GoVersion:"go1.15.8",
Compiler:"gc",
Platform:"linux/amd64"
}
</code></pre>
<p>Kind regards,
Alessandro</p>
| <p>I know how frustrating it is. I have used this on CentOS 8 and Ubuntu 18/20, on bare metal and on DigitalOcean, and we had to install the NFS client tools on the host servers; then it worked like a charm. We did not even have to touch user security, UIDs, etc.</p>
|
<p>I have lots of deployment YAML files and lots of service YAML files, and I want to know if it is possible to apply all the YAML files at once.
I tried this command:</p>
<pre><code>kubectl create -f *.yaml
</code></pre>
<p>But it's not working </p>
<p>Thanks</p>
| <p>If you are inside a directory and want to apply all the YAML files in it:</p>
<pre><code>kubectl apply -f ./
</code></pre>
<p>Or you can merge many <code>YAML</code> files into a single multi-document <code>YAML</code> file, separated by <code>---</code>:</p>
<pre><code>YAML 1 content
---
YAML 2 content
---
YAML 3 content
</code></pre>
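<p>The merge can also be scripted. A minimal sketch (the file names are hypothetical stand-ins for your manifests): <code>awk</code> prints a <code>---</code> separator before every file after the first, and the combined file can then be applied in one go.</p>

```shell
# Two hypothetical manifest files standing in for your real ones.
printf 'kind: Deployment\n' > a.yaml
printf 'kind: Service\n'    > b.yaml

# Print a "---" document separator before each file except the first,
# then concatenate everything into one multi-document YAML file.
awk 'FNR==1 && NR>1 {print "---"} {print}' a.yaml b.yaml > all.yaml

cat all.yaml
```

<p>Afterwards a single <code>kubectl apply -f all.yaml</code> applies every document in the file.</p>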
|
<p>I have the following <code>configmap</code> where I want to retrieve the IPs using jsonpath how can that be done?</p>
<pre><code>apiVersion: v1
data:
domain.yaml: |
dns:
- 127.0.0.1
- 127.0.0.2
</code></pre>
<p>I have tried the following which does not work: <code>kubectl get cm domain -o jsonpath={.domain.yaml.dns[0]}</code></p>
| <p>This is not quite simple, as the
<pre><code>dns:
- 127.0.0.1
- 127.0.0.2
</code></pre>
<p>is interpreted as a single json value.</p>
<p>for example <code>kubectl get cm testcm -o jsonpath='{.data}'</code> returns the following output <code>{"domain.yaml":"dns:\n - 127.0.0.1\n - 127.0.0.2\n"}</code>
as you can see it has "domain.yaml" as key, and the rest is a simple string value.</p>
<p>In order to get the IPs, we can use some <code>cut</code> and <code>sed</code> magic. For example,</p>
<pre><code>kubectl get cm testcm -o jsonpath='{.data.*}' | cut -d$'\n' -f2 | sed 's/ - //g'
</code></pre>
<p>would return <code>127.0.0.1</code></p>
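<p>If the <code>cut</code> pipeline feels brittle, a single <code>sed</code> over the raw value is enough here. A minimal sketch, with the value inlined exactly as kubectl printed it above (for anything more nested than a flat list, a real YAML parser such as <code>yq</code> would be the safer choice):</p>

```shell
# Hypothetical raw value of "domain.yaml", as printed by
#   kubectl get cm testcm -o jsonpath='{.data.*}'
raw='dns:
 - 127.0.0.1
 - 127.0.0.2'

# Keep only the "- item" lines and strip the list marker:
# prints every IP, one per line.
printf '%s\n' "$raw" | sed -n 's/^ *- *//p'
```

<p>Piping the result through <code>head -n 1</code> then gives the equivalent of <code>.dns[0]</code>.</p>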
|
<p>my.cnf in kubernetes configmap is not recognized by mysql pod.</p>
<p>This is
<strong>mysql.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: *****
name: *****
spec:
serviceName: mysql-service
replicas: 1
selector:
matchLabels:
app: *****
template:
metadata:
labels:
app: *****
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: eks.amazonaws.com/nodegroup
operator: In
values:
- db-node
containers:
- name: mysql
image: mysql:5.7
args:
- "--ignore-db-dir=lost+found"
env:
- name: MYSQL_ROOT_PASSWORD
value: password
- name: MYSQL_USER
value: mysql
- name: MYSQL_DATABASE
value: ****
ports:
- containerPort: 3306
protocol: TCP
volumeMounts:
- name: *****
mountPath: /var/lib/mysql
- name: mysql-configmap
mountPath: /etc/mysql/conf.d/mysql.cnf
subPath: mysql.cnf
volumes:
- name: mysql-configmap
configMap:
name: mysql-configmap
</code></pre>
<p><strong>configmap.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-configmap
namespace: ****
data:
mysql.cnf: |
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
</code></pre>
<p><strong>MySQL Container in my.cnf</strong></p>
<pre><code>root@********:/etc/mysql/conf.d# cat mysql.cnf
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
</code></pre>
<p><strong>The file is located in the folder, but the settings I want are not applied.</strong></p>
<pre><code>mysql> show variables like 'char%'; +--------------------------+----------------------------+
| Variable_name | Value |
+--------------------------+----------------------------+
| character_set_client | utf8mb4 |
| character_set_connection | utf8mb4 |
| character_set_database | latin1 |
| character_set_filesystem | binary |
| character_set_results | utf8mb4 |
| character_set_server | latin1 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
</code></pre>
<p>Thank you for reading.<br />
I think it may be because my.cnf is owned by root, but I have not seen this cause a problem in other cases.</p>
| <p>From the official documentation of the docker image found <a href="https://hub.docker.com/_/mysql" rel="nofollow noreferrer">here</a> this could have several reasons.</p>
<p>Check in the default MySQL configuration under <code>/etc/mysql/my.cnf</code> whether there is an <code>!includedir</code> statement that includes your <code>/etc/mysql/conf.d</code> directory with your custom configuration.</p>
<p>It is also possible to add encoding options as arguments, meaning that if you cannot fix it via the configmap option you are using right now, you could add extra arguments to your mysql image.</p>
<p>Your statefulset yaml would then look like that:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: *****
name: *****
spec:
serviceName: mysql-service
replicas: 1
selector:
matchLabels:
app: *****
template:
metadata:
labels:
app: *****
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: eks.amazonaws.com/nodegroup
operator: In
values:
- db-node
containers:
- name: mysql
image: mysql:5.7
args:
- "--ignore-db-dir=lost+found"
- "--character-set-server=utf8mb4"
- "--character-set-client=utf8mb4"
env:
- name: MYSQL_ROOT_PASSWORD
value: password
- name: MYSQL_USER
value: mysql
- name: MYSQL_DATABASE
value: ****
ports:
- containerPort: 3306
protocol: TCP
volumeMounts:
- name: *****
mountPath: /var/lib/mysql
- name: mysql-configmap
mountPath: /etc/mysql/conf.d/mysql.cnf
subPath: mysql.cnf
volumes:
- name: mysql-configmap
configMap:
name: mysql-configmap
</code></pre>
<p>The documentation uses the server-side option <code>--character-set-server=utf8mb4</code> (together with <code>--collation-server=utf8mb4_unicode_ci</code>) rather than the client-side <code>default-character-set=utf8</code>, so you might need to try both approaches to find out which one is the correct one for your case.</p>
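<p>One more thing worth checking: the <code>[client]</code> and <code>[mysql]</code> groups in your ConfigMap only apply to the command-line client programs, while the server variables shown by <code>show variables</code> are read from the <code>[mysqld]</code> group. A sketch of the ConfigMap with a server section added (the utf8mb4 choice here is an assumption, adjust to your needs):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
data:
  mysql.cnf: |
    [mysqld]
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci
    [client]
    default-character-set=utf8mb4
    [mysql]
    default-character-set=utf8mb4
</code></pre>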
|
<p>I have a .NET 5 API app running a Kestrel server inside a Docker container. I can run the app locally inside of the Docker container without any trouble. The issue occurs when I deploy to Kubernetes which has a LoadBalancer that appears to be causing issues.</p>
<p>I receive the following error(s) repeatedly:</p>
<pre><code>←[40m←[37mdbug←[39m←[22m←[49m: Microsoft.AspNetCore.Server.Kestrel[17]
Connection id "0HMAMHC4BPDGN" bad request data: "Unrecognized HTTP version: '10.98.6.196 10.123.90.100 62168 443'"
Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Unrecognized HTTP version: '10.98.6.196 10.123.90.100 62168 443'
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpParser`1.RejectUnknownVersion(Int32 offset, ReadOnlySpan`1 requestLine)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpParser`1.ParseRequestLine(TRequestHandler handler, ReadOnlySpan`1 requestLine)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpParser`1.ParseRequestLine(TRequestHandler handler, SequenceReader`1& reader)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.ParseRequest(SequenceReader`1& reader)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.TryParseRequest(ReadResult result, Boolean& endConnection)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequestsAsync[TContext](IHttpApplication`1 application)
←[40m←[37mdbug←[39m←[22m←[49m: Microsoft.AspNetCore.Server.Kestrel[10]
Connection id "0HMAMHC4BPDGN" disconnecting.
←[40m←[37mdbug←[39m←[22m←[49m: Microsoft.AspNetCore.Server.Kestrel[2]
Connection id "0HMAMHC4BPDGN" stopped.
←[40m←[37mdbug←[39m←[22m←[49m: Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets[7]
Connection id "0HMAMHC4BPDGN" sending FIN because: "The Socket transport's send loop completed gracefully."
</code></pre>
<p>My kubernetes infrastructure deployment. It is mapping all 443 requests to port 80 of the running Docker container.</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: api-base
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-base
spec:
replicas: 1
selector:
matchLabels:
app: api-base
template:
metadata:
labels:
app: api-base
spec:
containers:
- name: api-base
image: /path/to/image
ports:
- containerPort: 80
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: api-base
labels:
app: api-base
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: mycert/path
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
selector:
app: api-base
ports:
- port: 443
targetPort: 80
protocol: TCP
name: https
type: LoadBalancer
</code></pre>
<p>Dockerfile (abbreviated) exposes port 80 and configures .net core to run on port 80.</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
EXPOSE 80
ENV ASPNETCORE_URLS http://*:80
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "API.dll"]
</code></pre>
<p>Startup.cs</p>
<pre><code>public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
private const string CorsOriginName = "AllowedOrigins";
public void ConfigureServices(IServiceCollection services)
{
services.AddCors(options =>
{
options.AddPolicy(name: CorsOriginName,
builder =>
{
builder.AllowAnyOrigin()
.AllowAnyMethod()
.AllowAnyHeader();
});
});
services.Configure<ForwardedHeadersOptions>(options =>
{
options.ForwardedHeaders =
ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
services.AddControllers();
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "API", Version = "v1" });
});
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseSwagger();
app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "API v1"));
app.UseCors(CorsOriginName);
app.UseForwardedHeaders();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
}
</code></pre>
<p>Program.cs</p>
<pre><code>public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.UseServiceProviderFactory(new AutofacServiceProviderFactory())
.ConfigureAppConfiguration((hostContext, builder) =>
{
// Add other providers for JSON, etc.
if (hostContext.HostingEnvironment.IsDevelopment())
{
builder.AddUserSecrets<Program>();
}
})
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseKestrel().UseStartup<Startup>();
});
}
</code></pre>
<p>How do I configure Kestrel to accept requests from the LoadBalancer?</p>
<p><strong>Update:</strong></p>
<p>To accomplish this I terminate https traffic and communicate with the container using http</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-base
annotations:
# Note that the backend talks over HTTP.
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# TODO: Fill in with the ARN of your certificate.
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
# Only run SSL on the port named "https" below.
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
selector:
app: api-base
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 80
type: LoadBalancer
</code></pre>
<p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/</a></p>
| <p>You need to decide whether you want to terminate HTTPS at your application or in front of it. Note that the garbage request line in the error (<code>10.98.6.196 10.123.90.100 62168 443</code>) looks like a PROXY protocol header, which the <code>service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"</code> annotation makes the load balancer prepend to every connection and which Kestrel does not understand. I would suggest that you use an Ingress instead of a LoadBalancer-type Service to expose your service to the external world. That way you can configure HTTPS termination on the ingress itself, and the app remains HTTP-based with no need to configure certs, encryption, etc. in the app itself, although you would still need to configure the ingress properly.</p>
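<p>A minimal sketch of such an Ingress, assuming an ingress controller is already installed in the cluster; the host name and TLS secret name are placeholders, and the secret must contain your certificate and key:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-base
spec:
  tls:
  - hosts:
    - api.example.com          # placeholder host
    secretName: api-base-tls   # secret holding the certificate and key
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-base     # the existing service, plain HTTP
            port:
              number: 80
</code></pre>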
|
<p>[ Disclaimer: this question was originally posted <a href="https://serverfault.com/questions/939527/kubernetes-calico-on-oracle-cloud-oci">on ServerFault</a>. However, since the official K8s <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/" rel="nofollow noreferrer">documentation</a> states "ask your questions on StackOverflow", I am also adding it here ]</p>
<p>I am trying to deploy a test Kubernetes cluster on Oracle Cloud, using OCI VM instances - however, I'm having issues with pod networking.</p>
<p>The networking plugin is Calico - it seems to be installed properly, but no traffic gets across the tunnels from one host to another. For example, here I am trying to access nginx running on another node:</p>
<pre><code>root@kube-01-01:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-dbddb74b8-th9ns 1/1 Running 0 38s 192.168.181.1 kube-01-06 <none>
root@kube-01-01:~# curl 192.168.181.1
[ ... timeout... ]
</code></pre>
<p>Using tcpdump, I see the IP-in-IP (protocol 4) packets leaving the first host, but they never seem to make it to the second one (although all other packets, including BGP traffic, make it through just fine).</p>
<pre><code>root@kube-01-01:~# tcpdump -i ens3 proto 4 &
[1] 16642
root@kube-01-01:~# tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
root@kube-01-01:~# curl 192.168.181.1
09:31:56.262451 IP kube-01-01 > kube-01-06: IP 192.168.21.64.52268 > 192.168.181.1.http: Flags [S], seq 3982790418, win 28000, options [mss 1400,sackOK,TS val 9661065 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
09:31:57.259756 IP kube-01-01 > kube-01-06: IP 192.168.21.64.52268 > 192.168.181.1.http: Flags [S], seq 3982790418, win 28000, options [mss 1400,sackOK,TS val 9661315 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
09:31:59.263752 IP kube-01-01 > kube-01-06: IP 192.168.21.64.52268 > 192.168.181.1.http: Flags [S], seq 3982790418, win 28000, options [mss 1400,sackOK,TS val 9661816 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
root@kube-01-06:~# tcpdump -i ens3 proto 4 &
[1] 12773
root@kube-01-06:~# tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
</code></pre>
<p>What I have checked so far:</p>
<ol>
<li>The Calico routing mesh comes up just fine. I can see the BGP traffic on the packet capture, and I can see all nodes as "up" using calicoctl</li>
</ol>
<p>root@kube-01-01:~# ./calicoctl node status
Calico process is running.</p>
<pre><code>IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 10.13.23.123 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.124 | node-to-node mesh | up | 09:12:49 | Established |
| 10.13.23.126 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.129 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.127 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.128 | node-to-node mesh | up | 09:12:50 | Established |
| 10.13.23.130 | node-to-node mesh | up | 09:12:52 | Established |
+--------------+-------------------+-------+----------+-------------+
</code></pre>
<ol start="2">
<li>The security rules for the subnet allow all traffic. All the nodes are in the same subnet, and I have a stateless rule permitting all traffic from other nodes within the subnet (I have also tried adding a rule permitting IP-in-IP traffic explicitly - same result).</li>
<li>The source/destination check is disabled on all the vNICs on the K8s nodes.</li>
</ol>
<p>Other things I have noticed:</p>
<ol>
<li>I can get calico to work if I disable IP in IP encapsulation for same-subnet traffic, and use regular routing inside the subnet (as described <a href="https://docs.projectcalico.org/v3.2/reference/public-cloud/aws" rel="nofollow noreferrer">here</a> for AWS)</li>
<li>Other networking plugins (such as weave) seem to work correctly.</li>
</ol>
<p>So my question here is - what is happening to the IP-in-IP encapsulated traffic? Is there anything else I can check to figure out what is going on? </p>
<p>And yes, I know that I could have used managed Kubernetes engine directly, but where is the fun (and the learning opportunity) in that? :D </p>
<p>Edited to address Rico's answer below:</p>
<p>1) I'm also not getting any pod-to-pod traffic to flow through (no communication between pods on different hosts). But I was unable to capture that traffic, so I used node-to-pod as an example.</p>
<p>2) I'm also getting a similar result if I hit a NodePort svc on another node than the one the pod is running on - I see the outgoing IP-in-IP packets from the first node, but they never show up on the second node (the one actually running the pod):</p>
<pre><code>root@kube-01-01:~# tcpdump -i ens3 proto 4 &
[1] 6499
root@kube-01-01:~# tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
root@kube-01-01:~# curl 127.0.0.1:32137
20:24:08.460069 IP kube-01-01 > kube-01-06: IP 192.168.21.64.40866 > 192.168.181.1.http: Flags [S], seq 3175451438, win 43690, options [mss 65495,sackOK,TS val 19444115 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
20:24:09.459768 IP kube-01-01 > kube-01-06: IP 192.168.21.64.40866 > 192.168.181.1.http: Flags [S], seq 3175451438, win 43690, options [mss 65495,sackOK,TS val 19444365 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
20:24:11.463750 IP kube-01-01 > kube-01-06: IP 192.168.21.64.40866 > 192.168.181.1.http: Flags [S], seq 3175451438, win 43690, options [mss 65495,sackOK,TS val 19444866 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
20:24:15.471769 IP kube-01-01 > kube-01-06: IP 192.168.21.64.40866 > 192.168.181.1.http: Flags [S], seq 3175451438, win 43690, options [mss 65495,sackOK,TS val 19445868 ecr 0,nop,wscale 7], length 0 (ipip-proto-4)
</code></pre>
<p>Nothing on the second node ( <code>kube-01-06</code>, the one that is actually running the nginx pod ):</p>
<pre><code>root@kubespray-01-06:~# tcpdump -i ens3 proto 4
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
</code></pre>
<p>I used 127.0.0.1 for ease of demonstration - of course, the exact same thing happens when I hit that NodePort from an outside host:</p>
<pre><code>20:25:17.653417 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56630 > 192.168.181.1.http: Flags [S], seq 980178400, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:17.654371 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56631 > 192.168.181.1.http: Flags [S], seq 3932412963, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:17.667227 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56632 > 192.168.181.1.http: Flags [S], seq 2017119223, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:20.653656 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56630 > 192.168.181.1.http: Flags [S], seq 980178400, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:20.654577 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56631 > 192.168.181.1.http: Flags [S], seq 3932412963, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
20:25:20.668595 IP kube-01-01 > kube-01-06: IP 192.168.21.64.56632 > 192.168.181.1.http: Flags [S], seq 2017119223, win 64240, options [mss 1440,nop,wscale 8,nop,nop,sackOK], length 0 (ipip-proto-4)
</code></pre>
<p>3) As far as I can tell (please correct me if I'm wrong here), the nodes <em>are</em> aware of routes to pod networks, and pod-to-node traffic is also encapsulated IP-in-IP (notice the protocol 4 packets in the first capture above)</p>
<pre><code>root@kube-01-01:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
alpine-9d85bf65c-2wx74 1/1 Running 1 23m 192.168.82.194 kube-01-08 <none>
nginx-dbddb74b8-th9ns 1/1 Running 0 10h 192.168.181.1 kube-01-06 <none>
root@kube-01-01:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
<snip>
192.168.181.0 10.13.23.127 255.255.255.192 UG 0 0 0 tunl0
</code></pre>
| <p>After a long time and a lot of testing, my belief is that this was caused by IP-in-IP (ipip, or IP protocol 4) traffic being blocked by the Oracle cloud networking layer.</p>
<p>Even though I was unable to find this documented anywhere, it is something that is common for cloud providers (Azure, for example, does the same thing - disallows IP-in-IP and unknown IP traffic).</p>
<p>So the possible workarounds here should be the same ones as the ones listed in the <a href="https://docs.projectcalico.org/reference/public-cloud/azure" rel="nofollow noreferrer">Calico documentation for Azure</a>:</p>
<ul>
<li>Disabling IP-in-IP for same-subnet traffic (as I mentioned in the question)</li>
<li>Switching Calico to VXLAN encapsulation</li>
<li>Using Calico for policy only, and flannel for encapsulation (VXLAN)</li>
</ul>
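<p>If you go the VXLAN route, the switch is made on the default IPPool. A sketch of the relevant fields (assuming the default pool name and CIDR; apply with <code>calicoctl apply -f</code>):</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16   # must match your existing pool CIDR
  ipipMode: Never        # stop using IP-in-IP (blocked here)
  vxlanMode: Always      # encapsulate in VXLAN (UDP) instead
  natOutgoing: true
</code></pre>
<p>Note that VXLAN uses UDP port 4789 by default, so that port has to be allowed by the subnet security rules as well.</p>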
|
<p>I have a mosquitto broker running on a pod, this server is exposed as a service as both DNS and IP address.</p>
<p>But this service is accessible by any pod in the cluster.</p>
<p>I want to restrict access to this service such that pods trying to connect to this DNS or IP address should only be able to if the pods have certain name/metadata.</p>
<p>One solution I guess will be to use namespaces? What other solution is there?</p>
| <p>The use case you are describing is exactly what <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noreferrer">NetworkPolicies</a> are there for.</p>
<p>Basically, you define a selector for the pods whose network traffic should be restricted (i.e. your mosquitto broker) and specify what other pods need to have in order to be allowed to reach it, for example a label such as <code>broker-access: "true"</code> or whatever seems suitable for you.</p>
<p>An example network policy could look like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: broker-policy
namespace: default
spec:
podSelector:
matchLabels:
role: message-broker
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
          broker-access: "true"
    ports:
    - protocol: TCP
      port: 1883
</code></pre>
<p>This network policy would be applied to every pod with the label <code>role=message-broker</code>,
and it would restrict all incoming traffic except for traffic from pods with the label <code>broker-access=true</code>, on port 1883 (the standard MQTT port).</p>
<p>Hope this helps and gives you a bit of a scaffold for your NetworkPolicy.</p>
|
| <p>We have an AKS cluster, currently with one system nodepool and 2 user nodepools (usnp1 &amp; usnp2). We have multiple applications' pods currently running across the user nodepools.</p>
<p>So now we have a requirement that one of our existing applications' pods should get a dedicated separate nodepool and namespace. For example, our application myapp is currently running in namespace "all-app-ns", which has the nodeSelector "usnp1" set, and in the same pool we have other application pods as well. So we need to move the myapp pods and all related components completely to a new namespace "myapp-ns" specific to the app, and they should be allocated only to "myapp-pool".</p>
<ol>
<li><p>The "myapp-pool" should not be allocated any pods other than myapp. Which option has more precedence here: a node selector on the pods, or taints? I have read that nodeSelector forces the scheduler ("should assign" to a specific node), whereas a taint does "can assign", so is nodeSelector the better option?</p>
</li>
<li><p>Since the myapp deployment and pods are currently running in "all-app-ns", will moving them to the new namespace "myapp-ns" delete the existing myapp pods in "all-app-ns"? Will this cause any downtime? We currently deploy this using a Helm chart; will the Helm state delete the old pods and create the new ones, and will any downtime happen?</p>
</li>
</ol>
| <ol>
<li>
<blockquote>
<p>... so nodeselector will be a better option?</p>
</blockquote>
</li>
</ol>
<p>You could use nodeSelector, but also nodeAffinity; it is just a matter of configuration. The nodeSelector strictly defines that a Pod with, e.g.,</p>
<pre><code>nodeSelector:
usnp1: "true"
</code></pre>
<p>could only be deployed to node with label <code>usnp1=true</code>.
But you could define same with nodeAffinity in Pod configuration e.g</p>
<pre><code>kind: Pod
metadata:
name: <your-pod-name>
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: usnp1
operator: In
values:
- "true"
</code></pre>
<p>Those two configurations are equivalent.</p>
<ol start="2">
<li>
<blockquote>
<p>... and any downtime will happen?</p>
</blockquote>
</li>
</ol>
<p>If I understand correctly, you need to move the <code>myapp pods</code> which are currently deployed in <code>all-app-ns</code> to <code>myapp-ns</code>. In this scenario, if you deploy to the different namespace (in your case <code>myapp-ns</code>), the pods in <code>all-app-ns</code> will not be undeployed, because with <code>helm install</code> you need to define the <code>--namespace</code> option (unless you have already switched with <code>kubectl config set-context</code>). So to undeploy you have to run <code>helm uninstall <RELEASE> --namespace=all-app-ns</code>. The availability of the application depends on your DNS records, so you will probably need to update them if you expose the app publicly.</p>
<hr />
<p>To answer your questions bellow:</p>
<blockquote>
<p>so for point1. if we have any pods without any nodeselector defined,
there will be chance to allocate that pods to the new nodepool in this
cae right ? the aim here is not allow the new nodepools to have any
other pods otherthan the myapp pods. whether taint will help is this
scenario than nodeselctor or a admission controller called
"podnodeselector" ?</p>
</blockquote>
<p>Use <code>nodeAffinity</code> configuration desribed above for <code>myapp</code> pod configuration.</p>
<p>Add the label to <code>myapp</code> pod configuration e.g</p>
<pre><code> template:
metadata:
labels:
usnp1: myapp
</code></pre>
<p>For any other pods which you do not want to be scheduled at that node, or better say not scheduled with pods which have a label <code>usnp1: myapp</code> create <code>podAntiAffinity</code> configuration e.g</p>
<pre><code> affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: usnp1
operator: In
values:
- myapp
topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>Look at <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p>
<p>Anyway, this also isn't guaranteed to be a 100% solution, because pod scheduling is a complex algorithm with many rules, which are score-weighted. You can see the scores and weights in the scheduler's log.</p>
<p>Look at <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/</a></p>
<blockquote>
<p>Also for point2- we will be proceeding with helm upgrade with the
modified manifests with the namespace changes from pipeline, in that
case whether the helm statefile play a role here to delete the old
pods?</p>
</blockquote>
<p>As for this, I don't know of any functionality in Helm that would undeploy from one namespace and deploy to a second namespace in one step, because (if I understand correctly) the release state applied by <code>helm install</code> is per namespace. So to deploy or undeploy you always need to define <code>--namespace</code> if you have not already switched to it, and that probably means you cannot affect another namespace's state when deploying the same Helm chart.</p>
<p>But I do not have much experience with Helm.</p>
|
<p>This is a really basic question that I just can't seem to find the answer to ANYWHERE.
I need to create a <strong>CronJob on OpenShift Container Platform</strong>.</p>
<p>I wasn't able to find a page on the Container Platform on how to directly create a CronJob.</p>
<p>But I did manage to find instructions on creating it by pasting the Job YAML into the <strong>Add to Application</strong> button:
<a href="https://docs.openshift.com/container-platform/4.1/nodes/jobs/nodes-nodes-jobs.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.1/nodes/jobs/nodes-nodes-jobs.html</a></p>
<p>Now, having created a <strong>CronJob</strong>(I think). </p>
<p>Lol, how do I even <strong>find/modify/delete</strong> it on <strong>Container Platform</strong>?</p>
| <p>You can find cron jobs in the <code>Cluster Console/Workloads/Cron Jobs</code>.</p>
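<p>From the CLI the same object can be managed with the usual verbs (the cron job name here is a placeholder):</p>
<pre><code>oc get cronjobs                 # find it
oc describe cronjob my-cronjob  # inspect it
oc edit cronjob my-cronjob      # modify it
oc delete cronjob my-cronjob    # delete it
</code></pre>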
|
| <p>I am trying to set up ExternalDNS on EKS from a manifest file.</p>
<p>I created an EKS cluster and created 3 Fargate profiles: default, kube-system and dev.
The CoreDNS pods are up and running.</p>
<p>I then installed AWS Load Balancer Controller by following this doc.
<a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html</a>
The load balancer controller came up in kube-system.</p>
<p>I then installed external-dns deployment using the following manifest file.</p>
<pre><code> apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
namespace: kube-system
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxxx:role/eks-externaldnscontrollerRole-XST756O4A65B
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
namespace: kube-system
spec:
selector:
matchLabels:
app: external-dns
strategy:
type: Recreate
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: bitnami/external-dns:0.7.1
args:
- --source=service
- --source=ingress
#- --domain-filter=xxxxxxxxxx.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
- --provider=aws
#- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
- --registry=txt
- --txt-owner-id=my-identifier
#securityContext:
# fsGroup: 65534
</code></pre>
<p>I used both namespaces, kube-system and dev, for external-dns; both came up fine.</p>
<p>I then deployed the application and ingress manifest files. I used both namespaces, kube-system and dev.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app1-nginx-deployment
labels:
app: app1-nginx
namespace: kube-system
spec:
replicas: 2
selector:
matchLabels:
app: app1-nginx
template:
metadata:
labels:
app: app1-nginx
spec:
containers:
- name: app1-nginx
image: kube-nginxapp1:1.0.0
ports:
- containerPort: 80
resources:
requests:
memory: "128Mi"
cpu: "500m"
limits:
memory: "500Mi"
cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
name: app1-nginx-nodeport-service
labels:
app: app1-nginx
namespace: kube-system
annotations:
#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
spec:
type: NodePort
selector:
app: app1-nginx
ports:
- port: 80
targetPort: 80
---
# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-usermgmt-restapp-service
labels:
app: usermgmt-restapp
namespace: kube-system
annotations:
# Ingress Core Settings
alb.ingress.kubernetes.io/scheme: internet-facing
kubernetes.io/ingress.class: alb
# Health Check Settings
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
#alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
alb.ingress.kubernetes.io/success-codes: '200'
alb.ingress.kubernetes.io/healthy-threshold-count: '2'
alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
## SSL Settings
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0xxxxxxxxxx:certificate/0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
#alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used)
# SSL Redirect Setting
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
# External DNS - For creating a Record Set in Route53
external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com
# For Fargate
alb.ingress.kubernetes.io/target-type: ip
spec:
rules:
- http:
paths:
- path: /* # SSL Redirect Setting
pathType: ImplementationSpecific
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- path: /*
pathType: ImplementationSpecific
backend:
service:
name: app1-nginx-nodeport-service
port:
number: 80
</code></pre>
<p>All pods came up fine, but it is not dynamically registering the DNS alias for the ALB.
Can you please guide me on what I am doing wrong?</p>
<hr />
| <p>First, check that the ingress itself works. Check the AWS load balancers and load balancer target groups: the target group targets should be active.
Running <code>kubectl get ingress</code> should also output the DNS name of the load balancer that was created.
Use <code>curl</code> to check that this URL works.
The annotation <code>external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com</code> does not work for ingresses. It is only valid for services. But you don't need it: just specify the <code>host</code> field in the ingress <code>spec.rules</code>. In your example there is no such property, so specify it.</p>
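<p>A minimal sketch of how the <code>rules</code> section could look with the <code>host</code> field added (reusing the hostname from the annotation; the path and service names are taken from the manifest above):</p>

```yaml
spec:
  rules:
  - host: palb.xxxxxxx.com        # external-dns derives the Route53 record from this field
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: app1-nginx-nodeport-service
            port:
              number: 80
```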
|
<p>I am currently trying to modify my image:tag in my build config from a shell command using <code>oc patch</code>.<br />
My command is as such:</p>
<pre><code>oc patch bc/my-bc-name --patch '{"spec":{"strategy":{"sourceStrategy":{"from":{"name": "image:tag" }}}}}'
</code></pre>
<p>What I want to do:</p>
<pre><code>oc patch bc/my-bc-name --patch '{"spec":{"strategy":{"sourceStrategy":{"from":{"name": $myImage }}}}}'
</code></pre>
<p>Such that the image name is specified as an environment variable.
Unfortunately, I got:
Error from server (BadRequest): invalid character '$' looking for beginning of value</p>
<p>Using single/double quotes or ${myImage} does not seem to work either.</p>
<p>any workaround to bypass this is more than welcomed :)</p>
<p>Kind regards</p>
| <p>Close the single-quoted JSON string just before the variable and reopen it just after, wrapping the variable in double quotes as shown below. This lets the shell expand the variable to its value while keeping the surrounding JSON intact:</p>
<pre><code>oc patch bc/my-bc-name --patch '{"spec":{"strategy":{"sourceStrategy":{"from":{"name": "'$myImage'"}}}}}'
</code></pre>
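<p>The quoting technique can be verified locally before running <code>oc</code> — the image value below is just an illustrative placeholder:</p>

```shell
#!/bin/sh
# hypothetical image value; in practice this comes from your environment
myImage="myrepo/myimage:1.2.3"

# The single-quoted string ends right before $myImage and resumes right after it,
# so the shell expands the variable while the JSON's own double quotes survive.
patch='{"spec":{"strategy":{"sourceStrategy":{"from":{"name": "'$myImage'"}}}}}'

echo "$patch"
```

<p>The printed JSON contains the expanded image name — exactly the payload that <code>oc patch --patch</code> receives.</p>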
|
<p>Is it possible to control the percentage of traffic going to a particular pod with a Kubernetes Service, without controlling the number of underlying pods? By default, kube-proxy chooses a backend via a round-robin algorithm.</p>
| <p>Yes, it is possible with the <strong>extra</strong> configuration of a <strong>service mesh</strong>.</p>
<p>If you want to do it using a plain Service, it's hard to do it based on <strong>%</strong>, as the default behavior is <strong>round-robin</strong>.</p>
<p>For example, if you are using the <strong>Istio service mesh</strong></p>
<p>You can route the traffic based on <strong>weight</strong></p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
...
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 50
- destination:
host: reviews
subset: v3
weight: 50
</code></pre>
<p>Here a <strong>subset</strong> is essentially a named group of pod <strong>labels</strong>: you run deployments carrying different <strong>labels</strong> and route between them by <strong>weight</strong> using <strong>Istio</strong>.</p>
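<p>For completeness, the <code>subset</code> names referenced by the VirtualService are defined in a <code>DestinationRule</code> that maps each subset to pod labels — roughly like this (the <code>version</code> label values are illustrative):</p>

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1   # pods of the v1 deployment carry this label
  - name: v3
    labels:
      version: v3   # pods of the v3 deployment carry this label
```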
<p>See the <a href="https://istio.io/latest/docs/tasks/traffic-management/traffic-shifting/" rel="nofollow noreferrer">example</a>.</p>
|
<p>I have lots of deployment YAML files and lots of service YAML files, and I want to know if it is possible to apply all the YAML files at once.
I tried this command:</p>
<pre><code>kubectl create -f *.yaml
</code></pre>
<p>But it's not working.</p>
<p>Thanks</p>
| <p>In addition to @Harsh's answer:</p>
<p>Apply resources from a directory containing a <code>kustomization.yaml</code> file:</p>
<pre><code>kubectl apply -k dir/
</code></pre>
<p>To include all manifest files from sub-directories as well, use the recursive flag:</p>
<pre><code> kubectl apply -R -f dir/
</code></pre>
<blockquote>
<p>-R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests
organized within the same directory.</p>
</blockquote>
|
<p>Ok.. so, we have Google Secret Manager on GCP, AWS Secret Manager in AWS, Key Vault in Azure... and so on.</p>
<p>Those services give you libs so you can code the way your software will access the secrets there. They all look straightforward and sort of easy to implement. Right?</p>
<p>For instance, using Google SM you can like:</p>
<pre><code>from google.cloud import secretmanager
client = secretmanager.SecretManagerServiceClient()
request = {"name": f"projects/<proj-id>/secrets/mysecret/versions/1"}
response = client.access_secret_version(request)
payload = response.payload.data.decode("UTF-8")
</code></pre>
<p>and you're done.</p>
<p>I mean, if we talk about K8S, you can improve the code above by reading the vars from a configmap where you may have all the resources of your secrets, like:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: myms
namespace: myns
data:
DBPASS: projects/<proj-id>/secrets/mysecretdb/versions/1
APIKEY: projects/<proj-id>/secrets/myapikey/versions/1
DIRTYSECRET: projects/<proj-id>/secrets/mydirtysecret/versions/1
</code></pre>
<p>And then use part of the code above to load the vars and get the secrets from the SM.</p>
<p>So, when I was looking the <em>interwebs</em> for best practices and examples, I found projects like the below:</p>
<ol>
<li><a href="https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp</a></li>
<li><a href="https://github.com/doitintl/secrets-init" rel="nofollow noreferrer">https://github.com/doitintl/secrets-init</a></li>
<li><a href="https://github.com/doitintl/kube-secrets-init" rel="nofollow noreferrer">https://github.com/doitintl/kube-secrets-init</a></li>
<li><a href="https://github.com/aws-samples/aws-secret-sidecar-injector" rel="nofollow noreferrer">https://github.com/aws-samples/aws-secret-sidecar-injector</a></li>
<li><a href="https://github.com/aws/secrets-store-csi-driver-provider-aws" rel="nofollow noreferrer">https://github.com/aws/secrets-store-csi-driver-provider-aws</a></li>
</ol>
<p>But those projects don't clearly explain what's the point of mounting my secrets as files or env_vars..</p>
<p>I got really confused, maybe I'm too newbie on the K8S and cloud world... and that's why I'm here asking, maybe, a really really dumb questions. Sorry :/</p>
<p>My questions are:</p>
<ol>
<li>Are the projects, mentioned above, recommended for old code that I do not want to touch? I mean, let's say that my code already use a env var called DBPASS=mypass and I would like to workaround it so the value from the DBPASS env would be <em>hackinjected</em> by a value from a SM.</li>
<li>The implementation to handle a secret manager is very hard. So it is recommended to use one of the solutions above?</li>
<li>What are the advantages of such injection approach?</li>
</ol>
<p>Thx a lot!</p>
| <p>There are many possible motivations why you may want to use an abstraction (such as the CSI driver or sidecar injector) over a native integration:</p>
<ul>
<li><p><strong>Portability</strong> - If you're multi-cloud or multi-target, you may have multiple secret management solutions. Or you might have a different secret manager target for local development versus production. Projecting secrets onto a virtual filesystem or into environment variables provides a "least common denominator" approach that decouples the application from its secrets management provider.</p>
</li>
<li><p><strong>Local development</strong> - Similar to the previous point on portability, it's common to have "fake" or fakeish data for local development. For local dev, secrets might all be fake and not need to connect to a real secret manager. Moving to an abstraction avoids error-prone spaghetti code like:</p>
<pre class="lang-js prettyprint-override"><code>let secret;
if (process.env.RAILS_ENV === 'production') {
secret = secretmanager.access('...')
} else {
secret = 'abcd1234'
}
</code></pre>
</li>
<li><p><strong>De-risking</strong> - By avoiding a tight coupling, you can de-risk upstream API changes in an abstraction layer. This is conceptually similar to the benefits of microservices. As a platform team, you make a guarantee to your developers that their secret will live at <code>process.env.FOO</code>, and it doesn't matter <em>how</em> it gets there, so long as you continue to fulfill that API contract.</p>
</li>
<li><p><strong>Separation of duties</strong> - In some organizations, the platform team (k8s team) is separate from the security team, which is separate from development teams. It might not be realistic for a developer to ever have direct access to a secret manager.</p>
</li>
<li><p><strong>Preserving identities</strong> - Depending on the implementation, it's possible that the actor which accesses the secret varies. Sometimes it's the k8s cluster, sometimes it's the individual pod. Both have trade-offs.</p>
</li>
</ul>
<hr />
<p>Why might you <em>not</em> want this abstraction? Well, it adds additional security concerns. Exposing secrets via environment variables or via the filesystem makes you subject to a generic series of supply chain attacks. Using a secret manager client library or API directly doesn't entirely prevent this, but it forces a more targeted attack (e.g. core dump) instead of a more generic path traversal or env-dump-to-pastebin attack.</p>
|
<p>ConfigMap exists in the right namespace, but getting this error that it couldn't construct reference to configmap, and did not get a confirmation that leaderelection has been successfully required. Any advice would be appreciated</p>
<p><strong>error log snippet</strong></p>
<pre><code>...
2021-08-04T18:14:35.659Z INFO setup starting manager
2021-08-04T18:14:35.659Z INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
E0804 18:14:35.701075 1 event.go:247] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"controller-leader-election-helper", GenerateName:"", Namespace:"kubestone-system", SelfLink:"", UID:"e28283ca-45ad-4095-9077-d55c8c0a40be", ResourceVersion:"10947", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763697675, loc:(*time.Location)(0x21639e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"kubestone-controller-manager-f467b7c47-8n8nz_d30009f2-f54f-11eb-9895-d6c135c24951\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2021-08-04T18:14:35Z\",\"renewTime\":\"2021-08-04T18:14:35Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0002739e0), Fields:(*v1.Fields)(nil)}}}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'kubestone-controller-manager-f467b7c47-8n8nz_d30009f2-f54f-11eb-9895-d6c135c24951 became leader'
2021-08-04T18:14:37.760Z INFO controller-runtime.controller Starting Controller {"controller": "sysbench"}
...
</code></pre>
| <p>Looks like the error is telling you:</p>
<pre><code>...
SelfLink: "",
...
</code></pre>
<p><code>... due to: 'selfLink was empty, can't make reference' ...</code></p>
<p>Thread here suggests some solutions <a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25#issuecomment-742616668" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25#issuecomment-742616668</a></p>
<p>This typically happens when a controller built against older Kubernetes client libraries runs on Kubernetes 1.20+, where the <code>selfLink</code> field is deprecated and no longer populated. Updating the controller's Kubernetes client dependencies (or using a newer kubestone image) should resolve it, but it is hard to say for certain without more info.</p>
|
<p>So, we can see from this fun issue: <a href="https://github.com/aws/aws-cli/issues/4947" rel="noreferrer">https://github.com/aws/aws-cli/issues/4947</a></p>
<p>that AWS CLI 2 is never going to be pushed to pip.</p>
<p>In AWS MWAA (Managed Airflow), we <em>are not given</em> the ability to specify apt-get level installs. We get a <code>requirements.txt</code> only, and AWS CLI v2 is not available via that. Meaning, we have no interface or medium to do what the Amazon representative in that issue talks about, which is to use the package manager (from Amazon's MWAA).</p>
<p>Recently, we have hit issues with authenticating to our kubernetes clusters, because the AWS CLI in various versions <em>hardcodes</em> the auth version: <a href="https://github.com/aws/aws-cli/blob/1.20.11/awscli/customizations/eks/get_token.py#L77" rel="noreferrer">https://github.com/aws/aws-cli/blob/1.20.11/awscli/customizations/eks/get_token.py#L77</a></p>
<p>In other versions, it's hardcoded to <code>alpha</code> etc. Which means depending on what aws cli version a worker comes up with, it may or may not match what your kubeconfig in S3 for MWAA has. We've been hitting transient mismatches like</p>
<pre><code>{{refresh_config.py:71}} ERROR - exec: plugin api version client.authentication.k8s.io/v1alpha1 does not match client.authentication.k8s.io/v1beta1
</code></pre>
<p>and, if we "flip" our kubeconfig,</p>
<pre><code>{{refresh_config.py:71}} ERROR - exec: plugin api version client.authentication.k8s.io/v1beta1 does not match client.authentication.k8s.io/v1alpha1
</code></pre>
<p>Since we use the kubernetes executor from Airflow, we need <code>get-token</code>:</p>
<pre><code> user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- --region
- us-east-1
- eks
- get-token
- --cluster-name
- thor
command: aws
</code></pre>
<p>but this get-token returns a different version of the auth sometimes as discussed.</p>
<p>Currently, we have pinned <code>awscli</code> hard in <code>requirements.txt</code> to <code>1.20.11</code>. However, AWS Says to <em>stop</em> using V1: <a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html" rel="noreferrer">https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html</a></p>
<p>So, with all of this said, how can I install a specific version of AWS CLI 2 on my AWS MWAA workers? Help from Amazon would be appreciated; I will file an AWS Support ticket there too.</p>
| <p>MWAA doesn't currently support installing a specific AWS CLI version. This is a limitation that the MWAA team is aware of.</p>
|
<p>I need to monitor a lot of legacy containers in my EKS cluster that have an NFS mount path. To map the NFS directory into the containers I am using the nfs-client Helm chart.</p>
<p>I need to detect when my mount path is lost for some reason, and the only way I have found to do that is to exec a command in the container.</p>
<pre><code>#!/bin/bash
df -h | grep ip_of_my_nfs_server | wc -l
</code></pre>
<p>If the output above returns 1, I know that my NFS mount path is OK.</p>
<p>Does anybody know a way to monitor the output of a script executed in a container with Prometheus?</p>
<p>Thanks!</p>
| <p>As Matt has pointed out in the comments: the first order of business should be to see if you can simply satisfy your monitoring requirement with node_exporter.</p>
<p>Below is a more generic answer on collecting metrics from arbitrary shell commands.</p>
<hr />
<p>Prometheus is a pull-based monitoring system. You configure it with "scrape targets": these are effectively just HTTP endpoints that expose metrics in a specific format. <em>Some</em> target needs to be alive for long enough to allow it to be scraped.</p>
<p>The two most obvious options you have are:</p>
<ul>
<li>Wrap your logic in a long-running process that exposes this metric on an HTTP endpoint, and configure it as a scrape target</li>
<li>Spin up an instance of <a href="https://github.com/prometheus/pushgateway" rel="nofollow noreferrer">pushgateway</a>, configure it as a scrape target, and have your command push its metrics there</li>
</ul>
<p>Based on the little information you provided, the latter option seems like the most sane one. Important and relevant note from the README:</p>
<blockquote>
<p>The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus.</p>
</blockquote>
<p>Your command would look something like:</p>
<pre><code>#!/bin/bash
# note the trailing \n: the Pushgateway rejects a metrics body that does not end with a newline
printf "mount_path_up %d\n" "$(df -h | grep ip_of_my_nfs_server | wc -l)" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job_name
</code></pre>
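<p>If node_exporter is an option (as suggested at the top), its textfile collector gives a simpler pattern than the Pushgateway: a cron job or systemd timer on the node periodically writes the metric to a file that node_exporter exposes. A minimal sketch, assuming node_exporter runs with <code>--collector.textfile.directory</code> pointing at the directory used below (the path is a placeholder):</p>

```shell
#!/bin/bash
# Directory watched by node_exporter's textfile collector -- placeholder path,
# point it at whatever --collector.textfile.directory is actually set to.
TEXTFILE_DIR="${TEXTFILE_DIR:-/tmp/node_exporter_textfile}"
mkdir -p "$TEXTFILE_DIR"

# 1 if the NFS mount is present, 0 otherwise (same check as the script in the question)
mounted=$(df -h | grep ip_of_my_nfs_server | wc -l)

# Write to a temp file and rename, so node_exporter never scrapes a half-written file.
printf 'nfs_mount_up %d\n' "$mounted" > "$TEXTFILE_DIR/nfs_mount.prom.$$"
mv "$TEXTFILE_DIR/nfs_mount.prom.$$" "$TEXTFILE_DIR/nfs_mount.prom"
```

<p>node_exporter then serves <code>nfs_mount_up</code> alongside its other metrics, and you can alert on <code>nfs_mount_up == 0</code> in Prometheus.</p>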
|
<p>I have encountered a strange problem with my cluster.</p>
<p>In my cluster I have a deployment and a LoadBalancer service exposing this deployment.
It worked like a charm, but suddenly the LoadBalancer started to return an error:</p>
<pre><code>curl: (56) Recv failure: Connection reset by peer
</code></pre>
<p>The error shows up while the pod and the LoadBalancer are running and have no errors in their logs.</p>
<p>what i already tried:</p>
<ul>
<li>deleting the pod</li>
<li>redeploying service + deployment from scratch
but the issue persists</li>
</ul>
<p>my service yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME","app.kubernetes.io/version":"latest"},"name":"APP-NAME","namespace":"namespacex"},"spec":{"ports":[{"name":"web","port":3000}],"selector":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME"},"type":"LoadBalancer"}}
creationTimestamp: "2021-08-03T07:55:00Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/name: APP-NAME
app.kubernetes.io/version: latest
name: APP-NAME
namespace: namespacex
resourceVersion: "14583904"
uid: 7fb4d7e6-4316-44e5-8f9b-7a466bc776da
spec:
clusterIP: 10.4.18.36
clusterIPs:
- 10.4.18.36
externalTrafficPolicy: Cluster
ports:
- name: web
nodePort: 30970
port: 3000
protocol: TCP
targetPort: 3000
selector:
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/name: APP-NAME
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: xx.xxx.xxx.xxx
</code></pre>
<p>my deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: APP-NAME
labels:
app.kubernetes.io/name: APP-NAME
app.kubernetes.io/instance: RELEASE-NAME
app.kubernetes.io/version: "latest"
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: APP-NAME
app.kubernetes.io/instance: RELEASE-NAME
template:
metadata:
annotations:
checksum/config: 5e6ff0d6fa64b90b0365e9f3939cefc0a619502b32564c4ff712067dbe44ab90
checksum/secret: 76e0a1351da90c0cef06851e3aa9e7c80b415c29b11f473d4a2520ade9c892ce
labels:
app.kubernetes.io/name: APP-NAME
app.kubernetes.io/instance: RELEASE-NAME
spec:
serviceAccountName: APP-NAME
containers:
- name: APP-NAME
image: 'docker.io/xxxxxxxx:latest'
imagePullPolicy: "Always"
ports:
- name: http
containerPort: 3000
livenessProbe:
httpGet:
path: /balancer/
port: http
readinessProbe:
httpGet:
path: /balancer/
port: http
env:
...
volumeMounts:
- name: config-volume
mountPath: /home/app/config/
resources:
limits:
cpu: 400m
memory: 256Mi
requests:
cpu: 400m
memory: 256Mi
volumes:
- name: config-volume
configMap:
name: app-config
imagePullSecrets:
- name: secret
</code></pre>
| <p>The issue in my case turned out to be a network component (like a firewall) blocking the outbound connection after deeming the cluster 'unsafe' for no apparent reason.</p>
<p>So in essence it was not a K8s issue but an IT one.</p>
|
<p>I am trying to configure Kubernetes on docker-for-desktops and I want to change the default network assigned to containers. </p>
<blockquote>
<p>Example: the default network is <code>10.1.0.0/16</code> but I want <code>172.16.0.0/16</code>. </p>
</blockquote>
<p>I changed the docker network section to <code>Subnet address: 172.16.0.0 and netmask 255.255.0.0</code> but the cluster keeps assigning the network 10.1.0.0/16.
<a href="https://i.stack.imgur.com/mdlFB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mdlFB.png" alt="Network configuration"></a></p>
<p>The problem I am facing here is that I am in a VPN which has the same network IP of kubernetes default network (<code>10.1.0.0/16</code>) so if I try to ping a host that is under the vpn, the container from which I am doing the ping keeps saying <code>Destination Host Unreachable</code>.</p>
<p>I am running Docker Desktop (under Windows Pro) Version 2.0.0.0-win81 (29211) Channel: stable Build: 4271b9e.</p>
<p>Kubernetes is provided from Docker desktop <a href="https://i.stack.imgur.com/xshra.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xshra.png" alt="Kuberbetes"></a></p>
<p>From the official <a href="https://docs.docker.com/docker-for-windows/kubernetes/" rel="noreferrer">documentation</a> I know that </p>
<blockquote>
<p>Kubernetes is available in Docker for Windows 18.02 CE Edge and higher, and 18.06 Stable and higher , this includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, <strong>is not configurable</strong>, and is a single-node cluster</p>
</blockquote>
<p>Said so, should Kubernetes use the underlying docker's configuration (like network, volumes etc.)?</p>
| <p>Kubernetes also has a subnet that it uses and a corresponding network setting, which defaults to <code>10.1.0.0/16</code>, but this setting is not exposed in the Docker for-win UI. In <a href="https://github.com/docker/for-win/issues/1667#issuecomment-367357067" rel="nofollow noreferrer">docker/for-win issue #1667: Can not access local network, with kubernetes installed</a>, guillaumerose shows a workaround that I've altered a bit to read:</p>
<ol>
<li>Disable Kubernetes</li>
<li>Restart Docker for-win by clicking on "Restart" from the "Troubleshoot" screen (the one with the bug icon) - this step is missing in guillaumerose's workaround, see below</li>
<li><code>docker run -it --privileged --pid=host justincormack/nsenter1</code> and edit <code>/var/lib/cni/conf/10-default.conflist</code>. Change the mentioned <code>10.1.0.0/16</code> to the network you want. Don't forget the gateway and the dns</li>
<li>Enable Kubernetes</li>
</ol>
<p>In step 3, I changed <code>/var/lib/cni/conf/10-default.conflist</code> like this:</p>
<pre><code># diff -u 10-default.conflist.orig 10-default.conflist
--- 10-default.conflist.orig
+++ 10-default.conflist
@@ -10,11 +10,11 @@
"hairpinMode": true,
"ipam": {
"type": "host-local",
- "subnet": "10.1.0.0/16",
- "gateway": "10.1.0.1"
+ "subnet": "10.250.0.0/16",
+ "gateway": "10.250.0.1"
},
"dns": {
- "nameservers": ["10.1.0.1"]
+ "nameservers": ["10.250.0.1"]
}
},
{
</code></pre>
<p>And this works. I can now ping <code>10.1.119.43</code> <em>and</em> use kubernetes.</p>
<h2>OBS! <code>10-default.conflist</code> gets reset/reverted whenever docker is restarted</h2>
<p>Yes, every time docker gets restarted (e.g. because of a windows reboot), kubernetes reverts back to using <code>10.1.0.0/16</code> and then it is broken again. Apply the workaround above once more, and it will work again.</p>
<p>So I personally have a <code>~/10-default.conflist.250</code> file with the above patch applied and then do:</p>
<pre><code>docker run -i --rm --privileged --pid=host justincormack/nsenter1 /bin/sh -c '/bin/cat > /var/lib/cni/conf/10-default.conflist' < ~/10-default.conflist.250
</code></pre>
<p>as step 3 above instead of editing the file by hand over and over.</p>
<p>It is quite annoying that this the workaround has to be applied every time docker for-win is restarted, but it is better than it not working :-).</p>
<h2>About the need to restart Docker for-win after disabling kubernetes</h2>
<p>My experience is that when kubernetes has been restarted and has reverted to using <code>10.1.0.0/16</code>, if I skip step 2 - the "restart Docker for-win" step - it takes more than 5 minutes to attempt to start kubernetes, after which I give up waiting. When I now restart docker (because kubernetes is in a bad state), kubernetes will be re-enabled (again using <code>10.1.0.0/16</code>) but now I can follow the workaround again (including step 2). So restarting docker between disabling kubernetes and modifying <code>10-default.conflist</code> makes the subsequent start of kubernetes actually work.</p>
<p>If anybody has any idea why the contents of <code>/var/lib/cni/conf/10-default.conflist</code> revert to factory defaults every time docker gets restarted, I'm very curious to understand why that is and how to fix this last annoying problem.</p>
|
<p>I'm curious about how <code>@Value</code> works internally in Spring, such that it can actually read values from a <code>ConfigMap</code> in a Kubernetes cluster.</p>
<p>I know that:</p>
<ul>
<li><code>@Value("${my.nested.variable}")</code> is used to access variables
declared in <code>application.properties</code> or in the OS' environment variables
(the latter having higher priority).</li>
<li>When creating a new ConfigMap on Kubernetes (for a Spring project), you usually do <code>kubectl create configmap my-config-name --from-file=application.properties</code>, and it will magically connect those <code>ConfigMap</code> values with the respective <code>@Value()</code> in Spring — provided, of course, that we select <code>my-config-name</code> in the deployment YAML file.</li>
</ul>
<p>Notice above that we <strong>didn't expose/map</strong> that ConfigMap to the container's environment variables (I already checked inside the container with <code>printenv</code>; <strong>it isn't there</strong>).
However, Spring was still able to retrieve those values from the ConfigMap to be used in the Java program.</p>
<p>How is this possible? Does anyone know how Spring's <code>@Value</code> works, or how <code>ConfigMap</code> works internally, so that the two can be <strong>magically</strong> connected?</p>
<p>Thank You.</p>
| <p>This is a community wiki answer. Feel free to expand it.</p>
<p>As already mentioned by David Maze in the comments, <a href="https://spring.io/projects/spring-cloud-kubernetes" rel="nofollow noreferrer">Spring Cloud Kubernetes</a> is reading <a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/#kubernetes-propertysource-implementations" rel="nofollow noreferrer">ConfigMaps</a> by using <a href="https://kubernetes.io/docs/reference/using-api/" rel="nofollow noreferrer">Kubernetes API</a>. The mechanisms behind it are described in the linked docs.</p>
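<p>Concretely, with a Spring Cloud Kubernetes config starter on the classpath, the library registers a <code>PropertySource</code> backed by the Kubernetes API, so ConfigMap keys resolve through <code>@Value</code> like any other property — no environment variable or volume mount needed. A minimal, illustrative configuration sketch (names and exact property keys depend on your Spring Cloud version):</p>

```yaml
# application.yaml (bootstrap.yaml on older Spring Cloud versions)
spring:
  application:
    name: my-config-name        # by default, a ConfigMap with this name is looked up
  cloud:
    kubernetes:
      config:
        enabled: true
        name: my-config-name    # ConfigMap read via the Kubernetes API
        namespace: my-namespace # illustrative
```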
|
<p>I'm using <code>hyperkube</code> to start <code>kube-controller-manager</code> docker container. In order to rotate the kubernetes CA I followed <a href="https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/" rel="nofollow noreferrer">this doc</a>. I have to point the KCM <code>client-ca</code> and <code>cluster-signing-cert</code> to different certs. Automating this is difficult because KCM process uses command line arguments.</p>
<p>I don't see any option <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">here</a>. Does anyone know a way of migrating command line arguments to a <code>config.yaml</code> file for <code>kube-controller-manager</code>?</p>
<p>NOTE: My question is about starting the KCM process with config file just like how we have one for kubelets <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#start-a-kubelet-process-configured-via-the-config-file" rel="nofollow noreferrer">here</a>.</p>
| <p>There are two possible ways of starting kube-controller-manager with customized settings, by providing YAML files.</p>
<h4>Method #1</h4>
<p>The <code>kube-controller-manager</code> runs as a static pod in your control plane. Its manifest file, <code>kube-controller-manager.yaml</code>, is located in <code>/etc/kubernetes/manifests</code>. By adding entries to <code>.spec.containers.command</code> like so:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
containers:
- command:
- kube-controller-manager
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
...
</code></pre>
<p>you can change the defaults.</p>
<p>Then you would have to restart docker (or containerd)</p>
<pre class="lang-text prettyprint-override"><code>sudo systemctl restart docker (or containerd)
</code></pre>
<p>or, if you want to restart just <code>kube-controller-manager</code></p>
<pre class="lang-text prettyprint-override"><code>docker restart kube-controller-manager
</code></pre>
<h4>Method #2</h4>
<p>You can use <code>ClusterConfiguration</code> with <code>extraArgs</code> like so<sup>[<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#controllermanager-flags" rel="nofollow noreferrer">reference</a>]</sup>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
controllerManager:
extraArgs:
cluster-signing-key-file: /home/johndoe/keys/ca.key
deployment-controller-sync-period: "50"
</code></pre>
<p>For this you would have to extract your current cluster configuration</p>
<pre class="lang-text prettyprint-override"><code>kubeadm config view > kubeadm-config.yaml
# on v1.22+, where `kubeadm config view` was removed, read it from the ConfigMap instead:
kubectl get cm kubeadm-config -n kube-system -o jsonpath='{.data.ClusterConfiguration}' > kubeadm-config.yaml
</code></pre>
<p>edit this file accordingly, and then upgrade the control plane</p>
<pre class="lang-text prettyprint-override"><code>kubeadm upgrade apply --config kubeadm-config.yaml
</code></pre>
<hr />
<p>Now, to answer your question - <code>kube-controller-manager</code> does not support <code>--config</code> or any other flag that would allow you to pass a YAML file as its argument (you can check all available flags <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#options" rel="nofollow noreferrer">here</a>).</p>
<p>The only possible solutions are the two above.</p>
|
<p>In Kubernetes, more specifically GKE, I created a Filestore volume, with the minimum of 1TB.</p>
<p>I created then a Persistent Volume Claim of 20gb.</p>
<p>What happens, if for another service I create another 20gb PVC ?</p>
<p>Specifically, how does this kind of storage work if you want to share the disk space between pods but having different data? Does Google make "batches" of X gb requested and this space is reserved in the disk?</p>
| <p>There is a one-to-one mapping between a PVC and a PV, so you cannot create multiple PVCs pointing to the same PV.</p>
<p>But there are two possible ways you can share the underlying Filestore file share across multiple pods.</p>
<ol>
<li><p>You can share a single PVC across multiple pods. So you could create a single PVC with the total amount of storage you want across all your pods (or just set it to the size of the Filestore instance). You would then mount the same PVC on all of your pods. To ensure that you don't have any data collision, each pod should write to its own directory within the PVC. This will definitely work, but it won't limit or report the storage used by an individual pod.</p>
</li>
<li><p>You can create multiple PVs backed by the same Filestore instance. When you create each PV, specify the storage size for each PV. Then create PVCs for each pod. I believe that you'd still want to make sure that each pod writes to a unique directory.</p>
</li>
</ol>
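<p>As a sketch of the second option, each PV can be pre-provisioned against the same Filestore share over NFS and bound to its own PVC (the server IP and export path below are placeholders for your Filestore instance):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv-a
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.2      # placeholder: your Filestore instance IP
    path: /vol1/pod-a     # placeholder: share name plus a per-pod subdirectory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-pvc-a
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""    # empty string disables dynamic provisioning
  volumeName: filestore-pv-a
  resources:
    requests:
      storage: 20Gi
```

<p>Keep in mind that the 20Gi capacity here is only used for scheduling and binding; NFS does not enforce it, so actual per-pod usage is still not limited.</p>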
<p>Lastly, unless you have a specific reason for using Filestore instances, consider using dynamic provisioning with GCE PDs.</p>
|
<p>I have a wehook running in my cluster.</p>
<p>I created a certificate and signed it successfully.</p>
<p>certificate configuration:</p>
<pre><code>cat > csr.conf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = s-controller.ns-controller
DNS.2 = s-controller.ns-controller.svc
EOF
</code></pre>
<p>I then create the key and CSR as follows:</p>
<pre><code>openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -subj "/CN=s-controller.ns-controller.svc" -out server.csr -config csr.conf
</code></pre>
<p>certificate signing request (<code>v1beta1</code>)</p>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: csr-controller
spec:
groups:
- system:authenticated
request: $(cat server.csr | base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- server auth
EOF
</code></pre>
<p>This worked just fine!</p>
<p>Since I updated my kubernetes version, I get the following warning: <code>Warning: certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1</code>, I updated the <code>CertificateSigningRequest</code> so now it is as following:</p>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: csr-controller
spec:
groups:
- system:authenticated
request: $(cat server.csr | base64 | tr -d '\n')
signerName: kubernetes.io/kube-apiserver-client
usages:
- digital signature
- key encipherment
- client auth
EOF
</code></pre>
<p>And now the api server fails to connect to my webhook: <code>Post "https://s-controller.ns-controller.svc:443/mutate?timeout=30s": x509: certificate specifies an incompatible key usage</code></p>
<p>I tried updating the certificate configuration to <code>extendedKeyUsage = clientAuth</code> but it didn't help.</p>
<p>Any idea what the correct <code>signerName</code> and configuration are for the <code>certificates.k8s.io/v1</code> apiVersion?</p>
| <p>In your old CertificateSigningRequest YAML you were using <em>server auth</em> as one of the key usages, but in the latest one you changed it to <em>client auth</em>. The certificate needed by a webhook has to be signed with the <em>server auth</em> usage, and the <code>signerName</code> should be <code>kubernetes.io/kubelet-serving</code>. Update your files as follows to avoid the issue:</p>
<p>csr.conf</p>
<pre><code> cat > csr.conf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
CN = s-controller.ns-controller.svc
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = s-controller.ns-controller
DNS.2 = s-controller.ns-controller.svc
EOF
</code></pre>
<p>Generate the CSR with the subject organization (<code>O</code>) set to <code>system:nodes</code>:</p>
<pre><code>openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=system:node:s-controller.ns-controller.svc/O=system:nodes" -out server.csr -config csr.conf
</code></pre>
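<p>Before submitting the CSR, it is worth verifying that it actually carries the requested extensions, since a missing <code>extendedKeyUsage</code> is exactly what produces the "incompatible key usage" error. A self-contained sanity check (the <code>/tmp</code> paths are throwaway; the config mirrors the one above):</p>

```shell
# Recreate the config, key and CSR with throwaway paths, then inspect the CSR
cat > /tmp/csr.conf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
CN = s-controller.ns-controller.svc
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = s-controller.ns-controller
DNS.2 = s-controller.ns-controller.svc
EOF
openssl genrsa -out /tmp/server.key 2048
openssl req -new -key /tmp/server.key \
  -subj "/CN=system:node:s-controller.ns-controller.svc/O=system:nodes" \
  -out /tmp/server.csr -config /tmp/csr.conf
# Both usages should appear under "Requested Extensions"
openssl req -in /tmp/server.csr -noout -text | grep "TLS Web"
```

<p>If the <code>grep</code> prints nothing, the extensions were not picked up from the config file and the signed certificate will fail in the same way.</p>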
<p>csr-for-webhook.yaml</p>
<pre><code>cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: csr-controller
spec:
groups:
- system:authenticated
request: $(cat server.csr | base64 | tr -d '\n')
signerName: kubernetes.io/kubelet-serving
usages:
- digital signature
- key encipherment
- server auth
EOF
</code></pre>
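<p>Note that after the CSR object is created it still has to be approved, and the issued certificate retrieved, before the webhook can serve it. A sketch of the remaining steps (the secret name is illustrative):</p>

```
kubectl certificate approve csr-controller
# once the signer has issued the certificate, extract it from the CSR status
kubectl get csr csr-controller -o jsonpath='{.status.certificate}' | base64 -d > server.crt
# package key and certificate for the webhook pod, e.g. as a TLS secret
kubectl -n ns-controller create secret tls s-controller-tls --cert=server.crt --key=server.key
```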
<p>source : <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/</a></p>
<p>PS: I have tested it with Kubernetes version 1.21.3</p>
|
<p>I am working with kubernetes and I need to pass parquet files containing datasets between pods , but I don't know which option will work best.</p>
<p>As I know, persistent disk allows me to mount a shared volume on my pods, but with cloud storage I can share these files too.</p>
<p>All the process is hosted on google cloud.</p>
| <p>If you want to share the data between pods, you can use Google's Filestore, which supports ReadWriteMany.</p>
<blockquote>
<p>Persistent Volumes in GKE are supported using the Persistent Disks.
The problem with these disks is that they only support
ReadWriteOnce(RWO) (the volume can be mounted as read-write by a
single node) and ReadOnlyMany (ROX)(the volume can be mounted
read-only by many nodes) access modes.</p>
</blockquote>
<p>Read more at : <a href="https://medium.com/@Sushil_Kumar/readwritemany-persistent-volumes-in-google-kubernetes-engine-a0b93e203180" rel="nofollow noreferrer">https://medium.com/@Sushil_Kumar/readwritemany-persistent-volumes-in-google-kubernetes-engine-a0b93e203180</a></p>
<p>With a persistent disk, it won't be possible to share the data between <strong>pods</strong> on different nodes, as it only supports <strong>ReadWriteOnce</strong>: a single disk gets attached to a single node.</p>
<p>If you are looking to mount storage like a <code>cloud bucket</code> behind the pod using a <code>CSI driver</code>, your file-writing <strong>IO</strong> will be very slow; a bucket performs much better when accessed through its API.</p>
<p>You can also run your own NFS server in Kubernetes, which will likewise provide ReadWriteMany support.</p>
<p><a href="https://www.gluster.org/" rel="nofollow noreferrer">GlusterFS</a> & <a href="https://min.io/" rel="nofollow noreferrer">MinIO</a> are options to consider; however, if you are looking for a <strong>managed</strong> <strong>NFS</strong> service, use Google's <strong><a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">Filestore</a></strong>.</p>
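<p>On GKE, if the Filestore CSI driver is enabled, a ReadWriteMany volume for sharing the parquet files can be provisioned dynamically with a PVC like this (the <code>standard-rwx</code> StorageClass is installed by the driver; the claim name is illustrative):</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: parquet-share
spec:
  accessModes:
    - ReadWriteMany          # possible because Filestore is NFS-backed
  storageClassName: standard-rwx   # GKE Filestore CSI StorageClass
  resources:
    requests:
      storage: 1Ti           # Filestore basic tier minimum
```

<p>Every pod that mounts this claim sees the same file share, so the usual per-pod subdirectory convention applies here too.</p>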
|