<p>I am setting up my first ingress in kubernetes using nginx-ingress. I set up the <code>ingress-nginx</code> load balancer service like so:</p>
<pre><code>{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "ingress-nginx",
    "namespace": "...",
    "labels": {
      "k8s-addon": "ingress-nginx.addons.k8s.io"
    },
    "annotations": {
      "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "tcp",
      "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "*",
      "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn....",
      "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "443"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": "http",
        "nodePort": 30591
      },
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": "http",
        "nodePort": 32564
      }
    ],
    "selector": {
      "app": "ingress-nginx"
    },
    "clusterIP": "...",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "hostname": "blablala.elb.amazonaws.com"
        }
      ]
    }
  }
}
</code></pre>
<p>Notice how the <code>https</code> port has its <code>targetPort</code> property pointing to port 80 (http) in order to terminate ssl at the load balancer.</p>
<p>My ingress looks something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: something
  namespace: ...
  annotations:
    ingress.kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: some-service
          servicePort: 2100
</code></pre>
<p>Now when I navigate to the URL I get a <code>Too many redirects</code> error. What's confusing me is that when I add the header <code>X-Forwarded-Proto: https</code>, I get the expected response (<code>curl https://www.example.com -v -H "X-Forwarded-Proto: https"</code>). </p>
<p>Any ideas how I can resolve the issue?</p>
<p>P.S. this works just fine with <code>ingress.kubernetes.io/force-ssl-redirect: "false"</code> and it doesn't seem that there are any extraneous redirects.</p>
| <p>Adding another cause for the <code>Too many redirects</code> error.</p>
<p>While working with <em>ingress-nginx</em> as an ingress controller in front of some k8s services, <strong>one of the services</strong> (ArgoCD in my case) <strong>handled TLS termination by itself and always redirected HTTP requests to HTTPS</strong>.</p>
<p><strong>The problem was that the nginx ingress controller also handled TLS termination</strong> and communicated with the backend service over plain HTTP, so the ArgoCD server always responded with a redirect to HTTPS, which caused the redirect loop.</p>
<p>Passing values through the ingress <strong>annotations below will not help</strong>:</p>
<pre><code>annotations:
nginx.ingress.kubernetes.io/ssl-redirect: false/true
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"/"HTTPS"
</code></pre>
<p><strong>The solution was to ensure that the service doesn't handle TLS itself, by passing the <code>--insecure</code> flag to the argocd-server deployment</strong>:</p>
<pre><code>spec:
  template:
    spec:
      containers:
      - name: argocd-server
        command:
        - argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --insecure # <-- Here
</code></pre>
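<p>As an alternative to editing the deployment's args, newer ArgoCD versions can read this setting from the <code>argocd-cmd-params-cm</code> ConfigMap — a sketch, assuming an ArgoCD version that supports this ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # equivalent to passing --insecure to argocd-server
  server.insecure: "true"
</code></pre>
<p>Restart the argocd-server pod afterwards so it picks up the new parameter.</p>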
|
<p>I am learning about Volumes in <code>Kubernetes</code>.<br />
I understood the concept of Volumes and the types of volumes.<br />
But I am confused about the <code>mountPath</code> property. Following is my <code>YAML</code> file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-alpine-volume
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    resources:
  - name: html-updater
    image: alpine
    command: ["/bin/sh", "-c"]
    args:
    - while true; do date >> /mohit/index.html;sleep 10; done
    resources:
    volumeMounts:
    - name: html
      mountPath: /mohit
  volumes:
  - name: html
    emptyDir: {}
</code></pre>
<p>Question: What is the use of the <code>mountPath</code> property? Here, I am using one pod with two containers. Both containers have different <code>mountPath</code> values.</p>
<h2>Update:</h2>
<p><a href="https://i.stack.imgur.com/qvqJI.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qvqJI.jpg" alt="enter image description here" /></a></p>
| <p>Consider the <code>mountPath</code> as the directory where the volume is attached (mounted) inside the container.</p>
<p>Your actual volume is an <strong>emptyDir</strong> named <code>html</code>.</p>
<p>The basic idea is that the two containers have different mount paths because each container needs the shared files in a different folder.</p>
<p>Since there is a single volume (<code>html</code>), both containers are pointing at the same underlying directory, just through different mount points: the updater container writes to <code>/mohit</code>, while nginx serves the same files from <code>/usr/share/nginx/html</code>.</p>
<p>So the mount path is the point (directory) where your container will be managing the volume's files.</p>
<p>Empty dir : <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a></p>
<p>Read more at : <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</a></p>
<p>If you look at this example: <a href="https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx/blob/master/wordpress-deployment.yaml" rel="nofollow noreferrer">https://github.com/harsh4870/Kubernetes-wordpress-php-fpm-nginx/blob/master/wordpress-deployment.yaml</a></p>
<p>it has the same pattern: two containers with mount paths and an <strong>emptydir</strong> volume.</p>
<p>There, I am mounting an Nginx configuration file into the container, so Nginx uses the configuration file that I mount from outside the container.</p>
<p>My config file is stored inside a <strong>configmap</strong> or <strong>secret</strong>.</p>
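<p>To illustrate, a stripped-down fragment of the pod above: one shared <code>emptyDir</code> volume, mounted at a different <code>mountPath</code> in each container:</p>
<pre><code>volumes:                  # defined once at the pod level
- name: html
  emptyDir: {}
containers:
- name: html-updater
  volumeMounts:
  - name: html            # the same volume...
    mountPath: /mohit     # ...appears here as /mohit
- name: nginx
  volumeMounts:
  - name: html            # ...and here as the nginx web root
    mountPath: /usr/share/nginx/html
</code></pre>
<p>Whatever the updater writes under <code>/mohit</code> is immediately visible to nginx under <code>/usr/share/nginx/html</code>.</p>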
|
<p>I'm using the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> chart and I'm trying to get rid of all the <code>kube_secret</code> metrics.</p>
<p>When I query these metrics I see that they originate from a job named <code>kubernetes-service-endpoints</code>, but I can't figure out what service monitor controls it in <a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml" rel="nofollow noreferrer"><code>values.yaml</code></a>, so I tried dropping them from every single configurable service monitor - in each one of them I put:</p>
<pre class="lang-yaml prettyprint-override"><code> metricRelabelings:
- action: drop
regex: 'kube_secret_.+'
sourceLabels: [__name__]
</code></pre>
<p>Didn't help of course. What do I need to define in <code>values.yaml</code> in order to drop those metrics?
Thanks</p>
| <p>I'm silly - I defined the <code>kubernetes-service-endpoints</code> job in <code>prometheus.prometheusSpec.additionalScrapeConfigs</code>...</p>
<p>In order to remove all <code>kube_secret</code> metrics I dropped metric names that matched <code>kube_secret_.+</code> for this job and in the service monitor of <code>kubeStateMetrics</code> (<code>kubeStateMetrics.serviceMonitor.metricRelabelings</code>)</p>
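<p>For reference, a sketch of the relevant <code>values.yaml</code> pieces (assuming the job is defined under <code>additionalScrapeConfigs</code>, as it was in my case; the job's other scrape settings are elided):</p>
<pre class="lang-yaml prettyprint-override"><code>prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: kubernetes-service-endpoints
      # ... existing scrape settings ...
      metric_relabel_configs:
      - source_labels: [__name__]
        regex: 'kube_secret_.+'
        action: drop
kubeStateMetrics:
  serviceMonitor:
    metricRelabelings:
    - action: drop
      regex: 'kube_secret_.+'
      sourceLabels: [__name__]
</code></pre>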
|
<p>How can I get events messages from a pod, like this command using client-go Kubernetes API:</p>
<pre><code>kubectl describe pod spark-t2f59 -n spark
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 104s default-scheduler 0/19 nodes are available: 15 Insufficient cpu, 19 Insufficient memory.
Warning FailedScheduling 104s default-scheduler 0/19 nodes are available: 15 Insufficient cpu, 19 Insufficient memory.
Warning FailedScheduling 45s default-scheduler 0/20 nodes are available: 16 Insufficient cpu, 20 Insufficient memory.
Warning FailedScheduling 34s default-scheduler 0/20 nodes are available: 16 Insufficient cpu, 20 Insufficient memory.
Normal NotTriggerScaleUp 97s cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 Insufficient memory, 1 max node group size reached
</code></pre>
<p>Is there a way to get the same output of events but using <strong>client-go</strong> instead of <strong>kubectl</strong>??</p>
| <p>Since you know the namespace and the pod name, you can do:</p>
<pre><code>package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Field selectors on events let you filter by the involved object.
	events, err := clientset.CoreV1().Events("spark").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=spark-t2f59,involvedObject.kind=Pod",
	})
	if err != nil {
		panic(err)
	}
	for _, item := range events.Items {
		fmt.Println(item)
	}
}
</code></pre>
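<p>For a quick sanity check outside of Go, the same field selector works with <code>kubectl</code> (pod name and namespace taken from the question):</p>
<pre><code>kubectl get events -n spark --field-selector involvedObject.name=spark-t2f59
</code></pre>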
|
<p>I'm a newbie to lightweight virtual machines. I'm going to install Knative with Kata Container Runtime. Is it possible?</p>
<p>I know that Kubernetes works with the Kata Container runtime. But does Knative work properly with it?</p>
| <p>Knative should work fine with Kata Containers (I know of no incompatibility, and the runtime pods are pretty standard), but I don't think I've seen it tested or written up before.</p>
|
<p>I had a project that needed to update the DNS configuration of a Pod with an Operator:</p>
<pre><code>get dns message
get matched pod
modify:
  pod.Spec.DNSConfig = CRD_SPEC
  pod.Spec.DNSPolicy = corev1.DNSNone
  client.Update(ctx, &pod)
</code></pre>
<p>But when I implemented it, I got the following error:</p>
<pre><code> ERROR controller-runtime.manager.controller.dnsinjection Reconciler error {"reconciler group": "xxxx", "reconciler kind": "xxxxx", "name": "dnsinjection", "namespace": "default", "error": "Pod \"busybox\" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)\n core.PodSpec{\n \t... // 21 identical fields\n \tPriority: &0,\n \tPreemptionPolicy: nil,\n \tDNSConfig: &core.PodDNSConfig{\n \t\tNameservers: []string{\n \t\t\t\"1.2.3.4\",\n- \t\t\t\"0.0.0.0\",\n \t\t},\n \t\tSearches: []string{\"ns1.svc.cluster-domain.example\", \"my.dns.search.suffix\"},\n \t\tOptions: []core.PodDNSConfigOption{{Name: \"ndots\", Value: &\"2\"}, {Name: \"edns0\"}},\n \t},\n \tReadinessGates: nil,\n \tRuntimeClassName: nil,\n \t... // 3 identical fields\n }\n"}
</code></pre>
<p>The <code>DNSConfig</code> and <code>DNSPolicy</code> fields are not declared as non-<strong>updatable</strong> in the source code, so why did the update fail?</p>
<p>I got the same error with the <code>kubectl edit pod busybox</code> and <code>kubectl apply -f modified_pod.yml</code> (with DNSConfig added) commands.</p>
<p>I would appreciate it if you could tell me how to solve it.</p>
<p>As the message says, you cannot update the DNS config of a pod: <code>Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image</code>.</p>
<p>If you want to inject a DNS config into all pods you need to add the configuration before the pod is created. Look into <a href="https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74" rel="nofollow noreferrer">MutatingAdmissionWebhook</a> as an approach for this.</p>
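<p>Since these fields must be set before the pod is created, here is a minimal sketch of a pod that carries the desired DNS settings from the start (the nameserver, searches, and options are taken from the diff in the error output above):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.2.3.4
    searches:
    - ns1.svc.cluster-domain.example
    - my.dns.search.suffix
    options:
    - name: ndots
      value: "2"
    - name: edns0
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
</code></pre>
<p>A mutating webhook would inject exactly this kind of <code>dnsConfig</code>/<code>dnsPolicy</code> patch at admission time, before the pod object is persisted.</p>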
|
<p>I have created a GKE Service Account.</p>
<p>I have been trying to use it within GKE, but I get the error:</p>
<pre><code>pods "servicepod" is forbidden: error looking up service account service/serviceaccount: serviceaccount "serviceaccount" not found
</code></pre>
<p>I have followed the setup guide in this <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#gcloud_1" rel="nofollow noreferrer">documentation</a>.</p>
<p>1. Created a GCP Service Account called "serviceaccount".</p>
<p>2. Created, and downloaded, the JSON key as key.json.</p>
<p>3. Ran <code>kubectl create secret generic serviceaccountkey --from-file key.json -n service</code></p>
<p>4. Added the following items to my deployment:</p>
<pre><code>spec:
  volumes:
  - name: serviceaccountkey
    secret:
      secretName: serviceaccountkey
  containers:
    volumeMounts:
    - name: serviceaccountkey
      mountPath: /var/secrets/google
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/key.json
</code></pre>
<p>When I deploy this out, I get:
<code>pods "service-7cdbcc67b9-" is forbidden: error looking up service account service/serviceaccount: serviceaccount "serviceaccount" not found</code></p>
<p>I'm not sure what else to do to get this working, I've followed the guide and can't see anything that's been missed.</p>
<p>Any help on this would be greatly appreciated!</p>
| <p>One of the reasons for getting this error is creating a service account in one namespace and then trying to use it in another namespace.</p>
<p>You can resolve the error by binding the service account to a role in the new namespace. If the existing service account is in the <code>default</code> namespace, you can use this YAML file, with the new namespace, for the role binding:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-enforce-default
  namespace: <new-namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-enforce
subjects:
- kind: ServiceAccount
  name: kubernetes-enforce
  namespace: kube-system
</code></pre>
<p>Refer to this <a href="https://stackoverflow.com/questions/63283438/how-can-i-create-a-service-account-for-all-namespaces-in-a-kubernetes-cluster">similar issue</a> for more information.</p>
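<p>Another common cause is that the Kubernetes <code>ServiceAccount</code> object itself was never created — a GCP service account and a Kubernetes service account are separate things. A minimal sketch, using the names from the question:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount
  namespace: service
</code></pre>
<p>and then reference it from the deployment's pod template with <code>spec.serviceAccountName: serviceaccount</code>.</p>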
|
<p>I'm trying to set up a very simple 2-node k8s 1.13.3 cluster in a vSphere private cloud. The VMs are running Ubuntu 18.04. Firewalls are turned off for testing purposes, yet the initialization is failing due to a refused connection. Is there something else that could be causing this other than ports being blocked? I'm new to k8s and am trying to wrap my head around all of this. </p>
<p>I've placed a vsphere.conf in /etc/kubernetes/ as shown in this gist.
<a href="https://gist.github.com/spstratis/0395073ac3ba6dc24349582b43894a77" rel="noreferrer">https://gist.github.com/spstratis/0395073ac3ba6dc24349582b43894a77</a></p>
<p>I've also created a config file to point to when I run <code>kubeadm init</code>. Here's an example of its content:
<a href="https://gist.github.com/spstratis/086f08a1a4033138a0c42f80aef5ab40" rel="noreferrer">https://gist.github.com/spstratis/086f08a1a4033138a0c42f80aef5ab40</a></p>
<p>When I run
<code>sudo kubeadm init --config /etc/kubernetes/kubeadminitmaster.yaml</code>
it times out with the following error. </p>
<pre><code>[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
</code></pre>
<p>Checking <code>sudo systemctl status kubelet</code> shows me that the kubelet is running. I have the firewall on my master VM turned off for now for testing purposes so that I can verify the cluster will bootstrap itself. </p>
<pre><code> Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 2019-02-16 18:09:58 UTC; 24s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 16471 (kubelet)
Tasks: 18 (limit: 4704)
CGroup: /system.slice/kubelet.service
└─16471 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cloud-config=/etc/kubernetes/vsphere.conf --cloud-provider=vsphere --cgroup-driver=systemd --network-plugin=cni --pod-i
</code></pre>
<p>Here are some additional logs below showing that the connection to <a href="https://192.168.0.12:6443/" rel="noreferrer">https://192.168.0.12:6443/</a> is refused. All of this seems to be causing kubelet to fail and prevent the init process from finishing. </p>
<pre><code> Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.633721 16471 kubelet.go:2266] node "k8s-master-1" not found
Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.668213 16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master-1&limit=500&resourceVersion=0: dial tcp 192.168.0.1
Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.669283 16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.0.12:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.12:6443: connect: connection refused
Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.670479 16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.12:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-1&limit=500&resourceVersion=0: dial tcp 192.1
Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.734005 16471 kubelet.go:2266] node "k8s-master-1" not found
</code></pre>
| <p>In order to address the error (<code>dial tcp 127.0.0.1:10248: connect: connection refused</code>), which typically happens when the kubelet and Docker disagree on the cgroup driver, switch Docker to the <code>systemd</code> cgroup driver and re-initialize:</p>
<pre class="lang-sh prettyprint-override"><code>sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo kubeadm reset
sudo kubeadm init
</code></pre>
<p><strong>Use the same commands if the same error occurs while configuring worker node.</strong></p>
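<p>You can check which cgroup driver Docker is using before and after the change (with recent Docker versions, <code>docker info</code> reports it):</p>
<pre class="lang-sh prettyprint-override"><code>docker info | grep -i "cgroup driver"
# after the change this should report: Cgroup Driver: systemd
</code></pre>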
|
<p>I'm trying to do TCP/UDP port-forwarding with an ingress.</p>
<p>Following the docs: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a></p>
<p>It says to set: <code>--tcp-services-configmap</code> but doesn't tell you where to set it. I assume it is command line arguments. I then googled the list of command line arguments for nginx-ingress</p>
<p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/</a></p>
<p>Here you can clearly see its an argument of the controller:</p>
<p>--tcp-services-configmap Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic.</p>
<p><strong>First Question:</strong> how do I dynamically add to the container arguments of the nginx-ingress helm chart I don't see that documented anywhere?</p>
<p><strong>Second Question:</strong> What is the proper way to set this with the current version of nginx-ingress? Setting the command-line argument fails the container startup because the binary doesn't have that argument option.</p>
<p>In the default helm chart values.yaml there are some options for setting the namespace of the tcp-services configmap, but since the docs say I have to set it as an argument, and that argument fails the startup, I'm not sure how you actually set this.</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml</a></p>
<p>I manually edited the deployment and set the flag on the container args:</p>
<pre><code>- args:
  - -nginx-plus=false
  - -nginx-reload-timeout=60000
  - -enable-app-protect=false
  - -nginx-configmaps=$(POD_NAMESPACE)/emoney-nginx-controller-nginx-ingress
  - -default-server-tls-secret=$(POD_NAMESPACE)/emoney-nginx-controller-nginx-ingress-default-server-tls
  - -ingress-class=emoney-ingress
  - -health-status=false
  - -health-status-uri=/nginx-health
  - -tcp-services-configmap=emoney-node/tcp-services-configmap
  - -nginx-debug=false
  - -v=1
  - -nginx-status=true
  - -nginx-status-port=8080
  - -nginx-status-allow-cidrs=127.0.0.1
  - -report-ingress-status
  - -external-service=emoney-nginx-controller-nginx-ingress
  - -enable-leader-election=true
  - -leader-election-lock-name=emoney-nginx-controller-nginx-ingress-leader-election
  - -enable-prometheus-metrics=true
  - -prometheus-metrics-listen-port=9113
  - -prometheus-tls-secret=
  - -enable-custom-resources=true
  - -enable-tls-passthrough=false
  - -enable-snippets=false
  - -enable-preview-policies=false
  - -ready-status=true
  - -ready-status-port=8081
  - -enable-latency-metrics=false
  env:
</code></pre>
<p>When I set this like the docs say should be possible the pod fails to start up because it errors out saying that argument isn't an option of the binary.</p>
<pre><code>kubectl logs emoney-nginx-controller-nginx-ingress-5769565cc7-vmgrf -n emoney-node
flag provided but not defined: -tcp-services-configmap
Usage of /nginx-ingress:
-alsologtostderr
log to standard error as well as files
-default-server-tls-secret string
A Secret with a TLS certificate and key for TLS termination of the default server. Format: <namespace>/<name>.
If not set, than the certificate and key in the file "/etc/nginx/secrets/default" are used.
If "/etc/nginx/secrets/default" doesn't exist, the Ingress Controller will configure NGINX to reject TLS connections to the default server.
If a secret is set, but the Ingress controller is not able to fetch it from Kubernetes API or it is not set and the Ingress Controller
fails to read the file "/etc/nginx/secrets/default", the Ingress controller will fail to start.
-enable-app-protect
Enable support for NGINX App Protect. Requires -nginx-plus.
-enable-custom-resources
Enable custom resources (default true)
-enable-internal-routes
Enable support for internal routes with NGINX Service Mesh. Requires -spire-agent-address and -nginx-plus. Is for use with NGINX Service Mesh only.
-enable-latency-metrics
Enable collection of latency metrics for upstreams. Requires -enable-prometheus-metrics
-enable-leader-election
Enable Leader election to avoid multiple replicas of the controller reporting the status of Ingress, VirtualServer and VirtualServerRoute resources -- only one replica will report status (default true). See -report-ingress-status flag. (default true)
-enable-preview-policies
Enable preview policies
-enable-prometheus-metrics
Enable exposing NGINX or NGINX Plus metrics in the Prometheus format
-enable-snippets
Enable custom NGINX configuration snippets in Ingress, VirtualServer, VirtualServerRoute and TransportServer resources.
-enable-tls-passthrough
Enable TLS Passthrough on port 443. Requires -enable-custom-resources
-external-service string
Specifies the name of the service with the type LoadBalancer through which the Ingress controller pods are exposed externally.
The external address of the service is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. For Ingress resources only: Requires -report-ingress-status.
-global-configuration string
The namespace/name of the GlobalConfiguration resource for global configuration of the Ingress Controller. Requires -enable-custom-resources. Format: <namespace>/<name>
-health-status
Add a location based on the value of health-status-uri to the default server. The location responds with the 200 status code for any request.
Useful for external health-checking of the Ingress controller
-health-status-uri string
Sets the URI of health status location in the default server. Requires -health-status (default "/nginx-health")
-ingress-class string
A class of the Ingress controller.
An IngressClass resource with the name equal to the class must be deployed. Otherwise, the Ingress Controller will fail to start.
The Ingress controller only processes resources that belong to its class - i.e. have the "ingressClassName" field resource equal to the class.
The Ingress Controller processes all the VirtualServer/VirtualServerRoute/TransportServer resources that do not have the "ingressClassName" field for all versions of kubernetes. (default "nginx")
-ingress-template-path string
Path to the ingress NGINX configuration template for an ingress resource.
(default for NGINX "nginx.ingress.tmpl"; default for NGINX Plus "nginx-plus.ingress.tmpl")
-ingresslink string
Specifies the name of the IngressLink resource, which exposes the Ingress Controller pods via a BIG-IP system.
The IP of the BIG-IP system is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. For Ingress resources only: Requires -report-ingress-status.
-leader-election-lock-name string
Specifies the name of the ConfigMap, within the same namespace as the controller, used as the lock for leader election. Requires -enable-leader-election. (default "nginx-ingress-leader-election")
-log_backtrace_at value
when logging hits line file:N, emit a stack trace
-log_dir string
If non-empty, write log files in this directory
-logtostderr
log to standard error instead of files
-main-template-path string
Path to the main NGINX configuration template. (default for NGINX "nginx.tmpl"; default for NGINX Plus "nginx-plus.tmpl")
-nginx-configmaps string
A ConfigMap resource for customizing NGINX configuration. If a ConfigMap is set,
but the Ingress controller is not able to fetch it from Kubernetes API, the Ingress controller will fail to start.
Format: <namespace>/<name>
-nginx-debug
Enable debugging for NGINX. Uses the nginx-debug binary. Requires 'error-log-level: debug' in the ConfigMap.
-nginx-plus
Enable support for NGINX Plus
-nginx-reload-timeout int
The timeout in milliseconds which the Ingress Controller will wait for a successful NGINX reload after a change or at the initial start. (default 60000) (default 60000)
-nginx-status
Enable the NGINX stub_status, or the NGINX Plus API. (default true)
-nginx-status-allow-cidrs string
Add IPv4 IP/CIDR blocks to the allow list for NGINX stub_status or the NGINX Plus API. Separate multiple IP/CIDR by commas. (default "127.0.0.1")
-nginx-status-port int
Set the port where the NGINX stub_status or the NGINX Plus API is exposed. [1024 - 65535] (default 8080)
-prometheus-metrics-listen-port int
Set the port where the Prometheus metrics are exposed. [1024 - 65535] (default 9113)
-prometheus-tls-secret string
A Secret with a TLS certificate and key for TLS termination of the prometheus endpoint.
-proxy string
Use a proxy server to connect to Kubernetes API started by "kubectl proxy" command. For testing purposes only.
The Ingress controller does not start NGINX and does not write any generated NGINX configuration files to disk
-ready-status
Enables the readiness endpoint '/nginx-ready'. The endpoint returns a success code when NGINX has loaded all the config after the startup (default true)
-ready-status-port int
Set the port where the readiness endpoint is exposed. [1024 - 65535] (default 8081)
-report-ingress-status
Updates the address field in the status of Ingress resources. Requires the -external-service or -ingresslink flag, or the 'external-status-address' key in the ConfigMap.
-spire-agent-address string
Specifies the address of the running Spire agent. Requires -nginx-plus and is for use with NGINX Service Mesh only. If the flag is set,
but the Ingress Controller is not able to connect with the Spire Agent, the Ingress Controller will fail to start.
-stderrthreshold value
logs at or above this threshold go to stderr
-transportserver-template-path string
Path to the TransportServer NGINX configuration template for a TransportServer resource.
(default for NGINX "nginx.transportserver.tmpl"; default for NGINX Plus "nginx-plus.transportserver.tmpl")
-v value
log level for V logs
-version
Print the version, git-commit hash and build date and exit
-virtualserver-template-path string
Path to the VirtualServer NGINX configuration template for a VirtualServer resource.
(default for NGINX "nginx.virtualserver.tmpl"; default for NGINX Plus "nginx-plus.virtualserver.tmpl")
-vmodule value
comma-separated list of pattern=N settings for file-filtered logging
-watch-namespace string
Namespace to watch for Ingress resources. By default the Ingress controller watches all namespaces
-wildcard-tls-secret string
A Secret with a TLS certificate and key for TLS termination of every Ingress host for which TLS termination is enabled but the Secret is not specified.
Format: <namespace>/<name>. If the argument is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
If the argument is set, but the Ingress controller is not able to fetch the Secret from Kubernetes API, the Ingress controller will fail to start.
</code></pre>
<p>Config Map</p>
<pre><code>apiVersion: v1
data:
  "1317": emoney-node/emoney-api:1317
  "9090": emoney-node/emoney-grpc:9090
  "26656": emoney-node/emoney:26656
  "26657": emoney-node/emoney-rpc:26657
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: emoney
    meta.helm.sh/release-namespace: emoney-node
  creationTimestamp: "2021-11-01T18:06:49Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:1317: {}
        f:9090: {}
        f:26656: {}
        f:26657: {}
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
    manager: helm
    operation: Update
    time: "2021-11-01T18:06:49Z"
  name: tcp-services-configmap
  namespace: emoney-node
  resourceVersion: "2056146"
  selfLink: /api/v1/namespaces/emoney-node/configmaps/tcp-services-configmap
  uid: 188f5dc8-02f9-4ee5-a5e3-819d00ff8b67
Name: emoney
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.30.240
Port: p2p 26656/TCP
TargetPort: 26656/TCP
Endpoints: 10.0.36.192:26656
Session Affinity: None
Events: <none>
Name: emoney-api
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.166.97
Port: api 1317/TCP
TargetPort: 1317/TCP
Endpoints: 10.0.36.192:1317
Session Affinity: None
Events: <none>
Name: emoney-grpc
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.136.177
Port: grpc 9090/TCP
TargetPort: 9090/TCP
Endpoints: 10.0.36.192:9090
Session Affinity: None
Events: <none>
Name: emoney-nginx-controller-nginx-ingress
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney-nginx-controller
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=emoney-nginx-controller-nginx-ingress
helm.sh/chart=nginx-ingress-0.11.3
Annotations: meta.helm.sh/release-name: emoney-nginx-controller
meta.helm.sh/release-namespace: emoney-node
Selector: app=emoney-nginx-controller-nginx-ingress
Type: LoadBalancer
IP: 172.20.16.202
LoadBalancer Ingress: lb removed
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32250/TCP
Endpoints: 10.0.43.32:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32375/TCP
Endpoints: 10.0.43.32:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30904
Events: <none>
Name: emoney-rpc
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.42.163
Port: rpc 26657/TCP
TargetPort: 26657/TCP
Endpoints: 10.0.36.192:26657
Session Affinity: None
Events: <none>
helm repo add nginx-stable https://helm.nginx.com/stable --kubeconfig=./kubeconfig || echo "helm repo already added"
helm repo update --kubeconfig=./kubeconfig || echo "helm repo already updated"
helm upgrade ${app_name}-nginx-controller -n ${app_namespace} nginx-stable/nginx-ingress \
--install \
--kubeconfig=./kubeconfig \
--create-namespace \
--set controller.service.type=LoadBalancer \
--set controller.tcp.configMapNamespace=${app_namespace} \
--set controller.ingressClass="${app_name}-ingress"
kubectl rollout status -w deployment/${app_name} --kubeconfig=./kubeconfig -n ${app_namespace}
#- --tcp-services-configmap=emoney-node/tcp-services-configmap
</code></pre>
| <p>You could say the helm chart is biased in that it doesn't expose the option to set those args as chart values. It sets them itself, based on conditional logic, when the values require it.</p>
<p>When I check the nginx template in the repo, I see that <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-deployment.yaml#L81" rel="noreferrer">additional args</a> are passed from the template in the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/_params.tpl" rel="noreferrer">params helper file</a>. They are generated dynamically, e.g.:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- if .Values.tcp }}
- --tcp-services-configmap={{ default "$(POD_NAMESPACE)" .Values.controller.tcp.configMapNamespace }}/{{ include "ingress-nginx.fullname" . }}-tcp
{{- end }}
</code></pre>
<p>So, it seems it will use this flag only if the tcp value isn't empty. On the same condition, it will <a href="https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-configmap-tcp.yaml" rel="noreferrer">create the configmap</a>.</p>
<p>Further, the tcp value allows you to set a key <code>configMapNamespace</code>. If you set only this key, the flag is used as per the params helper. You then need to create your configmap in the provided namespace and make its name match <code>{{ include "ingress-nginx.fullname" . }}-tcp</code>.</p>
<p>So you could create the configmap in the <code>default</code> namespace and name it <code>ingress-nginx-tcp</code> or similar, depending on how you set the release name.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create configmap ingress-nginx-tcp --from-literal 1883=mqtt/emqx:1883 -n default
helm install --set controller.tcp.configMapNamespace=default ingress-nginx ingress-nginx/ingress-nginx
</code></pre>
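<p>Declaratively, the same ConfigMap could be created with a manifest like the sketch below (the name must resolve to <code>&lt;release-name&gt;-tcp</code>; the release name here is assumed to be <code>ingress-nginx</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  # must match "{{ include "ingress-nginx.fullname" . }}-tcp"
  name: ingress-nginx-tcp
  namespace: default
data:
  # exposed port -> namespace/service:port
  "1883": mqtt/emqx:1883
</code></pre>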
<p>I think the only problem with that is that you cannot create it in the <code>.Release.Namespace</code>, since when tcp isn't empty it will attempt to create a configmap there by itself, which would result in conflicts. At least that's how I interpret the templates in the chart repo.</p>
<p>I personally have configured TCP via a values file that I pass to helm with <code>-f</code>.</p>
<pre class="lang-sh prettyprint-override"><code>helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
</code></pre>
<pre class="lang-yaml prettyprint-override"><code># configure the tcp configmap
tcp:
1883: mqtt/emqx:1883
8883: mqtt/emqx:8883
# enable the service and expose the tcp ports.
# be careful as this will potentially make them
# available on the public web
controller:
service:
enabled: true
ports:
http: 80
https: 443
mqtt: 1883
mqttssl: 8883
targetPorts:
http: http
https: https
mqtt: mqtt
mqttssl: mqttssl
</code></pre>
|
<p>What options are available to me if I want to restrict the usage of Priorty Classes, if I'm running a managed Kubernetes service (AKS)?</p>
<p>The use-case here is that, I am as cluster admin want to restrict the usage of these. So that developers are not using those that are supposed to be used by critical components.</p>
<p>Multi-tenant cluster, semi zero-trust policy. This is something that could be "abused".</p>
<p>Is this something I can achieve with resource quotas even though I'm running Azure Kubernetes Service?
<a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/policy/resource-quotas/#limit-priority-class-consumption-by-default</a></p>
| <p>A cloud-managed cluster does not allow you to customize the API server. In this case, you can use <a href="https://kubernetes.io/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/#policies-and-constraints" rel="nofollow noreferrer">OPA Gatekeeper</a> or <a href="https://kyverno.io/policies/other/allowed_pod_priorities/allowed_pod_priorities/" rel="nofollow noreferrer">Kyverno</a> to write rules that reject priority class settings that should be reserved for critical components.</p>
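<p>As a sketch, a Kyverno <code>ClusterPolicy</code> along these lines can block pods from requesting a reserved class (the policy name and the reserved class name <code>reserved-critical</code> are illustrative; adjust to your cluster):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-pod-priorities
spec:
  validationFailureAction: enforce
  background: true
  rules:
    - name: block-reserved-priority-class
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "This priorityClassName is reserved for cluster components."
        pattern:
          spec:
            # the =() anchor validates the field only when it is present,
            # so pods that omit priorityClassName are still admitted
            =(priorityClassName): "!reserved-critical"
</code></pre>
<p>You would typically scope such a rule to tenant namespaces (e.g. via a namespace selector in the <code>match</code> block) so cluster add-ons are unaffected.</p>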
|
<p>I had a project that wanted to update the DNS configuration of Pod with Operator,</p>
<pre><code>get dns message
get matched pod
modify:
pod.Spec.DNSConfig = CRD_SPEC
pod.Spec.DNSPolicy = corev1.DNSNone
client.Update(ctx,&pod)
</code></pre>
<p>But when I implemented it, I got the following error:</p>
<pre><code> ERROR controller-runtime.manager.controller.dnsinjection Reconciler error {"reconciler group": "xxxx", "reconciler kind": "xxxxx", "name": "dnsinjection", "namespace": "default", "error": "Pod \"busybox\" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)\n core.PodSpec{\n \t... // 21 identical fields\n \tPriority: &0,\n \tPreemptionPolicy: nil,\n \tDNSConfig: &core.PodDNSConfig{\n \t\tNameservers: []string{\n \t\t\t\"1.2.3.4\",\n- \t\t\t\"0.0.0.0\",\n \t\t},\n \t\tSearches: []string{\"ns1.svc.cluster-domain.example\", \"my.dns.search.suffix\"},\n \t\tOptions: []core.PodDNSConfigOption{{Name: \"ndots\", Value: &\"2\"}, {Name: \"edns0\"}},\n \t},\n \tReadinessGates: nil,\n \tRuntimeClassName: nil,\n \t... // 3 identical fields\n }\n"}
</code></pre>
<p>The <code>DNSConfig</code> and <code>DNSPolicy</code> fields are not declared as non-<strong>updatable</strong> in the source code, so why did the update fail?</p>
<p>I got the same error with the <code>kubectl edit pod busybox</code> and <code>kubectl apply -f modified_pod.yml</code> (adding DNSConfig) commands.</p>
<p>I would appreciate it if you could tell me how to solve it.</p>
| <p>As the message says, Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement" rel="nofollow noreferrer">does not support updating most pod fields directly</a>:</p>
<blockquote>
<p>Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#patch-pod-v1-core" rel="nofollow noreferrer"><code>patch</code></a>, and <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#replace-pod-v1-core" rel="nofollow noreferrer"><code>replace</code></a> have some limitations:</p>
<ul>
<li>Most of the metadata about a Pod is immutable. For example, you cannot change the <code>namespace</code>, <code>name</code>, <code>uid</code>, or <code>creationTimestamp</code> fields; the <code>generation</code> field is unique. It only accepts updates that increment the field's current value.</li>
<li>If the <code>metadata.deletionTimestamp</code> is set, no new entry can be added to the <code>metadata.finalizers</code> list.</li>
<li>Pod updates may not change fields other than <code>spec.containers[*].image</code>, <code>spec.initContainers[*].image</code>, <code>spec.activeDeadlineSeconds</code> or <code>spec.tolerations</code>. For <code>spec.tolerations</code>, you can only add new entries.</li>
</ul>
</blockquote>
<p><em>Why is that?</em></p>
<p>Pods in Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">are designed as relatively ephemeral, disposable entities</a>:</p>
<blockquote>
<p>You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">controller</a>), the new Pod is scheduled to run on a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node</a> in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is <em>evicted</em> for lack of resources, or the node fails.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/#using-pods" rel="nofollow noreferrer">Kubernetes assumes that for managing pods and doing any updates you should use</a> a <a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow noreferrer">workload resources</a> instead of creating pods directly:</p>
<blockquote>
<p>Pods are generally not created directly and are created using workload resources. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">Working with Pods</a> for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset" rel="nofollow noreferrer">DaemonSet</a></li>
</ul>
</blockquote>
<p>You can easily update most fields in workload resources definition and it will work properly. Keep in mind that it won't edit any existing pods - <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates" rel="nofollow noreferrer">it will delete the currently running pods with old configuration and start the new ones - Kubernetes will make sure that this process goes smoothly</a>:</p>
<blockquote>
<p>Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create replacement Pods that use the updated template.</p>
</blockquote>
<blockquote>
<p>For example, the StatefulSet controller ensures that the running Pods match the current pod template for each StatefulSet object. If you edit the StatefulSet to change its pod template, the StatefulSet starts to create new Pods based on the updated template. Eventually, all of the old Pods are replaced with new Pods, and the update is complete.</p>
</blockquote>
<blockquote>
<p>Each workload resource implements its own rules for handling changes to the Pod template. If you want to read more about StatefulSet specifically, read <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets" rel="nofollow noreferrer">Update strategy</a> in the StatefulSet Basics tutorial.</p>
</blockquote>
<p>So based on all above information I'd suggest to switch to workload resource, for example <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a>:</p>
<blockquote>
<p>A <em>Deployment</em> provides declarative updates for <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pods</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSets</a>.</p>
</blockquote>
<p>For example - right now I have pod with below definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- image: busybox:1.28
command:
- sleep
- "9999999"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
</code></pre>
<p>When I try to run <code>kubectl edit pod busybox</code> command to change <code>dnsPolicy</code> I have the same error as you.
However, If I changed to deployment with the same pod definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox-deployment
labels:
app: busybox
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
containers:
- image: busybox:1.28
command:
- sleep
- "9999999"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
</code></pre>
<p>Then if I run <code>kubectl edit deploy busybox-deployment</code> and change <code>dnsPolicy</code> field I will get a new pod with new configuration (the old pod will be automatically deleted).</p>
<p>Keep in mind that if you want to stick with a direct pod definition, you can always just delete the pod and apply a new, modified yaml as you tried (<code>kubectl delete pod {your-pod-name} && kubectl apply -f {modified.yaml}</code>). It will work properly.</p>
<p>Also check:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/41583672/kubernetes-deployments-vs-statefulsets">Kubernetes Deployments vs StatefulSets</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/" rel="nofollow noreferrer">Workload Resources</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Pod Lifecycle</a></li>
</ul>
|
<p>Does anyone know if there is a way to define static selectors based on the namespace name instead of label selectors? The reason is that some of the namespaces are created by an operator and I don't have any control over the labels.</p>
<p>Thanks
Essey</p>
| <p>Each namespace has the so-called <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/#kubernetes-io-metadata-name" rel="nofollow noreferrer">well-known label</a> <code>kubernetes.io/metadata.name</code></p>
<p>So your <code>namespaceSelector</code> can be something like:</p>
<pre class="lang-yaml prettyprint-override"><code>namespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: "In"
values:
- "staging"
- "demo"
</code></pre>
|
<p>I'm trying to apply egress port range for my k8s network policy like this:</p>
<pre><code> egress:
- to:
- ipBlock:
cidr: 10.0.0.0/24
ports:
- protocol: TCP
port: 32000
endPort: 32768
</code></pre>
<p>It starts fine, but when I describe it, I only see that port <code>32000</code> is allowed.
Am I missing something, or have I made a mistake?</p>
<p>Thanks.</p>
| <p>It seems you took this example from <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#targeting-a-range-of-ports" rel="nofollow noreferrer">Targeting a range of Ports</a>. Here are 2 questions:</p>
<ol>
<li><p><code>endPort</code> works only with the <code>NetworkPolicyEndPort</code> feature gate enabled. Although it is stated to be enabled by default, can you please
check whether it is actually turned on for you?
<a href="https://i.stack.imgur.com/R0RwO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R0RwO.png" alt="enter image description here" /></a></p>
</li>
<li><p>What's your CNI plugin, and does it support <code>endPort</code> in the NetworkPolicy spec?</p>
</li>
</ol>
<p><a href="https://i.stack.imgur.com/bB8ou.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bB8ou.png" alt="enter image description here" /></a></p>
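<p>For reference, a complete manifest matching the snippet from the question (assuming a CNI that supports <code>endPort</code>, e.g. recent Calico or Cilium) would look like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-port-range
  namespace: default
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768   # requires the NetworkPolicyEndPort feature gate
</code></pre>
<p>If <code>kubectl describe</code> still shows only <code>32000</code>, that usually points to the feature gate or the CNI plugin silently ignoring <code>endPort</code>.</p>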
|
<p>Using latest VSCode and the plugin version.</p>
<p>AWS Toolkit is working fine.</p>
<p>kubectl get pods works fine from terminal.</p>
<p>The Kubernetes extension shows the cluster name, but when I try to open Nodes or other items I get this error:</p>
<pre><code>Unable to parse config file: /Users/yurib/.aws/config Unable to parse config file: /Users/yurib/.aws/config Unable to parse config file: /Users/yurib/.aws/config Unable to parse config file: /Users/yurib/.aws/config Unable to parse config file: /Users/yurib/.aws/config Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255
</code></pre>
<p>No logs, nothing...</p>
<p><a href="https://i.stack.imgur.com/QkI4k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QkI4k.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/J1ogw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J1ogw.png" alt="enter image description here" /></a></p>
<p>config:</p>
<pre><code>[okta]
# Okta Dev APP
#####################
aws_saml_url = home/amazon_aws/adfdglkdfgkldfgj/274
# Dev is the HUB account
#########################
[profile dev]
# Role to assume - each team will use it’s own role
role_arn = arn:aws:iam::xxxxxxxx:role/okta-admin-role
region = us-east-1
# source_profile = dev
session_ttl = 12h
#Spoke Accounts
###################
[profile development]
# Role to assume - each team will use it’s own role
role_arn = arn:aws:iam::xxxxxxxx:role/okta-admin-role
region = us-east-1
source_profile = dev
session_ttl = 12h
#Staging
##########
[profile staging]
source_profile = dev
role_arn = arn:aws:iam::xxxxxxxx:role/aws-okta-admin-role
region = us-east-1
assume_role_ttl = 1h
#GAS
##########
[profile gass]
source_profile = dev
role_arn = arn:aws:iam::xxxxxxxx:role/aws-okta-admin-role
region = us-east-1
assume_role_ttl = 1h
#CRISPR
###########
[profile cris]
source_profile = dev
role_arn = arn:aws:iam::xxxxxxxx:role/aws-okta-admin-role
region = eu-west-1
assume_role_ttl = 1h
</code></pre>
<p>credentials:</p>
<pre><code>[dev]
aws_access_key_id = XXXXXXXXX
aws_secret_access_key = XXXXXX
aws_session_token = XXXXXXXXX
aws_security_token = XXXXXXXXX
[gas]
aws_access_key_id = XXXXXXXXX
aws_secret_access_key = XXXXXXXXX
aws_session_token = XXXXXXXXX
aws_security_token = XXXXXXXXX
[crispr]
aws_access_key_id = XXXXXXXXX
aws_secret_access_key = XXXXXXXXX
aws_session_token = XXXXXXXXX
aws_security_token = XXXXXXXXX
</code></pre>
<p>The cluster is on CRISPR account.</p>
<p>kubeconfig is ok.</p>
| <p>According to the <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html" rel="nofollow noreferrer">docs</a>,
the config file should look like this:</p>
<pre><code>[default]
aws_access_key_id = xxxxxxxxxxxxxxx
aws_secret_access_key = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
</code></pre>
<p>I previously found a broken config on my Mac; note the commented-out <code>[profile user1]</code> header near the bottom, with key entries left below it, which breaks parsing:</p>
<pre><code># Amazon Web Services Config File used by AWS CLI, SDKs, and tools
# This file was created by the AWS Toolkit for JetBrains plugin.
#
# Your AWS credentials are represented by access keys associated with IAM users.
# For information about how to create and manage AWS access keys for a user, see:
# https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
#
# This config file can store multiple access keys by placing each one in a
# named "profile". For information about how to change the access keys in a
# profile or to add a new profile with a different access key, see:
# https://docs.aws.amazon.com/cli/latest/userguide/cli-config-files.html
#
# If both a credential and config file exists, the values in the credential file
# take precedence
[default]
# The access key and secret key pair identify your account and grant access to AWS.
aws_access_key_id = [accessKey]
# Treat your secret key like a password. Never share your secret key with anyone. Do
# not post it in online forums, or store it in a source control system. If your secret
# key is ever disclosed, immediately use IAM to delete the access key and secret key
# and create a new key pair. Then, update this file with the replacement key details.
aws_secret_access_key = [secretKey]
# [profile user1]
aws_access_key_id = xxxxxxxxxxxxxxx
aws_secret_access_key = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
</code></pre>
|
<p>I'm running my deployment on OpenShift, and found that I need to have a GID of 2121 to have write access.</p>
<p>I still don't seem to have write access when I try this:</p>
<pre><code>security:
podSecurityContext:
fsGroup: 2121
</code></pre>
<p>This gives me a <code>2121 is not an allowed group</code> error.</p>
<p>However, this does seem to be working for me:</p>
<pre><code>security:
podSecurityContext:
fsGroup: 100010000 # original fsGroup value
supplementalGroups: [2121]
</code></pre>
<p>I am wondering what the difference of <code>fsGroup</code> and <code>supplementalGroups</code> is.</p>
<p>I've read the documentation <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems" rel="nofollow noreferrer">here</a> and have also looked at <code>kubectl explain deployment.spec.template.spec.securityContext</code>, but I still can't quite understand the difference.</p>
<p>Could I get some clarification on what are the different use cases?</p>
| <p><code>FSGroup</code> is used to set the group that owns the pod volumes. This group will be used by Kubernetes to change the permissions of all files in volumes, when volumes are mounted by a pod.</p>
<blockquote>
<ol>
<li><p>The owning GID will be the FSGroup</p>
</li>
<li><p>The setgid bit is set (new files created in the volume will be owned by FSGroup)</p>
</li>
<li><p>The permission bits are OR'd with rw-rw----</p>
<p>If unset, the Kubelet will not modify the ownership and permissions of
any volume.</p>
</li>
</ol>
</blockquote>
<p>Some caveats when using <code>FSGroup</code>:</p>
<ul>
<li><p>Changing the ownership of a volume for slow and/or large file systems
can cause delays in pod startup.</p>
</li>
<li><p>This can harm other processes using the same volume if their
processes do not have permission to access the new GID.</p>
</li>
</ul>
<p><code>SupplementalGroups</code> controls which supplemental group IDs can be assigned to processes in a pod.</p>
<blockquote>
<p>A list of groups applied to the first process run in each container,
in addition to the container's primary GID. If unspecified, no groups
will be added to any container.</p>
</blockquote>
<p>Additionally from the <a href="https://docs.openshift.com/container-platform/4.9/storage/persistent_storage/persistent-storage-nfs.html#storage-persistent-storage-nfs-group-ids_persistent-storage-nfs" rel="noreferrer">OpenShift documentation</a>:</p>
<blockquote>
<p>The recommended way to handle NFS access, assuming it is not an option
to change permissions on the NFS export, is to use supplemental
groups. Supplemental groups in OpenShift Container Platform are used
for shared storage, of which NFS is an example. In contrast, block
storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup
value in the securityContext of the pod.</p>
</blockquote>
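<p>A minimal pod spec showing the two fields side by side (GID values are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    # volumes are made group-owned by this GID when mounted
    fsGroup: 2121
    # extra GIDs attached to the first process in every container
    supplementalGroups: [5555, 6666]
  containers:
    - name: demo
      image: busybox:1.28
      command: ["sh", "-c", "id; ls -ln /data; sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
</code></pre>
<p>Running <code>id</code> inside the container should then list 2121 and the supplemental GIDs among the process's groups, and <code>/data</code> should be group-owned by GID 2121.</p>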
|
<p>I have a service that uses Spring Cloud Kubernetes Config to reload its configuration when a value in a ConfigMap changes. That all works great.</p>
<p>Is it possible to use Spring Cloud Kubernetes (or one of its dependencies) to <strong>write</strong> a ConfigMap value? I didn't see any examples of this in the documentation (<a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/" rel="nofollow noreferrer">here</a>). Can I do this programmatically, or do I need to call the underlying Kubernetes APIs to do this?</p>
| <p>Based on Eugene's reply:</p>
<p>No, this is not possible at the moment. You can go to <a href="https://github.com/spring-cloud/spring-cloud-kubernetes#1-why-do-you-need-spring-cloud-kubernetes" rel="nofollow noreferrer">GitHub</a> and create an issue explaining your use case, and this feature <strong>may be added</strong> in a future release.</p>
|
<p>My k8s version:</p>
<pre><code> kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.7-eks-d88609", GitCommit:"d886092805d5cc3a47ed5cf0c43de38ce442dfcb", GitTreeState:"clean", BuildDate:"2021-07-31T00:29:12Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
</code></pre>
<p>My cronjob file:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
labels:
app: git-updater
name: git-updater
namespace: my-ns
spec:
concurrencyPolicy: Replace
failedJobsHistoryLimit: 2
jobTemplate:
spec:
template:
metadata:
labels:
app: git-updater
name: git-updater
spec:
containers:
- args:
- sh
- -c
- apk add --no-cache git openssh-client && cd /repos && ls -d */*.git/
| while read dir; do cd /repos/$dir; git gc; git fetch --prune; done
image: alpine
name: updater
volumeMounts:
- mountPath: test
name: test
- mountPath: test
name: test
restartPolicy: Never
volumes:
- persistentVolumeClaim:
claimName: pvc
name: repos
schedule: '@daily'
successfulJobsHistoryLimit: 4
...
</code></pre>
<p>When I create the job from file, all goes well:<br></p>
<pre><code>kubectl -n my-ns create -f git-updater.yaml
cronjob.batch/git-updater created
</code></pre>
<p>But I would like to trigger it manually just for testing purposes, so I do:<br></p>
<pre><code>kubectl -n my-ns create job --from=cronjob/git-updater test-job
error: unknown object type *v1beta1.CronJob
</code></pre>
<p>Which is strange, because I have just been able to create it from file.<br>
In another similar post it was suggested to switch from apiVersion <code>v1beta1</code> to <code>v1</code>... but when I do so, I get:<br></p>
<pre><code> kubectl -n my-ns create -f git-updater.yaml
error: unable to recognize "git-updater.yaml": no matches for kind "CronJob" in version "batch/v1"
</code></pre>
<p>I am a little stuck here, how can I test my newly and successfully created CronJob?</p>
| <p>That's your <strong>v1.20</strong> cluster version causing this.
The short answer is that you should <strong>upgrade the cluster to 1.21</strong>, where CronJobs work more or less stably.</p>
<p>Check</p>
<ol>
<li><a href="https://stackoverflow.com/a/67521713/9929015">no matches for kind "CronJob" in version "batch/v1"</a>, especially the comment:</li>
</ol>
<blockquote>
<p>One thing is the api-version, another one is in which version the
resource you are creating is available. By version 1.19.x you do have
batch/v1 as an available api-resource, but you don't have the resource
CronJob under it.</p>
</blockquote>
<ol start="2">
<li><a href="https://stackoverflow.com/a/68902379/9929015">Kubernetes Create Job from Cronjob not working</a></li>
</ol>
<hr />
<p>I have a 1.20.9 GKE cluster and face the same issue as you:
<pre><code>$kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.9-gke.1001", GitCommit:"1fe18c314ed577f6047d2712a9d1c8e498e22381", GitTreeState:"clean", BuildDate:"2021-08-23T23:06:28Z", GoVersion:"go1.15.13b5", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1
cronjob.batch/git-updater created
$kubectl -n my-ns create job --from=cronjob/git-updater test-job
error: unknown object type *v1beta1.CronJob
</code></pre>
<hr />
<p>At the same time, your <code>cronjob</code> yaml works perfectly with <code>apiVersion: batch/v1</code> on a <strong>1.21</strong> GKE cluster.</p>
<pre><code>$kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3-gke.2001", GitCommit:"77cdf52c2a4e6c1bbd8ae4cd75c864e9abf4c79e", GitTreeState:"clean", BuildDate:"2021-08-20T18:38:46Z", GoVersion:"go1.16.6b7", Compiler:"gc", Platform:"linux/amd64"}
$kubectl -n my-ns create -f cronjob.yaml
cronjob.batch/git-updater created
$ kubectl -n my-ns create job --from=cronjob/git-updater test-job
job.batch/test-job created
$kubectl describe job test-job -n my-ns
Name: test-job
Namespace: my-ns
Selector: controller-uid=ef8ab972-cff1-4889-ae11-60c67a374660
Labels: app=git-updater
controller-uid=ef8ab972-cff1-4889-ae11-60c67a374660
job-name=test-job
Annotations: cronjob.kubernetes.io/instantiate: manual
Parallelism: 1
Completions: 1
Start Time: Tue, 02 Nov 2021 17:54:38 +0000
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 16m job-controller Created pod: test-job-vcxx9
</code></pre>
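<p>For completeness, on 1.21+ only the header of the manifest needs to change to use the stable API; the rest of the spec from the question stays the same:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1   # stable since 1.21; batch/v1beta1 is deprecated
kind: CronJob
metadata:
  name: git-updater
  namespace: my-ns
# spec: unchanged from the original manifest
</code></pre>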
|
<p>I've installed <strong>Prometheus</strong> and <strong>Grafana</strong> to monitor my <strong>K8S cluster</strong> and <strong>microservices</strong> using <strong>helm charts</strong>:</p>
<pre><code>helm install monitoring prometheus-community/kube-promehteus-stack --values prometheus-values.yaml --version 16.10.0 --namespace monitoring --create-namespace
</code></pre>
<p>the content of <code>promehteus-values.yaml</code> is:</p>
<pre><code>prometheus:
prometheusSpec:
serviceMonitorSelector:
matchLabels:
prometheus: devops
commonLabels:
prometheus: devops
grafana:
adminPassword: test123
</code></pre>
<p>Then I installed <code>kong-ingress-controller</code> using <code>helm-charts</code>:</p>
<pre><code>helm install kong kong/kong --namespace kong --create-namespace --values kong.yaml --set ingressController.installCRDs=false
</code></pre>
<p>the content of <code>kong.yaml</code> file is:</p>
<pre><code>podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8100"
</code></pre>
<p>I've also changed the value of <code>metricsBindAdress</code> in <code>kube-proxy</code> <strong>configmap</strong> to <code>0.0.0.0:10249</code> .</p>
<p>Then I installed the <code>kong prometheus plugin</code> using this <code>yaml file</code>:</p>
<pre><code>apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
name: prometheus
annotations:
kubernetes.io/ingress.class: kong
labels:
global: "true"
plugin: prometheus
</code></pre>
<p>My <code>kong endpoint</code> is :</p>
<p><code>$ kubectl edit endpoints -n kong kong-kong-proxy</code></p>
<pre><code># Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Endpoints
metadata:
annotations:
endpoints.kubernetes.io/last-change-trigger-time: "2021-10-27T03:28:25Z"
creationTimestamp: "2021-10-26T04:44:57Z"
labels:
app.kubernetes.io/instance: kong
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kong
app.kubernetes.io/version: "2.6"
enable-metrics: "true"
helm.sh/chart: kong-2.5.0
name: kong-kong-proxy
namespace: kong
resourceVersion: "6553623"
uid: 91f2054f-7fb9-4d63-8b65-be098b8f6547
subsets:
- addresses:
- ip: 10.233.96.41
nodeName: node2
targetRef:
kind: Pod
name: kong-kong-69fd7d7698-jjkj5
namespace: kong
resourceVersion: "6510442"
uid: 26c6bdca-e9f1-4b32-91ff-0fadb6fce529
ports:
- name: kong-proxy
port: 8000
protocol: TCP
- name: kong-proxy-tls
port: 8443
protocol: TCP
</code></pre>
<p>Finally I wrote the <code>serviceMonitor</code> for <code>kong</code> like this :</p>
<pre><code>
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
generation: 1
labels:
prometheus: devops
name: kong-sm
namespace: kong
spec:
endpoints:
- interval: 30s
port: kong-proxy
scrapeTimeout: 10s
namespaceSelector:
matchNames:
- kong
selector:
matchLabels:
app.kubernetes.io/instance: kong
</code></pre>
<p>After all of this ; the <code>targets</code> in <code>prometheus dashboard</code> looks like this:</p>
<p><a href="https://i.stack.imgur.com/GikR0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GikR0.png" alt="enter image description here" /></a></p>
<p>What did I miss/do wrong?</p>
| <p>Let's take a look to the <strong>Kong deployment</strong> first <strong>(</strong> pay extra attention to the bottom of this file <strong>)</strong>:</p>
<p><code>kubectl edit deploy -n kong kong-kong</code> :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
[...]
creationTimestamp: "2021-10-26T04:44:58Z"
generation: 1
labels:
app.kubernetes.io/component: app
app.kubernetes.io/instance: kong
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kong
app.kubernetes.io/version: "2.6"
helm.sh/chart: kong-2.5.0
name: kong-kong
namespace: kong
[...]
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: app
app.kubernetes.io/instance: kong
app.kubernetes.io/name: kong
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
[...]
- env:
[...]
image: kong:2.6
imagePullPolicy: IfNotPresent
[...]
name: proxy
ports:
- containerPort: 8000
name: proxy
protocol: TCP
- containerPort: 8443
name: proxy-tls
protocol: TCP
############################################
## THIS PART IS IMPORTANT TO US : #
############################################
- containerPort: 8100
name: status
protocol: TCP
[...]
</code></pre>
<p>As you can see, in the <code>spec.template.spec.containers[*].ports</code> part we have <strong>3 ports</strong>; the <strong>8100</strong> port will be used to scrape <strong>metrics</strong>, so if you can't see this port in the <strong>kong endpoint</strong>, add it manually to the bottom of the kong endpoint:</p>
<p><code>$ kubectl edit endpoints -n kong kong-kong-proxy</code> :</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
annotations:
endpoints.kubernetes.io/last-change-trigger-time: "2021-10-26T04:44:58Z"
creationTimestamp: "2021-10-26T04:44:57Z"
labels:
app.kubernetes.io/instance: kong
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kong
app.kubernetes.io/version: "2.6"
enable-metrics: "true"
helm.sh/chart: kong-2.5.0
name: kong-kong-proxy
namespace: kong
resourceVersion: "7160332"
uid: 91f2054f-7fb9-4d63-8b65-be098b8f6547
subsets:
- addresses:
- ip: 10.233.96.41
nodeName: node2
targetRef:
kind: Pod
name: kong-kong-69fd7d7698-jjkj5
namespace: kong
resourceVersion: "6816178"
uid: 26c6bdca-e9f1-4b32-91ff-0fadb6fce529
ports:
- name: kong-proxy
port: 8000
protocol: TCP
- name: kong-proxy-tls
port: 8443
protocol: TCP
#######################################
## ADD THE 8100 PORT HERE #
#######################################
- name: kong-status
port: 8100
protocol: TCP
</code></pre>
<p>Then save this file and change the <strong>serviceMonitor</strong> of <strong>kong</strong> like this <strong>(</strong> the <strong>port</strong> name is the <strong>same</strong> as in the <strong>endpoint</strong> we just added <strong>)</strong>:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
generation: 1
labels:
prometheus: devops
name: kong-sm
namespace: kong
spec:
endpoints:
- interval: 30s
#############################################################################
## THE NAME OF THE PORT IS SAME TO THE NAME WE ADDED TO THE ENDPOINT FILE #
#############################################################################
port: kong-status
scrapeTimeout: 10s
namespaceSelector:
matchNames:
- kong
selector:
matchLabels:
app.kubernetes.io/instance: kong
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kong
app.kubernetes.io/version: "2.6"
enable-metrics: "true"
helm.sh/chart: kong-2.5.0
</code></pre>
<p>Apply the <strong>serviceMonitor</strong> yaml file and after a few seconds <strong>Prometheus</strong> will detect it as a <strong>target</strong> and scrape kong's metrics successfully.</p>
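<p>Note that hand-edits to an <code>Endpoints</code> object are overwritten by the endpoints controller whenever the backing pod changes. A more durable option (a sketch, matching the service name shown above) is to add the status port to the <code>kong-kong-proxy</code> Service itself, e.g. via <code>kubectl edit svc -n kong kong-kong-proxy</code>:</p>

```yaml
# add this entry under spec.ports of the existing Service;
# the endpoints controller will then populate the matching
# Endpoints port automatically
- name: kong-status
  port: 8100
  targetPort: 8100
  protocol: TCP
```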
|
<p>I had a Jenkins values.yaml file written prior to v1.19, I need some help to change it to be v1.19 compliant.</p>
<p>In the old <code>Values.yaml</code> below, I tried adding <code>http path:/</code>. Should the <code>pathType</code> be <code>ImplementationSpecific</code>?</p>
<p>Only <code>defaultBackend</code> works for some reason, not sure what I'm doing wrong with <code>path</code> and <code>pathType</code>.</p>
<pre><code>ingress:
enabled: true
# Override for the default paths that map requests to the backend
paths:
# - backend:
# serviceName: ssl-redirect
# servicePort: use-annotation
- backend:
serviceName: >-
{{ template "jenkins.fullname" . }}
# Don't use string here, use only integer value!
servicePort: 8080
# For Kubernetes v1.14+, use 'networking.k8s.io/v1'
apiVersion: "networking.k8s.io/v1"
labels: {}
annotations:
kubernetes.io/ingress.global-static-ip-name: jenkins-sandbox-blah
networking.gke.io/managed-certificates: jenkins-sandbox-blah
kubernetes.io/ingress.allow-http: "true"
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# Set this path to jenkinsUriPrefix above or use annotations to rewrite path
# path: "/jenkins"
# configures the hostname e.g. jenkins.example.com
hostName: jenkins.sandbox.io
</code></pre>
| <p>There are several changes to the definition of Ingress resources between <code>v1.18</code> and <code>v1.19</code>.</p>
<p>In <code>v1.18</code>, we defined paths like this (see: <a href="https://v1-18.docs.kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">A minimal Ingress resource example</a>):</p>
<pre><code> paths:
- path: /testpath
pathType: Prefix
backend:
serviceName: test
servicePort: 80
</code></pre>
<p>In version <code>1.19</code> it was changed to: (see: <a href="https://v1-19.docs.kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">A minimal Ingress resource example</a>):</p>
<pre><code> paths:
- path: /testpath
pathType: Prefix
backend:
service:
name: test
port:
number: 80
</code></pre>
<p>In your example, you can slightly modify the <code>values.yaml</code> and try again:<br />
<strong>NOTE:</strong> You may need to change the port number and <a href="https://v1-19.docs.kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">pathType</a> depending on your configuration. Additionally, I've added the <code>kubernetes.io/ingress.class: nginx</code> annotation because I'm using <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX Ingress Controller</a> and I didn't configure the hostname.</p>
<pre><code>$ cat values.yaml
controller:
ingress:
enabled: true
paths:
- path: "/"
pathType: Prefix
backend:
service:
name: >-
{{ template "jenkins.fullname" . }}
port:
number: 8080
apiVersion: "networking.k8s.io/v1"
annotations:
kubernetes.io/ingress.global-static-ip-name: jenkins-sandbox-blah
networking.gke.io/managed-certificates: jenkins-sandbox-blah
kubernetes.io/ingress.allow-http: "true"
kubernetes.io/ingress.class: nginx
# configures the hostname e.g. jenkins.example.com
# hostName: jenkins.sandbox.io
</code></pre>
|
<p>I want to watch Kubernetes pod events for a certain application.</p>
<p>I went with NewSharedInformerFactoryWithOptions. I have added the appropriate label selector, but the events are not being filtered. I want to filter them using the label <code>app=nats-box</code>.</p>
<p>Here is the code -</p>
<pre><code>req, err := labels.NewRequirement("app", selection.Equals, []string{"nats-box"})
if err != nil {
log.Fatalf("error %s, bad requirement", err.Error())
}
selector := labels.NewSelector()
selector.Add(*req)
filteredFactory := informers.NewSharedInformerFactoryWithOptions(clientset, 0, informers.WithNamespace("default"), informers.WithTweakListOptions(func(options *metav1.ListOptions) {
options.LabelSelector = selector.String()
}))
podInformer := filteredFactory.Core().V1().Pods().Informer()
podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
fmt.Println("pod add")
name := obj.(*corev1.Pod).Name
labels := obj.(*corev1.Pod).Labels
fmt.Printf("pod Name: %s\nLabels - %v\n\n", name, labels)
    },
})
</code></pre>
<p>I get the output for all the pods, the namespace is getting filtered but not the labels. Not able to figure out where I am making the mistake.</p>
| <p>You can directly provide the label string in the <code>opts.LabelSelector</code> (note: your original snippet's selector stays empty because <code>labels.Selector.Add</code> returns a new selector instead of mutating the receiver, so you would need <code>selector = selector.Add(*req)</code>):</p>
<pre><code>labelOptions := informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
opts.LabelSelector = "app=nats-box"
})
factory := informers.NewSharedInformerFactoryWithOptions(clientset, 0, informers.WithNamespace("default") ,labelOptions)
informer := factory.Core().V1().Pods().Informer()
stopper := make(chan struct{})
defer close(stopper)
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
fmt.Println("pod add")
name := obj.(*corev1.Pod).Name
labels := obj.(*corev1.Pod).Labels
fmt.Printf("pod Name: %s\nLabels - %v\n\n", name, labels)
},
})
informer.Run(stopper)
</code></pre>
|
<p>I am able to create an EKS cluster but when I try to add nodegroups, I receive a "Create failed" error with details:
"NodeCreationFailure": Instances failed to join the kubernetes cluster</p>
<p>I tried a variety of instance types and increasing larger volume sizes (60gb) w/o luck.
Looking at the EC2 instances, I only see the below problem. However, it is difficult to do anything since i'm not directly launching the EC2 instances (the EKS NodeGroup UI Wizard is doing that.)</p>
<p>How would one move forward given the failure happens even before I can jump into the ec2 machines and "fix" them?</p>
<blockquote>
<p>Amazon Linux 2</p>
<blockquote>
<p>Kernel 4.14.198-152.320.amzn2.x86_64 on an x86_64</p>
<p>ip-187-187-187-175 login: [ 54.474668] cloud-init[3182]: One of the
configured repositories failed (Unknown),
[ 54.475887] cloud-init[3182]: and yum doesn't have enough cached
data to continue. At this point the only
[ 54.478096] cloud-init[3182]: safe thing yum can do is fail. There
are a few ways to work "fix" this:
[ 54.480183] cloud-init[3182]: 1. Contact the upstream for the
repository and get them to fix the problem.
[ 54.483514] cloud-init[3182]: 2. Reconfigure the baseurl/etc. for
the repository, to point to a working
[ 54.485198] cloud-init[3182]: upstream. This is most often useful
if you are using a newer
[ 54.486906] cloud-init[3182]: distribution release than is
supported by the repository (and the
[ 54.488316] cloud-init[3182]: packages for the previous
distribution release still work).
[ 54.489660] cloud-init[3182]: 3. Run the command with the
repository temporarily disabled
[ 54.491045] cloud-init[3182]: yum --disablerepo= ...
[ 54.491285] cloud-init[3182]: 4. Disable the repository
permanently, so yum won't use it by default. Yum
[ 54.493407] cloud-init[3182]: will then just ignore the repository
until you permanently enable it
[ 54.495740] cloud-init[3182]: again or use --enablerepo for
temporary usage:
[ 54.495996] cloud-init[3182]: yum-config-manager --disable </p>
</blockquote>
</blockquote>
| <p>In my case, the problem was that I was deploying my node group in a private subnet, but this private subnet had no NAT gateway associated, hence no internet access. What I did was:</p>
<ol>
<li><p>Create a NAT gateway</p>
</li>
<li><p>Create a new routetable with the following routes (the second one is the internet access route, through nat):</p>
</li>
</ol>
<ul>
<li>Destination: VPC-CIDR-block Target: local</li>
<li><strong>Destination: 0.0.0.0/0 Target: NAT-gateway-id</strong></li>
</ul>
<ol start="3">
<li>Associate private subnet with the routetable created in the second-step.</li>
</ol>
<p>After that, nodegroups joined the clusters without problem.</p>
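<p>For reference, the same fix expressed as a hedged Terraform sketch (all resource references such as <code>aws_vpc.main</code> and the subnet names are placeholders for your own definitions):</p>

```hcl
# An Elastic IP for the NAT gateway
resource "aws_eip" "nat" {
  domain = "vpc"
}

# The NAT gateway must live in a *public* subnet
resource "aws_nat_gateway" "gw" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Route table for the private subnet: the local VPC route is implicit,
# internet-bound traffic goes through the NAT gateway
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.gw.id
  }
}

# Associate the node group's private subnet with that route table
resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}
```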
|
<p>I have a project that needs to <strong>inject or update environment variables</strong> in the pod container, using <code>kubebuilder</code> and <code>controller-runtime</code>,</p>
<p>My plan is as follows:</p>
<pre><code>func Reconcile(){
// get added environment variables
// get matched pods
// update container env
}
</code></pre>
<p>After I tried to update, the following error occurred</p>
<pre><code> "namespace": "default", "error": "Pod \"busybox\" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)\n core.PodSpec{\n \tVolumes: []core.Volume{{Name: \"default-token-bcr86\", VolumeSource: core.VolumeSource{Secret: &core.SecretVolumeSource{SecretName: \"default-token-bcr86\", DefaultMode: &420}}}},\n \tInitContainers: nil,\n \tContainers: []core.Container{\n \t\t{\n \t\t\t... // 5 identical fields\n \t\t\tPorts: nil,\n \t\t\tEnvFrom: nil,\n- \t\t\tEnv: []core.EnvVar{
</code></pre>
<pre><code># pod.yml of busybox
apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
labels:
match: "test"
spec:
containers:
- image: busybox
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
</code></pre>
<p>but it seems that env cannot be updated in the source code</p>
<p><code>// k8s.io/api/core/v1/types.go</code></p>
<pre><code>// A single application container that you want to run within a pod.
type Container struct {
.....
// List of sources to populate environment variables in the container.
// The keys defined within a source must be a C_IDENTIFIER. All invalid keys
// will be reported as an event when the container is starting. When a key exists in multiple
// sources, the value associated with the last source will take precedence.
// Values defined by an Env with a duplicate key will take precedence.
// Cannot be updated.
// +optional
EnvFrom []EnvFromSource `json:"envFrom,omitempty" protobuf:"bytes,19,rep,name=envFrom"`
// List of environment variables to set in the container.
// Cannot be updated.
// +optional
// +patchMergeKey=name
// +patchStrategy=merge
Env []EnvVar `json:"env,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,7,rep,name=env"`
.....
}
</code></pre>
<p>I would appreciate it if you could tell me how to solve it.</p>
| <p>You can't do this on a running pod without killing the pod and restarting it with a new config (containing your updated env vars) because these are set at pod startup.</p>
<p>Some options:</p>
<ol>
<li>Kill the pod and recreate it with the new env vars.</li>
<li>Update the env vars in the deployment instead of the pod, which will roll out the change.</li>
<li>Consider using some other means of storing these key-value pairs (ex: a volume, or a Custom Resource).</li>
</ol>
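<p>For option 2, a minimal sketch of a strategic-merge patch (the deployment and container names are hypothetical); any change to the pod template triggers a rolling restart, which is how the new values reach the containers:</p>

```yaml
# kubectl patch deployment my-app --patch-file env-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: my-app          # must match the existing container name
          env:
            - name: FEATURE_FLAG
              value: "enabled"
```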
<p>Can you describe your use case a bit more?</p>
|
<p>Is it possible to set up a Prometheus/Grafana instance running on CentOS to monitor several K8S clusters in the lab? The architecture can be similar to the one below, although it is not strictly required. Right now the Kubernetes clusters we have do not have Prometheus and Grafana installed. The documentation is not very clear on whether an additional remote-push component/agent is required, and on how the central Prometheus and the K8S clusters need to be configured to achieve this.
Thanks.</p>
<p><img src="https://i.stack.imgur.com/bCWTX.png" alt="Required architecture" /></p>
| <p>You have different solutions to implement your use case:</p>
<ol>
<li>You can use <a href="https://prometheus.io/docs/prometheus/latest/federation/" rel="nofollow noreferrer">prometheus federation</a>. This will allow you to have a central prometheus server that will scrape samples from other prometheus servers.</li>
<li>You can use <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write" rel="nofollow noreferrer">remote_write configuration</a>. This will allow you to send your samples to a remote endpoint (and then eventually scrape that central endpoint). You'll also be able to apply relabeling rules with this configuration.</li>
<li>As @JulioHM said in the comment, you can use another tool like <a href="https://thanos.io/" rel="nofollow noreferrer">thanos</a> or <a href="https://github.com/cortexproject/cortex" rel="nofollow noreferrer">Cortex</a>. Those tools are great and allow you to do more than just write to a remote endpoint: you'll be able to implement horizontal scaling of your prometheus servers, long-term storage, etc.</li>
</ol>
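<p>For the federation option, a hedged sketch of the central server's <code>scrape_configs</code> entry (the target addresses and the <code>match[]</code> selector are placeholders for your environment):</p>

```yaml
- job_name: 'federate'
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
      - '{job="kubernetes-pods"}'       # which series to pull from each cluster
  static_configs:
    - targets:
        - 'cluster-1-prometheus.lab:9090'   # placeholder
        - 'cluster-2-prometheus.lab:9090'   # placeholder
```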
|
<p>I have the following nodejs dockerfile:</p>
<pre><code># pull image
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install
# add app
COPY . ./
# start app
CMD node server dev
</code></pre>
<p>I need to dynamically run a custom JS script inside the container after start up. How can I achieve this?</p>
<p>UPDATE:
I tried adding the following entry point after CMD, but neither CMD nor ENTRYPOINT was executed:</p>
<pre><code>ENTRYPOINT node customScript.js
</code></pre>
<p>Added a wrapper shell script (startup.sh) to include both commands:</p>
<pre><code>#!/bin/sh
nohup node server dev > startup.log && node data/scripts/custom.js > custom.log
</code></pre>
<p>Replaced CMD with:</p>
<pre><code>CMD ["./startup.sh"]
</code></pre>
<p>This only executes the first command in the shell and not the second. I also don't see the output-redirect log files being created in the container.</p>
| <p>Ideally, you should run the <strong>nohup</strong> process in the background; with <code>&amp;&amp;</code> the second command only starts after the first one exits, and a long-running server never exits.</p>
<p>There is no issue caused by <strong>CMD</strong> or <strong>ENTRYPOINT</strong>:</p>
<pre><code>#!/bin/sh
nohup node server dev > startup.log && node data/scripts/custom.js > custom.log
</code></pre>
<p>Put <strong>&amp;</strong> at the end of your <strong>nohup</strong> command to start the process in background mode:</p>
<pre><code>nohup ./yourscript &
</code></pre>
<p><strong>or</strong></p>
<pre><code>nohup command &
</code></pre>
<p>after this you can run the <strong>node</strong> command</p>
<p><strong>sh</strong> example</p>
<pre><code>nohup nice "$0" "calling_myself" "$@" > $nohup_out &
node index.js
sleep 1
tail -f $nohup_out
</code></pre>
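<p>A minimal runnable sketch of the corrected <code>startup.sh</code> pattern, with <code>echo</code> standing in for the two <code>node</code> commands:</p>

```shell
#!/bin/sh
# The trailing '&' backgrounds the long-running "server", so the second
# command runs immediately instead of waiting behind '&&'.
nohup sh -c 'echo server-started' > startup.log 2>&1 < /dev/null &
server_pid=$!
echo custom-script-done > custom.log
wait "$server_pid"
cat startup.log custom.log
```

<p>In the real script the two stand-ins would be <code>node server dev</code> and <code>node data/scripts/custom.js</code>; note that backgrounding the server also means you must keep a foreground process (or a <code>wait</code>) alive at the end, or the container will exit.</p>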
|
<p>I'm working on Kubernetes deployment services using minikube locally on my windows 10 machine, so when I expose my service which is an expressjs API I can reach it via: <code>localhost:3000</code></p>
<p><a href="https://i.stack.imgur.com/2s3ii.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2s3ii.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/EFDnF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EFDnF.png" alt="enter image description here" /></a></p>
<p>I want to expose that service on the network so I can test the API from another device (my mobile phone). To do that, I installed Nginx and set up a reverse proxy to forward all incoming requests on port 80 to <code>localhost:3000/api</code>, but when I hit localhost it shows the default page of Nginx.</p>
<p><a href="https://i.stack.imgur.com/Hcsld.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hcsld.png" alt="enter image description here" /></a></p>
<p>this is my nginx.conf</p>
<pre><code>#user nobody;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://localhost:3000/api/;
}
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
</code></pre>
| <p>A few things could be happening here:</p>
<ol>
<li><p>After the Nginx configuration changed, it was not reloaded; this is necessary for the new configuration to take effect and is done with the command <code>nginx -s reload</code> (or by stopping Nginx with <code>nginx -s stop</code> and starting it again).</p>
</li>
<li><p>Accidentally running multiple instances: if for any reason the command <code>start nginx</code> was run more than once, you will have a process running for each time and cannot kill them with the <code>nginx -s stop</code> command; in this case, you will need to kill the processes on the Task Manager or restart the Windows system.</p>
</li>
<li><p>Be aware that Nginx for Windows is considered a <em>beta</em> version and it has some performance and operability issues, as stated in the following documentation [1].</p>
</li>
</ol>
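<p>Once the new configuration is actually loaded, a minimal server block along these lines (a sketch; adjust the headers to your app's needs) should serve the proxied API instead of the default page:</p>

```nginx
server {
    listen 80;
    server_name _;   # catch-all, so the machine's LAN IP also matches

    location / {
        proxy_pass http://localhost:3000/api/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```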
<p>I recommend switching to a Linux system which fully supports Minikube and Nginx. Your scenario has been replicated on an Ubuntu VM and is working as expected.</p>
<p>[1] <a href="http://nginx.org/en/docs/windows.html" rel="nofollow noreferrer">http://nginx.org/en/docs/windows.html</a></p>
|
<p>I have deployed an ingress in Kubernetes and using two applications on different ingress namespaces.</p>
<p>When I access APP2 I can reach the website and it works fine, but APP1 displays a BLANK page. No errors, just BLANK, and a 200 OK response.</p>
<p>Basically I integrated ArgoCD with Azure AD. The integration itself is fine, but I think the ingress rules are not quite right.</p>
<p>Both apps are in different namespaces, so I have to use two different ingresses in different namespaces:</p>
<p>This is the APP1:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: argocd-server-ingress
namespace: argocd
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /argo-cd/$2
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
defaultBackend:
service:
name: argocd-server
port:
number: 443
rules:
- http:
paths:
- path: /argo-cd
pathType: Prefix
backend:
service:
name: argocd-server
port:
number: 443
</code></pre>
<p>And this is the APP2:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: sonarqube-ingress
namespace: ingress-nginx
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
defaultBackend:
service:
name: sonarqube
port:
number: 9000
tls:
- hosts:
- sq-example
secretName: nginx-cert
rules:
- host: sq.example.com
http:
paths:
- path: /sonarqube(/|$)(.*)
pathType: Prefix
backend:
service:
name: sonarqube
port:
number: 9000
- path: /(.*)
pathType: Prefix
backend:
service:
name: sonarqube
port:
number: 9000
</code></pre>
<p>args of ingress deployment:</p>
<pre><code>spec:
containers:
- args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --controller-class=k8s.io/ingress-nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
- --default-ssl-certificate=ingress-nginx/ca-key-pair
- --enable-ssl-passthrough
</code></pre>
<p>logs ingress controller pod:</p>
<pre><code>10.200.140.160 - - [03/Nov/2021:15:00:34 +0000] "GET /argo-cd HTTP/1.1" 200 831 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" 489 0.002 [argocd-argocd-server-443] [] 10.200.140.177:8080, 10.200.140.177:8080 0, 831 0.000, 0.004 502, 200 d491c01cd741fa9f155642f8616b6d9f
2021/11/03 15:09:05 [error] 867#867: *534643 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.200.140.160, server: _, request: "GET /argo-cd/ HTTP/1.1", upstream: "https://10.200.140.177:8080/argo-cd/", host: "10.200.140.211"
10.200.140.160 - - [03/Nov/2021:15:09:05 +0000] "GET /argo-cd/ HTTP/1.1" 200 831 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" 440 0.006 [argocd-argocd-server-443] [] 10.200.140.177:8080, 10.200.140.177:8080 0, 831 0.000, 0.004 502, 200 8995b914ae6e39d8ca781e1f4f269f50
10.200.140.160 - - [03/Nov/2021:15:09:16 +0000] "GET /argo-cd HTTP/1.1" 200 831 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36" 489 0.001 [argocd-argocd-server-443] [] 10.200.140.177:8080 831 0.004 200 0adadba11c87f9b88ed75d52e4ca387a
</code></pre>
<p>I tried playing with the <code>path: /argo-cd</code> on APP1 like:</p>
<pre><code>path: /argo-cd/
path: /argo-cd/(/|$)(.*)
path: /argo-cd/(.*)
path: /argo-cd/*
</code></pre>
<p>but impossible to make it work. Am I doing something wrong here?</p>
<p>Thanks in advance.</p>
| <p>The problem is that you didn't configure the argo-cd root path.</p>
<hr />
<p><em>Why?</em></p>
<p>First, it's worth to remind that NGINX Ingress controller <a href="https://docs.nginx.com/nginx-ingress-controller/installation/running-multiple-ingress-controllers/#running-multiple-nginx-ingress-controllers" rel="nofollow noreferrer">by default is <em>Cluster-wide</em></a>:</p>
<blockquote>
<ul>
<li><strong>Cluster-wide Ingress Controller (default)</strong>. The Ingress Controller handles configuration resources created in any namespace of the cluster. As NGINX is a high-performance load balancer capable of serving many applications at the same time, this option is used by default in our installation manifests and Helm chart.</li>
</ul>
</blockquote>
<p>So even if you have configured Ingresses in different namespaces at the end you are using the same NGINX Ingress Controller. You can check it by running:</p>
<pre><code>kubectl get ing -n ingress-nginx
kubectl get ing -n argocd
</code></pre>
<p>You can observe that <code>ADDRESS</code> is the same for both ingresses in different namespaces.</p>
<p>Let's assume that I have applied only the first ingress definition (APP1). If I try to reach <code>https://{ingress-ip}/argo-cd</code> I will be redirected to the <code>https://{ingress-ip}/applications</code> website - it works probably because you also setup the <code>defaultBackend</code> setting. Anyway it's not a good approach - you should configure the argo-cd root path correctly.</p>
<p>When I applied the second ingress definition (APP2) I'm also getting the blank page as you - probably because the definitions from both ingresses are mixing and this is causing an issue.</p>
<p><em>How to setup the argo-cd root path?</em></p>
<p>Based on <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#option-2-mapping-crd-for-path-based-routing" rel="nofollow noreferrer">this documentation</a>:</p>
<blockquote>
<p>Edit the <code>argocd-server</code> deployment to add the <code>--rootpath=/argo-cd</code> flag to the argocd-server command.</p>
</blockquote>
<p>It's not really explained in detailed way in the docs, but I figured how to setup it:</p>
<p>First, we need to get current deployment configuration:</p>
<pre><code>kubectl get deploy argocd-server -o yaml -n argocd > argocd-server-deployment.yaml
</code></pre>
<p>Now, we need to edit the <code>argocd-server-deployment.yaml</code> file. Under <code>command</code> (in my case it was line 52) we need to add <code>rootpath</code> flag - before:</p>
<pre><code>containers:
- command:
- argocd-server
env:
</code></pre>
<p>After:</p>
<pre><code>containers:
- command:
- argocd-server
- --rootpath=/argo-cd
env:
</code></pre>
<p>Save it, and run <code>kubectl apply -f argocd-server-deployment.yaml</code>.</p>
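<p>Alternatively (a sketch, assuming <code>argocd-server</code> is the first container in the pod spec), the same change can be applied in one step with a JSON patch file:</p>

```json
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/command/-",
    "value": "--rootpath=/argo-cd"
  }
]
```

<p>applied with <code>kubectl patch deployment argocd-server -n argocd --type json --patch-file rootpath-patch.json</code> (on older kubectl versions, pass the JSON inline with <code>-p</code>).</p>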
<p>Now, it's time to edit ingress definition also - as we setup root path we need to delete <code>nginx.ingress.kubernetes.io/rewrite-target:</code> annotation:</p>
<pre><code>annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
</code></pre>
<p>After these changes, if I reach <code>https://{ingress-ip}/argo-cd</code> I will be redirected to the <code>https://{ingress-ip}/argo-cd/applications</code>. Everything is working properly.</p>
|
<p>I want to be able to know the Kubernetes uid of the Deployment that created the pod, from within the pod.</p>
<p>The reason for this is so that the Pod can spawn another Deployment and set the <code>OwnerReference</code> of that Deployment to the original Deployment (so it gets Garbage Collected when the original Deployment is deleted).</p>
<p>Taking inspiration from <a href="https://stackoverflow.com/questions/42274229/kubernetes-deployment-name-from-within-a-pod">here</a>, I've tried*:</p>
<ol>
<li>Using field refs as env vars:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: test-operator
env:
- name: DEPLOYMENT_UID
valueFrom:
fieldRef: {fieldPath: metadata.uid}
</code></pre>
<ol start="2">
<li>Using <code>downwardAPI</code> and exposing through files on a volume:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>containers:
volumeMounts:
- mountPath: /etc/deployment-info
name: deployment-info
volumes:
- name: deployment-info
downwardAPI:
items:
- path: "uid"
fieldRef: {fieldPath: metadata.uid}
</code></pre>
<p>*Both of these are under <code>spec.template.spec</code> of a resource of kind: Deployment.</p>
<p>However for both of these the uid is that of the Pod, not the Deployment. Is what I'm trying to do possible?</p>
<p>The behavior is correct: the <code>Downward API</code> exposes fields of the <code>pod</code>, not of the owning <code>deployment/replicaset</code>.</p>
<p>So I guess the solution is to set the name of the deployment manually in <code>spec.template.metadata.labels</code>, then use the <code>Downward API</code> to inject the labels as env variables.</p>
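<p>A minimal sketch of that approach - the label key <code>deployment-name</code> is an assumption; you set its value yourself and keep it in sync with the Deployment's actual name:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  template:
    metadata:
      labels:
        deployment-name: test-operator   # set manually to match the Deployment name
    spec:
      containers:
      - name: test-operator
        env:
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef: {fieldPath: metadata.labels['deployment-name']}
</code></pre>
<p>With the name available inside the pod, the container can query the API server for that Deployment to obtain its <code>uid</code> and build the <code>OwnerReference</code>.</p>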
|
<p>I am trying to create an Istio <code>Virtualservice</code>. However, I am getting the below error, despite me having the cluster-admin role bound to.</p>
<pre><code>UPGRADE FAILED: could not get information about the resource: virtualservices.networking.istio.io "admin-ui" is forbidden: User "vaish@admin" cannot get resource "virtualservices" in API group "networking.istio.io" in the namespace "onboarding"
</code></pre>
<p>I also tried to create a new <code>Clusterrole</code> as below and create a binding to my user, which also does not yield any result.</p>
<pre><code>---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-editor-role
labels:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
resources: ["virtualservices"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<pre><code> kubectl create clusterrolebinding istio-editor-binding --clusterrole=istio-editor-role --user=vaish@admin
</code></pre>
<p>The solution was to bind the user to the <code>cluster-admin</code> role.</p>
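<p>For reference, a binding along these lines (reusing the user name from the question) is one way to do that:</p>
<pre><code>kubectl create clusterrolebinding vaish-cluster-admin \
  --clusterrole=cluster-admin \
  --user=vaish@admin
</code></pre>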
|
<p>Coming from classic Java application development and being new to "all this cloud stuff", I have a (potentially naive) basic question about scalability e.g. in Kubernetes.</p>
<p>Let's assume I've written an application for the JVM (Java or Kotlin) that scales well locally across CPUs / CPU cores for compute-intensive work that is managed via job queues, where different threads (or processes) take their work items from the queues.</p>
<p>Now, I want to scale the same application beyond the local machine, in the cloud. In my thinking the basic way of working stays the same: There are job queues, and now "cloud nodes" instead of local processes get their work from the queues, requiring some network communication.</p>
<p>As on that high level work organization is basically the same, I was hoping that there would be some JVM framework whose API / interfaces I would simply need to implement, and that framework has then different backends / "engines" that are capable of orchestrating a job queuing system either locally via processes, or in the cloud involving network communication, so my application would "magically" just scale indefinitely depending on now many cloud resources I throw at it (in the end, I want dynamic horizontal auto-scaling in Kubernetes).</p>
<p>Interestingly, I failed to find such a framework, although I was pretty sure that it would be a basic requirement for anyone who wants to bring local applications to the cloud for better scalability. I found e.g. <a href="https://github.com/jobrunr/jobrunr" rel="nofollow noreferrer">JobRunr</a>, which seems to come very close, but unfortunately so far lacks the capability to dynamically ramp up Kubernetes nodes based on load.</p>
<p>Are there other frameworks someone can recommend?</p>
<p>Scale your code as Kubernetes Jobs and try <a href="https://keda.sh/docs/2.4/concepts/scaling-jobs/" rel="nofollow noreferrer">KEDA</a>, which can spin Jobs up and down based on the length of a work queue.</p>
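<p>As an illustration only - the queue type, queue name and image below are placeholders - a KEDA <code>ScaledJob</code> that spawns Jobs from a RabbitMQ queue looks roughly like this:</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: worker-scaledjob
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: worker
          image: example/worker:latest   # placeholder image
        restartPolicy: Never
  pollingInterval: 30    # seconds between queue length checks
  maxReplicaCount: 10    # upper bound on parallel Jobs
  triggers:
  - type: rabbitmq
    metadata:
      queueName: jobs              # placeholder queue name
      hostFromEnv: RABBITMQ_HOST   # connection string taken from an env var
</code></pre>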
|
<p>I deployed an EFS in AWS and a test pod on EKS from this document: <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">Amazon EFS CSI driver</a>.</p>
<p>EFS CSI Controller pods in the <code>kube-system</code>:</p>
<pre><code>kube-system efs-csi-controller-5bb76d96d8-b7qhk 3/3 Running 0 26s
kube-system efs-csi-controller-5bb76d96d8-hcgvc 3/3 Running 0 26s
</code></pre>
<p>After deploying a sample application from the doc, I checked the <code>efs-csi-controller</code> pod logs, and it seems they didn't work well.</p>
<p>Pod 1:</p>
<pre><code>$ kubectl logs efs-csi-controller-5bb76d96d8-b7qhk \
> -n kube-system \
> -c csi-provisioner \
> --tail 10
W1030 08:15:59.073406 1 feature_gate.go:235] Setting GA feature gate Topology=true. It will be removed in a future release.
I1030 08:15:59.073485 1 feature_gate.go:243] feature gates: &{map[Topology:true]}
I1030 08:15:59.073500 1 csi-provisioner.go:132] Version: v2.1.1-0-g353098c90
I1030 08:15:59.073520 1 csi-provisioner.go:155] Building kube configs for running in cluster...
I1030 08:15:59.087072 1 connection.go:153] Connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock
I1030 08:15:59.087512 1 common.go:111] Probing CSI driver for readiness
I1030 08:15:59.090672 1 csi-provisioner.go:202] Detected CSI driver efs.csi.aws.com
I1030 08:15:59.091694 1 csi-provisioner.go:244] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I1030 08:15:59.091997 1 controller.go:756] Using saving PVs to API server in background
I1030 08:15:59.092834 1 leaderelection.go:243] attempting to acquire leader lease kube-system/efs-csi-aws-com...
</code></pre>
<p>Pod 2:</p>
<pre><code>$ kubectl logs efs-csi-controller-5bb76d96d8-hcgvc \
> -n kube-system \
> -c csi-provisioner \
> --tail 10
I1030 08:16:32.628759 1 controller.go:1099] Final error received, removing PVC 111111a-d6fb-440a-9bb1-132901jfas from claims in progress
W1030 08:16:32.628783 1 controller.go:958] Retrying syncing claim "111111a-d6fb-440a-9bb1-132901jfas", failure 5
E1030 08:16:32.628798 1 controller.go:981] error syncing claim "111111a-d6fb-440a-9bb1-132901jfas": failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
I1030 08:16:32.628845 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"efs-claim", UID:"111111a-d6fb-440a-9bb1-132901jfas", APIVersion:"v1", ResourceVersion:"1724705", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
I1030 08:17:04.628997 1 controller.go:1332] provision "default/efs-claim" class "efs-sc": started
I1030 08:17:04.629193 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"efs-claim", UID:"111111a-d6fb-440a-9bb1-132901jfas", APIVersion:"v1", ResourceVersion:"1724705", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/efs-claim"
I1030 08:17:04.687957 1 controller.go:1099] Final error received, removing PVC 111111a-d6fb-440a-9bb1-132901jfas from claims in progress
W1030 08:17:04.687977 1 controller.go:958] Retrying syncing claim "111111a-d6fb-440a-9bb1-132901jfas", failure 6
E1030 08:17:04.688001 1 controller.go:981] error syncing claim "111111a-d6fb-440a-9bb1-132901jfas": failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
I1030 08:17:04.688044 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"efs-claim", UID:"111111a-d6fb-440a-9bb1-132901jfas", APIVersion:"v1", ResourceVersion:"1724705", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
</code></pre>
<p>From the events, I can see:</p>
<pre><code>$ kubectl get events
27m Warning FailedScheduling pod/efs-app skip schedule deleting pod: default/efs-app
7m38s Warning FailedScheduling pod/efs-app 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
7m24s Warning FailedScheduling pod/efs-app 0/2 nodes are available: 2 persistentvolumeclaim "efs-claim" is being deleted.
7m24s Warning FailedScheduling pod/efs-app skip schedule deleting pod: default/efs-app
17s Warning FailedScheduling pod/efs-app 0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
27m Normal ExternalProvisioning persistentvolumeclaim/efs-claim waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
10m Normal ExternalProvisioning persistentvolumeclaim/efs-claim waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
11m Normal Provisioning persistentvolumeclaim/efs-claim External provisioner is provisioning volume for claim "default/efs-claim"
11m Warning ProvisioningFailed persistentvolumeclaim/efs-claim failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
7m47s Normal Provisioning persistentvolumeclaim/efs-claim External provisioner is provisioning volume for claim "default/efs-claim"
7m47s Warning ProvisioningFailed persistentvolumeclaim/efs-claim failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
74s Normal ExternalProvisioning persistentvolumeclaim/efs-claim waiting for a volume to be created, either by external provisioner "efs.csi.aws.com" or manually created by system administrator
2m56s Normal Provisioning persistentvolumeclaim/efs-claim External provisioner is provisioning volume for claim "default/efs-claim"
2m56s Warning ProvisioningFailed persistentvolumeclaim/efs-claim failed to provision volume with StorageClass "efs-sc": rpc error: code = Unauthenticated desc = Access Denied. Please ensure you have the right AWS permissions: Access denied
</code></pre>
<p><code>ServiceAccount</code> was created by:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: efs-csi-controller-sa
namespace: kube-system
labels:
app.kubernetes.io/name: aws-efs-csi-driver
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/AmazonEKS_EFS_CSI_Driver_Policy
</code></pre>
<p>The <code>AmazonEKS_EFS_CSI_Driver_Policy</code> is the json from <a href="https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/v1.3.2/docs/iam-policy-example.json" rel="nofollow noreferrer">here</a>.</p>
<hr />
<h1>Example code</h1>
<p>storageclass.yaml</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: efs-sc
provisioner: efs.csi.aws.com
parameters:
provisioningMode: efs-ap
fileSystemId: fs-92107410
directoryPerms: "700"
gidRangeStart: "1000" # optional
gidRangeEnd: "2000" # optional
basePath: "/dynamic_provisioning" # optional
</code></pre>
<p>pod.yaml</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: efs-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: efs-sc
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
name: efs-app
spec:
containers:
- name: app
image: centos
command: ["/bin/sh"]
args: ["-c", "while true; do echo $(date -u) >> /data/out; sleep 5; done"]
volumeMounts:
- name: persistent-storage
mountPath: /data
volumes:
- name: persistent-storage
persistentVolumeClaim:
claimName: efs-claim
</code></pre>
| <p>Posted community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p>Based on @Miantian's comment:</p>
<blockquote>
<p>The reason was the efs driver image is using the different region from mine. I changed to the right one and it works.</p>
</blockquote>
<p>You can find steps to setup the Amazon EFS CSI driver in the proper region in <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">this documentation</a>.</p>
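<p>The region is encoded in the image registry host. As a sketch, switching the controller to the right regional image via the Helm chart could look like this (the registry account ID and region here are illustrative; check the AWS docs for your region's registry):</p>
<pre><code>helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --set image.repository=602401143452.dkr.ecr.eu-west-1.amazonaws.com/eks/aws-efs-csi-driver
</code></pre>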
|
<p>I failed to deploy Istio and met this problem. When I tried to deploy Istio using <code>istioctl install --set profile=default -y</code>, the output was like:</p>
<pre><code>➜ istio-1.11.4 istioctl install --set profile=default -y
✔ Istio core installed
✔ Istiod installed
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway (containers with unready status: [istio-proxy])
- Pruning removed resources Error: failed to install manifests: errors occurred during operation
</code></pre>
<p>After running <code>kubectl get pods -n=istio-system</code>, I found the pod of istio-ingressgateway was created, and here is the result of <code>describe</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m36s default-scheduler Successfully assigned istio-system/istio-ingressgateway-8dbb57f65-vc85p to k8s-slave
Normal Pulled 4m35s kubelet Container image "docker.io/istio/proxyv2:1.11.4" already present on machine
Normal Created 4m35s kubelet Created container istio-proxy
Normal Started 4m35s kubelet Started container istio-proxy
Warning Unhealthy 3m56s (x22 over 4m34s) kubelet Readiness probe failed: Get "http://10.244.1.4:15021/healthz/ready": dial tcp 10.244.1.4:15021: connect: connection refused
</code></pre>
<p>And I can't get the log of this pod:</p>
<pre><code>➜ ~ kubectl logs pods/istio-ingressgateway-8dbb57f65-vc85p -n=istio-system
Error from server: Get "https://192.168.0.154:10250/containerLogs/istio-system/istio-ingressgateway-8dbb57f65-vc85p/istio-proxy": dial tcp 192.168.0.154:10250: i/o timeout
</code></pre>
<p>I ran all these commands on two VMs in Huawei Cloud, with a 2C8G master and a 2C4G slave on Ubuntu 18.04. I have reinstalled the environment and the Kubernetes cluster, but that doesn't help.</p>
<h2>Without ingressgateway</h2>
<p>I also tried <code>istioctl install --set profile=minimal -y</code>, which only runs istiod. But when I try to run httpbin (<code>kubectl apply -f samples/httpbin/httpbin.yaml</code>) with auto-injection on, the deployment can't create a pod.</p>
<pre><code>➜ istio-1.11.4 kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpbin 0/1 0 0 5m24s
➜ istio-1.11.4 kubectl describe deployment/httpbin
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 6m6s deployment-controller Scaled up replica set httpbin-74fb669cc6 to 1
</code></pre>
<p>When I unlabel the default namespace (<code>kubectl label namespace default istio-injection-</code>), everything works fine.</p>
<p>I hope to deploy the istio ingressgateway and run a demo like <a href="https://istio.io/latest/zh/docs/tasks/traffic-management/ingress/ingress-control/#cleanup" rel="nofollow noreferrer">istio-ingressgateway</a>, but I have no idea how to solve this situation. Thanks for any help.</p>
| <p>I made a silly mistake Orz.</p>
<p>After communication with my cloud provider, I was informed that there was a network security policy on my cloud servers. It's strange that one server had full access while the other had only partial access (only ports like 80, 443 and so on were allowed). After I changed the policy, everything worked fine.</p>
<p>For anyone who may meet a similar problem: after hours of searching on Google, I found that all these questions seem to come down to network issues like DNS configuration, k8s configuration or server network problems. Like what howardjohn said in this <a href="https://github.com/istio/istio/issues/12446" rel="nofollow noreferrer">issue</a>, this is not an Istio problem.</p>
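<p>One quick check that would have surfaced this: test from the master whether the kubelet port on the other node is reachable at all, e.g.:</p>
<pre><code># from the master node: is the slave's kubelet port open?
nc -vz 192.168.0.154 10250
</code></pre>
<p>If this times out while ports like 80/443 work, a firewall or security-group policy is the likely culprit rather than Istio itself.</p>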
|
<p>We're having a bare metal K8s cluster with an NGINX Ingress Controller.</p>
<p>Is there a way to tell how much traffic is transmitted/received of each Ingress?</p>
<p>Thanks!</p>
| <p>Ingress Controllers are implemented as standard Kubernetes applications. Any monitoring method adopted by organizations can be applied to Ingress controllers to track the health and lifetime of k8s workloads. To track network traffic statistics, controller-specific mechanisms should be used.</p>
<p>To <strong>observe Kubernetes Ingress traffic</strong> you can send your statistics to <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> and view them in <a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a> (widely adopted open source software for data visualization).</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/monitoring.md" rel="nofollow noreferrer">Here</a> is a monitoring guide from the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a> project, where you can read how to do it step by step. Start with installing those tools.</p>
<p>To deploy Prometheus in Kubernetes run the below command:</p>
<pre><code>kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
</code></pre>
<p>To install Grafana, run this one:</p>
<pre><code>kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/
</code></pre>
<p>Follow the next steps in the previously mentioned <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/monitoring.md" rel="nofollow noreferrer">monitoring guide</a>.</p>
<p>See also <a href="https://traefik.io/blog/observing-kubernetes-ingress-traffic-using-metrics/" rel="nofollow noreferrer">this article</a> and <a href="https://stackoverflow.com/questions/57755180/standard-way-to-monitor-ingress-traffic-in-k8-or-eks">this similar question</a>.</p>
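<p>Once the metrics are scraped, per-Ingress traffic can be queried in Prometheus directly. Assuming the standard ingress-nginx metric names, queries like these give bytes per Ingress:</p>
<pre><code># received bytes (request bodies) per Ingress, rate over 5 minutes
sum(rate(nginx_ingress_controller_request_size_sum[5m])) by (ingress)

# sent bytes (response bodies) per Ingress
sum(rate(nginx_ingress_controller_response_size_sum[5m])) by (ingress)
</code></pre>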
|
<p>I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.</p>
<p>What is the best way to test this locally on Windows 10?</p>
<p>Thanks in advance!</p>
| <p>Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.</p>
<p>I can think of two possible solutions, off the top of my head:</p>
<ul>
<li>You could use any of the cloud providers. Major (AWS, GCP, Azure) ones have some kind of free tier (under certain usage, or timed). Adding nodes in those environments is trivial.</li>
<li>Create a local VM for each node. This is a less-than-perfect solution - it is very resource intensive. To make adding nodes easier, you could use <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a> to provision your cluster.</li>
</ul>
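<p>For the VM route, the kubeadm flow is roughly the following sketch (pod CIDR and token handling are simplified; the join command with the real token is printed by <code>kubeadm init</code>):</p>
<pre><code># on the control-plane VM
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# on each worker VM
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
</code></pre>
<p>Killing a worker VM then lets you observe how your scheduler reacts to node failure.</p>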
|
<p>I am attempting to install GitLab using Helm. I have a certificate issued to me by the internal Certificate Authority, and I have used the <code>.pem</code> and <code>.key</code> files to generate a TLS secret with this command:</p>
<pre><code>kubectl create secret tls gitlab-cert --cert=<cert>.pem --key=<cert>.key
</code></pre>
<p>When I run the helm installation, I am expecting to be able to view gitlab with <code>https://{internal-domain}</code>, however I get the below image.</p>
<p><a href="https://i.stack.imgur.com/CoYaG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CoYaG.png" alt="enter image description here" /></a></p>
<p><strong>Helm installation configuration</strong></p>
<pre><code>helm install gitlab gitlab/gitlab \
--timeout 600s \
--set global.hosts.domain=${hosts_domain} \
--namespace ${helm_namespace} \
--set global.hosts.externalIP=${static_ip} \
--set postgresql.install=false \
--set global.psql.host=${postgres_sql_ip} \
--set global.psql.password.secret=${k8s_password_secret} \
--set global.psql.username=${postgres_sql_user} \
  --set global.psql.password.key=${k8s_password_key} \
--set global.psql.ssl.secret=${psql_ssl_secret} \
--set global.psql.ssl.clientCertificate=${psql_ssl_client_certificate} \
--set global.psql.ssl.clientKey=${psql_ssl_client_key} \
--set global.psql.ssl.serverCA=${psql_ssl_server_ca} \
--set global.extraEnv.PGSSLCERT=${extra_env_pg_ssl_cert} \
--set global.extraEnv.PGSSLKEY=${extra_env_pg_ssl_key} \
--set global.extraEnv.PGSSLROOTCERT=${extra_env_pg_ssl_root_cert} \
--set global.host.https=true \
--set global.ingress.tls.enabled=true \
--set global.ingress.tls.secretName=${gitlab-cert} \
--set certmanager.install=false \
--set global.ingress.configureCertmanager=false \
--set gitlab.webservice.ingress.tls.secretName=${gitlab-cert}
</code></pre>
<p>The pods run fine.</p>
| <p>Posted community wiki answer for better visibility. Feel free to expand it.</p>
<hr />
<p>Based on @sytech's comment:</p>
<blockquote>
<p>The error you have there is <code>CERT_WEAK_SIGNATURE_ALGORITHM</code>. It seems you should probably regenerate your certificate using a stronger algorithm.</p>
</blockquote>
<p>You are probably using some weak signature algorithm. Both <a href="https://www.chromium.org/Home/chromium-security/education/tls#TOC-Deprecated-and-Removed-Features" rel="nofollow noreferrer">Chrome</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Weak_Signature_Algorithm" rel="nofollow noreferrer">Mozilla Firefox</a> are not <a href="https://developer.mozilla.org/en-US/docs/Web/Security/Weak_Signature_Algorithm" rel="nofollow noreferrer">treating certificates based on weak algorithms as secure</a>:</p>
<blockquote>
<p>SHA-1 certificates will no longer be treated as secure by major browser manufacturers beginning in 2017.</p>
</blockquote>
<blockquote>
<p>Support for MD5 based signatures was removed in early 2012.</p>
</blockquote>
<p>Please make sure that you are <a href="https://blog.mozilla.org/security/2014/09/23/phasing-out-certificates-with-sha-1-based-signature-algorithms/" rel="nofollow noreferrer">using more secure algorithm</a>:</p>
<blockquote>
<p>We encourage Certification Authorities (CAs) and Web site administrators to upgrade their certificates to use signature algorithms with hash functions that are stronger than SHA-1, such as SHA-256, SHA-384, or SHA-512.</p>
</blockquote>
<p>Another option is that it may be issue at your end - check your network and browser settings - <a href="https://www.auslogics.com/en/articles/fix-neterr-cert-weak-signature-algorithm-error/" rel="nofollow noreferrer">steps are presented in this article</a>.</p>
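<p>To confirm which algorithm your certificate is actually signed with, you can inspect it locally:</p>
<pre><code>openssl x509 -in <cert>.pem -noout -text | grep "Signature Algorithm"
# weak:   sha1WithRSAEncryption / md5WithRSAEncryption
# strong: sha256WithRSAEncryption or better
</code></pre>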
|
<p>I have an ingress, defined as the following:</p>
<pre><code>Name: online-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
jj.cloud.com
/online/(.*) online:80 (172.16.1.66:5001)
/userOnline online:80 (172.16.1.66:5001)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 29m (x4 over 74m) nginx-ingress-controller Configuration for default/online-ingress was added or updated
</code></pre>
<p>If I test it with no rewrite, it's ok.</p>
<pre><code>curl -X POST jj.cloud.com:31235/userOnline -H 'Content-Type: application/json' -d '{"url":"baidu.com","users":["ua"]}'
OK
</code></pre>
<p>However, if I try to use rewrite, it will fail.</p>
<pre><code>curl -X POST jj.cloud.com:31235/online/userOnline -H 'Content-Type: application/json' -d '{"url":"baidu.com","users":["ua"]}'
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.3</center>
</body>
</html>
</code></pre>
<p>And it will produce the following error logs:</p>
<pre><code>2021/11/03 10:21:25 [error] 134#134: *63 open() "/etc/nginx/html/online/userOnline" failed (2: No such file or directory), client: 172.16.0.0, server: jj.cloud.com, request: "POST /online/userOnline HTTP/1.1", host: "jj.cloud.com:31235"
172.16.0.0 - - [03/Nov/2021:10:21:25 +0000] "POST /online/userOnline HTTP/1.1" 404 153 "-" "curl/7.29.0" "-"
</code></pre>
<p>Why doesn't the path <code>/online/userOnline</code> match <code>/online/(.*)</code> and get rewritten to <code>/userOnline</code>? Or are there some other errors? Here is the yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: online-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: jj.cloud.com
http:
paths:
- path: /online(/|$)/(.*)
pathType: Prefix
backend:
service:
name: online
port:
number: 80
- path: /userOnline
pathType: Prefix
backend:
service:
name: online
port:
number: 80
ingressClassName: nginx
</code></pre>
<p>When I checked the generated nginx config, I found (default-online-ingress.conf):</p>
<pre><code>location /online(/|$)/(.*) {
</code></pre>
<p>It seems the regex-match modifier is missing; it should look like this:</p>
<pre><code>location ~* "^/online(/|$)/(.*)" {
</code></pre>
<p>If that's true, how can I make the rewrite take effect and generate the correct nginx config?</p>
| <p>If I understood your issue correctly, you have a problem with captured groups.</p>
<p>According to <a href="https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/rewrite#rewrite-target" rel="nofollow noreferrer">Nginx Rewrite targets</a> it seems to me,</p>
<ul>
<li>your path should be <code>path: /online(/|$)(.*)</code></li>
<li>your <code>rewrite-target:</code> should be <code>rewrite-target: /$2</code></li>
<li>in addition, if you use the nginx ingress controller, I believe you should specify that in the annotations section as <code>kubernetes.io/ingress.class: nginx</code></li>
</ul>
<hr />
<p>yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: online-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: jj.cloud.com
http:
paths:
- path: /online(/|$)(.*)
pathType: Prefix
backend:
service:
name: online
port:
number: 80
- path: /userOnline
pathType: Prefix
backend:
service:
name: online
port:
number: 80
  ingressClassName: nginx
</code></pre>
|
<p>I have a Google Cloud Composer 1 environment (Airflow 2.1.2) where I want to run an Airflow DAG that utilizes the <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html" rel="nofollow noreferrer">KubernetesPodOperator</a>.</p>
<p>Cloud Composer <a href="https://cloud.google.com/composer/docs/concepts/cloud-storage#folders_in_the_bucket" rel="nofollow noreferrer">makes available</a> to all DAGs a shared file directory for storing application data. The files in the directory reside in a Google Cloud Storage bucket managed by Composer. Composer uses FUSE to map the directory to the path <code>/home/airflow/gcs/data</code> on all of its Airflow worker pods.</p>
<p>In my DAG I run several Kubernetes pods like so:</p>
<pre class="lang-py prettyprint-override"><code> from airflow.contrib.operators import kubernetes_pod_operator
# ...
splitter = kubernetes_pod_operator.KubernetesPodOperator(
task_id='splitter',
name='splitter',
namespace='default',
image='europe-west1-docker.pkg.dev/redacted/splitter:2.3',
cmds=["dotnet", "splitter.dll"],
)
</code></pre>
<p>The application code in all the pods that I run needs to read from and write to the <code>/home/airflow/gcs/data</code> directory. But when I run the DAG my application code is unable to access the directory. Likely this is because Composer has mapped the directory into the worker pods but does not extend this courtesy to my pods.</p>
<p>What do I need to do to give my pods r/w access to the <code>/home/airflow/gcs/data</code> directory?</p>
<p>Cloud Composer uses FUSE to mount certain directories from Cloud Storage into Airflow worker pods running in Kubernetes. It mounts these with default permissions that cannot be overwritten, because that metadata is not tracked by Google Cloud Storage. A possible solution is to use a bash operator that runs at the beginning of your DAG to copy the files to a new directory. Another possible solution is to use a non-Google-Cloud-Storage path like a <code>/pod</code> path.</p>
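<p>A sketch of that copy step as the first task in the DAG - the destination path here is illustrative, not prescribed:</p>
<pre class="lang-py prettyprint-override"><code>from airflow.operators.bash import BashOperator

# copy the shared data out of the FUSE-mounted directory before the pods run
copy_gcs_data = BashOperator(
    task_id="copy_gcs_data",
    bash_command="cp -r /home/airflow/gcs/data /home/airflow/gcs/data_copy",
)
</code></pre>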
|
<p>When working with Helm charts (generated by <code>helm create <name></code>) and specifying a Docker image in values.yaml, such as the image "kubernetesui/dashboard:v2.4.0" in which the exposed ports are written as <code>EXPOSE 8443 9090</code>, I found it hard to know how to properly specify these ports in the actual Helm chart files and was wondering if anyone could explain a bit further on the topic.</p>
<p>By my understanding, <code>EXPOSE 8443 9090</code> means that hostPort "8443" maps to containerPort "9090". In that case it seems clear that service.yaml should specify the ports in a manner similar to the following:</p>
<pre><code>spec:
type: {{ .Values.service.type }}
ports:
- port: 8443
targetPort: 9090
</code></pre>
<p>The deployment.yaml file however only comes with the field "containerPort" and no port field for the 8443 port (as you can see below) Should I add some field here in the deployment.yaml to include port 8443?</p>
<pre><code>spec:
template:
spec:
containers:
- name: {{ .Chart.Name }}
ports:
- name: http
containerPort: 9090
protocol: TCP
</code></pre>
<p>As of now, when I try to install the Helm charts, I get the error message <code>"Container image "kubernetesui/dashboard:v2.4.0" already present on machine"</code>, and I've heard that it means the ports in service.yaml are not configured to match the Docker image's exposed ports. I have tested this with a simpler Docker image which only exposes one port, and just adding the port everywhere made the error message go away, so it seems to be true, but I am still confused about how to do it with two exposed ports.</p>
<p>I would really appreciate some help, thank you in advance if you have any experience of this and is willing to share.</p>
| <p>A Docker image never gets to specify any host resources it will use. If the Dockerfile has <a href="https://docs.docker.com/engine/reference/builder/#expose" rel="nofollow noreferrer"><code>EXPOSE</code></a> with two port numbers, then both ports are exposed (where "expose" means almost nothing in modern Docker). That is: this line says the container listens on both port 8443 and 9090 without requiring any specific external behavior.</p>
<p>In your Kubernetes Pod spec (usually nested inside a Deployment spec), you'd then generally list both ports as <code>containerPorts:</code>. Again, this doesn't really say anything about how a Service uses it.</p>
<pre class="lang-yaml prettyprint-override"><code># inside templates/deployment.yaml
ports:
- name: http
containerPort: 9090
protocol: TCP
- name: https
containerPort: 8443
protocol: TCP
</code></pre>
<p>Then in the corresponding Service, you'd republish either or both ports.</p>
<pre class="lang-yaml prettyprint-override"><code># inside templates/service.yaml
spec:
type: {{ .Values.service.type }}
ports:
- port: 80 # default HTTP port
targetPort: http # matching name: in pod, could also use 9090
- port: 443 # default HTTP/TLS port
targetPort: https # matching name: in pod, could also use 8443
</code></pre>
<p>I've chosen to publish the unencrypted and TLS-secured ports on their "normal" HTTP ports, and to bind the service to the pod using the port names.</p>
<p>None of this setup is Helm-specific; in this the only Helm template reference is the Service <code>type:</code> (for if the operator needs to publish a <code>NodePort</code> or <code>LoadBalancer</code> service).</p>
|
<p>I am trying to explore Vault Enterprise, but I am getting permission denied for the sidecar when I use Vault Enterprise, while it seems to work fine when I use a local Vault server.</p>
<p>Here is the repository that contains a working example with a local Vault: <a href="https://github.com/Adiii717/vault-sidecar-injector-app" rel="nofollow noreferrer">vault-sidecar-injector-app</a></p>
<p><a href="https://i.stack.imgur.com/oBZxm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oBZxm.png" alt="enter image description here" /></a></p>
<p><strong>Vault config</strong></p>
<pre><code>export VAULT_ADDR="https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200"
export VAULT_NAMESPACE="admin"
#install agent
helm upgrade --install vault hashicorp/vault --set "injector.externalVaultAddr=$VAULT_ADDR"
vault auth enable kubernetes
# get certs & host
VAULT_HELM_SECRET_NAME=$(kubectl get secrets --output=json | jq -r '.items[].metadata | select(.name|startswith("vault-token-")).name')
TOKEN_REVIEW_JWT=$(kubectl get secret $VAULT_HELM_SECRET_NAME --output='go-template={{ .data.token }}' | base64 --decode)
KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')
# set Kubernetes config
vault write auth/kubernetes/config \
token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$KUBE_CA_CERT" \
issuer="https://kubernetes.default.svc.cluster.local" \
disable_iss_validation="true" \
disable_local_ca_jwt="true"
vault auth enable approle
# create admin policy
vault policy write admin admin-policy.hcl
vault write auth/approle/role/admin policies="admin"
vault read auth/approle/role/admin/role-id
# generate secret
vault write -f auth/approle/role/admin/secret-id
#Enable KV
vault secrets enable -version=2 kv
</code></pre>
<p>I can see the role and policy
<a href="https://i.stack.imgur.com/DGOnr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DGOnr.png" alt="enter image description here" /></a></p>
<p><strong>Admin policy</strong></p>
<p>Here is the admin policy for the enterprise</p>
<pre><code>path "*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
</code></pre>
<p><strong>Deploy Script for helm</strong></p>
<p>Here is the deploy script; I tried the <code>hcp-root</code> root policy but had no luck.</p>
<pre class="lang-sh prettyprint-override"><code>RELEASE_NAME=demo-managed
NAMESPACE=default
ENVIRONMENT=develop
export role_id="f9782a53-823e-2c08-81ae-abc"
export secret_id="1de3b8c5-18c7-60e3-24ca-abc"
export VAULT_ADDR="https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200"
export VAULT_TOKEN=$(vault write -field="token" auth/approle/login role_id="${role_id}" secret_id="${secret_id}")
vault write auth/kubernetes/role/${NAMESPACE}-${RELEASE_NAME} bound_service_account_names=${RELEASE_NAME} bound_service_account_namespaces=${NAMESPACE} policies=hcp-root ttl=1h
helm upgrade --install $RELEASE_NAME ../helm-chart --set environment=$ENVIRONMENT --set nameOverride=$RELEASE_NAME
</code></pre>
<p>also tried with root token</p>
<pre class="lang-sh prettyprint-override"><code>RELEASE_NAME=demo-managed
NAMESPACE=default
ENVIRONMENT=develop
vault write auth/kubernetes/role/${NAMESPACE}-${RELEASE_NAME} bound_service_account_names=${RELEASE_NAME} bound_service_account_namespaces=${NAMESPACE} policies=hcp-root ttl=1h
helm upgrade --install $RELEASE_NAME ../helm-chart --set environment=$ENVIRONMENT --set nameOverride=$RELEASE_NAME
</code></pre>
<p><strong>Sidecar config</strong></p>
<p>With the namespace annotation; as per my understanding, the namespace is required:</p>
<blockquote>
<p>vault.hashicorp.com/namespace - configures the Vault Enterprise namespace to be used when requesting secrets from Vault.</p>
</blockquote>
<p><a href="https://www.vaultproject.io/docs/platform/k8s/injector/annotations" rel="nofollow noreferrer">https://www.vaultproject.io/docs/platform/k8s/injector/annotations</a></p>
<pre><code>vault.hashicorp.com/namespace : "admin"
</code></pre>
<p><strong>Error</strong></p>
<pre><code> | Error making API request.
|
| URL: PUT https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200/v1/admin/auth/kubernetes/login
| Code: 403. Errors:
|
| * permission denied
</code></pre>
<p>Without the namespace annotation I get the error below:</p>
<pre><code> | URL: PUT https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200/v1/auth/kubernetes/login
| Code: 400. Errors:
|
| * missing client token
</code></pre>
<p>Even enabling debug logs <code>vault.hashicorp.com/log-level : "debug"</code> does not help me with this error, any help or suggestions will be appreciated.</p>
<p>Also tried
<a href="https://support.hashicorp.com/hc/en-us/articles/4404389946387-Kubernetes-auth-method-Permission-Denied-error" rel="nofollow noreferrer">https://support.hashicorp.com/hc/en-us/articles/4404389946387-Kubernetes-auth-method-Permission-Denied-error</a></p>
<p>So it seems like I am missing something very specific to Vault Enterprise.</p>
| <p>Finally I was able to resolve this weird issue with Vault; posting it as an answer as it might help someone else.</p>
<p>The only thing I had missed was understanding the flow between the Vault server, the sidecar, and Kubernetes.</p>
<p><a href="https://i.stack.imgur.com/qHDWm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qHDWm.png" alt="enter image description here" /></a></p>
<p>The Kubernetes API server must be reachable from Vault Enterprise for token review API calls. As you can see, when the sidecar makes a request to Vault, the Vault Enterprise server performs a token review API call back against the cluster.</p>
<blockquote>
<p>Use the /config endpoint to configure <strong>Vault to talk to Kubernetes</strong>. Use kubectl cluster-info to validate the <strong>Kubernetes host address</strong> and TCP port.</p>
</blockquote>
<p><a href="https://www.vaultproject.io/docs/auth/kubernetes" rel="nofollow noreferrer">https://www.vaultproject.io/docs/auth/kubernetes</a></p>
<pre><code>
| Error making API request.
|
| URL: PUT https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200/v1/admin/auth/kubernetes/login
| Code: 403. Errors:
|
| * permission denied
backoff=2.99s
</code></pre>
<p>This error does not look like a connectivity issue, but it also occurs when Vault is not able to communicate with the Kubernetes cluster.</p>
<h1>Kube Host</h1>
<pre><code>vault write auth/kubernetes/config \
token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$KUBE_CA_CERT" \
issuer="https://kubernetes.default.svc.cluster.local"
</code></pre>
<p><code>KUBE_HOST</code> must be reachable by Vault Enterprise for the token review process.</p>
<p>So for the vault to communicate with our cluster, we need a few changes.</p>
<pre><code>minikube start --apiserver-ips=14.55.145.30 --vm-driver=none
</code></pre>
<p>Now update the <code>vaul-config.sh</code> file</p>
<pre><code>KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')
</code></pre>
<p>change this to</p>
<pre><code>KUBE_HOST=""https://14.55.145.30:8443/"
</code></pre>
<p>No manual steps are needed; for the first-time configuration, run</p>
<pre><code>./vault-config.sh
</code></pre>
<p>and for the rest of the deployment in your CI/CD you can use</p>
<pre><code>./vault.sh
</code></pre>
<p>Each release is only able to access its own secrets.</p>
<p>Further details can be found at <a href="https://github.com/Adiii717/vault-sidecar-injector-app/blob/main/README.md#start-minikube-in-ec2" rel="nofollow noreferrer">start-minikube-in-ec2</a></p>
<p>TLDR,</p>
<p><strong>Note: the Kubernetes cluster must be reachable from Vault Enterprise for authentication, so Vault Enterprise will not be able to communicate with your local Minikube cluster. It is better to test it out on EC2.</strong></p>
|
<p>For my React App made with <code>create-react-app</code>, I want to use Kubernetes secrets as environment variables.</p>
<p>These secrets are used for different NodeJS containers in my cluster and they work just fine. I used a shell to echo the variables within the frontend container itself and they are there, but I realized that the environment variables are not loaded in the React app.</p>
<p>I have ensured that the environment variable keys start with <code>REACT_APP_</code>, so that is not the problem.</p>
<p>Here are some files:</p>
<p><code>frontend.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
selector:
matchLabels:
app: frontend
replicas: 1
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: build_link
ports:
- containerPort: 3000
envFrom:
- secretRef:
name: prod
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM node:17.0.1-alpine3.12
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# latest yarn
RUN npm i -g --force yarn serve
COPY package.json yarn.lock .
RUN yarn --frozen-lockfile
# Legacy docker settings cos node 17 breaks build
ENV NODE_OPTIONS=--openssl-legacy-provider
COPY . .
RUN yarn build
ENV NODE_ENV=production
EXPOSE 3000
CMD ["yarn", "prod"]
</code></pre>
<p>Kubernetes <code>prod</code> secret</p>
<pre><code>kubectl describe secret prod
Name: prod
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
REACT_APP_DOCS_SERVER_URL: 41 bytes
REACT_APP_AUTH0_CLIENT_ID: 32 bytes
REACT_APP_AUTH0_DOMAIN: 24 bytes
</code></pre>
<p>env of react app console logged in production (sorry i was desperate)</p>
<pre class="lang-yaml prettyprint-override"><code>FAST_REFRESH: true
NODE_ENV: "production"
PUBLIC_URL: ""
WDS_SOCKET_HOST: undefined
WDS_SOCKET_PATH: undefined
WDS_SOCKET_PORT: undefined
</code></pre>
| <p>The <code>build</code> command for <code>create-react-app</code> inlines the environment variables present at build time into the static bundle, so any environment variables injected later (at container run time, e.g. from Kubernetes secrets) are missing.</p>
<p>The fix I made involves removing the build step from the Dockerfile and running the build and serve steps as a single command instead, so the build happens at container start, after Kubernetes has injected the environment variables:
<code>package.json</code></p>
<pre class="lang-json prettyprint-override"><code>{
...
"scripts": {
"prod": "yarn build && serve build -l 3000",
"build": "react-scripts build"
}
}
</code></pre>
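<p>The corresponding Dockerfile change is then roughly this (a sketch based on the Dockerfile in the question; the key point is that the <code>RUN yarn build</code> step is gone):</p>
<pre><code>FROM node:17.0.1-alpine3.12
WORKDIR /usr/src/app
RUN npm i -g --force yarn serve
COPY package.json yarn.lock ./
RUN yarn --frozen-lockfile
ENV NODE_OPTIONS=--openssl-legacy-provider
COPY . .
ENV NODE_ENV=production
EXPOSE 3000
# "yarn prod" now builds (with the runtime env vars visible) and then serves
CMD ["yarn", "prod"]
</code></pre>
<p>The trade-off is a slower container start, since the bundle is rebuilt on every pod restart.</p>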
|
<p>I was given a docker compose file for superset which included volumes mounted from the repo itself.
<a href="https://github.com/apache/superset/blob/master/docker-compose-non-dev.yml" rel="nofollow noreferrer">docker-compose-non-dev.yml</a></p>
<p>I have to deploy this as containers in a pod in an EKS cluster. I can't figure out how the volumes should be done because the files are mounted locally from the repo when we run:</p>
<pre><code>docker-compose up
</code></pre>
<p>[ EDIT ]
I just built the container with the files I needed inside it.</p>
| <p>Docker Compose is a tool geared towards local deployments (as you may know) and so it optimizes its workflows with that assumption. One way to work around this is to wrap the docker image(s) that compose <code>up</code>s with the additional files you have in your local environment. For example, a wrapper <code>dockerfile</code> would be something like</p>
<pre><code>FROM <original image>
ADD <local files to new image>
</code></pre>
<p>The resulting image is what you would run in the cloud on EKS.</p>
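<p>For the superset compose file in the question, a concrete wrapper could look like this (the image tag and paths are assumptions; check the volume mappings in <code>docker-compose-non-dev.yml</code> for the actual ones):</p>
<pre><code># Hypothetical wrapper: verify the image tag and the mounted paths first
FROM apache/superset:latest
COPY ./docker /app/docker
</code></pre>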
<p>Of course there are many other ways to work around it, such as using Kubernetes volumes and (pre-)populating them with the local files, or baking the local files into the original image from the get-go, etc.</p>
<p>All in all the traditional compose model of thinking (with local file mappings) isn't very "cloud deployments friendly".</p>
|
<p>I have created a service like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: amen-sc
spec:
ports:
- name: http
port: 3030
targetPort: 8000
selector:
component: scc-worker
</code></pre>
<p>I am able to access this service, from within my pods of the same cluster (& Namespace), using the IP address I get from <code>kubectl get svc</code>, but I am not able to access using the service name like <code>curl amen-sc:3030</code>.</p>
<p>Please advise what could possibly be wrong.
I intend to expose certain pods, only within my cluster and access them using the <code>service-name:port</code> format.</p>
| <p>Make sure you have the cluster DNS service configured and its corresponding pods running:</p>
<pre class="lang-text prettyprint-override"><code>kubectl get svc -n kube-system -l k8s-app=kube-dns
</code></pre>
<p>and</p>
<pre class="lang-text prettyprint-override"><code>kubectl get pods -n kube-system -l k8s-app=kube-dns
</code></pre>
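<p>If the DNS pods are healthy, it is also worth trying the fully qualified service name; Kubernetes services resolve as:</p>
<pre class="lang-text prettyprint-override"><code><service>.<namespace>.svc.cluster.local
# e.g. curl http://amen-sc.<your-namespace>.svc.cluster.local:3030
</code></pre>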
|
<p>I need to setup integration tests in a Spring Boot project using Postgres, Redis and Elasticsearch (at some point later also Kafka will be added). So far I've found two options:</p>
<ol>
<li>H2 Database and Embedded Redis. This could work but we are using some Postgres specific functions in some of our queries so this would not allow to cover all the cases.</li>
<li><a href="https://www.testcontainers.org/" rel="nofollow noreferrer">Testcontainers</a>. This looked perfect but apparently it's not possible to run those in pipeline since we are using Kubernetes</li>
</ol>
<p>A third option requires a bit of manual work. It's definitely possible to make a Docker Compose file with all the services needed for testing for local usage, and let the infrastructure team build containers the way they want for running tests in a pipeline. But this way a developer would have to manually start/stop those containers when running tests locally.</p>
<p>Is there a way to automate this process? Or any options similar to Testcontainers that would work in Kubernetes? Or perhaps there is a way to run Testcontainers in Kubernetes?</p>
<p>Thank you in advance.</p>
| <p>You can run Testcontainers in Kubernetes-based builds. You just need a container runtime available in your build (container). This could be rootless Docker, and it is independent of the container runtime used by your Kubernetes cluster.</p>
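<p>As a sketch of what that looks like in a Spring Boot project (this assumes JUnit 5 plus the Testcontainers <code>postgresql</code> and <code>junit-jupiter</code> dependencies on the test classpath, and a Docker, or rootless Docker, socket reachable from the build container):</p>
<pre class="lang-java prettyprint-override"><code>import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class PostgresIntegrationTest {

    // A real PostgreSQL, so Postgres-specific SQL in your queries keeps working
    @Container
    static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:14-alpine");

    @Test
    void containerStarts() {
        // In a Spring Boot test, postgres.getJdbcUrl()/getUsername()/getPassword()
        // would typically be wired in via @DynamicPropertySource
        assertTrue(postgres.isRunning());
    }
}
</code></pre>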
|
<p>I was given a docker compose file for superset which included volumes mounted from the repo itself.
<a href="https://github.com/apache/superset/blob/master/docker-compose-non-dev.yml" rel="nofollow noreferrer">docker-compose-non-dev.yml</a></p>
<p>I have to deploy this as containers in a pod in an EKS cluster. I can't figure out how the volumes should be done because the files are mounted locally from the repo when we run:</p>
<pre><code>docker-compose up
</code></pre>
<p>[ EDIT ]
I just built the container with the files I needed inside it.</p>
| <p>You can convert <em>docker-compose.yaml</em> files with a tool called <a href="https://kompose.io/" rel="nofollow noreferrer"><code>kompose</code></a>.<br />
It's as easy as running</p>
<pre class="lang-text prettyprint-override"><code>kompose convert
</code></pre>
<p>in a directory containing <em>docker-ccompose.yaml</em> file.<br />
This will create a bunch of files which you can deploy with <code>kubectl apply -f .</code> (or <code>kompose up</code>). You can read more <a href="https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/" rel="nofollow noreferrer">here</a>.</p>
<p><strong>However</strong>, even though <code>kompose</code> will generate PersistentVolueClaim manifests, no <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolumes</a> will be created. You have to make those yourself (cluster may try to create PVs by itself, but it's strongly based on PVCs generated by <code>kompose</code>, I would not rely on that).</p>
<hr />
<p>Docker Compose is mainly used for development, testing and single host deployments <sup>[<a href="https://docs.docker.com/compose/#common-use-cases" rel="nofollow noreferrer">reference</a>]</sup>, which is not exactly what Kubernetes was created for (the latter being cloud oriented).</p>
|
<p>I have job failure alerts in Prometheus which resolve themselves exactly 2 hours after I get the alert, even though the alert is actually not resolved. How come Prometheus resolves it? Just so you know, this is only happening with this job alert.</p>
<p>Job Alert:</p>
<pre><code> - alert: Failed Job Status
expr: increase(kube_job_status_failed[30m]) > 0
for: 1m
labels:
severity: warning
annotations:
identifier: '{{ $labels.namespace }} {{ $labels.job_name }}'
description: '{{ $labels.namespace }} - {{ $labels.job_name }} Failed'
</code></pre>
<p>An example of the alert:</p>
<pre><code>At 3:01 pm
[FIRING:1] Failed Job Status @ <environment-name> <job-name>
<environment-name> - <job-name> Failed
At 5:01 pm
[RESOLVED]
Alerts Resolved:
- <environment-name> - <job-name>: <environment-name> - <job-name> Failed
</code></pre>
<p>Here are the related pods; as can be seen, nothing appears to be resolved.</p>
<p><a href="https://i.stack.imgur.com/P2Gaz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P2Gaz.png" alt="Scription here" /></a></p>
<p>Thanks for your help in advance!</p>
| <p><code>kube_job_status_failed</code> is a gauge representing the number of failed job pods at a given time. The expression <code>increase(kube_job_status_failed[30m]) > 0</code> asks the question: "have there been new failures in the last 30 minutes?" If there haven't, it won't be true, even if old failures remain in the Kubernetes API.</p>
<p>A refinement of this approach is <code>sum(rate(kube_job_status_failed[5m])) by (namespace, job_name) > 0</code>, plus an alert manager configuration to <em>not send resolved notices</em> for this alert. This is because a job pod failure is an event that can't be reversed - the job could be retried, but the pod can't be un-failed so resolution only means the alert has "aged out" or the pods have been deleted.</p>
<p>An expression that looks at the current number of failures recorded in the API server is <code>sum(kube_job_status_failed) by (namespace, job_name) > 0</code>. An alert based on this could be "resolved", but only by the <code>Job</code> objects being removed from the API (which doesn't necessarily mean that a process has succeeded...)</p>
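<p>A sketch of the "new failures" refinement described above (the receiver name and notifier type are assumptions; adapt them to your Alertmanager layout):</p>
<pre class="lang-yaml prettyprint-override"><code># Prometheus rule: fire only on newly observed failures
- alert: Failed Job Status
  expr: sum(rate(kube_job_status_failed[5m])) by (namespace, job_name) > 0
  for: 1m
  labels:
    severity: warning

# Alertmanager receiver: suppress "resolved" notifications for this route
receivers:
  - name: job-failure-warnings
    slack_configs:
      - channel: '#alerts'
        send_resolved: false
</code></pre>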
|
<p>I have been learning Kubernetes for a few weeks and now I am trying to figure out the right way to connect a web server to a <code>statefulset</code> correctly.</p>
<p>Let's say I deployed a master-slave Postgres <code>statefulset</code> and now I will connect my web server to it. By using a cluster IP service, the requests will be load balanced across the master and the slaves for both reading (<code>SELECT</code>) and writing (<code>UPDATE</code>, <code>INSERT</code>, <code>DELETE</code>) records, right? But I can't do that because writing requests should be handled by the master. However, when I point my web server to the master using the headless service that gives us a DNS entry for each pod, I won't get any load balancing to the other slave replicas and all of the requests will be handled by one instance, the master. So how am I supposed to connect them the right way, getting both load balancing across the replicas for reads and forwarding of write requests to the master?</p>
<p>Should I use two endpoints in the web server and configure them in writing and reading records?</p>
<p>Or maybe I am using headless services and <code>statefulsets</code> the wrong way since I am new to Kubernetes?</p>
| <p>Well, your thinking is correct - the master should be read-write and replicas should be read only. How to configure it properly? There are different possible approaches.</p>
<hr />
<p>The first approach is what you are thinking about: set up two headless services, one for accessing the primary instances and the second one for accessing the replica instances. A good example is <a href="https://www.kubegres.io/doc/getting-started.html" rel="nofollow noreferrer">Kubegres</a>:</p>
<blockquote>
<p>In this example, Kubegres created 2 Kubernetes Headless services (of default type ClusterIP) using the name defined in YAML (e.g. "mypostgres"):</p>
<ul>
<li>a Kubernetes service "mypostgres" allowing to access to the Primary PostgreSql instances</li>
<li>a Kubernetes service "mypostgres-replica" allowing to access to the Replica PostgreSql instances</li>
</ul>
</blockquote>
<p>Then you will have two endpoints:</p>
<blockquote>
<p>Consequently, a client app running inside a Kubernetes cluster, would use the hostname "mypostgres" to connect to the Primary PostgreSql for read and write requests, and optionally it can also use the hostname "mypostgres-replica" to connect to any of the available Replica PostgreSql for read requests.</p>
</blockquote>
<p>Check <a href="https://www.kubegres.io/doc/getting-started.html" rel="nofollow noreferrer">this starting guide for more details</a>.</p>
<p>It's worth noting that there are many database solutions which use this approach; another example is MySQL. <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">Here is a good article in the Kubernetes documentation about setting up MySQL using a StatefulSet</a>.</p>
<p>Another approach is to use some <a href="https://devopscube.com/deploy-postgresql-statefulset/" rel="nofollow noreferrer">middleware component which will act as a gatekeeper to the cluster, for example Pg-Pool</a>:</p>
<blockquote>
<p>Pg pool is a middleware component that sits in front of the Postgres servers and acts as a gatekeeper to the cluster.<br />
It mainly serves two purposes: Load balancing & Limiting the requests.</p>
</blockquote>
<blockquote>
<ol>
<li><strong>Load Balancing:</strong> Pg pool takes connection requests and queries. It analyzes the query to decide where the query should be sent.</li>
<li>Read-only queries can be handled by read-replicas. Write operations can only be handled by the primary server. In this way, it loads balances the cluster.</li>
<li><strong>Limits the requests:</strong> Like any other system, Postgres has a limit on no. of concurrent connections it can handle gracefully.</li>
<li>Pg-pool limits the no. of connections it takes up and queues up the remaining. Thus, gracefully handling the overload.</li>
</ol>
</blockquote>
<p>Then you will have one endpoint for all operations - the Pg-Pool service. Check <a href="https://devopscube.com/deploy-postgresql-statefulset/" rel="nofollow noreferrer">this article for more details, including the whole setup process</a>.</p>
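<p>With the two-service approach, the application then simply carries two connection strings; a hypothetical config following the Kubegres service names above:</p>
<pre class="lang-yaml prettyprint-override"><code># "mypostgres" handles reads and writes (primary),
# "mypostgres-replica" handles reads only
datasources:
  write:
    url: jdbc:postgresql://mypostgres:5432/mydb
  read:
    url: jdbc:postgresql://mypostgres-replica:5432/mydb
</code></pre>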
|
<p>I installed Prometheus on my Kubernetes cluster with Helm, using the community chart <a href="https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack" rel="nofollow noreferrer">kube-prometheus-stack</a> - and I get some beautiful dashboards in the bundled Grafana instance. I now wanted the recommender from the Vertical Pod Autoscaler to use Prometheus as a data source for historic metrics, <a href="https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md#how-can-i-use-prometheus-as-a-history-provider-for-the-vpa-recommender" rel="nofollow noreferrer">as described here</a>. Meaning, I had to make a change to the Prometheus scraper settings for cAdvisor, and <a href="https://stackoverflow.com/a/65421764/310937">this answer</a> pointed me in the right direction, as after making that change I can now see the correct <code>job</code> tag on metrics from cAdvisor.</p>
<p>Unfortunately, now some of the charts in the Grafana dashboards are broken. It looks like it no longer picks up the CPU metrics - and instead just displays "No data" for the CPU-related charts.</p>
<p>So, I assume I have to tweak the charts to be able to pick up the metrics correctly again, but I don't see any obvious places to do this in Grafana?</p>
<p>Not sure if it is relevant for the question, but I am running my Kubernetes cluster on Azure Kubernetes Service (AKS).</p>
<p>This is the full <code>values.yaml</code> I supply to the Helm chart when installing Prometheus:</p>
<pre class="lang-yaml prettyprint-override"><code>kubeControllerManager:
enabled: false
kubeScheduler:
enabled: false
kubeEtcd:
enabled: false
kubeProxy:
enabled: false
kubelet:
serviceMonitor:
# Diables the normal cAdvisor scraping, as we add it with the job name "kubernetes-cadvisor" under additionalScrapeConfigs
# The reason for doing this is to enable the VPA to use the metrics for the recommender
# https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/FAQ.md#how-can-i-use-prometheus-as-a-history-provider-for-the-vpa-recommender
cAdvisor: false
prometheus:
prometheusSpec:
retention: 15d
storageSpec:
volumeClaimTemplate:
spec:
# the azurefile storage class is created automatically on AKS
storageClassName: azurefile
accessModes: ["ReadWriteMany"]
resources:
requests:
storage: 50Gi
additionalScrapeConfigs:
- job_name: 'kubernetes-cadvisor'
scheme: https
metrics_path: /metrics/cadvisor
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
</code></pre>
<p>Kubernetes version: 1.21.2</p>
<p>kube-prometheus-stack version: 18.1.1</p>
<p>helm version: version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}</p>
| <p>Unfortunately, I don't have access to Azure AKS, so I've reproduced this issue on my GKE cluster. Below I'll provide some explanations that may help to resolve your problem.</p>
<p>First you can try to execute this <code>node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate</code> rule to see if it returns any result:</p>
<p><a href="https://i.stack.imgur.com/UZ4uk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UZ4uk.png" alt="enter image description here" /></a></p>
<p>If it doesn't return any records, please read the following paragraphs.</p>
<h2>Creating a scrape configuration for cAdvisor</h2>
<p>Rather than creating a completely new scrape configuration for cadvisor, I would suggest using one that is generated by default when <code>kubelet.serviceMonitor.cAdvisor: true</code>, but with a few modifications such as changing the label to <code>job=kubernetes-cadvisor</code>.</p>
<p>In my example, the 'kubernetes-cadvisor' scrape configuration looks like this:</p>
<p><strong>NOTE:</strong> I added this config under the <code>additionalScrapeConfigs</code> in the <code>values.yaml</code> file (the rest of the <code>values.yaml</code> file may be like yours).</p>
<pre><code>- job_name: 'kubernetes-cadvisor'
honor_labels: true
honor_timestamps: true
scrape_interval: 30s
scrape_timeout: 10s
metrics_path: /metrics/cadvisor
scheme: https
authorization:
type: Bearer
credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecure_skip_verify: true
follow_redirects: true
relabel_configs:
- source_labels: [job]
separator: ;
regex: (.*)
target_label: __tmp_prometheus_job_name
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
separator: ;
regex: kubelet
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_service_label_k8s_app]
separator: ;
regex: kubelet
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: https-metrics
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Node;(.*)
target_label: node
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
separator: ;
regex: Pod;(.*)
target_label: pod
replacement: ${1}
action: replace
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: service
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_container_name]
separator: ;
regex: (.*)
target_label: container
replacement: $1
action: replace
- separator: ;
regex: (.*)
target_label: endpoint
replacement: https-metrics
action: replace
- source_labels: [__metrics_path__]
separator: ;
regex: (.*)
target_label: metrics_path
replacement: $1
action: replace
- source_labels: [__address__]
separator: ;
regex: (.*)
modulus: 1
target_label: __tmp_hash
replacement: $1
action: hashmod
- source_labels: [__tmp_hash]
separator: ;
regex: "0"
replacement: $1
action: keep
kubernetes_sd_configs:
- role: endpoints
kubeconfig_file: ""
follow_redirects: true
namespaces:
names:
- kube-system
</code></pre>
<h3>Modifying Prometheus Rules</h3>
<p>By default, Prometheus rules fetching data from cAdvisor use <code>job="kubelet"</code> in their PromQL expressions:</p>
<p><a href="https://i.stack.imgur.com/oNFnF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oNFnF.png" alt="enter image description here" /></a></p>
<p>After changing <code>job=kubelet</code> to <code>job=kubernetes-cadvisor</code>, we also need to modify this label in the Prometheus rules:<br />
<strong>NOTE:</strong> We just need to modify the rules that have <code>metrics_path="/metrics/cadvisor</code> (these are rules that retrieve data from cAdvisor).</p>
<pre><code>$ kubectl get prometheusrules prom-1-kube-prometheus-sta-k8s.rules -o yaml
...
- name: k8s.rules
rules:
- expr: |-
sum by (cluster, namespace, pod, container) (
irate(container_cpu_usage_seconds_total{job="kubernetes-cadvisor", metrics_path="/metrics/cadvisor", image!=""}[5m])
) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
1, max by(cluster, namespace, pod, node) (kube_pod_info{node!=""})
)
record: node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate
...
here we have a few more rules to modify...
</code></pre>
<p>After modifying Prometheus rules and waiting some time, we can see if it works as expected. We can try to execute <code>node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate</code> as in the beginning.</p>
<p>Additionally, let's check out our Grafana to make sure it has started displaying our dashboards correctly:
<a href="https://i.stack.imgur.com/Z7LRc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z7LRc.png" alt="enter image description here" /></a></p>
|
<p>As stated in the title, I currently have a configuration with 2 ingress-nginx v1.0.0 on gke v1.20.10.</p>
<p>When I deploy one alone, the configuration works and I have no issue, but when I deploy the second one with its validatingwebhook and then try to deploy an ingress, the 2 validatingwebhooks both try to evaluate the newly created ingress.</p>
<p>This results in this error:</p>
<pre><code>**Error from server (InternalError): error when creating "ingress-example.yaml": Internal error occurred: failed calling webhook "validate.nginx-public.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission-public.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": x509: certificate is valid for ingress-nginx-controller-admission-private, ingress-nginx-controller-admission-private.ingress-nginx.svc, not ingress-nginx-controller-admission-public.ingress-nginx.svc**
</code></pre>
<p>I checked and everything seems to be correctly separated. My validatingwebhook is deployed like this; the {{ ingress_type }} is a placeholder for -public or -private:</p>
<pre><code>---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: ingress-nginx{{ ingress_type }}
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission{{ ingress_type }}
webhooks:
- name: validate.nginx{{ ingress_type }}.ingress.kubernetes.io
matchPolicy: Equivalent
objectSelector:
matchLabels:
ingress-nginx : nginx{{ ingress_type }}
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
sideEffects: None
admissionReviewVersions:
- v1
clientConfig:
service:
namespace: ingress-nginx
name: ingress-nginx-controller-admission{{ ingress_type }}
path: /networking/v1/ingresses
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: ingress-nginx{{ ingress_type }}
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission{{ ingress_type }}
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx{{ ingress_type }}
</code></pre>
<p>I can't seem to find a solution. There is an old GitHub issue about this with no answer. Maybe I'm doing something wrong, but I just can't see it.</p>
<p>As asked in the comments, here is the example ingress I'm trying to deploy; it works perfectly fine with only one ingress controller, but not with two:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx-private
# external-dns.alpha.kubernetes.io/target: "IP"
labels:
ingress-nginx : nginx-public
spec:
rules:
- host: hello.MYDOMAINHERE
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
</code></pre>
| <p>So, for those who may encounter this error:</p>
<p>I tried different things before finding what was wrong. You have to rename all the labels except the ingress-nginx version; I did not think it would break over so little, but it does. In the end I'm using something like this:</p>
<pre><code>---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: ingress-nginx{{ ingress_type }}
app.kubernetes.io/instance: ingress-nginx{{ ingress_type }}
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/component: admission-webhook{{ ingress_type }}
name: ingress-nginx-admission{{ ingress_type }}
webhooks:
- name: validate.nginx{{ ingress_type }}.ingress.kubernetes.io
matchPolicy: Equivalent
objectSelector:
matchLabels:
ingress-nginx : nginx{{ ingress_type }}
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
sideEffects: None
admissionReviewVersions:
- v1
clientConfig:
service:
namespace: ingress-nginx
name: ingress-nginx-controller-admission{{ ingress_type }}
path: /networking/v1/ingresses
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: ingress-nginx{{ ingress_type }}
app.kubernetes.io/instance: ingress-nginx{{ ingress_type }}
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/component: controller{{ ingress_type }}
name: ingress-nginx-controller-admission{{ ingress_type }}
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx{{ ingress_type }}
</code></pre>
<p>I think in this case it's really important to apply the same renaming consistently across all the resources.</p>
|
<p>I'm maintaining a Kubernetes cluster which includes two PostgreSQL servers in two different pods, a primary and a replica. The replica is sync'ed from the primary via log shipping.</p>
<p>A glitch caused the log shipping to start failing so the replica is no longer in sync with the primary.</p>
<p>The process for bringing a replica back into sync with the primary requires, amongst other things, stopping the postgres service of the replica. And this is where I'm having trouble.</p>
<p>It appears that Kubernetes is restarting the container as soon as I shut down the postgres service, which immediately restarts postgres again. I need the container running with the postgres service inside it stopped, to allow me to perform the next steps in fixing the broken replication.</p>
<p>How can I get Kubernetes to allow me to shut down the postgres service without restarting the container?</p>
<p><strong>Further Details:</strong></p>
<p>To stop the replica I run a shell on the replica pod via <code>kubectl exec -it <pod name> -- /bin/sh</code>, then run <code>pg_ctl stop</code> from the shell. I get the following response:</p>
<pre><code>server shutting down
command terminated with exit code 137
</code></pre>
<p>and I'm kicked out of the shell.</p>
<p>When I run <code>kubectl describe pod</code> I see the following:</p>
<pre><code>Name: pgset-primary-1
Namespace: qa
Priority: 0
Node: aks-nodepool1-95718424-0/10.240.0.4
Start Time: Fri, 09 Jul 2021 13:48:06 +1200
Labels: app=pgset-primary
controller-revision-hash=pgset-primary-6d7d65c8c7
name=pgset-replica
statefulset.kubernetes.io/pod-name=pgset-primary-1
Annotations: <none>
Status: Running
IP: 10.244.1.42
IPs:
IP: 10.244.1.42
Controlled By: StatefulSet/pgset-primary
Containers:
pgset-primary:
Container ID: containerd://bc00b4904ab683d9495ad020328b5033ecb00d19c9e5b12d22de18f828918455
Image: *****/crunchy-postgres:centos7-9.6.8-1.6.0
Image ID: docker.io/*****/crunchy-postgres@sha256:2850e00f9a619ff4bb6ff889df9bcb2529524ca8110607e4a7d9e36d00879057
Port: 5432/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 06 Nov 2021 18:29:34 +1300
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 06 Nov 2021 18:28:09 +1300
Finished: Sat, 06 Nov 2021 18:29:18 +1300
Ready: True
Restart Count: 6
Limits:
cpu: 250m
memory: 512Mi
Requests:
cpu: 10m
memory: 256Mi
Environment:
PGHOST: /tmp
PG_PRIMARY_USER: primaryuser
PG_MODE: set
PG_PRIMARY_HOST: pgset-primary
PG_REPLICA_HOST: pgset-replica
PG_PRIMARY_PORT: 5432
[...]
ARCHIVE_TIMEOUT: 60
MAX_WAL_KEEP_SEGMENTS: 400
Mounts:
/backrestrepo from backrestrepo (rw)
/pgconf from pgbackrestconf (rw)
/pgdata from pgdata (rw)
/var/run/secrets/kubernetes.io/serviceaccount from pgset-sa-token-nh6ng (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pgdata:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pgdata-pgset-primary-1
ReadOnly: false
backrestrepo:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: backrestrepo-pgset-primary-1
ReadOnly: false
pgbackrestconf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: pgbackrest-configmap
Optional: false
pgset-sa-token-nh6ng:
Type: Secret (a volume populated by a Secret)
SecretName: pgset-sa-token-nh6ng
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 88m (x3 over 3h1m) kubelet Back-off restarting failed container
Normal Pulled 88m (x7 over 120d) kubelet Container image "*****/crunchy-postgres:centos7-9.6.8-1.6.0" already present on machine
Normal Created 88m (x7 over 120d) kubelet Created container pgset-primary
Normal Started 88m (x7 over 120d) kubelet Started container pgset-primary
</code></pre>
<p>The events suggest the container was started by Kubernetes.</p>
<p>The pod has no liveness or readiness probes so I don't know what would prompt Kubernetes to restart the container when I shut down the postgres service running within it.</p>
| <p>This happens due to the pod's <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restartPolicy</a>. When you stop postgres, the container's main process exits, the kubelet treats the container as terminated, and it is restarted according to that policy. If you do not want a new container to be created, you need to change the restart policy for these pods.</p>
<p>If this pod is part of a Deployment, look into <code>kubectl explain deployment.spec.template.spec.restartPolicy</code></p>
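<p>For illustration, here is a minimal sketch of a standalone maintenance pod (the pod name is hypothetical; the image is the one from your <code>kubectl describe</code> output) whose container is left down once its process exits:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: pg-maintenance    # hypothetical name
spec:
  restartPolicy: Never    # the kubelet will not restart the container after it exits
  containers:
  - name: postgres
    image: *****/crunchy-postgres:centos7-9.6.8-1.6.0
</code></pre>
<p>Note that pod templates in Deployments and StatefulSets only allow <code>restartPolicy: Always</code>, so for a controller-managed pod like <code>pgset-primary-1</code> a common workaround is to temporarily override the container command (for example with <code>sleep infinity</code>) so the container keeps running while postgres itself is stopped.</p>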
|
<p>I want to use <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">https://github.com/kubernetes/client-go</a> to copy a file from my file system to a container and vice versa.</p>
<pre><code>kubectl cp <file-spec-src> <file-spec-dest> -c <specific-container>
</code></pre>
<p>Is there a function in the go client that wraps the calls? Or can I use something like the <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#CoreV1Interface.RESTClient" rel="nofollow noreferrer">RESTClient</a>?</p>
| <p>Since the answer to this is pretty old, here is how I did it:</p>
<pre><code>package main
import (
"bytes"
"fmt"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/cli-runtime/pkg/genericclioptions"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"k8s.io/kubectl/pkg/cmd/cp"
)
type PodExec struct {
RestConfig *rest.Config
*kubernetes.Clientset
}
func NewPodExec(config rest.Config, clientset *kubernetes.Clientset) *PodExec {
config.APIPath = "/api" // Make sure we target /api and not just /
config.GroupVersion = &schema.GroupVersion{Version: "v1"} // this targets the core api groups so the url path will be /api/v1
config.NegotiatedSerializer = serializer.WithoutConversionCodecFactory{CodecFactory: scheme.Codecs}
return &PodExec{
RestConfig: &config,
Clientset: clientset,
}
}
func (p *PodExec) PodCopyFile(src string, dst string, containername string) (*bytes.Buffer, *bytes.Buffer, *bytes.Buffer, error) {
ioStreams, in, out, errOut := genericclioptions.NewTestIOStreams()
copyOptions := cp.NewCopyOptions(ioStreams)
copyOptions.Clientset = p.Clientset
copyOptions.ClientConfig = p.RestConfig
copyOptions.Container = containername
err := copyOptions.Run([]string{src, dst})
if err != nil {
return nil, nil, nil, fmt.Errorf("Could not run copy operation: %v", err)
}
return in, out, errOut, nil
}
</code></pre>
<p>You can then use <code>PodCopyFile</code> just like <code>kubectl cp</code>:</p>
<pre><code>podExec := podexec.NewPodExec(*restconfig, clientset) // Here, you need to get your restconfig and clientset from either ~/.kube/config or built-in pod config.
_, out, _, err := podExec.PodCopyFile("/srcfile", "/dstfile", "containername")
if err != nil {
fmt.Printf("%v\n", err)
}
fmt.Println("out:")
fmt.Printf("%s", out.String())
</code></pre>
|
<p>I will briefly describe my application workflow: I have one application (a cronjob). This application reads my database and, based on the output from the database, I want to run a few jobs in Kubernetes. Sometimes 1 job, sometimes 10 jobs, it depends. Additionally, I would like to pass some environment variables to these jobs.</p>
<p>How Can I do that in the most proper way? Probably I should use K8s API but are there any other options?</p>
| <p>Have you considered using Tekton?</p>
<p>Their EventListener would allow you to trigger jobs through some HTTP endpoint, and may allow you to set environment variables based on your payload.</p>
<p>See:</p>
<ul>
<li><a href="https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md" rel="nofollow noreferrer">https://github.com/tektoncd/pipeline/blob/main/docs/pipelines.md</a></li>
<li><a href="https://github.com/tektoncd/pipeline/blob/main/docs/tasks.md" rel="nofollow noreferrer">https://github.com/tektoncd/pipeline/blob/main/docs/tasks.md</a></li>
<li><a href="https://github.com/tektoncd/triggers/blob/main/docs/eventlisteners.md" rel="nofollow noreferrer">https://github.com/tektoncd/triggers/blob/main/docs/eventlisteners.md</a></li>
</ul>
|
<p>I'm trying to use helm from my github actions runner to deploy to my GKE cluster but I'm running into a permissions error.</p>
<p>Using a google cloud service account for authentication</p>
<p><strong>GitHub Actions CI step</strong></p>
<pre><code> - name: Install gcloud cli
uses: google-github-actions/setup-gcloud@master
with:
version: latest
project_id: ${{ secrets.GCLOUD_PROJECT_ID }}
service_account_email: ${{ secrets.GCLOUD_SA_EMAIL }}
service_account_key: ${{ secrets.GCLOUD_SA_KEY }}
export_default_credentials: true
- name: gcloud configure
run: |
gcloud config set project ${{secrets.GCLOUD_PROJECT_ID}};
gcloud config set compute/zone ${{secrets.GCLOUD_COMPUTE_ZONE}};
gcloud container clusters get-credentials ${{secrets.GCLOUD_CLUSTER_NAME}};
- name: Deploy
run: |
***
helm upgrade *** ./helm \
--install \
--debug \
--reuse-values \
--set-string "$overrides"
</code></pre>
<p><strong>The error</strong></p>
<pre><code>history.go:56: [debug] getting history for release blog
Error: query: failed to query with labels: secrets is forbidden: User "***" cannot list resource "secrets" in API group "" in the namespace "default": requires one of ["container.secrets.list"] permission(s).
helm.go:88: [debug] secrets is forbidden: User "***" cannot list resource "secrets" in API group "" in the namespace "default": requires one of ["container.secrets.list"] permission(s).
</code></pre>
| <p>It seems you're trying to deploy code by using the GKE <strong>viewer role</strong>, hence you're getting the permission issue. You can create the required <strong>IAM policies</strong> and <strong>role-based access control (RBAC)</strong> as per your <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/access-control" rel="nofollow noreferrer">requirements</a>.</p>
<p>You can also check the Kubernetes Engine roles and the permissions they grant by using this <a href="https://cloud.google.com/iam/docs/understanding-roles#kubernetes-engine-roles" rel="nofollow noreferrer">reference</a>.</p>
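<p>As a hedged sketch (substitute your own project ID and service account; <code>roles/container.developer</code> is one predefined role that includes the <code>container.secrets.*</code> permissions the error mentions):</p>
<pre class="lang-sh prettyprint-override"><code>gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:MY_SA@MY_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/container.developer"
</code></pre>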
|
<p>I have been playing with Digital Ocean's new managed Kubernetes service. I have created a new cluster using Digital Ocean's dashboard and, seemingly, successfully deployed my yaml file (attached). </p>
<p>running in context <code>kubectl get services</code></p>
<pre><code>NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
api-svc      NodePort    XX.XXX.XXX.XXX   <none>        8080:30000/TCP   2h
kubernetes   ClusterIP   XX.XXX.X.X       <none>        443/TCP          2h
</code></pre>
<p>My question is, <strong>how do I go about exposing my service without a load balancer?</strong></p>
<p>I have been able to do this locally using minikube. To get the cluster IP I run <code>minikube ip</code> and use port number <code>30000</code>, as specified in my nodePort config, to reach the <code>api-svc</code> service. </p>
<p>From what I understand, Digital Ocean's managed service abstracts the master node away. So where would I find the public IP address to access my cluster?</p>
<p>Thank you in advance!</p>
<p><strong>my yaml file for reference</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: regcred
data:
.dockerconfigjson: <my base 64 key>
type: kubernetes.io/dockerconfigjson
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: api-deployment
labels:
app: api-deployment
spec:
replicas: 1
strategy: {}
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
image: <my-dockerhub-user>/api:latest
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
name: api-svc
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
nodePort: 30000
protocol: TCP
selector:
app: api
</code></pre>
| <p>Slightly more detailed answer: DigitalOcean manages firewall rules for your NodePort services automatically, so once you expose the service, the NodePort is automatically open to public traffic from all worker nodes in your cluster. See <a href="https://docs.digitalocean.com/products/kubernetes/resources/managed/#worker-node-firewalls" rel="nofollow noreferrer">docs</a></p>
<p>To find the public IP of any of your worker nodes, execute the following <code>doctl</code> commands:</p>
<pre class="lang-sh prettyprint-override"><code># Get the first worker node from the first node-pool of your cluster
NODE_NAME=$(doctl kubernetes cluster node-pool get <cluster-name> <pool-name> -o json | jq -r '.[0].nodes[0].name')
WORKER_NODE_IP=$(doctl compute droplet get $NODE_NAME --template '{{.PublicIPv4}}')
</code></pre>
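<p>With the <code>nodePort</code> from the question (<code>30000</code>), the service should then be reachable directly on any worker node (sketch):</p>
<pre class="lang-sh prettyprint-override"><code>curl "http://${WORKER_NODE_IP}:30000"
</code></pre>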
|
<p>My Terraform code describes some AWS infrastructure to build a Kubernetes cluster including some deployments into the cluster. When I try to destroy the infrastructure using <code>terraform plan -destroy</code> I get a cycle:</p>
<pre><code>module.eks_control_plane.aws_eks_cluster.this[0] (destroy)
module.eks_control_plane.output.cluster
provider.kubernetes
module.aws_auth.kubernetes_config_map.this[0] (destroy)
data.aws_eks_cluster_auth.this[0] (destroy)
</code></pre>
<p>Destroying the infrastructure by hand using just <code>terraform destroy</code> works fine. Unfortunately, Terraform Cloud uses <code>terraform plan -destroy</code> to plan the destruction first, which causes this to fail. Here is the relevant code:</p>
<p>excerpt from eks_control_plane module:</p>
<pre><code>resource "aws_eks_cluster" "this" {
count = var.enabled ? 1 : 0
name = var.cluster_name
role_arn = aws_iam_role.control_plane[0].arn
version = var.k8s_version
# https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
enabled_cluster_log_types = var.control_plane_log_enabled ? var.control_plane_log_types : []
vpc_config {
security_group_ids = [aws_security_group.control_plane[0].id]
subnet_ids = [for subnet in var.control_plane_subnets : subnet.id]
}
tags = merge(var.tags,
{
}
)
depends_on = [
var.dependencies,
aws_security_group.node,
aws_iam_role_policy_attachment.control_plane_cluster_policy,
aws_iam_role_policy_attachment.control_plane_service_policy,
aws_iam_role_policy.eks_cluster_ingress_loadbalancer_creation,
]
}
output "cluster" {
value = length(aws_eks_cluster.this) > 0 ? aws_eks_cluster.this[0] : null
}
</code></pre>
<p>aws-auth Kubernetes config map from aws_auth module:</p>
<pre><code>resource "kubernetes_config_map" "this" {
count = var.enabled ? 1 : 0
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data = {
mapRoles = jsonencode(
concat(
[
{
rolearn = var.node_iam_role.arn
username = "system:node:{{EC2PrivateDNSName}}"
groups = [
"system:bootstrappers",
"system:nodes",
]
}
],
var.map_roles
)
)
}
depends_on = [
var.dependencies,
]
}
</code></pre>
<p>Kubernetes provider from root module:</p>
<pre><code>data "aws_eks_cluster_auth" "this" {
count = module.eks_control_plane.cluster != null ? 1 : 0
name = module.eks_control_plane.cluster.name
}
provider "kubernetes" {
version = "~> 1.10"
load_config_file = false
host = module.eks_control_plane.cluster != null ? module.eks_control_plane.cluster.endpoint : null
cluster_ca_certificate = module.eks_control_plane.cluster != null ? base64decode(module.eks_control_plane.cluster.certificate_authority[0].data) : null
token = length(data.aws_eks_cluster_auth.this) > 0 ? data.aws_eks_cluster_auth.this[0].token : null
}
</code></pre>
<p>And this is how the modules are called:</p>
<pre><code>module "eks_control_plane" {
source = "app.terraform.io/SDA-SE/eks-control-plane/aws"
version = "0.0.1"
enabled = local.k8s_enabled
cluster_name = var.name
control_plane_subnets = module.vpc.private_subnets
k8s_version = var.k8s_version
node_subnets = module.vpc.private_subnets
tags = var.tags
vpc = module.vpc.vpc
dependencies = concat(var.dependencies, [
# Ensure that VPC including all security group rules, network ACL rules,
# routing table entries, etc. is fully created
module.vpc,
])
}
# aws-auth config map module. Creating this config map will allow nodes and
# Other users to join the cluster.
# CNI and CSI plugins must be set up before creating this config map.
# Enable or disable this via `aws_auth_enabled` variable.
# TODO: Add Developer and other roles.
module "aws_auth" {
source = "app.terraform.io/SDA-SE/aws-auth/kubernetes"
version = "0.0.0"
enabled = local.aws_auth_enabled
node_iam_role = module.eks_control_plane.node_iam_role
map_roles = [
{
rolearn = "arn:aws:iam::${var.aws_account_id}:role/Administrator"
username = "admin"
groups = [
"system:masters",
]
},
{
rolearn = "arn:aws:iam::${var.aws_account_id}:role/Terraform"
username = "terraform"
groups = [
"system:masters",
]
}
]
}
</code></pre>
<p>Removing the aws_auth config map, which means not using the Kubernetes provider at all, breaks the cycle. The problem is obviously that Terraform tries to destroy the Kubernetes cluster, which is required by the Kubernetes provider. Manually removing the resources step by step using multiple <code>terraform apply</code> steps works fine, too.</p>
<p>Is there a way that I can tell Terraform first to destroy all Kubernetes resources so that the Provider is not required anymore, then destroy the EKS cluster?</p>
| <p>You can control the order of destruction using the <code>depends_on</code> meta-argument, like you did with some of your Terraform code.</p>
<p>If you add the <code>depends_on</code> argument to all of the resources that need to be destroyed first, and make them depend on the <code>eks-cluster</code>, Terraform will destroy those resources before the cluster.</p>
<p>You can also visualize your configuration and its dependencies with the <code>terraform graph</code> command to help you decide which dependencies need to be declared.</p>
<p><a href="https://www.terraform.io/docs/cli/commands/graph.html" rel="nofollow noreferrer">https://www.terraform.io/docs/cli/commands/graph.html</a></p>
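<p>For example, to render the destroy-plan dependency graph and highlight cycles (a sketch; assumes Graphviz's <code>dot</code> is installed):</p>
<pre class="lang-sh prettyprint-override"><code>terraform graph -type=plan-destroy -draw-cycles | dot -Tsvg > graph.svg
</code></pre>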
|
<p>I had found a definition, when reviewing a project, that intrigued me: the use of Ingress with a Service of type LoadBalancer without installing the load balancer itself on <a href="https://kind.sigs.k8s.io/docs/user/loadbalancer/" rel="nofollow noreferrer">Kind</a>. As defined, it seems to work, but I don't know why. I say that it works because I can curl the application and open it in the browser as expected.</p>
<p>Code:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server
labels:
app: server
spec:
replicas: 1
selector:
matchLabels:
app: server
template:
metadata:
labels:
app: server
spec:
containers:
- name: server
image: localhost:5000/server:2.0
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: server
name: server
spec:
type: LoadBalancer
selector:
app: server
ports:
- port: 8080
targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: server
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: server
port:
number: 8080
</code></pre>
<p>I thought that hitting a Ingress and then the LoadBalancer would cause some redirect issue. My choice for that case would be a simple ClusterIP instead, <a href="https://stackoverflow.com/a/41565042/6328506">this</a> answer agrees. <strong>If MetalLB is not installed on the cluster, what the LoadBalancer do here?</strong> <a href="https://stackoverflow.com/a/69230856/6328506">This</a> may have a point for wanting something like that, but as far as I know, the load balancer by itself is a resource outside the cluster, i.e: a cloud provider load balancer.</p>
| <blockquote>
<p>If MetalLB is not installed on the cluster, what the LoadBalancer do
here?</p>
</blockquote>
<p><strong>MetalLB</strong> is a <code>load-balancer</code> implementation for <strong>bare metal</strong> <strong>Kubernetes</strong> clusters. So MetalLB comes into the picture only if you are on bare metal or something similar.</p>
<p>If you are on a managed cloud, a <strong>LoadBalancer</strong> will get auto-created when you declare the type in the <strong>YAML</strong> config, without MetalLB.</p>
<blockquote>
<p>what the LoadBalancer do here?</p>
</blockquote>
<p>Since the type is set in the YAML config, Kubernetes will try to provision a LoadBalancer for that service; on Kind, with no implementation installed, the service's external IP simply stays pending while it keeps working like a NodePort/ClusterIP service.</p>
<p>Yes, you can use <strong>ClusterIP</strong> instead of creating the <strong>LoadBalancer</strong> service. Change the <code>type</code> to <code>ClusterIP</code>, or remove the field from the Service entirely (ClusterIP is the default).</p>
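<p>A sketch of the Service from the question with the <code>type</code> removed (it then defaults to ClusterIP):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: server
  name: server
spec:
  selector:
    app: server
  ports:
  - port: 8080
    targetPort: 8080
</code></pre>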
<p>Ideal <strong>traffic</strong> flow goes like</p>
<pre><code>Internet > LoadBalancer > Ingress > ingress controller (Backend) > service > deployment > Pods > container
</code></pre>
<p>Yes, the LB is a resource of the <strong>cloud provider</strong> if you are using a managed cluster.</p>
|
<p>I am installing gitlab with helm, I have created the certificate in kubernetes with the below commands.</p>
<p>However, when I run the helm install and enter the address in the browser (<strong>ip-address.nip.io</strong>), I still get the "Your connection is not private" message. I have installed the certificate into my Trusted Root Certificate Authority store.</p>
<p>When I inspect the certificate details from the browser, i see that it still shows the <strong>Subject Alternative Name -> DNS Name=ingress.local</strong></p>
<p>I am unable to reach my endpoint <strong>ip-address.nip.io</strong> from the browser, and that is the goal.</p>
<p><strong>K8s certificate</strong></p>
<pre><code>cat <<EOF | cfssl genkey - | cfssljson -bare server
{
"hosts": [
"<ip-address>.nip.io",
"registry.<ip-address>.nip.io",
"gitlab.<ip-address>.nip.io",
"minio.<ip-address>.nip.io"
],
"CN": "<ip-address>.nip.io",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: <ip-address>.nip.io
spec:
request: $(cat server.csr | base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- server auth
EOF
kubectl certificate approve <ip-address>.nip.io
kubectl get csr <ip-address>.nip.io -o jsonpath='{.status.certificate}' | base64 --decode > server.crt
kubectl create secret tls <secret-name> --key server-key.pem --cert server.crt
</code></pre>
| <p>Check that your ingress <strong>hosts</strong> config and <strong>secrets</strong> are properly set:</p>
<pre><code>spec:
tls:
- secretName: cert-secret
hosts:
- app.dev.example.com <---- this entry should match below
- secretName: dev-wildcard-tls-cert
hosts:
- "*.app.example.com" <---- this entry should match below
rules:
- host: app.dev.example.com <---- this entry should match
http:
paths:
- path: /
backend:
serviceName: service-2
servicePort: 80
- host: "*.app.example.com" <---- this entry should match
http:
paths:
- path: /
backend:
serviceName: service-1
servicePort: 80
</code></pre>
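<p>You can also check which certificate the controller is actually serving for a host (a sketch; replace the host with your real one):</p>
<pre class="lang-sh prettyprint-override"><code>echo | openssl s_client -connect app.dev.example.com:443 -servername app.dev.example.com 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
</code></pre>
<p>If the SAN list does not contain your host, the controller is falling back to a default certificate, which usually points at a host/secret mismatch like the one above.</p>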
|
<p>I have a problem mounting 2 files in a pod; one is being treated as a directory for some reason (maybe a stupid reason, but I looked and looked and couldn't find a solution).</p>
<p>in my <code>config</code> folder there's 2 files:</p>
<pre><code>config
|- log4j.properties
|- server.main.properties
</code></pre>
<p>Running <code>StatefulSet</code>, here's the Volumes part of the manifest file:</p>
<pre><code>containers:
...
volumeMounts:
- mountPath: /usr/local/confluent/config/log4j.properties
name: log4j-properties
subPath: log4j.properties
- mountPath: /usr/local/confluent/config/server.properties
name: server-properties
subPath: server.properties
restartPolicy: Always
volumes:
- name: log4j-properties
configMap:
name: log4j-properties
defaultMode: 0777
- name: server-properties
configMap:
name: server-properties
defaultMode: 0777
volumeClaimTemplates:
- metadata:
name: confluent-persistent-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>Created ConfigMaps:</p>
<pre><code>kubectl create configmap server-properties --from-file=config/server.main.properties
kubectl create configmap log4j-properties --from-file=config/log4j.properties
</code></pre>
<p><code>kubectl describe pod</code> gives mounted volumes as:</p>
<pre><code> Volumes:
confluent-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: confluent-persistent-storage-confluent-0
ReadOnly: false
server-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: server-properties
Optional: false
log4j-properties:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: log4j-properties
Optional: false
</code></pre>
<p>It's being initialized for 5-6 minutes, and in the logs I can see that server.properties is not a file but a folder; when I exec into the pod, I can see that an actual folder has been created instead of a file. What am I doing wrong here?</p>
| <blockquote>
<p>subPath: server.properties</p>
</blockquote>
<p>Wouldn't you want to use it as below?</p>
<blockquote>
<p>subPath: server.main.properties</p>
</blockquote>
|
<p>How can I make <code>Skaffold</code> forward privileged/protected/special ports which have numbers below <code>1024</code>? In my <code>skaffold.yaml</code> I added:</p>
<pre class="lang-yaml prettyprint-override"><code>portForward:
- resourceType: service
resourceName: foo
port: 80
localPort: 80
</code></pre>
<p>It works fine for all unprotected ports, but in case of port <code>80</code>, <code>Skaffold</code> automatically picks another unprotected port instead of <code>80</code>.</p>
<p>According to the documentation <code>Skaffold</code> runs <code>kubectl port-forward</code> on each of user-defined ports, so I granted the <code>kubectl</code> binary the capability to open privileged ports with this command <code>sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/kubectl</code>.</p>
<p>Everything works fine when directly running <code>kubectl port-forward services/foo 80:80</code>, but when I run <code>skaffold dev --port-forward</code> it still picks another unprotected port.</p>
<p>I have been using <code>Skaffold v1.28.1</code> with <code>Minikube v1.22.0</code> on <code>Ubuntu 20.04</code>.</p>
| <p>I solved the issue by granting the capability to open privileged ports for both <code>skaffold</code> and <code>kubectl</code> binaries:</p>
<pre class="lang-sh prettyprint-override"><code>sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/local/bin/kubectl;
sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/local/bin/skaffold;
</code></pre>
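<p>To confirm the capability was applied to both binaries (sketch):</p>
<pre class="lang-sh prettyprint-override"><code>getcap /usr/local/bin/kubectl /usr/local/bin/skaffold
</code></pre>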
|
<p>I deployed mysql using this command:</p>
<pre><code>helm install --namespace ns-mysql project-mysql --version 8.8.8
</code></pre>
<p>In the future, when I'll need to change the three passwords: mysql-password, mysql-replication-password and mysql-root-password, is there a standard way to do that with helm?</p>
<p>NOTES:</p>
<ul>
<li>The passwords have been saved into the secrets of EKS</li>
</ul>
| <p>You need to use the ordinary MySQL commands to change the passwords. Running <code>helm upgrade</code> with the changed passwords isn't a bad idea (especially if you have other components that are getting the password from the same Secret) but it can't on its own actually change the password.</p>
<p>The problem here is that the password is stored inside the MySQL data. Kubernetes knows that data is in a PersistentVolumeClaim, but it doesn't have a way to read or modify the data, and you can't directly change the password at the filesystem level. Even if that was possible, Kubernetes API calls also can't directly access the PVC contents. That means Kubernetes has no way to change the password, and Helm can't do anything Kubernetes can't do.</p>
|
<p>Facing this error <strong>error: there was a problem with the editor "vi"</strong> when trying to change a deployment file in the Kubernetes cluster (AKS)</p>
<p><code>kubectl edit deploy xxxxx -o yaml</code></p>
<p>upon exiting the file, the below error is thrown with no changes applied.</p>
<p>This issue occurs only when trying to edit this particular deployment. Seems to work fine for other file changes</p>
<p>Few screenshots...</p>
<p><a href="https://i.stack.imgur.com/Fgrhs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fgrhs.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/1SJBL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1SJBL.png" alt="enter image description here" /></a></p>
| <p>The issue is that kubectl is using vi instead of vim.</p>
<p>To resolve, run the following (or add it to your <code>~/.bashrc</code> or <code>~/.bash_profile</code>):</p>
<p><code>export EDITOR=vim</code></p>
<p>Then run your kubectl edit command.</p>
<p>Enjoy.</p>
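<p>kubectl also honors the more specific <code>KUBE_EDITOR</code> environment variable, which overrides <code>EDITOR</code> for kubectl only; for example:</p>

```shell
# KUBE_EDITOR takes precedence over EDITOR for kubectl edit
export KUBE_EDITOR=vim
echo "kubectl edit will use: $KUBE_EDITOR"
```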
|
<p>I am currently using command <code>curl -sfL https://get.k3s.io</code> to download k3s. The script <strong><a href="https://get.k3s.io/" rel="nofollow noreferrer">get.k3s.io</a></strong> is using third party internet calls to URLs:</p>
<pre><code>GITHUB_URL=https://github.com/k3s-io/k3s/releases
STORAGE_URL=https://storage.googleapis.com/k3s-ci-builds
</code></pre>
<p>I want to avoid the above calls to URLs and want same k3s package to be stored locally in my personal github project location and download/read it from there. Can some one let me know the steps/procedure to edit this script or download and store k3s locally?</p>
| <p>The K3s GitHub page has instructions for <a href="https://github.com/k3s-io/k3s#manual-download" rel="nofollow noreferrer">Manual Download</a>. You just download the appropriate version from <a href="https://github.com/k3s-io/k3s/releases" rel="nofollow noreferrer">releases</a> and use it like any other binary.</p>
<p>The downloaded binary is not executable by default; make it executable before using it:</p>
<pre class="lang-text prettyprint-override"><code>chmod +x k3s
</code></pre>
<p>If you want <code>k3s</code> to be available system-wide, put it in a standard location (e.g. <code>/usr/local/bin</code>):</p>
<pre class="lang-text prettyprint-override"><code>sudo mv k3s /usr/local/bin
</code></pre>
<p>If you skip the above step, replace <code>k3s</code> with <code>./k3s</code> in the steps below.</p>
<pre class="lang-text prettyprint-override"><code>sudo k3s server &
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get nodes
# On a different node run the below. NODE_TOKEN comes from
# /var/lib/rancher/k3s/server/node-token on your server
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
</code></pre>
<p><sup>[<a href="https://github.com/k3s-io/k3s#manual-download" rel="nofollow noreferrer">source</a>]</sup></p>
<p>Alternatively you can <a href="https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository" rel="nofollow noreferrer">clone the repository</a>, and replace the URLs in the script with your repo. I'm not sure, however, how well it would work.</p>
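<p>Replacing the URLs could be sketched like this (the mirror URLs below are hypothetical placeholders; substitute your own repo):</p>

```shell
# Recreate the two relevant lines of the installer for demonstration;
# in practice you would run sed against the downloaded get.k3s.io script.
cat > install.sh <<'EOF'
GITHUB_URL=https://github.com/k3s-io/k3s/releases
STORAGE_URL=https://storage.googleapis.com/k3s-ci-builds
EOF

# Point both URLs at a hypothetical mirror
sed -i \
  -e 's#https://github.com/k3s-io/k3s/releases#https://github.com/yourorg/k3s-mirror/releases#' \
  -e 's#https://storage.googleapis.com/k3s-ci-builds#https://mirror.example.com/k3s-ci-builds#' \
  install.sh

cat install.sh
```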
<p>The <code>STORAGE_URL</code> variable is used to download a specific commit build. For it to be used, the <code>INSTALL_K3S_COMMIT</code> environment variable must be set beforehand. You shouldn't need to be concerned with it unless you are a developer or QA.</p>
|
<p>We want to use <a href="https://hub.tekton.dev/tekton/task/buildpacks" rel="nofollow noreferrer">the official Tekton buildpacks task</a> from Tekton Hub to run our builds using Cloud Native Buildpacks. The <a href="https://buildpacks.io/docs/tools/tekton/" rel="nofollow noreferrer">buildpacks documentation for Tekton</a> tells us to install the <code>buildpacks</code> & <code>git-clone</code> Task from Tekton Hub, create <code>Secret</code>, <code>ServiceAccount</code>, <code>PersistentVolumeClaim</code> and <a href="https://buildpacks.io/docs/tools/tekton/#43-pipeline" rel="nofollow noreferrer">a Tekton <code>Pipeline</code></a>.</p>
<p>As the configuration is parameterized, we don't want to start our Tekton pipelines using a huge kubectl command but instead configure the <code>PipelineRun</code> using a separate <code>pipeline-run.yml</code> YAML file (<a href="https://buildpacks.io/docs/tools/tekton/#5-create--apply-pipelinerun" rel="nofollow noreferrer">as also stated in the docs</a>) containing the references to the <code>ServiceAccount</code>, workspaces, image name and so on:</p>
<pre><code>apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
name: buildpacks-test-pipeline-run
spec:
serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
pipelineRef:
name: buildpacks-test-pipeline
workspaces:
- name: source-workspace
subPath: source
persistentVolumeClaim:
claimName: buildpacks-source-pvc
- name: cache-workspace
subPath: cache
persistentVolumeClaim:
claimName: buildpacks-source-pvc
params:
- name: image
value: <REGISTRY/IMAGE NAME, eg gcr.io/test/image > # This defines the name of output image
</code></pre>
<p>Now running the Tekton pipeline once is no problem using <code>kubectl apply -f pipeline-run.yml</code>. But how can we restart or reuse this YAML-based configuration for all the other pipelines runs?</p>
| <p>The <a href="https://github.com/tektoncd/cli/" rel="nofollow noreferrer">tkn CLI</a> has a <code>--use-pipelinerun</code> switch on the <code>tkn pipeline start</code> command; it reuses the params/workspaces from that PipelineRun and creates a new one, effectively "restarting" it.</p>
<p>So, to "restart" the PipelineRun <code>pr1</code>, which belongs to the pipeline <code>p1</code>, you would run:</p>
<p><code>tkn pipeline start p1 --use-pipelinerun pr1</code></p>
<p>Maybe there should be a command with an easier name; I kicked off a discussion some time ago, so feel free to contribute feedback:</p>
<p><a href="https://github.com/tektoncd/cli/issues/1091" rel="nofollow noreferrer">https://github.com/tektoncd/cli/issues/1091</a></p>
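<p>Alternatively, if you want to stay purely YAML-based, a common pattern (a sketch, not from the buildpacks docs) is to replace <code>metadata.name</code> with <code>metadata.generateName</code> and create a fresh <code>PipelineRun</code> for every run with <code>kubectl create</code> (note: <code>kubectl apply</code> does not work with <code>generateName</code>):</p>

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-  # unique suffix appended per run
spec:
  serviceAccountName: buildpacks-service-account
  pipelineRef:
    name: buildpacks-test-pipeline
  # workspaces and params exactly as in the question
```

<p>Each <code>kubectl create -f pipeline-run.yml</code> then starts a new run instead of being rejected as a duplicate name.</p>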
|
<p>I'm new with Kubernetes, i'm testing with Minikube locally. I need some advice with Kubernetes's horizontal scaling.<br />
In the following scenario :</p>
<ul>
<li>Cluster composed of only 1 node</li>
<li>There is only 1 pod on this node</li>
<li>Only one application running on this pod</li>
</ul>
<p>Is there a benefit of deploying new pod <strong>on this node only</strong> to scale my application ?<br />
If i understand correctly, pod are sharing the system's resources. So if i deploy 2 pods instead of 1 on the same node, there will be no performance increase.</p>
<p>There will be no availability increase either, because if the node fails, the two pods will also shut.</p>
<p>Am i right about my two previous statements ?</p>
<p>Thanks</p>
| <p>Yes, you are right. Pods on the same node share the node's CPU and memory, and all of them go down together in the event of a node failure.</p>
<p>But you also need to consider it at the pod level. There can be situations where the pod itself fails while the node is working fine; in such cases, multiple replicas keep the application available.
From a performance perspective, more pods can also serve requests in parallel, reducing latency, as long as the node has spare capacity.</p>
|
<p>There is a utility called <strong>nsenter</strong> in ubuntu. nsenter is a small tool allowing to enter into namespaces. It will enter into your docker container. I want to control the host machine from the docker container. How do I connect the host machine from the container using the nsenter utility?</p>
| <p><code>nsenter</code> allows you to join the Linux namespaces of a targeted process id (PID).</p>
<p>First, run a container that shares your host's PID namespace with <code>--pid=host</code>. The container has to be privileged with <code>--privileged</code>, otherwise executing <code>nsenter</code> will fail with an "Operation not permitted" error. The container is kept running indefinitely by executing <code>tail -f /dev/null</code>.</p>
<pre><code>docker run --pid=host --privileged --name admin-container ubuntu:latest tail -f /dev/null
</code></pre>
<p>Then exec into the container with <code>nsenter</code>, entering the mount (file system), UTS, IPC and network namespaces of the host machine's init process (PID 1):</p>
<pre><code>docker exec -it admin-container nsenter --target 1 --mount --uts --ipc --net /bin/bash
</code></pre>
<p>Have a look around and you will notice, you are on the host machine.</p>
|
<p>I was trying to upgrade from bitnami pg image 11 -> version 14. When trying to do so i was prompted with the following error:</p>
<p><code>The data directory was initialized by PostgreSQL version 11, which is not compatible with this version 14.0</code></p>
<p>In order to get around this, I created a new postgres deployment with a new PVC, used pgdump to take a backup of the data and imported it within the new postgres deployment which is running version 14.</p>
<p>However I'm going to need to repeat this process on larger databases which terabytes of data and don't think pgdump is going to be sufficient.</p>
<p>With the bitnami image is it possible to use the likes of pg_upgrade?</p>
| <p>Yeah, backing up and restoring the way you did is always a good option with a new <strong>PVC</strong> and <strong>PV</strong>.</p>
<p><code>pg_dump</code> &amp; <code>pg_restore</code> are robust native options; you can use <code>-j</code> to run multiple <strong>jobs</strong> in parallel when migrating the data (note that <code>pg_dump -j</code> requires the directory output format, <code>-Fd</code>, and that a plain <code>pg_dump | psql</code> pipe cannot be parallelized).</p>
<p>To migrate terabytes of data you will need good network bandwidth and a scalable approach.</p>
<p>It's not clear how you are running these instances or replicas, but you can do something like:</p>
<p>Create a <strong>new</strong> Helm release of <strong>Postgres</strong> while the <strong>old</strong> one is still running</p>
<p>Migrate the data</p>
<pre><code>kubectl exec -it new-helm-db-postgresql-0 -- bash -c 'export PGPASSWORD=${POSTGRES_PASSWORD}; time pg_dump -h old-postgresql -U postgres | psql -U postgres'
</code></pre>
<p>As noted above, <code>-j</code> only helps with directory-format dumps and restores; parallelism will also increase the pod's resource and disk usage when migrating terabytes of data.</p>
<p>You can also refer : <a href="https://www.citusdata.com/blog/2021/02/20/faster-data-migrations-in-postgres/" rel="nofollow noreferrer">https://www.citusdata.com/blog/2021/02/20/faster-data-migrations-in-postgres/</a></p>
<p>I would suggest using <strong>AWS DMS</strong> if you are on a managed cloud.</p>
<p>Otherwise, set up a <strong>VM</strong> with a <a href="https://github.com/urbica/pg-migrate" rel="nofollow noreferrer">Migration Tool</a> and migrate the data from the old Postgres cluster to the new one.</p>
|
<p>Afaik, the K8s <code>NetworkPolicy</code> can only allow pods matching a label to do something. I do not want to:</p>
<ul>
<li>Deny all traffic</li>
<li>Allow traffic for all pods except the ones matching my label</li>
</ul>
<p>but instead:</p>
<ul>
<li>Allow all traffic</li>
<li>Deny traffic for pods matching my label</li>
</ul>
<p>How do I do that?</p>
<p>From <code>kubectl explain NetworkPolicy.spec.ingress.from</code>:</p>
<pre><code>DESCRIPTION:
List of sources which should be able to access the pods selected for this
rule. Items in this list are combined using a logical OR operation. If this
field is empty or missing, this rule matches all sources (traffic not
restricted by source). If this field is present and contains at least one
item, this rule allows traffic only if the traffic matches at least one
item in the from list.
</code></pre>
<p>As far as I understand this, we can only allow, not deny.</p>
| <p>As you mentioned in the comments, you are using the Kind tool to run Kubernetes. Instead of the <a href="https://github.com/aojea/kindnet" rel="nofollow noreferrer">kindnet CNI plugin</a> (Kind's default), which does not support <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Kubernetes network policies</a>, you can use the <a href="https://github.com/projectcalico/cni-plugin" rel="nofollow noreferrer">Calico CNI plugin</a>, which supports Kubernetes network policies and also has its own similar solution called <a href="https://docs.projectcalico.org/security/calico-network-policy" rel="nofollow noreferrer">Calico network policies</a>.</p>
<hr />
<p>Example - I will create a cluster with the default Kind CNI plugin disabled + <a href="https://stackoverflow.com/a/62433164/16391991">NodePort enabled</a> for testing (assuming that you have the <code>kind</code> + <code>kubectl</code> tools already installed):</p>
<p><em>kind-cluster-config.yaml</em> file:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
disableDefaultCNI: true # disable kindnet
podSubnet: 192.168.0.0/16 # set to Calico's default subnet
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30000
hostPort: 30000
listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
protocol: tcp # Optional, defaults to tcp
</code></pre>
<p>Time to create a cluster using the above config:</p>
<pre><code>kind create cluster --config kind-cluster-config.yaml
</code></pre>
<p>When the cluster is ready, I will install the Calico CNI plugin:</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
</code></pre>
<p>I will wait until all Calico pods are ready (check with <code>kubectl get pods -n kube-system</code>). Then I will create a sample <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">nginx deployment</a> + a NodePort Service for access:</p>
<p><em>nginx-deploy-service.yaml</em></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: NodePort
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30000
</code></pre>
<p>Let's apply it: <code>kubectl apply -f nginx-deploy-service.yaml</code></p>
<p>So far so good. Now I will try to access <code>nginx-service</code> using node IP (<code>kubectl get nodes -o wide</code> command to check node IP address):</p>
<pre><code>curl 172.18.0.2:30000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>Okay, it's working.</p>
<p>Now it's time to <a href="https://docs.projectcalico.org/getting-started/clis/calicoctl/install" rel="nofollow noreferrer">install <code>calicoctl</code></a> and apply an example policy - <a href="https://docs.projectcalico.org/security/tutorials/calico-policy" rel="nofollow noreferrer">based on this tutorial</a> - to block ingress traffic only for pods with the label <code>app: nginx</code>:</p>
<p><em>calico-rule.yaml</em>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
name: default-deny
spec:
selector: app == "nginx"
types:
- Ingress
</code></pre>
<p>Apply it:</p>
<pre><code>calicoctl apply -f calico-rule.yaml
Successfully applied 1 'GlobalNetworkPolicy' resource(s)
</code></pre>
<p>Now I can't reach the address <code>172.18.0.2:30000</code> which was working previously. The policy is working fine!</p>
<p>Read more about calico policies:</p>
<ul>
<li><a href="https://docs.projectcalico.org/security/calico-network-policy" rel="nofollow noreferrer">Get started with Calico network policy</a></li>
<li><a href="https://docs.projectcalico.org/security/tutorials/calico-policy" rel="nofollow noreferrer">Calico policy tutorial</a></li>
</ul>
<p>Also check <a href="https://github.com/kubernetes-sigs/kind/issues/842" rel="nofollow noreferrer">this GitHub topic</a> for more information about NetworkPolicy support in Kind.</p>
<p><strong>EDIT:</strong></p>
<p>It seems the Calico plugin <a href="https://docs.projectcalico.org/security/kubernetes-network-policy" rel="nofollow noreferrer">also supports standard Kubernetes NetworkPolicy</a>, so you can just install the Calico CNI plugin and apply the following policy:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: default-deny
spec:
podSelector:
matchLabels:
app: nginx
policyTypes:
- Ingress
</code></pre>
<p>I tested it and seems it's working fine as well.</p>
|
<p>I have created a custom <strong>alpine</strong> image (alpine-audit) which includes a <em>jar</em> file in the <em><strong>/tmp</strong></em> directory. What I need is to use that alpine-audit image as the <strong>initContainers</strong> base image and copy that <em>jar</em> file that I've included, to a location where the Pod container can access.</p>
<p>My yaml file is like below</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
initContainers:
- name: install
image: my.private-artifactory.com/audit/alpine-audit:0.1.0
command: ['cp', '/tmp/auditlogger-1.0.0.jar', 'auditlogger-1.0.0.jar']
volumeMounts:
- name: workdir
mountPath: "/tmp"
dnsPolicy: Default
volumes:
- name: workdir
emptyDir: {}
</code></pre>
<p>I think there is some mistake in the <code>command</code> line.
I assumed <em>initContainer</em> copy that jar file to <em>emptyDir</em> then the nginx based container can access that jar file using the <em>mountPath</em>.
But it does not even create the Pod. Can someone point me where it has gone wrong?</p>
| <p>When you are mounting a volume to a directory in a pod, that directory contains only the content of the volume. If you mount the <code>emptyDir</code> at <code>/tmp</code> in your <code>alpine-audit:0.1.0</code> image, the <code>/tmp</code> directory becomes empty. I would mount that volume on some other directory, like <code>/app</code>, then copy the <code>.jar</code> from <code>/tmp</code> to <code>/app</code>.</p>
<p>The container is probably not starting because the initContainer fails running the command.</p>
<p>Try this configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: init-demo
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: workdir
mountPath: /usr/share/nginx/html
initContainers:
- name: install
image: my.private-artifactory.com/audit/alpine-audit:0.1.0
command: ['cp', '/tmp/auditlogger-1.0.0.jar', '/app/auditlogger-1.0.0.jar'] # <--- copy from /tmp to new mounted EmptyDir
volumeMounts:
- name: workdir
mountPath: "/app" # <-- change the mount path not to overwrite your .jar
dnsPolicy: Default
volumes:
- name: workdir
emptyDir: {}
</code></pre>
|
<p>for days I'm trying to track down a weird behaviour concerning NodePort Services when running Kubernetes on openSUSE Leap 15.3.</p>
<p>For testing purposes on my own server I installed 3 VMs with openSUSE 15.3. With this article: <a href="https://stackoverflow.com/questions/62795930/how-to-install-kubernetes-in-suse-linux-enterprize-server-15-virtual-machines">How to install kubernetes in Suse Linux enterprize server 15 virtual machines?</a> I set up this Kubernetes Cluster:</p>
<pre><code>kubix01:~ # k get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubix01 Ready control-plane,master 25h v1.22.2 192.168.42.51 <none> openSUSE Leap 15.3 5.3.18-59.27-default docker://20.10.6-ce
kubix02 Ready <none> 25h v1.22.2 192.168.42.52 <none> openSUSE Leap 15.3 5.3.18-59.27-default docker://20.10.6-ce
kubix03 Ready <none> 25h v1.22.2 192.168.42.53 <none> openSUSE Leap 15.3 5.3.18-59.27-default docker://20.10.6-ce
</code></pre>
<p>For testing things out I made a new 3 Replica Deployment for a traefik/whoami Image with this yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: whoami
labels:
app: whoami
spec:
replicas: 3
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: traefik/whoami
ports:
- containerPort: 80
</code></pre>
<p>This results in three Pods spread over the 2 worker nodes as expected:</p>
<pre><code>kubix01:~/k8s/whoami # k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
whoami-8557b59f65-2qkvq 1/1 Running 2 (24h ago) 25h 10.244.2.7 kubix03 <none> <none>
whoami-8557b59f65-4wnmd 1/1 Running 2 (24h ago) 25h 10.244.1.6 kubix02 <none> <none>
whoami-8557b59f65-xhx5x 1/1 Running 2 (24h ago) 25h 10.244.1.7 kubix02 <none> <none>
</code></pre>
<p>After that I created a NodePort Service for making things available to the world with this yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: whoami
spec:
type: NodePort
selector:
app: whoami
ports:
- protocol: TCP
port: 8080
targetPort: 80
nodePort: 30080
</code></pre>
<p>This is the result:</p>
<pre><code>kubix01:~/k8s/whoami # k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
whoami NodePort 10.105.214.86 <none> 8080:30080/TCP 25h
kubix01:~/k8s/whoami # k describe svc whoami
Name: whoami
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=whoami
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.105.214.86
IPs: 10.105.214.86
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 30080/TCP
Endpoints: 10.244.1.6:80,10.244.1.7:80,10.244.2.7:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>So everything looks fine and I tested things out with curl:</p>
<ol>
<li>curl on one Cluster Node to PodIP:PodPort</li>
</ol>
<pre><code>kubix01:~/k8s/whoami # curl 10.244.1.6
Hostname: whoami-8557b59f65-4wnmd
IP: 127.0.0.1
IP: 10.244.1.6
RemoteAddr: 10.244.0.0:50380
GET / HTTP/1.1
Host: 10.244.1.6
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 10.244.1.7
Hostname: whoami-8557b59f65-xhx5x
IP: 127.0.0.1
IP: 10.244.1.7
RemoteAddr: 10.244.0.0:36062
GET / HTTP/1.1
Host: 10.244.1.7
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 10.244.2.7
Hostname: whoami-8557b59f65-2qkvq
IP: 127.0.0.1
IP: 10.244.2.7
RemoteAddr: 10.244.0.0:43924
GET / HTTP/1.1
Host: 10.244.2.7
User-Agent: curl/7.66.0
Accept: */*
</code></pre>
<p>==> Everything works as expected</p>
<ol start="2">
<li>curl on Cluster Node to services ClusterIP:ClusterPort:</li>
</ol>
<pre><code>kubix01:~/k8s/whoami # curl 10.105.214.86:8080
Hostname: whoami-8557b59f65-xhx5x
IP: 127.0.0.1
IP: 10.244.1.7
RemoteAddr: 10.244.0.0:1106
GET / HTTP/1.1
Host: 10.105.214.86:8080
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 10.105.214.86:8080
Hostname: whoami-8557b59f65-4wnmd
IP: 127.0.0.1
IP: 10.244.1.6
RemoteAddr: 10.244.0.0:9707
GET / HTTP/1.1
Host: 10.105.214.86:8080
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 10.105.214.86:8080
Hostname: whoami-8557b59f65-2qkvq
IP: 127.0.0.1
IP: 10.244.2.7
RemoteAddr: 10.244.0.0:25577
GET / HTTP/1.1
Host: 10.105.214.86:8080
User-Agent: curl/7.66.0
Accept: */*
</code></pre>
<p>==> Everything fine, Traffic is LoadBalanced to the different pods.</p>
<ol start="3">
<li>curl on Cluster Node to NodeIP:NodePort</li>
</ol>
<pre><code>kubix01:~/k8s/whoami # curl 192.168.42.51:30080
Hostname: whoami-8557b59f65-2qkvq
IP: 127.0.0.1
IP: 10.244.2.7
RemoteAddr: 10.244.0.0:5463
GET / HTTP/1.1
Host: 192.168.42.51:30080
User-Agent: curl/7.66.0
Accept: */*
kubix01:~/k8s/whoami # curl 192.168.42.52:30080
^C [NoAnswer]
kubix01:~/k8s/whoami # curl 192.168.42.53:30080
^C [NoAnswer]
</code></pre>
<p>==> NodePort Service is only working at the same Node, no answer from the other nodes</p>
<ol start="4">
<li>curl from another Network Host to NodeIP:NodePort</li>
</ol>
<pre><code>user@otherhost:~$ curl 192.168.42.51:30080
^C [NoAnswer]
user@otherhost:~$ curl 192.168.42.52:30080
^C [NoAnswer]
user@otherhost:~$ curl 192.168.42.53:30080
^C [NoAnswer]
</code></pre>
<p>==> Service is not reachable from the outside at all, no answer on all nodes</p>
<p>Has anybody an idea what is going wrong here?</p>
<p>Thx in advance</p>
<p>T0mcat</p>
<p>PS:
Additionally here a little image for clearing things a bit more. Red curved arrows are the non - working connections, gree curved arrows the working ones:</p>
<p><a href="https://i.stack.imgur.com/L95E7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L95E7.png" alt="diagram" /></a></p>
| <p>FINALLY found the solution:</p>
<p>This was weird behaviour of SUSE concerning the ip_forward sysctl settings. During OS installation the option "IPv4 forwarding" was activated in the installer. After the OS was installed, I additionally added "net.ipv4.ip_forward = 1" and
"net.ipv4.conf.all.forwarding = 1" in /etc/sysctl.conf.
Checking with "sysctl -a | grep ip_forward", both options were reported as activated, but this obviously wasn't the truth.
I then found the file "/etc/sysctl.d/70-yast.conf": here both options were set to 0, which obviously was what actually applied. But manually setting these options to 1 in this file still wasn't enough...
Using SUSE's YaST tool under "System/Network Settings" -> Routing, one can see the option "Enable IPv4 Forwarding", which was checked (remember: in 70-yast.conf these options were set to 0). After toggling the options in YaST so that 70-yast.conf got rewritten with both options set to 1, NodePort services are working as expected now.</p>
<p>Conclusion:</p>
<p>IMHO this seems to be a bug in SUSE... Activating "IPv4 Forwarding" during installation leads to a 70-yast.conf with the forwarding options disabled. On top of that, "sysctl -a" displays the wrong settings, as does YaST...
The solution is to toggle the setting in YaST's network settings so that 70-yast.conf gets rewritten with the ip_forward options activated.</p>
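<p>For reference, a quick way to check what the kernel is actually using, independent of what any config file claims (paths as on a typical Linux system):</p>

```shell
# The runtime value in /proc is authoritative
cat /proc/sys/net/ipv4/ip_forward        # 1 means forwarding is enabled

# List every config file that sets it; at boot, drop-in files can override
# what /etc/sysctl.conf says (as 70-yast.conf did here)
grep -rs ip_forward /etc/sysctl.conf /etc/sysctl.d/ || true
```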
|
<p>I started learning about Kubernetes and I installed minikube and kubectl on Windows 7.</p>
<p>After that I created a pod with command:</p>
<pre><code>kubectl run firstpod --image=nginx
</code></pre>
<p>And everything is fine:</p>
<p>[![enter image description here][1]][1]</p>
<p>Now I want to go inside the pod with this command: <code>kubectl exec -it firstpod -- /bin/bash</code> but it's not working and I have this error:</p>
<pre><code>OCI runtime exec failed: exec failed: container_linux.go:380: starting container
process caused: exec: "C:/Program Files/Git/usr/bin/bash.exe": stat C:/Program
Files/Git/usr/bin/bash.exe: no such file or directory: unknown
command terminated with exit code 126
</code></pre>
<p>How can I resolve this problem?</p>
<p>And another question is about this <code>firstpod</code> pod. With this command <code>kubectl describe pod firstpod</code> I can see information about the pod:</p>
<pre><code>Name: firstpod
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Mon, 08 Nov 2021 16:39:07 +0200
Labels: run=firstpod
Annotations: <none>
Status: Running
IP: 172.17.0.3
IPs:
IP: 172.17.0.3
Containers:
firstpod:
Container ID: docker://59f89dad2ddd6b93ac4aceb2cc0c9082f4ca42620962e4e692e3d6bcb47d4a9e
Image: nginx
Image ID: docker-pullable://nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 08 Nov 2021 16:39:14 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9b8mx (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-9b8mx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned default/firstpod to minikube
Normal Pulling 32m kubelet Pulling image "nginx"
Normal Pulled 32m kubelet Successfully pulled image "nginx" in 3.677130128s
Normal Created 31m kubelet Created container firstpod
Normal Started 31m kubelet Started container firstpod
</code></pre>
<p>So I can see it is a docker container id and it is started, also there is the image, but if I do <code>docker images</code> or <code>docker ps</code> there is nothing. Where are these images and container? Thank you!
[1]: <a href="https://i.stack.imgur.com/xAcMP.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/xAcMP.jpg</a></p>
| <p>One error for certain is Git Bash rewriting the path to a Windows path. You can disable that with a double slash:</p>
<pre><code>kubectl exec -it firstpod -- //bin/bash
</code></pre>
<p>This command will only work if you have bash in the image. If you don't, you'll need to pick a different command to run, e.g. <code>/bin/sh</code>. Some images are distroless or based on scratch to explicitly not include things like shells, which will prevent you from running commands like this (intentionally, for security).</p>
<p>As for <code>docker images</code> / <code>docker ps</code> showing nothing: minikube runs its own Docker daemon inside its VM, which your local Docker client doesn't talk to by default. Run <code>eval $(minikube docker-env)</code> to point your client at minikube's daemon and you will see those containers and images.</p>
|
<p>Does anyone know how to configure Promtail to watch and tail custom log paths in a Kubernetes pod? I have a deployment that creates customized log files in a directory like so <code>/var/log/myapp</code>. I found some documentation <a href="https://github.com/jafernandez73/grafana-loki/blob/master/docs/promtail-setup.md" rel="nofollow noreferrer">here</a> that says to deploy Promtail as a sidecar to the container you want to collect logs from. I was hoping someone could explain how this method works in practice. Does it need to be done as a sidecar or could it be done as a Daemonset? Or if you have alternative solution that has proven to work could please show me an example.</p>
| <p>Posting comment as the community wiki answer for better visibility:</p>
<hr />
<p><em>Below information is taken from the README.md of the GitHub repo provided by atlee19:</em></p>
<p><strong>This docs assume</strong>:</p>
<ul>
<li><p>you have loki and grafana already deployed. Please refer to the official documentation for installation</p>
</li>
<li><p>The logfile you want to scrape is in JSON format</p>
</li>
</ul>
<p>This Helm chart deploys an application pod with 2 containers: a Golang app writing logs to a separate file, and a Promtail that reads that log file and sends it to Loki.</p>
<p>The file path can be updated via the <a href="https://github.com/giantswarm/simple-logger/blob/master/helm/values.yaml" rel="nofollow noreferrer">./helm/values.yaml</a> file.</p>
<p><code>sidecar.labels</code> is a map where you can add the labels that will be added to your log entry in Loki.</p>
<p>Example:</p>
<ul>
<li><code>Logfile</code> located at <code>/home/slog/creator.log</code></li>
<li>Adding labels
<ul>
<li><code>job: promtail-sidecar</code></li>
<li><code>test: golang</code></li>
</ul>
</li>
</ul>
<pre><code>sidecar:
logfile:
path: /home/slog
filename: creator.log
labels:
job: promtail-sidecar
test: golang
</code></pre>
|
<p>I am using Kustomize to manage multiple variations of the same cluster. I am using <code>nameSuffix</code> option to add a suffix to all my resources:</p>
<pre><code>nameSuffix: -mysfx
</code></pre>
<p>My problem is that everything works fine but adding this suffix only to one Service resource cause me an issue. My problem is that the application (Patroni) interact with a service that must be called:</p>
<pre><code>CLUSTER-NAME-config
</code></pre>
<p>so I want to exclude this single resource from the <code>nameSuffix</code>. I know this is not possible due to how this feature was designed. I read several articles here on StackOverflow and on the web. I know I can skip the use of <code>nameSuffix</code> for a category of resources. So I tried to put in my <code>kustomization.yaml</code> the rows:</p>
<pre><code>configurations:
- kustomize-config/kustomize-config.yaml
</code></pre>
<p>to skip all the Service resources. Then in the file <code>kustomize-config/kustomize-config.yaml</code></p>
<pre><code>nameSuffix:
- path: metadata/name
apiVersion: v1
kind: Service
skip: true
</code></pre>
<p>but this doesn't work.</p>
<p>Does anyone know what's wrong with this configuration?</p>
<p>Then suppose I am able now to skip the use of <code>nameSuffix</code> for only Service resources, I have other two Services where I want to add this suffix. What do I have to do to add <code>nameSuffix</code> to these two Services and not the one mentioned above?</p>
<p>If there is a better solution for this, please let me know.</p>
| <p>Skipping selected <code>kind</code>s doesn't work because this feature wasn't implemented - from <a href="https://github.com/kubernetes-sigs/kustomize/issues/519#issuecomment-527734888" rel="nofollow noreferrer">this comment on GitHub issue 519</a>.</p>
<p>Also, <a href="https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_0519#preparation-step-resource0" rel="nofollow noreferrer">this is an example</a> of how it was supposed to work (what you tried).</p>
<hr />
<p>Based on <a href="https://github.com/kubernetes-sigs/kustomize/issues/519#issuecomment-557303870" rel="nofollow noreferrer">this comment</a>, it works only on <code>kind</code>s that are explicitly mentioned:</p>
<blockquote>
<p>The plugin's config is currently oriented towards specifying which
kinds to modify, ignoring others.</p>
</blockquote>
<p>Also, based on some tests I performed, it matches on <code>kind</code>s only; it doesn't look at names or anything else, so a <code>kind</code> can only be included as a whole. Hence the second part of your question is, I'm afraid, not possible with kustomize alone (you can use <code>sed</code>, for instance, to modify whatever else you need on the fly).</p>
<p>I created a simple structure and tested it:</p>
<pre><code>$ tree
.
├── cm1.yaml
├── cm2.yaml
├── kustomization.yaml
├── kustomizeconfig
│ └── skip-prefix.yaml
├── pod.yaml
├── secret.yaml
└── storageclass.yaml
1 directory, 7 files
</code></pre>
<p>There are two ConfigMaps, a Pod, a Secret and a StorageClass: 5 objects in total.</p>
<pre><code>$ cat kustomization.yaml
namePrefix: prefix-
nameSuffix: -suffix
resources:
- cm1.yaml
- cm2.yaml
- secret.yaml
- pod.yaml
- storageclass.yaml
configurations:
- ./kustomizeconfig/skip-prefix.yaml
</code></pre>
<p>And the configuration (it explicitly lists all kinds except ConfigMaps). Note that the field is called <code>namePrefix</code>, yet it controls both the prefix and the suffix:</p>
<pre><code>$ cat kustomizeconfig/skip-prefix.yaml
namePrefix:
- path: metadata/name
apiVersion: v1
kind: Secret
- path: metadata/name
apiVersion: v1
kind: Pod
- path: metadata/name
apiVersion: v1
kind: StorageClass
</code></pre>
<p>Eventually <code>kustomize build .</code> looks like:</p>
<pre><code>$ kustomize build .
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: prefix-local-storage-suffix # modified
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cm1 # skipped
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cm2 # skipped
---
apiVersion: v1
kind: Secret
metadata:
name: prefix-secret-suffix # modified
---
apiVersion: v1
kind: Pod
metadata:
name: prefix-pod-suffix # modified
spec:
containers:
- image: image
name: pod
</code></pre>
<hr />
<p>Another potential option is to use the <a href="https://kubectl.docs.kubernetes.io/guides/extending_kustomize/builtins/#_prefixsuffixtransformer_" rel="nofollow noreferrer"><code>PrefixSuffixTransformer</code> plugin</a>. It works the other way around: you specify which <code>prefix</code> and/or <code>suffix</code> should be added, and via <code>fieldSpec</code>s where to add them.</p>
<p>Please find an <a href="https://github.com/keleustes/kustomize/tree/allinone/examples/issues/issue_0519_b#preparation-step-resource10" rel="nofollow noreferrer">example</a> with the final results in this Feature Test for Issue 0519_b.</p>
<p>Also there's already a <a href="https://stackoverflow.com/a/66438033/15537201">good answer on StackOverflow</a> about using <code>PrefixSuffixTransformer</code>.</p>
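<p>For illustration, a minimal sketch of a <code>PrefixSuffixTransformer</code> that adds the suffix only to Service names could look like this (the file path <code>transformers/suffixer.yaml</code> is purely illustrative; verify the exact syntax against your kustomize version):</p>
<pre><code># transformers/suffixer.yaml (illustrative path)
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: customSuffixer
suffix: -mysfx
fieldSpecs:
  # only the field paths listed here get the suffix
  - kind: Service
    path: metadata/name
</code></pre>
<p>It would then be referenced from <code>kustomization.yaml</code> via a <code>transformers:</code> entry instead of the plain <code>nameSuffix</code> field. Note that it still selects by <code>kind</code>, not by resource name.</p>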
|
<p>I am mounting an <code>emptyDir</code> volume so it can be used for sharing files between containers running in the same pod. Lets say the mount point is called <code>/var/log/mylogs</code>. When I mount the <code>emptydir</code> all of the pre-existing files that were in mylogs get deleted. I know this is part of the Kubernetes functionality but I was wondering if there is a way to get around it? I <a href="https://medium.com/hackernoon/mount-file-to-kubernetes-pod-without-deleting-the-existing-file-in-the-docker-container-in-the-88b5d11661a6" rel="nofollow noreferrer">tried using subPath</a> but it looks like that only works for singular files.</p>
<p>Consider using <code>PersistentVolumes</code> instead, since they serve as long-term storage in your Kubernetes cluster. They exist beyond containers, pods, and nodes. A pod uses a persistent volume claim to get read and write access to the persistent volume. A <code>PersistentVolume</code> decouples the storage from the Pod: its lifecycle is independent, which enables safe pod restarts and sharing data between pods.</p>
<blockquote>
<p>But will the pvc delete all of the existing contents? How would multiple files work with the subPath?</p>
</blockquote>
<p>Forget about finding a workaround using <code>emptyDir</code> or <code>subPath</code> when you can easily use <code>PersistentVolumes</code>. What you need is data persistence: a mechanism that keeps the data even after the Pod is deleted.</p>
<p>You can find more useful info about <code>PersistentVolumes</code> in the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">official documentation</a> or <a href="https://loft.sh/blog/kubernetes-persistent-volumes-examples-and-best-practices/" rel="nofollow noreferrer">in this article</a>.</p>
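<p>As a minimal sketch (all names and sizes below are illustrative), a <code>PersistentVolumeClaim</code> mounted at the same path could look like:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mylogs-pvc        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mylogs-pod        # illustrative name
spec:
  containers:
    - name: app
      image: myapp:latest # illustrative image
      volumeMounts:
        - name: logs
          mountPath: /var/log/mylogs
  volumes:
    - name: logs
      persistentVolumeClaim:
        claimName: mylogs-pvc
</code></pre>
<p>Unlike <code>emptyDir</code>, the data in the volume survives pod restarts, and additional containers in the same pod can mount the same volume to share files.</p>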
|
<p>I have my <a href="https://support.dnsimple.com/articles/a-record/" rel="nofollow noreferrer">A record</a> on Netlify mapped to my Load Balancer IP Address on Digital Ocean, and it's able to hit the nginx server, but I'm getting a 404 when trying to access any of the apps APIs. I noticed that the status of my Ingress doesn't show that it is bound to the Load Balancer.</p>
<p><a href="https://i.stack.imgur.com/6IB0k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6IB0k.png" alt="enter image description here" /></a></p>
<p>Does anybody know what I am missing to get this setup?</p>
<p>Application Ingress:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: d2d-server
spec:
rules:
- host: api.cloud.myhostname.com
http:
paths:
- backend:
service:
name: d2d-server
port:
number: 443
path: /
pathType: ImplementationSpecific
</code></pre>
<p>Application Service:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: d2d-server
spec:
selector:
app: d2d-server
ports:
- name: http-api
protocol: TCP
port: 443
targetPort: 8080
type: ClusterIP
</code></pre>
<p>Ingress Controller:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
uid: fc64d9f6-a935-49b2-9d7a-b862f660a4ea
resourceVersion: '257931'
generation: 1
creationTimestamp: '2021-10-22T05:31:26Z'
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
annotations:
deployment.kubernetes.io/revision: '1'
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
spec:
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
defaultMode: 420
containers:
- name: controller
image: >-
k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef
args:
- /nginx-ingress-controller
- '--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller'
- '--election-id=ingress-controller-leader'
- '--controller-class=k8s.io/ingress-nginx'
- '--configmap=$(POD_NAMESPACE)/ingress-nginx-controller'
- '--validating-webhook=:8443'
- '--validating-webhook-certificate=/usr/local/certificates/cert'
- '--validating-webhook-key=/usr/local/certificates/key'
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
resources:
requests:
cpu: 100m
memory: 90Mi
volumeMounts:
- name: webhook-cert
readOnly: true
mountPath: /usr/local/certificates/
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
allowPrivilegeEscalation: true
restartPolicy: Always
terminationGracePeriodSeconds: 300
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
serviceAccount: ingress-nginx
securityContext: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
</code></pre>
<p>Load Balancer:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 1.0.4
helm.sh/chart: ingress-nginx-4.0.6
annotations:
kubernetes.digitalocean.com/load-balancer-id: <LB_ID>
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
service.beta.kubernetes.io/do-loadbalancer-name: ingress-nginx
service.beta.kubernetes.io/do-loadbalancer-protocol: https
status:
loadBalancer:
ingress:
- ip: <IP_HIDDEN>
spec:
ports:
- name: http
protocol: TCP
appProtocol: http
port: 80
targetPort: http
nodePort: 31661
- name: https
protocol: TCP
appProtocol: https
port: 443
targetPort: https
nodePort: 32761
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
clusterIP: <IP_HIDDEN>
clusterIPs:
- <IP_HIDDEN>
type: LoadBalancer
sessionAffinity: None
externalTrafficPolicy: Local
healthCheckNodePort: 30477
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
</code></pre>
| <p>I just needed to add the field <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/basic-configuration/" rel="noreferrer"><code>ingressClassName</code></a> of <code>nginx</code> to the ingress spec.</p>
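<p>Applied to the manifest from the question, only one line changes:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: d2d-server
spec:
  ingressClassName: nginx  # tells the nginx ingress controller to handle this Ingress
  rules:
  - host: api.cloud.myhostname.com
    http:
      paths:
      - backend:
          service:
            name: d2d-server
            port:
              number: 443
        path: /
        pathType: ImplementationSpecific
</code></pre>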
|
<p>I am trying to configure Kubernetes on docker-for-desktops and I want to change the default network assigned to containers. </p>
<blockquote>
<p>Example: the default network is <code>10.1.0.0/16</code> but I want <code>172.16.0.0/16</code>. </p>
</blockquote>
<p>I changed the docker network section to <code>Subnet address: 172.16.0.0 and netmask 255.255.0.0</code> but the cluster keeps assigning the network 10.1.0.0/16.
<a href="https://i.stack.imgur.com/mdlFB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mdlFB.png" alt="Network configuration"></a></p>
<p>The problem I am facing here is that I am in a VPN which has the same network IP of kubernetes default network (<code>10.1.0.0/16</code>) so if I try to ping a host that is under the vpn, the container from which I am doing the ping keeps saying <code>Destination Host Unreachable</code>.</p>
<p>I am running Docker Desktop (under Windows Pro) Version 2.0.0.0-win81 (29211) Channel: stable Build: 4271b9e.</p>
<p>Kubernetes is provided from Docker desktop <a href="https://i.stack.imgur.com/xshra.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xshra.png" alt="Kuberbetes"></a></p>
<p>From the official <a href="https://docs.docker.com/docker-for-windows/kubernetes/" rel="noreferrer">documentation</a> I know that </p>
<blockquote>
<p>Kubernetes is available in Docker for Windows 18.02 CE Edge and higher, and 18.06 Stable and higher , this includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, <strong>is not configurable</strong>, and is a single-node cluster</p>
</blockquote>
<p>Said so, should Kubernetes use the underlying docker's configuration (like network, volumes etc.)?</p>
| <p>On Windows, edit this file for a permanent fix:</p>
<pre><code>%AppData%\Docker\cni\10-default.conflist
</code></pre>
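<p>The exact contents of that file vary between Docker Desktop versions, but the pod subnet sits in the <code>host-local</code> IPAM section of the bridge plugin. A rough sketch of the relevant part (not a complete file; check your own copy before editing):</p>
<pre><code>{
  "cniVersion": "0.3.1",
  "name": "cbr0",
  "plugins": [
    {
      "type": "bridge",
      "ipam": {
        "type": "host-local",
        "ranges": [
          [ { "subnet": "172.16.0.0/16" } ]
        ]
      }
    }
  ]
}
</code></pre>
<p>After editing the file, restart Docker Desktop (and reset the Kubernetes cluster if pods still come up in the old subnet).</p>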
|
<p>I created a Dockerfile for running Jupyter in Docker.</p>
<pre><code>FROM ubuntu:latest
FROM python:3.7
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
CMD ["jupyter", "notebook", "--allow-root", "--ip=0.0.0.0"]
</code></pre>
<p>My requirements.txt file looks like this:</p>
<pre><code>jupyter
git+https://github.com/kubernetes-client/python.git
</code></pre>
<p>I ran <code>docker build -t hello-jupyter .</code> and it builds fine. Then I ran <code>docker run -p 8888:8888 hello-jupyter</code> and it runs fine.</p>
<p>I'm able to open Jupyter notebook in a web browser (127.0.0.1:8888) when I run the Docker image hello-jupyter.</p>
<hr>
<p>Now I would like to run Jupyter as a Kubernetes deployment. I created this deployment.yaml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-jupyter-service
spec:
selector:
app: hello-jupyter
ports:
- protocol: "TCP"
port: 8888
targetPort: 8888
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-jupyter
spec:
replicas: 4
selector:
matchLabels:
app: hello-jupyter
template:
metadata:
labels:
app: hello-jupyter
spec:
containers:
- name: hello-jupyter
image: hello-jupyter
imagePullPolicy: Never
ports:
- containerPort: 8888
</code></pre>
<p>I ran this command in shell:</p>
<pre><code>$ kubectl apply -f deployment.yaml
service/hello-jupyter-service unchanged
deployment.apps/hello-jupyter unchanged
</code></pre>
<p>When I check my pods, I see crash loops</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-jupyter-66b88b5f6d-gqcff 0/1 CrashLoopBackOff 6 7m16s
hello-jupyter-66b88b5f6d-q59vj 0/1 CrashLoopBackOff 15 55m
hello-jupyter-66b88b5f6d-spvl5 0/1 CrashLoopBackOff 6 7m21s
hello-jupyter-66b88b5f6d-v2ghb 0/1 CrashLoopBackOff 6 7m20s
hello-jupyter-6758977cd8-m6vqz 0/1 CrashLoopBackOff 13 43m
</code></pre>
<p>The pods have crash loop as their status and I'm not able to open Jupyter in a web browser.</p>
<p>What is wrong with the deployment.yaml file? The deployment.yaml file simply runs the Docker image hello-jupyter in four different pods. Why does the Docker image run in Docker but not in Kubernetes pods?</p>
<p>Here is the log of one of my pods:</p>
<pre><code>$ kubectl logs hello-jupyter-66b88b5f6d-gqcff
[I 18:05:03.805 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/traitlets/traitlets.py", line 537, in get
value = obj._trait_values[self.name]
KeyError: 'port'
</code></pre>
<p>I do specify a port in my deployment.yaml file. I'm not sure why I get this error in the log.</p>
<p>There are many possible reasons for getting the <code>CrashLoopBackOff</code> status. In your case it could be a configuration problem in the image, or a lack of resources preventing the container from starting.</p>
<p>As I understand, you've built the docker image locally and added it to your local Docker registry. Since <code>imagePullPolicy: Never</code> is specified and there is no <code>ErrImageNeverPull</code> error, there is no problem with getting the image from your local docker into kubernetes.</p>
<p>You can start by running <code>kubectl describe pod [name]</code> to get more information from the kubelet.</p>
<p>Otherwise, try deploying a single pod first instead of a deployment with 4 replicas, to make sure that kubernetes runs your image correctly.</p>
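<p>A minimal single-pod manifest for that test could look like this (a sketch reusing the image from the question; the pod name is illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-jupyter-test  # illustrative name
spec:
  containers:
  - name: hello-jupyter
    image: hello-jupyter
    imagePullPolicy: Never
    ports:
    - containerPort: 8888
</code></pre>
<p>If this single pod also crashes, the problem is in the image or its configuration rather than in the Deployment object.</p>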
|
<p>How can I delete more than a couple of pods at a time?</p>
<p>The commands I run:</p>
<pre><code>kubectl delete pod pod1
kubectl delete pod pod2
kubectl delete pod pod3
</code></pre>
<p>The approach I want to use:</p>
<pre><code>kubectl delete pod pod1 pod2 pod3
</code></pre>
<p>Any commands or style that can help me do this? Thanks!</p>
| <p>The approach that you say that you want:</p>
<pre><code>kubectl delete pod pod1 pod2 pod3
</code></pre>
<p><strong>actually works</strong>. Go ahead and use it if you want.</p>
<p>In Kubernetes it is more common to operate on subsets that share common labels, e.g:</p>
<pre><code>kubectl delete pod -l app=myapp
</code></pre>
<p>and this command will delete all pods with the label <code>app: myapp</code>.</p>
|
<p>I am working on a use case where a sidecar container continuously runs a shell script that changes directory permissions in the main container.
I want to start the sidecar container only after the main container in the pod is ready.</p>
<p>I was looking at init containers, and what I could see is that init containers are a good candidate when we have an inter-pod dependency (pod A starts only after pod B has started and is healthy).</p>
<p>In my case I need container B to start only after Container A is started in the same Pod.</p>
<p>Deployment.yaml for reference:</p>
<pre><code> containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
- name: container2
image: some-repo
imagePullPolicy: {{ .Values.image.pullPolicy }}
</code></pre>
| <p>I am not aware of any K8s native feature or hook that satisfies the use case you describe.</p>
<p>I would solve it like this:</p>
<p>Run your sidecar container within the same Pod as a regular container and have it execute a script that pings the readiness endpoint of your application container before it proceeds to check the directory permissions.</p>
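<p>A sketch of that idea as a container spec (the readiness port <code>8080</code>, the path <code>/ready</code>, and the script path are assumptions; adjust them to whatever your application actually exposes):</p>
<pre><code>  - name: container2
    image: some-repo
    command: ["/bin/sh", "-c"]
    args:
      - |
        # containers in a pod share the network namespace, so localhost works
        until wget -q -O /dev/null http://localhost:8080/ready; do
          echo "waiting for the main container to become ready..."
          sleep 2
        done
        exec /path/to/your-script.sh  # illustrative path to the permission-watching script
</code></pre>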
|
<p>I use GKE for years and I wanted to experiment with GKE with AutoPilot mode, and my initial expectation was, it starts with 0 worker nodes, and whenever I deploy a workload, it automatically scales the nodes based on requested memory and CPU. However, I created a GKE Cluster, there is nothing related to nodes in UI, but in <code>kubectl get nodes</code> output I see there are 2 nodes. Do you have any idea how to start that cluster with no node initially?</p>
<p><a href="https://i.stack.imgur.com/fcqXO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fcqXO.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/wGkGW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wGkGW.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/73efy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/73efy.png" alt="enter image description here" /></a></p>
<p>The principle of GKE Autopilot is NOT to worry about the nodes; they are managed for you. No matter whether there are 1, 2 or 10 nodes in your cluster, you don't pay for them; you pay only when a Pod runs in your cluster (CPU and memory time usage).</p>
<p>So you can't control the number of nodes, the number of pools or low-level management like that; it's similar to a serverless product (Google prefers to say "nodeless" cluster).</p>
<p>On the upside, it's great to already have provisioned resources in your cluster that you don't pay for: you will deploy and scale quicker!</p>
<hr />
<p><strong>EDIT 1</strong></p>
<p>You can have a look at <a href="https://cloud.google.com/kubernetes-engine/pricing#cluster_management_fee_and_free_tier" rel="nofollow noreferrer">the pricing</a>. There is a flat fee of $74.40 per month ($0.10/hour) for the control plane, and then you pay for your pods (CPU + memory).</p>
<p>You have 1 free cluster per Billing account.</p>
|
<p>Managing Resources for Containers.</p>
<blockquote>
<p>When you specify a Pod, you can optionally specify how much of each resource a Container needs. The most common resources to specify are CPU and memory (RAM); there are others.</p>
</blockquote>
<p>Kubernetes defines special units for CPU and memory allocation for a Pod. While memory looks straightforward, CPU is a bit tricky to understand: it's like breaking something whole into pieces.</p>
<p><strong>Is there any best practices to estimate/calculate those kubernetes resources for a Pod?</strong></p>
| <p>Different ways:</p>
<ol>
<li><p>See current usage with: <code>kubectl top pod</code></p>
</li>
<li><p>Deploy the Kubernetes dashboard to see short-term data</p>
</li>
<li><p>Deploy Prometheus to see also trend usage</p>
</li>
</ol>
|
<p>Let's say I and the partner company I'm dealing with live in a country that employs <a href="https://en.wikipedia.org/wiki/Daylight_saving_time" rel="noreferrer">Daylight Saving Time</a>.</p>
<p>I have a CronJob in a kubernetes cluster that uploads files to the partner company daily at, say, 4:00, 10:00, 16:00, and 22:00. E.g. <code>0 4,10,16,22 * * *</code>.</p>
<p>Kubernetes SIG <a href="https://github.com/kubernetes/kubernetes/issues/47202#issuecomment-360820586" rel="noreferrer">decided</a> that CronJob object will not support local timezone and will always run in the default one, which is UTC.</p>
<p>I can change the schedule, so that it reflects local time specified above in UTC and give the CronJob that schedule. However every time daylight saving kicks in (twice a year) I would need somehow modify all the CronJobs to use the new time, <em>and</em> I need to modify my deployment pipeline to create new releases of the CronJobs with the new time.</p>
<p>I cannot keep the CronJob running on the same schedule past daylight saving change, as the job will upload files not during the times expected by the partner.</p>
<p>What is the easiest way to manage this?</p>
<p><strong>Option 1</strong></p>
<p>It was <a href="https://github.com/kubernetes/kubernetes/issues/47202#issuecomment-400421237" rel="noreferrer">suggested</a> that writing a new kubernetes controller could do, but it does not seem that anyone raised up to that challenge and published working solution.</p>
<p><strong>Option 2</strong></p>
<p>Another option I have considered is changing the timezone of the entire cluster. But if you <a href="https://www.google.com/search?&q=kubernetes+cluster+change+timezone" rel="noreferrer">google it</a> it does not seem to be very popular solution, and some people strongly believe that kubernetes being a cloud application should be run in UTC.</p>
<p>From what I understand <code>cron</code> uses a local time zone which, in case of kubernetes will be the timezone of the controller manager, which is not necessary a timezone of the node it is running on. On the other hand, changing time zone of the controller manager container, sounds risky, as it is not clear how it will interact with other components of the kubernetes, such as etcd and kubelets.</p>
<p><strong>Option 3</strong></p>
<p>Doing it manually twice a year. Since people in the organisation come and go it would be difficult to retain the knowledge when and how. I do not want our partner to complain twice a year. Also it might be tricky to set up notifications for this as the dates change every year.</p>
<p><strong>Option 4</strong></p>
<p>Write some homegrown automation to run twice a year and hope that when time comes it works as expected. That is actually triggers, triggers at the right time and does everything it's supposed to do. (the latter is easier to test but the former is harder).</p>
<p>All these options feel quite unsatisfactory. I googled a lot and I have not found a lot, I feel it should be quite common issue, and yet nothing comes up in the search. Am I overlooking something? Is there an easy and natural way to solve it?</p>
<p>Time zone support was added to CronJob in Kubernetes 1.22. To cite the <a href="https://github.com/kubernetes/kubernetes/issues/47202#issuecomment-950887675" rel="nofollow noreferrer">GitHub thread</a>:</p>
<blockquote>
<p>Just to clarify, it's available in 1.22.</p>
</blockquote>
<blockquote>
<p>It was implemented in a very strange way that's not exposed in the schema, though. I'm not aware of any explanation for why that was done, and it ignores core concepts behind Kubernetes. I think this still needs work.</p>
</blockquote>
<p>It can be done via the <code>CRON_TZ</code> variable, as described <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax" rel="nofollow noreferrer">here</a>:</p>
<pre><code># ┌────────────────── timezone (optional)
# | ┌───────────── minute (0 - 59)
# | │ ┌───────────── hour (0 - 23)
# | │ │ ┌───────────── day of the month (1 - 31)
# | │ │ │ ┌───────────── month (1 - 12)
# | │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# | │ │ │ │ │ 7 is also Sunday on some systems)
# | │ │ │ │ │
# | │ │ │ │ │
# CRON_TZ=UTC * * * * *
</code></pre>
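<p>Applied to the schedule from the question, this would look something like the following (the timezone name is an example, pick your own local one; the metadata and container fields are illustrative):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: upload-files          # illustrative name
spec:
  schedule: "CRON_TZ=Europe/Berlin 0 4,10,16,22 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: uploader           # illustrative
            image: uploader:latest   # illustrative
          restartPolicy: OnFailure
</code></pre>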
<p><strong>Update</strong></p>
<p>This feature is to be marked as <a href="https://github.com/kubernetes/kubernetes/pull/104404#issuecomment-970406114" rel="nofollow noreferrer">unsupported</a></p>
|
<p>I have a cluster of 4 raspberry pi 4 model b, on which Docker and Kubernetes are installed. The versions of these programs are the same and are as follows:</p>
<p>Docker:</p>
<pre><code>Client:
Version: 18.09.1
API version: 1.39
Go version: go1.11.6
Git commit: 4c52b90
Built: Fri, 13 Sep 2019 10:45:43 +0100
OS/Arch: linux/arm
Experimental: false
Server:
Engine:
Version: 18.09.1
API version: 1.39 (minimum version 1.12)
Go version: go1.11.6
Git commit: 4c52b90
Built: Fri Sep 13 09:45:43 2019
OS/Arch: linux/arm
Experimental: false
</code></pre>
<p>Kubernetes:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/arm"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/arm"}
</code></pre>
<p>My problem occurs when a kubernetes pod is deployed on machine "02". Only on that machine does the pod never reach the running state, and the logs say:</p>
<pre><code>standard_init_linux.go:207: exec user process caused "exec format error"
</code></pre>
<p>On the other hand, when the same pod is deployed on any of the other 3 raspberry pis, it correctly goes into the running state and does what it has to do.
I have looked at similar topics, but none seem to match my problem. Below are my Dockerfile and my .yaml file.</p>
<p>Dockerfile</p>
<pre><code>FROM ubuntu@sha256:f3113ef2fa3d3c9ee5510737083d6c39f74520a2da6eab72081d896d8592c078
CMD ["bash"]
</code></pre>
<p>yaml file</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
name: mongodb
name: mongodb
spec:
nodeName: diamond02.xxx.xx
containers:
- name : mongodb
image: ohserk/mongodb:latest
imagePullPolicy: "IfNotPresent"
name: mongodb
ports:
- containerPort: 27017
protocol: TCP
command:
- "sleep"
- "infinity"
</code></pre>
<p>In closing, this is what happens when I run <code>kubectl apply -f file.yaml</code> specifying to go to machine 02, while on any other machine the output is this:</p>
<p><a href="https://i.stack.imgur.com/ciDrZ.png" rel="nofollow noreferrer">kubectl get pod -w -o wide</a></p>
<p>I could solve this problem by specifying precisely on which raspberry to deploy the pod, but it doesn't seem like a decent solution to me. Would you know what to do in this case?</p>
<p><strong>EDIT 1</strong></p>
<p>Here is the <code>journalctl</code> output just after the deploy on machine 02:</p>
<pre><code>Nov 05 08:33:39 diamond02.xxx.xx kubelet[1563]: I1105 08:33:39.744957 1563 topology_manager.go:200] "Topology Admit Handler"
Nov 05 08:33:39 diamond02.xxx.xx systemd[1]: Created slice libcontainer container kubepods-besteffort-pod6a0d621a_55ab_449a_91cb_a88ac10df0cf.slice.
Nov 05 08:33:39 diamond02.xxx.xx kubelet[1563]: I1105 08:33:39.906608 1563 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trqs4\" (UniqueName: \"kubernetes.io/projected/6a0d621a-55ab-449a-91cb-a88ac10df0cf-kube-api-access-trqs4\") pod \"mongodb\" (UID: \"6a0d621a-55ab-449a-91cb-a88ac10df0cf\") "
Nov 05 08:33:40 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-03b99c20a2e9dd9b6f06a99625272c899d6e7a36e2071e268b326dfee54476c8\x2dinit-merged.mount: Succeeded.
Nov 05 08:33:40 diamond02.xxx.xx dockerd[578]: time="2021-11-05T08:33:40.702427163Z" level=info msg="shim docker-containerd-shim started" address=/containerd-shim/moby/a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4/shim.sock debug=false pid=15599
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: libcontainer-15607-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: libcontainer-15607-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15607_systemd_test_default.slice.
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15607_systemd_test_default.slice.
Nov 05 08:33:40 diamond02.xxx.xx systemd[1]: Started libcontainer container a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15648-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15648-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15648_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15648_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15654-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15654-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15654_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15654_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15661-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: libcontainer-15661-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15661_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15661_systemd_test_default.slice.
Nov 05 08:33:41 diamond02.xxx.xx kubelet[1563]: I1105 08:33:41.673178 1563 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a62195de2c6319ff8624561322d9f60e4a68bc14d56248e8d2badd7cdeda7dc4"
Nov 05 08:33:41 diamond02.xxx.xx kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth27f79edb: link becomes ready
Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered blocking state
Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered disabled state
Nov 05 08:33:41 diamond02.xxx.xx kernel: device veth27f79edb entered promiscuous mode
Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered blocking state
Nov 05 08:33:41 diamond02.xxx.xx kernel: cni0: port 1(veth27f79edb) entered forwarding state
Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: IAID 58:9b:78:38
Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: adding address fe80::5979:f76a:862:765a
Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: Joining mDNS multicast group on interface veth27f79edb.IPv6 with address fe80::5979:f76a:862:765a.
Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: New relevant interface veth27f79edb.IPv6 for mDNS.
Nov 05 08:33:41 diamond02.xxx.xx avahi-daemon[389]: Registering new address record for fe80::5979:f76a:862:765a on veth27f79edb.*.
Nov 05 08:33:41 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: IAID 58:9b:78:38
Nov 05 08:33:41 diamond02.xxx.xx kubelet[1563]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"10.244.3.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xa, 0xf4, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x0, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xcaa76c), "name":"cbr0", "type":"bridge"}
Nov 05 08:33:41 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b\x2dinit-merged.mount: Succeeded.
Nov 05 08:33:41 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b\x2dinit-merged.mount: Succeeded.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded.
Nov 05 08:33:42 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded.
Nov 05 08:33:42 diamond02.xxx.xx dockerd[578]: time="2021-11-05T08:33:42.283254485Z" level=info msg="shim docker-containerd-shim started" address=/containerd-shim/moby/1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234/shim.sock debug=false pid=15718
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15725-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15725-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15725_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15725_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Started libcontainer container 1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15749-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15749-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15749_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15749_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: soliciting an IPv6 router
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15755-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15755-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15755_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15755_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: docker-1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234.scope: Succeeded.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: docker-1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234.scope: Consumed 39ms CPU time.
Nov 05 08:33:42 diamond02.xxx.xx dhcpcd[573]: veth27f79edb: soliciting a DHCP lease
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15766-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15766-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15766_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15766_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15778-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: libcontainer-15778-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15778_systemd_test_default.slice.
Nov 05 08:33:42 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15778_systemd_test_default.slice.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: libcontainer-15784-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: libcontainer-15784-systemd-test-default-dependencies.scope: Scope has no PIDs. Refusing.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: Created slice libcontainer_15784_systemd_test_default.slice.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: Removed slice libcontainer_15784_systemd_test_default.slice.
Nov 05 08:33:43 diamond02.xxx.xx dockerd[578]: time="2021-11-05T08:33:43.097966208Z" level=info msg="shim reaped" id=1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234
Nov 05 08:33:43 diamond02.xxx.xx dockerd[578]: time="2021-11-05T08:33:43.107322948Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 05 08:33:43 diamond02.xxx.xx systemd[9494]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded.
Nov 05 08:33:43 diamond02.xxx.xx systemd[1]: var-lib-docker-overlay2-5e681f6bfcebe1b72e78d4af37e60f4032b31d883247f66631ddec92b8495b8b-merged.mount: Succeeded.
Nov 05 08:33:43 diamond02.xxx.xx avahi-daemon[389]: Registering new address record for fe80::cc12:58ff:fe9b:7838 on veth27f79edb.*.
Nov 05 08:33:44 diamond02.xxx.xx kubelet[1563]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"10.244.3.0/24"}]],"routes":[{"dst":"10.244.0.0/16"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}I1105 08:33:44.040009 1563 scope.go:110] "RemoveContainer" containerID="1bcf46307ed16e46a25a86aa79dbe9a2b053ebe3042ee6cc08433e49213f2234"
</code></pre>
| <p>Posting comment as the community wiki answer for better visibility:</p>
<p>Reinstalling both Kubernetes and Docker resolves the issue.</p>
|
<p>I am trying to do an <a href="https://rancher.com/docs/k3s/latest/en/installation/airgap/#install-options" rel="nofollow noreferrer">offline setup of k3s</a>, i.e. without internet connectivity, for the <em>Single Server Configuration</em> using the steps below, but at the end the k3s service status is <code>loaded</code> instead of <code>active</code> and the <code>default/kube-system</code> pods are not coming up.</p>
<p>I downloaded the k3s binary from the release <a href="https://github.com/k3s-io/k3s/releases/tag/v1.22.3+k3s1" rel="nofollow noreferrer">Assets</a> and the <a href="https://get.k3s.io/" rel="nofollow noreferrer">install.sh script</a>, then:</p>
<ol>
<li><code>cp /home/my-ubuntu/k3s /usr/local/bin/</code></li>
<li><code>cd /usr/local/bin/</code></li>
<li><code>chmod 770 k3s</code> - giving executable rights to k3s binary</li>
<li>placed <a href="https://github.com/k3s-io/k3s/releases/download/v1.22.3%2Bk3s1/k3s-airgap-images-amd64.tar" rel="nofollow noreferrer">airgap-images-amd64.tar</a> image at <code>/var/lib/rancher/k3s/agent/images/</code></li>
<li><code>mkdir /etc/rancher/k3s</code></li>
<li><code>cp /home/my-ubuntu/k3s.yaml /etc/rancher/k3s</code> - copying the k3s config file from a different machine (because when I tried without a config file, I can't set the export variable (Step 7) and can't see the default pods with <code>kubectl get all -A</code>). I think I am mistaken at this step; please confirm.</li>
<li><code>chmod 770 /etc/rancher/k3s/k3s.yaml</code></li>
<li><code>export KUBECONFIG=/etc/rancher/k3s/k3s.yaml</code> - setting KUBECONFIG env. variable</li>
<li><code>INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh</code></li>
</ol>
<p>Error in <code>journalctl -xe</code>:</p>
<pre class="lang-text prettyprint-override"><code> -- Unit k3s.service has begun starting up.
Nov 09 19:11:51 my-ubuntu sh[14683]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Nov 09 19:11:51 my-ubuntu sh[14683]: /bin/sh: 1: /usr/bin/systemctl: not found
Nov 09 19:11:51 my-ubuntu k3s[14695]: time="2021-11-09T19:11:51.488895919+05:30" level=fatal msg="no default routes found in \"/proc/net/route\" or \"/proc/net/ipv6_route\""
Nov 09 19:11:51 my-ubuntu systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Nov 09 19:11:51 my-ubuntu systemd[1]: k3s.service: Failed with result 'exit-code'.
Nov 09 19:11:51 my-ubuntu systemd[1]: Failed to start Lightweight Kubernetes.
-- Subject: Unit k3s.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- Unit k3s.service has failed.
-- The result is RESULT.
Nov 09 19:11:56 shreya-Virtual-Machine systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
Nov 09 19:11:56 shreya-Virtual-Machine systemd[1]: k3s.service: Scheduled restart job, restart counter is at 20.
-- Subject: Automatic restarting of a unit has been scheduled
</code></pre>
<p><strong>PS:</strong> At this stage, the moment I connect this machine to internet, below default pods started coming for command <code>kubectl get all -A</code>:</p>
<pre class="lang-text prettyprint-override"><code>NAMESPACE NAME READY
kube-system metrics-server-86cbb8457f-rthkn 1/1
kube-system helm-install-traefik-crd-w6wgf 1/1
kube-system helm-install-traefik-m7lkg 1/1
kube-system svclb-traefik-x6qbc 2/2
kube-system traefik-97b44b794-98nkl 1/1
kube-system local-path-provisioner-5ff76fc89d-l8825 1/1
kube-system coredns-7448499f4d-br6tm 1/1
</code></pre>
<p><strong>My aim is to simply install k3s without internet connectivity (offline) &amp; get all these pods up and running.</strong> Please let me know what I am missing here.</p>
<p>There is an <a href="https://github.com/k3s-io/k3s/issues/1103" rel="nofollow noreferrer">open issue</a> for offline installation: a default gateway needs to be set.</p>
<p>Follow <a href="https://github.com/k3s-io/k3s/issues/1144#issuecomment-559316077" rel="nofollow noreferrer">this comment</a>; it should work:</p>
<pre class="lang-text prettyprint-override"><code>[aiops@7 ~]$ ip route
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.100.0/24 dev ens192 proto kernel scope link src 192.168.100.7 metric 100
sudo ip route add default via 192.168.100.1
</code></pre>
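<p>Note that <code>ip route add default via ...</code> does not survive a reboot. To persist the default route on Ubuntu, a netplan sketch along these lines can be used. The interface name and addresses below are taken from the quoted output and are assumptions; adjust them to your host, and note that older netplan versions use <code>gateway4</code> instead of a <code>routes</code> entry:</p>
<pre class="lang-yaml prettyprint-override"><code># /etc/netplan/01-default-route.yaml (file name is arbitrary)
network:
  version: 2
  ethernets:
    ens192:                  # interface name from the quoted output
      addresses:
        - 192.168.100.7/24
      routes:
        - to: default        # older netplan: gateway4: 192.168.100.1
          via: 192.168.100.1
</code></pre>
<p>Apply it with <code>sudo netplan apply</code>, then restart the k3s service.</p>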
|
<p>It might take a while to explain what I'm trying to do but bear with me please.</p>
<p>I have the following infrastructure specified:
<a href="https://i.stack.imgur.com/mHJcE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mHJcE.png" alt="enter image description here" /></a></p>
<p>I have a job called <code>questo-server-deployment</code> (I know it's confusing, but this was the only way I could access the deployment without using ingress on minikube)</p>
<p>This is how the parts should talk to one another:
<a href="https://i.stack.imgur.com/pRZA8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pRZA8.png" alt="enter image description here" /></a></p>
<p>And <a href="https://github.com/matewilk/questo/pull/8/files#diff-a9b6f2819e0e34608760f562298832266b7da1a55cb004d9cdc7e2dc8c6d6e54" rel="nofollow noreferrer">here</a> you can find the entire Kubernetes/Terraform config file for the above setup</p>
<p>I have 2 endpoints exposed from the <code>node.js</code> app (<code>questo-server-deployment</code>).
I'm making the requests using <code>10.97.189.215</code>, which is the <code>questo-server-service</code> external IP address (as you can see in the first picture).</p>
<p>So I have 2 endpoints:</p>
<ul>
<li>health - which simply returns <code>200 OK</code> from the <code>node.js</code> app - this part is fine, confirming the node app is working as expected.</li>
<li>dynamodb - which should be able to send a request to the <code>questo-dynamodb-deployment</code> (pod) and get a response back, but it can't.</li>
</ul>
<p>When I print env vars I'm getting the following:</p>
<pre><code>➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
</code></pre>
<p>so it looks like the configuration is aware of the dynamodb address and port:</p>
<pre><code>QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
</code></pre>
<p>You'll also notice in the above env variables that I specified:</p>
<pre><code>DB_DOCKER_URL=questo-dynamodb-service
</code></pre>
<p>This is supposed to be the <code>questo-dynamodb-service</code> url:port, which I assign to the config <a href="https://github.com/matewilk/questo/pull/8/files#diff-a9b6f2819e0e34608760f562298832266b7da1a55cb004d9cdc7e2dc8c6d6e54R163" rel="nofollow noreferrer">here</a> (in the configmap) and which is then used <a href="https://github.com/matewilk/questo/pull/8/files#diff-a9b6f2819e0e34608760f562298832266b7da1a55cb004d9cdc7e2dc8c6d6e54R67" rel="nofollow noreferrer">here</a> in the <code>questo-server-deployment</code> (job)</p>
<p>Also, when I log:</p>
<pre><code>kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
</code></pre>
<p>I'm getting the following results:</p>
<p><a href="https://i.stack.imgur.com/4Oh6s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Oh6s.png" alt="enter image description here" /></a></p>
<p>Which indicates that the app (node.js) tried to connect to the db (dynamodb) but on the wrong port <code>443</code> instead of <code>8000</code>?</p>
<p>The <code>DB_DOCKER_URL</code> should contain the full address (with port) to the <code>questo-dynamodb-service</code></p>
<p>What am I doing wrong here?</p>
<p>Edit ----</p>
<p>I've explicitly assigned the port <code>8000</code> to the <code>DB_DOCKER_URL</code> as suggested in the answer but now I'm getting the following error:
<a href="https://i.stack.imgur.com/3t7tD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3t7tD.png" alt="enter image description here" /></a></p>
<p>It seems to me there is some kind of default behaviour in Kubernetes that makes it try to communicate between pods using <code>https</code>?</p>
<p>Any ideas what needs to be done here?</p>
<p>How about specifying the port in the ConfigMap:</p>
<pre><code>...
data = {
  DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
...
</code></pre>
<p>Otherwise it may default to 443.</p>
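<p>For reference, the equivalent plain-Kubernetes ConfigMap would look something like the sketch below (the ConfigMap name here is an assumption; spelling out the scheme as well as the port keeps the client from falling back to <code>https</code>/443, which matches the error in the question's edit):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: questo-config          # assumed name
  namespace: minikube-local-ns
data:
  # Scheme and port made explicit so no default (https/443) is applied
  DB_DOCKER_URL: "http://questo-dynamodb-service:8000"
</code></pre>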
|
<p>Are there any third party tools for Kubernetes UI? Such as to view pods on the main page or to edit the config of an object?</p>
<p>Here is a list of K8s dashboard alternatives:</p>
<ul>
<li><strong>Lens - K8s</strong> <a href="https://k8slens.dev/" rel="nofollow noreferrer">https://k8slens.dev/</a></li>
</ul>
<p>The best K8s IDE for <strong>monitoring</strong> and <strong>debugging</strong></p>
<ul>
<li><strong>K8Dash</strong>
<a href="https://github.com/herbrandson/k8dash" rel="nofollow noreferrer">https://github.com/herbrandson/k8dash</a>, web, node.js</li>
</ul>
<p>“K8Dash is the easiest way to manage your Kubernetes cluster.”</p>
<ul>
<li><strong>Konstellate</strong>
<a href="https://github.com/containership/konstellate" rel="nofollow noreferrer">https://github.com/containership/konstellate</a>, web, Clojure</li>
</ul>
<p>“Visualize Kubernetes Applications”</p>
<ul>
<li><strong>Kubernator</strong>
<a href="https://github.com/smpio/kubernator" rel="nofollow noreferrer">https://github.com/smpio/kubernator</a>, web, node.js</li>
</ul>
<p>“Kubernator is an alternative Kubernetes UI. In contrast to high-level Kubernetes Dashboard, it provides low-level control and clean view on all objects in a cluster with the ability to create new ones, edit and resolve conflicts. As an entirely client-side app (like kubectl), it doesn’t require any backend except Kubernetes API server itself, and also respects cluster’s access control.”</p>
<ul>
<li><strong>Kubernetes Dashboard</strong>
<a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard</a>, web</li>
</ul>
<p>“Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.”</p>
<ul>
<li><strong>Kubernetes Operational View</strong>
<a href="https://codeberg.org/hjacobs/kube-ops-view" rel="nofollow noreferrer">https://codeberg.org/hjacobs/kube-ops-view</a>, web</li>
</ul>
<p>“Read-only system dashboard for multiple K8s clusters”
Uses WebGL to render nodes and pods.</p>
<ul>
<li><strong>Kubernetes Resource Report</strong>
<a href="https://codeberg.org/hjacobs/kube-resource-report/" rel="nofollow noreferrer">https://codeberg.org/hjacobs/kube-resource-report/</a>, web</li>
</ul>
<p>“Report Kubernetes cluster and pod resource requests vs usage and generate static HTML”
Generates static HTML files for cost reporting.</p>
<ul>
<li><strong>Kubevious</strong>
<a href="https://kubevious.io/" rel="nofollow noreferrer">https://kubevious.io/</a>, web</li>
</ul>
<p>“Application-centric Kubernetes viewer and validator. Correlates labels, metadata, and state. Renders configuration in a way easy to understand and debug. TimeMachine enables travel back in time to identify why things broke. Extensible. Lets users define their own validation rules in the UI.”</p>
<ul>
<li><strong>Kubricks</strong></li>
</ul>
<p><a href="https://github.com/kubricksllc/Kubricks" rel="nofollow noreferrer">https://github.com/kubricksllc/Kubricks</a>, desktop app</p>
<p>“Visualizer/troubleshooting tool for single Kubernetes clusters”</p>
<ul>
<li><strong>Octant</strong></li>
</ul>
<p><a href="https://github.com/vmware/octant" rel="nofollow noreferrer">https://github.com/vmware/octant</a>, web, Go</p>
<p>“A web-based, highly extensible platform for developers to better understand the complexity of Kubernetes clusters.”</p>
<ul>
<li><strong>Weave Scope</strong>
<a href="https://github.com/weaveworks/scope" rel="nofollow noreferrer">https://github.com/weaveworks/scope</a>, web</li>
</ul>
<p>“Monitoring, visualisation & management for Docker & Kubernetes”</p>
<p>Reference doc: <a href="https://kube-web-view.readthedocs.io/en/latest/alternatives.html" rel="nofollow noreferrer">here</a></p>
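<p>If you go with the official Kubernetes Dashboard, note that it ships without a login user. Below is a minimal sketch of the ServiceAccount and ClusterRoleBinding commonly created for token login (names follow the dashboard docs; <code>cluster-admin</code> is very permissive, so scope it down for anything beyond a test cluster):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin        # very broad; narrow for real clusters
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
</code></pre>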
|
<p>I have a StatefulSet like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: myns
name: myapp
spec:
replicas: 3
template:
spec:
containers:
- name: mycontainer
image: ...
...
env:
- name: MY_ENV1
value: "1"
</code></pre>
<p>Now I want to add via Kustomize a second environment variable because it is used only in the dev environment. I did something like this:</p>
<pre><code>namespace: myns
resources:
...
patches:
- patch: |-
- op: add
path: "/spec/template/spec/containers/0/env/-"
value:
- name: MY_ENV2
value: "2"
target:
kind: StatefulSet
namespace: myns
name: myapp
</code></pre>
<p>The problem is that it doesn't work. If I run <code>kustomize build</code> I don't see this additional variable (I see other variations).</p>
<p><strong>Can anyone help me to understand how to implement it?</strong></p>
<p>The problem was that I forgot to add <code>version: v1</code> in the target section of the patch. You should also remove the <code>-</code> in front of the <code>name</code> key in the value section, since the value added at <code>/env/-</code> must be a single object, not a list. The result should be something like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: myns
name: myapp
spec:
replicas: 3
template:
spec:
containers:
- name: mycontainer
image: ...
...
env:
- name: MY_ENV1
value: "1"
</code></pre>
<p>Here is the <code>kustomization.yaml</code> file:</p>
<pre><code>namespace: myns
resources:
...
patches:
- patch: |-
- op: add
path: "/spec/template/spec/containers/0/env/-"
value:
name: MY_ENV2
value: "2"
target:
kind: StatefulSet
namespace: myns
name: myapp
</code></pre>
<p>This worked fine for my original question. I now have a similar problem where I need to replace an existing environment variable in the base deployment.</p>
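<p>For that follow-up, a JSON-patch <code>replace</code> against the index of the existing variable should work. A sketch, assuming <code>MY_ENV1</code> is the first entry (index <code>0</code>) in the container's <code>env</code> list and the new value is hypothetical:</p>
<pre class="lang-yaml prettyprint-override"><code>patches:
  - patch: |-
      - op: replace
        path: "/spec/template/spec/containers/0/env/0/value"
        value: "dev-override"    # hypothetical dev value
    target:
      version: v1
      kind: StatefulSet
      namespace: myns
      name: myapp
</code></pre>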
|
<p>I wish to limit the resources used by the pods running in each of my namespaces, and therefore want to use resource quotas.
I am following this <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/" rel="nofollow noreferrer">tutorial</a>.
It works well, but I want something a little different.
When trying to schedule a pod which would go over the limit of my quota, I get a <code>403</code> error.
What I want is for the request to be accepted, with the pod waiting in a pending state until one of the other pods ends and frees some resources.</p>
<p>Any advice?</p>
| <p>Instead of using straight pod definitions (<code>kind: Pod</code>) use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a>.</p>
<p><em>Why?</em></p>
<p>Pods in Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">are designed as relatively ephemeral, disposable entities</a>:</p>
<blockquote>
<p>You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a <a href="https://kubernetes.io/docs/concepts/architecture/controller/" rel="nofollow noreferrer">controller</a>), the new Pod is scheduled to run on a <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node</a> in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is <em>evicted</em> for lack of resources, or the node fails.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/#using-pods" rel="nofollow noreferrer">Kubernetes assumes that for managing pods you should</a> use <a href="https://kubernetes.io/docs/concepts/workloads/" rel="nofollow noreferrer">workload resources</a> instead of creating pods directly:</p>
<blockquote>
<p>Pods are generally not created directly and are created using workload resources. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">Working with Pods</a> for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a></li>
<li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset" rel="nofollow noreferrer">DaemonSet</a></li>
</ul>
</blockquote>
<p>By using deployment you will get very similar behaviour to the one you want.</p>
<p>Example below:</p>
<p>Let's suppose that I created <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/" rel="nofollow noreferrer">pod quota</a> for a custom namespace, set to "2" as <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-pod-namespace/#create-a-resourcequota" rel="nofollow noreferrer">in this example</a> and I have two pods running in this namespace:</p>
<pre><code>kubectl get pods -n quota-demo
NAME READY STATUS RESTARTS AGE
quota-demo-1 1/1 Running 0 75s
quota-demo-2 1/1 Running 0 6s
</code></pre>
<p>Third pod definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: quota-demo-3
spec:
containers:
- name: quota-demo-3
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Now I will try to apply this third pod in this namespace:</p>
<pre><code>kubectl apply -f pod.yaml -n quota-demo
Error from server (Forbidden): error when creating "pod.yaml": pods "quota-demo-3" is forbidden: exceeded quota: pod-demo, requested: pods=1, used: pods=2, limited: pods=2
</code></pre>
<p>The pod is rejected outright instead of waiting, which is not the behaviour we want.</p>
<p>Now I will change pod definition into deployment definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: quota-demo-3-deployment
labels:
app: quota-demo-3
spec:
selector:
matchLabels:
app: quota-demo-3
template:
metadata:
labels:
app: quota-demo-3
spec:
containers:
- name: quota-demo-3
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>I will apply this deployment:</p>
<pre><code>kubectl apply -f deployment-v3.yaml -n quota-demo
deployment.apps/quota-demo-3-deployment created
</code></pre>
<p>Deployment is created successfully, but there is no new pod, Let's check this deployment:</p>
<pre><code>kubectl get deploy -n quota-demo
NAME READY UP-TO-DATE AVAILABLE AGE
quota-demo-3-deployment 0/1 0 0 12s
</code></pre>
<p>We can see that the pod quota is working: the deployment is monitoring resources and waiting for the opportunity to create a new pod.</p>
<p>Let's now delete one of the pods and check the deployment again:</p>
<pre><code>kubectl delete pod quota-demo-2 -n quota-demo
pod "quota-demo-2" deleted
kubectl get deploy -n quota-demo
NAME READY UP-TO-DATE AVAILABLE AGE
quota-demo-3-deployment 1/1 1 1 2m50s
</code></pre>
<p>The pod from the deployment is created automatically after deletion of the pod:</p>
<pre><code>kubectl get pods -n quota-demo
NAME READY STATUS RESTARTS AGE
quota-demo-1 1/1 Running 0 5m51s
quota-demo-3-deployment-7fd6ddcb69-nfmdj 1/1 Running 0 29s
</code></pre>
<p>It works the same way for namespace <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/" rel="nofollow noreferrer">memory and CPU quotas</a>: when resources are freed, the deployment will automatically create new pods.</p>
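<p>For completeness, the pod-count quota used in the demo above (the <code>pod-demo</code> name comes from the quota error message, following the linked tutorial) looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-demo
  namespace: quota-demo
spec:
  hard:
    pods: "2"    # at most two pods may exist in this namespace
</code></pre>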
|
<p>I'm trying to make argocd cli output yaml/json to prep it for script ingestion.</p>
<p>According to this PR: <a href="https://github.com/argoproj/argo-cd/pull/2551" rel="nofollow noreferrer">https://github.com/argoproj/argo-cd/pull/2551</a>
It should be available, but I can't find the option in the CLI help or in the documentation.</p>
<pre><code>#argocd version:
argocd: v2.1.2+7af9dfb
...
argocd-server: v2.0.3+8d2b13d
</code></pre>
| <p>Some commands accept the <code>-o json</code> flag to request JSON output.</p>
<p>Look in the <a href="https://github.com/argoproj/argo-cd/tree/master/docs/user-guide/commands" rel="nofollow noreferrer">commands documentation</a> to find commands which support that flag.</p>
<p><code>argocd cluster list -o json</code>, for example, will return a JSON list of configured clusters. <a href="https://github.com/argoproj/argo-cd/blob/caa246a38d56639f76ba0efb712967a191fddf44/docs/user-guide/commands/argocd_cluster_get.md#options" rel="nofollow noreferrer">The documentation</a> looks like this:</p>
<blockquote>
<h2>Options</h2>
<pre><code> -h, --help help for get
-o, --output string
Output format. One of: json|yaml|wide|server (default "yaml")
</code></pre>
</blockquote>
|
<p>I have a container that keeps crashing in my k8s cluster for unknown reasons. The container's process is an nginx server. The container appears to be receiving a SIGQUIT signal.</p>
<h5>Dockerfile</h5>
<pre><code># build environment
FROM node:16-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<h5>container logs</h5>
<pre><code>/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/11/11 06:40:37 [notice[] 1#1: using the "epoll" event method
2021/11/11 06:40:37 [notice[] 1#1: nginx/1.20.1
2021/11/11 06:40:37 [notice[] 1#1: built by gcc 10.2.1 20201203 (Alpine 10.2.1_pre1)
2021/11/11 06:40:37 [notice[] 1#1: OS: Linux 5.4.120+
2021/11/11 06:40:37 [notice[] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/11/11 06:40:37 [notice[] 1#1: start worker processes
2021/11/11 06:40:37 [notice[] 1#1: start worker process 32
2021/11/11 06:40:37 [notice[] 1#1: start worker process 33
10.15.128.65 - - [11/Nov/2021:06:40:41 +0000] "\x16\x03\x01\x01\x00\x01\x00\x00\xFC\x03\x03>\x85O#\xCC\xB9\xA5j\xAB\x8D\xC1PpZ\x18$\xE5ah\xDF7\xB1\xFF\xAD\x22\x050\xC3.+\xB6+ \x0F}S)\xC9\x1F\x0BY\x15_\x10\xC6\xAAF\xAA\x9F\x9E_@dG\x01\xF5vzt\xB50&;\x1E\x15\x00&\xC0/\xC00\xC0+\xC0,\xCC\xA8\xCC\xA9\xC0\x13\xC0\x09\xC0\x14\xC0" 400 157 "-" "-" "-"
10.15.128.65 - - [11/Nov/2021:06:40:44 +0000] "\x16\x03\x01\x01\x00\x01\x00\x00\xFC\x03\x03\xD8['\xE75x'\xC3}+v\xC9\x83\x84\x96EKn\xC5\xB6}\xEE\xBE\xD9Gp\xE9\x1BX<n\xB2 \xD9n\xD1\xC5\xFC\xF2\x8D\x92\xAC\xC0\xA8mdF\x17B\xA3y9\xDD\x98b\x0E\x996\xB6\xA5\xAB\xEB\xD4\xDA" 400 157 "-" "-" "-"
10.15.128.65 - - [11/Nov/2021:06:40:47 +0000] "\x16\x03\x01\x01\x00\x01\x00\x00\xFC\x03\x03Fy\x03N\x0E\x11\x89k\x7F\xC5\x00\x90w}\xEB{\x7F\xB1=\xF0" 400 157 "-" "-" "-"
2021/11/11 06:40:47 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
2021/11/11 06:40:47 [notice] 32#32: gracefully shutting down
2021/11/11 06:40:47 [notice] 32#32: exiting
2021/11/11 06:40:47 [notice] 33#33: gracefully shutting down
2021/11/11 06:40:47 [notice] 32#32: exit
2021/11/11 06:40:47 [notice] 33#33: exiting
2021/11/11 06:40:47 [notice] 33#33: exit
2021/11/11 06:40:47 [notice] 1#1: signal 17 (SIGCHLD) received from 33
2021/11/11 06:40:47 [notice] 1#1: worker process 33 exited with code 0
2021/11/11 06:40:47 [notice] 1#1: signal 29 (SIGIO) received
2021/11/11 06:40:47 [notice] 1#1: signal 17 (SIGCHLD) received from 32
2021/11/11 06:40:47 [notice] 1#1: worker process 32 exited with code 0
2021/11/11 06:40:47 [notice] 1#1: exit
</code></pre>
<h5>kubectl get pod PODNAME --output=yaml</h5>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
seccomp.security.alpha.kubernetes.io/pod: runtime/default
creationTimestamp: "2021-11-11T06:40:30Z"
generateName: sgb-web-master-fb9f995fb-
labels:
app: sgb-web-master
pod-template-hash: fb9f995fb
name: sgb-web-master-fb9f995fb-zwhgl
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: sgb-web-master-fb9f995fb
uid: 96ebf43d-e2e6-4632-a536-764bcab8daeb
resourceVersion: "66168456"
uid: ed80b0d0-6681-4c2a-8edd-16c8ef6bee86
spec:
containers:
- env:
- name: PORT
value: "80"
image: cflynnus/saigonbros-web:master-d70f3001d130bf986da236a08e1fded4b64e8097
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 80
scheme: HTTPS
initialDelaySeconds: 3
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
name: saigonbros-web
ports:
- containerPort: 80
name: sgb-web-port
protocol: TCP
resources:
limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
securityContext:
capabilities:
drop:
- NET_RAW
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-rkwb2
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: gk3-autopilot-cluster-1-default-pool-43dd48b9-tf0n
preemptionPolicy: PreemptLowerPriority
priority: 0
readinessGates:
- conditionType: cloud.google.com/load-balancer-neg-ready
restartPolicy: Always
schedulerName: gke.io/optimize-utilization-scheduler
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-rkwb2
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: null
message: 'Pod is in NEG "Key{\"k8s1-301c19bd-default-sgb-web-master-80-48ae70f6\",
zone: \"asia-southeast1-a\"}". NEG is not attached to any BackendService with
health checking. Marking condition "cloud.google.com/load-balancer-neg-ready"
to True.'
reason: LoadBalancerNegWithoutHealthCheck
status: "True"
type: cloud.google.com/load-balancer-neg-ready
- lastProbeTime: null
lastTransitionTime: "2021-11-11T06:40:33Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2021-11-11T06:44:42Z"
message: 'containers with unready status: [saigonbros-web]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2021-11-11T06:44:42Z"
message: 'containers with unready status: [saigonbros-web]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2021-11-11T06:40:33Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://dfc32581c1edda1a221dc00cede918cfb93225e51e505ea7a9f935fc9ab893d5
image: docker.io/cflynnus/saigonbros-web:master-d70f3001d130bf986da236a08e1fded4b64e8097
imageID: docker.io/cflynnus/saigonbros-web@sha256:ff8d6d42511ed6520967007714dfbd46817fca06bb65ae984bc04a8b90346222
lastState:
terminated:
containerID: containerd://dfc32581c1edda1a221dc00cede918cfb93225e51e505ea7a9f935fc9ab893d5
exitCode: 0
finishedAt: "2021-11-11T06:44:41Z"
reason: Completed
startedAt: "2021-11-11T06:44:30Z"
name: saigonbros-web
ready: false
restartCount: 6
started: false
state:
waiting:
message: back-off 2m40s restarting failed container=saigonbros-web pod=sgb-web-master-fb9f995fb-zwhgl_default(ed80b0d0-6681-4c2a-8edd-16c8ef6bee86)
reason: CrashLoopBackOff
hostIP: 10.148.15.200
phase: Running
podIP: 10.15.128.103
podIPs:
- ip: 10.15.128.103
qosClass: Guaranteed
startTime: "2021-11-11T06:40:33Z"
</code></pre>
| <p>Your liveness probe is configured as <code>HTTPS</code> on port 80. Just change it to <code>HTTP</code>. Look at the key <code>spec.containers.livenessProbe.httpGet.scheme</code>.</p>
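<p>A corrected probe could look like this (a sketch; the path, port and timings are copied from the pod spec in the question, only the scheme changes):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /
    port: 80
    scheme: HTTP   # was HTTPS; the container serves plain HTTP on port 80
  initialDelaySeconds: 3
  periodSeconds: 3
  failureThreshold: 3
</code></pre>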
<p>Kubernetes thinks your pod isn't alive (failing liveness probe) and sends the SIGQUIT.<br />
Normally this behavior helps you: when your pod isn't alive, Kubernetes restarts the app for you.</p>
<hr />
<h3>Edit</h3>
<p>You can also identify that behavior in the logs of your nginx:</p>
<pre><code>10.15.128.65 - - [11/Nov/2021:06:40:41 +0000] "\x16\x03\x01\x01\x00\x01\x00\x00\xFC\x03\x03>\x85O#\xCC\xB9\xA5j\xAB\x8D\xC1PpZ\x18$\xE5ah\xDF7\xB1\xFF\xAD\x22\x050\xC3.+\xB6+ \x0F}S)\xC9\x1F\x0BY\x15_\x10\xC6\xAAF\xAA\x9F\x9E_@dG\x01\xF5vzt\xB50&;\x1E\x15\x00&\xC0/\xC00\xC0+\xC0,\xCC\xA8\xCC\xA9\xC0\x13\xC0\x09\xC0\x14\xC0" 400 157 "-" "-" "-"
10.15.128.65 - - [11/Nov/2021:06:40:44 +0000] "\x16\x03\x01\x01\x00\x01\x00\x00\xFC\x03\x03\xD8['\xE75x'\xC3}+v\xC9\x83\x84\x96EKn\xC5\xB6}\xEE\xBE\xD9Gp\xE9\x1BX<n\xB2 \xD9n\xD1\xC5\xFC\xF2\x8D\x92\xAC\xC0\xA8mdF\x17B\xA3y9\xDD\x98b\x0E\x996\xB6\xA5\xAB\xEB\xD4\xDA" 400 157 "-" "-" "-"
10.15.128.65 - - [11/Nov/2021:06:40:47 +0000] "\x16\x03\x01\x01\x00\x01\x00\x00\xFC\x03\x03Fy\x03N\x0E\x11\x89k\x7F\xC5\x00\x90w}\xEB{\x7F\xB1=\xF0" 400 157 "-" "-" "-"
2021/11/11 06:40:47 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
</code></pre>
<p>Those are the three configured liveness probes firing with a period of three seconds. The request lines are unreadable because Kubernetes sends TLS packets, which are not human-readable in plain view.<br />
Immediately after that comes the shutdown.</p>
<p>Another way is to read the description of your pod. There you can see that <code>HTTPS</code> and port 80 are configured together. <code>HTTPS</code> normally runs over port 443, so this points to a configuration error.</p>
|
<p>Our AKS cluster was configured to auto-renew Let's Encrypt certificates through Ingress Cert-Manager annotation and this worked perfectly until we upgraded to AKS 1.20.7. This then stopped working and the certificates started to expire without them being renewed - I double-checked all changes to K8S and CertManager APIs and reviewed all YAMLs, but I'm not seeing anything obviously wrong. Would appreciate any pointers.</p>
<p>My understanding is that as long as I add the "cert-manager.io/cluster-issuer: letsencrypt-prod-p9v2" to my ingress - the whole renewal should happen automatically - this is not happening though.</p>
<pre><code>> kubectl cert-manager version
util.Version{GitVersion:"v1.4.0", GitCommit:"5e2a6883c1202739902ac94b5f4884152b810925", GitTreeState:"clean", GoVersion:"go1.16.2", Compiler:"gc", Platform:"linux/amd64"}
AKS version: 1.20.7
cat shipit-ingress-p9v2.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt-prod-p9v2
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: 15m
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.org/client-max-body-size: 15m
generation: 4
name: shipit-ingress-p9v2
namespace: supplier
resourceVersion: "147087245"
uid: 6751dbff-83b1-48a1-a467-e75cc843ee79
spec:
rules:
- host: xxx.westeurope.cloudapp.azure.com
http:
paths:
- backend:
service:
name: planet9v2
port:
number: 8080
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- xxx.westeurope.cloudapp.azure.com
secretName: tls-secret-p9v2
status:
loadBalancer:
ingress:
- ip: 10.240.0.5
>>kubectl get clusterissuer -o yaml letsencrypt-prod-p9v2
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
annotations:
creationTimestamp: "2020-05-29T13:31:10Z"
generation: 2
name: letsencrypt-prod-p9v2
resourceVersion: "25493731"
uid: 0e0e46f5-4cdf-42ea-a022-2dfe9ed56ad8
spec:
acme:
email: xxx
http01: {}
privateKeySecretRef:
name: letsencrypt-prod
server: https://acme-v02.api.letsencrypt.org/directory
status:
acme:
uri: https://acme-v02.api.letsencrypt.org/acme/acct/76984529
conditions:
- lastTransitionTime: "2020-05-29T13:31:11Z"
message: The ACME account was registered with the ACME server
reason: ACMEAccountRegistered
status: "True"
type: Ready
>>kubectl cert-manager inspect secret tls-secret-p9v2
...
Debugging:
Trusted by this computer: no: x509: certificate has expired or is not yet valid: current time 2021-08-24T07:03:32Z is after 2021-08-22T06:40:20Z
CRL Status: No CRL endpoints set
OCSP Status: Cannot check OCSP: error reading OCSP response: ocsp: error from server: unauthorized
kubectl describe secret tls-secret-p9v2
Name: tls-secret-p9v2
Namespace: supplier
Labels: certmanager.k8s.io/certificate-name=tls-secret-p9v2
Annotations: certmanager.k8s.io/alt-names: shipit-dev-p9v2.westeurope.cloudapp.azure.com
certmanager.k8s.io/common-name: shipit-dev-p9v2.westeurope.cloudapp.azure.com
certmanager.k8s.io/ip-sans:
certmanager.k8s.io/issuer-kind: ClusterIssuer
certmanager.k8s.io/issuer-name: letsencrypt-prod-p9v2
Type: kubernetes.io/tls
Data
====
tls.key: 1679 bytes
ca.crt: 0 bytes
tls.crt: 5672 bytes
kubectl get order
NAME STATE AGE
tls-secret-p9v2-4123722043 valid 24d
[(⎈ |shipit-k8s-dev:supplier)]$ k describe order tls-secret-p9v2-4123722043
Name: tls-secret-p9v2-4123722043
Namespace: supplier
Labels: acme.cert-manager.io/certificate-name=tls-secret-p9v2
Annotations: <none>
API Version: certmanager.k8s.io/v1alpha1
Kind: Order
Metadata:
Creation Timestamp: 2021-07-31T04:12:42Z
Generation: 4
Managed Fields:
API Version: certmanager.k8s.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.:
f:acme.cert-manager.io/certificate-name:
f:ownerReferences:
.:
k:{"uid":"a1dec741-0fe7-42be-99d2-176c3d4cdf38"}:
.:
f:apiVersion:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
.:
f:config:
f:csr:
f:dnsNames:
f:issuerRef:
.:
f:kind:
f:name:
f:status:
.:
f:certificate:
f:challenges:
f:finalizeURL:
f:state:
f:url:
Manager: jetstack-cert-manager
Operation: Update
Time: 2021-07-31T04:13:09Z
Owner References:
API Version: certmanager.k8s.io/v1alpha1
Block Owner Deletion: true
Controller: true
Kind: Certificate
Name: tls-secret-p9v2
UID: a1dec741-0fe7-42be-99d2-176c3d4cdf38
Resource Version: 143545958
UID: a646985b-6d44-4c99-bb39-ceb6c4919047
Spec:
Config:
Domains:
shipit-dev-p9v2.westeurope.cloudapp.azure.com
http01:
Ingress Class: nginx
Csr: MIIC3zCCAccCAQAwTzEVMBMGA1UEChMMY2VydC1tYW5hZ2VyMTYwNAYDVQQDEy1zaGlwaXQtZGV2LXA5djIud2VzdGV1cm9wZS5jbG91ZGFwcC5henVyZS5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCdqF08foRx7qVCU4YpVLHxodqMJp1h10l0s89MUVK7C2IWwHdQ5w2BjUB12gT6T6NK9ZhJEzzYtLk18wFAojKUOFjuwF5Kklh+Qe6rFiZNNJ2+uDN/WhCLylbsXjHzQ+N3XMZ0jhGv+72XQyeK/X8jurMmVk5dSZbYP0ysk7w7gSFjjpeN2EIpYcnp2rCjTU+ksfeJ04DDm84hN9snMpGKspIhTFBphCQQgScPO9Fx+S5NVG/ScoM0CLSYiQVB0oPYUaw84O/lNC7kq/UWERli2pNy9Gnxdw2g37nFTj2uvPGGbPE1WBTFtdzkFWMepaw1l25X1//Nsap3zuZY0C3jAgMBAAGgSzBJBgkqhkiG9w0BCQ4xPDA6MDgGA1UdEQQxMC+CLXNoaXBpdC1kZXYtcDl2Mi53ZXN0ZXVyb3BlLmNsb3VkYXBwLmF6dXJlLmNvbTANBgkqhkiG9w0BAQsFAAOCAQEAWEqfGuYcgf2ujby+K9MK+9q/r0cajo4q0JM6ZkBQQGb88b/nwxa7sr4n7hnlpKhXdLPp5QoeBMr3UM7Nwc7PrYxOws9v51mq3ekUOARgO4/4eJw4agFf8KKQLjtkFr2Q3OFJp6GuYKDCo2+Z1jqs76v7ReKlBoVhMtxOkjykQJheFQzg7ezGshE5trXh3NL/FaaThp1vP+qp8nDnq1YXkvOyaoc7u4X2sl831FTPcv3tsQJJzrOPlZPUJcgCC9cZiCTwqdttaJFRobTEGSk+pzc54C6eRQv9muto8D29Eg2G9f9xDSJULT6WbZWL6gzbJ/5pu3ep+V+cB43f5H+Sqg==
Dns Names:
shipit-dev-p9v2.westeurope.cloudapp.azure.com
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-prod-p9v2
Status:
Certificate: LS0tLS1CRUdJTiBDRVJUSUZJ.....
Challenges:
Authz URL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/17660284180
Config:
http01:
Ingress Class: nginx
Dns Name: shipit-dev-p9v2.westeurope.cloudapp.azure.com
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-prod-p9v2
Key: AxP1pv5I087QVyKXIkGyT5pqlD4Aa-UYmJHAOgzHPu4.mIcOL5pBlkZJSpSUslpjJTC_hFunxNRCEA82VcfFAHE
Token: AxP1pv5I087QVyKXIkGyT5pqlD4Aa-UYmJHAOgzHPu4
Type: http-01
URL: https://acme-v02.api.letsencrypt.org/acme/chall-v3/17660284180/Sh057Q
Wildcard: false
Finalize URL: https://acme-v02.api.letsencrypt.org/acme/finalize/75003870/13444902230
State: valid
URL: https://acme-v02.api.letsencrypt.org/acme/order/75003870/13444902230
Events: <none>
</code></pre>
<p>I was facing the same issue; updating the version of cert-manager resolved it.</p>
<p>I was not on AKS but on GKE, and I upgraded to the cert-manager 1.5 release.</p>
<p>As of now, the supported releases are: <strong>1.5 & 1.6</strong></p>
<p><a href="https://cert-manager.io/docs/installation/supported-releases/" rel="noreferrer">Supported releases</a></p>
<p>Based on my understanding, cert-manager stops supporting older releases and supports only the latest <strong>2</strong> releases.</p>
<p>I upgraded to <strong>1.5</strong> and the issue got resolved.</p>
|
<p>I'm using Traefik as IngressRoute.</p>
<p>With <code>kubectl api-resources</code> it is defined as:</p>
<pre><code>NAME SHORTNAMES APIVERSION NAMESPACED KIND
...
ingressroutes traefik.containo.us/v1alpha1 true IngressRoute
...
</code></pre>
<p>My problem is that in Kubernetes Dashboard only ingress resources can be viewed, therefore ingressroute resources is not displayed.</p>
<p>How to implement the ability to see ingressroute resources instead of ingresses?</p>
<p>Kubernetes Dashboard does not have the ability to display Traefik IngressRoutes the same way it shows Ingresses without changing its source code.</p>
<p>If you want, you can create feature request in <a href="https://github.com/kubernetes/dashboard/labels/kind%2Ffeature" rel="nofollow noreferrer">dashboard GitHub repo</a>, and follow <a href="https://github.com/kubernetes/dashboard/issues/5232" rel="nofollow noreferrer">Improve resource support #5232</a> issue. Maybe in the future such feature will be added.</p>
<p>In the meantime, you can use Traefik's own <a href="https://doc.traefik.io/traefik/operations/dashboard/" rel="nofollow noreferrer">dashboard</a>.</p>
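<p>As a sketch, exposing that dashboard through an <code>IngressRoute</code> can look like this (based on the Traefik docs; the hostname and entry point name are assumptions about your setup):</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
spec:
  entryPoints:
    - websecure               # assumed entry point name
  routes:
    - match: Host(`traefik.example.com`)   # hypothetical hostname
      kind: Rule
      services:
        - name: api@internal  # Traefik's built-in dashboard/API service
          kind: TraefikService
</code></pre>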
|
<p>i'm learning docker/k8s; I want to pass/store a .pem file to my boostrap container which runs on a k8s cluster. This container uses the .pem to create a k8s secret (kubectl create secrets ...) which will be used by the other apps running on k8s by mounting the kubernetes secrets.</p>
<p>I can think of the following options,</p>
<ul>
<li>I can pass the .pem details as ENV to the container.</li>
<li>I can build the image with the .pem file.</li>
<li>I can store the .pem file in S3 and download it from within the container.</li>
</ul>
<p>Wanted to understand which of these is the best practice/secure method to accomplish this task.</p>
<p>(Although one could say that K8S Secrets are a type of ConfigMap.)<br>
I think the <strong>better approach is to use K8S Secrets (over ConfigMaps)</strong>, as specified in <a href="https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets" rel="nofollow noreferrer">here</a>.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: secret-tls
type: kubernetes.io/tls
data:
# the data is abbreviated in this example
tls.crt: |
MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
tls.key: |
MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ...
</code></pre>
<p>Then you can create a <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-pod-that-has-access-to-the-secret-data-through-a-volume" rel="nofollow noreferrer">pod that has access to the secret data through a Volume</a>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: secret-test-pod
spec:
containers:
- name: test-container
image: nginx
volumeMounts:
# name must match the volume name below
- name: secret-volume
mountPath: /etc/secret-dir
# The secret data is exposed to Containers in the Pod through a Volume.
volumes:
- name: secret-volume
secret:
secretName: secret-tls
</code></pre>
<p>(*) In this specific example the <code>tls.crt</code> and <code>tls.key</code> will be created under <code>/etc/secret-dir</code>.</p>
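<p>If the bootstrap container ends up scripting the secret creation, the only transformation involved is base64-encoding the PEM bytes into the Secret's <code>data</code> fields. A minimal Python sketch (the helper name and inputs are hypothetical, not part of any Kubernetes client library):</p>

```python
import base64

def tls_secret_manifest(name, namespace, pem_cert, pem_key):
    """Build a kubernetes.io/tls Secret manifest; data values must be base64-encoded."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": namespace},
        "type": "kubernetes.io/tls",
        "data": {
            "tls.crt": base64.b64encode(pem_cert).decode("ascii"),
            "tls.key": base64.b64encode(pem_key).decode("ascii"),
        },
    }

# Placeholder bytes stand in for real PEM file contents.
manifest = tls_secret_manifest("secret-tls", "default", b"CERT", b"KEY")
```

<p>The resulting dict can be dumped to YAML/JSON and applied, or passed to a Kubernetes client; either way the cleartext PEM never needs to be baked into the image or an environment variable.</p>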
|
<p>I have been running an application stack successfully on a server using the <em>k3s</em> Kubernetes implementation. I am now attempting to deploy it on my Windows PC in Docker Desktop. I get a <strong>404 not found</strong> when accessing the application on the localhost.</p>
<ul>
<li>I tried using 'localhost', '127.0.0.1' and
'kubernetes.docker.internal' (the latter is assigned to 127.0.0.1 by
Docker Desktop on installation) - none work.</li>
<li>I have tried all these
host names with ports 80, 8000, 443, 8443, etc. No luck.</li>
</ul>
<p>I'm using <em>Traefik</em> as my ingress controller, with TLS switched on, certificates are issued by cert-manager. For the local deployment, I use the cert-manager <em>self-signed</em> option. I have also <em>removed</em> the TLS spec from my ingress, but it makes no difference.</p>
<p>Here is my ingress definition, using 'kubernetes.docker.internal' as host:</p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: traefik-ingress
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: self-signed
spec:
tls:
- secretName: s3a-tls-certificate-secret
hosts:
- kubernetes.docker.internal
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: s3a-frontend
port:
number: 80
</code></pre>
<ul>
<li>When I try http<strong>s</strong>://kubernetes.docker.internal (implied port 443), on first attempt, Chrome shows me the "unsafe self-signed certificate, press Advanced to continue" stuff, but when I do continue, I get 404 page not found.</li>
<li>When I try plain <a href="http://kubernetes.docker.internal" rel="nofollow noreferrer">http://kubernetes.docker.internal</a> (implied port 80), I get Chrome ERR_EMPTY_RESPONSE.</li>
<li>When I remove the TLS stuff completely from the ingress, it does exactly the same as above for ports http and https.</li>
<li>You'll see the spec/rules don't specify a specific host, i.e. the spec applies to <em>all</em> hosts. I've tried fixing it to host 'kubernetes.docker.internal', but as expected, it makes no difference.</li>
</ul>
<p>The target service "s3a-frontend" is running on its exposed port 80: <em>I can access it</em> when hitting it directly on its node port, but <em>not via the ingress</em>.</p>
<p>EDIT:
The Traefik ingress controller comes with k3s, but I installed it manually in Docker Desktop. When I attach to the traefik pod, I can see a request being logged when I access <a href="https://kubernetes.docker.internal" rel="nofollow noreferrer">https://kubernetes.docker.internal</a>, here are the logs. You can see the cluster health checks ("ping@internal"), but also my request for root ("/") and then Chrome's request for favicon, neither of which return a response code.</p>
<pre><code>10.1.0.1 - - [05/Nov/2021:16:36:23 +0000] "GET /ping HTTP/1.1" 200 2 "-" "-" 2141 "ping@internal" "-" 0ms
192.168.65.3 - - [05/Nov/2021:16:36:24 +0000] "GET / HTTP/2.0" - - "-" "-" 2142 "-" "-" 0ms
10.1.0.1 - - [05/Nov/2021:16:36:24 +0000] "GET /ping HTTP/1.1" 200 2 "-" "-" 2143 "ping@internal" "-" 0ms
192.168.65.3 - - [05/Nov/2021:16:36:24 +0000] "GET /favicon.ico HTTP/2.0" - - "-" "-" 2144 "-" "-" 0ms
</code></pre>
<p>/END EDIT</p>
<p>I have done a lot of research, including on the possibility that something else is hogging ports 80/443. netstat says that <em>Docker itself</em> is listening on those ports: how can I claim those ports if Docker itself is using it, or is this a red herring?</p>
<p><strong>System</strong></p>
<ul>
<li>Windows 10 Pro, 10.0.19043</li>
<li>WSL2</li>
<li>Docker Desktop reports engine v20.10.8</li>
</ul>
| <p>I couldn't get this working on Docker Desktop. I switched to <a href="https://rancherdesktop.io/" rel="nofollow noreferrer">Rancher Desktop</a> and it worked.</p>
|
<p>I am looking to output a comma separated list of all user accounts from Kubernetes.</p>
<p>I understand that one can return a list of namespaces, pods, and so on from Kubernetes using the 'kubectl get namespace' and 'kubectl get pods' command. However, is there an equivalent for returning a list of Kubernetes users?</p>
<p>Currently, I can see a list of all of the user account and their respective names, emails, and IDs from within Kubernetes via our management platform Rancher but, the problem is, I can't seem to find a way to return a comma-separated list of these users via the command line through Kubectl or Powershell.</p>
| <p>This really depends on your k8s setup and rbac model.</p>
<p>I would suggest that you look at the objects talked about here <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a></p>
<p>In the end, the commands you probably want are:</p>
<pre><code>kubectl get clusterroles.rbac.authorization.k8s.io --all-namespaces
kubectl get roles.rbac.authorization.k8s.io --all-namespaces
</code></pre>
|
<p>I'm trying to make a server-client communication between pods in a K8s cluster using the python <code>socket</code> library.</p>
<p>When running outside of a cluster, the server-client connection works, however in the k8s the server doesn't even set up:</p>
<pre><code>import socket
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("myApp", 30152)) # it breaks here
server_socket.listen()
</code></pre>
<p>And here are my configuration YAMLs:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myApp
labels:
app: myApp
spec:
replicas: 1
selector:
matchLabels:
app: myApp
template:
metadata:
labels:
app: myApp
spec:
containers:
- name: myApp
image: app_image:version
ports:
- containerPort: 30152
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: myApp
labels:
app: myApp
spec:
selector:
app: myApp
ports:
- name: myApp
protocol: TCP
port: 30152
type: ClusterIP
---
</code></pre>
<p>The <code>service type</code> is a <code>ClusterIP</code> as the connection would only be between pods in the same cluster. Does anyone know where the problem might come from?</p>
<p>Here I've built a small example with a client and a server python app, talking to each other via a k8s service, almost from scratch (clone all the files from <a href="https://github.com/jabbson/q69936079" rel="nofollow noreferrer">here</a> if you want to follow along).</p>
<h1>Server</h1>
<h2>server.py</h2>
<pre><code>import socket
import sys
import os
PORT = int(os.getenv('LISTEN_PORT'))
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('0.0.0.0', PORT)
print('Starting up on {} port {}'.format(*server_address))
sock.bind(server_address)
sock.listen()
while True:
print('\nWaiting for a connection')
connection, client_address = sock.accept()
try:
print('Connection from', client_address)
while True:
data = connection.recv(64)
print('Received {!r}'.format(data))
if data:
print('Sending data back to the client')
connection.sendall(data)
else:
print('No data from', client_address)
break
finally:
connection.close()
</code></pre>
<h2>Dockerfile</h2>
<pre><code>FROM python:3-alpine
WORKDIR /app
COPY server.py .
CMD ["/usr/local/bin/python3", "/app/server.py"]
</code></pre>
<p>building an image, tagging, pushing to container repo (GCP):</p>
<pre><code>docker build --no-cache -t q69936079-server .
docker tag q69936079-server gcr.io/<project_id>/q69936079-server
docker push gcr.io/<project_id>/q69936079-server
</code></pre>
<h1>Client</h1>
<h2>client.py</h2>
<pre><code>import socket
import os
import sys
import time
counter = 0
SRV = os.getenv('SERVER_ADDRESS')
PORT = int(os.getenv('SERVER_PORT'))
while 1:
if counter != 0:
time.sleep(5)
counter += 1
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = (SRV, PORT)
print("Connection #{}".format(counter))
print('Connecting to {} port {}'.format(*server_address))
try:
sock.connect(server_address)
except Exception as e:
print("Cannot connect to the server,", e)
continue
try:
message = b'This is the message. It will be repeated.'
print('Sending: {!r}'.format(message))
sock.sendall(message)
amount_received = 0
amount_expected = len(message)
while amount_received < amount_expected:
data = sock.recv(64)
amount_received += len(data)
print('Received: {!r}'.format(data))
finally:
print('Closing socket\n')
sock.close()
</code></pre>
<h2>Dockerfile</h2>
<pre><code>FROM python:3-alpine
WORKDIR /app
COPY client.py .
CMD ["/usr/local/bin/python3", "/app/client.py"]
</code></pre>
<p>building an image, tagging, pushing to container repo (GCP in my case):</p>
<pre><code>docker build --no-cache -t q69936079-client .
docker tag q69936079-client gcr.io/<project_id>/q69936079-client
docker push gcr.io/<project_id>/q69936079-client
</code></pre>
<h1>K8S</h1>
<h2>server deployment</h2>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: server-deployment
spec:
selector:
matchLabels:
app: server
replicas: 1
template:
metadata:
labels:
app: server
spec:
containers:
- name: server
image: "gcr.io/<project_id>/q69936079-server:latest"
env:
- name: PYTHONUNBUFFERED
value: "1"
- name: LISTEN_PORT
value: "30152"
</code></pre>
<h2>client deployment</h2>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: client-deployment
spec:
selector:
matchLabels:
app: client
replicas: 1
template:
metadata:
labels:
app: client
spec:
containers:
- name: client
image: "gcr.io/<project_id>/q69936079-client:latest"
env:
- name: PYTHONUNBUFFERED
value: "1"
- name: SERVER_ADDRESS
value: my-server-service
- name: SERVER_PORT
value: "30152"
</code></pre>
<h2>server service</h2>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-server-service
spec:
type: ClusterIP
selector:
app: server
ports:
- protocol: TCP
port: 30152
</code></pre>
<h1>Validation</h1>
<p>k8s object</p>
<pre><code>k get all
NAME READY STATUS RESTARTS AGE
pod/client-deployment-7dd5d675ff-pvwd4 1/1 Running 0 14m
pod/server-deployment-56bd44cc68-w6jns 1/1 Running 0 13m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.140.0.1 <none> 443/TCP 12h
service/my-server-service ClusterIP 10.140.13.183 <none> 30152/TCP 38m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/client-deployment 1/1 1 1 14m
deployment.apps/server-deployment 1/1 1 1 13m
NAME DESIRED CURRENT READY AGE
replicaset.apps/client-deployment-7dd5d675ff 1 1 1 14m
replicaset.apps/server-deployment-56bd44cc68 1 1 1 13m
</code></pre>
<h2>Server logs</h2>
<pre><code>k logs -f deployment.apps/server-deployment
Starting up on 0.0.0.0 port 30152
Waiting for a connection
Connection from ('10.136.1.11', 48234)
Received b'This is the message. It will be repeated.'
Sending data back to the client
Received b''
No data from ('10.136.1.11', 48234)
Waiting for a connection
Connection from ('10.136.1.11', 48246)
Received b'This is the message. It will be repeated.'
Sending data back to the client
Received b''
No data from ('10.136.1.11', 48246)
</code></pre>
<h2>Client logs</h2>
<pre><code>k logs -f deployment.apps/client-deployment
Connection #1
Connecting to my-server-service port 30152
Cannot connect to the server, [Errno 111] Connection refused
Connection #2
Connecting to my-server-service port 30152
Cannot connect to the server, [Errno 111] Connection refused
Connection #3
Connecting to my-server-service port 30152
Sending: b'This is the message. It will be repeated.'
Received: b'This is the message. It will be repeated.'
Closing socket
Connection #4
Connecting to my-server-service port 30152
Sending: b'This is the message. It will be repeated.'
Received: b'This is the message. It will be repeated.'
Closing socket
</code></pre>
|
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment</a> mentions that a <code>deployment</code> creates a <code>replicaSet</code> but appends a <code>pod-template-hash</code> to the name of the <code>replicaSet</code> and also adds <code>pod-template-hash</code> as <code>replicaSet</code>'s label.</p>
<p>my best guess is that <code>deployment</code> creates multiple <code>replicaSets</code> and this hash ensures that the replicas do not overlap. Is that correct?</p>
| <p>Correct, the documentation states this really well:</p>
<blockquote>
<p>The <code>pod-template-hash</code> label is added by the Deployment controller to
every ReplicaSet that a Deployment creates or adopts.</p>
<p>This label ensures that child ReplicaSets of a Deployment do not
overlap. It is generated by hashing the <code>PodTemplate</code> of the ReplicaSet
and using the resulting hash as the label value that is added to the
ReplicaSet selector, Pod template labels, and in any existing Pods
that the ReplicaSet might have.</p>
</blockquote>
<p>This is necessary for a bunch of different reasons:</p>
<ul>
<li>When you apply a new version of a Deployment, depending on how the deployment is configured and on probes, the previous Pod / Pods could stay up until the new one / ones is not Running and Ready and only then is gracefully terminated. So it may happens that Pods of different <code>ReplicaSet</code> (previous and current) run at the same time.</li>
<li>Deployment History is available to be consulted and you may also want to rollback to an older revision, should the current one stops behaving correctly (for example you changed the image that needs to be used and it jsut crash in error). Each revision has its own ReplicaSet ready to be scaled up or down as necessary as explained in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#checking-rollout-history-of-a-deployment" rel="noreferrer">docs</a></li>
</ul>
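<p>The hash itself is an FNV hash of the serialized <code>PodTemplate</code>. The exact serialization and collision handling live inside the Deployment controller, so the Python sketch below (simplified 32-bit FNV-1a, hypothetical template strings) only illustrates why two different templates get non-overlapping labels:</p>

```python
def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a hash (illustrative stand-in for the controller's FNV hash)."""
    h = 0x811C9DC5                         # FNV offset basis
    for b in data:
        h ^= b
        h = (h * 0x01000193) & 0xFFFFFFFF  # multiply by FNV prime, keep 32 bits
    return h

# Two pod templates that differ only in the image hash differently,
# so the ReplicaSets labeled with these hashes do not overlap.
t1 = b'{"containers":[{"image":"nginx:1.20"}]}'
t2 = b'{"containers":[{"image":"nginx:1.21"}]}'
assert fnv1a_32(t1) != fnv1a_32(t2)
```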
|
<p>Let's consider a python web application deployed under uWSGI via Nginx.</p>
<blockquote>
<p>HTTP client ↔ Nginx ↔ Socket/HTTP ↔ uWSGI (web server) ↔ webapp</p>
</blockquote>
<p>Where nginx is used as reverse proxy / load balancer.</p>
<p><strong>How to scale this kind of applications in kubernetes?</strong>
Several options come to my mind:</p>
<ol>
<li>Deploy nginx and uWSGI in a single pod. Simple approach.</li>
<li>Deploy nginx + uWSGI in single container? Violate the “one process per container” principle.</li>
<li>Deploy only a uWSGI (via HTTP). Omit the usage of nginx.</li>
</ol>
<p>or there is another solution, involving nginx ingress/load balancer services?</p>
| <p>It depends.</p>
<p>I see two scenarios:</p>
<ol>
<li><p><strong>Ingress is used</strong></p>
<p>In this case there's no need to have nginx server within the pod, but it can be <code>ingress-nginx</code> which will be balancing traffic across a kubernetes cluster. You can find a good example in <a href="https://github.com/nginxinc/kubernetes-ingress/issues/143#issuecomment-347814243" rel="nofollow noreferrer">this comment on GitHub issue</a>.</p>
</li>
<li><p><strong>No ingress is used.</strong></p>
<p>In this case I'd go with <code>option 1</code> - <code>Deploy nginx and uWSGI in a single pod. Simple approach.</code>. This way you can easily scale in/out your application and don't have any complicated/unnecessary dependencies.</p>
</li>
</ol>
<p>In case you're not familiar with <code>what ingress is</code>, please find <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes documentation - ingress</a>.</p>
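<p>For completeness, option 1 would look roughly like this: the two processes become two containers in one pod, talking over <code>localhost</code>. Image names and ports below are placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
    - name: nginx
      image: nginx:1.21          # reverse proxy, configured to proxy_pass to 127.0.0.1:8000
      ports:
        - containerPort: 80
    - name: uwsgi
      image: my-webapp:latest    # hypothetical uWSGI application image
      ports:
        - containerPort: 8000    # uWSGI listening for proxied requests
</code></pre>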
|
<p>My Argo setup is on GKE.</p>
<p>What I observed is that after a successful completion of a workflow, it gets auto-deleted. However, if a workflow fails, the workflow and all the node pods remain in the cluster.</p>
<p>I could not find any documentation around this. I'd be extremely grateful if you could share some pointers.</p>
<p>Note, this is happening only on GKE cluster. Running it on minikube doesn't reproduce this behaviour.</p>
| <p><a href="https://argoproj.github.io/argo-workflows/fields/#ttlstrategy" rel="nofollow noreferrer">https://argoproj.github.io/argo-workflows/fields/#ttlstrategy</a></p>
<p>Setting up ttlStrategy in my workflow/workflowTemplate spec solved this issue.</p>
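<p>For reference, a minimal sketch of how <code>ttlStrategy</code> can be set on a Workflow (the timeout values below are arbitrary examples):</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: example-
spec:
  entrypoint: main
  ttlStrategy:
    secondsAfterCompletion: 300   # delete 5 min after the workflow completes
    secondsAfterSuccess: 300      # delete 5 min after success
    secondsAfterFailure: 3600     # keep failed workflows for 1 h for debugging
  templates:
  - name: main
    container:
      image: alpine:3
      command: [echo, "hello"]
</code></pre>
<p>With <code>secondsAfterFailure</code> set, failed workflows and their pods are also cleaned up instead of remaining in the cluster indefinitely.</p>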
|
<p>I usually use the kustomize supplied with kubectl. I'm wondering if there is a way to find the kustomize version that is shipped with kubectl ?</p>
<p>For earlier versions of kubectl, there was no way of doing this, and there was an issue related to it: <a href="https://github.com/kubernetes-sigs/kustomize/issues/1424" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/issues/1424</a></p>
<p>But it has been fixed recently, and starting from <code>1.24</code> we are able to get the version by just running <code>kubectl version</code>:</p>
<pre><code>❯ ./kubectl version --short --client
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.24.8
Kustomize Version: v4.5.4
</code></pre>
<p>For the older clients, you can find that it is documented in the kustomize's README here: <a href="https://github.com/kubernetes-sigs/kustomize#kubectl-integration" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize#kubectl-integration</a></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Kubectl version</th>
<th>Kustomize version</th>
</tr>
</thead>
<tbody>
<tr>
<td>&lt; v1.14</td>
<td>n/a</td>
</tr>
<tr>
<td>v1.14-v1.20</td>
<td>v2.0.3</td>
</tr>
<tr>
<td>v1.21</td>
<td>v4.0.5</td>
</tr>
<tr>
<td>v1.22</td>
<td>v4.2.0</td>
</tr>
</tbody>
</table>
</div> |
<p>I need to restrict pod egress traffic to external destinations. Pod should be able to access any destination on the internet and all cluster internal destinations should be denied.</p>
<p>This is what I tried and it is not passing validation:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
name: test
spec:
workloadSelector:
labels:
k8s-app: mypod
outboundTrafficPolicy:
mode: REGISTRY_ONLY
egress:
- hosts:
- 'default/*'
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: all-external
spec:
location: MESH_EXTERNAL
resolution: DNS
hosts:
- '*'
ports:
- name: http
protocol: HTTP
number: 80
- name: https
protocol: TLS
number: 443
</code></pre>
<p>Istio 1.11.4</p>
| <p>I did it using <code>NetworkPolicy</code>. First, allow traffic to Kubernetes- and Istio-related services (this could be made more restrictive than just namespace-based):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: myapp-eg-system
spec:
podSelector:
matchLabels:
app: myapp
policyTypes:
- Egress
egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: istio-system
</code></pre>
<p>Then allow anything except the cluster network IP space:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: myapp-eg-app
spec:
podSelector:
matchLabels:
app: myapp
policyTypes:
- Egress
egress:
- to:
# Restrict to external traffic
- ipBlock:
cidr: '0.0.0.0/0'
except:
- '172.0.0.0/8'
- podSelector:
matchLabels:
app: myapp
ports:
- protocol: TCP
</code></pre>
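<p>Note that the <code>except</code> block has to match your cluster's actual pod/service CIDR. If your cluster networks span several RFC 1918 private ranges, a sketch excluding all of them (adjust to your environment) could look like:</p>
<pre><code>egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      # RFC 1918 private ranges; replace with your cluster's CIDRs
      - 10.0.0.0/8
      - 172.16.0.0/12
      - 192.168.0.0/16
</code></pre>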
|
<p>I use Network load balancer which was provisioned using the yaml file I got here: <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy-tls-termination.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy-tls-termination.yaml</a></p>
<p>I have also pointed my domain to the Network Load balancer created by ingress-nginx.</p>
<p>But whenever I try it to access the site, I get a 502 Bad Gateway error.</p>
<p>Below is a sample of my ingress-nginx resource:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
# for backend TLS
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
rules:
- host: "my.sub.domain.com"
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: apigateway
port:
number: 1234
</code></pre>
<p>Please what can I do to solve the issue? I have searched the internet for days now. Thank you.</p>
| <p>I would suggest checking out the <code>nginx.ingress.kubernetes.io/ssl-passthrough</code> annotation. It instructs the controller to send <strong>TLS</strong> connections directly to the backend instead of letting <strong>NGINX</strong> decrypt the communication.</p>
<p>You should also check out the <code>configuration-snippet</code> annotation to set the TLS server name used when proxying to the backend:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_ssl_name "svc-s.default.svc.cluster.local";
</code></pre>
<p>Reference doc : <a href="https://github.com/kubernetes/ingress-nginx/issues/4928#issuecomment-574331462" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/4928#issuecomment-574331462</a></p>
<p>Also make sure your backend service is actually running, as NGINX throws a 502 when the backend/upstream service is unreachable.</p>
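<p>As a sketch, applied to the Ingress from the question it could look like this (note that <code>ssl-passthrough</code> also requires the ingress-nginx controller to be started with the <code>--enable-ssl-passthrough</code> flag):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    # pass TLS through to the backend instead of terminating at NGINX
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "my.sub.domain.com"
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: apigateway
            port:
              number: 1234
</code></pre>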
|
<p>I'm running a Ubuntu container with SQL Server in my local Kubernetes environment with Docker Desktop on a Windows laptop.
Now I'm trying to mount a local folder (<code>C:\data\sql</code>) that contains database files into the pod.
For this, I configured a persistent volume and persistent volume claim in Kubernetes, but it doesn't seem to mount correctly. I don't see errors or anything, but when I go into the container using <code>docker exec -it</code> and inspect the data folder, it's empty. I expect the files from the local folder to appear in the mounted folder 'data', but that's not the case.</p>
<p>Is something wrongly configured in the PV, PVC or pod?</p>
<p>Here are my yaml files:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: dev-customer-db-pv
labels:
type: local
app: customer-db
chart: customer-db-0.1.0
release: dev
heritage: Helm
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /C/data/sql
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dev-customer-db-pvc
labels:
app: customer-db
chart: customer-db-0.1.0
release: dev
heritage: Helm
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: dev-customer-db
labels:
ufo: dev-customer-db-config
app: customer-db
chart: customer-db-0.1.0
release: dev
heritage: Helm
spec:
selector:
matchLabels:
app: customer-db
release: dev
replicas: 1
template:
metadata:
labels:
app: customer-db
release: dev
spec:
volumes:
- name: dev-customer-db-pv
persistentVolumeClaim:
claimName: dev-customer-db-pvc
containers:
- name: customer-db
image: "mcr.microsoft.com/mssql/server:2019-latest"
imagePullPolicy: IfNotPresent
volumeMounts:
- name: dev-customer-db-pv
mountPath: /data
envFrom:
- configMapRef:
name: dev-customer-db-config
- secretRef:
name: dev-customer-db-secrets
</code></pre>
<p>At first, I was trying to define a volume in the pod without PV and PVC, but then I got access denied errors when I tried to read files from the mounted data folder.</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
volumes:
- name: dev-customer-db-data
hostPath:
path: C/data/sql
containers:
...
volumeMounts:
- name: dev-customer-db-data
mountPath: data
</code></pre>
<p>I've also tried to Helm install with <code>--set volumePermissions.enabled=true</code> but this didn't solve the access denied errors.</p>
| <p>Based on <a href="https://github.com/docker/for-win/issues/5325#issuecomment-567481915" rel="noreferrer">this info from the Docker for Windows GitHub issue</a>, hostPath volumes are not supported this way in WSL 2.</p>
<p>Thus, <strong>the following workaround can be used</strong>.</p>
<p>We just need to prepend <code>/run/desktop/mnt/host</code> to the initial path on the host, <code>/c/data/sql</code>. There is no need for a PersistentVolume and PersistentVolumeClaim in this case; just remove them.</p>
<p>I changed <code>spec.volumes</code> for Deployment according to <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath-configuration-example" rel="noreferrer">information about hostPath configuration on Kubernetes site</a>:</p>
<pre><code>volumes:
- name: dev-customer-db-pv
hostPath:
path: /run/desktop/mnt/host/c/data/sql
type: Directory
</code></pre>
<p>After applying these changes, the files can be found in the <code>data</code> folder in the pod, since the container specifies <code>mountPath: /data</code>.</p>
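<p>Putting it together with the <code>volumeMounts</code> from the question, the relevant part of the Deployment spec becomes:</p>
<pre><code>spec:
  volumes:
  - name: dev-customer-db-pv
    hostPath:
      # WSL 2 workaround: Windows path C:\data\sql seen from the Docker Desktop VM
      path: /run/desktop/mnt/host/c/data/sql
      type: Directory
  containers:
  - name: customer-db
    image: "mcr.microsoft.com/mssql/server:2019-latest"
    volumeMounts:
    - name: dev-customer-db-pv
      mountPath: /data
</code></pre>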
|