| prompt | response |
|---|---|
<p>I am installing CoreDNS on CentOS 7.6 with Kubernetes v1.15.2, using this command:</p>
<pre><code>kubectl create -f coredns.yaml
</code></pre>
<p>The output is:</p>
<pre><code>[root@ops001 coredns]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
service/kube-dns created
Error from server (BadRequest): error when creating "coredns.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Resources: v1.ResourceRequirements.Requests: Limits: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|__LIMIT__"},"request|..., bigger context ...|limits":{"memory":"__PILLAR__DNS__MEMORY__LIMIT__"},"requests":{"cpu":"100m","memory":"70Mi"}},"secu|...
</code></pre>
<p>This is my coredns.yaml:</p>
<pre><code># __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- name: coredns
image: gcr.azk8s.cn/google-containers/coredns:1.3.1
imagePullPolicy: IfNotPresent
resources:
limits:
memory: __PILLAR__DNS__MEMORY__LIMIT__
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.254.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
</code></pre>
<p>Am I missing something?</p>
| <p>From this error message:</p>
<pre><code>Error from server (BadRequest):
error when creating "coredns.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec:
v1.DeploymentSpec.Template: v
1.PodTemplateSpec.Spec:
v1.PodSpec.Containers: []v1.Container:
v1.Container.Resources:
v1.ResourceRequirements.Requests: Limits: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|__LIMIT__"},"request|..., bigger context ...|limits":{"memory":"__PILLAR__DNS__MEMORY__LIMIT__"},"requests":{"cpu":"100m","memory":"70Mi"}},"secu|...
</code></pre>
<p>This part is the root cause:</p>
<pre><code>unmarshalerDecoder:
quantities must match the regular expression
'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
</code></pre>
<p>Which <code>quantities</code>? Judging from</p>
<pre><code>v1.ResourceRequirements.Requests: Limits:
</code></pre>
<p>it is the memory limit. So change it from the placeholder <code>__PILLAR__DNS__MEMORY__LIMIT__</code> to a valid quantity.</p>
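<p>For illustration, the fixed <code>resources</code> block could look like this; <code>170Mi</code> is only an example value, pick whatever fits your cluster:</p>

```yaml
resources:
  limits:
    memory: 170Mi   # any valid quantity instead of the placeholder
  requests:
    cpu: 100m
    memory: 70Mi
```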
|
<p>I ran:</p>
<pre><code>kubectl api-resources | grep "External"
externalmetrics metrics.aws true ExternalMetric
</code></pre>
<p>I want to delete this <code>metrics.aws</code> API resource, but I am not even sure how it was deployed. How can I delete this safely?</p>
| <ul>
<li>If it is not a standard resource, it might be implemented as a Custom Resource Definition (CRD):</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl get crds | grep externalmetrics
</code></pre>
<ul>
<li>Check whether any Custom Resources were created under this CRD, and delete them if there are any:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl get externalmetrics
kubectl delete externalmetrics --all
</code></pre>
<ul>
<li>Then delete the CRD itself:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl delete crd externalmetrics
</code></pre>
<ul>
<li>Check that it is gone from the api-resources list:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl api-resources
</code></pre>
<p><strong>Update:</strong></p>
<p>If you see <code>error: the server doesn't have a resource type "v1beta1"</code> error then run the following command to remove it:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl delete apiservice v1beta1.metrics.k8s.io
</code></pre>
|
<p>I have the following .yaml file:</p>
<pre><code>apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: quickstart
spec:
version: 8.0.0
count: 1
elasticsearchRef:
name: quickstart
</code></pre>
<p>when I try to create the instance using kubectl create -f , I get the error</p>
<pre><code>error: unable to recognize: no matches for kind "Kibana" in version "kibana.k8s.elastic.co/v1"
</code></pre>
| <p>How did you install it? It looks like you are missing the <strong>CRD</strong>.</p>
<p>Try applying this once:</p>
<p><code>kubectl apply -f https://download.elastic.co/downloads/eck/1.0.0/all-in-one.yaml</code></p>
<p>You can check the list of available API resources:</p>
<p><code>kubectl api-resources</code></p>
|
<p>I have an AKS cluster (Azure CNI) which I'm trying to implement NetworkPolicies on. I've created the network policy which is</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: myserver
spec:
podSelector:
matchLabels:
service: my-server
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
service: myotherserver
- podSelector:
matchLabels:
service: gateway
- podSelector:
matchLabels:
service: yetanotherserver
ports:
- port: 8080
protocol: TCP
egress:
- to:
ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
- port: 5432
protocol: TCP
- port: 8080
protocol: TCP
</code></pre>
<p>but when I apply the policy I see recurring messages that the host name cannot be resolved. I installed dnsutils on the myserver pod and can see the DNS requests timing out; I also installed tcpdump on the same pod and can see requests going from myserver to kube-dns, but no responses coming back.</p>
<p>If I delete the networkpolicy DNS comes straight back; so I'm certain there's an issue with my networkpolicy but can't find a way to allow the DNS traffic. If anyone can shed any light on where I'm going wrong it would be greatly appreciated!</p>
| <p>Here is a solution that does not require adding a <code>name</code> label to the target namespace. It is necessary to define a <code>namespaceSelector</code> as well as a <code>podSelector</code>; by default, a <code>namespaceSelector</code> targets the pod's own namespace.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-dns-access
namespace: <your-namespacename>
spec:
podSelector:
matchLabels: {}
policyTypes:
- Egress
egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
</code></pre>
<p>EDIT: Changed namespaceSelector to only target <code>kube-system</code> namespace based on the <code>kubernetes.io/metadata.name</code> label. This assumes you have automatic labeling enabled. <a href="https://kubernetes.io/docs/concepts/overview/_print/#automatic-labelling" rel="noreferrer">https://kubernetes.io/docs/concepts/overview/_print/#automatic-labelling</a></p>
<p>If you don't have this feature enabled, the next best thing is to define an allow-all <code>namespaceSelector</code> along with the <code>podSelector</code>.</p>
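<p>One more detail worth checking: the policy above only allows UDP 53, while DNS also falls back to TCP for large responses. A sketch of the extended <code>ports</code> list for the same rule:</p>

```yaml
ports:
- protocol: UDP
  port: 53
- protocol: TCP
  port: 53
```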
|
<p>I need to set up a Kubernetes pod to create demo environments for clients of my web application, with a 20-day trial duration. After these 20 days the pod should be automatically deleted. How can I make the pod self-destruct?
I use Rancher to deploy my pods.</p>
| <p>There is no built-in option for a pod to auto-delete itself, but you can achieve this in <strong>two</strong> ways. The first is to write your own code that runs on K8s, checks pod status/age, and deletes the deployment (POD) after 20 days.</p>
<p>Reference on GitHub: <a href="https://github.com/dignajar/clean-pods" rel="nofollow noreferrer">https://github.com/dignajar/clean-pods</a></p>
<p>The second is to run a <strong>cronjob</strong> at an interval of <strong>20 days</strong> which deletes a specific deployment or pod; in this case you have to <strong>pass</strong> the deployment or pod <strong>name</strong> so the cronjob has that variable.</p>
<p><strong>Example : 1</strong></p>
<p>use <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md" rel="nofollow noreferrer">delete_namespaced_pod</a></p>
<pre><code>from kubernetes import client, config
from kubernetes.client.rest import ApiException

# Use config.load_kube_config() instead when running outside the cluster
config.load_incluster_config()

with client.ApiClient() as api_client:
    api_instance = client.CoreV1Api(api_client)
    namespace = '<Namespace name>'
    name = '<POD name>'
    try:
        api_response = api_instance.delete_namespaced_pod(name, namespace)
        print(api_response)
    except ApiException as e:
        print("Exception when calling CoreV1Api->delete_namespaced_pod: %s\n" % e)
</code></pre>
<p><strong>Example : 2</strong></p>
<p>A cronjob:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: cleanup
spec:
schedule: "30 1 1,20 * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: kubectl-container
image: bitnami/kubectl:latest
command: ["sh", "-c", "kubectl delete pod <POD name or add variable here>"]
restartPolicy: Never
</code></pre>
<p><strong>Extra</strong></p>
<p>You can also write a <strong>shell script</strong> that runs <strong>daily</strong>, checks the <strong>AGE</strong> of each <strong>POD</strong>, and deletes those that have reached <strong>20</strong> days:</p>
<pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp | awk 'match($5, /^(2[0-9]|[3-9][0-9]|[0-9][0-9][0-9]+)d$/) {print $0}'
</code></pre>
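<p>The age check itself can be sketched in Python; <code>older_than</code> is a hypothetical helper (not part of any library) and assumes the RFC 3339 timestamp format that the Kubernetes API returns in <code>metadata.creationTimestamp</code>:</p>

```python
from datetime import datetime, timedelta, timezone

def older_than(creation_timestamp: str, days: int = 20) -> bool:
    """Return True when a pod's creationTimestamp is at least `days` old."""
    created = datetime.strptime(creation_timestamp, "%Y-%m-%dT%H:%M:%SZ")
    created = created.replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - created >= timedelta(days=days)

# A pod created 21 days ago qualifies for deletion
stamp = (datetime.now(timezone.utc) - timedelta(days=21)).strftime("%Y-%m-%dT%H:%M:%SZ")
print(older_than(stamp))  # True
```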
<p><strong>Update</strong></p>
<p>If you face a <strong>forbidden</strong> error, create a service account and use it with the cronjob:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-name
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: sa-role
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: sa-rolebinding
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: sa-role
subjects:
- kind: ServiceAccount
name: sa-name
namespace: default
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: jobs
spec:
schedule: "*/30 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: sa-name
</code></pre>
|
<p>So right now we are trying to get a <a href="https://artifacthub.io/packages/helm/bitnami/redis" rel="nofollow noreferrer">Bitnami Redis Sentinel</a> cluster working, together with our Rails app + Sidekiq.</p>
<p>We tried different things, but it's not really clear to us how we should specify the sentinels for Sidekiq (the crucial part is that the sentinel nodes are READ ONLY, so we cannot use them for Sidekiq, since job states get written).</p>
<p>Since on Kubernetes there are only 2 services available: "redis" and "redis-headless" (not sure how they differ?) - how can I specify the sentinels like this:</p>
<pre class="lang-rb prettyprint-override"><code>Sidekiq.configure_server do |config|
  config.redis = {
    url: "redis",
    sentinels: [
      # why do we have to specify them here separately, since we should be
      # able to get a unified answer via a service, or?
      { host: "?", port: 26379 },
      { host: "?", port: 26379 },
      { host: "?", port: 26379 }
    ]
  }
end
</code></pre>
<p>Would be nice if someone could shed some light on this. As far as I understood, <strong>the Bitnami Redis Sentinel only returns the IP of the master and the application has to handle the corresponding writes to this master</strong> (<a href="https://github.com/bitnami/charts/tree/master/bitnami/redis#master-replicas-with-sentinel" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/redis#master-replicas-with-sentinel</a>) - but I really don't understand how to do this with Sidekiq?</p>
| <h2>Difference between a Kubernetes Service and a Headless Service</h2>
<p>Let's get started by clarifying the difference between a Headless Service and a Service.</p>
<p>A Service forwards each connection to one Pod, while a headless Service returns the list of IP addresses of all the available pods, allowing auto-discovery.</p>
<p>A better detailed explanation by Marco Luksa has been published on SO <a href="https://stackoverflow.com/a/52713482/1996540">here</a>:</p>
<blockquote>
<p>Each connection to the service is forwarded to one randomly selected backing pod. But what if the client needs to connect to all of those pods? What if the backing pods themselves need to each connect to all the other backing pods. Connecting through the service clearly isn’t the way to do this. What is?</p>
<p>For a client to connect to all pods, it needs to figure out the IP of each individual pod. One option is to have the client call the Kubernetes API server and get the list of pods and their IP addresses through an API call, but because you should always strive to keep your apps Kubernetes-agnostic, using the API server isn’t ideal.</p>
<p>Luckily, Kubernetes allows clients to discover pod IPs through DNS lookups. Usually, when you perform a DNS lookup for a service, the DNS server returns a single IP — the service’s cluster IP. But if you tell Kubernetes you don’t need a cluster IP for your service (you do this by setting the clusterIP field to None in the service specification ), the DNS server will return the pod IPs instead of the single service IP. Instead of returning a single DNS A record, the DNS server will return multiple A records for the service, each pointing to the IP of an individual pod backing the service at that moment. Clients can therefore do a simple DNS A record lookup and get the IPs of all the pods that are part of the service. The client can then use that information to connect to one, many, or all of them.</p>
<p>Setting the clusterIP field in a service spec to None makes the service headless, as Kubernetes won’t assign it a cluster IP through which clients could connect to the pods backing it.</p>
</blockquote>
<p>"Kubernetes in Action" by Marco Luksa</p>
<h2>How to specify the sentinels</h2>
<p>As the <a href="https://github.com/redis/redis-rb#sentinel-support" rel="nofollow noreferrer">Redis documentation</a> says:</p>
<blockquote>
<p>When using the Sentinel support you need to specify a list of sentinels to connect to. The list does not need to enumerate all your Sentinel instances, but a few so that if one is down the client will try the next one. The client is able to remember the last Sentinel that was able to reply correctly and will use it for the next requests.</p>
</blockquote>
<p>So the idea is to give it what you have; if you scale up the Redis pods, you don't need to re-configure Sidekiq (or Rails, if you're using Redis for caching).</p>
<h2>Combining all together</h2>
<p>Now you just need a way to fetch the IP addresses from the headless service in Ruby, and configure Redis client sentinels.</p>
<p>Fortunately, since Ruby 2.5.0, the <a href="https://ruby-doc.org/stdlib-2.5.1/libdoc/resolv/rdoc/Resolv.html" rel="nofollow noreferrer">Resolv class</a> is available and can do that for you.</p>
<pre><code>irb(main):007:0> Resolv.getaddresses "redis-headless"
=> ["172.16.105.95", "172.16.105.194", "172.16.9.197"]
</code></pre>
<p>So that you could do:</p>
<pre class="lang-rb prettyprint-override"><code>Sidekiq.configure_server do |config|
config.redis = {
# This `host` parameter is used by the Redis gem with the Redis command
# `get-master-addr-by-name` (See https://redis.io/topics/sentinel#obtaining-the-address-of-the-current-master)
# in order to retrieve the current Redis master IP address.
host: "mymaster",
sentinels: Resolv.getaddresses('redis-headless').map do |address|
{ host: address, port: 26379 }
end
}
end
</code></pre>
<p>That will create an Array of Hashes with the IP address as <code>host:</code> and 26379 as the <code>port:</code>.</p>
|
<p>I want to have a cron job inside a specific namespace which will be able to delete all of that namespace's pods. I tried the following:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: restart
namespace: foo
spec:
concurrencyPolicy: Forbid
schedule: "*/1 * * * *"
jobTemplate:
spec:
backoffLimit: 2
activeDeadlineSeconds: 600
template:
spec:
restartPolicy: Never
containers:
- name: kubectl
image: bitnami/kubectl:1.22.3
command:
- 'kubectl'
- 'delete'
- '--all'
- 'pods'
- '--namespace=foo'
</code></pre>
<p>and I get an error. The pods I want to delete from the namespace <code>foo</code> belong to a daemonset. Any idea how to solve it?</p>
<p>This is the error:</p>
<blockquote>
<p>Error from server (Forbidden): pods is forbidden: User
"system:serviceaccount:foo:default" cannot list resource "pods" in API
group "" in the namespace "foo"</p>
<p>Stream closed EOF for foo/restart-27415740--1-495xg (kubectl)</p>
</blockquote>
| <p>Your pod is using the credentials of the service account it runs as. If you haven't specified a service account name, it will use the <code>default</code> one, which is <code>system:serviceaccount:foo:default</code> in your case.</p>
<p>It's generally a bad idea to grant additional rights to the default sa, and by default it doesn't have any special privileges, so you need to create an additional service account, grant it the right to delete pods, and then configure your cronjob to use it.</p>
<p>Let's create an sa named <code>pod-exterminator</code>, since it will <em>exterminate</em> pods:</p>
<pre><code>kubectl create sa pod-exterminator
</code></pre>
<p>We need to create a <code>Role</code> which grants deletion rights:</p>
<pre><code>kubectl create role pod-exterminator --verb=delete,list --resource=pods
</code></pre>
<p>We grant <code>list</code> rights too, because without them the <code>--all</code> switch won't work and you would have to specify all pod names yourself.</p>
<p>And then a <code>RoleBinding</code> to grant our sa these rights (<code>foo</code> is your namespace name):</p>
<pre><code> kubectl create rolebinding --serviceaccount foo:pod-exterminator \
--role pod-exterminator pod-exterminator
</code></pre>
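<p>If you prefer manifests over imperative commands, the same Role and RoleBinding can be sketched declaratively (namespace <code>foo</code> as above):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exterminator
  namespace: foo
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exterminator
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-exterminator
subjects:
- kind: ServiceAccount
  name: pod-exterminator
  namespace: foo
```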
<p>Now you can specify <code>serviceAccountName</code> in your CronJob spec:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: restart
namespace: foo
spec:
concurrencyPolicy: Forbid
schedule: "*/1 * * * *"
jobTemplate:
spec:
backoffLimit: 2
activeDeadlineSeconds: 600
template:
spec:
serviceAccountName: pod-exterminator
restartPolicy: Never
containers:
- name: kubectl
image: bitnami/kubectl:1.22.3
command:
- 'kubectl'
- 'delete'
- '--all'
- 'pods'
- '--namespace=foo'
</code></pre>
<p>See also:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a></li>
</ul>
|
<p>How can I disconnect from a Kubernetes cluster in Git Bash? I connected once and I can't "disconnect" or make the cluster name go away.</p>
<p><a href="https://i.stack.imgur.com/yVI20.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yVI20.png" alt="enter image description here" /></a></p>
| <p>Thanks a lot. What I really needed was to run the command:</p>
<pre><code>kubectl config unset current-context
</code></pre>
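<p>If you also want to drop the cached cluster, context, and user entries from your kubeconfig, a sketch (the bracketed names are placeholders for your own entries):</p>

```shell
kubectl config get-contexts                  # list what is stored
kubectl config delete-context <context-name>
kubectl config delete-cluster <cluster-name>
kubectl config unset users.<user-name>
```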
|
<p>I need to persist the heap dump when the java process gets OOM and the pod is restarted.</p>
<p>I have following added in the jvm args</p>
<pre><code>-XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/dumps
</code></pre>
<p>...and emptydir is mounted on the same path.</p>
<p>But the issue is if the pod gets restarted and if it gets scheduled on a different node, then we are losing the heap dump. How do I persist the heap dump even if the pod is scheduled to a different node?</p>
<p>We are using AWS EKS and we are having more than 1 replica for the pod.</p>
<p>Could anyone help with this, please?</p>
| <p>You will have to persist the heap dumps on a shared network location between the pods. To achieve this, you will need to provide persistent volume claims; in EKS, this can be done using an Elastic File System mounted across availability zones. You can start learning about it by reading this <a href="https://www.eksworkshop.com/beginner/190_efs/efs-csi-driver/" rel="nofollow noreferrer">guide</a> about EFS-based PVCs.</p>
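<p>A sketch of such a claim; it assumes the EFS CSI driver is installed and that a storage class named <code>efs-sc</code> exists (both are assumptions about your cluster):</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heap-dumps
spec:
  accessModes:
  - ReadWriteMany        # EFS supports simultaneous mounts from several nodes
  storageClassName: efs-sc
  resources:
    requests:
      storage: 1Gi       # EFS ignores the size, but the field is required
```

<p>Mount this claim at <code>/opt/dumps</code> in place of the emptyDir, so dumps survive rescheduling to another node.</p>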
|
<p>I'm following this tutorial on Microservices
<a href="https://www.youtube.com/watch?v=DgVjEo3OGBI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=DgVjEo3OGBI</a></p>
<p>At some point, I deploy a SQL Server image in Kubernetes, using this Yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mssql-depl
spec:
replicas: 1
selector:
matchLabels:
app: mssql
template:
metadata:
labels:
app: mssql
spec:
containers:
- name: mssql
image: mcr.microsoft.com/mssql/server:2019-latest
ports:
- containerPort: 1433
env:
- name: MSSQL_PID
value: "Express"
- name: ACCEPT_EULA
value: "Y"
- name: SA_PASSWORD
valueFrom:
secretKeyRef:
name: mssql5
key: SA_PASSWORD
volumeMounts:
- mountPath: /var/opt/mssql/data
name: mssqldb
volumes:
- name: mssqldb
persistentVolumeClaim:
claimName: mssql-claim
---
apiVersion: v1
kind: Service
metadata:
name: mssql-clusterip-srv
spec:
type: ClusterIP
selector:
app: mssql
ports:
- name: mssql
protocol: TCP
port: 1433
targetPort: 1433
---
apiVersion: v1
kind: Service
metadata:
name: mssql-loadbalancer
spec:
type: LoadBalancer
selector:
app: mssql
ports:
- protocol: TCP
port: 1433
targetPort: 1433
</code></pre>
<p>And the PVC.yaml:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mssql-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 200Mi
</code></pre>
<p>I then create a secret in Kubernets, holding the password for the SQL Server sa account, and this secret is then used in the Yaml file.</p>
<pre><code>kubectl create secret generic mssql5 --from-literal=SA_PASSWORD="MyC0m9l&xPassw0rd"
</code></pre>
<p>It should then be possible to connect directly to the SQL Server container on localhost port 1433, using the sa account and the password.</p>
<p>However, I get the "Login failed for user SA" error when trying to connect. I've tried everything: changing SA_PASSWORD to MSSQL_SA_PASSWORD, changing the complexity of the password, enabling login for the sa user in SQL Server (which was disabled before), and googling like I've never googled before. The TCP/IP setting is enabled in SQL Server Configuration Manager. I cannot get access. Can anyone shed some light on this issue?</p>
| <p><a href="https://i.stack.imgur.com/jt0c4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jt0c4.png" alt="enter image description here" /></a>
If you turn off that service, then it will work.</p>
|
<p>Is there a way to prevent a Pod from deploying onto Kubernetes if it does not have memory resource requests & limits set?</p>
| <p>Yes, you can apply <a href="https://kubernetes.io/docs/concepts/policy/limit-range/" rel="nofollow noreferrer">Limit Ranges</a>. See e.g. <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/" rel="nofollow noreferrer">Configure Minimum and Maximum CPU Constraints for a Namespace</a> for an example for CPU resources, but it can be applied for e.g. memory and storage as well.</p>
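<p>A minimal sketch of a LimitRange for memory (all values are illustrative):</p>

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - type: Container
    min:
      memory: 64Mi      # pods requesting less than this are rejected
    max:
      memory: 512Mi     # pods with a higher limit are rejected
    defaultRequest:
      memory: 128Mi     # applied when a container omits its request
    default:
      memory: 256Mi     # applied when a container omits its limit
```

<p>Note that <code>default</code>/<code>defaultRequest</code> fill in missing values rather than rejecting the pod, while <code>min</code>/<code>max</code> reject pods whose values fall outside the range.</p>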
|
<p>I want to allow a ServiceAccount in namespace A to access a resource in namespace B.
To achieve this I connect the ServiceAccount to a ClusterRole via a ClusterRoleBinding.
<a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">The documentation</a> says I can "use a ClusterRole to [1.] define permissions on namespaced resources and be granted within individual namespace(s)"</p>
<p>But looking through the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#policyrule-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer">K8s documentation</a> I can't find a way how to create a ClusterRole with namespaced resources. How can I achieve this?</p>
| <p><code>...how to create a ClusterRole with namespaced resources...</code></p>
<p>Read further <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#clusterrole-example" rel="nofollow noreferrer">down</a> a bit:</p>
<blockquote>
<p>A ClusterRole can be used to grant the same permissions as a Role.
Because ClusterRoles are cluster-scoped. You can also use them to
grant access to:</p>
<p>...</p>
<ul>
<li>namespaced resources (like Pods), <strong>across all namespaces</strong></li>
</ul>
</blockquote>
<p>A <code>ClusterRole</code> won't help you restrict access to a single namespaced object. You can however use a <code>RoleBinding</code> that references a <code>ClusterRole</code> to restrict access to objects in the namespace of the RoleBinding.</p>
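<p>A sketch of the combination: a ClusterRole defined once, bound into a single namespace with a RoleBinding (all names here are illustrative):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader          # not namespaced; defined once for the cluster
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: team-b            # the binding restricts the grant to this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: team-a
```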
|
<p>I have created a simple Springboot application which runs on my localhost port number 9091 which returns "Hello world!!!" when http://localhost:9091/helloWorld url is invoked.</p>
<p>Below are the code snippets of my Springboot main class and controller class</p>
<pre><code>@SpringBootApplication
public class HelloWorldApplication {
public static void main(String[] args) {
SpringApplication.run(HelloWorldApplication.class, args);
}
}
@RestController
public class HelloWorldController {
@GetMapping(value="/helloWorld")
public String helloWorld() {
return "Hello world!!!!";
}
}
</code></pre>
<p>Below are the dependencies in my pom.xml</p>
<pre><code> <dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.springdoc/springdoc-openapi-ui -->
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-ui</artifactId>
<version>1.6.3</version>
</dependency>
</code></pre>
<p>I can also access the swagger ui of my Springboot project using the following url http://localhost:9091/swagger-ui/index.html</p>
<p><a href="https://i.stack.imgur.com/UM55G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UM55G.png" alt="Swagger UI Accessed on Localhost" /></a></p>
<p>I have deployed this Springboot application into my k8s cluster using the below yaml file configuration</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
spec:
replicas: 1
selector:
matchLabels:
app: hello-world
template:
metadata:
labels:
app: hello-world
spec:
containers:
- name: hello-world
image: moviepopcorn/hello_world:0.0.1
ports:
- containerPort: 9091
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
spec:
type: NodePort
selector:
app: hello-world
ports:
- port: 9091
targetPort: 9091
nodePort: 30001
</code></pre>
<p>And I could also access the helloWorld endpoint with the following url <code>&lt;K8S_MASTER_NODE_IP&gt;:&lt;NODE_PORT (30001)&gt;</code>, as you can see below:</p>
<p><a href="https://i.stack.imgur.com/Yql7d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yql7d.png" alt="Node port service accessed on a browser" /></a></p>
<p>But when I try to access the swagger-ui of my Springboot using the following url <a href="http://192.168.254.94:30001/swagger-ui/index.html" rel="nofollow noreferrer">http://192.168.254.94:30001/swagger-ui/index.html</a> I see the below Whilelabel Error Page</p>
<p><a href="https://i.stack.imgur.com/QEAeq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QEAeq.png" alt="Whitelabel Error Page" /></a></p>
<p>Could you please help me understand on what is causing this Whitelabel Error Page issue when I access my springboot app using my cluster master node ip?</p>
| <p>I found the solution for this. I just added the following dependency to the pom.xml:</p>
<pre><code> <dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-webflux-ui</artifactId>
<version>1.6.3</version>
</dependency>
</code></pre>
|
<p>I am following the <code>Installation Instructions</code> from <a href="https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server" rel="nofollow noreferrer">https://argoproj.github.io/argo-cd/getting_started/#3-access-the-argo-cd-api-server</a>
and even though the service type has been changed to <code>LoadBalancer</code>, I cannot manage to log in.</p>
<p>The information I have is:</p>
<pre><code>$ oc describe svc argocd-server
Name: argocd-server
Namespace: argocd
Labels: app.kubernetes.io/component=server
app.kubernetes.io/name=argocd-server
app.kubernetes.io/part-of=argocd
Annotations: <none>
Selector: app.kubernetes.io/name=argocd-server
Type: LoadBalancer
IP: 172.30.70.178
LoadBalancer Ingress: a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30942/TCP
Endpoints: 10.128.3.91:8080
Port: https 443/TCP
TargetPort: 8080/TCP
NodePort: https 30734/TCP
Endpoints: 10.128.3.91:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>If I do:</p>
<pre><code>$ oc login https://a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
The server is using a certificate that does not match its hostname: x509: certificate is valid for localhost, argocd-server, argocd-server.argocd, argocd-server.argocd.svc, argocd-server.argocd.svc.cluster.local, not a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
error: Seems you passed an HTML page (console?) instead of server URL.
Verify provided address and try again.
</code></pre>
| <p>If you wish to expose an ArgoCD server via ingress, you can disable TLS by patching the argocd-server deployment:</p>
<p>no-tls.yaml</p>
<hr />
<pre><code>spec:
  template:
    spec:
      containers:
        - name: argocd-server
          command:
            - argocd-server
            - --insecure
</code></pre>
<pre><code>kubectl patch deployment -n argocd argocd-server --patch-file no-tls.yaml
</code></pre>
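<p>Note that <code>oc login</code> in the question was pointed at the ArgoCD service, but <code>oc login</code> expects an OpenShift API server, which is why it reports receiving an HTML page (the ArgoCD service serves the web UI). A sketch of logging in with the <code>argocd</code> CLI instead, assuming a recent ArgoCD version (the initial-admin Secret exists on v2+) and the load-balancer hostname from the question:</p>

```shell
# Fetch the auto-generated admin password (stored base64-encoded in a Secret)
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d; echo

# Log in through the load balancer; --insecure skips certificate verification
argocd login a553569222264478ab2xx1f60d88848a-642416295.eu-west-1.elb.amazonaws.com \
  --username admin --insecure
```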
|
<p>I am developing CRDs for Kubernetes, using VS Code as an IDE, and want to provide autocompletion and IntelliSense in the IDE.</p>
<p>VS Code needs a JSON schema to do so, and I have a huge number of CRDs to support, so I want an easy way to convert CRDs to JSON schema.</p>
| <p>You can export the swagger definition (including your CRDs) of your Kubernetes server and then generate the json schema from the swagger export.</p>
<p>Create a proxy to your API server and export the swagger</p>
<pre><code>kubectl proxy --port=8080
curl localhost:8080/openapi/v2 > k8s-swagger.json
</code></pre>
<p>Using <a href="https://github.com/instrumenta/openapi2jsonschema" rel="nofollow noreferrer">openapi2jsonschema</a> generate the json schemas</p>
<pre><code>openapi2jsonschema -o "schemas" --kubernetes --stand-alone k8s-swagger.json
</code></pre>
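<p>For a single CRD you can also pull the schema straight out of the CRD object without the full swagger export. A sketch, where <code>foos.example.com</code> is a hypothetical CRD name (and <code>openapi2jsonschema</code> itself can be installed with <code>pip install openapi2jsonschema</code>):</p>

```shell
# Extract the embedded OpenAPI v3 schema of the first version of one CRD
kubectl get crd foos.example.com -o json \
  | python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin)["spec"]["versions"][0]["schema"]["openAPIV3Schema"], indent=2))' \
  > foo-schema.json
```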
|
<p>In Kubernetes we can pass the host IP to a pod using an environment variable:</p>
<pre><code>env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
</code></pre>
<p>Similarly, how do I get the host name instead of the host IP?</p>
| <pre><code>env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
</code></pre>
<p>See: <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api</a></p>
|
<p>I would like to block <code>/public/configs</code> in my k8s ingress.</p>
<p>My current settings don't work.</p>
<pre><code>- host: example.com
  http:
    paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-myapp
            port:
              number: 80
      - path: /public/configs
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-myapp
            port:
              number: 88 # fake port
</code></pre>
<p>Is there any better (easy) way?</p>
| <p>1. Create a dummy service and route the path to it:</p>
<pre><code>- path: /public/configs
  pathType: ImplementationSpecific
  backend:
    service:
      name: dummy-service
      port:
        number: 80
</code></pre>
<p>2. Use <code>server-snippets</code> as below to return 403 or any error you want:</p>
<p>a) for the Kubernetes ingress-nginx controller:</p>
<pre><code>annotations:
  nginx.ingress.kubernetes.io/server-snippet: |
    location ~* "^/public/configs" {
      deny all;
      return 403;
    }
</code></pre>
<p>b) for the NGINX Inc. ingress controller:</p>
<pre><code>annotations:
  nginx.org/server-snippet: |
    location ~* "^/public/configs" {
      deny all;
      return 403;
    }
</code></pre>
|
<p>In my Kubernetes deployment file I have annotations as below:</p>
<pre><code>spec:
  template:
    metadata:
      annotations:
        prometheus.io/port: "24231"
        prometheus.io/scrape: "true"
</code></pre>
<p>But when I apply the deployment file, they get replaced with:</p>
<pre><code>spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
        version: 4.0.5-164
</code></pre>
<p>I am not sure why my annotations are not showing up. They are getting replaced with data from the metadata section shown below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
    appid: xxxxx-xxxx-xxxx
    groupid: DEFAULT
    version: 4.0.5-164
</code></pre>
<p>K8s version 1.18</p>
| <p>You are showing different parts of the deployment manifest here, so I think you are confusing the different metadata sections in the same file.</p>
<p>The first section, <code>.metadata</code>, is applied to the deployment itself.</p>
<p>The <code>.spec.template.metadata</code> section is applied to the pods that are created by the deployment, and those annotations will not appear in the top-level <code>.metadata</code> section of the deployment.</p>
<p><strong>summary</strong>:</p>
<p>If you want to specify labels/annotations for the deployment, use the <code>.metadata</code> section.</p>
<p>If you want to specify labels/annotations that are applied to your pods, use the <code>.spec.template.metadata</code> section.</p>
<p>If you want to specify labels/annotations for both, specify them in both places.</p>
<p>example:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: MYAPP
  # labels/annotations that are applied to the deployment
  labels:
    app: MYAPP
    appid: xxxxx-xxxx-xxxx
    groupid: DEFAULT
    version: 4.0.5-164
  annotations:
    whatever: isapplied
spec:
  ...
  template:
    metadata:
      # labels/annotations that are applied to the pods
      labels:
        app: MYAPP
        appid: xxxxx-xxxx-xxxx
        groupid: DEFAULT
        version: 4.0.5-164
      annotations:
        prometheus.io/port: "24231"
        prometheus.io/scrape: "true"
</code></pre>
|
<p>Test 1:
I created an ingress with a cert-manager annotation.
This one fails with the following nginx ingress-controller error: "admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host and path already defined"</p>
<p>Test 2:
I created the same ingress but without the cert-manager annotation.
This one succeeds.</p>
<p>Nginx release</p>
<pre><code>$ kubectl exec ngingress-ingress-nginx-controller-7f4db9965c-ht8t9 -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.1.0
Build: cacbee86b6ccc45bde8ffc184521bed3022e7dee
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
-------------------------------------------------------------------------------
</code></pre>
<p>Cert-manager release</p>
<pre><code>kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.0/cert-manager.yaml
</code></pre>
<p>Details of Test 1</p>
<pre><code># cat test-ingress-cert.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sso-production
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  namespace: prod
spec:
  tls:
    - hosts:
        - sso.mydomain.com
      secretName: quickstart-example-tls
  rules:
    - host: sso.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sso
                port:
                  number: 8080
# kubectl create -f test-ingress-cert.yaml
Error from server (BadRequest): error when creating "test-ingress-cert.yaml":
admission webhook "validate.nginx.ingress.kubernetes.io" denied the request:
host "sso.mydomain.com" and path "/" is already defined in ingress prod/sso-echopen-tls
# kubectl get ingress --all-namespaces
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
prod gateway-echopen-tls <none> gateway.mydomain.com 152.228.169.166 80, 443 7d
prod hapi-echopen-tls <none> hapi.mydomain.com 152.228.169.166 80, 443 9d
prod reader-echopen-tls <none> reader.mydomain.com 152.228.169.166 80, 443 7d
# kubectl get issuers.cert-manager.io -n prod
NAME READY AGE
letsencrypt-staging True 87m
# kubectl get all -n cert-manager
NAME READY STATUS RESTARTS AGE
pod/cert-manager-77fd97f598-c54px 1/1 Running 0 138m
pod/cert-manager-cainjector-7974c84449-vx54h 1/1 Running 0 138m
pod/cert-manager-webhook-5f4b965fbd-nccw5 1/1 Running 0 138m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cert-manager ClusterIP 10.3.44.182 <none> 9402/TCP 138m
service/cert-manager-webhook ClusterIP 10.3.21.35 <none> 443/TCP 138m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cert-manager 1/1 1 1 138m
deployment.apps/cert-manager-cainjector 1/1 1 1 138m
deployment.apps/cert-manager-webhook 1/1 1 1 138m
NAME DESIRED CURRENT READY AGE
replicaset.apps/cert-manager-77fd97f598 1 1 1 138m
replicaset.apps/cert-manager-cainjector-7974c84449 1 1 1 138m
replicaset.apps/cert-manager-webhook-5f4b965fbd 1 1 1 138m
# kubectl get all -n default
NAME READY STATUS RESTARTS AGE
pod/ngingress-ingress-nginx-controller-7f4db9965c-ht8t9 1/1 Running 0 9d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.3.0.1 <none> 443/TCP 89d
service/ngingress-ingress-nginx-controller LoadBalancer 10.3.34.184 152.228.169.166 80:30370/TCP,443:31584/TCP 23d
service/ngingress-ingress-nginx-controller-admission ClusterIP 10.3.82.29 <none> 443/TCP 23d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ngingress-ingress-nginx-controller 1/1 1 1 23d
NAME DESIRED CURRENT READY AGE
replicaset.apps/ngingress-ingress-nginx-controller-764c5b9596 0 0 0 10d
replicaset.apps/ngingress-ingress-nginx-controller-78fdb596f9 0 0 0 9d
replicaset.apps/ngingress-ingress-nginx-controller-7f4db9965c 1 1 1 23d
replicaset.apps/ngingress-ingress-nginx-controller-88fb6466f 0 0 0 9d
# kubectl logs cert-manager-webhook-5f4b965fbd-nccw5 -n cert-manager
W1220 10:28:37.440085 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W1220 10:28:37.443639 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1220 10:28:37.443841 1 webhook.go:70] cert-manager/webhook "msg"="using dynamic certificate generating using CA stored in Secret resource" "secret_name"="cert-manager-webhook-ca" "secret_namespace"="cert-manager"
I1220 10:28:37.444238 1 server.go:140] cert-manager/webhook "msg"="listening for insecure healthz connections" "address"=":6080"
I1220 10:28:37.444330 1 server.go:171] cert-manager/webhook "msg"="listening for secure connections" "address"=":10250"
I1220 10:28:37.444369 1 server.go:203] cert-manager/webhook "msg"="registered pprof handlers"
I1220 10:28:38.507011 1 dynamic_source.go:273] cert-manager/webhook "msg"="Updated serving TLS certificate"
# kubectl logs cert-manager-77fd97f598-c54px -n cert-manager
I1220 10:28:35.975050 1 start.go:75] cert-manager "msg"="starting controller" "git-commit"="49914a057b39c887be0974c4657c095bd7724bc7" "version"="v1.6.0"
W1220 10:28:35.975206 1 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1220 10:28:35.977657 1 controller.go:268] cert-manager/controller/build-context "msg"="configured acme dns01 nameservers" "nameservers"=["10.3.0.10:53"]
I1220 10:28:35.978527 1 controller.go:85] cert-manager/controller "msg"="enabled controllers: [certificaterequests-approver certificaterequests-issuer-acme certificaterequests-issuer-ca certificaterequests-issuer-selfsigned certificaterequests-issuer-vault certificaterequests-issuer-venafi certificates-issuing certificates-key-manager certificates-metrics certificates-readiness certificates-request-manager certificates-revision-manager certificates-trigger challenges clusterissuers ingress-shim issuers orders]"
I1220 10:28:35.978792 1 controller.go:115] cert-manager/controller "msg"="starting leader election"
I1220 10:28:35.979117 1 controller.go:105] cert-manager/controller "msg"="starting metrics server" "address"={"IP":"::","Port":9402,"Zone":""}
I1220 10:28:35.979810 1 leaderelection.go:248] attempting to acquire leader lease kube-system/cert-manager-controller...
I1220 10:29:40.695753 1 leaderelection.go:258] successfully acquired lease kube-system/cert-manager-controller
I1220 10:29:40.696143 1 controller.go:163] cert-manager/controller "msg"="not starting controller as it's disabled" "controller"="certificatesigningrequests-issuer-vault"
I1220 10:29:40.696185 1 controller.go:163] cert-manager/controller "msg"="not starting controller as it's disabled" "controller"="certificatesigningrequests-issuer-venafi"
I1220 10:29:40.696436 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificaterequests-approver"
I1220 10:29:40.696548 1 controller.go:163] cert-manager/controller "msg"="not starting controller as it's disabled" "controller"="certificatesigningrequests-issuer-ca"
I1220 10:29:40.696615 1 controller.go:163] cert-manager/controller "msg"="not starting controller as it's disabled" "controller"="certificatesigningrequests-issuer-selfsigned"
I1220 10:29:40.696651 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificates-readiness"
I1220 10:29:40.696658 1 controller.go:163] cert-manager/controller "msg"="not starting controller as it's disabled" "controller"="gateway-shim"
I1220 10:29:40.696693 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificates-metrics"
I1220 10:29:40.697253 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificaterequests-issuer-ca"
I1220 10:29:40.697471 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificaterequests-issuer-acme"
I1220 10:29:40.697540 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificaterequests-issuer-selfsigned"
I1220 10:29:40.697963 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificaterequests-issuer-venafi"
I1220 10:29:40.698062 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificates-trigger"
I1220 10:29:40.698111 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificates-request-manager"
I1220 10:29:41.504721 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="challenges"
I1220 10:29:41.504762 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="orders"
I1220 10:29:41.504819 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificates-issuing"
I1220 10:29:41.504853 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificates-key-manager"
I1220 10:29:41.504884 1 controller.go:163] cert-manager/controller "msg"="not starting controller as it's disabled" "controller"="certificatesigningrequests-issuer-acme"
I1220 10:29:41.504942 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificates-revision-manager"
I1220 10:29:41.505066 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="clusterissuers"
I1220 10:29:41.505141 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="ingress-shim"
I1220 10:29:41.505220 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="issuers"
I1220 10:29:41.505467 1 controller.go:186] cert-manager/controller "msg"="starting controller" "controller"="certificaterequests-issuer-vault"
I1220 12:31:42.402384 1 setup.go:219] cert-manager/controller/issuers "msg"="ACME server URL host and ACME private key registration host differ. Re-checking ACME account registration" "related_resource_kind"="Secret" "related_resource_name"="letsencrypt-staging" "related_resource_namespace"="prod" "resource_kind"="Issuer" "resource_name"="letsencrypt-staging" "resource_namespace"="prod" "resource_version"="v1"
I1220 12:31:43.291565 1 setup.go:309] cert-manager/controller/issuers "msg"="verified existing registration with ACME server" "related_resource_kind"="Secret" "related_resource_name"="letsencrypt-staging" "related_resource_namespace"="prod" "resource_kind"="Issuer" "resource_name"="letsencrypt-staging" "resource_namespace"="prod" "resource_version"="v1"
I1220 12:31:43.291617 1 conditions.go:95] Setting lastTransitionTime for Issuer "letsencrypt-staging" condition "Ready" to 2021-12-20 12:31:43.291609136 +0000 UTC m=+7387.349555559
I1220 12:31:43.324585 1 setup.go:202] cert-manager/controller/issuers "msg"="skipping re-verifying ACME account as cached registration details look sufficient" "related_resource_kind"="Secret" "related_resource_name"="letsencrypt-staging" "related_resource_namespace"="prod" "resource_kind"="Issuer" "resource_name"="letsencrypt-staging" "resource_namespace"="prod" "resource_version"="v1"
# kubectl logs ngingress-ingress-nginx-controller-7f4db9965c-ht8t9 -n default -f
I1220 12:53:53.630296 7 status.go:300] "updating Ingress status" namespace="prod" ingress="gateway-echopen-tls" currentValue=[{IP:152.228.169.166 Hostname: Ports:[]}] newValue=[{IP:152.228.169.166 Hostname: Ports:[]}]
I1220 12:53:54.736079 7 status.go:300] "updating Ingress status" namespace="prod" ingress="hapi-echopen-tls" currentValue=[{IP:152.228.169.166 Hostname: Ports:[]}] newValue=[{IP:152.228.169.166 Hostname: Ports:[]}]
I1220 12:53:54.742908 7 status.go:300] "updating Ingress status" namespace="prod" ingress="reader-echopen-tls" currentValue=[{IP:152.228.169.166 Hostname: Ports:[]}] newValue=[{IP:152.228.169.166 Hostname: Ports:[]}]
E1220 12:54:23.467892 7 main.go:90] "invalid ingress configuration" err="host \"sso.mydomain.com\" and path \"/\" is already defined in ingress prod/sso-echopen-tls" ingress="sso-production/prod"
</code></pre>
<p>Details of Test 2</p>
<p>The same ingress but without the cert-manager annotation succeeds!</p>
<pre><code># cat ingress-sso-echopen-tls.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 8m
  name: sso-echopen-tls
  namespace: prod
spec:
  tls:
    - hosts:
        - sso.mydomain.com
      secretName: ingress-echopen-secret-tls
  rules:
    - host: sso.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sso
                port:
                  number: 8080
# kubectl create -f ingress-sso-echopen-tls.yaml
ingress.networking.k8s.io/sso-echopen-tls created
# kubectl get ingress --all-namespaces
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
prod gateway-echopen-tls <none> gateway.mydomain.com 152.228.169.166 80, 443 7d
prod hapi-echopen-tls <none> hapi.mydomain.com 152.228.169.166 80, 443 9d
prod reader-echopen-tls <none> reader.mydomain.com 152.228.169.166 80, 443 7d
prod sso-echopen-tls <none> sso.mydomain.com 152.228.169.166 80, 443 26s
</code></pre>
| <p>An ingress with the same host and path probably already exists in the cluster. You can find and delete it with the following commands:</p>
<p><code>kubectl get ingress --all-namespaces</code></p>
<p>to list the installed ingresses,</p>
<p><code>kubectl delete ingress ingress-name -n ingress-namespace</code></p>
<p>to delete the troublesome ingress,</p>
<p>then re-run your command.</p>
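<p>To spot the duplicate quickly, you can list every host/path pair across all namespaces. A sketch that assumes <code>jq</code> is available:</p>

```shell
# Print "namespace/name host+path" for every rule of every ingress
kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[] | .metadata.namespace + "/" + .metadata.name + " " + (.spec.rules[] | .host + .http.paths[].path)'
```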
|
<p>container "abc-job" in pod "abc-job-manual-h9k-vbbzw" is waiting to start: CreateContainerConfigError</p>
<p>Error: container has runAsNonRoot and image will run as root (pod: "abc-job-manual-h9k-xyz-ns-nonprod(e38ece94-b411-4d70-bc29-3711f36cfe45)", container: abc-cron-job)</p>
<p>The below image is from the pod details in the cluster:
<a href="https://i.stack.imgur.com/CU5AP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CU5AP.png" alt="pod details" /></a></p>
<p>below is my yaml file for cronjob</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: abc-cron-job
spec:
  schedule: "10 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: abc-cron-job
              image: docker.repo1.jkl.com/xyz-services/abc/REPLACE_ME
              imagePullPolicy: Always
              env:
                - name: spring-profile
                  valueFrom:
                    configMapKeyRef:
                      name: spring-profile
                      key: ENV
          restartPolicy: OnFailure
</code></pre>
| <p>The <code>securityContext</code> was missing in the YAML file; I added it and now it is working fine. Below is the updated YAML file:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: abc-cron-job
spec:
  schedule: "10 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            runAsGroup: 3000
            runAsUser: 3000
          containers:
            - name: abc-cron-job
              image: docker.repo1.xyz.com/abc-services/abc-application/REPLACE_ME
              imagePullPolicy: Always
              env:
                - name: spring-profile
                  valueFrom:
                    configMapKeyRef:
                      name: spring-profile
                      key: ENV
              securityContext:
                privileged: false
          restartPolicy: OnFailure
</code></pre>
|
<p>I want to run a loop over the pods in a specific namespace; the trick is to do it in a CronJob. <em>Is it possible inline?</em></p>
<p><code>kubectl get pods -n foo</code></p>
<p>The trick is that after getting the list of pods, I need to loop over them and delete each one, one by one, with a 15-second pause between deletions. Is it possible to do this in a CronJob?</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: foo
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-exterminator
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.22.3
              command:
                - 'kubectl'
                - 'get'
                - 'pods'
                - '--namespace=foo'
</code></pre>
<p>The above manifest works as is, but when you want to run a loop it gets complicated. How can I do it inline?</p>
| <p>In your case you can use something like this:</p>
| <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: foo
spec:
  concurrencyPolicy: Forbid
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: pod-exterminator
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.22.3
              command:
                - /bin/sh
                - -c
                - kubectl get pods -o name | while read -r POD; do kubectl delete "$POD"; sleep 15; done
</code></pre>
<p>However, do you really need to wait 15 seconds? If you want to be sure that a pod is gone before deleting the next one, you can use <code>--wait=true</code>, so the command becomes:</p>
<pre><code>kubectl get pods -o name | while read -r POD; do kubectl delete "$POD" --wait; done
</code></pre>
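<p>For completeness, the <code>pod-exterminator</code> service account referenced above needs permission to list and delete pods. A sketch of the RBAC objects this would require, assuming the service account itself already exists in namespace <code>foo</code>:</p>

```yaml
# Role granting the verbs the in-pod kubectl needs
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exterminator
  namespace: foo
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
# Bind the Role to the service account used by the CronJob
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exterminator
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-exterminator
subjects:
  - kind: ServiceAccount
    name: pod-exterminator
    namespace: foo
```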
|
<p>Is there any <code>kubectl</code> command to see how much RAM (e.g. in GB) the entire cluster has?</p>
<p>Basically I would like to get the sum of all the RAM of all the nodes in the cluster.</p>
<p>The command would be useful to understand the "size" of the Kubernetes cluster.</p>
| <p>You can install the <code>view-utilization</code> kubectl plugin with:</p>
<pre><code>kubectl krew install view-utilization
</code></pre>
<p>Then you can run:</p>
<pre><code>kubectl view-utilization -h
</code></pre>
<p>...and you should look for values under "Alloc" columns:</p>
<pre><code>Resource Req %R Lim %L Alloc Sched Free
CPU 3.7 6% 4.3 7% 60 57 56
Memory 5.4G 2% 7.9G 3% 237G 232G 229G
</code></pre>
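<p>If you prefer not to install a plugin, a rough equivalent can be computed from the node objects directly. A sketch that assumes every node reports <code>.status.allocatable.memory</code> with a <code>Ki</code> suffix (the usual case):</p>

```shell
# Sum allocatable memory across all nodes and print it in GiB
kubectl get nodes -o jsonpath='{range .items[*]}{.status.allocatable.memory}{"\n"}{end}' \
  | awk '{ sub(/Ki$/, "", $1); total += $1 } END { printf "%.1f GiB\n", total / 1024 / 1024 }'
```

Use <code>.status.capacity.memory</code> instead of <code>allocatable</code> if you want the raw hardware total rather than what is schedulable.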
|
<p>I am new to K8s and trying to create a Helm chart to set up my application.</p>
<p>I want a frictionless experience for users setting up the application without much manual intervention.</p>
<p>Creating the Helm chart, I was pleased with the provided templating functionality, but one essential thing is missing: creating passwords.</p>
<p>I don't want the user to have to create the passwords for my API to talk to Redis etc.</p>
<p>Setting up Vault is also one of the more difficult parts, as its key has to be created initially, it then needs to be unsealed, and resources like the userpass auth method and other engines have to be created.</p>
<p>For a docker-compose setup of the same app I have an "install container" that generates the passwords, creates resources in Vault with its API, etc.</p>
<p>Is there another possibility using Kubernetes/Helm?</p>
<p>Thanks</p>
| <p>You could try <a href="https://github.com/bitnami-labs/sealed-secrets" rel="nofollow noreferrer">Sealed Secrets</a>. It stores secrets encrypted with asymmetric keys, so the secrets can only be restored by holders of the proper private key.</p>
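<p>If you only need generated passwords (without Vault), Helm itself can create them. A sketch of a chart template that generates a random password on first install and keeps it stable across upgrades via the <code>lookup</code> function; the secret name <code>redis-auth</code> and the key name are assumptions:</p>

```yaml
# templates/redis-secret.yaml (sketch)
{{- $existing := lookup "v1" "Secret" .Release.Namespace "redis-auth" }}
apiVersion: v1
kind: Secret
metadata:
  name: redis-auth
type: Opaque
data:
  {{- if $existing }}
  # reuse the password generated by a previous install/upgrade
  password: {{ index $existing.data "password" }}
  {{- else }}
  password: {{ randAlphaNum 24 | b64enc | quote }}
  {{- end }}
```

Note that <code>lookup</code> returns an empty result during <code>helm template</code> and <code>--dry-run</code>, so this only behaves as described on a real install against a cluster.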
|
<p>I am using Elasticsearch version 7.9 and trying to set up Kibana on my Kubernetes cluster. I have deployed Kibana and added a NodePort service to access it from my browser, but I am getting a timeout error in the browser. The following is my service and deployment YAML:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: kube-logging
  labels:
    app: kibana1
spec:
  ports:
    - port: 5602
      targetPort: 5602
      nodePort: 32343
  type: NodePort
  selector:
    app: kibana1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana1
  namespace: kube-logging
  labels:
    app: kibana1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana1
  template:
    metadata:
      labels:
        app: kibana1
    spec:
      containers:
        - name: kibana1
          image: docker.elastic.co/kibana/kibana:8.0.0
          env:
            - name: ELASTICSEARCH_URL
              value: http://192.168.18.35:31200/
          ports:
            - containerPort: 5602
</code></pre>
<p>The pod logs in the Kubernetes dashboard look like this:</p>
<pre><code>[2022-02-16T06:48:43.717+00:00][INFO ][plugins-service] Plugin "metricsEntities" is disabled.
[2022-02-16T06:48:43.936+00:00][INFO ][http.server.Preboot] http server running at http://0.0.0.0:5601
[2022-02-16T06:48:44.002+00:00][INFO ][plugins-system.preboot] Setting up [1] plugins: [interactiveSetup]
[2022-02-16T06:48:44.006+00:00][INFO ][preboot] "interactiveSetup" plugin is holding setup: Validating Elasticsearch connection configuration…
[2022-02-16T06:48:44.080+00:00][INFO ][root] Holding setup until preboot stage is completed.
i Kibana has not been configured.
Go to http://0.0.0.0:5601/?code=950751 to get started.
</code></pre>
<p>Could anyone suggest how I can fix this?</p>
| <p>You need to extend the Kibana Docker image with the required plugins.</p>
<p>Copy the plugin file into the image and use <code>kibana-plugin</code> to install it from that file, as shown below:</p>
<pre><code>RUN /usr/share/kibana/bin/kibana-plugin install file:///kibana-xxxx-plugin.zip
</code></pre>
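<p>A sketch of the corresponding <code>Dockerfile</code>; the plugin file name is the placeholder from above:</p>

```dockerfile
# Extend the official Kibana image with a plugin baked in
FROM docker.elastic.co/kibana/kibana:8.0.0
COPY kibana-xxxx-plugin.zip /
RUN /usr/share/kibana/bin/kibana-plugin install file:///kibana-xxxx-plugin.zip
```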
|
<p>We've an application and API, running on kubernetes on Azure, using an nginx-ingress and cert-manager which automatically creates letsencrypt certificates. The connection to the application/API is encrypted with TLS1.3.</p>
<p>From an older application, running on a Win 2012 server, we want to retrieve data from the API (on k8s). This isn't successful, since TLS1.3 isn't supported on that server.</p>
<p>I'd like to set the minimum version of TLS to 1.2 on kubernetes.
How can I achieve that?</p>
<p>I've read that a tls-min-version can be configured on the kubelet, but I don't know how to apply this.</p>
<p><strong>Note:</strong> we use <code>az aks create</code> to create the k8s clusters.</p>
| <p>As your Windows server connects to the application on <strong>K8s</strong>, you have to set the <strong>TLS</strong> version at the NGINX ingress level.</p>
<p>The NGINX ingress (with cert-manager) is the point where your server connects and accesses the API, so you just have to update the TLS version of NGINX.</p>
<p>You can do this by changing the ConfigMap of the <strong>NGINX</strong> ingress controller. Also, you might need to update the certificate; there is a chance the default Let's Encrypt (CA) setup serves TLS 1.3 by default.</p>
<p>So after enabling <strong>TLS 1.2</strong> for <strong>NGINX</strong>, you might need to re-generate the cert-manager secret (certificate).</p>
<p><strong>ConfigMap for the NGINX ingress controller</strong></p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: ingress-nginx
data:
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
  ssl-protocols: "TLSv1.2 TLSv1.3"
</code></pre>
<p>The ConfigMap above enables both TLS versions on the NGINX ingress controller.</p>
|
<p>When deleting a pod manually, <code>kubectl delete</code> waits for the pod to be deleted, and one can include a <code>kubectl wait --for...</code> condition in a script to wait for the deletion.</p>
<p>I would like to perform the same wait condition but when scaling down (replicas: 0) a deployment.</p>
<p>From the deployment JSON, the available/unavailable replicas don't count the "terminating" pods, and as expected <code>kubectl wait --for=delete deployment test-dep</code> doesn't wait for pod termination but for deployment deletion.</p>
<p>So I would like to perform on my script:</p>
<pre><code>kubectl scale --replicas=0 deployment foo-bar
kubectl wait --for=deletion-of-pod-from-deployment=foo-bar
</code></pre>
<p>Is there a way to do that?</p>
<p>Remarks: I would like to have the code as generic as possible, so no hard-coding the labels from the deployment.</p>
| <p>The easiest way would be to use labels and issue <code>kubectl wait</code> based on that.</p>
<pre class="lang-text prettyprint-override"><code>kubectl wait --for delete pod --selector=<label>=<value>
</code></pre>
<p><em>but</em>, since you don't want that, you can use the script below</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash

deployment_uid=$(kubectl get deployments ${1} -o=jsonpath='{.metadata.uid}')

rs_uids=$(kubectl get rs -o=json | jq -r '.items[] | select(.metadata.ownerReferences[].uid=='\"${deployment_uid}\"') | .metadata.uid')

PODS=""
for i in $rs_uids; do
  # note the trailing space, so pod names from different replica sets stay separated
  PODS+="$(kubectl get pods -o=json | jq -r '.items[] | select(.metadata.ownerReferences[].uid=='\"$i\"') | "pod/" + .metadata.name') "
done

[ -z "${PODS// /}" ] && echo "Pods not found" || kubectl wait --for=delete --timeout=-1s ${PODS}
</code></pre>
<p>This uses the Deployment name (as the first argument) to get the ownerReferences UIDs, chaining down to the Pod names.</p>
<p>It's much more complicated, and prone to failure, than just using labels.</p>
|
<p>I have set up GKE on free trial access.</p>
<p>Here is a screenshot of the cluster:
<a href="https://i.stack.imgur.com/885zK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/885zK.png" alt="enter image description here" /></a></p>
<p>I have already set up a VM instance in GCE, so my Kubernetes cluster has fewer resources; I set it up for testing. I want to know: if I delete 1 node out of 3, what will happen?</p>
<p>My pods are running on all 3 nodes (distributed).</p>
<p>If I delete one node, will it create a new node, or will it deploy my running pods onto the other 2 nodes, making them heavily loaded?</p>
<p>How do I know it is HA, and how do scale up and scale down work?</p>
<p>Please clarify my questions.</p>
| <blockquote>
<p>If I delete one node, will it create a new node, or will it deploy my
running pods onto the other 2 nodes, making them heavily loaded?</p>
</blockquote>
<p>GKE will manage the Nodes using Node pool config.</p>
<p>If your node pool is set to 3 nodes and you manually remove 1 instance, GKE will automatically create a new node in the cluster.</p>
<p>Your pod might get moved to another node if there is spare capacity; otherwise it will go into the Pending state and wait for a new node to join the GKE cluster.</p>
<p>If you want to reduce the number of nodes in GKE, you have to reduce the minimum count in the GKE node pool.</p>
<p>If you want to test scaling up and down, you can enable autoscaling on the node pool and increase the pod count on the cluster; GKE will automatically add nodes. Make sure you have set the min and max node counts correctly in the node pool's autoscaling section.</p>
|
<p>Usually when I deploy a Simple HTTPS server in VM I do</p>
<p><strong>Create Certificate with ip</strong></p>
<pre><code>$ openssl req -new -x509 -keyout private_key.pem -out public_cert.pem -days 365 -nodes
Generating a RSA private key
..+++++
.................................+++++
writing new private key to 'private_key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IN
State or Province Name (full name) [Some-State]:Tamil Nadu
Locality Name (eg, city) []:Chennai
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Company ,Inc
Organizational Unit Name (eg, section) []: company division
Common Name (e.g. server FQDN or YOUR name) []:35.222.65.55 <----------------------- this ip should be server ip very important
Email Address []:
</code></pre>
<p><strong>Start Simple HTTPS Python Server</strong></p>
<pre><code># libraries needed:
from http.server import HTTPServer, SimpleHTTPRequestHandler
import ssl , socket
# address set
server_ip = '0.0.0.0'
server_port = 3389
# configuring HTTP -> HTTPS
httpd = HTTPServer((server_ip, server_port), SimpleHTTPRequestHandler)
httpd.socket = ssl.wrap_socket(httpd.socket, certfile='./public_cert.pem',keyfile='./private_key.pem', server_side=True)
httpd.serve_forever()
</code></pre>
<p>now this works fine for</p>
<p><strong>Curl from local</strong></p>
<pre><code>curl --cacert /Users/padmanabanpr/Downloads/public_cert.pem --cert-type PEM https://35.222.65.55:3389
</code></pre>
<p>now how to deploy the same to kubernetes cluster and access via load-balancer?</p>
<p>Assuming i have</p>
<ul>
<li>public docker nginx container with write access , python3 , and this python https server file</li>
<li>deployment yaml with nginx</li>
</ul>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: external-nginx-server
labels:
app: external-nginx-server
spec:
replicas: 1
selector:
matchLabels:
app: external-nginx-server
template:
metadata:
labels:
app: external-nginx-server
spec:
containers:
- name: external-nginx-server
image: <docker nginx public image>
ports:
- containerPort: 3389
---
kind: Service
apiVersion: v1
metadata:
name: external-nginx-service
spec:
selector:
app: external-nginx-server
ports:
- protocol: TCP
port: 443
name: https
targetPort: 3389
type: LoadBalancer
</code></pre>
| <p>To do the same in Kubernetes you need to create a Secret with the certificate in it, like this one:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Secret
apiVersion: v1
metadata:
name: my-tls-secret
data:
tls.crt: BASE64-ENCODED CERTIFICATE
tls.key: BASE64-ENCODED KEY
</code></pre>
<p>Then you need to mount it inside all the pods that require it:</p>
<pre class="lang-yaml prettyprint-override"><code># deployment.yml
volumes:
- name: my-tls
secret:
secretName: my-tls-secret
containers:
- name: external-nginx-server
image: <docker nginx public image>
volumeMounts:
- name: my-tls
# Here will appear the "tls.crt" and "tls.key", defined in the secret's data block.
# Kubernetes will take care to decode the contents and make them separate files.
mountPath: /etc/nginx/tls
</code></pre>
<p><strong><em>But this is a pain to manage manually!</em></strong> You will have to track the certificate expiration date, renew the secret, restart the pods... There is a better way.</p>
<p>You can install an ingress controller (<a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX</a>, for example) and the <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">certificate manager</a> for Kubernetes. The certificate manager will take care of issuing certificates (via Let's Encrypt or other providers), saving them as secrets, and renewing them before the expiration date.</p>
<p>An ingress controller is a centralized endpoint to your cluster. You can make it handle connections to multiple applications, just like with a normal NGINX installation. The benefit in this case is that you will not have to re-mount certificates when a new one is issued or an existing one is renewed. The ingress controller takes care of that for you.</p>
<p>The links above will lead you to the documentation, where you can find the details on how to install and use these.</p>
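<p>As a side note, you don't have to base64-encode the files by hand; <code>kubectl</code> can create the TLS secret directly from the PEM files generated earlier:</p>
<pre><code>kubectl create secret tls my-tls-secret \
    --cert=public_cert.pem --key=private_key.pem
</code></pre>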
|
<p>We have cluster with Istio and also Jenkins job to get "stable" pods, which uses this <code>kubectl</code> query:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get po -o=jsonpath="{range .items[?(@.status.containerStatuses[-1].ready==true)]}{.spec.containers[0].image}{'\\n'}{end}"
</code></pre>
<pre><code>registry/my-proj/admin:2.0.0.000123
registry/my-proj/foo:2.0.0.000123
registry/my-proj/bar:2.0.0.000123
</code></pre>
<p>This query fetches pods whose last container (the application) is ready, because we also have Istio sidecar containers. But here is the tricky thing: it looks like the array is ordered alphabetically, so if the Istio container ends up last, the pod is fetched as ready just because that last container is ready.</p>
<p>I've also tried to use <code>go-template</code>, but the best I've managed to do is:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get po -o go-template='{{range .items}}{{range .status.containerStatuses}}{{if eq .ready true }}{{end}}{{end}}{{.metadata.name}}{{println}}{{end}}
</code></pre>
<pre><code>registry/my-proj/admin:2.0.0.000123
registry/my-proj/admin:2.0.0.000123
registry/my-proj/foo:2.0.0.000123
registry/my-proj/foo:2.0.0.000123
registry/my-proj/bar:2.0.0.000123
</code></pre>
<p>It fetches pods <strong>twice</strong> when 2 containers are ready and only <strong>once</strong> when <strong>1</strong> container is ready.</p>
<p><strong>TL;DR;</strong></p>
<p>I am looking for ultimate query which can fetch pods where <strong>all containers are ready</strong>, thanks</p>
| <p>What about something like this?</p>
<pre><code>kubectl get po -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}{{range .items}}{{$pod := .}}{{range .status.containerStatuses}}{{if eq .ready false}}{{$pod.metadata.name}}{{"\n"}}{{end}}{{end}}{{end}}' | sort | uniq -u
</code></pre>
<p>What's happening here:</p>
<ol>
<li>We are getting all existing pod names, delimited by newline</li>
<li>Appending all pod names, which have at least one not ready container, delimited by newline</li>
<li>Sorting output alphabetically</li>
<li>Getting unique lines, while excluding duplicate ones.</li>
</ol>
<p>The trick is that the <code>-u</code> flag excludes all duplicated entries, so all that is left are the pods whose containers are all ready.
<code>{{ $pod := .}}</code> is used to save the outer scope so the pod name can be printed inside the inner loop. Note that both passes iterate over the same single <code>kubectl</code> response, so there is no race condition between the "get all pods" and "get not ready pods" steps.</p>
<p>I do believe that something similar can be achieved with jsonpath too, but I don't think you'll manage with kubectl alone, without the help of <code>sort</code> and <code>uniq</code>.</p>
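<p>If <code>jq</code> is available, the same filter can also be expressed in one pass. A sketch: <code>all(generator; condition)</code> tests that every container status in the pod is ready.</p>
<pre><code>kubectl get po -o json \
  | jq -r '.items[] | select(all(.status.containerStatuses[]; .ready)) | .metadata.name'
</code></pre>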
|
<p>I am looking for a (more or less scientific) document/presentation that explains why the developers of the "Kubernetes language" made the choice to fragment an application definition (for instance) into multiple YAML files instead of writing a single YAML file with all the details of the application deployment (all deployments, volumes, ...).</p>
<p>I imagine that this has something to do with reusability, maintainability and readability, but it would be nice to have a more structured argumentation (I am thinking of a paper or presentation at a conference such as KubeCon or DockerCon).</p>
<p>Thanks,</p>
<p>Abdelghani</p>
<p>First things first, Kubernetes does not have its own language. The most-used format is YAML, but you can work with JSON as well (or even XML, if it were supported). YAML is more human-readable and portable across many programming languages.</p>
<p>Kubernetes is complex because we want to do complex things. It simply makes things more manageable to split the deployment into several YAML files, just as programmers use different files for different purposes. And when a modification occurs, you only <code>kubectl apply</code> a 'tiny' file.</p>
<p>The choice of YAML and separate files was also influenced by Google's Borg system, from which Kubernetes borrowed a lot of concepts.</p>
|
<p>I have created a k8s cluster in GKE. But I want to configure API server for k8s audit purposes so I have to set <code>--audit-policy-file</code> flag and <code>--audit-webhook-config-file</code> flags as arguments in the API server. How do I do that?</p>
| <p>I am afraid it's not possible.</p>
<p>Please keep in mind that there are some differences between an on-premise Kubernetes cluster and a GKE cluster. The most important one is that the GKE master is managed completely by Google, and you cannot reach it or change anything there. For example, in the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/control-plane-security#vulnerability_and_patch_management" rel="nofollow noreferrer">Vulnerability and patch management</a> documentation you can find this information:</p>
<blockquote>
<p>GKE control plane components are managed by a team of Google site reliability engineers, and are kept up to date with the latest security patches. This includes patches to the host operating system, Kubernetes components, and containers running on the control plane VMs.</p>
</blockquote>
<p>In short, you cannot change the <code>Audit Policy</code> for <code>GKE</code>; however, in <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/audit-policy" rel="nofollow noreferrer">GKE Audit Policy</a> you can find information on how it works.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging" rel="nofollow noreferrer">Audit Logs</a> are preconfigured by Google and the only thing you can do is filtering. There is also a <a href="https://cloud.google.com/logging/docs/audit/configure-data-access#yaml" rel="nofollow noreferrer">Data Access Log</a>, which is disabled by default. You can enable it to get more information; however, it comes at extra cost.</p>
<p>The last thing I want to mention is that there is already a <code>Feature Request</code> asking for the ability to overwrite the <code>audit-policy-file</code>. More details can be found in the report <a href="https://issuetracker.google.com/issues/185868707" rel="nofollow noreferrer">Adjusting audit log levels</a>.</p>
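<p>For example, reading recent audit entries for a cluster might look like this (the filter syntax is worth verifying against the Cloud Logging documentation):</p>
<pre><code>gcloud logging read \
    'resource.type="k8s_cluster" AND logName:"cloudaudit.googleapis.com"' \
    --limit 10 --project PROJECT_ID
</code></pre>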
|
<p>We have many applications for which we have created helm charts.</p>
<p>Now we need to upgrade our k8s clusters to v1.22. Is there any efficient way to update the charts to support the latest APIs in v1.22? Are there any tools or tips to script the above functionality...?</p>
<p>You have tools available like <a href="https://github.com/rikatz/kubepug" rel="nofollow noreferrer">https://github.com/rikatz/kubepug</a>. Still, in the correction phase I encourage you to perform a manual assessment of each deployment and modify it accordingly. Some changes will be a simple keyword change; others will imply some redesign. Keep in mind that features are first deprecated, but remain usable, before they are removed; correct them in the deprecation release rather than waiting until you bump into the removal phase.
<a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22</a></p>
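<p>A typical run might look like this (exact flags may differ between versions; check the project README). For Helm charts, render the manifests first and point the tool at the output:</p>
<pre><code>helm template my-release ./my-chart > rendered.yaml
kubepug --k8s-version=v1.22.0 --input-file=rendered.yaml
</code></pre>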
|
<p>We have a kubernetes cluster deployed on AWS EKS, and are experiencing intermittent timeouts on CoreDNS pods, usually clustered in groups of 5-15 failed queries in span of around 5 minutes. In a cluster, all queries regard the same hostname.</p>
<p>CoreDNS spits such logs:</p>
<pre><code>[ERROR] plugin/errors: 2 example.com. A: read udp [coreDNSpodIP]:39068->172.16.0.2:53: i/o timeout
</code></pre>
<p>The cluster is deployed in a VPC with a CIDR of <code>172.16.0.0/16</code>, though I can't determine what is at <code>172.16.0.2</code>.</p>
<p>They appear from time to time, we cannot reproduce the event that triggers them, nor correlate to any event on cluster. The coreDNS pods are working normally and at the same time are serving other queries well. What may be the cause and solution to fix the behavior?</p>
| <p>You may be hitting a limit after which all requests are throttled.</p>
<p>There are 2 things that come to mind</p>
<ul>
<li><a href="https://coredns.io/plugins/cache/" rel="nofollow noreferrer">Enable DNS caching on CoreDNS</a></li>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/dns-resolution-failures-ec2-linux/" rel="nofollow noreferrer">Check for EC2 throttling issues</a></li>
</ul>
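<p>For the first point, caching is configured in the Corefile (typically in the <code>coredns</code> ConfigMap in <code>kube-system</code>). A minimal sketch, close to the EKS default: the <code>cache 30</code> line keeps responses for up to 30 seconds, cutting the number of upstream queries sent to <code>172.16.0.2</code>:</p>
<pre><code>.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    cache 30
    forward . /etc/resolv.conf
    loop
    reload
}
</code></pre>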
|
<p>For example, guestbook-ui service and bbs-ui service are installed in k8s.</p>
<p>And I want to map guestbook-ui only to the 8080 listener port and bbs-ui service to the 8081 listener port to the pre-generated k8s ALB ingress.</p>
<p>However, if you write and store the following in the spec, both the guestbook-ui and bbs-ui services are deployed to all ports (8080 and 8081), and the routing gets mixed up.</p>
<pre class="lang-yaml prettyprint-override"><code># skip
metadata:
annotations:
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":8080}, {"HTTP":8081}]'
# skip
spec:
rules:
- http:
paths:
- backend:
service:
name: bbs-ui
port:
number: 80
path: /*
pathType: ImplementationSpecific
- http:
paths:
- backend:
service:
name: guestbook-ui
port:
number: 80
path: /*
pathType: ImplementationSpecific
</code></pre>
<p>How can I deploy the service to the listener port I want?</p>
<p>There is a feature to automatically merge the ingress rules for all ingresses in the same <strong>ingress group</strong>. The AWS ALB ingress controller supports this with a single ALB.</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
annotations:
alb.ingress.kubernetes.io/group.name: my-group
</code></pre>
<blockquote>
<p>In the AWS ALB ingress controller, prior to version 2.0, each ingress object you created in Kubernetes would get its own ALB. Customers wanted a way to lower their cost and duplicate configuration by sharing the same ALB for multiple services and namespaces.
By sharing an ALB, you can still use annotations for advanced routing but share a single load balancer for a team, or any combination of apps by specifying the alb.ingress.kubernetes.io/group.name annotation. <strong>All services with the same group.name will use the same load balancer</strong>.</p>
</blockquote>
<p>So, you can create ingress like this:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress1
namespace: mynamespace
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/tags: mytag=tag
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: my-group
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8080}]'
spec:
rules:
- http:
paths:
- backend:
service:
name: bbs-ui
port:
number: 80
path: /*
pathType: ImplementationSpecific
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress2
namespace: mynamespace
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/tags: mytag=tag
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: my-group
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8081}]'
spec:
rules:
- http:
paths:
- backend:
service:
name: guestbook-ui
port:
number: 80
path: /*
pathType: ImplementationSpecific
</code></pre>
<p>You can read more info about <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.0/guide/ingress/annotations/#ingressgroup" rel="nofollow noreferrer">IngressGroup here</a>.</p>
|
<p>I am looking for a (more or less scientific) document/presentation that explains why the developers of the "Kubernetes language" made the choice to fragment an application definition (for instance) into multiple YAML files instead of writing a single YAML file with all the details of the application deployment (all deployments, volumes, ...).</p>
<p>I imagine that this has something to do with reusability, maintainability and readability, but it would be nice to have a more structured argumentation (I am thinking of a paper or presentation at a conference such as KubeCon or DockerCon).</p>
<p>Thanks,</p>
<p>Abdelghani</p>
| <blockquote>
<p>why the developers of the "Kubernbetes language" made the choice to fragment an application definition (for instance) in multiple yaml files instead of writing a single yaml file with all the details of the application deployment (all deployments, volumes, ...)?</p>
</blockquote>
<p>This allows us to modify the configuration of Kubernetes objects easier. Thus, you do not have to go through the entire cluster configuration yaml file to make a small change in the service backend, for example. And yes, it is easier to develop and maintain a bunch of files, each containing some object or a group of related objects.</p>
<p>Keep in mind that you can call the <code>kubectl apply</code> command on a directory of config files:</p>
<pre><code>kubectl apply -f <directory>
</code></pre>
<p><strong>Group related k8s objects into a single file whenever it makes sense</strong>. So, it is easier to manage.</p>
<p>But you definitely can put the entire cluster configuration in one YAML file using the multi-document syntax (note the <code>---</code> separator).</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
# skip
---
apiVersion: apps/v1
kind: Deployment
metadata:
# skip
</code></pre>
<p>Kubernetes developers suggest writing your configuration files in YAML rather than JSON. Though these formats can be used interchangeably in almost all scenarios, YAML tends to be more user-friendly.</p>
<p>There is a good Kubernetes page on <a href="https://kubernetes.io/docs/concepts/configuration/overview/#general-configuration-tips" rel="nofollow noreferrer">Configuration Best Practices</a>.</p>
|
<p>I am using Kubernetes client: <a href="https://pypi.org/project/kubernetes/" rel="nofollow noreferrer">Kubernetes</a></p>
<p>My function:</p>
<pre><code> def __get_gateway_token_secret(self):
try:
self.__get_kubernetes_config()
api = client.CoreV1Api()
secret = api.read_namespaced_secret(self.secret_name, self.namespace)
logging.debug(f'Kubernetes secret found: {base64.b64decode(secret.data["value"])}')
except Exception as e:
logging.error(f'Error retrieving Kubernetes Secret: {e}')
raise e
return secret
</code></pre>
<p>Unit test:</p>
<pre><code>@patch.object(K8s, '_K8s__get_kubernetes_config')
def test_get_gateway_token_secret_returns_secret(self, kubernetes_config_mock):
kubernetes_config_mock.return_value = MagicMock()
api = client.CoreV1Api()
test_object = K8s()
with patch.object(api, 'read_namespaced_secret', return_value='test'):
result = test_object._K8s__get_gateway_token_secret()
</code></pre>
<p>I am mocking the __get_kubernetes_config() private method just fine. I need to mock the "api.read_namespaced_secret()" call. I've tried "with patch.object" as shown above, but it is still making an actual call. Is it possible to mock?</p>
<p>Everything (or almost everything) is possible to mock :)</p>
<p>In this example you're mocking a method on an actual instance of <code>CoreV1Api</code>, but inside your method <code>__get_gateway_token_secret</code> you're creating a distinct, new instance of <code>CoreV1Api</code>.</p>
<p>You should apply the patch to the class, not to an instance. Or refactor your code.</p>
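<p>A minimal, self-contained sketch of the difference (using a hypothetical stand-in class instead of the real <code>kubernetes.client</code>, so it runs anywhere): patching the class affects every instance, including the fresh one created inside the method under test.</p>

```python
from unittest.mock import patch

# Stand-in for kubernetes.client.CoreV1Api -- the method under test creates
# its own instance, so patching some other instance has no effect on it.
class CoreV1Api:
    def read_namespaced_secret(self, name, namespace):
        raise RuntimeError("real API call")

def get_secret(name, namespace):
    api = CoreV1Api()  # a distinct new instance, as in the question
    return api.read_namespaced_secret(name, namespace)

# Patching the CLASS replaces the method for every instance.
with patch.object(CoreV1Api, 'read_namespaced_secret', return_value='test'):
    result = get_secret('my-sa', 'my-ns')

print(result)
```

<p>In your test, the same idea would be <code>with patch.object(client.CoreV1Api, 'read_namespaced_secret', return_value='test'):</code>, i.e. patch the class attribute rather than a method on your own <code>api</code> instance.</p>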
|
<p>While deploying a Kubernetes application, I want to check whether a resource is already present. If so, it shall not be rendered. To achieve this behaviour the <a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/" rel="nofollow noreferrer">lookup function</a> of Helm is used. However, it always seems to be empty while deploying (not a dry run). Any ideas what I am doing wrong?</p>
<pre><code> ---
{{- if not (lookup "v1" "ServiceAccount" "my-namespace" "my-sa") }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Chart.Name }}-{{ .Values.environment }}
namespace: {{ .Values.namespace }}
labels:
app: {{ $.Chart.Name }}
environment: {{ .Values.environment }}
annotations:
"helm.sh/resource-policy": keep
iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}@{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
</code></pre>
<p>Running the corresponding kubectl command returns the expected service account:</p>
<p><code>kubectl get ServiceAccount my-sa -n my-namespace</code> lists the expected service account</p>
<p>helm version: 3.5.4</p>
<p>I think you cannot use this if-statement to validate what you want.</p>
<p>The lookup function returns the objects that were found by your lookup. So, if you want to validate that no ServiceAccount with the properties you specified exists, you should check whether the returned result is empty.</p>
<p>Try something like:</p>
<pre><code>---
{{ if eq (len (lookup "v1" "ServiceAccount" "my-namespace" "my-sa")) 0 }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Chart.Name }}-{{ .Values.environment }}
namespace: {{ .Values.namespace }}
labels:
app: {{ $.Chart.Name }}
environment: {{ .Values.environment }}
annotations:
"helm.sh/resource-policy": keep
iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}@{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
</code></pre>
<p>see: <a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function</a></p>
|
<p>I need some help regarding this OOM (exit code 137) status of pods. I have been stuck here for 3 days now. I built a Docker image of a Flask application. When I ran the Docker image locally, it was running fine with a memory usage of 2.7 GB.
I uploaded it to GKE with the following specification.
<a href="https://i.stack.imgur.com/o5dGZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o5dGZ.png" alt="Cluster" /></a></p>
<p>Workloads show "Does not have minimum availability"</p>
<p><a href="https://i.stack.imgur.com/860ox.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/860ox.png" alt="Workload" /></a></p>
<p>Checking the pod "news-category-68797b888c-fmpnc" shows a CrashLoopBackOff error.</p>
<p>Error "back-off 5m0s restarting failed container=news-category-pbz8s pod=news-category-68797b888c-fmpnc_default(d5b0fff8-3e35-4f14-8926-343e99195543): CrashLoopBackOff"</p>
<p>Checking the YAML file shows OOM killed 137.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-02-17T16:07:36Z"
generateName: news-category-68797b888c-
labels:
app: news-category
pod-template-hash: 68797b888c
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:generateName: {}
f:labels:
.: {}
f:app: {}
f:pod-template-hash: {}
f:ownerReferences:
.: {}
k:{"uid":"8d99448a-04f6-4651-a652-b1cc6d0ae4fc"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
f:containers:
k:{"name":"news-category-pbz8s"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:enableServiceLinks: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kube-controller-manager
operation: Update
time: "2022-02-17T16:07:36Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ContainersReady"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Initialized"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
k:{"type":"Ready"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:containerStatuses: {}
f:hostIP: {}
f:phase: {}
f:podIP: {}
f:podIPs:
.: {}
k:{"ip":"10.16.3.4"}:
.: {}
f:ip: {}
f:startTime: {}
manager: kubelet
operation: Update
time: "2022-02-17T16:55:18Z"
name: news-category-68797b888c-fmpnc
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: news-category-68797b888c
uid: 8d99448a-04f6-4651-a652-b1cc6d0ae4fc
resourceVersion: "25100"
uid: d5b0fff8-3e35-4f14-8926-343e99195543
spec:
containers:
- image: gcr.io/projectiris-327708/news_category:noConsoleDebug
imagePullPolicy: IfNotPresent
name: news-category-pbz8s
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-z2lbp
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: gke-news-category-cluste-default-pool-42e1e905-ftzb
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-z2lbp
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-02-17T16:07:37Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-02-17T16:55:18Z"
message: 'containers with unready status: [news-category-pbz8s]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-02-17T16:55:18Z"
message: 'containers with unready status: [news-category-pbz8s]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-02-17T16:07:36Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://a582af0248a330b7d4087916752bd941949387ed708f00b3aac6f91a6ef75e63
image: gcr.io/projectiris-327708/news_category:noConsoleDebug
imageID: gcr.io/projectiris-327708/news_category@sha256:c4b3385bd80eff2a0c0ec0df18c6a28948881e2a90dd1c642ec6960b63dd017a
lastState:
terminated:
containerID: containerd://a582af0248a330b7d4087916752bd941949387ed708f00b3aac6f91a6ef75e63
exitCode: 137
finishedAt: "2022-02-17T16:55:17Z"
reason: OOMKilled
startedAt: "2022-02-17T16:54:48Z"
name: news-category-pbz8s
ready: false
restartCount: 13
started: false
state:
waiting:
message: back-off 5m0s restarting failed container=news-category-pbz8s pod=news-category-68797b888c-fmpnc_default(d5b0fff8-3e35-4f14-8926-343e99195543)
reason: CrashLoopBackOff
hostIP: 10.160.0.42
phase: Running
podIP: 10.16.3.4
podIPs:
- ip: 10.16.3.4
qosClass: BestEffort
startTime: "2022-02-17T16:07:37Z"
</code></pre>
<p>My question is what to do and how to solve this. I tried to add resources to the YAML file in the spec:</p>
<pre><code>resources:
limits:
memory: 32Gi
requests:
memory: 16Gi
</code></pre>
<p>That also shows errors. How do I increase the memory of pods? And also, if I increase the memory, it shows "Pod Unscheduled".</p>
<p>Could someone please give me some insight into clusters, nodes, and pods, and how to solve this? Thank you.</p>
<p>A Pod always runs on a Node and is the basic deployable unit in Kubernetes.
A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster.
A cluster is a set of nodes that run containerized applications.</p>
<p>Now coming back to your question on the OOM issue. This mostly occurs because your pods are trying to use more memory than the limit you have set (in the YAML file). You can assume that your resource allocation ranges from your requests up to your limits, in your case [16Gi - 32Gi]. Now, should you simply assign more memory to solve this? Absolutely NOT! That is not how containerization, or any microservices concept, works. Read more about vertical scaling and horizontal scaling.</p>
<p>Now, how can you avoid this problem? Let's assume your container runs a basic Java Spring Boot application. Then you can try setting JVM arguments (<code>-Xms</code>, <code>-Xmx</code>, GC policies, etc.), which are standard configs for any Java application; unless you specify them explicitly in a container environment, the application will not behave as it does on a local machine.</p>
<p>Since you are using Flask, you can look into <a href="https://stackoverflow.com/questions/49991234/flask-app-memory-leak-caused-by-each-api-call">Flask App Memory Leak caused by each API call</a>, which provides adequate information about how to fix a memory leak in your Flask application.</p>
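<p>Concretely, since the pod in the question has <code>resources: {}</code> (hence <code>qosClass: BestEffort</code>), a first step is to declare requests and limits that match the ~2.7 GB footprint you measured locally, while staying within what a single node can offer (the figures below are illustrative; tune them to your measurements and node size):</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
  requests:
    memory: "3Gi"
    cpu: "500m"
  limits:
    memory: "4Gi"
</code></pre>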
|
<p>My project is deployed in a k8s environment and we are using Fluent Bit to send logs to ES. I need to send the Java stack trace as one document. Therefore I have used the Fluent Bit multi-line parser, but I cannot get it to work.</p>
<p><strong>Approach 1:</strong></p>
<p>As per a lot of tutorials and documentation, I configured Fluent Bit as follows.</p>
<pre><code>[INPUT]
Name tail
Tag kube.test.*
Path /var/log/containers/*.log
DB /var/log/test.db
Mem_Buf_Limit 50MB
Refresh_Interval 10
Multiline On
Parser_Firstline multine_parser_first_line
[PARSER]
Name multine_parser_first_line
Format regex
Regex /^(?<time>(\d)+(-\d+)+(\S)+\W(\S)+)(?<message>.*)/
Time_Key time
Time_Format %Y-%m-%d %H:%M:%S.%L
Time_Keep On
</code></pre>
<p><strong>Approach 2:</strong></p>
<p>As per the answer in <a href="https://stackoverflow.com/questions/51645296/fluentbit-with-mycat-multiline-parsing">Fluentbit with mycat multiline parsing</a>, I used two parsers:</p>
<pre><code>[INPUT]
Name tail
Tag kube.test.*
Path /var/log/containers/*.log
DB /var/log/test.db
Mem_Buf_Limit 50MB
Refresh_Interval 10
Multiline On
Parser_Firstline multine_parser_first_line
Parser_1 error_log_parser
[PARSER]
Name multine_parser_first_line
Format regex
Regex (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}).(\d{3})
Time_Key time
Time_Format %Y-%m-%d %H:%M:%S.%L
Time_Keep On
[PARSER]
Name error_log_parser
Format regex
Regex \n(?m)^.*?Exception.*(?:[\r\n]+^\s*at .*)+\n
Time_Key time
Time_Format %Y-%m-%d %H:%M:%S.%L
Time_Keep On
</code></pre>
<p>Following is my log:</p>
<pre><code> 2021-07-07 16:46:46.720 DEBUG [auth-service,0f6d420997b2b169,8ed349d5c074f8a9,false] [ADMIN,admin@testing.com,] 1 --- [nio-8080-exec-6] .m.m.a.ExceptionHandlerExceptionResolver : Invoking @ExceptionHandler method: public org.springframework.http.ResponseEntity<com.testing.fluentbit.exceptions.PlatformException> com.testing.platform.common.core.web.ApiExceptionHandler.handlePlatformException(com.testing.fluentbit.exceptions.PlatformException)
2021-07-07 16:46:46.720 WARN [auth-service,0f6d420997b2b169,8ed349d5c074f8a9,false] [ADMIN,admin@testing.com,] 1 --- [nio-8080-exec-6] c.l.p.c.core.web.ApiExceptionHandler : Request resulted in platform error com.testing.fluentbit.exceptions.ResourceNotFoundException:avatar.not.found. Stacktrace is available in debug level
2021-07-07 16:46:46.721 DEBUG [auth-service,0f6d420997b2b169,8ed349d5c074f8a9,false] [ADMIN,admin@testing.com,] 1 --- [nio-8080-exec-6] c.l.p.c.core.web.ApiExceptionHandler : Platform Exception Handler - com.testing.fluentbit.exceptions.ResourceNotFoundException : avatar.not.found => {value=null}
com.testing.fluentbit.exceptions.ResourceNotFoundException: avatar.not.found
at com.testing.platform.auth.services.AvatarService.getAvatarByIndividualId(AvatarService.java:163)
at com.testing.platform.auth.api.v1.LoginResource.getAvatarByIndividualId(LoginResource.java:541)
at com.testing.platform.auth.api.v1.LoginResource$$FastClassBySpringCGLIB$$9a81467a.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at com.testing.platform.auth.api.v1.LoginResource$$EnhancerBySpringCGLIB$$208fb4d9.getAvatarByIndividualId(<generated>)
at sun.reflect.GeneratedMethodAccessor327.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:891)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:974)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:866)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:851)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:96)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at com.testing.platform.common.core.web.filters.ContextLoggingFilter.doFilter(ContextLoggingFilter.java:45)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.security.oauth2.client.filter.OAuth2ClientContextFilter.doFilter(OAuth2ClientContextFilter.java:60)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:320)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:119)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.oauth2.provider.authentication.OAuth2AuthenticationProcessingFilter.doFilter(OAuth2AuthenticationProcessingFilter.java:176)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:66)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:215)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:357)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:270)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:109)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.cloud.sleuth.instrument.web.ExceptionLoggingFilter.doFilter(ExceptionLoggingFilter.java:48)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at brave.servlet.TracingFilter.doFilter(TracingFilter.java:86)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:800)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:806)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
2021-07-07 16:46:46.722 DEBUG [auth-service,0f6d420997b2b169,8ed349d5c074f8a9,false] [ADMIN,admin@testing.com,] 1 --- [nio-8080-exec-6] o.s.w.s.m.m.a.HttpEntityMethodProcessor : Written [com.testing.fluentbit.exceptions.ResourceNotFoundException: avatar.not.found] as "application/json" using [org.springframework.http.converter.json.MappingJackson2HttpMessageConverter@5cd6719d]
2021-07-07 16:46:46.722 WARN [auth-service,0f6d420997b2b169,8ed349d5c074f8a9,false] [ADMIN,admin@testing.com,] 1 --- [nio-8080-exec-6] .m.m.a.ExceptionHandlerExceptionResolver : Resolved [com.testing.fluentbit.exceptions.ResourceNotFoundException: avatar.not.found]
</code></pre>
<p>My <code>Parser_Firstline</code> regex matches the first log line, but I cannot get the multiline parsing to work. Any help is much appreciated. I am using the Fluent Bit image <code>amazon/aws-for-fluent-bit:2.15.1</code>.</p>
<p><strong>EDIT</strong>
Approach 1 parser works locally with ES.</p>
| <p>You actually need two multiline stages to make this work: one on the tail input to reassemble the Docker/CRI-wrapped output, and a multiline filter to parse everything inside the <code>log</code> key.</p>
<p>The solution I think would be:</p>
<pre><code>inputs: |
  [INPUT]
      Name              tail
      Tag               kube.*
      Path              /var/log/containers/*.log
      multiline.parser  docker, cri
      DB                /var/log/flb_kube.db
      rotate_wait       15
      Mem_Buf_Limit     100MB
      Skip_Long_Lines   On
      Refresh_Interval  10
</code></pre>
<p>And a multiline filter</p>
<pre><code>filters: |
  [FILTER]
      name                   multiline
      match                  *
      multiline.key_content  log
      multiline.parser       go, java, python
</code></pre>
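<p>If the built-in <code>java</code> parser does not match the application's log format, a custom multiline parser can be defined and referenced from the filter instead. The following is only a sketch based on Fluent Bit's multiline-parser syntax; the parser name and the regexes (tuned to the timestamp and stack-trace lines of a Spring-style log like the one in the question) are assumptions to adapt:</p>
<pre><code>[MULTILINE_PARSER]
    name          multiline-spring
    type          regex
    flush_timeout 1000
    # A new record starts with a timestamp like "2021-07-07 16:46:46.720"
    rule    "start_state"  "/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}/"       "cont"
    # Continuation lines: stack frames, "Caused by:", or the exception line itself
    rule    "cont"         "/^(\s+at\s|Caused by:|[\w.]+(Exception|Error))/"      "cont"
</code></pre>
<p>It would then be referenced in the filter as <code>multiline.parser multiline-spring</code>.</p>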
|
<p>I have deployed my application in AKS and it is running. I want to add a new disk (a 30 GB hard disk), but I don't know how to do it.</p>
<p>I want to attach 3 disks.</p>
<p>Here is details of AKS:</p>
<ul>
<li>Node size: <code>Standard_DS2_v2</code></li>
<li>Node pools: <code>1 node pool</code></li>
<li>Storage is:</li>
</ul>
<hr />
<pre><code>default (default) kubernetes.io/azure-disk Delete WaitForFirstConsumer true
</code></pre>
<p>Please tell me how to add them.</p>
| <p>Based on <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#:%7E:text=A%20PersistentVolume%20(PV)%20is%20a,node%20is%20a%20cluster%20resource.&text=Pods%20can%20request%20specific%20levels%20of%20resources%20(CPU%20and%20Memory)." rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<blockquote>
<p>A <em>PersistentVolume</em> (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">Storage Classes</a>.</p>
<p>It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.</p>
</blockquote>
<p>In the <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-storage#persistent-volumes" rel="nofollow noreferrer">Azure documentation</a> one can find clear guides how to:</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume" rel="nofollow noreferrer"><em>create a static volume using Azure Disks</em></a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer"><em>create a static volume using Azure Files</em></a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv" rel="nofollow noreferrer"><em>create a dynamic volume using Azure Disks</em></a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-dynamic-pv" rel="nofollow noreferrer"><em>create a dynamic volume using Azure Files</em></a></li>
</ul>
<p><strong>NOTE</strong>:
Before you begin you should have an <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">existing AKS cluster</a> and Azure CLI version 2.0.59 or <a href="https://learn.microsoft.com/en-us/cli/azure/install-azure-cli" rel="nofollow noreferrer">later installed</a> and configured. To check your version, run:</p>
<pre><code>az --version
</code></pre>
<p>See also <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">this documentation</a>.</p>
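<p>For example, a dynamically provisioned 30 GB disk can be requested with a PersistentVolumeClaim against the <code>default</code> (azure-disk) storage class listed in the question. This is a sketch; the claim name is an assumption, and you would create three such claims for three disks:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-disk-0            # assumed name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default    # the azure-disk class shown in the question
  resources:
    requests:
      storage: 30Gi
</code></pre>
<p>A pod then mounts the disk by referencing the claim via <code>spec.volumes[].persistentVolumeClaim.claimName</code>.</p>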
|
<p>I've built a Quarkus 2.7.1 console application using <a href="https://quarkus.io/guides/picocli" rel="nofollow noreferrer">picocli</a> that includes several subcommands. I'd like to be able to run this application within a Kubernetes cluster and decide its arguments at run-time. This is so that I can use the same container image to run the application in different modes within the cluster.</p>
<p>To get things started I added the <a href="https://quarkus.io/guides/container-image#jib" rel="nofollow noreferrer">JIB extension</a> and tried setting the arguments using a configuration value <code>quarkus.jib.jvm-arguments</code>. Unfortunately it seems like this configuration value is locked at build-time so I'm unable to update this at run-time.</p>
<p>Next I tried setting <code>quarkus.args</code> while using default settings for JIB. The configuration value documentation makes it sound general enough for the job, but it doesn't seem to have an effect when the application is run in the container. Since most references to this configuration value in the documentation are in the context of Dev Mode, I'm wondering if it may be disabled outside of that.</p>
<p>How can I get this application running in a container image with its arguments decided at run-time?</p>
| <p>You can set <code>quarkus.jib.jvm-entrypoint</code> to any container entrypoint command you want, including scripts. An example in the <a href="https://quarkus.io/guides/container-image#jvm-debugging" rel="nofollow noreferrer">doc</a> is <code>quarkus.jib.jvm-entrypoint=/deployments/run-java.sh</code>. You could make use of <code>$CLI_ARGUMENTS</code> in such a script. Even something like <code>quarkus.jib.jvm-entrypoint=/bin/sh,-c,'/deployments/run-java.sh $CLI_ARGUMENTS'</code> should work too, as long as you place the script <code>run-java.sh</code> at <code>/deployments</code> in the image. The possibility is limitless.</p>
<p>Also see this <a href="https://stackoverflow.com/q/69845401/1701388">SO answer</a> if there's an issue. (The OP in the link put a custom script at <code>src/main/jib/docker/run-java.sh</code> (<code>src/main/jib</code> is Jib's default "extra files directory") so that Jib places the script in the image at <code>/docker/run-java.sh</code>.)</p>
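<p>As a hedged sketch of the run-time side: if the entrypoint script expands an environment variable, each Deployment can set that variable to pick the subcommand. The variable name <code>CLI_ARGUMENTS</code>, the container name, and the image reference below are assumptions:</p>
<pre><code>containers:
  - name: my-cli-app                          # assumed name
    image: registry.example.com/my-cli:1.0    # assumed image
    env:
      - name: CLI_ARGUMENTS                   # expanded by the entrypoint script
        value: "my-subcommand --verbose"
</code></pre>
<p>Changing <code>value</code> (or sourcing it from a ConfigMap) then switches the application's arguments without rebuilding the image.</p>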
|
<p>When I run <code>kubectl delete raycluster <raycluster-name></code>, sometimes this command hangs. It looks like this is because Kubernetes finalizers for the raycluster are preventing deletion of the resource until some condition is met. Indeed, I see the raycluster gets marked with a deletion timestamp like below:</p>
<pre><code> creationTimestamp: "2022-02-17T06:06:40Z"
deletionGracePeriodSeconds: 0
deletionTimestamp: "2022-02-17T18:51:16Z"
finalizers:
- kopf.zalando.org/KopfFinalizerMarker
</code></pre>
<p>Looking at the logs, if termination happens correctly, I should see termination requests on the operator logs:</p>
<pre><code>2022-02-16 16:57:26,326 VINFO scripts.py:853 -- Send termination request to `"/home/ray/anaconda3/lib/python3.9/site-packages/ray/core/src/ray/thirdparty/redis/src/redis-server *:50343" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ""` (via SIGTERM)
2022-02-16 16:57:26,328 VINFO scripts.py:853 -- Send termination request to `/home/ray/anaconda3/lib/python3.9/site-packages/ray/core/src/ray/raylet/raylet --raylet_socket_name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/raylet --store_socket_name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/plasma_store --object_manager_port=0 --min_worker_port=10002 --max_worker_port=19999 --node_manager_port=0 --node_ip_address=10.1.0.34 --redis_address=10.1.0.34 --redis_port=6379 --maximum_startup_concurrency=1 --static_resource_list=node:10.1.0.34,1.0,memory,367001600,object_store_memory,137668608 "--python_worker_command=/home/ray/anaconda3/bin/python /home/ray/anaconda3/lib/python3.9/site-packages/ray/workers/setup_worker.py /home/ray/anaconda3/lib/python3.9/site-packages/ray/workers/default_worker.py --node-ip-address=10.1.0.34 --node-manager-port=RAY_NODE_MANAGER_PORT_PLACEHOLDER --object-store-name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/plasma_store --raylet-name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/raylet --redis-address=10.1.0.34:6379 --temp-dir=/tmp/ray --metrics-agent-port=45522 --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 RAY_WORKER_DYNAMIC_OPTION_PLACEHOLDER --redis-password=5241590000000000" --java_worker_command= "--cpp_worker_command=/home/ray/anaconda3/lib/python3.9/site-packages/ray/cpp/default_worker --ray_plasma_store_socket_name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/plasma_store --ray_raylet_socket_name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/raylet --ray_node_manager_port=RAY_NODE_MANAGER_PORT_PLACEHOLDER --ray_address=10.1.0.34:6379 --ray_redis_password=5241590000000000 --ray_session_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116 --ray_logs_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/logs --ray_node_ip_address=10.1.0.34 RAY_WORKER_DYNAMIC_OPTION_PLACEHOLDER" 
--native_library_path=/home/ray/anaconda3/lib/python3.9/site-packages/ray/cpp/lib --redis_password=5241590000000000 --temp_dir=/tmp/ray --session_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116 --log_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/logs --resource_dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/runtime_resources --metrics-agent-port=45522 --metrics_export_port=43650 --object_store_memory=137668608 --plasma_directory=/dev/shm --ray-debugger-external=0 "--agent_command=/home/ray/anaconda3/bin/python -u /home/ray/anaconda3/lib/python3.9/site-packages/ray/dashboard/agent.py --node-ip-address=10.1.0.34 --redis-address=10.1.0.34:6379 --metrics-export-port=43650 --dashboard-agent-port=45522 --listen-port=0 --node-manager-port=RAY_NODE_MANAGER_PORT_PLACEHOLDER --object-store-name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/plasma_store --raylet-name=/tmp/ray/session_2022-02-16_16-41-20_595437_116/sockets/raylet --temp-dir=/tmp/ray --session-dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116 --runtime-env-dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/runtime_resources --log-dir=/tmp/ray/session_2022-02-16_16-41-20_595437_116/logs --logging-rotate-bytes=536870912 --logging-rotate-backup-count=5 --redis-password=5241590000000000"` (via SIGTERM)
</code></pre>
<p>However, in the case above where the finalizer condition is not met, I don't see the termination requests in the logs:</p>
<pre><code>Demands:
(no resource demands)
ray,ray:2022-02-16 17:21:10,145 DEBUG gcs_utils.py:253 -- internal_kv_put b'__autoscaling_status_legacy' b"Cluster status: 2 nodes\n - MostDelayedHeartbeats: {'10.244.0.11': 0.17503762245178223, '10.244.1.33': 0.17499160766601562, '10.244.0.12': 0.17495203018188477}\n - NodeIdleSeconds: Min=3926 Mean=3930 Max=3937\n - ResourceUsage: 0.0/3.0 CPU, 0.0 GiB/1.05 GiB memory, 0.0 GiB/0.38 GiB object_store_memory\n - TimeSinceLastHeartbeat: Min=0 Mean=0 Max=0\nWorker node types:\n - rayWorkerType: 2" True None
ray,ray:2022-02-16 17:21:10,145 DEBUG legacy_info_string.py:24 -- Cluster status: 2 nodes
- MostDelayedHeartbeats: {'10.244.0.11': 0.17503762245178223, '10.244.1.33': 0.17499160766601562, '10.244.0.12': 0.17495203018188477}
- NodeIdleSeconds: Min=3926 Mean=3930 Max=3937
- ResourceUsage: 0.0/3.0 CPU, 0.0 GiB/1.05 GiB memory, 0.0 GiB/0.38 GiB object_store_memory
- TimeSinceLastHeartbeat: Min=0 Mean=0 Max=0
Worker node types:
- rayWorkerType: 2
ray,ray:2022-02-16 17:21:10,220 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-f5gsr is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:10,245 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-fwkp7 is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:10,268 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-f5gsr is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:10,285 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-fwkp7 is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:189 -- Cluster resources: [{'CPU': 1.0, 'node:10.244.0.11': 1.0, 'object_store_memory': 135078297.0, 'memory': 375809638.0}, {'node:10.244.1.33': 1.0, 'memory': 375809638.0, 'object_store_memory': 137100902.0, 'CPU': 1.0}, {'object_store_memory': 134204620.0, 'CPU': 1.0, 'node:10.244.0.12': 1.0, 'memory': 375809638.0}]
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:190 -- Node counts: defaultdict(<class 'int'>, {'rayHeadType': 1, 'rayWorkerType': 2})
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:201 -- Placement group demands: []
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:247 -- Resource demands: []
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:248 -- Unfulfilled demands: []
ray,ray:2022-02-16 17:21:10,389 DEBUG resource_demand_scheduler.py:252 -- Final unfulfilled: []
ray,ray:2022-02-16 17:21:10,440 DEBUG resource_demand_scheduler.py:271 -- Node requests: {}
ray,ray:2022-02-16 17:21:10,488 DEBUG gcs_utils.py:253 -- internal_kv_put b'__autoscaling_status' b'{"load_metrics_report": {"usage": {"object_store_memory": [0.0, 406383819.0], "memory": [0.0, 1127428914.0], "node:10.244.0.11": [0.0, 1.0], "CPU": [0.0, 3.0], "node:10.244.1.33": [0.0, 1.0], "node:10.244.0.12": [0.0, 1.0]}, "resource_demand": [], "pg_demand": [], "request_demand": [], "node_types": [[{"memory": 375809638.0, "CPU": 1.0, "node:10.244.0.11": 1.0, "object_store_memory": 135078297.0}, 1], [{"object_store_memory": 137100902.0, "node:10.244.1.33": 1.0, "memory": 375809638.0, "CPU": 1.0}, 1], [{"object_store_memory": 134204620.0, "memory": 375809638.0, "node:10.244.0.12": 1.0, "CPU": 1.0}, 1]], "head_ip": null}, "time": 1645060869.937817, "monitor_pid": 68, "autoscaler_report": {"active_nodes": {"rayHeadType": 1, "rayWorkerType": 2}, "pending_nodes": [], "pending_launches": {}, "failed_nodes": []}}' True None
ray,ray:2022-02-16 17:21:15,493 DEBUG gcs_utils.py:238 -- internal_kv_get b'autoscaler_resource_request' None
ray,ray:2022-02-16 17:21:15,640 INFO autoscaler.py:304 --
======== Autoscaler status: 2022-02-16 17:21:15.640853 ========
Node status
---------------------------------------------------------------
Healthy:
1 rayHeadType
2 rayWorkerType
Pending:
(no pending nodes)
Recent failures:
(no failures)
Resources
---------------------------------------------------------------
Usage:
0.0/3.0 CPU
0.00/1.050 GiB memory
0.00/0.378 GiB object_store_memory
Demands:
(no resource demands)
ray,ray:2022-02-16 17:21:15,683 DEBUG gcs_utils.py:253 -- internal_kv_put b'__autoscaling_status_legacy' b"Cluster status: 2 nodes\n - MostDelayedHeartbeats: {'10.244.0.11': 0.14760899543762207, '10.244.1.33': 0.14756131172180176, '10.244.0.12': 0.1475226879119873}\n - NodeIdleSeconds: Min=3932 Mean=3936 Max=3943\n - ResourceUsage: 0.0/3.0 CPU, 0.0 GiB/1.05 GiB memory, 0.0 GiB/0.38 GiB object_store_memory\n - TimeSinceLastHeartbeat: Min=0 Mean=0 Max=0\nWorker node types:\n - rayWorkerType: 2" True None
ray,ray:2022-02-16 17:21:15,684 DEBUG legacy_info_string.py:24 -- Cluster status: 2 nodes
- MostDelayedHeartbeats: {'10.244.0.11': 0.14760899543762207, '10.244.1.33': 0.14756131172180176, '10.244.0.12': 0.1475226879119873}
- NodeIdleSeconds: Min=3932 Mean=3936 Max=3943
- ResourceUsage: 0.0/3.0 CPU, 0.0 GiB/1.05 GiB memory, 0.0 GiB/0.38 GiB object_store_memory
- TimeSinceLastHeartbeat: Min=0 Mean=0 Max=0
Worker node types:
- rayWorkerType: 2
ray,ray:2022-02-16 17:21:15,775 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-f5gsr is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:15,799 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-fwkp7 is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:15,833 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-f5gsr is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:15,850 DEBUG autoscaler.py:1148 -- ray-ray-worker-type-fwkp7 is not being updated and passes config check (can_update=True).
ray,ray:2022-02-16 17:21:15,962 DEBUG resource_demand_scheduler.py:189 -- Cluster resources: [{'memory': 375809638.0, 'node:10.244.0.11': 1.0, 'CPU': 1.0, 'object_store_memory': 135078297.0}, {'CPU': 1.0, 'node:10.244.1.33': 1.0, 'object_store_memory': 137100902.0, 'memory': 375809638.0}, {'memory': 375809638.0, 'node:10.244.0.12': 1.0, 'CPU': 1.0, 'object_store_memory': 134204620.0}]
ray,ray:2022-02-16 17:21:15,962 DEBUG resource_demand_scheduler.py:190 -- Node counts: defaultdict(<class 'int'>, {'rayHeadType': 1, 'rayWorkerType': 2})
ray,ray:2022-02-16 17:21:15,963 DEBUG resource_demand_scheduler.py:201 -- Placement group demands: []
ray,ray:2022-02-16 17:21:15,963 DEBUG resource_demand_scheduler.py:247 -- Resource demands: []
ray,ray:2022-02-16 17:21:15,963 DEBUG resource_demand_scheduler.py:248 -- Unfulfilled demands: []
ray,ray:2022-02-16 17:21:15,963 DEBUG resource_demand_scheduler.py:252 -- Final unfulfilled: []
ray,ray:2022-02-16 17:21:16,032 DEBUG resource_demand_scheduler.py:271 -- Node requests: {}
ray,ray:2022-02-16 17:21:16,081 DEBUG gcs_utils.py:253 -- internal_kv_put b'__autoscaling_status' b'{"load_metrics_report": {"usage": {"memory": [0.0, 1127428914.0], "object_store_memory": [0.0, 406383819.0], "CPU": [0.0, 3.0], "node:10.244.0.11": [0.0, 1.0], "node:10.244.1.33": [0.0, 1.0], "node:10.244.0.12": [0.0, 1.0]}, "resource_demand": [], "pg_demand": [], "request_demand": [], "node_types": [[{"node:10.244.0.11": 1.0, "object_store_memory": 135078297.0, "CPU": 1.0, "memory": 375809638.0}, 1], [{"object_store_memory": 137100902.0, "node:10.244.1.33": 1.0, "CPU": 1.0, "memory": 375809638.0}, 1], [{"object_store_memory": 134204620.0, "node:10.244.0.12": 1.0, "CPU": 1.0, "memory": 375809638.0}, 1]], "head_ip": null}, "time": 1645060875.4946475, "monitor_pid": 68, "autoscaler_report": {"active_nodes": {"rayHeadType": 1, "rayWorkerType": 2}, "pending_nodes": [], "pending_launches": {}, "failed_nodes": []}}' True None
</code></pre>
<p>Reading through the documentation, I found two workarounds:</p>
<ol>
<li>Use <code>kubectl patch</code> to remove the finalizer.</li>
<li>Kill and restart the operator, which lifts the finalizer condition.</li>
</ol>
<p>However, I am not sure either method is sustainable, because:</p>
<ol>
<li>After I run <code>kubectl patch</code>, I can't seem to create new rayclusters with the same name; this requires me to kill and restart the operator.</li>
<li>If I restart the operator to bring down a raycluster, I am afraid this will affect other rayclusters that are currently running.</li>
</ol>
<p>I am looking to understand the following:</p>
<ol>
<li>What happens if I restart the ray operator while other rayclusters are active?</li>
<li>What is the finalizer condition here and can I disable it?</li>
<li>Some suitable workarounds</li>
</ol>
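<p>For reference, the first workaround can be sketched as a single command. Note this only clears the finalizer list so the deletion can complete; it does not fix whatever state the operator is stuck on:</p>
<pre><code>kubectl patch raycluster &lt;raycluster-name&gt; -n &lt;namespace&gt; \
  --type=merge -p '{"metadata":{"finalizers":[]}}'
</code></pre>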
| <p>Assuming the operator is running when you try to delete the resource, the hanging behavior is a bug.
Would you mind filing a <a href="https://github.com/ray-project/ray/issues/new?assignees=&labels=bug%2Ctriage&template=bug-report.yml&title=%5BBug%5D%20" rel="nofollow noreferrer">bug report</a> on the Ray GitHub with reproduction details?</p>
<p>Edit: I'm one of the maintainers of Ray's Kubernetes support.
Feel free to tag me with "@DmitriGekhtman" in the bug report.</p>
|
<p>Here is my ingress.yml file</p>
<pre><code>spec:
rules:
- host: {{- if .Values.ingress.host }} {{ tpl .Values.ingress.host . }} {{- end }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
{{- range $port := .Values.container.app.port }}
- path: {{ tpl $port.path $ }}
pathType: Prefix
backend:
service:
name: {{ $.Release.Name }}
port:
number: {{ int $port.port }}
{{- end }}
{{ end }}
</code></pre>
<p>I want to override these rules for some services. Here is what I tried in my values.yml file:</p>
<pre><code>ingress:
scheme: internal
host: test.com
paths:
- path: /
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- path: /
pathType: Prefix
backend:
service:
name: test
port:
number: 3000
</code></pre>
<p>but after I deploy, the rules are not overridden at all. Maybe my format is wrong, I'm not sure.</p>
<h4>UPDATE</h4>
<p>I tried it this way, but it still didn't override the rules in the ingress:</p>
<pre><code>hosts:
- host: test.com
paths:
- path: /test
pathType: Prefix
backend:
service:
name: ssl-redirect
port:
name: use-annotation
- path: /test
pathType: Prefix
backend:
service:
name: test-dev
port:
number: 3000
</code></pre>
| <p><code>ingress.yaml</code> should be something like</p>
<pre><code>kind: Ingress
metadata:
name: {{ $name }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with $_annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $name }}
servicePort: 80
{{- end }}
{{- end }}
{{- end }}
</code></pre>
<p><strong>values.yaml</strong></p>
<pre><code>ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: test.example.io
paths: [/path]
# tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
</code></pre>
<p><strong>Another example :</strong></p>
<p><strong>Ingress.yaml</strong> : <a href="https://github.com/helm/charts/blob/master/stable/ghost/templates/ingress.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/ghost/templates/ingress.yaml</a></p>
<p><strong>values.yaml</strong> : <a href="https://github.com/helm/charts/blob/master/stable/ghost/values.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/ghost/values.yaml</a></p>
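<p>If you want <code>values.yaml</code> to carry complete path objects (service name, port, <code>pathType</code>) as in your attempts, a sketch is to render them with <code>toYaml</code> instead of enumerating each field in the template. This assumes your values follow the <code>hosts:</code> structure from your update:</p>
<pre class="lang-yaml prettyprint-override"><code>rules:
{{- range .Values.ingress.hosts }}
  - host: {{ .host | quote }}
    http:
      paths:
        {{- toYaml .paths | nindent 8 }}
{{- end }}
</code></pre>
<p>With this approach, anything you place under <code>paths</code> in the values file is emitted verbatim, so overriding per environment only requires changing the values file.</p>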
|
<p>I need some help with pods being OOM-killed with status code 137. I have been stuck on this for 3 days now. I built a Docker image of a Flask application. When I ran the image locally, it worked fine with a memory usage of 2.7 GB.
I then uploaded it to GKE with the following specification.
<a href="https://i.stack.imgur.com/o5dGZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o5dGZ.png" alt="Cluster" /></a></p>
<p>Workloads show "Does not have minimum availability"</p>
<p><a href="https://i.stack.imgur.com/860ox.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/860ox.png" alt="Workload" /></a></p>
<p>Inspecting the pod "news-category-68797b888c-fmpnc" shows a CrashLoopBackOff error:</p>
<p>Error "back-off 5m0s restarting failed container=news-category-pbz8s pod=news-category-68797b888c-fmpnc_default(d5b0fff8-3e35-4f14-8926-343e99195543): CrashLoopBackOff"</p>
<p>The pod's YAML shows the container was OOMKilled with exit code 137:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-02-17T16:07:36Z"
generateName: news-category-68797b888c-
labels:
app: news-category
pod-template-hash: 68797b888c
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:generateName: {}
f:labels:
.: {}
f:app: {}
f:pod-template-hash: {}
f:ownerReferences:
.: {}
k:{"uid":"8d99448a-04f6-4651-a652-b1cc6d0ae4fc"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
f:containers:
k:{"name":"news-category-pbz8s"}:
.: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:dnsPolicy: {}
f:enableServiceLinks: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
manager: kube-controller-manager
operation: Update
time: "2022-02-17T16:07:36Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:conditions:
k:{"type":"ContainersReady"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Initialized"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:status: {}
f:type: {}
k:{"type":"Ready"}:
.: {}
f:lastProbeTime: {}
f:lastTransitionTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:containerStatuses: {}
f:hostIP: {}
f:phase: {}
f:podIP: {}
f:podIPs:
.: {}
k:{"ip":"10.16.3.4"}:
.: {}
f:ip: {}
f:startTime: {}
manager: kubelet
operation: Update
time: "2022-02-17T16:55:18Z"
name: news-category-68797b888c-fmpnc
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: news-category-68797b888c
uid: 8d99448a-04f6-4651-a652-b1cc6d0ae4fc
resourceVersion: "25100"
uid: d5b0fff8-3e35-4f14-8926-343e99195543
spec:
containers:
- image: gcr.io/projectiris-327708/news_category:noConsoleDebug
imagePullPolicy: IfNotPresent
name: news-category-pbz8s
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-z2lbp
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: gke-news-category-cluste-default-pool-42e1e905-ftzb
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-z2lbp
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-02-17T16:07:37Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2022-02-17T16:55:18Z"
message: 'containers with unready status: [news-category-pbz8s]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2022-02-17T16:55:18Z"
message: 'containers with unready status: [news-category-pbz8s]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2022-02-17T16:07:36Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://a582af0248a330b7d4087916752bd941949387ed708f00b3aac6f91a6ef75e63
image: gcr.io/projectiris-327708/news_category:noConsoleDebug
imageID: gcr.io/projectiris-327708/news_category@sha256:c4b3385bd80eff2a0c0ec0df18c6a28948881e2a90dd1c642ec6960b63dd017a
lastState:
terminated:
containerID: containerd://a582af0248a330b7d4087916752bd941949387ed708f00b3aac6f91a6ef75e63
exitCode: 137
finishedAt: "2022-02-17T16:55:17Z"
reason: OOMKilled
startedAt: "2022-02-17T16:54:48Z"
name: news-category-pbz8s
ready: false
restartCount: 13
started: false
state:
waiting:
message: back-off 5m0s restarting failed container=news-category-pbz8s pod=news-category-68797b888c-fmpnc_default(d5b0fff8-3e35-4f14-8926-343e99195543)
reason: CrashLoopBackOff
hostIP: 10.160.0.42
phase: Running
podIP: 10.16.3.4
podIPs:
- ip: 10.16.3.4
qosClass: BestEffort
startTime: "2022-02-17T16:07:37Z"
</code></pre>
<p>My question is what to do to solve this. I tried adding resources under <code>spec</code> in the YAML file:</p>
<pre><code>resources:
limits:
memory: 32Gi
requests:
memory: 16Gi
</code></pre>
<p>That also produced errors. How do I increase the memory available to pods? And when I do increase the memory, the pod shows "Unschedulable".</p>
<p>Could someone please give me some insight into clusters, nodes, and pods, and how to solve this? Thank you.</p>
| <p>The message <code>back-off restarting failed container</code> appears when you are facing a temporary resource overload, as a result of an activity spike. The <code>OOMKilled code 137</code> means that a container or pod was terminated because it used more memory than allowed. OOM stands for <code>“Out Of Memory”</code>.</p>
<p>So based on your GKE configuration (16Gi of total memory), I suggest reviewing the memory limits configured for your GKE workloads. You can confirm this issue with the following command: <code>kubectl describe pod [name]</code></p>
<p>You will need to determine why Kubernetes decided to terminate the pod with the OOMKilled error, and adjust the memory request and limit values accordingly. Here is an example of how to set requests and limits for <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">memory</a>, since that looks like the main problem:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: memory-demo
namespace: mem-example
spec:
containers:
- name: memory-demo-ctr
image: polinux/stress
resources:
      limits:
        memory: "8Gi"
      requests:
        memory: "4Gi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "4G", "--vm-hang", "1"]
</code></pre>
<p>To review all the GKE metrics, you go to the GCP console, then go to the Monitoring dashboard and select GKE. In this Monitoring dashboard, you will find the statistics related to memory and CPU.</p>
<p>Also, it is important to review if the containers have enough resources to run your applications. You can follow this <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">link</a> to check the Kubernetes best practices.</p>
<p>GKE also has an interesting feature, which is autoscaling. It is very useful because it automatically resizes your GKE cluster based on the demands of your workloads. Please follow this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">guide</a> to learn how to do it.</p>
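<p>Note that the pod in your output is owned by a ReplicaSet (created by a Deployment named <code>news-category</code>), so the resources should be set on the Deployment's pod template rather than patched onto the pod directly. A sketch, where the 4Gi/8Gi values are assumptions you should tune so that the request fits on your 16Gi nodes:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: news-category
spec:
  template:
    spec:
      containers:
      - name: news-category-pbz8s
        resources:
          requests:
            memory: "4Gi"   # what the scheduler reserves; must fit on a node
          limits:
            memory: "8Gi"   # the OOM-kill threshold for the container
</code></pre>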
|
<p>In particular, %CPU/R, %CPU/L, %MEM/R, and %MEM/L. While I'm at it, what are the units of the memory and CPU (non-percentage) columns?</p>
| <p>They are explained in the K9s release notes <a href="https://github.com/derailed/k9s/blob/master/change_logs/release_v0.13.4.md#cpumem-metrics" rel="noreferrer">here</a></p>
<ul>
<li>%CPU/R Percentage of requested cpu</li>
<li>%MEM/R Percentage of requested memory</li>
<li>%CPU/L Percentage of limited cpu</li>
<li>%MEM/L Percentage of limited memory</li>
</ul>
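<p>Note that these percentage columns only show values when the container declares requests and limits; without them k9s displays <code>n/a</code>. A minimal pod spec for illustration (the non-percentage CPU column is shown in millicores and MEM in MiB):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m     # %CPU/R compares live usage against this
        memory: 64Mi  # %MEM/R compares live usage against this
      limits:
        cpu: 200m     # %CPU/L compares live usage against this
        memory: 128Mi # %MEM/L compares live usage against this
</code></pre>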
|
<p>I see <code>patchesStrategicMerge</code>in my kustomization.yaml file, but not getting it clearly, like whats its need or why we require that?</p>
<p>kustomization.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
- a.yaml
patchesStrategicMerge:
- b.yaml
- c.yaml
</code></pre>
<p>I went through this: <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#customizing" rel="noreferrer">https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#customizing</a> and
<a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md" rel="noreferrer">https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md</a></p>
| <p>This comes in handy when you <em>inherit</em> from some base and want to apply partial changes to said base. That way, you can have one source YAML file and perform different customizations based on it, without having to recreate the entire resource. That is one key selling point of <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kustomize</a>.</p>
<p>The purpose of the <code>Strategic Merge Patch</code> is to be able to patch <em>rich objects</em> partially, instead of replacing them entirely.</p>
<p>Imagine you have a list of objects.</p>
<pre class="lang-yaml prettyprint-override"><code>mylist:
- name: foo
- name: bar
- name: baz
</code></pre>
<p>How could you change one of the items in the list? With a standard merge patch, you could only replace the entire list. But with a strategic merge patch, you can target one element in the list based on some property, in this case only the name makes sense as it's the only property.</p>
<pre><code>mylist:
- $patch: delete
name: foo
</code></pre>
<p>In the above example, I used the strategic merge patch to remove the item in the list with the name foo.</p>
<p>Here is another example, suppose I have the following project structure.</p>
<pre><code>sample
├── base
│ ├── kustomization.yaml
│ └── pod.yaml
└── layers
└── dev
├── kustomization.yaml
└── patch.yaml
</code></pre>
<p>In the base, is my full pod definition. While in the layers, I can create multiple layers for different environments, in this case I have only one for dev.</p>
<p>The kustomization.yaml in the base folder looks like this.</p>
<pre><code>resources:
- pod.yaml
</code></pre>
<p>If I execute the base itself with dry run I get this.</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -k sample/base --dry-run=client -o yaml
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: myapp
namespace: default
spec:
containers:
- image: nginx
name: nginx
- command:
- sleep
- infinity
- image: busybox
name: sidecar
</code></pre>
<p>The kustomization.yaml in the dev folder looks like this.</p>
<pre class="lang-yaml prettyprint-override"><code>bases:
- ../../base
patchesStrategicMerge:
- patch.yaml
</code></pre>
<p>And the patch looks like this. I want to enable debug logging for the sidecar. Therefore I am using a merge directive to change its arguments without changing the image. I also want to keep the nginx container.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: myapp
spec:
containers:
- $patch: merge
name: sidecar
args: [log-level, debug]
</code></pre>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -k sample/layers/dev/ --dry-run=client -o yaml
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: myapp
namespace: default
spec:
containers:
- args:
- log-level
- debug
command:
- sleep
- infinity
image: busybox
name: sidecar
- image: nginx
name: nginx
</code></pre>
<p><em>Note: The command and args don't make much sense, It's just to illustrate the strategic merge patch.</em></p>
|
<p>I have an ingress controller working for UI container service and backend container service. my ingress configuration is as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: testapp
annotations:
rules:
- host: test1.example.com
http:
paths:
- path: /static
backend:
serviceName: ui-service
servicePort: 80
- path: /apicall
backend:
serviceName: backend-service
servicePort: 8080
</code></pre>
<p>This is working fine. Now I need to rewrite the ingress URL when it contains <code>?</code>.</p>
<p>For example, if the URL is <code>test1.example.com/?productid=10001</code>, I need to insert <code>static</code> before the <code>?</code> and forward the request to <code>test1.example.com/static?productid=10001</code>.</p>
<p>Is this behavior possible through below annotations?</p>
<pre><code> nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>If yes, how to have that regex kind of url where if <code>?</code> is present in the url followed by any string/characters, add <code>static</code> keyword before it?</p>
| <p>First of all, look at the YAML below to understand how the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite" rel="nofollow noreferrer">rewrite rule</a> works:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: rewrite
namespace: default
spec:
ingressClassName: nginx
rules:
- host: rewrite.bar.com
http:
paths:
- path: /something(/|$)(.*)
pathType: Prefix
backend:
service:
name: http-svc
port:
number: 80
</code></pre>
<blockquote>
<p>In this ingress definition, any characters captured by <code>(.*)</code> will be assigned to the placeholder <code>$2</code>, which is then used as a parameter in the <code>rewrite-target</code> annotation.</p>
<p>For example, the ingress definition above will result in the following rewrites:</p>
<ul>
<li><code>rewrite.bar.com/something</code> rewrites to <code>rewrite.bar.com/</code></li>
<li><code>rewrite.bar.com/something/</code> rewrites to <code>rewrite.bar.com/</code></li>
<li><code>rewrite.bar.com/something/new</code> rewrites to <code>rewrite.bar.com/new</code></li>
</ul>
</blockquote>
<p>In your case, you will need to follow a similar procedure. But first, I suggest you do a separate ingress for each path. This will help you keep order later and additionally you will have an independent ingress for each path. Look at <a href="https://serverfault.com/questions/1065154/nginx-ingress-rewrite-rule/1065176#1065176">this question</a> to see why and how you should create a separate ingress.</p>
<blockquote>
<p>Now I need to forward this ingress URL if it contains ?. for eg if URL is test1.example.com/?productid=10001 i need to add static before this ? and forward it to test1.example.com/static?productid=10001.</p>
</blockquote>
<p>You will need to create an appropriate capture group. For the data you presented it will look like this:</p>
<pre class="lang-yaml prettyprint-override"><code> - path: /(/|$)(.*)
</code></pre>
<p>and the annotation should be like this:</p>
<pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/rewrite-target: /static/$2
</code></pre>
<p>In this case, everything captured from the URL after the <code>/</code> character is rewritten to a new address consisting of <code>/static</code> followed by the originally captured path. The query string (everything after <code>?</code>) is not part of the matched path, so it is carried over unchanged and the question mark causes no problem.</p>
<p>If you need to create a regex that will act on a question mark, you will need to prefix it with a <code>\</code> character, which is described <a href="https://www.regular-expressions.info/characters.html#special" rel="nofollow noreferrer">here</a>.</p>
<p>Also, consider that you are currently targeting the root <code>/</code> path while wanting to create 2 ingresses. I suggest defining distinct paths where the situation allows, so that you can easily maintain 2 different ingresses, each responsible for directing its own traffic.</p>
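<p>Putting this together for your host, a sketch of what the rewritten ingress might look like (the ingress name is illustrative; recent ingress-nginx versions enable regex path handling automatically when <code>rewrite-target</code> is set, and <code>use-regex</code> just makes it explicit):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testapp-static
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  rules:
  - host: test1.example.com
    http:
      paths:
      - path: /(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: ui-service
            port:
              number: 80
</code></pre>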
|
<p>I have Kubernetes cluster running on a VM. A truncated overview of the mounts is:</p>
<pre class="lang-sh prettyprint-override"><code>$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 20G 4.5G 15G 24% /
/dev/mapper/vg001-lv--docker 140G 33G 108G 23% /var/lib/docker
</code></pre>
<p>As you can see, I added an extra disk to store the docker images and its volumes. However, when querying the node's capacity, the following is returned</p>
<pre><code>Capacity:
cpu: 12
ephemeral-storage: 20145724Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 65831264Ki
nvidia.com/gpu: 1
pods: 110
</code></pre>
<p><code>ephemeral-storage</code> is <code>20145724Ki</code> which is 20G, referring to the disk mounted at <code>/</code>.</p>
<p>How does Kubelet calculate its <code>ephemeral-storage</code>? Is it simply looking at the disk space available at <code>/</code>? Or is it looking at another folder like <code>/var/log/containers</code>?</p>
<p><a href="https://stackoverflow.com/questions/58269443/how-can-we-increase-the-size-of-ephemeral-storage-in-a-kubernetes-worker-node">This is a similar post</a> where the user eventually succumbed to increasing the disk mounted at <code>/</code>.</p>
| <p><strong>Some theory</strong></p>
<p>By default, <code>Capacity</code> and <code>Allocatable</code> for ephemeral-storage in a standard Kubernetes environment are sourced from the filesystem mounted at <code>/var/lib/kubelet</code>,
which is the default location for the kubelet directory.</p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#eviction-signals" rel="noreferrer">The kubelet supports the following filesystem partitions:</a></p>
<blockquote>
<ol>
<li><code>nodefs</code>: The node's main filesystem, used for local disk volumes, emptyDir, log storage, and more. For example, <code>nodefs</code> contains
<code>/var/lib/kubelet/</code>.</li>
<li><code>imagefs</code>: An optional filesystem that container runtimes use to store container images and container writable layers.</li>
</ol>
<p>Kubelet auto-discovers these filesystems and ignores other
filesystems. Kubelet does not support other configurations.</p>
</blockquote>
<p>From <a href="https://kubernetes.io/docs/concepts/storage/volumes/#resources" rel="noreferrer">Kubernetes website</a> about volumes:</p>
<blockquote>
<p>The storage media (such as Disk or SSD) of an <code>emptyDir</code> volume is
determined by the medium of the filesystem holding the kubelet root
dir (typically <code>/var/lib/kubelet</code>).</p>
</blockquote>
<p>Location for kubelet directory can be configured by providing:</p>
<ol>
<li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">Command line parameter during kubelet initialization</a></li>
</ol>
<p><code>--root-dir</code> string
Default: /var/lib/kubelet</p>
<ol start="2">
<li>Via <a href="https://kubernetes.io/docs/reference/config-api/kubeadm-config.v1beta3/#kubeadm-init-configuration-types" rel="noreferrer">kubeadm with config file</a> (e.g.)</li>
</ol>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
root-dir: "/data/var/lib/kubelet"
</code></pre>
<p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/#customizing-the-kubelet" rel="noreferrer">Customizing kubelet</a>:</p>
<blockquote>
<p>To customize the kubelet you can add a <code>KubeletConfiguration</code> next to
the <code>ClusterConfiguration</code> or <code>InitConfiguration</code> separated by <code>---</code>
within the same configuration file. This file can then be passed to
<code>kubeadm init</code>.</p>
</blockquote>
<p>When bootstrapping kubernetes cluster using kubeadm, <code>Capacity</code> reported by <code>kubectl get node</code> is equal to the disk capacity mounted into <code>/var/lib/kubelet</code></p>
<p>However <code>Allocatable</code> will be reported as:
<code>Allocatable</code> = <code>Capacity</code> - <code>10% nodefs</code> using the standard kubeadm configuration, since <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#hard-eviction-thresholds" rel="noreferrer">the kubelet has the following default hard eviction thresholds:</a></p>
<ul>
<li><code>nodefs.available<10%</code></li>
</ul>
<p>It can be configured during <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">kubelet initialization</a> with:
<code>--eviction-hard</code> mapStringString
(default: <code>imagefs.available<15%,memory.available<100Mi,nodefs.available<10%</code>)</p>
<hr />
<p><strong>Example</strong></p>
<p>I set up a test environment for Kubernetes with a master node and two worker nodes (worker-1 and worker-2).</p>
<p>Both worker nodes have volumes of the same capacity: 50Gb.</p>
<p>Additionally, I mounted a second volume with a capacity of 20Gb for the Worker-1 node at the path <code>/var/lib/kubelet</code>.
Then I created a cluster with kubeadm.</p>
<p><strong>Result</strong></p>
<p><em>From worker-1 node:</em></p>
<pre><code>skorkin@worker-1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 49G 2.8G 46G 6% /
...
/dev/sdb 20G 45M 20G 1% /var/lib/kubelet
</code></pre>
<p>and</p>
<pre><code>Capacity:
cpu: 2
ephemeral-storage: 20511312Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4027428Ki
pods: 110
</code></pre>
<p>Size of ephemeral-storage is the same as volume mounted at /var/lib/kubelet.</p>
<p><em>From worker-2 node:</em></p>
<pre><code>skorkin@worker-2:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 49G 2.7G 46G 6% /
</code></pre>
<p>and</p>
<pre><code>Capacity:
cpu: 2
ephemeral-storage: 50633164Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4027420Ki
pods: 110
</code></pre>
|
<p>I have an emberjs application which has been deployed and in google chrome browser im getting the following errors for 2 of the .js files.</p>
<blockquote>
<p>Failed to find a valid digest in the 'integrity' attribute for
resource
'http://staging.org.com/assets/vendor-0ada2c9fb4d3e07ad2f0c6a990945270.js'
with computed SHA-256 integrity
'Sb4Xc/Oub27QW0MKlqK0sbq0Mm476jU7MgJaCzd/gKk='. The resource has been
blocked</p>
</blockquote>
<p>When I inspect the file, I can see script tags for the two .js files in question. I'm not 100% sure how this integrity check works. You can see the integrity attribute below with the SHAs.</p>
<pre><code><script src="/assets/vendor-0ada2c9fb4d3e07ad2f0c6a990945270.js" integrity="sha256-s3XY9h9v9IThygF6UkWRvWZsf7zeTqYJ1rLfDgg1bS0= sha512-k3lfqdeZw3OcsECfD3t99Hidh6IoRlFSoIu5nJk0FkLYHwx0q/rddirj4jh4J73dmLwKfG9mx0U5Zf6ZzRBsvA==" ></script>
<script src="/assets/g-web-56670cf0485cf52f54589091e2a25cc8.js" integrity="sha256-jNmWqO61OPijscQ5cHVSbB1Ms5wKX78ZACYdhrUo3X4= sha512-oiksgRFsqgHaCvXPvd3SAsUuX4zPeVClQBIgrOgIKNBMa3hPqCHknoFqDGRtSyfN4EdIkfk/x1zSqBqRvONAGQ==" ></script>
</code></pre>
<p>The Ember.js application is built using a Docker image, deployed to Kubernetes, and an AWS ELB running HAProxy handles the routing for this application, such that when I navigate to <code>staging.x.com</code> it routes to the internal DNS name in Kubernetes of this service (the Ember.js web app).</p>
<p>What's interesting to note;</p>
<ol>
<li>Running <code>ember serve</code> locally works and the applications loads fine.</li>
<li>Building and running the docker image locally works and the applications loads fine.</li>
</ol>
<p>The issue is only occurring on my deployed staging environment.</p>
| <p>I got this error when deploying a Blazor WebAssembly app.</p>
<blockquote>
<p>Failed to find a valid digest in the 'integrity' attribute for
resource</p>
</blockquote>
<p>And then it listed several NuGet packages. I manually deleted all <code>bin</code> and <code>obj</code> folders in the solution and then redeployed. After this, everything worked.</p>
<p><a href="https://github.com/dotnet/aspnetcore/issues/28692#issuecomment-783333400" rel="noreferrer">https://github.com/dotnet/aspnetcore/issues/28692#issuecomment-783333400</a></p>
|
<p>I want to set wildcard subdomain for my project, using k8s, nginx ingress controller, helm chart:</p>
<p>In <code>ingress.yaml</code> file:</p>
<pre><code>...
rules:
- host: {{ .Values.ingress.host }}
...
</code></pre>
<p>In <code>values.yaml</code> file, I change host <code>example.local</code> to <code>*.example.local</code>:</p>
<pre><code>...
ingress:
enabled: true
host: "*.example.local"
...
</code></pre>
<p>Then, when I install chart using helm chart:</p>
<pre><code>Error: YAML parse error on example/templates/ingress.yaml: error converting YAML to JSON: yaml: line 15: did not find expected alphabetic or numeric character
</code></pre>
<p>How can I fix it?</p>
<p>Thank for your support.</p>
| <p>YAML treats strings starting with asterisk in a special way - that's why the hostname with wildcards like <code>*.example.local</code> breaks the ingress on <code>helm install</code>.
In order to be recognized as strings, the values in <code>ingress.yaml</code> file should be quoted with <code>" "</code> characters:</p>
<pre class="lang-yaml prettyprint-override"><code>...
rules:
- host: "{{ .Values.ingress.host }}"
...
</code></pre>
<p>One more option here - adding <code>| quote</code> :</p>
<pre class="lang-yaml prettyprint-override"><code>...
rules:
- host: {{ .Values.ingress.host | quote }}
...
</code></pre>
<p>I've reproduced your issue, both these options worked correctly. More information on quoting special characters for YAML is <a href="https://yaml.org/spec/1.2.2/#53-indicator-characters" rel="noreferrer">here</a>.</p>
|
<p>I am trying to deploy a service using helm. The cluster is Azure AKS & I have one DNS zone associated with a cluster that can be used for ingress.</p>
<p>But the issue is that the DNS zone name is stored in a Kubernetes Secret, and I want to use it as the host in the ingress, like below:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
tls:
- hosts:
- {{ .Chart.Name }}.{{ .Values.tls.host }}
rules:
- host: {{ .Chart.Name }}.{{ .Values.tls.host }}
http:
paths:
-
pathType: Prefix
backend:
service:
name: {{ .Chart.Name }}
port:
number: 80
path: "/"
</code></pre>
<p>I want the <code>.Values.tls.host</code> value to come from the Secret. Currently, it is hardcoded in the <code>values.yaml</code> file.</p>
| <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>As of the current version of Helm (3.8.0), it does not seem possible to consume values directly from a Secret <strong>with the standard approach</strong>.
Based on the information from the <a href="https://helm.sh/" rel="nofollow noreferrer">Helm website</a>:</p>
<blockquote>
<p>A template directive is enclosed in <code>{{</code> and <code>}}</code> blocks.
The values that are passed into a template can be thought of as <em>namespaced
objects</em>, where a dot (<code>.</code>) separates each namespaced element.</p>
</blockquote>
<p><a href="https://helm.sh/docs/chart_template_guide/builtin_objects/" rel="nofollow noreferrer">Objects are passed into a template from the template engine</a> and can be:</p>
<ul>
<li>Release</li>
<li>Values</li>
<li>Chart</li>
<li>Files</li>
<li>Capabilities</li>
<li>Template</li>
</ul>
<p><a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">Contents for Values objects can come from multiple sources</a>:</p>
<blockquote>
<ul>
<li>The <code>values.yaml</code> file in the chart</li>
<li>If this is a subchart, the <code>values.yaml</code> file of a parent chart</li>
<li>A values file if passed into <code>helm install</code> or <code>helm upgrade</code> with the <code>-f</code> flag (<code>helm install -f myvals.yaml ./mychart</code>)</li>
<li>Individual parameters passed with <code>--set</code> (such as <code>helm install --set foo=bar ./mychart</code>)</li>
</ul>
</blockquote>
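<p>One partial workaround worth knowing about: Helm's <code>lookup</code> template function (Helm 3.1+) can read a Secret from the cluster at render time during <code>helm install</code>/<code>helm upgrade</code> (it returns an empty map with <code>helm template</code> or <code>--dry-run</code>). A sketch, where the Secret name <code>dns-zone</code> and key <code>host</code> are assumptions for illustration:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- /* fall back to the hardcoded value when lookup returns nothing */ -}}
{{- $host := .Values.tls.host }}
{{- $secret := lookup "v1" "Secret" .Release.Namespace "dns-zone" }}
{{- if $secret }}
{{- $host = index $secret.data "host" | b64dec }}
{{- end }}
spec:
  rules:
    - host: {{ printf "%s.%s" .Chart.Name $host | quote }}
</code></pre>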
|
<p>I want to return a complex value (dict) from a Helm function, so that I can do further processing in the template or other functions.</p>
<p>I have defined this function:</p>
<pre><code>{{- define "return-dict-function" -}}
key1: value1
key2: value2
{{- end -}}
</code></pre>
<p>And I can output the function value in my template:</p>
<pre><code>{{ include "return-dict-function" . | nindent 2 }}
</code></pre>
<p>But how can I do further processing with the data?</p>
| <p>There are several solutions to return complex values and do further processing:</p>
<p><strong>1) function returns plain yaml</strong></p>
<p>Take the example function <code>return-dict-function</code> from the question.
If you use <code>fromYaml</code> you'll get a <code>dict</code>:</p>
<pre><code>{{ $dict := include "return-dict-function" . | fromYaml }}
type: {{ $dict | typeOf }}
{{- range $key, $value := $dict }}
simple-function-{{ $key }} : processed-value-{{ $value }}
{{- end -}}
</code></pre>
<p>Output:</p>
<pre><code>type: map[string]interface {}
simple-function-key1 : processed-value-value1
simple-function-key2 : processed-value-value2
</code></pre>
<p><strong>2) function needs to return dict</strong></p>
<p><strong>a) serialize to json</strong></p>
<p>If you have a dict which should be returned you can serialize the dict with <code>toJson</code></p>
<pre><code>{{- define "return-dict-function-json" -}}
{{- $result := dict "key1" "value1" "key2" "value2" }}
{{- $result | toJson -}}
{{- end -}}
</code></pre>
<p>Later you will deserialize with <code>fromJson</code></p>
<pre><code>{{ $dict := include "return-dict-function-json" . | fromJson }}
{{- range $key, $value := $dict }}
json-function-{{ $key }} : processed-value-{{ $value }}
{{- end -}}
</code></pre>
<p><strong>b) serialize to yaml</strong></p>
<p>You can also serialize the dict with <code>toYaml</code></p>
<pre><code>{{- define "return-dict-function-yaml" -}}
{{- $result := dict "key1" "value1" "key2" "value2" }}
{{- $result | toYaml -}}
{{- end -}}
</code></pre>
<p>Then you need to deserialize with <code>fromYaml</code></p>
<pre><code>{{ $dict := include "return-dict-function-yaml" . | fromYaml }}
{{- range $key, $value := $dict }}
yaml-function-{{ $key }} : processed-value-{{ $value }}
{{- end -}}
</code></pre>
<p><strong>Notes and further readings</strong></p>
<ul>
<li>Of course this also works with nested values</li>
<li><a href="https://blog.flant.com/advanced-helm-templating/" rel="nofollow noreferrer">Advanced Helm Templating</a></li>
<li><a href="https://helm.sh/docs/chart_template_guide/function_list/" rel="nofollow noreferrer">Helm Docs</a></li>
</ul>
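<p>As a small illustration of the nested-values note above, a function returning nested YAML (hypothetical names) can be processed the same way; after <code>fromYaml</code>, the nested value is reachable with the built-in <code>index</code> function:</p>
<pre><code>{{- define "return-nested-function" -}}
parent:
  child1: value1
  child2: value2
{{- end -}}

{{ $dict := include "return-nested-function" . | fromYaml }}
nested-value: {{ index $dict "parent" "child1" }}
</code></pre>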
|
<p>I have a use-case for concurrent restart of all pods in a statefulset.</p>
<p>Does kubernetes statefulset support concurrent restart of all pods?</p>
<p>According to the statefulset documentation, this can be accomplished by setting the pod update policy to parallel as in this example:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-db
spec:
  podManagementPolicy: Parallel
  replicas: 3
</code></pre>
<p>However this does not seem to work in practice, as demonstrated on this statefulset running on EKS:</p>
<ul>
<li>Apply this:</li>
</ul>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: producer
  namespace: ragnarok
spec:
  selector:
    matchLabels:
      app: producer
  replicas: 10
  podManagementPolicy: "Parallel"
  serviceName: producer-service
  template:
    metadata:
      labels:
        app: producer
    spec:
      containers:
      - name: producer
        image: archbungle/load-tester:pulsar-0.0.49
        imagePullPolicy: IfNotPresent
</code></pre>
<p>Rollout restart happens in sequence as if disregarding the rollout policy setting:</p>
<pre><code>
(base) welcome@Traianos-MacBook-Pro eks-deploy % kubectl get pods -n ragnarok | egrep producer
producer-0 1/1 Running 0 3m58s
producer-1 1/1 Running 0 3m56s
producer-2 1/1 Running 0 3m53s
producer-3 1/1 Running 0 3m47s
producer-4 1/1 Running 0 3m45s
producer-5 1/1 Running 0 3m43s
producer-6 1/1 Running 1 3m34s
producer-7 0/1 ContainerCreating 0 1s
producer-8 1/1 Running 0 16s
producer-9 1/1 Running 0 19s
(base) welcome@Traianos-MacBook-Pro eks-deploy % kubectl get pods -n ragnarok | egrep producer
producer-0 1/1 Running 0 4m2s
producer-1 1/1 Running 0 4m
producer-2 1/1 Running 0 3m57s
producer-3 1/1 Running 0 3m51s
producer-4 1/1 Running 0 3m49s
producer-5 1/1 Running 0 3m47s
producer-6 0/1 Terminating 1 3m38s
producer-7 1/1 Running 0 5s
producer-8 1/1 Running 0 20s
producer-9 1/1 Running 0 23s
(base) welcome@Traianos-MacBook-Pro eks-deploy % kubectl get pods -n ragnarok | egrep producer
producer-0 1/1 Running 0 4m8s
producer-1 1/1 Running 0 4m6s
producer-2 1/1 Running 0 4m3s
producer-3 1/1 Running 0 3m57s
producer-4 1/1 Running 0 3m55s
producer-5 0/1 Terminating 0 3m53s
producer-6 1/1 Running 0 4s
producer-7 1/1 Running 0 11s
producer-8 1/1 Running 0 26s
producer-9 1/1 Running 0 29s
</code></pre>
<p>As the documentation points out, parallel pod management is effective only for scaling operations: <code>This option only affects the behavior for scaling operations. Updates are not affected.</code></p>
<p>Maybe you can try something like
<code>kubectl scale statefulset producer --replicas=0 -n ragnarok</code>
and
<code>kubectl scale statefulset producer --replicas=10 -n ragnarok</code></p>
<p>According to the documentation, scaling down to zero and back up with the Parallel policy should delete and create all the pods together.</p>
<p>Reference : <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#parallel-pod-management</a></p>
|
<p>In the new <strong>Kubespray</strong> release, <strong>containerd</strong> is the default container runtime, but in the old release it isn't.</p>
<p>I want to switch from Docker to containerd in the old version and install the cluster with it.</p>
<p>When I look at the <code>offline.yml</code>, I don't see any option for <strong>containerd</strong> on <strong>RedHat</strong>. Below is the code from <code>offline.yml</code>:</p>
<pre><code># CentOS/Redhat/AlmaLinux/Rocky Linux
## Docker / Containerd
docker_rh_repo_base_url: "{{ yum_repo }}/docker-ce/$releasever/$basearch"
docker_rh_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
# Fedora
## Docker
docker_fedora_repo_base_url: "{{ yum_repo }}/docker-ce/{{ ansible_distribution_major_version }}/{{ ansible_architecture }}"
docker_fedora_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
## Containerd
containerd_fedora_repo_base_url: "{{ yum_repo }}/containerd"
containerd_fedora_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
# Debian
## Docker
docker_debian_repo_base_url: "{{ debian_repo }}/docker-ce"
docker_debian_repo_gpgkey: "{{ debian_repo }}/docker-ce/gpg"
## Containerd
containerd_debian_repo_base_url: "{{ ubuntu_repo }}/containerd"
containerd_debian_repo_gpgkey: "{{ ubuntu_repo }}/containerd/gpg"
containerd_debian_repo_repokey: 'YOURREPOKEY'
# Ubuntu
## Docker
docker_ubuntu_repo_base_url: "{{ ubuntu_repo }}/docker-ce"
docker_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/docker-ce/gpg"
## Containerd
containerd_ubuntu_repo_base_url: "{{ ubuntu_repo }}/containerd"
containerd_ubuntu_repo_gpgkey: "{{ ubuntu_repo }}/containerd/gpg"
containerd_ubuntu_repo_repokey: 'YOURREPOKEY'
</code></pre>
<p>How should I set containerd in <code>offline.yml</code>, and how can I find which containerd version is stable with this Kubespray release?</p>
<p>Thanks for answering</p>
<p>It's always worth digging into the documentation's history. Since you're looking for an outdated version, see <a href="https://github.com/kubernetes-sigs/kubespray/commit/8f2b0772f9ca2d146438638e1fb9f7484cbdbd55#:%7E:text=calicoctl%2Dlinux%2D%7B%7B%20image_arch%20%7D%7D%22-,%23%20CentOS/Redhat,extras_rh_repo_gpgkey%3A%20%22%7B%7B%20yum_repo%20%7D%7D/containerd/gpg%22,-%23%20Fedora" rel="nofollow noreferrer">this fragment</a> of offline.yml:</p>
<pre class="lang-yaml prettyprint-override"><code># CentOS/Redhat
## Docker
## Docker / Containerd
docker_rh_repo_base_url: "{{ yum_repo }}/docker-ce/$releasever/$basearch"
docker_rh_repo_gpgkey: "{{ yum_repo }}/docker-ce/gpg"
## Containerd
extras_rh_repo_base_url: "{{ yum_repo }}/centos/{{ ansible_distribution_major_version }}/extras/$basearch"
extras_rh_repo_gpgkey: "{{ yum_repo }}/containerd/gpg"
</code></pre>
<p>Reference: <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer"><em>kubespray documentation</em></a>.</p>
|
<p>I have kubernetes job and I would like to get his pod logs in the jenkins pipeline.</p>
<p>So I try to grep pod name to the jenkins variable and then get logs.</p>
<pre><code>POD_NAME = sh script: "kubectl describe jobs.batch ${JOB_NAME} | grep 'Created pod' | cut -d':' -f2"
echo "${POD_NAME}"
sh "kubectl logs --follow ${POD_NAME}"
</code></pre>
<p>But I got <code>null</code> in the <code>POD_NAME</code> variable.</p>
<p>First, note that in a Jenkins pipeline the <code>sh</code> step returns <code>null</code> unless you call it with <code>returnStdout: true</code>, which is why your <code>POD_NAME</code> variable ends up empty.</p>
<p>I also assume that your Jenkins controller or agent is able to query the Kubernetes API with kubectl because it has a serviceaccount or some other form of credential to access Kubernetes.</p>
<p>If that is true, I propose that you use a label to identify the pods created by the job and to query anything related to them.</p>
<p>You can do that by adding a label to the <code>.spec.template.metadata.labels</code> section as shown below and then query with kubectl and the <code>--selector</code> flag:</p>
<pre><code>---
apiVersion: batch/v1
kind: Job
metadata:
  name: MYAPP
  ...
spec:
  template:
    metadata:
      ...
      labels:
        test: value
    spec:
      containers:
      - name: MYAPP
        image: python:3.7.6-alpine3.10
        ...
</code></pre>
<p><code>kubectl logs --follow --selector test=value</code></p>
<p>Use <code>kubectl logs --help</code> to get further information and examples.</p>
|
<p>I have a custom Container Image for postgresql and try to run this as a Stateful kubernetes application</p>
<p>The image knows 2 Volumes which are mounted into</p>
<ol>
<li><code>/opt/db/data/postgres/data</code> (the <code>$PGDATA</code> directory of my postgres intallation)</li>
<li><code>/opt/db/backup</code></li>
</ol>
<p>the backup Volume contains a deeper folder structure which is defined in the Dockerfile</p>
<h3>Dockerfile</h3>
<p>(excerpts)</p>
<pre><code>...
...
# Environment variables required for this build (do NOT change)
# -------------------------------------------------------------
...
ENV PGDATA=/opt/db/data/postgres/data
ENV PGBASE=/opt/db/postgres
...
ENV PGBACK=/opt/db/backup/postgres/backups
ENV PGARCH=/opt/db/backup/postgres/archives
# Set up user and directories
# ---------------------------
RUN mkdir -p $PGBASE $PGBIN $PGDATA $PGBACK $PGARCH && \
useradd -d /home/postgres -m -s /bin/bash --no-log-init --uid 1001 --user-group postgres && \
chown -R postgres:postgres $PGBASE $PGDATA $PGBACK $PGARCH && \
chmod a+xr $PGBASE
# set up user env
# ---------------
USER postgres
...
...
# bindings
# --------
VOLUME ["$PGDATA", "$DBBASE/backup"]
...
...
# Define default command to start Database.
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres", "-D", "/opt/db/data/postgres/data"]
</code></pre>
<p>When I run this as a container or single pod in kubernetes without any <code>volumeMounts</code> all is good and the folder structure looks like it should</p>
<pre><code>find /opt/db/backup -ls
2246982 4 drwxr-xr-x 3 root root 4096 Feb 18 09:00 /opt/db/backup
2246985 4 drwxr-xr-x 4 root root 4096 Feb 18 09:00 /opt/db/backup/postgres
2246987 4 drwxr-xr-x 2 postgres postgres 4096 Feb 11 14:59 /opt/db/backup/postgres/backups
2246986 4 drwxr-xr-x 2 postgres postgres 4096 Feb 11 14:59 /opt/db/backup/postgres/archives
</code></pre>
<p>However, once I run this based on the StatefulSet below (which mounts two Volumes into the pod at <code>/opt/db/data/postgres/data</code> & <code>/opt/db/backup</code>, while the image contains a folder structure that goes deeper than the mount point, as listed above), this is not being carried out as intended:</p>
<pre><code>find /opt/db/backup -ls
2 4 drwxr-xr-x 3 postgres postgres 4096 Feb 17 16:40 /opt/db/backup
11 16 drwx------ 2 postgres postgres 16384 Feb 17 16:40 /opt/db/backup/lost+found
</code></pre>
<p><code>/opt/db/backup/postgres/backups</code> & <code>/opt/db/backup/postgres/archives</code>, inherent in the image, are gone.</p>
<p>Can anybody point me to where to start looking for a solution in this?</p>
<h3>Statefulset</h3>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: postgres-stateful
  name: postgres-stateful
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: postgres
    app: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: postgres
      app.kubernetes.io/component: database
      app.kubernetes.io/part-of: postgres
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
        app.kubernetes.io/name: postgres
        app.kubernetes.io/component: database
        app.kubernetes.io/part-of: postgres
    spec:
      serviceAccountName: default
      initContainers: # Give `postgres` user (id 1001) permissions to mounted volumes
      - name: take-volume-mounts-ownership
        image: dev-local.dev.dlz.net/postgresql:14.2-deb11
        securityContext:
          readOnlyRootFilesystem: true
        env:
        - name: "PGDATA"
          value: "/opt/db/data/postgres/data"
        command: [ "/bin/sh" ]
        args: ["-c", "chown -R 1001:1001 /opt/db/data/postgres /opt/db/backup /tmp" ]
        volumeMounts:
        - name: pv-data
          mountPath: /opt/db/data/postgres
        - name: pv-backup
          mountPath: /opt/db/backup # /opt/db/backup/postgres
        - name: emptydir-tmp
          mountPath: /tmp
      containers:
      - name: postgres
        image: dev-local.dev.dlz.net/postgresql:14.2-deb11
        imagePullPolicy: Always
        readinessProbe:
          exec:
            command: ["pg_isready", "-q"]
          periodSeconds: 10
          initialDelaySeconds: 7
          timeoutSeconds: 2
        livenessProbe:
          exec:
            command: ["psql", "-q", "-c", "SELECT 1 WHERE 1=0"]
          periodSeconds: 15
          initialDelaySeconds: 20
          timeoutSeconds: 2
        env:
        - name: "PGDATA"
          value: "/opt/db/data/postgres/data"
        envFrom:
        - configMapRef:
            name: postgres-configuration
        ports:
        - containerPort: 5432
          name: postgresdb
        resources:
          requests:
            memory: "256Mi"
            cpu: "50m"
          limits:
            memory: "1Gi"
            cpu: "1"
        securityContext:
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1001
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: pv-data
          mountPath: /opt/db/data/postgres/data # /var/lib/postgresql/data
        - name: pv-backup
          mountPath: /opt/db/backup # /opt/db/backup/postgres
        - name: emptydir-tmp
          mountPath: /tmp
      volumes:
      - name: pv-data
        persistentVolumeClaim:
          claimName: pgdata-ina-pvc
      - name: pv-backup
        persistentVolumeClaim:
          claimName: pgbackup-ina-pvc
      - name: emptydir-tmp
        emptyDir: {}
</code></pre>
<p>The directories you created in the Dockerfile are overlaid when you mount a persistent volume at the same path. You can re-create the directory structure in your "take-volume-mounts-ownership" init container:</p>
<pre><code>...
  initContainers:
  - name: take-volume-mounts-ownership
    ...
    env:
    - name: PGDATA
      value: /opt/db/data/postgres/data
    - name: PGBASE
      value: /opt/db/postgres
    - name: PGBACK
      value: /opt/db/backup/postgres/backups
    - name: PGARCH
      value: /opt/db/backup/postgres/archives
    ...
    args: ["-c", "mkdir -p $PGBASE $PGDATA $PGBACK $PGARCH && chown -R 1001:1001 /opt/db/data/postgres /opt/db/backup /tmp" ]
    ...
</code></pre>
|
<p>Based on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">the documentation</a> <code>initialDelaySeconds</code> gives some delay before the first readiness probe is checking the pod. But the only effect of readiness probe failure is that the pod is marked unready: Does not get traffic from services and also affects the deployment state.</p>
<p>So giving readiness checks some delay is not really effective for the majority of applications: we want to make the pod ready as soon as we can, meaning we need to check whether it's ready as soon as possible.</p>
| <p>Things to consider:</p>
<ol>
<li>No reason to set it earlier than the earliest possible startup time.</li>
<li>If you set it too late, you waste resources while pods sit idle without receiving traffic: a 1-minute delay across 60 pods adds up to a cumulative hour of unused pod time.</li>
<li>How much resource does the readiness Probe consume? Does it make external calls (Database, REST, etc - IMHO this should be avoided)?</li>
<li>Can the pod serve the traffic? Are all the necessary connections (DB, REST) established, caches populated? - No reason to accept traffic if the pod cannot connect to the database/backend service.</li>
</ol>
<p>To summarize:</p>
<ul>
<li>You want to minimize startup time</li>
<li>You want to minimize readiness calls</li>
</ul>
<p>Measure it, and set it to the earliest possible time a pod can start.
If your pods regularly fail their first readiness checks, increase it.</p>
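<p>For illustration, a probe tuned along these lines might look as follows; the endpoint and numbers are placeholders that should come from measuring your own application's startup, not recommendations:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthz          # assumed health endpoint
    port: 8080
  initialDelaySeconds: 5    # earliest measured startup time
  periodSeconds: 5          # keep probe frequency (and cost) low
  timeoutSeconds: 1
</code></pre>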
|
<p>I need to persist the heap dump when the java process gets OOM and the pod is restarted.</p>
<p>I have following added in the jvm args</p>
<pre><code>-XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/dumps
</code></pre>
<p>...and emptydir is mounted on the same path.</p>
<p>But the issue is if the pod gets restarted and if it gets scheduled on a different node, then we are losing the heap dump. How do I persist the heap dump even if the pod is scheduled to a different node?</p>
<p>We are using AWS EKS and we are having more than 1 replica for the pod.</p>
<p>Could anyone help with this, please?</p>
| <p>As writing to EFS is too slow in your case, there is another option for AWS EKS - <code>awsElasticBlockStore</code>.</p>
<blockquote>
<p>The contents of an EBS volume are persisted and the volume is unmounted when a pod is removed. This means that an EBS volume can be pre-populated with data, and that data can be shared between pods.</p>
</blockquote>
<p><strong>Note</strong>: You must create an EBS volume by using aws ec2 create-volume or the AWS API before you can use it.</p>
<p>There are some restrictions when using an awsElasticBlockStore volume:</p>
<ul>
<li>the nodes on which pods are running must be AWS EC2 instances</li>
<li>those instances need to be in the same region and availability zone as the EBS volume</li>
<li>EBS only supports a single EC2 instance mounting a volume</li>
</ul>
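<p>For reference, a minimal pod spec using such a pre-created EBS volume could look like the sketch below; the volume ID, image name and mount path are placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
  - name: app
    image: my-java-app:latest
    volumeMounts:
    - mountPath: /opt/dumps
      name: dumps
  volumes:
  - name: dumps
    awsElasticBlockStore:
      volumeID: "vol-0123456789abcdef0"  # created beforehand with `aws ec2 create-volume`
      fsType: ext4
</code></pre>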
<p>See the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore" rel="nofollow noreferrer">official k8s documentation page</a> on this topic,
and <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/" rel="nofollow noreferrer">How to use persistent storage in EKS</a>.</p>
|
<p>I've been having a daily issue with my Kubernetes cluster (running on 1.18) where one of the nodes will go over 100% CPU utilisation, and Kubernetes will fail to connect external visitors to my pods. (A website outage, basically)</p>
<p>The strange thing is the pods are always sitting at a comfortable 30% (or lower!) CPU. So the application itself seems okay.</p>
<p>When I <code>describe</code> the node in question, I see mention of a <code>node-problem-detector</code> timeout.</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ --- ---- -------
Normal NodeNotSchedulable 10m kubelet Node nodepoo1-vmss000007 status is now: NodeNotSchedulable
Warning KubeletIsDown 9m44s (x63 over 5h21m) kubelet-custom-plugin-monitor Timeout when running plugin "/etc/node-problem-detector.d/plugin/check_kubelet.s"
Warning ContainerRuntimeIsDown 9m41s (x238 over 5h25m) container-runtime-custom-plugin-monitor Timeout when running plugin "/etc/node-problem-detector.d/plugin/check_runtime.s"
</code></pre>
<p>My current approach has been to run three nodes on my nodepool and effectively babysit Kubernetes by cordoning the troublesome node and moving all the pods onto one of the other nodes during the monitoring outage. After 15 minutes, once things are back to normal, I uncordon the affected node and the cycle starts again.</p>
<p>I was particularly unlucky this weekend where I had three CPU spikes within 24 hours.</p>
<p><a href="https://i.stack.imgur.com/vjNeO.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/vjNeO.jpg" alt="CPU chart showing spike from one of three nodes" /></a></p>
<p>How can I go about fixing this issue? I can't seem to find any information on the <code>Timeout when running plugin "/etc/node-problem-detector.d/plugin/check_kubelet.s"</code> problem I'm seeing.</p>
<p>You could try to open an <code>ssh</code> connection to the node and then check which process(es) consume CPU using <code>top</code>. If this process runs in a pod and you have <code>crictl</code> installed on the node, you can use <a href="https://github.com/k8s-school/pid2pod" rel="nofollow noreferrer">https://github.com/k8s-school/pid2pod</a> to retrieve the pod which is running the process.</p>
|
<p>I have an application running on my home cluster.<br>
My cluster is running K3S.<br>
The cluster exists of 5 Raspberry Pi's 3B and 1 Ubuntu VM.<br>
One of my deployments is Domoticz.<br>
This uses a SQLite database to store data.<br>
I'm using this as a StorageClass. <a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner</a><br>
The NFS is hosted in OpenMediaVault with arguments subtree_check,insecure,no_root_squash</p>
<p>When I store this database (14MB) on a PV volume linked to the NFS StorageClass, the application becomes very slow.<br>
When I use the StorageClass local-path, the application is fast.<br>
Problem is now that when that node dies, the Deployment doesn't start because of the node-affinity linked to the PVC.<br>
I'm out of ideas to fix this issue.<br>
<br>
Is there another SC better suited to my setup?<br>
Is there a tweak to the SC local-path, so it's not linked to a node? (yes I know, database will not be transferred to the new node)</p>
<p>I fixed this by using Longhorn instead of NFS.
Thanks to Andrew Skorkin for the tip.</p>
|
<p>I'm hitting a 3rd-party API, and the endpoint can handle 1 request in ~2 s.</p>
<p>A naive way is to spin up a couple of servers on DigitalOcean or similar to make it possible to send more requests. Basically, one responsibility for each server, so the ~2s wait time is not a problem.</p>
<p>However, are there techniques I could use to do this locally?</p>
<p>Docker came to mind, but before I start figuring things out, I think it's helpful to ask here first: is that even possible?</p>
| <p>Spinning up more containers does mean you could send more requests per second, in theory.</p>
<p><em>BUT</em>, every container will still have the same exit point - be it your PC, laptop or router (all requests originate from the same source). This means that, depending on how the 3rd party enforces QPS restrictions, all but one request every 2 seconds will be rejected.</p>
<p>What you need in this scenario is a botnet, which I highly discourage, or other type of distributed system, with multiple exit points.</p>
<p><strong>Bypassing 3rd party restrictions may be (probably is) against EULA. Check the 3rd party agreement, to make sure you are not breaking it.</strong></p>
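<p>If the goal is instead to stay within the provider's limit, a small client-side throttle can serialize the outgoing calls of all local workers. The sketch below is illustrative only; the actual HTTP request is left out:</p>
<pre class="lang-python prettyprint-override"><code>import threading
import time

class Throttle:
    """Allow at most one call per `interval` seconds, across threads."""

    def __init__(self, interval):
        self.interval = interval
        self._lock = threading.Lock()
        self._next_free = 0.0

    def wait(self):
        # Block until it is this caller's turn, spacing calls `interval` apart.
        with self._lock:
            now = time.monotonic()
            if now < self._next_free:
                time.sleep(self._next_free - now)
            self._next_free = time.monotonic() + self.interval

throttle = Throttle(2.0)  # the endpoint handles ~1 request per 2 seconds

def call_api():
    throttle.wait()
    # ... issue the real HTTP request here ...
</code></pre>
<p>This keeps a single machine from exceeding the quota no matter how many local workers share the <code>Throttle</code> instance.</p>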
|
<p>I am <a href="https://minikube.sigs.k8s.io/docs/handbook/mount/" rel="nofollow noreferrer">mounting a filesystem</a> on minikube:</p>
<pre><code>minikube mount /var/files/:/usr/share/ -p multinode-demo
</code></pre>
<p>But I found two complications:</p>
<ul>
<li>My cluster has two nodes. The pods in the first node are able to access the host files at <code>/var/files/</code>, but the pods in the second node are not. What can be the reason for that?</li>
<li>I have to mount the directory before the pods have been created. If I <code>apply</code> my deployment first, and then do the <code>mount</code>, the pods never get the filesystem. Is Kubernetes not able to apply the mounting later, over an existing deployment that required it?</li>
</ul>
| <p>As mentioned in the comments section, I believe your problem is related to the following GitHub issues: <a href="https://github.com/kubernetes/minikube/issues/12165#issuecomment-895104495" rel="nofollow noreferrer">Storage provisioner broken for multinode mode</a> and <a href="https://github.com/kubernetes/minikube/issues/11765#issuecomment-868965821" rel="nofollow noreferrer">hostPath permissions wrong on multi node</a>.</p>
<p>In my opinion, you might be interested in using NFS mounts instead, and I'll briefly describe this approach to illustrate how it works.</p>
<hr />
<p>First we need to install the NFS Server and create the NFS export directory on our host:<br />
<strong>NOTE:</strong> I'm using Debian 10 and your commands may be different depending on your Linux distribution.</p>
<pre><code>$ sudo apt install nfs-kernel-server -y
$ sudo mkdir -p /mnt/nfs_share && sudo chown -R nobody:nogroup /mnt/nfs_share/
</code></pre>
<p>Then, grant permissions for accessing the NFS server and export the NFS share directory:</p>
<pre><code>$ cat /etc/exports
/mnt/nfs_share *(rw,sync,no_subtree_check,no_root_squash,insecure)
$ sudo exportfs -a && sudo systemctl restart nfs-kernel-server
</code></pre>
<p>We can use the <code>exportfs -v</code> command to display the current export list:</p>
<pre><code>$ sudo exportfs -v
/mnt/nfs_share <world>(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,rw,insecure,no_root_squash,no_all_squash)
</code></pre>
<p>Now it's time to create a minikube cluster:</p>
<pre><code>$ minikube start --nodes 2
$ kubectl get nodes
NAME STATUS ROLES VERSION
minikube Ready control-plane,master v1.23.1
minikube-m02 Ready <none> v1.23.1
</code></pre>
<p>Please note, we're going to use the <code>standard</code> <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a>:</p>
<pre><code>$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
standard (default) k8s.io/minikube-hostpath Delete Immediate false
</code></pre>
<p>Additionally, we need to find the Minikube gateway address:</p>
<pre><code>$ minikube ssh
docker@minikube:~$ ip r | grep default
default via 192.168.49.1 dev eth0
</code></pre>
<p>Let's create <code>PersistentVolume</code> and <code>PersistentVolumeClaim</code> which will use the NFS share:<br />
<strong>NOTE:</strong> The address <code>192.168.49.1</code> is the Minikube gateway.</p>
<pre><code>$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  nfs:
    server: 192.168.49.1
    path: "/mnt/nfs_share"
$ kubectl apply -f pv.yaml
persistentvolume/nfs-volume created
$ cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
  namespace: default
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
$ kubectl apply -f pvc.yaml
persistentvolumeclaim/nfs-claim created
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs-volume 1Gi RWX Retain Bound default/nfs-claim standard 71s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/nfs-claim Bound nfs-volume 1Gi RWX standard 56s
</code></pre>
<p>Now we can use the NFS PersistentVolume - to test if it works properly I've created <code>app-1</code> and <code>app-2</code> Deployments:<br />
<strong>NOTE:</strong> The <code>app-1</code> will be deployed on a different node than <code>app-2</code> (I've specified <code>nodeName</code> in the PodSpec).</p>
<pre><code>$ cat app-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: app-share
          mountPath: /mnt/app-share
      nodeName: minikube
      volumes:
      - name: app-share
        persistentVolumeClaim:
          claimName: nfs-claim
$ kubectl apply -f app-1.yaml
deployment.apps/app-1 created
$ cat app-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-2
  name: app-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-2
  template:
    metadata:
      labels:
        app: app-2
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: app-share
          mountPath: /mnt/app-share
      nodeName: minikube-m02
      volumes:
      - name: app-share
        persistentVolumeClaim:
          claimName: nfs-claim
$ kubectl apply -f app-2.yaml
deployment.apps/app-2 created
$ kubectl get deploy,pods -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/app-1 1/1 1 1 24s nginx nginx app=app-1
deployment.apps/app-2 1/1 1 1 21s nginx nginx app=app-2
NAME READY STATUS RESTARTS AGE NODE
pod/app-1-7874b8d7b6-p9cb6 1/1 Running 0 23s minikube
pod/app-2-fddd84869-fjkrw 1/1 Running 0 21s minikube-m02
</code></pre>
<p>To verify that our NFS share works as expected, we can create a file in the <code>app-1</code> and then check that we can see that file in the <code>app-2</code> and on the host:</p>
<p><em>app-1:</em></p>
<pre><code>$ kubectl exec -it app-1-7874b8d7b6-p9cb6 -- bash
root@app-1-7874b8d7b6-p9cb6:/# df -h | grep "app-share"
192.168.49.1:/mnt/nfs_share 9.7G 7.0G 2.2G 77% /mnt/app-share
root@app-1-7874b8d7b6-p9cb6:/# touch /mnt/app-share/app-1 && echo "Hello from the app-1" > /mnt/app-share/app-1
root@app-1-7874b8d7b6-p9cb6:/# exit
exit
</code></pre>
<p><em>app-2:</em></p>
<pre><code>$ kubectl exec -it app-2-fddd84869-fjkrw -- ls /mnt/app-share
app-1
$ kubectl exec -it app-2-fddd84869-fjkrw -- cat /mnt/app-share/app-1
Hello from the app-1
</code></pre>
<p><em>host:</em></p>
<pre><code>$ ls /mnt/nfs_share/
app-1
$ cat /mnt/nfs_share/app-1
Hello from the app-1
</code></pre>
|
<p>I have set up the Kubernetes CronJob to prevent concurrent runs <a href="https://stackoverflow.com/a/62892617/2096986">like here</a> using <code>parallelism: 1</code>, <code>completions: 1</code>, and <code>concurrencyPolicy: Forbid</code>. However, when I try to create a job manually, I am allowed to do that.</p>
<pre><code>$ kubectl get cronjobs
...
$ kubectl create job new-cronjob-1642417446000 --from=cronjob/original-cronjob-name
job.batch/new-cronjob-1642417446000 created
$ kubectl create job new-cronjob-1642417446001 --from=cronjob/original-cronjob-name
job.batch/new-cronjob-1642417446001 created
</code></pre>
<p>I was expecting that a new job would not be created, or that it would be created and fail with a state referencing the <code>concurrencyPolicy</code>. Since the <code>concurrencyPolicy</code> property is part of the CronJob spec, not the PodSpec, shouldn't it prevent a new job from being created? Why doesn't it?</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-name
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  schedule: "0 * * * *"
  suspend: false
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 3
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      parallelism: 1
      completions: 1
      backoffLimit: 3
      template:
        spec:
          restartPolicy: Never
</code></pre>
<p>After reading the <a href="https://kubernetes.io/docs/tasks/job/_print/#creating-a-cron-job" rel="nofollow noreferrer">official documentation</a> about <code>kubectl create -f</code>, I didn't find a way to prevent that. Is this behavior expected? If it is, I think I should check inside my Docker image (an app written in Java) whether a job is already running. How would I do that?</p>
<p>The <code>concurrencyPolicy: Forbid</code> spec only prevents concurrent pod creations and executions of <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="nofollow noreferrer">the same <code>CronJob</code></a>. It does not apply across separate Jobs even though they effectively execute the same commands using the same Docker image. As far as Kubernetes is concerned, they are different jobs. If this were Java, what Kubernetes does is <code>if (stringA == stringB)</code> and not <code>if (stringA.equals(stringB))</code>.</p>
<blockquote>
<p>If it is, I think I should check inside my Docker image (app written in Java) if there is already a cronjob running. How would I do that?</p>
</blockquote>
<p>One way of doing that is to use a distributed lock mechanism backed by a separate component such as Redis. Here is a link to the guide for the Java Redis library redisson for that purpose: <a href="https://github.com/redisson/redisson/wiki/8.-distributed-locks-and-synchronizers" rel="nofollow noreferrer">https://github.com/redisson/redisson/wiki/8.-distributed-locks-and-synchronizers</a>. Below is a code sample taken from that page:</p>
<pre><code>RLock lock = redisson.getLock("myLock");
// wait for lock acquisition up to 100 seconds
// and automatically unlock it after 10 seconds
boolean res = lock.tryLock(100, 10, TimeUnit.SECONDS);
if (res) {
    // do operation
} else {
    // some other execution is doing it so just chill
}
</code></pre>
|
<p>I need to add a Java command to a container. I'm using a Helm chart for it.</p>
<pre><code>helm install mychart chart/mychart --set "command.cmd={java,-Disurz_dir=/mnt/isurz,-Dnifi_url=http://srv-ft-ads-01:9090/nifi-api/processors/20e6a079-3721-a43a-0fed7a8f1236,-Des_host_and_port=xxx:9200,-jar,*.jar}"
</code></pre>
<p>The result isn't what I expected:</p>
<pre><code>spec:
  containers:
  - command:
    - java
    - -Disurz_dir=/mnt/isurz
    - -Dnifi_url=http://srv-ft-ads-01:9090/nifi-api/processors/20e6a079-3721-a43a-0fed7a8f1236
    - -Des_host_and_port=xxx:9200
    - -jar
    - '*.jar'
</code></pre>
<p>I don't understand why Kubernetes puts quotes around <code>*.jar</code>.</p>
<p>If I specify <code>myapp.jar</code> instead of <code>*.jar</code>, then Kubernetes doesn't add quotes:</p>
<pre><code>helm install mychart chart/mychart --set "command.cmd={java,-Disurz_dir=/mnt/isurz,-Dnifi_url=http://srv-ft-ads-01:9090/nifi-api/processors/20e6a079-3721-a43a-0fed7a8f1236,-Des_host_and_port=xxx:9200,-jar,myapp.jar}"
</code></pre>
<p>Result:</p>
<pre><code> spec:
containers:
- command:
- java
- -Disurz_dir=/mnt/isurz
- -Dnifi_url=http://srv-ft-ads-01:9090/nifi-api/processors/20e6a079-3721-a43a-0fed7a8f1236
- -Des_host_and_port=xxx:9200
- -jar
- myapp.jar
</code></pre>
| <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>As @larsks correctly mentioned in the comment:</p>
<blockquote>
<p>The quotes are necessary because an unquoted * can't start a YAML
value. They are not part of the value itself.</p>
</blockquote>
<p>For example, this is from the <a href="https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html#gotchas" rel="nofollow noreferrer">YAML syntax</a> document:</p>
<blockquote>
<p>In addition to ' and " there are a number of characters that are
special (or reserved) and cannot be used as the first character of an
unquoted scalar: [] {} > | * & ! % # ` @ ,.</p>
</blockquote>
<p>In much more detail, with examples, it's covered in the <a href="https://yaml.org/spec/1.2.2/#53-indicator-characters" rel="nofollow noreferrer">YAML spec</a>.</p>
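<p>To illustrate the rule (a minimal sketch, not taken from the chart): once quoted, the asterisk is just part of an ordinary string value:</p>

```yaml
command:
  - myapp.jar   # plain scalar: fine, starts with a legal character
  - '*.jar'     # must be quoted, because '*' cannot start a plain scalar
  - "*.jar"     # double quotes work too; the value is still *.jar
```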
|
<p>I am new to Kubernetes, and I want to run a simple Flask program on Docker in Kubernetes. The image works successfully in Docker, but after I start the deployment with <code>kubectl apply -f k8s.yaml</code> and execute <code>minikube service flask-app-service</code>, the web request fails with ERR_CONNECTION_REFUSED and the pods show <code>Error: ErrImageNeverPull</code>.</p>
<p>app.py:</p>
<pre class="lang-python prettyprint-override"><code># flask_app/app/app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
return "Hello, World!"
if __name__ == '__main__':
app.debug = True
app.run(debug=True, host='0.0.0.0')
</code></pre>
<p>Dockerfile:</p>
<pre class="lang-sh prettyprint-override"><code>FROM python:3.9
RUN mkdir /app
WORKDIR /app
ADD ./app /app/
RUN pip install -r requirement.txt
EXPOSE 5000
CMD ["python", "/app/app.py"]
</code></pre>
<p>K8s.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
name: flask-app-service
spec:
selector:
app: flask-app
ports:
- protocol: "TCP"
port: 5000
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: flask-app
spec:
selector:
matchLabels:
app: flask-app
replicas: 3
template:
metadata:
labels:
app: flask-app
spec:
containers:
- name: flask-app
image: flask_app:latest
imagePullPolicy: Never
ports:
- containerPort: 5000
</code></pre>
<p>After deploying I try to connect to <code>http://127.0.0.1:51145</code> from a browser, but it fails to connect with an <code>ERR_CONNECTION_REFUSED</code> message. I have a <a href="https://i.stack.imgur.com/zXtW3.png" rel="nofollow noreferrer">screenshot showing a more detailed Chinese-language error message</a> if that detail is helpful.</p>
<hr />
<p>update:</p>
<p>After switching <code>imagePullPolicy</code> from <code>Never</code> to <code>Always</code> or <code>IfNotPresent</code>, the pod still can't run:
<a href="https://i.stack.imgur.com/gYTz9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gYTz9.png" alt="enter image description here" /></a>
The <code>docker images</code> command shows that the image exists:
<a href="https://i.stack.imgur.com/F3xa3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F3xa3.png" alt="enter image description here" /></a>
But when I pull the image with docker pull, it shows me an error:
<a href="https://i.stack.imgur.com/nKRbc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nKRbc.png" alt="enter image description here" /></a>
After docker login it still doesn't work:
<a href="https://i.stack.imgur.com/89XG2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/89XG2.png" alt="enter image description here" /></a></p>
<p>p.s. I followed this guide to practice: <a href="https://lukahuang.com/running-flask-on-minikube/" rel="nofollow noreferrer">https://lukahuang.com/running-flask-on-minikube/</a></p>
| <p>Based on the error in the question:</p>
<blockquote>
<p>pods status Error: ErrImageNeverPull.</p>
</blockquote>
<p>The pod doesn't start because you have <code>imagePullPolicy: Never</code> in your deployment manifest, which means that if the image is missing from the node, it will never be pulled.</p>
<p>This is from official documentation:</p>
<blockquote>
<p>The imagePullPolicy for a container and the tag of the image affect
when the kubelet attempts to pull (download) the specified image.</p>
</blockquote>
<p>You need to switch it to <code>IfNotPresent</code> or <code>Always</code>.</p>
<p>See more in <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">image pull policy</a>.</p>
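<p>Also note that even with <code>IfNotPresent</code>, the image has to exist inside minikube's own container runtime, not only in the host Docker daemon. Assuming minikube (as in the question), two common ways to get it there are (a sketch; the image name is taken from the question):</p>

```shell
# Option 1: point your shell at minikube's Docker daemon, then rebuild there
eval $(minikube docker-env)
docker build -t flask_app:latest .

# Option 2: build with the host daemon as usual, then copy the image in
minikube image load flask_app:latest
```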
<p>After everything is done correctly, the pod status should be <code>Running</code>, and then you can connect to the pod and get a response back. See the example output:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ubuntu 1/1 Running 0 4d
</code></pre>
|
<p>I'd love to rename or drop a label from a <code>/metrics</code> endpoint within my metric. The metric itself is from the <code>kube-state-metrics</code> application, so nothing extraordinary. The metric looks like this:</p>
<pre><code>kube_pod_container_resource_requests{container="alertmanager", instance="10.10.10.10:8080", funday_monday="blubb", job="some-kube-state-metrics", name="kube-state-metrics", namespace="monitoring", node="some-host-in-azure-1234", pod="alertmanager-main-1", resource="memory", uid="12345678-1234-1234-1234-123456789012", unit="byte"} 209715200
</code></pre>
<p>The label I'd love to replace is <code>instance</code>, because it refers to the host which runs the <code>kube-state-metrics</code> application and I don't care about that. I want <code>instance</code> to carry the value of <code>node</code>. I've been trying for hours now and can't find a way; I wonder if it's possible at all.</p>
<p>The way I'm grabbing the <code>/metrics</code> endpoint is through the means of a scrape-config which looks as follows:</p>
<pre><code>- job_name: some-kube-state-metrics
scrape_interval: 30s
scrape_timeout: 10s
metrics_path: /metrics
kubernetes_sd_configs:
- api_server: null
role: pod
scheme: http
relabel_configs:
- source_labels: [__meta_kubernetes_pod_labelpresent_kubeStateMetrics]
regex: true
action: keep
- source_labels: [__meta_kubernetes_pod_label_name]
regex: (.*)
replacement: $1
target_label: name
action: replace
- source_labels: [__meta_kubernetes_pod_container_port_name]
separator: ;
regex: http
replacement: $1
action: keep
- source_labels: [node]
regex: (.*)
replacement: blubb
target_label: funday_monday
action: replace
- action: labeldrop
regex: "unit=(.*)"
- source_labels: [ __name__ ]
regex: 'kube\_pod\_container\_resource\_requests'
action: drop
</code></pre>
<p>As you can tell, I've been trying to drop labels as well, namely the <code>unit</code> label (just for testing purposes), and I also tried to drop the metric altogether.
The <code>funday_monday</code> label is an example that I changed because I wanted to know whether static relabels are possible (they are!); before, it looked like this:</p>
<pre><code>- source_labels: [node]
regex: (.*)
replacement: $1
target_label: funday_monday
action: replace
</code></pre>
<p>Help is appreciated.</p>
| <p>The problem is that you are doing those operations at the wrong time. <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config" rel="nofollow noreferrer">relabel_configs</a> happens before metrics are actually gathered, so, at this time, you can only manipulate the labels that you got from <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">service discovery</a>.</p>
<p>That <code>node</code> label comes from the exporter. Therefore, you need to do this relabeling action under <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs" rel="nofollow noreferrer">metric_relabel_configs</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>metric_relabel_configs:
- source_labels: [node]
target_label: instance
</code></pre>
<p>Same goes for dropping metrics. If you wish a bit more info, I answered a similar question here: <a href="https://stackoverflow.com/questions/70355080/prometheus-relabel-config-drop-action-not-working/70359287#70359287">prometheus relabel_config drop action not working</a></p>
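<p>For instance, dropping the metric named in the question would also go under <code>metric_relabel_configs</code> (a sketch based on the names used above):</p>

```yaml
metric_relabel_configs:
  - source_labels: [__name__]
    regex: 'kube_pod_container_resource_requests'
    action: drop
```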
|
<p>I am experiencing slow responses to every kubectl command in my test environment.
I don't have this issue in production or with minikube in my dev environment.
For example I run:</p>
<pre><code>kubectl get deployments
</code></pre>
<p>And I have to wait up to a minute. If I interrupt it with <em>Ctrl + C</em> and start again, it sometimes responds immediately; when I repeat the command, it is delayed again.</p>
<p>Do you have any tips on where I should dig in?
Could it be a memory issue, or rather a network issue?
How can I monitor them?
I'm in an ssh session on one of the masters.</p>
| <p>If someone is looking at the same issue, you can try the <code>-v=6</code> option to check API response timings.</p>
<pre><code>kubectl get deployments -v=6
</code></pre>
|
<p>I need to start by saying that I have no experience using Cassandra and I am not the one who who created this deployment.</p>
<p>I have Cassandra running in a cluster in AKS. The PVC as configured in the statefulset is 1000Gi. Currently the pods are out of storage and are in a constant unhealthy state.</p>
<p>I am looking to expand the volumes attached to the pods. The problem I am facing is that I cannot scale down the statefulset because the statefulsets only scale down when <em>all</em> their pods are healthy.</p>
<p>I even tried deleting the statefulset and then recreating it with a larger PVC (as recommended <a href="https://adamrushuk.github.io/increasing-the-volumeclaimtemplates-disk-size-in-a-statefulset-on-aks" rel="nofollow noreferrer">here</a>).</p>
<p>However, I can't seem to delete the statefulset. It looks to me like the CassandraDatacenter CRD keeps recreating the statefulset as soon as I delete it, giving me no time to change anything.</p>
<p>My questions are as follows:</p>
<ol>
<li>Is there a standard way to expand the volume without losing data?</li>
<li>What would happen if I scale down the replicas in the CassandraDatacenter? Will it delete the PVC or keep it?</li>
<li>If there is no standard, does anyone have any ideas on how to expand the volume size without losing data?</li>
</ol>
| <p>Answer for: <strong>How can I expand a PVC for a StatefulSet on AKS without losing data?</strong></p>
<p>While the answer of @Erick Raminez is very good advice for Cassandra specifically, I would like to answer the more general question "How can I expand a PVC for my StatefulSet on AKS without losing data?".</p>
<p>If downtime is allowed, you can follow these 4 steps:</p>
<pre class="lang-sh prettyprint-override"><code># Delete StatefulSet
# This is required on AKS since the azure disk must have status "Unattached"
kubectl delete statefulsets.apps STATEFULSET_NAME
# Edit capacity in
# - your statefulset yaml file
# - each pvc
kubectl patch pvc PVC_NAME -p '{"spec": {"resources": {"requests": {"storage": "3Gi"}}}}'
# Deploy updated statefulset yaml (or helm chart)
kubectl apply -f statefulset.yaml
# Verify Capacity
kubectl get pvc
</code></pre>
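<p>Note: this assumes the underlying StorageClass allows resizing; you can verify that first with a quick check (not part of the original steps):</p>

```shell
# a StorageClass must have allowVolumeExpansion: true for PVC resizing to work
kubectl get storageclass -o custom-columns=NAME:.metadata.name,EXPANSION:.allowVolumeExpansion
```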
<p>If you don't want downtime, check the first reference for some additional steps.</p>
<p>References:</p>
<ul>
<li><a href="https://adamrushuk.github.io/increasing-the-volumeclaimtemplates-disk-size-in-a-statefulset-on-aks/" rel="nofollow noreferrer">https://adamrushuk.github.io/increasing-the-volumeclaimtemplates-disk-size-in-a-statefulset-on-aks/</a></li>
<li><a href="https://serverfault.com/a/989665/561107">https://serverfault.com/a/989665/561107</a></li>
</ul>
|
<p>I am new to Kubernetes Operators. I have a general question about how to conceive of cleanup at the point of deletion.</p>
<p>Let's say the Controller is managing a resource which consists of a Deployment among other things. This Deployment writes to some external database. I'd like the items from the Database to be deleted when the resource is deleted (but not when its Pod is simply restarted - thus it can't happen as part of the application's shut down logic).</p>
<p>It seems like the database purging would have to happen in the Controller, then? But this makes me a bit uneasy, since knowledge of how the values are stored belongs to the resource being managed, not to the Controller. Is the only other good option to have the Controller send a message to the underlying application to perform its own cleanup?</p>
<p>What is the general way to handle this type of thing?</p>
| <p>Have you heard about Finalizers and <a href="https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/#owner-references" rel="nofollow noreferrer">Owner References</a> in Kubernetes? Owner references describe how groups of objects are related: they are properties on resources that specify their relationship to one another, so entire trees of resources can be deleted together.</p>
<p>To avoid further copy-pasting, I will just leave the link here: <a href="https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/#understanding-finalizers" rel="nofollow noreferrer">Understanding Finalizers</a></p>
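<p>For a concrete sketch (all names below are placeholders, not from any real CRD): a finalizer is just an entry in the object's metadata. Your controller performs the external cleanup, for example purging the database entries, then removes the finalizer, and only then does Kubernetes complete the deletion:</p>

```yaml
apiVersion: example.com/v1           # hypothetical CRD group/version
kind: MyResource
metadata:
  name: my-resource
  finalizers:
    - example.com/cleanup-database   # placeholder finalizer name the controller handles
```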
|
<p>I am trying to run a spark-submit against the Kubernetes cluster with the Spark 3.2.1 image, and it is working. Now my question is: can I execute an init container along with the spark-submit? What I am trying to achieve is that the init container checks whether another service is up; if it is, the spark-submit runs, otherwise it fails.</p>
<p>I can see that a conf parameter "spark.kubernetes.initContainer.image" for spark version 2.3 but not for 3.2.1 (<a href="https://spark.apache.org/docs/2.3.0/running-on-kubernetes.html" rel="nofollow noreferrer">https://spark.apache.org/docs/2.3.0/running-on-kubernetes.html</a>)</p>
<p>Is there any mechanism I can use to check whether other services are up before I submit a Spark job?</p>
<p>I can see init container usage for Spark in the links below, but they do not provide an exact answer:</p>
<p><a href="https://docs.bitnami.com/kubernetes/infrastructure/spark/configuration/configure-sidecar-init-containers/" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/infrastructure/spark/configuration/configure-sidecar-init-containers/</a>
<a href="https://doc.lucidworks.com/spark-guide/11153/running-spark-on-kubernetes" rel="nofollow noreferrer">https://doc.lucidworks.com/spark-guide/11153/running-spark-on-kubernetes</a></p>
<p>any help will be much appreciated, thanks.</p>
| <p>I found that the best way to submit a Spark job is via the Spark operator; more details can be found in the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#using-init-containers" rel="nofollow noreferrer">GitHub user guide</a>.</p>
<p>It has an option to include an init container as well as a sidecar container.</p>
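<p>As a rough sketch (assuming the operator's <code>SparkApplication</code> CRD; image names, the service host, and the port are placeholders), the init container can be declared in the driver spec to block until the dependent service answers:</p>

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: my-spark-app
spec:
  type: Scala
  mode: cluster
  image: my-spark-image:3.2.1                        # placeholder image
  mainApplicationFile: local:///opt/app/my-app.jar   # placeholder jar
  driver:
    initContainers:
      - name: wait-for-service
        image: busybox
        # loop until the hypothetical dependency accepts TCP connections
        command: ["sh", "-c", "until nc -z other-service 8080; do sleep 2; done"]
```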
|
<p>I need to watch (and wait) until a POD is deleted. I need this because I need to start a second pod (with the same name) immediately after the first one has been deleted.</p>
<p>This is what I'm trying:</p>
<pre><code>func (k *k8sClient) waitPodDeleted(ctx context.Context, resName string) error {
watcher, err := k.createPodWatcher(ctx, resName)
if err != nil {
return err
}
defer watcher.Stop()
for {
select {
case event := <-watcher.ResultChan():
if event.Type == watch.Deleted {
k.logger.Debugf("The POD \"%s\" is deleted", resName)
return nil
}
case <-ctx.Done():
k.logger.Debugf("Exit from waitPodDeleted for POD \"%s\" because the context is done", resName)
return nil
}
}
}
</code></pre>
<p>The problem with this approach is that the <code>Deleted</code> event fires when the POD is marked for deletion, not when it is actually deleted. Doing some extra tests, I ended up debugging the process with this code:</p>
<pre><code>case event := <-watcher.ResultChan():
if event.Type == watch.Deleted {
pod := event.Object.(*v1.Pod)
k.logger.Debugf("EVENT %s PHASE %s MESSAGE %s", event.Type, pod.Status.Phase, pod.Status.Message)
}
</code></pre>
<p>The log result for this is:</p>
<pre><code>2022-02-15T08:21:51 DEBUG EVENT DELETED PHASE Running MESSAGE
2022-02-15T08:22:21 DEBUG EVENT DELETED PHASE Running MESSAGE
</code></pre>
<p>I'm getting two Deleted events: the first one right after I send the delete command, and the last one when the pod is effectively deleted from the cluster.</p>
<p>My questions are:</p>
<ul>
<li>Why am I getting two Deleted events? How can I differentiate one from the other? I've tried comparing the two events and they seem exactly the same (except for the timestamps)</li>
<li>What is the best approach to watch and wait for a pod deletion, so I can immediately relaunch it? Should I poll the API until the pod is no longer returned?</li>
</ul>
<p>The use case I'm trying to solve:
my application has a feature to replace a tool with another one that has different options. The feature needs to delete the pod that contains the tool and relaunch it with another set of options, so I need to wait for the pod deletion before I can start it again.</p>
<p>Thanks in advance!</p>
| <p>As I said in the comments, the real problem was the watcher I was creating to watch the pod I wanted deleted: its LabelSelector was selecting two pods instead of one. This is the complete solution, including the watcher:</p>
<pre><code>func (k *k8sClient) createPodWatcher(ctx context.Context, resName string) (watch.Interface, error) {
labelSelector := fmt.Sprintf("app.kubernetes.io/instance=%s", resName)
k.logger.Debugf("Creating watcher for POD with label: %s", labelSelector)
opts := metav1.ListOptions{
TypeMeta: metav1.TypeMeta{},
LabelSelector: labelSelector,
FieldSelector: "",
}
return k.clientset.CoreV1().Pods(k.cfg.Kubernetes.Namespace).Watch(ctx, opts)
}
func (k *k8sClient) waitPodDeleted(ctx context.Context, resName string) error {
watcher, err := k.createPodWatcher(ctx, resName)
if err != nil {
return err
}
defer watcher.Stop()
for {
select {
case event := <-watcher.ResultChan():
if event.Type == watch.Deleted {
k.logger.Debugf("The POD \"%s\" is deleted", resName)
return nil
}
case <-ctx.Done():
k.logger.Debugf("Exit from waitPodDeleted for POD \"%s\" because the context is done", resName)
return nil
}
}
}
func (k *k8sClient) waitPodRunning(ctx context.Context, resName string) error {
watcher, err := k.createPodWatcher(ctx, resName)
if err != nil {
return err
}
defer watcher.Stop()
for {
select {
case event := <-watcher.ResultChan():
pod := event.Object.(*v1.Pod)
if pod.Status.Phase == v1.PodRunning {
k.logger.Infof("The POD \"%s\" is running", resName)
return nil
}
case <-ctx.Done():
k.logger.Debugf("Exit from waitPodRunning for POD \"%s\" because the context is done", resName)
return nil
}
}
}
</code></pre>
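<p>To also answer the polling part of the question: instead of watching, you can poll until the GET returns a not-found error. A sketch using client-go, written as another method on the same <code>k8sClient</code> (the one-second interval and two-minute timeout are arbitrary choices, not from the original code):</p>

```go
import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// pollPodDeleted blocks until the named pod no longer exists in the namespace.
func (k *k8sClient) pollPodDeleted(ctx context.Context, podName string) error {
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		_, err := k.clientset.CoreV1().Pods(k.cfg.Kubernetes.Namespace).Get(ctx, podName, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil // the pod is really gone
		}
		return false, err // nil err: pod still exists, keep polling; other errors abort
	})
}
```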
|
<p>Friends</p>
<p>I'm writing a ConfigMap containing an array of Postgres db names. Approach 1 throws an error like
<code>scalar value is expected at postgres.db.name</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-init
data:
postgres.host: "postgreshost"
postgres.db.name: {"postgredb1","postgredb1", "postgredb3"}
</code></pre>
<p>Here is approach 2, i.e. <code>postgres.db.name</code> with the db names separated by commas:</p>
<pre><code>----
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-init
data:
postgres.host: postgreshost
postgres.db.name: postgredb1,postgredb1,postgredb3
</code></pre>
<p>Which is the correct way to achieve db names as an array ?</p>
| <p>Edit: As @ShawnFumo and @HuBeZa pointed out, my old answer was incorrect. Configmap data key/value pairs expect the value to be in string format, therefore it isn't possible to provide a dict/list as value.</p>
<p>note: you have 4 "-" at the start of your second example, which would make the YAML document invalid. new YAML docs start with 3 "-". :)</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-init
data:
postgres.host: postgreshost
# Note the "|" after the key - this indicates a multiline string in YAML,
# hence we provide the values as strings.
postgres.db.name: |
- postgredb1
- postgredb2
- postgredb3
</code></pre>
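<p>Either way, the value reaches the application as one plain string, so the consumer has to parse it itself. A minimal sketch in Python for the comma-separated form (the literal value stands in for whatever env var or mounted file delivers it):</p>

```python
# The ConfigMap value arrives as a single string, e.g. through an env var.
raw = "postgredb1,postgredb2,postgredb3"  # value of postgres.db.name

# Split into individual database names, dropping stray whitespace and empties.
db_names = [name.strip() for name in raw.split(",") if name.strip()]
print(db_names)  # → ['postgredb1', 'postgredb2', 'postgredb3']
```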
|
<p>I'm attempting to use the <a href="https://plugins.jenkins.io/statistics-gatherer/%3E" rel="nofollow noreferrer">Statistics Gathering</a> Jenkins plugin to forward metrics to Logstash. The plugin is configured with the following url: <code>http://logstash.monitoring-observability:9000</code>. Both Jenkins and Logstash are deployed on Kubernetes. When I run a build, which triggers metrics forwarding via this plugin, I see the following error in the logs:</p>
<pre><code>2022-02-19 23:29:20.464+0000 [id=263] WARNING o.j.p.s.g.util.RestClientUtil$1#failed: The request for url http://logstash.monitoring-observability:9000/ has failed.
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:173
</code></pre>
<p>I get the same behavior when I exec into the jenkins pod and attempt to curl logstash:</p>
<pre><code>jenkins@jenkins-7889fb54b8-d9rvr:/$ curl -vvv logstash.monitoring-observability:9000
* Trying 10.52.9.143:9000...
* connect to 10.52.9.143 port 9000 failed: Connection refused
* Failed to connect to logstash.monitoring-observability port 9000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to logstash.monitoring-observability port 9000: Connection refused
</code></pre>
<p>I also get the following error in the logstash logs:</p>
<pre><code>[ERROR] 2022-02-20 00:05:43.450 [[main]<tcp] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Tcp port=>9000, codec=><LogStash::Codecs::JSON id=>"json_f96babad-299c-42ab-98e0-b78c025d9476", enable_metric=>true, charset=>"UTF-8">, host=>"jenkins-server.devops-tools", ssl_verify=>false, id=>"0fddd9afb2fcf12beb75af799a2d771b99af6ac4807f5a67f4ec5e13f008803f", enable_metric=>true, mode=>"server", proxy_protocol=>false, ssl_enable=>false, ssl_key_passphrase=><password>>
Error: Cannot assign requested address
Exception: Java::JavaNet::BindException
Stack: sun.nio.ch.Net.bind0(Native Method)
</code></pre>
<p>Here is my jenkins-deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: devops-tools
labels:
app: jenkins-server
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-server
template:
metadata:
labels:
app: jenkins-server
spec:
securityContext:
fsGroup: 1000
runAsUser: 1000
serviceAccountName: jenkins-admin
containers:
- name: jenkins
env:
- name: LOGSTASH_HOST
value: logstash
- name: LOGSTASH_PORT
value: "5044"
- name: ELASTICSEARCH_HOST
value: elasticsearch-logging
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
image: jenkins/jenkins:lts
resources:
limits:
memory: "2Gi"
cpu: "1000m"
requests:
memory: "500Mi"
cpu: "500m"
ports:
- name: httpport
containerPort: 8080
- name: jnlpport
containerPort: 50000
livenessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 90
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 5
readinessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumeMounts:
- name: jenkins-data
mountPath: /var/jenkins_home
volumes:
- name: jenkins-data
persistentVolumeClaim:
claimName: jenkins-pv-claim
</code></pre>
<p>Here is my jenkins-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: jenkins-server
namespace: devops-tools
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: /
prometheus.io/port: '8080'
spec:
selector:
app: jenkins-server
k8s-app: jenkins-server
type: NodePort
ports:
- port: 8080
targetPort: 8080
nodePort: 30000
</code></pre>
<p>Here is my logstash-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash-deployment
namespace: monitoring-observability
labels:
app: logstash
spec:
selector:
matchLabels:
app: logstash
replicas: 1
template:
metadata:
labels:
app: logstash
spec:
containers:
- name: logstash
env:
- name: JENKINS_HOST
value: jenkins-server
- name: JENKINS_PORT
value: "8080"
image: docker.elastic.co/logstash/logstash:6.3.0
ports:
- containerPort: 9000
volumeMounts:
- name: config-volume
mountPath: /usr/share/logstash/config
- name: logstash-pipeline-volume
mountPath: /usr/share/logstash/pipeline
volumes:
- name: config-volume
configMap:
name: logstash-configmap
items:
- key: logstash.yml
path: logstash.yml
- name: logstash-pipeline-volume
configMap:
name: logstash-configmap
items:
- key: logstash.conf
path: logstash.conf
</code></pre>
<p>Here is my logstash-service.yaml</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: logstash
namespace: monitoring-observability
labels:
app: logstash
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "logstash"
spec:
selector:
app: logstash
ports:
- protocol: TCP
port: 9000
targetPort: 9000
type: ClusterIP
</code></pre>
<p>Here is my logstash configmap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: monitoring-observability
data:
logstash.yml: |
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
tcp {
port => "9000"
codec => "json"
host => "jenkins-server.devops-tools"
ssl_verify => "false"
}
}
filter {
if [message] =~ /^\{.*\}$/ {
json {
source => "message"
}
}
if [ClientHost] {
geoip {
source => "ClientHost"
}
}
}
output {
elasticsearch {
hosts => [ "elasticsearch-logging:9200" ]
}
}
</code></pre>
<p>There are no firewalls configured in my cluster that would be blocking traffic on port 9000. I have also tried this same configuration with port <code>5044</code> and get the same results. It seems as though my logstash instance is not actually listening on the <code>containerPort</code>. Why might this be?</p>
| <p>I resolved this error by updating the configmap to this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: logstash-configmap
namespace: monitoring-observability
data:
logstash.yml: |
path.config: /usr/share/logstash/pipeline
logstash.conf: |
input {
tcp {
port => "9000"
codec => "json"
ssl_verify => "false"
}
}
filter {
if [message] =~ /^\{.*\}$/ {
json {
source => "message"
}
}
if [ClientHost] {
geoip {
source => "ClientHost"
}
}
}
output {
elasticsearch {
hosts => [ "elasticsearch-logging:9200" ]
}
}
</code></pre>
<p>Note that all references to the Jenkins host have been removed. In the Logstash <code>tcp</code> input, <code>host</code> is the address Logstash itself binds to; pointing it at the remote Jenkins hostname made the plugin try to bind an address that doesn't belong to its own pod, which produced the <code>Cannot assign requested address</code> error and left nothing listening on port 9000.</p>
|
<p>I've started experimenting with Argocd as part of my cluster setup and set it up to watch a test repo containing some yaml files for a small application I wanted to use for the experiment. While getting to know the system a bit, I broke the repo connection and instead of fixing it I decided that I had what I wanted, and decided to do a clean install with the intention of configuring it towards my actual project.</p>
<p>I pressed the button in the web UI for deleting the application, which got stuck. I then read that adding <code>spec.syncPolicy.allowEmpty: true</code> and removing the <code>metadata.finalizers</code> declaration from the application yaml file should help, but this did not allow me to remove the application resource either.</p>
<p>I then ran an uninstall command with the official manifests/install.yaml as an argument, which cleaned up most resources installed, but left the application resource and the namespace. Command: <code>kubectl delete -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</code></p>
<p>I have tried using the <code>--force</code> flag and the <code>--cascade=orphans</code> flag with <code>kubectl delete</code>, on the application resource as well as on the argocd namespace itself. Now I have both of them stuck at terminating without getting any further.</p>
<p>Now I'm proper stuck as I can't reinstall the argocd in any way I know due to the resources and namespace being marked for deletion, and I'm at my wits end as to what else I can try in order to get rid of the dangling application resource.</p>
<p>Any and all suggestions as to what to look into is much appreciated.</p>
| <p>If your problem is that the namespace cannot be deleted, the following two solutions may help you:</p>
<ol>
<li>Check which resources are stuck in the deletion process, delete those resources, and then delete the namespace</li>
<li>Edit the argocd namespace and check whether there is a <code>finalizers</code> field in the spec; if there is, delete that field and its contents</li>
</ol>
<p>Hopefully this helps.</p>
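<p>If the Application resource itself stays stuck in terminating, a common last resort is to strip its finalizers so the API server can finish the deletion (substitute your application name for the placeholder):</p>

```shell
# remove the finalizers from the stuck Application resource
kubectl patch application <app-name> -n argocd \
  --type json -p '[{"op": "remove", "path": "/metadata/finalizers"}]'
```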
|
<p>How do you update postgresql.conf or pg_hba.conf for the main postgresql chart on <a href="https://artifacthub.io" rel="nofollow noreferrer">https://artifacthub.io</a>? (In particular I have been trying to update the wal_level within the postgresql.conf)</p>
<p>Related Links:</p>
<ul>
<li><a href="https://github.com/bitnami/charts/tree/master/bitnami/postgresql" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/postgresql</a></li>
<li><a href="https://artifacthub.io/packages/helm/bitnami/postgresql" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/postgresql</a></li>
<li><a href="https://github.com/bitnami/charts/issues/6830" rel="nofollow noreferrer">https://github.com/bitnami/charts/issues/6830</a></li>
</ul>
| <p>The documentation is hard to follow without good examples, but in the fine print it mentions that you have to completely override postgresql.conf and pg_hba.conf if you want to update anything here. I'm sure there is a better way to update just one key such as wal_level, but I found this approach to be the best at the moment.</p>
<p>Copy this yaml into a file (/path/to/this/file/values_bitnami_pg_postgresql_conf.yaml) and use it when you install the chart. This yaml contains all of the keys I found possible to edit at the time of writing.</p>
<pre><code>primary:
  # Override default postgresql.conf
  configuration: |-
    # Note: This configuration was created with bitnami/postgresql default chart, connecting and running `SHOW ALL;`
    # then modifying to the appropriate yaml format. All values in SHOW present and if commented, this likely
    # indicates attempt to set value failed. Please experiment.
    #
    # CHART NAME: postgresql
    # CHART VERSION: 11.0.4
    # APP VERSION: 14.1.0
    # Example use:
    # &gt;&gt;&gt; helm install pg-release-1 bitnami/postgresql -f /path/to/this/file/values_bitnami_pg_postgresql_conf.yaml
    # (chart should display connection details... connect using psql and run `SHOW ALL;`)
    # If anything appears odd or not present review the pod logs (`kubectl logs &lt;target pod for release&gt;`)
    allow_system_table_mods = 'off' #Allows modifications of the structure of system tables.
    application_name = 'psql' #Sets the application name to be reported in statistics and logs.
    archive_cleanup_command = '' #Sets the shell command that will be executed at every restart point.
    archive_command = '(disabled' #Sets the shell command that will be called to archive a WAL file.
    archive_mode = 'off' #Allows archiving of WAL files using archive_command.
    archive_timeout = '0' #Forces a switch to the next WAL file if a new file has not been started within N seconds.
    array_nulls = 'on' #Enable input of NULL elements in arrays.
    authentication_timeout = '1min' #Sets the maximum allowed time to complete client authentication.
    autovacuum = 'on' #Starts the autovacuum subprocess.
    autovacuum_analyze_scale_factor = '0.1' #Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples.
    autovacuum_analyze_threshold = '50' #Minimum number of tuple inserts, updates, or deletes prior to analyze.
    autovacuum_freeze_max_age = '200000000' #Age at which to autovacuum a table to prevent transaction ID wraparound.
    autovacuum_max_workers = '3' #Sets the maximum number of simultaneously running autovacuum worker processes.
    autovacuum_multixact_freeze_max_age = '400000000' #Multixact age at which to autovacuum a table to prevent multixact wraparound.
    autovacuum_naptime = '1min' #Time to sleep between autovacuum runs.
    autovacuum_vacuum_cost_delay = '2ms' #Vacuum cost delay in milliseconds, for autovacuum.
    # won't let me post entire show ... so cut the middle
    # ...
    vacuum_defer_cleanup_age = '0' #Number of transactions by which VACUUM and HOT cleanup should be deferred, if any.
    vacuum_failsafe_age = '1600000000' #Age at which VACUUM should trigger failsafe to avoid a wraparound outage.
    vacuum_freeze_min_age = '50000000' #Minimum age at which VACUUM should freeze a table row.
    vacuum_freeze_table_age = '150000000' #Age at which VACUUM should scan whole table to freeze tuples.
    vacuum_multixact_failsafe_age = '1600000000' #Multixact age at which VACUUM should trigger failsafe to avoid a wraparound outage.
    vacuum_multixact_freeze_min_age = '5000000' #Minimum age at which VACUUM should freeze a MultiXactId in a table row.
    vacuum_multixact_freeze_table_age = '150000000' #Multixact age at which VACUUM should scan whole table to freeze tuples.
    #wal_block_size = '8192' #Shows the block size in the write ahead log.
    wal_buffers = '256kB' #Sets the number of disk-page buffers in shared memory for WAL.
    wal_compression = 'off' #Compresses full-page writes written in WAL file.
    wal_consistency_checking = '' #Sets the WAL resource managers for which WAL consistency checks are done.
    wal_init_zero = 'on' #Writes zeroes to new WAL files before first use.
    wal_keep_size = '128MB' #Sets the size of WAL files held for standby servers.
    wal_level = 'logical' #Sets the level of information written to the WAL.
    wal_log_hints = 'off' #Writes full pages to WAL when first modified after a checkpoint, even for a non-critical modification.
    wal_receiver_create_temp_slot = 'off' #Sets whether a WAL receiver should create a temporary replication slot if no permanent slot is configured.
    wal_receiver_status_interval = '10s' #Sets the maximum interval between WAL receiver status reports to the sending server.
    wal_receiver_timeout = '1min' #Sets the maximum wait time to receive data from the sending server.
    wal_recycle = 'on' #Recycles WAL files by renaming them.
    wal_retrieve_retry_interval = '5s' #Sets the time to wait before retrying to retrieve WAL after a failed attempt.
    #wal_segment_size = '16MB' #Shows the size of write ahead log segments.
    wal_sender_timeout = '1min' #Sets the maximum time to wait for WAL replication.
    wal_skip_threshold = '2MB' #Minimum size of new file to fsync instead of writing WAL.
    wal_sync_method = 'fdatasync' #Selects the method used for forcing WAL updates to disk.
    wal_writer_delay = '200ms' #Time between WAL flushes performed in the WAL writer.
    wal_writer_flush_after = '1MB' #Amount of WAL written out by WAL writer that triggers a flush.
    work_mem = '4MB' #Sets the maximum memory to be used for query workspaces.
    xmlbinary = 'base64' #Sets how binary values are to be encoded in XML.
    xmloption = 'content' #Sets whether XML data in implicit parsing and serialization operations is to be considered as documents or content fragments.
    zero_damaged_pages = 'off' #Continues processing past damaged page headers.
  # Override default pg_hba.conf
  pgHbaConfiguration: |-
    # WARNING! This is wide open and only for toy local development purposes.
    # Note: To verify this is loaded, connect via psql and run `table pg_hba_file_rules ;`
    host all all 0.0.0.0/0 trust
    host all all ::/0 trust
    local all all trust
    host all all 127.0.0.1/32 trust
    host all all ::1/128 trust
</code></pre>
|
<p>I have a k8s deployment which is using <strong>Cloud DNS</strong> And <strong>Managed Certificate ( for SSL )</strong> along with the k8s service.</p>
<p>I have configured HTTP to HTTPS according to <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect" rel="nofollow noreferrer">this GKE documentation</a></p>
<p>Which works perfectly fine and redirects my HTTP requests to HTTPS website.</p>
<p>Now when I am testing for the <strong>HOST HEADER INJECTION</strong> vulnerability using the following command,</p>
<pre><code>curl http://staging.mysite.com --header 'Host: malicious.com'
</code></pre>
<p>I am getting the response like below</p>
<pre><code><HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://malicious.com/">here</A>.
</BODY></HTML>
</code></pre>
<p>To mention, my application is built on <strong>angular 11</strong>, and I am using <strong>Nginx</strong> to serve the app after building.</p>
<p>Here's my <strong>Ingress & Frontend Config & Managed Cert Config</strong></p>
<pre><code>apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: staging-service-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: staging-global-ip
    networking.gke.io/managed-certificate: staging-cert
    networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
spec:
  defaultBackend:
    service:
      name: web-staging-service
      port:
        number: 80
  rules:
    - host: staging.mysite.com
      http:
        paths:
          - backend:
              service:
                name: web-staging-service
                port:
                  number: 80
            pathType: ImplementationSpecific
            path: /*
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: staging-cert
spec:
  domains:
    - staging.mysite.com
</code></pre>
<p>Here's my <strong>Nginx</strong> config</p>
<pre><code>worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events { worker_connections 1024; }
http {
    server {
        listen 80;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
        ssl_prefer_server_ciphers on;
        server_tokens off;
        server_name *.mysite.com mysite.com;
        types {
            module js;
        }
        sendfile on;
        include /etc/nginx/mime.types;
        gzip on;
        gzip_http_version 1.1;
        gzip_disable "MSIE [1-6]\.";
        gzip_min_length 256;
        gzip_vary on;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_comp_level 9;
        location / {
            root /usr/share/nginx/html;
            try_files $uri $uri/ /index.html;
            index index.html index.htm;
        }
        error_page 404 500 502 503 504 /404.html;
        location = /404.html {
            root /usr/share/nginx/html;
        }
    }
    add_header Strict-Transport-Security "max-age=31536000;" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Frame-Options sameorigin always;
    add_header X-Content-Type-Options nosniff;
    add_header Referrer-Policy 'origin';
    add_header Content-Security-Policy "some rules";
    add_header Permissions-Policy "some rules";
}
</code></pre>
<p>I am not finding any right way to prevent the injection. The application works otherwise perfectly with the current configuration.</p>
<p>Please help me with a proper solution to prevent the HOST HEADER INJECTION</p>
| <p>GCP provides protection from these types of attacks via <a href="https://cloud.google.com/armor#section-2" rel="nofollow noreferrer">Cloud Armor</a>. Cloud Armor has built-in <a href="https://cloud.google.com/armor/docs/rule-tuning#protocol_attack" rel="nofollow noreferrer">WAF policies</a> which support protection from protocol attacks such as HTTP header injection.</p>
<p>You'll first need to configure your Cloud Armor policy and then you can associate it with a BackendConfig attached to the backend Kubernetes Service ("web-staging-service") used by your Ingress resource.</p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: cloud-armor
spec:
  securityPolicy:
    name: "waf-security-policy"
</code></pre>
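<p>Attaching the BackendConfig to the Service is done with an annotation. A sketch for the Service above might look like the following (the selector and ports are assumptions, adjust them to your actual Service):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-staging-service
  annotations:
    # Associates the "cloud-armor" BackendConfig with this Service's backends.
    cloud.google.com/backend-config: '{"default": "cloud-armor"}'
spec:
  type: NodePort   # GKE Ingress typically requires NodePort or a NEG-enabled ClusterIP
  selector:
    app: web-staging   # assumed selector
  ports:
    - port: 80
      targetPort: 80
```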
|
<p>I have my application running in an EKS cluster. I have exposed the application using an Ingress (ALB load balancer controller). The ALB load balancer controller was deleted recently; how can I find out when it got deleted?</p>
| <p>If you have configured the ALB ingress controller to ship its access logs to S3, that's a place to start. This <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v1.1/guide/ingress/annotation/#:%7E:text=enable%20access%20log%20to%20s3" rel="nofollow noreferrer">annotation guide</a> will help you understand how it can be configured.</p>
<p>Here is a pattern of an annotation for the ALB ingress controller that you could use for searching:</p>
<pre class="lang-yaml prettyprint-override"><code>alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=my-access-log-bucket,access_logs.s3.prefix=my-app
</code></pre>
|
<p>Is there a way to create a K8s cluster role with full access (all resources, verbs and apigroups on any namespaces) but no commands execution on all namespaces for example: <code>kubectl delete pods --all-namespaces</code> or <code>kubectl delete pv --all-namespaces</code>?</p>
<p>(Running the same commands on a single namespace should be allowed, just not in bulk to all namespaces).</p>
<p>If this cannot be achieved with a cluster role, is there another way to achieve it?</p>
| <p>What if you bind the ClusterRole only to the namespaces that need it, and don't grant permissions on the restricted ones? That's not a full solution, but at least the user won't be able to delete resources in the other namespaces. And strictly answering your question: I'm not sure this is possible.</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testsa
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testclusterrole
rules:
  - apiGroups: ["", "apps", "batch"]   # deployments live in "apps", jobs in "batch"
    resources: ["pods","services","namespaces","deployments","jobs"]
    verbs: ["get", "watch", "list", "create", "delete", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-master-1
  namespace: namespace1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: testclusterrole
subjects:
  - kind: ServiceAccount
    name: testsa
    namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-master-2
  namespace: namespace2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: testclusterrole
subjects:
  - kind: ServiceAccount
    name: testsa
    namespace: default
</code></pre>
|
<p>I have Elasticsearch running on my Kubernetes cluster with host <code>http://192.168.18.35:31200/</code>. Now I have to connect Elasticsearch to Kibana. For that, an enrollment token needs to be generated, but how?
When I login to the root directory of elastic-search from kibana dashboard and type the following command to generate a new enrollment token, it shows the error:</p>
<p><a href="https://i.stack.imgur.com/h5B95.png" rel="noreferrer"><img src="https://i.stack.imgur.com/h5B95.png" alt="enter image description here" /></a></p>
<pre><code>command : bin/elasticsearch-create-enrollment-token --scope kibana
error: bash: bin/elasticsearch-create-enrollment-token: No such file or directory
</code></pre>
<p>I created a file named elasticsearch-create-enrollment-token inside the bin directory and gave it full permissions. Still, no tokens are generated.
Any ideas on the enrollment token?</p>
| <p>Assuming you are on Debian/Ubuntu, this should help:</p>
<pre><code>cd /usr/share/elasticsearch/bin/
./elasticsearch-create-enrollment-token --scope kibana
</code></pre>
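<p>Since Elasticsearch runs inside a Pod here, the tool has to be executed inside the Elasticsearch container rather than on the host. A sketch of the full command (the pod name <code>elasticsearch-master-0</code> is a placeholder, find yours with <code>kubectl get pods</code>):</p>

```shell
# Placeholder pod name; substitute your actual Elasticsearch pod.
POD=elasticsearch-master-0
# The enrollment-token tool lives under the Elasticsearch home's bin directory.
CMD="kubectl exec -it $POD -- /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana"
echo "$CMD"
```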
|
<p>I am new to K8s. I am trying to deploy a Datadog agent to my cluster to collect logs and this happens through a ConfigMap.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-cm
data:
  fluent-bit.conf: |
    [SERVICE]
        Parsers_File parsers.conf
    [INPUT]
        name tail
        path /tmp/app.log
        parser nginx
    [OUTPUT]
        Name datadog
        Match *
        Host http-intake.logs.datadoghq.com
        TLS off
        apikey &lt;API key to be used confidentially&gt;
        dd_service abcd
        dd_source abcd
        dd_tags env:dev
  parsers.conf: |
    [PARSER]
        Name nginx
        Format regex
        Regex ^(?&lt;remote&gt;[^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;user&gt;[^ ]*) \[(?&lt;time&gt;[^\]]*)\] "(?&lt;method&gt;\S+)(?: +(?&lt;path&gt;[^\"]*?)(?: +\S*)?)?" (?&lt;code&gt;[^ ]*) (?&lt;size&gt;[^ ]*)(?: "(?&lt;referer&gt;[^\"]*))" "(?&lt;agent&gt;[^\"]*)"(?: "(?&lt;target&gt;[^\"]*))"$
        Time_Key time
        Time_Format %d/%b/%Y:%H:%M:%S %z
</code></pre>
<p>I do not want to add the API key as plain text. Can I do it using secrets? Please suggest ways to achieve this.</p>
<p>P.S: I need to use the config map in order to add the Datadog agent, and hence the config map cannot be replaced.</p>
| <p>In theory you could use a <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">Secret as an environment variable</a> and reference it from the ConfigMap passed to the Pod.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: fluentbit-secret-test
type: Opaque
data:
  apikey: &lt;base64 encoded api key&gt;
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-cm
data:
  fluent-bit.conf: |
    ...
        apikey ${FLUENTBIT_API_KEY}
    ...
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: secret-env-test
spec:
  containers:
    - name: test
      image: nginx
      env:
        - name: FLUENTBIT_API_KEY
          valueFrom:
            secretKeyRef:
              name: fluentbit-secret-test
              key: apikey
</code></pre>
<p>However, remember that Kubernetes Secrets are not encrypted by default, only base64-encoded. Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd.</p>
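<p>The base64 value for the Secret's <code>data</code> field can be produced on the command line. A sketch with a hypothetical key (note <code>printf</code> instead of a plain <code>echo</code>, which would encode a trailing newline into the value):</p>

```shell
# Hypothetical API key -- substitute your real Datadog key.
APIKEY='my-datadog-api-key'
# Encode without a trailing newline (a plain `echo` would corrupt the value).
ENCODED=$(printf '%s' "$APIKEY" | base64)
echo "$ENCODED"
# Round-trip check: decoding must recover the original key.
printf '%s' "$ENCODED" | base64 -d
```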
|
<p>Background: I have two deployments in my k8s cluster called (as an example) the following:</p>
<pre><code>foo-bar
foo-bar-asd
</code></pre>
<p>In my pipeline, I want to implement logic which checks if there's any deployments named "foo-bar" already deployed.</p>
<p>How do I construct my command to only fetch/output "foo-bar", and not "foo-bar-asd" ?</p>
<p>I've tried toying around with regex, '^foo-bar$' etc without any success, found through googling/similar stackoverflow-questions.</p>
<p>My current command (which fetches both):</p>
<pre><code>kubectl -n my-namespace get deployments | grep foo-bar
</code></pre>
<p>Thank you in advance for any help!</p>
<p>Edit:
Realized beggars can't be choosers, <em>but</em> on top of just fetching "foo-bar", I need the command to not explicitly mention "foo-bar-asd".
(I'm building a pipeline where this command will be placed, and it's supposed to be generic across different apps. That doesn't explain it fully, but hopefully a little.)</p>
| <p>The <code>grep</code> command can also exclude a given string. You can run something like</p>
<p><code>kubectl -n my-namespace get deployments | grep foo-bar | grep -v foo-bar-asd</code></p>
<p>This should show you what you are looking for.</p>
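<p>If the pipeline has to stay generic (so hard-coding <code>foo-bar-asd</code> in a <code>grep -v</code> isn't an option), matching the deployment name as an exact first column works too. A sketch over simulated <code>kubectl</code> output (the deployment names are hypothetical):</p>

```shell
# Simulated first column of `kubectl get deployments` output.
deployments='foo-bar
foo-bar-asd
other-app'
# awk compares the whole first field, so prefixes like foo-bar-asd don't match.
match=$(printf '%s\n' "$deployments" | awk -v name='foo-bar' '$1 == name')
echo "$match"
```

<p>In the real pipeline you would replace the simulated list with <code>kubectl -n my-namespace get deployments</code> and pass the app name via <code>-v name="$APP"</code>.</p>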
|
<p>I deployed K8S cluster on AWS EKS (nodegroup) with 3 nodes. I'd like to see the pod CIDR for each node but this command returns empty: <code>$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'</code>. Why doesn't it have CIDR in the configuration?</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-1-193.ap-southeast-2.compute.internal Ready <none> 94d v1.21.5-eks-bc4871b
ip-10-0-2-66.ap-southeast-2.compute.internal Ready <none> 22m v1.21.5-eks-bc4871b
ip-10-0-2-96.ap-southeast-2.compute.internal Ready <none> 24m v1.21.5-eks-bc4871b
</code></pre>
<p>Below is one of the node info.</p>
<pre><code>$ kubectl describe node ip-10-0-1-193.ap-southeast-2.compute.internal
Name: ip-10-0-1-193.ap-southeast-2.compute.internal
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t3.large
beta.kubernetes.io/os=linux
eks.amazonaws.com/capacityType=ON_DEMAND
eks.amazonaws.com/nodegroup=elk
eks.amazonaws.com/nodegroup-image=ami-00c56588b2d911d26
failure-domain.beta.kubernetes.io/region=ap-southeast-2
failure-domain.beta.kubernetes.io/zone=ap-southeast-2a
kubernetes.io/arch=amd64
kubernetes.io/hostname=ip-10-0-1-193.ap-southeast-2.compute.internal
kubernetes.io/os=linux
node.kubernetes.io/instance-type=t3.large
topology.ebs.csi.aws.com/zone=ap-southeast-2a
topology.kubernetes.io/region=ap-southeast-2
topology.kubernetes.io/zone=ap-southeast-2a
Annotations: csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-0da5d02f6c203fe6b"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 19 Nov 2021 16:04:37 +1100
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ip-10-0-1-193.ap-southeast-2.compute.internal
AcquireTime: <unset>
RenewTime: Mon, 21 Feb 2022 20:39:23 +1100
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 21 Feb 2022 20:37:46 +1100 Fri, 26 Nov 2021 15:42:06 +1100 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.1.193
Hostname: ip-10-0-1-193.ap-southeast-2.compute.internal
InternalDNS: ip-10-0-1-193.ap-southeast-2.compute.internal
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 2
ephemeral-storage: 20959212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8047100Ki
pods: 35
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 1930m
ephemeral-storage: 18242267924
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7289340Ki
pods: 35
System Info:
Machine ID: ec29e60ae2a5ed86515b0b6e7fe39341
System UUID: ec29e60a-e2a5-ed86-515b-0b6e7fe39341
Boot ID: f75bc84f-fbd5-4414-87c8-669a8b4e3c62
Kernel Version: 5.4.149-73.259.amzn2.x86_64
OS Image: Amazon Linux 2
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.5-eks-bc4871b
Kube-Proxy Version: v1.21.5-eks-bc4871b
ProviderID: aws:///ap-southeast-2a/i-0da5d02f6c203fe6b
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
cert-manager cert-manager-68ff46b886-ndnm8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
cert-manager cert-manager-cainjector-7cdbb9c945-bzfx2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
cert-manager cert-manager-webhook-58d45d56b8-2mr76 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
default elk-es-node-1 1 (51%) 100m (5%) 4Gi (57%) 50Mi (0%) 32m
default my-nginx-5b56ccd65f-sndqv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18m
elastic-system elastic-operator-0 100m (5%) 1 (51%) 150Mi (2%) 512Mi (7%) 89d
kube-system aws-load-balancer-controller-9c59c86d8-86ld2 100m (5%) 200m (10%) 200Mi (2%) 500Mi (7%) 89d
kube-system aws-node-mhqp6 10m (0%) 0 (0%) 0 (0%) 0 (0%) 94d
kube-system cluster-autoscaler-76fd4db4c-j59vm 100m (5%) 100m (5%) 600Mi (8%) 600Mi (8%) 89d
kube-system coredns-68f7974869-2x4qc 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 89d
kube-system coredns-68f7974869-wfhzq 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 89d
kube-system ebs-csi-controller-7584b68c57-ksvkc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
kube-system ebs-csi-controller-7584b68c57-rkbq4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 89d
kube-system ebs-csi-node-nxfkz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 94d
kube-system kube-proxy-zcqg4 100m (5%) 0 (0%) 0 (0%) 0 (0%) 94d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1610m (83%) 1400m (72%)
memory 5186Mi (72%) 2002Mi (28%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events: <none>
</code></pre>
| <p>From the kubelet documentation I can see that it is only used for standalone configuration:
<a href="https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/</a></p>
<p><a href="https://i.stack.imgur.com/VegKe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VegKe.png" alt="enter image description here" /></a></p>
|
<p>I have two below docker commands, can someone let me know the equivalent <code>kubernetes</code> (<code>kubectl</code>) command for single node cluster :</p>
<pre><code>docker image prune -a --force --filter until=5m
docker container prune --force --filter until=5m
</code></pre>
<p>Also, are these two commands doing the same thing? I understand that the first one deletes images and the second one deletes containers, but I am not sure what the extra params like <code>-a</code>, <code>--force</code> and <code>--filter</code> do.</p>
| <blockquote>
<p>can someone let me know the equivalent kubernetes (kubectl) command for single node cluster</p>
<pre><code>docker image prune -a --force --filter until=5m
docker container prune --force --filter until=5m
</code></pre>
</blockquote>
<p>You don't directly interact with the docker containers or images on the nodes themselves, and you shouldn't have to. There is no direct equivalent in kubectl. If you wanted to run a prune on the nodes (for example, to save space), then you could set up a cron job on the node itself to do this -- Kubernetes does not support this functionality.</p>
<p>If you are trying to do this to force Kubernetes to pull a new version of the image and not used the cached version, then you can just use the <code>imagePullPolicy</code> directive (<a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">documentation</a>) to force Kubernetes to attempt to pull a new version each time</p>
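<p>As a sketch, forcing a fresh pull on every Pod start looks like this (the container name and image are placeholders):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app        # placeholder name
spec:
  containers:
    - name: my-app    # placeholder name
      image: my-registry/my-app:latest   # placeholder image
      # Always check the registry for a newer image on Pod start,
      # instead of reusing a locally cached copy.
      imagePullPolicy: Always
```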
<p>What are you trying to achieve by pruning the images on the nodes?</p>
<blockquote>
<p>Also are these two commands doing same thing ?</p>
</blockquote>
<p>No, they are not doing the same thing. Docker works with images. You build or pull an image, and to run it, you create a container. You can have multiple containers running from the same image, but only one copy of the image itself.</p>
<p><code>--force</code> will not prompt for confirmation before deleting (so the usual "be careful" warning applies here)</p>
<p><code>-a</code> or <code>--all</code> will remove all unused images, not just dangling ones. <a href="https://stackoverflow.com/a/45143234/2017590">This answer</a> explains the difference between a dangling and unused image.</p>
<p>You can also use the <code>--help</code> switch to get details on what the commands do and their accepted options:</p>
<pre><code>$ docker image prune --help
Usage: docker image prune [OPTIONS]
Remove unused images
Options:
-a, --all Remove all unused images, not just dangling ones
--filter filter Provide filter values (e.g. 'until=<timestamp>')
-f, --force Do not prompt for confirmation
</code></pre>
<pre><code>$ docker container prune --help
Usage: docker container prune [OPTIONS]
Remove all stopped containers
Options:
--filter filter Provide filter values (e.g. 'until=<timestamp>')
-f, --force Do not prompt for confirmation
</code></pre>
|
<p>I am in the process of deploying ActiveMQ 5.15 in HA on Kubernetes. Previously I was using a deployment and a <code>clusterIP</code> Service. And it was working fine. The master will boot up and the slave will wait for the lock to be acquired. If I delete the pod which is the master one, the slave picks up and becomes the master.</p>
<p>Now I want to try with <code>statefulset</code> basing myself on <a href="https://stackoverflow.com/questions/57289381/activemq-on-kuberenetes-with-shared-storage">this thread</a>.
Deployment is done successfully and two pods were created with <code>id0</code> and <code>id1</code>. But what I noticed is that both pods were master. They were both started. I noticed also that two PVC were created <code>id0</code> and <code>id1</code> in the case of <code>Statefulset</code> compared to <code>deployment</code> which had only 1 PVC. Could that be the issue since it is no more a shared storage? Can we still achieve a master/slave setup with <code>Statefulset</code>?</p>
| <blockquote>
<p>I noticed also that two PVC were created id0 and id1 in the case of statefulset compared to deployment which had only 1 PVC. Could that be the issue since it is no more a shared storage?</p>
</blockquote>
<p>You are right. When using k8s StatefulSets each Pod gets its own persistent storage (dedicated PVC and PV), and this persistent storage is not shared.</p>
<p>When a Pod gets terminated and is rescheduled on a different Node, the Kubernetes controller will ensure that the Pod is associated with the same PVC which will guarantee that the state is intact.</p>
<p>In your case, to achieve a master/slave setup, consider using a <strong>shared network location / filesystem for persistent storage</strong> like:</p>
<ul>
<li><a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner" rel="nofollow noreferrer">NFS storage</a> for on-premise k8s cluster.</li>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-persistent-storage/" rel="nofollow noreferrer">AWS EFS</a> for EKS.</li>
<li>or <a href="https://learn.microsoft.com/en-us/azure/aks/azure-files-volume" rel="nofollow noreferrer">Azure Files</a> for AKS.</li>
</ul>
<p>Check the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">complete list of PersistentVolume types currently supported by Kubernetes</a> (implemented as plugins).</p>
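<p>For example, with an NFS-backed StorageClass the shared volume could be declared once with ReadWriteMany access and mounted by both broker Pods. This is a sketch; the claim name, StorageClass, and size are assumptions:</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: activemq-shared-data
spec:
  accessModes:
    - ReadWriteMany            # both master and slave mount the same volume
  storageClassName: nfs-client # assumed NFS-backed StorageClass
  resources:
    requests:
      storage: 10Gi            # assumed size
```

<p>With a single shared claim like this, the KahaDB lock file lives on the shared filesystem again, so only one broker can hold it at a time.</p>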
|
<p>What is the difference between <code>minikube start</code> and <code>minikube create cluster</code>?</p>
<p>Do they both create a cluster?</p>
| <p><code>minikube start</code> starts a cluster named <code>minikube</code> by default (creating it first if it does not exist).</p>
<p><code>minikube create cluster</code>: which version are you using? No such command exists in the official documentation: <a href="https://minikube.sigs.k8s.io/docs/commands/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/commands/</a></p>
<p><a href="https://i.stack.imgur.com/BQGjS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BQGjS.png" alt="enter image description here" /></a></p>
|
<p>I have a Kubernetes Cluster and I've been trying to forward logs to Splunk with this <a href="https://github.com/splunk/splunk-connect-for-kubernetes#prerequisites" rel="nofollow noreferrer">splunk-connect-for-kubernetes</a> repo which is essentially Splunk's own kubernetes-oriented configuration of fluentd.</p>
<p>I could initially see logs in Splunk, but they appeared to be only from system components, not from the pods I needed.</p>
<p>I think I tracked the problem down to the global <code>values.yaml</code> file. I experimented a bit with the <code>fluentd path</code> and <code>containers path</code> and found that I likely needed to update the <code>containers pathDest</code> to the same file path as the pod logs.</p>
<p>It looks like something like this now:</p>
<pre><code>fluentd:
  # path of logfiles, default /var/log/containers/*.log
  path: /var/log/containers/*.log
  # paths of logfiles to exclude. object type is array as per fluentd specification:
  # https://docs.fluentd.org/input/tail#exclude_path
  exclude_path:
  #  - /var/log/containers/kube-svc-redirect*.log
  #  - /var/log/containers/tiller*.log
  #  - /var/log/containers/*_kube-system_*.log (to exclude `kube-system` namespace)

# Configurations for container logs
containers:
  # Path to root directory of container logs
  path: /var/log
  # Final volume destination of container log symlinks
  pathDest: /app/logs
</code></pre>
<p>But now I can see repeated messages like the following in the logs of my <code>splunk-connect</code> pod:</p>
<pre><code>[warn]: #0 [containers.log] /var/log/containers/application-0-tcc-broker-0_application-0bb08a71919d6b.log unreadable. It is excluded and would be examined next time.
</code></pre>
| <p>I had a very similar problem once and changing the path in the values.yaml file helped to solve the problem. It is perfectly described in <a href="https://community.splunk.com/t5/All-Apps-and-Add-ons/splunk-connect-for-kubernetes-var-log-containers-log-unreadable/m-p/473943" rel="nofollow noreferrer">this thread</a>:</p>
<blockquote>
<p>Found the solution for my question -</p>
<p>./splunk-connect-for-kubernetes/charts/splunk-kubernetes-logging/values.yaml: path: <code>/var/log/containers/*.log </code>
Changed to:<br />
path: <code>/var/log/pods/*.log</code> works to me.</p>
</blockquote>
<p>The cited answer may not be readable. Just try changing <code>/var/log/containers/*.log</code> to <code>/var/log/pods/*.log</code> in your file.</p>
<p>See also <a href="https://stackoverflow.com/questions/51671212/fluentd-log-unreadable-it-is-excluded-and-would-be-examined-next-time">this similar question on stackoverflow</a>.</p>
|
<p>I am new to Kubernetes Helm charts. There is a yaml file named configmap. This file contains all the configuration related to the application. Since this file contains a lot of data, I tried moving some data into new files and accessing it using the Files object. So I created two different files named:
<strong>data1.yaml</strong> and <strong>data2.yaml</strong> <br>
The <strong>data1.yaml</strong> file has only static data. On the other hand, the <strong>data2.yaml</strong> file contains dynamic data (some variables too, like <code>$.Values.appUrl</code>).
I am able to read the static file (data1.yaml) into the configmap.yaml file using the Files object. I am also able to read the data2.yaml file, but since it contains some variables, the variable value is not replaced by the actual value; it prints the same variable in the configmap file. So my question is:
Is there any way to include a dynamic file (one that contains variables)? <br>
Below is the example data shown.<br><br>
configmap.yaml file is-></p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
  name: example-configmap
  namespace: default
data1: {{ .Files.Get "data1.yaml" | indent 2 }}
data2: {{ .Files.Get "data2.yaml" | indent 2 }}
</code></pre>
<p>data1.yaml file is -></p>
<pre><code>data1:
  ui.type:2
  ui.color1:red
  ui.color2:green
</code></pre>
<p>data2.yaml file is -></p>
<pre><code>system.data.name: "app-name"
system.data.url: {{ $.Values.appUrl }} # variable
system.data.type_one: "app-type-xxx"
system.data.value: "3"
system.interface.properties: |
</code></pre>
<p>Values.yaml file is -></p>
<pre><code>appUrl: "https://app-name.com"
</code></pre>
<p>Output:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
  name: example-configmap
  namespace: default
data1:
  ui.type:2
  ui.color1:red
  ui.color2:green
data2:
  system.data.name: "app-name"
  system.data.url: {{ $.Values.appUrl }} # here should be "https://app-name.com"
  system.data.type_one: "app-type-xxx"
  system.data.value: "3"
  system.interface.properties: |
</code></pre>
| <pre><code>{{ (tpl (.Files.Glob "data2.yaml").AsConfig . ) | indent 2 }}
</code></pre>
<p>Using the above syntax, it picks up the actual values of the variables, but it also prints the file name as a key, like below:</p>
<pre><code>data2.yaml: |-
</code></pre>
<p>So I resolved the issue by using the syntax below:</p>
<pre><code>{{ (tpl (.Files.Get "data2.yaml") . ) | indent 2 }}
</code></pre>
|
<p>I'm running a <code>flask</code> application with <code>gunicorn</code> and <code>gevent</code> worker class. In my own test environment, I follow the official guide <code>multiprocessing.cpu_count() * 2 + 1</code> to set worker number.</p>
<p>If I want to put the application on Kubernetes' pod and assume that resources will be like</p>
<pre><code>resources:
limits:
cpu: "10"
memory: "5Gi"
requests:
cpu: "3"
memory: "3Gi"
</code></pre>
<p>how to calculate the worker number? should I use limits CPU or requests CPU?</p>
<hr />
<p>PS. I'm launching the application via a binary file packaged by <code>pyinstaller</code>, in essence <code>flask run</code> (<code>python script.py</code>), and launching gunicorn in the main thread:</p>
<pre><code>def run():
...
if config.RUN_MODEL == 'GUNICORN':
sys.argv += [
"--worker-class", "gevent",
"-w", config.GUNICORN_WORKER_NUMBER,
"--worker-connections", config.GUNICORN_WORKER_CONNECTIONS,
"--access-logfile", "-",
"--error-logfile", "-",
"-b", "0.0.0.0:8001",
"--max-requests", config.GUNICORN_MAX_REQUESTS,
"--max-requests-jitter", config.GUNICORN_MAX_REQUESTS_JITTER,
"--timeout", config.GUNICORN_TIMEOUT,
"--access-logformat", '%(t)s %(l)s %(u)s "%(r)s" %(s)s %(M)sms',
"app.app_runner:app"
]
sys.exit(gunicorn.run())
if __name__ == "__main__":
run()
</code></pre>
<hr />
<p>PS. Whether I set the worker number by <code>limits CPU (10*2+1=21)</code> or <code>requests CPU (3*2+1=7)</code>, the performance still can't meet my expectations. Any suggestions to improve performance will be welcome under this question.</p>
| <blockquote>
<p>how to calculate the worker number? should I use limits CPU or requests CPU?</p>
</blockquote>
<p>It depends on your situation. First, look at the documentation about <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">requests and limits</a> (this example is for memory, but the same applies to CPU).</p>
<blockquote>
<p>If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resource than its <code>request</code> for that resource specifies. However, a container is not allowed to use more than its resource <code>limit</code>.</p>
<p>For example, if you set a <code>memory</code> request of 256 MiB for a container, and that container is in a Pod scheduled to a Node with 8GiB of memory and no other Pods, then the container can try to use more RAM.</p>
<p>If you set a <code>memory</code> limit of 4GiB for that container, the kubelet (and <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes" rel="nofollow noreferrer">container runtime</a>) enforce the limit. The runtime prevents the container from using more than the configured resource limit. For example: when a process in the container tries to consume more than the allowed amount of memory, the system kernel terminates the process that attempted the allocation, with an out of memory (OOM) error.</p>
</blockquote>
<p>Answering your question: first of all, you need to know how many resources (e.g. CPU) your application needs. The request is the minimum amount of CPU that the application is guaranteed to receive (you have to determine this value yourself; in other words, you must know the minimum CPU the application needs to run properly, and set the request to that value). If your application performs better when it receives more CPU, consider adding a limit (the maximum amount of CPU the container can receive). If you want to calculate the worker number for the highest possible performance, use the <code>limit</code> to calculate the value. If, on the other hand, you want your application to run smoothly (perhaps not as fast as possible, but consuming fewer resources), use the <code>request</code> value.</p>
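<p>The rule of thumb above can be sketched in Python. Reading the effective CPU limit from cgroup files is a best-effort assumption here (paths differ between cgroup v1 and v2, and <code>multiprocessing.cpu_count()</code> would see the node's CPUs, not the container's limit):</p>

```python
import math


def cpu_limit_from_cgroup(default=1.0):
    """Best-effort read of the container's CPU limit (cgroup v2, then v1)."""
    try:
        # cgroup v2: file contains "<quota> <period>" or "max <period>"
        with open("/sys/fs/cgroup/cpu.max") as f:
            quota, period = f.read().split()
            if quota != "max":
                return int(quota) / int(period)
    except OSError:
        pass
    try:
        # cgroup v1: quota of -1 means "no limit"
        with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
            quota = int(f.read())
        with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f:
            period = int(f.read())
        if quota > 0:
            return quota / period
    except OSError:
        pass
    return default


def worker_count(cpus):
    """Gunicorn's rule of thumb: 2 * CPUs + 1."""
    return 2 * math.ceil(cpus) + 1
```

<p>With the resources from the question, <code>worker_count(3)</code> (requests) gives 7 and <code>worker_count(10)</code> (limits) gives 21.</p>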
|
<p>Recently, the same container in several pods of a deployment restarted with an <code>OOMKilled</code> event.
Here is the description of one of the containers:</p>
<pre><code>State: Running
Started: Tue, 15 Feb 2022 23:33:06 +0000
Last State: Terminated
Reason: OOMKilled
Exit Code: 1
Started: Fri, 11 Feb 2022 17:48:21 +0000
Finished: Tue, 15 Feb 2022 23:33:05 +0000
Ready: True
Restart Count: 1
Limits:
cpu: 1
memory: 512Mi
Requests:
cpu: 1
memory: 512Mi
</code></pre>
<p>If the container had exceeded the available memory limit, it would exit with code <code>137</code>. And I guess the container did not reach the limit. So my question is: what could happen if the exit code is 1 and the Reason is <code>OOMKilled</code>?</p>
<p><strong>Update:</strong>
The process is actually a python app which has threads, this is the code:</p>
<pre><code>ret = subprocess.run(args, stderr=subprocess.PIPE, universal_newlines=True, check=False)
if ret.returncode != 0:
logging.warning("Executing cmd failed: %s, code: %d, stderr: %s", cmd, ret.returncode, ret.stderr)
raise Exception("Failed")
</code></pre>
<p>and the relevant logs when called, it return with <code>-9</code>:</p>
<pre><code>2022-02-15T23:33:30.510Z WARNING "MainThread - Executing cmd failed: iptables-restore -n -w 3 restore-filter, code: -9, stderr: "
raise Exception("Failed")
Exception: Failed
</code></pre>
<p>from the description of subprocess.run(): <code>A negative value -N indicates that the child was terminated by signal N (POSIX only).</code></p>
<p>So, because the exception was raised, the Python code exited with 1? Probably.</p>
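<p>The sign convention can be reproduced in a few lines (POSIX only). Here the parent process itself sends the SIGKILL that the OOM killer would normally send:</p>

```python
import signal
import subprocess
import sys

# Mimic the OOM killer: start a child process, then terminate it with
# SIGKILL (signal 9) and inspect the return code subprocess reports.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
proc.send_signal(signal.SIGKILL)
ret = proc.wait()
print(ret)  # -9 on POSIX: the child was terminated by signal 9
```

<p>SIGKILL cannot be caught or handled, so the child always reports <code>-9</code>, matching the <code>code: -9</code> in the logs above.</p>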
| <p>Two possible reasons:</p>
<h4>Reason #1</h4>
<p>A subprocess was killed by the OOM killer (it received SIGKILL(9) from the OOM killer), resulting in the application crashing with <em>exit code 1</em> and an <em>OOMKilled</em> termination reason.</p>
<h4>Reason #2</h4>
<p>If you have <code>initContainers</code> specified, an init container could have been killed by the OOM killer, resulting in an <em>OOMKilled</em> reason, with the application crashing with <em>exit code 1</em> due to bad initialization.</p>
<hr />
<p>OOM kill is not very well documented in Kubernetes docs. For example</p>
<blockquote>
<p>Containers are marked as OOM killed only when the init pid gets killed by the kernel OOM killer. There are apps that can tolerate OOM kills of non init processes and so we chose to not track non-init process OOM kills.
<sup>[<a href="https://github.com/kubernetes/kubernetes/issues/50632#issuecomment-322849618" rel="nofollow noreferrer">source</a>]</sup></p>
</blockquote>
<p>I could not find any mention of it anywhere other than this GitHub issue.</p>
<hr />
<p>The first reason is more probable in my opinion.<br />
Possible solution is to increase memory limits (if you have any).</p>
|
<p>I am using Kubernetes (on Windows 10, via Docker Desktop).</p>
<p>I am using MySQL, which is run by Helm 3 (loaded from the Bitnami repository).</p>
<p>I am creating another application.
For now, I am testing on docker (not in kubernetes yet).</p>
<p>Everything is fine, except when trying to connect to the database from my project.
(BTW, the project works fine, but not when running on Docker.)</p>
<p>Something like:</p>
<pre><code>docker run --name test-docker --rm my-image:tag --db "root:12345@tcp(127.0.0.1:3306)/test"
</code></pre>
<p>(db is a parameter used to connect to the db).</p>
<p>I get the message:</p>
<pre><code>2022-02-21T12:18:17.205Z FATAL failed to open db: could not setup schema: cannot create jobs table: dial tcp 127.0.0.1:3306: connect: connection refused
</code></pre>
<p>I have investigated a little, and found that the problem may be that the containers need to run on the same network.
(Nonetheless, they are both containers, even though one is run by the Helm tool for K8S.)</p>
<p>This is described in:
<a href="https://www.digitalocean.com/community/tutorials/how-to-inspect-kubernetes-networking" rel="nofollow noreferrer">kubernetes networking</a></p>
<p>When I run:</p>
<pre><code>nsenter -t your-container-pid -n ip addr
</code></pre>
<p>the pid's directory does not exist, so I get the message:</p>
<pre><code>/proc/<pid>/ns/net - No such file or directory
</code></pre>
<p>How can I run my project so that it can use the MySQL instance (running in containers on K8S)?</p>
<p>Thanks.</p>
| <p>Docker containers are isolated from other containers and the external network by default. There are several options to establish connection between Docker containers:</p>
<ul>
<li><p>Docker sets up a default <code>bridge</code> network automatically, through which the communication is possible between containers and between containers and the host machine. Both your containers should be on the <code>bridge</code> network - for container with your project to connect to your DB container by referring to it's name. More details on this approach and how it can be set up is <a href="https://docs.docker.com/network/network-tutorial-standalone/#use-the-default-bridge-network" rel="nofollow noreferrer">here</a>.</p>
</li>
<li><p>You can also create user-defined bridge network - basically, your own custom bridge network - and attach your Docker containers to it. In this way, both containers won't be connected to the default <code>bridge</code> network at all. Example of this approach is described in details <a href="https://docs.docker.com/network/network-tutorial-standalone/#use-user-defined-bridge-networks" rel="nofollow noreferrer">here</a>.</p>
<ol>
<li>First, user-defined network should be created:</li>
</ol>
<pre><code>docker network create <network-name>
</code></pre>
<ol start="2">
<li>List your newly created network and check with <code>inspect</code> command its IP address and that no containers are connected to it:</li>
</ol>
<pre><code>docker network ls
docker network inspect <network-name>
</code></pre>
<ol start="3">
<li>You can either connect your containers on their start with <code>--network</code> flag:</li>
</ol>
<pre><code>docker run -dit --name <container-name1> --network <network-name> <image1>
docker run -dit --name <container-name2> --network <network-name> <image2>
</code></pre>
<p>Or attach running containers by their name or by their ID to your newly created network by <code>docker network connect</code> - more options are listed <a href="https://docs.docker.com/engine/reference/commandline/network_connect/" rel="nofollow noreferrer">here</a>:</p>
<pre><code>docker network connect <network-name> <container-name1>
docker network connect <network-name> <container-name2>
</code></pre>
<ol start="4">
<li>To verify that your containers are connected to the network, check again the <code>docker network inspect</code> command.</li>
</ol>
</li>
</ul>
<p>Once connected in network, containers can communicate with each other, and you can connect to them using another container’s IP address or name.</p>
<p><strong><strong>EDIT</strong>:</strong> As suggested by @Eitan, when referring to the network instead of a changing IP address in <code>root:12345@tcp(127.0.0.1:3306)/test</code>, special DNS name <code>host.docker.internal</code> can be used - it resolves to the internal IP address used by the host.</p>
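<p>As a sketch (the container names here are hypothetical), once both containers share a network, the app's connection string points at the DB container's name instead of 127.0.0.1:</p>

```shell
# Hypothetical setup steps (commented out; they require a Docker daemon):
#   docker network create app-net
#   docker network connect app-net my-db-container
#   docker network connect app-net my-app-container
# The connection string then uses the DB container's name, not 127.0.0.1:
DB_HOST=my-db-container
DSN="root:12345@tcp(${DB_HOST}:3306)/test"
echo "$DSN"
```

<p>Docker's embedded DNS on user-defined networks resolves the container name to its current IP, so the DSN keeps working even if the container's IP changes.</p>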
|
<p>I am setting up nginx autoscaling based on CPU.</p>
<p>Setup on the nginx deployment is:</p>
<pre><code> resources:
limits:
cpu: "2"
memory: 1000Mi
requests:
cpu: 100m
memory: 100Mi
</code></pre>
<p>When I check <code>kubectl top pod</code></p>
<p>I see I have 4 pods. Each pod is using 2m. So that is 8m total. Can someone explain to me how when I check the HPA it shows 44%/50% is utilized? That math is definitely wrong.</p>
<p>HPA config:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: api-svc-nginx-ingress
namespace: api-svc
spec:
maxReplicas: 5
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: nginx-ingress
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
</code></pre>
<p>If I manually scale down the deployment to 1 pod I see that the utilization drops to 8%/50%... Not sure what's wrong. Is the metrics server broken?</p>
<p>So it turns out I was using an additional metric that wasn't working (it was a custom metric that didn't have a source in the metrics server), and I think that kept the metrics from updating.</p>
<p>I had another</p>
<pre><code> metrics:
- type: Object
</code></pre>
<p>that was showing up on the HPA targets output. Once I removed that metric, the HPA seemed to work fine.</p>
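<p>For reference, once the broken metric is removed, the HPA computes CPU utilization against the pod <em>requests</em> (not limits). A minimal sketch of the math, using the numbers from the question:</p>

```python
def hpa_cpu_utilization(usages_m, request_m):
    """HPA resource utilization: mean per-pod usage divided by the
    per-pod CPU request, expressed as a percentage."""
    return 100 * (sum(usages_m) / len(usages_m)) / request_m

# One pod using 8m against a 100m request, as in the question:
print(round(hpa_cpu_utilization([8], 100)))           # 8 (the 8%/50% reading)

# Four pods each using 2m against 100m requests:
print(round(hpa_cpu_utilization([2, 2, 2, 2], 100)))  # 2, nowhere near 44
```

<p>That gap between the expected 2% and the observed 44% is what the broken custom metric was causing.</p>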
|
<p>I have changed the <code>terminationGracePeriodSeconds</code> from 30 to 120 in the Kubernetes deployment manifest file, but when I deploy using helm:</p>
<pre><code>helm upgrade --install <chartname> --values <valuesfilename>
</code></pre>
<p>The old pods get terminated immediately and new pods start running.</p>
<p>But the expected behavior is for the old pods to stay in the terminating state and continue their current processes for 120 seconds, as defined.</p>
<p>What else could be missing here?</p>
<p>Does this solve my issue here?</p>
<pre><code>containers:
- name: containername
lifecycle:
preStop:
exec:
command: [ "/bin/sleep", "20" ]
</code></pre>
<p>One question I had: does adding the sleep command stop execution of the pod's current processes, so that it just sleeps while it is in the terminating state?</p>
<p>The Kubernetes parameter <code>terminationGracePeriodSeconds</code> decides how long Kubernetes waits until it forcefully kills your container. In other words, when Kubernetes wants to terminate your Pod, it does the following:</p>
<ol>
<li>Kubernetes executes your <code>PreStop</code> hook (if one is defined)</li>
<li>Kubernetes sends <code>SIGTERM</code> to your container</li>
<li>Kubernetes waits for the container to stop normally, for at most <code>terminationGracePeriodSeconds</code> counted from the start of termination (so time spent in the <code>PreStop</code> hook counts against this budget)</li>
<li>If the container is still running after that, Kubernetes sends <code>SIGKILL</code> to it and therefore terminates it forcefully</li>
</ol>
<p>Note that the <code>PreStop</code> hook and the container's <code>SIGTERM</code> handling are not executed at the same time, but one after the other: the hook must complete before the <code>SIGTERM</code> signal is sent.</p>
<p>Read more at <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-execution" rel="nofollow noreferrer">Kubernetes Hook Handler Execution</a>.</p>
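<p>Putting both pieces together in a manifest (a sketch; the Pod name and image are placeholders), the grace period is the total budget for the hook plus the <code>SIGTERM</code> handling:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo      # hypothetical name
spec:
  terminationGracePeriodSeconds: 120   # total budget: preStop + SIGTERM handling
  containers:
  - name: app
    image: nginx                       # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sleep", "20"]  # completes before SIGTERM is sent
```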
|
<p>I want to remove pod that I deployed to my cluster with <code>helm install</code>.</p>
<p>I used 3 ways to do so:</p>
<ol>
<li><code>helm uninstall <release name></code> -> remove the pod from the cluster and from the helm list</li>
<li><code>helm delete <release name></code> -> remove the pod from the cluster and from the helm list</li>
<li><code>kubectl delete -n <namespace> deploy <deployment name></code> -> remove the pod from the cluster but not from the helm list</li>
</ol>
<p>What's the difference between them?
Is one better practice than the other?</p>
| <p><code>helm delete</code> is an alias for <code>helm uninstall</code> and you can see this when you check the <code>--help</code> syntax:</p>
<pre><code>$ helm delete --help
...
Usage:
helm uninstall RELEASE_NAME [...] [flags]
</code></pre>
<p><code>kubectl delete ...</code> just removes the resource in the cluster.</p>
<p>Doing <code>helm uninstall ... </code> won't just remove the pod, but it will remove all the resources created by helm when it installed the chart. For a single pod, this might not be any different to using <code>kubectl delete...</code> but when you have tens or hundreds of different resources and dependent charts, doing all this manually by doing <code>kubectl delete...</code> becomes cumbersome, time-consuming and error-prone.</p>
<p>Generally, if you're deleting something off of the cluster, use the same method you used to install it in the first place. Namely, if you used <code>helm install</code> or <code>helm upgrade --install</code> to install it into the cluster, use <code>helm uninstall</code> to remove it (or <code>helm delete --purge</code> if you're still running Helm v2); and if you used <code>kubectl create</code> or <code>kubectl apply</code>, use <code>kubectl delete</code> to remove it.</p>
|
<p>We are using the Job <code>ttlSecondsAfterFinished</code> attribute to automatically clean up finished jobs. When we had a very small number of jobs (10-50), the jobs (and their pods) would get cleaned up approximately 60 seconds after completion. However, now that we have ~5000 jobs running on our cluster, it takes 30 + minutes for a Job object to get cleaned after completion.</p>
<p>This is a problem because although the Jobs are just sitting there, not consuming resources, we do use a <code>ResourceQuota</code> (selector count/jobs.batch) to control our workload, and those completed jobs are taking up space in the <code>ResourceQuota</code>.</p>
<p>I know that jobs only get marked for deletion once the TTL has passed, and are not guaranteed to be deleted immediately then, but 30 minutes is a very long time. What could be causing this long delay?</p>
<p>Our logs indicate that our k8s API servers are not under heavy load, and that API response times are reasonable.</p>
| <p><strong>Solution 1</strong></p>
<p>How do you use the Job <code>ttlSecondsAfterFinished</code>? You can specify <code>.spec.ttlSecondsAfterFinished</code> to the value what you need. Below is the example from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">official documentation</a></p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi-with-ttl
spec:
ttlSecondsAfterFinished: 100
template:
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
</code></pre>
<p>And please note this:</p>
<blockquote>
<p>Note that the TTL period, e.g. .spec.ttlSecondsAfterFinished field of Jobs, can be modified after the job is created or has finished. However, once the Job becomes eligible to be deleted (when the TTL has expired), the system won't guarantee that the Jobs will be kept, even if an update to extend the TTL returns a successful API response.</p>
</blockquote>
<p>For more information: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/#updating-ttl-seconds" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/#updating-ttl-seconds</a></p>
<p><strong>Solution 2</strong></p>
<p>As it mentioned above in the comment field, you can try to play with <code>kube-controller-manager</code> and increase the number of TTL-after-finished controller workers that are allowed to sync concurrently by using the following <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">flag option</a>:</p>
<p><code>kube-controller-manager --concurrent-ttl-after-finished-syncs int32 Default: 5</code></p>
|
<p>I've been having loads of issues with <code>kubectl</code> not having enough space. How can I increase the default storage size allocated to <code>docker</code>?</p>
<p>None of minikube's recommendations worked.</p>
<pre><code>1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
2. Increase the storage allocated to Docker for Desktop by clicking on:
Docker icon > Preferences > Resources > Disk Image Size
3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
</code></pre>
<p>And the second is not possible from command line...</p>
| <p>Taking your comment into consideration</p>
<blockquote>
<p>I get ImagePullBackOff when I try to deploy nginx on the cluster –
Caterina</p>
</blockquote>
<p>You can specify minikube's disk allocations separately:</p>
<p><code>minikube start --memory=8192 --cpus=4 --disk-size=50g</code></p>
<p>Which can help you to work around the disk space issues as the default is significantly smaller: <code> --disk-size string Disk size allocated to the minikube VM (format: <number>[<unit>], where unit = b, k, m or g). (default "20000mb")</code></p>
|
<p>I have changed the <code>terminationGracePeriodSeconds</code> from 30 to 120 in the Kubernetes deployment manifest file, but when I deploy using helm:</p>
<pre><code>helm upgrade --install <chartname> --values <valuesfilename>
</code></pre>
<p>The old pods get terminated immediately and new pods start running.</p>
<p>But the expected behavior is for the old pods to stay in the terminating state and continue their current processes for 120 seconds, as defined.</p>
<p>What else could be missing here?</p>
<p>Does this solve my issue here?</p>
<pre><code>containers:
- name: containername
lifecycle:
preStop:
exec:
command: [ "/bin/sleep", "20" ]
</code></pre>
<p>One question I had: does adding the sleep command stop execution of the pod's current processes, so that it just sleeps while it is in the terminating state?</p>
| <p>Basically, it is expected behavior for <code>terminationGracePeriodSeconds</code>, as it is <em>optional</em> duration in seconds the pod needs to terminate gracefully.</p>
<p>On pod termination - the lifecycle is described <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">here</a> - pod receives <code>terminating</code> state and it is removed from the endpoints list of all services and stops getting new traffic. <code>SIGTERM</code> signal is immediately sent to the processes running in pod, and Kubernetes is waiting for the pod to stop normally on its own
during the grace period specified in <code>terminationGracePeriodSeconds</code> option. As soon as <code>terminationGracePeriodSeconds</code> expires, the Pod is forcefully killed by <code>SIGKILL</code> signal. In your case, the processes in the pod were just shutdown normally before 120 seconds of grace period have passed.</p>
<p>In its turn, <code>preStop</code> hook is called immediately before a pod is terminated, which means that the hook will be executed prior to <code>kubectl</code> sends <code>SIGTERM</code> to the pod. As stated in <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">documentation</a>, the hook <strong>must</strong> complete before the <code>TERM</code> signal to stop the container is sent. <code>terminationGracePeriodSeconds</code> is happening in parallel with the preStop hook and the <code>SIGTERM</code> signal and its countdown begins before the <code>preStop</code> hook is executed. The pod will eventually terminate within the pod's termination grace period, regardless of the hook completion.</p>
<p>Therefore, on receiving <code>sleep</code> command for <code>preStop</code> hook, pod already marked as <code>Terminating</code> but the processes inside of it are not yet terminated - so the container can complete all the active requests during this period.</p>
|
<p>I got to know that "krew" is the well-known kubectl plugin manager.</p>
<p>So I want to install the "krew" in my clusters.</p>
<p>But for some reasons, my clusters must be disconnected from the public internet.</p>
<p>I installed Kubernetes in my clusters offline (with kubespray).</p>
<p>I pushed the necessary binary files into my cluster through FTP.</p>
<p>In this condition, how can I install "krew"?</p>
<p>I tried to transfer the "krew" binary file to my cluster, but after installation the "krew" binary looked for some files on public GitHub.</p>
<p>Since my cluster cannot reach the public internet, "krew" failed to find those files and the installation also failed.</p>
<p>Can you help me?</p>
| <p>Not sure what you would achieve by having <code>krew</code> installed in an offline environment. <code>krew</code> needs internet access to download the various plugins from their sources.</p>
<p>Although, you may not need to install <code>krew</code> at all. You can manually check the plugin list at this <a href="https://krew.sigs.k8s.io/plugins/" rel="nofollow noreferrer">link</a>. Once you decide which plugin you want to install, download its binary/executable, name it with the <code>kubectl-</code> prefix, and place it in a directory on your <code>$PATH</code>; <code>kubectl</code> will then discover it as a plugin.</p>
<p>Then you may run it in any of the following ways:</p>
<ol>
<li><code>kubectl <plugin-name></code></li>
<li><code>./<plugin-downloaded-executable></code></li>
</ol>
<p>An example of how to make a custom plugin is provided here <a href="https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/#writing-kubectl-plugins" rel="nofollow noreferrer">link</a>. This can be used to replace any downloaded plugin.</p>
<p>Note that this method worked for me for most plugins, but it is not guaranteed to work for all of them, as testing every plugin is not possible.</p>
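<p>A minimal sketch of that manual, offline "install" (the plugin name here is made up): any executable named <code>kubectl-&lt;name&gt;</code> on <code>$PATH</code> is picked up as <code>kubectl &lt;name&gt;</code>.</p>

```shell
# Create a tiny plugin in a throwaway directory and put it on $PATH.
PLUGDIR="$(mktemp -d)"
cat > "$PLUGDIR/kubectl-hello" <<'EOF'
#!/bin/sh
echo "hello from a kubectl plugin"
EOF
chmod +x "$PLUGDIR/kubectl-hello"
export PATH="$PLUGDIR:$PATH"
kubectl-hello   # "kubectl hello" invokes the same file once kubectl is installed
```

<p>Since this needs nothing beyond copying a file, it works in an air-gapped cluster: fetch the plugin binary on a connected machine, then move it over FTP as you did with the other binaries.</p>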
|
<p>I have created my own k8s cluster using Kubeadm with two Ubuntu virtual servers - one master and one worker node. I also deployed a Springboot app with MongoDB persistence and it works absolutely fine. Below are the nodes in my k8s cluster</p>
<p><a href="https://i.stack.imgur.com/Ng1KH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ng1KH.png" alt="enter image description here" /></a></p>
<p>Below are the contents of my yaml file which creates storage class, persistence volume, persistence volume claim, mongo deployment and mongo service</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongo-pv
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
hostPath:
path: "/home/moviepopcorn/mongodb/data"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongo-pvc
spec:
volumeName: mongo-pv
volumeMode: Filesystem
storageClassName: local-storage
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
labels:
app: mongo
spec:
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo:4.4.11
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /home/moviepopcorn/mongodb/data
volumes:
- name: mongo-persistent-storage
persistentVolumeClaim:
claimName: mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
name: mongo-service
spec:
selector:
app: mongo
ports:
- protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p><a href="https://i.stack.imgur.com/or3i5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/or3i5.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/uwn8J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uwn8J.png" alt="enter image description here" /></a></p>
<p>Data stored from my Spring Boot app into MongoDB is not retained once I reboot the nodes in my k8s cluster. Could you please help me understand what mistake I am making?</p>
| <p>You may have changed the <strong>PVC</strong> mount path; because of that, MongoDB does not know where to store its data. By default, <code>mongod</code> keeps its data in <code>/data/db</code>, so the volume should be mounted there (or <code>--dbpath</code> should point at your mount path).</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mongo
name: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo
args: ["--dbpath","/data/db"]
livenessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- mongo
- --disableImplicitSessions
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 6
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-creds
key: username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-creds
key: password
volumeMounts:
- name: "mongo-data-dir"
mountPath: "/data/db"
volumes:
- name: "mongo-data-dir"
persistentVolumeClaim:
claimName: "pvc"
</code></pre>
<p>Try the above example, or check the <strong>args</strong> (<code>--dbpath</code>) and the <strong>mountPath</strong> for reference.</p>
|
<p>Sorry in advance, as this is probably a very easy question to answer; I am pretty new to Prometheus and Grafana, and I am trying to figure out where the metric "container_cpu_usage_seconds_total" in Prometheus is coming from.</p>
<p>I have found online that all metrics starting with "node_" come from the node-exporter pod. So I am just wondering whether this metric comes from Prometheus itself or also from the node exporter, since we currently have no annotations set on our pods but are still getting these metrics in Grafana.</p>
<p>Thanks in advance!</p>
| <p>The metric</p>
<p><strong>container_cpu_usage_seconds_total</strong></p>
<p>comes from <code>cAdvisor</code> service embedded in <code>kubelet</code>, exposed through port <code>10250</code> and endpoint <code>/metrics/cadvisor</code>.</p>
<p>The metric's source code definition is in:</p>
<p><a href="https://github.com/google/cadvisor/blob/364876670b745035739f217c3f1af85301731a43/metrics/prometheus.go#L163" rel="noreferrer">cAdvisor/metrics/prometheus.go</a></p>
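<p>Since the metric is a cumulative counter, dashboards typically wrap it in <code>rate()</code>. A typical query looks like this (the 5m window is just a common choice, not a requirement):</p>

```promql
# Per-pod CPU usage in cores; container!="" filters out the aggregate series
sum by (pod) (rate(container_cpu_usage_seconds_total{container!=""}[5m]))
```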
|
<p>I'm using GKE to manage my app. The app is trying to consume a 3rd-party API, and the 3rd party is whitelisting the IP addresses that are allowed to access their APIs. How can I find what IP address my app uses when consuming the 3rd-party APIs?</p>
| <p>The IP address used by the Pod depends on type of service and a few more factors, which are very well documented in <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">this documentation page</a></p>
<p>There is also IP Masquerade agent which allows you to use Node's IP address to talk to services on other nodes instead of IP of POD, <a href="https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/" rel="nofollow noreferrer">documented here</a></p>
<p>Now coming to solution to your actual problem, you will have to use a NAT Gateway tied to a static IP so that all the outgoing traffic from your cluster will use same IP - no matter which POD/Node it originates from. I found a few guide but YMMV based on which cloud or underlying infrastructure you are on</p>
<ul>
<li><a href="https://devopstales.github.io/cloud/gke-egress/" rel="nofollow noreferrer">GKE and NAT for IP Masking</a></li>
<li><a href="https://docs.aws.amazon.com/eks/latest/userguide/external-snat.html" rel="nofollow noreferrer">AWS Guide for same</a></li>
</ul>
<p>You should search for "Kubernetes IP Masquerade NAT" for instructions specific to your cloud!</p>
|