| prompt | response |
|---|---|
<p>I want to deploy multiple Deployments of Pods with different images, ports, etc. but with very similar other properties. So I want to declare a single <code>deployment.yaml</code> file that looks something like this</p>
<pre><code>{{- range .Values.types }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
...
{{- end }}
</code></pre>
<p>Where my <code>values.yaml</code> is </p>
<pre><code>types:
- foo
- bar
- baz
</code></pre>
<p>However, this only spins up a single Kubernetes Deployment when I <code>helm install</code> because everything is in one template file. Any ideas on how to do this?</p>
| <p>Kubernetes generally uses YAML syntax, and that allows multiple "documents" to be in a single physical file with a <code>---</code> delimiter before each one. Helm in turn generally operates by applying the templating to produce a plain-text file and in effect feeding it to <code>kubectl apply</code>.</p>
<p>The upshot of this is that if you start each Kubernetes object description with the <code>---</code> start-of-document delimiter, it should work:</p>
<pre><code>{{- range .Values.types }}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
...
{{- end }}
</code></pre>
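<p>One caveat: each document produced by the loop also needs a unique <code>metadata.name</code>, otherwise the later documents overwrite the earlier ones on apply. Inside the <code>range</code> block the current list entry is available as <code>.</code> — a sketch (the name prefix built from the release name is illustrative):</p>
<pre><code>{{- range .Values.types }}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ $.Release.Name }}-{{ . }}
...
{{- end }}
</code></pre>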
|
<p>Actually I am trying to combine the following commands into one command</p>
<pre><code>kubectl get pods| grep -oP 'mag[^\s]+'
output
mag121111
</code></pre>
<p>After that I will run the following command</p>
<pre><code>kubectl exec -it mag121111 bash
</code></pre>
<p>Now I am trying the following:</p>
<pre><code> kubectl get pods| grep -oP 'mag[^\s]+' | kubectl exec -it bash
</code></pre>
| <blockquote>
<p>kubectl exec -it $(kubectl get pods| grep -oP 'mag[^\s]+') -- /bin/bash</p>
</blockquote>
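<p>To see how the command substitution composes, here is an offline sketch with <code>kubectl get pods</code> replaced by a stub function that prints output shaped like the question's (the pod listing is made up):</p>

```shell
# Stub standing in for `kubectl get pods`, so the pipeline can be run
# without a cluster.
kubectl_get_pods() {
  printf 'mag121111   1/1   Running   0   5d\nother-pod   1/1   Running   0   2d\n'
}

# grep -oP extracts only the matching pod name; the command substitution
# would then hand this value to `kubectl exec -it $pod -- /bin/bash`.
pod=$(kubectl_get_pods | grep -oP 'mag[^\s]+')
echo "$pod"    # prints: mag121111
```

<p>Note that if more than one pod matches, the substitution expands to several names and <code>kubectl exec</code> fails; piping through <code>head -n1</code> is a common guard.</p>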
|
<p>Actually I am trying to combine the following commands into one command</p>
<pre><code>kubectl get pods| grep -oP 'mag[^\s]+'
output
mag121111
</code></pre>
<p>After that I will run the following command</p>
<pre><code>kubectl exec -it mag121111 bash
</code></pre>
<p>Now I am trying the following:</p>
<pre><code> kubectl get pods| grep -oP 'mag[^\s]+' | kubectl exec -it bash
</code></pre>
| <pre><code>kubectl exec -it $(kubectl get pods| grep -oP 'mag[^\s]+') -- /bin/bash
</code></pre>
<p>OR</p>
<pre><code>kubectl exec -it $(kubectl get pods| grep -oP 'mag[^\s]+') -- bash
</code></pre>
|
<p>Actually I am trying to combine the following commands into one command</p>
<pre><code>kubectl get pods| grep -oP 'mag[^\s]+'
output
mag121111
</code></pre>
<p>After that I will run the following command</p>
<pre><code>kubectl exec -it mag121111 bash
</code></pre>
<p>Now I am trying the following:</p>
<pre><code> kubectl get pods| grep -oP 'mag[^\s]+' | kubectl exec -it bash
</code></pre>
| <p>You can use <a href="https://github.com/thecasualcoder/kube-fzf" rel="nofollow noreferrer">kube-fzf</a>. It makes exec-ing into a pod (container) and port-forwarding super <strong>easy</strong>.</p>
<p>Refer to <a href="https://github.com/thecasualcoder/kube-fzf#execpod-1" rel="nofollow noreferrer">this</a> for <code>execpod</code>.</p>
|
<p>I have a Kubernetes cluster running various apps with different machine types (ie. cpu-heavy, gpu, ram-heavy) and installed cluster-autoscaler (CA) to manage the Auto Scaling Groups (ASG) using auto-discovery.</p>
<p>Kubernetes version: EKS 1.11</p>
<p>Cluster Autoscaler: v1.13.2</p>
<p>I have configured my ASGs such that they contain the appropriate CA tags. These ASGs are usually scaled down to 0 nodes, and will be scaled up according to workload. Going into the CA logs, I have also verified that the CA acknowledges the existence of the ASGs. However, whenever I try to create pods with nodeSelectors, the CA doesn't scale up the corresponding ASG and responds with "predicate failed: nodeSelector(s) did not match". I have added the appropriate node labels to the ASG's tags as well. </p>
<p>Could it be a limitation due to scaling up from 0 nodes? </p>
<p>Can't seem to find similar problems online. Any feedback would help! Thanks!</p>
| <p>Update: CA Documentation update</p>
<p>I had an incorrect ASG tag. Make sure you have the <code>k8s.io/cluster-autoscaler/node-template/label/some-label</code> tag so that the CA can see your node labels when the ASG is scaled down to 0. Currently, the CA doesn't create node labels based on ASG tags, but this feature is in the pipeline. </p>
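<p>For reference, the full auto-discovery tag set on such an ASG looks roughly like this (the cluster name <code>my-cluster</code> and the <code>node-type</code> label/value are placeholders):</p>
<pre><code>k8s.io/cluster-autoscaler/enabled = true
k8s.io/cluster-autoscaler/my-cluster = owned
k8s.io/cluster-autoscaler/node-template/label/node-type = gpu
</code></pre>
<p>The third tag is what lets the CA simulate the node's labels for a group that currently has zero nodes.</p>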
|
<p>I am using kubernetes and istio.</p>
<p>I need to call https service outside my mesh, and this called service using internal CA authority, which mean I need to trust the server side certificate.</p>
<p>Can I trust the certificate on istio level instead of each micro-service?</p>
| <p>You can probably do it using cert-manager and an ingress, which will manage the SSL certificates.</p>
<p>You can find out more at:</p>
<blockquote>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p>
</blockquote>
<p>You can also refer to the Istio documentation, where there is an option for HTTPS on a managed gateway: <a href="https://istio.io/docs/tasks/traffic-management/secure-ingress/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/secure-ingress/</a></p>
|
<p>Actually I am trying to combine the following commands into one command</p>
<pre><code>kubectl get pods| grep -oP 'mag[^\s]+'
output
mag121111
</code></pre>
<p>After that I will run the following command</p>
<pre><code>kubectl exec -it mag121111 bash
</code></pre>
<p>Now I am trying the following:</p>
<pre><code> kubectl get pods| grep -oP 'mag[^\s]+' | kubectl exec -it bash
</code></pre>
| <p>This works for me</p>
<pre><code>kubectl exec -it $(kubectl get pods| grep -oP 'mag[^\s]+') --container magname -- /bin/bash
</code></pre>
<p>Here <code>magname</code> is the actual container name inside the pod (the pod name comes from the command substitution).</p>
|
<p>I understand the difference between ReplicaSet and ReplicationController, of former being Set based and the latter Equality based. What I want to know is why was a newer implementation (Read ReplicaSet) introduced when the older ReplicationController achieves the same functionality.</p>
| <p>ReplicaSets are the updated version of the ReplicationController.</p>
<p>A ReplicationController supports only equality-based selectors.</p>
<p>A ReplicaSet supports set-based selectors.</p>
<p>ReplicaSets also work with Deployments: when you create a simple Deployment in Kubernetes, a ReplicaSet is automatically generated and managed, so the Deployment <code>owns</code> the ReplicaSet. </p>
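<p>For illustration, a set-based selector as used by a ReplicaSet looks like this (the label key and values are placeholders):</p>
<pre><code>selector:
  matchExpressions:
  - key: tier
    operator: In
    values:
    - frontend
    - backend
</code></pre>
<p>A ReplicationController can only express <code>tier = frontend</code> style equality.</p>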
|
<p>kubernetes seems to have lot of objects. I can't seem to find the full list of objects anywhere. After briefly searching on google, I can find results which mention a subset of kubernetes objects. Is the <strong><em>full</em></strong> list of objects documented somewhere, perhaps in source code? Thank you.</p>
| <p>The following command displays all Kubernetes API objects:</p>
<pre><code>kubectl api-resources
</code></pre>
<p><strong>Example</strong></p>
<pre><code>[root@hsk-controller ~]# kubectl api-resources
NAME SHORTNAMES KIND
bindings Binding
componentstatuses cs ComponentStatus
configmaps cm ConfigMap
endpoints ep Endpoints
events ev Event
limitranges limits LimitRange
namespaces ns Namespace
nodes no Node
persistentvolumeclaims pvc PersistentVolumeClaim
persistentvolumes pv PersistentVolume
pods po Pod
podtemplates PodTemplate
replicationcontrollers rc ReplicationController
resourcequotas quota ResourceQuota
secrets Secret
serviceaccounts sa ServiceAccount
services svc Service
initializerconfigurations InitializerConfiguration
mutatingwebhookconfigurations MutatingWebhookConfiguration
validatingwebhookconfigurations ValidatingWebhookConfiguration
customresourcedefinitions crd,crds CustomResourceDefinition
apiservices APIService
controllerrevisions ControllerRevision
daemonsets ds DaemonSet
deployments deploy Deployment
replicasets rs ReplicaSet
statefulsets sts StatefulSet
tokenreviews TokenReview
localsubjectaccessreviews LocalSubjectAccessReview
selfsubjectaccessreviews SelfSubjectAccessReview
selfsubjectrulesreviews SelfSubjectRulesReview
subjectaccessreviews SubjectAccessReview
horizontalpodautoscalers hpa HorizontalPodAutoscaler
cronjobs cj CronJob
jobs Job
brpolices br,bp BrPolicy
clusters rcc Cluster
filesystems rcfs Filesystem
objectstores rco ObjectStore
pools rcp Pool
certificatesigningrequests csr CertificateSigningRequest
leases Lease
events ev Event
daemonsets ds DaemonSet
deployments deploy Deployment
ingresses ing Ingress
networkpolicies netpol NetworkPolicy
podsecuritypolicies psp PodSecurityPolicy
replicasets rs ReplicaSet
nodes NodeMetrics
pods PodMetrics
networkpolicies netpol NetworkPolicy
poddisruptionbudgets pdb PodDisruptionBudget
podsecuritypolicies psp PodSecurityPolicy
clusterrolebindings ClusterRoleBinding
clusterroles ClusterRole
rolebindings RoleBinding
roles Role
volumes rv Volume
priorityclasses pc PriorityClass
storageclasses sc StorageClass
volumeattachments VolumeAttachment
</code></pre>
<p><strong>Note</strong>: the Kubernetes version here is v1.12.x; you can check yours with:</p>
<blockquote>
<pre><code>kubectl version
</code></pre>
</blockquote>
|
<p>I am trying to build a CI/CD pipeline with Azure DevOps.
My goal is to:</p>
<ol>
<li><p>Build a docker Image an upload this to a private docker Respository in Dockerhub within the CI Pipeline</p></li>
<li><p>Deploy this image to an Azure Kubernetes Cluster within the CD Pipeline</p></li>
</ol>
<p>The CI Pipeline works well:
<a href="https://i.stack.imgur.com/dnsnn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dnsnn.png" alt="enter image description here"></a></p>
<p>The image is pushed successfully to dockerhub
<a href="https://i.stack.imgur.com/zzcc3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zzcc3.png" alt="enter image description here"></a></p>
<p>The pipeline docker push task:</p>
<pre><code>steps:
- task: Docker@1
displayName: 'Push an image'
inputs:
containerregistrytype: 'Container Registry'
dockerRegistryEndpoint: DockerHubConnection
command: 'Push an image'
imageName: 'jastechgmbh/microservice-demo:$(Build.BuildId)'
</code></pre>
<p>After that I trigger my release pipeline manually and it shows success as well
<a href="https://i.stack.imgur.com/4LvLD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4LvLD.png" alt="enter image description here"></a></p>
<p>The apply pipeline task:</p>
<pre><code>steps:
- task: Kubernetes@0
displayName: 'kubectl apply'
inputs:
kubernetesServiceConnection: MicroserviceTestClusterConnection
command: apply
useConfigurationFile: true
configuration: '$(System.DefaultWorkingDirectory)/_MicroservicePlayground-MavenCI/drop/deployment.azure.yaml'
containerRegistryType: 'Container Registry'
dockerRegistryConnection: DockerHubConnection
</code></pre>
<p>But when I check the deployment on my kubernetes dashboard an error message pops up:
<a href="https://i.stack.imgur.com/OgumP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OgumP.png" alt="enter image description here"></a></p>
<p><strong>Failed to pull image "jastechgmbh/microservice-demo:38": rpc error: code = Unknown desc = Error response from daemon: pull access denied for jastechgmbh/microservice-demo, repository does not exist or may require 'docker login': denied: requested access to the resource is denied</strong></p>
<p>I use the same dockerhub service connection in the CI & CD Pipeline. </p>
<p><a href="https://i.stack.imgur.com/E9Cfr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E9Cfr.png" alt="enter image description here"></a></p>
<p><strong>I would be very happy about your help.</strong></p>
| <p>I believe this error indicates that your Kubernetes cluster doesn't have access to the Docker registry. You'd need to create a Docker registry secret for that, like so:</p>
<pre><code>kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
</code></pre>
<p>or from the command line:</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
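<p>Once the secret exists, the deployment's pod spec has to reference it — roughly like this (<code>regcred</code> being the secret name from the commands above):</p>
<pre><code>spec:
  containers:
  - name: microservice-demo
    image: jastechgmbh/microservice-demo:38
  imagePullSecrets:
  - name: regcred
</code></pre>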
|
<p>I'm deploying Azure K8s cluster with Terraform, and the image is hosted in Amazon ECR.
The deployment fails at the image pull from the ECR with the following error:</p>
<pre><code>Failed to pull image "tooot.eu-west-1.amazonaws.com/app-t:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://tooot.eu-west-1.amazonaws.com/v2/app-t/manifests/latest: no basic auth credentials
</code></pre>
<p>the following is my kuberentes resource in the terraform template</p>
<pre><code> metadata {
name = "terraform-app-deployment-example"
labels {
test = "app-deployment"
}
}
spec {
replicas = 6
selector {
match_labels {
test = "app-deployment"
}
}
template {
metadata {
labels {
test = "app-deployment"
}
}
spec {
container {
image = "toot.eu-west-1.amazonaws.com/app-t:latest"
name = "app"
}
}
}
}
}
</code></pre>
| <p>Basically, you are lacking the credentials to pull images from AWS. </p>
<p>You need to create a regcred, which contains the login credentials:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a> </p>
<p>After that you need to reference the regcred in your Terraform configuration. I have not worked with templates, but in a deployment specification you would add a field called <code>imagePullSecrets</code>. </p>
<p><a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html" rel="noreferrer">https://www.terraform.io/docs/providers/kubernetes/r/deployment.html</a></p>
<p>The imagePullSecrets description: </p>
<p>image_pull_secrets - (Optional) ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored</p>
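<p>In the Terraform template from the question, that would translate to roughly the following addition inside the pod <code>spec</code> block (field name per the Kubernetes provider docs; <code>regcred</code> is the secret created beforehand):</p>
<pre><code>spec {
  image_pull_secrets {
    name = "regcred"
  }
  container {
    image = "toot.eu-west-1.amazonaws.com/app-t:latest"
    name  = "app"
  }
}
</code></pre>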
|
<p>I have the following configuration for my pod:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: my-app
labels:
app: my-app
spec:
serviceName: my-app
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
restartPolicy: Never
containers:
- name: my-app
image: myregistry:443/mydomain/my-app
imagePullPolicy: Always
</code></pre>
<p>And it deploys fine without the restartPolicy. However, I do not want the process to be run again once finished, hence I added the 'restartPolicy: Never'. Unfortunately I get the following error when I attempt to deploy:</p>
<pre><code>Error from server (Invalid): error when creating "stack.yaml": StatefulSet.apps "my-app" is invalid: spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: "Always"
</code></pre>
<p>What am I missing?</p>
<p>Thanks</p>
| <p>Please see <a href="https://github.com/kubernetes/kubernetes/issues/24725" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/24725</a></p>
<p>It appears that only "Always" is supported.</p>
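<p>If the intent is a run-to-completion workload, a Job supports <code>restartPolicy: Never</code> — a minimal sketch based on the question's spec:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-app-once
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: my-app
        image: myregistry:443/mydomain/my-app
</code></pre>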
|
<p>I need a shared volume accessible from multiple pods for caching files in RAM on each node.</p>
<p>The problem is that the <code>emptyDir</code> volume provisioner (which supports <code>Memory</code> as its <code>medium</code>) is available in <code>Volume</code> spec but not in <code>PersistentVolume</code> spec.</p>
<p>Is there any way to achieve this, except by creating a <code>tmpfs</code> volume manually on each host and mounting it via <code>local</code> or <code>hostPath</code> provisioner in the PV spec?</p>
<p>Note that Docker itself supports such volumes:</p>
<pre><code>docker volume create --driver local --opt type=tmpfs --opt device=tmpfs \
--opt o=size=100m,uid=1000 foo
</code></pre>
<p>I don't see any reason why k8s doesn't. Or maybe it does, but it's not obvious?</p>
<p>I tried playing with <code>local</code> and <code>hostPath</code> PVs with <code>mountOptions</code> but it didn't work.</p>
| <p>An <code>emptyDir</code> volume is tied to the lifetime of a single pod, so it can't be shared between multiple pods.
What you are requesting is an additional feature, and if you look at the GitHub discussion below, you will see that you are not the first to ask for it.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/27732" rel="nofollow noreferrer">consider a tmpfs storage class</a></p>
<p>Also, regarding your point that <code>docker supports this tmpfs volume</code>: yes, it does, but you can't share that volume between containers. From the <a href="https://docs.docker.com/storage/tmpfs/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Limitations of tmpfs mounts:</p>
<p>Unlike volumes and bind mounts, you can't
share tmpfs mounts between containers.</p>
</blockquote>
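<p>For completeness, sharing an in-memory volume between containers of the <em>same</em> pod does work with <code>emptyDir</code> — it is only cross-pod sharing that is missing (names and size are placeholders):</p>
<pre><code>spec:
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
      sizeLimit: 100Mi
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: cache
      mountPath: /cache
</code></pre>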
|
<p>I am getting the following error when i try to deploy a kubernetes service using my bitbucket pipeline to my kubernetes cluster. I am using <a href="https://github.com/hobby-kube/guide#deploying-services" rel="nofollow noreferrer">deploying services</a> method to deploy the service which works fine on my local machine so i am not able to reproduce the issue.</p>
<p>Is it a certificate issue or some configuration issue ?</p>
<p>How can i resolve this ?</p>
<pre><code>1s
+ kubectl apply -f dashboard/
unable to recognize "dashboard/deployment.yml": Get https://kube1.mywebsitedomain.com:6443/api?timeout=32s: x509: certificate is valid for kube1, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not kube1.mywebsitedomain.com
unable to recognize "dashboard/ingress.yml": Get https://kube1.mywebsitedomain.com:6443/api?timeout=32s: x509: certificate is valid for kube1, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not kube1.mywebsitedomain.com
unable to recognize "dashboard/secret.yml": Get https://kube1.mywebsitedomain.com:6443/api?timeout=32s: x509: certificate is valid for kube1, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not kube1.mywebsitedomain.com
unable to recognize "dashboard/service.yml": Get https://kube1.mywebsitedomain.com:6443/api?timeout=32s: x509: certificate is valid for kube1, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not kube1.mywebsitedomain.com
</code></pre>
<p>Before running the apply command I did set the cluster using the kubectl config and i get the following on the console.</p>
<pre><code>+ kubectl config set-cluster kubernetes --server=https://kube1.mywebsitedomain.com:6443
Cluster "kubernetes" set.
</code></pre>
| <p>It was the certificate issue. Using the right certificate will definitely solve this problem but in my case the certificate verification wasn't necessary as secure connection is not required for this spike.</p>
<p>So here is my work around</p>
<p>I used the flag <code>--insecure-skip-tls-verify</code> with kubectl and it worked fine</p>
<pre><code>+ kubectl --insecure-skip-tls-verify apply -f dashboard/
deployment.extensions/kubernetes-dashboard unchanged
ingress.extensions/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-auth unchanged
service/kubernetes-dashboard unchanged
</code></pre>
|
<p>I'm using GCloud; I have a Kubernetes cluster and a Cloud SQL instance.</p>
<p>I have a simple node.js app that uses a database. When I deploy with <code>gcloud app deploy</code> it has access to the database. However, when I build a Docker image and expose it, it cannot reach the database.</p>
<ol>
<li>I expose Docker application following: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a></li>
<li>Cloud SQL doesn't have Private IP enabled; I'm connecting using the Cloud SQL proxy</li>
<li>In app.yaml I do specify <code>base_settings:cloud_sql_instances</code>. I use the same value in <code>socketPath</code> config for mysql connection.</li>
<li><p>The error in docker logs is:</p>
<p>(node:1) UnhandledPromiseRejectionWarning: Error: connect ENOENT /cloudsql/x-alcove-224309:europe-west1:learning
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1097:14)</p></li>
</ol>
<p>Can you please explain to me how to connect to Cloud SQL from a dockerized node application?</p>
| <p>Generally, the best method is to connect using a sidecar container inside the same pod as your application. You can find examples on the "Connecting from Google Kubernetes Engine" page <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="nofollow noreferrer">here</a>. There is also a codelab <a href="https://github.com/GoogleCloudPlatform/gmemegen" rel="nofollow noreferrer">here</a> that goes more in-depth and might be helpful. </p>
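<p>The sidecar pattern boils down to adding a <code>cloud_sql_proxy</code> container next to the app container, roughly like this (image tag and port are illustrative; the instance name is taken from the question's error message):</p>
<pre><code>containers:
- name: app
  image: gcr.io/my-project/node-app
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=x-alcove-224309:europe-west1:learning=tcp:3306"]
</code></pre>
<p>The app then connects to <code>127.0.0.1:3306</code> instead of the <code>/cloudsql/...</code> socket path.</p>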
|
<p>I am trying to port an ASP.NET Core 1 application with Identity to Kubernetes. The login doesn't work and I get different errors like <em><a href="https://stackoverflow.com/a/53870092/9428314">The anti-forgery token could not be decrypted</a></em>. The problem is that I'm using a deployment with three replicas, so that further requests are served by different pods that don't know about the anti-forgery token. With <code>replicas: 1</code> it works.</p>
<p>In the same question I found a <a href="https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/" rel="nofollow noreferrer">sticky session documentation</a> which seems a solution for my problem. The cookie name <code>.AspNetCore.Identity.Application</code> is from my browser tools. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp-k8s-test
annotations:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: ".AspNetCore.Identity.Application"
spec:
replicas: 3
template:
metadata:
labels:
app: myapp-k8s
spec:
containers:
- name: myapp-app
image: myreg/myapp:0.1
ports:
- containerPort: 80
env:
- name: "ASPNETCORE_ENVIRONMENT"
value: "Production"
imagePullSecrets:
- name: registrypullsecret
</code></pre>
<p>This doesn't work, either with or without leading dot at the cookie name. I also tried adding the following annotations</p>
<pre><code>kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
</code></pre>
<p>What is required to allow sticky sessions on Kubernetes with ASP.NET Core?</p>
| <p>I found out that I made two logical mistakes: </p>
<ol>
<li>Sticky sessions don't work this way</li>
</ol>
<p>I assumed that Kubernetes would look into the cookie and create some mapping of cookie hashes to pods. But instead, another session cookie is generated and appended to the HTTP header. <code>nginx.ingress.kubernetes.io/session-cookie-name</code> is only the name of that generated cookie, so by default it's not required to change it. </p>
<ol start="2">
<li>Scope to the right object</li>
</ol>
<p>The annotations must be present on the ingress, NOT the deployment (stupid c&p mistake)</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myapp-k8s-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
tls:
- hosts:
- myapp-k8s.local
rules:
- host: myapp-k8s.local
http:
paths:
- path: /
backend:
serviceName: myapp-svc
servicePort: 80
</code></pre>
<p>This works as expected. </p>
|
<p>Let's say that I defined a pod that simply runs a few pieces of code and exits afterwards. I need to make sure that this pod exits before allowing other pods to run. What is the best way to implement this?</p>
<p>I used to check whether a pod is ready by performing network requests, e.g. once ready, some webapps pods will block and listen to pre-defined ports, hence I can have the waiting pods performing netcat requests to them. But in this particular case the pod does not need to open any port, hence this approach does not work. Can anyone suggest an alternative?</p>
<p>Thanks</p>
| <p>If the "pieces of code" needs to be run in every pod start/restart, you maybe are looking for an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Container</a> implementation:</p>
<blockquote>
<p>Init Containers are specialized Containers that run before app
Containers and can contain utilities or setup scripts not present in
an app image.</p>
</blockquote>
<p>If your code is a dependency for multiple pods and needs to be run once (e.g., on every new deploy), you may consider using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job Controller</a>, and implement a logic to check if it's completed before deploying new pods/containers.
(You can use commands like <code>kubectl wait --for=condition=complete job/myjob</code> on your deploy script). </p>
<p>If you are using helm to deploy, the best option is to use k8s job combined with <code>pre-install</code> and <code>pre-upgrade</code> <a href="https://github.com/helm/helm/blob/master/docs/charts_hooks.md" rel="nofollow noreferrer">helm hooks</a>.</p>
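<p>A minimal init-container sketch — the app container does not start until the init container has exited successfully (image and command are placeholders):</p>
<pre><code>spec:
  initContainers:
  - name: run-setup
    image: busybox
    command: ['sh', '-c', 'echo setup done']
  containers:
  - name: app
    image: myapp:latest
</code></pre>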
|
<p>I am trying to create Prometheus with the operator in a fresh new k8s cluster.
I use the following files:</p>
<ol>
<li>I'm creating a namespace <code>monitoring</code></li>
<li>I apply this file, which works OK</li>
</ol>
<pre><code>
apiVersion: apps/v1beta2
kind: Deployment
metadata:
labels:
k8s-app: prometheus-operator
name: prometheus-operator
namespace: monitoring
spec:
replicas: 2
selector:
matchLabels:
k8s-app: prometheus-operator
template:
metadata:
labels:
k8s-app: prometheus-operator
spec:
priorityClassName: "operator-critical"
tolerations:
- key: "WorkGroup"
operator: "Equal"
value: "operator"
effect: "NoSchedule"
- key: "WorkGroup"
operator: "Equal"
value: "operator"
effect: "NoExecute"
containers:
- args:
- --kubelet-service=kube-system/kubelet
- --logtostderr=true
- --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
- --prometheus-config-reloader=quay.io/coreos/prometheus-config-reloader:v0.29.0
image: quay.io/coreos/prometheus-operator:v0.29.0
name: prometheus-operator
ports:
- containerPort: 8080
name: http
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
nodeSelector:
serviceAccountName: prometheus-operator
</code></pre>
<p>Now I want to apply this file (CRD)</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: prometheus
namespace: monitoring
labels:
prometheus: prometheus
spec:
replica: 1
priorityClassName: "operator-critical"
serviceAccountName: prometheus
nodeSelector:
worker.garden.sapcloud.io/group: operator
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
matchLabels:
role: observeable
tolerations:
- key: "WorkGroup"
operator: "Equal"
value: "operator"
effect: "NoSchedule"
- key: "WorkGroup"
operator: "Equal"
value: "operator"
effect: "NoExecute"
</code></pre>
<p>before I've created those CRD</p>
<p><a href="https://github.com/coreos/prometheus-operator/tree/master/example/prometheus-operator-crd" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/tree/master/example/prometheus-operator-crd</a></p>
<p>The problem is that the <strong>pods are not able to start</strong> (0/2); see the picture below. What could be the problem? Please advise. </p>
<p><a href="https://i.stack.imgur.com/3lwSI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3lwSI.png" alt="enter image description here"></a></p>
<p><strong>update</strong></p>
<p>When I go to the events of the Prometheus operator I see the following: <code>Error creating: pods "prometheus-operator-6944778645-" is forbidden: no PriorityClass with name operator-critical was found replicaset-controller</code>. Any idea?</p>
| <p>You are trying to reference the <code>operator-critical</code> <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass" rel="nofollow noreferrer">priority class</a>. Priority classes determine the priority of pods and their resource assignment.</p>
<p>To fix this issue you could either remove the explicit priority class (<code>priorityClassName: "operator-critical"</code>) in both files or create the <code>operator-critical</code> class:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
name: operator-critical
value: 1000000
globalDefault: false
description: "Critical operator workloads"
</code></pre>
|
<p>What is the difference between Nginx ingress controller and HAProxy load balancer in kubernetes?</p>
| <p>First, let's have a quick overview of what an <code>Ingress Controller</code> is in Kubernetes.</p>
<ul>
<li><strong>Ingress Controller:</strong> controller that responds to changes in <code>Ingress</code> rules and changes its internal configuration accordingly</li>
</ul>
<p>So, both the HAProxy ingress controller and the Nginx ingress controller will listen for these <code>Ingress</code> configuration changes and configure their own running server instances to route traffic as specified in the targeted <code>Ingress</code> rules. The main differences come down to the specific differences in use cases between Nginx and HAProxy themselves.</p>
<p>For the most part, Nginx comes with more batteries included for serving <em>web content</em>, such as configurable content caching, serving local files, etc. HAProxy is more stripped down, and better equipped for high-performance network workloads.</p>
<p>The available configurations for HAProxy can be found <a href="https://github.com/jcmoraisjr/haproxy-ingress#configuration" rel="nofollow noreferrer">here</a> and the available configuration methods for Nginx ingress controller <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/" rel="nofollow noreferrer">are here</a>.</p>
<p>I would add that HAProxy is capable of doing TLS/SSL offloading (SSL or TLS termination) for non-HTTP protocols such as MQTT, Redis, and FTP type workloads.</p>
<p>The differences go deeper than this, however, and these issues go into more detail on them:</p>
<ol>
<li><a href="https://serverfault.com/questions/229945/what-are-the-differences-between-haproxy-and-ngnix-in-reverse-proxy-mode">https://serverfault.com/questions/229945/what-are-the-differences-between-haproxy-and-ngnix-in-reverse-proxy-mode</a></li>
<li><a href="https://stackoverflow.com/questions/21173496/haproxy-vs-nginx">HAProxy vs. Nginx</a></li>
</ol>
|
<p>Is it possible to fake a container to always be ready/live in kubernetes so that kubernetes thinks that the container is live and doesn't try to kill/recreate the container? I am looking for a quick and hacky solution, preferably.</p>
| <p>Liveness and Readiness probes are <strong>not required</strong> by k8s controllers; <strong>you can simply remove them</strong> and your containers will always be considered live/ready.</p>
<p>If you want the hacky approach anyways, use the <code>exec</code> probe (instead of <code>httpGet</code>) with something dummy that always returns <code>0</code> as exit code. For example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
livenessProbe:
exec:
command:
- touch
- /tmp/healthy
readinessProbe:
exec:
command:
- touch
- /tmp/healthy
</code></pre>
<p>Please note that this will report ready/live only if you did not set <code>readOnlyRootFilesystem: true</code>, since <code>touch</code> needs a writable filesystem.</p>
|
<p>I have a pod that is meant to run a code excerpt and exit afterwards. I do not want this pod to restart after exiting, but apparently it is not possible to set a restart policy in Kubernetes (see <a href="https://github.com/kubernetes/kubernetes/issues/24725" rel="nofollow noreferrer">here</a> and <a href="https://stackoverflow.com/a/55169169/4828060">here</a>).</p>
<p>Therefore my question is: how can I implement a pod that runs only once?</p>
<p>Thank you</p>
<p>You need to deploy a Job. A Deployment is meant to keep the containers running all the time. Take a look at: </p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/</a></p>
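<p>For reference, a minimal Job for a run-once pod might look like the sketch below (the image and command are placeholders, not from the question); note that a Job's pod template must use <code>restartPolicy: Never</code> or <code>OnFailure</code>:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: run-once
spec:
  backoffLimit: 0          # do not retry the pod after a failure
  template:
    spec:
      containers:
      - name: run-once
        image: my-registry/my-code-excerpt:latest   # placeholder image
        command: ["python", "main.py"]              # placeholder command
      restartPolicy: Never
</code></pre>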
|
<p>I'm using GCloud; I have a Kubernetes cluster and a Cloud SQL instance.</p>
<p>I have a simple Node.js app that uses a database. When I deploy with <code>gcloud app deploy</code> it has access to the database. However, when I build a Docker image and expose it, it cannot reach the database.</p>
<ol>
<li>I expose Docker application following: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a></li>
<li>Cloud SQL doesn't have Private IP enabled; I'm connecting using the Cloud SQL proxy</li>
<li>In app.yaml I do specify <code>base_settings:cloud_sql_instances</code>. I use the same value in <code>socketPath</code> config for mysql connection.</li>
<li><p>The error in docker logs is:</p>
<p>(node:1) UnhandledPromiseRejectionWarning: Error: connect ENOENT /cloudsql/x-alcove-224309:europe-west1:learning
at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1097:14)</p></li>
</ol>
<p>Can you please explain how to connect to Cloud SQL from a dockerized Node application.</p>
| <p>When you deploy your app on App Engine with <code>gcloud app deploy</code>, the platform runs it in a container along with a side-car container in charge of running the cloud_sql_proxy (you ask for it by specifying the <code>base_settings:cloud_sql_instances</code> in your app.yaml file).</p>
<p>Kubernetes Engine doesn't use an app.yaml file and doesn't supply this side-car container to you so you'll have to set it up. The <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine#proxy" rel="nofollow noreferrer">public doc</a> shows how to do it by creating secrets for your database credentials and updating your deployment file with the side-car container config. An example shown in the doc would look like:</p>
<pre><code>...
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=<INSTANCE_CONNECTION_NAME>=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
securityContext:
runAsUser: 2 # non-root user
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
...
</code></pre>
|
<p>I am using a flask microservice with gunicorn to make requests to a service hosted on Google Kubernetes engine. The microservice is dockerised and is hosted as a pod on google kubernetes engine as well. After testing it locally, I deployed it but I am getting a CrashLoopBackOff error. The logs of my pod are :</p>
<pre><code>[2019-03-15 08:41:13 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2019-03-15 08:41:13 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2019-03-15 08:41:13 +0000] [1] [INFO] Using worker: threads
[2019-03-15 08:41:13 +0000] [12] [INFO] Booting worker with pid: 12
Failed to find application object 'app' in 'app'
[2019-03-15 08:41:13 +0000] [12] [INFO] Worker exiting (pid: 12)
[2019-03-15 08:41:13 +0000] [13] [INFO] Booting worker with pid: 13
Failed to find application object 'app' in 'app'
[2019-03-15 08:41:13 +0000] [13] [INFO] Worker exiting (pid: 13)
[2019-03-15 08:41:13 +0000] [1] [INFO] Shutting down: Master
[2019-03-15 08:41:13 +0000] [1] [INFO] Reason: App failed to load.
</code></pre>
<p>This seems to be an error with <strong>gunicorn</strong>.</p>
<p>My <strong>folder structure</strong> is :</p>
<pre><code>.
├── app.py
├── app.yaml
├── config.py
├── data
│   └── object_detection.pbtxt
├── Dockerfile
├── filename.jpg
├── helper.py
├── object_2.py
├── object_detection
│   ├── core
│   │   └── anchor_generator.py
│   └── vrd_evaluation_test.py
├── object_detection_grpc_client.py
├── requirements.txt
└── tensorflow_serving
    └── apis
        └── regression_pb2.py
</code></pre>
<p>The <strong>app.py</strong> code is:</p>
<pre><code>import logging
from flask import Flask, request, jsonify
from config import auth_secret_token, PORT, DEBUG_MODE
from helper import check_parameters
from object_detection_grpc_client import main
app = Flask(__name__)
def check_authorization(request):
try:
if not 'Auth-token' in request.headers:
return jsonify({'error': 'unauthorized access'}), 401
token = request.headers['Auth-token']
if token != auth_secret_token:
return jsonify({'error': 'unauthorized access'}), 401
return "ok", 200
except Exception as e:
return jsonify({'error': 'unauthorized access'}), 401
@app.route("/", methods=['POST'])
def hello():
info, status_code = check_authorization(request)
if status_code != 200:
return info, status_code
else:
status, status_code = check_parameters(request.form)
if status_code != 200:
return status, status_code
else:
score = main()
response = {"status": "success", "score": score, "customer_id":(request.form["cust_id"])}
return jsonify(response), status_code
if __name__ == "__main__":
app.run(host="0.0.0.0", port=PORT, debug=DEBUG_MODE)
</code></pre>
<p>The <strong>config.py</strong> code is:</p>
<pre><code> from os import environ as env
import multiprocessing
PORT = int(env.get("PORT", 8080))
DEBUG_MODE = int(env.get("DEBUG_MODE", 1))
# Gunicorn config
bind = ":" + str(PORT)
workers = multiprocessing.cpu_count() * 2 + 1
threads = 2 * multiprocessing.cpu_count()
auth_secret_token = "token"
server='A.B.C.D:9000'
model_name="mymodel"
input_image='filename.jpg'
label_map="./data/object_detection.pbtxt"
</code></pre>
<p>The <strong>Dockerfile</strong> is :</p>
<pre><code> FROM python:3.5.2
RUN apt update
WORKDIR /app
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
ENV PORT 8080
CMD ["gunicorn", "app:app", "--config=config.py"]
</code></pre>
<p>The deployment file app.yaml:</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mymodel-client
labels:
name: mymodel-client
spec:
replicas: 1
selector:
matchLabels:
name: mymodel-client
template:
metadata:
name: mymodel-client
labels:
name: mymodel-client
spec:
containers:
- name: mymodel-client
image: gcr.io/image-identification/mymodel-client:v1
ports:
- containerPort: 8080
resources:
requests:
memory: 256Mi
limits:
memory: 512Mi
env:
- name: DEBUG_MODE
value: "1"
</code></pre>
<p>What is wrong here?</p>
<p>Link to the tutorial I referred to <a href="https://medium.com/google-cloud/a-guide-to-deploy-flask-app-on-google-kubernetes-engine-bfbbee5c6fb" rel="nofollow noreferrer">https://medium.com/google-cloud/a-guide-to-deploy-flask-app-on-google-kubernetes-engine-bfbbee5c6fb</a></p>
| <p>I think the problem might be similar to this one. <a href="https://stackoverflow.com/a/50157417/4229159">https://stackoverflow.com/a/50157417/4229159</a></p>
<p>The <code>app</code> folder (the one you create in Docker) and the <code>app</code> file have the same name. Could you try again, renaming one of them?</p>
<p>The rest of the files look OK, and it seems to be only a <code>gunicorn</code> problem.</p>
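<p>As a concrete sketch of that workaround (the <code>/srv</code> path is arbitrary, any name that doesn't collide with the <code>app</code> module would do), you could move the working directory in the Dockerfile:</p>
<pre><code>FROM python:3.5.2
RUN apt update
WORKDIR /srv                       # was /app, which shares a name with app.py's module
ADD requirements.txt /srv/requirements.txt
RUN pip install -r /srv/requirements.txt
ADD . /srv
ENV PORT 8080
CMD ["gunicorn", "app:app", "--config=config.py"]
</code></pre>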
|
<p>When running the <code>kubeadm init</code> command, by default the required certificates are generated under the <code>/etc/kubernetes/pki</code> location.</p>
<p>Is there any option to run <code>kubeadm init</code> command to ignore generating certificates?</p>
| <p>You can use the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#options" rel="nofollow noreferrer"><code>--skip-phases</code></a> option to skip phases:</p>
<blockquote>
<p>--skip-phases<br>
stringSlice List of phases to be skipped</p>
</blockquote>
<p><strong>To skip the certificate generation:</strong>
<code>kubeadm init --skip-phases certs</code></p>
<p>This implies generating the certificates on your own. <strong>You cannot use Kubernetes without a Certificate Authority (CA)</strong>. Take a look on how <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">Manage TLS Certificates in a Cluster</a>:</p>
<blockquote>
<p>Every Kubernetes cluster has a cluster root Certificate Authority
(CA). The CA is generally used by cluster components to validate the
API serverβs certificate, by the API server to validate kubelet client
certificates, etc. To support this, the CA certificate bundle is
distributed to every node in the cluster and is distributed as a
secret attached to default service accounts.</p>
</blockquote>
<p>And also <a href="https://kubernetes.io/docs/setup/certificates/" rel="nofollow noreferrer">PKI Certificates and Requirements</a>:</p>
<blockquote>
<p>Kubernetes requires PKI certificates for authentication over TLS. If
you install Kubernetes with <code>kubeadm</code>, the certificates that your
cluster requires are automatically generated. You can also generate
your own certificates β for example, to keep your private keys more
secure by not storing them on the API server.</p>
</blockquote>
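<p>If you do skip the certs phase, you have to bring your own CA. A minimal self-signed CA can be sketched with openssl as follows (the file names match what kubeadm expects for the cluster CA under <code>/etc/kubernetes/pki</code>; treat the commented lines as an illustrative, untested invocation):</p>
<pre><code># generate a cluster CA key and a self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key \
  -subj "/CN=kubernetes" -days 3650 -out ca.crt
# then place them where kubeadm looks for them:
#   sudo mkdir -p /etc/kubernetes/pki
#   sudo cp ca.crt ca.key /etc/kubernetes/pki/
#   kubeadm init --skip-phases certs
</code></pre>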
|
<p>I am used to work with Docker swarm and whenever I wanted to deploy one container replica per node available I would use the 'global' deploy mode, as exemplified in this Docker stack yaml:</p>
<pre><code> logstash:
image: myregistry:443/mydomain/logstash
deploy:
mode: global
restart_policy:
condition: on-failure
ports:
- "12201:12201/udp"
environment:
LS_JAVA_OPTS: "-Xmx256m -Xms256m"
networks:
mylan:
</code></pre>
<p>This would deploy one and only one logstash replica in each node available. I am new to Kubernetes and I was attempting to reproduce this behaviour, is there an equivalent mode? If not, what are my alternatives?</p>
<p>Thank you.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noreferrer">DaemonSet</a> is the controller that you want:</p>
<blockquote>
<p>A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As
nodes are added to the cluster, Pods are added to them. As nodes are
removed from the cluster, those Pods are garbage collected. Deleting a
DaemonSet will clean up the Pods it created.</p>
</blockquote>
<p>The official doc even mention your needs:</p>
<blockquote>
<p>Some typical uses of a DaemonSet are:<br>
[...]<br>
- running a logs collection daemon on every node, such as fluentd or logstash.</p>
</blockquote>
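<p>For example, the swarm service from the question could be sketched as a DaemonSet roughly like this (image, port, and environment taken from the question; the labels are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash
spec:
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: myregistry:443/mydomain/logstash
        env:
        - name: LS_JAVA_OPTS
          value: "-Xmx256m -Xms256m"
        ports:
        - containerPort: 12201
          protocol: UDP
</code></pre>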
|
<p>can I scale up (add more nodes) my Kubernetes cluster in AWS via AWS Auto Scaling Group only or I must use <code>kops</code> for this purpose as well? I used <code>kops</code> to provision Kubernetes cluster in the first place.</p>
<p>You definitely can scale up in AWS EKS using an AWS auto scaling group.</p>
<p>The cluster autoscaler will automatically add a node to the pool when a pod fails to be scheduled due to insufficient resources (memory, for example).</p>
<p>For more details you can refer to this document: <a href="https://medium.com/@alejandro.millan.frias/cluster-autoscaler-in-amazon-eks-d9f787176519" rel="nofollow noreferrer">https://medium.com/@alejandro.millan.frias/cluster-autoscaler-in-amazon-eks-d9f787176519</a></p>
|
<p>A Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer"><code>Service</code></a> can have a <code>targetPort</code> and <code>port</code> in the service definition:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
</code></pre>
<p>What is the difference between the <code>port</code> and <code>targetPort</code>?</p>
| <p><strong>Service:</strong> This directs the traffic to a pod.</p>
<p><strong>TargetPort:</strong> This is the actual port on which your application is running inside the container. </p>
<p><strong>Port:</strong> Sometimes the application inside the container serves different services on different ports.</p>
<p><strong>Example:</strong> The actual application can run on <code>8080</code> and health checks for this application can run on port <code>8089</code> of the container.
If you hit the service without a port mapping, it doesn't know to which port of the container it should redirect the request. The service needs to have a mapping so that it can hit the specific port of the container.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- name: http
nodePort: 30475
port: 8089
protocol: TCP
targetPort: 8080
- name: metrics
nodePort: 31261
port: 5555
protocol: TCP
targetPort: 5555
- name: health
nodePort: 30013
port: 8443
protocol: TCP
targetPort: 8085
</code></pre>
<p>If you hit <code>my-service:8089</code>, the traffic is routed to port <code>8080</code> of the container (the targetPort). Similarly, if you hit <code>my-service:8443</code>, it is redirected to port <code>8085</code> of the container. But <code>my-service:8089</code> is internal to the Kubernetes cluster and can only be used when one application wants to communicate with another. To hit the service from outside the cluster, someone needs to expose a port on the host machine on which Kubernetes is running, so that the traffic is redirected to a port of the container. This is the <code>nodePort</code> (a port exposed on the host machine).
From the above example, you can hit the service from outside the cluster (with Postman or any REST client) via <code>host_ip:nodePort</code>.</p>
<p>Say your host machine IP is <code>10.10.20.20</code>; you can hit the http, metrics, and health services at <code>10.10.20.20:30475</code>, <code>10.10.20.20:31261</code>, and <code>10.10.20.20:30013</code>.</p>
<p>Edits: Edited as per <a href="https://stackoverflow.com/users/545127/raedwald">Raedwald</a> comment.</p>
|
<p>I am new to Kubernetes and when I used to work with Docker swarm I was able to redirect logging the following way:</p>
<pre><code> myapp:
image: myregistry:443/mydomain/myapp
deploy:
mode: global
restart_policy:
condition: on-failure
logging:
driver: gelf
options:
gelf-address: "udp://localhost:12201"
environment:
- LOGGING_LEVEL=WARN
</code></pre>
<p>this way, instead of consulting logs using <code>docker service logs -f myapp</code> or in this case <code>kubectl logs -f myapp</code>, I would have them redirected to monitor them in a centralised manner (e.g. using ELK).</p>
<p>Is this possible with Kubernetes? What is the equivalent solution?</p>
<p>Thank you for your help </p>
<p>Yes, there are many solutions, both open-source and commercial, to send all Kubernetes logs (apps, cluster components, and everything else) to systems like ELK.</p>
<p>Assuming you have Elasticsearch already set up:</p>
<p>We are using Fluent Bit to send K8s logs to EFK:</p>
<p><strong>Fluent Bit DaemonSet ready to be used with Elasticsearch on a normal Kubernetes Cluster</strong></p>
<p><a href="https://github.com/fluent/fluent-bit-kubernetes-logging" rel="nofollow noreferrer">https://github.com/fluent/fluent-bit-kubernetes-logging</a></p>
<p>We are also using Search Guard with ELK to restrict users to seeing only the logs that belong to apps running in their own namespaces.</p>
|
<p>I have the following CSR object in Kubernetes:</p>
<pre><code>$ kubectl get csr
NAME AGE REQUESTOR CONDITION
test-certificate-0.my-namespace 53m system:serviceaccount:my-namespace:some-user Pending
</code></pre>
<p>And I would like to approve it using the Python API client:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import config, client
# configure session
config.load_kube_config()
# get a hold of the certs API
certs_api = client.CertificatesV1beta1Api()
# read my CSR
csr = certs_api.read_certificate_signing_request("test-certificate-0.my-namespace")
</code></pre>
<p>Now, the contents of the <code>csr</code> object are:</p>
<pre><code>{'api_version': 'certificates.k8s.io/v1beta1',
'kind': 'CertificateSigningRequest',
'metadata': {'annotations': None,
'cluster_name': None,
'creation_timestamp': datetime.datetime(2019, 3, 15, 14, 36, 28, tzinfo=tzutc()),
'deletion_grace_period_seconds': None,
'name': 'test-certificate-0.my-namespace',
'namespace': None,
'owner_references': None,
'resource_version': '4269575',
'self_link': '/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/test-certificate-0.my-namespace',
'uid': 'b818fa4e-472f-11e9-a394-124b379b4e12'},
'spec': {'extra': None,
'groups': ['system:serviceaccounts',
'system:serviceaccounts:cloudp-38483-test01',
'system:authenticated'],
'request': 'redacted',
'uid': 'd5bfde1b-4036-11e9-a394-124b379b4e12',
'usages': ['digital signature', 'key encipherment', 'server auth'],
'username': 'system:serviceaccount:test-certificate-0.my-namespace'},
'status': {'certificate': 'redacted',
'conditions': [{'last_update_time': datetime.datetime(2019, 3, 15, 15, 13, 32, tzinfo=tzutc()),
'message': 'This CSR was approved by kubectl certificate approve.',
'reason': 'KubectlApprove',
'type': 'Approved'}]}}
</code></pre>
<p>I would like to <strong>approve</strong> this cert programmatically, if I use kubectl to do it with (<code>-v=10</code> will make <code>kubectl</code> output the http trafffic):</p>
<pre><code>kubectl certificate approve test-certificate-0.my-namespace -v=10
</code></pre>
<p>I get to see the <code>PUT</code> operation used to <strong>Approve</strong> my certificate:</p>
<pre><code>PUT https://my-kubernetes-cluster.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/test-certificate-0.my-namespace/approval
</code></pre>
<p>So I need to <code>PUT</code> to the <code>/approval</code> resource of the certificate object. Now, how do I do it with the Python Kubernetes client?</p>
| <p>It's got a weird name, but it's in the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CertificatesV1beta1Api.md#replace_certificate_signing_request_approval" rel="nofollow noreferrer">docs</a> for the python client - you want <code>replace_certificate_signing_request_approval</code></p>
<pre><code># create an instance of the API class
api_instance = kubernetes.client.CertificatesV1beta1Api(kubernetes.client.ApiClient(configuration))
name = 'name_example' # str | name of the CertificateSigningRequest
body = kubernetes.client.V1beta1CertificateSigningRequest() # V1beta1CertificateSigningRequest |
dry_run = 'dry_run_example' # str | When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed (optional)
pretty = 'pretty_example' # str | If 'true', then the output is pretty printed. (optional)
try:
api_response = api_instance.replace_certificate_signing_request_approval(name, body, dry_run=dry_run, pretty=pretty)
pprint(api_response)
except ApiException as e:
print("Exception when calling CertificatesV1beta1Api->replace_certificate_signing_request_approval: %s\n" % e)
</code></pre>
|
<p>I have a yaml file for creating a k8s pod with just one container. Is it possible to pre-add a username and its password from the yaml file during k8s pod creation?</p>
<p>I searched many sites and found the env variable. However, I could not make the pod work as I wished. The pod's status always shows CrashLoopBackOff after pod creation.</p>
<p>Following are my yaml file:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: centos610-sp-v1
spec:
replicas: 1
template:
metadata:
labels:
app: centos610-sp-v1
spec:
containers:
- name: centos610-pod-v1
image: centos-done:6.10
env:
- name: SSH_USER
value: "user1"
- name: SSH_SUDO
value: "ALL=(ALL) NOPASSWD:ALL"
- name: PASSWORD
value: "password"
command: ["/usr/sbin/useradd"]
args: ["$(SSH_USER)"]
ports:
- containerPort: 22
resources:
limits:
cpu: "500m"
memory: "1G"
---
apiVersion: v1
kind: Service
metadata:
name: centos610-sp-v1
labels:
app: centos610-sp-v1
spec:
selector:
app: centos610-sp-v1
ports:
- port: 22
protocol: TCP
nodePort: 31022
type: NodePort
---
</code></pre>
<p>Should I use specific command as </p>
<pre><code>env:
- name: MESSAGE
value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
</code></pre>
<p>or </p>
<pre><code>command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10;done"]
</code></pre>
<h2><strong>pod's status after get</strong></h2>
<pre><code>root@zero:~/k8s-temp# kubectl get pod
NAME READY STATUS RESTARTS AGE
centos610-sp-v1-6689c494b8-nb9kv 0/1 CrashLoopBackOff 5 3m
</code></pre>
<h2><strong>pod's status after describe</strong></h2>
<pre><code>root@zero:~/k8s-temp# kubectl describe pod centos610-sp-v1-6689c494b8-nb9kv
Name: centos610-sp-v1-6689c494b8-nb9kv
Namespace: default
Node: zero/10.111.33.15
Start Time: Sat, 16 Mar 2019 01:16:59 +0800
Labels: app=centos610-sp-v1
pod-template-hash=2245705064
Annotations: <none>
Status: Running
IP: 10.233.127.104
Controlled By: ReplicaSet/centos610-sp-v1-6689c494b8
Containers:
centos610-pod-v1:
Container ID: docker://5fa076c5d245dd532ef7ce724b94033d93642dc31965ab3fbde61dd59bf7d314
Image: centos-done:6.10
Image ID: docker://sha256:26362e9cefe4e140933bf947e3beab29da905ea5d65f27fc54513849a06d5dd5
Port: 22/TCP
Host Port: 0/TCP
Command:
/usr/sbin/useradd
Args:
$(SSH_USER)
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 16 Mar 2019 01:17:17 +0800
Finished: Sat, 16 Mar 2019 01:17:17 +0800
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 16 Mar 2019 01:17:01 +0800
Finished: Sat, 16 Mar 2019 01:17:01 +0800
Ready: False
Restart Count: 2
Limits:
cpu: 500m
memory: 1G
Requests:
cpu: 500m
memory: 1G
Environment:
SSH_USER: user1
SSH_SUDO: ALL=(ALL) NOPASSWD:ALL
PASSWORD: password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qbd8x (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-qbd8x:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qbd8x
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22s default-scheduler Successfully assigned centos610-sp-v1-6689c494b8-nb9kv to zero
Normal SuccessfulMountVolume 22s kubelet, zero MountVolume.SetUp succeeded for volume "default-token-qbd8x"
Normal Pulled 5s (x3 over 21s) kubelet, zero Container image "centos-done:6.10" already present on machine
Normal Created 5s (x3 over 21s) kubelet, zero Created container
Normal Started 4s (x3 over 21s) kubelet, zero Started container
Warning BackOff 4s (x3 over 19s) kubelet, zero Back-off restarting failed container
</code></pre>
<p><strong>2019/03/18 UPDATE</strong></p>
<p>Although pre-adding a username and password from the pod's yaml is not recommended, I just want to clarify how to use command & args in a yaml file. Finally, I used the following yaml file to successfully create the username "user1" with the password "1234". Thank you all for the great answers, which made me more familiar with k8s topics such as configMap, RBAC, and container behavior.</p>
<p>Actually, this link gave me a reference on how to use command & args:</p>
<p><a href="https://stackoverflow.com/questions/33887194/how-to-set-multiple-commands-in-one-yaml-file-with-kubernetes">How to set multiple commands in one yaml file with Kubernetes?</a></p>
<p>Here are my final yaml file content:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: centos610-sp-v1
spec:
replicas: 1
template:
metadata:
labels:
app: centos610-sp-v1
spec:
containers:
- name: centos610-pod-v1
image: centos-done:6.10
env:
- name: SSH_USER
value: "user1"
- name: SSH_SUDO
value: "ALL=(ALL) NOPASSWD:ALL"
- name: PASSWORD
value: "password"
command: ["/bin/bash", "-c"]
args: ["useradd $(SSH_USER); service sshd restart; echo $(SSH_USER):1234 | chpasswd; tail -f /dev/null"]
ports:
- containerPort: 22
resources:
limits:
cpu: "500m"
memory: "1G"
---
apiVersion: v1
kind: Service
metadata:
name: centos610-sp-v1
labels:
app: centos610-sp-v1
spec:
selector:
app: centos610-sp-v1
ports:
- port: 22
protocol: TCP
nodePort: 31022
type: NodePort
---
</code></pre>
| <p>Keep the username and password in a ConfigMap or in a Secret object, and load those values into the container as environment variables.</p>
<p>Follow the reference
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/</a></p>
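<p>As a sketch (the names and values are taken from the question's yaml; treat them as placeholders), a Secret plus the matching env entries could look like:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: ssh-credentials
type: Opaque
stringData:
  ssh-user: "user1"
  password: "password"
---
# inside the Deployment's container spec, replace the literal values with:
#   env:
#   - name: SSH_USER
#     valueFrom:
#       secretKeyRef:
#         name: ssh-credentials
#         key: ssh-user
#   - name: PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: ssh-credentials
#         key: password
</code></pre>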
|
<p>Is there a way to check whether a pod status is in the completed state? I have a Pod that I only wanted to use once (where init containers didn't quite serve my purpose) and want to write a check to wait for Completed status.</p>
<p>I am able to get it for Running, Pending, but not for Completed.</p>
<p>Running:</p>
<pre>
[user@sandbox gcp_kubernetes_installation]$ kubectl get pods --field-selector=status.phase=Running -n mynamespace
NAME READY STATUS RESTARTS AGE
mssql-deployment-795dfcf9f7-l2b44 1/1 Running 0 6m
data-load-pod 1/1 Running 0 5m
</pre>
<p>Pending: </p>
<pre>
[user@sandbox gcp_kubernetes_installation]$ kubectl get pods --field-selector=status.phase=Pending -n mynamespace
NAME READY STATUS RESTARTS AGE
app-deployment-0 0/1 Pending 0 5m
</pre>
<p>Completed:</p>
<pre>
[user@sandbox gcp_kubernetes_installation]$ kubectl get pod -n namespace
NAME READY STATUS RESTARTS AGE
mssql-deployment-795dfcf9f7-l2b44 1/1 Running 0 11m
data-load-data-load-pod 0/1 Completed 0 10m
app-deployment-0 0/1 Pending 0 10m
[user@sandbox gcp_kubernetes_installation]$ kubectl get pods --field-selector=status.phase=Completed -n namespace
No resources found.
</pre>
<p>I believe there may be a bug in the field-selector, but just wondering if there are any fixes or details on a workaround.</p>
| <p>The correct <em>status.phase</em> for completed pods is <strong>Succeeded</strong>. </p>
<p><strong>So, to filter only completed pods, you should use this:</strong><br>
<code>kubectl get pod --field-selector=status.phase=Succeeded</code></p>
<p>Although, the use of bare pods is not recommended. Consider using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">Job Controller</a>:</p>
<blockquote>
<p>A Job creates one or more Pods and ensures that a specified number of
them successfully terminate. As pods successfully complete, the Job
tracks the successful completions.</p>
</blockquote>
<p>You can check job conditions and wait for them with this:<br>
<code>kubectl wait --for=condition=complete job/myjob</code></p>
|
<p>What CLI command can I type to remove the <code>node-role.kubernetes.io/master:NoSchedule</code> taint from the master node in a Kubernetes cluster? </p>
<p>The following command is failing: </p>
<pre><code>[lnxcfg@ip-10-0-0-193 ~]$ kubectl taint nodes $(kubectl get nodes --selector=node-role.kubernetes.io/master | awk 'FNR==2{print $1}') key:node-role.kubernetes.io/master:NoSchedule-
error: invalid taint effect: node-role.kubernetes.io/master, unsupported taint effect
</code></pre>
<p>As you can see below, I am able to get the name of the master node successfully by using the following command, which is also embedded in the above failing command: </p>
<pre><code>[lnxcfg@ip-10-0-0-193 ~]$ kubectl get nodes --selector=node-role.kubernetes.io/master | awk 'FNR==2{print $1}'
ip-10-0-0-193.us-west-2.compute.internal
</code></pre>
<p>This is an AWS Linux 2 node hosting the master node of a single master Kubernetes cluster. </p>
| <pre><code>kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule-
</code></pre>
<p>But you can also schedule on master node without removing the taint:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
...
spec:
...
tolerations:
- key: "node-role.kubernetes.io/master"
effect: "NoSchedule"
operator: "Exists"
</code></pre>
|
<p>I want to do the opposite of this question:</p>
<p><a href="https://stackoverflow.com/questions/46763148/how-to-create-secrets-using-kubernetes-python-client">How to create secrets using Kubernetes Python client?</a></p>
<p>i.e.:</p>
<p><strong>How do I read an existing secret from a kubernetes cluster via the kubernetes-python API?</strong></p>
<p>The use case is: I want to authenticate to mongodb (running in my cluster) from a jupyter notebook (also running in my cluster) without, for obvious reasons, saving the mongodb auth password inside the jupyter notebook.</p>
<p>Thanks!</p>
| <ol>
<li>Install <a href="https://github.com/kubernetes-client/python#kubernetes-python-client" rel="noreferrer">Kubernetes client</a> for python</li>
<li>Now you can pull the secret. For example secret name - <code>mysql-pass</code>, namespace - <code>default</code></li>
</ol>
<pre><code>from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
secret = v1.read_namespaced_secret("mysql-pass", "default")
print(secret)
</code></pre>
<ol start="3">
<li>If you need to extract decoded password from the secret</li>
</ol>
<pre><code>from kubernetes import client, config
import base64
import sys
config.load_kube_config()
v1 = client.CoreV1Api()
sec = str(v1.read_namespaced_secret("mysql-pass", "default").data)
pas = base64.b64decode(sec.strip().split()[1].translate(None, '}\''))
print(pas)
</code></pre>
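<p>Note that the string manipulation above is Python 2 specific (<code>str.translate(None, ...)</code>). A Python 3 sketch that decodes one key from the secret's <code>.data</code> mapping directly (the <code>mysql-pass</code>/<code>password</code> names are just this example's) could look like:</p>
<pre><code>import base64

def decode_secret_value(data, key):
    """Decode one base64-encoded value from a secret's .data mapping."""
    return base64.b64decode(data[key]).decode("utf-8")

# With a live cluster you would obtain the mapping via:
#   data = v1.read_namespaced_secret("mysql-pass", "default").data
# Here we build a mapping of the same shape by hand:
example_data = {"password": base64.b64encode(b"s3cret").decode("ascii")}
print(decode_secret_value(example_data, "password"))  # -> s3cret
</code></pre>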
<p>Hope this will help.</p>
|
<p>I am going through the second module of the <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/" rel="noreferrer">Kubernetes tutorial</a> and I'm confused about when <code>kubectl proxy</code> is necessary. </p>
<p>The reason I'm confused is, in the tutorial it was possible to create a Deployment (i.e. deploy a Docker image as a container in a pod) with the command <code>kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080</code> before we ever setup a proxy. Deploying an image seems like it would require access to the nodes. </p>
<p>The tutorial says that "By default [pods, i.e. groups of containers] are visible from other pods and services within the same kubernetes cluster, but not outside that network." For this reason it instructs us to setup a proxy with <code>kubectl proxy</code> before we attempt to <code>curl</code> a pod directly (e.g. with <code>curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/</code>). Yet we were already able to deploy those pods without a proxy. It seems strange that we could deploy them but not query them.</p>
<p>Similarly, the tutorial also has us setup the proxy before we attempt to get the version using the Kubernetes API with <code>curl http://localhost:8001/version</code> (I believe <code>localhost:8001</code> is the proxy). And yet, earlier we were able to query the version without a proxy using <code>kubectl version</code>, which returns the versions of both <code>kubectl</code> <strong>and</strong> Kubernetes on the master node.</p>
<p>Can anybody shed some light on these apparent contradictions?</p>
| <p>When running <code>kubectl</code> commands, the CLI is determining the address of the Kubernetes API server, the CA to verify the server's certificate against (to ensure you're talking to a trusted server and not some man-in-the-middle, say) and your client credentials from the kubeconfig file (to establish an encrypted, authenticated connection to the server with mTLS), which is in <code>~/.kube/config</code> by default. You can <code>cat</code> that file in the tutorial to see what's in it:</p>
<pre><code>$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority: /root/.minikube/ca.crt
server: https://172.17.0.26:8443
name: minikube
contexts:
- context:
cluster: minikube
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: /root/.minikube/client.crt
client-key: /root/.minikube/client.key
</code></pre>
<p>You can do the equivalent of what is happening in the tutorial without the proxy, as follows:</p>
<pre><code>$ curl \
--cacert /root/.minikube/ca.crt \
--cert /root/.minikube/client.crt \
--key /root/.minikube/client.key \
https://172.17.0.26:8443/api/v1/namespaces/default/pods/$POD_NAME/proxy/
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-6bf84cb898-5gzp5 | v=1
</code></pre>
<p>You can see that after running the proxy command, the resulting curl command you need to run is simpler and more convenient:</p>
<pre><code>curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy/
</code></pre>
<p>You don't need to bother figuring out the address of the API server or dealing with all these certificates and keys, you just connect to localhost and the local proxy running there handles making the secure, authenticated connection to the API and proxies the response from the API server back to you.</p>
<hr>
<p>Now, most interactions with the Kubernetes API that you need to do can be done directly via <code>kubectl</code> commands; it's rare that you need to <code>curl</code> the API directly. You saw this when you issued the <code>kubectl run</code> and <code>kubectl version</code> commands. In fact, you observed that you later found the version information via a <code>curl</code>, but you don't really need to do that since you can directly run <code>kubectl version</code>.</p>
<p>So you would probably only use <code>kubectl proxy</code> when you need to <code>curl</code> the Kubernetes API directly because there is no native <code>kubectl</code> command that lets you do what you want <em>and</em> when you prefer the convenience of not having that more complicated <code>curl</code> command with all the cert flags, etc.</p>
<hr>
<p>Okay, but when do you really need to <code>curl</code> the API directly? Again, usually never. However, one thing the API does in addition to being a RESTful API for creating and deleting Kubernetes resources (pods, deployments, services, pvcs, etc.) is it serves as a proxy into the internal container network. Out-of-the-box, there's no way to send traffic to the container you ran in the tutorial, except for via the proxy endpoints provided by the Kubernetes API located at <code>/api/v1/namespaces/default/pods/$POD_NAME/proxy/</code>.</p>
<p>The <a href="https://stackoverflow.com/questions/54332972/what-is-the-purpose-of-kubectl-proxy">StackOverflow question</a> that you link to in the comment to your question has an accepted answer that explains several other ways to send traffic to the running containers. For a real-world application, you're likely to want to use something other than the proxy endpoints on the Kubernetes API server itself, but the proxy endpoints are a quick and easy way to interact over the network with a deployed container, so it might be something you want to do early on in your development lifecycle before setting up more robust and complex infrastructure to handle ingress traffic to your containers.</p>
<hr>
<p>So putting it all together: <strong>when</strong> you have deployed a web application to Kubernetes <strong>and</strong> you would like to send requests to it <strong>and</strong> you don't (yet) want to set up some of the more complex but robust ways to get ingress traffic to your containers, you can use the container network proxy API located at <code>/api/v1/namespaces/default/pods/$POD_NAME/proxy/</code> of the Kubernetes API server. There is no <code>kubectl</code> command that will hit that endpoint for you, so you have to curl it (or open it in a browser) directly. <strong>When</strong> you want to <code>curl</code> any Kubernetes API server endpoint directly <strong>and</strong> you don't want to pass a bunch of flags to your <code>curl</code> command, then running <code>kubectl proxy</code> allows you to run simpler <code>curl</code> commands directed at that local proxy, which will proxy your requests to the Kubernetes API.</p>
<hr>
<p>One final note: there are two completely different proxies going on here. One is the local proxy proxying your requests to <em>any</em> endpoint of the Kubernetes API server. One such (type of) endpoint that the Kubernetes API server has is itself a proxy into the internal network where containers are deployed. (Further still, there are proxies internal to the container network that make things work under the hood, but to keep it simple, there's no need to discuss them in this answer.) Don't get those two proxies confused.</p>
|
<p>I'm trying to work with Istio from Go, and are using Kubernetes and Istio go-client code.</p>
<p>The problem I'm having is that I can't specify <code>ObjectMeta</code> or <code>TypeMeta</code> in my Istio-<code>ServiceRole</code> object. I can only specify <code>rules</code>, which are inside the <code>spec</code>.</p>
<p>Below you can see what I got working: </p>
<pre class="lang-golang prettyprint-override"><code>import (
v1alpha1 "istio.io/api/rbac/v1alpha1"
)
func getDefaultServiceRole(app nais.Application) *v1alpha1.ServiceRole {
return &v1alpha1.ServiceRole{
Rules: []*v1alpha1.AccessRule{
{
Ports: []int32{2},
},
},
}
}
</code></pre>
<p>What I would like to do is have this code work:</p>
<pre class="lang-golang prettyprint-override"><code>func getDefaultServiceRole(app *nais.Application) *v1alpha1.ServiceRole {
return &v1alpha1.ServiceRole{
TypeMeta: metav1.TypeMeta{
Kind: "ServiceRole",
APIVersion: "v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: app.Name,
Namespace: app.Namespace,
},
Spec: v1alpha1.ServiceRole{
Rules: []*v1alpha1.AccessRule{
{
Ports: []int32{2},
},
},
},
	}
}
</code></pre>
<p>Can anyone point me in the right direction?</p>
| <p>Ah - this is a pretty painful point: Istio requires Kubernetes CRD wrapper metadata (primarily the <code>name</code> and <code>namespace</code> fields), but those fields are not part of the API objects themselves nor are they represented in the protos. (This is changing with the new MCP API for configuring components - which Galley uses - <a href="https://github.com/istio/api/blob/master/mcp/v1alpha1/metadata.proto" rel="nofollow noreferrer">does encode these fields as protobufs</a> but that doesn't help for your use case.) Instead, you should use the types in <a href="https://godoc.org/istio.io/istio/pilot/pkg/config/kube/crd" rel="nofollow noreferrer"><code>istio.io/istio/pilot/pkg/config/kube/crd</code></a>, which implement the K8s CRD interface.</p>
<p>The easiest way to work with the Istio objects in golang is to use Pilot's libraries, particularly the <a href="https://godoc.org/istio.io/istio/pilot/pkg/model" rel="nofollow noreferrer"><code>istio.io/istio/pilot/pkg/model</code></a> and <a href="https://godoc.org/istio.io/istio/pilot/pkg/config/kube/crd" rel="nofollow noreferrer"><code>istio.io/istio/pilot/pkg/config/kube/crd</code></a> packages, as well as the <a href="https://godoc.org/istio.io/istio/pilot/pkg/model#Config" rel="nofollow noreferrer"><code>model.Config</code></a> struct. You can either pass around the full <code>model.Config</code> (not great because <code>spec</code> has type <code>proto.Message</code>, so you need type assertions to extract the data you care about), or pass around the inner object and wrap it in a <code>model.Config</code> before you push it. You can use the <a href="https://godoc.org/istio.io/istio/pilot/pkg/model#ProtoSchema" rel="nofollow noreferrer"><code>model.ProtoSchema</code></a> type to help with conversion to and from YAML and JSON. <a href="https://godoc.org/istio.io/istio/pilot/pkg/model#pkg-variables" rel="nofollow noreferrer">Pilot only defines <code>ProtoSchema</code> objects for the networking API</a>, but the type is public and you can create them for arbitrary types.</p>
<p>So, using your example code I might try something like:</p>
<pre class="lang-golang prettyprint-override"><code>import (
v1alpha1 "istio.io/api/rbac/v1alpha1"
"istio.io/istio/pilot/pkg/model"
)
func getDefaultServiceRole() *v1alpha1.ServiceRole {
return &v1alpha1.ServiceRole{
Rules: []*v1alpha1.AccessRule{
{
Ports: []int32{2},
},
},
}
}
func toConfig(app *nais.Application, role *v1alpha1.ServiceRole) model.Config {
    return model.Config{
        ConfigMeta: model.ConfigMeta{
            Name:      app.Name,
            Namespace: app.Namespace,
        },
        Spec: role, // the ServiceRole proto is the spec; K8s metadata lives in ConfigMeta
    }
}

// Client embeds a ConfigStore (an interface), so helper methods can be hung off it.
type Client struct {
    model.ConfigStore
}

func (c Client) CreateRoleFor(app *nais.Application, role *v1alpha1.ServiceRole) error {
    cfg := toConfig(app, role)
    _, err := c.Create(cfg)
    return err
}
</code></pre>
<p>As a more complete example, we built the Istio CloudMap operator in this style. <a href="https://github.com/tetratelabs/istio-cloud-map/blob/master/pkg/control/synchronizer.go#L64-L91" rel="nofollow noreferrer">Here's the core of it that pushes config to K8s with Pilot libraries.</a> Here's <a href="https://github.com/tetratelabs/istio-cloud-map/blob/e3c95237f5e0743f9ec9898fdf1809321df91b5e/pkg/control/synchronizer.go#L28" rel="nofollow noreferrer">the incantation to create an instance of model.ConfigStore to use to create objects</a>. Finally, I want to call out explicitly as it's only implicit in the example: when you call <code>Create</code> on the <code>model.ConfigStore</code>, the <code>ConfigStore</code> relies on the metadata in the <code>ProtoSchema</code> objects used to create it. So be sure to initialize the store with <code>ProtoSchema</code> objects for all of the types you'll be working with.</p>
<hr>
<p>You can achieve the same using just the K8s client libraries and the <code>istio.io/istio/pilot/pkg/config/kube/crd</code> package, but I have not done it firsthand and don't have examples handy.</p>
|
<p>As per the <a href="https://docs.docker.com/compose/env-file/" rel="nofollow noreferrer">docker docs</a>, environment variables in a .env file are expected to be in key-val format as <code>VAR=VAL</code>, which works fine for a sample like <code>foo=bar</code>, but there is no mention of unavoidable special characters, e.g. <code>=</code>, which may be confused with the key-val separator, or spaces, both of which appear in a valid db connection string such as:</p>
<p>secrets.env file:</p>
<pre><code> connectionString=Data Source=some-server;Initial Catalog=db;User ID=uid;Password=secretpassword
</code></pre>
<p>which is referred in docker-compose.debug.yaml file content as:</p>
<pre><code>services:
some-service:
container_name: "service-name"
env_file:
- secrets.env
ports:
- "80:80"
</code></pre>
<p>This is further transformed into <code>docker-compose.yaml</code>, as shown in the complete flow below:</p>
<p><a href="https://i.stack.imgur.com/yplMb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yplMb.png" alt="enter image description here"></a></p>
<p>So the question is: how do you include a connection string which has <code>=</code> and spaces as part of the value?</p>
<p><strong>Need</strong>: We have a few micro-services within a VS solution and want to avoid repeating the same connection strings that would otherwise be needed in the service spec of <code>docker-compose.yaml</code>.</p>
<p><strong>Tried</strong>: including values in single/double quotes, but post-transformation everything after <code>=</code> is treated as the value, including the quotes, similar to the Kubernetes YAML file convention.</p>
| <p>I ran a test without any issues:</p>
<pre><code>$ cat .env
ENV=default
USER_NAME=test2
SPECIAL=field=with=equals;and;semi-colons
$ cat docker-compose.env.yml
version: '2'
services:
test:
image: busybox
command: env
environment:
- SPECIAL
$ docker-compose -f docker-compose.env.yml up
Creating network "test_default" with the default driver
Creating test_test_1_55eac1c3767c ... done
Attaching to test_test_1_d7787ac5bfc0
test_1_d7787ac5bfc0 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
test_1_d7787ac5bfc0 | HOSTNAME=d249a16a8e09
test_1_d7787ac5bfc0 | SPECIAL=field=with=equals;and;semi-colons
test_1_d7787ac5bfc0 | HOME=/root
test_test_1_d7787ac5bfc0 exited with code 0
</code></pre>
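<p>The reason no quoting is needed: env-file parsers treat only the <em>first</em> <code>=</code> on a line as the key/value separator; everything after it, including further <code>=</code> signs, spaces and semicolons, belongs to the value. A rough Python sketch of that behavior (an illustration, not Compose's actual parser):</p>

```python
def parse_env_line(line: str):
    """Split an env-file line on the FIRST '=' only."""
    line = line.rstrip("\n")
    if not line or line.startswith("#"):
        return None  # blank lines and comments are skipped
    key, sep, value = line.partition("=")  # partition splits at the first '=' only
    return (key, value) if sep else None

print(parse_env_line(
    "connectionString=Data Source=some-server;Initial Catalog=db;"
    "User ID=uid;Password=secretpassword"
))
# key: 'connectionString'
# value: 'Data Source=some-server;Initial Catalog=db;User ID=uid;Password=secretpassword'
```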
|
<p>I'm a Kubernetes amateur trying to use NGINX ingress controller on GKE. I'm following <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">this</a> google cloud documentation to setup NGINX Ingress for my services, but, I'm facing issues in accessing the NGINX locations.</p>
<p><strong>What's working?</strong></p>
<ol>
<li>Ingress-Controller deployment using Helm (RBAC enabled)</li>
<li>ClusterIP service deployments</li>
</ol>
<p><strong>What's not working?</strong></p>
<ol>
<li>Ingress resource to expose multiple ClusterIP services using unique paths (fanout routing)</li>
</ol>
<p><strong>K8S Services</strong></p>
<pre><code>[msekar@ebs kube-base]$ kubectl get services -n payment-gateway-7682352
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer 10.35.241.255 35.188.161.171 80:31918/TCP,443:31360/TCP 6h
nginx-ingress-default-backend ClusterIP 10.35.251.5 <none> 80/TCP 6h
payment-gateway-dev ClusterIP 10.35.254.167 <none> 5000/TCP 6h
payment-gateway-qa ClusterIP 10.35.253.94 <none> 5000/TCP 6h
</code></pre>
<p><strong>K8S Ingress</strong></p>
<pre><code>[msekar@ebs kube-base]$ kubectl get ing -n payment-gateway-7682352
NAME HOSTS ADDRESS PORTS AGE
pgw-nginx-ingress * 104.198.78.169 80 6h
[msekar@ebs kube-base]$ kubectl describe ing pgw-nginx-ingress -n payment-gateway-7682352
Name: pgw-nginx-ingress
Namespace: payment-gateway-7682352
Address: 104.198.78.169
Default backend: default-http-backend:80 (10.32.1.4:8080)
Rules:
Host Path Backends
---- ---- --------
*
/dev/ payment-gateway-dev:5000 (<none>)
/qa/ payment-gateway-qa:5000 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/ssl-redirect":"false"},"name":"pgw-nginx-ingress","namespace":"payment-gateway-7682352"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"payment-gateway-dev","servicePort":5000},"path":"/dev/"},{"backend":{"serviceName":"payment-gateway-qa","servicePort":5000},"path":"/qa/"}]}}]}}
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
</code></pre>
<p>Last applied configuration in the annotations (ingress description output) shows the ingress resource manifest. But, I'm pasting it below for reference</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: pgw-nginx-ingress
namespace: payment-gateway-7682352
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- backend:
serviceName: payment-gateway-dev
servicePort: 5000
path: /dev/
- backend:
serviceName: payment-gateway-qa
servicePort: 5000
path: /qa/
</code></pre>
<p><strong>Additional Info</strong></p>
<p>The services I'm trying to access are Spring Boot services that use contexts, so the root location isn't a valid end-point.</p>
<p>The container's readiness and liveliness probes are defined accordingly.</p>
<p>For example, the "payment-gateway-dev" service uses the context <code>/pgw/v1</code>, so the deployment can only be accessed through that context. To access the application's swagger spec you would use the URL</p>
<p>http://<>/pgw/v1/swagger-ui.html</p>
<p><strong>Behaviour of my deployment</strong></p>
<blockquote>
<p>ingress-controller-LB-ip = 35.188.161.171</p>
</blockquote>
<ul>
<li>Accessing ingress controller load balancer "<a href="http://35.188.161.171" rel="nofollow noreferrer">http://35.188.161.171</a>" takes me to default 404 backend</li>
<li>Accessing ingress controller load balancer health "<a href="http://35.188.161.171/healthz" rel="nofollow noreferrer">http://35.188.161.171/healthz</a>" returns 200 HTTP response as expected</li>
<li>Trying to access the services using the URLs below returns "404: page not found" error
<ul>
<li><a href="http://35.188.161.171/dev/pgw/v1/swagger-ui.html" rel="nofollow noreferrer">http://35.188.161.171/dev/pgw/v1/swagger-ui.html</a></li>
<li><a href="http://35.188.161.171/qa/pgw/v1/swagger-ui.html" rel="nofollow noreferrer">http://35.188.161.171/qa/pgw/v1/swagger-ui.html</a></li>
</ul></li>
</ul>
<p>Any suggestions about or insights into what I might be doing wrong will be much appreciated.</p>
| <p>+1 for this well-asked question.</p>
<p>Your setup seemed right to me. In your explanation, you mention that your services require <code>http://<>/pgw/v1/swagger-ui.html</code> as context. However, in your setup the path submitted to the service will be <code>http://<>/qa/pgw/v1/swagger-ui.html</code> if your route is <code>/qa/</code>.</p>
<p>To remove the prefix, what you would need to do is to add a <code>rewrite</code> rule to your ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: pgw-nginx-ingress
namespace: payment-gateway-7682352
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- backend:
serviceName: payment-gateway-dev
servicePort: 5000
path: /dev/(.+)
- backend:
serviceName: payment-gateway-qa
servicePort: 5000
path: /qa/(.+)
</code></pre>
<p>After this, your service should receive the correct contexts.</p>
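<p>To see what the capture-group rewrite does to a request path, here is a rough Python illustration (it only mimics the two rules above; NGINX itself performs the actual rewrite):</p>

```python
import re

def rewrite(path: str) -> str:
    """Mimic the ingress rules: paths /dev/(.+) and /qa/(.+) with rewrite-target /$1."""
    m = re.match(r"^/(?:dev|qa)/(.+)$", path)
    # The environment prefix is dropped; only the captured group remains.
    return "/" + m.group(1) if m else path

print(rewrite("/qa/pgw/v1/swagger-ui.html"))   # -> /pgw/v1/swagger-ui.html
print(rewrite("/dev/pgw/v1/swagger-ui.html"))  # -> /pgw/v1/swagger-ui.html
```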
<p>Ref:</p>
<ul>
<li>Rewrite: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md</a></li>
<li>Ingress Route Matching: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/ingress-path-matching.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/ingress-path-matching.md</a></li>
</ul>
|
<p>I have a Kubernetes cluster set up using Kubernetes Engine on GCP. I have also installed Dask using the Helm package manager. My data are stored in a Google Storage bucket on GCP.</p>
<p>Running <code>kubectl get services</code> on my local machine yields the following output</p>
<p><a href="https://i.stack.imgur.com/Uix0v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uix0v.png" alt="enter image description here"></a></p>
<p>I can open the dashboard and jupyter notebook using the external IP without any problems. However, I'd like to develop a workflow where I write code in my local machine and submit the script to the remote cluster and run it there. </p>
<p><strong>How can I do this?</strong></p>
<p>I tried following the instructions in <a href="http://distributed.dask.org/en/latest/submitting-applications.html" rel="nofollow noreferrer">Submitting Applications</a> using <code>dask-remote</code>. I also tried exposing the scheduler using <code>kubectl expose deployment</code> with type LoadBalancer, though I do not know if I did this correctly. Suggestions are greatly appreciated.</p>
| <p>Yes, if your client and workers share the same software environment then you should be able to connect a client to a remote scheduler using the publicly visible IP.</p>
<pre><code>from dask.distributed import Client
client = Client('REDACTED_EXTERNAL_SCHEDULER_IP')
</code></pre>
|
<p>I installed Kubernetes <strong>version 10</strong> </p>
<pre><code> kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead",GitTreeState:"archive", BuildDate:"2018-03-29T08:38:42Z",GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"archive", BuildDate:"2018-03-29T08:38:42Z",
GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>When I execute:</p>
<pre><code>kubectl delete deployment example
Error: unknown command "verion" for "kubectl"
</code></pre>
<p>It does not help to add <code>--force</code> or <code>--cascade=false</code>; I tried all of this but there is no change.</p>
<p>When I execute : </p>
<pre><code> kubectl get nodes ==> Master return Nodes , it is Okay
</code></pre>
<p>Any help ? </p>
| <p>I had a similar issue where I could not delete a deployment with Kubernetes installed in Docker Desktop. The solution has been going to <code>Docker Desktop > Preferences > Reset Kubernetes cluster.</code></p>
|
<p>I am creating 11 pods on an EKS Kubernetes cluster. I have two public and two private subnets.
Of those, I have to place 10 pods in a private subnet and 1 pod in a public subnet. The reason for placing it in a public subnet is that I have to attach a public-facing load balancer IP to it. But I am not sure how to place a particular pod in a particular subnet on EKS. A similar question was asked <a href="https://stackoverflow.com/questions/54027386/eks-in-private-subnet-load-balancer-in-public-subnet">here</a>, but it didn't get an answer.
I am creating everything via CloudFormation.
How can I create a particular pod in a particular subnet on EKS?</p>
| <p>Seems like <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">Kubernetes' nodeSelector</a> is the thing you need. You just have to add a fitting label to the nodes, <a href="https://stackoverflow.com/a/52228629/10761013">this answer</a> will help you automate it.</p>
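<p>As a rough sketch (the label key/value below are made up; use whatever label you attach to the nodes running in the public subnet), the pod spec would look like:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: public-facing-pod      # illustrative name
spec:
  nodeSelector:
    subnet: public             # assumed label applied to nodes in the public subnet
  containers:
  - name: app
    image: my-app:latest       # illustrative image
```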
|
<p>I have deployed an Elasticsearch container in AWS using an EKS Kubernetes cluster. The memory usage of the container keeps increasing even though there are only 3 indices and it is not used heavily. I am dumping cluster container logs into Elasticsearch using Fluentd. Other than this, there is no use of Elasticsearch. I tried applying min/max heap size using <code>-Xms512m -Xmx512m</code>. It applies successfully but still, the memory usage almost doubles in 24 hours. I am not sure what other options I have to configure. I tried changing the docker image from <code>elasticsearch:6.5.4</code> to <code>elasticsearch:6.5.1</code>, but the issue persists. I also tried the <code>-XX:MaxHeapFreeRatio=50</code> java option. </p>
<p>Check the screenshot from kibana. <a href="https://i.stack.imgur.com/QCadA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QCadA.png" alt="enter image description here"></a></p>
<p>Edit : Following are logs from Elastic-search start-up :</p>
<pre><code>[2019-03-18T13:24:03,119][WARN ][o.e.b.JNANatives ] [es-79c977d57-v77gw] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2019-03-18T13:24:03,120][WARN ][o.e.b.JNANatives ] [es-79c977d57-v77gw] This can result in part of the JVM being swapped out.
[2019-03-18T13:24:03,120][WARN ][o.e.b.JNANatives ] [es-79c977d57-v77gw] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216
[2019-03-18T13:24:03,120][WARN ][o.e.b.JNANatives ] [es-79c977d57-v77gw] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[2019-03-18T13:24:03,120][WARN ][o.e.b.JNANatives ] [es-79c977d57-v77gw] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2019-03-18T13:24:03,397][INFO ][o.e.e.NodeEnvironment ] [es-79c977d57-v77gw] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvda1)]], net usable_space [38.6gb], net total_space [96.8gb], types [ext4]
[2019-03-18T13:24:03,397][INFO ][o.e.e.NodeEnvironment ] [es-79c977d57-v77gw] heap size [503.6mb], compressed ordinary object pointers [true]
[2019-03-18T13:24:03,469][INFO ][o.e.n.Node ] [es-79c977d57-v77gw] node name [es-79c977d57-v77gw], node ID [qrCUCaHoQfa3SXuTpLjUUA]
[2019-03-18T13:24:03,469][INFO ][o.e.n.Node ] [es-79c977d57-v77gw] version[6.5.1], pid[1], build[default/tar/8c58350/2018-11-16T02:22:42.182257Z], OS[Linux/4.15.0-1032-aws/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-03-18T13:24:03,469][INFO ][o.e.n.Node ] [es-79c977d57-v77gw] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.oEmM9oSp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-03-18T13:24:05,082][INFO ][o.e.p.PluginsService ] [es-79c977d57-v77gw] loaded module [aggs-matrix-stats]
[2019-03-18T13:24:05,082][INFO ][o.e.p.PluginsService ] [es-79c977d57-v77gw] loaded module [analysis-common]
[2019-03-18T13:24:05,082][INFO ][o.e.p.PluginsService ] [es-79c977d57-v77gw] loaded module [ingest-common] ....
</code></pre>
| <p>Pod memory usage in Kubernetes isn't equivalent to JVM memory usage--to get that stat you'd have to pull the metric from the JVM directly. Pod memory usage, depending on the metric you're querying, can also include page cache and swap space, in addition to application memory, so there's no telling from the graph you've provided what is actually consuming memory here. Depending on what the problem is, Elasticsearch has advanced features like <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#bootstrap-memory_lock" rel="nofollow noreferrer">memory locking</a>, which will lock your process address space in RAM. However, a surefire way to keep a Kubernetes pod from eating up non-JVM memory is simply to set a limit to how much memory that pod can consume. In your Kubernetes pod spec set <code>resources.limits.memory</code> to your desired memory cap and your memory consumption won't stray beyond that limit. Of course, if this is a problem with your JVM configuration, the ES pod will fail with an OOM error when it hits the limit. Just make sure you're allocating additional space for system resources, by which I mean, your pod memory limit should be somewhat greater than your max JVM heap size.</p>
<p>On another note, you might be surprised how much logging Kubernetes is actually doing behind the scenes. Consider periodically <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-open-close.html" rel="nofollow noreferrer">closing Elasticsearch indexes</a> that aren't being regularly searched to free up memory.</p>
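<p>As a sketch, the relevant container-spec fields would look something like the following (values are illustrative; leave headroom above the JVM heap for off-heap memory and page cache):</p>

```yaml
containers:
- name: elasticsearch
  image: elasticsearch:6.5.4
  env:
  - name: ES_JAVA_OPTS
    value: "-Xms512m -Xmx512m"
  resources:
    requests:
      memory: "1Gi"
    limits:
      memory: "1Gi"   # pod is OOM-killed if it exceeds this, so keep it well above the heap
```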
|
<p>In <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/</a>
In MetalLB mode, one node attracts all the traffic for ingress-nginx.
With NodePort, we can gather all the traffic and load-balance it across pods via the Service.</p>
<p>What is the difference between NodePort and MetalLB?</p>
| <p>It's detailed fairly well in the Kubernetes Service documentation here:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a></p>
<p>To summarise: </p>
<p>NodePort exposes the service on a port, which can then be accessed externally. </p>
<p>LoadBalancer relies on an external implementation to provision a load balancer for the Service. On a cloud provider, e.g. Azure, cloud load balancers are used and can expose one or more public IP addresses, balancing traffic across a pool of backend resources (the Kubernetes nodes). On bare metal there is no cloud provider to do this, which is where MetalLB comes in: it implements the LoadBalancer Service type by assigning an external IP from a pool you configure, so the Service behaves as it would in the cloud instead of falling back to NodePort access.</p>
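<p>As a sketch, the two manifests differ only in the <code>type</code> field (names here are illustrative):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer   # with MetalLB installed, gets an external IP from its address pool
  # type: NodePort     # alternative: exposes a high port (30000-32767) on every node
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```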
|
<p>I have an azure container service (aks) cluster. It is migrated to version 1.8.1. I am trying to deploy postgres database and use <code>AzureFileVolume</code> to persist postgres data on.</p>
<p>By default, if I deploy the postgres database without mounting a volume, everything works as expected, i.e. the pod is created and the database is initialized.</p>
<p>When I try to mount a volume using the yaml below, I get <strong>initdb: could not access directory "/var/lib/postgresql/data": Permission denied</strong>.</p>
<p>I tried various hacks as suggested in this long <a href="https://github.com/kubernetes/kubernetes/issues/2630" rel="nofollow noreferrer">github thread</a>, like: setting security context for the pod or running chown commands in <em>initContainers</em>. The result was the same - permission denied.</p>
<p>Any ideas would be appreciated.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: myapp
component: test-db
name: test-db
spec:
ports:
- port: 5432
selector:
app: myapp
component: test-db
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: test-db
spec:
template:
metadata:
labels:
app: myapp
component: test-db
spec:
securityContext:
fsGroup: 999
runAsUser: 999
containers:
- name: test-db
image: postgres:latest
securityContext:
allowPrivilegeEscalation: false
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: myappdb
- name: POSTGRES_USER
value: myappdbuser
- name: POSTGRES_PASSWORD
value: qwerty1234
volumeMounts:
- name: azure
mountPath: /var/lib/postgresql/data
volumes:
- name: azure
azureFile:
secretName: azure-secret
shareName: acishare
readOnly: false
</code></pre>
| <p>This won't work; you need to use Azure Disks instead. The reason is that Postgres uses hard links, which are not supported by Azure Files:
<a href="https://github.com/docker-library/postgres/issues/548" rel="nofollow noreferrer">https://github.com/docker-library/postgres/issues/548</a></p>
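<p>As a rough sketch of the Azure Disk alternative (the disk name and URI below are placeholders for your own managed disk):</p>

```yaml
volumes:
- name: postgres-data
  azureDisk:
    kind: Managed
    diskName: my-postgres-disk    # placeholder: name of your managed disk
    diskURI: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/my-postgres-disk  # placeholder
```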
|
<p>I'm having a few issues getting Ambassador to work correctly. I'm new to Kubernetes and just teaching myself.</p>
<p>I have successfully managed to work through the demo material Ambassador provides (e.g. the <code>/httpbin/</code> endpoint is working correctly), but when I try to deploy a Go service it falls over.</p>
<p>When hitting the 'qotm' endpoint, this is the response:</p>
<pre><code>upstream request timeout
</code></pre>
<p>Pod status:</p>
<pre><code>CrashLoopBackOff
</code></pre>
<p>From my research, it seems to be related to the yaml file not being configured correctly but I'm struggling to find any documentation relating to this use case.</p>
<p>My cluster is running on AWS EKS and the images are being pushed to AWS ECR.</p>
<p>main.go: </p>
<pre><code>package main
import (
"fmt"
"net/http"
"os"
)
func main() {
var PORT string
if PORT = os.Getenv("PORT"); PORT == "" {
PORT = "3001"
}
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Hello World from path: %s\n", r.URL.Path)
})
http.ListenAndServe(":" + PORT, nil)
}
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM golang:alpine
ADD ./src /go/src/app
WORKDIR /go/src/app
EXPOSE 3001
ENV PORT=3001
CMD ["go", "run", "main.go"]
</code></pre>
<p>test.yaml: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: qotm
annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Mapping
name: qotm_mapping
prefix: /qotm/
service: qotm
spec:
selector:
app: qotm
ports:
- port: 80
name: http-qotm
targetPort: http-api
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: qotm
spec:
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: qotm
spec:
containers:
- name: qotm
image: ||REMOVED||
ports:
- name: http-api
containerPort: 3001
readinessProbe:
httpGet:
path: /health
port: 5000
initialDelaySeconds: 30
periodSeconds: 3
resources:
limits:
cpu: "0.1"
memory: 100Mi
</code></pre>
<p>Pod description:</p>
<pre><code>Name: qotm-7b9bf4d499-v9nxq
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: ip-192-168-89-69.eu-west-1.compute.internal/192.168.89.69
Start Time: Sun, 17 Mar 2019 17:19:50 +0000
Labels: app=qotm
pod-template-hash=3656908055
Annotations: <none>
Status: Running
IP: 192.168.113.23
Controlled By: ReplicaSet/qotm-7b9bf4d499
Containers:
qotm:
Container ID: docker://5839996e48b252ac61f604d348a98c47c53225712efd503b7c3d7e4c736920c4
Image: IMGURL
Image ID: docker-pullable://IMGURL
Port: 3001/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 17 Mar 2019 17:30:49 +0000
Finished: Sun, 17 Mar 2019 17:30:49 +0000
Ready: False
Restart Count: 7
Limits:
cpu: 100m
memory: 200Mi
Requests:
cpu: 100m
memory: 200Mi
Readiness: http-get http://:3001/health delay=30s timeout=1s period=3s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5bbxw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-5bbxw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5bbxw
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned default/qotm-7b9bf4d499-v9nxq to ip-192-168-89-69.eu-west-1.compute.internal
Normal Pulled 10m (x5 over 12m) kubelet, ip-192-168-89-69.eu-west-1.compute.internal Container image "IMGURL" already present on machine
Normal Created 10m (x5 over 12m) kubelet, ip-192-168-89-69.eu-west-1.compute.internal Created container
Normal Started 10m (x5 over 11m) kubelet, ip-192-168-89-69.eu-west-1.compute.internal Started container
Warning BackOff 115s (x47 over 11m) kubelet, ip-192-168-89-69.eu-west-1.compute.internal Back-off restarting failed container
</code></pre>
| <p>In your Kubernetes deployment file you have exposed a readiness probe on port 5000, while your application is exposed on port 3001. Also, while running the container a few times I got OOMKilled, so I increased the memory limit. In any case, the deployment file below should work fine.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: qotm
spec:
replicas: 1
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: qotm
spec:
containers:
- name: qotm
image: <YOUR_IMAGE>
imagePullPolicy: Always
ports:
- name: http-api
containerPort: 3001
readinessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 30
periodSeconds: 3
resources:
limits:
cpu: "0.1"
memory: 200Mi
</code></pre>
|
<p>While trying to setup Cassandra database in a local Kubernetes cluster on a Mac OS (via Minikube), I am getting connection issues.
It seems like Node.js is not able to resolve DNS settings correctly, but resolving via command line DOES work.</p>
<p>The setup is as following (simplified):
Cassandra Service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra
spec:
type: NodePort
ports:
- port: 9042
targetPort: 9042
protocol: TCP
name: http
selector:
app: cassandra
</code></pre>
<p>In addition, there's a PersistentVolume and a StatefulSet.</p>
<p>The application itself is very basic</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: app1
labels:
app: app1
spec:
replicas: 1
selector:
matchLabels:
app: app1
template:
metadata:
labels:
app: app1
spec:
containers:
- name: app1
image: xxxx.dkr.ecr.us-west-2.amazonaws.com/acme/app1
imagePullPolicy: "Always"
ports:
- containerPort: 3003
</code></pre>
<p>And a service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app1
namespace: default
spec:
selector:
app: app1
type: NodePort
ports:
- port: 3003
targetPort: 3003
protocol: TCP
name: http
</code></pre>
<p>there also a simple ingress setup</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: dev.acme.com
http:
paths:
- path: /app1
backend:
serviceName: app1
servicePort: 3003
</code></pre>
<p>And added to <code>/etc/hosts</code> the minikube ip address</p>
<pre><code>192.xxx.xx.xxx dev.acme.com
</code></pre>
<p>So far so good.</p>
<p>When trying to call <code>dev.acme.com/app1</code> via Postman, the node.js app itself is being called correctly (can see in the logs), HOWEVER, the app can not connect to Cassandra and times out with the following error:</p>
<blockquote>
<p>"All host(s) tried for query failed. First host tried,
92.242.140.2:9042: DriverError: Connection timeout. See innerErrors."</p>
</blockquote>
<p>The IP <code>92.242.140.2</code> seems to be just a public IP related to my ISP; I believe this happens because the app is not able to resolve the service name.</p>
<p>I created a simple node.js script to test dns:</p>
<p><code>var dns = require('dns')
dns.resolve6('cassandra', (err, res) => console.log('ERR:', err, 'RES:', res))</code></p>
<p>and the response is</p>
<blockquote>
<p>ERR: { Error: queryAaaa ENOTFOUND cassandra
at QueryReqWrap.onresolve [as oncomplete] (dns.js:197:19) errno: 'ENOTFOUND', code: 'ENOTFOUND', syscall: 'queryAaaa', hostname:
'cassandra' } RES: undefined</p>
</blockquote>
<p>However, and this is where it gets confusing - when I ssh into the pod (app1), I am able to connect to cassandra service using:</p>
<p><code>cqlsh cassandra 9042 --cqlversion=3.4.4</code></p>
<p>So it seems as the pod is "aware" of the service name, but node.js runtime is not.</p>
<p>Any idea what could cause the node.js to not being able to resolve the service name/dns settings?</p>
<p><strong>UPDATE</strong></p>
<p>After re-installing the whole cluster, including re-installing docker, kubectl and minikube I am getting the same issue.</p>
<p>While running <code>ping cassandra</code> from app1 container via ssh, I am getting the following</p>
<blockquote>
<p>PING cassandra.default.svc.cluster.local (10.96.239.137) 56(84) bytes
of data. 64 bytes from cassandra.default.svc.cluster.local
(10.96.239.137): icmp_seq=1 ttl=61 time=27.0 ms</p>
<p>2 packets transmitted, 2 received, 0% packet loss, time 1001ms</p>
</blockquote>
<p>Which seems to be fine.
However, when running from the Node.js runtime I am still getting the same error - </p>
<blockquote>
<p>"All host(s) tried for query failed. First host tried,
92.242.140.2:9042: DriverError: Connection timeout. See innerErrors."</p>
</blockquote>
<p>These are the services</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app1 ClusterIP None <none> 3003/TCP 11m
cassandra NodePort 10.96.239.137 <none> 9042:32564/TCP 38h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 38h
</code></pre>
<p>And these are the pods (all namespaces)</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default app1-85d889db5-m977z 1/1 Running 0 2m1s
default cassandra-0 1/1 Running 0 38h
kube-system calico-etcd-ccvs8 1/1 Running 0 38h
kube-system calico-node-thzwx 2/2 Running 0 38h
kube-system calico-policy-controller-5bb4fc6cdc-cnhrt 1/1 Running 0 38h
kube-system coredns-86c58d9df4-z8pr4 1/1 Running 0 38h
kube-system coredns-86c58d9df4-zcn6p 1/1 Running 0 38h
kube-system default-http-backend-5ff9d456ff-84zb5 1/1 Running 0 38h
kube-system etcd-minikube 1/1 Running 0 38h
kube-system kube-addon-manager-minikube 1/1 Running 0 38h
kube-system kube-apiserver-minikube 1/1 Running 0 38h
kube-system kube-controller-manager-minikube 1/1 Running 0 38h
kube-system kube-proxy-jj7c4 1/1 Running 0 38h
kube-system kube-scheduler-minikube 1/1 Running 0 38h
kube-system kubernetes-dashboard-ccc79bfc9-6jtgq 1/1 Running 4 38h
kube-system nginx-ingress-controller-7c66d668b-rvxpc 1/1 Running 0 38h
kube-system registry-creds-x5bhl 1/1 Running 0 38h
kube-system storage-provisioner 1/1 Running 0 38h
</code></pre>
<p><strong>UPDATE 2</strong></p>
<p>The code to connect to Cassandra from Node.js:</p>
<pre><code>const cassandra = require('cassandra-driver');
const client = new cassandra.Client({ contactPoints: ['cassandra:9042'], localDataCenter: 'datacenter1', keyspace: 'auth_server' });
const query = 'SELECT * FROM user';
client.execute(query, [])
.then(result => console.log('User with email %s', result.rows[0].email));
</code></pre>
<p>It DOES work when replacing <code>cassandra:9042</code> with <code>10.96.239.137:9042</code> (10.96.239.137 is the ip address received from pinging cassandra via the cli).</p>
| <p>The Cassandra driver for Node.js uses <code>resolve4</code>/<code>resolve6</code> to do its dns lookup, which bypasses your <code>resolv.conf</code> file. A program like ping uses resolv.conf to resolve 'cassandra' to 'cassandra.default.svc.cluster.local', the actual dns name assigned to your Cassandra service. For a more detailed explanation of name resolution in node.js see <a href="https://stackoverflow.com/questions/40985367/node-js-dns-resolve-vs-dns-lookup">here</a>.</p>
<p>The fix is simple, just pass in the full service name to your client:</p>
<pre><code>const client = new cassandra.Client({ contactPoints: ['cassandra.default.svc.cluster.local:9042'], localDataCenter: 'datacenter1', keyspace: 'auth_server' });
</code></pre>
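<p>To avoid hard-coding the address, you could also inject the fully qualified name into the pod through an environment variable (the variable name here is hypothetical):</p>
<pre><code>spec:
  containers:
  - name: app1
    env:
    - name: CASSANDRA_CONTACT_POINT
      value: cassandra.default.svc.cluster.local:9042
</code></pre>
<p>and read it in Node.js with <code>process.env.CASSANDRA_CONTACT_POINT</code>.</p>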
|
<p>I want to auto scale my pod in <strong>Kubernetes</strong>. After some research I understand that I should use <strong>Heapster</strong> for monitoring. What tested document can you suggest,
and how can I test it?
I know I should use some stress test, but does anyone have a document about it?
Thanks</p>
| <p>Heapster is EOL. <a href="https://github.com/kubernetes-retired/heapster" rel="nofollow noreferrer">https://github.com/kubernetes-retired/heapster</a></p>
<blockquote>
<p>RETIRED: Heapster is now retired. See the deprecation timeline for more information on support. We will not be making changes to Heapster.</p>
</blockquote>
<p>The following are potential migration paths for Heapster functionality:</p>
<pre><code>For basic CPU/memory HPA metrics: Use metrics-server.
For general monitoring: Consider a third-party monitoring pipeline that can gather Prometheus-formatted metrics. The kubelet exposes all the metrics exported by Heapster in Prometheus format. One such monitoring pipeline can be set up using the Prometheus Operator, which deploys Prometheus itself for this purpose.
For event transfer: Several third-party tools exist to transfer/archive Kubernetes events, depending on your sink. heptiolabs/eventrouter has been suggested as a general alternative.
</code></pre>
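<p>One detail worth noting when moving to metrics-server-driven HPA: CPU utilization is computed relative to the container's declared CPU request, so the target pods must set one. A minimal sketch (container name and image are placeholders):</p>
<pre><code>spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m
</code></pre>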
|
<p>Mongoose can't connect to MongoDB Atlas. It gives me this error every time:</p>
<pre><code> Error: querySrv ENOTIMP _mongodb._tcp.cluster1-owfxv.mongodb.net
</code></pre>
<p>I am running inside a Kubernetes cluster in minikube locally.
If I run the project directly then it works perfectly, but with minikube it always gives me this error.</p>
<p>Following is my code:</p>
<pre><code> const url = "mongodb+srv://name:password@cluster1-owfxv.mongodb.net/test?retryWrites=true";
const mongoDbOptions = {
useNewUrlParser: true,
reconnectTries: 10,
autoReconnect: true
};
mongoose.connect(url, mongoDbOptions).then((r) => { }).catch((e) => {
console.log(e);
});
</code></pre>
<p>The error message is not so clear to me. It's strange that it works directly but does not work inside the Kubernetes cluster.</p>
<p>I will really appreciate for any contribution.</p>
| <p>The <code>querySrv ENOTIMP</code> error indicates that the DNS server answering from inside the cluster does not implement SRV queries, which the <code>mongodb+srv://</code> scheme depends on. Try a connection string compatible with mongo driver 2.2.12 or later, i.e. one of the form <code>mongodb://username:password@host1:port,host2:port,host3:port/databaseName</code>, which avoids the SRV lookup entirely.</p>
<p>It's not clear why the SRV-based url resolves outside minikube but not inside it.</p>
|
<p>Issue: Redis POD creation on a k8s (v1.10) cluster is stuck at "ContainerCreating"</p>
<pre><code>Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned redis to k8snode02
Normal SuccessfulMountVolume 30m kubelet, k8snode02 MountVolume.SetUp succeeded for volume "default-token-f8tcg"
Warning FailedCreatePodSandBox 5m (x1202 over 30m) kubelet, k8snode02 Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "redis_default" network: failed to find plugin "loopback" in path [/opt/loopback/bin /opt/cni/bin]
Normal SandboxChanged 47s (x1459 over 30m) kubelet, k8snode02 Pod sandbox changed, it will be killed and re-created.
</code></pre>
| <p>When I used Calico as the CNI, I faced a similar issue.</p>
<p>The container remained in the creating state. I checked /etc/cni/net.d and /opt/cni/bin on the master; both are present, but I was not sure if this is also required on the worker node.</p>
<pre><code>root@KubernetesMaster:/opt/cni/bin# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-5c7588df-5zds6 0/1 ContainerCreating 0 21m
root@KubernetesMaster:/opt/cni/bin# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetesmaster Ready master 26m v1.13.4
kubernetesslave1 Ready <none> 22m v1.13.4
root@KubernetesMaster:/opt/cni/bin#
kubectl describe pods
Name: nginx-5c7588df-5zds6
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: kubernetesslave1/10.0.3.80
Start Time: Sun, 17 Mar 2019 05:13:30 +0000
Labels: app=nginx
pod-template-hash=5c7588df
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/nginx-5c7588df
Containers:
nginx:
Container ID:
Image: nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qtfbs (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-qtfbs:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qtfbs
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/nginx-5c7588df-5zds6 to kubernetesslave1
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "123d527490944d80f44b1976b82dbae5dc56934aabf59cf89f151736d7ea8adc" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8cc5e62ebaab7075782c2248e00d795191c45906cc9579464a00c09a2bc88b71" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "30ffdeace558b0935d1ed3c2e59480e2dd98e983b747dacae707d1baa222353f" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "630e85451b6ce2452839c4cfd1ecb9acce4120515702edf29421c123cf231213" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "820b919b7edcfc3081711bb78b79d33e5be3f7dafcbad29fe46b6d7aa22227aa" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "abbfb5d2756f12802072039dec20ba52f546ae755aaa642a9a75c86577be589f" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dfeb46ffda4d0f8a434f3f3af04328fcc4b6c7cafaa62626e41b705b06d98cc4" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ae3f47bb0282a56e607779d3267127ee8b0ae1d7f416f5a184682119203b1c8" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Warning FailedCreatePodSandBox 18m kubelet, kubernetesslave1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "546d07f1864728b2e2675c066775f94d658e221ada5fb4ed6bf6689ec7b8de23" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
Normal SandboxChanged 18m (x12 over 18m) kubelet, kubernetesslave1 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 3m39s (x829 over 18m) kubelet, kubernetesslave1 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f586be437843537a3082f37ad139c88d0eacfbe99ddf00621efd4dc049a268cc" network for pod "nginx-5c7588df-5zds6": NetworkPlugin cni failed to set up pod "nginx-5c7588df-5zds6_default" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
root@KubernetesMaster:/etc/cni/net.d#
</code></pre>
<p>On the worker node NGINX is trying to come up but keeps exiting. I am not sure what's going on here; I am a newbie to Kubernetes and not able to fix this issue -</p>
<pre><code>root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kubeβ¦" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kubeβ¦" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kubeβ¦" 3 minutes ago Up 3 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 3 minutes ago Up 3 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94b2994401d0 k8s.gcr.io/pause:3.1 "/pause" 1 second ago Up Less than a second k8s_POD_nginx-5c7588df-5zds6_default_677a722b-4873-11e9-a33a-06516e7d78c4_534
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kubeβ¦" 4 minutes ago Up 4 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kubeβ¦" 4 minutes ago Up 4 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f72500cae2b7 k8s.gcr.io/pause:3.1 "/pause" 1 second ago Up Less than a second k8s_POD_nginx-5c7588df-5zds6_default_677a722b-4873-11e9-a33a-06516e7d78c4_585
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kubeβ¦" 4 minutes ago Up 4 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 4 minutes ago Up 4 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
root@kubernetesslave1:/home/ubuntu# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5ad5500e8270 fadcc5d2b066 "/usr/local/bin/kubeβ¦" 5 minutes ago Up 5 minutes k8s_kube-proxy_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
b1c9929ebe9e k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_calico-node-749qx_kube-system_4e2d8c9c-4873-11e9-a33a-06516e7d78c4_1
ceb78340b563 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-proxy-f24gd_kube-system_4e2d313a-4873-11e9-a33a-06516e7d78c4_1
</code></pre>
<p>I checked /etc/cni/net.d and /opt/cni/bin on the worker node as well; they are there -</p>
<pre><code>root@kubernetesslave1:/home/ubuntu# cd /etc/cni
root@kubernetesslave1:/etc/cni# ls -ltr
total 4
drwxr-xr-x 2 root root 4096 Mar 17 05:19 net.d
root@kubernetesslave1:/etc/cni# cd /opt/cni
root@kubernetesslave1:/opt/cni# ls -ltr
total 4
drwxr-xr-x 2 root root 4096 Mar 17 05:19 bin
root@kubernetesslave1:/opt/cni# cd bin
root@kubernetesslave1:/opt/cni/bin# ls -ltr
total 107440
-rwxr-xr-x 1 root root 3890407 Aug 17 2017 bridge
-rwxr-xr-x 1 root root 3475802 Aug 17 2017 ipvlan
-rwxr-xr-x 1 root root 3520724 Aug 17 2017 macvlan
-rwxr-xr-x 1 root root 3877986 Aug 17 2017 ptp
-rwxr-xr-x 1 root root 3475750 Aug 17 2017 vlan
-rwxr-xr-x 1 root root 9921982 Aug 17 2017 dhcp
-rwxr-xr-x 1 root root 2605279 Aug 17 2017 sample
-rwxr-xr-x 1 root root 32351072 Mar 17 05:19 calico
-rwxr-xr-x 1 root root 31490656 Mar 17 05:19 calico-ipam
-rwxr-xr-x 1 root root 2856252 Mar 17 05:19 flannel
-rwxr-xr-x 1 root root 3084347 Mar 17 05:19 loopback
-rwxr-xr-x 1 root root 3036768 Mar 17 05:19 host-local
-rwxr-xr-x 1 root root 3550877 Mar 17 05:19 portmap
-rwxr-xr-x 1 root root 2850029 Mar 17 05:19 tuning
root@kubernetesslave1:/opt/cni/bin#
</code></pre>
|
<p>What is difference between <strong>MetalLB</strong> and <strong>NodePort</strong>?</p>
| <p>A node port is a built-in feature that allows users to access a service from the IP of any k8s node using a static port. The main drawback of using node ports is that your port must be in the range 30000-32767 and that there can, of course, be no overlapping node ports among services. Using node ports also forces you to expose your k8s nodes to users who need to access your services, which could pose security risks.</p>
<p>MetalLB is a third-party load balancer implementation for bare metal servers. A load balancer exposes a service on an IP external to your k8s cluster at any port of your choosing and routes those requests to your k8s nodes. </p>
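<p>As a sketch, MetalLB in layer 2 mode is configured with a ConfigMap listing the address pool it may hand out (the range below is an example and must be free on your network); an ordinary <code>LoadBalancer</code> Service then gets an IP from that pool:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
---
apiVersion: v1
kind: Service
metadata:
  name: my-service   # hypothetical service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>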
|
<p>I'm looking to find a way to write a scaler for my application which is running on Minikube to scale it up and down based on Time Stamps. Any idea?</p>
| <p>That would be a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> (see its <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">Walkthrough here</a>), which automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, with <strong><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="nofollow noreferrer">custom metrics support</a></strong>, on some other application-provided metric)</p>
<p>In your case, the custom metric would be the time.</p>
<p>You can then follow "<a href="https://itsmetommy.com/2018/07/01/kubernetes-horizontal-pod-autoscaler-using-minikube/" rel="nofollow noreferrer">Kubernetes: Horizontal Pod Autoscaler using Minikube</a>" from <a href="https://twitter.com/r1tommy" rel="nofollow noreferrer">Tommy Elmesewdy</a> as a practical example to implement one such autoscaler on Minikube.</p>
<p>In your case, you should ensure custom metrics are enabled:</p>
<pre><code>minikube start --extra-config kubelet.EnableCustomMetrics=true
</code></pre>
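<p>As a starting point, a minimal CPU-based HPA looks like the sketch below (the names are hypothetical); a time-based policy would replace the CPU target with a custom metric exposed by your application:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
</code></pre>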
|
<p>I'm new to Kubernetes, and after I've seen how huge it is I thought I'd ask for a bit of help.</p>
<p>The purpose of my company is to deploy a set of apps independantly for each of our clients. Say we have an app A, we want to deploy a first version for client 1, another version for client 2, etc. We will have a lot of clients in the future (maybe around 50). Of course we want to be able to manage them easily.</p>
<p>Which part of Kubernetes should I explore to achieve this, or if kubernetes is not fit for this what else should I consider ?</p>
<p>Thanks !</p>
| <p>Kubernetes has the concept of <code>namespaces</code>, which are isolated from each other and provide isolation between the deployments in them.</p>
<p>So you can use and explore namespaces in Kubernetes, which will help you isolate the client versions and deployments.</p>
<blockquote>
<p>if kubernetes is not fit for this what else should I consider ?</p>
</blockquote>
<p>I don't think that will be the case for your requirement: Kubernetes has lots of options for zero-downtime service deployments, and you can implement CI/CD with it, so I think Kubernetes will be easy to set up and will manage any application well.</p>
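<p>As a sketch, each client gets its own namespace and its own copy of the app pinned to that client's version (the image name and tag here are illustrative):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: client-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
  namespace: client-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
      - name: app-a
        image: registry.example.com/app-a:client1-version
</code></pre>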
|
<p>We are planning to setup Highly Available Jenkins setup in container platform using kubernetes. We are looking at setting up one Active master and another master in standby mode.
Jenkins data volume is going to be stored in a global storage that is shared between the two master containers.</p>
<p>In case the active master is not available then requests should fail over to other master. And the slaves should be communicating only with active master.</p>
<p>How do we accomplish Jenkins HA setup in active/passive mode in kubernetes. please provide your suggestions.</p>
<p>We would like to achieve as shown in the diagram from below link</p>
<p><a href="https://endocode.com/img/blog/jenkins-ha-setup_concept.png" rel="nofollow noreferrer">https://endocode.com/img/blog/jenkins-ha-setup_concept.png</a></p>
| <p>There have been active considerations to emulate active/passive setup for containers, but note that as a product feature this really is not a must have and hence not built in. This very well may be implemented as an OOB feature integration wherein you have to craft your applications to at least do the following:</p>
<ol>
<li>General leader election (for master selection and traffic routing,
maybe a sidecar container to do elections and message routing)</li>
<li>Make the liveness/readiness probe detection routines (and the failover logic) to patch all pods under failed paradigm to no longer be selected via any pod selector equation</li>
<li>In the event of another failover, you will still have to ensure another patch of labels (and this time across the old and new pods) to update pods metadata aka labels</li>
</ol>
<p>If you are looking for something bare minimal than, configuring liveness/readiness probes may just do the trick for you. As always, you should avoid getting into a practice of mass mutating pod labels with ad-hoc patches for role selection</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/45300" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/45300</a></p>
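<p>For the bare-minimal probe approach, a sketch of liveness/readiness probes on the Jenkins master container might look like this (it assumes Jenkins listens on port 8080 and that <code>/login</code> is reachable without authentication; adjust to your setup):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /login
    port: 8080
  initialDelaySeconds: 120
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /login
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10
</code></pre>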
|
<p>AFAIK, the admission controller is the last pass before the submission to the database.</p>
<p>However, I cannot tell which ones are enabled. Is there a way to know which ones are taking effect?</p>
<p>Thanks.</p>
| <p>The kube-apiserver runs in your kube-apiserver-< example.com > container.
The application does not currently have a method to query the enabled admission plugins, but you can read them from its command-line startup parameters.</p>
<pre><code>kubectl -n kube-system describe po kube-apiserver-example.com
</code></pre>
<p>Another way is to look inside the container: unfortunately there is no "ps" command in the container, but you can get the initial process command parameters from /proc , something like this:</p>
<pre><code>kubectl -n kube-system exec kube-apiserver-example.com -- sed 's/--/\n/g' /proc/1/cmdline
</code></pre>
<p>It will probably contain something like:</p>
<blockquote>
<p>enable-admission-plugins=NodeRestriction</p>
</blockquote>
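<p>If you manage the control plane yourself (e.g. with kubeadm), the flag lives in the kube-apiserver static pod manifest, typically <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> (the path and plugin list below are examples):</p>
<pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction
    # ... other flags ...
</code></pre>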
|
<p>I am using Istio-1.0.6 to implement Authentication/Authorization. I am attempting to use Jason Web Tokens (JWT). I followed most of the examples from the documentation but I am not getting the expected outcome. Here are my settings:</p>
<p>Service</p>
<pre><code>kubectl describe services hello
Name: hello
Namespace: agud
Selector: app=hello
Type: ClusterIP
IP: 10.247.173.177
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.16.0.193:8080
Session Affinity: None
</code></pre>
<p>Gateway</p>
<pre><code>kubectl describe gateway
Name: hello-gateway
Namespace: agud
Kind: Gateway
Metadata:
Cluster Name:
Creation Timestamp: 2019-03-15T13:40:43Z
Resource Version: 1374497
Self Link:
/apis/networking.istio.io/v1alpha3/namespaces/agud/gateways/hello-gateway
UID: ee483065-4727-11e9-a712-fa163ee249a9
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http
Number: 80
Protocol: HTTP
</code></pre>
<p>Virtual Service</p>
<pre><code>kubectl describe virtualservices
Name: hello
Namespace: agud
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
Cluster Name:
Creation Timestamp: 2019-03-18T07:38:52Z
Generation: 0
Resource Version: 2329507
Self Link:
/apis/networking.istio.io/v1alpha3/namespaces/agud/virtualservices/hello
UID: e099b560-4950-11e9-82a1-fa163ee249a9
Spec:
Gateways:
hello-gateway
Hosts:
*
Http:
Match:
Uri:
Exact: /hello
Uri:
Exact: /secured
Route:
Destination:
Host: hello.agud.svc.cluster.local
Port:
Number: 8080
</code></pre>
<p>Policy</p>
<pre><code>kubectl describe policies
Name: jwt-hello
Namespace: agud
API Version: authentication.istio.io/v1alpha1
Kind: Policy
Metadata:
Cluster Name:
Creation Timestamp: 2019-03-18T07:45:33Z
Generation: 0
Resource Version: 2331381
Self Link:
/apis/authentication.istio.io/v1alpha1/namespaces/agud/policies/jwt-hello
UID: cf9ed2aa-4951-11e9-9f64-fa163e804eca
Spec:
Origins:
Jwt:
Audiences:
hello
Issuer: testing@secure.istio.io
Jwks Uri: https://raw.githubusercontent.com/istio/istio/release-1.0/security/tools/jwt/samples/jwks.json
Principal Binding: USE_ORIGIN
Targets:
Name: hello.agud.svc.cluster.local
</code></pre>
<p>RESULT</p>
<p>I am expecting to get a 401 error but I am getting a 200. What is wrong with my configuration and how do I fix this?</p>
<pre><code>curl $INGRESS_HOST/hello -s -o /dev/null -w "%{http_code}\n"
200
</code></pre>
| <p>You have:</p>
<pre><code>Port: <unset> 8080/TCP
</code></pre>
<p>For <em>Istio routing and security</em>, you must set the port name to <code>http</code> or <code>http-<something></code>.</p>
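<p>A sketch of the corrected Service (the fields besides the port <code>name</code> are copied from the question's setup):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: agud
spec:
  selector:
    app: hello
  ports:
  - name: http        # the name (or an http- prefix) is what Istio uses for protocol detection
    port: 8080
    targetPort: 8080
    protocol: TCP
</code></pre>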
<p>I tried with Istio 1.1. I got a <code>503</code> rather than a <code>401</code>.</p>
|
| <p>I have an image in my docker repository. I am trying to create a POD out of it, but K8S is giving the following error. </p>
<p><em>Failed to pull image "cloudanswer:latest": rpc error: code = Unknown desc = Get <a href="https://registry-1.docker.io/v2/" rel="nofollow noreferrer">https://registry-1.docker.io/v2/</a>: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)</em></p>
<p>It seems K8S is connecting to <a href="https://registry-1.docker.io/v2/" rel="nofollow noreferrer">https://registry-1.docker.io/v2/</a> instead of taking from local docker repository.</p>
<p>How to make K8S take image for local docker repository ?</p>
| <p>If you use a <strong>single-node</strong> cluster, make sure this docker image is available on that node.
You can check via</p>
<pre><code>docker image ls
</code></pre>
<p>Also set the <code>imagePullPolicy</code> to <code>Never</code>, otherwise Kubernetes will try to download the image.</p>
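<p>A minimal sketch of a Pod spec using only the node-local image (names taken from the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cloudanswer
spec:
  containers:
  - name: cloudanswer
    image: cloudanswer:latest
    imagePullPolicy: Never   # never contact a remote registry; fail if the image is absent on the node
</code></pre>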
<p>For a <strong>multi-node cluster</strong>, you can use the docker <a href="https://hub.docker.com/_/registry" rel="nofollow noreferrer">registry image</a> to run a local registry:</p>
<pre><code>docker run -d -p 5000:5000 --restart=always --name registry registry:2
</code></pre>
<p>Now tag your image properly:</p>
<pre><code>docker tag ubuntu <dns-name-of-machine>:5000/ubuntu
</code></pre>
<p>The <strong>DNS name</strong> of the machine running the registry container should be reachable by all nodes in the network.</p>
<p>Now push your image to local registry:</p>
<pre><code>docker push <dns-name-of-machine>:5000/ubuntu
</code></pre>
<p>You should be able to pull it back:</p>
<pre><code>docker pull <dns-name-of-machine>:5000/ubuntu
</code></pre>
<p>Now change your yaml file to use local registry.</p>
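<p>For example, a pod spec would then reference the image from the local registry like this (the registry host name is a placeholder, as above):</p>
<pre><code>spec:
  containers:
  - name: ubuntu
    image: <dns-name-of-machine>:5000/ubuntu
</code></pre>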
|
<p>I'm trying to use a directory config map as a mounted volume inside of my docker container running a spring boot application. I am passing some of the mounted paths to the things like the spring application.yaml, but it doesn't appear the mount is working as expected as it can't find the config. For example</p>
<p>Create the configmap like so</p>
<pre><code>kubectl create configmap example-config-dir \
--from-file=/example/config/
</code></pre>
<p>Kubernetes yaml</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: example
labels:
app: example
spec:
replicas: 1
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
containers:
- name: example
image: example:latest
ports:
- containerPort: 8443
volumeMounts:
- name: config-vol
mountPath: /config
volumes:
- name: config-vol
configMap:
name: example-config-dir
</code></pre>
<p>And the Dockerfile (there are other steps which copy the jar file in which I haven't detailed)</p>
<pre><code>VOLUME /tmp
RUN echo "java -Dspring.config.location=file:///config/ -jar myjarfile.jar" > ./start-spring-boot-app.sh"
CMD ["sh", "start-spring-boot-app.sh"]
</code></pre>
| <p>As explained in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-directories" rel="nofollow noreferrer">Create ConfigMaps from Directories</a> and <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">Create ConfigMaps from files</a>, when you create a ConfigMap using <code>--from-file</code>, <strong>the filename becomes a key stored in the data section of the ConfigMap</strong>. The file contents become the keyβs value.</p>
<p>To do what you want, a better way would be to create the yml like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
SPECIAL_LEVEL: very
SPECIAL_TYPE: charm
</code></pre>
<p>and then apply like this:</p>
<pre><code>kubectl create -f https://k8s.io/examples/configmap/configmap-multikeys.yaml
</code></pre>
<p>When the pod runs, the command <code>ls /config</code> produces the output below:</p>
<pre><code>SPECIAL_LEVEL
SPECIAL_TYPE
</code></pre>
<p>The way you did it, the volume should contain one file per original file, each named after that file and holding its contents.</p>
|
<p>We had configured a kubernetes cluster where we deploy various services using spring boot and we have one service that is Spring Cloud Config Server.</p>
<p>Our trouble is that when we start the cluster, all the services try to connect to the config server to download their configuration, and since the Config Server has not yet started, all the services fail, causing kubernetes to retry the initialization and consume so many resources that the config server itself cannot start.</p>
<p>We are wondering if there is a way to initialize all services in such a way that they do not overload the cluster, or so that they peacefully wait until the config server starts. As of now, all services start and we have to wait for about 20 minutes until the cluster works its way out.</p>
<p>Thanks in advance</p>
| <p>You can use <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a> to ping for the server until it is online. An example would be:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web
spec:
selector:
matchLabels:
app: web
replicas: 1
template:
metadata:
labels:
app: web
spec:
initContainers:
- name: wait-config-server
image: busybox
command: ["sh", "-c", "for i in $(seq 1 300); do nc -zvw1 config-server 8080 && exit 0 || sleep 3; done; exit 1"]
containers:
- name: web
        image: my-image
ports:
- containerPort: 80
...
</code></pre>
<p>In this example I am using the <a href="https://www.commandlinux.com/man-page/man1/nc.1.html" rel="nofollow noreferrer">nc</a> command for pinging the server, but you can also use wget, curl, or whatever suits you best.</p>
|
<p>I want to reserve static IP address for my k8s exposed service.
If I am not mistaken, when I expose the k8s service it gets a random public IP address. I redeploy my app often and the IP changes.
But I want to get permanent public IP address.
My task is to get my application via permanent IP address (or DNS-name).</p>
| <p>This is cloud provider specific, but from the tag on your question it appears you are using Google Cloud Platform's Kubernetes Engine (GKE). My answer is specific for this situation.</p>
<p>From the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_5_optional_configuring_a_static_ip_address" rel="nofollow noreferrer">Setting up HTTP Load Balancing with Ingress</a> tutorial:</p>
<blockquote>
<pre><code>gcloud compute addresses create web-static-ip --global
</code></pre>
</blockquote>
<p>And in your Ingress manifest:</p>
<blockquote>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: basic-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
backend:
serviceName: web
servicePort: 8080
</code></pre>
</blockquote>
<p>You can do something similar if you are using a <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">Service instead of Ingress</a>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: helloweb
labels:
app: hello
spec:
type: LoadBalancer
  loadBalancerIP: "203.0.113.10" # the reserved address's literal IP, not its name
selector:
app: hello
tier: web
ports:
- port: 80
targetPort: 8080
</code></pre>
|
<p>I was playing with k8s deployment - rolling update and it works really well.
I am curious to know how to do deployment when we have a service dependency! Not sure If i am explaining my question correctly. It is just a very high level scenario! </p>
<p>Lets consider this example. I have deployed 2 apps with 10 replicas each, exposed as services. </p>
<pre><code>Service-A
Deployment-A
Pod-A - v1 - (10)
Service-B
Deployment-B
Pod-B - v1 - (10)
</code></pre>
<p>Service A depends on B. Now as part of v2 release both apps need to use v2. Service B api expects few additional parameters / slightly changed. When we upgrade both apps with newer version v2, if service-B becomes up and running before Service-A, some of the requests would be failing as Service-A is still in v1 (as the upgrade is in progress). How can we do the deployment without any failure here? if you are using k8s already, what is the best practice you would follow.</p>
| <p>As shown in "<a href="https://medium.com/platformer-blog/enable-rolling-updates-in-kubernetes-with-zero-downtime-31d7ec388c81" rel="nofollow noreferrer">Enable Rolling updates in Kubernetes with Zero downtime</a>" from <a href="https://twitter.com/Nilesh_93" rel="nofollow noreferrer">Nilesh Jayanandana</a>, you could check if implementing a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">readiness probe</a> would help service B to wait for service A being in V2.</p>
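<p>A minimal sketch of such a probe on Deployment-B's pod template (the <code>/ready</code> path and port are assumptions; the endpoint would report ready only once the v2 dependency is satisfied):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /ready     # hypothetical endpoint returning 200 only when the dependency is on v2
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
</code></pre>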
<p>Another approach would be through a Helm package, as in "<a href="https://docs.bitnami.com/kubernetes/how-to/deploy-application-kubernetes-helm/" rel="nofollow noreferrer">Deploy, Scale and Upgrade an Application on Kubernetes with Helm</a>", which can model the dependency, and then, through <code>helm upgrade</code>, perform the rolling upgrade.</p>
|
| <p>Is it possible to globally (or at least per namespace) configure kubernetes to always use an image pull secret when connecting to a private repo?
There are two use cases: </p>
<ol>
<li>when a user specifies a container in our private registry in a deployment</li>
<li>when a user points a Helm chart at our private repo (and so we have no control over the image pull secret tag).</li>
</ol>
<p>I know it is possible to do this on a <a href="https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry" rel="noreferrer">service account basis</a> but without writing a controller to add this to every new service account created it would get a bit of a mess.</p>
<p>Is there a way to set this globally, so that if kube tries to pull from registry X it uses secret Y?</p>
<p>Thanks</p>
| <p>As far as I know, usually the <strong>default</strong> serviceAccount is responsible for pulling the images.
To easily add imagePullSecrets to a serviceAccount you can use the <strong>patch command</strong>:</p>
<pre><code>kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
</code></pre>
<p>It's possible to use <code>kubectl patch</code> in a script that inserts imagePullSecrets on serviceAccounts across all namespaces.</p>
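<p>A sketch of such a script (it assumes the pull secret <code>mySecret</code> already exists in every namespace, since secrets are namespace-scoped):</p>
<pre><code>for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl patch serviceaccount default -n "$ns" \
    -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
done
</code></pre>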
<p>If it's too complicated to manage multiple namespaces, you can have a look at <a href="https://github.com/mittwald/kubernetes-replicator" rel="noreferrer">kubernetes-replicator</a>, which syncs resources between namespaces.</p>
<p><strong>Solution 2:</strong><br>
<a href="https://kubernetes.io/docs/concepts/containers/images/#configuring-nodes-to-authenticate-to-a-private-registry" rel="noreferrer">This section of the doc</a> explains how you can set the private registry on a node basis:</p>
<blockquote>
<p>Here are the recommended steps to configuring your nodes to use a
private registry. In this example, run these on your desktop/laptop:</p>
<ol>
<li>Run <code>docker login [server]</code> for each set of credentials you want to use. This updates <code>$HOME/.docker/config.json</code>.</li>
<li>View <code>$HOME/.docker/config.json</code> in an editor to ensure it contains just the credentials you want to use.</li>
<li><p>Get a list of your nodes, for example:</p>
<ul>
<li><p>If you want the names:<br>
nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')</p></li>
<li><p>If you want to get the IPs:<br>
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')</p></li>
</ul></li>
<li><p>Copy your local .docker/config.json to one of the search paths list above. for example: </p>
<p>for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done</p></li>
</ol>
</blockquote>
<p><strong>Solution 3:</strong><br>
A (very dirty!) way I discovered to not need to set up an imagePullSecret on a deployment / serviceAccount basis is to:</p>
<ol>
<li>Set ImagePullPolicy: IfNotPresent</li>
<li>Pulling the image in each node<br>
2.1. manually using <code>docker pull myrepo/image:tag</code>.<br>
2.2. using a script or a tool like <a href="https://github.com/glowdigitalmedia/docker-puller" rel="noreferrer">docker-puller</a> to automate that process.</li>
</ol>
<p>Well, I think I don't need to explain how ugly that is.</p>
<p><strong>PS</strong>: If it helps, I found <a href="https://github.com/kubernetes/kops/issues/2505" rel="noreferrer">an issue</a> on kubernetes/kops about the feature of creating a global configuration for private registry.</p>
|
<p>We're making use of ServiceAccounts for RBAC, and so have multiple SAs in play to allow us us to tune accesses via RoleBindings appropriately.</p>
<p>We're also using a private registry, and thus have imagePullSecrets to use for pulling images from the private registry. I'm trying to come up with a solution by which all SAs created within a namespace would by default get the list of imagePullSecrets applied to the default SA added to them, so that when we deploy the pods making use of the service (typically right after the SA), the serviceAccount is already configured to use the imagePullSecrets to retrieve the images. </p>
<p>Has anyone devised an elegant way to handle this? I did check to see whether pods could have more than one serviceAccount applied - N to hold imageSecrets, and 1 to map to RBAC. And/or, can someone suggest an alternate way to look at the problem?</p>
<p>[UPDATE: Clarifying - the challenge is to share the set of imagePullSecrets across multiple service accounts, preferably without explicitly needing to add them to each ServiceAccount definition. The private registry should be considered akin to dockerhub: the user accessing the registry is generally intended to be able to pull, with the user info then used to track who's pulling images and occasionally to keep users from pulling images they shouldn't have access to for 'this thing just isn't intended for broader consumption' reasons.]</p>
| <p>As I answered <a href="https://stackoverflow.com/a/55230340/8971507">in another thread</a>: </p>
<blockquote>
<p>To easily add imagePullSecrets to a serviceAccount you can use the
patch command:</p>
<pre><code>kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
</code></pre>
</blockquote>
|
<p>I started EFK stack on k8s. </p>
<p>my ekf-kibana service manifest is as below</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-03-19T07:30:15Z"
labels:
app: kibana
chart: kibana-0.4.3
heritage: Tiller
release: efk
name: efk-kibana
namespace: logging
resourceVersion: "10156"
selfLink: /api/v1/namespaces/logging/services/efk-kibana
uid: d70a3266-4a18-11e9-b340-02edaf44024a
spec:
clusterIP: 100.69.129.248
ports:
- port: 443
protocol: TCP
targetPort: 5601
selector:
app: kibana
release: efk
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>when i access kibana using kube proxy URL as below </p>
<p><a href="https://i.stack.imgur.com/b2erg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b2erg.png" alt="enter image description here"></a></p>
<p>when kibana loads some css and js in the backend, it redirects to the api URL instead of the proxy and kibana base URL, as per the screenshot below. </p>
<p><a href="https://i.stack.imgur.com/d45PL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d45PL.png" alt="enter image description here"></a></p>
<p>please help!!!</p>
<p>Actually there is a path-related issue; below is my detailed explanation.</p>
<pre><code>NAME=cluster_name
</code></pre>
<p>kibana URL using proxy is</p>
<pre><code>https://api.$NAME/api/v1/namespaces/logging/services/efk-kibana:443/proxy/app/kibana
</code></pre>
<p>it gives an error: kibana is not loading properly and tries to load some js and css from</p>
<pre><code>https://api.$NAME/bundles/vendors.style.css
</code></pre>
<p>I set the kibana server_basepath in the kibana manifest as below, to open it via the proxy:</p>
<pre><code>/api/v1/namespaces/logging/services/efk-kibana
</code></pre>
<p>actually it should load from "API_URL+server_basepath/proxy/...." as below</p>
<pre><code>https://api.$NAME/api/v1/namespaces/logging/services/efk-kibana:443/proxy/bundles/vendors.style.css
</code></pre>
<p>So there is some path-related issue in the kibana deployment or in the docker image.</p>
| <p>You can try an alternative way to get into the Kibana dashboard, with <code>port-forward</code> instead of the proxy command, like this:</p>
<pre><code>kubectl port-forward service/efk-kibana 5000:443 -n logging
</code></pre>
<p>Now open <a href="http://localhost:5000" rel="nofollow noreferrer">http://localhost:5000</a> in a web browser.</p>
|
<p>I'm going over RBAC in Kubernetes. It appears to me that</p>
<ul>
<li>a ServiceAccount can be bound to a Role within a namespace
(or)</li>
<li>a ServiceAccount can be bound to a ClusterRole and have cluster-wide access (all namespaces?)</li>
</ul>
<p>Is it possible for a single Service Account (or User) to not have cluster-wide access but only have read-only access in only a subset of namespaces? If so, can someone elaborate on how this can be achieved. Thanks!</p>
| <p>You need to create a RoleBinding in each namespace the ServiceAccount should have access to.</p>
<p>Here is an example that gives the default ServiceAccount permission to read pods in the <code>development</code> namespace.</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-secrets
namespace: development # This only grants permissions within the "development" namespace.
subjects:
- kind: ServiceAccount
name: default
namespace: kube-system
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
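<p>This assumes a matching <code>pod-reader</code> Role already exists in the <code>development</code> namespace, for example:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
</code></pre>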
|
<p>My Zalenium installation on an OpenShift environment is far from stable. The web ui (admin view with vnc, dashboard, selenium console) works about 50% of the time and connecting with a RemoteWebDriver doesn't work at all.</p>
<p><strong>Error:</strong> <br>
504 Gateway Time-out
The server didn't respond in time.</p>
<p><strong>WebDriver error:</strong><br></p>
<pre><code>org.openqa.selenium.WebDriverException: Unable to parse remote response: <html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:115)
</code></pre>
<p><strong>oc version:</strong> <br>
oc v3.9.0+191fece <br>
kubernetes v1.9.1+a0ce1bc657</p>
<p><strong>Zalenium template:</strong> <br></p>
<pre><code>apiVersion: v1
kind: Template
metadata:
name: zalenium
annotations:
"openshift.io/display-name": "Zalenium"
"description": "Disposable Selenium Grid for use in OpenShift"
message: |-
A Zalenium grid has been created in your project. Continue to overview to verify that it exists and start the deployment.
parameters:
- name: PROJECTNAME
description: The namespace / project name of this project
displayName: Namespace
required: true
- name: HOSTNAME
description: hostname used for route creation
displayName: route hostname
required: true
- name: "VOLUME_CAPACITY"
displayName: "Volume capacity for the disk that contains the test results."
description: "The volume is used to store all the test results, including logs and video recordings of the tests."
value: "10Gi"
required: true
objects:
- apiVersion: v1
kind: DeploymentConfig
metadata:
generation: 1
labels:
app: zalenium
role: hub
name: zalenium
spec:
replicas: 1
selector:
app: zalenium
role: hub
strategy:
activeDeadlineSeconds: 21600
resources: {}
type: Rolling
template:
metadata:
labels:
app: zalenium
role: hub
spec:
containers:
- args:
- start
- --seleniumImageName
- "elgalu/selenium:latest"
- --sendAnonymousUsageInfo
- "false"
image: dosel/zalenium:latest
imagePullPolicy: Always
name: zalenium
ports:
- containerPort: 4444
protocol: TCP
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/seluser/videos
name: zalenium-volume
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
serviceAccount: deployer
serviceAccountName: deployer
volumes:
- name: zalenium-volume
persistentVolumeClaim:
claimName: zalenium-pvc
test: false
triggers:
- type: ConfigChange
- apiVersion: v1
kind: Route
metadata:
labels:
app: zalenium
annotations:
openshift.io/host.generated: 'true'
haproxy.router.openshift.io/timeout: "60"
name: zalenium
spec:
host: zalenium-4444-${PROJECTNAME}.${HOSTNAME}
to:
kind: Service
name: zalenium
port:
targetPort: selenium-4444
- apiVersion: v1
kind: Route
metadata:
labels:
app: zalenium
annotations:
openshift.io/host.generated: 'true'
haproxy.router.openshift.io/timeout: "60"
name: zalenium-4445
spec:
host: zalenium-4445-${PROJECTNAME}.${HOSTNAME}
to:
kind: Service
name: zalenium
port:
targetPort: selenium-4445
- apiVersion: v1
kind: Service
metadata:
labels:
app: zalenium
name: zalenium
spec:
ports:
- name: selenium-4444
port: 4444
protocol: TCP
targetPort: 4444
- name: selenium-4445
port: 4445
protocol: TCP
targetPort: 4445
selector:
app: zalenium
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: zalenium
name: zalenium-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: ${VOLUME_CAPACITY}
</code></pre>
<p><strong>Errors in main pod:</strong><br>
I get about 2-3 errors in 30 minutes.<br></p>
<pre><code>[OkHttp https://172.17.0.1/ ...] ERROR i.f.k.c.d.i.ExecWebSocketListener - Exec Failure: HTTP:403. Message:pods "zalenium-40000-wvpjb" is forbidden: User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in the namespace "PROJECT": User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in project "PROJECT"
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
</code></pre>
<p><br></p>
<pre><code>[OkHttp https://172.17.0.1/ ...] ERROR d.z.e.z.c.k.KubernetesContainerClient - zalenium-40000-wvpjb Failed to execute command [bash, -c, notify 'Zalenium', 'TEST COMPLETED', --icon=/home/seluser/images/completed.png]
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
</code></pre>
<p><strong>With own service account:</strong>
yml template of the sa:</p>
<pre><code>- apiVersion: v1
kind: Role
metadata:
name: zalenium-role
labels:
app: zalenium
rules:
- apiGroups:
- ""
attributeRestrictions: null
resources:
- pods
verbs:
- create
- delete
- deletecollection
- get
- list
- watch
- apiGroups:
- ""
attributeRestrictions: null
resources:
- pods/exec
verbs:
- create
- delete
- list
- get
- apiGroups:
- ""
attributeRestrictions: null
resources:
- services
verbs:
- create
- delete
- get
- list
- apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: zalenium
name: zalenium-sa
- apiVersion: v1
kind: RoleBinding
metadata:
labels:
app: zalenium
name: zalenium-rolebinding
roleRef:
kind: Role
name: zalenium-role
namespace: ${PROJECTNAME}
subjects:
- kind: ServiceAccount
name: zalenium-sa
namespace: ${PROJECTNAME}
userNames:
- zalenium-sa
</code></pre>
<p>Result:</p>
<pre><code>--WARN 10:22:28:182931026 We don't have sudo
Kubernetes service account found.
Copying files for Dashboard...
Starting Nginx reverse proxy...
Starting Selenium Hub...
.....10:22:29.626 [main] INFO o.o.grid.selenium.GridLauncherV3 - Selenium server version: 3.141.59, revision: unknown
.10:22:29.771 [main] INFO o.o.grid.selenium.GridLauncherV3 - Launching Selenium Grid hub on port 4445
..10:22:30.292 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - Initialising Kubernetes support
..10:22:30.700 [main] WARN d.z.e.z.c.k.KubernetesContainerClient - Error initialising Kubernetes support.
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.30.0.1/api/v1/namespaces/PROJECT/pods/zalenium-1-j6s4q . Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "zalenium-1-j6s4q" is forbidden: User "system:serviceaccount:PROJECT:zalenium-sa" cannot get pods in the namespace "PROJECT": User "system:serviceaccount:PROJECT:zalenium-sa" cannot get pods in project "PROJECT".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:476)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:413)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:313)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:296)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:794)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:210)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.<init>(KubernetesContainerClient.java:91)
at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:43)
at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22)
at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.<clinit>(DockeredSeleniumStarter.java:63)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:97)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:83)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.openqa.grid.web.Hub.<init>(Hub.java:94)
at org.openqa.grid.selenium.GridLauncherV3.lambda$buildLaunchers$5(GridLauncherV3.java:264)
at org.openqa.grid.selenium.GridLauncherV3.lambda$launch$0(GridLauncherV3.java:86)
at java.util.Optional.map(Optional.java:215)
at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:86)
at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:70)
10:22:30.701 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - About to clean up any left over docker-selenium pods created by Zalenium
Exception in thread "main" org.openqa.grid.common.exception.GridConfigurationException: Error creating class with de.zalando.ep.zalenium.registry.ZaleniumRegistry : null
at org.openqa.grid.web.Hub.<init>(Hub.java:99)
at org.openqa.grid.selenium.GridLauncherV3.lambda$buildLaunchers$5(GridLauncherV3.java:264)
at org.openqa.grid.selenium.GridLauncherV3.lambda$launch$0(GridLauncherV3.java:86)
at java.util.Optional.map(Optional.java:215)
at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:86)
at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:70)
Caused by: java.lang.ExceptionInInitializerError
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:97)
at de.zalando.ep.zalenium.registry.ZaleniumRegistry.<init>(ZaleniumRegistry.java:83)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.openqa.grid.web.Hub.<init>(Hub.java:94)
... 5 more
Caused by: java.lang.NullPointerException
at java.util.TreeMap.putAll(TreeMap.java:313)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:426)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:63)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.deleteSeleniumPods(KubernetesContainerClient.java:402)
at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.initialiseContainerEnvironment(KubernetesContainerClient.java:348)
at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:46)
at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22)
at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.<clinit>(DockeredSeleniumStarter.java:63)
... 13 more
</code></pre>
| <pre><code>[OkHttp https://172.17.0.1/ ...] ERROR i.f.k.c.d.i.ExecWebSocketListener - Exec Failure: HTTP:403. Message:pods "zalenium-40000-wvpjb" is forbidden: User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in the namespace "PROJECT": User "system:serviceaccount:PROJECT:deployer" cannot get pods/exec in project "PROJECT"
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
</code></pre>
<p>Usually this means that the service account does not have enough rights; perhaps start by checking that.</p>
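<p>One way to verify (assuming a kubectl version that supports <code>--subresource</code>) is to ask the API server whether the service account may exec into pods:</p>
<pre><code>kubectl auth can-i create pods --subresource=exec -n PROJECT \
  --as=system:serviceaccount:PROJECT:zalenium-sa
</code></pre>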
|
<p>I have a Kubernetes pod using a readiness probe, and tied with the service this ensures that I don't receive traffic until I'm ready.</p>
<p>I'm using Spring Actuator as the health endpoint for this readiness probe.</p>
<p>But I'd like to trigger some actions whenever the pod is deemed ready by the kubelet.</p>
<p>What would be the simplest way to do this?</p>
| <p>Perhaps <strong><em>implement your own HealthCheck</em></strong>. When you find that everything is ok for the first time, run your code.</p>
<p>A static variable, <code>firstHealthCheckOK</code>, ensures your startup logic runs only once.</p>
<p>I am assuming you are running Spring-boot 2.x and are calling a readiness probe on <a href="http://localhost:8080/actuator/health" rel="nofollow noreferrer">http://localhost:8080/actuator/health</a></p>
<p>The health() method below is called when Kubernetes calls <a href="http://localhost:8080/actuator/health" rel="nofollow noreferrer">http://localhost:8080/actuator/health</a></p>
<pre><code>import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class HealthCheck implements HealthIndicator {

    static boolean firstHealthCheckOK = false;

    @Override
    public Health health() {
        int errorCode = check(); // perform health check
        if (errorCode != 0) {
            return Health.down()
                    .withDetail("Error Code", errorCode).build();
        }
        if (firstHealthCheckOK == false) {
            firstHealthCheckOK = true;
            doStartUpLogic();
        }
        return Health.up().build();
    }

    private int check() {
        // some logic
        return 0;
    }

    private void doStartUpLogic() {
        // some startup logic
    }
}
</code></pre>
|
<p>I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-project
labels:
name: my-project
spec:
containers:
- image: my-username/my-project
name: my-project
ports:
- containerPort: 80
name: nginx-http
- containerPort: 443
name: nginx-ssl-https
imagePullPolicy: Always
volumeMounts:
- mountPath: /home/projects/my-project/media/upload
name: pd-data
- mountPath: /home/projects/my-project/backups
name: pd2-data
imagePullSecrets:
- name: vpregistrykey
volumes:
- name: pd-data
persistentVolumeClaim:
claimName: pd-claim
- name: pd2-data
persistentVolumeClaim:
claimName: pd2-claim
</code></pre>
<p>I am using Persistent Volumes and Persisten Volume Claims, as such:
<code>PV</code></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pd-disk
labels:
name: pd-disk
spec:
capacity:
storage: 250Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: "pd-disk"
fsType: "ext4"
</code></pre>
<p><code>PVC</code></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pd-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 250Gi
</code></pre>
<p>I have initially created my disks using the command:
<code>$ gcloud compute disks create --size 250GB pd-disk</code>
Same goes for the second disk and the second PV and PVC. Everything seems to work ok when I create the pod, no errors are thrown. Now comes the weird part: one of the paths is being mounted correctly (and is therefore persistent) and the other one is being erased every time I restart the pod...</p>
<p>I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:</p>
<pre><code>$ kubectl describe pod my-project
Name: my-project
...
Volumes:
pd-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pd-claim
ReadOnly: false
pd2-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pd2-claim
ReadOnly: false
</code></pre>
<p>Any help is appreciated. Thanks.</p>
| <p>The Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">documentation</a> states:</p>
<blockquote>
<p>Volumes can not mount onto other volumes or have hard links to other
volumes</p>
</blockquote>
<p>I had the same issue and in my case the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/. </p>
<p>They mounted without issues after fixing that.</p>
|
<p>I understand that Kubernetes CronJobs create pods that run on the schedule specified inside the CronJob. However, the retention policy seems arbitrary and I don't see a way where I can retain failed/successful pods for a certain period of time.</p>
| <p>I am not sure about what you are exactly asking here.</p>
<p>CronJob does not create pods. It creates Jobs (which also manages) and those jobs are creating pods.
As per Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs Documentation</a> If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy. In short, pods and jobs will not be deleted utill you remove CronJob. You will be able to check logs from Pods/Jobs/CronJob. Just use kubctl describe </p>
<p>By default, a CronJob keeps the history of the last 3 successful Jobs and only 1 failed Job. You can change these limits in the CronJob spec with the following parameters:</p>
<pre><code>spec:
successfulJobsHistoryLimit: 10
failedJobsHistoryLimit: 0
</code></pre>
<p><code>0</code> means that the CronJob will not keep any history of failed Jobs<br/>
<code>10</code> means that the CronJob will keep the history of the last 10 successful Jobs</p>
<p>You will not be able to retain a Pod from a failed Job because, when the Job fails, its Pod is restarted until the Job succeeds or reaches the <code>backoffLimit</code> given in the spec.</p>
<p>Other option you have is to suspend CronJob.</p>
<pre><code>kubectl patch cronjob <name_of_cronjob> -p '{"spec":{"suspend":true}}'
</code></pre>
<p>If the value of <code>suspend</code> is true, the CronJob will not create any new Jobs or Pods. You will still have access to completed Pods and Jobs.</p>
<p>If none of the above helped, could you please give more information about what exactly you expect? <br/>
The CronJob spec would be helpful.</p>
|
<p>I have a Kubernetes cluster that is connected over VPN to an on-premise datacentre. This cluster needs to "expose" Services to other programs running in the datacenter, but not to the Internet.</p>
<p>Currently I've been creating Services with type "NodePort" and then manually creating an Internal (Private) Load balancer to map an endpoint to the Cluster Node/Port combination.</p>
<p>However, this approach has some drawbacks:</p>
<ul>
<li>Having to manually add/remove Nodes from the load balancer (or have some sort of process which "scans" the list of all nodes and makes sure they're attached to the ELB)</li>
<li>Having to make sure to delete the ELB when deleting a Service (the "orphan ELB" problem)</li>
</ul>
<p>Does anyone know of any way to configure Kubernetes to bring up "Internal" load balancers in AWS instead of Externally facing ones and manage them in the same way that it does the External ones?</p>
| <p>The latest format is:</p>
<pre><code>annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"
</code></pre>
<p>Note that annotation values must be strings, so <code>true</code> needs to be quoted.</p>
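<p>For context, a complete <code>Service</code> manifest using this annotation might look like the following sketch (the service name, ports, and selector are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-app
</code></pre>
<p>Kubernetes then provisions an internal ELB for the Service, keeps its node registration up to date as nodes come and go, and deletes the ELB when the Service is deleted, which addresses both drawbacks from the question.</p>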
|
<p>What is the Maximum number of request can a Kubernetes Service handle in parellel. I will be providing auto scaling for the pods to scale based on the input request, but is there any limit for the maximum number of request which a Service can handle at a point of time.</p>
| <p>Services in Kubernetes are essentially routing rules (iptables/IPVS entries programmed by kube-proxy) in the kernel of each node. When you call a Service IP, the local kernel forwards the request to one of the backing Pods.</p>
<p>Therefore the Service itself imposes no meaningful request limit; the practical limit is the number of requests the backing Pods on your cluster's nodes can handle.</p>
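<p>Since the question mentions auto scaling based on incoming load: the usual way to scale the Pods behind a Service is a HorizontalPodAutoscaler. A minimal sketch (the deployment name and thresholds are illustrative):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
</code></pre>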
|
<p>In Kube, I have one pod with two containers:</p>
<ul>
<li>container 1: nginx reverse proxy</li>
<li>container 2: myapp</li>
</ul>
<p>For testing purposes, I also have a docker-compose file with two services:</p>
<ul>
<li>service 1: nginx reverse proxy</li>
<li>service 2: myapp</li>
</ul>
<p>The issue is that in Docker, the nginx upstream host is the container name, while in Kube it is localhost.
Here is a code snippet:</p>
<pre><code># for docker, nginx.conf
...
upstream web {
    server myapp:8080;
}
...
proxy_pass http://web;

# for Kube, nginx.conf
...
upstream web {
    server localhost:8080;
}
...
proxy_pass http://web;
</code></pre>
<p>I would like to have one nginx.conf that supports both Kube and docker-compose.
One way I can think of is to pass a runtime environment variable, so I can <code>sed</code> the upstream host in the entrypoint.sh.</p>
<p>Are there other ways to accomplish this?</p>
<p>Thank you</p>
| <p>I came across this question because we have the same issue.</p>
<p>I noticed the other answers suggested splitting nginx and the app-server into 2 different Services / Pods. Whilst that is certainly a solution, I rather like a self-contained Pod with both nginx and the app-server together. It works well for us, especially with php-fpm which can use a unix socket to communicate when in the same Pod which reduces internal http networking significantly.</p>
<p>Here is one idea:</p>
<p>Create a base nginx configuration file, for example, <code>proxy.conf</code> and setup docker to add it to the <code>conf.d</code> directory while building the image. The command is:</p>
<pre><code>ADD proxy.conf /etc/nginx/conf.d/proxy.conf
</code></pre>
<p>In the <code>proxy.conf</code>, omit the <code>upstream</code> configuration, leaving that for later. Create another file, a <code>run.sh</code> file and add it to the image using the <code>Dockerfile</code>. The file could be as follows:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/sh
(echo "upstream theservice { server $UPSTREAM_NAME:$UPSTREAM_PORT; }" && cat /etc/nginx/conf.d/proxy.conf) > proxy.conf.new
mv proxy.conf.new /etc/nginx/conf.d/proxy.conf
nginx -g 'daemon off;'
</code></pre>
<p>Finally, run the nginx from the <code>run.sh</code> script. The <code>Dockerfile</code> command:</p>
<pre><code>CMD /bin/sh run.sh
</code></pre>
<p>The trick is that the configuration file is regenerated every time the container starts, so the upstream always reflects the current environment. Set the ENV vars appropriately depending on whether you are running under docker-compose or Kubernetes.</p>
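<p>For example, the variables used by the script above could be set like this (values are illustrative):</p>
<pre><code># Kubernetes: container spec in the Pod/Deployment
env:
- name: UPSTREAM_NAME
  value: "localhost"     # app-server container in the same Pod
- name: UPSTREAM_PORT
  value: "8080"

# docker-compose: service definition
environment:
  UPSTREAM_NAME: myapp   # compose service name of the app server
  UPSTREAM_PORT: "8080"
</code></pre>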
<hr>
<p>Let me also share a less <em>proper</em> solution which is more <em>hacky</em> but also simpler...</p>
<p>In Kubernetes, we change the docker image CMD so that it modifies the nginx config before the container starts. We use <code>sed</code> to update the upstream name to <code>localhost</code> to make it compatible with Kubernetes Pod networking. In our case it looks like this:</p>
<pre><code> - name: nginx
image: our_custom_nginx:1.14-alpine
command: ["/bin/ash"]
args: ["-c", "sed -i 's/web/127.0.0.1/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
</code></pre>
<p>While this workaround works, it breaks the immutable infrastructure principle, so may not be a good candidate for everyone.</p>
|
<p>New to K8s and facing an implementation dilemma. I need to deploy a K8s cluster for multiple NGINX-PHP websites, each with its own domain. The number of websites hosted can increase/decrease regularly, with hundreds/thousands of them deployed at any given time. I have excluded the PHP part to keep the question simple.</p>
<p><strong>Scenario 1 - vhost and SSL termination at ingress</strong></p>
<ul>
<li>Create a new Namespace containing a new Service and a Deployment (NGINX pods) for every new website</li>
<li>Setup the virtual host on NGINX Ingress and route it to the service responsible for the website based on the hostname</li>
</ul>
<p><em>Pros:</em></p>
<ul>
<li>Pod/Container level isolation for each website</li>
</ul>
<p><em>Possibly a con?</em></p>
<ul>
<li>Each website requires a new service and a deployment. This can result in hundreds or thousands of services/deployments</li>
</ul>
<p><strong>Scenario 2 - vhost and SSL termination at pod level</strong></p>
<ul>
<li>Create a single NGINX Service and a Deployment (with pods running a single NGINX container)</li>
<li>Each pod has access to shared configmap and tls-secret which contains NGINX config and tls certs for every virtual host</li>
<li>Ingress routes all traffic to the NGINX service</li>
<li>The pod running NGINX serves the website content from the document root for the desired virtual host</li>
</ul>
<p><em>Pros:</em></p>
<ul>
<li>Single service and deployment</li>
<li>State less pods</li>
</ul>
<p><em>Cons:</em></p>
<ul>
<li>NGINX instance in each pod shall require a reload to load a new vhost config/ssl certs on addition of a new website, thus resulting in a management nightmare</li>
</ul>
<p>Which one of above scenario is better suited? Are there any other possible scenarios for given problem?</p>
| <p>If you'd like to just avoid having hundreds of services you can set up a single <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">nginx ingress controller</a> (with a single ingress class in your cluster) and then create multiple <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingresses</a> using the single ingress controller. (It can also be a single Ingress if you'd like, with hostname-based routing)</p>
<p>This controller basically runs in a deployment with multiple pods and can scale up and down (for example using an <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">HPA</a> and/or the <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noreferrer">cluster autoscaler</a> and/or the <a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler" rel="noreferrer">VPA</a>).</p>
<p>The nginx controller takes care of reloading nginx with the new configs every time you change them (with no downtime). You can also have SSL termination per Kubernetes Ingress, so if you have multiple domains, those can be handled as well.</p>
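<p>As a sketch of the hostname-based routing mentioned above (hostnames, service names, and TLS secret names are illustrative), a single Ingress served by the nginx ingress class could look like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: websites
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - site-a.example.com
    secretName: site-a-tls
  rules:
  - host: site-a.example.com
    http:
      paths:
      - backend:
          serviceName: site-a-svc
          servicePort: 80
  - host: site-b.example.com
    http:
      paths:
      - backend:
          serviceName: site-b-svc
          servicePort: 80
</code></pre>
<p>New websites then only need a new rule (and TLS entry); the controller reloads automatically.</p>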
<p>Hope it helps!</p>
|
<p>I was wondering if there is any advantage of using secrets over standard file mounting in Kubernetes. I have to provide credentials, saved on the host machine to one of pods and just cannot understand what are the pros of using them. </p>
| <p>The main idea of using secrets is to reduce exposure and make it more secure. It was specifically designed for this. As per <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">documentation</a>:</p>
<ul>
<li>The data in the secrets is encoded (base64)</li>
<li>Secrets can only be referenced from the same namespace</li>
<li>A secret is only sent to a node if a pod on that node requires it. It is stored in <code>tmpfs</code> and <strong>not</strong> written to disk. Once the pod that is using the secret is gone, the kubelet will delete its local copy of the secret on that node.</li>
<li>You can set access rights (e.g <code>644</code>)</li>
<li>If there are several secrets for several pods on the same node, one pod will not have access to the secrets of another pod, only the one that it asks for</li>
</ul>
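<p>As a quick sketch of how this replaces mounting credentials from the host machine (the secret name, file name, and paths are illustrative):</p>
<pre><code># create the secret once from the credentials file
kubectl create secret generic db-creds --from-file=./credentials

# then mount it into the pod
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: my-app
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-creds
      defaultMode: 0400   # restrict access rights
</code></pre>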
|
<p>I am trying to modify the <code>yaml</code> file of a <code>persistent volume</code> in <code>OpenShift</code> through the API in <code>Go(lang)</code>, I have the following</p>
<pre><code> pv, err := clientset.CoreV1().PersistentVolumes().List(metav1.ListOptions{})
for _, persV := range pv.Items {
// Check status persistent volume
pvStatus, err := clientset.CoreV1().PersistentVolumes().Get(persV.Name, metav1.GetOptions{})
if err != nil {
panic(err.Error())
}
patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"persistentVolumeReclaimPolicy":"Retain"}]}}}}`)
a := fmt.Sprintf("%s", patch)
fmt.Println(a)
_, err = clientset.CoreV1().PersistentVolumes().Patch(persV.Name, types.StrategicMergePatchType, patch)
}
</code></pre>
<p>my <code>persistent volume</code> <code>yaml</code></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
...
...
...
persistentVolumeReclaimPolicy: Retain
status:
phase: Released
</code></pre>
<p>How could I get the <code>yaml</code> file and modify it from my actual <code>pvStatus</code>?
I would like to change <code>persistentVolumeReclaimPolicy: Retain</code> to <code>persistentVolumeReclaimPolicy: Delete</code></p>
| <p>Basically my error was in constructing the <code>patch</code> <code>[]byte</code> value; the script should look like:</p>
<pre><code>pv, err := clientset.CoreV1().PersistentVolumes().List(metav1.ListOptions{})
if err != nil {
    panic(err.Error())
}
for _, persV := range pv.Items {
    // a strategic-merge patch containing only the field to change
    patch := []byte(`{"spec": {"persistentVolumeReclaimPolicy": "Delete"}}`)
    _, err = clientset.CoreV1().PersistentVolumes().Patch(persV.Name, types.StrategicMergePatchType, patch)
    if err != nil {
        panic(err.Error())
    }
}
</code></pre>
|
<p><strong>Background:</strong></p>
<p>There was a similar question: <a href="https://stackoverflow.com/questions/54140994/istio-origin-authentication-using-jwt-does-not-work">Here</a> but it didn't offer a solution to my issue.</p>
<p>I have deployed an application which is working as expected to my Istio Cluster. I wanted to enable JWT authentication, so adapting the instructions <a href="https://istio.io/docs/tasks/security/authn-policy/#end-user-authentication" rel="nofollow noreferrer">Here</a> to my use-case. </p>
<p><strong>ingressgateway:</strong></p>
<p>I first applied the following policy to the istio-ingressgateway. This worked and any traffic sent without a JWT token was blocked.</p>
<pre><code>kubectl apply -n istio-system -f mypolicy.yaml
</code></pre>
<pre><code>apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: core-api-policy
namespace: istio-system
spec:
targets:
- name: istio-ingressgateway
ports:
- number: 80
origins:
- jwt:
issuer: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL"
jwksUri: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL/.well-known/jwks.json"
principalBinding: USE_ORIGIN
</code></pre>
<p>Once that worked I deleted this policy and installed a new policy for my service.</p>
<pre><code>kubectl delete -n istio-system -f mypolicy.yaml
</code></pre>
<p><strong>service/core-api-service:</strong></p>
<p>After editing the above policy, changing the namespace and target as below, I reapplied the policy to the correct namespace.</p>
<p>Policy:</p>
<pre><code>kubectl apply -n solarmori -f mypolicy.yaml
</code></pre>
<pre><code>apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: core-api-policy
namespace: solarmori
spec:
targets:
- name: core-api-service
ports:
- number: 80
origins:
- jwt:
issuer: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL"
jwksUri: "https://cognito-idp.ap-northeast-1.amazonaws.com/ap-northeast-1_pa9vj7sbL/.well-known/jwks.json"
principalBinding: USE_ORIGIN
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: core-api-service
spec:
type: LoadBalancer
ports:
- port: 80
name: api-svc-port
targetPort: api-app-port
selector:
app: core-api-app
</code></pre>
<p>The outcome of this action didn't appear to change anything in processing of traffic. I was still able to reach my service even though I did not provide a JWT.</p>
<p>I checked the istio-proxy of my service deployment and there was no creation of a <code>local_jwks</code> in the logs as described <a href="https://istio.io/help/ops/security/end-user-auth/" rel="nofollow noreferrer">Here</a>.</p>
<pre><code>[procyclinsur@P-428 istio]$ kubectl logs -n solarmori core-api-app-5dd9666777-qhf5v -c istio-proxy | grep local_jwks
[procyclinsur@P-428 istio]$
</code></pre>
<p>If anyone knows where I am going wrong I would greatly appreciate any help.</p>
| <p>For a Service to be part of Istio's service mesh you need to fulfill some requirements as shown in the official <a href="https://istio.io/docs/setup/kubernetes/spec-requirements/" rel="nofollow noreferrer">docs</a>.</p>
<p>In your case, the service port name needs to be updated to:
<code><protocol>[-<suffix>]</code> with the <code><protocol></code> as either: </p>
<ul>
<li>grpc</li>
<li>http</li>
<li>http2</li>
<li>https</li>
<li>mongo</li>
<li>mysql</li>
<li>redis</li>
<li>tcp</li>
<li>tls</li>
<li>udp</li>
</ul>
<p>At that point, requests forwarded to the Service will go through the service mesh; currently, they are resolved by plain Kubernetes networking, which is why the policy has no effect.</p>
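<p>Concretely, for the <code>core-api-service</code> from the question, only the port name needs to change (sketch; the rest of the spec stays the same):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: core-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http-api-svc-port   # prefixed with the protocol so Istio treats it as HTTP
    targetPort: api-app-port
  selector:
    app: core-api-app
</code></pre>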
|
<p>We would like to setup Elasticsearch Highly Available Setup in Kubernetes. we would like to deploy the below objects and would like to scale them independently</p>
<ol>
<li>Master pods</li>
<li>Data pods</li>
<li>Client pods</li>
</ol>
<p>please share your suggestions if you have implemented this kind of setup. Preferably using open source tools</p>
| <p>See below some points for a proposed architecture:</p>
<ol>
<li>Elasticsearch master nodes do not need persistent storage, so use a Deployment to manage these. Use a Service to load balance between the masters. </li>
</ol>
<p>Use a ConfigMap to manage their settings. Something like this:</p>
<pre class="lang-none prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: elasticsearch-discovery
labels:
component: elasticsearch
role: master
    version: v6.5.0 # or whatever version you require
spec:
selector:
component: elasticsearch
role: master
version: v6.5.0
ports:
- name: transport
    port: 9300 # no need to expose port 9200, as master nodes don't need it
protocol: TCP
clusterIP: None
---
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-master-configmap
data:
elasticsearch.yml: |
# these should get you going
# if you want more fine-grained control, feel free to add other ES settings
cluster.name: "${CLUSTER_NAME}"
node.name: "${NODE_NAME}"
network.host: 0.0.0.0
    # quorum: (number of master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}
node.master: true
node.data: false
node.ingest: false
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: elasticsearch-master
labels:
component: elasticsearch
role: master
version: v6.5.0
spec:
  replicas: 3 # 3 is the recommended minimum
template:
metadata:
labels:
component: elasticsearch
role: master
version: v6.5.0
spec:
affinity:
        # you can also add node affinity in case you have a specific node pool
podAntiAffinity:
        # make sure 2 ES processes don't end up on the same machine
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: component
operator: In
values:
- elasticsearch
- key: role
operator: In
values:
- master
topologyKey: kubernetes.io/hostname
initContainers:
# just basic ES environment configuration
- name: init-sysctl
image: busybox:1.27.2
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: elasticsearch-master
        image: # your preferred image
imagePullPolicy: Always
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: elasticsearch-cluster
- name: DISCOVERY_SERVICE
value: elasticsearch-discovery
- name: ES_JAVA_OPTS
          value: -Xms256m -Xmx256m # or more, if you want
ports:
- name: tcp-transport
containerPort: 9300
volumeMounts:
- name: configmap
mountPath: /etc/elasticsearch/elasticsearch.yml
subPath: elasticsearch.yml
- name: storage
mountPath: /usr/share/elasticsearch/data
volumes:
- name: configmap
configMap:
name: elasticsearch-master-configmap
- emptyDir:
medium: ""
name: storage
</code></pre>
<p>Client nodes can also be deployed in a very similar fashion, so I will avoid adding code for that.</p>
<ol start="2">
<li>Data nodes are a bit more special: you need to configure persistent storage, so you'll have to use StatefulSets. Use PersistentVolumeClaims to create disks for these pods. I'd do something like this:</li>
</ol>
<pre class="lang-none prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
component: elasticsearch
role: data
version: v6.5.0
spec:
ports:
- name: http
port: 9200 # in this example, data nodes are being used as client nodes
- port: 9300
name: transport
selector:
component: elasticsearch
role: data
version: v6.5.0
type: ClusterIP
---
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-data-configmap
data:
elasticsearch.yml: |
cluster.name: "${CLUSTER_NAME}"
node.name: "${NODE_NAME}"
network.host: 0.0.0.0
    # quorum: (number of master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}
node.master: false
node.data: true
node.ingest: false
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch-data
labels:
component: elasticsearch
role: data
version: v6.5.0
spec:
serviceName: elasticsearch
replicas: 1 # choose the appropriate number
selector:
matchLabels:
component: elasticsearch
role: data
version: v6.5.0
template:
metadata:
labels:
component: elasticsearch
role: data
version: v6.5.0
spec:
affinity:
# again, I recommend using nodeAffinity
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: component
operator: In
values:
- elasticsearch
- key: role
operator: In
values:
- data
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 180
initContainers:
- name: init-sysctl
image: busybox:1.27.2
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: elasticsearch-production-container
        image: # use the same image as for the master nodes
imagePullPolicy: Always
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: elasticsearch-cluster
- name: DISCOVERY_SERVICE
value: elasticsearch-discovery
- name: ES_JAVA_OPTS
value: -Xms31g -Xmx31g # do not exceed 32 GB!!!
ports:
- name: http
containerPort: 9200
- name: tcp-transport
containerPort: 9300
volumeMounts:
- name: configmap
mountPath: /etc/elasticsearch/elasticsearch.yml
subPath: elasticsearch.yml
- name: elasticsearch-node-pvc
mountPath: /usr/share/elasticsearch/data
readinessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 15
livenessProbe:
exec:
command:
- /usr/bin/pgrep
- -x
- "java"
initialDelaySeconds: 15
resources:
requests:
# adjust these as per your needs
memory: "32Gi"
cpu: "11"
volumes:
- name: configmap
configMap:
name: elasticsearch-data-configmap
volumeClaimTemplates:
- metadata:
name: elasticsearch-node-pvc
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: # this is dependent on your K8s environment
resources:
requests:
storage: 350Gi # choose the desired storage size for each ES data node
</code></pre>
<p>Hope this helps!</p>
|
<p>I have managed to run the latest elasticsearch in Kubernetes with only ONE pod. I would like to extend this to a full-blown elasticsearch cluster on Kubernetes. I have checked out <a href="https://github.com/pires/kubernetes-elasticsearch-cluster" rel="nofollow noreferrer">https://github.com/pires/kubernetes-elasticsearch-cluster</a> but it is not maintained anymore and does not have the latest ES docker image. I tried to use the .yaml files from that github with the latest ES image from docker hub but have not been able to set up the cluster. Any advice and insight is appreciated.</p>
| <p>See <a href="https://stackoverflow.com/questions/55216342/elasticsearch-highly-available-setup-in-kubernetes">this</a>. I've answered that question and it might be what you are looking for.</p>
|
<p>I see that I can use the "kubectl set image" command to update a container used in a deployment, like this:</p>
<pre><code>kubectl set image deployment/myapp myapp=repo.mycompany.com/myapp/ui:beta.119
</code></pre>
<p>But, i would also like to use a different startup command in some situations. Is there a way to update both the image AND the command used for the container?</p>
| <p>You could use <code>kubectl patch</code> for that. Run <code>kubectl patch --help</code> to get the docs, but as far as I can tell something like this should do it:</p>
<pre><code>$ kubectl patch deployment <your-deployment> -p '
spec:
  template:
    spec:
      containers:
      - name: <container-name>
        image: <new-image>
        command: ["new", "command"]
'
</code></pre>
|
<p>I have two containers deployed in Kubernetes. Each container runs a stateful application that is tightly coupled to the container IP, and I need communication between the two (the application does not trust the service IP). Therefore I need to assign a static IP to each container. Could anyone help me here?
Thanks in advance</p>
| <p>A static IP cannot be assigned to a Pod.</p>
<p>Within a StatefulSet, you can rely on a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">stable network ID</a>.</p>
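<p>As a sketch of how a stable network ID works (names are illustrative): a headless Service gives each StatefulSet Pod a stable DNS name, which the applications can use instead of raw Pod IPs:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-headless
spec:
  clusterIP: None        # headless: one DNS record per Pod
  selector:
    app: myapp
  ports:
  - port: 8080
---
# a StatefulSet with serviceName: app-headless makes its Pods reachable as
#   myapp-0.app-headless.<namespace>.svc.cluster.local
#   myapp-1.app-headless.<namespace>.svc.cluster.local
</code></pre>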
<p>If you are using GKE for your cluster, it supports <code>loadBalancerIP</code>. So, at some point, you can rely on this service. Just mark the auto-assigned IP as static first.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service   # illustrative
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
  ports:
  - port: 80
</code></pre>
|
<p>I was running this to see how job restart works in k8s.</p>
<pre><code>kubectl run alpine --image=alpine --restart=OnFailure -- exit 1
</code></pre>
<p>The alpine image was already there. The first failure happened almost within a second. k8s takes 5 minutes to do 5 restarts! why does it not try immediately? Is there any way reduce the time between 2 restarts?</p>
<p><a href="https://i.stack.imgur.com/fImXZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fImXZ.png" alt="enter image description here"></a></p>
| <p>Take a look at the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">Pod Lifecycle</a> docs:</p>
<blockquote>
<p>Exited Containers that are restarted by the kubelet are restarted with an <strong>exponential back-off delay</strong> (10s, 20s, 40s β¦) capped at five minutes, and is reset after ten minutes of successful execution. </p>
</blockquote>
<p>I think that there is no way to configure the back-off delay time.<br>
<em>EDIT: There is an <a href="https://github.com/kubernetes/kubernetes/issues/57291" rel="nofollow noreferrer">open issue</a> requesting this feature.</em></p>
<p>Also, note that using <code>kubectl run</code> <strong>you are not simulating "job restarts"</strong>. Jobs are managed by <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#handling-pod-and-container-failures" rel="nofollow noreferrer">Job Controllers</a>, which behaves a little bit different when handling pod/containers errors, as it takes into account the combination of <code>restartPolicy</code>, <code>parallelism</code>, <code>completions</code> and the <code>backoffLimit</code> configs:</p>
<blockquote>
<p>There are situations where you want to fail a Job after some amount of
retries due to a logical error in configuration etc. To do so, set
.spec.backoffLimit to specify the number of retries before considering
a Job as failed. The back-off limit is set by default to 6. Failed
Pods associated with the Job are recreated by the Job controller with
an exponential back-off delay (10s, 20s, 40s β¦) capped at six minutes.</p>
</blockquote>
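<p>For comparison, a minimal Job with an explicit retry limit reproducing the failing container from the question would look like this (sketch; values are illustrative):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: alpine-job
spec:
  backoffLimit: 3              # mark the Job failed after 3 retries
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: alpine
        image: alpine
        command: ["sh", "-c", "exit 1"]
</code></pre>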
|
<p>I tried configuring ingress on my kubernetes cluster. I followed the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#generic-deployment" rel="noreferrer">documentation</a> to install ingress controller and ran the following commands</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
</code></pre>
<p>After that default-http-backend and nginx-ingress-controller were running:</p>
<pre><code>ingress-nginx default-http-backend-846b65fb5f-6kwvp 1/1 Running 0 23h 192.168.2.28 node1
ingress-nginx nginx-ingress-controller-d658896cd-6m76j 1/1 Running 0 6m 192.168.2.31 node1
</code></pre>
<p>I tried testing ingress and I deployed the following service:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: echoserver-deploy
spec:
replicas: 2
selector:
matchLabels:
app: echo
template:
metadata:
labels:
app: echo
spec:
containers:
- name: my-echo
image: gcr.io/google_containers/echoserver:1.8
---
apiVersion: v1
kind: Service
metadata:
name: echoserver-svc
spec:
selector:
app: echo
ports:
- protocol: TCP
port: 8080
targetPort: 8080
</code></pre>
<p>And the following ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: happy-ingress
annotations:
INGRESS.kubernetes.io/rewrite-target: /
spec:
rules:
- host: happy.k8s.io
http:
paths:
- path: /echoserver
backend:
serviceName: echoserver-svc
servicePort: 8080
</code></pre>
<p>When I ran the command 'kubectl get ing' I received:</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
happy-ingress happy.k8s.io 80 14m
</code></pre>
<p>I didn't have ADDRESS resolved and I can't figure out what the problem is because all the pods are running. Can you give me a hint as to what the issue can be?</p>
<p>Thanks</p>
| <p>You have to enable the <code>ingress</code> addon with the following command before creating ingress rules. You can also enable it before executing any other command:</p>
<pre><code>$ minikube addons enable ingress
ingress was successfully enabled
</code></pre>
<p>Wait until the pods are up and running. You can check by executing the following command and waiting for output similar to this:</p>
<pre><code>kubectl get pods -n kube-system | grep nginx-ingress-controller
nginx-ingress-controller-5984b97644-jjng2 1/1 Running 2 1h
</code></pre>
<p><a href="https://i.stack.imgur.com/VGaeg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VGaeg.png" alt="enter image description here"></a>
For a <code>Deployment</code> you have to specify the <code>containerPort</code>, and for a <code>Service</code> you have to specify the <code>http</code> protocol.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: echoserver-deploy
spec:
replicas: 2
selector:
matchLabels:
app: my-echo
template:
metadata:
labels:
app: my-echo
spec:
containers:
- name: my-echo
image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: echoserver-svc
spec:
selector:
app: my-echo
ports:
- protocol: TCP
port: 80
targetPort: 8080
name: http
</code></pre>
<p>For the ingress rule, change the <code>servicePort</code> from 8080 to 80, the default http port.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: happy-ingress
annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: happy.k8s.io
http:
paths:
- path: /echoserver
backend:
serviceName: echoserver-svc
servicePort: 80
</code></pre>
<p>Now apply those files to create your pods, service and ingress rule. Wait a few moments; it will take a little while for your ingress rule to get an ADDRESS.
<a href="https://i.stack.imgur.com/gNzOe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gNzOe.png" alt="enter image description here"></a>
Now you can visit your service using the <code>minikube ip</code> address, but not by host name yet. For that you have to add the host and its IP address to the <code>/etc/hosts</code> file. So open <code>/etc/hosts</code> in your favorite editor and add the line below, where <code><minikube_ip></code> is the actual IP of your minikube:</p>
<pre><code><minikube_ip> happy.k8s.io
</code></pre>
<p>Now you can access your service using the host name. Verify with the following command:</p>
<pre><code>curl http://happy.k8s.io/echoserver
</code></pre>
|
<p>I have a pod that has following chart:</p>
<pre><code>ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: nginx
// ...
paths:
- /api/myservice
</code></pre>
<p>My pod exposes api and additionally a <code>/prometheus</code> endpoint that is accessible through <code>/api/myservice/prometheus</code>.</p>
<p>I would like to have <code>prometheus</code> visible inside my cluster but not from <code>/api/myservice/prometheus</code>. How I can achieve that?</p>
| <p>You can add an Ingress rule that redirects the endpoint to the default-backend:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: block
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- myapp.to
rules:
- host: myapp.to
http:
paths:
- path: /metrics
backend:
serviceName: ingress-default-backend
servicePort: 8080
</code></pre>
|
<p>I am currently running Airflow on Kubernetes in Google Cloud GCP. I based my project off of <a href="https://github.com/puckel/docker-airflow" rel="nofollow noreferrer">docker-airflow</a>. I am able to start the UI but when I try to create a connection for google cloud and submit the connection I get the following errors. </p>
<ul>
<li><pre><code>ValueError: Fernet key must be 32 url-safe base64-encoded bytes.
</code></pre></li>
<li><pre><code>[2018-09-21 19:45:13,345] AirflowException: Could not create Fernet object: Fernet key must be 32 url-safe base64-encoded bytes.
</code></pre></li>
</ul>
<p>The first fix the docs recommend is to make sure you have cryptography installed, which I do. I installed both variants, the one that comes with airflow and the standard one from PyPI. </p>
<pre><code>pip3 install apache-airflow[kubernetes,crypto] and also tried
pip install cryptography
</code></pre>
<p>I tried to run the commands for generating and storing env variables as explained in the documentation, found <a href="https://bcb.github.io/airflow/fernet-key" rel="nofollow noreferrer">here</a>. (and shown below) </p>
<p>1) <strong>Either generate a fernet key manually and add to airflow.cfg</strong></p>
<p>2) <strong>Set the environment variable and restarting the server.</strong></p>
<pre><code>python -c "from cryptography.fernet import Fernet;
print(Fernet.generate_key().decode())"
</code></pre>
<p>Example Key:<code>81HqDtbqAywKSOumSha3BhWNOdQ26slT6K0YaZeZyPs=</code></p>
<p>Using Kubernetes I am unable to restart the server using the typical method of shutting down the process ID, since it's tied to the container. I also tried putting a generated key (above) in the configmaps.yaml file of the kubernetes cluster (equal to airflow.cfg when deployed). </p>
<p>I tried running the GCP connection through DAG, via the UI, and manually by using the airflow command line client. All three methods returned the same error. I am including a picture of the UI submission here along with the full stack-trace. </p>
<h3>Question</h3>
<ul>
<li>Why might this be happening? Is the fernet key not being generated? Is it not being saved on the underlying volume maybe?*</li>
</ul>
<p>Thanks for the help.</p>
<p>-RR</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 159, in get_fernet
_fernet = Fernet(configuration.conf.get('core', 'FERNET_KEY').encode('utf-8'))
File "/usr/local/lib/python3.6/site-packages/cryptography/fernet.py", line 37, in __init__
"Fernet key must be 32 url-safe base64-encoded bytes."
ValueError: Fernet key must be 32 url-safe base64-encoded bytes.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/security/decorators.py", line 26, in wraps
return f(self, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/views.py", line 524, in edit
widgets = self._edit(pk)
File "/usr/local/lib/python3.6/site-packages/flask_appbuilder/baseviews.py", line 965, in _edit
form.populate_obj(item)
File "/usr/local/lib/python3.6/site-packages/wtforms/form.py", line 96, in populate_obj
field.populate_obj(obj, name)
File "/usr/local/lib/python3.6/site-packages/wtforms/fields/core.py", line 330, in populate_obj
setattr(obj, name, self.data)
File "<string>", line 1, in __set__
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 731, in set_extra
fernet = get_fernet()
File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 163, in get_fernet
raise AirflowException("Could not create Fernet object: {}".format(ve))
airflow.exceptions.AirflowException: Could not create Fernet object:
Fernet key must be 32 url-safe base64-encoded bytes.
</code></pre>
<p>This is the YAML for the underlying persisted volumes. </p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: airflow-dags
namespace: data
spec:
accessModes:
- ReadOnlyMany
storageClassName: standard
resources:
requests:
storage: 8Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: airflow-logs
namespace: data
spec:
accessModes:
- ReadOnlyMany
storageClassName: standard
resources:
requests:
storage: 8Gi
</code></pre>
<p>This is the airflow configuration YAML. </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: airflow
namespace: data
labels:
name: airflow
spec:
replicas: 1
selector:
matchLabels:
name: airflow
template:
metadata:
labels:
name: airflow
spec:
serviceAccountName: spark-service-account
automountServiceAccountToken: true
initContainers:
- name: "init"
image: <image_name>
imagePullPolicy: Always
volumeMounts:
- name: airflow-configmap
mountPath: /root/airflow/airflow.cfg
subPath: airflow.cfg
- name: airflow-dags
mountPath: /root/airflow/dags
# - name: test-volume
# mountPath: /root/test_volume
env:
- name: SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
name: airflow-secrets
key: sql_alchemy_conn
command:
- "bash"
args:
- "-cx"
- "airflow initdb || true && airflow create_user -u airflow -l airflow -f jon -e airflow@apache.org -r Admin -p airflow || true"
containers:
- name: webserver
image: <image_name>
imagePullPolicy: IfNotPresent
ports:
- name: webserver
containerPort: 8080
env:
- name: <namespace_name>
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
name: airflow-secrets
key: sql_alchemy_conn
command: ["/bin/sh", "-c"]
args: ["airflow webserver"]
volumeMounts:
- name: airflow-configmap
mountPath: /root/airflow/airflow.cfg
subPath: airflow.cfg
- name: airflow-dags
mountPath: /root/airflow/dags
- name: airflow-logs
mountPath: /root/airflow/logs
# readinessProbe:
# initialDelaySeconds: 5
# timeoutSeconds: 5
# periodSeconds: 5
# httpGet:
# path: /login
# port: 8080
# livenessProbe:
# initialDelaySeconds: 5
# timeoutSeconds: 5
# failureThreshold: 5
# httpGet:
# path: /login
# port: 8080
- name: scheduler
image: image-name
imagePullPolicy: IfNotPresent
env:
- name: namespace_name
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
name: airflow-secrets
key: sql_alchemy_conn
command: ["/bin/sh", "-c"]
args: ["cp ./dags/* /root/airflow/dags/; airflow scheduler"]
volumeMounts:
- name: airflow-configmap
mountPath: /root/airflow/airflow.cfg
subPath: airflow.cfg
- name: airflow-dags
mountPath: /root/airflow/dags
- name: airflow-logs
mountPath: /root/airflow/logs
volumes:
- name: airflow-configmap
configMap:
name: airflow-configmap
- name: airflow-dags
persistentVolumeClaim:
claimName: airflow-dags
- name: airflow-logs
persistentVolumeClaim:
claimName: airflow-logs
---
apiVersion: v1
kind: Service
metadata:
name: airflow
namespace: data
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30809
selector:
name: airflow
</code></pre>
<p><img src="https://i.stack.imgur.com/nJiNu.png" alt="UI"></p>
| <p>Restart the worker and webserver.</p>
<p>Your worker and webserver are operating on the old fernet key. You changed the key in your config, so all your newly stored or modified Connections will use the new key, but the webserver/worker are still operating on the old key. They will never match and continue to give this error, till they're restarted. </p>
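<p>For reference, a valid Fernet key is just 32 random bytes, url-safe base64-encoded. You can generate one and sanity-check an existing key with nothing but the standard library (this mirrors what <code>Fernet.generate_key()</code> from the <code>cryptography</code> package produces):</p>
<pre><code>import base64
import os

def generate_fernet_key() -> bytes:
    # 32 random bytes, url-safe base64-encoded: exactly the format Fernet expects
    return base64.urlsafe_b64encode(os.urandom(32))

def is_valid_fernet_key(key: bytes) -> bool:
    # Mirrors the check behind "Fernet key must be 32 url-safe base64-encoded bytes."
    try:
        return len(base64.urlsafe_b64decode(key)) == 32
    except Exception:
        return False

key = generate_fernet_key()
print(is_valid_fernet_key(key))                 # True
print(is_valid_fernet_key(b"not-a-valid-key"))  # False
</code></pre>
<p>Whatever value ends up in your configmap must pass that check; in particular an empty string (the usual symptom when the key was never templated in) fails it.</p>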
|
<p>I have 2 issues:
- my Vertical Pod Autoscaler doesn't follow my minimal resource policies:</p>
<pre><code>Spec:
Resource Policy:
Container Policies:
Min Allowed:
Cpu: 50m <==== mini allowed for CPU
Memory: 75Mi
Mode: auto
Target Ref:
API Version: extensions/v1beta1
Kind: Deployment
Name: hello-world
Update Policy:
Update Mode: Auto
Status:
Conditions:
Last Transition Time: 2019-03-19T19:11:36Z
Status: True
Type: RecommendationProvided
Recommendation:
Container Recommendations:
Container Name: hello-world
Lower Bound:
Cpu: 25m
Memory: 262144k
Target:
Cpu: 25m <==== actual CPU configured by the VPA
Memory: 262144k
</code></pre>
<ul>
<li><p>I configured my VPA to use the new kind of label selector using targetref but in the recommender logs it says I'm using the legacy one :</p>
<pre><code>Error while fetching legacy selector. Reason: v1beta1 selector not found
</code></pre></li>
</ul>
<p>Here is my deployment configuration :</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hello-world
namespace: hello-world
labels:
name: hello-world
spec:
selector:
matchLabels:
name: hello-world
replicas: 2
template:
metadata:
labels:
name: hello-world
spec:
securityContext:
fsGroup: 101
containers:
- name: hello-world
image: xxx/hello-world:latest
imagePullPolicy: Always
ports:
- containerPort: 3000
protocol: TCP
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 100m
memory: 150Mi
volumeMounts:
- mountPath: /u/app/www/images
name: nfs-volume
volumes:
- name: nfs-volume
persistentVolumeClaim:
claimName: hello-world
</code></pre>
<p>Here is my VPA configuration :</p>
<pre><code>---
apiVersion: "autoscaling.k8s.io/v1beta2"
kind: VerticalPodAutoscaler
metadata:
name: hello-world
namespace: hello-world
spec:
targetRef:
apiVersion: "extensions/v1beta1"
kind: Deployment
name: hello-world
resourcePolicy:
containerPolicies:
- minAllowed:
cpu: 50m
memory: 75Mi
mode: auto
updatePolicy:
updateMode: "Auto"
</code></pre>
<p>I'm running kubernetes v1.13.2, VPA v0.4 and here is his configuration :</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: vpa-recommender
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
app: vpa-recommender
spec:
serviceAccountName: vpa-recommender
containers:
- name: recommender
image: k8s.gcr.io/vpa-recommender:0.4.0
imagePullPolicy: Always
resources:
limits:
cpu: 200m
memory: 1000Mi
requests:
cpu: 50m
memory: 500Mi
ports:
- containerPort: 8080
command:
- ./recommender
- --alsologtostderr=false
- --logtostderr=false
- --prometheus-address=http://prometheus-service.monitoring:9090/
- --prometheus-cadvisor-job-name=cadvisor
- --v=10
</code></pre>
<p>Thanks</p>
| <p>I don't think you are using the old fetcher.</p>
<p>Here is a <a href="https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/recommender/input/cluster_feeder.go#L418" rel="nofollow noreferrer">code</a>:</p>
<pre><code>legacySelector, fetchLegacyErr := feeder.legacySelectorFetcher.Fetch(vpa)
if fetchLegacyErr != nil {
glog.Errorf("Error while fetching legacy selector. Reason: %+v", fetchLegacyErr)
}
selector, fetchErr := feeder.selectorFetcher.Fetch(vpa)
if fetchErr != nil {
glog.Errorf("Cannot get target selector from VPA's targetRef. Reason: %+v", fetchErr)
}
</code></pre>
<p>The autoscaler just tries to fetch the legacy selector first and then uses the new one.</p>
<p>About the resource limitations: </p>
<p>Here is a <a href="https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/apis/poc.autoscaling.k8s.io/v1alpha1/types.go#L109" rel="nofollow noreferrer">comment</a> in a source code (PodResourcePolicy is a "resourcePolicy" block in a spec):</p>
<blockquote>
<p>PodResourcePolicy controls how autoscaler computes the recommended resources
for containers belonging to the pod. There can be at most one entry for every
named container and optionally a single wildcard entry with <code>containerName</code> = '*',
which handles all containers that don't have individual policies.</p>
</blockquote>
<p>I think you should also set <code>containerName</code> in your spec, because you want one pod-wide policy:</p>
<pre><code>apiVersion: "autoscaling.k8s.io/v1beta2"
kind: VerticalPodAutoscaler
metadata:
name: hello-world
namespace: hello-world
spec:
targetRef:
apiVersion: "extensions/v1beta1"
kind: Deployment
name: hello-world
resourcePolicy:
containerPolicies:
- minAllowed:
cpu: 50m
memory: 75Mi
mode: auto
containerName: "*" # Added line
updatePolicy:
updateMode: "Auto"
</code></pre>
|
<p>I need to loop through a list of instances and create 1 stateful set for every instance. However, inside the range I am limited to the scope of that loop. I need to access some global values in my statefulset.</p>
<p>I've <em>solved</em> it by just putting all the global objects I need in env variables but... this seems very hacky.</p>
<p>What is the correct way to loop through ranges while still being able to reference global objects?</p>
<p>Example of my loop</p>
<pre><code>{{- $values := .Values -}}
{{- $release := .Release -}}
{{- range .Values.nodes }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ $release.Name }} <-- Global Scope
labels:
.
.
.
env:
- name: IP_ADDRESS
value: {{ .ip_address }} <-- From range scope
.
.
.
{{- end }}
</code></pre>
<p>Example of values</p>
<pre><code># Global
image:
repository: ..ecr.....
# Instances
nodes:
- node1:
name: node-1
iP: 1.1.1.1
- node2:
name: node-2
iP: 1.1.1.1
</code></pre>
| <p>When entering a loop block you lose your global context when using <code>.</code>. You can access the global context by using <code>$.</code> instead.</p>
<p>As written in the <a href="https://helm.sh/docs/chart_template_guide/variables/#helm" rel="noreferrer">Helm docs</a> -</p>
<blockquote>
<p>there is one variable that is always global - $ - this variable will always point to the root context. This can be very useful when you are looping in a range and need to know the chart's release name.</p>
</blockquote>
<p>In your example, using this would look something like:</p>
<pre><code>{{- range .Values.nodes }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ $.Release.Name }}
labels:
.
.
.
env:
- name: IP_ADDRESS
value: {{ .ip_address }}
.
.
.
{{- end }}
</code></pre>
|
<p>I am trying to connect to cqlsh from remote (kuebctl command) when encryption is enabled, but I am unable to connect to cqlsh. anyone has a better way to connect?</p>
<pre><code>$ kubectl run -i --tty --restart=Never --rm --image cassandra cqlsh -- cqlsh cassandra-0.cassandra.default.svc.cluster.local -u cassandra -p cassandra --ssl
If you don't see a command prompt, try pressing enter.
Validation is enabled; SSL transport factory requires a valid certfile to be specified. Please provide path to the certfile in [ssl] section as 'certfile' option in /root/.cassandra/cqlshrc (or use [certfiles] section) or set SSL_CERTFILE environment variable.
pod "cqlsh" deleted
pod default/cqlsh terminated (Error)
</code></pre>
<p>Since I am connecting from remote, I cannot set the cqlshrc file.</p>
| <p>You can specify the location of the certfile and the validate behavior via the environment variables <code>SSL_CERTFILE</code> and <code>SSL_VALIDATE</code> respectively, but you'll need to mount the certificate files anyway, so you can also mount a corresponding <code>cqlshrc</code>...</p>
<p>See <a href="https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/security/usingCqlshSslAndKerberos.html" rel="nofollow noreferrer">documentation</a> for more details.</p>
<p>P.S. Also, if client validation is enabled, you'll need to provide client's key/certificate as well (options <code>userkey</code>, and <code>usercert</code> in the <code>cqlshrc</code>).</p>
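<p>For reference, a mounted <code>cqlshrc</code> might look like this (the paths are placeholders for wherever you mount the certificates; the <code>userkey</code>/<code>usercert</code> lines are only needed when client certificate validation is enabled):</p>
<pre><code>[ssl]
certfile = /certs/ca.pem
validate = true
;; only required when the server validates client certificates
userkey = /certs/client.key
usercert = /certs/client.pem
</code></pre>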
|
<p>I'm trying to implement a <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#streaming-sidecar-container" rel="nofollow noreferrer">Streaming Sidecar Container</a> logging architecture in Kubernetes using Fluentd.</p>
<p>In a single pod I have:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> Volume (as log storage)</li>
<li>Application container</li>
<li>Fluent log-forwarder container</li>
</ul>
<p>Basically, the Application container logs are stored in the shared emptyDir volume. The Fluentd log-forwarder container tails this log file in the shared emptyDir volume and forwards it to an external log-aggregator.</p>
<p>The Fluentd log-forwarder container uses the following config in <code>td-agent.conf</code>:</p>
<pre><code><source>
@type tail
tag "#{ENV['TAG_VALUE']}"
path (path to log file in volume)
pos_file /var/log/td-agent/tmp/access.log.pos
format json
time_key time
time_format %iso8601
keep_time_key true
</source>
<match *.*>
@type forward
@id forward_tail
heartbeat_type tcp
<server>
host (server-host-address)
</server>
</match>
</code></pre>
<p>I'm using an environment variable to set the <code>tag</code> value so I can change it dynamically e.g. when I have to use this container side-by-side with a different Application container, I don't have to modify this config and rebuild this image again.</p>
<p>Now, I set the environment variable value during pod creation in Kubernetes:</p>
<pre><code> .
.
spec:
containers:
- name: application-pod
image: application-image:1.0
ports:
- containerPort: 1234
volumeMounts:
- name: logvolume
mountPath: /var/log/app
- name: log-forwarder
image: log-forwarder-image:1.0
env:
- name: "TAG_VALUE"
value: "app.service01"
volumeMounts:
- name: logvolume
mountPath: /var/log/app
volumes:
- name: logvolume
emptyDir: {}
</code></pre>
<p>After deploying the pod, I found that the tag value in the Fluentd log-forwarder container comes out empty (expected value: "app.service01"). I imagine it's because Fluentd's td-agent initializes first before the <code>TAG_VALUE</code> environment variable gets assigned.</p>
<p>So, the main question is...<br>
<strong>How can I dynamically set the td-agent's tag value?</strong></p>
<p>But really, what I'm wondering is:<br>
<strong>Is it possible to assign an environment variable before a container's initialization in Kubernetes?</strong></p>
| <p>As an answer to your first question (<strong>How can I dynamically set the td-agent's tag value?</strong>), what you are already doing, defining <code>tag "#{ENV['TAG_VALUE']}"</code> inside the fluentd config file, seems the best way.</p>
<p>For your second question: the environment variable is assigned before the container's initialization. </p>
<p>That means it should work; I tested with the sample YAML below, and it worked fine.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-conf
data:
fluentd.conf.template: |
<source>
@type tail
tag "#{ENV['TAG_VALUE']}"
path /var/log/nginx/access.log
format nginx
</source>
<match *.*>
@type stdout
</match>
---
apiVersion: v1
kind: Pod
metadata:
name: log-forwarder
labels:
purpose: test-fluentd
spec:
containers:
- name: nginx
image: nginx:latest
volumeMounts:
- name: logvolume
mountPath: /var/log/nginx
- name: fluentd
image: fluent/fluentd
env:
- name: "TAG_VALUE"
value: "test.nginx"
- name: "FLUENTD_CONF"
value: "fluentd.conf"
volumeMounts:
- name: fluentd-conf
mountPath: /fluentd/etc
- name: logvolume
mountPath: /var/log/nginx
volumes:
- name: fluentd-conf
configMap:
name: fluentd-conf
items:
- key: fluentd.conf.template
path: fluentd.conf
- name: logvolume
emptyDir: {}
restartPolicy: Never
</code></pre>
<p>And when I curl nginx pod, I see this output on fluentd containers stdout.</p>
<pre><code>kubectl logs -f log-forwarder fluentd
2019-03-20 09:50:54.000000000 +0000 test.nginx: {"remote":"10.20.14.1","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
2019-03-20 09:50:55.000000000 +0000 test.nginx: {"remote":"10.20.14.1","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
2019-03-20 09:50:56.000000000 +0000 test.nginx: {"remote":"10.128.0.26","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"612","referer":"-","agent":"curl/7.60.0","http_x_forwarded_for":"-"}
</code></pre>
<p>As you can see, my environment variable <code>TAG_VALUE=test.nginx</code> has applied to log entries.</p>
<p>I hope it will be useful.</p>
|
<p>I have an init container in which I'm storing a file, but at runtime the file is not present.
Is there any way I can exec inside the init container and check where the file is being stored?</p>
| <p>Use <code>kubectl describe <pod></code> to get the name of the initContainer you need to exec into, then use <code>kubectl exec -ti <pod> -c <container> sh</code> to access its shell. You may need to add some kind of delay, like a <code>sleep</code>, to the initContainer so you can access it before it completes or fails.</p>
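<p>For example, appending a <code>sleep</code> to the init container's command keeps it alive long enough to inspect (a sketch; adjust the copy command and delay to your case):</p>
<pre><code>initContainers:
- name: init
  image: busybox                # placeholder image
  command: ["sh", "-c", "cp /src/file /work/file; sleep 600"]   # delay so you can exec in
</code></pre>
<p>While it sleeps, <code>kubectl exec -it <pod> -c init -- sh</code> drops you into the init container to look around.</p>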
|
<p>I have created docker images using docker-compose.yml as below</p>
<pre><code>version: '2'
services:
djangoapp:
build: .
volumes:
- .:/sig_app
- static_volume:/sig_app
networks:
- nginx_network
nginx:
image: nginx:1.13
ports:
- "80:80"
volumes:
- ./config/nginx/conf.d:/etc/nginx/conf.d
- static_volume:/sig_app
depends_on:
- djangoapp
networks:
- nginx_network
networks:
nginx_network:
driver: bridge
volumes:
static_volume:
</code></pre>
<p>I have used docker-compose build and docker-compose up. The three images are created as below </p>
<ol>
<li>kubernetes_djangoapp </li>
<li>docker.io/python</li>
<li>docker.io/nginx</li>
</ol>
<p>I want to deploy the application into Kubernetes using a YAML file.
I am new to Kubernetes.
The Django application is running on port 8000
and Nginx on port 80.</p>
| <p>This should work:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-deploy
spec:
replicas: 1
template:
metadata:
labels:
app: my-app
spec:
volumes:
- name: django-nginx
emptyDir: {}
- name: nginx-host
hostPath:
path: /config/nginx/conf.d
containers:
- name: djangoapp
image: kubernetes_djangoapp
volumeMounts:
- name: django-nginx
mountPath: /sig_app
- name: nginx
image: nginx:1.13
ports:
- containerPort: 80
volumeMounts:
- name: django-nginx
mountPath: /sig_app
- name: nginx-host
mountPath: /etc/nginx/conf.d
</code></pre>
<p>Note that you will have to modify some things to make it your own. I don't know where your image is hosted; you should upload it to Docker Hub, or any registry of your choice.</p>
<p>About the volumes: both containers share a non-persistent volume (django-nginx), which maps the <code>/sig_app</code> directory in each container to the same storage. Another volume shares the nginx container's <code>/etc/nginx/conf.d</code> with your host's <code>/config/nginx/conf.d</code>, to pass in the config file. A better way would be to use a ConfigMap. Check on that.</p>
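<p>A ConfigMap-based version of the nginx config could look roughly like this (the server block is a placeholder for your actual conf.d contents):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://localhost:8000;   # djangoapp in the same pod
      }
    }
</code></pre>
<p>Then swap the <code>nginx-host</code> hostPath volume in the Deployment for <code>configMap: { name: nginx-conf }</code>.</p>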
<p>So, yeah, set the image for django and let me know if it doesn't work, and we will see what's failing.</p>
<p>Cheers</p>
|
<p>I couldn't find out: can we replace RabbitMQ/ActiveMQ/SQS with a native Kubernetes message queue,
or are they totally different in terms of features?</p>
| <p>It is a totally different mechanism.</p>
<p>Kubernetes' internal queues are not real "queues" you can use in external applications; they are part of the internal messaging system and manage only objects which are part of Kubernetes.</p>
<p>Moreover, Kubernetes doesn't provide any message queue as a service for external apps (except when your app actually services one of the K8s objects).</p>
<p>If you are not sure which service is better for your app, try checking <a href="https://github.com/lukaszx0/queues.io" rel="nofollow noreferrer">queues.io</a>.
That is a list of almost all available MQ engines with some highlights.</p>
|
<p>A quick question. I know that if the Kubernetes liveness probe fails, Kubernetes will restart the pod and try again. But what if the readiness probe fails? How can I also ask Kubernetes to restart the pod?</p>
<p><code>api-group-0 0/1 Running 0 6h35m</code></p>
<p>Restarting this pod makes it work. Thanks all!</p>
| <p>There's no way to trigger a pod restart from a readiness probe.
As was recommended in the comments, you should rely on a liveness probe instead. </p>
<pre><code>livenessProbe:
exec:
command:
- /opt/fissile/readiness-probe.sh
initialDelaySeconds: 20
periodSeconds: 10
failureThreshold: 3
</code></pre>
<p>If you are concerned that readiness-probe.sh fails periodically and shouldn't trigger a restart straight after the first failure, consider the <code>failureThreshold</code> setting. It allows this many tries before the pod is restarted. </p>
|
<p>What is the use of external IP address option in kubernetes service when the service is of type <code>ClusterIP</code></p>
| <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">ClusterIP</a> is the default service type in Kubernetes which allows you to reach your service only <strong>within</strong> the cluster. </p>
<p>If your service type is set as <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>, <code>ClusterIP</code> is automatically created and <code>LoadBalancer</code> or <code>NodePort</code> service will route to this <code>ClusterIP</code> IP address.</p>
<p>The new external IP addresses are only allocated with <code>LoadBalancer</code> type.</p>
<p>You can also use the node's external IP addresses when you set your service as <code>NodePort</code>. But in this case you will need extra firewall rules for your nodes to allow ingress traffic for your exposed node ports.</p>
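<p>There is also the <code>spec.externalIPs</code> field the question refers to: a <code>ClusterIP</code> service with <code>externalIPs</code> set will accept traffic that arrives at a node on one of those IPs, but routing that IP to a node is your responsibility; Kubernetes does not manage it. A sketch (the address is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  externalIPs:
  - 198.51.100.10        # placeholder; you must route this IP to a node yourself
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>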
|
<p>I have a VPN between the company network 172.16.0.0/16 and GCP 10.164.0.0/24.</p>
<p>On GCP there is a Cassandra cluster running with 3 instances. These instances get dynamic local IP addresses, for example <em>10.4.7.4</em>, <em>10.4.6.5</em>, <em>10.4.3.4</em>. </p>
<p>My issue: from the company network I cannot access the 10.4.x addresses, as the tunnel works only for 10.164.0.0/24.</p>
<p>I tried setting up an LB service on 10.164.0.100 with the cassandra nodes behind it. This doesn't work: when I configure that IP address as a seed node on the local cluster, it gets a reply from one of the 10.4.x IP addresses, which it doesn't have in its seed list.</p>
<p>I need advice on how to set up inter-DC sync in this scenario.</p>
| <p>The IP addresses which K8s assigns to Pods and Services are <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">internal cluster-only addresses</a> which are not accessible from outside of the cluster. Some CNIs make it possible to create a connection between in-cluster addresses and external networks, but I don't think that is a good idea in your case.</p>
<p>You need to expose your Cassandra using a <a href="https://kubernetes.io/docs/concepts/services-networking/" rel="nofollow noreferrer">Service</a> of NodePort or LoadBalancer type. Here is another <a href="https://github.com/kubernetes/examples/issues/48" rel="nofollow noreferrer">answer</a> with the same solution from the Kubernetes GitHub.</p>
<p>If you will add a Service with type NodePort, your Cassandra will be available on a selected port on all Kubernetes nodes. </p>
<p>If you will choose LoadBalancer, Kubernetes will create for you Cloud Load Balancer which will be an entrypoint for Cassandra. Because you have a VPN to your VPC, I think you will need an <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="nofollow noreferrer">Internal Load Balancer</a>.</p>
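<p>On GKE an internal load balancer is requested through a service annotation; a sketch (the annotation shown is the GKE-specific one, so check the docs for your cluster version):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cassandra-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: cassandra
  ports:
  - port: 9042         # CQL native transport
    targetPort: 9042
</code></pre>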
|
<p>I'm a bit confused about communication model between K8s master components. How does <em>kube-apiserver</em>, <em>kube-controller-manager</em> and <em>kube-scheduler</em> communicate with each other?</p>
<p>According to <a href="https://kubernetes.io/docs/concepts/architecture/master-node-communication/#cluster-to-master" rel="nofollow noreferrer">official doc</a>, it seems to me that only <em>kube-controller-manager</em> and <em>kube-scheduler</em> connects to <em>kube-apiserver</em>, but not the other way around. However, I found there were bunch of server-flavored flags provided by both <em>kube-controller-manager</em> and <em>kube-scheduler</em>, such as <code>--bind-address</code> or <code>--client-ca-file</code>. So they are both definitely acting as a server too, which I can further confirm using <code>curl localhost:10251/healthz</code> and <code>curl localhost:10252/healthz</code>.</p>
<p>So the big question mark in my head now is that, what functionalities were provided by <em>kube-controller-manager</em>'s and <em>kube-scheduler</em>'s server ports? And were they used by <em>kube-apiserver</em>?</p>
| <p>They are not used by kube-apiserver.</p>
<p>Those are health check ports that expose the current health status and metrics. <code>--client-ca-file</code> configures how those servers authenticate incoming client connections. </p>
<p>Here is the relevant part of the <a href="https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-scheduler/app/options/insecure_serving.go" rel="noreferrer">source code</a> of kube-scheduler.</p>
|
<p>I have the following CSR object in Kubernetes:</p>
<pre><code>$ kubectl get csr
NAME AGE REQUESTOR CONDITION
test-certificate-0.my-namespace 53m system:serviceaccount:my-namespace:some-user Pending
</code></pre>
<p>And I would like to approve it using the Python API client:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import config, client
# configure session
config.load_kube_config()
# get a hold of the certs API
certs_api = client.CertificatesV1beta1Api()
# read my CSR
csr = certs_api.read_certificate_signing_request("test-certificate-0.my-namespace")
</code></pre>
<p>Now, the contents of the <code>csr</code> object are:</p>
<pre><code>{'api_version': 'certificates.k8s.io/v1beta1',
'kind': 'CertificateSigningRequest',
'metadata': {'annotations': None,
'cluster_name': None,
'creation_timestamp': datetime.datetime(2019, 3, 15, 14, 36, 28, tzinfo=tzutc()),
'deletion_grace_period_seconds': None,
'name': 'test-certificate-0.my-namespace',
'namespace': None,
'owner_references': None,
'resource_version': '4269575',
'self_link': '/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/test-certificate-0.my-namespace',
'uid': 'b818fa4e-472f-11e9-a394-124b379b4e12'},
'spec': {'extra': None,
'groups': ['system:serviceaccounts',
'system:serviceaccounts:cloudp-38483-test01',
'system:authenticated'],
'request': 'redacted',
'uid': 'd5bfde1b-4036-11e9-a394-124b379b4e12',
'usages': ['digital signature', 'key encipherment', 'server auth'],
'username': 'system:serviceaccount:test-certificate-0.my-namespace'},
'status': {'certificate': 'redacted',
'conditions': [{'last_update_time': datetime.datetime(2019, 3, 15, 15, 13, 32, tzinfo=tzutc()),
'message': 'This CSR was approved by kubectl certificate approve.',
'reason': 'KubectlApprove',
'type': 'Approved'}]}}
</code></pre>
<p>I would like to <strong>approve</strong> this cert programmatically, if I use kubectl to do it with (<code>-v=10</code> will make <code>kubectl</code> output the http trafffic):</p>
<pre><code>kubectl certificate approve test-certificate-0.my-namespace -v=10
</code></pre>
<p>I get to see the <code>PUT</code> operation used to <strong>Approve</strong> my certificate:</p>
<pre><code>PUT https://my-kubernetes-cluster.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests/test-certificate-0.my-namespace/approval
</code></pre>
<p>So I need to <code>PUT</code> to the <code>/approval</code> resource of the certificate object. Now, how do I do it with the Python Kubernetes client?</p>
| <p>Here's the answer to my question, based on @jaxxstorm's answer and my own investigation:</p>
<pre><code># Import required libs and configure your client
from datetime import datetime, timezone
from kubernetes import config, client
config.load_kube_config()
# this is the name of the CSR we want to Approve
name = 'my-csr'
# a reference to the API we'll use
certs_api = client.CertificatesV1beta1Api()
# obtain the body of the CSR we want to sign
body = certs_api.read_certificate_signing_request_status(name)
# create an approval condition
approval_condition = client.V1beta1CertificateSigningRequestCondition(
last_update_time=datetime.now(timezone.utc).astimezone(),
message='This certificate was approved by Python Client API',
reason='MyOwnReason',
type='Approved')
# patch the existing `body` with the new conditions
# you might want to append the new conditions to the existing ones
body.status.conditions = [approval_condition]
# patch the Kubernetes object
response = certs_api.replace_certificate_signing_request_approval(name, body)
</code></pre>
<p>After this, the KubeCA will approve and issue the new certificate. The issued certificate file can be obtained from the <code>response</code> object we just got:</p>
<pre><code>import base64
base64.b64decode(response.status.certificate) # this will return the decoded cert
</code></pre>
|
<p>I have 2 clusters in GCP, one in Europe and the other in the USA.
I have created a VPC network to peer the subnetworks with each other and configured the relevant firewall rules.
Now I'm able to make calls between pods, but I get a timeout when trying to call from a pod in Europe to a service in the other cluster. I have checked all the firewall rules very carefully but can't find a solution. Can someone give me a hint to solve my problem?</p>
| <p>The problem is that GCP requires using a VM IP address to communicate outside the VPC. To allow cross-cluster communication on top of the VPN, you need to make sure the clusters can communicate as if they were on the same virtual network. GKE blocks egress traffic when an internal IP address (the pod address space) is used to access internal IP addresses outside the virtual network (in this case, through the VPN). Hence you need to configure an iptables rule to masquerade traffic as if it originated from the VM instance IP address instead of the pod IP address, for all outgoing traffic to the other subnet.</p>
<p>There is an implementation that uses a DaemonSet to define the iptables MASQUERADE rules.</p>
<p>You can find more details on the github page - <a href="https://github.com/kubernetes-incubator/ip-masq-agent" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/ip-masq-agent</a> and on the k8s documentation - <a href="https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/</a></p>
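<p>A sketch of the agent's ConfigMap for this scenario (the CIDR value is a placeholder; traffic to any destination <em>not</em> listed in <code>nonMasqueradeCIDRs</code>, such as the other cluster's subnet, is masqueraded to the node IP):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # traffic to these CIDRs keeps the pod IP as source; traffic to
    # everything else (e.g. the peered cluster's subnet) is
    # masqueraded to the node IP
    nonMasqueradeCIDRs:
      - 10.0.0.0/8        # assumed local pod/cluster range; adjust
    masqLinkLocal: false
    resyncInterval: 60s
</code></pre>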
|
<p>I want to deploy kubernetes in a <strong>non-cloud environment.</strong></p>
<p>I know MetalLB is an L2 load balancer and ingress is an L7 load balancer and nodeport can load balance in L4, but the question is in production (expecting 1 million requests) how should I use them?</p>
<p>Is it necessary to have all of them?</p>
<p>NodePort can load balance between pods that are on different nodes, so it seems that using MetalLB, which also load balances between nodes, is useless.
How can you explain this?</p>
<p>Thanks</p>
| <p>Are you deploying it on premises? I would use all of them in production, with MetalLB in BGP mode, where it divides the incoming traffic among the nodes. MetalLB divides the traffic between the physical nodes, Ingress finds the appropriate Service, and NodePort finds the pod in the cluster. They are all necessary for best practice. If you are on a cloud provider, go for an LB provisioned there; it can scale according to demand. </p>
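<p>For reference, a sketch of a MetalLB ConfigMap in BGP mode (every address and ASN below is a placeholder; use your router's real values):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    peers:
    - peer-address: 10.0.0.1      # assumed router address
      peer-asn: 64512             # assumed router ASN
      my-asn: 64513               # assumed MetalLB ASN
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.168.10.0/24           # assumed pool to hand out to Services
</code></pre>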
|
<p>When I do <code>kubectl delete pod</code> or <code>kubectl patch</code>, how can I ensure that the old pod doesn't actually delete itself until the replacement pod has been running for 2 minutes? (And if the replacement pod dies before the 2 minute mark, then don't even delete the old pod)</p>
<p>The reason is that my initiation takes about 2 minutes to pull some latest data and run calculations; after 2 minutes it'll reach a point where it'll either error out or continue running with the updated calculations.</p>
<p>I want to be able to occasionally delete the pod so that it'll restart and get new versions of stuff (because getting new versions is only done at the beginning of the code).</p>
<p>Is there a way I can do this without an init container? Because I'm concerned that it would be difficult to pass the calculated results from the init container to the main container</p>
| <p>We need to tweak <strong>two</strong> parameters.</p>
<ul>
<li>We have set the <code>minReadySeconds</code> as 2 min. Or we can use <code>readiness probe</code> instead of hardcoded 2min.</li>
<li>We have to do <strong>rolling update</strong> with <code>maxSurge > 0 (default: 1) and maxUnavailable: 0</code>. This will bring the new pod(s) and <strong>only if it becomes ready</strong>, old pod(s) will be killed. This process continues for rest of the pods.</li>
</ul>
<blockquote>
<p>Note: 0 <= maxSurge <= replicaCount</p>
</blockquote>
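<p>The two settings above can be sketched in a Deployment like this (the name, image, and replica count are assumptions):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 3
  minReadySeconds: 120          # a new pod must stay ready for 2 minutes
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1               # bring up one extra pod first
      maxUnavailable: 0         # never kill an old pod before its replacement is ready
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-image:latest  # assumed image
</code></pre>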
|
<p>I deploy an ingress with <code>app1</code> and <code>app2</code>.</p>
<pre><code>example.com/app1 ---> app1
example.com/app2 ---> app2
</code></pre>
<p>And define /etc/hosts on all the machines.</p>
<pre><code>192.168.1.10 example.com
</code></pre>
<p>But I want to know how to use <strong>DNS</strong> and Ingress in operation.</p>
<p>What should I do?
What does Ingress bring me?
I'm confused by Ingress. How should I use it in a practical environment?</p>
| <p>With DNS you can't just use <code>example.com</code> (<code>example.com</code> is owned by <a href="https://www.iana.org/" rel="noreferrer">IANA</a>). You have to own the domain configured on your Ingress. For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: simple-fanout-example
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: mydomain.com
http:
paths:
- path: /foo
backend:
serviceName: service1
servicePort: 4200
</code></pre>
<p>In the case above you have to own <code>mydomain.com</code>. You can buy your domain at any major <a href="https://en.wikipedia.org/wiki/Domain_name_registrar" rel="noreferrer">domain registrar</a> like <a href="https://www.godaddy.com" rel="noreferrer">GoDaddy</a>.</p>
<p>Then you will have to expose your Ingress externally depending on the setup you have (AWS, bare-metal, etc) with a <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="noreferrer">LoadBalancer</a> Kubernetes Service and have an <a href="https://en.wikipedia.org/wiki/List_of_DNS_record_types" rel="noreferrer">A, or CNAME</a> entry on your domain registrar manager, point to that external endpoint (for <code>mydomain.com</code>). For example, on AWS that entry would be a CNAME that looks like this: <code>xxxxx-xxxxxxxxxxxx.us-west-2.elb.amazonaws.com</code></p>
<p>Note: You can ignore the host altogether, but the Ingress will then only serve a default backend, as described <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="noreferrer">here</a> with a minimal Ingress resource. As for using your own DNS server, you can do that too, as long as your DNS server correctly resolves mydomain.com to the external IP that your ingress controller is fronting.</p>
<p>Hope it helps!</p>
|
<p>Is there a way to get a trigger on shutdown, so we can close all connections gracefully before the shutdown, not process any further actions after that signal, and keep the pod ready to be killed?</p>
<p>This includes flushing logs, saving any application state before handing over to the new pod, and many more use cases.</p>
| <p>You have 2 options:</p>
<ol>
<li><p>Containers (PID 1) receive SIGTERM before the container (and the pod) is removed. You can trap SIGTERM and act on it.</p></li>
<li><p>You can use the <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-details" rel="nofollow noreferrer">preStop lifecycle hook</a></p></li>
</ol>
<p>Important implementation details can be found here: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods</a></p>
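<p>For option 1, a minimal sketch in Python of trapping SIGTERM in the container's main process (the handler body is a placeholder; put your own connection-draining and log-flushing logic there):</p>

```python
import signal
import sys

def handle_sigterm(signum, frame):
    # placeholder: close connections, flush logs, save state here
    print("SIGTERM received, shutting down gracefully")
    sys.exit(0)

# Kubernetes sends SIGTERM to PID 1, then waits up to
# terminationGracePeriodSeconds before sending SIGKILL
signal.signal(signal.SIGTERM, handle_sigterm)
```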
<h2>httpGet example</h2>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: prestop-pod
spec:
terminationGracePeriodSeconds: 5
containers:
- name: nginx
image: nginx
lifecycle:
preStop:
httpGet:
          # only port is required
port: 80
path: "?preStop"
# scheme: HTTP
# host: ...
# httpHeaders:
# name: ...
# value: ...
</code></pre>
<p>After <code>kubectl apply -f</code> on this file, run <code>kubectl logs -f prestop-pod</code> while executing <code>kubectl delete pod prestop-pod</code> in another terminal. You should see something like:</p>
<pre><code>$ kubectl apply -f prestop.yaml
pod/prestop-pod created
$ kubectl logs -f prestop-pod
10.244.0.1 - - [21/Mar/2019:09:15:20 +0000] "GET /?preStop HTTP/1.1" 200 612 "-" "Go-http-client/1.1" "-"
</code></pre>
|
<p>I tried using rktlet(<a href="https://github.com/kubernetes-incubator/rktlet/blob/master/docs/getting-started-guide.md" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/rktlet/blob/master/docs/getting-started-guide.md</a>)</p>
<p>But when I try to </p>
<pre><code>kubelet --cgroup-driver=systemd \
> --container-runtime=remote \
> --container-runtime-endpoint=/var/run/rktlet.sock \
> --image-service-endpoint=/var/run/rktlet.sock
</code></pre>
<p>I am getting the below errors</p>
<pre><code>Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0320 13:10:21.661373 3116 server.go:407] Version: v1.13.4
I0320 13:10:21.663411 3116 plugins.go:103] No cloud provider specified.
W0320 13:10:21.664635 3116 server.go:552] standalone mode, no API client
W0320 13:10:21.669757 3116 server.go:464] No api server defined - no events will be sent to API server.
I0320 13:10:21.669791 3116 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0320 13:10:21.670018 3116 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
I0320 13:10:21.670038 3116 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0320 13:10:21.670125 3116 container_manager_linux.go:272] Creating device plugin manager: true
I0320 13:10:21.670151 3116 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0320 13:10:21.670254 3116 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0320 13:10:21.670271 3116 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
W0320 13:10:21.672059 3116 util_unix.go:77] Using "/var/run/rktlet.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/rktlet.sock".
W0320 13:10:21.672124 3116 util_unix.go:77] Using "/var/run/rktlet.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/rktlet.sock".
E0320 13:10:21.673168 3116 remote_runtime.go:72] Version from runtime service failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
E0320 13:10:21.673228 3116 kuberuntime_manager.go:184] Get runtime version failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
F0320 13:10:21.673249 3116 server.go:261] failed to run Kubelet: failed to create kubelet: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
</code></pre>
<p>How do I create a kube cluster using rkt? Please help.</p>
| <p>That's the way to run <code>Rktlet</code>. However, Rktlet is still pretty experimental and I believe it's no longer actively developed either. The <a href="https://github.com/kubernetes-incubator/rktlet/commit/fd7fc6bf4a25f03c22e5f6e30f3d9f12c468afcb" rel="nofollow noreferrer">last commit</a> as of this writing was in <code>05/2018</code>. </p>
<p>You can try running it the other way as described <a href="https://coreos.com/rkt/docs/latest/using-rkt-with-kubernetes.html" rel="nofollow noreferrer">here</a> or <a href="https://www.devtech101.com/2017/11/02/configuring-kubernetes-3-node-cluster-coreos-kubelet-rkt-cni-part-4/" rel="nofollow noreferrer">here</a>. Basically, use <code>--container-runtime=rkt</code>, <code>--rkt-path=PATH_TO_RKT_BINARY</code>, etc. on the kubelet.</p>
<p>Is there a reason why you need <code>rkt</code>? Note that <code>--container-runtime=rkt</code> is <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">deprecated</a> in the latest Kubernetes but should still work (1.13 as of this writing).</p>
|