<p>I am running a k8s cluster on bare-metal RHEL7. I am trying to run the <code>kubectl port-forward</code> command and am getting an error. </p>
<pre><code>kubectl port-forward -p somepod 10000:8080
I0128 15:33:33.802226 70558 portforward.go:225] Forwarding from 127.0.0.1:10000 -> 8080
E0128 15:33:33.802334 70558 portforward.go:214] Unable to create listener: Error listen tcp6 [::1]:10000: bind: cannot assign requested address
</code></pre>
<p>Any ideas why this could be happening?</p>
| <p>If you run <code>kubectl port-forward</code> multiple times and you have IPv6 enabled on your machine, you will run into this quite often.</p>
<p>There are two solutions:</p>
<ol>
<li>Run <strong>netstat -nlp | grep 10000</strong> to find the PID of the process using that port. Then you can kill it with <strong>kill -9 PID_OF_PROCESS</strong></li>
<li><p>Permanent solution: disable ipv6</p>
<pre><code>echo "
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p   # or reboot
</code></pre></li>
</ol>
|
<p>I try to install Magento 2 via CLI (inside a Minikube). I set all the required parameters, but it often happens that the setup fails with the error message below. I can run the setup command again, and it completes without an error. As this setup process should be fully automated and failsafe, I have to find out what's wrong here.</p>
<pre><code>Upgrading data...
[Progress: 466 / 905]
Module 'Magento_Cms':
[Progress: 467 / 905]
Module 'Magento_Catalog':
In PatchApplier.php line 167:
Invalid entity_type specified: catalog_category
setup:install [--backend-frontname BACKEND-FRONTNAME] [--enable-debug-logging ENABLE-DEBUG-LOGGING] [--enable-syslog-logging ENABLE-SYSLOG-LOGGING] [--amqp-host AMQP-HOST] [--amqp-port AMQP-PORT] [--amqp-user AMQP-USER] [--amqp-password AMQP-PASSWORD] [--amqp-virtualhost AMQP-VIRTUALHOST] [--amqp-ssl AMQP-SSL] [--amqp-ssl-options AMQP-SSL-OPTIONS] [--key KEY] [--db-host DB-HOST] [--db-name DB-NAME] [--db-user DB-USER] [--db-engine DB-ENGINE] [--db-password DB-PASSWORD] [--db-prefix DB-PREFIX] [--db-model DB-MODEL] [--db-init-statements DB-INIT-STATEMENTS] [-s|--skip-db-validation] [--http-cache-hosts HTTP-CACHE-HOSTS] [--session-save SESSION-SAVE] [--session-save-redis-host SESSION-SAVE-REDIS-HOST] [--session-save-redis-port SESSION-SAVE-REDIS-PORT] [--session-save-redis-password SESSION-SAVE-REDIS-PASSWORD] [--session-save-redis-timeout SESSION-SAVE-REDIS-TIMEOUT] [--session-save-redis-persistent-id SESSION-SAVE-REDIS-PERSISTENT-ID] [--session-save-redis-db SESSION-SAVE-REDIS-DB] [--session-save-redis-compression-threshold SESSION-SAVE-REDIS-COMPRESSION-THRESHOLD] [--session-save-redis-compression-lib SESSION-SAVE-REDIS-COMPRESSION-LIB] [--session-save-redis-log-level SESSION-SAVE-REDIS-LOG-LEVEL] [--session-save-redis-max-concurrency SESSION-SAVE-REDIS-MAX-CONCURRENCY] [--session-save-redis-break-after-frontend SESSION-SAVE-REDIS-BREAK-AFTER-FRONTEND] [--session-save-redis-break-after-adminhtml SESSION-SAVE-REDIS-BREAK-AFTER-ADMINHTML] [--session-save-redis-first-lifetime SESSION-SAVE-REDIS-FIRST-LIFETIME] [--session-save-redis-bot-first-lifetime SESSION-SAVE-REDIS-BOT-FIRST-LIFETIME] [--session-save-redis-bot-lifetime SESSION-SAVE-REDIS-BOT-LIFETIME] [--session-save-redis-disable-locking SESSION-SAVE-REDIS-DISABLE-LOCKING] [--session-save-redis-min-lifetime SESSION-SAVE-REDIS-MIN-LIFETIME] [--session-save-redis-max-lifetime SESSION-SAVE-REDIS-MAX-LIFETIME] [--session-save-redis-sentinel-master SESSION-SAVE-REDIS-SENTINEL-MASTER] 
[--session-save-redis-sentinel-servers SESSION-SAVE-REDIS-SENTINEL-SERVERS] [--session-save-redis-sentinel-verify-master SESSION-SAVE-REDIS-SENTINEL-VERIFY-MASTER] [--session-save-redis-sentinel-connect-retires SESSION-SAVE-REDIS-SENTINEL-CONNECT-RETIRES] [--cache-backend CACHE-BACKEND] [--cache-backend-redis-server CACHE-BACKEND-REDIS-SERVER] [--cache-backend-redis-db CACHE-BACKEND-REDIS-DB] [--cache-backend-redis-port CACHE-BACKEND-REDIS-PORT] [--cache-backend-redis-password CACHE-BACKEND-REDIS-PASSWORD] [--cache-id-prefix CACHE-ID-PREFIX] [--page-cache PAGE-CACHE] [--page-cache-redis-server PAGE-CACHE-REDIS-SERVER] [--page-cache-redis-db PAGE-CACHE-REDIS-DB] [--page-cache-redis-port PAGE-CACHE-REDIS-PORT] [--page-cache-redis-compress-data PAGE-CACHE-REDIS-COMPRESS-DATA] [--page-cache-redis-password PAGE-CACHE-REDIS-PASSWORD] [--page-cache-id-prefix PAGE-CACHE-ID-PREFIX] [--base-url BASE-URL] [--language LANGUAGE] [--timezone TIMEZONE] [--currency CURRENCY] [--use-rewrites USE-REWRITES] [--use-secure USE-SECURE] [--base-url-secure BASE-URL-SECURE] [--use-secure-admin USE-SECURE-ADMIN] [--admin-use-security-key ADMIN-USE-SECURITY-KEY] [--admin-user [ADMIN-USER]] [--admin-password [ADMIN-PASSWORD]] [--admin-email [ADMIN-EMAIL]] [--admin-firstname [ADMIN-FIRSTNAME]] [--admin-lastname [ADMIN-LASTNAME]] [--cleanup-database] [--sales-order-increment-prefix SALES-ORDER-INCREMENT-PREFIX] [--use-sample-data] [--enable-modules [ENABLE-MODULES]] [--disable-modules [DISABLE-MODULES]] [--convert-old-scripts [CONVERT-OLD-SCRIPTS]] [-i|--interactive] [--safe-mode [SAFE-MODE]] [--data-restore [DATA-RESTORE]] [--dry-run [DRY-RUN]] [--magento-init-params MAGENTO-INIT-PARAMS]
</code></pre>
| <p>I figured out that my PHP process did not have enough memory. I raised the PHP memory limit to 4GB and it finally worked for me.</p>
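<p>If the limit should only be raised for the setup run (rather than globally in <code>php.ini</code>), PHP allows overriding it per invocation. A sketch, assuming a standard Magento 2 root:</p>
<pre><code>php -d memory_limit=4G bin/magento setup:install &lt;your-options&gt;
</code></pre>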
|
<p>The Kubernetes documentation could be a lot better, but as with any very large piece of software, documentation becomes a big issue to deal with.</p>
<p>I am trying to deploy a ConfigMap using kubectl, and I have yet to find any documentation that explains whether I can use tolerations with ConfigMaps.</p>
<p>I assume that if I want to deploy an app that uses a ConfigMap, I should deploy to the exact nodes I want so that everything related to the app stays on the same nodes. Following that reasoning, I would assume a ConfigMap should allow tolerations as well.</p>
<p>But upon trying to add tolerations so I can target specific nodes, here is what I get:</p>
<pre><code>...
unknown field "tolerations" in io.k8s.api.core.v1.ConfigMap
...
</code></pre>
| <p>Wherever the pod gets scheduled, the ConfigMap will be available on that node: it is the responsibility of the <strong>kubelet</strong> to fetch it from the API server (backed by <strong>etcd</strong>) and mount it inside the container (pod). So it does not make sense to put a toleration on a ConfigMap object.</p>
<ul>
<li>Taint is applied to nodes, and toleration is applied to the pod.</li>
</ul>
<blockquote>
<p>taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.</p>
</blockquote>
<ul>
<li><p>With a ConfigMap, the configuration data has an independent lifecycle; it is not baked into the container image, which makes for a flexible solution.</p>
</li>
<li><p>Get documentation of various resources and their field with the following command</p>
<p><code>kubectl explain $K8sObject --recursive</code></p>
</li>
</ul>
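<p>Tolerations therefore belong on the Pod (or on the pod template of a Deployment) that consumes the ConfigMap, not on the ConfigMap itself. A minimal sketch (the names and taint key here are hypothetical):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  # the toleration lives on the pod, so the scheduler can place it
  # on nodes tainted with dedicated=myapp:NoSchedule
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "myapp"
    effect: "NoSchedule"
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: app-config   # the ConfigMap itself needs no toleration
</code></pre>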
<p><a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#concepts" rel="nofollow noreferrer">taint-and-toleration-Concept</a></p>
|
<p>I just saw a yaml file for Postgres with <code>PersistentVolumeClaim</code> and <code>volumeMounts</code> and <code>volumes</code> with the <code>persistentVolumeClaim</code> in the <code>postgres</code> container. I couldn't find any <code>PersistentVolume</code> defined. </p>
<p>However, when the <code>postgres</code> container pod has been brought up, I can see a
<code>PersistentVolume</code> bound to the <code>persistentVolumeClaim</code> defined in the yaml file.</p>
<p>So will k8s create the <code>PersistentVolume</code> if we only define the <code>PersistentVolumeClaim</code>? </p>
| <p>Yes, that's correct. When your cluster has dynamic provisioning with <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="noreferrer">storage classes</a>, you only need to provide the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim" rel="noreferrer">PVC</a>; the provisioner will take the relevant information from the PVC and the StorageClass, and based on that it will create the PV.</p>
<ul>
<li>Provisioning of PV happens dynamically
<blockquote>
<p>When none of the static PVs the administrator created matches a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class in order for dynamic provisioning to occur.
<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic" rel="noreferrer">dynamic-provisioning</a></p>
</blockquote></li>
</ul>
<p>For example, here you provide the following info in the PVC:</p>
<ol>
<li><p>StorageClassName</p></li>
<li><p>Requested Storage size</p></li>
<li><p>AccessModes </p></li>
</ol>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
</code></pre>
<p>In the StorageClass you provide the following information</p>
<ol>
<li><p>Provisioner </p></li>
<li><p>Other information </p></li>
</ol>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: Immediate
</code></pre>
<ul>
<li>A PVC is a namespace-scoped Kubernetes object, but a StorageClass is a cluster-scoped one. Clusters usually also have a default StorageClass, so when you do not specify the StorageClass name in your PVC, the PV will be provisioned from the default StorageClass.</li>
</ul>
<p><code>kubectl get sc,pvc,pv</code> will provide the relevant information.</p>
|
<p>For logs, I mount a volume from the host onto the pod. This is written in the deployment YAML.
But if my 2 pods run on the same host, there will be a conflict, as both pods will produce log files with the same name.
Can I use some dynamic variables in the deployment file so that the mount on the host is created with a different name for each pod?</p>
| <p>You can use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath-with-expanded-environment-variables" rel="nofollow noreferrer">subPathExpr</a> to make the mount path unique per pod; this is one of the intended use cases of the feature. As of Kubernetes 1.14 it is alpha.</p>
<blockquote>
<p>In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods, using the pod name from the Downward API. The host directory /var/log/pods/pod1 is mounted at /logs in the container.</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
</code></pre>
|
<p>While trying to understand <strong>Kubernetes networking</strong>, one point has confused me: why <strong>doesn't Kubernetes handle pod-to-pod communication itself, built in?</strong></p>
<p>As per the docs - <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/</a> </p>
<p>There are <strong>4 distinct networking problems</strong> to address:</p>
<ol>
<li><strong>Highly-coupled container-to-container communications</strong>: this is solved
by pods and localhost communications.</li>
<li><strong>Pod-to-Pod communications</strong>: this is the primary focus of this
document.</li>
<li><strong>Pod-to-Service communications</strong>: this is covered by services.</li>
<li><strong>External-to-Service communications</strong>: this is covered by services.</li>
</ol>
<p>When Kubernetes can <strong>handle all the other problems</strong> (mentioned above) of networking, <strong>why does pod-to-pod communication need to be handled by other <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">plugins</a></strong> like ACI, Cilium, Flannel, Jaguar and so on?</p>
<p>I would like to know: is there <strong>any specific reason for such an architecture</strong>?</p>
| <p>Agree with Tim above. Kubernetes in general is mostly an abstraction and orchestration layer over compute, storage and networking for developers, so that they don't have to be aware of the implementation. The implementation itself is tied to the underlying infrastructure, and Kubernetes just defines the interfaces for it (CRI for compute/containers, CSI for storage, and CNI for networking).</p>
<p>By just defining the interface, the implementations can evolve independently without breaking the contract. For example, in the future it might become possible to offload pod-to-pod networking to the NIC, and expecting Kubernetes itself to evolve with such a technology change would be a big ask. By not being intimately tied to the implementation, Kubernetes allows the technology in each layer to develop at an accelerated pace.</p>
|
<p>I have a YAML file which creates a pod on execution. This pod extracts data from one of our internal systems and uploads it to GCP. It takes around 12 minutes to do so, after which the status of the pod changes to 'Completed'; however, I would like to delete this pod once it has completed.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: xyz
spec:
  restartPolicy: Never
  volumes:
  - name: mount-dir
    hostPath:
      path: /data_in/datos/abc/
  initContainers:
  - name: abc-ext2k8s
    image: registrysecaas.azurecr.io/secaas/oracle-client11c:11.2.0.4-latest
    volumeMounts:
    - mountPath: /media
      name: mount-dir
    command: ["/bin/sh","-c"]
    args: ["sqlplus -s CLOUDERA/MYY4nGJKsf@hal5:1531/dbmk @/media/ext_hal5_lk_org_localfisico.sql"]
  imagePullSecrets:
  - name: regcred
</code></pre>
<p>Is there a way to achieve this?</p>
| <p>Typically you don't want to create bare Kubernetes pods. The pattern you're describing of running some moderate-length task in a pod, and then having it exit, matches a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a>. (Among other properties, a job will reschedule a pod if the node it's on fails.)</p>
<p>Just switching this to a Job doesn't directly address your question, though. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="nofollow noreferrer">The documentation notes</a>:</p>
<blockquote>
<p>When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status.</p>
</blockquote>
<p>So whatever task creates the pod (or job) needs to monitor it for completion, and then delete the pod (or job). (Consider using the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">watch API</a> or equivalently the <code>kubectl get -w</code> option to see when the created objects change state.) There's no way to directly specify this in the YAML file since there is a specific intent that you can get useful information from a completed pod.</p>
<p>If this is actually a nightly task that you want to run at midnight or some such, you do have one more option. A <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> will run a job on some schedule, which in turn runs a single pod. The important relevant detail here is that <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#jobs-history-limits" rel="nofollow noreferrer">CronJobs have an explicit control for how many completed Jobs they keep</a>. So if a CronJob matches your pattern, you can set <code>successfulJobsHistoryLimit: 0</code> in the CronJob spec, and created jobs and their matching pods will be deleted immediately.</p>
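<p>A minimal sketch of that CronJob variant, reusing the container from the question (the schedule and object name here are assumptions):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: xyz-nightly
spec:
  schedule: "0 0 * * *"            # run at midnight
  successfulJobsHistoryLimit: 0    # delete successful jobs (and their pods) immediately
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: abc-ext2k8s
            image: registrysecaas.azurecr.io/secaas/oracle-client11c:11.2.0.4-latest
          imagePullSecrets:
          - name: regcred
</code></pre>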
|
<p>While trying to understand <strong>Kubernetes networking</strong>, one point has confused me: why <strong>doesn't Kubernetes handle pod-to-pod communication itself, built in?</strong></p>
<p>As per the docs - <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/</a> </p>
<p>There are <strong>4 distinct networking problems</strong> to address:</p>
<ol>
<li><strong>Highly-coupled container-to-container communications</strong>: this is solved
by pods and localhost communications.</li>
<li><strong>Pod-to-Pod communications</strong>: this is the primary focus of this
document.</li>
<li><strong>Pod-to-Service communications</strong>: this is covered by services.</li>
<li><strong>External-to-Service communications</strong>: this is covered by services.</li>
</ol>
<p>When Kubernetes can <strong>handle all the other problems</strong> (mentioned above) of networking, <strong>why does pod-to-pod communication need to be handled by other <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model" rel="nofollow noreferrer">plugins</a></strong> like ACI, Cilium, Flannel, Jaguar and so on?</p>
<p>I would like to know: is there <strong>any specific reason for such an architecture</strong>?</p>
| <p>The short answer is that networks are complex and highly customized. It's not easy to provide an efficient built-in that works everywhere. All of the cloud provider networks are different than bare-metal networks. Rather than pick a bad default we require that the end user, who really is the only person who could possibly comprehend their network, makes a decision.</p>
<p>Doing a built-in VXLAN or something might be possible but would be far from ideal for many users, and defaults tend to stick...</p>
|
<p><strong>Objective</strong>: Get some logging/monitoring on Google's
Stackdriver from a Kubernetes HA cluster
that is on premises, version 1.11.2.</p>
<p>I have been able to send logs to Elasticsearch using <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="noreferrer">Fluentd Daemonset for
Kubernetes</a>, but the
project is not supporting Stackdriver
(<a href="https://github.com/fluent/fluentd-kubernetes-daemonset/issues/6" rel="noreferrer">issue</a>).
That said, there is a docker image created for Stackdriver
(<a href="https://github.com/fluent/fluentd-kubernetes-daemonset/tree/master/docker-image/v1.2/debian-stackdriver" rel="noreferrer">source</a>),
but it does not have the daemonset. Looking at other daemonsets in this
repository, there are similarities between the different <code>fluent.conf</code> files
with the exception of the Stackdriver <code>fluent.conf</code> file that is missing any
environment variables.</p>
<p>As noted in the <a href="https://github.com/fluent/fluentd-kubernetes-daemonset/issues/6" rel="noreferrer">GitHub
issue</a>
mentioned above there is a plugin located in the Kubernetes GitHub
<a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/" rel="noreferrer">here</a>,
but it is legacy.
The docs can be found
<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/" rel="noreferrer">here</a>.</p>
<p>It states:</p>
<blockquote>
<p>"Warning: The Stackdriver logging daemon has known issues on
platforms other than Google Kubernetes Engine. Proceed at your own risk."</p>
</blockquote>
<p>Installing in this manner fails, without indication of why.</p>
<p>Some other notes. There is <a href="https://cloud.google.com/kubernetes-monitoring/" rel="noreferrer">Stackdriver Kubernetes
Monitoring</a> that clearly
states:</p>
<blockquote>
<p>"Easy to get started on any cloud or on-prem"</p>
</blockquote>
<p>on the front page, but
doesn't seem to explain how. This <a href="https://stackoverflow.com/questions/50303093/setup-stackdriver-kubernetes-monitoring-for-aws">Stack Overflow
question</a>
has someone looking to add the monitoring to his AWS cluster. It seems that it is not yet supported.</p>
<p>Furthermore, on the actual Google
Stackdriver it is also stated that</p>
<blockquote>
<p>"Works with multiple clouds and on-premises infrastructure".</p>
</blockquote>
<p>Of note, I am new to Fluentd and the Google Cloud Platform, but am pretty
familiar with administering an on-premise Kubernetes cluster.</p>
<p>Has anyone been able to get monitoring or logging to work on GCP from another platform? If so, what method was used? </p>
| <p>Consider reviewing <a href="https://docs.bindplane.bluemedora.com/docs/kubernetes-1" rel="nofollow noreferrer">this documentation</a> for using the BindPlane managed fluentd service from Google partner Blue Medora. It is available in Alpha to all Stackdriver users. It parses/forwards Kubernetes logs to Stackdriver, with additional payload markup.
<em>Disclaimer: I am employed by Blue Medora.</em></p>
|
<p>We are using helm to deploy many charts, but for simplicity let's say it is two charts. A parent chart and a child chart: </p>
<pre><code>helm/parent
helm/child
</code></pre>
<p>The parent chart has a <code>helm/parent/requirements.yaml</code> file which specifies:</p>
<pre><code>dependencies:
- name: child
repository: file://../child
version: 0.1.0
</code></pre>
<p>The child chart requires a bunch of environment variables on startup for configuration, for example in <code>helm/child/templates/deployment.yaml</code> </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    spec:
      containers:
        env:
        - name: A_URL
          value: http://localhost:8080
</code></pre>
<p>What's the best way to override the child's environment variable from the parent chart, so that I can run the parent using below command and set the <code>A_URL</code> env variable for this instance to e.g. <code>https://www.mywebsite.com</code>?</p>
<pre><code>helm install parent --name parent-release --namespace sample-namespace
</code></pre>
<p>I tried adding the variable to the parent's <code>helm/parent/values.yaml</code> file, but to no avail</p>
<pre><code>global:
  repository: my_repo
  tag: 1.0.0-SNAPSHOT
child:
  env:
  - name: A_URL
    value: https://www.mywebsite.com
</code></pre>
<p>Is the syntax of the parent's value.yaml correct? Is there a different approach? </p>
| <p>In the child chart you have to explicitly reference a value from the configuration. (Having made this change you probably need to run <code>helm dependency update</code> from the parent chart directory.)</p>
<pre class="lang-yaml prettyprint-override"><code># child/templates/deployment.yaml, in the pod spec
env:
  - name: A_URL
    value: {{ .Values.aUrl | quote }}
</code></pre>
<p>You can give it a default value for the child chart.</p>
<pre class="lang-yaml prettyprint-override"><code># child/values.yaml
aUrl: "http://localhost:8080"
</code></pre>
<p>Then in the parent chart's values file, you can provide an override value for that.</p>
<pre class="lang-yaml prettyprint-override"><code># parent/values.yaml
child:
  aUrl: "http://elsewhere"
</code></pre>
<p>You can't use Helm to override or inject arbitrary YAML, except to the extent the templates allow for it.</p>
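<p>With the values wired up this way, the override can also be supplied at install time without editing any file (Helm 2 syntax, matching the <code>--name</code> flag used in the question):</p>
<pre><code>helm install parent --name parent-release --namespace sample-namespace \
  --set child.aUrl=https://www.mywebsite.com
</code></pre>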
|
<p>Using the Nginx Ingress Controller, we would like to expose different paths of a Kubernetes service, with different security requirements.</p>
<ol>
<li><p><code>/</code> is open to the public</p></li>
<li><p><code>/white-list</code> only allows connections from a specific IP Address</p></li>
<li><p><code>/need-key</code> requires an API key</p></li>
</ol>
<p>I'm running in AWS EKS. Kubernetes version is as follows:<code>v1.12.6-eks-d69f1b</code>.</p>
<p>If we use Annotations, they apply to the entire service. Ideally I would like to apply an Annotation only to a path. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myServiceA
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myServiceA.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myServiceA
          servicePort: 80
      - path: /white-list
        backend:
          serviceName: myServiceA
          servicePort: 80
        **NEED SOMETHING HERE TO WHITELIST**
      - path: /need-key
        backend:
          serviceName: myServiceA
          servicePort: 80
        **NEED SOMETHING HERE TO USE API-KEY**
</code></pre>
<p>The results I've been having end up applying to all the paths.
I can live without API-Key as I can code that out, but ideally, I'd rather have it managed outside of the container.</p>
<p>Has anyone accomplished this with NGINX Ingress controller?</p>
| <p>To apply annotations per path, you can write one <code>ingress</code> rule for each path you want to configure. The Nginx Ingress Controller will collect those <code>ingress</code> rules itself and apply them accordingly. </p>
<p>For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myServiceA-root
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myServiceA.foo.org
    http:
      paths:
      - path: /
        backend:
          serviceName: myServiceA
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myServiceA-white-list
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/whitelist-source-range: X.X.X.X/32
spec:
  rules:
  - host: myServiceA.foo.org
    http:
      paths:
      - path: /white-list
        backend:
          serviceName: myServiceA
          servicePort: 80
...
</code></pre>
|
<p>I'm new to Kubernetes and playing around with Kubernetes RBAC. I want to restrict access to my cluster for different users. As I understood it, Service Accounts are meant for intra-cluster processes running inside pods which want to authenticate against the API.</p>
<p>So should I use User-Accounts for Buildservers and access from outside the cluster via a kubeconfig-file? Or what are the best practices in this case?</p>
<p>Is it bad to use Service-Accounts to access the cluster from remote?</p>
| <p>Kubernetes does not have a User object in its API; normal users are managed outside the cluster (for example via client certificates or an identity provider) and only referenced in RBAC bindings. For automated access from outside the cluster, such as build servers and deployments, the common recommendation is to use a ServiceAccount with narrowly scoped RBAC permissions and distribute its token via a kubeconfig file.</p>
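<p>For example, a build server could be given its own ServiceAccount, bound to a role with only the permissions it needs (the names and namespace here are hypothetical):</p>
<pre><code># create a ServiceAccount for the build server in its target namespace
kubectl create serviceaccount ci-deployer -n my-app

# grant it the built-in "edit" ClusterRole, but only within that namespace
kubectl create rolebinding ci-deployer-edit \
  --clusterrole=edit \
  --serviceaccount=my-app:ci-deployer \
  -n my-app
</code></pre>
<p>The ServiceAccount's token can then be placed in a kubeconfig file for the build server to use from outside the cluster.</p>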
|
<p>The error below is triggered when executing <code>kubectl -n gitlab-managed-apps logs install-helm</code>. </p>
<p>I've tried regenerating the certificates, and bypassing the certificate check. Somehow it is using my internal certificate instead of the certificate of the source.</p>
<pre class="lang-sh prettyprint-override"><code>root@dev # kubectl -n gitlab-managed-apps logs install-helm
+ helm init --tiller-tls --tiller-tls-verify --tls-ca-cert /data/helm/helm/config/ca.pem --tiller-tls-cert /data/helm/helm/config/cert.pem --tiller-tls-key /data/helm/helm/config/key.pem
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: x509: certificate is valid for *.tdebv.nl, not kubernetes-charts.storage.googleapis.com
</code></pre>
<p>What might be the issue here? Screenshot below is the error Gitlab is giving me (not much information either).</p>
<p><a href="https://i.stack.imgur.com/d0M6f.png" rel="noreferrer"><img src="https://i.stack.imgur.com/d0M6f.png" alt="enter image description here"></a></p>
| <p>After having the same issue I finally found the solution for it:</p>
<p><strong>In the <code>/etc/resolv.conf</code> file on your Master and Worker nodes you have to search and remove the <code>search XYZ.com</code> entry.</strong></p>
<p>If you are using Jelastic you have to remove this entry every time after a restart. It gets added by Jelastic automatically. I already contacted them so maybe they will fix it soon.</p>
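<p>For illustration, with a hypothetical domain, the change looks like this:</p>
<pre><code># /etc/resolv.conf -- before (the search line causes the bad lookups)
search xyz.com
nameserver 8.8.8.8

# /etc/resolv.conf -- after
nameserver 8.8.8.8
</code></pre>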
|
<p>I know that using Terraform to deploy your Infra and Kubernetes Cluster is the way to go. However, does it make any sense to use Terraform to also deploy applications on kubernetes cluster? Is this also the way to go?</p>
<p>Thank you</p>
| <p>Though it's not devoid of its complexities, a better pipeline is the Jenkins + Helm + Spinnaker combo.</p>
<ul>
<li>Jenkins - CI </li>
<li>Helm - templating and chart build </li>
<li>Spinnaker - deploy</li>
</ul>
<p>Pros:</p>
<ul>
<li><p>Spinnaker is an excellent tool for deployment to Kubernetes.</p></li>
<li><p>It can be made aware of multiple environments, so cloud pipelines are
easier to build.</p></li>
<li><p>Natively integrates with most of the cloud providers, like AWS, Azure, PCF, etc.</p></li>
</ul>
<p>Cons:</p>
<ul>
<li>On the flip side, it's a somewhat heavy tool, as it is composed of a
bunch of microservices, and configuration can get under your skin.</li>
</ul>
|
<p>Using <a href="https://kubectl.docs.kubernetes.io/pages/app_management/apply.html#usage" rel="noreferrer"><code>kubectl apply -k</code></a>, you can overlay <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#resources" rel="noreferrer">Resource</a> configs (that you have already defined). Can you create resources as well?</p>
<p>In my specific case I want to create a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="noreferrer">local Volume</a> for the development environment. I do not have this Resource in the base folder though. </p>
<p>My folder structure is like this:</p>
<blockquote>
<pre><code>~/someApp
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── development
    │   ├── cpu_count.yaml
    │   ├── kustomization.yaml
    │   ├── replica_count.yaml
    │   └── volume.yaml <--- *Is this possible*?
    └── production
        ├── cpu_count.yaml
        ├── kustomization.yaml
        └── replica_count.yaml
</code></pre>
</blockquote>
| <p>In your <code>overlays/development/kustomization.yaml</code> you can add:</p>
<pre><code>resources:
- volume.yaml
</code></pre>
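<p>Putting it together, the development overlay's kustomization might look like this (a sketch: the file names are taken from the tree in the question, and the <code>bases</code>/<code>patchesStrategicMerge</code> fields follow older kustomize syntax):</p>
<pre><code># overlays/development/kustomization.yaml
bases:
- ../../base
resources:
- volume.yaml
patchesStrategicMerge:
- cpu_count.yaml
- replica_count.yaml
</code></pre>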
|
<p>What should I do with pods after adding a node to the Kubernetes cluster?</p>
<p>I mean, ideally I want some of them to be stopped and started on the newly added node. Do I have to manually pick some for stopping and hope that they'll be scheduled for restarting on the newly added node?</p>
<p>I don't care about affinity, just semi-even distribution.</p>
<p>Maybe there's a way to always have the number of pods be equal to the number of nodes?</p>
<p><strong>For the sake of having an example:</strong></p>
<p>I'm using juju to provision small Kubernetes cluster on AWS. One master and two workers. This is just a playground.</p>
<p>My application is apache serving PHP and static files. So I have a deployment, a service of type NodePort and an ingress using nginx-ingress-controller.</p>
<p>I've turned off one of the worker instances and my application pods were recreated on the one that remained working.</p>
<p>I then started the instance back, master picked it up and started nginx ingress controller there. But when I tried deleting my application pods, they were recreated on the instance that kept running, and not on the one that was restarted.</p>
<p>Not sure if it's important, but I don't have any DNS setup. Just added IP of one of the instances to /etc/hosts with host value from my ingress.</p>
| <p><a href="https://github.com/kubernetes-incubator/descheduler" rel="nofollow noreferrer">descheduler</a>, a Kubernetes incubator project, could be helpful. The following is from its introduction:</p>
<p>As Kubernetes clusters are very dynamic and their state changes over time, it may be desirable to move already-running pods to other nodes for various reasons:</p>
<ul>
<li>Some nodes are under or over utilized.</li>
<li>The original scheduling decision does not hold true any more, as taints or labels are added to or removed from nodes, pod/node affinity requirements are not satisfied any more.</li>
<li>Some nodes failed and their pods moved to other nodes.</li>
<li><strong>New nodes are added to clusters.</strong></li>
</ul>
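<p>As a hedged illustration, the descheduler is driven by a policy file; the sketch below uses the <code>LowNodeUtilization</code> strategy to evict pods from busy nodes so the default scheduler can respread them onto the new node (the threshold numbers are placeholders, not recommendations):</p>
<pre><code>apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:       # nodes below all of these count as underutilized
          "cpu": 20
          "memory": 20
          "pods": 20
        targetThresholds: # nodes above any of these may have pods evicted
          "cpu": 50
          "memory": 50
          "pods": 50
</code></pre>
<p>Evicted pods are recreated by their Deployment, and a freshly added (underutilized) node becomes a likely target for the default scheduler.</p>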
|
<p>I am trying to schedule a Python script through a Kubernetes CronJob, but I am not able to understand how to do it. I am able to run a simple command like <code>echo Hello World</code>, but that's not what I want.</p>
<p>I tried using this specification:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: test
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: "Forbid"
failedJobsHistoryLimit: 10
startingDeadlineSeconds: 600 # 10 min
jobTemplate:
spec:
backoffLimit: 0
activeDeadlineSeconds: 3300 # 55min
template:
spec:
containers:
- name: hello
image: python:3.6-slim
command: ["python"]
args: ["./main.py"]
restartPolicy: Never
</code></pre>
<p>But then I am not able to run it because <code>main.py</code> is not found. I understand that relative paths are not supported, so I hardcoded the path, but then I could not find my home directory: I tried <code>ls /home/</code> and my folder is not visible there, so I cannot access my project repository.</p>
<p>Initially I was planning to run bash script which can do:</p>
<ol>
<li>Install requirements by <code>pip install requirements.txt</code></li>
<li>Then run Python script</li>
</ol>
<p>But I am not sure how I can do this with Kubernetes; it is confusing to me.</p>
<p>In short, I want a k8s CronJob that first installs the requirements and then runs my Python script.</p>
| <p>Where is the startup script <code>./main.py</code> located? Is it present in the image? The stock <code>python:3.6-slim</code> image does not contain your project files, so there is nothing for <code>python ./main.py</code> to run. You need to build a new image using <code>python:3.6-slim</code> as the base image, add your Python script (and its requirements) to it, and push it to a registry. Then you will be able to run it from the k8s CronJob.</p>
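<p>A minimal sketch of such an image (the file names <code>requirements.txt</code> and <code>main.py</code> and the registry path are assumptions; adjust them to your repository):</p>
<pre><code>FROM python:3.6-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY main.py .
CMD ["python", "main.py"]
</code></pre>
<p>Build and push it, e.g. <code>docker build -t yourregistry/your-cron:latest . &amp;&amp; docker push yourregistry/your-cron:latest</code>, then set <code>image: yourregistry/your-cron:latest</code> in the CronJob spec and drop the <code>command</code>/<code>args</code> overrides.</p>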
|
<p>I'm new to Kubernetes. In my project I'm trying to use Kustomize to generate configMaps for my deployment. Kustomize adds a hash after the configMap name, but I can't get it to also change the deployment to use that new configMap name.</p>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: env-receiver-deployment
labels:
app: env-receiver-app
project: env-project
spec:
replicas: 1
selector:
matchLabels:
app: env-receiver-app
template:
metadata:
labels:
app: env-receiver-app
project: env-project
spec:
containers:
- name: env-receiver-container
image: eu.gcr.io/influxdb-241011/env-receiver:latest
resources: {}
ports:
- containerPort: 8080
envFrom:
- configMapRef:
name: env-receiver-config
args: [ "-port=$(ER_PORT)", "-dbaddr=$(ER_DBADDR)", "-dbuser=$(ER_DBUSER)", "-dbpass=$(ER_DBPASS)" ]
</code></pre>
<p>kustomize.yml:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: env-receiver-config
literals:
- ER_PORT=8080
- ER_DBADDR=http://localhost:8086
- ER_DBUSER=writeuser
- ER_DBPASS=writeuser
</code></pre>
<p>Then I run kustomize, apply the deployment and check if it did apply the environment.</p>
<pre><code>$ kubectl apply -k .
configmap/env-receiver-config-258g858mgg created
$ kubectl apply -f k8s/deployment.yml
deployment.apps/env-receiver-deployment unchanged
$ kubectl describe pod env-receiver-deployment-76c678dcf-5r2hl
Name: env-receiver-deployment-76c678dcf-5r2hl
[...]
Environment Variables from:
env-receiver-config ConfigMap Optional: false
Environment: <none>
[...]
</code></pre>
<p>But it still gets its environment variables from: <code>env-receiver-config</code>, not <code>env-receiver-config-258g858mgg</code>.</p>
<p>My current workaround is to disable the hash suffixes in the <code>kustomize.yml</code>.</p>
<pre><code>generatorOptions:
disableNameSuffixHash: true
</code></pre>
<p>It looks like I'm missing a step to tell the deployment the name of the new configMap. What is it?</p>
| <p>It looks like the problem comes from the fact that you generate the ConfigMap through kustomize but apply the deployment via kubectl directly, without going through kustomize.</p>
<p>Basically, kustomize will look for all the <code>env-receiver-config</code> in all your resources and replace them by the hash suffixed version.</p>
<p>For it to work, <strong>all</strong> your resources have to go through kustomize.
To do so, you need to add to your <code>kustomization.yml</code>:</p>
<pre><code>resources:
- yourDeployment.yml
</code></pre>
<p>and then just run <code>kubectl apply -k .</code>. It should create both the ConfigMap and the Deployment using the right ConfigMap name.</p>
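<p>Putting it together, the <code>kustomize.yml</code> could look like this (the relative path <code>k8s/deployment.yml</code> is an assumption based on the commands in the question):</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- k8s/deployment.yml
configMapGenerator:
- name: env-receiver-config
  literals:
  - ER_PORT=8080
  - ER_DBADDR=http://localhost:8086
  - ER_DBUSER=writeuser
  - ER_DBPASS=writeuser
</code></pre>
<p>Now a single <code>kubectl apply -k .</code> renders both objects, and kustomize rewrites the <code>configMapRef</code> in the Deployment to the hash-suffixed name.</p>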
|
<p>Normally you'd do <code>ibmcloud login</code> → <code>ibmcloud ks cluster-config mycluster</code> → copy and paste the <code>export KUBECONFIG=</code> and then you can run your <code>kubectl</code> commands.</p>
<p>But if this were being done for some automated DevOps pipeline outside of IBM Cloud, what is the method for authenticating and getting access to the cluster?</p>
| <p>You should not copy your kubeconfig to the pipeline. Instead you can create a service account with permissions to a particular namespace and then use its credentials to access the cluster.</p>
<p>What I do is create a service account and role binding like this:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab-tez-dev # account name
namespace: tez-dev #namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tez-dev-full-access #role
namespace: tez-dev
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods", "services"] #resources to which permissions are granted
verbs: ["*"] # what actions are allowed
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: tez-dev-view
namespace: tez-dev
subjects:
- kind: ServiceAccount
name: gitlab-tez-dev
namespace: tez-dev
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: tez-dev-full-access
</code></pre>
<p>Then you can get the token for the service account using:</p>
<pre><code>kubectl describe secrets -n <namespace> gitlab-tez-dev-token-<value>
</code></pre>
<p>The output:</p>
<pre><code>Name: gitlab-tez-dev-token-lmlwj
Namespace: tez-dev
Labels: <none>
Annotations: kubernetes.io/service-account.name: gitlab-tez-dev
kubernetes.io/service-account.uid: 5f0dae02-7b9c-11e9-a222-0a92bd3a916a
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1042 bytes
namespace: 7 bytes
token: <TOKEN>
</code></pre>
<p>In the above command, <code>namespace</code> is the namespace in which you created the account and the <code>value</code> is the unique value which you will see when you do </p>
<pre><code>kubectl get secret -n <namespace>
</code></pre>
<p>Copy the token to your pipeline environment variables or configuration and then you can access it in the pipeline. For example, in gitlab I do (only the part that is relevant here):</p>
<pre><code>k8s-deploy-stage:
stage: deploy
image: lwolf/kubectl_deployer:latest
services:
- docker:dind
only:
refs:
- dev
script:
######## CREATE THE KUBECFG ##########
- kubectl config set-cluster ${K8S_CLUSTER_NAME} --server=${K8S_URL}
- kubectl config set-credentials gitlab-tez-dev --token=${TOKEN}
- kubectl config set-context tez-dev-context --cluster=${K8S_CLUSTER_NAME} --user=gitlab-tez-dev --namespace=tez-dev
- kubectl config use-context tez-dev-context
####### NOW COMMANDS WILL BE EXECUTED AS THE SERVICE ACCOUNT #########
- kubectl apply -f deployment.yml
- kubectl apply -f service.yml
- kubectl rollout status -f deployment.yml
</code></pre>
|
<p>I'm new to Kubernetes and playing around with Kubernetes RBAC. I want to restrict access to my cluster for different users. As I understood, Service Accounts are meant for intra-cluster processes running inside pods that want to authenticate against the API.</p>
<p>So should I use User-Accounts for Buildservers and access from outside the cluster via a kubeconfig-file? Or what are the best practices in this case?</p>
<p>Is it bad to use Service-Accounts to access the cluster from remote?</p>
| <p>You should use Kubernetes' ordinary user authentication system for authenticating automation agents that run outside the cluster. Service accounts are only usable by pods running inside the cluster (unless you go very far out of your way to "borrow" a service account token). You can do things like set up a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#rolebinding-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer">RoleBinding</a> to give special permission to create and delete Kubernetes objects to your CD system's user.</p>
|
<p>What is the difference between kube-nginx (I am not talking about the nginx ingress controller here) and kube-proxy?</p>
<p>I've seen a recent deployment where all nodes in the cluster run one kube-proxy (which is used for accessing services running on the nodes, according to <a href="https://kubernetes.io/docs/concepts/cluster-administration/proxies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/proxies/</a>) and one kube-nginx pod, so they are used for different purposes.</p>
| <p>As mentioned by the community above and in the Kubespray documentation <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ha-mode.md" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>K8s components require a loadbalancer to access the apiservers via a reverse proxy. Kubespray includes support for an nginx-based proxy that resides on each non-master Kubernetes node. This is referred to as localhost loadbalancing. It is less efficient than a dedicated load balancer because it creates extra health checks on the Kubernetes apiserver, but is more practical for scenarios where an external LB or virtual IP management is inconvenient. This option is configured by the variable loadbalancer_apiserver_localhost (defaults to True. Or False, if there is an external loadbalancer_apiserver defined). You may also define the port the local internal loadbalancer uses by changing, loadbalancer_apiserver_port. This defaults to the value of kube_apiserver_port. It is also important to note that Kubespray will only configure kubelet and kube-proxy on non-master nodes to use the local internal loadbalancer.</p>
</blockquote>
|
<p>I first ssh into the Master Node.</p>
<p>When I run <code>kubectl get svc</code></p>
<p>I get the output for NAME, TYPE, CLUSTER-IP, EXTERNAL-IP, PORT(S), AGE:</p>
<pre><code>python-app-service LoadBalancer 10.110.157.42 <pending> 5000:30008/TCP 68m
</code></pre>
<p>I then run <code>curl 10.110.157.42:5000</code>
and I get the following message:</p>
<p><code>curl: (7) Failed connect to 10.110.157.42:5000; Connection refused</code></p>
<p>Below I posted my Dockerfile, deployment file, service file, and python application file. When I run the docker image, it works fine. However when I try to apply a Kubernetes service to the pod, I am unable to make calls. What am I doing wrong? Also please let me know if I left out any necessary information. Thank you!</p>
<p>Kubernetes was created with KubeAdm using Flannel CNI</p>
<p>Deployment yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: python-api
labels:
app: my-python-app
type: back-end
spec:
replicas: 1
selector:
matchLabels:
app: my-python-app
type: backend
template:
metadata:
name: python-api-pod
labels:
app: my-python-app
type: backend
spec:
containers:
- name: restful-python-example
image: mydockerhub/restful-python-example
ports:
- containerPort: 5000
</code></pre>
<p>Service yaml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: python-app-service
spec:
type: LoadBalancer
ports:
- port: 5000
targetPort: 5000
nodePort: 30008
selector:
app: my-python-app
type: backend
</code></pre>
<p>Python application source - restful.py:</p>
<pre><code>#!/usr/bin/python3
from flask import Flask, jsonify, request, abort
from flask_restful import Api, Resource
import jsonpickle
app = Flask(__name__)
api = Api(app)
# Creating an empty dictionary and initializing user id to 0.. will increment every time a person makes a POST request.
# This is bad practice but only using it for the example. Most likely you will be pulling this information from a
# database.
user_dict = {}
user_id = 0
# Define a class and pass it a Resource. These methods require an ID
class User(Resource):
@staticmethod
def get(path_user_id):
if path_user_id not in user_dict:
abort(400)
return jsonify(jsonpickle.encode(user_dict.get(path_user_id, "This user does not exist")))
@staticmethod
def put(path_user_id):
update_and_add_user_helper(path_user_id, request.get_json())
@staticmethod
def delete(path_user_id):
user_dict.pop(path_user_id, None)
# Get all users and add new users
class UserList(Resource):
@staticmethod
def get():
return jsonify(jsonpickle.encode(user_dict))
@staticmethod
def post():
global user_id
user_id = user_id + 1
update_and_add_user_helper(user_id, request.get_json())
# Since post and put are doing pretty much the same thing, I extracted the logic from both and put it in a separate
# method to follow DRY principles.
def update_and_add_user_helper(u_id, request_payload):
name = request_payload["name"]
age = request_payload["age"]
address = request_payload["address"]
city = request_payload["city"]
state = request_payload["state"]
zip_code = request_payload["zip"]
user_dict[u_id] = Person(name, age, address, city, state, zip_code)
# Represents a user's information
class Person:
def __init__(self, name, age, address, city, state, zip_code):
self.name = name
self.age = age
self.address = address
self.city = city
self.state = state
self.zip_code = zip_code
# Add a resource to the api. You need to give the class name and the URI.
api.add_resource(User, "/users/<int:path_user_id>")
api.add_resource(UserList, "/users")
if __name__ == "__main__":
app.run()
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM python:3
WORKDIR /usr/src/app
RUN pip install flask
RUN pip install flask_restful
RUN pip install jsonpickle
COPY . .
CMD python restful.py
</code></pre>
<p><code>kubectl describe svc python-app-service</code></p>
<pre><code>Name: python-app-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=my-python-app,type=backend
Type: LoadBalancer
IP: 10.110.157.42
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30008/TCP
Endpoints: 10.244.3.24:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
| <p>So the reason I was unable to connect was because I never exposed the port in my Dockerfile.</p>
<p>My Dockerfile should have been:</p>
<pre><code>FROM python:3
WORKDIR /usr/src/app
RUN pip install flask
RUN pip install flask_restful
RUN pip install jsonpickle
COPY . .
EXPOSE 5000
CMD python restful.py
</code></pre>
<p>(Note that <code>EXPOSE</code> is documentation only and does not by itself change connectivity. If a rebuild with <code>EXPOSE</code> appears to fix the problem, the likelier cause was something else picked up by the rebuild; a common culprit with Flask is that <code>app.run()</code> binds to <code>127.0.0.1</code> by default, so calling <code>app.run(host="0.0.0.0")</code> is what makes the app reachable from outside the container.)</p>
|
<p>I'm setting up a kubernetes deployment with an image that will execute docker commands (<code>docker ps</code> etc.).</p>
<p>My yaml looks as the following:</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: discovery
namespace: kube-system
labels:
discovery-app: kubernetes-discovery
spec:
selector:
matchLabels:
discovery-app: kubernetes-discovery
strategy:
type: Recreate
template:
metadata:
labels:
discovery-app: kubernetes-discovery
spec:
containers:
- image: docker:dind
name: discover
ports:
- containerPort: 8080
name: my-awesome-port
imagePullSecrets:
- name: regcred3
volumes:
- name: some-volume
emptyDir: {}
serviceAccountName: kubernetes-discovery
</code></pre>
<p>Normally I would run a docker container as follows:</p>
<p><code>docker run -v /var/run/docker.sock:/var/run/docker.sock docker:dind</code></p>
<p>Now, kubernetes yaml supports <code>commands</code> and <code>args</code> but for some reason does not support <code>options</code>.</p>
<p>What is the right thing to do?</p>
<p>Perhaps I should configure a volume, but then, is it volumeMount or just a volume?</p>
<p>I am new with kubernetes so it is important for me to do it the right way.</p>
<p>Thank you</p>
| <p>You want to add the volume to the container.</p>
<pre><code>spec:
containers:
- name: discover
image: docker:dind
volumeMounts:
- name: dockersock
mountPath: "/var/run/docker.sock"
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
</code></pre>
|
<p>I am trying to scrape data from the Istio Envoy sidecars on port 15090 using Prometheus.</p>
<p>My current setup is Istio 1.1.5 with a standalone Prometheus (not the one that comes bundled with Istio).</p>
<p>Envoy sidecars are attached to multiple pods in different namespaces, and I am not sure how to scrape data on a specific port across the many istio-proxy containers.</p>
<p>I tried using a ServiceMonitor to scrape data from the Istio Envoy and it's not working.</p>
<p>The ServiceMonitor I am currently trying:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: istio
  name: envoy
  namespace: monitoring
spec:
  endpoints:
  - interval: 5s
    path: /metrics
    port: http-envoy-prom
  jobLabel: envoy
  namespaceSelector:
    matchNames:
    - istio-system
  selector:
    matchLabels:
      istio: mixer
</code></pre>
<p>Can somebody help: how do I scrape data from port 15090 on multiple istio-proxy containers attached to multiple pods?</p>
| <p>Apart from the ServiceMonitor you also need to create the following scrape config for the Envoy proxies:</p>
<pre><code> # Scrape config for envoy stats
- job_name: 'envoy-stats'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-envoy-prom'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:15090
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
metric_relabel_configs:
# Exclude some of the envoy metrics that have massive cardinality
# This list may need to be pruned further moving forward, as informed
# by performance and scalability testing.
- source_labels: [ cluster_name ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ tcp_prefix ]
regex: '(outbound|inbound|prometheus_stats).*'
action: drop
- source_labels: [ listener_address ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_listener_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ http_conn_manager_prefix ]
regex: '(.+)'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tls.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_tcp_downstream.*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_http_(stats|admin).*'
action: drop
- source_labels: [ __name__ ]
regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*'
action: drop
</code></pre>
<p>Or use this <a href="https://github.com/istio/tools/blob/master/perf/istio-install/setup_prometheus.sh" rel="nofollow noreferrer">script</a> </p>
|
<p>In most tutorials, publications, and even some Docker blog posts, the container engine is illustrated as an entire layer that sits on top of the OS. It is also described as a hypervisor or a virtualization layer, and it's sometimes even called lightweight virtualization or OS-level virtualization.</p>
<p>But the truth is, the apps are running on the OS directly and they all share the same kernel. The container engine does not interpret or translate any code to run on the underlying OS.</p>
<p>I've also read <a href="https://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-virtual-machine">How is Docker different from a virtual machine</a> but it's mainly about the difference between virtual machines and containers but my question is specifically about container engines.</p>
<p>Is it correct to illustrate the container engine as an entire layer between the OS and the applications (figure 1), or should it be considered just another process running next to other applications on top of the OS (figure 2)?</p>
<p><a href="https://i.stack.imgur.com/VQ75i.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VQ75i.jpg" alt="container architecture" /></a></p>
| <blockquote>
<p>Is a container engine an entire layer between OS and applications?</p>
</blockquote>
<p>No.</p>
<blockquote>
<p>Is a container engine another application running next to other applications on top of OS?</p>
</blockquote>
<p>This definition is better.</p>
<p><a href="https://stackoverflow.com/users/1553450/fatherlinux">Scott McCarty</a> has the following slide in one of his presentations:</p>
<p><a href="https://i.stack.imgur.com/sp69G.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sp69G.jpg" alt="enter image description here" /></a></p>
<p><a href="https://docs.google.com/presentation/d/1S-JqLQ4jatHwEBRUQRiA5WOuCwpTUnxl2d1qRUoTz5g/edit#slide=id.g566dfb1a47_0_22" rel="nofollow noreferrer">link to this slide</a></p>
<hr />
<p>A bit of history follows which might help with terms like <code>docker daemon</code>, <code>containerd</code>, <code>runc</code>, <code>rkt</code>...</p>
<p>from: <a href="https://coreos.com/rkt/docs/latest/rkt-vs-other-projects.html" rel="nofollow noreferrer">CoreOS documentation</a>:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/Jhnnh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jhnnh.png" alt="enter image description here" /></a></p>
<p>Prior to Docker version 1.11, the Docker Engine daemon downloaded container images, launched container processes, exposed a remote API, and acted as a log collection daemon, all in a <strong>centralized process running as root</strong>.</p>
<p>While such a centralized architecture is convenient for deployment, it does not follow best practices for Unix process and privilege separation; further, it makes Docker difficult to properly integrate with Linux init systems such as upstart and systemd.</p>
<p>Since version 1.11, the Docker daemon no longer handles the execution of containers itself. Instead, this is now handled by <a href="https://containerd.io/" rel="nofollow noreferrer">containerd</a>. More precisely, the <strong>Docker daemon</strong> prepares the image as an <a href="https://www.opencontainers.org/" rel="nofollow noreferrer">Open Container Image</a> (OCI) bundle and makes an API call to containerd to start the OCI bundle. containerd then starts the container using <a href="https://github.com/opencontainers/runc" rel="nofollow noreferrer">runC</a></p>
</blockquote>
<p>Further reading:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/41645665/how-containerd-compares-to-runc">How containerd compares to runC</a></li>
<li><a href="https://docs.google.com/presentation/d/1S-JqLQ4jatHwEBRUQRiA5WOuCwpTUnxl2d1qRUoTz5g/edit#slide=id.g566dfb1a47_0_0" rel="nofollow noreferrer">Linux Container Internals 2.0</a></li>
<li><a href="https://blog.docker.com/2017/08/what-is-containerd-runtime/" rel="nofollow noreferrer">What is containerd ?</a></li>
<li><a href="https://medium.com/@adriaandejonge/moving-from-docker-to-rkt-310dc9aec938" rel="nofollow noreferrer">Moving from Docker to rkt</a></li>
</ul>
|
<p>Is there a UUID attribute (or something similar) that is either automatically assigned to all K8S pods, or that can be globally configured for a cluster?</p>
<p>I have an application that acts as a database (DB) front-end in my cluster, where some (but not all pods) submit a catalog of digital "assets" they make accessible to the cluster. The pods are expected to "clean up" their entries they created in the DB before shutting down, but there are cases where the pods could crash, preventing the shutdown/cleanup code from executing.</p>
<p>Even if I <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">use a pre-stop hook</a>, the pod has already crashed, and I can no longer access data within it to know what assets it has reported to the DB application.</p>
<p>I was considering having the DB application, upon receiving an incoming connection, query the remote pod for IP, unique ID/UUID of some sort, etc.; and map that data to the DB information submitted by the pod. I can then have the DB application periodically poll the cluster via the K8S REST API to see if a pod with the corresponding UUID still exists, and to clear out individual records if the pod-UUID associated with them is no longer present.</p>
<hr />
<h1>Question(s):</h1>
<ol>
<li>Is there a way to automatically assign such a UUID to all pods in the cluster at a global level (i.e. without having to modify the YAML files for all my pods, deployments, etc).</li>
<li>Is there some other K8S-supplied mechanism/feature that would provide a better means of post-mortem cleanup of pod resources that live in a separate/external pod?</li>
</ol>
| <p>You can access a pod's UID with <code>kubectl get pods/$POD_NAME -o yaml</code>; it is under <code>metadata.uid</code>.</p>
<p>It can also be exposed to the pod as an <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">env var</a>, but make sure your Kubernetes version contains this <a href="https://github.com/kubernetes/kubernetes/commit/0f65b218a07e3a5edda16484604dcf1f76bae793" rel="nofollow noreferrer">commit</a>.</p>
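<p>For example, the UID can be injected with the downward API (the container name and image below are placeholders):</p>
<pre><code>spec:
  containers:
  - name: app
    image: my-app:latest
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
</code></pre>
<p>Each pod can then report <code>POD_UID</code> to your DB front-end when it registers its assets, and the front-end can poll the API server for that UID to detect crashed pods.</p>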
|
<p>I've recently asked a question about how to structure multiple applications which should be bundled together and am thinking of going down a route of having each separate application have it's own Helm chart and own ingress controller. This would allow for CI/CD to update each component easily without affecting the rest.</p>
<p>I was also thinking about using an "Umbrella" chart to specify versions of the other charts when it comes to actual releases, and keeping that in another repo.</p>
<p>However, when using multiple Helm charts with an ingress controller for each, how would I handle some objects which are shared between them? Such as secrets, TLS issuers and volumes shared between these? I could put them in the umbrella chart but then I'd lose the benefit of being able to use this in CI/CD. I could duplicate them inside each individual chart but that seems wrong to me. I'm thinking that I'd need another Helm chart to manage these resources alone.</p>
<p>Is there a suggested standard for doing this or what way would you reccomend?</p>
| <p>You can create one Helm chart with subcharts for app A, app B, and the shared resources. You can define global values for the shared objects and use those names in the templates of the app A and app B subcharts.</p>
<p>For more information about subcharts and global values, check <a href="https://github.com/helm/helm/blob/release-2.16/docs/chart_template_guide/subcharts_and_globals.md" rel="nofollow noreferrer">this</a>.</p>
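<p>A small sketch of sharing a value this way (all names here are assumptions):</p>
<pre><code># umbrella/values.yaml
global:
  tlsSecretName: shared-tls

# umbrella/charts/app-a/templates/ingress.yaml (fragment)
  tls:
  - hosts:
    - app-a.example.com
    secretName: {{ .Values.global.tlsSecretName }}
</code></pre>
<p>Because <code>.Values.global</code> is visible to every subchart, app A and app B can reference the same secret without duplicating its definition.</p>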
|
<p>I have a few questions regarding Kubernetes: How to secure Kubernetes cluster?</p>
<p>My plan is to develop an application that secures a Kubernetes cluster by default. I have written and tested a few Network Policies successfully.
As a second step I want to set this information dynamically in my application based on the cloud provider and so on.</p>
<p>1.) I want to block access the host network as well as the meta data services (my cluster runs on AWS):</p>
<pre><code> egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 10.250.0.0/16 # host network
- 169.254.169.254/32 # metadata service
</code></pre>
<p>Does anyone know how I can access the host network dynamically?
I found an issue that says that you must use the Meta Data Service: <a href="https://github.com/kubernetes/kubernetes/issues/24657" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/24657</a> </p>
<p>Does anyone know how I can find out on which cloud provider I am currently running?
Based on that information, I want to set the meta data service IP.</p>
<p>2.) I want to block access to the "kube-system" namespace:</p>
<pre><code> egress:
- to:
- podSelector:
matchExpressions:
- key: namespace
operator: NotIn
values:
- kube-system
</code></pre>
<p>Does anyone know how I can actually enforce this denial of access?
As far as I understood, the key labeled "namespace" is just a name that I chose. How does Kubernetes know that I actually mean the namespace and nothing else?</p>
<p>3.) I want to block Internet access:</p>
<pre><code>spec:
podSelector: {}
policyTypes:
- Egress
</code></pre>
<p>Does anyone know if something like the DNS server in the DMZ zone is still reachable?</p>
<p>4.) I want to block communication to pods with a different namespace:</p>
<pre><code> egress:
- to:
- namespaceSelector:
matchLabels:
project: default
</code></pre>
<p>Here, I developed a controller that set the namespace dynamically.</p>
| <p>Your ideas are good in terms of a least-privilege policy but the implementation is problematic due to the following reasons.</p>
<ol>
<li><p>The logic you are trying to achieve is beyond the capabilities of Kubernetes network policies. It is very difficult to combine multiple block and allow policies in k8s without them conflicting with each other. For example, your first snippet allows access to any IP outside of the cluster and then your 3rd question is about blocking access to the internet - these two policies can't work simultaneously. </p></li>
<li><p>You shouldn't block access to the kube-system namespace because that's where the k8s DNS service is deployed and blocking access to it will prevent all communications in the cluster.</p></li>
<li><p>To answer your 1st question specifically: </p>
<ul>
<li>How I can access the host network dynamically?<br>
The cluster subnet is defined when you deploy it on AWS - you should store it during creation and inject it to your policies. Alternatively, you may be able to get it by calling an AWS API.
You can also get the cluster node IPs from Kubernetes: <code>kubectl get nodes -o wide</code> </li>
<li>How I can find out on which cloud provider I am currently running?<br>
Kubernetes doesn't know which platform it is running on, but you can guess it based on the node name prefix, for example: aks-nodepool1-18370774-0 or gke-...</li>
</ul></li>
<li><p>Your 4th point about blocking access between namespaces is good but it would be better to do it with an ingress policy like this:</p></li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all
namespace: default
spec:
podSelector: {}
policyTypes:
- Ingress
</code></pre>
<p>For more details, I recommend this blog post that explains the complexities of k8s network policies: <a href="https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d" rel="nofollow noreferrer">https://medium.com/@reuvenharrison/an-introduction-to-kubernetes-network-policies-for-security-people-ba92dd4c809d</a></p>
|
<p>I want to enable <a href="https://cloud.google.com/monitoring/kubernetes-engine/" rel="noreferrer">Kubernetes Engine Monitoring</a> on clusters but I don't see that as a field in Terraform's <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html" rel="noreferrer"><code>google_container_cluster</code> resource</a>.</p>
<p>Is Kubernetes Engine Monitoring managed with another resource?</p>
| <p>You can use the newer Kubernetes Monitoring Service by setting <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html#monitoring_service" rel="noreferrer"><code>monitoring_service</code></a> to <code>monitoring.googleapis.com/kubernetes</code> instead of the default <code>monitoring.googleapis.com</code>.</p>
<p>When enabling this you will also need to set <code>logging_service</code> to <code>logging.googleapis.com/kubernetes</code> as well.</p>
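<p>For illustration, the relevant fields on the resource would look something like this (cluster name and location are placeholders):</p>
<pre><code>resource "google_container_cluster" "primary" {
  name     = "my-cluster"    # placeholder
  location = "us-central1"   # placeholder

  # enable the newer Kubernetes-native Stackdriver integration
  logging_service    = "logging.googleapis.com/kubernetes"
  monitoring_service = "monitoring.googleapis.com/kubernetes"
}
</code></pre>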
|
<p>I ran the command
<strong>systemctl stop kubelet</strong>
and then tried to start it again with
<strong>systemctl start kubelet</strong></p>
<p>but it won't start.</p>
<p>here is the output of <strong>systemctl status kubelet</strong></p>
<pre><code> kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2019-06-05 15:35:34 UTC; 7s ago
Docs: https://kubernetes.io/docs/home/
Process: 31697 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 31697 (code=exited, status=255)
</code></pre>
<p>Because of this I am not able to run any kubectl command.</p>
<p>example <strong>kubectl get pods</strong> gives</p>
<pre><code>The connection to the server 172.31.6.149:6443 was refused - did you specify the right host or port?
</code></pre>
| <p>I needed to reset the kubelet service. Here are the steps:</p>
<ol>
<li>Check the status of your docker service.
If it is stopped, start it with <code>sudo systemctl start docker</code>.
If it is not installed, install it together with the Kubernetes packages:
<code>yum install -y kubelet kubeadm kubectl docker</code></li>
<li>Turn swap off: <code>swapoff -a</code></li>
<li>Reset kubeadm: <code>kubeadm reset</code></li>
<li>Now run <code>kubeadm init</code>.
After that, check <code>systemctl status kubelet</code>;
it should be running.</li>
</ol>
<p>Check the nodes with <code>kubectl get nodes</code>.
If the master node is not ready, refer to the following.
To start using your cluster, run the following as a regular user:</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>If you are not able to create pods, check DNS with
<code>kubectl get pods --namespace=kube-system</code>.
If the DNS pods are in a Pending state, you need to install a pod network add-on.
I used Calico:</p>
<pre><code>kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
</code></pre>
<p>Now your master node is ready and you can deploy pods.</p>
|
<p>I am trying to set up TLS for a MessageSight service that has been installed on IBM Cloud Private(ICP) </p>
<p>ICP and MessageSight have already been installed and I was trying to see how the MessageSight has been exposed as a service (Is that a NodePort, LoadBalancer or Externalname)</p>
<pre><code>$kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
messagesight-messagesight-svc ClusterIP 10.0.241.72 168.xx.xx.xxx 9089/TCP,1883/TCP,16102/TCP 9d
messagesight-messagesightui-svc ClusterIP 10.0.139.199 168.xx.xx.xxx 9087/TCP 9d
</code></pre>
<p>The type states it is a ClusterIP, however it has an external IP. I always thought the external IP was going to be empty if the service type is ClusterIP. If it were a LoadBalancer I would expect to see the external IP. </p>
<p>Describing the service does not provide any additional information</p>
<pre><code>kubectl describe svc messagesight-messagesight-svc
Name: messagesight-messagesight-svc
Labels: app=messagesight
chart=messagesight
heritage=Tiller
release=messagesight
Annotations: <none>
Selector: app=messagesight,release=messagesight
Type: ClusterIP
IP: 10.0.241.72
External IPs: 168.xx.xx.xxx
Port: adminport 9089/TCP
TargetPort: 9089/TCP
Endpoints: 10.1.66.1:9089
Port: messaging-1883 1883/TCP
TargetPort: 1883/TCP
Endpoints: 10.1.66.1:1883
Port: messaging-16102 16102/TCP
TargetPort: 16102/TCP
Endpoints: 10.1.66.1:16102
Session Affinity: None
Events: <none>
</code></pre>
<p>I am able to access the service via the external-IP and ports and am puzzled as to how it is working.</p>
<p>I installed a Jenkins setup to make observations and the output looks good and makes sense to me</p>
<pre><code>$kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-ibm-jenki NodePort 10.0.241.156 <none> 8080:31058/TCP,50000:31155/TCP 1d
</code></pre>
<p>I can see the type is NodePort and it doesn't have a corresponding externalIP.</p>
<p>The description of the service also give me clear insights that this service is of type NodePort</p>
<pre><code>$kubectl describe svc jenkins-ibm-jenki
Name: jenkins-ibm-jenki
Labels: app=jenkins-ibm-jenki
chart=ibm-jenkins-dev-1.0.2
component=jenkins-jenkins-master
heritage=Tiller
release=jenkins
Annotations: helm.sh/created=1559696400
Selector: app=jenkins-ibm-jenki,component=jenkins-jenkins-master
Type: NodePort
IP: 10.0.241.156
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 31058/TCP
Endpoints: 10.1.66.89:8080
Port: slavelistener 50000/TCP
TargetPort: 50000/TCP
NodePort: slavelistener 31155/TCP
Endpoints: 10.1.66.89:50000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
| <p>As stated in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">documentation</a>:</p>
<p><code>In the Service spec, externalIPs can be specified along with any of the ServiceTypes</code></p>
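<p>In other words, <code>externalIPs</code> is an extra field on the Service spec, not a Service type: kube-proxy routes traffic arriving at those IPs to the service endpoints regardless of whether the type is ClusterIP, NodePort, etc. A minimal sketch (name, selector and IP are illustrative, not taken from your cluster):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: example-svc       # illustrative
spec:
  type: ClusterIP
  selector:
    app: example          # illustrative
  ports:
  - port: 9089
    targetPort: 9089
  externalIPs:
  - 168.0.0.1             # traffic hitting 168.0.0.1:9089 reaches the pods
</code></pre>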
|
<p>I have a scenario where it is required to 'prepare' Kubernetes towards taking off/terminating/shutdown a container, but allow it to serve <em>some</em> requests till that happens. </p>
<p>For example, lets assume that there are three methods: StartAction, ProcessAction, EndAction. I want to prevent clients from invoking StartAction when a container is about to be shutdown. However they should be able use ProcessAction and EndAction on that <em>same</em> container (after all Actions have been completed, the container will shutdown).</p>
<p>I was thinking that this is some sort of 'dual' readiness probe, where I basically want to indicate a 'not ready' status but continue to serve requests for already started Actions.</p>
<p>I know that there is a PreStop hook but I am not confident that this serves the need because according to the documentation I suspect that during the PreStop the pod is already taken off the load balancer:</p>
<blockquote>
<ol start="5">
<li>(simultaneous with 3) Pod is removed from endpoints list for service, and are no longer considered part of the set of running Pods for replication controllers. Pods that shutdown slowly cannot continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.</li>
</ol>
</blockquote>
<p>(<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods</a>).</p>
<p>Assuming that I must rely on stickiness and must continue serving requests for Actions on containers where those actions were started, is there some recommended practice?</p>
| <p>I think you can just implement 2 endpoints in your application:</p>
<ul>
<li>Custom readiness probe</li>
<li>Shutdown preparation endpoint</li>
</ul>
<p>To shut down gracefully, first call the shutdown preparation endpoint, which makes the custom readiness probe start returning an error. Kubernetes will then take the pod out of the service load balancer (no new clients will come), but existing TCP connections are kept, so existing clients can keep operating. Once some custom metric (which your service should provide) shows that all actions for those clients are done, shut the container down using standard Kubernetes actions. All of these steps should probably be automated using the Kubernetes API and your application's API.</p>
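<p>As a rough illustration of the state handling behind those two endpoints (class and method names are hypothetical, not from the question):</p>

```python
class ActionService:
    """Sketch of the drain logic behind the two endpoints described above."""

    def __init__(self):
        self.ready = True   # served by the custom readiness probe
        self.active = 0     # in-flight actions, exposed as a custom metric

    def start_action(self):
        # StartAction is refused once shutdown preparation has begun
        if not self.ready:
            raise RuntimeError("draining: new actions are refused")
        self.active += 1

    def end_action(self):
        # ProcessAction/EndAction keep working for already-started actions
        self.active -= 1

    def prepare_shutdown(self):
        # "shutdown preparation endpoint": flip readiness off
        self.ready = False

    def readiness_status(self):
        # "custom readiness probe": 200 while accepting work, 503 while draining
        return 200 if self.ready else 503

    def safe_to_stop(self):
        # automation can delete the pod once this becomes True
        return not self.ready and self.active == 0
```

<p>In a real service each method would sit behind an HTTP handler; the point is only that readiness and "in-flight work" are tracked separately.</p>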
|
<p>I have an EKS cluster for which I want:</p>
<ul>
<li>1 Load Balancer per cluster,</li>
<li>Ingress rules to direct to the right namespace and the right service.</li>
</ul>
<p>I have been following this guide : <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p>
<p>My deployments:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
      - name: bleble
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: bleble
</code></pre>
<p>the service of those deployments:</p>
<pre><code>
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: bleble
  type: NodePort
</code></pre>
<p>My Load balancer:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
</code></pre>
<p>My ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 80
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 80
</code></pre>
<p>I've set up the Nginx Ingress Controller with this : kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml" rel="noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml</a></p>
<p>I am unsure why I get a 503 Service Temporarily Unavailable for one service and one 502 for another... I would guess it's a problem of ports or of namespace? In the guide, they don't define namespace for the deployment...</p>
<p>Every resources create correctly, and I think the ingress is actually working but is getting confused where to go. </p>
<p>Thanks for your help!</p>
| <p>In general, use <code>externalTrafficPolicy: Cluster</code> instead of <code>Local</code>. You can gain some performance (latency) improvement by using <code>Local</code>, but you need to configure those pod allocations with a lot of effort, and you will hit 5xx errors with such misconfigurations. In addition, <code>Cluster</code> is the default option for <code>externalTrafficPolicy</code>.</p>
<p>In your <code>ingress</code>, make sure the <code>serviceName</code> of each path matches the actual Service name (<code>bleble-svc</code>, <code>hello-world-svc</code>). Also, you need to set <code>servicePort</code> to 8080, since that is the port you exposed in your service configuration.</p>
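<p>Putting that together, the ingress would look like this (a sketch based on the manifests in the question, with the backend ports corrected):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 8080
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 8080
</code></pre>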
<p>For internal service like <code>bleble-svc</code>, <code>Cluster IP</code> is good enough in your case as it does not need external access.</p>
<p>Hope this helps.</p>
|
<p>Trying to understand the relationship between Helm and Docker containers.</p>
<p>Can a Helm Install create a container from a dockerfile?</p>
| <p>No. A helm chart is a templated set of kubernetes manifests. There will usually be a manifest for a Pod, Deployment, or DaemonSet. Any of those will have a reference to a docker image (either hard-coded or a parameter). That image will usually be in a container registry like Docker Hub. You'll need to build your image using the Dockerfile, push it to a registry, reference this image in a helm chart, then install or upgrade the release with helm. </p>
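<p>To illustrate the "reference to a docker image" part: a typical chart parameterizes the image in its templates, so building and pushing a new image and then changing the tag value is how your Dockerfile build reaches the cluster (file names follow helm conventions; the repository and tag values are hypothetical):</p>
<pre><code># values.yaml
image:
  repository: registry.example.com/myapp   # hypothetical
  tag: "1.0.0"                             # hypothetical

# templates/deployment.yaml (excerpt)
containers:
- name: myapp
  image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
</code></pre>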
|
<p><code>kube-apiserver.service</code> is running with <code>--authorization-mode=Node,RBAC</code></p>
<pre><code>$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
</code></pre>
<p>I believe this is enough to enable RBAC. </p>
<p>However, any new user I create can view all resources without any rolebindings. </p>
<p><strong>Steps to create new user:</strong></p>
<pre><code>$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes nonadmin-csr.json | cfssljson -bare nonadmin
$ kubectl config set-cluster nonadmin --certificate-authority ca.pem --server https://127.0.0.1:6443
$ kubectl config set-credentials nonadmin --client-certificate nonadmin.pem --client-key nonadmin-key.pem
$ kubectl config set-context nonadmin --cluster nonadmin --user nonadmin
$ kubectl config use-context nonadmin
</code></pre>
<p>User <code>nonadmin</code> can view pods, svc without any rolebindings</p>
<pre><code>$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 5d4h
ingress-nginx ingress-nginx NodePort 10.32.0.129 <none> 80:30989/TCP,443:30686/TCP 5d3h
kube-system calico-typha ClusterIP 10.32.0.225 <none> 5473/TCP 5d3h
kube-system kube-dns ClusterIP 10.32.0.10 <none> 53/UDP,53/TCP 5d3h
rook-ceph rook-ceph-mgr ClusterIP 10.32.0.2 <none> 9283/TCP 4d22h
rook-ceph rook-ceph-mgr-dashboard ClusterIP 10.32.0.156 <none> 8443/TCP 4d22h
rook-ceph rook-ceph-mon-a ClusterIP 10.32.0.55 <none> 6790/TCP 4d22h
rook-ceph rook-ceph-mon-b ClusterIP 10.32.0.187 <none> 6790/TCP 4d22h
rook-ceph rook-ceph-mon-c ClusterIP 10.32.0.128 <none> 6790/TCP 4d22h
</code></pre>
<p><strong>Version:</strong></p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>This is an unmanaged kubernetes setup on Ubuntu 18 VMs.
Where am I going wrong?</p>
<p><strong>Edit1:</strong> Adding <code>kubectl config view</code></p>
<pre><code>$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/dadmin/ca.pem
    server: https://192.168.1.111:6443
  name: gabbar
- cluster:
    certificate-authority: /home/dadmin/ca.pem
    server: https://127.0.0.1:6443
  name: nonadmin
- cluster:
    certificate-authority: /home/dadmin/ca.pem
    server: https://192.168.1.111:6443
  name: kubernetes
contexts:
- context:
    cluster: gabbar
    namespace: testing
    user: gabbar
  name: gabbar
- context:
    cluster: nonadmin
    user: nonadmin
  name: nonadmin
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: nonadmin
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: /home/dadmin/admin.pem
    client-key: /home/dadmin/admin-key.pem
- name: gabbar
  user:
    client-certificate: /home/dadmin/gabbar.pem
    client-key: /home/dadmin/gabbar-key.pem
- name: nonadmin
  user:
    client-certificate: /home/dadmin/nonadmin.pem
    client-key: /home/dadmin/nonadmin-key.pem
</code></pre>
<p><strong>Edit 2:</strong>
<strong>Solution</strong> as suggested by @VKR:</p>
<pre><code>cat > operator-csr.json <<EOF
{
  "CN": "operator",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IN",
      "L": "BGLR",
      "O": "system:view",   <==== HERE
      "OU": "CKA"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
operator-csr.json | cfssljson -bare operator
MasterNode~$ kubectl config set-cluster operator --certificate-authority ca.pem --server $SERVER
Cluster "operator" set.
MasterNode~$ kubectl config set-credentials operator --client-certificate operator.pem --client-key operator-key.pem
User "operator" set.
MasterNode~$ kubectl config set-context operator --cluster operator --user operator
Context "operator" created.
MasterNode~$ kubectl auth can-i get pods --as operator
no
MasterNode~$ kubectl create rolebinding operator --clusterrole view --user operator -n default --save-config
rolebinding.rbac.authorization.k8s.io/operator created
MasterNode~$ cat crb-view.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view
subjects:
- kind: User
  name: operator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
MasterNode~$ kubectl create -f crb-view.yml --record --save-config
clusterrolebinding.rbac.authorization.k8s.io/view created
MasterNode~$ kubectl auth can-i get pods --as operator --all-namespaces
yes
MasterNode~$ kubectl auth can-i create pods --as operator --all-namespaces
no
MasterNode~$ kubectl config use-context operator
Switched to context "operator".
MasterNode~$ kubectl auth can-i "*" "*"
no
MasterNode~$ kubectl run db --image mongo
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
Error from server (Forbidden): deployments.apps is forbidden: User "operator" cannot create resource "deployments" in API group "apps" in the namespace "default"
</code></pre>
| <p>Most probably the root cause of this behavior is that you set the <code>"O": "system:masters"</code> group while generating <code>nonadmin-csr.json</code>.</p>
<p>The <code>system:masters</code> group is bound to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings" rel="nofollow noreferrer">cluster-admin super-user default role</a>, and as a result every newly created user in that group has full access.</p>
<p>Here is a good <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">article</a> that provide you step-by-step instruction on how to create users with limited namespace access.</p>
<p>A quick test shows that similar users with different groups have vastly different access:</p>
<p>-subj "/CN=employee/O=testgroup" :</p>
<pre><code>kubectl --context=employee-context get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "employee" cannot list resource "pods" in API group "" at the cluster scope
</code></pre>
<p>-subj "/CN=newemployee/O=system:masters" :</p>
<pre><code>kubectl --context=newemployee-context get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-797b884cbc-pckj6 1/1 Running 0 85d
ingress-nginx prometheus-server-8658d8cdbb-92629 1/1 Running 0 36d
kube-system coredns-86c58d9df4-gwk28 1/1 Running 0 92d
kube-system coredns-86c58d9df4-jxl84 1/1 Running 0 92d
kube-system etcd-kube-master-1 1/1 Running 0 92d
kube-system kube-apiserver-kube-master-1 1/1 Running 0 92d
kube-system kube-controller-manager-kube-master-1 1/1 Running 4 92d
kube-system kube-flannel-ds-amd64-k6sgd 1/1 Running 0 92d
kube-system kube-flannel-ds-amd64-mtrnc 1/1 Running 0 92d
kube-system kube-flannel-ds-amd64-zdzjl 1/1 Running 1 92d
kube-system kube-proxy-4pm27 1/1 Running 1 92d
kube-system kube-proxy-ghc7w 1/1 Running 0 92d
kube-system kube-proxy-wsq4h 1/1 Running 0 92d
kube-system kube-scheduler-kube-master-1 1/1 Running 4 92d
kube-system tiller-deploy-5b7c66d59c-6wx89 1/1 Running 0 36d
</code></pre>
|
<p>I am trying to deploy a ML app on Kubernetes engine with GPU. I created the docker image using nvidia/cuda:9.0-runtime and built my app above it. When I deploy the image to Kubernetes Engine I get an error saying that it could not import libcuda.so.1.</p>
<pre><code>ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>I looked at few solutions posted but none of them seem to work.</p>
<p>When trying those solutions I also found that</p>
<p>the paths mentioned by</p>
<pre><code>echo $LD_LIBRARY_PATH
</code></pre>
<p>which gives </p>
<pre><code>/usr/local/nvidia/lib:/usr/local/nvidia/lib64
</code></pre>
<p>do not seem to exist.</p>
<p>As well as there isn't a file with the name libcuda.so.1 (or any number) anywhere within the file system. And the /usr/lib/cuda/lib64 contains the shared libraries.
Am I currently implementing anything wrong here</p>
| <p>You are facing this issue because the CUDA drivers are not installed on the cluster. Please follow the installing-drivers section in this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus" rel="nofollow noreferrer">link</a>. To verify the installation you can run this command and check its output: </p>
<pre><code>kubectl logs -n kube-system ds/nvidia-driver-installer -c nvidia-driver-installer
</code></pre>
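<p>Also make sure the pod actually requests a GPU, since that is what causes the driver libraries (including <code>libcuda.so.1</code>) to be mounted into the container. A sketch (pod name and image are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ml-app                               # placeholder
spec:
  containers:
  - name: ml-app
    image: gcr.io/my-project/ml-app:latest   # placeholder
    resources:
      limits:
        nvidia.com/gpu: 1
</code></pre>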
|
<p>I'm trying to retrieve the value for the key <code>clientSecret</code>, from my kubernetes response, but I am failing to find the correct go syntax.</p>
<p>I have tried commands like:</p>
<pre><code>kubectl get secret client-secret -o yaml --namespace magic-test -o go-template --template="{{range .items}}{{range .data}}{{.clientSecret}} {{end}}{{end}}"
</code></pre>
<p>And other variations </p>
<p>This is the yaml output of what I am trying to retrieve from</p>
<pre><code>kubectl get secret client-secret -n magic-test -o yaml
apiVersion: v1
data:
  clientSecret: NmQQuCNFiOWItsdfOTAyMCb00MjEwLWFiNGQtNTI4NDdiNWM5ZjMx
kind: Secret
metadata:
  creationTimestamp: 2019-05-31T14:03:44Z
  name: client-secret
  namespace: magic-test
  resourceVersion: "11544532074"
  selfLink: /api/v1/namespaces/magic-test/secrets/client-secret
  uid: e72acdsfbcc-83fsdac-1sdf1e9-9sdffaf-0050dsf56b7c1fa
type: Opaque
</code></pre>
<p>How can I retrieve the value for <code>clientSecret</code>?</p>
| <p>The output is not a list of items but an object or dictionary, so you can't iterate over the pipeline but you may simply index it by the keys you're interested in.</p>
<p>So simply use the template <code>{{.data.clientSecret}}</code>:</p>
<pre><code>kubectl get secret client-secret --namespace magic-test -o go-template \
  --template="{{.data.clientSecret}}"
</code></pre>
|
<p>The README for kustomize says that</p>
<blockquote>
<p>It's like make, in that what it does is declared in a file, and it's like sed, in that it emits edited text.</p>
</blockquote>
<p>Does this analogy extend beyond the fact that files are used to declare what is needed?</p>
<p>Or, is kustomize backward chaining like make in that it reads all command input before working out what it has to do rather than work sequentially and step through the command input like bash working through a shell script?</p>
<p><strong>EDIT:</strong> Jeff Regan, of the Kustomize team at Google, explains the model for the way kustomize works towards the beginning of his talk <a href="https://youtu.be/ahMIBxufNR0" rel="nofollow noreferrer">Kustomize: Kubernetes Configuration Customization</a>. He also shows how kustomize may be daisy-chained, so the output of one kustomize run may serve as the input to another. It seems that, as pointed out by ITChap below, kustomize starts by gathering all the resources referenced in the kustomization.yml file in the base dir. It then executes a series of steps to perform the required substitutions and transformations, repeating the substitution/transformation step as often as needed to complete, and finally emits the generated YAML on stdout. So I would say that it is not backward chaining like make but rather somewhere in between. HTH.</p>
| <p>What I noticed so far is that kustomize will first accumulate the content of all the base resources then apply the transformations from your <code>kustomization.yml</code> files. If you have multiple level of overlays, it doesn't seem to pass the result from one level to the next.</p>
<p>Let's consider the following:</p>
<p><code>./base/pod.yml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: busybox
</code></pre>
<p><code>./base/kustomization.yml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../pod.yml
</code></pre>
<p><code>./overlays/l1/kustomization.yml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
nameSuffix: "-l1"
</code></pre>
<p><code>./overlays/l2/kustomization.yml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../l1
nameSuffix: "-l2"
</code></pre>
<p>When running <code>kustomize build overlays/l2</code> you are going to get a pod named <code>test-l1-l2</code> as expected.</p>
<p>But if you try to patch the base pod you would have to reference the pod using:</p>
<pre><code>patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: test
  path: patch.yml
</code></pre>
<p>in your <code>./overlays/l1/kustomization.yml</code> <strong>but also</strong> in <code>./overlays/l2/kustomization.yml</code>. At the time the patch of l2 is applied, the referenced resource is still <code>test</code> and not <code>test-l1</code>.</p>
<p>I don't know kustomize well enough to understand the intention behind this but these are my observations. Hope it answers your question.</p>
<p>PS: this might change with <a href="https://github.com/kubernetes-sigs/kustomize/issues/1036" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize/issues/1036</a></p>
|
<p>By adding static route table on every node with proper rules, the container network also works fine. For example, given three nodes with three different docker bridge subnet:</p>
<pre><code>node-1(192.168.0.1):
10.0.1.1/24
node-2(192.168.0.2):
10.0.2.1/24
node-3(192.168.0.3):
10.0.3.1/24
</code></pre>
<p>On each node add the following routes:</p>
<pre><code>ip route add 10.0.1.0/24 via 192.168.0.1 dev eth0
ip route add 10.0.2.0/24 via 192.168.0.2 dev eth0
ip route add 10.0.3.0/24 via 192.168.0.3 dev eth0
</code></pre>
<p>With kube-proxy running in iptables mode, cluster-service-ip is translated to pod ip and finally routed to related node by the route table.</p>
<p>So what's the benefit of using cni plugin over route table? Is there a performance issue with route table method?</p>
| <p>By design Kubernetes has a fluent structure. Pods, services, nodes can come and go depending on the needs, either by manual changes (rolling updates, new deployments) or by automatic scaling (HPA, node auto-scaling). Manually setting up rigid network structure negates the benefits of dynamic Kubernetes environment.</p>
<blockquote>
<p>Overlay networks are not required by default, however, they help in specific situations. Like when we don't have enough IP space, or network can't handle the extra routes. Or maybe when we want some extra management features the overlays provide. One commonly seen case is when there's a limit of how many routes the cloud provider route tables can handle. For example, AWS route tables support up to 50 routes without impacting network performance. So if we have more than 50 Kubernetes nodes, AWS route table won't be enough. In such cases, using an overlay network helps.</p>
<p>It is essentially encapsulating a packet-in-packet which traverses the native network across nodes. You may not want to use an overlay network since it may cause some latency and complexity overhead due to encapsulation-decapsulation of all the packets. It's often not needed, so we should use it only when we know why we need it.</p>
</blockquote>
<p><a href="https://itnext.io/an-illustrated-guide-to-kubernetes-networking-part-2-13fdc6c4e24c" rel="nofollow noreferrer">https://itnext.io/an-illustrated-guide-to-kubernetes-networking-part-2-13fdc6c4e24c</a></p>
<p>If you are concerned with latency and overhead caused by CNI plugins here is a handy <a href="https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-updated-april-2019-4a9886efe9c4" rel="nofollow noreferrer">Benchmark results of Kubernetes network plugins</a></p>
|
<p>When for whatever reasons I delete the pod running the Job that was started by a CronJob, I immediately see a new pod being created. It is only once I delete something like six times the <code>backoffLimit</code> number of pods, that new ones stop being created.</p>
<p>Of course, if I'm actively monitoring the process, I can delete the CronJob, but what if the Pod inside the job fails when I'm not looking? I would like it not to be recreated.</p>
<p>How can I stop the CronJob from persisting in creating new jobs (or pods?), and wait until the next scheduled time if the current job/pod failed? Is there something similar to Jobs' <code>backoffLimit</code>, but for CronJobs?</p>
| <p>Set <strong><code>startingDeadlineSeconds</code></strong> to a large value or leave it unset (the default).</p>
<p>At the same time, set <strong><code>.spec.concurrencyPolicy</code></strong> to <strong><code>Forbid</code></strong>, so the CronJob skips the new job run while the previously created job is still running. </p>
<p>If <strong><code>startingDeadlineSeconds</code></strong> is set to a large value or left unset (the default) and <strong><code>concurrencyPolicy</code></strong> is set to <strong><code>Forbid</code></strong>, a failed job will not be re-run before its next scheduled time.</p>
<p>The concurrency policy field (<code>.spec.concurrencyPolicy</code>) can be added to the definition of your CronJob, but this is optional. </p>
<p>It specifies how to treat concurrent executions of a job that is created by this CronJob. The spec may specify only one of these three concurrency policies:</p>
<ul>
<li><strong>Allow (default)</strong> - The cron job allows concurrently running jobs</li>
<li><strong>Forbid</strong> - The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn't finished yet, the cron job skips the new job run</li>
<li><strong>Replace</strong> - If it is time for a new job run and the previous job run hasn't finished yet, the cron job replaces the currently running job run with a new job run</li>
</ul>
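<p>A hedged sketch of how these fields fit together in a CronJob manifest (name, schedule and image are placeholders; <code>backoffLimit: 0</code> additionally stops the Job controller from recreating failed pods, which addresses the original question):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cronjob        # placeholder
spec:
  schedule: "*/10 * * * *"     # placeholder
  concurrencyPolicy: Forbid
  # startingDeadlineSeconds intentionally left unset (the default)
  jobTemplate:
    spec:
      backoffLimit: 0          # give up after the first pod failure
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: job
            image: busybox     # placeholder
</code></pre>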
<p>It is good to know that the concurrency policy applies only to the jobs created by the same CronJob.
If there are multiple CronJobs, their respective jobs are always allowed to run concurrently.</p>
<p>A CronJob is counted as missed if it has failed to be created at its scheduled time. For example, If <strong><code>concurrencyPolicy</code></strong> is set to <strong><code>Forbid</code></strong> and a CronJob was attempted to be scheduled when there was a previous schedule still running, then it would count as missed.</p>
<p>For every CronJob, the CronJob controller checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error</p>
<p>More information you can find here: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer"><code>CronJobs</code></a> and <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="nofollow noreferrer"><code>AutomatedTask</code></a>.</p>
<p>I hope it helps.</p>
|
<p>I am trying to set up Microsoft Teams notifications for Prometheus Alertmanager, but the notifications don't arrive.</p>
<p>The alertmanager.yaml file is:</p>
<pre><code>global:
  resolve_timeout: 5m
  http_config: {}
  smtp_hello: localhost
  smtp_require_tls: true
  pagerduty_url: https://events.pagerduty.com/v2/enqueue
  hipchat_api_url: https://api.hipchat.com/
  opsgenie_api_url: https://api.opsgenie.com/
  wechat_api_url: https://qyapi.weixin.qq.com/cgi-bin/
  victorops_api_url: https://alert.victorops.com/integrations/generic/20131114/alert/
route:
  receiver: teams
  group_by:
  - alertname
  routes:
  - receiver: teams
    match:
      alertname: QuotaCPUSolrExcedeed
receivers:
- name: teams
  webhook_configs:
  - send_resolved: true
    http_config: {}
    url: https://outlook.office.com/webhook/xxx
templates: []
</code></pre>
<p>The rule 'QuotaCPUSolrExcedeed' exists and works in Prometheus.</p>
<p>If I put the webhook URL in Grafana, the notification arrives, but if I use Alertmanager, it doesn't!</p>
<p>Do you have any idea what the problem might be?</p>
| <p>The Prometheus Alertmanager webhook is a <a href="https://prometheus.io/docs/alerting/configuration/#webhook_config" rel="nofollow noreferrer">generic one</a>. In order to transform the message into a format accepted by MS Teams, you need to use a sidecar. One option is <a href="https://github.com/idealista/prom2teams" rel="nofollow noreferrer">prom2teams</a>, as described in the official <a href="https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver" rel="nofollow noreferrer">doc</a>.</p>
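<p>For illustration, the receiver would then point at the prom2teams sidecar instead of the Office 365 webhook URL. Note that the service name, port, and endpoint path below are assumptions; check the prom2teams documentation for the exact connector URL:</p>
<pre><code>receivers:
- name: teams
  webhook_configs:
  - send_resolved: true
    url: http://prom2teams:8089/v2/Connector   # hypothetical service name, port, and path
</code></pre>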
|
<p>I have a deployment with specific mounts (please refer to the pod description below).</p>
<p>But whenever I write into the specified directory in the container it isn't shown in the host folder. </p>
<p>For example:</p>
<pre><code>$ kubectl exec owncloud-68c956f65c-cpxp4 -- ls /var/www/html/data
admin
avatars
files_external
index.html
owncloud.db
owncloud.log
$ ls /disk1/owncloud
...nothing...
</code></pre>
<p>Here is the pod full description:</p>
<pre><code>Name: owncloud-68c956f65c-cpxp4
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: k8s-worker-01/192.168.0.2
Start Time: Mon, 27 May 2019 11:12:03 +0100
Labels: app=owncloud
pod-template-hash=68c956f65c
Annotations: <none>
Status: Running
IP: 10.40.0.4
Controlled By: ReplicaSet/owncloud-68c956f65c
Containers:
owncloud:
Container ID: docker://68d8d96833635927e0317b849a59539cd8f119231a3a41b3a7be4deb5914aa9c
Image: owncloud:latest
Image ID: docker-pullable://owncloud@sha256:173811cb4c40505401595a45c39a802b89fb476885b3f6e8fe327aae08d20fe8
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 27 May 2019 11:12:13 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7b4bw (ro)
/var/www/html/config from config (rw)
/var/www/html/data from data (rw)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: HostPath (bare host directory volume)
Path: /home/pi/owncloud/owncloud_conf
HostPathType: Directory
data:
Type: HostPath (bare host directory volume)
Path: /disk1/owncloud
HostPathType: Directory
default-token-7b4bw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7b4bw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>What's wrong here?</p>
| <p>Based on the messages exchanged, the directory <code>/disk1/owncloud</code> resides on the <code>master node</code>. The pod described in your question shows that the volume type used is <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostpath</a>, and the pod was scheduled on another node (<code>k8s-worker-01/192.168.0.2</code>) that does not have the host path in question. </p>
<p>To fix that, you should consider moving the mount point to a worker node (unless you want to run the pod on the master) and using rules for <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature" rel="nofollow noreferrer">pod affinity</a> or a <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">nodeSelector</a> to pin the pod to the right node.</p>
<p>If you want a resilient solution for storage (replicas, distribution among different nodes), I would recommend using:</p>
<ul>
<li><a href="https://rook.io" rel="nofollow noreferrer">rook.io</a>: really great, with good docs; it covers different aspects of storage (block, file, object) for different backends ... </li>
<li><a href="https://github.com/gluster/gluster-block" rel="nofollow noreferrer">gluster-block</a>: a plug-in for Gluster storage, used in combination with Heketi. See the docs on <a href="https://github.com/gluster/gluster-kubernetes/blob/master/docs/design/gluster-block-provisioning.md" rel="nofollow noreferrer">k8s provisioning</a></li>
</ul>
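<p>For instance, here is a minimal sketch of pinning the pod to the node that actually holds <code>/disk1/owncloud</code> with a <code>nodeSelector</code>; the node name below is a placeholder, so look up yours with <code>kubectl get nodes --show-labels</code>:</p>
<pre><code>spec:
  nodeSelector:
    kubernetes.io/hostname: node-with-disk1   # placeholder: the node where /disk1/owncloud exists
  containers:
  - name: owncloud
    image: owncloud:latest
</code></pre>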
|
<p>I have different Kubernetes deployments in GKE and I would like to access them from different external subdomains.</p>
<p>I tried to create 2 deployments with subdomains "sub1" and "sub2" and hostname "app", another deployment with hostname "app", and a service that exposes it on the IP XXX.XXX.XXX.XXX configured in the DNS for app.mydomain.com.</p>
<p>I would like to access the 2 child deployment from sub1.app.mydomain.com and sub2.app.mydomain.com</p>
<p>This should be automatic: when adding a new deployment, I cannot change the DNS records every time.
Maybe I'm approaching the problem in the wrong way; I'm new to GKE. Any suggestions?</p>
<pre>
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-host
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
name: my-host
type: proxy
spec:
hostname: app
containers:
- image: nginx:alpine
name: nginx
ports:
- name: nginx
containerPort: 80
hostPort: 80
restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-subdomain-1
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
name: my-subdomain-1
type: app
spec:
hostname: app
subdomain: sub1
containers:
- image: nginx:alpine
name: nginx
ports:
- name: nginx
containerPort: 80
hostPort: 80
restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-subdomain-2
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
name: my-subdomain-2
type: app
spec:
hostname: app
subdomain: sub2
containers:
- image: nginx:alpine
name: nginx
ports:
- name: nginx
containerPort: 80
hostPort: 80
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: my-expose-dns
spec:
ports:
- port: 80
selector:
name: my-host
type: LoadBalancer
</pre>
| <p>You want <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a>. There are several options available (Istio, nginx, traefik, etc). I like using nginx and it's really easy to install and work with. Installation steps can be found at <a href="https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke" rel="noreferrer">kubernetes.github.io</a>.</p>
<p>Once the Ingress Controller is installed, you want to make sure you've exposed it with a Service with type=LoadBalancer. Next, if you are using Google Cloud DNS, set up a wildcard entry for your domain with an A record pointing to the external IP address of your Ingress Controller's Service. In your case, it would be *.app.mydomain.com.</p>
<p>So now all of your traffic to app.mydomain.com is going to that load balancer and being handled by your Ingress Controller, so now you need to add Service and Ingress Entities for any service you want.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service1
spec:
selector:
app: my-app-1
ports:
- protocol: TCP
port: 80
targetPort: 80
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: my-service2
spec:
selector:
app: my-app2
ports:
- protocol: TCP
port: 80
targetPort: 80
type: ClusterIP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: name-virtual-host-ingress
spec:
rules:
- host: sub1.app.mydomain.com
http:
paths:
- backend:
serviceName: my-service1
servicePort: 80
- host: sub2.app.mydomain.com
http:
paths:
- backend:
serviceName: my-service2
servicePort: 80
</code></pre>
<p>The routing shown is host based, but you could just as easily handle those services as path based, so that all traffic to app.mydomain.com/service1 would go to one of your deployments.</p>
|
<pre><code>FROM python:3.5 AS python-build
ADD . /test
WORKDIR /test
RUN pip install -r requirements.txt &&\
pip install oauth2client
FROM node:10-alpine AS node-build
WORKDIR /test
COPY --from=python-build ./test ./
WORKDIR /test/app/static
RUN npm cache verify && npm install && npm install sass -g &&\
sass --no-source-map scss/layout/_header.scss:css/layout/_header.css &&\
sass --no-source-map scss/layout/_footer.scss:css/layout/_footer.css &&\
sass --no-source-map scss/layout/_side_menu.scss:css/layout/_side_menu.css &&\
sass --no-source-map scss/layout/_error_component.scss:css/layout/_error_component.css &&\
sass --no-source-map scss/components/_input_box.scss:css/components/_input_box.css &&\
sass --no-source-map scss/components/_button.scss:css/components/_button.css &&\
sass --no-source-map scss/components/_loading_mask.scss:css/components/_loading_mask.css &&\
sass --no-source-map scss/components/_template_card.scss:css/components/_template_card.css &&\
sass --no-source-map scss/pages/_onboarding_app.scss:css/pages/_onboarding_app.css &&\
sass --no-source-map scss/pages/_choose.scss:css/pages/_choose.css &&\
sass --no-source-map scss/pages/_adapt.scss:css/pages/_adapt.css &&\
sass --no-source-map scss/pages/_express.scss:css/pages/_express.css &&\
sass --no-source-map scss/pages/_experience.scss:css/pages/_experience.css &&\
sass --no-source-map scss/pages/_features.scss:css/pages/_features.css &&\
sass --no-source-map scss/pages/_request_demo.scss:css/pages/_request_demo.css &&\
npm run build
WORKDIR /test/node-src
RUN npm install express
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /test
COPY --from=node-build ./test ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& pip install -r requirements.txt
WORKDIR /test/node-src
EXPOSE 3000
CMD ["node", "server.js"] #RUN NODE SERVICE INSIDE NODE/SRC
WORKDIR /test
EXPOSE 9595
CMD [ "python3", "./run.py" ] #RUN PYTHON SERVICE INSIDE /TEST
</code></pre>
<p>I am trying to run two services (Node and Python) inside one container, but only one is running. I want to run both services in one container on Kubernetes.</p>
<p>EDIT : 1</p>
<pre><code>FROM python:3.5 AS python-build
ADD . /test
WORKDIR /test
RUN pip install -r requirements.txt &&\
pip install oauth2client
FROM node:10-alpine AS node-build
WORKDIR /test
COPY --from=python-build ./test ./
WORKDIR /test/app/static
RUN npm cache verify && npm install && npm install sass -g &&\
sass --no-source-map scss/layout/_header.scss:css/layout/_header.css &&\
sass --no-source-map scss/layout/_footer.scss:css/layout/_footer.css &&\
sass --no-source-map scss/layout/_side_menu.scss:css/layout/_side_menu.css &&\
sass --no-source-map scss/layout/_error_component.scss:css/layout/_error_component.css &&\
sass --no-source-map scss/components/_input_box.scss:css/components/_input_box.css &&\
sass --no-source-map scss/components/_button.scss:css/components/_button.css &&\
sass --no-source-map scss/components/_loading_mask.scss:css/components/_loading_mask.css &&\
sass --no-source-map scss/components/_template_card.scss:css/components/_template_card.css &&\
sass --no-source-map scss/pages/_onboarding_app.scss:css/pages/_onboarding_app.css &&\
sass --no-source-map scss/pages/_choose.scss:css/pages/_choose.css &&\
sass --no-source-map scss/pages/_adapt.scss:css/pages/_adapt.css &&\
sass --no-source-map scss/pages/_express.scss:css/pages/_express.css &&\
sass --no-source-map scss/pages/_experience.scss:css/pages/_experience.css &&\
sass --no-source-map scss/pages/_features.scss:css/pages/_features.css &&\
sass --no-source-map scss/pages/_request_demo.scss:css/pages/_request_demo.css &&\
npm run build
WORKDIR /test/node-src
RUN npm install express
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /test
COPY --from=node-build ./test ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& curl -sL https://deb.nodesource.com/setup_10.x | bash \
&& pip install -r requirements.txt
WORKDIR /test/node-src
RUN npm install pm2 -g
EXPOSE 3000
CMD pm2 start server.js #RUN NODE SERVICE INSIDE NODE/SRC
WORKDIR /test
EXPOSE 9595
CMD [ "python3", "./run.py" ] #RUN PYTHON SERVICE INSIDE /TEST
</code></pre>
<p>I also tried the pm2 process manager for Node.</p>
| <p>What you are trying to achieve is not a recommended design: typically, a Docker container should run a single service. As a result, there is no built-in way in Docker to get this to work. It is still possible, though; using a wrapper script or tools like <a href="https://supervisord.org" rel="nofollow noreferrer">supervisord</a>, you can manage multiple services inside the container.</p>
<p>Or, as the <a href="https://docs.docker.com/config/containers/multi-service_container/" rel="nofollow noreferrer">docs</a> put it (I went for the supervisord option since it looks cleaner and more manageable to me than the wrapper-script way):</p>
<blockquote>
<p>Use a process manager like supervisord. This is a moderately
heavy-weight approach that requires you to package supervisord and its
configuration in your image (or base your image on one that includes
supervisord), along with the different applications it manages. Then
you start supervisord, which manages your processes for you. Here is
an example Dockerfile using this approach, that assumes the
pre-written supervisord.conf, my_first_process, and my_second_process
files all exist in the same directory as your Dockerfile.</p>
<pre><code>FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
</code></pre>
</blockquote>
<p><a href="https://docs.docker.com/config/containers/multi-service_container/" rel="nofollow noreferrer">https://docs.docker.com/config/containers/multi-service_container/</a></p>
<p>What you are currently using in your Dockerfile is called a <a href="https://docs.docker.com/develop/develop-images/multistage-build/" rel="nofollow noreferrer">multistage build</a>, which is by far <strong>not</strong> what you really want here.</p>
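<p>Applied to your case, a <code>supervisord.conf</code> could be sketched as follows (the paths are taken from your Dockerfile; this assumes both processes stay in the foreground):</p>
<pre><code>[supervisord]
nodaemon=true

[program:node]
directory=/test/node-src
command=node server.js

[program:python]
directory=/test
command=python3 ./run.py
</code></pre>
<p>You would then install supervisor in the final stage and make <code>CMD ["/usr/bin/supervisord"]</code> the single container command, instead of the two <code>CMD</code> lines (Docker only honors the last <code>CMD</code> anyway, which is why only one of your services starts).</p>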
|
<p>I'm attempting to add a file to the /etc/ directory on an AWX task/web container in kubernetes. I'm fairly new to helm and I'm not sure what I'm doing wrong.</p>
<p>The only thing I've added to my helm chart is a krb5 key in the ConfigMap and an additional volume and volume mount for both the task and web containers. The krb5.conf file is in charts/mychart/files/.</p>
<p>ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "awx.fullname" . }}-application-config
labels:
app.kubernetes.io/name: {{ include "awx.name" . }}
helm.sh/chart: {{ include "awx.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
krb5: |-
{{ .Files.Get "krb5.conf"}}
secret_key: {{ .Values.awx_secret_key }}
awx_settings: |
*some stuff*
</code></pre>
<p>Deployment: </p>
<p>Volumes add to bottom of deployment.yaml</p>
<pre><code>volumes:
- name: {{ include "awx.fullname" . }}-application-config
configMap:
name: {{ include "awx.fullname" . }}-application-config
items:
- key: awx_settings
path: settings.py
- key: secret_key
path: SECRET_KEY
- name: {{ include "awx.fullname" . }}-application-config-krb5
configMap:
name: {{ include "awx.fullname" . }}-application-config
items:
- key: krb5
path: krb5.conf
</code></pre>
<p>Volume Mounts add to both task/web container</p>
<pre><code> volumeMounts:
- mountPath: /etc/tower
name: {{ include "awx.fullname" . }}-application-config
- mountPath: /etc
name: {{ include "awx.fullname" . }}-application-config-krb5
</code></pre>
<p>I'm trying to mount a file to the containers in a kubernetes pod and am getting the following error:</p>
<pre><code> Warning Failed 40s kubelet, aks-prdnode-18232119-1 Error: failed to start container "web": Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/docker/containers/d66044fe204abbf9a4d3772370d0f8d4184e339e59ad9a018f046eade03b8418/resolv.conf\\\" to rootfs \\\"/var/lib/docker/overlay2/d9fa9705d70bbb864ed526a96f6a2873b2720c41a9f9ef5b4a428902e4cf3c82/merged\\\" at \\\"/var/lib/docker/overlay2/d9fa9705d70bbb864ed526a96f6a2873b2720c41a9f9ef5b4a428902e4cf3c82/merged/etc/resolv.conf\\\" caused \\\"open /var/lib/docker/overlay2/d9fa9705d70bbb864ed526a96f6a2873b2720c41a9f9ef5b4a428902e4cf3c82/merged/etc/resolv.conf: read-only file system\\\"\"": unknown
</code></pre>
| <p>You'll want to use the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#volumemount-v1-core" rel="nofollow noreferrer"><code>subPath:</code></a> option to "reach into" that <code>-application-config-krb5</code> and mount only the one file:</p>
<pre><code>- mountPath: /etc/krb5.conf
name: {{ include "awx.fullname" . }}-application-config-krb5
subPath: krb5.conf
</code></pre>
<p>since, as the error correctly points out, you for sure don't want to blow away the <code>/etc</code> directory of almost <em>any</em> container environment (it'll nuke <code>/etc/passwd</code>, <code>/etc/hosts</code>, <code>resolv.conf</code>, and a bazillion other important files)</p>
|
<p>I am trying to deploy my Kubernetes Helm chart for a sample Kubernetes cluster deployment. I created a sample Helm chart, added the Docker image reference, and deployed the Helm chart using the terminal command <code>helm install <my-chartname></code>. The microservice is accessible without any problem.</p>
<p>After that, I created a Jenkins pipeline job and added only one stage containing the step for deployment. I added it like the following:</p>
<pre><code>pipeline
{
agent any
stages
{
stage ('helmchartinstall')
{
steps
{
sh 'helm install spacestudychart'
}
}
}
}
</code></pre>
<p>And I am getting an error like the following:</p>
<pre><code>[Pipeline] { (helmchartinstall)
[Pipeline] sh
+ helm install spacestudychart
Error: the server could not find the requested resource (get pods)
</code></pre>
<p>The same command works when I run it from the terminal. </p>
<p><strong>Update</strong></p>
<p>To upgrade Tiller to the latest version, I ran the <code>helm init --upgrade</code> command in the terminal, but the error still remains.</p>
<p><strong>Output of "helm version" is like the following,</strong></p>
<pre><code>Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
</code></pre>
<p><strong>Output of "kubectl version --short" is like the following,</strong></p>
<pre><code>Client Version: v1.14.1
Server Version: v1.13.5
</code></pre>
<p>When I run command "kubectl --v=5 get pods; helm install spacestudychart" , I am getting the console output like the following,</p>
<pre><code>+ kubectl --v=5 get pods
I0604 07:44:46.035459 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.152770 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.152819 2620 shortcut.go:89] Error loading discovery information: yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.283598 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.374088 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
I0604 07:44:46.467938 2620 cached_discovery.go:121] skipped caching discovery info due to yaml: line 10: mapping values are not allowed in this context
F0604 07:44:46.468122 2620 helpers.go:114] error: yaml: line 10: mapping values are not allowed in this context
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
</code></pre>
<p>Do I need to upgrade the kubectl version? What is the exact problem when running with Jenkins?</p>
| <p>This is 100% working; I had this problem before.</p>
<p>First, create a <code>jenkins</code> user, then
copy the <code>config</code> to <code>/home/jenkins/.kube/</code>:</p>
<pre><code>cp $HOME/.kube/config /home/jenkins/.kube/
or
cp ~/.kube/config /home/jenkins/.kube/
</code></pre>
<p>And after that, run:</p>
<pre><code> chmod 777 /home/jenkins/.kube/config
</code></pre>
<p>Your kubectl commands need your Kubernetes config file. It is like a key or password for your Kubernetes cluster, so you should give the Kubernetes config to your Jenkins user; after that, it can run kubectl commands.</p>
<p>This is a very good tutorial that helped me solve it:</p>
<p><a href="https://www.youtube.com/watch?v=288rTpd1SDE" rel="noreferrer">tutorial</a></p>
<p><strong>UPDATE1</strong>
You need a <code>jenkins</code> user. To add one, create the user on your Ubuntu, CentOS, or other host:</p>
<pre><code>adduser jenkins
</code></pre>
<p>This is a good link about adding a user:
<a href="https://www.digitalocean.com/community/tutorials/how-to-create-a-sudo-user-on-ubuntu-quickstart" rel="noreferrer">Adding user</a></p>
<p><strong>UPDATE 2</strong>
You should install kubectl on the server that you use as the <code>jenkins</code> host so that kubectl commands can work. After that, copy the <code>~/.kube/config</code> from your <code>kubernetes</code> cluster to the <code>jenkins</code> server where you previously installed <code>kubectl</code>.</p>
|
<p>Hi, I'm trying to set up basic logging to get all my pod logs in a single place. The following is the pod spec I have created, but I couldn't find any trace of the logs in the location mentioned. What could be missing in the template below?</p>
<pre><code> apiVersion: v1
kind: Pod
metadata:
name: counter
spec:
containers:
- name: count
image: busybox
args: [/bin/sh, -c,
'i=0; while true; do echo "$i: $(date)" >> /u01/kubernetes_prac/logs/log_output.txt; i=$((i+1)); sleep 1; done']
volumeMounts:
- name: varlog
mountPath: /u01/kubernetes_prac/logs
volumes:
- name: varlog
emptyDir: {}
</code></pre>
| <p>Try this:</p>
<pre><code>volumes:
- name: varlog
hostPath:
path: /tmp/logs
</code></pre>
<p>and check the logs in that location on the node.</p>
|
<p>An init container with the <code>kubectl get pod</code> command is used to get the ready status of another pod.</p>
<p>After an egress NetworkPolicy was turned on, the init container can't access the Kubernetes API: <code>Unable to connect to the server: dial tcp 10.96.0.1:443: i/o timeout</code>. The CNI is Calico.</p>
<p>Several rules were tried but none of them are working (service and master host IPs, different CIDR masks):</p>
<pre><code>...
egress:
- to:
- ipBlock:
cidr: 10.96.0.1/32
ports:
- protocol: TCP
port: 443
...
</code></pre>
<p>or using namespace (default and kube-system namespaces):</p>
<pre><code>...
egress:
- to:
- namespaceSelector:
matchLabels:
name: default
ports:
- protocol: TCP
port: 443
...
</code></pre>
<p>It looks like the <code>ipBlock</code> rules just don't work, and the namespace rules don't work because the Kubernetes API is not a standard pod.</p>
<p>Can it be configured? Kubernetes is 1.9.5, Calico is 3.1.1.</p>
<p>Problem still exists with GKE 1.13.7-gke.8 and calico 3.2.7</p>
| <p>You need to get the real IP of the master using <code>kubectl get endpoints --namespace default kubernetes</code> and create an egress policy to allow that.</p>
<pre><code>---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-apiserver
namespace: test
spec:
policyTypes:
- Egress
podSelector: {}
egress:
- ports:
- port: 443
protocol: TCP
to:
- ipBlock:
cidr: x.x.x.x/32
</code></pre>
|
<p>I am using Fluentd as a sidecar to ship nginx logs to stdout so they show up in the Pod's logs. I have a strange problem that Fluentd is not picking up the configuration when the container starts.</p>
<p>Upon examining the Fluentd startup logs it appears that the configuration is not being loaded. The config is supposed to be loaded from /etc/fluentd-config/fluentd.conf when the container starts. I have connected to the container and the config file is correct, and the pv mounts are also correct. The environment variable also exists.</p>
<p>The full deployment spec is below - and is self contained if you feel like playing with it.</p>
<pre><code>apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: PersistentVolume
metadata:
name: weblog-pv
labels:
type: local
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
hostPath:
path: /tmp/weblog
type: DirectoryOrCreate
capacity:
storage: 500Mi
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: weblog-pvc
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 200Mi
- apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config
data:
fluentd.conf: |
<source>
@type tail
format none
path /var/log/nginx/access.log
tag count.format1
</source>
<match *.**>
@type forward
<server>
name localhost
host 127.0.0.1
</server>
</match>
- apiVersion: v1
kind: Pod
metadata:
name: sidecar-example
labels:
app: webserver
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
volumeMounts:
- name: logging-vol
mountPath: /var/log/nginx
- name: fdlogger
env:
- name: FLUENTD_ARGS
value: -c /etc/fluentd-config/fluentd.conf
image: fluent/fluentd
volumeMounts:
- name: logging-vol
mountPath: /var/log/nginx
- name: log-config
mountPath: /etc/fluentd-config
volumes:
- name: logging-vol
persistentVolumeClaim:
claimName: weblog-pvc
- name: log-config
configMap:
name: fluentd-config
- apiVersion: v1
kind: Service
metadata:
name: sidecar-svc
spec:
selector:
app: webserver
type: NodePort
ports:
- name: sidecar-port
port: 80
nodePort: 32000
</code></pre>
| <p>I got it to work by using stdout instead of redirecting to localhost.</p>
<pre><code>- apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config
data:
fluent.conf: |
<source>
@type tail
format none
path /var/log/nginx/access.log
tag count.format1
</source>
<match *.**>
@type stdout
</match>
</code></pre>
|
<p>Brand new to kubernetes and am having an issue where I am getting a 502 bad gateway when trying to hit the api.</p>
<p>My configs look like this</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-cluster-ip-service
spec:
type: ClusterIP
selector:
component: api
ports:
- port: 80
targetPort: 5000
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-deployment
spec:
replicas: 1
selector:
matchLabels:
component: api
template:
metadata:
labels:
component: api
spec:
containers:
- name: books-api
image: mctabor/books-api
ports:
- containerPort: 5000
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: books-ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /api/?(.*)
backend:
serviceName: api-cluster-ip-service
servicePort: 80
</code></pre>
<p>and in my flask app I have the following:</p>
<pre><code>if __name__ == "__main__":
app.run(host='0.0.0.0', port=5000)
</code></pre>
<p>Not sure where I went wrong here</p>
<p>My minikube IP is 192.168.99.104, and I'm trying to hit the API route at 192.168.99.104/api/status.</p>
| <p>You didn't expose your service properly. First of all, a service of type <em>ClusterIP</em> is only available within the cluster. As you are using minikube, you should try changing the type to <em>NodePort</em>. </p>
<p>In second place, the <strong>port</strong> declared in the yaml is the port which makes the service visible to other services within the cluster.</p>
<p>After creating a NodePort Service, execute <code>kubectl get svc</code> to see the external port assigned to the service. You will see something like <em>80:30351/TCP</em>. This means you can access the service at <em>192.168.99.104:30351</em>.</p>
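<p>For reference, here is the service from your question changed to <em>NodePort</em>; the explicit <code>nodePort</code> is optional, and if you omit it Kubernetes picks a free port in the 30000-32767 range:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: api-cluster-ip-service
spec:
  type: NodePort
  selector:
    component: api
  ports:
  - port: 80
    targetPort: 5000
    nodePort: 30351   # optional; must fall in the node port range
</code></pre>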
<p><a href="https://stackoverflow.com/a/40774861/8971507">This</a> is a great answer at explaining how to expose a service in minikube</p>
|
<p>I have configured the following ingress for traefik but traefik is sending the entire traffic to app-blue-release. Ideally it should send only 30% traffic to blue and 70% traffic to green, but it's not working as per expectation.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
traefik.frontend.entryPoints: http
traefik.ingress.kubernetes.io/service-weights: |-
app-green-release: 70.0
app-blue-release: 30.0
creationTimestamp: 2019-06-04T06:00:37Z
generation: 2
labels:
app: traefik-app
name: traefik-app
namespace: mynamespace
resourceVersion: "645536328"
selfLink: /apis/extensions/v1beta1/namespaces/mynamespace/ingresses/traefik-app
uid: 4637377-747b-11e9-92ea-005056aeabf7
spec:
rules:
- host: mycompany2.com
http:
paths:
- backend:
serviceName: app-release
servicePort: 8080
- host: mycompany.com
http:
paths:
- backend:
serviceName: app-ui-release
servicePort: 80
path: /widget
- backend:
serviceName: app-green-release
servicePort: 8080
path: /
- backend:
serviceName: app-blue-release
servicePort: 8080
path: /
status:
loadBalancer: {}
</code></pre>
<p>I am using the following Traefik version:
<em>traefik:v1.7.11-alpine</em></p>
<p>Earlier, when the weights were configured as 10 (for blue) and 90 (for green), it was working fine. But since we changed them to 30 and 70 respectively, this problem has been happening.</p>
<p>Has anyone faced such an issue before? Thanks in advance for your help.</p>
| <p>That seems to be tracked in <a href="https://github.com/containous/traefik/issues/4494" rel="nofollow noreferrer">traefik issue 4494</a> (rather than your own <a href="https://github.com/containous/traefik/issues/4940" rel="nofollow noreferrer">issue 4940</a>).</p>
<blockquote>
<p>the annotation <code>ingress.kubernetes.io/service-weights</code> has been added in <a href="https://github.com/containous/traefik/blob/master/CHANGELOG.md#v170-2018-09-24" rel="nofollow noreferrer">v1.7</a>, before the annotation was ignored.</p>
</blockquote>
<p>However, <a href="https://github.com/containous/traefik/issues/4494#issuecomment-500892876" rel="nofollow noreferrer">as of June 11th, 2019</a>, Damien Duportal (Træfik's Developer Advocate) adds:</p>
<blockquote>
<p>There is no known workaround for now.<br>
We are working on this, but as the version 2.0 of Traefik is currently worked on, we have to wait :)</p>
</blockquote>
<hr>
<p>This comes from <a href="https://github.com/containous/traefik/pull/3112" rel="nofollow noreferrer">PR 3112</a></p>
<blockquote>
<p>Provides a new ingress annotation ingress.kubernetes.io/backend-weights which specifies a YAML-encoded, percentage-based weight distribution. With this annotation, we can do canary release by dynamically adjust the weight of ingress backends.</p>
</blockquote>
<p>(called initially <code>ingress.kubernetes.io/percentage-weights</code> before being renamed <code>ingress.kubernetes.io/service-weights</code> in <a href="https://github.com/containous/traefik/pull/3112/commits/11f6079d4814192db745c1175b9729bf64069de5" rel="nofollow noreferrer">commit 11f6079</a>)</p>
<p>The issue is still pending.<br>
First, try upgrading to <a href="https://hub.docker.com/_/traefik" rel="nofollow noreferrer">v1.7.12-alpine</a> to check whether the issue persists.</p>
<p><a href="https://github.com/yue9944882/traefik/blob/11f6079d4814192db745c1175b9729bf64069de5/docs/configuration/backends/kubernetes.md#user-content-general-annotations" rel="nofollow noreferrer">The example</a> mentions:</p>
<pre><code>service_backend1: 1% # Note that the field names must match service names referenced in the Ingress object.
service_backend2: 33.33%
service_backend3: 33.33% # Same as 33.33%, the percentage sign is optional
</code></pre>
<p>So in your case, do try:</p>
<pre><code> app-green-release: 70%
app-blue-release: 30%
</code></pre>
|
<p>How can we log in to an AKS cluster using a service account?
We are asked to execute <code>kubectl create clusterrolebinding add-on-cluster-admin .........</code>, but we are not aware of how to use this and log in to the cluster created in Azure.</p>
| <p>You can use this quick start tutorial: <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster" rel="noreferrer">https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster</a></p>
<p>basically you need to install kubectl:</p>
<pre><code>az aks install-cli
</code></pre>
<p>and pull credentials for AKS:</p>
<pre><code>az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
</code></pre>
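<p>If you specifically need to log in with a service account rather than your Azure user, a minimal sketch looks like this (the account name and namespace are illustrative, adjust them to your cluster):</p>
<pre><code># create a service account and grant it cluster-admin
kubectl create serviceaccount my-sa --namespace default
kubectl create clusterrolebinding add-on-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:my-sa

# verify that access works
kubectl get nodes
</code></pre>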
|
<p>I have a <code>webapp</code> running on Kubernetes on 2 pods.</p>
<p>I edit my deployment with a new image version, from <code>webapp:v1</code> to <code>webapp:v2</code>.</p>
<p>I figure an issue during the rolling out...</p>
<pre><code>podA is v2
podB is still v1
html is served from podA
with a <link> to styles.css
styles.css is served from podB
with v1 styles
=> html v2 + css v1 = 💥
</code></pre>
<hr>
<p>How can I be guaranteed, that all subsequent requests will be served from the same pod, or a pod with the same version that the html served?</p>
| <blockquote>
<p>How can I be guaranteed, that all subsequent requests will be served from the same pod, or a pod with the same version that the html served?</p>
</blockquote>
<p>Even if you do this, you will still have problems. Especially if your app is a single-page application. Consider this:</p>
<ul>
<li>User enters your website, gets <code>index.html</code> v1 </li>
<li>You release webapp:v2. After a few minutes, all the pods are running v2.</li>
<li>The user still has the webapp opened, with <code>index.html</code> v1</li>
<li>The user navigates in the app. This needs to load <code>styles.css</code>. The user gets <code>styles.css</code> v2. Boom, you're mixing versions, fail.</li>
</ul>
<p>I've run into this issue in production, and it's a pain to solve. In my experience, the best solution is:</p>
<ul>
<li>Tag all the resources (css, js, imgs, etc) with a version suffix (eg <code>styles.css</code> -> <code>styles-v1.css</code>, or a hash of the file contents <code>styles-39cf1a0b.css</code>). Many tools such as webpack, gulp, etc can do this automatically.</li>
<li><code>index.html</code> is not tagged, but it does reference the other resources with the right tag.</li>
<li>When deploying, do not delete the resources for older versions, just merge them with the newest ones. Make sure clients that have an old <code>index.html</code> can still get them successfully.</li>
<li>Delete old resources after a few versions, or better, after a period of time passes (maybe 1 week?).</li>
</ul>
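<p>The tagging step above can be sketched in plain shell; this is only an illustration of the idea (in practice a bundler does this as part of the build):</p>
<pre><code># sketch: suffix a built asset with a hash of its contents so every
# release gets a unique, long-cacheable filename (bundlers such as
# webpack or gulp-rev do the equivalent automatically)
printf 'body { color: red; }' > styles.css   # stand-in for the build output
hash=$(sha1sum styles.css | cut -c1-8)
cp styles.css "styles-${hash}.css"
ls styles-*.css
</code></pre>
<p>Because the name changes whenever the content changes, old and new assets can coexist side by side and be cached indefinitely.</p>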
<p>With this, the above scenario now works fine!</p>
<ul>
<li>User enters your website, gets <code>index.html</code> v1 </li>
<li>You release webapp:v2. This replaces <code>index.html</code>, but leaves all the js/css in place, adding new ones with the new version suffix.</li>
<li>The user still has the webapp opened, with <code>index.html</code> v1</li>
<li>The user navigates in the app. This needs to load <code>styles-v1.css</code>, which loads successfully and matches the index.html version. No version mixing = good!</li>
<li>Next time the user reloads the page, they get <code>index.html</code> v2, which points to the new <code>styles-v2.css</code>, etc. Still no version mixing!</li>
</ul>
<p>Doing this with Kubernetes is a bit tricky: you need to make your image build process take the files from a few older images and include them inside the new image, which is a bit strange.</p>
<p>Another solution is to stop serving your html/css/js from a pod, and serve it from blob storage instead. (Amazon S3, Google Cloud Storage, etc). This way, a deployment is just copying all the files, which get merged with the old files, giving you the desired behavior.</p>
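<p>For example, with Google Cloud Storage a deploy step could be a non-destructive sync (the bucket name here is made up); without the <code>-d</code> flag, <code>gsutil rsync</code> adds and updates files but does not delete the old hashed assets already in the bucket:</p>
<pre><code>gsutil -m rsync -r ./dist gs://my-webapp-assets
</code></pre>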
|
<p>New to spring boot.</p>
<p>While exploring Spring Boot environment variables, I came to know that
env variables can be accessed via <strong><em><code>${KeyName}</code></em></strong> from code.</p>
<p>Got a question like, </p>
<p><strong>Case 1:</strong>
In @Configuration files, we are accessing keys in application.properties using <strong><em><code>@Value(value = "${KeyName}")</code></em></strong>.
So, we are using almost same syntax for accessing env variables and accessing keys in application.properties.</p>
<p><strong>Case 2:</strong>
When trying to access the keys in application.properties using <code>System.getenv("keyname")</code>, I got only null.</p>
<p><strong>Case 3:</strong>
Recently worked on <strong><em>configmap in kubernetes with spring boot</em></strong>.</p>
<p>Config file looks like,</p>
<pre><code>spec:
  containers:
  - name: demo-configconsumercontainer
    image: springbootappimage:latest
    ports:
    - containerPort: 8080
    envFrom:
    - configMapRef:
        name: example-configmap
</code></pre>
<p>All the values from the ConfigMap are exported as environment variables, and
I am accessing those values via <code>@Value(value = "${KeyName}")</code> and via <code>System.getenv(KeyName)</code>.</p>
<ol>
<li>My question is: how is case 3 working when case 2 is not?</li>
<li>Is Spring Boot made in such a way that it allows access via <code>${KeyName}</code> but not via <code>System.getenv(KeyName)</code>? (i.e. case 2)</li>
</ol>
<p>Could some one clarify my questions here.</p>
| <p>Using the @Value annotation, <strong>you can access a property from many property sources</strong>, such as application.properties, environment variables, and a few more.</p>
<p>The important point here is <strong>ordering of these property sources</strong>.</p>
<p>Below is the order of looking up the property in various sources.</p>
<ol>
<li>Devtools global settings properties on your home directory (~/.spring-boot-devtools.properties when devtools is active).</li>
<li>@TestPropertySource annotations on your tests.</li>
<li>@SpringBootTest#properties annotation attribute on your tests.</li>
<li>Command line arguments.</li>
<li>Properties from SPRING_APPLICATION_JSON (inline JSON embedded in an environment variable or system property)</li>
<li>ServletConfig init parameters.</li>
<li>ServletContext init parameters.</li>
<li>JNDI attributes from java:comp/env.</li>
<li>Java System properties (System.getProperties()).</li>
<li><strong>OS environment variables.</strong></li>
<li>A RandomValuePropertySource that only has properties in random.*.</li>
<li>Profile-specific application properties outside of your packaged jar (application-{profile}.properties and YAML variants)</li>
<li>Profile-specific application properties packaged inside your jar (application-{profile}.properties and YAML variants)</li>
<li>Application properties outside of your packaged jar (application.properties and YAML variants).</li>
<li><strong>Application properties packaged inside your jar</strong> (application.properties and YAML variants).</li>
<li>@PropertySource annotations on your @Configuration classes.</li>
<li>Default properties (specified using SpringApplication.setDefaultProperties).</li>
</ol>
<p>In your case, the property is declared either in an environment variable or in application.yaml, and hence is accessible using the @Value annotation.</p>
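<p>As a quick illustration (the key names here are made up): suppose the same logical key is defined in both places. Spring's relaxed binding maps <code>greeting</code> to the <code>GREETING</code> environment variable, and since OS environment variables (#10) rank above application properties packaged in the jar (#15), <code>@Value("${greeting}")</code> resolves to the environment value:</p>
<pre><code># application.properties (packaged inside the jar)
greeting=hello-from-properties

# OS environment, e.g. injected from a Kubernetes ConfigMap as in your case 3
export GREETING=hello-from-env

# @Value("${greeting}") now yields "hello-from-env"
</code></pre>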
|
<p>I deployed drone.io using the helm chart. Builds are working fine.
For my secrets I followed this doc: <a href="https://readme.drone.io/extend/secrets/kubernetes/install/" rel="nofollow noreferrer">https://readme.drone.io/extend/secrets/kubernetes/install/</a></p>
<p>So I created a secret to hold the shared secret key between the plugin and the drone server (sorry for the ansible markups) :</p>
<pre><code>apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: drone-kubernetes
data:
  server: {{ server.stdout | b64encode }}
  cert: {{ cert.stdout | b64encode }}
  token: {{ token.stdout | b64encode }}
  secret: {{ secret.stdout | b64encode }}
</code></pre>
<p>A deployment for the kubernetes secret plugins :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: drone
    component: secrets
    release: drone
  name: drone-drone-secrets
spec:
  selector:
    matchLabels:
      app: drone
      component: secrets
      release: drone
  template:
    metadata:
      labels:
        app: drone
        component: secrets
        release: drone
    spec:
      containers:
      - env:
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              key: secret
              name: drone-kubernetes
        image: docker.io/drone/kubernetes-secrets:linux-arm64
        imagePullPolicy: IfNotPresent
        name: secrets
        ports:
        - containerPort: 3000
          name: secretapi
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/kubernetes/config
          name: kube
      volumes:
      - name: kube
        hostPath:
          path: /etc/kubernetes/admin.conf
          type: File
</code></pre>
<p>And a service for that deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: drone
    component: secrets
    release: drone
  name: drone-secrets
spec:
  ports:
  - name: secretapi
    port: 3000
    protocol: TCP
  selector:
    app: drone
    component: secrets
    release: drone
  type: ClusterIP
</code></pre>
<p>I patched the drone-server deployment to set the DRONE_SECRET_SECRET and DRONE_SECRET_ENDPOINT variables.</p>
<p>The pods for the kubernetes-secrets plugin do see the file "/etc/kubernetes/config" as expected and have SECRET_KEY set in their environment.
And from the drone-server pod:</p>
<pre><code>kubectl exec -i drone-drone-server-some-hash-here -- sh -c 'curl -s $DRONE_SECRET_ENDPOINT'
Invalid or Missing Signature
</code></pre>
<p>So far so good. Everything seems setup properly.</p>
<p>Here is my .drone.yml file for my test project : </p>
<pre><code>kind: pipeline
name: default
steps:
- name: kubectl
  image: private-repo.local:5000/drone-kubectl
  settings:
    kubectl: "get pods"
    kubernetes_server:
      from_secret: kubernetes_server
    kubernetes_cert:
      from_secret: kubernetes_cert
image_pull_secrets:
- kubernetes_server
- kubernetes_cert
---
kind: secret
name: kubernetes_server
get:
  path: drone-kubernetes
  name: server
---
kind: secret
name: kubernetes_cert
get:
  path: drone-kubernetes
  name: cert
---
kind: secret
name: kubernetes_token
get:
  path: drone-kubernetes
  name: token
</code></pre>
<p>Currently the custom plugin drone-kubectl only runs the <code>env</code> command to see if I'm getting my secrets, and I don't... What am I missing?</p>
| <p>Ok, I found my issue using the DEBUG environment variable in the drone-drone-secrets deployment. The error was:</p>
<pre><code>time="2019-06-10T06:29:22Z" level=debug msg="secrets: cannot find secret cert: kubernetes api: Failure 403 secrets \"drone-kubernetes\" is forbidden: User \"system:serviceaccount:toolchain:default\" cannot get resource \"secrets\" in API group \"\" in the namespace \"toolchain\""
</code></pre>
<p>So I created this serviceaccount and related role :</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: drone-drone-secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: drone-drone-secrets
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: drone-drone-secrets
subjects:
- kind: ServiceAccount
  name: drone-drone-secrets
roleRef:
  kind: Role
  name: drone-drone-secrets
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>And patched the deployment to use that service account. Now everything works.</p>
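<p>For reference, pointing the deployment at the new service account can be done with a patch like this (or by setting <code>serviceAccountName</code> in the pod spec directly):</p>
<pre><code>kubectl patch deployment drone-drone-secrets \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"drone-drone-secrets"}}}}'
</code></pre>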
|
<p>I'm setting up a K8s HA cluster. I have 2 K8s master nodes and 3 K8s worker nodes, using Flannel as the CNI. I tried to install GlusterFS and Heketi and found out that I can't curl to any pod on my worker nodes (using masternode01 to control the cluster).
The error says connection timed out. Is there anywhere I can check for the problem?</p>
<p>I tried to exec into my test pod and curl from that pod; it works and returns a result. Then I tried curling from every worker node; that works too. Only the master nodes (MasterNode01 and MasterNode02) can't curl, ping, or telnet to any pod on the worker nodes.</p>
| <p>So I found out that it was about iptables.</p>
<pre><code>sudo iptables --flush
sudo iptables -t nat --flush
sudo systemctl restart docker
</code></pre>
<p>I ran this command on my masternode and now it work fine.</p>
|
<p>I want to change the kube-proxy mode to IPVS in an existing cluster. Currently, it is running in iptables mode. How can I change it to IPVS without affecting the existing workload?</p>
<p>I have already installed all the required kernel modules to enable it. Also, my cluster was installed using kubeadm, but I did not use a configuration file during installation.
What exactly should the command be to enable IPVS on my cluster?</p>
<p><a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy/ipvs" rel="noreferrer">documentation1</a></p>
<p><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file" rel="noreferrer">documentation2</a></p>
| <p>Edit the configmap</p>
<pre><code>kubectl edit configmap kube-proxy -n kube-system
</code></pre>
<p>change mode from "" to ipvs</p>
<pre><code>mode: ipvs
</code></pre>
<p>Kill any kube-proxy pods</p>
<pre><code>kubectl get po -n kube-system
kubectl delete po -n kube-system <pod-name>
</code></pre>
<p>Verify kube-proxy is started with ipvs proxier</p>
<pre><code>kubectl logs [kube-proxy pod] | grep "Using ipvs Proxier"
</code></pre>
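<p>If <code>ipvsadm</code> is installed on the node, you can additionally confirm that the kernel-side virtual servers were actually programmed:</p>
<pre><code>sudo ipvsadm -Ln
</code></pre>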
|
<p>I'm trying to create some ingress rules for my Kubernetes cluster (currently, on localhost using Docker Desktop) and they are not working.</p>
<h3>What I'm trying to do</h3>
<pre><code>- App #1 : Some database (e.g. mongodb or RavenDb or Postgres, etc).
- App #2 : Some queue (rabbitmq, etc)
- App #3 : Some web site api #1
- App #4 : Some web site api #2
</code></pre>
<h3>Routes to access each app</h3>
<pre><code>- App #1 : <anything>:port 5200
- App #2 : <anything>:port 5300
- App #3 : /account/*:80, /accounts:80
- App #4 : /order/*:80, /orders/*:80
</code></pre>
<p>[note -> i'm not including ssl/443 port yet because i've not handled that, etc]</p>
<p>So here's an example of what I've got for the first app (which doesn't work):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: data-ravendb-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: data-ravendb-service
          servicePort: dashboard
---
apiVersion: v1
kind: Service
metadata:
  name: data-ravendb-service
  labels:
    app: hornet
    tier: backend
    component: data-ravendb
spec:
  ports:
  - name: dashboard
    port: 5200
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: data-ravendb-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hornet
        tier: backend
        component: data-ravendb
    spec:
      containers:
      - name: data-ravendb-container
        image: ravendb/ravendb
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: dashboard
</code></pre>
<p>How can I setup my ingress to allow those 4 Apps to access the backend services appropriately?</p>
| <p>Routing traffic other than HTTP/HTTPS through a Kubernetes Ingress is currently not fully supported. Also, Traefik, which you are using, is an HTTP reverse proxy, so it will be either hard or impossible to do this with that ingress controller.</p>
<p>See <a href="https://stackoverflow.com/questions/49439899/kubernetes-routing-non-http-request-via-ingress-to-container">Kubernetes: Routing non HTTP Request via Ingress to Container</a> </p>
|
<p>I'm using this helm chart to deploy: <a href="https://github.com/helm/charts/tree/master/stable/atlantis" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/atlantis</a></p>
<p>It deploys this stateful set: <a href="https://github.com/helm/charts/blob/master/stable/atlantis/templates/statefulset.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/atlantis/templates/statefulset.yaml</a></p>
<p>Is there a way I can add arbitrary config values to a pod spec that was deployed with a helm chart without having to modify the chart? For example I want to add an env: var that gets its value from a secret to the pod spec of the stateful set this chart deploys</p>
<p>Can I create my own helm chart that references this helm chart and add to the config of the pod spec? again without modifying the original chart?</p>
<p>EDIT: what Im talking about is adding an env var like this:</p>
<pre><code>env:
- name: GET_THIS_VAR_IN_ATLANTIS
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: abc
</code></pre>
<p>Maybe I can create another chart as a parent of this chart and override the entire <code>env:</code> block?</p>
| <blockquote>
<p>Is there a way I can add arbitrary config values to a pod spec that was deployed with a helm chart without having to modify the chart?</p>
</blockquote>
<p>You can only make changes that the chart itself supports.</p>
<p>If you look at the StatefulSet definition you linked to, there are a lot of <code>{{ if .Values.foo }}</code> knobs there. This is a fairly customizable chart and you probably can change most things. As a chart author, you'd have to explicitly write all of these conditionals and macro expansions in.</p>
<blockquote>
<p>For example I want to add an env: var that gets its value from a secret to the pod spec of the stateful set this chart deploys</p>
</blockquote>
<p>This very specific chart contains a block</p>
<pre><code>{{- range $key, $value := .Values.environment }}
- name: {{ $key }}
  value: {{ $value | quote }}
{{- end }}
</code></pre>
<p>so you could write a custom Helm YAML values file and add in</p>
<pre><code>environment:
  arbitraryKey: "any fixed value you want"
</code></pre>
<p>and then use the <code>helm install -f</code> option to supply that option when you install the chart.</p>
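<p>For example (the values file name is made up):</p>
<pre><code>helm install stable/atlantis -f my-values.yaml
</code></pre>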
<p>This chart does not support injecting environment values from secrets, beyond a half-dozen specific values it supports by default (<em>e.g.</em>, GitHub tokens).</p>
<p>As I say, this isn't generic at all: this is very specific to what this specific chart supports in its template expansions.</p>
|
<p>Today I was going through some documentation and discussions about matchLabels statement that is a part of a Deployment (or other objects) in Kubernetes. Example below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
</code></pre>
<p>In some discussions, I saw that depending on the version of the API it could be optional or mandatory to use this selector.</p>
<p>Ref:</p>
<p><a href="https://github.com/helm/charts/issues/7680" rel="nofollow noreferrer">https://github.com/helm/charts/issues/7680</a></p>
<p><a href="https://stackoverflow.com/questions/50309057/what-is-the-purpose-of-a-kubernetes-deployment-pod-selector">What is the purpose of a kubernetes deployment pod selector?</a></p>
<p>But I can't see any official documentation where it is stated explicitly if the usage of this selector was mandatory or not for a specific version of a Kubernetes API. Do you know of any official documentation where it is stated whether or not it is mandatory to use matchLabels selector?</p>
<p>I have checked these links out but I did not bump into an official statement:
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#deploymentspec-v1beta2-apps" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#deploymentspec-v1beta2-apps</a></p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a></p>
| <p><code>kubectl explain deploy.spec.selector --api-version=apps/v1</code></p>
<blockquote>
<p>Label selector for pods. Existing ReplicaSets whose pods are selected
by
this will be the ones affected by this deployment. It must match the pod
template's labels.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/apps/v1/types.go#L276-L279" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/apps/v1/types.go#L276-L279</a></p>
<blockquote>
<pre><code>Selector *metav1.LabelSelector `json:"selector" protobuf:"bytes,2,opt,name=selector"`
</code></pre>
</blockquote>
<p>The lack of <code>+optional</code> above this line tells you it's mandatory. It matches up with the error message you'll get if you try to make a deployment without one.</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
      dnsPolicy: ClusterFirst
      restartPolicy: Always
EOF
</code></pre>
<blockquote>
<p>error: error validating "STDIN": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false</p>
</blockquote>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/types.go#L1076-L1085" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/types.go#L1076-L1085</a></p>
<pre><code>type LabelSelector struct {
    // matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
    // map is equivalent to an element of matchExpressions, whose key field is "key", the
    // operator is "In", and the values array contains only "value". The requirements are ANDed.
    // +optional
    MatchLabels map[string]string `json:"matchLabels,omitempty" protobuf:"bytes,1,rep,name=matchLabels"`
    // matchExpressions is a list of label selector requirements. The requirements are ANDed.
    // +optional
    MatchExpressions []LabelSelectorRequirement `json:"matchExpressions,omitempty" protobuf:"bytes,2,rep,name=matchExpressions"`
}
</code></pre>
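<p>As the comment in the struct notes, each <code>matchLabels</code> entry is shorthand for a <code>matchExpressions</code> requirement, so the selector from your deployment could equivalently be written as:</p>
<pre><code>selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - nginx
</code></pre>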
|
<h2>Background</h2>
<p>I have a web application deployed as a Deployment with multiple pods. The Deployment is exposed to the internet with a Kubernetes service with an external IP.</p>
<p>The external IP is exposed to the world via Cloudflare:</p>
<pre><code>Client ---> Cloudflare ---> k8 service ---> pod
</code></pre>
<p>This web application needs to be defined with sticky sessions. So I patched my service with <code>sessionAffinity: ClientIP</code>, like this:</p>
<pre><code>kubectl patch service MYSERVICE -p '{"spec":{"sessionAffinity":"ClientIP"}}'
</code></pre>
<p>I checked it in the development environment and found that session affinity is not working well.</p>
<h2>Investigation</h2>
<p>I looked into the sticky-session problem and found that the Cloudflare caller IP can change from time to time, randomly. This redirects all the users to another pod, which is exactly the problem sticky sessions should solve.</p>
<p>So, the problem is that my LoadBalancer service routes traffic according to the Cloudflare IP, which is random.</p>
<h2>Possible solutions</h2>
<ol>
<li>I found that it may be possible to load-balance traffic according to a cookie. I found this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service#setting_generated_cookie_affinity" rel="nofollow noreferrer">source</a>, but it uses more advanced Kubernetes components, like BackendService and Ingress, that need to be well defined. Do you have a simpler solution?</li>
<li>Cloudflare attaches the real client IP to the request headers. Is it possible to make the load balancing look at this header and route traffic according to its value?</li>
</ol>
| <p>The easiest way is to use the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx Ingress Controller</a> with the official example ingress resource below.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - backend:
          serviceName: http-svc # k8 service name
          servicePort: 80 # k8 service port
        path: /
</code></pre>
<p>In order to use it, you need the Nginx Ingress Controller installed on your Kubernetes cluster. You can follow the installation instructions in the answer to this <a href="https://stackoverflow.com/questions/56353972/procedure-to-install-an-ingress-controller/56378650#56378650">post</a>. Then apply the above ingress object.</p>
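<p>Once applied, you can check that the affinity cookie is being set (using the hostname from the example above):</p>
<pre><code>curl -I http://stickyingress.example.com/
# the response should include a Set-Cookie: route=... header
</code></pre>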
<p>You can read also from this <a href="https://medium.com/@zhimin.wen/sticky-sessions-in-kubernetes-56eb0e8f257d" rel="nofollow noreferrer">blog</a> to get more information how to use sticky sessions in Kubernetes.</p>
<p>Also feel free to explore other <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md" rel="nofollow noreferrer">nginx ingress annotations</a> if you can use them.</p>
<p>Hope it helps!</p>
|
<p>While running mongodb in Kubernetes it gives the following error:</p>
<p>I tried to change the mongodb image.</p>
<pre><code>Warning Unhealthy 2m28s kubelet, minikube Readiness probe failed: MongoDB
shell version v4.0.10
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
2019-06-08T15:25:01.774+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:344:17
@(connect):2:6
exception: connect failed
Warning Unhealthy 2m17s kubelet, minikube Readiness probe failed: MongoDB shell version v4.0.10
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
2019-06-08T15:25:12.008+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:344:17
@(connect):2:6
exception: connect failed
</code></pre>
| <p>I was observing the same error; once I increased the <code>initialDelaySeconds:</code> value to a higher number in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="noreferrer">Readiness</a> probe spec inside the deployment, the issue went away and the <code>mongodb</code> Pod was spawned without error. It takes some time for Docker to pull the <code>docker.io/bitnami/mongodb</code> image and for the <code>mongodb</code> listener socket to initialize, so the readiness probe reports a failure while the container process is not yet ready for network connections.</p>
<pre><code> readinessProbe:
exec:
command:
- mongo
- --eval
- db.adminCommand('ping')
failureThreshold: 6
initialDelaySeconds: 360
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
</code></pre>
<p>Meanwhile, you can check <code>mongodb</code> Pod for any inbound connection statuses or any relevant event.</p>
<p><code>kubectl logs <mongodb-Pod-name></code></p>
<p>I've used the <a href="https://github.com/helm/charts/tree/master/stable/mongodb" rel="noreferrer">stable/mongodb</a> Helm chart to deploy MongoDB and discovered an error similar to yours:</p>
<p><code>helm install --name mongodb stable/mongodb</code></p>
<blockquote>
<p>Warning Unhealthy 38m kubelet,
gke-helm-test-default-pool-efed557c-52tf Readiness probe failed:
MongoDB shell version v4.0.9 connecting to:
mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
2019-06-10T12:46:46.054+0000 E QUERY [js] Error: couldn't connect
to server 127.0.0.1:27017, connection attempt failed: SocketException:
Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused
: connect@src/mongo/shell/mongo.js:343:13 @(connect):2:6 exception:
connect failed</p>
</blockquote>
<p>When I made the adjustment and raised <code>readinessProbe.initialDelaySeconds</code> from 5 seconds to 360 seconds, the <code>mongodb</code> container came up without any failures.</p>
<pre><code>helm install --name mongodb stable/mongodb --set readinessProbe.initialDelaySeconds=360
</code></pre>
|
<p>I want to offer MySQL as a service via Kubernetes. Whenever someone creates a database, I create a service for it, and to give access to the database I use a NodePort. But I only have about 2000 ports and I have 4000 users. I installed Kubernetes on bare metal. What should I do?
Why does Kubernetes have a limited NodePort range?</p>
| <p>@check nova</p>
<p>Kubernetes does not have a hard-limited NodePort range.
The default range is indeed 30000-32767, but it can be changed by setting the flag</p>
<pre><code>--service-node-port-range
</code></pre>
<p>Update the file /etc/kubernetes/manifests/kube-apiserver.yaml and add the line <code>--service-node-port-range=xxxxx-yyyyy</code></p>
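<p>For example, the relevant part of that static Pod manifest would then look like this (the range value here is just an illustration, and the other flags are omitted); the kubelet restarts the apiserver automatically when the manifest changes:</p>
<pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=20000-40000
</code></pre>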
<p>Be careful however and try not to generate any configuration conflicts for your network.</p>
<p>I hope it helps.</p>
|
<p>Hoping that there is some good insight into how to handle orchestration among microservices in a smaller on-prem company environment. Currently, we are looking to convert our systems from monolithic to microservices, like the rest of the world :).</p>
<p>The problem I'm having as an architect is justifying the big learning curve and server requirements with the resources we have at the moment. I can easily see us having around 50 microservices, which I feel is right on the line of whether to use Kubernetes or not.</p>
<p>The thing is, if we don't, how do we monitor it on-prem? We do use Azure DevOps, so I'm wondering if this would suffice for the deployment part.</p>
<p>Thanks! </p>
| <p>This comes down to a debate over essential vs accidental complexity. The industry's verdict is in: k8s strikes a good balance, while Swarm and other orchestrators are barely talked about anymore.</p>
<p><a href="https://www.reactiveops.com/blog/is-kubernetes-overkill" rel="nofollow noreferrer">https://www.reactiveops.com/blog/is-kubernetes-overkill</a></p>
<p>The platforms that build on kubernetes are still emerging to offer a simpler interface for those wanting a higher level of abstraction but aren't mature enough yet. GKE offers a very easy way to just deal with workloads, AKS is still maturing so you will likely face some bugs but it is tightly integrated with Azure Devops. </p>
<p>Microsoft is all-in on k8s although their on-prem offering doesn't seem fully fledged yet. GKE on-prem and Openshift 4.1 offer fully managed on-prem (if using vSphere) for list price of $1200/core/year. <a href="https://nedinthecloud.com/2019/02/19/azure-stack-kubernetes-cluster-is-not-aks/" rel="nofollow noreferrer">https://nedinthecloud.com/2019/02/19/azure-stack-kubernetes-cluster-is-not-aks/</a></p>
<p>Other ways of deploying on prem are emerging so long as you're comfortable with managing the compute, storage and network yourself. Installing and upgrading are becoming easier (see e.g. <a href="https://github.com/kubermatic/kubeone" rel="nofollow noreferrer">https://github.com/kubermatic/kubeone</a> which builds on the cluster-api abstraction). For bare metal ambitious projects like talos are making k8s specific immutable OSes (<a href="https://github.com/talos-systems/talos" rel="nofollow noreferrer">https://github.com/talos-systems/talos</a>).</p>
<p>AWS is still holding out hope for lock-in with ECS and Fargate but it remains to be seen if that will succeed.</p>
|
<p>I am facing issues while joining a worker node to an existing cluster.
Please find the details of my scenario below.<br>
I've created an HA cluster with <strong>4 masters and 3 workers</strong>.<br>
I removed 1 master node.<br>
The removed node is no longer part of the cluster, and the reset was successful.
Now I am joining the removed node as a worker node to the existing cluster.</p>
<p>I'm running the command below:</p>
<pre><code>kubeadm join --token giify2.4i6n2jmc7v50c8st 192.168.230.207:6443 --discovery-token-ca-cert-hash sha256:dd431e6e19db45672add3ab0f0b711da29f1894231dbeb10d823ad833b2f6e1b
</code></pre>
<p><em>In the above command, 192.168.230.207 is the cluster IP</em></p>
<p>Result of above command</p>
<pre><code>[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://192.168.230.206:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 192.168.230.206:6443: connect: connection refused
</code></pre>
<p>Already tried Steps </p>
<ol>
<li><p>Edited this file (<code>kubectl -n kube-system get cm kubeadm-config -o yaml</code>) using a kubeadm patch and removed references to the removed node ("192.168.230.206") </p></li>
<li><p>We are using external etcd so checked member list to confirm removed node is not a part of etcd now. Fired below command <code>etcdctl --endpoints=https://cluster-ip --ca-file=/etc/etcd/pki/ca.pem --cert-file=/etc/etcd/pki/client.pem --key-file=/etc/etcd/pki/client-key.pem member list</code></p></li>
</ol>
<p>Can someone please help me resolve this issue as I'm not able to join this node?</p>
| <p>In addition to @P Ekambaram's answer, I assume that you probably have to completely dispose of all redundant data from the previous <code>kubeadm join</code> setup.</p>
<ol>
<li><p>Remove cluster entries via <code>kubeadm</code> command on worker node: <code>kubeadm reset</code>;</p></li>
<li><p>Wipe all redundant data residing on worker node: <code>rm -rf /etc/kubernetes; rm -rf ~/.kube</code>;</p></li>
<li><p>Try to re-join the worker node. </p></li>
</ol>
|
<p>When setting up our cluster we use kubespray which creates an <code>kubernetes-admin</code> user for me. I believe the code is <a href="https://github.com/kubernetes-sigs/kubespray/blob/044dcbaed023ace6e583e87cf160ef703181013a/roles/kubernetes/client/tasks/main.yml#L61" rel="nofollow noreferrer">here</a>.</p>
<p>For some reason now this <code>admin.conf</code> was leaked to all of our developers and I somehow need to revoke it.</p>
<p>What (I think) I understand:</p>
<p>In our kubernetes cluster we use <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certs" rel="nofollow noreferrer">x509</a> to authenticate our users. For our users we create a private key, then create a CSR with that key and sign it with the <code>client-ca-file</code> and key from our kubernetes installation. Like this:</p>
<pre><code>openssl genrsa -out $K8S_USER-key.pem 2048
openssl req -new -key $K8S_USER-key.pem -out $K8S_USER.csr -subj "/CN=$K8S_USER"
openssl x509 -req -in $K8S_USER.csr -CA $cert_dir/ca.pem -CAkey $cert_dir/ca-key.pem -set_serial $SERIAL -out $K8S_USER.pem -days $DAYS
</code></pre>
<p>I assume the same was done for the <code>kubernetes-admin</code> user and I assume that when I change the <code>client-ca-file</code> the <code>admin.conf</code> cannot be used anymore to use the kubernetes API.</p>
<p>Is this correct? That changing the <code>client-ca-file</code> will invalidate the <code>kubernetes-admin</code>?</p>
<p>I also assume I need to recreate all my service accounts as they also will be invalidated.</p>
<p>EDIT: So, I spent some time creating a new CA cert, then issuing new certs for my user and the kube-apiserver. Not sure a restart of the apiserver pods was enough though. My user is being rejected as <code>Unable to connect to the server: x509: issuer name does not match subject from issuing certificate</code>. This doesn't make too much sense to me though. When I connect to the apiserver pod via exec and inspect the apiserver cert it has the same issuer as my kubeconfig user cert.</p>
| <p>Changing the <code>client-ca-file</code> will not invalidate the <code>kubernetes-admin</code>.</p>
<p>Referring to your case:</p>
<p>During creation of a config file for generating a Certificate Signing Request (CSR), you need to substitute the values marked with angle brackets (e.g. <<strong>MASTER_IP</strong>>) with real values before saving it to a file (e.g. csr.conf). The value for <strong>MASTER_CLUSTER_IP</strong> is the service cluster IP for the API server. I assume that you are using cluster.local as the default DNS domain name.</p>
<p>Did you add the same parameters into the API server start parameters ?</p>
<p>Submit the CSR to the CA, just like you would with any CSR, but with the <code>-selfsign</code> option. This requires your CA directory structure to be prepared first, which you will have to do anyway if you want to set up your own CA. You can find a tutorial on that <a href="https://jamielinux.com/docs/openssl-certificate-authority/create-the-root-pair.html" rel="nofollow noreferrer">here</a>, for example. Submitting the request can be done as follows:</p>
<pre><code>ca -selfsign -keyfile dist/ca_key.pem -in ca_csr.pem -out dist/ca_cert.pem \
-outdir root-ca/cert -config openssl/ca.cnf
</code></pre>
<p>A client node may refuse to recognize a self-signed CA certificate as valid. For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed CA certificate to all clients and refresh the local list for valid certificates.</p>
<p>On each client, perform the following operations:</p>
<p><code>$ sudo cp ca.crt /usr/local/share/ca-certificates/kubernetes.crt</code></p>
<p><code>$ sudo update-ca-certificates</code></p>
<pre><code>Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....
done.
</code></pre>
<p>The way to make the leaked certificate useless is to replace the CA in the cluster. This would require a restart of the cluster, though, and it would require re-issuing all the certificates. You will also have to recreate the service accounts.</p>
<p>To invalidate leaked tokens, just delete the secret that corresponds to the user's token.
Also keep in mind the certificate expiration date you set.</p>
<p>I hope it helps.</p>
|
<p>As described in the <a href="https://microk8s.io/docs/working" rel="noreferrer">docs</a> (based on <a href="https://stackoverflow.com/questions/55549522/how-to-configure-kubernetes-microk8s-to-use-local-docker-images">this answer</a>). </p>
<p>I am on Ubuntu as a host OS.</p>
<pre><code>docker --version
</code></pre>
<blockquote>
<p>Docker version 18.09.6, build 481bc77</p>
</blockquote>
<pre><code>microk8s 1.14/beta
</code></pre>
<hr>
<p>Enable the local registry for <code>microk8s</code>:</p>
<pre><code>microk8s.enable registry
</code></pre>
<p>Checking:</p>
<pre><code>watch microk8s.kubectl get all --all-namespaces
</code></pre>
<blockquote>
<p>container-registry pod/registry-577986746b-v8xqc 1/1 Running 0 36m</p>
</blockquote>
<p>Then:</p>
<p>Edit:</p>
<pre><code>sudo vim /etc/docker/daemon.json
</code></pre>
<p>Add this content:</p>
<pre><code>{
"insecure-registries" : ["127.0.0.1:32000"]
}
</code></pre>
<p>Restart:</p>
<pre><code>sudo systemctl restart docker
</code></pre>
<p>Double checking, see if insecure is applied:</p>
<pre><code>docker info | grep -A 2 Insecure
</code></pre>
<blockquote>
<pre><code>Insecure Registries:
127.0.0.1:32000
127.0.0.0/8
WARNING: No swap limit support
</code></pre>
</blockquote>
<p>Tag:</p>
<pre><code>docker tag my-registry/my-services/my-service:0.0.1-SNAPSHOT 127.0.0.1:32000/my-service
</code></pre>
<p>Checking:</p>
<pre><code>docker images
</code></pre>
<blockquote>
<p>127.0.0.1:32000/my-service latest e68f8a7e4675 19 hours ago 540MB</p>
</blockquote>
<p>Pushing:</p>
<pre><code>docker push 127.0.0.1:32000/my-service
</code></pre>
<p>I see my image here: <a href="http://127.0.0.1:32000/v2/_catalog" rel="noreferrer">http://127.0.0.1:32000/v2/_catalog</a></p>
<p>In <code>deployment-local.yml</code> I have, now the proper image:</p>
<pre><code>...spec:
containers:
- name: my-service-backend
image: 127.0.0.1:32000/my-service
imagePullPolicy: Always ...
</code></pre>
<p>Then applying:</p>
<pre><code>envsubst < ./.local/deployment-local.yml | microk8s.kubectl apply -f -
</code></pre>
<p>I see: <code>ContainerCreating</code></p>
<p>By: <code>microk8s.kubectl describe pods my-service-deployment-f85d5dcd5-cmd5</code></p>
<p>In the Events section:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 107s default-scheduler Successfully assigned default/my-service-deployment-f85d5dcd5-z75tr to my-desktop
Normal Pulling 25s (x4 over 106s) kubelet, my-desktop Pulling image "127.0.0.1:32000/my-service"
Warning Failed 25s (x4 over 106s) kubelet, my-desktop Failed to pull image "127.0.0.1:32000/my-service": rpc error: code = Unknown desc = failed to resolve image "127.0.0.1:32000/my-service:latest": no available registry endpoint: failed to do request: Head https://127.0.0.1:32000/v2/my-service/manifests/latest: http: server gave HTTP response to HTTPS client
Warning Failed 25s (x4 over 106s) kubelet, my-desktop Error: ErrImagePull
Normal BackOff 10s (x5 over 105s) kubelet, my-desktop Back-off pulling image "127.0.0.1:32000/my-service"
Warning Failed 10s (x5 over 105s) kubelet, my-desktop Error: ImagePullBackOff
</code></pre>
<p>Seems like my-service is stuck there.</p>
<p><strong>Q: What could be the reason?</strong></p>
<p><strong>UPDATE:</strong>
Changing everything to <code>localhost</code> helped, meaning I can now see this in the browser: <a href="http://localhost:32000/v2/_catalog" rel="noreferrer">http://localhost:32000/v2/_catalog</a>. </p>
<pre><code>{"repositories":["my-service"]}
</code></pre>
<p>But it worked only in Firefox.. weird. In Chrome it is pending..</p>
<p>Still:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 161m default-scheduler Successfully assigned default/my-service-deployment-6d4c5587df-72xvd to my-laptop
Normal Pulling 160m (x4 over 161m) kubelet, my-laptop Pulling image "localhost/my-service"
Warning Failed 160m (x4 over 161m) kubelet, my-laptop Failed to pull image "localhost/my-service": rpc error: code = Unknown desc = failed to resolve image "localhost/my-service:latest": no available registry endpoint: failed to do request: Head https://localhost/v2/my-service/manifests/latest: dial tcp 127.0.0.1:443: connect: connection refused
Warning Failed 160m (x4 over 161m) kubelet, my-laptop Error: ErrImagePull
Warning Failed 159m (x6 over 161m) kubelet, my-laptop Error: ImagePullBackOff
Normal BackOff 131m (x132 over 161m) kubelet, my-laptop Back-off pulling image "localhost/my-service"
Normal SandboxChanged 22m kubelet, my-laptop Pod sandbox changed, it will be killed and re-created.
Normal Pulling 21m (x4 over 22m) kubelet, my-laptop Pulling image "localhost/my-service"
Warning Failed 21m (x4 over 22m) kubelet, my-laptop Failed to pull image "localhost/my-service": rpc error: code = Unknown desc = failed to resolve image "localhost/my-service:latest": no available registry endpoint: failed to do request: Head https://localhost/v2/my-service/manifests/latest: dial tcp 127.0.0.1:443: connect: connection refused
Warning Failed 21m (x4 over 22m) kubelet, my-laptop Error: ErrImagePull
Warning Failed 20m (x6 over 22m) kubelet, my-laptop Error: ImagePullBackOff
Normal BackOff 2m22s (x87 over 22m) kubelet, my-laptop Back-off pulling image "localhost/my-service"
</code></pre>
<p>Seems it tries to connect by https.. </p>
<p>-- </p>
<p>Ok.. I had to add the port to the <code>image:</code> part of the yml. Will accept the answer below. Thanks.</p>
| <p>In microk8s there is a big difference between <strong>localhost</strong> and <strong>127.0.0.1</strong>. As you mentioned, you based this on <a href="https://stackoverflow.com/questions/55549522/how-to-configure-kubernetes-microk8s-to-use-local-docker-images">this</a> Stack case, and there the user was using <strong>localhost</strong>.</p>
<p>The main issue here is the error <code>no available registry endpoint: failed to do request:</code>. It means microk8s cannot find the endpoint.</p>
<p>If you run the command below, you will see the same output:</p>
<pre><code>$ cat /var/snap/microk8s/current/args/containerd-template.toml | grep endpoint
endpoint = ["https://registry-1.docker.io"]
endpoint = ["http://localhost:32000"]
</code></pre>
<p>As you can see, in microk8s the endpoint is <strong>localhost</strong>, not 127.0.0.1.</p>
<p>Please change the image in deployment-local.yaml: </p>
<pre><code>...spec:
  containers:
  - name: my-service-backend
    image: localhost:32000/my-service
    imagePullPolicy: Always ...
</code></pre>
<p>Let me know if you run into any other issues.</p>
<p>In my case I did not need to tag and push the image again with localhost; only the change in the YAML was required.</p>
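<p>One related detail, in case you also re-tag and push with <strong>localhost</strong> instead of 127.0.0.1: the Docker daemon on the host must then trust that address as well. A sketch of <code>/etc/docker/daemon.json</code> (restart docker afterwards with <code>sudo systemctl restart docker</code>):</p>
<pre><code>{
  "insecure-registries" : ["localhost:32000", "127.0.0.1:32000"]
}
</code></pre>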
|
<p>I am trying to create a kubernetes job inside which I will run "dig srv" queries to find out the IP address of all the pods for any specific service running on the same cluster.</p>
<p>Is this achievable ? </p>
<p><em>I would like to elaborate a little more on the problem statement. There are a few services already running on the cluster. The requirement is to have a tool that can accept a service name and list down the IP addresses of all the pods belonging to that service.</em></p>
<p>I was able to do this by using kubectl commands along with selectors and jq tooling. But for some reason, I am not allowed to run kubectl commands in this environment. </p>
<p>I want to use <code>dig srv</code> queries to resolve pod IPs for provided service name.</p>
| <p>You can use a headless service (which has no ClusterIP and therefore no internal load balancing). If you provide a selector, DNS queries for the service return A records for the individual pods.</p>
<p>See: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#headless-services</a></p>
<p>Consider the following example:</p>
<p>Deployment of some pods:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.16
ports:
- containerPort: 80
</code></pre>
<p>For this deployment the following headless service is added:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
</code></pre>
<p>This can now be queried using DNS (inside the cluster)</p>
<pre><code>$ kubectl run shell -i --rm --tty --restart=Never --image=busybox
# nslookup -type=A nginx
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: nginx.default.svc.cluster.local
Address: 10.34.0.2
Name: nginx.default.svc.cluster.local
Address: 10.42.0.2
Name: nginx.default.svc.cluster.local
Address: 10.46.0.1
</code></pre>
<p>All internal Pod IPs are returned as DNS A records.</p>
|
<p>I have a cluster running on Azure cloud. I have a deployment of a peer service on that cluster, but pods for that deployment are not getting created. I have also scaled up the replica set for that deployment. </p>
<p>Even when I try to create a simple deployment of the docker busybox image, it is not able to create the pods.</p>
<p>Please guide me what could be the issue ?</p>
<p><strong>EDIT</strong></p>
<p>output for describe deployment </p>
<pre><code>Name: peer0-org-myorg
Namespace: internal
CreationTimestamp: Tue, 28 May 2019 06:12:21 +0000
Labels: cattle.io/creator=norman
workload.user.cattle.io/workloadselector=deployment-internal-peer0-org-myorg
Annotations: deployment.kubernetes.io/revision=1
field.cattle.io/creatorId=user-b29mj
field.cattle.io/publicEndpoints=null
Selector: workload.user.cattle.io/workloadselector=deployment-internal-peer0-org-myorg
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: workload.user.cattle.io/workloadselector=deployment-internal-peer0-org-myorg
Annotations: cattle.io/timestamp=2019-06-11T08:19:40Z
field.cattle.io/ports=[[{"containerPort":7051,"dnsName":"peer0-org-myorg-hostport","hostPort":7051,"kind":"HostPort","name":"7051tcp70510","protocol":"TCP","sourcePort":7051},{"containerPo...
Containers:
peer0-org-myorg:
Image: hyperledger/fabric-peer:1.4.0
Ports: 7051/TCP, 7053/TCP
Host Ports: 7051/TCP, 7053/TCP
Environment:
CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS: couchdb0:5984
CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD: root
CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME: root
CORE_LEDGER_STATE_STATEDATABASE: CouchDB
CORE_LOGGING_CAUTHDSL: INFO
CORE_LOGGING_GOSSIP: WARNING
CORE_LOGGING_GRPC: WARNING
CORE_LOGGING_MSP: WARNING
CORE_PEER_ADDRESS: peer0-org-myorg-com:7051
CORE_PEER_ADDRESSAUTODETECT: true
CORE_PEER_FILESYSTEMPATH: /var/hyperledger/peers/peer0/production
CORE_PEER_GOSSIP_EXTERNALENDPOINT: peer0-org-myorg-com:7051
CORE_PEER_GOSSIP_ORGLEADER: false
CORE_PEER_GOSSIP_USELEADERELECTION: true
CORE_PEER_ID: peer0.org.myorg.com
CORE_PEER_LOCALMSPID: orgMSP
CORE_PEER_MSPCONFIGPATH: /mnt/crypto/crypto-config/peerOrganizations/org.myorg.com/peers/peer0.org.myorg.com/msp
CORE_PEER_PROFILE_ENABLED: true
CORE_PEER_TLS_CERT_FILE: /mnt/crypto/crypto-config/peerOrganizations/org.myorg.com/peers/peer0.org.myorg.com/tls/server.crt
CORE_PEER_TLS_ENABLED: false
CORE_PEER_TLS_KEY_FILE: /mnt/crypto/crypto-config/peerOrganizations/org.myorg.com/peers/peer0.org.myorg.com/tls/server.key
CORE_PEER_TLS_ROOTCERT_FILE: /mnt/crypto/crypto-config/peerOrganizations/org.myorg.com/peers/peer0.org.myorg.com/tls/ca.crt
CORE_PEER_TLS_SERVERHOSTOVERRIDE: peer0.org.myorg.com
CORE_VM_ENDPOINT: unix:///host/var/run/docker.sock
FABRIC_LOGGING_SPEC: DEBUG
Mounts:
/host/var/run from worker1-dockersock (ro)
/mnt/crypto from crypto (ro)
/var/hyperledger/peers from vol2 (rw)
Volumes:
crypto:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: worker1-crypto-pvc
ReadOnly: false
vol2:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: worker1-pvc
ReadOnly: false
worker1-dockersock:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: worker1-dockersock
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: peer0-org-myorg-6d6645ddd7 (1/1 replicas created)
NewReplicaSet: <none>
Events: <none>
</code></pre>
| <p>There are a million reasons why your pods could be broken and there is a bunch of information that you can get that would give you more information on why the pods are not being created. I would start with:</p>
<p>What are the pods saying: </p>
<pre><code>kubectl get pods --all-namespaces -o wide
</code></pre>
<p>If you can see the pods but they have errors, what do the errors say. Further describe the broken pods.</p>
<pre><code>kubectl describe pod <pod-name>
</code></pre>
<p>Or grab logs </p>
<pre><code>kubectl logs <pod-name>
</code></pre>
<p>Maybe something went wrong with your deployment. Check the deployments.</p>
<pre><code>kubectl get deployments
</code></pre>
<p>Describe the deployments (like pods above), look for errors. </p>
<p>We can't really help you until you provide way more information. What debugging attempts have you made so far? What errors are displayed and where are you seeing them? What is actually happening when there's an attempt to create the pods.</p>
<p>kubectl Get/Describe/Log everything and let us know what's actually happening.</p>
<p>Here's a good place to start: </p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/" rel="noreferrer">Troubleshoot clusters</a> </li>
<li><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">Kubectl command cheatsheet</a></li>
</ul>
<p>EDIT: Added a pic of troubleshooting in Azure Portal (mentioned in comments below)</p>
<p><a href="https://i.stack.imgur.com/4s1k0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/4s1k0.png" alt="enter image description here"></a></p>
|
<p>Our Kubernetes cluster includes an nginx load balancer that forwards the requests to other pods.</p>
<p>However, the nginx sees local source IPs and therefore cannot set the correct X-Real-IP header. I tried setting the externalTrafficPolicy value of nginx to "Local" but the IP does not change.</p>
<p>Section of the nginx service config:</p>
<pre><code> "selector": {
"app": "nginx-ingress",
"component": "controller",
"release": "loping-lambkin"
},
"clusterIP": "10.106.1.182",
"type": "LoadBalancer",
"sessionAffinity": "None",
"externalTrafficPolicy": "Local",
"healthCheckNodePort": 32718
</code></pre>
<p>Result:</p>
<pre><code>GET / HTTP/1.1
Host: example.com:444
X-Request-ID: dd3310a96bf154d2ac38c8877dec312c
X-Real-IP: 10.39.0.0
X-Forwarded-For: 10.39.0.0
</code></pre>
<p>We use a bare metal cluster with metallb.</p>
| <p>I found out that weave needs to be configured using <code>NO_MASQ_LOCAL=1</code> to respect the externalTrafficPolicy property</p>
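<p>For reference, this can be configured on the <code>weave-net</code> DaemonSet in <code>kube-system</code>. A minimal sketch of the relevant fragment (the container name is the one from the standard Weave manifest; verify against your own deployment):</p>
<pre><code># excerpt of the weave-net DaemonSet spec
spec:
  template:
    spec:
      containers:
      - name: weave
        env:
        - name: NO_MASQ_LOCAL
          value: "1"
</code></pre>
<p>Once the DaemonSet pods restart, Weave stops masquerading locally-destined traffic, so nginx sees the real client source IP.</p>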
|
<p>I am using Ambassador as the ingress controller for my kubernetes setup. I need to connect to multiple ports on my containers, for example, I have a RethinkDB container and I need to connect to port 8085 for its web-ui, port 28015 for RDB-API and port 29015 for adding nodes to Rethinkdb and clustering.</p>
<p>I tried different configurations but they didn't work. The configurations that I tried:
1- This configuration works only for the last mapping, which means if I replace the 8085 mapping with 29015 and put it at the end, I am able to access the web-ui but not the other parts, and so on.</p>
<pre><code>getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Mapping
name: rethinkdb_mapping
prefix: /rethinkdb:28015/
service: rethinkdb:28015
labels:
ambassador:
- request_label:
- rethinkdb:28015
---
apiVersion: ambassador/v1
kind: Mapping
name: rethinkdb_mapping
- prefix: /rethinkdb:8085/
service: rethinkdb:8085
labels:
ambassador:
- request_label:
- rethinkdb:8085
---
apiVersion: ambassador/v1
kind: Mapping
name: rethinkdb_mapping
prefix: /rethinkdb:29015/
service: rethinkdb:29015
labels:
ambassador:
- request_label:
- rethinkdb:29015
</code></pre>
<p>2- This one didn't work at all</p>
<pre><code>getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Mapping
name: rethinkdb_mapping
- prefix: /rethinkdb:8085/
service: rethinkdb:8085
- prefix: /rethinkdb:29015/
service: rethinkdb:29015
- prefix: /rethinkdb:28015/
service: rethinkdb:28015
</code></pre>
<p>How shall I configure Ambassador so I can have access to all ports of my container?</p>
| <p>Give each Mapping a different name, as in the example below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: rethinkdb
annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Mapping
name: rethinkdb_mapping
prefix: /rethinkdb:28015/
service: rethinkdb:28015
labels:
ambassador:
- request_label:
- rethinkdb:28015
---
apiVersion: ambassador/v1
kind: Mapping
name: rethinkdb_mapping1
prefix: /rethinkdb:8085/
service: rethinkdb:8085
labels:
ambassador:
- request_label:
- rethinkdb:8085
---
apiVersion: ambassador/v1
kind: Mapping
name: rethinkdb_mapping2
prefix: /rethinkdb:29015/
service: rethinkdb:29015
labels:
ambassador:
- request_label:
- rethinkdb:29015
spec:
type: ClusterIP
clusterIP: None
</code></pre>
<p>Remember to put the right name of the service into the <code>service</code> field inside the mapping definitions.</p>
<p><strong>Mind the indentation and correct syntax.</strong></p>
<p>I hope it helps.</p>
|
<p>Looking at documentation for installing Knative requires a Kubernetes cluster v1.11 or newer with the MutatingAdmissionWebhook admission controller enabled. So checking the documentation for this I see the following command: </p>
<pre><code>kube-apiserver -h | grep enable-admission-plugins
</code></pre>
<p>However, kube-apiserver is running inside a docker container on master. Logging in as admin to master, I am not seeing this on the command line after install. What steps do I need to take to to run this command? Its probably a basic docker question but I dont see this documented anywhere in Kubernetes documentation.</p>
<p>So what I really need to know is whether this command line is the best way to set these plugins, and also how exactly to enter the container to execute the command. </p>
<p><a href="https://stackoverflow.com/questions/50352621/where-is-kube-apiserver-located">Where is kube-apiserver located</a></p>
<p>Should I enter the container? What is the name of the container, and how do I enter it to execute the command? </p>
| <p>I think that the answer from <a href="https://stackoverflow.com/questions/50352621/where-is-kube-apiserver-located">@embik</a> that you pointed out in the initial question is quite decent, but I'll try to shed light on some aspects that may be useful to you.</p>
<p>As @embik mentioned in his answer, the <code>kube-apiserver</code> binary actually resides in a particular container within the K8s api-server Pod, so you are free to inspect it; just execute <code>/bin/sh</code> on that Pod:</p>
<p><code>kubectl exec -it $(kubectl get pods -n kube-system| grep kube-apiserver|awk '{print $1}') -n kube-system -- /bin/sh</code></p>
<p>You might be able to propagate the desired <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-controller" rel="noreferrer">enable-admission-plugins</a> through the <code>kube-apiserver</code> command inside this Pod; however, any modification will disappear once the api-server Pod re-spawns (e.g. on a master node reboot).</p>
<p>The essential api-server config is located in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>. The node agent <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">kubelet</a> manages the <code>kube-apiserver</code> runtime Pod directly (it is a static Pod, so the K8s <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="noreferrer">Scheduler</a> is not involved), and whenever health checks fail or the manifest changes, <code>kubelet</code> re-creates the affected Pod from the primary <code>kube-apiserver.yaml</code> file.</p>
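<p>For example, to enable an admission plugin persistently you would edit the flags in that manifest on the master, rather than inside the running container. A sketch of the relevant excerpt (the other flags in your file stay as they are):</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
    # ...keep the remaining flags unchanged
</code></pre>
<p>As soon as the file is saved, kubelet notices the change and re-creates the api-server Pod with the new flags.</p>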
|
<p>I'm running Kubernetes in virtual machines and going through the basic tutorials, currently <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook-logs-metrics-with-elk/" rel="noreferrer">Add logging and metrics to the PHP / Redis Guestbook example</a>. I'm trying to install kube-state-metrics:</p>
<pre><code>git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl create -f kube-state-metrics/kubernetes
</code></pre>
<p>but it fails. </p>
<pre><code>kubectl describe pod --namespace kube-system kube-state-metrics-7d84474f4d-d5dg7
</code></pre>
<blockquote>
<p>...</p>
<p>Warning Unhealthy 28m (x8 over 30m) kubelet, kubernetes-node1 Readiness probe failed: Get <a href="http://192.168.129.102:8080/healthz" rel="noreferrer">http://192.168.129.102:8080/healthz</a>: dial tcp 192.168.129.102:8080: connect: connection refused</p>
</blockquote>
<pre><code>kubectl logs --namespace kube-system kube-state-metrics-7d84474f4d-d5dg7 -c kube-state-metrics
</code></pre>
<blockquote>
<p>I0514 17:29:26.980707 1 main.go:85] Using default collectors<br>
I0514 17:29:26.980774 1 main.go:93] Using all namespace<br>
I0514 17:29:26.980780 1 main.go:129] metric white-blacklisting: blacklisting the following items:<br>
W0514 17:29:26.980800 1 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.<br>
I0514 17:29:26.983504 1 main.go:169] Testing communication with server<br>
F0514 17:29:56.984025 1 main.go:137] Failed to create client: ERROR communicating with apiserver: Get <a href="https://10.96.0.1:443/version?timeout=32s" rel="noreferrer">https://10.96.0.1:443/version?timeout=32s</a>: dial tcp 10.96.0.1:443: i/o timeout</p>
</blockquote>
<p>I'm unsure if this 10.96.0.1 IP is correct. My virtual machines are in a bridged network 10.10.10.0/24 and a host-only network 192.168.59.0/24. When initializing Kubernetes I used the argument <code>--pod-network-cidr=192.168.0.0/16</code> so that's one more IP range that I'd expect. But 10.96.0.1 looks unfamiliar.</p>
<p>I'm new to Kubernetes, just doing the basic tutorials, so I don't know what to do now. How to fix it or investigate further?</p>
<hr>
<p>EDIT - additonal info:</p>
<p><code>kubectl get nodes -o wide</code></p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kubernetes-master Ready master 15d v1.14.1 10.10.10.11 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
kubernetes-node1 Ready <none> 15d v1.14.1 10.10.10.5 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
kubernetes-node2 Ready <none> 15d v1.14.1 10.10.10.98 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.9.2
</code></pre>
<p>The command I used to initialize the cluster:</p>
<pre><code>sudo kubeadm init --apiserver-advertise-address=192.168.59.20 --pod-network-cidr=192.168.0.0/16
</code></pre>
| <p>The reason for this is probably an overlap of the Pod network with the node network: you set the Pod network CIDR to 192.168.0.0/16, which includes your host-only network, whose address range is 192.168.59.0/24. </p>
<p>To solve this you can either change the Pod network CIDR to 192.168.0.0/24 (<strong>not recommended, as this leaves only 254 usable addresses for your Pod networking</strong>),</p>
<p>or you can use a different range for Calico. If you want to do it on a running cluster, here is an <a href="https://docs.projectcalico.org/v3.6/networking/changing-ip-pools" rel="nofollow noreferrer">instruction</a>. </p>
<p>Another way I tried:</p>
<p>Edit the Calico <a href="https://docs.projectcalico.org/v3.7/manifests/calico.yaml" rel="nofollow noreferrer">manifest</a> to use a different range (for example 10.0.0.0/8), run <code>sudo kubeadm init --apiserver-advertise-address=192.168.59.20 --pod-network-cidr=10.0.0.0/8</code>, and apply the edited manifest after the init. </p>
<p>Another option would be a different CNI like Flannel (which uses 10.244.0.0/16). </p>
<p>You can find more information about ranges of CNI plugins <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network" rel="nofollow noreferrer">here</a>. </p>
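<p>If you rebuild the cluster from scratch, the non-overlapping range can also be captured in a kubeadm config file instead of command-line flags. A sketch assuming the Flannel default range (adjust <code>podSubnet</code> to whatever range your CNI uses; the <code>--apiserver-advertise-address</code> flag goes in a separate InitConfiguration block):</p>
<pre><code># kubeadm-config.yaml (sketch), used as: kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
</code></pre>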
|
<p>We have a system of two application A & B deployed over kubernetes cluster. What we need is to set an event/trigger in kubernetes. So each time a pod of application B is created, it will trigger an action to add its IP to configmap. Also when any pod of application B is deleted, it will trigger an action to remove its IP from the configmap.</p>
<p>Is there any built-in object in Kubernetes to perform such a function, or do we need a 3rd-party plugin?</p>
| <p>Basically you have two options:</p>
<ul>
<li>Use the kubernetes (watch) API and handle the pod created/deleted events accordingly</li>
<li>Use container lifecycle hooks to react to container creation/deletion, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/</a></li>
</ul>
<p>The latter is at the container level, but it should work for your use case as well.</p>
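<p>As an illustration of the lifecycle-hook approach (everything here is hypothetical: it assumes the image for application B ships <code>kubectl</code>, that the pod's service account is allowed to patch ConfigMaps, and that a ConfigMap named <code>app-b-ips</code> already exists):</p>

```yaml
# Pod template fragment for application B (illustrative only)
spec:
  containers:
  - name: app-b
    image: app-b:latest            # assumed image that contains kubectl
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP  # downward API: this pod's IP
    lifecycle:
      postStart:
        exec:
          command:
          - /bin/sh
          - -c
          - >
            kubectl patch configmap app-b-ips --type merge
            -p "{\"data\":{\"$(hostname)\":\"$POD_IP\"}}"
      preStop:
        exec:
          command:
          - /bin/sh
          - -c
          - >
            kubectl patch configmap app-b-ips --type json
            -p "[{\"op\":\"remove\",\"path\":\"/data/$(hostname)\"}]"
```

<p>Note that <code>postStart</code> runs before the pod is Ready and <code>preStop</code> is skipped on hard kills, which is one reason the watch-API approach is more robust.</p>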
|
<p>Using Helm, how do I configure the Helm Chart to automatically include the port forwarding? </p>
<p>Documentation I have seen so far indicate I create a Helm chart, I run ...</p>
<pre><code>helm install myhelmchart
</code></pre>
<p>... then forward the port manually...</p>
<pre><code> export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=myhelmchart,app.kubernetes.io/instance=pouring-rat" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
</code></pre>
| <p>You can't: Helm's operation is limited to using its template language to render some set of YAML files, and submitting them to the Kubernetes server. It kind of only does <code>kubectl apply</code> and <code>kubectl delete</code>.</p>
<p>One trick you might find useful, though, is that <code>kubectl port-forward</code> can take things other than pod names as of <code>kubectl</code> 1.10 (this is client-side functionality; if you have a very old cluster you just need a new enough client). It will look up an appropriate pod name for you. So you can</p>
<pre><code>kubectl port-forward service/pouring-rat-nginx 8080:80
</code></pre>
<p>I've found that <code>kubectl port-forward</code> works fine for lightweight testing and debugging and "if I send a <code>curl</code> request does it act the way I want". It also does things like routinely shut down after some idle time, and since it is tunneling TCP over HTTP it's not the fastest thing. Setting up a LoadBalancer-type service would be a better way to set up access from outside the cluster. Knobs like the type of service and any annotations you need to control the load balancer are good things to expose through Helm values.</p>
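<p>What you can do instead is expose the service type through chart values, so the chart itself creates the externally reachable endpoint. A sketch (the helper names follow the standard <code>helm create</code> scaffolding and are assumptions about your chart):</p>

```yaml
# values.yaml (assumed layout)
service:
  type: LoadBalancer   # or NodePort / ClusterIP
  port: 80

# templates/service.yaml (fragment)
apiVersion: v1
kind: Service
metadata:
  name: {{ include "myhelmchart.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
  selector:
    app.kubernetes.io/name: {{ include "myhelmchart.name" . }}
```

<p>Then <code>helm install myhelmchart --set service.type=NodePort</code> gives you a port reachable on every node without any manual <code>kubectl port-forward</code>.</p>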
|
<p>I want to use ingress with HAProxy in my Kubernetes cluster; how should I use it?
I have tried it on my local system: I deployed the HAProxy ingress controller in a different namespace, but I randomly get 503 errors from the HAProxy pod that was created.</p>
<p>Try the following manifests.</p>
<p>Default backend:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: ingress-default-backend
name: ingress-default-backend
spec:
replicas: 1
selector:
matchLabels:
run: ingress-default-backend
template:
metadata:
labels:
run: ingress-default-backend
spec:
containers:
- name: ingress-default-backend
image: gcr.io/google_containers/defaultbackend:1.0
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
labels:
run: ingress-default-backend
name: ingress-default-backend
spec:
ports:
- name: port-1
port: 8080
protocol: TCP
targetPort: 8080
selector:
run: ingress-default-backend
</code></pre>
<p>HAProxy ingress controller:</p>
<pre><code>apiVersion: v1
data:
dynamic-scaling: "true"
backend-server-slots-increment: "4"
kind: ConfigMap
metadata:
name: haproxy-configmap
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
spec:
replicas: 1
selector:
matchLabels:
run: haproxy-ingress
template:
metadata:
labels:
run: haproxy-ingress
spec:
containers:
- name: haproxy-ingress
image: quay.io/jcmoraisjr/haproxy-ingress
args:
- --default-backend-service=default/ingress-default-backend
- --default-ssl-certificate=default/tls-secret
- --configmap=$(POD_NAMESPACE)/haproxy-configmap
- --reload-strategy=native
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: stat
containerPort: 1936
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
spec:
externalIPs:
- 172.17.0.50
ports:
- name: port-1
port: 80
protocol: TCP
targetPort: 80
- name: port-2
port: 443
protocol: TCP
targetPort: 443
- name: port-3
port: 1936
protocol: TCP
targetPort: 1936
selector:
run: haproxy-ingress
</code></pre>
<p>update externalIPs as per your environment</p>
|
<p>I used calico as CNI in my k8s, I'm trying to deploy a single master cluster in 3 servers. I'm using <code>kubeadm</code>, follow the official <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">setup guide</a>. But some error occurred, <code>kube-controller-manager</code> and <code>kube-scheduler</code> go in CrashLoopBackOff error and cannot run well.</p>
<p>I have tried <code>kubeadm reset</code> at every server, and also restart the server, downgrade docker.</p>
<p>I use <code>kubeadm init --apiserver-advertise-address=192.168.213.128 --pod-network-cidr=192.168.0.0/16</code> to init the master, and run <code>kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml</code> and <code>kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml</code> to start calico.</p>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# docker info
Containers: 20
Running: 18
Paused: 0
Stopped: 2
Images: 10
Server Version: 18.09.6
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-957.12.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 972.6MiB
Name: k8s-master
ID: RN6I:PP52:4WTU:UP7E:T3LF:MXVZ:EDBX:RSII:BIRW:36O2:CYJ3:FRV2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://i70c3eqq.mirror.aliyuncs.com/
https://docker.mirrors.ustc.edu.cn/
Live Restore Enabled: false
Product License: Community Engine
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# kubelet --version
Kubernetes v1.14.1
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl get no -A
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 49m v1.14.1
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-xmc5t 2/2 Running 0 27m
kube-system coredns-6765558d84-945mt 1/1 Running 0 28m
kube-system coredns-6765558d84-xz7lw 1/1 Running 0 28m
kube-system coredns-fb8b8dccf-z87sl 1/1 Running 0 31m
kube-system etcd-k8s-master 1/1 Running 0 30m
kube-system kube-apiserver-k8s-master 1/1 Running 0 29m
kube-system kube-controller-manager-k8s-master 0/1 CrashLoopBackOff 8 30m
kube-system kube-proxy-wp7n9 1/1 Running 0 31m
kube-system kube-scheduler-k8s-master 1/1 Running 7 29m
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl logs -n kube-system kube-controller-manager-k8s-master
I0513 13:49:51.836448 1 serving.go:319] Generated self-signed cert in-memory
I0513 13:49:52.988794 1 controllermanager.go:155] Version: v1.14.1
I0513 13:49:53.003873 1 secure_serving.go:116] Serving securely on 127.0.0.1:10257
I0513 13:49:53.005146 1 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
I0513 13:49:53.008661 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-controller-manager...
I0513 13:50:12.687383 1 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
I0513 13:50:12.700344 1 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"39adc911-7582-11e9-a70e-000c2908c796", APIVersion:"v1", ResourceVersion:"1706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' k8s-master_fbfa0502-7585-11e9-9939-000c2908c796 became leader
I0513 13:50:13.131264 1 plugins.go:103] No cloud provider specified.
I0513 13:50:13.166088 1 controller_utils.go:1027] Waiting for caches to sync for tokens controller
I0513 13:50:13.368381 1 controllermanager.go:497] Started "podgc"
I0513 13:50:13.368666 1 gc_controller.go:76] Starting GC controller
I0513 13:50:13.368697 1 controller_utils.go:1027] Waiting for caches to sync for GC controller
I0513 13:50:13.368717 1 controller_utils.go:1034] Caches are synced for tokens controller
I0513 13:50:13.453276 1 controllermanager.go:497] Started "attachdetach"
I0513 13:50:13.453534 1 attach_detach_controller.go:323] Starting attach detach controller
I0513 13:50:13.453545 1 controller_utils.go:1027] Waiting for caches to sync for attach detach controller
I0513 13:50:13.461756 1 controllermanager.go:497] Started "clusterrole-aggregation"
I0513 13:50:13.461833 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
I0513 13:50:13.461849 1 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller
I0513 13:50:13.517257 1 controllermanager.go:497] Started "endpoint"
I0513 13:50:13.525394 1 endpoints_controller.go:166] Starting endpoint controller
I0513 13:50:13.525425 1 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
I0513 13:50:14.151371 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0513 13:50:14.151463 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0513 13:50:14.151489 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0513 13:50:14.163632 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
I0513 13:50:14.163695 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0513 13:50:14.163721 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0513 13:50:14.163742 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0513 13:50:14.163757 1 shared_informer.go:311] resyncPeriod 67689210101997 is smaller than resyncCheckPeriod 86008177281797 and the informer has already started. Changing it to 86008177281797
I0513 13:50:14.163840 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0513 13:50:14.163848 1 shared_informer.go:311] resyncPeriod 64017623179979 is smaller than resyncCheckPeriod 86008177281797 and the informer has already started. Changing it to 86008177281797
I0513 13:50:14.163867 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
I0513 13:50:14.163885 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
I0513 13:50:14.163911 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
I0513 13:50:14.163925 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0513 13:50:14.163942 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0513 13:50:14.163965 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
I0513 13:50:14.163994 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0513 13:50:14.164004 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
I0513 13:50:14.164019 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
I0513 13:50:14.164030 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
I0513 13:50:14.164039 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
I0513 13:50:14.164054 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
I0513 13:50:14.164079 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
I0513 13:50:14.164097 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
I0513 13:50:14.164115 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
E0513 13:50:14.164139 1 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "crd.projectcalico.org/v1, Resource=networkpolicies": unable to monitor quota for resource "crd.projectcalico.org/v1, Resource=networkpolicies"]
I0513 13:50:14.164154 1 controllermanager.go:497] Started "resourcequota"
I0513 13:50:14.171002 1 resource_quota_controller.go:276] Starting resource quota controller
I0513 13:50:14.171096 1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
I0513 13:50:14.171138 1 resource_quota_monitor.go:301] QuotaMonitor running
I0513 13:50:15.776814 1 controllermanager.go:497] Started "job"
I0513 13:50:15.771658 1 job_controller.go:143] Starting job controller
I0513 13:50:15.807719 1 controller_utils.go:1027] Waiting for caches to sync for job controller
I0513 13:50:23.065972 1 controllermanager.go:497] Started "csrcleaner"
I0513 13:50:23.047495 1 cleaner.go:81] Starting CSR cleaner controller
I0513 13:50:25.019036 1 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"39adc911-7582-11e9-a70e-000c2908c796", APIVersion:"v1", ResourceVersion:"1706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' k8s-master_fbfa0502-7585-11e9-9939-000c2908c796 stopped leading
I0513 13:50:25.125784 1 leaderelection.go:263] failed to renew lease kube-system/kube-controller-manager: failed to tryAcquireOrRenew context deadline exceeded
F0513 13:50:25.189307 1 controllermanager.go:260] leaderelection lost
</code></pre>
<pre class="lang-sh prettyprint-override"><code>[root@k8s-master ~]# kubectl logs -n kube-system kube-scheduler-k8s-master
I0513 14:16:04.350818 1 serving.go:319] Generated self-signed cert in-memory
W0513 14:16:06.203477 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0513 14:16:06.215933 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0513 14:16:06.215947 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0513 14:16:06.218951 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0513 14:16:06.218983 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0513 14:16:06.961417 1 server.go:142] Version: v1.14.1
I0513 14:16:06.974064 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0513 14:16:06.997875 1 authorization.go:47] Authorization is disabled
W0513 14:16:06.997889 1 authentication.go:55] Authentication is disabled
I0513 14:16:06.997908 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0513 14:16:06.998196 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
I0513 14:16:08.872649 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0513 14:16:08.973148 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0513 14:16:09.003227 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0513 14:16:25.814160 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
</code></pre>
<p>What is the reason for <code>kube-controller-manager</code> and <code>kube-scheduler</code> going into CrashLoopBackoff? And how can I make <code>kube-controller-manager</code> and <code>kube-scheduler</code> run well?</p>
| <p>I have reproduced the steps you listed on a cloud VM and managed to make it work fine.</p>
<p>Got few ideas that might help:</p>
<ol>
<li><p>Be sure to meet <strong>all</strong> the prerequisites listed <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">here</a> </p></li>
<li><p>Install the most recent version of Docker following the guide from <a href="https://docs.docker.com/install/linux/docker-ce/debian/" rel="nofollow noreferrer">here</a>
(choose the OS that you use)</p></li>
<li><p>Install kubeadm useing the commands below:</p>
<pre><code> apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
</code></pre></li>
<li><p>Make sure you got the latest version of kubeadm by executing: <code>apt-get update && apt-get upgrade</code></p></li>
<li><p>Make sure you use the proper arguments alongside <code>kubeadm init</code></p></li>
<li><p>Don't forget to run:</p>
<ul>
<li><p><code>mkdir -p $HOME/.kube</code></p></li>
<li><p><code>sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config</code></p></li>
<li><p><code>sudo chown $(id -u):$(id -g) $HOME/.kube/config</code></p></li>
</ul></li>
</ol>
<p>after <code>kubeadm init</code> finishes (these commands are also part of the kubeadm init output).</p>
<ol start="7">
<li>Finally, apply the .yaml files you listed in your question.</li>
</ol>
<p>Notice that by following the above steps, <code>kubectl version</code>, <code>kubelet --version</code> and <code>kubectl get no -A</code> will all report v1.14.3 rather than the v1.14.1 you showed, which might be the fix.</p>
<p>I hope it helps.</p>
|
<p>I need to find the number of pods currently NOT terminated (can be available, terminating, etc.) in order to prevent deployment from starting if there are still some pods not terminated.</p>
<p>UPDATE: If there are no pods available, I should get "0". Is it possible ?</p>
| <p>You can try:</p>
<p><code>kubectl get pods --field-selector=status.phase!=Succeeded,status.phase!=Failed</code></p>
<p>If you look at the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">Pod Phases</a> you can see that <code>Succeeded</code> and <code>Failed</code> are the only phases in which all containers have terminated, so excluding both leaves exactly the pods that are not terminated.</p>
<p>If you specifically want the count you could use a bit of <code>jq</code> and use:</p>
<pre><code>kubectl get pods --field-selector=status.phase!=Succeeded,status.phase!=Failed --output json | jq -j '.items | length'
</code></pre>
<p>This returns the # of pods that are not <code>Terminated</code>.</p>
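<p>If you want to sanity-check the <code>jq</code> expression without a cluster, you can feed it a hand-made stand-in for the <code>kubectl ... --output json</code> result (the two pods below are fake):</p>

```shell
json='{"items":[{"metadata":{"name":"web-1"}},{"metadata":{"name":"web-2"}}]}'

count=$(printf '%s' "$json" | jq -j '.items | length')
echo "$count"   # → 2
```

<p>A deployment gate would then re-run the full <code>kubectl ... | jq</code> pipeline in a loop until the count reaches <code>0</code>.</p>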
|
<p>I'm trying to get the IP address of the "docker-for-desktop" node.</p>
<p>I'm switching from minikube to docker-for-desktop because it doesn't require VirtualBox. </p>
<p><a href="https://i.stack.imgur.com/1UcAx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1UcAx.png" alt="Screenshot of docker-for-desktop"></a></p>
<p>I created a service that exposes a NodePort. When I use minikube context, I'm able to reach the service via minikube's IP address. When I switch to docker-for-desktop context, I'm not able to reach the service because I don't know its IP address.</p>
<p>With minikube, I was able to do:</p>
<pre><code>$ minikube ip
</code></pre>
<p>or even:</p>
<pre><code>$ minikube service list
</code></pre>
<p>I'm not sure how to do that with docker-for-desktop.</p>
<p>I would expect to have a command that gives a docker-for-desktop IP address.</p>
<p>Minikube creates a virtual machine that has its own IP, but with Docker for Desktop, Kubernetes runs directly on your laptop, so wherever you previously used the minikube IP you can now just use <code>localhost</code> (a NodePort service is reachable at <code>localhost:&lt;nodePort&gt;</code>). It's just simpler.</p>
|
<p>Following the documentation and provided example here:
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job</a></p>
<p>I run <code>kubectl apply -f job.yaml</code></p>
<pre><code>kubectl apply -f job.yaml
job.batch/pi created
</code></pre>
<p>Monitoring the job with get pods <code>pi-fts6q 1/2 Running 0 52s</code></p>
<p>I always see 1/2 Running even after the job is complete and checking the logs shows it is completed.</p>
<p>How can I get the job to show a completed status? The job will stay in a running state showing no completions forever.</p>
<pre><code>Parallelism: 1
Completions: 1
Start Time: Thu, 06 Jun 2019 16:21:36 -0500
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
</code></pre>
<p>It seems the underlying pod that did the work completed, but the actual job-controller stays alive forever.</p>
<p>The problem is an incomplete implementation of the proxy-agent: the Istio sidecar keeps running after the Job's main container finishes, so the pod never reaches a completed state.</p>
<p>The fix is to add a <strong>'/quitquitquit'</strong> handler to the proxy-agent. </p>
<p>Until then, users can manually send a curl or HTTP request to localhost to stop the sidecar when the job is done; the issue tracks this workaround as P0.</p>
<p>More information you can find here: <a href="https://github.com/istio/istio/issues/6324" rel="nofollow noreferrer"><code>istio-issue</code></a></p>
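<p>As a sketch of that workaround (assumptions: the <code>/quitquitquit</code> endpoint on port 15020 only exists in Istio versions that have merged the fix, and <code>curl</code> must be available in the job image; adjust port and path to your version):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl
        command: ["/bin/sh", "-c"]
        args:
        - |
          # the real work of the job
          perl -Mbignum=bpi -wle 'print bpi(2000)'
          # then ask the istio-proxy sidecar to exit so the pod can complete
          curl -sf -XPOST http://127.0.0.1:15020/quitquitquit
```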
|
<p>I'm trying to set the deployment name as environment variable using the downward API but my container keeps crashing without any logging. I'm using the busybox to print the environment variables. I've had success using a Pod but no luck with a Deployment: This is my YAML:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-d
name: test-deploy
spec:
replicas: 1
selector:
matchLabels:
app: test-d
template:
metadata:
labels:
app: test-d
spec:
command:
- sh
- "-c"
- "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
containers:
-
image: busybox
name: test-d-container
env:
-
name: MY_DEPLOY_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
-
name: MY_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
-
name: MY_CLUSTER_NAME
value: production
</code></pre>
<p>What am I missing?</p>
<p><strong>Update:</strong></p>
<p>It is clear that my indentation was messed up, thank you for pointing that out but the main part of my question is still not clear to me. How do I get the deployment name from within my container?</p>
| <p>You are using the wrong indentation and structure for Deployment objects. </p>
<p>Both the <code>command</code> key and the <code>env</code> key are part of the <code>container</code> key.</p>
<p>This is the right format</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test-d
name: test-deploy
spec:
replicas: 1
selector:
matchLabels:
app: test-d
template:
metadata:
labels:
app: test-d
spec:
containers:
- image: busybox
name: test-d-container
command:
- sh
- "-c"
- "echo Hello Kubernetes, I am $MY_DEPLOY_NAME in $MY_CLUSTER_NAME and $MY_NAMESPACE! && sleep 3600"
env:
-
name: MY_DEPLOY_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
-
name: MY_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
-
name: MY_CLUSTER_NAME
value: production
</code></pre>
<p>Remember that you can validate your Kubernetes manifests using <a href="https://kubeyaml.com/" rel="nofollow noreferrer">this online validator</a>, or locally <a href="https://github.com/instrumenta/kubeval" rel="nofollow noreferrer">using kubeval</a>.</p>
<p>Referring to the main part of the question, <a href="https://stackoverflow.com/questions/42274229/kubernetes-deployment-name-from-within-a-pod">you can get the object that created the Pod</a>, but most likely that will be the ReplicaSet, not the Deployment.</p>
<p>The Pod name is normally generated by Kubernetes; you don't know it beforehand, which is why there is a mechanism to get the name. That is not the case for Deployments: you know the name of a Deployment when creating it. I don't think there is a mechanism to get the Deployment name dynamically.</p>
<p>Typically, labels are used in the PodSpec of the Deployment object to add metadata.</p>
<p>You could also try to parse it, since the pod name (which you have) always follows the format deploymentName-podTemplateHash-randomAlphanumericChars.</p>
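<p>Since pod names follow that pattern, a shell sketch inside the container can recover the deployment name by stripping the last two dash-separated segments. This is a heuristic only: the sample pod name below is made up, and the suffix lengths are not guaranteed by any API contract.</p>

```shell
# Inside a pod, $HOSTNAME equals the pod name; a sample value is used here.
POD_NAME="test-deploy-5f7b8c6d9-abcde"

# Strip the trailing "-<pod-template-hash>-<random>" segments.
DEPLOY_NAME="${POD_NAME%-*-*}"
echo "$DEPLOY_NAME"   # → test-deploy
```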
|
<p>I want to be able to ssh into a container within an OpenShift pod.</p>
<p>I do know that, I can simply do so using <code>oc rsh</code>. But this is based on the assumption that I have the openshift cli installed on the node where I want to ssh into the container from.</p>
<p>But what I want to actually achieve is, to ssh into a container from a node that does not have openshift cli installed. The node is on the same network as that of OpenShift. The node does have access to web applications hosted on a container (just for the sake of example). But instead of web access, I would like to have ssh access.</p>
<p>Is there any way that this can be achieved?</p>
| <p>Unlike a server, which is running an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: <a href="https://en.wikipedia.org/wiki/Cgroups" rel="nofollow noreferrer">CGroups</a>, <a href="https://en.wikipedia.org/wiki/Linux_namespaces" rel="nofollow noreferrer">Namespacing</a>, and <a href="https://en.wikipedia.org/wiki/Security-Enhanced_Linux" rel="nofollow noreferrer">SELinux</a>. A "fancy" process if you will.</p>
<p>Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.</p>
<p>Using <a href="https://docs.openshift.com/container-platform/3.11/dev_guide/executing_remote_commands.html" rel="nofollow noreferrer">oc exec</a>, <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">kubectl exec</a>, <a href="https://github.com/containers/libpod/blob/master/docs/tutorials/podman_tutorial.md" rel="nofollow noreferrer">podman exec</a>, or <a href="https://docs.docker.com/engine/reference/commandline/exec/" rel="nofollow noreferrer">docker exec</a> cli commands to open a shell session inside a running container is the method that should be used to connect with running containers.</p>
|
<p>I have one .netcore Web App running in "docker". So started to cluster it with kubernetes. Has four configs on the appsettings.json that will be converted by environment variables(all between "${}"):</p>
<pre><code> {
"ConnectionSettings": [
{
"id": "${connectionSettings.connectionString.idMongoDb}",
"databaseName": "${connectionSettings.connectionString.databaseName}",
"connectionString": "${connectionSettings.connectionString.mongoDB}"
}
],
{
"Key": "Token.Issuer",
"Value": "${configuration.token.issuer}",
"Description": "",
"ModifiedDate": "2018-05-05 00:00:00.0000000",
"ModifiedBy": "system",
"AllowedProfiles": 1
}
}
</code></pre>
<p>It's a bit of my <strong>.yaml</strong> file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-dev-api-dep
labels:
app: myapp-dev-api-dep
tier: app
version: v1
spec:
selector:
matchLabels:
app: myapp-dev-api
tier: app
version: v1
replicas: 1
template:
metadata:
labels:
app: myapp-dev-api
tier: app
version: v1
spec:
containers:
- name: myapp-dev-api
image: 'myappapi_tstkube:latest'
env:
- name: connectionSettings.connectionString.mongoDB
value: mongodb://192.168.20.99:27017
- name: configuration.token.issuer
value: '86400'
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
initialDelaySeconds: 30
periodSeconds: 3600
httpGet:
path: /swagger/index.html
port: 80
resources:
requests:
cpu: 25m
memory: 200Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
</code></pre>
<p>Take a look in my configs:</p>
<p><a href="https://i.stack.imgur.com/KKM79.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KKM79.png" alt="enter image description here"></a></p>
<p>The variable "<strong>connectionSettings.connectionString.mongoDB</strong>" works. But the variable "<strong>configuration.token.issuer</strong>" can't substituted on the appsetting.</p>
<p>Made some tests. I found the problem only with variables of numbers.</p>
<p>Has somebody an idea or have you had the problem?</p>
<p>vlw</p>
<p>The problem was in the YAML indentation. A misplaced space in a YAML file can cause all sorts of problems.</p>
<p><a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/yaml_techniques.md" rel="noreferrer">https://github.com/helm/helm/blob/master/docs/chart_template_guide/yaml_techniques.md</a></p>
<p>About the number: both answers are right. You can use single quotation marks (<code>'86400'</code>) or ASCII escapes (<code>"\x38\x36\x34\x30\x30"</code>).</p>
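<p>For the numeric value from the question, quoting is enough to keep YAML from parsing it as a number; a fragment of the Deployment's <code>env</code> section:</p>

```yaml
env:
- name: connectionSettings.connectionString.mongoDB
  value: mongodb://192.168.20.99:27017
- name: configuration.token.issuer
  value: '86400'   # single quotes force a YAML string
```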
<p>Thank you everybody</p>
|
<p>I have set up MetalLB as the load balancer with NGINX Ingress installed on my K8S cluster.
I have read about session affinity and its significance, but so far I do not have a clear picture.</p>
<p>How can I create a single service exposing multiple pods of the same application?
After creating the single service entry point, how do I map a specific client IP to a pod abstracted by the service?</p>
<p>Is there any blog explaining this concept in terms of how the mapping between Client IP and POD is done in kubernetes?</p>
<p>But I do not see the client's IP in the YAML. So how is this service going to map traffic from specific clients to its endpoints? That is the question I have.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: my-app
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10000
</code></pre>
| <p>The main concept of session affinity is to always redirect traffic from a given client to the same pod. Please keep in mind that session affinity is a <strong>best-effort</strong> method and there are scenarios where it will fail due to pod restarts or network errors.
There are two main types of session affinity: </p>
<p><strong>1)</strong> <em>Based on Client IP</em></p>
<p>This option works well for the scenario where there is only one client per IP. With this method you don't need an Ingress/proxy between the K8s service and the client.
The client IP should be static, because each time the client changes its IP it will be redirected to another pod.</p>
<p>To enable the session affinity in kubernetes, we can add the following to the service definition.</p>
<pre><code>service.spec.sessionAffinity: ClientIP
</code></pre>
<p>Since a proper manifest for this method has already been provided, I will not duplicate it.</p>
<p><strong>2)</strong> <em>Based on Cookies</em></p>
<p>This works when there are multiple clients behind the same IP, because the affinity is stored at the web browser level. This method requires an Ingress object. Steps to apply this method with more detailed information can be found <a href="https://medium.com/@diegomrtnzg/redirect-your-users-to-the-same-pod-by-using-session-affinity-on-kubernetes-baebf6a1733b" rel="noreferrer">here</a> under the <strong>Session affinity based on Cookie</strong> section.</p>
<ul>
<li>Create NGINX controller deployment</li>
<li>Create NGINX service</li>
<li>Create Ingress</li>
<li>Redirect your public DNS name to the NGINX service public/external IP.</li>
</ul>
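<p>A minimal sketch of the cookie-based method with the NGINX Ingress controller (the host name and service name are placeholders; the annotations come from the ingress-nginx documentation):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>
<p>With this in place, NGINX sets a <code>route</code> cookie on the first response and keeps routing that browser to the same pod.</p>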
<blockquote>
<p>About mapping ClientIP and POD, according to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="noreferrer">Documentation</a>
kube-proxy is responsible for SessionAffinity. One of Kube-Proxy job
is writing to IPtables, more details <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-the-kube-proxy-working" rel="noreferrer">here</a> so thats how it is
mapped.</p>
</blockquote>
<p>Articles which might help with understanding Session Affinity:
<a href="https://sookocheff.com/post/kubernetes/building-stateful-services/" rel="noreferrer">https://sookocheff.com/post/kubernetes/building-stateful-services/</a>
<a href="https://medium.com/@diegomrtnzg/redirect-your-users-to-the-same-pod-by-using-session-affinity-on-kubernetes-baebf6a1733b" rel="noreferrer">https://medium.com/@diegomrtnzg/redirect-your-users-to-the-same-pod-by-using-session-affinity-on-kubernetes-baebf6a1733b</a></p>
|
<p>When Kubernetes creates secrets, do they encrypt the given user name and password with certificate?</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
</code></pre>
| <p>Secrets are stored in etcd, which is a highly-available key-value store for cluster data. By default, the <strong>identity</strong> provider is used to protect secrets in etcd, which provides no encryption, so secrets are not encrypted at rest unless you configure it. </p>
<p>EncryptionConfiguration was introduced to encrypt secrets locally, with a locally managed key.
Encrypting secrets with a locally managed key protects against an etcd compromise, but it fails to protect against a host compromise.
Since the encryption keys are stored on the host in the EncryptionConfig YAML file, a skilled attacker can access that file and extract the encryption keys. This was a stepping stone in development to the kms provider, introduced in 1.10, and beta since 1.12. Envelope encryption creates dependence on a separate key, not stored in Kubernetes.
In this case, an attacker would need to compromise etcd, the kubeapi-server, and the third-party KMS provider to retrieve the plaintext values, providing a higher level of security than locally-stored encryption keys.</p>
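<p>For reference, a minimal sketch of such an <code>EncryptionConfiguration</code> (the key material is a placeholder, and the apiVersion may differ between Kubernetes versions):</p>
<pre><code>apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: BASE64_ENCODED_32_BYTE_KEY   # placeholder
      - identity: {}
</code></pre>
<p>The file is passed to the kube-apiserver via the <code>--encryption-provider-config</code> flag; the trailing <code>identity</code> provider allows reading secrets that were stored before encryption was enabled.</p>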
<p>More information you can find here:
<a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer"><code>secrets</code></a>, <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="nofollow noreferrer"><code>encryption</code></a>.</p>
<p>I hope it helps.</p>
|
<p>I have hpa configured for one of my deployment in Kubernetes.</p>
<p>Is there any way to check if HPA scaling happened to the deployment and when it happened?</p>
<p>I don't have prometheus or any monitoring solutions deployed.</p>
| <p>If you created an HPA, you can check its current status using the command</p>
<pre><code>$ kubectl get hpa
</code></pre>
<p>You can also use "watch" flag to refresh view each 30 seconds</p>
<pre><code>$ kubectl get hpa -w
</code></pre>
<p>To check whether the HPA actually scaled, describe it</p>
<pre><code>$ kubectl describe hpa <yourHpaName>
</code></pre>
<p>Information will be in <code>Events:</code> section.</p>
<p>Also your deployment will contain some information about scaling</p>
<pre><code>$ kubectl describe deploy <yourDeploymentName>
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set php-apache-b5f58cc5f to 1
Normal ScalingReplicaSet 9m45s deployment-controller Scaled up replica set php-apache-b5f58cc5f to 4
Normal ScalingReplicaSet 9m30s deployment-controller Scaled up replica set php-apache-b5f58cc5f to 8
Normal ScalingReplicaSet 9m15s deployment-controller Scaled up replica set php-apache-b5f58cc5f to 10
</code></pre>
<p>Another way is to use events</p>
<pre><code>$ kubectl get events | grep HorizontalPodAutoscaler
5m20s Normal SuccessfulRescale HorizontalPodAutoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
5m5s Normal SuccessfulRescale HorizontalPodAutoscaler New size: 8; reason: cpu resource utilization (percentage of request) above target
4m50s Normal SuccessfulRescale HorizontalPodAutoscaler New size: 10; reason:
</code></pre>
|
<p>jx has commands:</p>
<pre><code> get applications Display one or more Applications and their versions
get apps Display one or more installed Apps
</code></pre>
<p>I have imported some projects and they are building fine. I can see them when I call <code>jx get applications</code>:</p>
<pre><code>APPLICATION INTEGRATION PODS URL
integration-backend 123208 2/2
integration-frontend 90 2/2
</code></pre>
<p>however <code>jx get apps</code> returns:</p>
<pre><code>No Apps found
</code></pre>
<p>How do we actually install apps? It's really hard to find any reference.</p>
| <p>we need better docs for sure - there are details here: <a href="https://jenkins-x.io/apps/" rel="nofollow noreferrer">https://jenkins-x.io/apps/</a></p>
<p>you can add apps via <a href="https://jenkins-x.io/commands/jx_add_app/" rel="nofollow noreferrer">jx add app</a> e.g. like the <a href="https://jenkins-x.io/architecture/custom-jenkins/" rel="nofollow noreferrer">Jenkins App</a></p>
|
<p>I'm using kubectl to control a local microk8s installation. I configured a local Google cloud shell connection in order to avoid the ridiculous provisioning of the machine available through the GKE dashboard.</p>
<p>Now, <code>kubectl</code> commands seem to be randomly executed on microk8s and GKE. I want not only to fix this, but prohibit this scenario for the future.</p>
<p>The possibility that <code>kubectl</code> can refer to different environment is a horrible idea apparently founded in concept of the <code>kubectl</code> CLI which provides no way to specify a remote cluster. It can lead to accidental changes on production systems. Even a hardcore 12-factor evangelist will have a local cluster installed for development.</p>
| <p>The <code>kubectl config use-context</code> command can be used to change which cluster kubectl talks to.</p>
<blockquote>
<p>You can configure access to multiple clusters by using configuration files. After your clusters, users, and contexts are defined in one or more configuration files, you can quickly switch between clusters by using the <code>kubectl config use-context</code> command.<br>
<a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">configure-access-multiple-clusters/</a></p>
</blockquote>
|
<p>I have a use case with elastic search for rack awareness, which requires me to identify the zone that a pod has been scheduled in.</p>
<p>I've seen many requests for this outside of SO, such as:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/40610" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/40610</a></p>
<p>The zone is not exposed in the downward API due to it being a node label, not a pod label.</p>
<p>The two "answers" I have come up with are:</p>
<p>a) Curl request to the google metadata endpoint to retrieve the node from the compute engine metadata</p>
<p>b) Identify the node name via the downward API, and make a call to the Kube API with the node name to retrieve the node object, and use a tool such as JQ to filter the JSON response to get the zone. </p>
<p>I don't like option B due to it being more or less hardcoded against the API call, and I would need to provision a custom docker image with JQ and curl included. Option A feels a bit 'hacky' given it's not Kube native.</p>
<p>Are there any better solutions that I've missed?</p>
<p>Thanks in advance,</p>
| <p>My case is Google Kubernetes Engine, and I needed to have zone/region labels on pods.</p>
<p>What I did was combine @dwillams782's approach in <a href="https://stackoverflow.com/a/52428782/8547463">https://stackoverflow.com/a/52428782/8547463</a> of getting the zone via</p>
<pre><code>curl -sS http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google'
</code></pre>
<p>and setting it as a label on pods via the approach in <a href="https://github.com/devth/k8s-labeler" rel="nofollow noreferrer">https://github.com/devth/k8s-labeler</a>. Basically I called the zone detection API from an initContainer of the pod.</p>
|
<p>Worker node is getting into "NotReady" state with an error in the output of <strong>kubectl describe node</strong>:</p>
<p>ContainerGCFailed rpc error: code = DeadlineExceeded desc = context deadline exceeded</p>
<p>Environment: </p>
<p>Ubuntu, 16.04 LTS</p>
<p>Kubernetes version: v1.13.3</p>
<p>Docker version: 18.06.1-ce</p>
<p>There is a closed issue on that on Kubernetes GitHub <a href="https://github.com/kubernetes/kubernetes/issues/42164" rel="noreferrer">k8 git</a>, which is closed on the merit of being related to Docker issue. </p>
<p>Steps done to troubleshoot the issue: </p>
<ol>
<li><strong>kubectl describe node</strong> - error in question was found(root cause isn't clear). </li>
<li><p><strong>journalctl -u kubelet</strong> - shows this related message:</p>
<p>skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]</p>
<p>it is related to this open k8 issue <a href="https://github.com/kubernetes/kubernetes/issues/45419" rel="noreferrer">Ready/NotReady with PLEG issues</a></p></li>
<li><p>Check node health on AWS with cloudwatch - everything seems to be fine.</p></li>
<li><strong>journalctl -fu docker.service</strong> : check docker for errors/issues -
the output doesn't show any errors related to that.</li>
<li><strong>systemctl restart docker</strong> - after restarting docker, the node gets into "Ready" state but in 3-5 minutes becomes "NotReady" again. </li>
</ol>
<p>It all seems to start when I deploy more pods to the node (close to its resource capacity, though I don't think it is a direct dependency) or stop/start instances (after a restart it is OK, but after some time the node is NotReady again).</p>
<p>Questions: </p>
<p>What is the root cause of the error?</p>
<p>How to monitor that kind of issue and make sure it doesn't happen? </p>
<p>Are there any workarounds to this problem?</p>
| <blockquote>
<p>What is the root cause of the error?</p>
</blockquote>
<p>From what I was able to find it seems like the error happens when there is an issue contacting Docker, either because it is overloaded or because it is unresponsive. This is based on my experience and what has been mentioned in the GitHub issue you provided. </p>
<blockquote>
<p>How to monitor that kind of issue and make sure it doesn't happen?</p>
</blockquote>
<p>There seems to be no definitive mitigation or monitoring for this, but the best approach would be to make sure your node is not overloaded with pods. I have seen that it is not always shown as disk or memory pressure on the Node - this is probably a problem of not enough resources allocated to Docker, causing it to fail to respond in time. The proposed solution is to set limits for your pods to prevent overloading the Node. </p>
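<p>A minimal sketch of such requests/limits in a pod spec (the values are placeholders to adjust for your workload):</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "512Mi"
      cpu: "500m"
</code></pre>
<p>With requests set, the scheduler stops packing pods onto a node once the requested totals reach the node's capacity, which keeps Docker from being starved.</p>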
<p>In case of managed Kubernetes in GKE (not sure but other vendors probably have similar feature) there is a feature called <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/node-auto-repair" rel="noreferrer">node auto-repair</a>. Which will not prevent node pressure or Docker related issue but when it detects an unhealthy node it can drain and redeploy the node/s. </p>
<p>If you already have resources and limits it seems like the best way to make sure this does not happen is to increase memory resource requests for pods. This will mean fewer pods per node and the actual used memory on each node should be lower. </p>
<p>Another way of monitoring/recognizing this is to SSH into the node and check the memory, inspect the processes with <code>ps</code>, monitor the <code>syslog</code>, and run the <a href="https://docs.docker.com/engine/reference/commandline/stats/#usage" rel="noreferrer">command</a> <code>docker stats --all</code></p>
|
<p>I am using metrics-server(<a href="https://github.com/kubernetes-incubator/metrics-server/" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/metrics-server/</a>) to collect the core metrics from containers in a kubernetes cluster.</p>
<p>I could fetch 2 resource usage metrics per container.</p>
<ul>
<li>cpu usage</li>
<li>memory usage</li>
</ul>
<p>However it's not clear to me whether </p>
<ul>
<li><p>these metrics are accumulated over time or they are already sampled for a particular time window(1 minute/ 30 seconds..)</p></li>
<li><p>What are the units for the above metric values. For CPU usage, is it the number of cores or milliseconds? And for memory usage i assume its the bytes usage.</p></li>
<li><p>While computing CPU usage metric value, does metrics-server already take care of dividing the container usage by the host system usage?</p></li>
</ul>
<p>Also, if I have to compare these metrics with the docker-api metrics, how do I compute the CPU usage % for a given container?</p>
<p>Thanks!</p>
| <ol>
<li>Metrics are scraped periodically from kubelets. The default resolution duration is 60s, which can be overriden with the <code>--metric-resolution=<duration></code> flag.</li>
<li>The value and unit (cpu - cores in decimal SI, memory - bytes in binary SI) are arrived at by using the <code>Quantity</code> serializer in the k8s <a href="https://github.com/kubernetes/apimachinery" rel="nofollow noreferrer">apimachinery</a> package. You can read about it from the comments in the <a href="https://github.com/kubernetes/apimachinery/blob/master/pkg/api/resource/quantity.go" rel="nofollow noreferrer">source code</a></li>
<li>No, the CPU metric is not relative to the host system usage as you can see that it's not a percentage value. It represents the rate of change of the total amount of CPU seconds consumed by the container by core. If this value increases by 1 within one second, the pod consumes 1 CPU core (or 1000 milli-cores) in that second.<br>
To arrive at a relative value, depending on your use case, you can divide the CPU metric for a pod by that for the node, since metrics-server exposes both <code>/pods</code> and <code>/nodes</code> endpoints.</li>
</ol>
|
<p>I have a RabbitMQ cluster running in a Kubernetes environment. I don't have access to the containers' shells, so I'm trying to run rabbitmqctl from a local container (same image).</p>
<p>These ports are exposed:</p>
<ul>
<li>15672 (exposed as 32672)</li>
<li>5671 (exposed as 32671)</li>
<li>4369 (exposed as 32369)</li>
<li>25672 (exposed as 32256)</li>
</ul>
<p>The correct cookie is on $HOME/.erlang.cookie on my local container.</p>
<p>How to specify the cluster URL and port to rabbitmqctl, so I can access the RabbitMQ cluster from outside?</p>
<p>Is it necessary to expose other ports?</p>
<p>Is it even possible to do this, since I can't find any reference to this on documentation?</p>
| <p>You will want to expose ports 4369 and 25672 using the same port numbers externally as I can't think of a way to tell the Erlang VM running <code>rabbitmqctl</code> to use a different port for EPMD lookups. You should also expose <a href="https://www.rabbitmq.com/networking.html#ports" rel="nofollow noreferrer">35672-35682</a> using the same port range externally.</p>
<p>Since you're using Kubernetes, I'll assume that you are using long names. Assume that, within your container, your node name is <code>rabbit@container1.my.org</code>, to access it externally use this command:</p>
<pre><code>rabbitmqctl -l -n rabbit@container1.my.org
</code></pre>
<p>Please note that <code>container1.my.org</code> <em>must</em> resolve via DNS to the correct IP address to connect to that container.</p>
<hr />
<p><sub><b>NOTE:</b> the RabbitMQ team monitors the <code>rabbitmq-users</code> <a href="https://groups.google.com/forum/#!forum/rabbitmq-users" rel="nofollow noreferrer">mailing list</a> and only sometimes answers questions on Stack Overflow.</sub></p>
|
<p>I am trying to pass env variable to my pod from configmap. I have the following setup.</p>
<p>I have a file <strong>test-config.txt</strong> with 2 env variables </p>
<pre><code>a_sample_env=b
c_sample_env=d
</code></pre>
<p>I create a configmap as follows:</p>
<blockquote>
<p>kubectl create configmap test-config --from-file test-config.txt</p>
</blockquote>
<p>My pod definition is as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: mycontainer
image: redis
envFrom:
- configMapRef:
name: test-config
</code></pre>
<p>But my application doesn't receive the 2 env variables in the test-config.txt file. I logged into the pod using kubectl exec and get empty values for the env variables.</p>
<pre><code>root@test-pod:/data# echo $c_sample_env
root@test-pod:/data# echo $a_sample_env
</code></pre>
<p>Can anybody point out why the environment variables are not available in the pod?</p>
| <p>You should create the ConfigMap as below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
a_sample_env: b
c_sample_env: d
</code></pre>
<p>If you create the ConfigMap using the command below,</p>
<pre><code>kubectl create configmap test-config --from-file test-config.txt
</code></pre>
<p>then you can mount test-config as volume inside container. You will have to create a wrapper/launch script to export all k:v pair from that file as env variable during startup</p>
|
<p>I was trying to get the Ready status of pods by using -o=jsonpath.
To be more clear of what I want, I would like to get the value 1/1 of the following example using -o=jsonpath.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
some_pod 1/1 Running 1 34d
</code></pre>
<p>I have managed to get some information such as the pod name or namespace.</p>
<pre><code>kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{end}'
</code></pre>
<p>And I get something like:</p>
<blockquote>
<p>some_namespace1 pod_name1</p>
</blockquote>
<p>However, I don't know how to get the Ready status. What I would like to have is an output similar to this:</p>
<blockquote>
<p>some_namespace1 pod_name1 1/1</p>
</blockquote>
<p>I know I can use bash commands like cut:</p>
<pre><code>kubectl get pods --all-namespaces| tail -1 | cut -d' ' -f8
</code></pre>
<p>However, I would like to get it by using kubectl </p>
| <p>You can get all the pods status using the following command:</p>
<pre><code>kubectl get pods -o jsonpath={.items[*].status.phase}
</code></pre>
<p>A similar command can be used for the name:</p>
<pre><code>kubectl get pods -o jsonpath={.items[*].metadata.name}
</code></pre>
<p>EDIT:</p>
<p>You need to compare <code>.status.replicas</code> and <code>.status.readyReplicas</code> (on the owning Deployment or ReplicaSet) to see how many replicas are ready. On the pod itself, the per-container ready flags are available at <code>.status.containerStatuses[*].ready</code>. </p>
|
<p>I am trying to run spark-on-k8s. While reading about it on the internet, I stumbled upon the term 'Resource Staging Server'. </p>
<p>AFAIK that is a server for sharing jars with executors. But when I issued the command with spark-submit, </p>
<pre><code>spark-2.3.1-bin-hadoop2.7/bin/spark-submit \
--master k8s://k8s-url \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=2 \
--conf spark.kubernetes.container.image=$I \
local:///Users/myid/examples/target/spark-examples_2.11-2.4.0-SNAPSHOT-shaded.jar
</code></pre>
<p>That pod does not seem to be created. </p>
<p>So where is the source of the Resource Staging Server (RSS)?
Or is RSS still actively developed? </p>
| <p>RSS has been removed by Spark since version 2.3, but it still works in 2.2 for Spark on K8S.</p>
<p>BTW, the RSS pod should be created manually before spark-submit.</p>
|
<p>I am new to Kubernetes, and I'm trying to understand how I can apply it to my use-case scenario.
I managed to install a 3-node cluster on VMs within the same network. After searching K8S concepts and reading related articles, I still couldn't find an answer to my question below. Please let me know if you have knowledge on this:</p>
<p>I've noticed that internal DNS service of K8S applies on the pods and this way services can find each other with hostnames instead of IPs.</p>
<p>Is this applicable for communication between pods of different nodes, or is this only within the services inside a single node? (In other words, do we have a DNS service at the node level in K8S, or is it only about pods?)</p>
<p>The reason for this question is the scenario that I have in mind: </p>
<p>I need to deploy a micro-service application (written in Java) with K8S. I made docker images from each service in my application and it's working locally. Currently, these services are connected via pre-defined IP addresses.
Is there a way to run each of these services within a separate K8S node and use its DNS service to connect them without pre-defining IPs? </p>
| <p>A service serves as an internal endpoint and (depending on the configuration) load balancer to one or several pods behind it. All communication typically is done between services, not between pods. Pods run on nodes, services don't really run anything, they are just routing traffic to the appropriate pods.</p>
<p>A service is a cluster-wide configuration that does not depend on a node, thus you can use a service name in the whole cluster, completely independent from where a pod is located.</p>
<p>So yes, your use case of running pods on different nodes and communicate between service names is a typical setup.</p>
|
<p>I would like to get and see the random number provided by K8s.</p>
<pre><code>kubectl logs abc-abc-platform-xxxxx dicom-server
</code></pre>
<p>Here xxxxx has to be replaced with the randomly generated suffix.</p>
<p>Can you let me know which command I have to execute to find the random suffix?</p>
| <p>If the specific pod has the <code>my-app</code> label, you can find that pod and save it as bash variable:</p>
<ul>
<li><code>POD=$(kubectl get pod -l app=my-app -o jsonpath="{.items[0].metadata.name}")</code></li>
</ul>
<p>and then execute:<br></p>
<ul>
<li><code>kubectl logs $POD</code></li>
</ul>
<p><em>PS. This will work only if you have 1 pod with that specific label.</em></p>
|
<p>I have a couple of Stateful Sets, one is dependent on another one, I need the pods in the first Stateful Set to be in Ready state before the 2nd Stateful Set is started to be scaled.</p>
<p>I'm looking to see if there is a way to link the two Stateful Sets to achieve this in an automatic way, instead of me manually doing it. I have the same concern about Deployments as well.</p>
<p>Currently I don't use Helm or any other tools, just kubectl.</p>
| <p>I would advise against doing this as it would increase coupling between different services.</p>
<p>A better way of handling your problem is for containers in the dependent service to check if the other service is available. If it isn't, it's ok to crash the container since Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">provides self-healing by automatically restarting containers for you</a>. This way, the dependent service will try to connect to the service and if the latter isn't available, then the dependent service will crash and try again later using exponential back-off.</p>
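<p>One common way to implement that availability check is an initContainer in the dependent StatefulSet's pod template that blocks until the first service resolves (the names below are placeholders; by default a headless service only publishes DNS records for pods that are Ready):</p>
<pre><code>spec:
  template:
    spec:
      initContainers:
      - name: wait-for-first-app
        image: busybox:1.28
        command: ['sh', '-c',
          'until nslookup first-app-0.first-app-headless; do echo waiting; sleep 2; done']
</code></pre>
<p>The main container only starts once the lookup succeeds, and the init container restarting on failure gives you the same back-off behaviour described above.</p>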
|
| <p>I'm having trouble understanding <code>helm</code>'s use of contexts. <code>helm --kube-context=microk8s install ...</code> should install into the context <code>microk8s</code> and thus into my local microk8s cluster rather than the remote GKE cluster which I was once connected to.</p>
<p>This however fails due to <code>Error: could not get Kubernetes config for context "microk8s": context "microk8s" does not exist</code> if I run e.g. <code>helm --kube-context=microk8s install --name mereet-kafka</code> after successfully running <code>helm init</code> and adding necessary repositories.</p>
<p>The context <code>microk8s</code> is present and enabled according to <code>kubectl config current-context</code>. I can even reproduce this by running <code>helm --kube-context=$(kubectl config current-context) install --name mereet-kafka</code> in order to avoid any typos.</p>
<p>Why can't <code>helm</code> use obviously present contexts?</p>
| <p>This looks like a kubernetes configuration problem more than an issue with helm itself.</p>
<p>There are few things that might help:</p>
<ol>
<li><p>Check the config file in <code>~/.kube/config</code></p>
<ul>
<li><code>kubectl config view</code></li>
</ul></li>
</ol>
<p>Is <code>current-context</code> set to: microk8s?</p>
<ol start="2">
<li><p>Try to use: </p>
<ul>
<li><p><code>kubectl config get-contexts</code></p></li>
<li><p><code>kubectl config set-context</code> </p></li>
<li><p><code>kubectl config use-context</code></p></li>
</ul></li>
</ol>
<p>with proper arguments <code>--server</code> <code>--user</code> <code>--cluster</code></p>
<ol start="3">
<li><p>Check if you are refering to the config from <code>~/.kube/config</code> and not your own private config from somewhere else. </p></li>
<li><p>Check if you have a <code>KUBECONFIG</code> environment variable (<code>echo $KUBECONFIG</code>)</p></li>
</ol>
<p>I hope it helps.</p>
|
<p>I have a Django application running in a container that I would like to probe for readiness. The kubernetes version is 1.10.12. The settings.py specifies to only allow traffic from a specific domain:</p>
<pre class="lang-py prettyprint-override"><code>ALLOWED_HOSTS = ['.example.net']
</code></pre>
<p>If I set up my probe without setting any headers, like so: </p>
<pre><code> containers:
- name: django
readinessProbe:
httpGet:
path: /readiness-path
port: 8003
</code></pre>
<p>then I receive a 400 response, as expected- the probe is blocked from accessing <code>readiness-path</code>:</p>
<pre><code>Invalid HTTP_HOST header: '10.5.0.67:8003'. You may need to add '10.5.0.67' to ALLOWED_HOSTS.
</code></pre>
<p>I have tested that I can can successfully curl the readiness path as long as I manually set the host headers on the request, so I tried setting the Host headers on the httpGet, as <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes" rel="nofollow noreferrer">partially documented here</a>, like so: </p>
<pre><code> readinessProbe:
httpGet:
path: /readiness-path
port: 8003
httpHeaders:
- name: Host
value: local.example.net:8003
</code></pre>
<p>The probe continues to fail with a 400.</p>
<p>Messing around, I tried setting the httpHeader with a lowercase h, like so: </p>
<pre><code> readinessProbe:
httpGet:
path: /django-admin
port: 8003
httpHeaders:
- name: host
value: local.example.net:8003
</code></pre>
<p>Now, the probe actually hits the server, but it's apparent from the logs that instead of overwriting the HTTP_HOST header with the correct value, it has been appended, and fails because the combined HTTP_HOST header is invalid:</p>
<pre><code>Invalid HTTP_HOST header: '10.5.0.67:8003,local.example.net:8003'. The domain name provided is not valid according to RFC 1034/1035
</code></pre>
<p>Why would it recognize the header here and add it, instead of replacing it? </p>
<p>One suspicion I am trying to validate is that perhaps correct handling of host headers was only added to the Kubernetes httpHeaders spec after 1.10. I have been unable to find a clear answer on when host headers were added to Kubernetes- there are no specific headers described in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#httpheader-v1-core" rel="nofollow noreferrer">API documentation for 1.10</a>. </p>
<p>Is it possible to set host headers on a readiness probe in Kubernetes 1.10, and if so how is it done? If not, any other pointers for getting this readiness probe to correctly hit the readiness path of my application?</p>
<p><strong>Update</strong>:</p>
<p>I have now tried setting the value without a port, as suggested by a commenter:</p>
<pre><code> httpHeaders:
- name: Host
value: local.acmi.net.au
</code></pre>
<p>The result is identical to setting the value with a port. With a capital H the host header value is not picked up at all, with a lowercase h the host header value is appended to the existing host header value.</p>
| <p>This problem is fixed in Kubernetes 1.2.3+ β see <a href="https://github.com/kubernetes/kubernetes/issues/24288" rel="nofollow noreferrer">kubernetes/kubernetes#24288</a>. I've configured one of my deployments in a similar way (for similar reasons). This is for a Craft CMS instance which doesn't require authentication at the <code>/admin/login</code> url:</p>
<pre><code> readinessProbe:
httpGet:
path: /admin/login
port: 80
httpHeaders:
- name: Host
value: www.example.com
timeoutSeconds: 5
</code></pre>
|