| Question | QuestionAuthor | Answer | AnswerAuthor |
|---|---|---|---|
<p>I am running macOS Catalina using the Docker application with the Kubernetes option turned on. I create a PersistentVolume with the following yaml and command.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
</code></pre>
<blockquote>
<p>kubectl apply -f pv.yml</p>
</blockquote>
<p>This creates a PersistentVolume named pv-nfs-data. Next I create a PersistentVolumeClaim with the following yaml and command.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
</code></pre>
<blockquote>
<p>kubectl apply -f pvc.yml</p>
</blockquote>
<p>This creates a PersistentVolumeClaim with the name pvc-nfs-data, however it doesn't bind it to the available PersistentVolume (pv-nfs-data). Instead it creates a new one and binds to that. How do I make the PersistentVolumeClaim bind to the available PersistentVolume?</p>
|
Dblock247
|
<p>The Docker for Mac <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass" rel="nofollow noreferrer">default storage class</a> is the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic" rel="nofollow noreferrer">dynamic provisioning type</a>, like you would get on AKS/GKE, where it allocates the physical storage as well.</p>
<pre><code>→ kubectl get StorageClass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   191d
</code></pre>
<p>For a PVC to use an existing PV, you can disable the storage class and specify in the PV which PVC can use it with a <code>claimRef</code>.</p>
<h3>Claim Ref</h3>
<p>The PV includes a <code>claimRef</code> for the PVC you will create</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-data
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  claimRef:
    namespace: insert-your-namespace-here
    name: pv-nfs-data-claim
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.250
    path: "/volume1/docker"
</code></pre>
<p>The PVC sets <code>storageClassName</code> to <code>''</code></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-nfs-data-claim
  namespace: insert-your-namespace-here
spec:
  storageClassName: ''
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
</code></pre>
<h3>Dynamic</h3>
<p>You can go the dynamic route with NFS by adding an <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/nfs" rel="nofollow noreferrer">NFS dynamic provisioner</a>, creating a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">storage class</a> for it, and letting kubernetes work the rest out. More recent versions of Kubernetes (1.13+) can use the <a href="https://github.com/kubernetes-csi/csi-driver-nfs" rel="nofollow noreferrer">CSI NFS driver</a>.</p>
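<p>For example, with the CSI NFS driver installed, a StorageClass for dynamic provisioning could look roughly like this (a sketch; the <code>server</code> and <code>share</code> parameter names follow that driver's documentation, and the values reuse the NFS details from the question):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.1.250   # NFS server
  share: /volume1/docker  # exported path
reclaimPolicy: Retain
</code></pre>
<p>PVCs created with <code>storageClassName: nfs-csi</code> would then get their volumes provisioned automatically.</p>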
|
Matt
|
<p>How do I get top three most CPU utilized pod in a Kubernetes cluster?</p>
<pre><code>kubectl top po --all-namespaces
</code></pre>
<p>Above command gives me CPU and memory utilization for all the pods across all namespaces. How to restrict it to only top three most CPU utilized pods?</p>
<p>Also, I've tried to sort by CPU, but it seems like sorting is not working.</p>
<pre><code>kubectl top po --all-namespaces --sort-by="cpu"
</code></pre>
<p>Output:</p>
<pre><code>NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system weave-net-ksfp4 1m 51Mi
kube-system kube-controller-manager-master 10m 50Mi
kube-system coredns-5644d7b6d9-rzd64 2m 6Mi
kube-system weave-net-h4xlg 1m 77Mi
kube-system kube-proxy-lk9xv 1m 19Mi
kube-system coredns-5644d7b6d9-v2v4m 3m 6Mi
kube-system kube-scheduler-master 2m 21Mi
kube-system kube-apiserver-master 26m 228Mi
kube-system kube-proxy-lrzjh 1m 9Mi
kube-system etcd-master 12m 38Mi
kube-system metrics-server-d58c94f4d-gkmql 1m 14Mi
</code></pre>
|
Naseem Khan
|
<p>The sorting should be fixed in the next release - <a href="https://github.com/kubernetes/kubernetes/issues/81270" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/81270</a> </p>
<p>In the meantime you <a href="https://github.com/kubernetes/kubernetes/issues/81270#issuecomment-578144523" rel="nofollow noreferrer">can use this</a>:</p>
<pre><code>kubectl top pod --all-namespaces --no-headers \
| sort --key 3 --numeric \
| tail -3
</code></pre>
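<p>Column 3 in that output is CPU. If you instead want the top three by memory, the same approach works on column 4 (a small variation, assuming the consistent <code>m</code>/<code>Mi</code> units shown above):</p>
<pre><code>kubectl top pod --all-namespaces --no-headers \
  | sort --key 4 --numeric \
  | tail -3
</code></pre>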
|
Matt
|
<p>I have my own Python module (an '.so' file that I'm able to import locally) that I want to make available to my application running in Kubernetes. I am wholly unfamiliar with Kubernetes and Helm, and the documentation and attempts I've made so far haven't gotten me anywhere. </p>
<p>I looked into <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMaps</a>, trying <code>kubectl.exe create configmap mymodule --from-file=MyModule.so</code>, but kubectl says "Request entity too large: limit is 3145728". (My binary file is ~6mb.) I don't know if this is even the appropriate way to get my file there. I've also looked at <a href="https://helm.sh/docs/topics/charts/" rel="nofollow noreferrer">Helm Charts</a>, but I see nothing about how to package up a file to upload. Helm Charts look more like a way to configure deployment of existing services.</p>
<p>What's the appropriate way to package up my file, upload it, and use it within my application (ensuring that Python will be able to <code>import MyModule</code> successfully when running in my AKS cluster)?</p>
|
user655321
|
<p>The python module should be added to the <a href="https://kubernetes.io/docs/concepts/containers/" rel="nofollow noreferrer">container image</a> that runs in Kubernetes, not to Kubernetes itself. </p>
<p>The current container image running in Kubernetes has a build process, usually controlled by a <code>Dockerfile</code>. That container image is then published to an image repository where the container runtime in Kubernetes can pull the image in from and run the container.</p>
<p>If you don't currently build this container, you may need to create your own build process to add the python module to the existing container. In a <code>Dockerfile</code> you start <code>FROM</code> the existing image (e.g. <code>old/image:1.7.1</code>) and then add your content.</p>
<pre><code>FROM old/image:1.7.1
COPY MyModule.so /app/
</code></pre>
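<p>If <code>/app</code> is not already on the application's module search path, you may also need to point Python at it. A sketch, assuming the <code>/app</code> location from the example above:</p>
<pre><code>FROM old/image:1.7.1
COPY MyModule.so /app/
# hypothetical: only needed if /app is not already on sys.path in the base image
ENV PYTHONPATH=/app
</code></pre>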
<p>Publish the new container image to a registry your cluster can pull from so it is available for use in your AKS cluster. For AKS that is typically ACR (Azure Container Registry), though a registry such as <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html" rel="nofollow noreferrer">ECR (Elastic Container Registry)</a> also works if pull credentials are configured.</p>
<p>The only change you might need to make in Kubernetes then is to set the <code>image</code> for the deployment to the newly published image. </p>
<p>Here is a <a href="https://opensource.com/article/18/1/running-python-application-kubernetes" rel="nofollow noreferrer">simple end to end guide</a> for a python application. </p>
|
Matt
|
<p>If I start the service using docker, it looks like this:</p>
<pre><code>docker run -e PARAMS="--spring.datasource.url=jdbc:mysql://mysql-service.example.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>" -p 8180:8080 -v /tmp:/data/applogs --name xxl-job-admin -d xuxueli/xxl-job-admin:2.0.2
</code></pre>
<p>Now I am running it in a kubernetes (v1.15.2) cluster. How do I pass the parameter into the pod's container? I am trying to pass the parameter like this:</p>
<pre><code>"name": "xxl-job-service",
"image": "xuxueli/xxl-job-admin:2.0.2",
"args": [
"--spring.datasource.url=jdbc:mysql://mysql-service.ttt208.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>"
],
</code></pre>
<p>but it does not seem to work; it throws:</p>
<pre><code>19:19:55.563 logback [xxl-job, admin JobFailMonitorHelper] ERROR c.x.j.a.c.t.JobFailMonitorHelper - >>>>>>>>>>> xxl-job, job fail monitor thread error:{}
org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
### The error may exist in class path resource [mybatis-mapper/XxlJobLogMapper.xml]
### The error may involve com.xxl.job.admin.dao.XxlJobLogDao.findFailJobLogIds
### The error occurred while executing a query
### Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:77)
at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:446)
at com.sun.proxy.$Proxy57.selectList(Unknown Source)
at org.mybatis.spring.SqlSessionTemplate.selectList(SqlSessionTemplate.java:230)
at org.apache.ibatis.binding.MapperMethod.executeForMany(MapperMethod.java:139)
at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:76)
at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:59)
at com.sun.proxy.$Proxy61.findFailJobLogIds(Unknown Source)
at com.xxl.job.admin.core.thread.JobFailMonitorHelper$1.run(JobFailMonitorHelper.java:49)
at java.lang.Thread.run(Thread.java:748)
</code></pre>
<p>What should I do to run this service successfully? I am sure the database username and password are correct.</p>
|
Dolphin
|
<p>Docker's <code>-e</code> or <code>--env</code> flag sets an environment variable.</p>
<p>The equivalent in a Kubernetes <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#podspec-v1-core" rel="nofollow noreferrer">pod spec</a> is the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#container-v1-core" rel="nofollow noreferrer">containers <code>env</code> field</a></p>
<pre><code>env:
- name: PARAMS
  value: ' --spring.datasource.url=jdbc:mysql://mysql-service.example.com/xxl-job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=<mysql-password>'
</code></pre>
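<p>Since the <code>PARAMS</code> string contains the database password, you may prefer to keep it out of the pod spec and load it from a Secret instead. A minimal sketch, assuming a hypothetical Secret named <code>xxl-job-params</code> with a <code>params</code> key holding the full string:</p>
<pre><code>env:
- name: PARAMS
  valueFrom:
    secretKeyRef:
      name: xxl-job-params   # hypothetical Secret
      key: params
</code></pre>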
|
Matt
|
<p>We have a kind of evaluation job which consists of several thousand invocations of a legacy binary with various inputs, each of which runs for about a minute. The individual runs are perfectly parallelizable (one instance per core).</p>
<p>What is the state of the art to do this in a hybrid cloud scenario?</p>
<p>Kubernetes itself does not seem to provide an interface for prioritizing or managing waiting jobs. Jenkins would be good at these points, but feels like a hack. Of course, we could hack something ourselves, but the problem should be sufficiently generic to already have an out-of-the box solution.</p>
|
Gunther Vogel
|
<p>You may be interested in the following articles about using Mesos for Hybrid Cloud:</p>
<ul>
<li><a href="https://www.researchgate.net/publication/318176075_Towards_a_Hybrid_Cloud_Platform_Using_Apache_Mesos" rel="nofollow noreferrer">Xue, Noha & Haugerud, Hårek & Yazidi, Anis. (2017). Towards a Hybrid Cloud Platform Using Apache Mesos. 143-148. 10.1007/978-3-319-60774-0_12.</a></li>
</ul>
<blockquote>
<p>Hybrid cloud technology is becoming increasingly popular as it merges private and public clouds to bring the best of two worlds together. However, due to the heterogeneous cloud installation, facilitating a hybrid cloud setup is not simple. Despite the availability of some commercial solutions to build a hybrid cloud, an open source implementation is still unavailable. In this paper, we try to bridge the gap by providing an open source implementation by leveraging the power of Apache Mesos. We build a hybrid cloud on the top of multiple cloud platforms, private and public.</p>
</blockquote>
<ul>
<li><a href="https://youtu.be/yx31J6p60Gg" rel="nofollow noreferrer">Apache Mesos For All Your Hybrid Cloud Needs</a></li>
<li><a href="https://d2iq.com/blog/best-approach-hybrid-cloud" rel="nofollow noreferrer">Choosing the Best Approach to Hybrid Cloud </a></li>
</ul>
|
janisz
|
<p>There seem to be two contradictory explanations of how NodePort services route traffic. Services can route traffic to one of the two, not both:</p>
<ol>
<li><strong>Nodes (through the kube-proxy)</strong> According to <code>kubectl explain Service.spec.externalTrafficPolicy</code> and <a href="https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies" rel="nofollow noreferrer">this article</a> that adds more detail, packets incoming to NodePort services with <code>Service.spec.externalTrafficPolicy=Local</code> set get routed to a kube-proxy, which then routes the packets to the corresponding pods it is running.
<ul>
<li>This <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">kube-proxy networking documentation</a> further supports this theory adding that endpoints add a rule in the service's IPtable that forwards traffic to nodes through the kube-proxy.</li>
</ul></li>
<li><strong>Pods</strong>: services update their IPtables from <code>endpoints</code>, which contain the IP addresses for the <em>pods</em> they can route to. Furthermore, if you remove your service's label selectors and <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">edit endpoints</a> you can change where your traffic is routed to.</li>
</ol>
<p>If one of these is right, then I must be misunderstanding something.</p>
<ul>
<li>If services route to <strong>nodes</strong>, then why can I edit <code>endpoints</code> without breaking the IPtables? </li>
<li>If services route to <strong>pods</strong>, then why would services go through the trouble of routing to nodes when <code>Service.spec.externalTrafficPolicy</code> is set?</li>
</ul>
|
mikeLundquist
|
<p>A <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Service</a> is a virtual address/port managed by <code>kube-proxy</code>. Services forward traffic to their associated endpoints, which are usually pods but as you mentioned, can be set to any destination IP/Port. </p>
<p>A <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="noreferrer">NodePort Service</a> doesn't change the endpoint side of the service, the NodePort allows external traffic into Service via a port on a node.</p>
<h3>Breakdown of a Service</h3>
<p><code>kube-proxy</code> can use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="noreferrer">3 methods</a> to implement the forwarding of a service from Node to destination. </p>
<ul>
<li>a user proxy</li>
<li>iptables</li>
<li>ipvs</li>
</ul>
<p>Most clusters use iptables, which is what is described below. I use the term "forward" instead of "route" because services use <a href="https://en.wikipedia.org/wiki/Network_address_translation" rel="noreferrer">Network Address Translation</a> (or the proxy) to "forward" traffic rather than standard network routing. </p>
<p>The service <code>ClusterIP</code> is a virtual entity managed by <code>kube-proxy</code>. This address/port combination is available on every node in the cluster and forwards any local (pod) service traffic to the endpoints IP and port. </p>
<pre><code>                                         / Pod (remote node)
Pod -- ClusterIP/Port -- KUBE-SVC-NAT -- Pod
                                         \ Pod (remote node)
</code></pre>
<p>A service with a <code>NodePort</code> is the same as above, with the addition of a way to forward external traffic into the cluster via a Node. <code>kube-proxy</code> manages an additional rule to watch for external traffic and forward it into the same service rules. </p>
<pre><code>Ext -- NodePort \ / Pod (remote node)
KUBE-SVC-NAT -- Pod
Pod -- ClusterIP/Port / \ Pod (remote node)
</code></pre>
<p>The <code>externalTrafficPolicy=Local</code> setting makes a NodePort service use <em>only</em> a local Pod to service the incoming traffic. This avoids a network hop which removes the need to rewrite the source of the packet (via NAT). This results in the real network IP arriving at the pod servicing the connection, rather than one of the cluster nodes being the source IP. </p>
<pre><code>Ext -- NodePort \ Pod (remote node)
KUBE-SVC-NAT -- Pod (local)
Pod -- ClusterIP/Port / Pod (remote node)
</code></pre>
<h3>iptables</h3>
<p>I recommend attempting to trace a connection from source to destination for a service or nodeport on a host. It requires a bit of iptables knowledge, but I think it's worthwhile.</p>
<p>To list all the services ip/ports that will be forwarded: </p>
<p><code>iptables -vnL -t nat KUBE-SERVICES</code></p>
<p>To list all the nodeports that will be forwarded:</p>
<p><code>iptables -vnL -t nat KUBE-NODEPORTS</code></p>
<p>Once you have the rule you can jump through <code>KUBE-SVC-XXX</code> "target" rules in the full output. </p>
<p><code>iptables -vnL -t nat | less</code></p>
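<p>To cross-check what a service's <code>KUBE-SVC-XXX</code> chain should be forwarding to, you can compare the rule targets against the service's endpoints (standard kubectl; substitute your own service name):</p>
<pre><code>kubectl get endpoints my-service
kubectl describe service my-service
</code></pre>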
|
Matt
|
<p>I have the following Kubernetes YAML for my cluster of HTTP/REST services, is there a way I can expose the identity, users and actions services through the same load balancer?</p>
<p>With the config below it creates 4 separate elastic load balancers in AWS when I think 1 is enough. I tried setting Kibana to NodePort so I could access it externally but I couldn't access it so I set the type to LoadBalancer.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: identity-service
labels:
app: identity-service
spec:
replicas: 1
selector:
matchLabels:
app: identity-service
template:
metadata:
labels:
app: identity-service
spec:
containers:
- name: identity-service
image: org_name/identity_service
imagePullPolicy: Always
ports:
- containerPort: 5000
env:
- name: CONNECTION_STRING
value: "..."
imagePullSecrets:
- name: docker-hub
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: users-service
labels:
app: users-service
spec:
replicas: 1
selector:
matchLabels:
app: users-service
template:
metadata:
labels:
app: users-service
spec:
containers:
- name: users-service
image: org_name/users_service
imagePullPolicy: Always
ports:
- containerPort: 5001
env:
- name: CONNECTION_STRING
value: "..."
imagePullSecrets:
- name: docker-hub
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: actions-service
labels:
app: actions-service
spec:
replicas: 1
selector:
matchLabels:
app: actions-service
template:
metadata:
labels:
app: actions-service
spec:
containers:
- name: actions-service
image: org_name/actions_service
imagePullPolicy: Always
ports:
- containerPort: 5003
env:
- name: CONNECTION_STRING
value: "..."
imagePullSecrets:
- name: docker-hub
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: message-queue
labels:
app: message-queue
spec:
replicas: 1
selector:
matchLabels:
app: message-queue
template:
metadata:
labels:
app: message-queue
spec:
containers:
- name: message-queue
image: org_name/message_queue
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5672
- containerPort: 15672
imagePullSecrets:
- name: docker-hub
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: elasticsearch:7.6.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9200
env:
- name: ELASTIC_PASSWORD
value: ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: kibana:7.6.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5601
env:
- name: ELASTICSEARCH_HOSTS
value: http://ELASTICSEARCH_SERVICE_HOST:ELASTICSEARCH_SERVICE_PORT
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: ...
- name: XPACK_MONITORING_ENABLED
value: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: smtp-server
labels:
app: smtp-server
spec:
replicas: 1
selector:
matchLabels:
app: smtp-server
template:
metadata:
labels:
app: smtp-server
spec:
containers:
- name: smtp-server
image: mailhog/mailhog
imagePullPolicy: IfNotPresent
ports:
- containerPort: 1025
- containerPort: 8025
---
apiVersion: v1
kind: Service
metadata:
name: identity-service
labels:
app: identity-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:...:certificate/...
spec:
ports:
- port: 443
targetPort: 5000
protocol: TCP
selector:
app: identity-service
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: users-service
labels:
app: users-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:...:certificate/...
spec:
ports:
- port: 443
targetPort: 5001
protocol: TCP
selector:
app: users-service
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: actions-service
labels:
app: actions-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:...:certificate/...
spec:
ports:
- port: 443
targetPort: 5003
protocol: TCP
selector:
app: actions-service
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: kibana
labels:
app: kibana
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:...:certificate/...
spec:
ports:
- port: 5601
targetPort: 5601
protocol: TCP
selector:
app: kibana
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
app: elasticsearch
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-2:...:certificate/...
spec:
ports:
- port: 9200
targetPort: 9200
protocol: TCP
selector:
app: elasticsearch
type: ClusterIP
</code></pre>
|
c4po
|
<p>Use a single <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">ingress controller</a> to expose each service with <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress definitions</a>. On AWS you can use an <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller" rel="nofollow noreferrer">ALB as the ingress endpoint</a>.</p>
<p>Each service will need a different hostname or /path to differentiate between them.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /identity
        backend:
          serviceName: identity-service
          servicePort: 5000
      - path: /users
        backend:
          serviceName: users-service
          servicePort: 5001
      - path: /actions
        backend:
          serviceName: actions-service
          servicePort: 5003
      - path: /kibana
        backend:
          serviceName: kibana
          servicePort: 5601
</code></pre>
<p>Then change the <code>type</code> of each service to <code>ClusterIP</code></p>
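<p>For example, the identity service would then only be reachable inside the cluster and through the ingress (a sketch based on the service from the question, with the port matching the <code>servicePort</code> used in the ingress above):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: identity-service
  labels:
    app: identity-service
spec:
  type: ClusterIP
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
  selector:
    app: identity-service
</code></pre>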
|
Matt
|
<p>I have the following deployment running in Google Cloud Platform (GCP):</p>
<pre><code> kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: mybackend
labels:
app: backendweb
spec:
replicas: 1
selector:
matchLabels:
app: backendweb
tier: web
template:
metadata:
labels:
app: backendweb
tier: web
spec:
containers:
- name: mybackend
image: eu.gcr.io/teststuff/backend:latest
ports:
- containerPort: 8081
</code></pre>
<p>This uses the following service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mybackend
labels:
app: backendweb
spec:
type: NodePort
selector:
app: backendweb
tier: web
ports:
- port: 8081
targetPort: 8081
</code></pre>
<p>Which uses this ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: backend-ip
labels:
app: backendweb
spec:
backend:
serviceName: mybackend
servicePort: 8081
</code></pre>
<p>When I spin all this up in Google Cloud Platform however, I get the following error message on my ingress:</p>
<pre><code>All backend services are in UNHEALTHY state
</code></pre>
<p>I've looked through my pod logs with no indication about the problem. Grateful for any advice!</p>
|
Nespony
|
<p>Most likely this problem is caused by your pod not returning 200 on route <code>'/'</code>. Please check your pod configuration. If you don't want to return 200 at route <code>'/'</code>, you could add a readiness probe for the health check like this:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
</code></pre>
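<p>In your deployment this would sit on the <code>mybackend</code> container, pointing at its own port. A sketch, assuming <code>/healthz</code> is whichever path your app actually answers 200 on:</p>
<pre><code>containers:
- name: mybackend
  image: eu.gcr.io/teststuff/backend:latest
  ports:
  - containerPort: 8081
  readinessProbe:
    httpGet:
      path: /healthz   # hypothetical path; use one your app serves with 200
      port: 8081
    initialDelaySeconds: 5
    periodSeconds: 5
</code></pre>
<p>The GCE ingress controller can derive its load balancer health check from this readiness probe, so the backends should then report healthy.</p>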
|
Fei
|
<p>I wonder about the difference between kube-proxy and CNI.</p>
<ol>
<li><p>Does Calico also use iptables to set policies? In that case the role overlaps with kube-proxy; what's the difference between the two?</p>
</li>
<li><p>Why is kube-proxy disabled for Calico eBPF mode? Since kube-proxy uses iptables, do you disable kube-proxy so that eBPF is used instead of iptables?</p>
</li>
<li><p>If I disable kube-proxy, will the existing iptables policies be removed?</p>
</li>
</ol>
<p>Thank you.</p>
|
JungGyu Oh
|
<ol>
<li><p>Calico defaults to using iptables to set network policies. Calico iptables chains/rules sit alongside and integrate with the kube-proxy rules (when kube-proxy is in iptables mode).</p>
</li>
<li><p>The BPF code Calico implements intercepts the packets before the kube-proxy iptables rules are able to. You don't have to disable kube-proxy, but there is no reason to run kube-proxy (and the overhead of it managing iptables rules) once Calico can communicate directly with the kube-apiserver service and manage kubernetes services via BPF.</p>
</li>
<li><p>If kube-proxy is not running, it will not add any k8s iptables rules. If you have been left with rules after kube-proxy is shut down, a manual iptables flush (<code>iptables --flush</code>) or a reload of your base iptables config will do. Otherwise a <code>kube-proxy --cleanup</code> (see the check after this list).</p>
</li>
</ol>
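<p>A quick way to see whether kube-proxy service rules are still present (a sketch; it simply counts rules referencing kube-proxy's service chains, so 0 means none remain):</p>
<pre><code>iptables-save | grep -c 'KUBE-SVC'
</code></pre>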
|
Matt
|
<p>I have a docker container running in a pod, and I need to restart the pod/container when it exceeds its memory or CPU limit. How do I configure this in the Dockerfile?</p>
|
user2980608
|
<p>CPU and memory limits cannot be set when building a docker image, and they cannot be configured in a <code>Dockerfile</code>. It is a scheduling/runtime concern. You can run your container with the <code>docker run</code> command and different flags to control resources. See the <a href="https://docs.docker.com/config/containers/resource_constraints/" rel="nofollow noreferrer">Docker Official Documentation</a> for those control flags for <code>docker run</code>.</p>
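<p>For example, outside Kubernetes you could cap resources at run time with the standard <code>docker run</code> flags (the values here are only illustrative):</p>
<pre><code>docker run --memory=128m --cpus=0.5 your_image
</code></pre>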
<p>As your question is tagged with <code>kubernetes</code>, there is a <code>kubernetes</code> way to limit your resources. You would want to add <code>resources</code> to the container spec in your <code>deployment</code> or <code>pod</code> yaml. For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: your_deployment
  labels:
    app: your_app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: your_app
    spec:
      containers:
      - name: your_container
        image: your_image
        resources:
          limits:
            cpu: "500m"
            memory: "128Mi"
          requests:
            cpu: "250m"
            memory: "64Mi"
      ...
</code></pre>
<p><code>requests</code> affect how the pod is scheduled on nodes. The <code>memory limit</code> determines when the container will be killed for OOM, and the <code>cpu limit</code> determines how the container's cpu usage will be throttled (the pod will not be killed).</p>
<p>The meaning of <code>cpu</code> differs between cloud service providers. For more information, please refer to <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">manage compute resources for Kubernetes</a>.</p>
|
Fei
|
<p>With rolling updates, for example if the image version needs to be updated, do we have the option to configure k8s to stop accepting requests at the older pods once at least the minimum expected number of newer pods is available? Thanks.</p>
|
zero_yu
|
<p>I'm not aware of a way to have kubernetes manage that specific deployment itself without writing custom <a href="https://github.com/kubernetes/kubectl/blob/4a0675f1f2e0816c804c6bc6f2b64425c0356088/pkg/cmd/rollingupdate/rolling_updater.go" rel="nofollow noreferrer">kubectl client</a> or <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-controllers" rel="nofollow noreferrer">kubernetes controller</a> logic (see related questions in the sidebar).</p>
<p>But from the brief description the steps don't seem too hard to implement via a couple of extra API calls (sketched after this list):</p>
<ol>
<li>Scale old release to lower limit</li>
<li>Rollout update</li>
<li>Scale new release to upper limit</li>
</ol>
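<p>A rough kubectl sketch of those steps (the deployment name, image and replica counts are hypothetical; adjust to your own limits):</p>
<pre><code># 1. scale the release down to the lower limit
kubectl scale deployment my-app --replicas=2
# 2. roll out the update
kubectl set image deployment/my-app my-app=myrepo/my-app:v2
kubectl rollout status deployment/my-app
# 3. scale back up to the upper limit once the rollout is done
kubectl scale deployment my-app --replicas=5
</code></pre>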
<p>For a good overview of some custom deployment methods managed outside of kubernetes have a look at the <a href="https://github.com/ContainerSolutions/k8s-deployment-strategies" rel="nofollow noreferrer">Container Solution deployment strategy guide</a>. Maybe the <a href="https://github.com/ContainerSolutions/k8s-deployment-strategies/tree/master/blue-green/single-service" rel="nofollow noreferrer">blue/green deployment</a> fits better.</p>
|
Matt
|
<p><a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler</a></p>
<p>Use case - installing a deployment with nodeSelector. No existing node label matches. Autoscaler won't scale up. Is anyone aware if the autoscaler is capable of labeling a fresh EC2 instance if there is demand for one?</p>
<p>We are deploying big umbrella charts (60+ pods) that are replicas of real production environments. Some of the pods are crucial for the whole chart to work. If a chart gets spread across a few nodes and one of the nodes has health problems, more than one environment is affected. Having it fully deployed on one node reduces the number of affected charts to one.</p>
<p>thanks</p>
|
Łukasz
|
<p>There is no such capability to create nodes with a new specification, and I don't think it is necessary either. Imagine a typo in a nodeLabel bringing up new nodes; these new nodes also not being known to your IaC is another scary thing. The cluster autoscaler adds new nodes by updating your autoscaling group, so a new node is the same as the other nodes in that autoscaling group.
If you want to hack around it you can check <a href="https://kubernetes.io/docs/reference/access-authn-authz/" rel="nofollow noreferrer">admission-controllers</a>, add the new label to existing nodes, or modify the label on the fly to one you do support. But do you really need to do it?</p>
|
Narain
|
<p>Is there a way to flatten a dictionary with helm? I want to provide environment variables to the application from the chart by flattening a YAML config located in values.yaml. The config can look like this (not actual):</p>
<pre class="lang-yaml prettyprint-override"><code>config:
  server:
    port: 3333
  other:
    setting:
      name: test
</code></pre>
<p>And I would like to provide environment variables as:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: CONFIG_SERVER_PORT
  value: 3333
- name: CONFIG_OTHER_SETTING_NAME
  value: test
</code></pre>
<p>I have considered using Kubernetes config maps but this would mean deploying slightly different instances of the app with random release names so that the config is not overwritten.
This library <a href="https://github.com/jeremywohl/flatten" rel="nofollow noreferrer">https://github.com/jeremywohl/flatten</a> provides a way to flatten a <code>map[string]interface{}</code> with delimiters. Is there a way to provide a custom pipe for helm that uses the library, or another way to flatten the config?</p>
|
user3533087
|
<p>I'm not aware of anything like that built in. <a href="https://masterminds.github.io/sprig/" rel="nofollow noreferrer">Sprig</a> provides most of the useful functions for helm templates but the <a href="https://masterminds.github.io/sprig/dicts.html" rel="nofollow noreferrer">dict functions</a> just cover the primitives.</p>
<p>You could <code>define</code> a <a href="https://helm.sh/docs/chart_template_guide/named_templates/" rel="nofollow noreferrer">named template</a> that does the business and recurses down the config dict/map. Then <code>include</code> the template where needed:</p>
<pre><code>{{- define "recurseFlattenMap" -}}
{{- $map := first . -}}
{{- $label := last . -}}
{{- range $key, $val := $map -}}
{{- $sublabel := list $label $key | join "_" | upper -}}
{{- if kindOf $val | eq "map" -}}
{{- list $val $sublabel | include "recurseFlattenMap" -}}
{{- else -}}
- name: {{ $sublabel | quote }}
  value: {{ $val | quote }}
{{ end -}}
{{- end -}}
{{- end -}}
</code></pre>
<p>Passing the <code>config</code> data in is a little complex here, via a <code>list</code> that is then separated back out into <code>$map</code> and <code>$label</code>. This is due to templates only accepting a single variable <a href="https://helm.sh/docs/chart_template_guide/named_templates/#setting-the-scope-of-a-template" rel="nofollow noreferrer">scope</a>.</p>
<pre><code>env: {{ list .Values.config "CONFIG" | include "recurseFlattenMap" | nindent 2 }}
</code></pre>
<p>With the example values:</p>
<pre><code>config:
  server:
    port: 3333
  first: astr
  other:
    setting:
      name: test
</code></pre>
<p>Results in</p>
<pre><code>$ helm template .
---
# Source: so61280873/templates/config.yaml
env:
  - name: "CONFIG_FIRST"
    value: "astr"
  - name: "CONFIG_OTHER_SETTING_NAME"
    value: "test"
  - name: "CONFIG_SERVER_PORT"
    value: "3333"
</code></pre>
|
Matt
|
<p>I'm confused about nginx ingress with Kubernetes. I've been able to use it with "basic nginx auth" (unable to do so with <code>oauth2</code> yet).</p>
<p>I've installed via helm:</p>
<p><code>helm install stable/nginx-ingress --name app-name --set rbac.create=true</code></p>
<p>This creates two services, an <code>nginx-ingress-controller</code> and an <code>nginx-ingress-backend</code>.</p>
<p>When I create an ingress, this ingress is targeted towards one and only one <code>nginx-ingress-controller</code>, but I have no idea how:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
    nginx.ingress.kubernetes.io/rewrite-target: /
  namespace: kube-system
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-deployment-service
          servicePort: 8080
</code></pre>
<p>When I get this Ingress from the output of <code>kubectl get ingress -n kube-system</code>, it has a public, external IP.</p>
<p>What's <strong><em>concerning</em></strong> is that <code>basic-auth</code> <strong>DOESN'T APPLY</strong> to that external IP; it's wide open! Nginx authentication only kicks in when I try to visit the <code>nginx-ingress-controller</code>'s IP.</p>
<p>I have a lot of questions.</p>
<ol>
<li>How do I make an ingress created from <code>kubectl apply -f
ingress.yaml</code> target a specific nginx-ingress-controller?</li>
<li>How do I keep this new <code>ingress</code> from having an external IP?</li>
<li>Why isn't <code>nginx</code> authentication kicking in?</li>
<li>What IP am I suppose to use (the <code>nginx-ingress-controller</code> or the
generated one?)</li>
<li>If I'm suppose to use the generated IP, what about the one from the controller?</li>
</ol>
<p>I have been searching for decent, working examples (and poring over sparse, changing documentation, and github issues) for <em>literally</em> days.</p>
<p>EDIT:</p>
<p>In this "official" <a href="https://kubernetes.github.io/ingress-nginx/examples/auth/basic/" rel="nofollow noreferrer">documentation</a>, it's unclear as to whether or not <code>http://10.2.29.4/</code> is the IP from the <code>ingress</code> or the <code>controller</code>. I assume the <code>controller</code> because when I run this, the other doesn't even authenticate (<em>it lets me in without asking for a password</em>). Both IPs I'm using are external IPs (publicly available) on GCP.</p>
|
Alexander Kleinhans
|
<p>I think you might have misunderstood some of the concept definitions. </p>
<ol>
<li>Ingress is not a job (nor a service, nor a pod). It is just configuration. It cannot have an "IP". Think of an ingress as a routing rule or a routing table in your cluster.</li>
<li><code>Nginx-ingress-controller</code> is the service with type <code>Loadbalancer</code> with actual running pods behind it that facilitates those ingress rules that you created for your cluster.</li>
<li><code>Nginx-ingress-backend</code> is likely to be a <code>default-backend</code> that your <code>nginx-ingress-controller</code> will route to if no matching routes are found. see <a href="https://kubernetes.github.io/ingress-nginx/user-guide/default-backend/" rel="nofollow noreferrer">this</a></li>
<li>In general, your <code>nginx-ingress-controller</code> should be the only entry point of your cluster. Other services in your cluster should have type <code>ClusterIP</code> so that they are not exposed outside the cluster and are only accessible through your <code>nginx-ingress-controller</code>. In your case, since your service can be accessed from outside directly, it is evidently not of type <code>ClusterIP</code>. Just change the service type to <code>ClusterIP</code> to get it protected (see the example after this list).</li>
</ol>
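<p>For example, the service type can be switched with a patch (a sketch; it assumes the tomcat service lives in the same namespace as the ingress from the question):</p>
<pre><code>kubectl patch service tomcat-deployment-service -n kube-system \
  -p '{"spec": {"type": "ClusterIP"}}'
</code></pre>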
<p>Based on the above understanding, I will gladly provide further help with the questions you have.</p>
<p>Some readings:</p>
<ol>
<li>What is ingress: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></li>
<li>K8s Services and external accessibility: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a></li>
</ol>
|
Fei
|
<p>I'm setting up a new kubernetes cluster on my local PC with the specifications below. While trying to initialize the Kubernetes cluster I'm facing some issues. Need your inputs.</p>
<p>OS version: Linux server.cent.com 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux</p>
<p>Docker version: Docker version 1.13.1, build 07f3374/1.13.1</p>
<pre><code>[root@server ~]# rpm -qa |grep -i kube
kubectl-1.13.2-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubeadm-1.13.2-0.x86_64
kubelet-1.13.2-0.x86_64
</code></pre>
<p>Issue facing is:</p>
<pre><code>[root@server ~]# kubeadm init --apiserver-advertise-address=192.168.203.154 --pod-network-cidr=10.244.0.0/16
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
</code></pre>
<p>Kubelet status:</p>
<pre><code>Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.354902 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.456166 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.558500 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.660833 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.763840 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.867118 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:09 server.cent.com kubelet[10994]: E0129 09:34:09.968783 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.071722 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.173396 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.274892 10994 kubelet.go:2266] node "server.cent.com" not found
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.292021 10994 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.328447 10994 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.20?
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.329742 10994 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168
Jan 29 09:34:10 server.cent.com kubelet[10994]: E0129 09:34:10.376238 10994 kubelet.go:2266] node "server.cent.com" not found
</code></pre>
<p>I have tried the same in all these versions, but same issue: 1.13.2, 1.12.0, 1.11.0, 1.10.0, and 1.9.0</p>
|
Vignesh M
|
<p>I faced this problem while installing k8s on Fedora Core OS. Then I did</p>
<pre><code>cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
</code></pre>
<p>See: <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="noreferrer">https://kubernetes.io/docs/setup/production-environment/container-runtimes/</a></p>
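<p>To confirm the change took effect after restarting docker, you can check the reported cgroup driver:</p>
<pre><code>docker info | grep -i 'cgroup driver'
# should now show: Cgroup Driver: systemd
</code></pre>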
<p>Then the docker restart failed, and I overcame that by creating a new file /etc/systemd/system/docker.service.d/docker.conf with the following content:</p>
<pre><code>[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
</code></pre>
<p>See: <a href="https://docs.docker.com/config/daemon/" rel="noreferrer">https://docs.docker.com/config/daemon/</a></p>
<p>After that, everything was fine and I was able to set up the k8s cluster.</p>
|
Maithilish
|
<p>Kubernetes newbie here, so my question might not make sense. Please bear with me.</p>
<p>So my question is: given I have set up a Storage Class in my cluster, and I have a PVC (which uses that Storage Class), if I use that PVC in my Deployment and that Deployment has 5 replicas, will the Storage Class create 5 PVs, one per Pod? Or only 1 PV shared by all Pods under that Deployment?</p>
<p>Edit: Also I have 3 Nodes in this cluster</p>
<p>Thanks in advance.</p>
|
Jplus2
|
<p>The <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">Persistent Volume Claim</a> resource is specified separately from a deployment. It doesn't matter how many replicas the deployment has, kubernetes will only have the number of PVC resources that you define.</p>
<p>If you are looking for multiple stateful containers that create their own PVCs, use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> instead. This includes a <code>volumeClaimTemplates</code> definition.</p>
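<p>A minimal sketch of what that looks like in a StatefulSet spec (the claim name, storage class and size are placeholders):</p>
<pre><code>volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: my-storage-class   # placeholder: your StorageClass
    resources:
      requests:
        storage: 1Gi
</code></pre>
<p>Each replica of the StatefulSet then gets its own PVC (and PV) created from this template.</p>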
<p>If you want all deployment replicas to share a PVC, the storage class provisioner plugin will need to support either <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">ReadOnlyMany or ReadWriteMany</a> access modes.</p>
|
Matt
|
<p>Very new to kubernetes. I've been getting confused by documentation and example differences between Helm2 and 3.</p>
<p>I installed the <code>stable/nginx-ingress</code> chart via <code>helm install app-name stable/nginx-ingress</code>. </p>
<p>1st question:</p>
<p>I need to update the <code>externalTrafficPolicy</code> to <code>Local</code>. I learned later I could have set that during the install process via adding <code>--set controller.service.externalTrafficPolicy=Local</code> to the helm command.</p>
<p>How can I update the LoadBalancer service with the new setting without removing the ingress controller and reinstalling?</p>
<p>2nd question:</p>
<p>Helm3 just downloaded and set up the ingress controller and didn't save anything locally. Is there a way to back up all my k8s cluster configs (other than the ones I've created manually)?</p>
|
Geuis
|
<p>To upgrade and dump the YAML deployed (for a backup of the ingress release)</p>
<pre><code>helm upgrade <your-release-name> stable/nginx-ingress \
--reuse-values \
--set controller.service.externalTrafficPolicy=Local \
--output yaml
</code></pre>
<p>For a public chart you may want to set the <code>--version</code> option to the currently installed version of the chart, in case you don't want updates from newer chart versions to be applied along with the setting.</p>
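<p>To find the installed chart version and the values currently set for the release (standard Helm 3 commands; substitute your release name):</p>
<pre><code>helm list
helm get values <your-release-name>
</code></pre>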
<p>For complete dumps, have a look through <a href="https://github.com/kubernetes/kubernetes/issues/24873" rel="nofollow noreferrer">this github issue</a>. All options there are a bit dodgy though, with edge cases. I would recommend having everything re-deployable from something like git, all the way from cluster to apps. Anyone who makes edits by hand can then be shot (Well.. at least have clusters regularly redeployed on them :)</p>
|
Matt
|
<p>I am using manual scaling in an Azure AKS cluster, which can scale up to 60 nodes.</p>
<p>Scaling command worked fine:</p>
<pre><code>az aks scale --resource-group RG-1 --name KS-3 --node-count 46
{- Finished ..
"agentPoolProfiles": [
{
"availabilityZones": null,
"count": 46,
...
</code></pre>
<p>and reported the count of 46 nodes.</p>
<p>The status also shows "Succeeded":</p>
<pre><code>az aks show --name KS-3 --resource-group RG-1 -o table
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
---------- ---------- ------------------ ------------------- ------------------- -------------------------
KS-3 xxxxxxx RG-1 1.16.13 Succeeded xxx.azmk8s.io
</code></pre>
<p>However, when I look at <code>kubectl get nodes</code> it shows only 44 nodes:</p>
<pre><code>kubectl get nodes | grep -c 'aks'
44
</code></pre>
<p>with 7 nodes in "Ready,SchedulingDisabled" state (and the rest in Ready state):</p>
<pre><code>kubectl get nodes | grep -c "Ready,SchedulingDisabled"
7
</code></pre>
<p>When I try to scale the cluster down to 45 nodes, it gives this error:</p>
<pre><code>Deployment failed. Correlation ID: xxxx-xxx-xxx-x. Node 'aks-nodepool1-xxxx-vmss000yz8' failed to be drained with error: 'nodes "aks-nodepool1-xxxx-vmss000yz8" not found'
</code></pre>
<p>I am not sure what got the cluster into this inconsistent state and how to go about debugging this.</p>
|
arun
|
<p>It happened because two of the nodes were in a corrupted state. We had to delete these nodes from the VM scale set in the node resource group associated with our AKS cluster. Not sure why the nodes got into this state though.</p>
|
arun
|
<p>These are recommended labels:</p>
<pre><code>app.kubernetes.io/name
app.kubernetes.io/instance
app.kubernetes.io/version
app.kubernetes.io/component
app.kubernetes.io/part-of
app.kubernetes.io/managed-by
</code></pre>
<p>I can't quite figure out what <code>app.kubernetes.io/instance</code> is for.</p>
<p>Could you provide any useful examples?</p>
|
Jordi
|
<p>A generic application <code>name</code> can have multiple <code>instance</code>s.</p>
<p>Say an application used <code>nginx</code> to serve different types of content, so that each type of nginx can be scaled independently:</p>
<pre><code>app.kubernetes.io/name: nginx
app.kubernetes.io/instance: static-01
app.kubernetes.io/instance: img-02
app.kubernetes.io/instance: dynamic-05
</code></pre>
<p>They are only recommendations though so you can use them how you want. In small scale clusters you might not have a need to make <code>name</code> and <code>instance</code> different.</p>
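<p>As a concrete convention, Helm charts (including those generated by <code>helm create</code>) commonly set <code>instance</code> to the release name, so two installs of the same chart stay distinguishable. A sketch, where <code>mychart</code> is just a placeholder name:</p>
<pre><code>app.kubernetes.io/name: mychart
app.kubernetes.io/instance: {{ .Release.Name }}
</code></pre>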
<p>See <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#applications-and-instances-of-applications" rel="noreferrer">Applications And Instances Of Applications</a></p>
<blockquote>
<p>An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, wordpress can be installed more than once where different websites are different installations of wordpress.</p>
<p>The name of an application and the instance name are recorded separately. For example, WordPress has a <code>app.kubernetes.io/name</code> of <code>wordpress</code> while it has an instance name, represented as <code>app.kubernetes.io/instance</code> with a value of <code>wordpress-abcxzy</code>. This enables the application and instance of the application to be identifiable. Every instance of an application must have a unique name.</p>
</blockquote>
|
Matt
|
<p>Is it possible, within a <a href="/questions/tagged/helm" class="post-tag" title="show questions tagged 'helm'" rel="tag">helm</a> chart to create a single string which is a comma-separated representation (similar to using the <code>",".join()</code> command in Python) of strings with a common prefix and a variable suffix?</p>
<p>For example, I have a CLI application that requires an argument like so via the <code>extraArgs</code> parameter in a <a href="/questions/tagged/kubernetes" class="post-tag" title="show questions tagged 'kubernetes'" rel="tag">kubernetes</a> pod definition:</p>
<pre><code>extraArgs: >-
  -M {{ $.Values.global.hostname }}/100
</code></pre>
<p>I now have to modify this value to be over a range (i.e. from <code>{{$.Values.global.minval}}</code> to <code>{{$.Values.global.maxval}}</code>, inclusive). So, for a <code>minval=100</code> and <code>maxval=105</code>, my chart needs to now become (note the lack of a trailing comma, and no spaces other than the space after <code>-M</code>):</p>
<pre><code>extraArgs: >-
  -M {{ $.Values.global.hostname }}/100,{{ $.Values.global.hostname }}/101,{{ $.Values.global.hostname }}/102,{{ $.Values.global.hostname }}/103,{{ $.Values.global.hostname }}/104,{{ $.Values.global.hostname }}/105
</code></pre>
<p>Is there some way I can execute this in a range/loop in my chart? I have several instances of this chart that will use different min/max values, and I'd like to automate this tedious task as much as I can (additionally, I <strong>do not</strong> have access to the app's source, so I can't change the CLI interface to the application).</p>
<p>In Python, I could accomplish this roughly by:</p>
<pre><code>minval = 100
maxval = 105
s = "-M "
L = []
for i in range(minval, maxval+1):
    L.append("{{{{ $.Values.global.hostname }}}}/{}".format(i))
s = s + ",".join(L)
# print(s)
</code></pre>
<p>I'm not sure where to begin doing this in a Helm template beyond starting with the <code>range()</code> function.</p>
|
Uther
|
<p>Helm includes the <a href="https://masterminds.github.io/sprig" rel="nofollow noreferrer">sprig library</a> of template functions which contains <a href="https://masterminds.github.io/sprig/integer_slice.html" rel="nofollow noreferrer"><code>untilStep</code></a> and <a href="https://masterminds.github.io/sprig/string_slice.html" rel="nofollow noreferrer"><code>join</code></a>. </p>
<p>There is no concept of a <a href="https://github.com/Masterminds/sprig/issues/91" rel="nofollow noreferrer"><code>map</code> or <code>each</code> operator in sprig</a> yet so you can construct the list in a <code>range</code> loop to be joined later (<a href="https://stackoverflow.com/a/59173597/1318694">from here</a>)</p>
<pre><code>{{- $hostname := $.Values.global.hostname -}}
{{- $minval := int .Values.minval -}}
{{- $maxval := int .Values.maxval | add1 | int -}}
{{- $args := list -}}
{{- range untilStep $minval $maxval 1 -}}
{{- $args = printf "%s/%d" $hostname . | append $args -}}
{{- end }}
extraArgs: '-M {{ $args | join "," }}'
</code></pre>
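<p>With <code>minval: 100</code>, <code>maxval: 105</code> and a hostname of <code>example.com</code>, that should render to roughly:</p>
<pre><code>extraArgs: '-M example.com/100,example.com/101,example.com/102,example.com/103,example.com/104,example.com/105'
</code></pre>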
|
Matt
|
<h1>How can I add stable/nginx-ingress as a dependency to my custom helm chart?</h1>
<p>After trying a few different urls for the repository, I still have no luck. </p>
<p>Steps</p>
<ol>
<li>Created a new helm chart with helm create and edited the <code>Chart.yaml</code> to be:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
name: acme
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1.16.0
icon: https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png
dependencies:
  - name: stable/nginx-ingress
    version: ~1.34
    repository: https://kubernetes-charts.storage.googleapis.com
</code></pre>
<ol start="2">
<li>Executed this command <code>helm dep update acme</code></li>
</ol>
<p>The output is the following</p>
<p><code>Error: stable/nginx-ingress chart not found in repo https://kubernetes-charts.storage.googleapis.com</code></p>
<p><strong>Note</strong> </p>
<p>I have seen these Stack Overflow questions, but the answers were lacking explanation: </p>
<ul>
<li><a href="https://stackoverflow.com/questions/57970255/helm-v3-cannot-find-the-official-repo">Helm V3 - Cannot find the official repo</a> </li>
<li><a href="https://stackoverflow.com/questions/59454550/adding-nginx-ingress-certmanager-as-dependency-in-helm-charts">Adding Nginx-Ingress/Certmanager as Dependency in Helm Charts</a></li>
</ul>
<p>This question is not intended to be a duplicate. I'm not using Azure and I am using Helm 3.</p>
|
Stephen
|
<p>The updated chart for helm3 is <a href="https://kubernetes.github.io/ingress-nginx/deploy/#using-helm" rel="nofollow noreferrer">ready to use</a>. </p>
<blockquote>
<pre><code>helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install my-release ingress-nginx/ingress-nginx
</code></pre>
</blockquote>
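<p>If you specifically want it as a chart dependency in your <code>Chart.yaml</code> rather than a separate install, the same repository can be referenced there. A sketch; the version is a placeholder you should pin to a released chart version:</p>
<pre><code>dependencies:
  - name: ingress-nginx
    version: "x.y.z"   # placeholder: pin to a released chart version
    repository: https://kubernetes.github.io/ingress-nginx
</code></pre>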
<hr>
<h3>Original</h3>
<p>The <a href="https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx" rel="nofollow noreferrer">nginx-ingress chart</a> is not published there yet. The progress is being tracked in <a href="https://github.com/kubernetes/ingress-nginx/issues/5161" rel="nofollow noreferrer">kubernetes/ingress-nginx#5161</a>. </p>
<p>If you want to use the old chart you will need either a copy of the chart locally, or a version of the chart published to your own repo. For the local file dependency, get a copy of the current chart:</p>
<pre><code>git clone https://github.com/helm/charts.git
cp -r charts/stable/nginx-ingress /path/to/acmes-parent-dir/
</code></pre>
<p>Then you can use a relative reference to the local directory:</p>
<pre><code>dependencies:
  - name: nginx-ingress
    version: "1.34"
    repository: "file://../nginx-ingress"
</code></pre>
|
Matt
|
<p>We are trying to deploy a dot net core API service to amazon EKS using ECR. The deployment was successful, but the pods are in pending status. Below are the detailed steps we followed.</p>
<p>Steps followed:</p>
<ol>
<li>Created a docker image</li>
<li>Pushed the image to ECR. The image is now visible in the aws console also. // The image looks good, I was able to run it using docker locally.</li>
</ol>
<ol start="3">
<li><p>Created a t2-micro cluster as below
eksctl create cluster --name net-core-prod --version 1.14 --region us-west-2 --nodegroup-name standard-workers --node-type t2.micro --nodes 1 --nodes-min 1 --nodes-max 1 --managed
// Cluster and Node groups were created successfully.
// IAM roles also got created</p></li>
<li><p>Deployed a replication controller using the attached json/yaml//net-app.json
<a href="https://i.stack.imgur.com/7gV0N.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/7gV0N.jpg" alt="enter image description here"></a></p></li>
<li>Deployed the service using the attached json/yaml //net-app-scv.json
<a href="https://i.stack.imgur.com/M9wAp.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/M9wAp.jpg" alt="enter image description here"></a></li>
<li><p>The get all command returned this. //get_all.png
<a href="https://i.stack.imgur.com/aWSzr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/aWSzr.png" alt="get all"></a>
POD always remains in PENDING status.</p></li>
<li><p>Pod describe gave the below result //describe_pod.png
<a href="https://i.stack.imgur.com/fj4io.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fj4io.png" alt="describe pod"></a></p></li>
<li>We have also tried adding policy to the cluster IAM role to include ECR permissions attached. //ECR_policy.json</li>
</ol>
<p>Key points:<br/>
1. We are using a t2-micro instance cluster since it’s an AWS free account.<br/>
2. We created a linux cluster and tried to push the dotnet core app. //this worked fine in our local machine<br/>
3. The cluster had only 1 node //-nodes 1 --nodes-min 1 --nodes-max 1<br/></p>
<p>Can somebody please guide us on how to set this up correctly?</p>
|
snehgin
|
<p>On Amazon Elastic Kubernetes Service (<code>EKS</code>), the maximum number of pods per node depends on the node type and ranges from 4 to 737.</p>
<p>If you reach the max limit, you will see something like:</p>
<pre><code>❯ kubectl get node -o yaml | grep pods
pods: "17" => allocatable pods for the node (how many the scheduler can place on it)
pods: "17" => pod capacity of the node (usually the same number)
</code></pre>
<p>If you get only one number, it should be allocatable. To count how many pods are actually running, use the following command:</p>
<pre><code>kubectl get pods --all-namespaces | grep Running | wc -l
</code></pre>
<p>Here's the list of max pods per node type:
<a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt" rel="noreferrer">https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt</a></p>
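<p>As a rough check: with the AWS VPC CNI the per-node limit works out to ENIs * (IPs per ENI - 1) + 2, which for a <code>t2.micro</code> is only 4 pods. The system pods (<code>aws-node</code>, <code>kube-proxy</code>, the two <code>coredns</code> replicas) already take up those slots on a single-node cluster, which would explain your pod staying in <code>Pending</code>. A slightly larger instance type, or more nodes, is the usual fix.</p>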
<p>On Google Kubernetes Engine (<code>GKE</code>), the limit is 110 pods per node. check the following URL:</p>
<p><a href="https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md" rel="noreferrer">https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md</a> </p>
<p>On Azure Kubernetes Service (<code>AKS</code>), the default limit is 30 pods per node but it can be increased up to 250. The default maximum number of pods per node varies between kubenet and Azure CNI networking, and the method of cluster deployment. check the following URL for more information: </p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node" rel="noreferrer">https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node</a></p>
|
Muhammad Soliman
|
<p>What is the best way to troubleshoot when kubectl doesn't respond or exits with a timeout? How can I get it working again?</p>
<p>Both kubectl and helm become unresponsive against my cluster when I install a helm chart. </p>
|
YasiuMaster
|
<p>General advice:</p>
<ol>
<li><p>Check if your kubectl is connecting to the correct kube-api endpoint. Take a look at your kubeconfig (by default stored in $HOME/.kube) and try a simple curl against that endpoint to rule out DNS or connectivity problems (see the sketch after this list).</p></li>
<li><p>Take a look at your nodes' logs by ssh into the nodes that you have: see <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs" rel="nofollow noreferrer">this</a> for more details instructions and log locations. </p></li>
</ol>
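<p>For the first check, a quick sketch might look like this (the port and paths are the usual defaults, adjust them to whatever your kubeconfig actually points at):</p>
<pre><code># which API endpoint is kubectl talking to?
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# fail fast instead of hanging forever
kubectl version --request-timeout=5s

# raw reachability check against the endpoint printed above
curl -k https://<api-server-host>:6443/healthz
</code></pre>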
<p>Once you have more information, you could get yourself started in the investigation of problems.</p>
|
Fei
|
<p>So I have 5 preconfigured docker containers running with ssh. The “nodes” are in the same network subnet and can ping each other. What would be the steps to set up Kubernetes or kind on top of such a setup?</p>
|
DuckQueen
|
<p>Kubernetes has a project called <a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer"><code>kind</code></a> (Kubernetes IN Docker) for testing or building kubernetes in Docker containers. There's a <a href="https://kind.sigs.k8s.io/docs/user/quick-start/" rel="nofollow noreferrer">quick start guide</a></p>
<p>Making a container react like a regular node/vm/host to kubernetes is a complex task, docker in docker is only the start of it. kind supplies the container half of the equation for you and does <a href="https://kind.sigs.k8s.io/docs/user/quick-start#multinode-clusters" rel="nofollow noreferrer">multi node</a> setups</p>
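<p>As a minimal sketch, a multi-node kind cluster config could look like this (the <code>apiVersion</code> may be <code>v1alpha3</code> or <code>v1alpha4</code> depending on your kind version):</p>
<pre class="lang-yaml prettyprint-override"><code># kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
</code></pre>
<p>and then <code>kind create cluster --config kind-config.yaml</code>.</p>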
|
Matt
|
<p>I have an issue with the DNS mapping in kubernetes.</p>
<p>We have some servers which can be accessed from the internet. The global DNS translates these servers' domain names to public internet IPs.
Some services can't be accessed through the public IPs for security reasons.</p>
<p>From inside the company network, we manually add DNS mappings with private IPs to /etc/hosts inside the docker containers managed by kubernetes in order to reach these servers.</p>
<p>I know that docker supports the --add-host option to change <code>/etc/hosts</code> when executing <code>docker run</code>. I'm not sure if this is supported in the latest kubernetes, such as <code>1.4</code> or <code>1.5</code>?</p>
<p>On the other hand, we can wrap the startup script for the docker container,</p>
<ul>
<li>append the mappings to <code>/etc/hosts</code> firstly</li>
<li>start our application</li>
</ul>
<p>I only want to change the file once, after the first run in each container. Is there an easy way to do this, given that the mappings may differ between development and production environments? Or are there any commands for this provided by kubernetes itself?</p>
|
qingdaojunzuo
|
<p>It is now possible to add a <code>hostAliases</code> section directly in the description of the deployment.</p>
<p>As a full example of how to use the <code>hostAliases</code> section I have included the surrounding code for an example deployment as well.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: "backend-cluster"
spec:
replicas: 1
selector:
matchLabels:
app: "backend"
template:
metadata:
labels:
app: "backend"
spec:
containers:
- name: "backend"
image: "exampleregistry.azurecr.io/backend"
ports:
- containerPort: 80
hostAliases:
- hostnames:
- "www.example.com"
ip: "10.0.2.4"
</code></pre>
<p>The important part is only a small portion of the file; the rest is omitted here for clarity:</p>
<pre class="lang-yaml prettyprint-override"><code>...
hostAliases:
- hostnames:
- "www.example.com"
ip: "10.0.2.4"
</code></pre>
|
eikooc
|
<p>We deploy JupyterHub to Kubernetes (specifically, AWS managed kubernetes “EKS”)
We deploy it via helm.
We run version 0.8.2 of JupyterHub.</p>
<p>We want to know:</p>
<p>(1) What is the default memory allocation for notebook servers?</p>
<p>(2) Is it possible to increase it? How?</p>
<p>For reference, this is our helm chart:</p>
<pre><code>auth:
admin:
access: true
users:
- REDACTED
type: github
github:
clientId: "REDACTED"
clientSecret: "REDACTED"
callbackUrl: "REDACTED"
org_whitelist:
- "REDACTED"
scopes:
- read:org
singleuser:
image:
# Get the latest image tag at:
# https://hub.docker.com/r/jupyter/datascience-notebook/tags/
# Inspect the Dockerfile at:
# https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
# name: jupyter/datascience-notebook
# tag: 177037d09156
name: REDACTED
tag: REDACTED
pullPolicy: Always
storage:
capacity: 32Gi
lifecycleHooks:
postStart:
exec:
command: ["/bin/sh", "-c", "touch ~/.env && chmod 777 ~/.env"]
hub:
# cookie_max_age_days - determines how long we keep the github
# cookie in the hub server (in days).
# cull_idle_servers time out - determines how long it takes before
# we kick out an inactive user and shut down their user server.
extraConfig: |
import sys
c.JupyterHub.cookie_max_age_days = 2
c.JupyterHub.services = [
{
"name": "cull-idle",
"admin": True,
"command": [sys.executable, "/usr/local/bin/cull_idle_servers.py", "--timeout=3600"],
}
]
</code></pre>
|
James Wierzba
|
<p>The <a href="https://github.com/jupyterhub/zero-to-jupyterhub-k8s" rel="nofollow noreferrer">jupyterhub v0.8.2 charts</a> default memory resource request for each <code>singleuser</code> pod/container is <code>1G</code> in the <a href="https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/0.8.2/jupyterhub/values.yaml#L222-L224" rel="nofollow noreferrer">chart values</a>. Note that this is a <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-types" rel="nofollow noreferrer">resource <code>request</code></a> which informs the kubernetes scheduler what the container should require on a node. The container is free to use up available memory on the node if needed. Kubernetes should only start evicting pods when the whole node is under memory pressure, which is basically less than 100MiB free memory total</p>
<p>To change this, override the <code>singleuser.memory.guarantee</code> value to set a different request (not sure why they changed the name).</p>
<pre><code>singleuser:
memory:
guarantee: '1024Mi'
</code></pre>
<p>The other option is to set a hard <code>limit</code> at which the container can be killed. There is no limit set by default in the helm <a href="https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/0.8.2/jupyterhub/values.yaml#L223" rel="nofollow noreferrer">chart default values</a>. To enforce a limit, override the <code>singleuser.memory.limit</code> value when running helm.</p>
<pre><code>singleuser:
memory:
limit: '1024Mi'
</code></pre>
<p>If you are looking at managing overall usage, you might want to look at <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/#enabling-resource-quota" rel="nofollow noreferrer">resource quotas</a> on the namespace you have jupyterhub running in as it looks like any of the above settings would be per user/singleuser instance. </p>
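<p>As a minimal sketch of that last point, assuming jupyterhub is deployed into a namespace called <code>jhub</code> (adjust the name and numbers to your setup):</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: jhub-memory-quota
  namespace: jhub
spec:
  hard:
    requests.memory: 16Gi
    limits.memory: 32Gi
</code></pre>
<p>Note that a <code>limits.memory</code> quota requires every pod in the namespace to declare a memory limit.</p>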
|
Matt
|
<p>I'm trying to start Minikube, however it crashes. The command used and the output are as follows.
Command:</p>
<pre><code>minikube start
</code></pre>
<p>Output: </p>
<pre><code> minikube v0.35.0 on linux (amd64)
Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
Downloading Minikube ISO ...
184.42 MB / 184.42 MB [============================================] 100.00% 0s
"minikube" IP address is x.x.x.x
Configuring Docker as the container runtime ...
Preparing Kubernetes environment ...
Downloading kubeadm v1.13.4
Downloading kubelet v1.13.4
Pulling images required by Kubernetes v1.13.4 ...
Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: command failed: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
stdout:
stderr: failed to pull image "k8s.gcr.io/kube-apiserver:v1.13.4": output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
: Process exited with status 1
Launching Kubernetes v1.13.4 using kubeadm ...
Error starting cluster: kubeadm init:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.4: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
,
...
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
: Process exited with status 1
Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
</code></pre>
<p>Can someone shed some light on why this happened?</p>
|
Yang Zhou
|
<p><code>k8s.gcr.io</code> is Google Container Registry. It is blocked in China, and it looks like you are connecting from China. </p>
<p>Please use a VPN and try again.</p>
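<p>If a VPN is not an option, recent minikube versions can also pull the control-plane images from a mirror, for example (the Aliyun mirror below is a commonly used one, but any reachable mirror of <code>k8s.gcr.io</code> should work):</p>
<pre><code>minikube start --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
</code></pre>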
|
Fei
|
<p>I am reading Core Kubernetes by Vyas and Love. This is the YAML file from page 141, section 7.3.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dynamic1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100k
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- image: busybox
name: busybox
volumeMounts:
- mountPath: /shared
name: shared
- image: nginx
imagePullPolicy: Always
name: nginx
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /var/www
name: dynamic1
- mountPath: /shared
name: shared
volumes:
- name: dynamic1
persistentVolumeClaim:
claimName: dynamic1
- name: shared
emptyDir: {}
</code></pre>
<p>I run <code>kubectl create</code> on this file, and then <code>kubectl get pods --all-namespaces</code>. It shows the <code>nginx</code> pod has status <code>CrashLoopBackOff</code>. Using <code>kubectl describe pods nginx</code> shows:</p>
<pre><code>Warning FailedScheduling 105s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 104s default-scheduler Successfully assigned default/nginx to minikube
Normal Pulling 101s kubelet Pulling image "nginx"
Normal Pulled 101s kubelet Successfully pulled image "busybox" in 2.289652482s
Normal Pulled 99s kubelet Successfully pulled image "nginx" in 2.219896558s
Normal Created 98s kubelet Created container nginx
Normal Started 98s kubelet Started container nginx
Normal Pulled 96s kubelet Successfully pulled image "busybox" in 2.23260066s
Normal Pulled 78s kubelet Successfully pulled image "busybox" in 2.245476487s
Normal Pulling 49s (x4 over 103s) kubelet Pulling image "busybox"
Normal Created 47s (x4 over 101s) kubelet Created container busybox
Normal Pulled 47s kubelet Successfully pulled image "busybox" in 2.287877562s
Warning BackOff 46s (x5 over 95s) kubelet Back-off restarting failed container
Normal Started 46s (x4 over 101s) kubelet Started container busybox
</code></pre>
<p>Running <code>kubectl logs nginx nginx</code> shows:</p>
<pre><code>/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/12/08 14:37:51 [notice] 1#1: using the "epoll" event method
2022/12/08 14:37:51 [notice] 1#1: nginx/1.23.2
2022/12/08 14:37:51 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/12/08 14:37:51 [notice] 1#1: OS: Linux 5.15.0-56-generic
2022/12/08 14:37:51 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/12/08 14:37:51 [notice] 1#1: start worker processes
2022/12/08 14:37:51 [notice] 1#1: start worker process 30
2022/12/08 14:37:51 [notice] 1#1: start worker process 31
2022/12/08 14:37:51 [notice] 1#1: start worker process 32
2022/12/08 14:37:51 [notice] 1#1: start worker process 33
2022/12/08 14:37:51 [notice] 1#1: start worker process 34
2022/12/08 14:37:51 [notice] 1#1: start worker process 35
2022/12/08 14:37:51 [notice] 1#1: start worker process 36
2022/12/08 14:37:51 [notice] 1#1: start worker process 37
</code></pre>
<p>Running <code>kubectl logs nginx busybox</code> shows nothing. When I comment out the <code>busybox</code> container inside the <code>nginx</code> pod, everything works fine. When I instead comment out the <code>nginx</code> container, the error comes back. I wonder why the <code>busybox</code> container is causing this problem? Any insight is appreciated.</p>
<p>Addendum:</p>
<p>Running <code>kubectl get sc</code> shows:</p>
<pre><code>NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) k8s.io/minikube-hostpath Delete Immediate false 35h
</code></pre>
<p>Running <code>kubectl get event</code> shows:</p>
<pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE
2m12s Warning BackOff pod/nginx Back-off restarting failed container
</code></pre>
|
CaTx
|
<p>Googling around, I find this <a href="https://github.com/josedom24/kubernetes/blob/master/ejemplos/busybox/busybox.yaml" rel="nofollow noreferrer">busybox.yaml</a> file which has the <code>sleep</code> command. I add that to the <code>busybox</code> container as follows:</p>
<pre><code> - image: busybox
name: busybox
command:
- sleep
- "3600"
volumeMounts:
- mountPath: /shared
name: shared
</code></pre>
<p>Now the yaml file works. I guess this is related to Sreepada Jayanthi's answer. This Reddit <a href="https://www.reddit.com/r/kubernetes/comments/jp2ia9/crashloopbackoff_status_using_busybox_images/" rel="nofollow noreferrer">post</a> also explains the details.</p>
|
CaTx
|
<p>I have a file containing many Kubernetes YAML objects.
I am seeking a way of removing all K8s Secret YAML objects from the text file, identified by the <code>"kind: Secret"</code> string contained within the YAML block. This should remove everything from the "apiVersion" through to just before the "---" signifying the start of the next object.</p>
<p>I've looked into Sed, Python and yq tools with no luck.
The YAML may contain any number of secrets in any order.</p>
<p>How can I automate stripping out of these "Secret" blocks?</p>
<pre><code>apiVersion: v1
data:
username: dGVzdAo=
password: dGVzdHBhc3N3b3JkCg==
kind: Secret
metadata:
name: my-secret-1
type: Opaque
---
apiVersion: v1
kind: Pod
metadata:
name: test-site
labels:
app: web
spec:
containers:
- name: front-end
image: nginx
ports:
- containerPort: 80
- name: rss-reader
image: nickchase/rss-php-nginx:v1
ports:
- containerPort: 88
---
apiVersion: v1
data:
username: dGVzdAo=
password: dGVzdHBhc3N3b3JkCg==
kind: Secret
metadata:
name: my-secret-2
type: Opaque
---
</code></pre>
|
James
|
<p><a href="https://github.com/kislyuk/yq" rel="nofollow noreferrer"><code>yq</code></a> can do this (and <a href="https://stedolan.github.io/jq/download/" rel="nofollow noreferrer"><code>jq</code></a> underneath)</p>
<pre><code>pip install yq
</code></pre>
<pre><code>yq --yaml-output 'select(.kind != "Secret")' input.yaml
</code></pre>
<p>You might need to remove the null document at the end of your example; it caused a little bit of weirdness in the output.</p>
<p><em><strong>Note</strong></em> that there is also a different <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer"><code>yq</code> utility</a> that doesn't seem to do what <code>jq</code> does so I'm not sure how to make that one work.</p>
|
Matt
|
<p>I need to deploy NGINX to a Kubernetes cluster, for which I can either use a Helm chart or a Docker image. But I am not clear on the benefits of using a Helm chart. I guess my question is not specific to NGINX but applies in general.</p>
|
user11081980
|
<p>A helm chart and a container image aren't equivalent things to compare in Kubernetes</p>
<p>A container image is the basic building block of what kubernetes runs. An image will always be required to run an application on kubernetes, no matter how it is deployed.</p>
<p>Helm is a packaging and deployment tool. It makes management of deployments to kubernetes easier. This deployment would normally include a container image. It is possible to write a helm chart that just manages other kubernetes resources, but that is fairly rare. </p>
<p>Other tools in the same arena as helm are <a href="https://kustomize.io/" rel="nofollow noreferrer">kustomize</a>, <a href="https://kompose.io/" rel="nofollow noreferrer">kompose</a>, or using <code>kubectl</code> to apply or create resources. These are all clients of the kubernetes API.</p>
|
Matt
|
<p>Using <code>nginx-ingress</code>, is it possible to enable basic authentication only for a given path (or enable it for all paths and exclude some paths from it)? </p>
<p><a href="https://kubernetes.github.io/ingress-nginx/examples/auth/basic/" rel="nofollow noreferrer">The documentation</a> only shows how to protect all paths.</p>
|
stefan.at.kotlin
|
<p>You could write a separate ingress rule for your given path such that only this path is protected by basic authentication.</p>
<p>See <a href="https://stackoverflow.com/questions/56444440/how-to-whitelist-only-one-path-in-kubernetes-nginx-ingress-controller/56453375#56453375">this answer</a> for examples.</p>
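<p>As a rough sketch (host, names and paths are placeholders), the protected path gets its own Ingress carrying the auth annotations, while the remaining paths live in a second Ingress without them:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-admin
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /admin
        backend:
          serviceName: my-app
          servicePort: 80
</code></pre>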
|
Fei
|
<p>I am trying to configure the <code>kube-apiserver</code> so that it uses encryption to configure secrets in my minikube cluster.</p>
<p>For that, I have followed the <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data" rel="nofollow noreferrer">documentation on kubernetes.io</a> but got stuck at step 3 that says</p>
<p><em>Set the <code>--encryption-provider-config</code> flag on the <code>kube-apiserver</code> to point to the location of the config file.</em></p>
<p>I have discovered the option <code>--extra-config</code> on <code>minikube start</code> and have tried starting my setup using </p>
<p><code>minikube start --extra-config=apiserver.encryption-provider-config=encryptionConf.yaml</code> </p>
<p>but naturally it doesn't work as <code>encryptionConf.yaml</code> is located in my local file system and not in the pod that's spun up by minikube. The error <code>minikube log</code> gives me is</p>
<p><code>error: error opening encryption provider configuration file "encryptionConf.yaml": open encryptionConf.yaml: no such file or directory</code></p>
<p>What is the best practice to get the encryption configuration file onto the <code>kube-apiserver</code>? Or is <code>minikube</code> perhaps the wrong tool to try out these kinds of things?</p>
|
Patrick
|
<p>I found the solution myself in <a href="https://github.com/kubernetes/minikube/issues/2741#issuecomment-398683171" rel="nofollow noreferrer">this GitHub issue</a> where they have a similar issue for passing a configuration file. The comment that helped me was the slightly hacky solution that made use of the fact that the directory <code>/var/lib/localkube/certs/</code> from the minikube VM is mounted into the apiserver. </p>
<p>So my final solution was to run</p>
<pre><code>minikube mount .:/var/lib/minikube/certs/hack
</code></pre>
<p>where in the current directory I had my <code>encryptionConf.yaml</code> and then start minikube like so</p>
<pre><code>minikube start --extra-config=apiserver.encryption-provider-config=/var/lib/minikube/certs/hack/encryptionConf.yaml
</code></pre>
|
Patrick
|
<p>My AKS cluster is accessible via an nginx-ingress. Everything works over http, but as soon as I use https nginx is not able to match any routes and falls back to the default backend.</p>
<p>I'm using Kubernetes Version 1.15. I changed my domain to example.com and the IP to 51.000.000.128.
The SSL certificate is signed by an external provider (digicert).</p>
<p><strong>ingress-controller</strong></p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
</code></pre>
<p><strong>ingress-service</strong></p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
namespace: ingress-nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - kp-user'
spec:
tls:
- hosts:
- example.com
secretName: ssl-secret
rules:
- host: example.com
- http:
paths:
- path: /app1(/|$)(.*)
backend:
serviceName: app1-service
servicePort: 80
- path: /app2(/|$)(.*)
backend:
serviceName: app2-service
servicePort: 80
</code></pre>
<p><strong>The Ingress is running:</strong></p>
<pre><code>$ kubectl -n ingress-nginx get ing
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress example.com 51.000.000.128 80, 443 43h
</code></pre>
<p><strong>And the description of the Ingress:</strong></p>
<pre><code>$ kubectl describe ingress nginx-ingress --namespace=ingress-nginx
Name: nginx-ingress
Namespace: ingress-nginx
Address: 51.000.000.128
Default backend: default-http-backend:80 (<none>)
TLS:
ssl-secret terminates example.com
Rules:
Host Path Backends
---- ---- --------
*
/app1(/|$)(.*) app1-service:80 (10.244.1.10:80,10.244.2.11:80)
/app2(/|$)(.*) app2-service:80 (10.244.1.12:80,10.244.2.13:80)
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-passthrough: true
nginx.ingress.kubernetes.io/ssl-redirect: false
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/auth-realm":"Authentication Required - kp-user","nginx.ingress.kubernetes.io/auth-secret":"basic-auth","nginx.ingress.kubernetes.io/auth-type":"basic","nginx.ingress.kubernetes.io/rewrite-target":"/$2","nginx.ingress.kubernetes.io/ssl-passthrough":"true","nginx.ingress.kubernetes.io/ssl-redirect":"false"},"name":"nginx-ingress","namespace":"ingress-nginx"},"spec":{"rules":[{"host":"example.com"},{"http":{"paths":[{"backend":{"serviceName":"app1-service","servicePort":80},"path":"/app1(/|$)(.*)"},{"backend":{"serviceName":"app2-service","servicePort":80},"path":"/app2(/|$)(.*)"}]}}],"tls":[{"hosts":["example.com"],"secretName":"ssl-secret"}]}}
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-realm: Authentication Required - kp-user
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
Events: <none>
</code></pre>
<p>As I wrote in the beginning, I unfortunately get the <em>404 not found</em> page from nginx every time I try to access one of the routes via https. The Secret is working because I'm able to see a valid certificate in my browser. The ingress itself is also working because with http I'm not facing any issues.</p>
<p><strong>Issue</strong></p>
<pre><code>http://51.000.000.128/app1 => working
https://51.000.000.128/app1 => working but unsecure (browser use http)
example.com => not working (404 Not Found by nginx | default backend)
</code></pre>
<p>When I access the page via domain, it will be recognized by ingress-controller:</p>
<pre><code>$ sudo kubectl logs nginx-ingress-controller-799dbf6fbd-bbxdp -n ingress-nginx
// https request
165.000.00.000 - - [05/Dec/2019:12:26:40 +0000] "GET /app1 HTTP/1.1" 308 177 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 500 0.000 [upstream-default-backend] [] - - - - 323deb61e1babdbca2006844d268b1ce
165.000.00.000 - - [05/Dec/2019:12:26:40 +0000] "GET /app1 HTTP/2.0" 404 179 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 306 0.001 [upstream-default-backend] [] 127.0.0.1:8181 190 0.000 404 d0cae28ba059531c78bffff38de2a84d
165.000.00.000 - - [05/Dec/2019:12:26:55 +0000] "GET /app1 HTTP/2.0" 404 179 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 44 0.000 [upstream-default-backend] [] 127.0.0.1:8181 190 0.000 404 db153c080e0116f8b730508b5ae0b0f3
// http request
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1 HTTP/1.1" 200 550 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 501 0.004 [ingress-nginx-app1-service-80] [] 10.244.1.10:80 1116 0.000 200 01beb82bb5173e7b0392660a9325c222
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/styles.66c87fc4c5e0902762b4.css HTTP/1.1" 200 10401 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 439 0.001 [ingress-nginx-app1-service-80] [] 10.244.2.11:80 70796 0.000 200 d367dfc0ae4db08c54dc6b0cb96e1f55
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/polyfills-es2015.80abe0a50bdacb904507.js HTTP/1.1" 200 12933 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 464 0.002 [ingress-nginx-app1-service-80] [] 10.244.1.10:80 37277 0.000 200 a2a4cd368a4badf1b6d2b202cf3958c5
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/runtime-es2015.cd056c32d7e60bda4f6b.js HTTP/1.1" 200 1499 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 462 0.000 [ingress-nginx-app1-service-80] [] 10.244.2.11:80 2728 0.000 200 f34c880d21f0172eeee3cc4f058c52a7
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/main-es2015.2bb12b52c456e81e18a1.js HTTP/1.1" 200 164595 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 459 0.029 [ingress-nginx-app1-service-80] [] 10.244.1.10:80 566666 0.028 200 7375f5092851e8407fe299c36c8a1b13
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/18-es2015.b5bfc8f7102d1318aebc.js HTTP/1.1" 200 554 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 426 0.002 [ingress-nginx-app1-service-80] [] 10.244.2.11:80 973 0.000 200 92e549e50e5ab6df5d456b31a8a34d8a
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/assets/logo.svg HTTP/1.1" 200 2370 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 443 0.003 [ingress-nginx-app1-service-80] [] 10.244.1.10:80 4717 0.000 200 c2503ed57519784af2988b70861302ec
</code></pre>
<p>From my understanding, the request via my domain reaches the cluster. For some reason, the ingress controller is not able to find/use the ingress rules via https.
What am I doing wrong?</p>
|
Nico Schuck
|
<p>Problem 1:</p>
<p>It should be related to your <code>nginx.ingress.kubernetes.io/ssl-passthrough: "true"</code> configuration. </p>
<p>If you enable ssl-passthrough, nginx-ingress will not try to decrypt the traffic for you. It passes the traffic straight through to the target service for decryption. In this way, path-based routing will not work because the path is also encrypted. In fact, none of the other nginx ingress annotations will work either, since nginx basically does not touch the request at all.</p>
<p>If that is not what you want, remove the ssl-passthrough configuration and let nginx-ingress terminate the HTTPS for you.</p>
<p>See following for more readings:</p>
<ol>
<li><a href="https://docs.giantswarm.io/guides/advanced-ingress-configuration/#ssl-passthrough" rel="noreferrer">https://docs.giantswarm.io/guides/advanced-ingress-configuration/#ssl-passthrough</a></li>
<li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough</a></li>
</ol>
<p>Problem 2:</p>
<p>In the ingress configuration, under spec => rules, there should be no <code>-</code> before the <code>http</code> tag. Adding <code>-</code> turns the paths into a separate, host-less rule that applies to all hosts instead of just <code>example.com</code>, which then conflicts with the <code>tls</code> config that only applies TLS to the <code>example.com</code> host.</p>
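<p>The rules block from your ingress, with the extra <code>-</code> removed so the paths belong to the <code>example.com</code> rule, would look like this:</p>
<pre><code>  rules:
  - host: example.com
    http:
      paths:
      - path: /app1(/|$)(.*)
        backend:
          serviceName: app1-service
          servicePort: 80
      - path: /app2(/|$)(.*)
        backend:
          serviceName: app2-service
          servicePort: 80
</code></pre>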
|
Fei
|
<p>I found something wrong at HPA for istio gateway.</p>
<p>Why did 10m equal 10%? Wasn't 10m 1%?</p>
<p>Kubernetes version is 1.18.5.</p>
<pre><code># kubectl get hpa --all-namespaces
NAMESPACE NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
istio-system istio-egressgateway Deployment/istio-egressgateway 7%/80% 2 10 2 13d
istio-system istio-ingressgateway Deployment/istio-ingressgateway 10%/80% 2 10 2 21d
istio-system istiod Deployment/istiod 0%/80% 1 5 1 21d
qa2 graph Deployment/graph 2%/50% 1 10 1 7h35m
qa2 member Deployment/member 0%/50% 1 10 1 7h38m
</code></pre>
<pre><code># kubectl describe hpa istio-ingressgateway -n istio-system | grep "resource cpu"
resource cpu on pods (as a percentage of request): 10% (10m) / 80%
# kubectl describe hpa istio-egressgateway -n istio-system | grep "resource cpu"
resource cpu on pods (as a percentage of request): 7% (7m) / 80%
# kubectl describe hpa istiod -n istio-system | grep "resource cpu"
resource cpu on pods (as a percentage of request): 0% (3m) / 80%
# kubectl describe hpa graph -n qa2 | grep "resource cpu"
resource cpu on pods (as a percentage of request): 2% (24m) / 50%
# kubectl describe hpa member -n qa2 | grep "resource cpu"
resource cpu on pods (as a percentage of request): 1% (12m) / 50%
</code></pre>
|
honillusion
|
<p>These values are not the same, and they are not directly calculated from each other.</p>
<p>The value in percents is the target average utilization (corresponding to the <code>targetAverageUtilization</code> parameter), which is relative to the requested value.</p>
<p>The value in the brackets is the target average value (<code>targetAverageValue</code>), which is not measured in percents - this is an absolute raw value for the metric.</p>
|
Forketyfork
|
<p>I have work defined in a file/config with the following format,</p>
<pre><code>config1,resource9
config3,resource21
config5,resource10
</code></pre>
<p>How can I spin up individual pods based on the configuration? If I add one more line to the configuration, Kubernetes needs to spin up one more pod and send that configuration line to the new pod. </p>
<p>How to store the configuration in Kubernetes and spin up pods based on the configuration?</p>
|
Kannaiyan
|
<p>Take a look at <a href="https://coreos.com/operators/" rel="nofollow noreferrer">Kubernetes Operators</a>. The pattern adds a Kubernetes management layer to an application. Basically you run a kubernetes native app (the operator) that connects to the kubernetes API and takes care of the deployment management for you.</p>
<p>If you are familiar with helm, then a quick way to get started is with <a href="https://github.com/operator-framework/operator-sdk/blob/master/doc/helm/user-guide.md" rel="nofollow noreferrer">the helm example</a>. This example will create a new Nginx deployment for each Custom Resource you create. The Custom Resource contains all the helm values nginx requires for a deployment. </p>
<p>As a first step you could customise the example so that all you need to do is manage the single Custom Resource to deploy or update the app. </p>
<p>If you want to take it further then you may run into some helm limitations pretty quickly; for advanced use cases you can use the Go <a href="https://github.com/operator-framework/operator-sdk" rel="nofollow noreferrer">operator-sdk</a> directly. </p>
<p>There are a number of projects operators to browse on <a href="https://operatorhub.io/" rel="nofollow noreferrer">https://operatorhub.io/</a> </p>
|
Matt
|
<p>I am using a Traefik ingress to forward requests to the kubernetes (v1.15.2) dashboard container, but it gives me a "page not found" error. Now I want to get a shell in the kubernetes dashboard container so I can try to fetch the home page html with this command:</p>
<pre><code>curl -L http://127.0.0.1:8443
</code></pre>
<p>Now I am stuck logging into the kubernetes dashboard container. I am using this command to open a shell in it:</p>
<pre><code>kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- /bin/bash
</code></pre>
<p>and throw this error:</p>
<pre><code>OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
</code></pre>
<p>and this is what I already tried:</p>
<pre><code>[root@ops001 conf.d]# docker exec -it kubernetes-dashboard-6466b68b-mrrs9 /bin/ash
Error: No such container: kubernetes-dashboard-6466b68b-mrrs9
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 /bin/bash
Error from server (NotFound): pods "kubernetes-dashboard-6466b68b-mrrs9" not found
[root@ops001 conf.d]# kubectl config set-context --current --namespace=kube-system
Context "kubernetes" modified.
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 /bin/ash
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/ash\": stat /bin/ash: no such file or directory": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 --
error: you must specify at least one command for the container
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- ls
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"ls\": executable file not found in $PATH": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- /bin/ls
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/ls\": stat /bin/ls: no such file or directory": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- /bin/
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/\": stat /bin/: no such file or directory": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- /
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/\": permission denied": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- /bin/ash
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/ash\": stat /bin/ash: no such file or directory": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
command terminated with exit code 126
[root@ops001 conf.d]# kubectl exec -it kubernetes-dashboard-6466b68b-mrrs9 -- env
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"env\": executable file not found in $PATH": unknown
command terminated with exit code 126
[root@ops001 conf.d]#
</code></pre>
<p>What should I do to get a shell inside the kubernetes dashboard container?</p>
|
Dolphin
|
<p>The recommended way is to use <code>kubectl proxy</code>, as it is independent of whether any ingresses or other network resources are available... it should work in any environment as long as kubectl has the correct context.</p>
<p>So try </p>
<pre class="lang-sh prettyprint-override"><code>kubectl proxy
</code></pre>
<p>Then use your browser to navigate to</p>
<p><a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</a></p>
<p>Be aware that this URL contains the namespace <strong>kube-system</strong> where the <strong>kubernetes-dashboard</strong> must be deployed in. Change the URL accordingly if it is deployed differently on your cluster.</p>
|
Jürgen Zornig
|
<p>I'm using <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">ingress-nginx</a> as an Ingress controller for one of my services running over K8S (I'm using the nginx-0.20.0 release image with no specific metrics configurations in the K8S configmap the ingress-controller is using). </p>
<p>The nginx-ingress-controller pods are successfully scraped into my Prometheus server but all ingress metrics (e.g. <code>nginx_ingress_controller_request_duration_seconds_bucket</code>) show up with <code>path="/"</code> regardless of the real path of the handled request.</p>
<p>Worth noting that when I look at the ingress logs - the path is logged correctly.</p>
<p>How can I get the real path noted in the exported metrics?</p>
<p>Thanks!</p>
|
wilfo
|
<p>The <code>Path</code> attribute in the NGINX metrics collected by prometheus derives from the Ingress definition yaml.</p>
<p>For example, if your ingress is:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: <some-k8s-ingress-name>
namespace: <some-k8s-namespace-name>
spec:
rules:
- host: <hostname>
http:
paths:
- backend:
serviceName: <some-k8s-service-name>
servicePort: <some-port>
path: /
</code></pre>
<p>Then although NGINX will match any URL to your service, it'll all be logged under the path "<code>/</code>" (as seen <a href="https://github.com/kubernetes/ingress-nginx/blob/d74dea7585b7b26cf5a16ca9d7ac402b1e0cf8df/rootfs/etc/nginx/template/nginx.tmpl#L1010" rel="nofollow noreferrer">here</a>).</p>
<p>If you want metrics for a specific URL, you'll need to explicitly specify it like this (notice the ordering of rules):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: <some-k8s-ingress-name>
namespace: <some-k8s-namespace-name>
spec:
rules:
- host: <hostname>
http:
paths:
- backend:
serviceName: <some-k8s-service-name>
servicePort: <some-port>
path: /more/specific/path
- backend:
serviceName: <some-k8s-service-name>
servicePort: <some-port>
path: /
</code></pre>
|
wilfo
|
<p>I want to reformat the default logging output to json format without code changes</p>
<p>Docker image: jboss/keycloak:16.1.1</p>
<p>The current log structure is the default</p>
<pre><code>15:04:16,056 INFO [org.infinispan.CLUSTER] (thread-5,null) [Context=authenticationSessions] ISPN100002: Starting rebalance with members [], phase READ_OLD_WRITE_ALL, topology id 2
15:04:16,099 INFO [org.infinispan.CLUSTER] (thread-20,) [Context=offlineClientSessions] ISPN100002: Starting rebalance with members [], phase READ_OLD_WRITE_ALL, topology id 2
</code></pre>
<p>I tried to use <code>LOG_CONSOLE_OUTPUT</code> as described here <a href="https://www.keycloak.org/server/logging" rel="nofollow noreferrer">https://www.keycloak.org/server/logging</a> but it's not working.</p>
<p>Any ideas please?</p>
|
Areej Mohey
|
<p>Assuming you want to use the quarkus-based Keycloak: JSON logging for the quarkus-based Keycloak is only available since v18, see the <a href="https://www.keycloak.org/docs/latest/release_notes/index.html#quarkus-distribution" rel="nofollow noreferrer">release notes</a> - the guides currently reference only the latest version.</p>
<p>I strongly recommend updating to this version; then you can use the log console output variable.</p>
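<p>On the quarkus-based images the logging option is exposed as an environment variable, so in a Kubernetes container spec it would look roughly like this (the image tag is just an example):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: keycloak
  image: quay.io/keycloak/keycloak:18.0.0
  env:
  - name: KC_LOG_CONSOLE_OUTPUT
    value: "json"
</code></pre>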
<p>Since v17 you'll find the newer container images at <a href="https://quay.io/keycloak/keycloak" rel="nofollow noreferrer">quay</a> instead of Docker Hub.</p>
|
Dominik
|
<p>I have a Flask-based app that is used as a front (HTTP API only) for an image processing task (face detection along with some clustering). The app is deployed to Kubernetes clusters and, unfortunately during load testing it dies.</p>
<p>The problem is that all Flask threads are reserved for request processing and the application can't reply to Kubernetes liveness probe (<code>/health</code> endpoint) via HTTP - so the whole pod gets restarted.</p>
<p>How can I resolve it? I thought about a grep-based liveness probe, however it doesn't solve the problem. Another idea is to use celery; however, if Flask doesn't support async processing I'll need to call <code>wait()</code> on a celery task, which puts me in exactly the same place.</p>
<p>For now I don't consider returning <code>202</code> response along with URL for process monitoring.</p>
<p>Any other ideas?</p>
|
Opal
|
<p>How did you deploy Gunicorn etc?</p>
<p>FastAPI might be better suited for your use case, but migration might be prohibitive. It has built-in async support which should help you scale better. I like tiangolo's docker containers for this.</p>
<p>How long does your image recognition take (seconds, milliseconds)?</p>
<p>If you must stick to your current design:</p>
<ol>
<li>increase timeout, but be aware that your customers have the same problem - they might time out.</li>
<li>Increase resources: run more pods so that no single pod runs out of capacity.</li>
</ol>
<p>If you're using Flask, be aware that the dev server is not meant for production deployment. Although it is multithreaded, it's not particularly performant, stable, or secure. Use a production WSGI server to serve the Flask application, such as Gunicorn, mod_wsgi, or something else.</p>
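<p>A minimal sketch of such a setup (module and app names are assumptions, tune the worker/thread counts to your workload) so that a slow image-processing request does not starve the <code>/health</code> endpoint:</p>
<pre><code>gunicorn --workers 4 --threads 8 --timeout 120 --bind 0.0.0.0:5000 myapp:app
</code></pre>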
|
Christian Sauer
|
<p>I found that my kubernetes cluster was sending reports to usage.projectcalico.org. How can this be disabled, and how exactly is usage.projectcalico.org being used?</p>
|
Alex Cohen
|
<p>Felix is the Calico component that sends usage information.</p>
<p>Felix can be <a href="https://docs.projectcalico.org/v3.7/reference/felix/configuration" rel="nofollow noreferrer">configured</a> to disable the usage ping.</p>
<p>Set the <code>FELIX_USAGEREPORTINGENABLED</code> environment variable to <code>"false"</code> (it needs to be a string in yaml land!) in the <code>calico-node</code> DaemonSet.</p>
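<p>The relevant container env entry in the DaemonSet would look something like this:</p>
<pre><code>env:
- name: FELIX_USAGEREPORTINGENABLED
  value: "false"
</code></pre>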
<p>Set the <code>UsageReportingEnabled</code> field in the <a href="https://docs.projectcalico.org/v3.7/reference/calicoctl/resources/felixconfig" rel="nofollow noreferrer">FelixConfiguration</a> resource to <code>false</code>. This could be in etcd or in the Kubernetes API depending on what store you use. Both modifiable with <code>calicoctl</code>.</p>
<pre><code>calicoctl patch felixConfiguration default \
--patch='{"spec": {"UsageReportingEnabled": false}}'
</code></pre>
<p>If you happen to be using kubespray, modifying this setting is a little harder as these variables are not exposed to Ansible, other than by manually modifying <a href="https://github.com/kubernetes-sigs/kubespray/blob/a2cf6816ce328032d9ad457b7c2cdb23d9519b1b/roles/network_plugin/calico/templates/calico-node.yml.j2" rel="nofollow noreferrer">templates</a> or <a href="https://github.com/kubernetes-sigs/kubespray/blob/a2cf6816ce328032d9ad457b7c2cdb23d9519b1b/roles/network_plugin/calico/tasks/install.yml#L144-L162" rel="nofollow noreferrer">yaml</a>.</p>
|
Matt
|
<p>I need to use UDP broadcast for peer discovery. </p>
<p>Environment:</p>
<ul>
<li><code>docker-desktop</code> with a single node Kubernetes cluster</li>
</ul>
<p>My code looks as follows:</p>
<pre class="lang-java prettyprint-override"><code>import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class MainApp {
public static void main(String[] args) throws ExecutionException, InterruptedException {
int inPort = Integer.parseInt(System.getenv("IN_PORT"));
int outPort = Integer.parseInt(System.getenv("OUT_PORT"));
String name = System.getenv("NAME");
Client client = new Client(name, outPort);
Server server = new Server(name, inPort);
ExecutorService service = Executors.newFixedThreadPool(2);
service.submit(client);
service.submit(server).get();
}
static class Client implements Runnable {
final String name;
final int port;
Client(String name, int port) {
this.name = name;
this.port = port;
}
@Override
public void run() {
System.out.println(name + " client started, port = " + port);
try (DatagramSocket socket = new DatagramSocket()) {
socket.setBroadcast(true);
while (!Thread.currentThread().isInterrupted()) {
byte[] buffer = (name + ": hi").getBytes();
DatagramPacket packet = new DatagramPacket(buffer, buffer.length,
InetAddress.getByName("255.255.255.255"), port);
socket.send(packet);
Thread.sleep(1000);
System.out.println("packet sent");
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
static class Server implements Runnable {
final String name;
final int port;
Server(String name, int port) {
this.name = name;
this.port = port;
}
@Override
public void run() {
System.out.println(name + " server started, port = " + port);
try (DatagramSocket socket = new DatagramSocket(port)) {
byte[] buf = new byte[256];
while (!Thread.currentThread().isInterrupted()) {
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);
String received = new String(packet.getData(), 0, packet.getLength());
System.out.println(String.format(name + " received '%s' from %s:%d", received,
packet.getAddress().toString(),
packet.getPort()));
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
}
</code></pre>
<p>Kubernetes pod settings:</p>
<p>For <code>peer-1</code>:</p>
<pre><code> spec:
containers:
- name: p2p
image: p2p:1.0-SNAPSHOT
env:
- name: NAME
value: "peer-1"
- name: IN_PORT
value: "9996"
- name: OUT_PORT
value: "9997"
</code></pre>
<p>For <code>peer-2</code> :</p>
<pre><code> spec:
containers:
- name: p2p-2
image: p2p:1.0-SNAPSHOT
env:
- name: NAME
value: "peer-2"
- name: IN_PORT
value: "9997"
- name: OUT_PORT
value: "9996"
</code></pre>
<p>I used different in/out ports for simplicity's sake. In reality, it should be the same port, e.g. 9999.</p>
<p>I see that each pod has a unique IP address</p>
<pre><code>kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
p2p-deployment-2-59bb89f9d6-ghclv 1/1 Running 0 2m26s 10.1.0.38 docker-desktop <none> <none>
p2p-deployment-567bb5bd77-5cnsl 1/1 Running 0 2m29s 10.1.0.37 docker-desktop <none> <none>
</code></pre>
<p>Logs from <code>peer-1</code>:</p>
<pre><code>peer-1 received 'peer-2: hi' from /10.1.0.1:57565
</code></pre>
<p>Logs from <code>peer-2</code>:</p>
<pre><code>peer-2 received 'peer-1: hi' from /10.1.0.1:44777
</code></pre>
<p>Question: why <code>peer-1</code> receives UDP packets from <code>10.1.0.1</code> instead of <code>10.1.0.37</code> ?</p>
<p>If I log into <code>peer-2</code> container: <code>kubectl exec -it p2p-deployment-2-59bb89f9d6-ghclv -- /bin/bash</code></p>
<p>Then </p>
<pre><code>socat - UDP-DATAGRAM:255.255.255.255:9996,broadcast
test
test
...
</code></pre>
<p>in <code>peer-1</code> logs I see <code>peer-1 received 'test' from /10.1.0.1:43144</code>.
Again why network address is <code>10.1.0.1</code> instead of <code>10.1.0.37</code>.</p>
<p>Could you please tell me what I'm doing wrong?</p>
<p>Note: when using the same port to send/receive UDP packets, some peer can receive a packet from its own IP address. In other words, a peer can only discover its own IP address but always gets <code>10.1.0.1</code> for packets received from other peers/pods</p>
|
dmgcodevil
|
<p>For some reason, UDP <strong>broadcast</strong> doesn't work as expected in Kubernetes infrastructure, however <strong>multicast</strong> works fine.</p>
<p>Thanks <a href="https://stackoverflow.com/users/3745413/ron-maupin">Ron Maupin</a> for suggesting multicast.</p>
<p><a href="https://gist.github.com/dmgcodevil/22e50eb8f415602bb8d128fe28cdb209" rel="nofollow noreferrer">Here</a> you can find java code + kube config</p>
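<p>For reference, a minimal multicast sketch (the group address and port below are arbitrary placeholders, not the values from the gist):</p>
<pre class="lang-java prettyprint-override"><code>// requires java.net.MulticastSocket in addition to the imports above
InetAddress group = InetAddress.getByName("230.0.0.1"); // assumed multicast group
MulticastSocket socket = new MulticastSocket(9996);
socket.joinGroup(group);

// send to the group
byte[] out = "hi".getBytes();
socket.send(new DatagramPacket(out, out.length, group, 9996));

// receive from any peer that joined the group
byte[] buf = new byte[256];
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);
</code></pre>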
|
dmgcodevil
|
<p>I am new to the world of Spark and Kubernetes. I built a Spark docker image using the official Spark 3.0.1 bundled with Hadoop 3.2 using the docker-image-tool.sh utility.</p>
<p>I have also created another docker image for Jupyter notebook and am trying to run spark on Kubernetes in client mode. I first run my Jupyter notebook as a pod, do a port forward using kubectl and access the notebook UI from my system at localhost:8888 . All seems to be working fine. I am able to run commands successfully from the notebook.</p>
<p>Now I am trying to access Azure Data Lake Gen2 from my notebook using <a href="https://hadoop.apache.org/docs/current/hadoop-azure/abfs.html#Default:_Shared_Key" rel="noreferrer">Hadoop ABFS connector</a>.
I am setting the Spark context as below.</p>
<pre><code>from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
# Create Spark config for our Kubernetes based cluster manager
sparkConf = SparkConf()
sparkConf.setMaster("k8s://https://kubernetes.default.svc.cluster.local:443")
sparkConf.setAppName("spark")
sparkConf.set("spark.kubernetes.container.image", "<<my_repo>>/spark-py:latest")
sparkConf.set("spark.kubernetes.namespace", "spark")
sparkConf.set("spark.executor.instances", "3")
sparkConf.set("spark.executor.cores", "2")
sparkConf.set("spark.driver.memory", "512m")
sparkConf.set("spark.executor.memory", "512m")
sparkConf.set("spark.kubernetes.pyspark.pythonVersion", "3")
sparkConf.set("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
sparkConf.set("spark.kubernetes.authenticate.serviceAccountName", "spark")
sparkConf.set("spark.driver.port", "29413")
sparkConf.set("spark.driver.host", "my-notebook-deployment.spark.svc.cluster.local")
sparkConf.set("fs.azure.account.auth.type", "SharedKey")
sparkConf.set("fs.azure.account.key.<<storage_account_name>>.dfs.core.windows.net","<<account_key>>")
spark = SparkSession.builder.config(conf=sparkConf).getOrCreate()
</code></pre>
<p>And then I am running the below command to read a csv file present in the ADLS location</p>
<pre><code>df = spark.read.csv("abfss://<<container>>@<<storage_account>>.dfs.core.windows.net/")
</code></pre>
<p>On running it I am getting the error
<em><strong>Py4JJavaError: An error occurred while calling o443.csv.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem not found</strong></em></p>
<p>After some research, I found that I would have to explicitly include the hadoop-azure jar for the appropriate classes to be available. I downloaded the jar from <a href="https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-azure/3.2.2/hadoop-azure-3.2.2.jar" rel="noreferrer">here</a> , put it in the /spark-3.0.1-bin-hadoop3.2/jars folder and built the image again.</p>
<p><strong>Unfortunately I am still getting this error</strong>. I manually verified that the jar file is indeed present in the docker image and contains the class <code>org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem</code></p>
<p>I looked at the entrypoint.sh present in the <code>spark-3.0.1-bin-hadoop3.2\kubernetes\dockerfiles\spark</code> folder, which is the entry point of our spark docker image. It adds all the jars present in the <code>spark-3.0.1-bin-hadoop3.2\jars\</code> folder to the class path.</p>
<pre><code># If HADOOP_HOME is set and SPARK_DIST_CLASSPATH is not set, set it here so Hadoop jars are available to the executor.
# It does not set SPARK_DIST_CLASSPATH if already set, to avoid overriding customizations of this value from elsewhere e.g. Docker/K8s.
if [ -n "${HADOOP_HOME}" ] && [ -z "${SPARK_DIST_CLASSPATH}" ]; then
export SPARK_DIST_CLASSPATH="$($HADOOP_HOME/bin/hadoop classpath)"
fi
if ! [ -z ${HADOOP_CONF_DIR+x} ]; then
SPARK_CLASSPATH="$HADOOP_CONF_DIR:$SPARK_CLASSPATH";
fi
</code></pre>
<p>According to my understanding, spark should be able to find the class in its classpath without any additional setJars configuration.</p>
<p>Can someone please guide me how to resolve this? I might be missing something quite basic here.</p>
|
Ali Abbas
|
<p>Applying the solution provided here...</p>
<p><a href="https://stackoverflow.com/questions/42978976/how-do-we-specify-maven-dependencies-in-pyspark">How do we specify maven dependencies in pyspark</a></p>
<p>We can start a Spark session and include the required Jar from Maven.</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]")\
.config('spark.jars.packages', 'org.apache.hadoop:hadoop-azure:3.3.1')\
.getOrCreate()
</code></pre>
|
Dan Ciborowski - MSFT
|
<p>We have a J2EE application (Wildfly) running behind an Apache web server currently running on Amazon EC2 instances. We are planning on migrating this to a Kubernetes (EKS) platform using Docker images. However, we were curious about best practices. Should we create a Docker container with both the web and app server running within it, as seems common, or are there advantages of creating separate images to house both the web server and another housing the app server?</p>
|
Dave
|
<p>Create a container per main process. One for the web server, one for the app server. </p>
<p>If the containers will always have a one to one relationship and need to run together, the containers can be scheduled in the same <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">Pod</a> (or as @sfgroups mentioned, an ingress controller may take care of your web server needs).</p>
<p>Kubernetes is a workload scheduler and benefits from knowing the state of the processes it runs. To run multiple processes in a container you need to add a layer of process management, usually starting with backgrounding a process in a script using <code>./app &</code>, running into issues, and then moving to some type of <code>init</code> system like <a href="https://github.com/just-containers/s6-overlay" rel="nofollow noreferrer">s6</a>.</p>
<pre><code>container-runtime c-r c-r
| | |
init VS web app
/ \
web app
</code></pre>
<p>If you start adding layers of process management in between Kubernetes and the processes being managed, the state Kubernetes can detect starts to become fuzzy. </p>
<p>What happens when the web is down but the app is up? How do you manage logs from two processes? How do you debug a failing process when the init stays up and Kubernetes thinks all is good? There are a number of things that start becoming custom solutions rather than making use of the functionality Kubernetes already supplies.</p>
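<p>If the two processes really do belong together one-to-one, a minimal sketch of a two-container Pod looks like the following (the names, images and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: web-and-app            # hypothetical name
spec:
  containers:
  - name: web                  # e.g. the Apache reverse proxy
    image: my-web:latest       # placeholder image
    ports:
    - containerPort: 80
  - name: app                  # e.g. the Wildfly application server
    image: my-app:latest       # placeholder image
    ports:
    - containerPort: 8080
</code></pre>
<p>Both containers share the Pod's network namespace, so the web server can reach the app server on <code>localhost:8080</code>, and Kubernetes still tracks and restarts each process separately.</p>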
|
Matt
|
<p>I am trying to set up a new kubernetes cluster on one machine with kubespray (commit 7e84de2ae116f624b570eadc28022e924bd273bc).</p>
<p>After running the playbook (on a fresh ubuntu 16.04), I open the dashboard and see those warning popups:</p>
<pre><code>- configmaps is forbidden: User "system:serviceaccount:default:default" cannot list configmaps in the namespace "default"
- persistentvolumeclaims is forbidden: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims in the namespace "default"
- secrets is forbidden: User "system:serviceaccount:default:default" cannot list secrets in the namespace "default"
- services is forbidden: User "system:serviceaccount:default:default" cannot list services in the namespace "default"
- ingresses.extensions is forbidden: User "system:serviceaccount:default:default" cannot list ingresses.extensions in the namespace "default"
- daemonsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list daemonsets.apps in the namespace "default"
- pods is forbidden: User "system:serviceaccount:default:default" cannot list pods in the namespace "default"
- events is forbidden: User "system:serviceaccount:default:default" cannot list events in the namespace "default"
- deployments.apps is forbidden: User "system:serviceaccount:default:default" cannot list deployments.apps in the namespace "default"
- replicasets.apps is forbidden: User "system:serviceaccount:default:default" cannot list replicasets.apps in the namespace "default"
- jobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list jobs.batch in the namespace "default"
- cronjobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list cronjobs.batch in the namespace "default"
- replicationcontrollers is forbidden: User "system:serviceaccount:default:default" cannot list replicationcontrollers in the namespace "default"
- statefulsets.apps is forbidden: User "system:serviceaccount:default:default" cannot list statefulsets.apps in the namespace "default"
</code></pre>
<p>The kubectl commands seem fine (proxy works, listing pods etc. return no error, <code>/api</code> is reachable), however, the dashboard seem unable to fetch any useful information. How should I go about debugging that?</p>
|
nha
|
<pre><code>kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default
</code></pre>
<p>seems to do the trick - I'd welcome an explanation though.
(Is it an oversight in kubespray? Do I need to set up a variable there? Is it related to RBAC?)</p>
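<p>The errors themselves show this is RBAC related: the dashboard is authenticating as the <code>default</code> service account in the <code>default</code> namespace, which has no roles bound to it, so every list request is forbidden. The command above fixes that by granting the service account <code>cluster-admin</code>, which is very broad. A more restrictive sketch (assuming read-only access is enough for what you want to see) would be to bind the built-in <code>view</code> ClusterRole instead - note that <code>view</code> does not cover secrets, so that panel would still be forbidden:</p>
<pre><code>kubectl create clusterrolebinding default-view --clusterrole view --serviceaccount=default:default
</code></pre>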
|
nha
|
<p>Is it possible to get K8s resource limits in <em>absolute units</em>? By default we get a human-readable units (like Mi, Gi, for memory) and m for CPU:</p>
<pre><code>$ kubectl get pod ndbmysqld-1 -o yaml | grep -A 6 " resources:" | grep "limits:" -A 2
limits:
cpu: 500m
memory: 512Mi
--
limits:
cpu: 100m
memory: 256Mi
</code></pre>
<p>But my Pods have all different units, see this one:</p>
<pre><code> limits:
cpu: "1"
memory: 214748364800m
</code></pre>
<p>NB: 214748364800m is apparently 200Mi...</p>
<p>Any way to get all units aligned?</p>
<p>Thanks!</p>
|
Jarek
|
<p>There are 2 ways to do this. The first is to use numfmt: you can use it to reformat numeric columns, which is really convenient if you pull out tsv data from kubectl. This is what I'd recommend.</p>
<pre><code>kubectl get pods -o json |jq -r '.items[]|.metadata as $m|.spec.containers[].resources[]|[$m.name, .memory]|@tsv' \
|numfmt --field=2 --from=auto
</code></pre>
<p>alternatively you can approximate it in plain jq (if you don't have numfmt or you need it in jq for differently-formatted output):</p>
<pre><code>kubectl get pods -o json \
|jq -r 'def units: .|capture("(?<n>[0-9]+)(?<u>[A-Za-z]?)")|. as $v|($v.n|tonumber)*(if $v.u=="" then 1 else pow(10;3*("m_KMGTPE"|index($v.u)-1)) end);
.items[].spec.containers[].resources[].memory|units'
</code></pre>
<p>Highlighting just the function I defined in there:</p>
<pre><code>def units:
.|capture("(?<n>[0-9]+)(?<u>[A-Za-z]?)")
|. as $v
|($v.n|tonumber)*(
if $v.u=="" then 1
else pow(10;3*("m_KMGTPE"|index($v.u)-1))
end)
</code></pre>
<p>what this does is capture the number part and the first character of the suffix as 'n' and 'u', then returns 'n' times 10^(3*(index of the suffix in "m_KMGTPE" minus one)). eg for 'm' the index is 0 so it multiplies by 10^-3, for 'K' the index is 2 so it multiplies by 10^3, and so on.</p>
|
bazzargh
|
<p>New to spring boot.</p>
<p>While exploring spring boot env variables, came to know that,
env variables can be accessed by <strong><em><code>${KeyName}</code></em></strong> from code.</p>
<p>Got a question like, </p>
<p><strong>Case 1:</strong>
In @Configuration files, we are accessing keys in application.properties using <strong><em><code>@Value(value = "${KeyName}")</code></em></strong>.
So, we are using almost same syntax for accessing env variables and accessing keys in application.properties.</p>
<p><strong>Case 2:</strong>
When trying to access the keys in application.properties using system.getEnv("keyname"), I got only null.</p>
<p><strong>Case 3:</strong>
Recently worked on <strong><em>configmap in kubernetes with spring boot</em></strong>.</p>
<p>Config file looks like,</p>
<pre><code>spec:
containers:
- name: demo-configconsumercontainer
image: springbootappimage:latest
ports:
- containerPort: 8080
envFrom:
- configMapRef:
name: example-configmap
</code></pre>
<p>All the values from configMap is exported as environmental variables and
I am accessing those values by <code>@Value(value = "${KeyName}")</code> and by <code>system.getEnv(KeyName)</code>.</p>
<ol>
<li>My question is, how case 3 is working when case 2 is not.</li>
<li>Is Spring boot made such a way that, it is allowing to access by <code>${KeyName}</code> and not by <code>system.getEnv(KeyName)</code>? (ie. Case 2)</li>
</ol>
<p>Could someone clarify my questions here?</p>
|
NANDAKUMAR THANGAVELU
|
<p>Using @Value annotation, <strong>you can access a property from many property sources</strong> such as in application.properties or an environment variable and few more property sources.</p>
<p>The important point here is <strong>ordering of these property sources</strong>.</p>
<p>Below is the order of looking up the property in various sources.</p>
<ol>
<li>Devtools global settings properties on your home directory (~/.spring-boot-devtools.properties when devtools is active).</li>
<li>@TestPropertySource annotations on your tests.</li>
<li>@SpringBootTest#properties annotation attribute on your tests.</li>
<li>Command line arguments.</li>
<li>Properties from SPRING_APPLICATION_JSON (inline JSON embedded in an environment variable or system property)</li>
<li>ServletConfig init parameters.</li>
<li>ServletContext init parameters.</li>
<li>JNDI attributes from java:comp/env.</li>
<li>Java System properties (System.getProperties()).</li>
<li><strong>OS environment variables.</strong></li>
<li>A RandomValuePropertySource that only has properties in random.*.</li>
<li>Profile-specific application properties outside of your packaged jar (application-{profile}.properties and YAML variants)</li>
<li>Profile-specific application properties packaged inside your jar (application-{profile}.properties and YAML variants)</li>
<li>Application properties outside of your packaged jar (application.properties and YAML variants).</li>
<li><strong>Application properties packaged inside your jar</strong> (application.properties and YAML variants).</li>
<li>@PropertySource annotations on your @Configuration classes.</li>
<li>Default properties (specified using SpringApplication.setDefaultProperties).</li>
</ol>
<p>In your case, the property is either declared in environment variable or in application.yaml and hence accessible using @Value annotation.</p>
|
Shailesh Pratapwar
|
<p>I'm using kubernetes <a href="https://github.com/kubernetes/ingress-nginx/" rel="noreferrer">ingress-nginx</a> and this is my Ingress spec. <a href="http://example.com" rel="noreferrer">http://example.com</a> works fine as expected. But when I go to <a href="https://example.com" rel="noreferrer">https://example.com</a> it still works, but pointing to default-backend with Fake Ingress Controller certificate. How can I disable this behaviour? I want to disable listening on https at all on this particular ingress, since there is no TLS configured.</p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: http-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: my-deployment
servicePort: 80
</code></pre>
<p>I've tried this <code>nginx.ingress.kubernetes.io/ssl-redirect: "false"</code> annotation. However this has no effect.</p>
|
Shinebayar G
|
<p>I'm not aware of an ingress-nginx <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="noreferrer">configmap</a> value or ingress <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="noreferrer">annotation</a> to easily disable TLS. </p>
<p>You could remove port 443 from your ingress controllers service definition.</p>
<p>Remove the <code>https</code> entry from the <code>spec.ports</code> array </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mingress-nginx-ingress-controller
spec:
ports:
- name: https
nodePort: NNNNN
port: 443
protocol: TCP
targetPort: https
</code></pre>
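<p>As a hedged example, one way to drop that entry without hand-editing the manifest is a JSON patch (this assumes the <code>https</code> port is the second item in the <code>spec.ports</code> array and that the service lives in the <code>ingress-nginx</code> namespace - adjust both to your install):</p>
<pre><code>kubectl -n ingress-nginx patch svc mingress-nginx-ingress-controller \
  --type=json -p='[{"op":"remove","path":"/spec/ports/1"}]'
</code></pre>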
<p>nginx will still be listening on a TLS port, but no clients outside the cluster will be able to connect to it.</p>
|
Matt
|
<p>I have currently Trino deployed in my Kubernetes cluster using the official <a href="https://github.com/trinodb/charts" rel="nofollow noreferrer">Trino(trinodb) Helm Chart</a>. In the same way I deployed <a href="https://github.com/apache/superset/blob/master/helm/superset/Chart.yaml" rel="nofollow noreferrer">Apache superset</a>.</p>
<ul>
<li><p>Using port forwarding of trino to 8080 and superset to 8088, I am able to access the UI for both from localhost but also I am able to use the trino command line API to query trino using:</p>
<p>./trino --server http://localhost:8080</p>
</li>
<li><p>I don't have any authentication set</p>
</li>
<li><p>mysql is setup correctly as Trino catalog</p>
</li>
</ul>
<p>when I try to add Trino as dataset for Superset using either of the following sqlalchemy URLs:</p>
<pre><code>trino://trino@localhost:8080/mysql
trino://localhost:8080/mysql
</code></pre>
<p>When I test the connection from Superset UI, I get the following error:</p>
<blockquote>
<p>ERROR: Could not load database driver: TrinoEngineSpec</p>
</blockquote>
<p>Please advise how I could solve this issue.</p>
|
AR1
|
<p>You should install <code>sqlalchemy-trino</code> to make the trino driver available.</p>
<p>Add these lines to your <code>values.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>additionalRequirements:
- sqlalchemy-trino
bootstrapScript: |
#!/bin/bash
pip install sqlalchemy-trino &&\
if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
</code></pre>
<p>If you want more details about the problem, see this <a href="https://github.com/apache/superset/issues/13640" rel="nofollow noreferrer">Github issue</a>.</p>
<p>I added two options that do the same thing because in some versions the <code>additionalRequirements</code> doesn't work and you may need the <code>bootstrapScript</code> option to install the driver.</p>
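<p>After updating <code>values.yaml</code>, re-deploy the chart so the extra requirement gets installed (the release name, chart reference and namespace below are assumptions - use whatever you originally installed with):</p>
<pre><code>helm upgrade --install superset superset/superset -n superset -f values.yaml
</code></pre>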
|
ruhanbidart
|
<p>I am new to elastic search. I have a 3 data node and 3 master node elastic cluster deployed in Kubernetes. It was working well until recently, when there was a large intake of data. Now, I am at a stage where I need to apply an index refresh interval of 120s to allow optimal usage of the cluster. I am able to do it at the individual index level <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html#reset-index-setting" rel="nofollow noreferrer">using this method</a>, but I am not able to do it at the cluster level. I have a process that creates a new index every day, and there are never more than 15 indexes in total in the cluster at any point in time. So, currently, I am doing it manually using the method mentioned before/using the Kibana UI. I tried to do this a couple of ways; both failed.</p>
<ol>
<li>Used the settings PUT method to force the index settings at the global level and it gives an error of no requests in the range</li>
</ol>
<p><code>PUT /_cluster/settings -d { "index" : { "refresh_interval" : "120s" } }</code></p>
<ol start="2">
<li>I used the elasticsearch yaml to set this value for the data node and the data node fails to come up.</li>
</ol>
<p>elasticsearch yaml</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elastic-cluster
namespace: elastic-system
spec:
version: 7.6.2
nodeSets:
- name: master
count: 3
config:
node.master: true
node.data: false
node.ingest: false
node.ml: false
node.store.allow_mmap: false
podTemplate:
...
- name: data1
count: 3
config:
node.master: false
node.data: true
node.ingest: true
node.ml: false
node.store.allow_mmap: false
index.refresh_interval: 120s # I added it here
podTemplate:
...
</code></pre>
<p>There is a third way, through kibana UI -> settings-> Elastic Search -> Index management -> index template. But there is no index template for me to start with. Nevertheless, the elastic search creates an index daily with the date. So, I do not want to mess with the existing template.</p>
<p>Can anyone suggest me a better way to do this</p>
|
baatasaari
|
<p>The recommended way is to create an index template that applies default settings when an index is created.
<a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html</a></p>
<p>Old indexes are not affected by this template - only new ones.</p>
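<p>As a rough sketch, a legacy index template (which works on 7.6.x) that applies the refresh interval to every new index matching a pattern could look like this - the template name and index pattern are assumptions, adjust them to your daily index naming scheme:</p>
<pre><code>PUT _template/daily-indices
{
  "index_patterns": ["my-index-*"],
  "settings": {
    "index": {
      "refresh_interval": "120s"
    }
  }
}
</code></pre>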
<p>If you don't want to create a template, you have to set the refresh interval manually.</p>
<p>You can try setting index settings with a wildcard like this:
<code>PUT /my-index-2022-*/_settings</code></p>
<p><code>/_cluster/settings</code> has no relation to index settings - it's configuration settings for the cluster, so don't try to do index operations with that URL.</p>
<p>The same applies to the YAML file - the configuration there has no relationship with the index settings.</p>
|
Milen Georgiev
|
<p>The Kubernetes docs on <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/</a> state:</p>
<blockquote>
<p>The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node.</p>
</blockquote>
<p>Does Kubernetes consider the current state of the node when calculating capacity? To highlight what I mean, here is a concrete example:</p>
<p>Assuming I have a node with 10Gi of RAM, running 10 Pods each with 500Mi of resource requests, and no limits. Let's say they are "bursting", and each Pod is actually using 1Gi of RAM. In this case, the node is fully utilized (<code>10 x 1Gi = 10Gi</code>), but the resources requests are only <code>10 x 500Mi = 5Gi</code>. Would Kubernetes consider scheduling another pod on this node because only 50% of the memory capacity on the node has been <code>requested</code>, or would it use the fact that 100% of the memory is currently being utilized, and the node is at full capacity?</p>
|
irbull
|
<p>By default kubernetes will use cgroups to manage and monitor the "allocatable" memory on a node for pods. It is possible to configure <code>kubelet</code> to entirely rely on the static <em>reservations</em> and pod <em>requests</em> from your deployments though so the method depends on your cluster deployment.</p>
<p>In either case, a node itself will track "memory pressure", which monitors the existing overall memory usage of a node. If a node is under memory pressure then no new pods will be scheduled and existing pods will be evicted.</p>
<p>It's best to set sensible memory <em>requests</em> and <em>limits</em> for all workloads to help the scheduler as much as possible.
If a kubernetes deployment does not configure cgroup memory monitoring, setting <em>requests</em> is a requirement for <em>all</em> workloads.
If the deployment is using cgroup memory monitoring, at least setting <em>requests</em> gives the scheduler extra detail as to whether the pods to be scheduled should fit on a node. </p>
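<p>For reference, a minimal sketch of what those <em>requests</em> and <em>limits</em> look like on a container spec (the numbers are placeholders):</p>
<pre><code>spec:
  containers:
  - name: my-app               # placeholder name
    image: my-app:latest       # placeholder image
    resources:
      requests:
        memory: 256Mi
        cpu: 100m
      limits:
        memory: 512Mi
        cpu: 500m
</code></pre>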
<h2>Capacity and Allocatable Resources</h2>
<p>The <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="noreferrer">Kubernetes Reserve Compute Resources docco</a> has a good overview of how memory is viewed on a node.</p>
<pre><code> Node Capacity
---------------------------
| kube-reserved |
|-------------------------|
| system-reserved |
|-------------------------|
| eviction-threshold |
|-------------------------|
| |
| allocatable |
| (available for pods) |
| |
| |
---------------------------
</code></pre>
<p>The default scheduler checks a node isn't under memory pressure, then looks at the <em>allocatable</em> memory available on a node and whether the new pod's <em>requests</em> will fit in it. </p>
<p>The <em>allocatable</em> memory available is the <code>total-available-memory - kube-reserved - system-reserved - eviction-threshold - scheduled-pods</code>.</p>
<h3>Scheduled Pods</h3>
<p>The value for <code>scheduled-pods</code> can be calculated via a dynamic cgroup, or statically via the pods <em>resource requests</em>.</p>
<p>The kubelet <code>--cgroups-per-qos</code> option, which defaults to <code>true</code>, enables cgroup tracking of scheduled pods. The pods kubernetes runs will be placed under the <code>kubepods</code> cgroup hierarchy, so their actual usage can be tracked.</p>
<p>If <code>--cgroups-per-qos=false</code> then the <em>allocatable</em> memory will only be reduced by the <em>resource requests</em> of the pods scheduled on a node. </p>
<h3>Eviction Threshold</h3>
<p>The <code>eviction-threshold</code> is the level of free memory when Kubernetes starts evicting pods. This defaults to 100MB but can be set via the kubelet command line. This setting is tied to both the <em>allocatable</em> value for a node and also the memory pressure state of a node in the next section.</p>
<h3>System Reserved</h3>
<p>The kubelet's <code>system-reserved</code> value can be configured as a static value (<code>--system-reserved=</code>) or monitored dynamically via cgroup (<code>--system-reserved-cgroup=</code>).
This is for any system daemons running outside of kubernetes (<code>sshd</code>, <code>systemd</code> etc). If you configure a cgroup, the processes all need to be placed in that cgroup. </p>
<h3>Kube Reserved</h3>
<p>The kubelet's <code>kube-reserved</code> value can be configured as a static value (via <code>--kube-reserved=</code>) or monitored dynamically via cgroup (<code>--kube-reserved-cgroup=</code>).
This is for any kubernetes services running outside of kubernetes, usually <code>kubelet</code> and a container runtime.</p>
<h3>Capacity and Availability on a Node</h3>
<p>Capacity is stored in the Node object.</p>
<pre><code>$ kubectl get node node01 -o json | jq '.status.capacity'
{
"cpu": "2",
"ephemeral-storage": "61252420Ki",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "4042284Ki",
"pods": "110"
}
</code></pre>
<p>The allocatable value can be found on the Node; note that existing usage doesn't change this value. Only scheduling pods with resource requests will take away from the <code>allocatable</code> value. </p>
<pre><code>$ kubectl get node node01 -o json | jq '.status.allocatable'
{
"cpu": "2",
"ephemeral-storage": "56450230179",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "3939884Ki",
"pods": "110"
}
</code></pre>
<h2>Memory Usage and Pressure</h2>
<p>A kube node can also have a "memory pressure" event. This check is done outside of the <em>allocatable</em> resource checks above and is more a system level catch all. Memory pressure looks at the current root cgroup memory usage minus the inactive file cache/buffers, similar to the calculation <code>free</code> does to remove the file cache. </p>
<p>A node under memory pressure will not have pods scheduled, and will actively try and evict existing pods until the memory pressure state is resolved. </p>
<p>You can set the eviction threshold, the amount of memory kubelet will keep available, via the <code>--eviction-hard=[memory.available<500Mi]</code> flag. The memory requests and usage for pods can help inform the eviction process.</p>
<p><code>kubectl top node</code> will give you the existing memory stats for each node (if you have a metrics service running).</p>
<pre><code>$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
node01 141m 7% 865Mi 22%
</code></pre>
<p>If you were not using <code>cgroups-per-qos</code> and had a number of pods without resource limits, or a number of system daemons, then the cluster is likely to have some problems scheduling on a memory constrained system, as <em>allocatable</em> will be high but the memory actually available might be really low. </p>
<h3>Memory Pressure calculation</h3>
<p>Kubernetes <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="noreferrer">Out Of Resource Handling docco</a> includes a <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/memory-available.sh" rel="noreferrer">script</a> which emulates kubelets memory monitoring process: </p>
<pre><code># This script reproduces what the kubelet does
# to calculate memory.available relative to root cgroup.
# current memory usage
memory_capacity_in_kb=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
memory_capacity_in_bytes=$((memory_capacity_in_kb * 1024))
memory_usage_in_bytes=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
memory_total_inactive_file=$(cat /sys/fs/cgroup/memory/memory.stat | grep total_inactive_file | awk '{print $2}')
memory_working_set=${memory_usage_in_bytes}
if [ "$memory_working_set" -lt "$memory_total_inactive_file" ];
then
memory_working_set=0
else
memory_working_set=$((memory_usage_in_bytes - memory_total_inactive_file))
fi
memory_available_in_bytes=$((memory_capacity_in_bytes - memory_working_set))
memory_available_in_kb=$((memory_available_in_bytes / 1024))
memory_available_in_mb=$((memory_available_in_kb / 1024))
echo "memory.capacity_in_bytes $memory_capacity_in_bytes"
echo "memory.usage_in_bytes $memory_usage_in_bytes"
echo "memory.total_inactive_file $memory_total_inactive_file"
echo "memory.working_set $memory_working_set"
echo "memory.available_in_bytes $memory_available_in_bytes"
echo "memory.available_in_kb $memory_available_in_kb"
echo "memory.available_in_mb $memory_available_in_mb"
</code></pre>
|
Matt
|
<p>Our nginx-ingress log is continuously filled with this error message:</p>
<pre><code> dns.lua:61: resolve(): server returned error code: 3: name error, context: ngx.timer
</code></pre>
<p>We created the Kubernetes cluster with Kubeadm which uses CoreDNS by default. </p>
<pre><code>/data # kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-8jr7t 2/2 Running 2 4d22h
calico-node-cl5f6 2/2 Running 4 4d22h
calico-node-rzt28 2/2 Running 2 4d22h
coredns-fb8b8dccf-n68x9 1/1 Running 3 3d23h
coredns-fb8b8dccf-x9wr4 1/1 Running 1 3d23h
</code></pre>
<p>It also has a kube-dns service that points to the core-dns pods.</p>
<pre><code>kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 7m29s
</code></pre>
<p>I can't find anything else in the logs that would help me resolve this issue.</p>
<p>UPDATE: </p>
<p>We had a service with externalName as suggested here > <a href="https://github.com/coredns/coredns/issues/2324#issuecomment-484005202" rel="nofollow noreferrer">https://github.com/coredns/coredns/issues/2324#issuecomment-484005202</a></p>
|
Venkatesh Nannan
|
<p>As suggested in this comment, we had a service with type "ExternalName".
<a href="https://github.com/coredns/coredns/issues/2324#issuecomment-484005202" rel="nofollow noreferrer">https://github.com/coredns/coredns/issues/2324#issuecomment-484005202</a></p>
<p>Once we deleted this service, we stopped getting this error. Using IP address instead of DNS name should work as well but I never tried it. </p>
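<p>For reference, the kind of Service that triggers this looks roughly like the sketch below (the name and target host are placeholders). You can spot candidates in your own cluster by checking the <code>TYPE</code> column of <code>kubectl get svc --all-namespaces</code> for <code>ExternalName</code> entries.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-external-service    # placeholder name
spec:
  type: ExternalName
  externalName: some.host.example.com   # placeholder external hostname
</code></pre>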
|
Venkatesh Nannan
|
<p>I have 2 services deployed in Kubernetes</p>
<ol>
<li>Application A (asp.net core 5 gRPC service)</li>
<li>Application B (asp.net core 5 api)</li>
</ol>
<p>Application B is accessible via ingress-nginx-controller over https from out side of my cluster.</p>
<p>Application A is expose via Service and only accessible inside my cluster.</p>
<p>My question is how can I connect from Application B to Application A over SSL/TLS?</p>
<p><a href="https://i.stack.imgur.com/v1yKy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v1yKy.png" alt="enter image description here" /></a></p>
|
Hasanuzzaman
|
<p>For HTTPS communication, you can set up a certificate with "dotnet dev-certs https". Each pod would need a self-signed certificate set up on port 443 - fine for development purposes, but not recommended.</p>
<p>However, gRPC can actually be used over plain HTTP, with service mesh support for http2/grpc for service-to-service communication. The steps can be</p>
<ol>
<li><p>Call GRPC use HTTP</p>
<p><a href="https://learn.microsoft.com/en-us/aspnet/core/grpc/troubleshoot?view=aspnetcore-3.0#call-insecure-grpc-services-with-net-core-client-2" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/aspnet/core/grpc/troubleshoot?view=aspnetcore-3.0#call-insecure-grpc-services-with-net-core-client-2</a></p>
</li>
<li><p>Setup Linkerd</p>
<p><a href="https://techcommunity.microsoft.com/t5/azure-developer-community-blog/meshing-with-linkerd2-using-grpc-enabled-net-core-services/ba-p/1377867" rel="nofollow noreferrer">https://techcommunity.microsoft.com/t5/azure-developer-community-blog/meshing-with-linkerd2-using-grpc-enabled-net-core-services/ba-p/1377867</a></p>
</li>
</ol>
<p>Hope this helps</p>
|
SonDang
|
<p>We have an existing Kubernetes cluster running that is using Istio. I was planning on adding a new Prometheus pod and can find plenty of blogs on how to do it. However, I noticed Istio already has a Prometheus service running in the <strong>Istio-System</strong> namespace.</p>
<p>My main goal is to get Grafana running with a few basic monitoring dashboards. Should I go ahead and use Istio's Prometheus service? What are the advantages/disadvantages of using Istio's Prometheus service over running my own?</p>
|
codeConcussion
|
<p>I'd suggest not sharing the existing istio prometheus, it's deployed in the <code>istio-system</code> namespace for a reason. It was deployed by and configured for istio. </p>
<p>If you really want to create a central shared prometheus service, use <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer"><code>prometheus-operator</code></a> and create a prometheus operator for istio. This is still going to be a lot of config effort to reintegrate your istio installation back into this new prometheus instance and is probably only worth it if you plan on scaling the number of clusters running this setup. 2 or 4 Prometheis is a manageable gap. 20 or 40 not so much. </p>
|
Matt
|
<p>I am trying to setup connection to my databases which reside outside of GKE cluster from within the cluster.</p>
<p>I have read various tutorials including
<a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services</a>
and multiple SO questions though the problem persists.</p>
<p>Here is an example configuration with which I am trying to setup kafka connectivity:</p>
<pre><code>---
kind: Endpoints
apiVersion: v1
metadata:
name: kafka
subsets:
- addresses:
- ip: 10.132.0.5
ports:
- port: 9092
---
kind: Service
apiVersion: v1
metadata:
name: kafka
spec:
type: ClusterIP
ports:
- port: 9092
targetPort: 9092
</code></pre>
<p>I am able to get some sort of response by connecting directly via <code>nc 10.132.0.5 9092</code> from the node VM itself, but if I create a pod, say by <code>kubectl run -it --rm --restart=Never alpine --image=alpine sh</code> then I am unable to connect from within the pod using <code>nc kafka 9092</code>. All libraries in my code fail by timing out so it seems to be some kind of routing issue.</p>
<p>Kafka is given as an example, I am having the same issues connecting to other databases as well.</p>
|
tna0y
|
<p>Solved it, the issue was within my understanding of how GCP operates.</p>
<p>To solve the issue I had to add a firewall rule which allowed all incoming traffic from internal GKE network. In my case it was <code>10.52.0.0/24</code> address range.</p>
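<p>For anyone hitting the same thing, a hedged sketch of such a firewall rule (the rule name, network and protocols are assumptions - use your own cluster's network and pod CIDR):</p>
<pre><code>gcloud compute firewall-rules create allow-gke-pods-internal \
  --network=default \
  --source-ranges=10.52.0.0/24 \
  --allow=tcp,udp,icmp
</code></pre>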
<p>Hope it helps someone.</p>
|
tna0y
|
<p>I have my application <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a> deployed on K8S, with an nginx ingress controller. HTTPS is resolved at nginx.</p>
<p>Now there is a need to expose one service on a specific port for example <a href="https://myapp.com:8888" rel="nofollow noreferrer">https://myapp.com:8888</a>. Idea is to keep <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a> secured inside the private network and expose only port number 8888 to the internet for integration.</p>
<p>Is there a way all traffic can be handled by the ingress controller, including tls termination, and it can also expose 8888 port and map it to a service?</p>
<p>Or
I need another nginx terminating tls and exposed on nodeport? I am not sure if I can access services like <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a>:<node_port> with https.</p>
<p>Is using multiple ingress controllers an option?</p>
<p>What is the best practice to do this in Kubernetes?</p>
|
Vishal
|
<p>It is not a best practice to expose a custom port over the internet.</p>
<p>Instead, create a sub-domain (i.e. <a href="https://custom.myapp.com" rel="nofollow noreferrer">https://custom.myapp.com</a>) which points to the internal service on port 8888.</p>
<p>Then create a separate nginx ingress (not an ingress controller) which points to that "https://custom.myapp.com" sub-domain.</p>
<p>Example manifest file as follow:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-myapp-service
namespace: abc
spec:
  rules:
  - host: custom.myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 8888
</code></pre>
<p>Hope this helps.</p>
|
SonDang
|
<p>So I heard about <code>initContainers</code> which allow you to do pre-app-container initialization. However, I want initialization which is done either at the cluster level, or the statefulset, or even the whole pod.</p>
<p>For instance, I want to perform a one time hadoop namenode format on my persistent volumes and be done with that. After that is done my namenode statefulset and my datanode replicasets can proceed each time</p>
<p>Does <code>kubernetes</code> have anything to accommodate this?</p>
<p>How about its Extensions?</p>
|
Jeff Saremi
|
<p>Kubernetes itself provides <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> for ad hoc executions. Jobs do not integrate very tightly with existing Pods/Deployments/Statefulsets. </p>
<p><a href="https://v3.helm.sh/" rel="nofollow noreferrer">Helm</a> is a deployment orchestrator and includes <a href="https://helm.sh/docs/charts_hooks/" rel="nofollow noreferrer"><code>pre</code> and <code>post</code> hooks</a> that can be used during an <code>install</code> or <code>upgrade</code>. </p>
<p>The <a href="https://helm.sh/docs/charts_hooks/#writing-a-hook" rel="nofollow noreferrer">helm docco provides a Job example</a> run <code>post-install</code> via annotations. </p>
<pre><code>metadata:
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
</code></pre>
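<p>Putting that together for the namenode-format example, a hedged sketch of such a hook Job template might look like the following (the image, command and volume details are assumptions about your Hadoop setup, not a tested recipe):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-namenode-format"
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: namenode-format
        image: my-hadoop:latest                 # placeholder image with the hdfs CLI
        command: ["hdfs", "namenode", "-format", "-nonInteractive"]
        volumeMounts:
        - name: namenode-data
          mountPath: /hadoop/dfs/name           # placeholder mount path
      volumes:
      - name: namenode-data
        persistentVolumeClaim:
          claimName: namenode-pvc               # placeholder PVC name
</code></pre>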
<p>If you have more complex requirements you could do the same with a manager or Jobs that query the kubernetes API's to check on cluster state. </p>
<h3>Helm 3</h3>
<p>A warning that helm is moving to v3.x soon where they have rearchitected away a lot of significant problems from v2. If you are just getting started with helm, keep an eye out for the v3 beta. It's only alpha as of 08/2019.</p>
|
Matt
|
<p>I have a Kubernetes cluster with 4 nodes and, ideally, my application should have 4 replicas, evenly distributed to each node. However, when pods are scheduled, they almost always end up on only two of the nodes, or if I'm very lucky, on 3 of the 4. My app has quite a bit of traffic and I really want to use all the resources that I pay for.</p>
<p>I suspect the reason why this happens is that Kubernetes tries to schedule the new pods on the nodes that have the most available resources, which is nice as a concept, but it would be even nicer if it would reschedule the pods once the old nodes become available again.</p>
<p>What options do I have? Thanks.</p>
|
user1782560
|
<p>You have lots of options!</p>
<p>First and foremost: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">Pod Affinity and Anti-affinity</a> to make sure your Pods prefer to be placed on a host that does not already have a Pod with the same label.</p>
<p>Second, you could set up <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">Pod Topology Spread Constraints</a>. This is newer and a bit more advanced, but usually a better solution than simple anti-affinity.</p>
<p>Thirdly, you can pin your Pods to a specific node using a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">NodeSelector</a>.</p>
<p>Finally, you could write your own scheduler or modify the default scheduler settings, but that's a bit of a more advanced topic. Don't forget to always set your resource requests correctly: these should be set to a value that more or less encapsulates the usage during peak traffic, to make sure that a node has enough resources available to max out the Pod without interfering with other Pods.</p>
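<p>As a hedged sketch of the Pod Topology Spread Constraints approach (the app name, image and resource numbers are placeholders), a Deployment that spreads its replicas evenly across nodes could look like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: my-app:latest
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
</code></pre>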
|
Tim Stoop
|
<p>While playing around with Docker and orchestration (kubernetes) I had to install and use minikube to create a simple sandbox environment. At the beginning I thought that minikube installs some kind of VM and run the "minified" kubernetes environment inside the same, however, after the installation listing my local Docker running containers I found minikube running as a container!! .. so here I'm a little bit lost and I have some questions hopefully somebody can clarify.</p>
<p>Does minikube itself works as a Docker container?</p>
|
rkachach
|
<p>Just going off of the source code available on Github and my knowledge:</p>
<ol>
<li>No it's not run in a Docker container (although it does orchestrate launching containers)</li>
<li>It uses Go to launch a smaller footprint version of the Kubernetes API that is compatible with the Kubernetes standard APIs but not ideal for a full cluster</li>
<li>The hierarchy is Minikube Golang runtime -> Docker containers running within the mini cluster (where the cluster is an abstract concept which is just a bunch of namespaced Docker containers)</li>
<li>Running on Minikube is not ideal for production performance; it is intended as a platform to test applications locally that will eventually run in fully-fledged Kubernetes clusters</li>
<li>The main architecture restrictions would be related to the differences between Minikube running locally and a full Kubernetes cluster running across different nodes so probably a lot of networking and authentication type differences/restrictions</li>
</ol>
|
Alex W
|
<p>I'm currently using Docker 19.03 and Kubernetes 1.13.5 and Rancher 2.2.4. Since 19.03, Docker has officially support natively NVIDIA GPUs just by passing <code>--gpus</code> option. Example (from <a href="https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(Native-GPU-Support)#usage" rel="noreferrer">NVIDIA/nvidia-docker github</a>):</p>
<pre><code> docker run --gpus all nvidia/cuda nvidia-smi
</code></pre>
<p>But in Kubernetes, there's no option to pass Docker CLI options. So if I need to run a GPU instance, I have to install <code>nvidia-docker2</code>, which is not convenient to use.</p>
<p>Is there any way to pass the Docker CLI options or the NVIDIA runtime without installing <code>nvidia-docker2</code>?</p>
|
Aperture Prometheus
|
<p><a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/" rel="nofollow noreferrer">GPU's are scheduled</a> via <a href="https://kubernetes.io/docs/concepts/cluster-administration/device-plugins" rel="nofollow noreferrer">device plugins</a> in Kubernetes.</p>
<blockquote>
<p>The <a href="https://github.com/NVIDIA/k8s-device-plugin" rel="nofollow noreferrer">official NVIDIA GPU device</a> plugin has the following requirements:</p>
<ul>
<li>Kubernetes nodes have to be pre-installed with NVIDIA drivers.</li>
<li>Kubernetes nodes have to be pre-installed with <a href="https://github.com/NVIDIA/k8s-device-plugin#preparing-your-gpu-nodes" rel="nofollow noreferrer">nvidia-docker 2.0</a></li>
<li>nvidia-container-runtime must be configured as the <a href="https://github.com/NVIDIA/k8s-device-plugin#preparing-your-gpu-nodes" rel="nofollow noreferrer">default runtime</a> for docker instead of runc.</li>
<li>NVIDIA drivers ~= 361.93</li>
</ul>
</blockquote>
<p>Once the nodes are setup GPU's become another resource in your spec like <code>cpu</code> or <code>memory</code>.</p>
<pre><code>spec:
containers:
- name: gpu-thing
image: whatever
resources:
limits:
nvidia.com/gpu: 1
</code></pre>
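<p>Once the driver, <code>nvidia-docker2</code> runtime and device plugin are in place, you can sanity-check that GPUs are actually advertised as allocatable on your nodes (this assumes the standard <code>nvidia.com/gpu</code> resource name):</p>
<pre><code>kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
</code></pre>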
|
Matt
|
<p>We have a default deny-all-egress policy for all pods and we have an egress-internet policy like below</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-external-egress-internet
spec:
podSelector:
matchLabels:
egress: internet
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
</code></pre>
<p>Now, if I try to add multiple labels under <code>spec/podselector/matchlabels</code> everything breaks. Is there a way for this network policy to get implemented on pods with label <code>egress: internet</code> OR <code>foo:bar</code>?</p>
<p>A pod with just <code>foo:bar</code> as label should be allowed but it's not working that way.</p>
|
mbxzxz
|
<p>You can add multiple key-values to podSelector.matchLabels.<br />
See <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/10-allowing-traffic-with-multiple-selectors.md" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/10-allowing-traffic-with-multiple-selectors.md</a></p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: redis-allow-services
spec:
podSelector:
matchLabels:
app: bookstore
role: db
ingress:
- from:
- podSelector:
matchLabels:
app: bookstore
role: search
- podSelector:
matchLabels:
app: bookstore
role: api
- podSelector:
matchLabels:
app: inventory
role: web
</code></pre>
|
Banoona
|
<p>As a followup question to my post <a href="https://stackoverflow.com/questions/66191604/helm-function-to-set-value-based-on-a-variable">Helm function to set value based on a variable?</a>, and modifying the answer given in <a href="https://stackoverflow.com/questions/52742241/dynamically-accessing-values-depending-on-variable-values-in-a-helm-chart">Dynamically accessing values depending on variable values in a Helm chart</a>, I'm trying this</p>
<pre><code>$ helm version --short
v3.5.2+g167aac7
values.yaml
-----------
env: sandbox
environments:
sandbox: 0
staging: 1
production: 2
replicaCount:
- 1
- 2
- 4
templates/deployments.yaml
--------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ index .Values.replicaCount (pluck .Values.env .Values.environments | first | default .Values.environments.sandbox) }}
</code></pre>
<p>But I get</p>
<pre><code>$ helm template . --dry-run
Error: template: guestbook/templates/deployment.yaml:10:15: executing "guestbook/templates/deployment.yaml" at <index .Values.replicaCount (pluck .Values.env .Values.environments | first | default .Values.environments.sandbox)>: error calling index: cannot index slice/array with type float64
</code></pre>
<p>Why is <code>pluck</code> returning a <code>float64</code> instead of an integer, which I expect since my <code>environments</code> dictionary values are integers?</p>
|
Chris F
|
<p>If I do this, that is, pipe <code>pluck</code> with the <code>int</code> converter, it works, but it doesn't explain why <code>pluck</code> returns a <code>float64</code> value.</p>
<pre><code>templates/deployments.yaml
--------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ index .Values.replicaCount ((pluck .Values.env .Values.environments | first | default .Values.environments.sandbox) | int) }}
</code></pre>
<p><strong>UPDATE:</strong> It turns out to be a known bug. See <a href="https://github.com/kubernetes-sigs/yaml/issues/45" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/yaml/issues/45</a></p>
|
Chris F
|
<p>We run K8S clusters based on custom VM images that have corporate standard services and utilities. How can a pod/container have access to those? For example, how to start a service in the host as part of deploy/undeploy</p>
|
user2991054
|
<p>You can mount the systemd sockets into the Pod's container. From there you either need <a href="https://www.freedesktop.org/wiki/Software/polkit/" rel="nofollow noreferrer">polkit</a> permissions to run the commands as a non privileged user, or you need to run the container privileged. The Pod spec to do so is as follows:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: dbus-pod
labels:
app: dbus
spec:
containers:
- name: dbus-container
image: centos:7
command: ['systemctl','status','sshd']
securityContext:
privileged: true
volumeMounts:
- name: run-dbus
mountPath: /var/run/dbus
- name: run-systemd
mountPath: /run/systemd
- name: bin-systemctl
mountPath: /usr/bin/systemctl
readOnly: true
- name: etc-systemd
mountPath: /etc/systemd/system
readOnly: true
restartPolicy: Never
volumes:
- name: run-dbus
hostPath:
path: /var/run/dbus
- name: run-systemd
hostPath:
path: /run/systemd
- name: bin-systemctl
hostPath:
path: /usr/bin/systemctl
- name: etc-systemd
hostPath:
path: /etc/systemd/system
</code></pre>
<p>Then you have to figure out how you want to schedule the Pod on your cluster. If you wanted to run something on every node once, you could create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> and remove it. A <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a> might be more appropriate if you have selectors to define where you want the Pod to run.</p>
<p>There are also projects like <a href="https://github.com/coreos/go-systemd" rel="nofollow noreferrer">go-systemd</a> that control dbus via the <code>/var/run/dbus</code> socket and take the place of all the systemd/systemctl setup.</p>
|
Matt
|
<p>I have a kubernetes Cronjob that performs some backup jobs, and the backup files needs to be uploaded to a bucket. The pod have the service account credentials mounted inside the pod at /var/run/secrets/kubernetes.io/serviceaccount, <strong>but how can I instruct gsutil to use the credentials in /var/run/secrets/kubernetes.io/serviceaccount?</strong></p>
<pre><code>lrwxrwxrwx 1 root root 12 Oct 8 20:56 token -> ..data/token
lrwxrwxrwx 1 root root 16 Oct 8 20:56 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 13 Oct 8 20:56 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 31 Oct 8 20:56 ..data -> ..2018_10_08_20_56_04.686748281
drwxr-xr-x 2 root root 100 Oct 8 20:56 ..2018_10_08_20_56_04.686748281
drwxrwxrwt 3 root root 140 Oct 8 20:56 .
drwxr-xr-x 3 root root 4096 Oct 8 20:57 ..
</code></pre>
|
pjotr_dolphin
|
<p>The short answer is that the token there is not in a format that gsutil knows how to use, so you can't use it. You'll need a JSON keyfile, as mentioned in the tutorial here (except that you won't be able to use the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable):</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform</a></p>
<p>Rather than reading from the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable, Gsutil uses Boto configuration files to load credentials. The common places that it knows to look for these Boto config files are <code>/etc/boto.cfg</code> and <code>$HOME/.boto</code>. Note that the latter value changes depending on the user running the command (<code>$HOME</code> expands to different values for different users); since cron jobs usually run as a different user than the one who set up the config file, I wouldn't recommend relying on this path.</p>
<p>So, on your pod, you'll need to first create a Boto config file that references the keyfile:</p>
<pre><code># This option is only necessary if you're running an installation of
# gsutil that came bundled with gcloud. It tells gcloud that you'll be
# managing credentials manually via your own Boto config files.
$ gcloud config set pass_credentials_to_gsutil False
# Set up your boto file at /path/to/my/boto.cfg - the setup will prompt
# you to supply the /path/to/your/keyfile.json. Alternatively, to avoid
# interactive setup prompts, you could set up this config file beforehand
# and copy it to the pod.
$ gsutil config -e -o '/path/to/my/boto.cfg'
</code></pre>
<p>And finally, whenever you run gsutil, you need to tell it where to find that Boto config file which references your JSON keyfile (and also make sure that the user running the command has permission to read both the Boto config file and the JSON keyfile). If you wrote your Boto config file to one of the well-known paths I mentioned above, gsutil will attempt to find it automatically; if not, you can tell gsutil where to find the Boto config file by exporting the <code>BOTO_CONFIG</code> environment variable in the commands you supply for your cron job:</p>
<pre><code>export BOTO_CONFIG=/path/to/my/boto.cfg; /path/to/gsutil cp <src> <dst>
</code></pre>
<p><strong>Edit</strong>:</p>
<p>Note that GCE VM images come with a pre-populated file at /etc/boto.cfg. This config file tells gsutil to load a plugin that allows gsutil to contact the GCE metadata server and fetch auth tokens (corresponding to the <code>default</code> robot service account for that VM) that way. If your pod is able to read the host VM's /etc/boto.cfg file, you're able to contact the GCE metadata server, and you're fine with operations being performed by the VM's <code>default</code> service account, this solution should work out-of-the-box.</p>
|
mhouglum
|
<p>I have two network policies (one with pod selector app=db and the other with app=proxy) and I have one pod to apply both network policies, the pod config doesn't allow to have 2 different labels with the same key app.</p>
<p>How can I do it in this case without modifying any network policies?</p>
|
sergiotm
|
<p>If the pod/label/app selector is the only selector in each policy then it's not possible. The network policy probably needs a <code>matchExpressions</code> selector and then a new label on the pods.</p>
<p>Ingress and Egress rules can supply an array of <code>podSelector</code>s for the network targets, or similar <code>matchExpressions</code>.</p>
<pre><code>spec:
podSelector:
matchExpressions:
- key: role
operator: In
values: [ "db-proxy", "db" ]
</code></pre>
|
Matt
|
<p>I have a microservice that is not a webservice. </p>
<p>It is a Spring Boot (1.5) CommandLineRunner app that does not have a need to expose an API or do anything with http.</p>
<p>However, I need to give it a liveness probe for Kubernetes.</p>
<p>Can this be accomplished without refactoring it into a webservice app?</p>
<p>I have this configuration added to enable Spring's info endpoint</p>
<pre><code>management:
endpoint:
health:
enabled: true
info:
enabled: true
# https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints
info:
app:
name: foo-parser
description: parses binary files from S3 and updates the database
</code></pre>
<p>I implemented this health check class</p>
<pre><code>import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health.Builder;
import org.springframework.stereotype.Component;

@Component
public class HealthCheck extends AbstractHealthIndicator {

    Logger log = LoggerFactory.getLogger("jsonLogger");

    private final MySQLAccessor mySQLAccessor;

    private static final String FOO_TABLE = "foo";

    @Autowired
    public HealthCheck(final MySQLAccessor mySQLAccessor) {
        this.mySQLAccessor = mySQLAccessor;
    }

    @Override
    protected void doHealthCheck(Builder builder) throws Exception {
        boolean result = mySQLAccessor.healthCheck(FOO_TABLE);
        if (result) {
            log.info("HELLO! the health check is good!");
            builder.up().withDetail("test", "good");
        }
        else {
            log.info("HELLO! OH NOES the health check is ungood!");
            builder.down().withDetail("test", "bad");
        }
    }
}
</code></pre>
<p>Can this idea work? Or do I have to refactor it to serve web requests?</p>
<p>Thank you for any clues</p>
|
slashdottir
|
<p>you can expose actuator endpoint details including the healthcheck using JMX.</p>
<p>example <code>application.yml</code></p>
<pre><code>management:
endpoints:
jmx:
exposure:
include: health,info,metrics,mappings
</code></pre>
<p>Then define the liveness probe to run a script (or java program) to call the JMX endpoint and answer the healthcheck:</p>
<p>example k8s config</p>
<pre><code>apiVersion: v1
kind: Pod
spec:
containers:
- name: liveness
image: my-app
livenessProbe:
exec:
command:
- /bin/sh
- test_app_with_jmx.sh
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
|
stringy05
|
<p>I am trying to deploy an app to Kubernetes. I have 2 containers: Angular app, hosted by nginx, and Node.js server. I run those containers in the same pod. The issue is that Angular app can't access the Node.js api.
Here is my nginx's default.conf:</p>
<pre><code>server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
try_files $uri $uri/ /index.html;
index index.html;
}
location /api/ {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
}
}
</code></pre>
<p>Here is my Dockerfile for the Angular app:</p>
<pre><code>FROM node:lts-alpine AS build
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build:prod
FROM nginx:stable-alpine
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist/mag-ui /usr/share/nginx/html
</code></pre>
<p>Here is my deploy.yml that I run on a Kubernetes cluster:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: mag
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: mag
name: mag
labels:
app: mag
spec:
replicas: 1
selector:
matchLabels:
app: mag
minReadySeconds: 10
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: mag
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mag-api
image: mag-api
imagePullPolicy: "Always"
ports:
- containerPort: 3000
- name: mag-ui
image: mag-ui
imagePullPolicy: "Always"
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
namespace: mag
name: mag-svc
labels:
app: mag
spec:
ports:
- port: 80
name: ui
targetPort: 80
selector:
app: mag
</code></pre>
<p>So when I deploy this to my local cluster (and forward port 8080:80 of service/mag-svc) and browse localhost:8080, the ui app tries to query data from Node.js server and fails with: <code>GET http://localhost:3000/api/mag/models net::ERR_CONNECTION_REFUSED</code>.</p>
<p><strong>However</strong>, if I connect to Angular app container's shell and curl the <code>localhost:3000/api/mag/models</code>, it works fine and I get the expected response.</p>
<p>Looks like it tries to access localhost of my host vm, instead of localhost of a container where the Angular app is running. So, how to make the Angular app call Node.js api, that runs in the same pod?</p>
|
Kiramm
|
<p><a href="https://stackoverflow.com/a/45307259/1423507"><code>angular</code> runs in your browser</a> so the connections to <code>http://localhost:3000</code> from the app running in your browser are to your PC's <code>localhost:3000</code>.</p>
<p>You can create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer"><code>Service</code></a> for the <code>nodejs</code> container:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
namespace: mag
name: mag-svc-api
labels:
app: mag
spec:
ports:
- port: 3000
name: mag-api
targetPort: 3000
selector:
app: mag
</code></pre>
<p>... then forward traffic for <code>localhost:3000</code> to the <code>Service</code>: <code>kubectl port-forward -n mag svc/mag-svc-api 3000:3000</code>. Connections from the app running in your browser to <code>http://localhost:3000</code> would be forwarded to the <code>Service</code> -> container running in <a href="https://kubernetes.io/" rel="nofollow noreferrer">Kubernetes</a>.</p>
|
masseyb
|
<p>Is it possible to run the kubernetes api-server in minikube with maximum log verbosity?</p>
<pre><code>$ minikube start --v 4
</code></pre>
<p>didn't work for me. When I exec into the api-server container and run ps, the api-server command line doesn't have --v=4 in it. So, minikube is not passing the --v=4 down to the api-server.</p>
<p>Thanks.</p>
|
user674669
|
<p>There is an error in the parameters; try this instead:</p>
<pre><code>minikube start --v=7
</code></pre>
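<p>Note that <code>--v</code> only raises minikube's own log verbosity. If you specifically need the flag passed down to the API server, minikube's <code>--extra-config</code> option is meant for forwarding component flags, e.g. (component/flag names per the minikube docs, verify on your version):</p>
<pre><code>minikube start --extra-config=apiserver.v=4
</code></pre>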
|
sev7nx
|
<p>I have a Helm Chart. Which on deployment creates the following Kubernetes resources:</p>
<ol>
<li>A SSL enabled Service</li>
<li>A Container created with my Docker image which internally runs a Java process that communicates with the above service.</li>
<li>Kubernetes Secrets (SSL certificates and access keys), that are mounted inside the above container.</li>
</ol>
<p>The problem is my Container can not talk to my Service unless I add the SSL certificate to Java certificates. I do this by running the following command <strong>manually</strong> inside my docker with <strong>root</strong> user.</p>
<pre class="lang-sh prettyprint-override"><code>cd ${JAVA_HOME}/jre/lib/security
keytool -keystore cacerts -storepass changeit -noprompt -trustcacerts -importcert -alias myservicecert -file /home/myuser/.myservice/certs/public.crt
</code></pre>
<p>I want to automate the above steps but facing issues:</p>
<ul>
<li>I cannot perform the above steps inside the <code>Dockerfile</code> since the certificate is not available at that time. Obvious.</li>
<li>I cannot add these steps to the <code>entrypoint.sh</code> file of the Docker image because the entrypoint file is executed by a different user than root.</li>
<li>Another solution I could think of is to change the permissions of <code>${JAVA_HOME}/jre/lib/security/cacerts</code> to <code>777</code> so that I can run these commands in <code>entrypoint.sh</code> without an issue. But I am not sure if this is the correct way to go for security reasons. Suggestions on this?</li>
<li>I do not want to set a custom location for certificates using <code>Djavax.net.ssl.trustStore</code> because then java will not have access to root certificates.</li>
</ul>
<p>What is correct way to achieve this?</p>
|
Neelesh
|
<h2>TL;DR</h2>
<p>You can add the certificate to the keystore in an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer"><code>initContainer</code></a>. The keystore can be shared between the <code>initContainer</code> and the <code>container</code> using a <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer"><code>volume</code></a>.</p>
<hr />
<p>In this example we're mounting the <code>tls-secret</code> to the pod as a Kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secret</a>. The <code>tls-secret</code> contains the <code>tls.crt</code> which needs to be added to the keystore. We add the certificate to the keystore in the <code>initContainer</code> after copying the security files into a <code>volume</code> which is mounted to the pod's <code>container</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: myapp
labels:
app: myapp
spec:
containers:
- name: myapp
image: openjdk:8-jdk-slim-buster
command:
- bash
- -c
- |
keytool -list -keystore /usr/local/openjdk-8/jre/lib/security/cacerts -alias local
sleep infinity
volumeMounts:
- mountPath: /usr/local/openjdk-8/jre/lib/security
name: cacerts
initContainers:
- name: init-cacerts
image: openjdk:8-jdk-slim-buster
command:
- bash
- -c
- |
cp -R /usr/local/openjdk-8/jre/lib/security/* /cacerts/
keytool -import -noprompt -trustcacerts -alias local -file /security/tls.crt -keystore /cacerts/cacerts -storepass changeit
keytool -list -keystore /cacerts/cacerts -alias local
volumeMounts:
- mountPath: /cacerts
name: cacerts
- mountPath: /security
name: tls
volumes:
- name: cacerts
emptyDir: {}
- name: tls
secret:
secretName: tls-secret
defaultMode: 0400
</code></pre>
<p><a href="https://i.stack.imgur.com/GAEYK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GAEYK.png" alt="shared keystore" /></a></p>
|
masseyb
|
<p>I am using containerd as a container runtime. When I create a pod, containerd pulls two images.
(the result from <code>ctr -n k8s.io i ls -q</code>)</p>
<pre><code>myprivateregistery/gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1 application/vnd.docker.distribution.manifest.v2+json sha256:2cc826b775aacfb15a89a1f2d6685799f360ddb65f101b656097784cef2bb9d7 39.8 MiB linux/amd64 io.cri-containerd.image=managed
myprivateregistery/gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:2cc826b775aacfb15a89a1f2d6685799f360ddb65f101b656097784cef2bb9d7 application/vnd.docker.distribution.manifest.v2+json sha256:2cc826b775aacfb15a89a1f2d6685799f360ddb65f101b656097784cef2bb9d7 39.8 MiB linux/amd64 io.cri-containerd.image=managed
</code></pre>
<p>A bit later another image appears, and the result from <code>ctr -n k8s.io i ls -q</code> becomes:</p>
<pre><code>myprivateregistery/gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1 application/vnd.docker.distribution.manifest.v2+json sha256:2cc826b775aacfb15a89a1f2d6685799f360ddb65f101b656097784cef2bb9d7 39.8 MiB linux/amd64 io.cri-containerd.image=managed
myprivateregistery/gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:2cc826b775aacfb15a89a1f2d6685799f360ddb65f101b656097784cef2bb9d7 application/vnd.docker.distribution.manifest.v2+json sha256:2cc826b775aacfb15a89a1f2d6685799f360ddb65f101b656097784cef2bb9d7 39.8 MiB linux/amd64 io.cri-containerd.image=managed
sha256:294879c6444ed35b8cb94c613e61c47b9938305a1d1eaf452c0d17db471d99e5 application/vnd.docker.distribution.manifest.v2+json sha256:2cc826b775aacfb15a89a1f2d6685799f360ddb65f101b656097784cef2bb9d7 39.8 MiB linux/amd64 io.cri-containerd.image=managed
</code></pre>
<p>The events received by Containerd:</p>
<pre><code>2021-06-03 10:41:41.942185302 +0000 UTC k8s.io /images/create {"name":"myprivateregistry/gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1"}
2021-06-03 10:41:41.944191919 +0000 UTC k8s.io /images/create {"name":"sha256:294879c6444ed35b8cb94c613e61c47b9938305a1d1eaf452c0d17db471d99e5","labels":{"io.cri-containerd.image":"managed"}}
</code></pre>
<p>The question is why containerd pulls multiple images that are the same. I expect to see one image, but the output of the image listing command shows 3 images pulled. Also, if I create a lot of pods it results in disk pressure because of this.</p>
|
mohamed.Yassine.SOUBKI
|
<p>It doesn't pull two images, it only pulls one image. The tag is actually a reference to a specific hash that it needs to resolve first. So the first request is just to resolve the tag to the hash, and the second is the actual download.</p>
<p>Indirect evidence for that is the small amount of time between the two: you can see that it only took 0.002 seconds to get the reply to the first request, and only after that did it begin downloading.</p>
|
Tim Stoop
|
<p>I've setup a bare metal cluster and want to provide different types of shared storage to my applications, one of which is an s3 bucket I mount via <a href="https://github.com/kahing/goofys" rel="nofollow noreferrer">goofys</a> to a pod that exports if via NFS. I then use the NFS client provisioner to mount the share to automatically provide volumes to pods.</p>
<p>Leaving aside the performance concerns, the issue is that the NFS client provisioner mounts the NFS share via the node's OS, so when I set the server name to the NFS pod, this is passed on to the node and it cannot mount because it has no route to the service/pod.</p>
<p>The only solution so far has been to configure the service as NodePort, block external connections via ufw on the node, and configure the client provisioner to connect to 127.0.0.1:nodeport.</p>
<p>I'm wondering if there is a way for the node to reach a cluster service using the service's dns name?</p>
|
Assis Ngolo
|
<p>I've managed to get around my issue by configuring the NFS client provisioner to use the service's clusterIP instead of the dns name, because the node is unable to resolve it to the IP, but it does have a route to the IP. Since the IP will remain allocated unless I delete the service, this is scalable, but of course can't be automated easily as a redeployment of the nfs server helm chart will change the service's IP.</p>
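<p>If it helps, the clusterIP can also be pinned explicitly in the service spec so a redeploy keeps the same address. A sketch, with an illustrative name and an IP that must be a free address inside your cluster's service CIDR:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nfs-server        # illustrative name
spec:
  clusterIP: 10.96.0.100  # must be a free IP inside the service CIDR
  ports:
  - port: 2049
  selector:
    app: nfs-server
</code></pre>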
|
Assis Ngolo
|
<p>I'm trying to figure out ways to automate k8s deployments in an EKS cluster. I'm trying to set up namespaces for each specific environment. One for dev, one for staging, and one for production. My production namespace is in a separate region and also in a separate cluster (dev & staging are in one cluster). I'm a little new to this concept, but does it make sense to have each respective application load balancer in it's respective namespace? Is that practice common or best practice? Any ideas on automating deployments would be appreciated.</p>
|
Dave Michaels
|
<p>Hi <a href="https://stackoverflow.com/users/12145078/dave-michaels">Dave Michaels</a>,
I assume there are two questions in your post above:</p>
<ol>
<li><p>If we use a dedicated namespace per environment in the same cluster (the dev & staging setup), can we use a dedicated load balancer for each of these namespaces? Is this good practice?
Answer: Yes. As you are using the namespace concept for each environment in the same cluster, it is OK to create a dedicated load balancer (promise me you will use ingress :)) in each of these namespaces, as we need an easier way to access those environments. To be frank, I am not a fan of using namespaces for environments, because as your cluster grows and lots of microservices get added to it, you might want to use namespaces for another reason, e.g. a namespace per team or domain to have granular access rights. But I have seen teams using them for different environments successfully as well.</p>
</li>
<li><p>Suggestions for automating Kubernetes deployments?
This is a large topic by itself.
As your microservices grow, you will have multiple Kubernetes manifests to handle. The first thing I will suggest is to use either a configuration manager like <a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a> or a package manager like <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> to segregate variables from the actual manifests; this makes it easy to automate deployment across environments (same cluster or different clusters), see the sketch after this list. Coming to the actual deployment automation, if there is no existing CD in place I would suggest exploring tools that natively support Kubernetes and GitOps, like <a href="https://fluxcd.io/" rel="nofollow noreferrer">FluxCD</a> or <a href="https://argoproj.github.io/argo-cd/" rel="nofollow noreferrer">ArgoCD</a>, etc.</p>
</li>
</ol>
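<p>As a concrete illustration of the Kustomize approach (directory and resource names are only placeholders), one base plus an overlay per environment keeps the manifests identical and the per-environment differences small, and the CD tool just applies the right overlay:</p>
<pre><code>my-app/
  base/
    deployment.yaml
    service.yaml
    kustomization.yaml
  overlays/
    dev/
      kustomization.yaml        # patches replicas, image tag, namespace, ...
    staging/
      kustomization.yaml
    production/
      kustomization.yaml

# deploy one environment (kubectl 1.14+ has kustomize built in)
kubectl apply -k overlays/staging
</code></pre>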
|
Narain
|
<p>I'm managing Kubernetes + nginx.</p>
<p>I'd like to install dynamic modules on nginx that are provided by Nginx Ingress Controller.
Those dynamic modules are not offered by Nginx Ingress Controller official configmap (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/</a>)</p>
<p>So I believe, I need to build my own Docker container of Nginx Ingress Controller.
(Could be added at this? <a href="https://github.com/kubernetes/ingress-nginx/blob/8951b7e22ad3952c549150f61d7346f272c563e1/images/nginx/rootfs/build.sh#L618-L632" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/blob/8951b7e22ad3952c549150f61d7346f272c563e1/images/nginx/rootfs/build.sh#L618-L632</a> )</p>
<p>Do you know how we can customize the controller and manage it by helm chart? I'm thinking about making a Fork branch from the controller master repo on Github.
But I don't have any idea on how we install a customized version of the controller on terraform + helm chart.</p>
<p>However, I would prefer to use a non-customizable solution (because of some annotation settings)</p>
<p>Environment:
Kubernetes
Nginx Ingress Controller is installed by helm chart + terraform
Nginx Ingress Controller -> <a href="https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx</a></p>
<p>Terraform:</p>
<pre><code>resource "helm_release" "nginx-ingress-controller" {
name = "nginx-ingress-controller"
chart = "ingress-nginx/ingress-nginx"
namespace = "kube-system"
version = "3.34.0"
}
</code></pre>
<p>dynamic modules
<a href="https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/" rel="noreferrer">https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/</a>
(install process might be using <code>--add-dynamic-module</code> option, and set <code>load_module modules/something.so</code> on <code>nginx.conf</code> via <code>ingress.yaml</code>)</p>
<p>Thank you.</p>
|
mto
|
<h1>TL;DR</h1>
<p>Extend the official image with the dynamic modules, and update the <code>helm_release</code> <code>terraform</code> resource to <code>set</code> the <code>controller.image.registry</code>, <code>controller.image.image</code>, <code>controller.image.tag</code>, <code>controller.image.digest</code>, and <code>controller.image.digestChroot</code> for your custom image along with a <code>controller.config.main-snippet</code> to load the dynamic module(s) in the main context.</p>
<hr />
<p>This is similar to my previous <a href="https://stackoverflow.com/a/57741684/1423507">answer for building modules using the official nginx image</a>. You can extend the <code>ingress-nginx/controller</code> image, build the modules in one stage, extend the official image with the dynamic modules in another stage, and use the image in your <code>helm_release</code>. An example for extending the <code>ingress-nginx/controller</code> with the <a href="https://github.com/openresty/echo-nginx-module" rel="nofollow noreferrer"><code>echo-nginx-module</code></a> e.g.:</p>
<h2>Docker</h2>
<pre><code>ARG INGRESS_NGINX_CONTROLLER_VERSION
FROM registry.k8s.io/ingress-nginx/controller:${INGRESS_NGINX_CONTROLLER_VERSION} as build
ARG INGRESS_NGINX_CONTROLLER_VERSION
ENV INGRESS_NGINX_CONTROLLER_VERSION=${INGRESS_NGINX_CONTROLLER_VERSION}
USER root
RUN apk add \
automake \
ca-certificates \
curl \
gcc \
g++ \
make \
pcre-dev \
zlib-dev
RUN NGINX_VERSION=$(nginx -V 2>&1 |sed -n -e 's/nginx version: //p' |cut -d'/' -f2); \
    mkdir -p /tmp/nginx && \
    curl -L "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" | tar -C /tmp/nginx --strip-components=1 -xz
WORKDIR /src/echo-nginx-module
RUN curl -L https://github.com/openresty/echo-nginx-module/archive/refs/tags/v0.63.tar.gz | tar --strip-components=1 -xz
WORKDIR /tmp/nginx
RUN ./configure --with-compat --add-dynamic-module=/src/echo-nginx-module && \
make modules
FROM registry.k8s.io/ingress-nginx/controller:${INGRESS_NGINX_CONTROLLER_VERSION}
COPY --from=build /tmp/nginx/objs/ngx_http_echo_module.so /etc/nginx/modules/
</code></pre>
<p>... build and push the image e.g.: <code>docker build --rm -t myrepo/ingress-nginx/controller:v1.5.1-echo --build-arg INGRESS_NGINX_CONTROLLER_VERSION=v1.5.1 . && docker push myrepo/ingress-nginx/controller:v1.5.1-echo</code></p>
<h2>Terraform</h2>
<p>Update the <code>terraform</code> <code>helm_release</code> resource to install the charts using the custom image and adding a <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#main-snippet" rel="nofollow noreferrer"><code>main-snippet</code></a> to set the <a href="https://nginx.org/en/docs/ngx_core_module.html#load_module" rel="nofollow noreferrer"><code>load_module</code></a> directive in the <code>main</code> context:</p>
<pre class="lang-hcl prettyprint-override"><code>resource "helm_release" "ingress-nginx" {
name = "ingress-nginx"
namespace = "kube-system"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
version = "3.34.0"
set {
name = "controller.image.registry"
value = "myrepo"
}
set {
name = "controller.image.image"
value = "ingress-nginx/controller"
}
set {
name = "controller.image.tag"
value = "v1.5.1-echo"
}
set {
name = "controller.image.digest"
value = "sha256:1b32b3e8c983ef4a32d87dead51fbbf2a2c085f1deff6aa27a212ca6beefcb72"
}
set {
name = "controller.image.digestChroot"
value = "sha256:f2e1146adeadac8eebb251284f45f8569beef9c6ec834ae1335d26617da6af2d"
}
set {
name = "controller.config.main-snippet"
value = <<EOF
load_module /etc/nginx/modules/ngx_http_echo_module.so;
EOF
}
}
</code></pre>
<p>The <code>controller.image.digest</code> is the image <code>RepoDigest</code>: <code>docker inspect myrepo/ingress-nginx/controller:v1.5.1-echo --format '{{range .RepoDigests}}{{println .}}{{end}}' |cut -d'@' -f2</code></p>
<p>The <code>controller.image.digestChroot</code> is the <code>Parent</code> sha: <code>docker inspect myrepo/ingress-nginx/controller:v1.5.1-echo --format {{.Parent}}</code></p>
<h2>Test</h2>
<ol>
<li>Create a <code>nginx</code> pod: <code>kubectl run nginx --image=nginx</code></li>
<li>Expose the pod: <code>kubectl expose pod nginx --port 80 --target-port 80</code></li>
<li>Create an ingress with a <code>server-snippet</code>:</li>
</ol>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/server-snippet: |
location /hello {
echo "hello, world!";
}
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: echo.example.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: nginx
port:
number: 80
tls:
- hosts:
- echo.example.com
secretName: tls-echo
EOF
</code></pre>
<blockquote>
<p>Using <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer"><code>cert-manager</code></a> for TLS certificates issuance and <a href="https://github.com/kubernetes-sigs/external-dns" rel="nofollow noreferrer"><code>external-dns</code></a> for DNS management.</p>
</blockquote>
<ol start="4">
<li>Test using <code>curl</code>:</li>
</ol>
<p><a href="https://i.stack.imgur.com/ZoDrF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZoDrF.png" alt="curl echo module test on publicly exposed app" /></a></p>
|
masseyb
|
<p>Following <a href="https://stackoverflow.com/questions/60067799/building-multi-architecture-docker-images-with-skaffold">Building multi-architecture docker images with Skaffold</a>, I've been able to successfully continue building my multi-architecture (AMD64 and ARM64) images.</p>
<p>However, it looks like the kubernetes cluster ends up pulling the AMD64 image, as I'm seeing:</p>
<pre><code>standard_init_linux.go:211: exec user process caused "exec format error"
</code></pre>
<p>in the logs.</p>
<p>I've looked at <a href="https://skaffold.dev/docs/references/yaml/" rel="nofollow noreferrer">https://skaffold.dev/docs/references/yaml/</a> but that didn't appear to shed any light on how I can ensure it uses the correct architecture.</p>
<p>Thanks in advance.</p>
|
twilson
|
<p>Skaffold <code>v2.0.0</code> and beyond now has explicit support for cross-platform and multi-platform builds. See the relevant docs here:
<a href="https://skaffold.dev/docs/workflows/handling-platforms/" rel="nofollow noreferrer">https://skaffold.dev/docs/workflows/handling-platforms/</a></p>
|
aaron-prindle
|
<p>I have Docker Desktop and kubectl installed. I am trying to connect to the cluster from my local PC and getting the above error.
Here is my kubeconfig file:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: ****
name: AKS-CLUSTER
contexts:
- context:
cluster: AKS-CLUSTER
user: clusterUser_D-AKS_AKS-CLUSTER
name: AKS-CLUSTER
current-context: AKS-CLUSTER
kind: Config
preferences: {}
users:
- name: clusterUser_D-AKS_AKS-CLUSTER
user:
client-certificate-data: ****
client-key-data: ****
token: ****
</code></pre>
|
megha
|
<p>One reason for this is the AKS cluster certificate getting rotated. See <a href="https://learn.microsoft.com/en-us/azure/aks/certificate-rotation" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/certificate-rotation</a>.</p>
<p>As outlined there, run:</p>
<pre><code>az aks get-credentials -g <RESOURCE_GROUP_NAME> -n <CLUSTER_NAME> --overwrite-existing
</code></pre>
<p>and see if it gives you an output like:</p>
<pre><code>Merged "<CLUSTER_NAME>" as current context in /home/<USER>/.kube/config
</code></pre>
|
arun
|
<p>Let's suppose I have bare-metal servers forming a Kubernetes cluster where I am deploying my application. How can I point one domain name to all of the worker nodes of the cluster without a Load Balancing Service or Ingress Controller ?</p>
|
joe1531
|
<p>One suggestion could be to forget that these are Kubernetes worker nodes and think about how you would point a domain at any set of instances. Imagine you are running copies of your static website on 10 servers and you want the same domain to reach all of them: you need either an external load balancer or a reverse proxy. But the bigger question is why you would want to do that, as worker nodes are short-lived, so you have to be dynamic about load balancing them. That's where a Service or Ingress helps, as it knows when a worker node leaves or gets added to the cluster dynamically. Check out the possibilities listed here: <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/</a></p>
|
Narain
|
<p>I'm attempting to deploy a dask application on Kubernetes/Azure. I have a Flask application server that is the client of a Dask scheduler/workers.</p>
<p>I installed the Dask operator as described <a href="https://kubernetes.dask.org/en/latest/#kubecluster" rel="nofollow noreferrer">here</a>:</p>
<pre><code>helm install --repo https://helm.dask.org --create-namespace -n dask-operator --generate-name dask-kubernetes-operator
</code></pre>
<p>This created the scheduler and worker pods, I have them running on Kubernetes without errors.</p>
<p>For the Flask application, I have a Docker image with the following Dockerfile:</p>
<pre><code>FROM daskdev/dask
RUN apt-get -y install python3-pip
RUN pip3 install flask
RUN pip3 install gunicorn
RUN pip3 install "dask[complete]"
RUN pip3 install "dask[distributed]" --upgrade
RUN pip3 install "dask-ml[complete]"
</code></pre>
<p>Whenever I try to run a function in the workers using the <code>Client</code> interface, I get this error in the scheduler pod:</p>
<pre><code>TypeError: update_graph() got an unexpected keyword argument 'graph_header'
</code></pre>
<p>It seems to me that the Dask image used to run Flask and the Dask Kubernetes that I installed are not compatible or aligned?</p>
<p>How to create an image that includes Dask for the Flask server that can be integrated with the Dask Kubernetes package?</p>
<p>I run in Flask <code>client.get_versions(check=True)</code> and this is what I get:</p>
<p>{'scheduler': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'workers': {'tcp://10.244.0.3:40749': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.0.4:36757': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}, 'tcp://10.244.1.7:40561': {'host': {'python': '3.8.15.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.15.final.0', 'dask': '2023.1.0', 'distributed': '2023.1.0', 'msgpack': '1.0.4', 'cloudpickle': '2.2.0', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.24.1', 'pandas': '1.5.2', 'lz4': '4.2.0'}}}, 'client': {'host': {'python': '3.8.16.final.0', 'python-bits': 64, 'OS': 'Linux', 'OS-release': '5.4.0-1105-azure', 'machine': 'x86_64', 'processor': 'x86_64', 'byteorder': 'little', 'LC_ALL': 'C.UTF-8', 'LANG': 'C.UTF-8'}, 'packages': {'python': '3.8.16.final.0', 'dask': '2023.4.0', 'distributed': '2023.4.0', 'msgpack': '1.0.5', 'cloudpickle': '2.2.1', 'tornado': '6.2', 'toolz': '0.12.0', 'numpy': '1.23.5', 'pandas': '2.0.0', 'lz4': '4.3.2'}}} @ 2023-04-20 13:33:09.921545"}</p>
|
ps0604
|
<p>Solved: I just forced the Dockerfile to use dask version 2023.1.0; that fixed the problem and matched the operator's dask version.</p>
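<p>Concretely, that just means pinning the versions in the Dockerfile, roughly like this (the exact extras you need may differ):</p>
<pre><code>FROM daskdev/dask
RUN pip3 install flask gunicorn
RUN pip3 install "dask[complete]==2023.1.0" "distributed==2023.1.0"
RUN pip3 install "dask-ml[complete]"
</code></pre>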
|
ps0604
|
<p>Two of my microk8s clusters running version 1.21 just stopped working.</p>
<p>kubectl locally returns <code>The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?</code></p>
<p>microk8s.status says not running, and microk8s.inspect just checks four services:</p>
<pre><code>Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-kubelite is running
</code></pre>
<p>Apiserver not mentioned, and it's not running (checking status for that separately says "Will not run along with kubelite")</p>
<p>I didn't change anything on any of the machines.</p>
<p>I tried upgrading microk8s to 1.22 - no change.</p>
<p>journal.log for apiserver says:</p>
<pre><code>Oct 18 07:57:05 myserver microk8s.daemon-kubelite[30037]: I1018 07:57:05.143264 30037 daemon.go:65] Starting API Server
Oct 18 07:57:05 myserver microk8s.daemon-kubelite[30037]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
Oct 18 07:57:05 myserver microk8s.daemon-kubelite[30037]: I1018 07:57:05.144650 30037 server.go:654] external host was not specified, using 192.168.1.10
Oct 18 07:57:05 myserver microk8s.daemon-kubelite[30037]: W1018 07:57:05.144719 30037 authentication.go:507] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
</code></pre>
<p>snap services:</p>
<pre><code>Service Startup Current Notes
microk8s.daemon-apiserver enabled inactive -
microk8s.daemon-apiserver-kicker enabled active -
microk8s.daemon-cluster-agent enabled active -
microk8s.daemon-containerd enabled active -
microk8s.daemon-control-plane-kicker enabled inactive -
microk8s.daemon-controller-manager enabled inactive -
microk8s.daemon-etcd enabled inactive -
microk8s.daemon-flanneld enabled inactive -
microk8s.daemon-kubelet enabled inactive -
microk8s.daemon-kubelite enabled active -
microk8s.daemon-proxy enabled inactive -
microk8s.daemon-scheduler enabled inactive -
</code></pre>
<p>It's not this (<a href="https://github.com/ubuntu/microk8s/issues/2486" rel="nofollow noreferrer">https://github.com/ubuntu/microk8s/issues/2486</a>), both info.yaml and cluster.yaml have the correct contents.</p>
<p>All machines are virtual Ubuntus running in Hyper-V in a Windows Server cluster.</p>
|
Anders Bornholm
|
<p>Turns out there were two different problems in the cluster, and that I hadn't changed anything was not entirely true.</p>
<h3>Single-node cluster:</h3>
<p>cluster.yaml was not correct, it was empty. Copying the contents of localnode.yaml to cluster.yaml fixed the problem.</p>
<h3>Multi-node cluster:</h3>
<p>One node had gone offline (microk8s not running) due to a stuck unsuccessful auto-refresh of the microk8s snap.</p>
<p>I had temporarily shut down one node for a couple of days. That left only one node to hold the vote on master for dqlite, which failed. When the shut down node was turned back on the cluster had already failed. Unsticking the auto-refresh on the third node fixed the cluster.</p>
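<p>For anyone hitting the same thing, the stuck refresh can usually be found and cleared with snapd's own tooling; the change id below is a placeholder you read off the <code>snap changes</code> output:</p>
<pre><code>snap changes                  # look for a microk8s refresh stuck in "Doing"
sudo snap abort <change-id>   # abort the stuck change
sudo snap refresh microk8s    # retry the refresh
</code></pre>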
|
Anders Bornholm
|
<p>I've set up k3s v1.20.4+k3s1 with Klipper LB and nginx ingress 3.24.0 from the helm charts.
I'm following <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">this article</a> but I'm stumbling upon a very weird issue where my ingress hosts would point to the wrong service.</p>
<p>Here is my configuration:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-routing
spec:
rules:
- host: echo1.stage.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: echo1
port:
number: 80
- host: echo2.stage.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: echo2
port:
number: 80
---
apiVersion: v1
kind: Service
metadata:
name: echo1
spec:
ports:
- port: 80
targetPort: 5678
selector:
app: echo1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: echo1
spec:
selector:
matchLabels:
app: echo1
replicas: 2
template:
metadata:
labels:
app: echo1
spec:
containers:
- name: echo1
image: hashicorp/http-echo
args:
- "-text=This is echo1"
ports:
- containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
name: echo2
spec:
ports:
- port: 80
targetPort: 5678
selector:
app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: echo2
spec:
selector:
matchLabels:
app: echo2
replicas: 1
template:
metadata:
labels:
app: echo2
spec:
containers:
- name: echo2
image: hashicorp/http-echo
args:
- "-text=This is the new (echo2)"
ports:
- containerPort: 5678
</code></pre>
<p>And my Cloudflare DNS records (no DNS proxy activated):</p>
<pre><code>;; A Records
api.stage.example.com. 1 IN A 162.15.166.240
echo1.stage.example.com. 1 IN A 162.15.166.240
echo2.stage.example.com. 1 IN A 162.15.166.240
</code></pre>
<p>But when I do a curl on echo1.stage.example.com multiple times, here is what I get:</p>
<pre><code>$ curl echo1.stage.example.com
This is echo1
$ curl echo1.stage.example.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
$ curl echo1.stage.example.com
This is the new (echo2)
</code></pre>
<p>Sometimes I get a bad gateway, sometimes I get the <code>echo1.stage.example.com</code> domain pointing to the service assigned to <code>echo2.stage.example.com</code>. Is this because of the LB? Or a bad configuration on my end? Thanks!</p>
<p>EDIT: It's not coming from the LB, I just switched to metallb and I still get the same issue</p>
|
E-Kami
|
<p>OK, I found the issue. It was actually not related to the config I previously posted, but to my <code>kustomization.yaml</code> config, where I had:</p>
<pre><code>commonLabels:
app: myapp
</code></pre>
<p>Just removing that <code>commonLabels</code> solved the issue.</p>
|
E-Kami
|
<p>I have a terraform-managed EKS cluster. It used to have 2 nodes on it. I doubled the number of nodes (4).</p>
<p>I have a kubernetes_deployment resource that automatically deploys a fixed number of pods to the cluster. It was set to 20 when I had 2 nodes, and seemed evenly distributed with 10 each. I doubled that number to 40.</p>
<p>All of the new pods for the kubernetes deployment are being scheduled on the first 2 (original) nodes. Now the two original nodes have 20 pods each, while the 2 new nodes have 0 pods. The new nodes are up and ready to go, but I cannot get kubernetes to schedule the new pods on those new nodes.</p>
<p>I am unsure where to even begin searching, as I am fairly new to k8s and ops in general.</p>
<p>A few beginner questions that may be related:</p>
<ol>
<li><p>I'm reading about pod affinity, and it seems like I could tell k8s to have a pod ANTI affinity with itself within a deployment. However, I am having trouble setting up the anti-affinity rules. I see that the <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment#preferred_during_scheduling_ignored_during_execution" rel="nofollow noreferrer">kubernetes_deployment</a> resource has a scheduling argument, but I can't seem to get the syntax right.</p>
</li>
<li><p>Naively it seems that the issue may be that the deployment somehow isn't aware of the new nodes. If that is the case, how could I reboot the entire deployment (without taking down the already-running pods)?</p>
</li>
<li><p>Is there a cluster level scheduler that I need to set? I was under the impression that the default does round robin, which doesn't seem to be happening at the node level.</p>
</li>
</ol>
<p>EDIT:
The EKS terraform module <a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/11.0.0/submodules/node_groups" rel="nofollow noreferrer">node_groups submodule</a> has fields for desired/min/max_capacity. To increase my worker nodes, I just increased those numbers. The change is reflected in the aws eks console.</p>
|
theahura
|
<p>Check a couple of things:</p>
<ol>
<li>Do your nodes show up correctly in the output of <code>kubectl get nodes -o wide</code> and do they have a state of ready?</li>
<li>Instead of pod affinity, look into <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">pod topology spread constraints</a> (see the sketch below this list). Anti-affinity will not work with multiple pods.</li>
</ol>
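<p>A rough sketch of such a constraint on the deployment's pod template (label values are placeholders); it asks the scheduler to keep the pod count per node within a skew of 1:</p>
<pre><code>spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
</code></pre>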
|
Jonathan
|
<p>Trying to assign a persistent volume to an uWSGI application, but I'm getting the following error: <code>bind(): Operation not permitted [core/socket.c line 230]</code>.
It works when I assign a non-persistent "emptyDir" volume.</p>
<p>Here are the yaml files of the persistent volume I'm trying to assign:</p>
<pre><code>#volume claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: api-storage
spec:
accessModes:
- ReadWriteMany
storageClassName: api-storage
resources:
requests:
storage: 100Gi
</code></pre>
<pre><code>#storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: api-storage
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=root
- gid=root
parameters:
skuName: Standard_LRS
</code></pre>
<p>The manifest of the application looks like this :</p>
<pre><code>
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-app
spec:
replicas: 2
selector:
matchLabels:
app: api-app
template:
metadata:
labels:
app: api-app
spec:
containers:
- name: nginx
image: nginx
lifecycle:
preStop:
exec:
command: ["/usr/sbin/nginx","-s","quit"]
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 50m
memory: 100Mi
requests:
cpu: 10m
memory: 50Mi
volumeMounts:
- name: storage
mountPath: /var/run/api
- name: nginx-conf
mountPath: /etc/nginx/conf.d
- name: api-app
image: azurecr.io/api_api_se:opencv
workingDir: /app
command: ["/usr/local/bin/uwsgi"]
args:
- "--die-on-term"
- "--manage-script-name"
- "--mount=/=api:app_dispatch"
- "--socket=/var/run/api/uwsgi.sock"
- "--chmod-socket=777"
- "--pyargv=se"
# - "--metrics-dir=/storage"
# - "--metrics-dir-restore"
resources:
requests:
cpu: 150m
memory: 1Gi
volumeMounts:
- name: storage
mountPath: /var/run/api
# - name: storage
# mountPath: /storage
volumes:
- name: storage
# work's if following two lines are substituted with "emptyDir: {}"
persistentVolumeClaim:
claimName: api-storage
- name: nginx-conf
configMap:
name: api
tolerations:
- key: "sku"
operator: "Equal"
value: "test"
effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
labels:
app: api-app
name: api-app
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: api-app
</code></pre>
<p>The final goal is to collect metrics from uWSGI; at the moment, the metrics get deleted if the pod is removed by a scale-down.</p>
|
user1334557
|
<p>To solve this problem I had to create the actual folder first, in my case <code>/storage</code>, in the <em>Dockerfile</em> while building the application image, so I added <code>RUN mkdir /storage</code> to the <em>Dockerfile</em></p>
|
user1334557
|
<p>I'm confused by one behavior of pods in k8s. I pulled and ran my alpine container and it works fine when I check with the docker ps -a command, but when I run it through k8s, the output of kubectl get pod shows Completed. Although in the Dockerfile I typed
CMD ["sleep", "3600"], it does not sleep in k8s. I can send it to sleep with kubectl run myalpine --image=myalpine -- sleep infinity and the pod works fine, but I don't want to use that command; I expect that when I clearly put the sleep command in the Dockerfile and build it, k8s should run it as well.
I would really appreciate it if someone could explain this behavior of the pod.</p>
|
lutube
|
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">The documentation</a> has some useful explanations:</p>
<blockquote>
<p>Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.</p>
</blockquote>
<p>Think of such as the unit of “deployment”—And excuse the abuse of terminology here, since <em>deployment</em> itself is a very well defined and precise concept in k8s as well, namely another type of workload. Pods are classified as a workload.</p>
<blockquote>
<p>A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.</p>
</blockquote>
<p>So that’s one part of the answer you’re looking for: a pod models a “logical host” where you can assemble fairly-fully-functional applications (“microservices” if you want, that do 1 basic thing, but do it well) out of one or several containers together.</p>
<p>I like to think of this as some form of composition but brought up to the application level via patterns such as <em>sidecar</em> and <em>adapter</em>. Similar to how you implement cross-cutting concerns in DDD, such as logging, by abstracting them and providing a generic implementation that ought to work the same when used by any class (“attached” to any main container) albeit some wiring work being required. It is precisely that PodSpec that wires these containers up.</p>
<blockquote>
<p>As well as application containers, a Pod can contain init containers that run during Pod startup. You can also inject ephemeral containers for debugging if your cluster offers this.</p>
</blockquote>
<p>Continues supporting the analogy I gave above to composition in OOP/DDD.</p>
<p>The official documentation continues adding more explanations along those lines.</p>
<blockquote>
<p>In terms of Docker concepts, a Pod is similar to a group of Docker containers with shared namespaces and shared filesystem volumes.</p>
</blockquote>
<p>Lastly, when you want to run a container in Kubernetes (necessarily through a pod) via <code>kubectl run</code> be careful you are not overriding the container entrypoint/command-args:</p>
<blockquote>
<p>When you override the default Entrypoint and Cmd, these rules apply:</p>
<ul>
<li>If you do not supply <code>command</code> or <code>args</code> for a Container, the defaults defined in the Docker image are used.</li>
<li>If you supply a <code>command</code> but no <code>args</code> for a Container, only the supplied <code>command</code> is used. The default EntryPoint and the default Cmd
defined in the Docker image are ignored.</li>
<li>If you supply only <code>args</code> for a Container, the default Entrypoint defined in the Docker image is run with the <code>args</code> that you supplied.</li>
<li>If you supply a <code>command</code> and <code>args</code>, the default Entrypoint and the default Cmd defined in the Docker image are ignored. Your <code>command</code> is
run with your <code>args</code>.</li>
</ul>
</blockquote>
|
Fernando Espinosa
|
<p>In my terraform config files I create a Kubernetes cluster on GKE and when created, set up a Kubernetes provider to access said cluster and perform various actions such as setting up namespaces.</p>
<p>The problem is that some new namespaces were created in the cluster without terraform and now my attempts to import these namespaces into my state seem fail due to inability to connect to the cluster, which I believe is due to the following (taken from Terraform's official documentation of the import command):</p>
<blockquote>
<p>The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.</p>
</blockquote>
<p>The command I used to import the namespaces is pretty straightforward:</p>
<p><code>terraform import kubernetes_namespace.my_new_namespace my_new_namespace</code></p>
<p>I also tried using the <code>-provider=""</code> and <code>-config=""</code> flags but to no avail.</p>
<p>My Kubernetes provider configuration is this:</p>
<pre><code>provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
</code></pre>
<p>An example for a namespace resource I am trying to import is this:</p>
<pre><code>resource "kubernetes_namespace" "my_new_namespace" {
metadata {
name = "my_new_namespace"
}
}
</code></pre>
<p>The import command results in the following:</p>
<blockquote>
<p>Error: Get <a href="http://localhost/api/v1/namespaces/my_new_namespace" rel="noreferrer">http://localhost/api/v1/namespaces/my_new_namespace</a>: dial tcp [::1]:80: connect: connection refused</p>
</blockquote>
<p>It's obvious it's doomed to fail since it's trying to reach <code>localhost</code> instead of the actual cluster IP and configurations.</p>
<p>Is there any workaround for this use case?</p>
<p>Thanks in advance.</p>
|
user1384377
|
<p>the issue lies with the dynamic data provider. The <code>import</code> statement doesn't have access to it.</p>
<p>For the process of importing, you have to hardcode the provider values.</p>
<p>Change this:</p>
<pre><code>provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
</code></pre>
<p>to:</p>
<pre><code>provider "kubernetes" {
version = "~> 1.8"
host = "https://<ip-of-cluster>"
token = "<token>"
cluster_ca_certificate = base64decode(<cert>)
load_config_file = false
}
</code></pre>
<ul>
<li>The token can be retrieved from <code>gcloud auth print-access-token</code>.</li>
<li>The IP and cert can be retrieved by inspecting the created container resource using <code>terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here</code></li>
</ul>
<p>For provider version 2+ you have to drop <code>load_config_file</code>.</p>
<p>Once in place, import and revert the changes on the provider.</p>
|
Jonathan
|
<p>I'd like to show entries that have <code>.metadata.labels.app</code> set to the value <code>"myapp"</code>.</p>
<p>Command:</p>
<pre><code>kubectl get pods -o go-template --template="{{range .items}}{{if eq .metadata.labels.app "myapp"}}{{.metadata.name}} {{end}}{{end}}"
</code></pre>
<p>It gives an error:</p>
<blockquote>
<p>output:1: function "myapp" not defined</p>
</blockquote>
<p>The structures look like this:</p>
<pre><code>- apiVersion: v1
kind: Pod
metadata:
creationTimestamp: 2017-09-15T08:18:26Z
generateName: myapp-2830765207-
labels:
app: myapp
pod-template-hash: "2830765207"
name: myapp-2830765207-dh359
namespace: default
</code></pre>
|
Velkan
|
<p>I haven't used kubetcl before, but I am familiar with shell commands in general, from which I can tell you one thing that's going wrong, and maybe that's all you need. (I'm also somewhat familiar with Go templates, and your string comparison looks fine to me.) By using double quotes around your template and within your template, you're actually closing the string you're passing in as the template at the first double quote in <code>"myapp"</code>. Using single quotes around the template should help:</p>
<pre><code>kubectl get pods -o go-template --template='{{range .items}}{{if eq .metadata.labels.app "myapp"}}{{.metadata.name}} {{end}}{{end}}'
</code></pre>
|
Darshan Rivka Whittle
|
<p>I have a Kubernetes deployment that deploys a Java application based on the <a href="https://hub.docker.com/r/anapsix/alpine-java/" rel="nofollow noreferrer">anapsix/alpine-java</a> image. There is nothing else running in the container expect for the Java application and the container overhead.</p>
<p>I want to maximise the amount of memory the Java process can use inside the docker container and minimise the amount of ram that will be reserved but never used.</p>
<p><strong>For example I have:</strong></p>
<ol>
<li><strong>Two Kubernetes nodes that have 8 gig of ram each and no swap</strong></li>
<li><strong>A Kubernetes deployment that runs a Java process consuming a maximum of 1 gig of heap to operate optimally</strong></li>
</ol>
<p><em><strong>How can I safely maximise the amount of pods running on the two nodes while never having Kubernetes terminate my PODs because of memory limits?</strong></em></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: my-deployment
spec:
containers:
- name: my-deployment
image: myreg:5000/my-deployment:0.0.1-SNAPSHOT
ports:
- containerPort: 8080
name: http
resources:
requests:
memory: 1024Mi
limits:
memory: 1024Mi
</code></pre>
<p>Java 8 update 131+ has a flag -XX:+UseCGroupMemoryLimitForHeap to use the Docker limits that come from the Kubernetes deployment.</p>
<p><strong>My Docker experiments show me what is happening in Kubernetes</strong></p>
<p><strong>If I run the following in Docker:</strong></p>
<pre><code>docker run -m 1024m anapsix/alpine-java:8_server-jre_unlimited java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -version
</code></pre>
<p><em><strong>I get:</strong></em></p>
<pre><code>VM settings:
Max. Heap Size (Estimated): 228.00M
</code></pre>
<p>This low value is because Java sets -XX:MaxRAMFraction to 4 by default and I get about 1/4 of the ram allocated...</p>
<p><strong>If I run the same command with -XX:MaxRAMFraction=2 in Docker:</strong></p>
<pre><code>docker run -m 1024m anapsix/alpine-java:8_server-jre_unlimited java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -XX:MaxRAMFraction=2 -version
</code></pre>
<p><em><strong>I get:</strong></em></p>
<pre><code>VM settings:
Max. Heap Size (Estimated): 455.50M
</code></pre>
<p>Finally setting MaxRAMFraction=1 quickly causes Kubernetes to Kill my container.</p>
<pre><code>docker run -m 1024m anapsix/alpine-java:8_server-jre_unlimited java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -XX:MaxRAMFraction=1 -version
</code></pre>
<p><em><strong>I get:</strong></em></p>
<pre><code>VM settings:
Max. Heap Size (Estimated): 910.50M
</code></pre>
|
rjdkolb
|
<p>The reason Kubernetes kills your pods is the <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="noreferrer">resource limit</a>. It is difficult to calculate because of container overhead and the usual mismatches between decimal and binary prefixes in the specification of memory usage. My solution is to entirely drop the limit and only keep the requirement (which is what your pod will have available in any case if it is scheduled). Rely on the JVM to limit its heap via static specification and let Kubernetes manage how many pods are scheduled on a single node via the resource requirement.</p>
<p>At first you will need to determine the actual memory usage of your container when running with your desired heap size. Run a pod with <code>-Xmx1024m -Xms1024m</code> and connect to the host's docker daemon it's scheduled on. Run <code>docker ps</code> to find your pod and <code>docker stats <container></code> to see its current memory usage, which is the sum of the JVM heap, other static JVM usage like direct memory, and your container's overhead (alpine with glibc). This value should only fluctuate within kibibytes because of some network usage that is handled outside the JVM. Add this value as the memory requirement to your pod template.</p>
<p>Calculate or estimate how much memory other components on your nodes need to function properly. There will at least be the Kubernetes kubelet, the Linux kernel, its userland, probably an SSH daemon and in your case a docker daemon running on them. You can choose a generous default like 1 Gibibyte excluding the kubelet if you can spare the extra few bytes. Specify <code>--system-reserved=1Gi</code> and <code>--kube-reserved=100Mi</code> in your kubelets flags and restart it. This will add those reserved resources to the Kubernetes schedulers calculations when determining how many pods can run on a node. See the <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="noreferrer">official Kubernetes documentation</a> for more information. </p>
<p>This way there will probably be five to seven pods scheduled on a node with eight Gigabytes of RAM, depending on the above chosen and measured values. They will be guaranteed the RAM specified in the memory requirement and will not be terminated. Verify the memory usage via <code>kubectl describe node</code> under <code>Allocated resources</code>. As for elegancy/flexibility, you just need to adjust the memory requirement and JVM heap size if you want to increase RAM available to your application.</p>
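<p>As a small sketch of where this ends up (the request value is an illustrative measurement, and the jar path is a placeholder), the container gets a memory request but no limit, and the heap is bounded statically:</p>
<pre><code>containers:
  - name: my-deployment
    image: myreg:5000/my-deployment:0.0.1-SNAPSHOT
    command: ["java", "-Xms1024m", "-Xmx1024m", "-jar", "/app.jar"]
    resources:
      requests:
        memory: 1200Mi   # measured with docker stats: heap + JVM overhead + container overhead
      # no memory limit: the JVM's fixed heap bounds usage instead
</code></pre>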
<p>This approach only works assuming that the pod's memory usage will not explode; if it were not limited by the JVM, a rogue pod might cause eviction, see <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="noreferrer">out of resource handling</a>.</p>
|
Simon Tesar
|
<p>I have this ingress in my server.</p>
<pre><code>kind: Ingress
metadata:
name: mycompany-production
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
spec:
tls:
- hosts:
- ra2.mycompany.com.br
secretName: ra-production-us2-certmanager-certificate
rules:
- host: ra2.mycompany.com.br
http:
paths:
- path: /
backend:
serviceName: mycompany-production-deployment-nodeport
servicePort: 80
- path: /conteudo/
backend:
serviceName: seo-production-deployment-nodeport
servicePort: 80
</code></pre>
<p>Actually, I expect that when I call ra2.mycompany.com.br/conteudo/health, for example, it hits <strong>seo-production-deployment-nodeport/health</strong>, but it is going to <strong>seo-production-deployment-nodeport/conteudo/health</strong> and receiving a 404 answer.</p>
<p>Looking at my ingress it does not seem to have an error. I just do not know why it's not "cleaning" the path as expected. Requests on the normal <strong>/</strong> path work as expected.</p>
|
Jonathan Silva
|
<pre><code> - path: /conteudo/
backend:
serviceName: seo-production-deployment-nodeport
servicePort: 80
</code></pre>
<p>You're hitting <code>ra2.mycompany.com.br/conteudo/health</code> so of course it goes to this backend, as defined in the ingress rules.</p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target: /</code> just means that the URL is rewritten to <code>/</code> for the backend matching the path set in the rules, <strong>not</strong> that the request is matched against the rule serving <code>/</code>.</p>
<p>To make it easy to understand: when you run a webserver which only serves everything under a <strong>base path</strong>, let's say <code>/mysite</code>, but you do not want the people using your ingress to see /mysite/, you set <code>nginx.ingress.kubernetes.io/rewrite-target: /mysite</code>; then if they hit <code><domain>/foo</code> it actually goes to <code>/mysite/foo</code> in your backend.</p>
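<p>If the goal is for <code>/conteudo/health</code> to reach the backend as <code>/health</code>, one common pattern (a sketch, assuming an ingress-nginx version recent enough to support capture groups in <code>rewrite-target</code>) is to move the <code>/conteudo</code> rule into its own Ingress object, since the annotation applies to the whole Ingress, and rewrite to the captured suffix:</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: ra2.mycompany.com.br
    http:
      paths:
      - path: /conteudo(/|$)(.*)
        backend:
          serviceName: seo-production-deployment-nodeport
          servicePort: 80
</code></pre>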
|
DevLounge
|
<p>I am using Ubuntu 16.04 and I installed minikube on it.
I need to copy some files to minikube, so how can I do that?
I tried the following command, but it asked me for a password and I don't know it.</p>
<pre><code>scp /media/myuser/sourceFolder docker@192.168.99.100:/home/docker/destiationFolder
</code></pre>
<p><strong>Note:</strong> the minikube IP is <strong>192.168.99.100</strong>, and I used <strong>docker</strong> as the default user for minikube, but actually I don't know if that is correct or not.</p>
<p>So what is the default username and password for minikube, and how can I copy files from my local machine into minikube?</p>
<p>Thanks :)</p>
|
mibrahim.iti
|
<p>On the host machine you can use the <code>ssh-key</code> and <code>ip</code> subcommands of the <code>minikube</code> command:</p>
<pre><code>scp -i $(minikube ssh-key) <local-path> docker@$(minikube ip):<remote-path>
</code></pre>
<p>So the command from the question becomes:</p>
<pre><code>scp -i $(minikube ssh-key) /media/myuser/sourceFolder docker@$(minikube ip):/home/docker/destiationFolder
</code></pre>
|
Dirk
|
<p>Trying to investigate certain things in Kubernetes:</p>
<ol>
<li><p>What are the things that need to be cleaned up when a pod is deleted?</p></li>
<li><p>How are connections handled during the termination phase?</p></li>
</ol>
|
XnaijaZ
|
<p>When a Pod is deleted, you need to delete the resources below:</p>
<ol>
<li>configmap</li>
<li>secrets</li>
<li>services</li>
<li>certificates</li>
<li>Ingress</li>
</ol>
<p>A Deployment or ReplicaSet needs to be deleted first if the pods are part of these resources; deletion of the pods is then taken care of automatically.</p>
<p>Connections are handled as long as at least one pod is running and the service has not yet been deleted. Deleting ConfigMaps and Secrets may not have an impact, as the pod would have picked up these details at startup.</p>
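<p>Illustrative commands (resource names are placeholders): deleting the owning Deployment removes its pods automatically; the rest is cleaned up explicitly:</p>
<pre><code>kubectl delete deployment my-app
kubectl delete service my-app
kubectl delete ingress my-app
kubectl delete configmap my-app-config
kubectl delete secret my-app-secret
</code></pre>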
|
Shambu
|
<p>I am reading Core Kubernetes by Vyas and Love. Section 8.3.1 has the following 2 yaml files. Let's call them <code>secret.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
val1: YXNkZgo=
val2: YXNkZjIK
stringData:
val1: asdf
</code></pre>
<p>and <code>secret-pod.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mysecretpod
spec:
containers:
- name: mypod
image: nginx
volumeMounts:
- name: myval
mountPath: /etc/myval
readOnly: true
volumes:
- name: myval
secret:
secretName: val1
</code></pre>
<p>When I run <code>kubectl apply -f secret-pod.yaml</code>, it errors out. Using <code>describe</code>, I can see this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned default/mysecretpod to minikube
Warning FailedMount 0s (x4 over 3s) kubelet MountVolume.SetUp failed for volume "myval" : secret "val1" not found
</code></pre>
<p>This kinda makes sense. Using <code>kubectl get secrets</code>, I can only see the following:</p>
<pre><code>NAME TYPE DATA AGE
default-token-vhllg kubernetes.io/service-account-token 3 5d3h
mysecret Opaque 2 19m
</code></pre>
<p>So I make the following change to <code>secret-pod.yaml</code>:</p>
<pre><code> volumes:
- name: myval
secret:
secretName: mysecret
</code></pre>
<p>That makes <code>kubectl</code> happy and it promptly creates <code>mysecretpod</code> without any issue. However looking into that pod using <code>kubectl exec -it mysecretpod -- ls -l /etc/myval</code>, I get:</p>
<pre><code>total 0
lrwxrwxrwx 1 root root 11 Dec 12 08:08 val1 -> ..data/val1
lrwxrwxrwx 1 root root 11 Dec 12 08:08 val2 -> ..data/val2
</code></pre>
<p>So the content of <code>mysecret</code> is loaded into that folder with <code>val1</code> and <code>val2</code> being files. I think the authors intend to mount <code>val1</code> to be the <code>/etc/myval</code> file in that pod. How should <code>secret-pod.yaml</code> be written to achieve that? I have tried this but it fails:</p>
<pre><code> volumes:
- name: myval
secret:
secretName: mysecret/val1
</code></pre>
<p>Also, why am I seeing the extraneous <code> -> ..data/val...</code> for both <code>val1</code> and <code>val2</code>? What are they?</p>
|
CaTx
|
<p>So for it to work as intended, <code>secret-pod.yaml</code> must specify <code>subPath</code> as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mysecretpod
spec:
containers:
- name: mypod
image: nginx
volumeMounts:
- name: myval
mountPath: /etc/myval
subPath: myval
readOnly: true
volumes:
- name: myval
secret:
secretName: mysecret
items:
- key: val1
path: myval
</code></pre>
<p>As for the <code> -> ..data/val...</code> entries: when a Secret (or ConfigMap) is mounted as a volume, the kubelet writes the actual files into a hidden, timestamped directory and exposes them through a <code>..data</code> symlink. That indirection lets the kubelet update the mounted values atomically when the Secret changes, so the symlinks are expected and harmless.</p>
|
CaTx
|
<p>I have a set of vendor-provided containers that work together to provide a service, let's call them A, B and C - the Kubernetes cluster runs one of each, in separate pods.</p>
<ul>
<li>Container A is a frontend Nginx container that mostly forwards stuff to container B. The frontend allows you to signup an account, which is really an API call forwarded to container B.</li>
<li>Container B is the backend - it has a proprietary in-memory database, and stores the accounts there. When an account is created, a security token is also created and stored. (Users can retrieve that token through API calls via A.)</li>
<li>Container C is a bridge to some stuff outside the cluster. But crucially, you configure it with an environment variable containing a valid security token from a user account stored in container B. C uses that token to talk to B through A.</li>
</ul>
<p>Ideally I would like to deploy this all in one go - meaning I somehow need to create an account in container B (by calling A), then get that value into the environment variable for C.</p>
<p>Some thoughts:</p>
<ul>
<li>I can't use an init container to SQL insert a new user because the database isn't exposed. </li>
<li>I can build containers on top of the vendor containers to edit config/scripts etc, but replacing binaries is probably out of scope.</li>
<li>I can <strong>just about</strong> script the backend API through A to B. However, it's challenging as it makes use of XSRF tokens that persist between requests, etc. Any suggestions for the simplest tools/libraries to use to achieve that are welcome.</li>
<li>If I do script that account creation, I would then need to get that token into the Deployment for container C - I could use a ConfigMap, but then I would need to call the Kube API to modify the ConfigMap from within the cluster, and that doesn't seem like a great idea to me.</li>
</ul>
<p>To me, the only viable solution is to put an initContainer on C that will query B (via A) for the security token, then write that to a shared volume. Then I would build on top of container C to read from the shared volume, set the environment variable internal to the container and start the vendor's process. But then I have to manage the secret for the user account.</p>
<p>Are there any improvements on that approach, or anything completely different I haven't considered?</p>
|
Oli
|
<p>Performing API operations from inside a container is not an antipattern. Block startup of your process on <strong>C</strong> until the <code>initContainer</code> runs and updates the <code>Secret</code> containing the token. Don't use a <code>ConfigMap</code> for secrets; the <code>Secret</code> object exists for these purposes, and you can pull a <code>Secret</code> into a <code>PodSpec</code> - as env vars, or a volume mount - the same way you pull a <code>ConfigMap</code> (with some minor syntax variation).</p>
<p>The only trouble I can see you potentially running into is that you'll be dealing with multiple replicas, so you might want to randomise some of the <code>Secret</code> name. Create it in the <code>initContainer</code>, pass the randomised name across in the filesystem since <code>Pods</code> share a filesystem, then consume it in the main container and delete the <code>Secret</code> after the env var or mount gets set up. <code>Secrets</code> and <code>ConfigMaps</code> can disappear after a <code>Pod</code> starts up without affecting their presence inside the <code>Pod</code>.</p>
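<p>A rough sketch of the shared-volume variant described in the question (all names, images, the API path and the token-extraction step are hypothetical placeholders, not the vendor's actual API):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: bridge-c
spec:
  volumes:
  - name: shared
    emptyDir: {}                 # scratch space shared between init and main container
  initContainers:
  - name: fetch-token
    image: curlimages/curl       # any image with curl would do
    command:
    - sh
    - -c
    - |
      # Hypothetical signup call against container A; the real API path,
      # payload and response format will differ for the vendor's product
      curl -s -X POST http://service-a/api/signup -d '{"user":"bridge"}' \
        | sed -n 's/.*"token":"\([^"]*\)".*/\1/p' > /shared/token
    volumeMounts:
    - name: shared
      mountPath: /shared
  containers:
  - name: c
    image: vendor/container-c    # placeholder
    command:
    - sh
    - -c
    # Export the token as whatever env var the vendor process expects, then start it
    - 'export SECURITY_TOKEN="$(cat /shared/token)"; exec /vendor/start.sh'
    volumeMounts:
    - name: shared
      mountPath: /shared
</code></pre>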
<p>You'll also probably want some way of cleaning up the user accounts, as you're essentially creating a new user every time a replica starts and won't have an opportunity to delete it. A <code>CronJob</code> might be the way forward for that - list the user accounts matching your naming convention, then restart the deployment and delete the accounts in the list you fetched before the restart. That way you won't be pulling the rug out on any active replicas.</p>
|
dave.io
|
<p>It might be hard to explain, so sorry if I cannot explain it correctly.</p>
<p>In our k8s cluster we have two OpenStack load balancers, because we would like to expose our application through an ingress which has to be internet facing. In the same cluster we also deployed pgadmin4, which has to be intranet facing (only reachable from the internal network).</p>
<p>So in front of these OpenStack LBs we also have an F5 load balancer which handles the HTTPS connection, SSL, and the logic for exposing via intranet or internet.</p>
<p>MyApp is internet facing and needs to be reachable at host.internet.net</p>
<p>PgAdmin4 is intranet-only and needs to be reachable at host.intranet.net/pgadmin4</p>
<p>So the issue is: when I try to expose my application through the ingress using host.internet.net it doesn't work and I receive the error below, probably because it cannot communicate with the correct OpenStack LB. When I tried to expose it via the OpenStack LB IP, everything worked properly.</p>
<blockquote>
<p>{"level":"error","msg":"Service not found for
dev/oneapihub-ui-dev","time":"2020-03-26T05:20:05Z"}
{"level":"error","msg":"endpoints not found for
dev/oneapihub-ui-dev","time":"2020-03-26T05:20:05Z"}</p>
</blockquote>
<p>And the question is: how can I handle this issue via the ingress controller? Should I install another traefik ingress controller?</p>
<pre><code>capel0068340585:~ semural$ kubectl get ingress -n ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-traefik-dashboard * 80 21d
</code></pre>
<hr>
<pre><code>kubectl get tenantSpec -o yaml
loadBalancers:
- ip: <IP1>
name: LBaaS2
ports:
- extPort: 80
name: "80"
nodePort: 30001
- ip: <IP2>
name: LBaaS1
ports:
- extPort: 80
name: "80"
nodePort: 30000
</code></pre>
<hr>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/oneapihub-ui-dev ClusterIP 10.254.173.130 <none> 80/TCP 15m
NAME ENDPOINTS AGE
endpoints/oneapihub-ui-dev 10.6.24.136:3000 15m
</code></pre>
<hr>
<pre><code>ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: traefik
hosts:
- host: host.internet.net -> example
paths: [/]
tls: []
</code></pre>
<hr>
<pre><code>ingress:
enabled: ingress
annotations:
kubernetes.io/ingress.class: traefik
hosts:
- host: host.intranet.net
paths:
- /pgadmin4
</code></pre>
|
semural
|
<p>Your error states <code>"Service not found for dev/oneapihub-ui-dev"</code>, which means traefik is trying to connect to a Service called "oneapihub-ui-dev" in the dev namespace which it cannot find.</p>
<p>You need to make sure that the Service exists and that it has endpoints. You can check whether the Service exists with <code>kubectl -n dev get service oneapihub-ui-dev</code>. If it exists, check whether it has endpoints with <code>kubectl -n dev get ep oneapihub-ui-dev</code>.</p>
<p>EDIT: If the Service exists and has Endpoints, then you may want to look into the RBAC permissions of traefik to see whether it has enough permissions to look in the dev namespace, and check that you have not deployed any NetworkPolicies on the dev namespace that prevent the ingress namespace from connecting.</p>
|
Tim Stoop
|
<p>I am trying to understand <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">Stateful Sets</a>. How does their use differ from the use of "stateless" Pods with Persistent Volumes? That is, assuming that a "normal" Pod may lay claim to persistent storage, what obvious thing am I missing that requires this new construct (with ordered start/stop and so on)?</p>
|
Laird Nelson
|
<p>Yes, a regular pod can use a persistent volume. However, sometimes you have multiple pods that logically form a "group". Examples of this would be database replicas, ZooKeeper hosts, Kafka nodes, etc. In all of these cases there's a bunch of servers and they work together and talk to each other. What's special about them is that each individual in the group has an identity. For example, for a database cluster one is the master and two are followers and each of the followers communicates with the master letting it know what it has and has not synced. So the followers know that "db-x-0" is the master and the master knows that "db-x-2" is a follower and has all the data up to a certain point but still needs data beyond that.</p>
<p>In such situations you need a few things you can't easily get from a regular pod:</p>
<ol>
<li>A predictable name: you want to start your pods telling them where to find each other so they can form a cluster, elect a leader, etc. but you need to know their names in advance to do that. Normal pod names are random so you can't know them in advance.</li>
<li>A stable address/DNS name: you want whatever names were available in step (1) to stay the same. If a normal pod restarts (you redeploy, the host where it was running dies, etc.) on another host it'll get a new name and a new IP address. </li>
<li>A persistent <strong>link</strong> between an individual in the group and their persistent volume: if the host where your database master was running dies, the master will be moved to a new host but should connect to the <strong>same</strong> persistent volume, as there is one and only one volume that contains the right data for that "individual". So, for example, if you redeploy your group of 3 database hosts you want the same individual (by DNS name and IP address) to get the same persistent volume, so the master is still the master and still has the same data, replica1 gets its data, etc.</li>
</ol>
<p>StatefulSets solve these issues because they provide (quoting from <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/</a>):</p>
<ol>
<li>Stable, unique network identifiers.</li>
<li>Stable, persistent storage.</li>
<li>Ordered, graceful deployment and scaling.</li>
<li>Ordered, graceful deletion and termination.</li>
</ol>
<p>I didn't really talk about (3) and (4) but that can also help with clusters as you can tell the first one to deploy to become the master and the next one find the first and treat it as master, etc.</p>
<p>As some have noted, you can indeed get <strong>some</strong> of the same benefits by using regular pods and services, but it's much more work. For example, if you wanted 3 database instances you could manually create 3 deployments and 3 services. Note that you must manually create <strong>3 deployments</strong> as you can't have a service point to a single pod in a deployment. Then, to scale up you'd manually create another deployment and another service. This does work and was somewhat common practice before PetSet/StatefulSet came along. Note that it is missing some of the benefits listed above (persistent volume mapping & fixed start order, for example).</p>
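<p>To make the stable-identity part concrete, here is a minimal sketch (names, image and sizes are arbitrary placeholders) of a StatefulSet with its headless Service and per-replica storage:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None          # headless service: gives each pod a stable DNS name
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # pods become db-0.db, db-1.db, db-2.db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:13           # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:              # each replica gets its own PVC: data-db-0, data-db-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>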
|
Oliver Dain
|
<p>I've been working with Kubernetes for quite a while, but I still often get confused about Volume, PersistentVolume and PersistentVolumeClaim. It would be nice if someone could briefly summarize the differences between them.</p>
|
Dagang
|
<p>Volume - For a pod to reference storage that lives outside its containers, it needs a volume spec. The volume can come from a ConfigMap, a Secret, a PersistentVolumeClaim, a hostPath, etc.</p>
<p>PersistentVolume - It is the cluster-level representation of a piece of storage that has been made available. Cloud-provider plugins (or an administrator) create this resource.</p>
<p>PersistentVolumeClaim - This is a request for storage; if a PersistentVolume matching the claim's requirements is available, the claim gets bound to that PersistentVolume.</p>
<p>At this point the PVC/PV pair is still unused. Then, in the Pod spec, the pod references the claim as a volume, and the storage is attached to the Pod.</p>
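<p>A minimal sketch tying the three together (the hostPath backend, sizes and names are placeholders purely for illustration):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume            # cluster-scoped: the actual piece of storage
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /tmp/demo-data          # placeholder backend; usually a cloud disk or NFS
---
apiVersion: v1
kind: PersistentVolumeClaim       # namespaced: a request that binds to a matching PV
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data                    # the "volume" part: the pod refers to the claim
    persistentVolumeClaim:
      claimName: demo-pvc
</code></pre>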
|
Shambu
|
<p>What is the difference between deploying a self-hosted gateway to Docker and deploying it to Kubernetes?</p>
<p>I had read that Kubernetes allows local metrics and logs.
Link: <a href="https://learn.microsoft.com/en-us/azure/api-management/how-to-configure-local-metrics-logs" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/api-management/how-to-configure-local-metrics-logs</a></p>
<p>I also read that Kubernetes would be necessary to use caching with a self-hosted gateway.
Link: <a href="https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-cache-external" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-cache-external</a></p>
<p>Would it be possible to use these capabilities with Docker only, or is Kubernetes necessary to enable them?</p>
<p>Thanks.</p>
|
high5
|
<p>Kubernetes and Docker are not competing technologies. Docker is just a container runtime, bundled with extra tools to also build container images. Kubernetes "can use" Docker as an option for its container runtime, which is what most people do in practice.</p>
<p>That said, Kubernetes adds a whole new layer dedicated to container orchestration, integration and automation. It addresses multiple concerns you would have to handle anyway if you decided to use plain Docker. Some of that includes: self-healing, health checks, workload distribution, auto scaling, blue/green application updates, native support for stateful or stateless applications, a framework for a variety of ingress controllers, cluster-wide application configuration, integration with storage providers, RBAC authorization for cluster management... it's a pretty big list.</p>
<p>It doesn't necessarily add anything new with respect to the applications themselves, such as caching, gateways or metrics... anymore than you would be able to with Docker. Except, with plain Docker, you would need to set things up yourself in your own way, while Kubernetes provides the necessary tools to automate most of it. Integrated solutions, like reverse proxies, elasticsearch, redis, prometheus, etc. they just leverage that potential for automation.</p>
|
JulioHM
|
<p>I have issues connecting my Spring Boot app to MQ on ICP, so how can I define the <code>ibm.mq.connName=mymq-ibm-mq(30803)</code> property? I always get this exception:</p>
<pre><code>Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2538;AMQ9204: Connection to host ‘10.0.0.1(1414)’ rejected. [1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2538;AMQ9213: A communications error for ‘TCP’ occurred. [1=java.net.ConnectException[Connection refused (Connection refused)],3=connnectUsingLocalAddress,4=TCP,5=Socket.connect]],3=10.0.0.1(1414),5=RemoteTCPConnection.connnectUsingLocalAddress]
</code></pre>
<p>How can I get the correct host and port?</p>
<p>This is my application.properties:</p>
<pre><code>ibm.mq.queueManager=QM1
ibm.mq.channel=DEV.ADMIN.SVRCONN
ibm.mq.connName=mymq-ibm-mq(30803)
ibm.mq.user=admin
ibm.mq.password=passw0rd
</code></pre>
<p>This is the output of <code>kubectl describe service</code>:</p>
<pre><code>Name: mymq-ibm-mq
Namespace: lab
Labels: app=ibm-mq
chart=ibm-mqadvanced-server-dev
heritage=Tiller
release=mymq
Annotations:
Selector: app=ibm-mq,release=mymq
Type: NodePort
IP: 10.1.0.24
Port: console-https 9443/TCP
TargetPort: 9443/TCP
NodePort: console-https 32575/TCP
Endpoints: 10.2.9.53:9443
Port: qmgr 1414/TCP
TargetPort: 1414/TCP
NodePort: qmgr 30803/TCP
Endpoints: 10.2.9.53:1414
Session Affinity: None
External Traffic Policy: Cluster
</code></pre>
|
Henda Salhi
|
<p>I'm pretty sure the MQ connection port should be 1414 and not 30803. Port 1414 is the port the Service listens on inside the cluster; 30803 is the NodePort, which you only need when connecting from outside the cluster via a node's IP. Since you are addressing the Service by its name <code>mymq-ibm-mq</code>, try this:</p>
<pre><code>ibm.mq.connName=mymq-ibm-mq(1414)
</code></pre>
|
Axel Podehl
|
<p>I have installed both Kubernetes and Docker on Ubuntu in an effort to have a dev environment similar to the one on my Windows 10 machine, so I can debug a problem with extra \r\n in my Kubernetes secrets.</p>
<p>How do you perform <a href="https://stackoverflow.com/questions/51072235/does-kubernetes-come-with-docker-by-default">this</a> step on Ubuntu?</p>
<p>I think I need something like <code>kubectl config use-context docker-for-desktop</code> which doesn't work on Ubuntu or configure kubectl to point to the right docker port.</p>
<p>How do I get kubernetes configured?</p>
<p>I am on Ubuntu 18.10.
Docker version (Installed with directions from <a href="https://docs.docker.com/v17.09/engine/installation/linux/docker-ce/ubuntu/" rel="nofollow noreferrer">here</a>):</p>
<pre><code>$ docker version
Client:
Version: 18.09.0
API version: 1.38 (downgraded from 1.39)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:49:01 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.4
Git commit: e68fc7a
Built: Mon Oct 1 14:25:33 2018
OS/Arch: linux/amd64
Experimental: false
</code></pre>
<p>Kubectl version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
|
XanderM
|
<p>Docker Enterprise Edition (EE) for Ubuntu is the only container platform with a built-in choice of orchestrators (Docker Swarm and Kubernetes), operating systems (Windows and multiple Linux distributions), and supported infrastructure (bare metal, VMs, cloud, and more) - <a href="https://store.docker.com/editions/enterprise/docker-ee-server-ubuntu" rel="nofollow noreferrer">https://store.docker.com/editions/enterprise/docker-ee-server-ubuntu</a></p>
<p><a href="https://forums.docker.com/t/is-there-a-built-in-kubernetes-in-docker-ce-for-linux/54374" rel="nofollow noreferrer">Here’s</a> an answer confirming the same</p>
<blockquote>
<p>Docker’s Community Edition engine for Linux does not include built-in
kubernetes capabilities. We say</p>
<p>We have added Kubernetes support in both Docker Desktop for Mac and
Windows and in Docker Enterprise Edition (EE).</p>
<p>You can build a Kubernetes cluster yourself on top of one or more CE
engines, though. For some guidance, you can visit the setup
documentation at <a href="https://kubernetes.io/docs/setup/scratch/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/scratch/</a></p>
</blockquote>
|
Vidyasagar Machupalli
|
<p>I deployed my app using Kubernetes and now I'd like to add a custom domain to the app. I am using <a href="https://hackernoon.com/expose-your-kubernetes-service-from-your-own-custom-domains-cc8a1d965fc" rel="nofollow noreferrer">this</a> tutorial, and it uses an Ingress to set the custom domain.<br>
I noticed that the app load balancer has an IP. Why shouldn't I use that IP? What is the reason I need an Ingress?</p>
|
Naor
|
<p>If you want to expose your app, you could just as easily use a service of type <code>NodePort</code> instead of an Ingress. You could also use the type <code>LoadBalancer</code>. <code>LoadBalancer</code> is a superset of <code>NodePort</code> and assigns a fixed IP. With the type <code>LoadBalancer</code> you could assign a domain to this fixed IP. How to do this depends on where you have registered your domain.</p>
<p>To answer your questions:</p>
<ul>
<li>You do not need an Ingress; you could use a <code>NodePort</code> or
<code>LoadBalancer</code> service.</li>
<li>To assign a domain to your app, you do not
need an Ingress; you could use a <code>LoadBalancer</code> service (see the sketch below).</li>
<li>In any case,
you could just use the IP, but as already pointed out, a domain is
more convenient.</li>
</ul>
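<p>For illustration (service name, labels and ports are placeholders), a <code>LoadBalancer</code> service that gives your app a fixed external IP could look like this; you would then point your domain's DNS A record at the assigned IP:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # the cloud provider allocates an external IP
  selector:
    app: my-app             # must match your pod labels
  ports:
  - port: 80                # port exposed on the external IP
    targetPort: 8080        # port your container listens on
</code></pre>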
<p>If you just want to try out your app, you could just use the IP. A domain can be assigned later.</p>
<p>Here is an official Kubernetes tutorial on how to expose an app: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/</a></p>
|
mbuechmann
|
<pre><code>$ helm version
version.BuildInfo{Version:"v3.3.0", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.6"}
</code></pre>
<p>So I have my template:</p>
<pre class="lang-yaml prettyprint-override"><code> minAvailable: {{ mul .Values.autoscaling.minReplicas 0.75 }}
</code></pre>
<p>values.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>autoscaling:
minReplicas: 3
</code></pre>
<p>I would have expected a rendered output of <code>2.25</code>, but I get 0 (<code>3 * 0</code> because <code>0.75</code> gets floored...)</p>
<p>I've tried things like</p>
<pre class="lang-yaml prettyprint-override"><code> minAvailable: {{ mul (float .Values.autoscaling.minReplicas) 0.75 }}
</code></pre>
<p>Ultimately I'm going to <code>floor</code> the value to get back to an int...</p>
<pre class="lang-yaml prettyprint-override"><code> minAvailable: {{ floor ( mul .Values.autoscaling.minReplicas 0.75 ) }}
</code></pre>
<p>But I just don't understand why I can't seem to do simple float arithmetic</p>
<hr />
<p>Other things I've tried</p>
<pre class="lang-yaml prettyprint-override"><code> minAvailable: {{ float64 .Values.autoscaling.minReplicas }}
</code></pre>
<pre class="lang-yaml prettyprint-override"><code> minAvailable: {{ float64 .Values.autoscaling.minReplicas | toString }}
</code></pre>
<p>nothing produces a float number....</p>
<p>I've even tried doing this in values.yaml</p>
<pre><code>autoscaling:
minReplicas: 3.0
</code></pre>
|
Callum Linington
|
<p>Helm and its templates support the default Go <a href="https://pkg.go.dev/text/template?utm_source=godoc#hdr-Functions" rel="noreferrer">text/template</a> functions and the functions provided by the <a href="https://masterminds.github.io/sprig/" rel="noreferrer">Sprig</a> extension. Since Sprig version <a href="https://github.com/Masterminds/sprig/releases/tag/v3.2.0" rel="noreferrer">3.2</a> it also supports <a href="https://masterminds.github.io/sprig/mathf.html" rel="noreferrer">Float Math Functions</a> like <code>addf</code>, <code>subf</code>, <code>mulf</code>, <code>divf</code>, etc. In your case you would just need:</p>
<pre><code> minAvailable: {{ mulf .Values.autoscaling.minReplicas 0.75 }}
</code></pre>
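<p>Assuming <code>minAvailable</code> here is for a PodDisruptionBudget, where the value has to be an integer or a percentage string, you would most likely combine this with <code>floor</code> as you already planned:</p>
<pre><code> minAvailable: {{ floor (mulf .Values.autoscaling.minReplicas 0.75) }}
</code></pre>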
|
Strahinja Kustudic
|
<p>I installed Keycloak using the <a href="https://artifacthub.io/packages/olm/community-operators/keycloak-operator" rel="nofollow noreferrer">Operator</a> (version 12.0.1). It uses this <a href="https://github.com/keycloak/keycloak-operator" rel="nofollow noreferrer">github repository</a>. Everything worked seamlessly. A Keycloak instance was launched and I could log in using admin credentials. I could see a realm, clients, users, etc. working as expected.</p>
<p>But I do have a custom theme that I want to use. For that, I made the following changes in <strong>my-realm.yaml</strong>.</p>
<pre><code>apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
name: my-keycloak
labels:
app: my-keycloak
spec:
instances: 1
extensions:
- https://github.com/aerogear/keycloak-metrics-spi/releases/download/1.0.4/keycloak-metrics-spi-1.0.4.jar
externalAccess:
enabled: False
podDisruptionBudget:
enabled: True
---
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
name: my-realm
labels:
app: my-realm
spec:
realm:
id: "my"
realm: "my"
enabled: True
registrationEmailAsUsername: True
accountTheme: "account-theme" // THEMES
adminTheme: "admin-theme" // THEMES
emailTheme: "email-theme" // THEMES
loginTheme: "login-theme" // THEMES
users:
- username: "aftab@x.com"
firstName: "Service"
lastName: "Account"
instanceSelector:
matchLabels:
app: my-keycloak
</code></pre>
<p>But, I didn't see any of the themes in my realms/my-realm/theme-settings. I can see only the default theme in the select box (i.e. keycloak, base). I am expecting <code>account-theme</code>, <code>admin-theme</code>, <code>email-theme</code>, <code>login-theme</code> in their respective select boxes.</p>
<p>Am I missing something here?</p>
|
Aftab
|
<blockquote>
<p>But, I didn't see any of the themes in my
realms/my-realm/theme-settings. I can see only the default theme in
the select box (<em>i.e.</em> keycloak, base). I am expecting account-theme,
admin-theme, email-theme, login-theme in their respective select
boxes.</p>
</blockquote>
<p>The problem is/was that -- until <a href="https://github.com/keycloak/keycloak-operator/commit/f2d0370290d6abe91724b2536748aa6b1245d1e1" rel="nofollow noreferrer">yesterday</a> the 26th of January of 2021, commit f2d0370290d6abe91724b2536748aa6b1245d1e1 (<a href="https://github.com/keycloak/keycloak-operator/pull/284" rel="nofollow noreferrer">pull request #284</a>) -- by default the Keycloak Operator did not recognize the Theme-related fields (<em>i.e.,</em> <code>accountTheme</code>, <code>adminTheme</code>, <code>emailTheme</code>, <code>loginTheme</code>).</p>
<p>This feature was not included in the current latest release (12.0.2); however, it is available on master, so you can go from there.</p>
|
dreamcrash
|
<p>I want to list the names of all pods which are actually serving traffic behind a Kubernetes service. My question is how to achieve this by executing a single kubectl command.</p>
|
Arghya Sadhu
|
<p>There are two ways to list the pods behind a service. </p>
<p><strong>The easier way but with two commands</strong></p>
<p>Find the selector by running the below command</p>
<pre><code>kubectl get services -o=wide
</code></pre>
<p>Output:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hello-world-service ClusterIP 172.21.xxx.xx <none> 80/TCP 13m run=nginx
</code></pre>
<p>Pass the selector to the command below</p>
<pre><code>kubectl get pods --selector=run=nginx -o=name
</code></pre>
<p>To see just the pod names, without the <code>pod/</code> prefix:</p>
<pre><code>kubectl get pods --selector=run=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
</code></pre>
<p><strong>In a single command, using the endpoints information for the service <code>hello-world-service</code></strong></p>
<pre><code>kubectl get endpoints hello-world-service -o=jsonpath='{.subsets[*].addresses[*].targetRef.name}' | tr ' ' '\n'
</code></pre>
|
Vidyasagar Machupalli
|
<p>I can run <code>kubectl get pod nginx -o=jsonpath={'.status'}</code> to get just the status in json for my pod.</p>
<p>How can I do the same filtering but have the result returned in yaml instead of json?</p>
<p>So I would like to get this kind of output by the command:</p>
<pre><code>status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-05-31T14:58:57Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-05-31T14:59:02Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-05-31T14:58:57Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://5eb07d9c8c4de3b1ba454616ef7b258d9ce5548a46d4d5521a0ec5bc2d36b716
image: nginx:1.15.12
imageID: docker-pullable://nginx@sha256:1d0dfe527f801c596818da756e01fa0e7af4649b15edc3eb245e8da92c8381f8
lastState: {}
name: nginx
ready: true
restartCount: 0
state:
running:
startedAt: "2019-05-31T14:59:01Z"
</code></pre>
|
Kim
|
<p>You cannot do that with kubectl; there is no such output option for it.</p>
<p>However, it should be easy to extract those lines with <code>awk</code>:</p>
<pre><code>kubectl get pod nginx -o yaml | awk '/^status:/{flag=1}flag'
</code></pre>
<p>This starts the output at the line <code>status:</code>, which in this case is exactly what you want.</p>
|
mbuechmann
|