<h2>Context</h2> <p>There seems to be unnecessary redundancy in the <code>iptables</code> rules generated by <code>kubeadm init</code> for <code>kube-proxy</code>:</p> <pre><code>iptables -t filter -S
</code></pre> <p>Output:</p> <pre><code>-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N KUBE-EXTERNAL-SERVICES
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -j KUBE-FORWARD
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL -m comment --comment &quot;kubernetes firewall for dropping marked packets&quot; -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment &quot;kubernetes forwarding conntrack pod source rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment &quot;kubernetes forwarding conntrack pod destination rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
</code></pre> <p>The <code>10.244.0.0/16</code> range corresponds to the pod overlay network.</p> <p>Let's focus on the <code>FORWARD</code> chain:</p> <pre><code>-P FORWARD DROP
-N KUBE-FORWARD
-A FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -j KUBE-FORWARD
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment &quot;kubernetes forwarding conntrack pod
source rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment &quot;kubernetes forwarding conntrack pod destination rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
</code></pre> <h2>Question:</h2> <p>Why does <code>KUBE-FORWARD</code> accept packets within the overlay network when their connection state is <code>RELATED</code> or <code>ESTABLISHED</code>, if the <code>FORWARD</code> chain already accepts all traffic within the overlay network regardless of connection state?</p> <h2>Note:</h2> <p>The Kubernetes cluster works fine.</p>
<p>This duplication exists because the default FORWARD policy may be set to DROP, and Kubernetes still wants to forward packets that either:</p> <ol> <li>are marked with the "masqueradeMark" (those can start new connections), or</li> <li>are part of an already established connection.</li> </ol> <p>You can read the comments in the k8s source: <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/iptables/proxier.go#L1325" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/iptables/proxier.go#L1325</a></p> <p>In general, one should expect some duplication in iptables rules when those rules are automatically managed; it makes the automation easier to code.</p>
<p>I am trying to install PostgreSQL using Helm on my Kubernetes cluster. I get an "error in transport" when I run the <code>helm install</code> command.</p> <p>I have tried different solutions online; none worked.</p> <pre><code>helm install --name realtesting stable/postgresql --debug
</code></pre> <p>The expected result is a deployed PostgreSQL on my Kubernetes cluster.</p> <p>Please help!</p>
<p>It seems that you have not initialized Helm with a service account.</p> <p>In <code>rbac-config.yaml</code>:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
</code></pre> <p><strong>Step 1</strong>: <code>kubectl apply -f rbac-config.yaml</code></p> <p><strong>Step 2</strong>: <code>helm init --service-account tiller --history-max 200</code></p> <p><strong>Step 3</strong>: Test the setup with <code>helm ls</code>. There will not be any output from running this command, and that is expected. Now you can run <code>helm install --name realtesting stable/postgresql</code></p>
<p>I have a multi-node (2) Kubernetes cluster running on bare metal. I understand that 1. hostPath is bad for production and 2. hostPath Persistent Volumes are not supported for multi-node setups. Is there a way that I can safely run apps that are backed by a SQLite database? Over NFS the database locks a lot and really hurts the performance of the apps.</p> <p>I would probably place the SQLite databases for each app on the hostPath volume and everything would run smoothly again. But I was wondering if there are some workarounds to achieve this, even if I have to restrict apps to a specific node.</p>
<p>It seems you should use <strong>Local Persistent Volumes (GA)</strong>.</p> <p>As per the documentation:</p> <blockquote> <p>A local volume represents a mounted local storage device such as a disk, partition or directory.</p> <p>Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.</p> </blockquote> <p>However:</p> <blockquote> <p>At GA, Local Persistent Volumes do not support dynamic volume provisioning.</p> </blockquote> <p>You can find more information <a href="https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/concepts/storage/volumes/#local" rel="nofollow noreferrer">here</a>.</p> <p>As one example:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
</code></pre> <blockquote> <p>With Local Persistent Volumes, the Kubernetes scheduler ensures that a pod using a Local Persistent Volume is always scheduled to the same node</p> </blockquote>
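<p>The <code>local-storage</code> class referenced above is typically defined with a no-op provisioner and delayed binding (a sketch, matching the Kubernetes documentation):</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre> <p><code>WaitForFirstConsumer</code> delays PVC binding until a pod is scheduled, so the scheduler can pick a node that satisfies the volume's node affinity.</p>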
<p>I want to dynamically create a PV and PVC for a pod where it can store some data generated by the application running inside a container. I am making sure that only one container runs on each pod. How can I use StatefulSets to dynamically create PVs instead of the admin creating them manually?</p> <p>I used a volumeClaimTemplate in the StatefulSet. In minikube, it created both a PV and a PVC for each pod. But when I tried it in a Kubernetes cluster, somehow it doesn't create the PV. If I create the PV manually, then a binding happens between PV and PVC. But I want it to happen dynamically.</p> <p>Here is the YAML file:</p> <pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: StatefulSet
metadata:
  name: app-deployment
spec:
  selector:
    matchLabels:
      app: myapp
  serviceName: "app-service"
  replicas: 1 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myrepository:5000/docker/images/app:v0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8082
        volumeMounts:
        - name: jms
          mountPath: /opt/APP/DATA/jms-data
  volumeClaimTemplates:
  - metadata:
      name: jms
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 256Mi
---
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 8082
    targetPort: 8082
  type: NodePort
</code></pre>
<p>If I understand your issue correctly, you want a mechanism by which the admin won't have to manually create PVs for pods. K8s has a mechanism of creating storage classes for PVs (a one-time effort). Once a storage class is defined by the admin and marked as default, there is no need for anyone to create PVs, as they will be created on the fly by K8s using that <code>StorageClass</code>. For details on how to define and use it, please refer to</p> <p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/storage-classes/</a></p> <p>and</p> <p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a></p>
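<p>A sketch of such a default class (the provisioner is environment-specific; <code>kubernetes.io/aws-ebs</code> here is just an example):</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
</code></pre> <p>With this in place, any PVC that omits <code>storageClassName</code> gets a PV provisioned on the fly.</p>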
<p>Is there a maximum number of namespaces supported by a Kubernetes cluster? My team is designing a system to run user workloads via K8s and we are considering using one namespace per user to offer logical segmentation in the cluster, but we don't want to hit a ceiling with the number of users who can use our service.</p> <p>We are using Amazon's EKS managed Kubernetes service and Kubernetes v1.11.</p>
<p>This is difficult to answer, as it depends on a lot of factors. That said, the <a href="https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md#kubernetes-thresholds" rel="noreferrer">kubernetes-thresholds</a> document (based on a <strong>k8s 1.7 cluster</strong>) lists the supported number of namespaces (ns) as 10000, with a few assumptions.</p>
<p>I want to stop the Istio readiness probe from doing health checks on a running service. Is there a way to disable the HTTP readiness probe and later enable it again?</p>
<p>Rewriting my comment as an answer as it seems to be a valid workaround for this issue:</p> <p>You can add this annotation to the pods to disable probes on the istio-proxy containers:</p> <p><code>status.sidecar.istio.io/port: "0"</code></p> <p>The issue is being discussed on github, with more information: <a href="https://github.com/istio/istio/issues/9504#issuecomment-439432130" rel="noreferrer">https://github.com/istio/istio/issues/9504#issuecomment-439432130</a></p> <p>According to some comments there, it may happen when there's no service in front of some pods, or if some ports are exposed but not declared as containerPort, or in case of some labels mismatch between pods and services.</p>
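<p>For reference, the annotation goes into the pod template metadata of the workload (a minimal sketch, assuming a Deployment):</p> <pre><code>spec:
  template:
    metadata:
      annotations:
        status.sidecar.istio.io/port: "0"
</code></pre>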
<p>I'm migrating some of my applications that today run on EC2 with Auto Scaling to k8s.</p> <p>Today my Auto Scaling is based on the <code>ApproximateNumberOfMessagesVisible</code> metric from SQS queues (which I configured in CloudWatch).</p> <p>I'm trying to figure out if I can use this metric to scale the pods of my application in an AWS EKS environment.</p>
<ol> <li>Install <a href="https://github.com/awslabs/k8s-cloudwatch-adapter" rel="nofollow noreferrer">k8s-cloudwatch-adapter</a> </li> <li><a href="https://github.com/awslabs/k8s-cloudwatch-adapter/blob/master/samples/sqs/README.md" rel="nofollow noreferrer">Deploy HPA</a> with custom metrics from AWS SQS.</li> </ol>
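<p>As a sketch of step 2 (the metric name below comes from the adapter's SQS sample and is an assumption for your queue; adjust names and targets for your setup):</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: sqs-consumer-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sqs-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: hello-queue-length
      targetAverageValue: 30
</code></pre>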
<p>I want to know the recommendations for pod size, i.e. when to put an application within a pod, and at what size it is better to use the machine itself in place of a pod.</p> <p>For example, when should one think of moving out of k8s and running an application as an external service: when a pod requires 8GB, or 16GB, or 32GB? The same question applies to CPU-intensive workloads.</p> <p>Because if a pod requires 16GB or 16 CPUs and we have a machine/node of the same size, then I think there is no sense in running the pod on that machine. In that scenario we would have 10 pods which require 8 nodes.</p> <p>Hope you understand my concern.</p> <p>If someone has recommendations for this, please share your thoughts. References would be even better.</p> <p>Recommendations for the ideal range:</p> <ol> <li>size of pods in terms of RAM and CPU</li> <li>pods-to-nodes ratio, i.e. number of pods per node</li> <li>whether this is good for stateless applications, stateful applications, or both</li> </ol> <p>etc.</p>
<p>Running a 16 CPU/16 GB pod on a 16 CPU/16 GB machine is normal. Why not? You think of pods as tiny, but there is no such requirement. Pods can be gigantic; there is no issue with that. Remember that a container is just a process on a node, so why refuse to run a fat process on a fat node? Kubernetes adds a very nice orchestration level to containers, why not make use of it?</p> <p>There is no such thing as a universal or recommended pod size. Asking for a recommended pod size is the same as asking for a recommended size for a VM or bare-metal server. It is totally up to your application. If your application requires 16 or 64 GB of RAM, that is the recommended size for you, you see?</p> <p>Regarding the pods-to-nodes ratio: the current upper limit of Kubernetes is 110 pods per node. Everything below that watermark is fine. The only thing is that the recommended master node size increases with the total number of pods. If you have around 1000 pods, you can go with small to medium master nodes. If you have over 10,000 pods, you should increase your master node size.</p> <p>Regarding statefulness: stateless applications generally survive better. But often state also has to be stored somewhere, and stored reliably. So if you plan your application as a set of microservices, create as many stateless apps as you can and as few stateful ones as you can. Ideally, only the relational databases should be truly stateful.</p>
<p>I am writing a pipeline with Kubernetes in Google Cloud.</p> <p>I sometimes need to start a few pods within a second, where each pod is a task.</p> <p>I plan to call <code>kubectl run</code> with a Kubernetes Job, wait for it to complete (polling all running pods every second), and then activate the next step in the pipeline.</p> <p>I will also monitor the cluster size to make sure I am not exceeding the max CPU/RAM usage.</p> <p>I can run tens of thousands of jobs at the same time.</p> <p>I am not using standard pipelines because I need to create a dynamic number of tasks in the pipeline.</p> <p>I am running a batch operation, so I can handle the delay.</p> <p>Is this the best approach? How long does it take to create a pod in Kubernetes?</p>
<p>If you want to run tens of thousands of jobs at the same time, you will definitely need to plan resource allocation. You need to estimate the number of nodes that you need. After that you may create all nodes at once, or use the GKE cluster autoscaler to automatically add new nodes in response to resource demand. If you preallocate all nodes at once, you will probably have a high bill at the end of the month, but pods can be created very quickly. If you create only a small number of nodes initially and use the cluster autoscaler, you will face large delays, because nodes take several minutes to start. You must decide which approach to take.</p> <p>If you use the cluster autoscaler, do not forget to specify a maximum number of nodes for the cluster.</p> <p>Another important thing: you should put your jobs into the Guaranteed <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">quality of service</a> class in Kubernetes. Otherwise, if you use Best Effort or Burstable pods, you will end up with an eviction nightmare, which is really terrible and uncontrolled.</p>
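<p>A pod gets the Guaranteed QoS class when every container's resource requests equal its limits, e.g. (a minimal sketch; names and sizes are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: batch-task
spec:
  containers:
  - name: worker
    image: your-job-image
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
</code></pre>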
<p>I have microservices deployed using k8s, and there are some inter-microservice API calls. The issue is that when I perform a release deployment, the services start accepting requests at random times between 40 and 100 seconds, depending on the application platform. So some services start accepting requests earlier, while others take more time to accept their first request. Some API calls depend on other services, so until the full deployment has finished, the applications throw errors because of the delayed deployment of their dependencies. I have implemented rolling updates for smooth deployment, hence there is no downtime during the deployment of a particular application. But there might be new API endpoints added in the new release which affect the other applications until the service starts accepting requests.</p> <p>During deployment, is there any way I can configure all pods to start accepting requests at once, at a particular time?</p> <p>Let's say I have to update 5 services in the current release. Then all 5 new pods should start accepting requests at once.</p>
<p>If I understood you correctly, you want to implement service dependencies so that dependent services won't start unless their required services have started. You can implement that using initContainers, like this:</p> <pre><code>spec:
  template:
    spec:
      initContainers:
      - name: waitfor
        image: jwilder/dockerize
        args:
        - -wait
        - "http://config-srv:7000/actuator/health"
        - -wait
        - "http://registry-srv:8761/actuator/health"
        - -wait
        - "http://rabbitmq:15672"
        - -timeout
        - 600s
      containers:
      -
</code></pre> <p>In this example the main container will not start until the three required services have started and are reachable over HTTP. Is this what you want to achieve?</p>
<p>I have a Docker image I have created that works with Docker like this (local Docker):</p> <pre><code>docker run -p 4000:8080 jrg/hello-kerb
</code></pre> <p>Now I am trying to run it as a Kubernetes pod. To do this I create the deployment:</p> <pre><code>kubectl create deployment hello-kerb --image=jrg/hello-kerb
</code></pre> <p>Then I run <code>kubectl get deployments</code>, but the new deployment comes up as unavailable:</p> <pre><code>NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-kerb   1         1         1            0           17s
</code></pre> <p>I was using this site as <a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="nofollow noreferrer">the instructions</a>. It shows that the status should be available:</p> <pre><code>NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           1m
</code></pre> <p>What am I missing? Why is the deployment unavailable?</p> <p><strong>UPDATE</strong></p> <pre><code>$ kubectl get pod
NAME                          READY   STATUS             RESTARTS   AGE
hello-kerb-6f8f84b7d6-r7wk7   0/1     ImagePullBackOff   0          12s
</code></pre>
<p>If you are running a local image (from <code>docker build</code>), it is directly available to the Docker daemon and can be executed. If you are using a remote daemon, e.g. in a Kubernetes cluster, it will try to get the image from the default registry (usually Docker Hub), since the image is not available locally. I checked <a href="https://hub.docker.com/u/jrg/" rel="nofollow noreferrer">https://hub.docker.com/u/jrg/</a> and there seems to be no repository and therefore no <code>jrg/hello-kerb</code>.</p> <p>So how can you solve this? When using minikube, you can build (and provide) the image using the Docker daemon that is provided by minikube:</p> <pre><code>eval $(minikube docker-env)
docker build -t jrg/hello-kerb .
</code></pre> <p>You could also provide the image at a registry that is reachable from the container runtime in your Kubernetes cluster, e.g. Docker Hub.</p>
<p>I took the CKA exam and I needed to work with DaemonSets for quite a while there. Since it is much faster to do everything with kubectl instead of creating YAML manifests for k8s resources, I was wondering if it is possible to create DaemonSet resources using <code>kubectl</code>.</p> <p>I know that it is NOT possible to create them using a regular <code>kubectl create daemonset</code>, at least for now. And there is no description of it in the documentation. But maybe there is a way to do it differently?</p> <p>The best thing I could do right now is to create a Deployment first with <code>kubectl create deployment</code> and edit its output manifest. Any options here?</p>
<p>The fastest hack is to create a deployment file using</p> <pre><code>kubectl create deploy nginx --image=nginx --dry-run -o yaml &gt; nginx-ds.yaml
</code></pre> <p>Now replace the line <code>kind: Deployment</code> with <code>kind: DaemonSet</code> in <strong>nginx-ds.yaml</strong> and remove the line <code>replicas: 1</code>.</p> <p>Alternatively, the following command will give a clean DaemonSet manifest, assuming that <code>apps/v1</code> is the API used for DaemonSet in your cluster:</p> <pre><code>kubectl create deploy nginx --image=nginx --dry-run -o yaml | \
  sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' &gt; nginx-ds.yaml
</code></pre> <p>You have your nginx DaemonSet.</p>
<p>I have set up access to HDFS using an httpfs setup in Kubernetes, as I need access to the HDFS data nodes and not only the metadata on the name node. I can connect to HDFS using a NodePort service with telnet; however, when I try to get some information from HDFS (reading files, checking if files exist), I get an error:</p> <pre><code>[info] java.net.SocketTimeoutException: Read timed out
[info] at java.net.SocketInputStream.socketRead0(Native Method)
[info] at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
[info] at java.net.SocketInputStream.read(SocketInputStream.java:171)
[info] at java.net.SocketInputStream.read(SocketInputStream.java:141)
[info] at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
[info] at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
[info] at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
[info] at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
[info] at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
[info] at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
</code></pre> <p>What could be the reason for this error?
Here is the source code for setting up the connection to the HDFS file system and checking file existence:</p> <pre><code>val url = "webhdfs://192.168.99.100:31400"
val fs = FileSystem.get(new java.net.URI(url), new org.apache.hadoop.conf.Configuration())
val check = fs.exists(new Path(dirPath))
</code></pre> <p>The directory given by the <code>dirPath</code> argument exists on HDFS.</p> <p>The HDFS Kubernetes setup looks like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: namenode
spec:
  type: NodePort
  ports:
  - name: client
    port: 8020
  - name: hdfs
    port: 50070
    nodePort: 30070
  - name: httpfs
    port: 14000
    nodePort: 31400
  selector:
    hdfs: namenode
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: namenode
spec:
  replicas: 1
  template:
    metadata:
      labels:
        hdfs: namenode
    spec:
      containers:
      - env:
        - name: CLUSTER_NAME
          value: test
        image: bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
        name: namenode
        args:
        - "/run.sh &amp;"
        - "/opt/hadoop-2.7.4/sbin/httpfs.sh start"
        envFrom:
        - configMapRef:
            name: hive-env
        ports:
        - containerPort: 50070
        - containerPort: 8020
        - containerPort: 14000
        volumeMounts:
        - mountPath: /hadoop/dfs/name
          name: namenode
      volumes:
      - name: namenode
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: datanode
spec:
  ports:
  - name: hdfs
    port: 50075
    targetPort: 50075
  selector:
    hdfs: datanode
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: datanode
spec:
  replicas: 1
  template:
    metadata:
      labels:
        hdfs: datanode
    spec:
      containers:
      - env:
        - name: SERVICE_PRECONDITION
          value: namenode:50070
        image: bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8
        envFrom:
        - configMapRef:
            name: hive-env
        name: datanode
        ports:
        - containerPort: 50075
        volumeMounts:
        - mountPath: /hadoop/dfs/data
          name: datanode
      volumes:
      - name: datanode
        emptyDir: {}
</code></pre> <p>UPD: Ping returns these results (192.168.99.100 is the minikube IP, 31400 the service node port):</p> <pre><code>ping 192.168.99.100 -M do -s 28
PING 192.168.99.100 (192.168.99.100) 28(56) bytes of data.
36 bytes from 192.168.99.100: icmp_seq=1 ttl=64 time=0.845 ms
36 bytes from 192.168.99.100: icmp_seq=2 ttl=64 time=0.612 ms
36 bytes from 192.168.99.100: icmp_seq=3 ttl=64 time=0.347 ms
36 bytes from 192.168.99.100: icmp_seq=4 ttl=64 time=0.287 ms
36 bytes from 192.168.99.100: icmp_seq=5 ttl=64 time=0.547 ms
36 bytes from 192.168.99.100: icmp_seq=6 ttl=64 time=0.357 ms
36 bytes from 192.168.99.100: icmp_seq=7 ttl=64 time=0.544 ms
36 bytes from 192.168.99.100: icmp_seq=8 ttl=64 time=0.702 ms
36 bytes from 192.168.99.100: icmp_seq=9 ttl=64 time=0.307 ms
36 bytes from 192.168.99.100: icmp_seq=10 ttl=64 time=0.346 ms
36 bytes from 192.168.99.100: icmp_seq=11 ttl=64 time=0.294 ms
36 bytes from 192.168.99.100: icmp_seq=12 ttl=64 time=0.319 ms
36 bytes from 192.168.99.100: icmp_seq=13 ttl=64 time=0.521 ms
^C
--- 192.168.99.100 ping statistics ---
13 packets transmitted, 13 received, 0% packet loss, time 12270ms
rtt min/avg/max/mdev = 0.287/0.463/0.845/0.173 ms
</code></pre> <p>And for the host and port:</p> <pre><code>ping 192.168.99.100 31400 -M do -s 28
PING 31400 (0.0.122.168) 28(96) bytes of data.
^C
--- 31400 ping statistics ---
27 packets transmitted, 0 received, 100% packet loss, time 26603ms
</code></pre>
<p>My colleague found out that the problem was with Docker in minikube. Running this before setting up HDFS on Kubernetes solved the problem:</p> <pre><code>minikube ssh "sudo ip link set docker0 promisc on"
</code></pre>
<p>I have a Kubernetes cron job that runs off the latest image of my repo. In the YAML definition of the cron job the image definition is like this:</p> <pre><code>image: 123456789.abc.ecr.us-west-1.amazonaws.com/company/myservice:latest
</code></pre> <p>The issue is that our <code>latest</code> tag gets updated any time there is a merge to master in GitHub, but the code is not deployed to prod at that point. Since there is no continuous deployment, the latest code from master will be used by the cron job even though it has not been deployed to prod.</p> <p>Would it work as a solution to tag a 'prod' image that the cron job will run from? In this case, when we deploy, the prod image would get updated to that image. Also, if there is a rollback, we would update the prod image to be the image that prod is reverting to.</p> <p>Or is there a better solution/pattern for this issue?</p>
<p>You can create git release tags or use commit IDs. Put this value in a ConfigMap, and the cron job container should pull code using this commit ID or release tag. That way, staging and production configs can have different tags.</p> <p>I would not recommend using the <code>latest</code> tag for deployments on Kubernetes. It works well with plain Docker, but not with Kubernetes + Docker. Create immutable image tags; Kubernetes compares only the tag, not tag + SHA. <a href="https://discuss.kubernetes.io/t/use-latest-image-tag-to-update-a-deployment/2929" rel="nofollow noreferrer">https://discuss.kubernetes.io/t/use-latest-image-tag-to-update-a-deployment/2929</a></p> <p>Alternatively, you can use <code>imagePullPolicy: Always</code> with the <code>latest</code> tag.</p>
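<p>For example, instead of <code>latest</code> the cron job could reference an immutable tag (the tag value below is hypothetical):</p> <pre><code>image: 123456789.abc.ecr.us-west-1.amazonaws.com/company/myservice:v1.4.2
</code></pre> <p>Pinning by digest (<code>myservice@sha256:...</code>) is even stricter, since a digest can never be re-pointed the way a tag can.</p>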
<p>I installed Istio with</p> <pre><code>gateways.istio-egressgateway.enabled = true
</code></pre> <p>When I try to connect to an external database I receive an error. I do not have a domain (only an IP and port), so I defined the following rules:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
  - external-db.tcp.svc
  addresses:
  - 190.64.31.232/32
  ports:
  - number: 3306
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  - address: 190.64.31.232
</code></pre> <p>Then I open a shell in my system (deployed in my service mesh), and it can't resolve the name:</p> <pre><code>$ ping external-db.tcp.svc
ping: ceip-db.tcp.svc: Name or service not known
</code></pre> <p>But I can connect using the IP address:</p> <pre><code>$ ping 190.64.31.232
PING 190.64.31.232 (190.64.31.232) 56(84) bytes of data.
64 bytes from 190.64.31.232: icmp_seq=1 ttl=249 time=1.35 ms
64 bytes from 190.64.31.232: icmp_seq=2 ttl=249 time=1.42 ms
</code></pre> <p>What is happening? Do I have to connect using the domain or the IP? Can I define an internal domain for my external IP?</p>
<p>You can create a headless service with a hardcoded IP endpoint:</p> <pre><code>---
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  clusterIP: None
  ports:
  - protocol: TCP
    port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
- addresses:
  - ip: 190.64.31.232
  ports:
  - port: 3306
</code></pre> <p>Then you may add the host <code>external-db.default.svc.cluster.local</code> to your ServiceEntry.</p>
<p>I'm using Kubernetes 1.11 on DigitalOcean. When I try to use <code>kubectl top node</code> I get this error:</p> <pre><code>Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
</code></pre> <p>But as stated in the docs, Heapster is deprecated and no longer required from Kubernetes 1.10 onward.</p>
<p>If you are running a newer version of Kubernetes and still receiving this error, there is probably a problem with your installation.</p> <p>To install the metrics server on Kubernetes, first clone it:</p> <pre><code>git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
</code></pre> <p>Then install it, <strong>without going into the created folder and without mentioning a specific YAML file</strong>, via:</p> <pre><code>kubectl create -f kubernetes-metrics-server/
</code></pre> <p>This way all services and components are installed correctly and you can run:</p> <pre><code>kubectl top nodes
</code></pre> <p>or</p> <pre><code>kubectl top pods
</code></pre> <p>and get the correct result.</p>
<p>Does anyone know why this one doesn't work (it shows an empty line):</p> <pre><code>kubectl exec kube -- echo $KUBERNETES_SERVICE_HOST
</code></pre> <p>but this one works?</p> <pre><code>kubectl exec kube -- sh -c 'echo $KUBERNETES_SERVICE_HOST'
</code></pre>
<p>Whether this runs successfully will depend on the image that you are using for <code>kube</code>, but in general terms <code>echo</code> is a built-in <a href="https://en.wikipedia.org/wiki/Bourne_shell" rel="noreferrer">Bourne shell</a> command.</p> <p>With this command:</p> <pre><code>$ kubectl exec kube -- echo $KUBERNETES_SERVICE_HOST
</code></pre> <p>you are not instantiating a Bourne shell environment in the container. What actually happens is that your local shell expands <code>$KUBERNETES_SERVICE_HOST</code> before <code>kubectl</code> even runs, and since that variable is (most likely) not set on your client, the container just runs <code>echo</code> with no arguments and prints an empty line. You can try running, for example:</p> <pre><code>$ kubectl exec kube -- echo $PWD
</code></pre> <p>You'll see that it prints the current working directory on your client.</p> <p>Whereas with this command:</p> <pre><code>$ kubectl exec kube -- sh -c 'echo $KUBERNETES_SERVICE_HOST'
</code></pre> <p>there is a <code>sh</code> executable in your container; that Bourne shell understands the built-in <code>echo</code> command and, thanks to the single quotes, expands the variable using the environment of the default container of your pod.</p>
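<p>The expansion timing can be demonstrated locally without any cluster (a sketch; the variable names are made up):</p>

```shell
# An unquoted variable is expanded by *your* shell before the child
# process ever starts; single quotes defer expansion to the child shell.
DEMO=outer
unquoted=$(sh -c "echo $DEMO")           # this shell substitutes "outer" first
quoted=$(DEMO=inner sh -c 'echo $DEMO')  # the child sh expands it itself
echo "$unquoted $quoted"
```

<p>This is exactly why the quoted <code>sh -c</code> form sees the pod's environment while the unquoted form sees your client's.</p>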
<p>Due to a number of settings outside of my control, the default location of my kubectl cache file is on a very slow drive on my Windows PC. This ended up being the root cause for much of the slowness in my kubectl interactions.</p> <p>I have a much faster location in mind. However, I can't find a way to permanently change the location; I must either temporarily alter the home directory environment variables or explicitly set it on each command. </p> <p>Is there a way to alter my .kube/config file to permanently/persistently set my cache location? </p>
<p>The best option is to move your whole home directory to the fast drive.</p> <p>But if you don't want to move the whole directory, you can point kubectl at a different cache location with its <code>--cache-dir</code> flag (default <code>$HOME/.kube/cache</code>). Note that a plain <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/set-alias?view=powershell-6" rel="nofollow noreferrer">PowerShell alias</a> cannot include arguments, so define a small wrapper function in your profile instead, e.g. <code>function kubectl { &amp; "Path\to\kubectl.exe" --cache-dir "D:\fast\kube-cache" $args }</code></p>
<p>I have a Kubernetes cluster with three nodes: <code>10.9.84.149</code>, <code>10.9.105.90</code> and <code>10.9.84.149</code>. When my application tries to execute a command inside some pod:</p> <pre><code>kubectl exec -it &lt;podName&gt;
</code></pre> <p>it sometimes gets an error:</p> <pre><code>Error from server: error dialing backend: dial tcp 10.9.84.149:10250: getsockopt: connection refused
</code></pre> <p>As far as I could see everything was fine with the cluster: all kube-system services and pods were running well. Moreover, the error appeared only intermittently.</p> <p>Can anybody help me with this issue?</p>
<p>I got the same kind of error:</p> <p><code>Error from server: Get https://192.168.100.102:10250/containerLogs/default/kubia-n8nv9/kubia: dial tcp 192.168.100.102:10250: connect: no route to host</code></p> <p><b>Disabling the firewall on all nodes was my fix.</b></p> <p>I found out that the firewall on my worker nodes was not disabled. Running the command below fixed the problem:</p> <pre><code>systemctl disable firewalld &amp;&amp; systemctl stop firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1...
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
</code></pre>
<p>I recently started using a Kubernetes environment to deploy a React app. One thing that should be set are the liveness and readiness probes. What should they look like for a React app — and are probes even used for frontend apps?</p>
<p>If you have a separate pod for your front end — I assume a web server like Apache or Nginx — the health check would just make sure the web server is alive. So you are correct that this would be a simple request, e.g. to the home page, which is just static HTML. For the backend, a different check would be in order. If you have just a single pod, you need to check both with one health check.</p>
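<p>A minimal sketch of such probes for an nginx pod serving the static frontend might look like this (port and path are assumptions — adjust to your container):</p>

```yaml
containers:
- name: frontend
  image: nginx
  ports:
  - containerPort: 80
  # Restart the container if the web server stops answering
  livenessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  # Only route service traffic once the static site is being served
  readinessProbe:
    httpGet:
      path: /
      port: 80
    periodSeconds: 5
```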
<p><img src="https://i.ibb.co/2sNZD9m/Kubernetes-Cluster.png" alt="1"></p> <p>I'm trying to make an HTTP request from my frontend deployment/pods (Angular 7 app running inside NGINX) to my backend service (.NET Core Web API).</p> <p>The URL, as indicated in the diagram, is <code>http://k8s-demo-api:8080/api/data</code>.</p> <pre><code>// environment.prod.ts in Angular app
export const environment = {
  production: true,
  api_url: "http://k8s-demo-api:8080/api"
};
</code></pre> <p>I have CORS enabled in my API:</p> <pre><code>using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using K8SDockerCoreWebApi.Contexts;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.HttpsPolicy;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

namespace K8SDockerCoreWebApi
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddCors();
            services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
            services.Add(new ServiceDescriptor(typeof(K8SDemoContext),
                new K8SDemoContext(Configuration.GetConnectionString("DefaultConnection"))));
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseHsts();
            }
            app.UseCors(options =&gt; options.AllowAnyOrigin().AllowAnyMethod().AllowAnyHeader());
            app.UseMvc();
        }
    }
}
</code></pre> <p>So here comes the issue: when I make the request I get the following console error on the Angular side:</p> <p><img src="https://i.ibb.co/5BFyR25/Console-Error.png" alt="2"></p> <p>If there are any issues with the image above, the error basically reads:</p> <pre><code>Access to XMLHttpRequest at 'k8s-demo-api:8080/api/data' from origin 'http://192.168.99.100:4200' has been blocked by CORS policy: Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https.
</code></pre> <p>Points to note:</p> <ul> <li>All the other services within the cluster can communicate fine (e.g. the Web API talks to MySQL using the same approach)</li> <li>These two containers can speak to each other when I run them locally</li> <li>Both of these services run 100% when I access them individually</li> </ul> <p>I'm thinking of implementing an ingress on the API, but my understanding of what that will accomplish is limited.</p> <p>Any help will be appreciated. Thanks.</p> <p><strong>EDIT</strong></p> <p>I am using the following headers as part of my Angular request:</p> <pre><code>headers: new HttpHeaders({
  'Content-Type': 'application/json',
  'X-Requested-With': 'XMLHttpRequest',
  'Access-Control-Allow-Origin' : '*'
})
</code></pre>
<p>While it looks simple at first glance, there are several ways to approach this. First some background on CORS: the browser automatically adds the <code>X-Requested-With</code> header, so there is no need to add it in your Angular request. The browser looks for <code>Access-Control-Allow-Origin</code> in the <em>response</em> headers, not the request — otherwise you could tell the browser to trust anything, which is exactly what CORS prevents (the server side has to decide whether requests are allowed). So if you want CORS enabled, you need to configure it in your ASP.NET application, which you did:</p> <pre><code>app.UseCors(options =&gt; options.AllowAnyOrigin().AllowAnyMethod().AllowAnyHeader());
</code></pre> <p>But you don't actually need CORS — your setup is carefully constructed to avoid the need. CORS only becomes active when you make a cross-origin request, and you configured the nginx reverse proxy so that you can request the API from the same origin from which you loaded the Angular app.</p> <p>The reason this is not working is that your browser is unable to resolve the internal Kubernetes hostname <code>k8s-demo-api</code>, since that name only exists inside the cluster.</p> <p>I assume that you added a hostPort to the container of your frontend pod and that nginx is running on port 4200, serving the Angular app on the root path with the reverse proxy path at <code>/api</code>.</p> <p>The easiest solution, given that the rest of your setup is working, is to change the API URL to use the reverse proxy path:</p> <pre><code>api_url: "http://192.168.99.100:4200/api"
</code></pre> <p>Also, from the screenshot it looks like you are trying to request the API without a proper protocol (<code>http://</code> in this case); make sure the Angular <code>HttpClient</code> is configured correctly.</p>
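<p>For reference, the nginx reverse proxy assumed above might look roughly like this — paths, port and upstream name are assumptions based on the question, not the asker's actual config:</p>

```nginx
server {
    listen 4200;

    # Serve the static Angular app
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }

    # Same-origin path proxied to the cluster-internal service,
    # so the browser never has to resolve k8s-demo-api itself
    location /api/ {
        proxy_pass http://k8s-demo-api:8080/api/;
    }
}
```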
<p>We are looking at the various open-source ingress controllers available for Kubernetes and need to choose the best one among them. We are evaluating the below four ingress controllers:</p> <ol> <li>Nginx ingress controller</li> <li>Traefik ingress controller</li> <li>HAProxy ingress controller</li> <li>Kong ingress controller</li> </ol> <p>What are the differences between these in terms of features and performance, and which one should be adopted in production? Please provide your suggestions.</p>
<p>One difference I’m aware of, is that haproxy and nginx ingresses can work in TCP mode, whereas traefik only works in HTTP/HTTPS modes. If you want to ingress services like SMTP or MQTT, then this is a useful distinction. </p> <p>Also, haproxy supports the “PROXY” protocol, allowing you to pass real client IP to backend services. I used the haproxy ingress recently for a docker-mailserver helm chart - <a href="https://hub.helm.sh/charts/funkypenguin" rel="nofollow noreferrer">https://hub.helm.sh/charts/funkypenguin</a></p>
<p>Pods in a Kubernetes cluster can be reached by sending network requests to the DNS name of a service that they are a member of. Network requests are sent to <code>[service].[namespace].svc.cluster.local</code> and get load-balanced between all members of that service.</p> <p>This works fine to let one pod reach another pod, but it fails if a pod tries to reach itself via a service that it is a member of. It always results in a timeout.</p> <p>Is this a bug in Kubernetes (in my case minikube v0.35.0) or is some additional configuration required?</p> <hr> <p>Here's some debug info:</p> <p>Let's contact the service from some other pod. This works fine:</p> <pre><code>daemon@auth-796d88df99-twj2t:/opt/docker$ curl -v -X POST -H "Accept: application/json" --data '{}' http://message-service.message.svc.cluster.local:9000/message/get-messages
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 10.107.209.9...
* TCP_NODELAY set
* Connected to message-service.message.svc.cluster.local (10.107.209.9) port 9000 (#0)
&gt; POST /message/get-messages HTTP/1.1
&gt; Host: message-service.message.svc.cluster.local:9000
&gt; User-Agent: curl/7.52.1
&gt; Accept: application/json
&gt; Content-Length: 2
&gt; Content-Type: application/x-www-form-urlencoded
&gt;
* upload completely sent off: 2 out of 2 bytes
&lt; HTTP/1.1 401 Unauthorized
&lt; Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
&lt; X-Frame-Options: DENY
&lt; X-XSS-Protection: 1; mode=block
&lt; X-Content-Type-Options: nosniff
&lt; Content-Security-Policy: default-src 'self'
&lt; X-Permitted-Cross-Domain-Policies: master-only
&lt; Date: Wed, 20 Mar 2019 04:36:51 GMT
&lt; Content-Type: text/plain; charset=UTF-8
&lt; Content-Length: 12
&lt;
* Curl_http_done: called premature == 0
* Connection #0 to host message-service.message.svc.cluster.local left intact
Unauthorized
</code></pre> <p>Now we try to let the pod contact the service that it is a member of:</p>
<pre><code>Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 10.107.209.9...
* TCP_NODELAY set
* connect to 10.107.209.9 port 9000 failed: Connection timed out
* Failed to connect to message-service.message.svc.cluster.local port 9000: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to message-service.message.svc.cluster.local port 9000: Connection timed out
</code></pre> <p>If I've read the curl debug log correctly, the DNS name resolves to the IP address 10.107.209.9. The pod can be reached from any other pod via that IP, but the pod cannot use it to reach itself.</p> <p>Here are the network interfaces of the pod that tries to reach itself:</p> <pre><code>daemon@message-58466bbc45-lch9j:/opt/docker$ ip a
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: sit0@NONE: &lt;NOARP&gt; mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
296: eth0@if297: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.9/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
</code></pre> <p>Here is the Kubernetes file deployed to minikube:</p> <pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: message
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: message
  name: message
  namespace: message
spec:
  replicas: 1
  selector:
    matchLabels:
      app: message
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: message
    spec:
      containers:
      - name: message
        image: message-impl:0.1.0-SNAPSHOT
        imagePullPolicy: Never
        ports:
        - name: http
          containerPort: 9000
          protocol: TCP
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KAFKA_KUBERNETES_NAMESPACE
          value: kafka
        - name: KAFKA_KUBERNETES_SERVICE
          value: kafka-svc
        - name: CASSANDRA_KUBERNETES_NAMESPACE
          value: cassandra
        - name: CASSANDRA_KUBERNETES_SERVICE
          value: cassandra
        - name: CASSANDRA_KEYSPACE
          value: service_message
---
# Service for discovery
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  ports:
  - port: 9000
    protocol: TCP
  selector:
    app: message
---
# Expose this service to the api gateway
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: message
  namespace: message
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: api.fload.cf
    http:
      paths:
      - path: /message
        backend:
          serviceName: message-service
          servicePort: 9000
</code></pre>
<p>This is a known <a href="https://github.com/kubernetes/minikube/issues/1568" rel="nofollow noreferrer">minikube issue</a>. The discussion contains the following workarounds:</p> <p>1) Try:</p> <pre><code>minikube ssh sudo ip link set docker0 promisc on </code></pre> <p>2) Use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service</a>: <code>clusterIP: None</code></p>
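<p>For the second workaround, a headless variant of the question's service would keep the same selector and just drop the cluster IP, so DNS returns the pod IPs directly:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  clusterIP: None   # headless: no virtual IP, DNS resolves to the pod IPs
  ports:
  - port: 9000
    protocol: TCP
  selector:
    app: message
```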
<p>I have a requirement to scale down OpenShift pods at the end of each business day automatically.</p> <p>How might I schedule this automatically?</p>
<p>OpenShift, like Kubernetes, is an API-driven application. Essentially all application functionality is exposed over the control-plane API running on the master hosts.</p> <p>You can use any orchestration tool that is capable of making API calls to perform this activity. Information on calling the OpenShift API directly can be found in the official documentation in the <a href="https://docs.openshift.com/container-platform/3.11/rest_api/index.html" rel="nofollow noreferrer">REST API Reference Overview</a> section.</p> <p>Many orchestration tools have plugins that allow you to interact with the OpenShift/Kubernetes API more natively than running network calls directly. In the case of Jenkins, for example, there is the <a href="https://github.com/openshift/jenkins-client-plugin" rel="nofollow noreferrer">OpenShift Pipeline Jenkins</a> plugin that allows you to perform OpenShift activities directly from Jenkins pipelines. In the case of Ansible there is the <a href="https://docs.ansible.com/ansible/latest/modules/k8s_module.html#openshift-raw-module" rel="nofollow noreferrer">k8s module</a>.</p> <p>If you combine this with Jenkins' capability to run jobs on a schedule, you have something that meets your requirements.</p> <p>For something much simpler, you could just schedule Ansible or bash scripts on a server via cron to execute the appropriate API commands against the OpenShift API.</p> <p>Executing these commands from <em>within</em> OpenShift would also be possible via the <a href="https://docs.openshift.com/container-platform/3.11/dev_guide/cron_jobs.html" rel="nofollow noreferrer">CronJob</a> object.</p>
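<p>As a sketch of that last option — names, schedule, image and service account are assumptions, and the service account needs RBAC rights to scale the deployment config:</p>

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scale-down-myapp
spec:
  schedule: "0 18 * * 1-5"            # 18:00 on weekdays
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-scaler   # hypothetical SA with edit rights
          containers:
          - name: scale-down
            # any image that ships the `oc` client will do
            image: registry.access.redhat.com/openshift3/ose-cli
            command: ["oc", "scale", "dc/myapp", "--replicas=0"]
          restartPolicy: OnFailure
```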
<p>On my Vagrant VM, I created a kops cluster:</p> <pre><code>kops create cluster \
  --state "s3://kops-state-1000" \
  --zones "eu-central-1a","eu-central-1b" \
  --master-count 3 \
  --master-size=t2.micro \
  --node-count 2 \
  --node-size=t2.micro \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  --name jh.mydomain.com \
  --yes
</code></pre> <p>I can list the cluster with <code>kops get cluster</code>:</p> <pre><code>jh.mydomain.com  aws  eu-central-1a,eu-central-1b
</code></pre> <p>The problem is that I cannot find the <code>.kube</code> directory. When I go for</p> <pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre> <p>Is this somehow related to my VirtualBox deployment or not? I am aware that the command <em>kubectl</em> works on the master node because that's where <em>kube-apiserver</em> runs. Does this mean that my VM is a worker node?</p> <p>I tried what Vasily suggested:</p> <pre><code>kops export kubecfg $CLUSTER_NAME
W0513 15:44:17.286965    3279 root.go:248] no context set in kubecfg
--name is required
</code></pre>
<p>You need to obtain the kubeconfig from your cluster and save it as <code>${HOME}/.kube/config</code>:</p> <pre><code>kops export kubecfg $CLUSTER_NAME
</code></pre> <p>Note that <code>$CLUSTER_NAME</code> is a placeholder — either set that variable or pass the name explicitly, e.g. <code>kops export kubecfg jh.mydomain.com</code>; the <code>--name is required</code> error means kops did not receive a cluster name.</p> <p>After that you may start using kubectl.</p>
<p>Suppose I have 2 containers in a POD. Will each POD have a unique IP address? If so, what will be the IP addresses of each container?</p>
<p>The networking namespace (IP) of a pod is managed by the <strong>infra container</strong>, which does nothing more than act as a placeholder. All of the other containers in the pod share this IP address (the Pod IP) rather than using the host network namespace or host IP.</p> <p>It is controlled by the following parameter of the <strong><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options" rel="nofollow noreferrer">kubelet</a></strong>:</p> <p><code>--pod-infra-container-image</code>: The image whose network/ipc namespaces containers in each pod will use. (default "k8s.gcr.io/pause:3.1")</p> <p>The following article describes it in more detail:</p> <blockquote> <p>In Kubernetes, the pause container serves as the "parent container" for all of the containers in your pod. The pause container has two core responsibilities. First, it serves as the basis of Linux namespace sharing in the pod. And second, with PID (process ID) namespace sharing enabled, it serves as PID 1 for each pod and reaps zombie processes. <a href="https://www.ianlewis.org/en/almighty-pause-container" rel="nofollow noreferrer">almighty-pause-container</a></p> </blockquote>
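<p>A small sketch to illustrate: both containers below live in one pod, so they share a single Pod IP and network namespace, and can reach each other over <code>localhost</code> (images and command are just examples):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx            # listens on :80 inside the shared namespace
  - name: sidecar
    image: busybox
    # reaches the nginx container via localhost, thanks to the shared
    # network namespace held by the pause (infra) container
    command: ["sh", "-c", "while true; do wget -qO- localhost:80; sleep 10; done"]
```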
<p>I am setting up a Grafana server on my local kube cluster using helm-charts. I am trying to get it to work on a subpath in order to implement it on a production env with TLS later on, but I am unable to access Grafana on <a href="http://localhost:3000/grafana" rel="nofollow noreferrer">http://localhost:3000/grafana</a>.</p> <p>I have tried almost all the recommendations out there on the internet about adding a subpath to ingress, but nothing seems to work.</p> <p>The Grafana login screen shows up on <a href="http://localhost:3000/" rel="nofollow noreferrer">http://localhost:3000/</a> when I remove <code>root_url: http://localhost:3000/grafana</code> from values.yaml.</p> <p>But when I add <code>root_url: http://localhost:3000/grafana</code> back into the values.yaml file I see the error attached below (towards the end of this post).</p> <pre><code>root_url: http://localhost:3000/grafana
</code></pre> <p>and ingress as:</p> <pre><code>ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  path: /grafana
  hosts:
    - localhost
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
</code></pre> <p>I expect the <a href="http://localhost:3000/grafana" rel="nofollow noreferrer">http://localhost:3000/grafana</a> URL to show me the login screen; instead I see the errors below:</p> <pre><code>If you're seeing this Grafana has failed to load its application files

 1. This could be caused by your reverse proxy settings.
 2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath
 3. If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build
 4. Sometimes restarting grafana-server can help
</code></pre> <p>Can you please help me fix the ingress and root_url in values.yaml to get the Grafana URL working at /grafana?</p>
<p>As described in the documentation for <a href="https://grafana.com/docs/installation/behind_proxy/" rel="nofollow noreferrer">configuring Grafana behind a proxy</a>, <code>root_url</code> should be configured in the <code>grafana.ini</code> file under the <code>[server]</code> section. You can modify your <code>values.yaml</code> to achieve this:</p> <pre><code>grafana.ini:
  ...
  server:
    root_url: http://localhost:3000/grafana/
</code></pre> <p>Your ingress in the values should also look like this:</p> <pre><code>ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  labels: {}
  path: /grafana/
  hosts:
    - ""
</code></pre> <p>Hope it helps.</p>
<p>I have two namespaces: <code>prod</code> and <code>default</code>. I want to disable access between resources in these namespaces (resources from the <code>default</code> NS can't access resources from <code>prod</code>, and resources from <code>prod</code> can't access resources from <code>default</code>), BUT keep these resources reachable for external traffic (ingresses).</p> <pre><code># namespaces.yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: prod
  labels:
    tier: prod
---
kind: Namespace
apiVersion: v1
metadata:
  name: default
  labels:
    tier: infra
</code></pre> <pre><code># network-policies.yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: network
  namespace: prod
spec:
  podSelector: {}
  ingress:
    - from:
      - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: network
  namespace: default
spec:
  podSelector: {}
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            tier: dev
      - namespaceSelector:
          matchLabels:
            tier: rc
</code></pre> <pre><code># services.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "conference.appService" . }}
  labels:
    app: {{ include "conference.name" . }}
    release: {{ .Release.Name }}
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
  selector:
    app: {{ include "conference.name" . }}
    release: {{ .Release.Name }}
    role: app
</code></pre> <p>Pods from <code>prod</code> have access to the other pods inside the given namespace. Pods from <code>default</code> don't have access to the pods inside <code>prod</code>.</p> <p>However, when I try to access the service from the browser, it's blocked. When I use port-forwarding to the service inside <code>prod</code>, all works fine.</p>
<p>The problem was in the <code>from</code> part of the network policy.</p> <pre><code>---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: network
  namespace: default
spec:
  podSelector: {}
  ingress:
    - from:
      - ipBlock:
          cidr: 0.0.0.0/0
      - podSelector: {}
      - namespaceSelector:
          matchLabels:
            tier: dev
      - namespaceSelector:
          matchLabels:
            tier: rc
  egress:
    - {}
</code></pre> <p>The main idea of these selectors:</p> <ul> <li><code>podSelector</code> — selects pods IN THE CURRENT namespace</li> <li><code>namespaceSelector</code> — selects namespaces</li> <li><code>namespaceSelector</code> combined with <code>podSelector</code> — selects pods inside the given namespaces</li> </ul> <p>And my problem:</p> <p><code>ipBlock</code> — selects EXTERNAL IP addresses.</p> <p>It doesn't match internal IPs, so in my case <code>0.0.0.0/0</code> is OK — an <code>except</code> clause is not needed, since the rule will not allow internal traffic anyway.</p>
<p>I'm trying to set up a kubernetes cluster in GKE using nginx-ingress to handle request routing. I have two different projects which I would like to host within the same domain, with each managing their own ingress definition. The readme <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/ingress-path-matching.md#path-priority" rel="nofollow noreferrer">here</a> seems to show something similar, so I assume this should be possible.</p> <p>When I deploy either ingress individually, everything works great and I can access the routes I would expect. However, when I add both at the same time, only the ingress with the metadata.name value that comes first alphabetically will reach the intended backend, while the other ingress will return a 404 from nginx-ingress.</p> <p>If I switch the metadata.name values, this behavior is consistent (the ingress that has the first alphabetical name will work), so I don't think it has to do with the routes themselves or the services / pods involved but rather something to do with how nginx-ingress is handling the ingress names.</p> <p>I've tried various versions of the nginx-ingress-controller:</p> <ul> <li>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1</li> <li>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.0</li> <li>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0</li> <li>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0</li> </ul> <p>I've also tried forcing regex route matching (using <code>nginx.ingress.kubernetes.io/rewrite-target: /</code>), swapping the names on the ingress, deploying the projects to different namespaces, and changing the paths to be entirely distinct with no luck - only one ingress file will be used at a time.</p> <p>Finally, I tried making one ingress file with both definitions in it (so there's only one ingress name in play), and that works fine. 
Comparing the nginx configuration of the unified, working setup with a non-working one, the only line that is different is the <code>set $ingress_name</code> line, e.g.:</p> <pre><code>set $ingress_name "test-ingress-1";
</code></pre> <p>vs</p> <pre><code>set $ingress_name "test-unified-ingress";
</code></pre> <p>Here are the ingresses (with the hostname changed):</p> <p>test-ingress-1.yaml:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/from-to-www-redirect: "false"
  name: test-ingress-1
  namespace: default
spec:
  rules:
  - host: test.com
    http:
      paths:
      - backend:
          serviceName: test-frontend
          servicePort: 80
        path: /test
status:
  loadBalancer: {}
</code></pre> <p>test-ingress-2.yaml:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/from-to-www-redirect: "false"
  name: test-ingress-2
  namespace: default
spec:
  rules:
  - host: test.com
    http:
      paths:
      - backend:
          serviceName: test-backend
          servicePort: 80
        path: /api/test
status:
  loadBalancer: {}
</code></pre> <p>I would expect those two separate ingress files to configure nginx together, but haven't had success. Is there something that I'm missing or doing wrong?</p> <p>Thank you for any assistance!</p>
<p>Can you try adding this annotation and test whether it works?</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>I have a Kubernetes cluster on GKE that's managed by Helm, and recently deploys started failing because pods would not leave the <code>Pending</code> state. Inspecting the pending pods I see:</p> <pre><code>Events:
  Type     Reason            Age                From                                     Message
  ----     ------            ----               ----                                     -------
  Normal   Scheduled         50m                default-scheduler                        Successfully assigned default/my-pod-6cbfb94cb-4pzl9 to gke-staging-pool-43f2e11c-nzjz
  Warning  FailedValidation  50m (x6 over 50m)  kubelet, gke-staging-pool-43f2e11c-nzjz  Error validating pod my-pod-6cbfb94cb-4pzl9_default(8e4dab93-75a7-11e9-80e1-42010a960181) from api, ignoring: spec.priority: Forbidden: Pod priority is disabled by feature-gate
</code></pre> <p>And specifically, this warning seems relevant: <code>Error validating pod my-pod-6cbfb94cb-4pzl9_default(8e4dab93-75a7-11e9-80e1-42010a960181) from api, ignoring: spec.priority: Forbidden: Pod priority is disabled by feature-gate</code></p> <p>Comparing the new, <code>Pending</code>, pods with currently running pods, the only difference I see (apart from timestamps, etc) is:</p> <pre><code>$ kubectl get pod my-pod-6cbfb94cb-4pzl9 -o yaml &gt; /tmp/pending-pod.yaml
$ kubectl get pod my-pod-7958cc964-64wsd -o yaml &gt; /tmp/running-pod.yaml
$ diff /tmp/running-pod.yaml /tmp/pending-pod.yaml
…
@@ -58,7 +58,8 @@
       name: default-token-wnhwl
       readOnly: true
   dnsPolicy: ClusterFirst
-  nodeName: gke-staging-pool-43f2e11c-r4f9
+  nodeName: gke-staging-pool-43f2e11c-nzjz
+  priority: 0   // &lt;-- notice that the `priority: 0` field is added
   restartPolicy: Always
   schedulerName: default-scheduler
…
</code></pre> <p>It appears that this started happening some time between May 1st, 2019, and May 6th, 2019.</p> <p>The cluster I've used as an example here is a staging cluster, but I've noticed the same behavior on two similarly configured production clusters, which leads me to believe there was a change on the Google Kube side.</p> <p>Pods are deployed by Helm through <code>cloudbuild.yaml</code>, and nothing has changed in that setup (either the version of Helm, or the cloudbuild file) between May 1st and May 6th, when the regression seems to have been introduced.</p> <p>Helm version:</p> <pre><code>Client: &amp;version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &amp;version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
</code></pre>
<p>If you look at the docs for <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer">Pod Priority and Preemption</a>, the feature is <code>alpha</code> in <code>&lt;= 1.10</code> (disabled by default, enabled by a feature gate which GKE doesn't do on the control plane, afaik), then it became <code>beta</code> in <code>&gt;= 1.11</code> (enabled by default).</p> <p>It could be either of the following, or a combination:</p> <ol> <li><p>You have a GKE control plane <code>&gt;= 1.11</code> and the node where your Pod is trying to start (being scheduled by the kube-scheduler) is running a kubelet <code>&lt;= 1.10</code>. Somebody could have upgraded the control plane without upgrading the nodes (or instance group).</p></li> <li><p>Someone upgraded the control plane to 1.11 or later, so the priority admission controller is enabled by default and is preventing Pods with the <code>spec.priority</code> field set from starting (or restarting). If you look at the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#pod-v1-core" rel="nofollow noreferrer">API docs</a> (priority field), they say that when the priority admission controller is enabled you can't set that field directly; it can only be set via a <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass" rel="nofollow noreferrer">PriorityClass/PriorityClassName</a>.</p></li> </ol>
<p>I'm trying to setup a Cloud SQL Proxy Docker image for PostgreSQL as mentioned <a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine#proxy" rel="nofollow noreferrer">here</a>. I can get my app to connect to the proxy docker image but the proxy times out. I suspect it's my credentials or the port, so how do I debug and find out if it works? This is what I have on my project</p> <pre><code>kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=my-account-credentials.json </code></pre> <p>My deploy spec snippet:</p> <pre><code>spec: containers: - name: mara ... - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=&lt;MY INSTANCE NAME&gt;=tcp:5432", "-credential_file=/secrets/cloudsql/credentials.json"] volumeMounts: - name: cloudsql-instance-credentials mountPath: /secrets/cloudsql readOnly: true volumes: - name: cloudsql-instance-credentials secret: secretName: cloudsql-instance-credentials </code></pre> <p>The logs of my cloudsql-proxy show a timeout:</p> <pre><code>2019/05/13 15:08:25 using credential file for authentication; email=646092572393-compute@developer.gserviceaccount.com 2019/05/13 15:08:25 Listening on 127.0.0.1:5432 for &lt;MY INSTANCE NAME&gt; 2019/05/13 15:08:25 Ready for new connections 2019/05/13 15:10:48 New connection for "&lt;MY INSTANCE NAME&gt;" 2019/05/13 15:10:58 couldn't connect to &lt;MY INSTANCE NAME&gt;: dial tcp &lt;MY PRIVATE IP&gt;:3307: getsockopt: connection timed out </code></pre> <p>Questions:</p> <ul> <li><p>I specify 5432 as my port, but as you can see in the logs above,it's hitting 3307. Is that normal and if not, how do I specify 5432?</p></li> <li><p>How do I check if it is a problem with my credentials? 
My credentials file is from my service account <code>123-compute@developer.gserviceaccount.com</code>, and the service account shown when I go to my Cloud SQL console is <code>p123-&lt;somenumber&gt;@gcp-sa-cloud-sql.iam.gserviceaccount.com</code>. They don't seem to be the same. Does that make a difference?</p></li> </ul> <p>If I make the Cloud SQL instance available on a public IP, it works.</p>
<blockquote> <p>I specify 5432 as my port, but as you can see in the logs above,it's hitting 3307</p> </blockquote> <p>The proxy listens locally on the port you specified (in this case <code>5432</code>), and connects to your Cloud SQL instance via port 3307. This is expected and normal.</p> <blockquote> <p>How do I check if it is a problem with my credentials?</p> </blockquote> <p>The proxy returns an authorization error if the Cloud SQL instance doesn't exist, or if the service account doesn't have access. The connection timeout error means it failed to reach the Cloud SQL instance. </p> <blockquote> <p>My credentials file is from my service account 123-compute@developer.gserviceaccount.com and the service account shown when I go to my cloud sql console is p123-@gcp-sa-cloud-sql.iam.gserviceaccount.com. They don't seem the same?</p> </blockquote> <p>One is just the name of the file, the other is the name of the service account itself. The name of the file doesn't have to match the name of the service account. You can check the name and IAM roles of a service account on the Service Account <a href="https://console.cloud.google.com/iam-admin/serviceaccounts?_ga=2.78504593.-442290695.1543247497" rel="nofollow noreferrer">page</a>. </p> <blockquote> <p>2019/05/13 15:10:58 couldn't connect to : dial tcp :3307: getsockopt: connection timed out</p> </blockquote> <p>This error means that the proxy failed to establish a network connection to the instance (usually because a path from the current location doesn't exist). There are two common causes for this:</p> <p>First, make sure there isn't a firewall or something blocking outbound connections on port 3307.</p> <p>Second, since you are using Private IP, you need to make sure the resource you are running the proxy on meets the <a href="https://cloud.google.com/sql/docs/mysql/private-ip#network_requirements" rel="nofollow noreferrer">networking requirements</a>.</p>
<p>On my Vagrant VM, I created a kops cluster:</p> <pre><code> kops create cluster \ --state "s3://kops-state-1000" \ --zones "eu-central-1a","eu-central-1b" \ --master-count 3 \ --master-size=t2.micro \ --node-count 2 \ --node-size=t2.micro \ --ssh-public-key ~/.ssh/id_rsa.pub\ --name jh.mydomain.com \ --yes </code></pre> <p>I can list the cluster with <code>kops get cluster</code>:</p> <pre><code>jh.mydomain.com aws eu-central-1a,eu-central-1b </code></pre> <p>The problem is that I can not find the <code>.kube</code> directory. When I run</p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre> <p>Is this somehow related to my VirtualBox deployment or not? I am aware that the command <em>kubectl</em> works on the master node because that's where <em>kube-apiserver</em> runs. Does this mean that my deployment only consists of worker nodes?</p> <p>I tried what Vasily suggested:</p> <pre><code>kops export kubecfg $CLUSTER_NAME W0513 15:44:17.286965 3279 root.go:248] no context set in kubecfg --name is required </code></pre>
<p>It seems that kops didn't configure the cluster context on your machine. You can try the following command (the <code>--state</code> flag points at the same state bucket you used for <code>kops create cluster</code>):</p> <pre><code>kops export kubecfg jh.mydomain.com --state s3://kops-state-1000 </code></pre> <p>You can also find the <em>config</em> file at S3->kops-state-1000->jh.mydomain.com->config<br> After downloading the config file you can try:</p> <pre><code>kubectl --kubeconfig={config you downloaded} version </code></pre> <p>Also, it's kinda unrelated to your question, but it may not be a bad idea to have a look at a <a href="http://blog.arungupta.me/gossip-kubernetes-aws-kops/" rel="nofollow noreferrer">gossip-based kubernetes cluster</a>. To try that feature, you just need to create a cluster with kops using <code>--name=something.k8s.local</code></p>
<p>I have implemented a KMS plugin gRPC server. However, my api-server is not able to connect to the Unix socket at the path "/opt/mysocket.sock".</p> <p>If I bind my socket to the "/etc/ssl/certs/" directory, the api-server is able to access it and interact with my gRPC server over the Unix socket, and the plugin works as expected.</p> <p>How can I pass my Unix socket to the api-server without being restricted to only the "/etc/ssl/certs/" directory?</p> <p>I want to use other standard directories like "/opt" or "/var" etc.</p> <p>I have followed the below guide from Google to implement the KMS plugin: <a href="https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/</a></p>
<p>For the "kube-apiserver" pod to access any directory on the host system, you need to add a mount path in the "kube-apiserver.yaml" file.</p> <p>The path to the yaml file is "/etc/kubernetes/manifests/kube-apiserver.yaml".</p> <p>Add the mount point as shown below; note that the <code>name</code> under <code>volumes</code> must match the <code>name</code> referenced in <code>volumeMounts</code>:</p> <pre><code>volumeMounts:
- mountPath: /etc/my_dir
  name: my-kms
  readOnly: true
...
...
volumes:
- hostPath:
    path: /etc/my_dir
    type: DirectoryOrCreate
  name: my-kms
</code></pre>
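<p>If you only need to expose the socket file itself rather than a whole directory, a <code>hostPath</code> volume of type <code>Socket</code> can be used instead. A sketch using the socket path from the question (with this type, the plugin must have created the socket on the host before the api-server pod starts):</p> <pre><code>volumeMounts:
- mountPath: /opt/mysocket.sock
  name: kms-socket
...
volumes:
- hostPath:
    path: /opt/mysocket.sock
    type: Socket
  name: kms-socket
</code></pre>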
<p>I'm a little bit lost between nodeSelector, affinity / anti-affinity, and taints.</p> <p>What I'd be interested in is ensuring that a single pod/deployment runs on a given node and nowhere else, and that this node does not receive any pod other than the specified one.</p> <p>Given the options above (and any others), what would be the most concise way to do so?</p>
<p>Add some label to only one node,<br> or<br> create a new node-pool with one node (I would prefer this, since it works well with autoscaling; set both min and max nodes to 1).</p> <p>Create a deployment with one replica and affinity to this node.</p> <p>To limit the node to running just this pod:<br> 1) You can set a resource request equal to the node's allocatable resources, so that no other pods fit on this node,<br> or<br> 2) use a default node anti-affinity for all the other pods (so they avoid this node),<br> or<br> 3) use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-isolation-restriction" rel="nofollow noreferrer">node-isolation-restriction</a> to restrict which pods can be scheduled on this node. Haven't tried this myself yet.</p>
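<p>A concrete sketch of the taint-based approach (node name and label values are hypothetical): taint and label the dedicated node, then give the single deployment a matching toleration and nodeSelector. The taint keeps every other pod off the node, and the selector pins this pod to it.</p> <pre><code># on the cluster:
kubectl taint nodes node-1 dedicated=myapp:NoSchedule
kubectl label nodes node-1 dedicated=myapp
</code></pre> <p>Then, in the deployment's pod template spec:</p> <pre><code>spec:
  nodeSelector:
    dedicated: myapp
  tolerations:
  - key: dedicated
    operator: Equal
    value: myapp
    effect: NoSchedule
  containers:
  - name: myapp
    image: nginx
</code></pre>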
<p>I am deploying multiple helm sub-charts in a Kubernetes cluster through Helm and Tiller. How can I make sure that a particular chart completely installs its containers before Helm starts to install the other charts and their containers?</p> <p>I looked into requirements.yaml and hooks; neither of them looks like the solution I am looking for.</p> <pre><code>ParentDir/
  Chart.yaml
  requirements.yaml
  values.yaml
  charts/
    App1/
      Chart.yaml
      values.yaml
      templates/
    App2/
      Chart.yaml
      values.yaml
      templates/
    ...
    AppN/
      Chart.yaml
      values.yaml
      templates/
</code></pre> <p>I have multiple sub-charts, and want to make sure the containers of App1 are up and ready before helm installs the other charts and their containers.</p>
<p>You should look into helmfile and split your stack into one <a href="https://github.com/roboll/helmfile" rel="nofollow noreferrer">helmfile</a> per application. Since helmfiles inside a folder are applied in alphabetical order, you can enforce the ordering through a file naming convention.</p> <p>File structure:</p> <pre><code>helmfile.d/ 00-app-1.yaml 01-app-2.yaml 02-app-n.yaml </code></pre> <p>Execute it with this command: <code>helmfile -f path/to/helmfile.d sync</code></p>
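<p>A minimal sketch of one of these files (release and chart names are hypothetical):</p> <pre><code># helmfile.d/00-app-1.yaml
releases:
  - name: app-1
    namespace: default
    chart: ./charts/App1
    values:
      - ./values/app-1.yaml
</code></pre> <p>The files are processed sequentially; to make sure App1's containers are actually up before the next file is applied, enable helm's wait behavior (e.g. <code>helmDefaults: wait: true</code> at the top of the helmfile).</p>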
<p>I'm trying to set up a kubernetes cluster in GKE using nginx-ingress to handle request routing. I have two different projects which I would like to host within the same domain, with each managing their own ingress definition. The readme <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/ingress-path-matching.md#path-priority" rel="nofollow noreferrer">here</a> seems to show something similar, so I assume this should be possible.</p> <p>When I deploy either ingress individually, everything works great and I can access the routes I would expect. However, when I add both at the same time, only the ingress with the metadata.name value that comes first alphabetically will reach the intended backend, while the other ingress will return a 404 from nginx-ingress.</p> <p>If I switch the metadata.name values, this behavior is consistent (the ingress that has the first alphabetical name will work), so I don't think it has to do with the routes themselves or the services / pods involved but rather something to do with how nginx-ingress is handling the ingress names.</p> <p>I've tried various versions of the nginx-ingress-controller:</p> <ul> <li>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1</li> <li>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.0</li> <li>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0</li> <li>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.22.0</li> </ul> <p>I've also tried forcing regex route matching (using <code>nginx.ingress.kubernetes.io/rewrite-target: /</code>), swapping the names on the ingress, deploying the projects to different namespaces, and changing the paths to be entirely distinct with no luck - only one ingress file will be used at a time.</p> <p>Finally, I tried making one ingress file with both definitions in it (so there's only one ingress name in play), and that works fine. 
Comparing the nginx configuration of the unified, working setup with a non-working one, the only line that is different is the "set $ingress_name", e.g.:</p> <pre><code>set $ingress_name "test-ingress-1"; </code></pre> <p>vs</p> <pre><code>set $ingress_name "test-unified-ingress"; </code></pre> <p>Here are the ingresses (with the hostname changed):</p> <p>test-ingress-1.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: "false" name: test-ingress-1 namespace: default spec: rules: - host: test.com http: paths: - backend: serviceName: test-frontend servicePort: 80 path: /test status: loadBalancer: {} </code></pre> <p>test-ingress-2.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: "false" name: test-ingress-2 namespace: default spec: rules: - host: test.com http: paths: - backend: serviceName: test-backend servicePort: 80 path: /api/test status: loadBalancer: {} </code></pre> <p>I would expect those two separate ingress files to configure nginx together, but haven't had success. Is there something that I'm missing or doing wrong?</p> <p>Thank you for any assistance!</p>
<p>Why don't you just put multiple paths into one ingress file?</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/from-to-www-redirect: "false" name: test-ingress-1 namespace: default spec: rules: - host: test.com http: paths: - path: /test backend: serviceName: test-frontend servicePort: 80 - path: /api/test backend: serviceName: test-backend servicePort: 80 status: loadBalancer: {} </code></pre> <p>Or use <code>rewrite-target</code> as already suggested.</p>
<p>I've created 2 <code>kubernetes</code> clusters within the same <code>VPC</code> in the same region in AWS.</p> <p>The first cluster is dedicated to my microservices. Let's name it "MS". The second one is dedicated to vault and its highly available storage (consul). Let's name it "V".</p> <p>The question is: how can I get access to the secrets I've created in the "V" cluster from the containers in the "MS" cluster?</p> <p>What I've tried so far:</p> <ol> <li><p>I started by creating a new service account in the "MS" cluster which authenticates with the token review API.</p></li> <li><p>Then I had to extract the token reviewer JWT, the Kubernetes CA certificate and the Kubernetes host from the "MS" cluster.</p></li> <li><p>Then I switched to the "V" cluster context to enable and create a new kubernetes auth method attached to that service account.</p></li> </ol> <p>From there I don't know what to do, and I'm not sure that method really works when using 2 different clusters.</p> <p><strong>Service account:</strong></p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: consul labels: app: consul rules: - apiGroups: [""] resources: - pods verbs: - get - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: consul roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: consul subjects: - kind: ServiceAccount name: consul namespace: default --- apiVersion: v1 kind: ServiceAccount metadata: name: consul labels: app: consul </code></pre> <p><strong>Export token review variables from the "MS" cluster</strong></p> <pre><code>export VAULT_SA_NAME=$(kubectl get sa postgres-vault -o jsonpath="{.secrets[*]['name']}") export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data.token}" | base64 --decode; echo) export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo) export K8S_HOST=$(kubectl exec consul-consul-0 -- sh -c 'echo 
$KUBERNETES_SERVICE_HOST') </code></pre> <p><strong>Create kubernetes auth method</strong></p> <pre><code> vault auth enable kubernetes vault write auth/kubernetes/config \ token_reviewer_jwt="$SA_JWT_TOKEN" \ kubernetes_host="https://$K8S_HOST:443" \ kubernetes_ca_cert="$SA_CA_CRT" </code></pre> <p>I expect to gain access to the secrets stored in vault from my microservices, though I'm not sure that method works when vault is deployed in a dedicated cluster.</p> <p>I guess there might be something else needed to join both clusters? Maybe using consul?</p>
<p>You are 80% there. The next steps are:</p> <ol> <li>Run a deployment with the correct service account.</li> <li>Log in/authenticate to vault using the Kubernetes authentication method and get a relevant vault token.</li> <li>Retrieve secrets.</li> </ol> <p>This is an example of adding the service account to your deployment:</p> <pre><code>apiVersion: apps/v1beta2 kind: Deployment metadata: name: xxx labels: app: xxx spec: replicas: 1 template: metadata: ... spec: serviceAccountName: vault-auth ... </code></pre> <p>To log in using the Kubernetes auth method, see <a href="https://www.vaultproject.io/api/auth/kubernetes/index.html#login" rel="nofollow noreferrer">here</a>. I would suggest that you take a look at this <a href="https://github.com/WealthWizardsEngineering/kube-vault-auth-init" rel="nofollow noreferrer">project</a> for a concrete example.</p> <p>Once you get a token with the correct permissions, you can use the vault CLI / REST API to access secrets.</p>
<p>I want to get the logs and description of my pod in Kubernetes using the Python client. In a Kubernetes cluster we can use</p> <pre><code>kubectl logs &lt;NAME_OF_POD&gt; kubectl describe pods &lt;NAME_OF_pod&gt; </code></pre> <p>But I want the equivalent of these commands in the Python client of Kubernetes. What should I do?</p>
<p>You can read the logs of a pod using the following code:</p> <pre><code>from kubernetes.client.rest import ApiException
from kubernetes import client, config

config.load_kube_config()

pod_name = "counter"
try:
    api_instance = client.CoreV1Api()
    api_response = api_instance.read_namespaced_pod_log(name=pod_name, namespace='default')
    print(api_response)
except ApiException as e:
    print('Found exception in reading the logs: %s' % e)
</code></pre> <p>The above code works perfectly fine for getting the logs of the pod.</p> <p>To get the output of <code>kubectl describe pod</code>, all the information provided is in the <code>read_namespaced_pod</code> function. It has all the information you require, and you can use that information in whatever way you require. You can edit the above code and use <code>read_namespaced_pod</code> in place of <code>read_namespaced_pod_log</code> to get the info.</p>
<p>I am new to the whole container-orchestration world and was wondering if the microservices I deploy with Kubernetes need a secure connection too or if the Ingress TLS termination is enough.</p> <p>For example I have an NGINX microservice with currently no SSL/TLS setup whatsoever. When users communicate with this microservice, the connection is encrypted because I set up an Ingress with TLS termination.</p> <p>Are there any security drawbacks in such a scenario? I find it very hard to find proper literature regarding this topic.</p>
<p>It will definitely work, I mean ingress with TLS termination. It depends on the security requirements of your project. If you are OK with unencrypted traffic inside your cluster, you can go with it.</p> <p>Though, if you will be running micro-services in production, the best practice for secure service-to-service communication is an <a href="https://istio.io/docs/concepts/security/" rel="nofollow noreferrer">Istio mesh</a> with <a href="https://istio.io/docs/concepts/security/#mutual-tls-authentication" rel="nofollow noreferrer">mutual TLS authentication</a>.</p> <p>What it does is inject a sidecar proxy (Envoy) for each of your services: <a href="https://i.stack.imgur.com/BUt8E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BUt8E.png" alt="enter image description here"></a></p>
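<p>For illustration, in newer Istio versions (1.5+) strict mutual TLS can be turned on mesh-wide with a single <code>PeerAuthentication</code> resource; a sketch (applying it to the root namespace, usually <code>istio-system</code>, makes it mesh-wide):</p> <pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
</code></pre>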
<p>I have run into an issue with Ambassador (Envoy). Ambassador doesn't simultaneously support HTTP and HTTPS, so as a workaround I have to deploy two sets of Ambassadors (one for HTTP and the other for HTTPS). I have deployed the two sets:</p> <pre><code>NAME READY STATUS RESTARTS AGE pod/ambassador-k7nlr 2/2 Running 0 55m pod/ambassador-t2dbm 2/2 Running 0 55m pod/ambassador-tls-7h6td 2/2 Running 0 107s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ambassador-admin NodePort 10.233.58.170 &lt;none&gt; 8877:30857/TCP 18d service/ambassador-admin-tls NodePort 10.233.33.29 &lt;none&gt; 8878:32339/TCP 28m service/ambassador-monitor ClusterIP None &lt;none&gt; 9102/TCP 18d NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ambassador 2 2 2 2 2 node-role.kubernetes.io/node= 58m daemonset.apps/ambassador-tls 1 1 1 1 1 node-role.kubernetes.io/node=tls 107s </code></pre> <p>Below are the two pods I want to use for HTTP:</p> <pre><code>pod/ambassador-k7nlr 2/2 Running 0 55m pod/ambassador-t2dbm 2/2 Running 0 55m </code></pre> <p>And this one for HTTPS:</p> <pre><code>pod/ambassador-tls-7h6td 2/2 Running 0 107s </code></pre> <p>Below are my service annotations:</p> <pre><code>getambassador.io/config: | --- apiVersion: ambassador/v0 kind: Module name: tls config: server: secret: dashboard-certs --- apiVersion: ambassador/v0 kind: Mapping name: dashboard_test_mapping host: dashboard.example.com service: https://dashboard.test.svc.cluster.local prefix: / </code></pre> <p>Here the <code>apiVersion: ambassador/v0</code> refers to both Ambassador sets, so whatever changes I make in the service annotation will be reflected in both sets of Ambassadors.</p> <p>I want to set this service annotation for a specific Ambassador daemonset (HTTPS).</p> <p>Any suggestions?</p>
<p>You can use <code>AMBASSADOR_ID</code> for that, like this:</p> <pre><code>getambassador.io/config: | --- ambassador_id: ambassador-1 apiVersion: ambassador/v0 kind: Module name: tls config: server: secret: dashboard-certs --- ambassador_id: ambassador-1 apiVersion: ambassador/v0 kind: Mapping name: dashboard_test_mapping host: dashboard.example.com service: https://dashboard.test.svc.cluster.local prefix: / </code></pre> <p>and then specify this id in env variables of DaemonSet:</p> <pre><code>env: - name: AMBASSADOR_ID value: ambassador-1 </code></pre> <p>Refer to the documentation: <a href="https://www.getambassador.io/reference/running/#ambassador_id" rel="nofollow noreferrer">https://www.getambassador.io/reference/running/#ambassador_id</a></p>
<p>I have set up my frontend cluster in Kubernetes and exposed it as <code>frontend.loaner.com</code>, and I want to point the DNS records of both <code>johndoe.loaner.com</code> and <code>janedoe.loaner.com</code> at <code>frontend.loaner.com</code>.</p> <p>Is it possible to point two DNS records at one running server and have it work while still keeping the original hostname?</p> <p>I read about CNAME records, but a CNAME will redirect me to <code>frontend.loaner.com</code>.</p>
<p>You can do it with a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>. Basically, something like this:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: test-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: frontend.loaner.com http: paths: - path: / backend: serviceName: backend1 servicePort: 80 - host: johndoe.loaner.com http: paths: - path: / backend: serviceName: backend2 servicePort: 80 - host: janedoe.loaner.com http: paths: - path: / backend: serviceName: backend3 servicePort: 80 </code></pre> <p>If both <code>johndoe.loaner.com</code> and <code>janedoe.loaner.com</code> should serve the same application, point each host rule at the same <code>serviceName</code> instead of separate backends.</p> <p>The above Ingress resource assumes you are using an <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx Ingress Controller</a> in your cluster.</p>
<p>I am trying to deploy a docker image from an insecure repository using Kubernetes. I have made a couple of configuration settings in order to declare the repository as insecure, and could also verify that the repository is marked insecure.</p> <p>Still, while trying to deploy this sample application from Kubernetes in all 3 ways (via the Dashboard, a deployment.yaml, and a secret pod creation), I am seeing the below error when pulling the docker image from the insecure private registry. Request to provide some help in resolving the same.</p> <pre><code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 41m default-scheduler Successfully assigned registry/private-insecure-reg to kube-node-2 Warning Failed 40m (x2 over 40m) kubelet, kube-node-2 Failed to pull image "x.x.x.x:5000/x-xxx": rpc error: code = Unknown desc = Error response from daemon: manifest for x.x.x.x:5000/x-xxx:latest not found Normal BackOff 39m (x6 over 41m) kubelet, kube-node-2 Back-off pulling image "127.0.0.1:5000/my-ubuntu" Normal Pulling 39m (x4 over 41m) kubelet, kube-node-2 pulling image "127.0.0.1:5000/my-ubuntu" Warning Failed 39m (x2 over 41m) kubelet, kube-node-2 Failed to pull image "x.x.x.x:5000/x-xxx": rpc error: code = Unknown desc = Error response from daemon: received unexpected HTTP status: 502 Bad Gateway Warning Failed 39m (x4 over 41m) kubelet, kube-node-2 Error: ErrImagePull Warning Failed 52s (x174 over 41m) kubelet, kube-node-2 Error: ImagePullBackOff </code></pre>
<p>1) You need to configure the docker service to use the insecure registry by editing the file <code>/etc/default/docker</code> and updating <code>DOCKER_OPTS</code>, e.g.</p> <pre><code>DOCKER_OPTS='--insecure-registry 127.0.0.1:5000' </code></pre> <p>2) Restart docker:</p> <pre><code>sudo systemctl restart docker </code></pre>
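<p>Note: on systemd-based distributions the docker service often ignores <code>/etc/default/docker</code>. In that case, declare the registry in <code>/etc/docker/daemon.json</code> instead, on every node that pulls images (replace the address with your registry):</p> <pre><code>{
  "insecure-registries": ["x.x.x.x:5000"]
}
</code></pre> <p>followed by <code>sudo systemctl restart docker</code> on each node.</p>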
<p><strong>How can I attach 100GB Persistent Volume Disk to Each Node in the AKS Kubernetes Cluster?</strong></p> <p>We are using Kubernetes on Azure using AKS. </p> <p>We have a scenario where we need to attach Persistent Volumes to each Node in our AKS Cluster. We run 1 Docker Container on each Node in the Cluster. </p> <p>The reason to attach volumes Dynamically is to increase the IOPS available and available amount of Storage that each Docker container needs to do its job.</p> <p>The program running inside of each Docker container works against very large input data files (10GB) and writes out even larger output files(50GB).</p> <p>We could mount Azure File Shares, but Azure FileShares is limited to 60MB/ps which is too slow for us to move around this much raw data. Once the program running in the Docker image has completed, it will move the output file (50GB) to Blob Storage. The total of all output files may exceed 1TB from all the containers. </p> <p>I was thinking that if we can attach a Persistent Volume to each Node we can increase our available disk space as well as the IOPS without having to go to a high vCPU/RAM VM configuration (ie. DS14_v2). Our program is more I/O intensive vs CPU.</p> <p>All the Docker images running in the Pod are exactly the same where they read a message from a Queue that tells it a specific input file to work against. </p> <p>I've followed the docs to create a StorageClass, Persistent Volume Claims and Persistent Volume and run this against 1 POD. 
<a href="https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv</a></p> <p>However, when I create a Deployment and Scale the number of Pods from 1 to 2 I receive the error (in production we'd scale to as many nodes as necessary ~100)</p> <blockquote> <p>Multi-Attach error for volume "pvc-784496e4-869d-11e8-8984-0a58ac1f1e06" Volume is already used by pod(s) pv-deployment-67fd8b7b95-fjn2n</p> </blockquote> <p>I realize that an Azure Disk can only be attached to a SingleNode (ReadWriteOnce) however I'm not sure how to create multiple disks and attach them to each Node at the time we load up the Kubernetes Cluster and begin our work.</p> <p><strong>Persistent Volume Claim:</strong> </p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: azure-managed-disk spec: accessModes: - ReadWriteOnce storageClassName: managed-premium resources: requests: storage: 100Gi </code></pre> <p><strong>This is my Deployment:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: pv-deployment labels: app: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: myfrontend image: nginx volumeMounts: - name: volume mountPath: /mnt/azure resources: limits: cpu: ".7" memory: "2.5G" requests: cpu: ".7" memory: "2.5G" volumes: - name: volume persistentVolumeClaim: claimName: azure-managed-disk </code></pre> <p>If I knew that I was going to scale to 100 Nodes, would I have to create a .yaml files with 100 Deployments and be explicit for each Deployment to use a specific Volume Claim?</p> <p>For example in my volume claim I'd have azure-claim-01, azure-claim-02, etc. 
and in each Deployment I would have to make claim to each named Volume Claim</p> <pre><code>volumes: - name: volume persistentVolumeClaim: claimName: azure-claim-01 </code></pre> <hr> <p>I can't quite get my head around how I can do all this dynamically? </p> <p>Can you recommend a better way to achieve the desired result?</p>
<p>You should use the <code>StatefulSet</code> and <code>volumeClaimTemplates</code> configuration like the following:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx spec: type: LoadBalancer ports: - port: 80 targetPort: 80 selector: app: nginx --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: serviceName: "nginx" replicas: 4 updateStrategy: type: RollingUpdate selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: k8s.gcr.io/nginx-slim:0.8 ports: - containerPort: 80 volumeMounts: - name: persistent-storage mountPath: /usr/share/nginx/html volumeClaimTemplates: - metadata: name: persistent-storage annotations: volume.beta.kubernetes.io/storage-class: hdd spec: accessModes: [ "ReadWriteOnce" ] resources: requests: storage: 2Gi --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: hdd provisioner: kubernetes.io/azure-disk parameters: skuname: Standard_LRS kind: managed cachingMode: ReadOnly </code></pre> <p>You will get a Persistent Volume for every Pod (replica):</p> <p><code>kubectl get pv</code></p> <pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-0e651011-7647-11e9-bbf5-c6ab19063099 2Gi RWO Delete Bound default/persistent-storage-web-0 hdd 51m pvc-17181607-7648-11e9-bbf5-c6ab19063099 2Gi RWO Delete Bound default/persistent-storage-web-1 hdd 49m pvc-4d488893-7648-11e9-bbf5-c6ab19063099 2Gi RWO Delete Bound default/persistent-storage-web-2 hdd 48m pvc-6aff2a4d-7648-11e9-bbf5-c6ab19063099 2Gi RWO Delete Bound default/persistent-storage-web-3 hdd 47m </code></pre> <p>And every Pod will get a dedicated Persistent Volume Claim:</p> <p><code>kubectl get pvc</code></p> <pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistent-storage-web-0 Bound pvc-0e651011-7647-11e9-bbf5-c6ab19063099 2Gi RWO hdd 55m persistent-storage-web-1 Bound pvc-17181607-7648-11e9-bbf5-c6ab19063099 2Gi RWO hdd 48m 
persistent-storage-web-2 Bound pvc-4d488893-7648-11e9-bbf5-c6ab19063099 2Gi RWO hdd 46m persistent-storage-web-3 Bound pvc-6aff2a4d-7648-11e9-bbf5-c6ab19063099 2Gi RWO hdd 45m </code></pre>
<p>Here is my deployment &amp; service file for Django. The 3 pods generated from deployment.yaml works, but the resource request and limits are being ignored.</p> <p>I have seen a lot of tutorials about applying resource specifications on Pods but not on Deployment files, is there a way around it?</p> <p>Here is my yaml file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: djangoapi type: web name: djangoapi namespace: "default" spec: replicas: 3 template: metadata: labels: app: djangoapi type: web spec: containers: - name: djangoapi image: wbivan/app:v0.8.1a imagePullPolicy: Always args: - gunicorn - api.wsgi - --bind - 0.0.0.0:8000 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" envFrom: - configMapRef: name: djangoapi-config ports: - containerPort: 8000 resources: {} imagePullSecrets: - name: regcred restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: djangoapi-svc namespace: "default" labels: app: djangoapi spec: ports: - port: 8000 protocol: TCP targetPort: 8000 selector: app: djangoapi type: web type: NodePort </code></pre>
<p>There is one extra <code>resources</code> attribute under your container definition, after <code>ports</code>:</p> <pre><code>resources: {} </code></pre> <p>This later, empty definition overrides the requests and limits you set earlier in the same container. Remove it and apply the manifest again.</p>
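<p>For reference, the container spec should end up with a single <code>resources</code> block; a trimmed sketch of the same container:</p> <pre><code>containers:
- name: djangoapi
  image: wbivan/app:v0.8.1a
  ports:
  - containerPort: 8000
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"
</code></pre>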
<p>I'm setting up a kafka and zookeeper cluster with high availability. I have 2 kafka brokers (pod1, pod2) and 3 zookeeper servers (pod1, pod2, pod3). The setup is working fine. When I enter one kafka broker (pod1) I'm able to produce and consume messages. But when I enter the other kafka broker (pod2) I'm not able to get any messages, even though I have set the replication factor to two. So I added volumes in the container spec, and now I'm not able to create any pod; I'm getting CrashLoopBackOff.</p> <p>When I checked the logs, the information is as follows: bad request to server, container is not able to be created.</p> <p>kafka_pod.yaml contains the kafka deployment:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka
  labels:
    app: kafka
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: kafka
      containers:
      - name: kafka
        image: wurstmeister/kafka:2.11-1.0.2
        ports:
        - containerPort: 9092
          protocol: TCP
        env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
        - name: KAFKA_OFFSET_TOPIC_REPLICATION_FACTOR
          value: "2"
        - name: KAFKA_AUTO_CREATE_TOPICS_ENABLE
          value: "true"
        - name: KAFKA_LOG_DIRS
          value: /opt/kafka
        volumeMounts:
        - name: socket
          mountPath: /var/run/docker.sock
        - name: logdir
          mountPath: /opt/kafka
      volumes:
      - name: socket
        hostPath:
          path: /var/run/docker.sock
      - name: logdir
        hostPath:
          path: ~/datadir
</code></pre> <p>zookeeper_pod.yaml contains the following:</p> <pre><code>
apiVersion: v1 kind: Service metadata: name: zookeeper spec: ports: - port: 2181 selector: app: zookeeper clusterIP: None --- apiVersion: apps/v1 kind: Deployment metadata: name: zookeeper labels: app: zookeeper spec: replicas: 3 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: nodeName: akshatha-ha containers: - name: zookeeper image: wurstmeister/zookeeper ports: - containerPort: 2181 protocol: TCP </code></pre> <p>I need to deploy the kafka with two broker and zookeeper with 3 server.When one of the server goes down the other should able to provide the data.</p>
<p>Use StatefulSets to deploy Kafka and ZooKeeper. There is a good tutorial on ZooKeeper StatefulSets on the kubernetes.io website; follow that.</p> <p>Avoid <code>hostPath</code> if you are not running a single-node cluster. Use persistent volumes or ephemeral storage instead. If you are on version 1.14, consider local persistent volumes for StatefulSets.</p>
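<p>A minimal sketch of the StatefulSet shape (names and sizes are illustrative, not a full Kafka configuration): <code>volumeClaimTemplates</code> give each broker its own PersistentVolumeClaim, so each pod keeps its data across restarts and rescheduling, which a <code>hostPath</code> cannot guarantee on a multi-node cluster:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka          # requires a matching headless Service
  replicas: 2
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
      - name: kafka
        image: wurstmeister/kafka:2.11-1.0.2
        ports:
        - containerPort: 9092
        volumeMounts:
        - name: logdir
          mountPath: /opt/kafka
  volumeClaimTemplates:       # one PVC per replica: logdir-kafka-0, logdir-kafka-1
  - metadata:
      name: logdir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
</code></pre>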
<p>I am trying to deploy <code>Nginx</code> with <code>nginx-prometheus-exporter</code>. Following is my deployment</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: namespace: local name: nginx spec: replicas: 1 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx imagePullPolicy: Always ports: - name: nginx-port containerPort: 80 protocol: TCP livenessProbe: httpGet: path: / port: nginx-port httpHeaders: - name: Host value: KubernetesLivenessProbe initialDelaySeconds: 10 timeoutSeconds: 1 periodSeconds: 15 readinessProbe: httpGet: path: / port: nginx-port httpHeaders: - name: Host value: KubernetesLivenessProbe initialDelaySeconds: 3 timeoutSeconds: 1 periodSeconds: 8 volumeMounts: - mountPath: /etc/nginx/conf.d/ # mount nginx-conf volume to /etc/nginx readOnly: true name: frontend-conf - name: nginx-exporter image: nginx/nginx-prometheus-exporter:0.3.0 resources: requests: cpu: 100m memory: 100Mi ports: - name: nginx-ex-port containerPort: 9113 args: - -nginx.scrape-uri=http://nginx.local.svc.cluster.local:80/stub_status volumes: - name: frontend-conf configMap: name: frontend-conf # place ConfigMap `nginx-conf` on /etc/nginx items: - key: frontend.local.conf path: frontend.local.conf </code></pre> <p>Single container pod works, but fails when I add <code>nginx-prometheus-exporter</code> sidecar container. 
</p> <pre><code>mackbook: xyz$ kubectl logs nginx-6dbbdb64fb-8drmc -c nginx-exporter 2019/05/14 20:17:48 Starting NGINX Prometheus Exporter Version=0.3.0 GitCommit=6570275 2019/05/14 20:17:53 Could not create Nginx Client: Failed to create NginxClient: failed to get http://nginx.local.svc.cluster.local:80/stub_status: Get http://nginx.local.svc.cluster.local:80/stub_status: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>Also was able to <code>curl http://nginx.local.svc.cluster.local</code> from another container running in different namespace.</p> <p>Anyone know what's the correct way to specify <code>-nginx.scrape-uri</code> ? </p> <p>From Nginx container </p> <pre><code>root@nginx-6fd866c4d7-t8wwf:/# curl localhost/stub_status &lt;html&gt; &lt;head&gt;&lt;title&gt;404 Not Found&lt;/title&gt;&lt;/head&gt; &lt;body&gt; &lt;h1&gt;404 Not Found&lt;/h1&gt; &lt;ul&gt; &lt;li&gt;Code: NoSuchKey&lt;/li&gt; &lt;li&gt;Message: The specified key does not exist.&lt;/li&gt; &lt;li&gt;Key: beta/stub_status&lt;/li&gt; &lt;li&gt;RequestId: 3319E5A424C7BA31&lt;/li&gt; &lt;li&gt;HostId: zVv5zuAyzqbB2Gw3w1DavEJwgq2sbgI8RPMf7RmPsNQoh/9ftxCmiSwyj/j6q/K3lxEEeUy6aZM=&lt;/li&gt; &lt;/ul&gt; &lt;hr/&gt; &lt;/body&gt; &lt;/html&gt; root@nginx-6fd866c4d7-t8wwf:/# </code></pre>
<p>The nginx container is listening on port 80, but you configured the sidecar to access port 8080. Moreover, you deployed this in the "production" namespace, but used "test" in the address for nginx.</p> <p>Try with <code>-nginx.scrape-uri=http://nginx.production.svc.cluster.local:80/stub_status</code>.</p> <p>Note that containers within a Pod share an IP address and port space, and can reach each other via <code>localhost</code>.</p> <p>So you can simply use <code>-nginx.scrape-uri=http://localhost/stub_status</code>.</p>
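<p>Whichever scrape URI you use, the nginx config itself must expose <code>stub_status</code>; the 404 from your own <code>curl localhost/stub_status</code> suggests it currently does not. A minimal sketch for the mounted <code>frontend.local.conf</code> (the location path and allow rule are assumptions to adapt):</p> <pre><code>server {
    listen 80;

    location /stub_status {
        stub_status;       # ngx_http_stub_status_module, built into the official nginx image
        allow 127.0.0.1;   # only the sidecar, via localhost
        deny all;
    }
}
</code></pre>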
<p>Our nginx-ingress log is continuously filled with this error message:</p> <pre><code> dns.lua:61: resolve(): server returned error code: 3: name error, context: ngx.timer </code></pre> <p>We created the Kubernetes cluster with Kubeadm which uses CoreDNS by default. </p> <pre><code>/data # kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE calico-node-8jr7t 2/2 Running 2 4d22h calico-node-cl5f6 2/2 Running 4 4d22h calico-node-rzt28 2/2 Running 2 4d22h coredns-fb8b8dccf-n68x9 1/1 Running 3 3d23h coredns-fb8b8dccf-x9wr4 1/1 Running 1 3d23h </code></pre> <p>It also has a kube-dns service that points to the core-dns pods.</p> <pre><code>kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 7m29s </code></pre> <p>I can't find anything else in the logs that would help me resolve this issue.</p> <p>UPDATE: </p> <p>We had a service with externalName as suggested here > <a href="https://github.com/coredns/coredns/issues/2324#issuecomment-484005202" rel="nofollow noreferrer">https://github.com/coredns/coredns/issues/2324#issuecomment-484005202</a></p>
<p>As suggested in this comment, we had a service of type "ExternalName": <a href="https://github.com/coredns/coredns/issues/2324#issuecomment-484005202" rel="nofollow noreferrer">https://github.com/coredns/coredns/issues/2324#issuecomment-484005202</a></p> <p>Once we deleted this service, we stopped getting the error. Using an IP address instead of a DNS name should work as well, but I never tried it.</p>
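<p>For anyone checking their own cluster, a service of the offending kind looks like this (names are illustrative). If the external name does not resolve, CoreDNS returns NXDOMAIN ("name error", code 3), which is exactly what the Lua resolver inside nginx-ingress keeps logging:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: some-host.example.com   # if this name doesn't resolve, lookups fail with code 3
</code></pre>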
<p>I'm creating a Kubernetes Service Account using terraform and trying to output the token from the Kubernetes Secret that it creates.</p> <pre><code>resource "kubernetes_service_account" "ci" { metadata { name = "ci" } } data "kubernetes_secret" "ci" { metadata { name = "${kubernetes_service_account.ci.default_secret_name}" } } output "ci_token" { value = "${data.kubernetes_secret.ci.data.token}" } </code></pre> <p>According to <a href="https://www.terraform.io/docs/configuration-0-11/data-sources.html#data-source-lifecycle" rel="noreferrer">the docs</a> this should make the data block defer getting its values until the 'apply' phase because of the computed value of <code>default_secret_name</code>, but when I run <code>terraform apply</code> it gives me this error:</p> <pre><code>Error: Error running plan: 1 error(s) occurred: * output.ci_token: Resource 'data.kubernetes_secret.ci' does not have attribute 'data.token' for variable 'data.kubernetes_secret.ci.data.token' </code></pre> <p>Adding <code>depends_on</code> to the <code>kubernetes_secret</code> data block doesn't make any difference.</p> <p>If I comment out the <code>output</code> block, it creates the resources fine, then I can uncomment it, apply again, and everything acts normally, since the Kubernetes Secret exists already.</p> <p>I've also made a Github issue <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/436" rel="noreferrer">here</a>.</p> <p><strong>Update</strong></p> <p>The accepted answer does solve this problem, but I omitted another output to simplify the question, which doesn't work with this solution:</p> <pre><code>output "ci_crt" { value = "${data.kubernetes_secret.ci.data.ca.crt}" } </code></pre> <pre><code>* output.ci_ca: lookup: lookup failed to find 'ca.crt' in: ${lookup(data.kubernetes_secret.ci.data, "ca.crt")} </code></pre> <p>This particular issue is <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/334" 
rel="noreferrer">reported here</a> due to <a href="https://github.com/hashicorp/terraform/issues/10876" rel="noreferrer">a bug in Terraform</a>, which is fixed in version 0.12.</p>
<p>This works:</p> <pre><code>resource &quot;kubernetes_service_account&quot; &quot;ci&quot; { metadata { name = &quot;ci&quot; } } data &quot;kubernetes_secret&quot; &quot;ci&quot; { metadata { name = kubernetes_service_account.ci.default_secret_name } } output &quot;ci_token&quot; { sensitive = true value = lookup(data.kubernetes_secret.ci.data, &quot;token&quot;) } </code></pre>
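<p>The same <code>lookup()</code> form should also cover the <code>ca.crt</code> output from the question, since the key is passed as a plain string rather than parsed as attribute access (sketch, assuming Terraform 0.12+, where the underlying bug is fixed):</p> <pre><code>output "ci_ca_crt" {
  sensitive = true
  value     = lookup(data.kubernetes_secret.ci.data, "ca.crt")
}
</code></pre>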
<p>I have a question about how kubernetes decides the serving pod when there are several replicas of the pod.</p> <p>For Instance, let's assume I have a web application running on a k8s cluster as multiple pod replicas and they are exposed by a service. </p> <p>When a client sends a request it goes to service and kube-proxy. But where and when does kubernetes make a decision about which pod should serve the request?</p> <p>I want to know the internals of kubernetes for this matter. Can we control this? Can we decide which pod should serve based on client requests and custom conditions?</p>
<blockquote> <p>can we decide which pod should serve based on client requests and custom conditions?</p> </blockquote> <p>Since <strong>kube-proxy</strong> works at <strong>L4</strong> (transport-layer) load balancing, you can only control the session based on <strong>client IP</strong>; it does not read the headers of the client request.</p> <p>You can control the session with the <strong>service.spec.sessionAffinityConfig</strong> field in the <strong>Service object</strong>.</p> <p>The following command provides the explanation: <code>kubectl explain service.spec.sessionAffinityConfig</code></p> <p>The following paragraph and link provide a detailed answer:</p> <blockquote> <p>Client-IP based session affinity can be selected by setting service.spec.sessionAffinity to “ClientIP” (the default is “None”), and you can set the max session sticky time by setting the field service.spec.sessionAffinityConfig.clientIP.timeoutSeconds if you have already set service.spec.sessionAffinity to “ClientIP” (the default is “10800”). See <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">Virtual IPs and service proxies</a>.</p> </blockquote> <p>The Service object would look like this:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: my-service spec: selector: app: my-app ports: - name: http protocol: TCP port: 80 targetPort: 80 sessionAffinity: ClientIP sessionAffinityConfig: clientIP: timeoutSeconds: 10000 </code></pre>
<p>I am trying to deploy an docker image from an insecure Repository using Kubernetes. I have made couple of configuration settings in order to declare the repository as insecure and could also verify that the Repository is made Insecure.</p> <p>Still while trying to deploy this sample application from the Kubernetes via </p> <p>Dashboard / deployment.yaml / secret pod creation from all the 3 ways while trying to deploy the docker image from the insecure private registry i am seeing the below error . Request to provide some help in resolving the same.</p> <pre><code> Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 41m default-scheduler Successfully assigned registry/private-insecure-reg to kube-node-2 Warning Failed 40m (x2 over 40m) kubelet, kube-node-2 Failed to pull image "x.x.x.x:5000/x-xxx": rpc error: code = Unknown desc = Error response from daemon: manifest for x.x.x.x:5000/x-xxx:latest not found Normal BackOff 39m (x6 over 41m) kubelet, kube-node-2 Back-off pulling image "127.0.0.1:5000/my-ubuntu" Normal Pulling 39m (x4 over 41m) kubelet, kube-node-2 pulling image "127.0.0.1:5000/my-ubuntu" Warning Failed 39m (x2 over 41m) kubelet, kube-node-2 Failed to pull image "x.x.x.x:5000/x-xxx": rpc error: code = Unknown desc = Error response from daemon: received unexpected HTTP status: 502 Bad Gateway Warning Failed 39m (x4 over 41m) kubelet, kube-node-2 Error: ErrImagePull Warning Failed 52s (x174 over 41m) kubelet, kube-node-2 Error: ImagePullBackOff </code></pre>
<p>You used plain Docker to set up the registry on the Kubernetes master node. Therefore it is reachable via localhost or 127.0.0.1 only on the master node itself. You are trying to pull the image from other nodes, according to your log <code>kube-node-2</code>. On that node there is no registry on localhost. But since you receive a bad-gateway error, it seems something is listening on port 5000, just not the registry.</p> <p>This is how you can solve it: add a DNS name for the IP of the master node, so each node can reach the registry using a plain name. If you don't want to configure TLS certificates, you must configure each container daemon to treat your registry as insecure (no HTTPS). See the answer from A_Suh for the configuration of the Docker daemon.</p>
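<p>For reference, marking a registry as insecure is typically done in <code>/etc/docker/daemon.json</code> on every node, followed by a restart of the Docker daemon (the registry address below is an assumption; substitute the DNS name or IP of your master node):</p> <pre><code>{
  "insecure-registries": ["registry.example.internal:5000"]
}
</code></pre> <p>All nodes must then reference the image by that same address, e.g. <code>registry.example.internal:5000/my-ubuntu</code>, not <code>127.0.0.1:5000/my-ubuntu</code>.</p>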
<p>Currently I am trying to Migrate a site that was living on an Apache Load balanced Server to my k8s cluster. However the application was set up strangely with a proxypass and proxyreversepass like so:</p> <pre><code>ProxyPass /something http://example.com/something ProxyPassReverse /something http://example.com/something </code></pre> <p>And I would like to mimic this in an Nginx Ingress</p> <p>First I tried using the <code>rewrite-target</code> annotation however that does not keep the <code>Location</code> header which is necessary to get the application running again. </p> <p>Then I tried to get the <code>proxy-redirect-to/from</code> annotation in place inside a specific location block like so:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gpg-app-ingress annotations: nginx.ingress.kubernetes.io/proxy-redirect-from: http://originalapp.com/something nginx.ingress.kubernetes.io/proxy-redirect-to: http://example.com/something spec: rules: - host: example.com http: paths: - path: /something backend: serviceName: example-com servicePort: 80 </code></pre> <p>I would like to be able to instead use a custom <code>proxy_pass</code> variable but it doesn't seem like its possible.</p> <p>What would be the best way to mimic this proxy pass?</p>
<p>Firstly, you can use custom configuration for your nginx ingress controller; the documentation can be found <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/configuration-snippets/#ingress" rel="noreferrer">here</a>.</p> <p>Also, if you just want to use the nginx ingress controller as a reverse proxy, each ingress rule already creates a <code>proxy_pass</code> directive to the relevant upstream/backend service.</p> <p>If the paths are the same for your rule and the backend service, then you don't have to specify a rewrite rule, only the path for the backend service. But if the paths differ, consider using the <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotation, otherwise you will get a <code>404</code> from the backend.</p> <p>So to route a request coming to the frontend at <code>http://example.com/something</code> to the backend <code>example-com/something</code>, your ingress rule should be similar to the one below:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gpg-app-ingress annotations: kubernetes.io/ingress.class: nginx #nginx.ingress.kubernetes.io/rewrite-target: /different-path spec: rules: - host: example.com http: paths: - path: /something backend: serviceName: example-com servicePort: 80 </code></pre> <p>For more explanation about the annotations, check <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md" rel="noreferrer">Nginx Ingress Annotations</a>.</p> <p>Also, check the logs of the nginx-ingress-controller pod if something goes wrong:</p> <pre><code>kubectl logs nginx-ingress-controller-xxxxx </code></pre> <p>Hope it helps!</p>
<p>I have multiple APIs all listening on '/api' and a web front end listening on '/'.</p> <p>Is there a way which I can write my ingress definition to rewrite the paths to something like the following?</p> <pre><code>/api/ -&gt; /api/ on service1 /api2/api/ -&gt; /api/ on service2 /api3/api/ -&gt; /api/ on service3 / -&gt; / on service4 </code></pre> <p>I know I can change the APIs to listen to something else but don't want to do that. I know I can also just rewrite all to /api/ and let service3 act as the default but there may be other services which need routing elsewhere in the future.</p> <p>I've heard that you could use multiple ingresses but I'm not sure how that would affect performance and if it's best practice to do so.</p> <p>Also, is there any way to debug which route goes to which service?</p> <p>Thanks, James</p>
<p>With help from @Rahman - see other answer. I've managed to get this to work with a single ingress.</p> <p>I've had to post this as an additional answer because of the character limit.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-name annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: tls: - secretName: tls-secret rules: - host: localhost http: paths: - path: /(api/.*) backend: serviceName: service1 servicePort: 80 - path: /api2/(api.*) backend: serviceName: service2 servicePort: 80 - path: /api3/(api.*) backend: serviceName: service3 servicePort: 80 - path: /(.*) backend: serviceName: service4 servicePort: 80 </code></pre> <p>Just for context for anyone else stumbling upon this in the future, service 1 is a main API, service 2 and 3 are other APIs under another subdomain and service 4 is a web frontend.</p>
<p>I am using the Gitlab Auto DevOps CI pipeline and I want to remove a deployment using helm.</p> <p>I try to connect to tiller like this <code>helm init --client-only --tiller-namespace=gitlab-managed-apps</code> which results in </p> <p><code>$HELM_HOME has been configured at /Users/marvin/.helm. Not installing Tiller due to 'client-only' flag having been set Happy Helming!</code></p> <p><code>helm list --namespace=gitlab-managed-apps</code>returns <code>Error: could not find tiller</code></p>
<p>I had the same problem. I found the solution for listing the releases here: <a href="https://forum.gitlab.com/t/orphaned-apps-in-gitlab-managed-apps-namespace/22717/9" rel="nofollow noreferrer">https://forum.gitlab.com/t/orphaned-apps-in-gitlab-managed-apps-namespace/22717/9</a></p> <pre><code>export TILLER_NAMESPACE="gitlab-managed-apps" kubectl get secrets/tiller-secret -n "$TILLER_NAMESPACE" -o "jsonpath={.data['ca\.crt']}" | base64 --decode &gt; tiller-ca.crt kubectl get secrets/tiller-secret -n "$TILLER_NAMESPACE" -o "jsonpath={.data['tls\.crt']}" | base64 --decode &gt; tiller.crt kubectl get secrets/tiller-secret -n "$TILLER_NAMESPACE" -o "jsonpath={.data['tls\.key']}" | base64 --decode &gt; tiller.key helm list --tiller-connection-timeout 30 --tls --tls-ca-cert tiller-ca.crt --tls-cert tiller.crt --tls-key tiller.key --all --tiller-namespace gitlab-managed-apps </code></pre> <p>You can then run:</p> <pre><code>helm delete &lt;name&gt; [--purge] --tiller-connection-timeout 30 --tls --tls-ca-cert tiller-ca.crt --tls-cert tiller.crt --tls-key tiller.key --tiller-namespace gitlab-managed-apps </code></pre> <p>Edit:</p> <p>@mrvnklm proposed the <code>-D</code> option for base64, but the uppercase flag does not work in my case. After checking, I guess it is meant for macOS users (<a href="https://www.unix.com/man-page/osx/1/base64/" rel="nofollow noreferrer">man page base64 osx</a>); on Linux it seems to be <code>-d</code> (<a href="https://linux.die.net/man/1/base64" rel="nofollow noreferrer">man page linux</a>). Changed to <code>--decode</code>, which both accept, per mrvnklm's comment.</p>
<p>I have a Jenkins deployment pipeline which involves kubernetes plugin. Using kubernetes plugin I create a slave pod for building a node application using <strong>yarn</strong>. The requests and limits for CPU and Memory are set.</p> <p>When the Jenkins master schedules the slave, sometimes (as I haven’t seen a pattern, as of now), the pod makes the entire node unreachable and changes the status of node to be Unknown. On careful inspection in Grafana, the CPU and Memory Resources seem to be well within the range with no visible spike. The only spike that occurs is with the Disk I/O, which peaks to ~ 4 MiB.</p> <p>I am not sure if that is the reason for the node unable to address itself as a cluster member. I would be needing help in a few things here:</p> <p>a) How to diagnose in depth the reasons for node leaving the cluster.</p> <p>b) If, the reason is Disk IOPS, is there any default requests, limits for IOPS at Kubernetes level?</p> <p>PS: I am using EBS (gp2)</p>
<p>Considering that the node was previously working and only recently stopped showing the Ready status, restart your kubelet service. Just SSH into the affected node and execute:</p> <pre><code>/etc/init.d/kubelet restart </code></pre> <p>(On systemd-based distributions, the equivalent is <code>systemctl restart kubelet</code>.)</p> <p>Back on your master node, run <code>kubectl get nodes</code> to check whether the node is working now.</p>
<p>So, I created a postgreSQL instance in Google Cloud, and I have a Kubernetes Cluster with containers that I would like to connect to it. I know that the cloud sql proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.</p> <p>I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is. </p> <p>And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about have to modify firewall rules to make this work, but I tried that anyway, finding the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables, but my attempts failed.</p> <p>Beyond going to the cloud sql sidecar, does anyone have an idea why this would not work?</p> <p>Thanks. </p>
<p>Does your GKE cluster meet the <a href="https://cloud.google.com/sql/docs/mysql/private-ip#environment_requirements" rel="nofollow noreferrer">environment requirements</a> for private IP? It needs to be a VPC enabled cluster on the same VPC and region as your Cloud SQL instance. </p>
<p>This is an awfully basic question, but I just don't know the answer. I have an app which should have three containers -- front end, back end and database containers.</p> <p>Now they serve on different ports request data from different ports.</p> <p>So I read that in a Pod, it's a local network and the containers can communicate. Does nginx come in to this? My understanding is that it does not as the pod manages the comms between the containers. My understanding is that nginx only is required for serving outside requests and load balancing across a cluster of identical containers round-robin style.</p> <p>If someone could help me out with understanding this I'd be ver grateful.</p>
<ol> <li>You deploy FE, BE, and DB in different pods (different deployments) to scale/manage them separately. Better even in different namespaces.</li> <li>Create a k8s <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> of type ClusterIP for BE &amp; DB. Use k8s DNS resolver to access them - <code>service-name.namespace.svc.cluster.local</code></li> <li>Create a k8s <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> of type LoadBalancer or NodePort for FE to expose it outside of k8s. Use Load Balancer address or node-ip:node port to access it.</li> </ol>
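<p>A minimal sketch of item 2, assuming a backend Deployment whose pods are labeled <code>app: backend</code> in a namespace <code>myapp</code> (all names here are illustrative); the frontend would then reach it at <code>backend.myapp.svc.cluster.local:8080</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: myapp
spec:
  type: ClusterIP        # the default; reachable only from inside the cluster
  selector:
    app: backend         # must match the pod labels of the backend Deployment
  ports:
  - port: 8080           # port callers use via the service DNS name
    targetPort: 8080     # containerPort on the backend pods
</code></pre> <p>An analogous manifest with <code>type: LoadBalancer</code> (or <code>NodePort</code>) covers item 3 for the frontend.</p>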
<p>I'm getting the <code>Failed to created node environment</code> error with an elasticsearch docker image:</p> <pre><code>[unknown] uncaught exception in thread [main] org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Failed to create node environment </code></pre> <p>The persistent volume for elasticsearch data is at <code>/mnt/volume/elasticsearch-data</code>.</p> <p>I'm able to solve this problem by <code>ssh</code> into the remote machine and run <code>chown 1000:1000 /mnt/volume/elasticsearch-data</code>. But I don't want to do it manually. How can I solve this privilege issue using the <code>deployment.yaml</code> file?</p> <p>I've read that using <code>fsGroup: 1000</code> in <code>securityContext</code> should solve the problem, but it isn't working for me.</p> <p>deployment.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: elasticsearch spec: replicas: 1 template: metadata: labels: app: elasticsearch spec: containers: - name: elasticsearch image: me-name/elasticsearch:6.7 imagePullPolicy: "IfNotPresent" ports: - containerPort: 9200 envFrom: - configMapRef: name: elasticsearch-config volumeMounts: - mountPath: /usr/share/elasticsearch/data name: elasticsearch-volume securityContext: runAsUser: 1000 fsGroup: 1000 capabilities: add: - IPC_LOCK - SYS_RESOURCE volumes: - name: elasticsearch-volume persistentVolumeClaim: claimName: elasticsearch-pv-claim lifecycle: postStart: exec: command: ["/bin/sh", "-c", "sysctl -w vm.max_map_count=262144"] </code></pre> <p>storage.yaml:</p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: elasticsearch-pv-volume labels: type: local app: elasticsearch spec: storageClassName: manual capacity: storage: 5Gi accessModes: - ReadWriteMany hostPath: path: "/mnt/volume/elasticsearch-data" persistentVolumeReclaimPolicy: Delete --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: elasticsearch-pv-claim labels: app: elasticsearch spec: 
storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 5Gi </code></pre>
<p>There seems to be an <a href="https://github.com/kubernetes/minikube/issues/1990" rel="nofollow noreferrer">open bug</a> regarding permissions on hostPath volumes. To work around this issue you can add an initContainer that first sets the proper permissions:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: elasticsearch spec: replicas: 1 template: metadata: labels: app: elasticsearch spec: initContainers: - name: set-permissions image: registry.hub.docker.com/library/busybox:latest command: ['sh', '-c', 'mkdir -p /usr/share/elasticsearch/data &amp;&amp; chown 1000:1000 /usr/share/elasticsearch/data' ] volumeMounts: - mountPath: /usr/share/elasticsearch/data name: elasticsearch-volume containers: - name: elasticsearch image: me-name/elasticsearch:6.7 imagePullPolicy: "IfNotPresent" ports: - containerPort: 9200 envFrom: - configMapRef: name: elasticsearch-config volumeMounts: - mountPath: /usr/share/elasticsearch/data name: elasticsearch-volume securityContext: runAsUser: 1000 fsGroup: 1000 capabilities: add: - IPC_LOCK - SYS_RESOURCE volumes: - name: elasticsearch-volume persistentVolumeClaim: claimName: elasticsearch-pv-claim lifecycle: postStart: exec: command: ["/bin/sh", "-c", "sysctl -w vm.max_map_count=262144"] </code></pre> <p><del>You are on the right track by setting the <code>fsGroup</code>, but what you are currently doing is setting the <em>user</em> to <code>1000</code> and mounting the volume with access for the <em>group</em> <code>1000</code>. What you should change is to use <code>runAsGroup: 1000</code> instead of <code>runAsUser: 1000</code>.</del></p>
<p>I created a pod 5 hours ago.Now I have error:Pull Back Off These are events from describe pod</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4h51m default-scheduler Successfully assigned default/nodehelloworld.example.com to minikube Normal Pulling 4h49m (x4 over 4h51m) kubelet, minikube pulling image "milenkom/docker-demo" Warning Failed 4h49m (x4 over 4h51m) kubelet, minikube Failed to pull image "milenkom/docker-demo": rpc error: code = Unknown desc = Error response from daemon: manifest for milenkom/docker-demo:latest not found Warning Failed 4h49m (x4 over 4h51m) kubelet, minikube Error: ErrImagePull Normal BackOff 4h49m (x6 over 4h51m) kubelet, minikube Back-off pulling image "milenkom/docker-demo" Warning Failed 4h21m (x132 over 4h51m) kubelet, minikube Error: ImagePullBackOff Warning FailedMount 5m13s kubelet, minikube MountVolume.SetUp failed for volume "default-token-zpl2j" : couldn't propagate object cache: timed out waiting for the condition Normal Pulling 3m34s (x4 over 5m9s) kubelet, minikube pulling image "milenkom/docker-demo" Warning Failed 3m32s (x4 over 5m2s) kubelet, minikube Failed to pull image "milenkom/docker-demo": rpc error: code = Unknown desc = Error response from daemon: manifest for milenkom/docker-demo:latest not found Warning Failed 3m32s (x4 over 5m2s) kubelet, minikube Error: ErrImagePull Normal BackOff 3m5s (x6 over 5m1s) kubelet, minikube Back-off pulling image "milenkom/docker-demo" Warning Failed 3m5s (x6 over 5m1s) kubelet, minikube Error: ImagePullBackOff </code></pre> <p>Images on my desktop</p> <pre><code>docker images REPOSITORY TAG IMAGE ID CREATED SIZE milenkom/docker-demo tagname 08d27ff00255 6 hours ago 659MB </code></pre> <p>Following advices from Max and Shanica I made a mess when tagging</p> <pre><code>docker tag 08d27ff00255 docker-demo:latest </code></pre> <p>Works OK,but when I try docker push docker-demo:latest The push refers to repository 
[docker.io/library/docker-demo]</p> <pre><code>e892b52719ff: Preparing 915b38bfb374: Preparing 3f1416a1e6b9: Preparing e1da644611ce: Preparing d79093d63949: Preparing 87cbe568afdd: Waiting 787c930753b4: Waiting 9f17712cba0b: Waiting 223c0d04a137: Waiting fe4c16cbf7a4: Waiting denied: requested access to the resource is denied </code></pre> <p>although I am logged in</p> <p>Output docker inspect image 08d27ff00255</p> <pre><code>[ { "Id": "sha256:08d27ff0025581727ef548437fce875d670f9e31b373f00c2a2477f8effb9816", "RepoTags": [ "docker-demo:latest", "milenkom/docker-demo:tagname" ], </code></pre> <p>Why does it fail assigning pod now?</p>
<pre><code>manifest for milenkom/docker-demo:latest not found </code></pre> <p>Looks like there's no <code>latest</code> tag in the image you want to pull: <a href="https://hub.docker.com/r/milenkom/docker-demo/tags" rel="nofollow noreferrer">https://hub.docker.com/r/milenkom/docker-demo/tags</a>.</p> <p>Try an existing tag instead.</p> <p>UPD (based on the question update): your push was denied because <code>docker-demo:latest</code> has no username prefix, so Docker tried to push it to the official <code>library/</code> namespace, which you have no access to. Instead:</p> <ol> <li><code>docker push milenkom/docker-demo:tagname</code></li> <li>update the k8s pod to point to <code>milenkom/docker-demo:tagname</code></li> </ol>
<p>Is it possible to run two Kubernetes dashboard's locally within two different shells? I want to view two different cluster's at the same time, however, I run into an issue with the dashboard's port. </p> <ol> <li>Open Dashboard on 1st cluster</li> <li>Open new shell and switch context to 2nd cluster</li> <li>Open Dashboard on 2nd cluster</li> </ol> <p>I created the first dashboard like so: </p> <pre><code>$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') $ kubectl proxy Starting to serve on 127.0.0.1:8001 </code></pre> <p>I opened a new shell and changed context to the new cluster receiving the error: </p> <pre><code>$ listen tcp 127.0.0.1:8001: bind: address already in use </code></pre> <p>I understand <em>why</em> this is happening, however I'm not sure how to mitigate this problem. </p> <p>Furthermore, when I change the port to 8002 for the second cluster's dashboard I'm unable to view both pages live without one rendering an <code>Internal Server Error (500): square/go-jose: error in cryptographic primitive</code> </p> <p>I have switched to incognito, adding a Chrome configuration to erase/ignore browser cookies from localhost:8001 and localhost:8002, however when I go to login I receive the following error in the Chrome console: </p> <pre><code>Possibly unhandled rejection: { "data":"MSG_LOGIN_UNAUTHORIZED_ERROR\n", "status":401, "config":{ "method":"GET", "transformRequest":[ null ], "transformResponse":[ null ], "jsonpCallbackParam":"callback", "url":"api/v1/rbac/status", "headers":{ "Accept":"application/json, text/plain, */*" } }, "statusText":"Unauthorized", "xhrStatus":"complete", "resource":{ } } </code></pre>
<p>The problem originates from <code>kubectl proxy</code>. The first one is using port 8001, and a port can be bound only once. You can start your second proxy on a different port:</p> <pre><code>kubectl proxy --port=8002 </code></pre> <p>You need to point your browser to that different port to access the other dashboard, of course.</p>
<p>I am using minikube for developing an application on Kubernetes and I am using Traefik as the ingress controller.</p> <p>I am able to connect to and use my application services when I use the URL of the host which I defined in the ingress ("streambridge.local") and set up in the Linux hosts file ("/etc/hosts"). But when I use the exact same IP address that I used for the DNS, I am not able to connect to any of the services and I receive "404 page not found". I should mention that I am using the IP address of <code>minikube</code>, which I got via <code>$(minikube ip)</code>. Below is my ingress config and the commands that I used for the DNS.</p> <p>How can I connect to and use my application services with the IP?</p> <p>Ingress config:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.frontend.rule.type: PathPrefixStrip
    traefik.frontend.passHostHeader: "true"
    traefik.backend.loadbalancer.sticky: "true"
    traefik.wss.protocol: http
    traefik.wss.protocol: https
spec:
  rules:
  - host: streambridge.local
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: dashboard
          servicePort: 9009
      - path: /rdb
        backend:
          serviceName: rethinkdb
          servicePort: 8085
</code></pre> <p>My <code>/etc/hosts</code>:</p> <pre><code>127.0.0.1       localhost
192.168.99.100  traefik-ui.minikube
192.168.99.100  streambridge.local
</code></pre> <p>This works: <code>http://streambridge.local/rdb</code></p> <p>But this does not work: <code>http://192.168.99.100/rdb</code> and returns <code>404 page not found</code></p>
<p>You have created ingress routes that evaluate the Host header of the HTTP request. So while you are actually connecting to the same IP both times, the request once carries the host <code>streambridge.local</code> and once <code>192.168.99.100</code>, for which you did not add a rule in Traefik. It is therefore working exactly as configured.</p>
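<p>If you want the same paths to also match requests that arrive with the bare IP in the Host header, one option is a rule without a <code>host</code> field, which applies to any host. A sketch, untested, based on the ingress from the question:</p>

```yaml
spec:
  rules:
  - http:          # no "host:" field, so this rule matches any Host header
      paths:
      - path: /rdb
        backend:
          serviceName: rethinkdb
          servicePort: 8085
```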
<p>I’m somewhat new to Kubernetes and not sure of the standard way to do this. I’d like to have many instances of a single microservice, but with each of the containers parameterized slightly differently. (Perhaps an environment variable passed to the container that’s different for each instance, as specified in the container spec of the .yaml file?)</p> <p>It seems like a single deployment with multiple replicas wouldn’t work. Yet having n different deployments with very slightly different .yaml files seems a bit redundant. Is there some sort of templating solution, perhaps?</p> <p>Or should each microservice be identical and seek out its parameters from a central service?</p> <p>I realize this could be interpreted as an “opinion question” but I am looking for typical solutions.</p>
<p>As an option you can use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> with <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">InitContainers</a> plus a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a>.</p> <p>A StatefulSet will guarantee you stable naming and ordering. A ConfigMap lets you store fine-grained information like individual properties, or coarse-grained information like entire config files.</p> <p>Configuration data can be consumed in pods in a variety of ways. ConfigMaps can be used to:</p> <p>1) Populate the values of environment variables</p> <p>2) Set command-line arguments in a container</p> <p>3) Populate config files in a volume</p> <p>To begin, you can review the <a href="https://theithollow.com/2019/04/01/kubernetes-statefulsets/" rel="nofollow noreferrer">Kubernetes – StatefulSets</a> article, where you can find a good explanation of how these pieces work together and inspect a prepared example of deploying containers from the same image but with different properties.</p>
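<p>As a sketch (all names are hypothetical), a StatefulSet gives each replica a stable ordinal name, which can be injected as an environment variable and used to select per-instance settings from a shared ConfigMap:</p>

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-service                 # hypothetical service name
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: app
        image: my-service:latest
        env:
        - name: INSTANCE_NAME      # becomes my-service-0, my-service-1, ...
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        envFrom:
        - configMapRef:
            name: my-service-config   # shared per-environment properties
```

<p>The application can then use <code>INSTANCE_NAME</code> to pick its own section of the shared configuration.</p>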
<p>For setting these values, it is unclear as to what the best practices are. Is the following an accurate <strong><em>generalization</em></strong> ?</p> <p>Memory and CPU <em>request</em> values should be slightly higher than what the container requires to idle or do very minimal work.</p> <p>Memory and CPU <em>limit</em> values should be slightly higher than what the container requires when operating at maximum capacity.</p> <p>References:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/</a></li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/</a></li> </ul>
<p>Request values should be set around the regular workload numbers. They shouldn't be close to the idle numbers, because request values are what the Scheduler uses to decide where to put Pods. More specifically, the Scheduler will place Pods on a Node as long as the sum of the requests is lower than the Node's capacity. If your request values are too low, you risk overcommitting your Node, causing some of the Pods to get evicted under resource pressure. </p> <p>The limit is enforced at runtime rather than by the Scheduler: a container that exceeds its memory limit is OOM-killed, while CPU usage above the limit is throttled. The limit should therefore be comfortably above a regular high load, or your container risks being killed or throttled whenever it experiences a peak in workload.</p>
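<p>As an illustration of the generalization above, a container spec following this guidance might look like the following (the numbers are placeholders, not recommendations):</p>

```yaml
containers:
- name: app                  # hypothetical container
  image: my-app:latest
  resources:
    requests:
      memory: "256Mi"        # around regular workload usage, not idle usage
      cpu: "250m"
    limits:
      memory: "512Mi"        # above the expected peak; exceeding this gets the container OOM-killed
      cpu: "500m"            # CPU above this is throttled rather than killed
```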
<p>I want to connect from a Google Cloud Function to a Kubernetes (GKE) container. Specifically, the container has a postgres database and I want to read records in a table.</p> <p>In Golang:</p> <pre><code>func ConnectPostgres(w http.ResponseWriter, r *http.Request) {
    db, err := sql.Open("postgres", "postgresql://postgres@10.32.0.142:5432/myDatabase")
    if err != nil {
        http.Error(w, "Error opening conn:" + err.Error(), http.StatusInternalServerError)
    }
    defer db.Close()

    err = db.Ping()
    if err != nil {
        http.Error(w, "Error ping conn:" + err.Error(), http.StatusInternalServerError)
    }

    rows, err := db.Query("SELECT * FROM myTable")
    fmt.Println(rows)
    w.Write([]byte(rows))
}
</code></pre> <p>10.32.0.142 is the internal IP of the pod running the container.</p> <p>But when the cloud function tries to ping the postgres container, the request times out.</p> <p>How can I solve this?</p>
<p>You need to connect the Cloud Function to the VPC first (via a Serverless VPC Access connector), since the pod's internal IP is only reachable from inside the VPC network. This is detailed here: <a href="https://cloud.google.com/functions/docs/connecting-vpc" rel="nofollow noreferrer">https://cloud.google.com/functions/docs/connecting-vpc</a> </p>
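<p>A sketch of that setup with <code>gcloud</code> (the connector name, region, and IP range are placeholders; check the linked doc for the current flags):</p>

```shell
# Create a Serverless VPC Access connector in the VPC that hosts the GKE cluster
gcloud compute networks vpc-access connectors create my-connector \
  --network default \
  --region us-central1 \
  --range 10.8.0.0/28

# Deploy the function with the connector attached so it can reach internal IPs
gcloud functions deploy ConnectPostgres \
  --runtime go111 \
  --trigger-http \
  --vpc-connector my-connector
```

<p>Note also that a pod IP like 10.32.0.142 can change when the pod is rescheduled, so exposing postgres through a Service with a stable internal address is worth considering.</p>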
<p>In Rancher, private catalogs can be added and are displayed, but the <code>helm charts</code> associated with a private catalog can't be accessed. If I select a catalog, I don't find the template files listed.</p> <p>If we put the same <code>helm chart</code> in a public catalog, the template files are listed, so the issue is clearly with Rancher and not with the <code>helm charts</code>. I tried putting the helm charts in different private repositories such as ACR and a private Git repo and the issue persists, so an issue with the registry is ruled out as well.</p> <p>Steps to reproduce:</p> <ol> <li>Create a private app catalog (any, but I used ACR)</li> <li>Add the app catalog to Rancher by providing the correct credentials.</li> <li>Go to launch the app.</li> <li>The helm chart (pushed in previous steps) gets listed.</li> <li>Try to launch the app.</li> </ol> <p>Result:</p> <p>You find that there are no template files listed (values.yaml, deployment.yaml, etc. are not listed). Logs of the Rancher server:</p> <blockquote> <p>[ERROR] Failed to load chart: Error fetching helm URLs: [Error in HTTP GET of [_blobs/<strong>.tgz], error: Get //user:*</strong>@_blobs/**-0.1.0.tgz: unsupported protocol scheme ""]</p> </blockquote> <p>I get an unsupported protocol scheme error when the chart tries to read the index.yml and then tries to get the *.tar.gz file.</p> <p>The issue seems linked to other issues like: <a href="https://github.com/rancher/rancher/issues/15671" rel="nofollow noreferrer">https://github.com/rancher/rancher/issues/15671</a></p>
<p>We need to use Rancher charts for charts to be correctly listed in Rancher's app catalog. Rancher charts and Helm charts have some differences, which are listed here: <a href="https://rancher.com/docs/rancher/v2.x/en/catalog/custom/creating/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.x/en/catalog/custom/creating/</a></p> <p><strong>There are two ways to use charts in Rancher:</strong></p> <ul> <li>The Helm chart way, which requires the Git server to respond to GET requests (charts are stored as tar.gz files along with an index.yml file).</li> <li>The Rancher chart way, where the charts are stored as normal files on the Git server (store the whole helm chart folder as it is; no need to gzip it as in the helm chart way).</li> </ul> <p>In my case I had a tar.gz file containing the helm chart, plus an index.yml file that points to the chart. This way is supported by Rancher only if there is some external server responding to the GET request and serving the chart referenced from the index.yaml. GitHub Pages supports this feature, which is why I was able to use the helm chart in Rancher that way. </p> <p>Solution: I unzipped the tar, directly uploaded the folder to Git, and used this Git repo in Rancher to get the chart correctly listed under the app catalog.</p> <p><em>Do remember to use .git at the end of the URL defined in the app catalog.</em></p>
<p>I have an application running in Kubernetes (Azure AKS) in which each pod contains two containers. I also have Grafana set up to display various metrics, some of which come from Prometheus. I'm trying to troubleshoot a separate issue and in doing so I've noticed that some metrics don't appear to match up between data sources.</p> <p>For example, <code>kube_deployment_status_replicas_available</code> returns the value 30, whereas <code>kubectl -n XXXXXXXX get pod</code> lists 100, all of which are Running, and <code>kube_deployment_status_replicas_unavailable</code> returns a value of 0. Also, if I get the deployment in question using <code>kubectl</code> I see the expected value.</p> <pre><code>$ kubectl get deployment XXXXXXXX
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
XXXXXXXX   100       100       100          100         49d
</code></pre> <p>There are other applications (namespaces) in the same cluster where all the values correlate correctly, so I'm not sure where the fault may be or if there's any way to know for sure which value is the correct one. Any guidance would be appreciated. Thanks</p>
<p>Based on having the <code>kube_deployment_status_replicas_available</code> metric, I assume that Prometheus is scraping your metrics from <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a>. It sounds like there's something quirky about its deployment. It could be:</p> <ul> <li>Cached metric data</li> <li>And/or it simply can't pull current metrics from the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a></li> </ul> <p>I would:</p> <ul> <li>Check the version of <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> that you are running and see if it's compatible with your K8s version.</li> <li>Restart the kube-state-metrics pod.</li> <li>Check its logs: <code>kubectl logs &lt;kube-state-metrics-pod&gt;</code></li> <li>Check the Prometheus logs <ul> <li>If you don't see anything, try starting Prometheus with the <code>--log.level=debug</code> flag.</li> </ul></li> </ul> <p>Hope it helps.</p>
<p>I’m somewhat new to Kubernetes and not sure of the standard way to do this. I’d like to have many instances of a single microservice, but with each of the containers parameterized slightly differently. (Perhaps an environment variable passed to the container that’s different for each instance, as specified in the container spec of the .yaml file?)</p> <p>It seems like a single deployment with multiple replicas wouldn’t work. Yet having n different deployments with very slightly different .yaml files seems a bit redundant. Is there some sort of templating solution, perhaps?</p> <p>Or should each microservice be identical and seek out its parameters from a central service?</p> <p>I realize this could be interpreted as an “opinion question” but I am looking for typical solutions.</p>
<p>There are definitely several ways of doing it. One popular option is to use <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>. Helm lets you define Kubernetes manifests using Go templates and package them in a single unit called a Helm Chart. Later on you can install this Chart ("install" is what Helm calls saving these manifests in the Kubernetes API). When installing the Helm Chart, you can pass arguments that will be used when rendering the templates. That way you can re-use pretty much everything and just replace the significant bits of your manifests: Deployments, Services, etc.</p> <p>There are <a href="https://github.com/helm/charts/tree/master/stable" rel="nofollow noreferrer">plenty of Helm charts available as open-source projects</a> that you can use as examples of how to create your own Chart.</p> <p>And many <a href="https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/" rel="nofollow noreferrer">useful guides on how to create your first Helm Chart</a>.</p> <p>Here you can find the <a href="https://helm.sh/docs/developing_charts/" rel="nofollow noreferrer">official docs on developing your own Charts</a>.</p>
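<p>As a small sketch (the chart and value names are hypothetical), the Deployment template parameterizes only the bits that differ between instances:</p>

```yaml
# templates/deployment.yaml (excerpt from a hypothetical chart)
spec:
  template:
    spec:
      containers:
      - name: app
        image: {{ .Values.image }}
        env:
        - name: INSTANCE_ROLE
          value: {{ .Values.instanceRole | quote }}
```

<p>Installing the chart twice with different values then yields two slightly different Deployments, e.g. <code>helm install --name svc-a ./chart --set instanceRole=reader</code> and <code>helm install --name svc-b ./chart --set instanceRole=writer</code> (on Helm 3 the release name is a positional argument instead of <code>--name</code>).</p>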
<p>I have .NET Framework and .NET Core containers and I would like to run them in Kubernetes. I have Docker Desktop for Windows installed, with Kubernetes included. How can I run these Windows containers in Kubernetes? <a href="https://kubernetes.io/docs/setup/windows/user-guide-windows-nodes/" rel="nofollow noreferrer">This</a> documentation specifies how to create a Windows node on Kubernetes, but it is very confusing, as I am on a Windows machine yet I see Linux-based commands in there (and no mention of which OS you need to run them on). I am on a Windows 10 Pro machine. Is there a way to run these containers on Kubernetes?</p> <p>When I try to create a Pod with Windows containers, it fails with the following error message: "Failed to pull image 'imagename'; rpc error: code = Unknown desc = image operating system 'windows' cannot be used on this platform"</p>
<p>Welcome to StackOverflow, Srinath.</p> <p>To my knowledge you can't run Windows containers on a local version of Kubernetes at this moment. When you enable the Kubernetes option in your Docker Desktop for Windows installation, the Kubernetes cluster is simply run inside a Linux VM (with its own Docker runtime for Linux containers only) on the Hyper-V hypervisor.</p> <p>The other solution for you is to use, for instance, a managed version of Kubernetes with Windows nodes from any of the popular cloud providers. I think Azure is relatively easy to start with (if you don't have an Azure subscription, create a <a href="https://azure.microsoft.com/free/?WT.mc_id=A261C142F" rel="nofollow noreferrer">free</a> trial account, valid for 12 months). </p> <p>I would suggest using an older way to run Kubernetes on Azure, a service called Azure Container Service, aka ACS, for one reason: I have verified it to work well with Windows containers, especially for testing purposes (I could not achieve the same with its successor, called <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">AKS</a>):</p> <p>Run the following commands in Azure Cloud Shell, and your cluster will be ready to use in a few minutes.</p> <pre><code>az group create --name azEvalRG --location eastus
az acs create -g azEvalRG -n k8s-based-win -d k8s-based-win --windows --agent-count 1 -u azureuser --admin-password 'YourSecretPwd1234$' -t kubernetes --location eastus
</code></pre>
<p>In Kubernetes, how can I expose secrets in a file (in a Kubernetes volume) as environment variables instead?</p> <p><strong>Background:</strong><br> I followed the <a href="https://github.com/GoogleCloudPlatform/gke-vault-demo" rel="nofollow noreferrer">Google Cloud Platform GKE Vault Demo</a> and in it, they show how to <em>"continuously fetching a secret's contents onto a local file location. This allows the application to read secrets from a file inside the pod normally without needing to be modified to interact with Vault directly."</em> </p> <p>I would like to know how I can expose these secrets as environment variables (instead of a file) for the other application containers to use.</p>
<p>I found out how to inject the secrets from the file into the application container. </p> <p>First, the secrets file should be in the form <code>KEY="VALUE"</code> on each line.<br> For those using Consul Template to get the secrets from Vault, you can do it like this: </p> <pre><code>- name: CT_LOCAL_CONFIG
  value: |
    vault {
      ssl {
        ca_cert = "/etc/vault/tls/ca.pem"
      }
      retry {
        backoff = "1s"
      }
    }

    template {
      contents = &lt;&lt;EOH
    {{- with secret "secret/myproject/dev/module1/mongo-readonly" }}
    MONGO_READ_HOSTNAME="{{ .Data.hostname }}"
    MONGO_READ_PORT="{{ .Data.port }}"
    MONGO_READ_USERNAME="{{ .Data.username }}"
    MONGO_READ_PASSWORD="{{ .Data.password }}"
    {{- end }}
    {{- with secret "secret/myproject/dev/module2/postgres-readonly" }}
    POSTGRES_READ_HOSTNAME="{{ .Data.hostname }}"
    POSTGRES_READ_PORT="{{ .Data.port }}"
    POSTGRES_READ_USERNAME="{{ .Data.username }}"
    POSTGRES_READ_PASSWORD="{{ .Data.password }}"
    {{- end }}
    EOH
      destination = "/etc/secrets/myproject/config"
    }
</code></pre> <p>This results in a secrets file in the correct <code>KEY="VALUE"</code> form. </p> <p>From the secrets file, which is shared with the app container through a <code>volumeMount</code>, we can inject the secrets as environment variables like this: </p> <pre><code>command: ["/bin/bash", "-c"]   # for the Python image, /bin/sh doesn't work; /bin/bash has source
args:
- source /etc/secrets/myproject/config;
  export MONGO_READ_HOSTNAME;
  export MONGO_READ_PORT;
  export MONGO_READ_USERNAME;
  export MONGO_READ_PASSWORD;
  export POSTGRES_READ_HOSTNAME;
  export POSTGRES_READ_PORT;
  export POSTGRES_READ_USERNAME;
  export POSTGRES_READ_PASSWORD;
  python3 my_app.py;
</code></pre> <p>In this way, we don't have to modify existing application code that expects secrets from environment variables (as it did with Kubernetes Secrets). </p>
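<p>A shorter, equivalent variant is the shell's <code>set -a</code>, which auto-exports every variable assigned while it is in effect, so the individual <code>export</code> lines are not needed. A runnable sketch (the file path and keys below are stand-ins for the real secrets file):</p>

```shell
# Stand-in for the /etc/secrets/myproject/config written by Consul Template
cat > /tmp/config <<'EOF'
MONGO_READ_HOSTNAME="mongo.example.internal"
MONGO_READ_PORT="27017"
EOF

set -a                        # auto-export every variable assigned from here on
. /tmp/config                 # "." is the portable spelling of bash's "source"
set +a

echo "$MONGO_READ_HOSTNAME"   # prints mongo.example.internal
```

<p>In the container spec this collapses the args to <code>set -a; source /etc/secrets/myproject/config; set +a; python3 my_app.py</code>.</p>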
<p>Following the tutorial for Kubernetes (in my case on GKE) <a href="https://docs.traefik.io/v2.0/user-guides/crd-acme/" rel="nofollow noreferrer">https://docs.traefik.io/v2.0/user-guides/crd-acme/</a> I am stuck on how to assign the global static IP (GKE wants a forwarding rule). Am I missing something (e.g. adding another ingress)? I understand that annotations are not possible in the IngressRoute. So how would I assign the global reserved IP?</p> <p>The answer to question 3 on this Q&amp;A online meetup (<a href="https://gist.github.com/dduportal/13874113cf5fa1d0901655e3367c31e5" rel="nofollow noreferrer">https://gist.github.com/dduportal/13874113cf5fa1d0901655e3367c31e5</a>) mentions that "classic ingress" is also possible with version 2.x. Does this mean I can set up traefik as in 1.x (like this: <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">https://docs.traefik.io/user-guide/kubernetes/</a>) use 2.x configuration and no need for CRD?</p>
<p>You do it the same way as with any other ingress controller.</p> <p>Good step-by-step instructions on how to assign a static IP address to an Ingress are given on the nginx-ingress website.</p> <p>Follow the section called '<a href="https://kubernetes.github.io/ingress-nginx/examples/static-ip/#promote-ephemeral-to-static-ip" rel="nofollow noreferrer">Promote ephemeral to static IP</a>'. </p> <p>If you follow Traefik 2.0's <a href="https://github.com/containous/traefik/tree/master/docs/content/user-guides/crd-acme" rel="nofollow noreferrer">example</a> manifest files for Kubernetes, then once you patch Traefik's K8s Service (with <code>kubectl patch svc traefik ...</code>), you can verify that the IngressRoutes took effect with the following command:</p> <pre><code>curl -i http://&lt;static-ip-address&gt;:8000/notls -H 'Host: your.domain.com'
</code></pre> <p><strong>Update</strong></p> <p>Define Traefik's Service as type <code>LoadBalancer</code>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: web
    port: 8000
  - protocol: TCP
    name: admin
    port: 8080
  - protocol: TCP
    name: websecure
    port: 4443
  selector:
    app: traefik
  type: LoadBalancer
</code></pre> <p>and patch it with:</p> <pre><code>kubectl patch svc traefik -p '{"spec": {"loadBalancerIP": "&lt;your_static_ip&gt;"}}'
</code></pre>
<p>Istio does not route to an external HTTPS service via TLS origination.</p> <p>I have a pod containing two containers:</p> <ul> <li>Application</li> <li>Istio proxy</li> </ul> <p>The application makes a call to an external third-party API which resides at <a href="https://someurl.somedomain.com/v1/some-service" rel="nofollow noreferrer">https://someurl.somedomain.com/v1/some-service</a></p> <p>The application sends HTTP requests to this service by calling <a href="http://someurl.somedomain.com/v1/some-service" rel="nofollow noreferrer">http://someurl.somedomain.com/v1/some-service</a>; notice that it's HTTP and not HTTPS.</p> <p>I then configured the following in Istio:</p> <ul> <li>A virtual service to route HTTP traffic to port 443:</li> </ul> <pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: someservice-vs
spec:
  hosts:
  - someurl.somedomain.com
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: someurl.somedomain.com
        port:
          number: 443
    timeout: 40s
    retries:
      attempts: 10
      perTryTimeout: 4s
      retryOn: gateway-error,connect-failure,refused-stream,retriable-4xx,5xx
</code></pre> <ul> <li>A service entry that allows the traffic out.
As you can see, we specify that the service is external to the mesh, and we open both 443 and 80, each using HTTP, but 443 is configured for TLS origination.</li> </ul> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: someservice-se
spec:
  hosts:
  - someurl.somedomain.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: http-port-for-tls-origination
    protocol: HTTP
  - number: 80
    name: http-port
    protocol: HTTP
  resolution: DNS
</code></pre> <p>Finally, I have a destination rule that applies simple TLS to the outgoing traffic:</p> <pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: someservice-destinationrule
spec:
  host: someurl.somedomain.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE # initiates HTTPS when accessing someurl.somedomain.com
</code></pre> <p>For some reason this does not work and I get a 404 when calling the service from my application container, which indicates that the traffic isn't being encrypted via TLS.</p> <p>The reason why I use TLS origination is that I need to apply retries in my virtual service, and I can only do this with HTTP routes, as otherwise Istio cannot see the request and work with it. </p> <p>Been scratching my head for two days and need some help please :-)</p>
<p>Got to the bottom of this. The Istio documentation was correct: TLS origination and retries work as expected. </p> <p>The issue was caused by the <code>perTryTimeout</code> value, which was too low. Requests were not completing in the allocated time, so the gateway was timing out. It caught us out because the external service's performance had degraded recently and we didn't think to check it. </p>
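<p>For illustration, the fix amounts to giving each attempt more headroom in the virtual service's retry policy. The numbers below are examples (chosen so that attempts * perTryTimeout still fits within the overall timeout), not measurements from our setup:</p>

```yaml
timeout: 40s
retries:
  attempts: 3
  perTryTimeout: 12s   # was 4s, which became too short once the upstream degraded
  retryOn: gateway-error,connect-failure,refused-stream,retriable-4xx,5xx
```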
<p>I want to run a command as soon as a pod is created and starts running. I am deploying <code>jupyterhub</code>, but the config that I am using is:</p> <pre><code>proxy:
  secretToken: "yada yada"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    # name: ${IMAGE}
    tag: 177037d09156
    # tag: latest
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo Hello from the postStart handler &gt; /usr/share/message"]
</code></pre> <p>When the pod is up and running, I am not able to see the file <code>/usr/share/message</code> and hence, I deduce that the exec command is not running.</p> <p>What is the right way to make this command work?</p>
<p>The correct key for the <strong>lifecycle</strong> stanza in this chart is <strong>lifecycleHooks</strong>.</p> <p>The following config uses the correct key: </p> <pre><code>proxy:
  secretToken: "yada yada"
singleuser:
  image:
    # Get the latest image tag at:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    # name: ${IMAGE}
    tag: 177037d09156
    # tag: latest
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo Hello from the postStart handler &gt; /usr/share/message"]
</code></pre>
<p>I'm bringing a GKE cluster up with Terraform, which works nicely. I then want Terraform to perform some Kubernetes-level operations on the cluster (using the k8s provider) - nothing massive, just installing a couple of Deployments etc.</p> <p>The problem I'm having is permissions. I'd like a <strong>neat, tidy, declarative</strong> way to make a cluster and have a set of credentials in hand that I can use short-term to do "admin" operations on it, including bootstrapping other users. I know how to make the google user that's running TF an admin of the cluster (that question comes up a lot), but that doesn't seem very <em>nice</em>. Not least, the k8s TF provider doesn't support clusterrolebinding (<a href="https://github.com/terraform-providers/terraform-provider-kubernetes/issues/30" rel="noreferrer">Issue</a>, <a href="https://github.com/terraform-providers/terraform-provider-kubernetes/pull/73" rel="noreferrer">partial PR</a>) so you have to "shell out" with a <code>local-exec</code> provisioner to first run <code>gcloud container clusters get-credentials</code> and then <code>kubectl create clusterrolebinding ...</code>.</p> <p>Similarly, I don't want to set a master password, because I don't want to run with HTTP basic auth on. The nicest option looks to be the key/cert pair that's returned by the TF GKE resource, but that has a CN of "client", and that user has no power. 
So again, the only way to use it is to shell out to kubectl, pass it the gcloud service account credentials, and get it to add a clusterrolebinding for "client", at which point I may as well just do everything as the service account like above.</p> <p>For contrast, on EKS the (AWS IAM) user that creates the cluster has cluster-admin out of the box (I assume the AWS authn provider claim's the user is in "system:masters").</p> <p>My actual question here is: is there a neat, fully declarative way in Terraform to bring up a cluster and have available (ideally as output) a potent set of credentials to use and then drop? (yes I know they'll stay in the tfstate)</p> <p>My options seem to be:</p> <ul> <li>"shell out" to give TF's google ID (ideally a service account) cluster-admin (which is privilege escalation, but which works due to the gcloud authz plugin)</li> <li>Enable HTTP basic auth and give the admin account a password, then have an aliased k8s provisioner use that to do minimal bootstrapping of another service account.</li> <li>Enable ABAC so that "client" (the CN of the output key/cert) has infinite power - this is what I'm currently running with, don't judge me!</li> </ul> <p>And I don't like any of them!</p>
<p>I've been running into a similar problem, which has gotten particularly nasty since a <a href="https://github.com/terraform-providers/terraform-provider-google/issues/3217" rel="noreferrer">recent Kubernetes issue unexpectedly disabled basic auth by default</a>, which broke my previously-functioning Terraform configuration as soon as I tried to build a new cluster from the same config.</p> <p>Finally found an answer in <a href="https://stackoverflow.com/a/55007787/3198554">this SO answer</a>, which recommends a method of using Terraform's Google IAM creds to connect to the cluster without needing the "shell out". Note that this method allows cluster permissions to be bootstrapped in Terraform with no external tooling/hacks/etc and without needing to have basic auth enabled.</p> <p>The relevant part of that answer is:</p> <pre><code>data "google_client_config" "default" {}

provider "kubernetes" {
  host  = "${google_container_cluster.default.endpoint}"
  token = "${data.google_client_config.default.access_token}"

  cluster_ca_certificate = "${base64decode(google_container_cluster.default.master_auth.0.cluster_ca_certificate)}"

  load_config_file = false
}
</code></pre>
<p>I am trying to whitelist IP(s) on the ingress in AKS. I am currently using <code>ingress-nginx</code>, not installed with Helm. </p> <p>The mandatory Kubernetes resources can be found <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml" rel="nofollow noreferrer">here</a> </p> <p>The service is started as:</p> <pre><code>spec:
  externalTrafficPolicy: Local
</code></pre> <p>Full yaml <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml" rel="nofollow noreferrer">here</a></p> <p>My ingress definition is:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  # namespace: ingress-nginx
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/whitelist-source-range: "xxx.xxx.xxx.xxx"
spec:
  rules:
  - http:
      paths:
      - path: /xx-xx
        backend:
          serviceName: xx-xx
          servicePort: 8080
      - path: /xx
        backend:
          serviceName: /xx
          servicePort: 5432
</code></pre> <p>The IP whitelisting is not enforced. Am I doing something wrong?</p>
<p>After a lot of digging around, I found that the problem is caused by this bug in NATing, described <a href="https://github.com/Azure/AKS/issues/785" rel="nofollow noreferrer">here</a>; there is also a quick Medium read <a href="https://medium.com/@chatelain.io/aks-azure-kubernetes-service-ingress-ip-filtering-whitelisting-issue-c194190df696" rel="nofollow noreferrer">here</a>.</p> <p>Hope this solves the problem for future readers or helps track the bug.</p>
<p>I want to deploy my own image on JupyterHub. However, I need to push the image to some registry so that the <code>image puller</code> of JupyterHub can pull it from there. In my case, the registry is private. Although I am able to push the image to my registry, I don't know how to make the JupyterHub release and deployment able to pull the image.</p> <p>I tried reading this doc (<a href="https://github.com/jupyterhub/jupyterhub-deploy-docker" rel="nofollow noreferrer">https://github.com/jupyterhub/jupyterhub-deploy-docker</a>) but it could not help me understand how to add authentication to the JupyterHub deployment.</p> <p>I deploy <code>jhub</code> with this command:</p> <pre><code># Suggested values: advanced users of Kubernetes and Helm should feel
# free to use different values.
RELEASE=jhub
NAMESPACE=jhub

helm upgrade --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --version=0.8.0 \
  --values jupyter-hub-config.yaml
</code></pre> <p>where the jupyter-hub-config.yaml is as follows:</p> <pre><code>proxy:
  secretToken: "abcd"
singleuser:
  image:
    name: jupyter/datascience-notebook
    tag: some_tag
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''&gt; aviral.py']
</code></pre> <p>The helm chart is available here: <a href="https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz" rel="nofollow noreferrer">https://jupyterhub.github.io/helm-chart/jupyterhub-0.8.2.tgz</a></p> <p>And the tree of this helm chart is:</p> <pre><code>.
├── Chart.yaml ├── jupyter-hub-config.yaml ├── requirements.lock ├── schema.yaml ├── templates │   ├── NOTES.txt │   ├── _helpers.tpl │   ├── hub │   │   ├── configmap.yaml │   │   ├── deployment.yaml │   │   ├── image-credentials-secret.yaml │   │   ├── netpol.yaml │   │   ├── pdb.yaml │   │   ├── pvc.yaml │   │   ├── rbac.yaml │   │   ├── secret.yaml │   │   └── service.yaml │   ├── image-puller │   │   ├── _daemonset-helper.yaml │   │   ├── daemonset.yaml │   │   ├── job.yaml │   │   └── rbac.yaml │   ├── ingress.yaml │   ├── proxy │   │   ├── autohttps │   │   │   ├── _README.txt │   │   │   ├── configmap-nginx.yaml │   │   │   ├── deployment.yaml │   │   │   ├── ingress-internal.yaml │   │   │   ├── rbac.yaml │   │   │   └── service.yaml │   │   ├── deployment.yaml │   │   ├── netpol.yaml │   │   ├── pdb.yaml │   │   ├── secret.yaml │   │   └── service.yaml │   ├── scheduling │   │   ├── _scheduling-helpers.tpl │   │   ├── priorityclass.yaml │   │   ├── user-placeholder │   │   │   ├── pdb.yaml │   │   │   ├── priorityclass.yaml │   │   │   └── statefulset.yaml │   │   └── user-scheduler │   │   ├── _helpers.tpl │   │   ├── configmap.yaml │   │   ├── deployment.yaml │   │   ├── pdb.yaml │   │   └── rbac.yaml │   └── singleuser │   ├── image-credentials-secret.yaml │   └── netpol.yaml ├── test-99.py ├── validate.py └── values.yaml </code></pre> <p>All I want to do is to make jupyterhub access my private repo using <code>secrets</code>. In this case, I do not know how to make this available to it.</p>
<p><a href="https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod" rel="nofollow noreferrer">Image pull secrets</a> can be used to pull an image from a private registry.</p> <p>Append the following block to <code>jupyter-hub-config.yaml</code>:</p> <pre><code>imagePullSecret: enabled: true registry: username: email: password: </code></pre> <p>For ECR, use these values:</p> <p>username: <code>AWS</code></p> <p>password: the output of <code>aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6</code></p>
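<p>For completeness, a filled-in version of that block might look roughly like this; the account ID, region and email below are placeholder examples, not values taken from the question:</p> <pre><code>imagePullSecret:
  enabled: true
  # hypothetical ECR registry URL: &lt;account-id&gt;.dkr.ecr.&lt;region&gt;.amazonaws.com
  registry: 123456789012.dkr.ecr.us-east-1.amazonaws.com
  username: AWS
  email: you@example.com
  # paste the token printed by the aws ecr get-login command above
  password: "&lt;token&gt;"
</code></pre> <p>Keep in mind the ECR token expires after about 12 hours, so the secret needs to be refreshed periodically.</p>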
<p>I have configured a 4-node Kubernetes cluster and set up the Dashboard following the Kubernetes documentation. I am able to log in with tokens from service accounts that have different role bindings.</p> <p>However, I want to log in using the Kubeconfig option and am unable to do so. Please help me with the steps for that. <a href="https://i.stack.imgur.com/ijTpR.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>I finally found <a href="https://stackoverflow.com/questions/48228534/kubernetes-dashboard-access-using-config-file-not-enough-data-to-create-auth-inf">this answer</a> after searching a number of sites.</p> <pre><code>$ TOKEN=$(kubectl -n kube-system describe secret default| awk '$1=="token:"{print $2}') $ kubectl config set-credentials kubernetes-admin --token="${TOKEN}" </code></pre> <p>Your config file should then look like this:</p> <pre><code>$ kubectl config view |cut -c1-50|tail -10 name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: REDACTED client-key-data: REDACTED token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.ey </code></pre>
<p>I created the following persistent volume by calling</p> <p><code>kubectl create -f nameOfTheFileContainingTheFollowingContent.yaml</code></p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-monitoring-static-content spec: capacity: storage: 100Mi accessModes: - ReadWriteOnce hostPath: path: "/some/path" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-monitoring-static-content-claim spec: accessModes: - ReadWriteOnce storageClassName: "" resources: requests: storage: 100Mi </code></pre> <p>After this I tried to delete the PVC, but the command got stuck. When calling <code>kubectl describe pvc pv-monitoring-static-content-claim</code> I get the following result:</p> <pre><code>Name: pv-monitoring-static-content-claim Namespace: default StorageClass: Status: Terminating (lasts 5m) Volume: pv-monitoring-static-content Labels: &lt;none&gt; Annotations: pv.kubernetes.io/bind-completed=yes pv.kubernetes.io/bound-by-controller=yes Finalizers: [foregroundDeletion] Capacity: 100Mi Access Modes: RWO Events: &lt;none&gt; </code></pre> <p>And for <code>kubectl describe pv pv-monitoring-static-content</code></p> <pre><code>Name: pv-monitoring-static-content Labels: &lt;none&gt; Annotations: pv.kubernetes.io/bound-by-controller=yes Finalizers: [kubernetes.io/pv-protection foregroundDeletion] StorageClass: Status: Terminating (lasts 16m) Claim: default/pv-monitoring-static-content-claim Reclaim Policy: Retain Access Modes: RWO Capacity: 100Mi Node Affinity: &lt;none&gt; Message: Source: Type: HostPath (bare host directory volume) Path: /some/path HostPathType: Events: &lt;none&gt; </code></pre> <p>There is no pod running that uses the persistent volume. Could anybody give me a hint why the PVC and the PV are not deleted?</p>
<p>This happens when the persistent volume is protected. You should be able to cross-verify this:</p> <p>Command:</p> <p><code>kubectl describe pvc PVC_NAME | grep Finalizers</code></p> <p>Output:</p> <p><code>Finalizers: [kubernetes.io/pvc-protection]</code></p> <p>You can fix this by setting the finalizers to null using <code>kubectl patch</code>:</p> <pre><code>kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge </code></pre> <p>Ref: <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storage-object-in-use-protection" rel="noreferrer">Storage Object in Use Protection</a></p>
<pre><code> SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE hello * * * * * False 2 2m42s 5m6s hello * * * * * False 3 6s 5m30s hello * * * * * False 4 6s 6m30s hello * * * * * False 3 46s 7m10s hello * * * * * False 1 56s 7m20s hello * * * * * False 2 6s 7m30s hello * * * * * False 0 26s 7m50s hello * * * * * False 1 7s 8m31s hello * * * * * False 0 16s 8m40s hello * * * * * False 1 7s 9m31s hello * * * * * False 0 17s 9m41s hello * * * * * False 1 7s 10m </code></pre> <p>I'm running a K8s CronJob and using the following command to watch it:</p> <p><code>kubectl get cronjobs --watch -n ns1</code></p> <p>When watching the output, I notice that for each minute there are two entries,</p> <p>e.g. see <code>2m1s</code> and <code>2m11s</code> and so on …</p> <p>Why? I want it to run exactly once every minute; how can I do that?</p> <pre><code>hello * * * * * False 0 &lt;none&gt; 4s hello * * * * * False 1 7s 61s hello * * * * * False 0 17s 71s hello * * * * * False 1 7s 2m1s hello * * * * * False 0 17s 2m11s hello * * * * * False 1 7s 3m1s hello * * * * * False 0 17s 3m11s </code></pre> <p>This is the CronJob manifest:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: hello namespace: monitoring spec: schedule: "* * * * *" # run every minute startingDeadlineSeconds: 10 # if a job hasn't started in this many seconds, skip concurrencyPolicy: Forbid # either allow|forbid|replace successfulJobsHistoryLimit: 3 # how many completed jobs should be jobTemplate: spec: template: spec: containers: - name: hello image: busybox args: - /bin/sh - -c - date; echo Hello from the Kubernetes cluster restartPolicy: OnFailure </code></pre> <p>I also tried changing the schedule to <code>"*/1 * * * *"</code>, which doesn't help.</p> <p>Update:</p> <p>It seems that for each run there is an entry like this</p> <pre><code>NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE hello */1 * * * * False 1 0s 7s </code></pre> <p>and after 10 seconds I see</p> <pre><code>hello */1 * * * * 
False 0 1 0s 17s </code></pre> <p>and so on: <strong>one active and the second not</strong>.</p>
<p>I think you are looking at the wrong thing.</p> <p>A <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/" rel="noreferrer"><code>CronJob</code></a> spawns a <code>Job</code>, so you should be looking at the jobs:</p> <pre><code>$ kubectl get jobs NAME DESIRED SUCCESSFUL AGE hello-1558019160 1 1 2m hello-1558019220 1 1 1m hello-1558019280 1 1 14s </code></pre> <p>As you can see, only <strong>one</strong> is spawned per minute. It's possible that a job takes longer than a minute to complete; this is when <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="noreferrer"><code>concurrencyPolicy</code></a> comes into play:</p> <blockquote> <p>The <code>.spec.concurrencyPolicy</code> field is also optional. It specifies how to treat concurrent executions of a job that is created by this cron job. The spec may specify only one of the following concurrency policies:</p> <ul> <li><code>Allow</code> (default): The cron job allows concurrently running jobs</li> <li><code>Forbid</code>: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn’t finished yet, the cron job skips the new job run</li> <li><code>Replace</code>: If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run</li> </ul> <p>Note that concurrency policy only applies to the jobs created by the same cron job. 
If there are multiple cron jobs, their respective jobs are always allowed to run concurrently.</p> </blockquote> <p>You can also run <code>kubectl describe jobs hello-1558019160</code>, where you will see the events:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 2m job-controller Created pod: hello-1558019160-fld74 </code></pre> <p>I ran your <code>.yaml</code> and did not see <code>Active</code> jobs go higher than <code>1</code>.</p> <p>Hope this helps.</p>
<p>I have created a VMSS in the Azure Portal to get the autoscale feature for my application. My application resides in a Kubernetes cluster: around 10 microservices.</p> <p>I want to create a scale-out rule so that, if there is not enough memory, the number of VM instances increases. But I don't see an option to set the rule based on memory. There are rules we can define based on CPU utilization, disk space etc., but these won't help me solve the problem. For my 10 microservices to work, each service having 5 pods, I need to set a rule based on memory. If I set the rule based on CPU, the VMs don't scale up, as the CPU is not utilized much; the issue is with memory.</p> <p>I get the error "0/3 nodes are available: 3 Insufficient pods. The node was low on resource: [MemoryPressure]. "</p> <p>I read that the memory rule is not available in host metrics in Azure, but that it can be enabled via guest metrics. To enable guest metrics, see the link below.</p> <p><a href="https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-guest-based-autoscale-linux" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-guest-based-autoscale-linux</a></p> <p>But I don't see an option to edit the template as described in the above link. There is only an "Export template" option visible for the VMSS, and it does not allow editing the template.</p> <p>Could anyone please help me define a memory rule for a VMSS in Azure?</p> <p>No option is visible to enable guest metrics for the VMSS, and no option to edit the template; only the "Export template" option is visible, where you cannot edit the template.</p>
<p>For AKS autoscaling, you just need to enable the autoscale function for your AKS cluster and set the min and max node counts, and then it will scale itself. You do not need to set an autoscale rule for it. Take a look at the <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler#about-the-cluster-autoscaler" rel="nofollow noreferrer">AKS cluster autoscaler</a>.</p> <p><strong>When does Cluster Autoscaler change the size of a cluster?</strong></p> <p>Cluster Autoscaler increases the size of the cluster when:</p> <ul> <li>there are pods that failed to schedule on any of the current nodes due to insufficient resources.</li> <li>adding a node similar to the nodes currently present in the cluster would help.</li> </ul> <p>Cluster Autoscaler decreases the size of the cluster when some nodes are consistently unneeded for a significant amount of time. A node is unneeded when it has low utilization and all of its important pods can be moved elsewhere.</p> <p>That is what you have seen in the VMSS. The metrics server is already installed in recent AKS versions; if not, you can install it yourself by following the steps <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale#autoscale-pods" rel="nofollow noreferrer">here</a>.</p>
<p>I'm running <code>kubectl create -f notRelevantToThisQuestion.yml</code></p> <p>The response I get is:</p> <blockquote> <p>Error from server (NotFound): the server could not find the requested resource</p> </blockquote> <p>Is there any way to determine which resource was requested that was not found?</p> <p><code>kubectl get ns</code> returns</p> <blockquote> <p>NAME STATUS AGE<br> default Active 243d<br> kube-public Active 243d<br> kube-system Active 243d<br></p> </blockquote> <p>This is not a cron job.<br> Client version 1.9<br> Server version 1.6</p> <p>This is very similar to <a href="https://devops.stackexchange.com/questions/2956/how-do-i-get-kubernetes-to-work-when-i-get-an-error-the-server-could-not-find-t?rq=1">https://devops.stackexchange.com/questions/2956/how-do-i-get-kubernetes-to-work-when-i-get-an-error-the-server-could-not-find-t?rq=1</a> but my k8s cluster has been deployed correctly (everything's been working for almost a year, I'm adding a new pod now).</p>
<p>To solve this, downgrade the client or upgrade the server. In my case I had upgraded the server (new minikube) but forgot to upgrade the client (kubectl) and ended up with these versions:</p> <pre><code>$ kubectl version --short Client Version: v1.9.0 Server Version: v1.14.1 </code></pre> <p>When I upgraded the client version (in this case to 1.14.2), everything started to work again.</p> <p>Instructions on how to install (in your case, upgrade) the client are here: <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl</a></p>
<p>I want to create IP-based subdomain access rules for the traefik (1.7.11) ingress controller running on Kubernetes (EKS). All IPs are allowed to talk to an external/frontend entry point:</p> <pre><code>traefik.toml: | defaultEntryPoints = ["http","https"] logLevel = "INFO" [entryPoints] [entryPoints.http] address = ":80" compress = true [entryPoints.http.redirect] entryPoint = "https" [entryPoints.http.whiteList] sourceRange = ["0.0.0.0/0"] [entryPoints.https] address = ":443" compress = true [entryPoints.https.tls] [entryPoints.https.whiteList] sourceRange = ["0.0.0.0/0"] </code></pre> <p>But we have only <code>prod</code> environments running in this cluster.</p> <p>I want to limit certain endpoints like <code>monitoring.domain.com</code> so they are accessible only from a limited set of IPs (office locations), and keep <code>*.domain.com</code> (the default) accessible from the public internet.</p> <p>Is there any way I can do this in <code>traefik</code>?</p>
<p>You can try using the <code>traefik.ingress.kubernetes.io/whitelist-source-range: "x.x.x.x/x, xxxx::/x"</code> <a href="https://docs.traefik.io/configuration/backends/kubernetes/#annotations" rel="nofollow noreferrer">Traefik annotation</a> on your <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> object. You can also have 4 Ingress objects, one for each of <code>stage.domain.com</code>, <code>qa.domain.com</code>, <code>dev.domain.com</code> and <code>prod.domain.com</code>.</p> <p>For anything other than <code>prod.domain.com</code> you can add a whitelist.</p> <p>Another option is to change your <code>traefik.toml</code> with <a href="https://docs.traefik.io/configuration/entrypoints/#white-listing" rel="nofollow noreferrer"><code>[entryPoints.http.whitelist]</code></a>, but you may have to run different ingress controllers with a different <a href="https://docs.traefik.io/user-guide/kubernetes/#between-traefik-and-other-ingress-controller-implementations" rel="nofollow noreferrer">ingress class</a> for each environment.</p>
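<p>As a sketch of the first approach, an Ingress for the monitoring endpoint could look roughly like this; the host, backend service name and the office CIDR below are placeholders:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    # only this (hypothetical) office range may reach monitoring.domain.com
    traefik.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"
spec:
  rules:
  - host: monitoring.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: monitoring-svc
          servicePort: 80
</code></pre> <p>A second Ingress for the other hosts without the annotation would stay open to the public internet.</p>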
<p>I'm getting Microsoft.Rest.HttpOperationException: 'Operation returned an invalid status code 'BadRequest'' on this line:</p> <p><code>var result = client.CreateNamespacedDeployment(deployment, namespace);</code></p> <p>The Kubernetes client has only a small number of good resources, and most of them are written in other languages such as Java and Python, so I'm referring to those docs.</p> <p>This is my implementation so far:</p> <pre><code> V1Deployment deployment = new V1Deployment() { ApiVersion = "extensions/v1beta1", Kind = "Deployment", Metadata = new V1ObjectMeta() { Name = "...", NamespaceProperty = env, Labels = new Dictionary&lt;string, string&gt;() { { "app", "..." } } }, Spec = new V1DeploymentSpec { Replicas = 1, Selector = new V1LabelSelector() { MatchLabels = new Dictionary&lt;string, string&gt; { { "app", "..." } } }, Template = new V1PodTemplateSpec() { Metadata = new V1ObjectMeta() { CreationTimestamp = null, Labels = new Dictionary&lt;string, string&gt; { { "app", "..." } } }, Spec = new V1PodSpec { Containers = new List&lt;V1Container&gt;() { new V1Container() { Name = "...", Image = "...", ImagePullPolicy = "Always", Ports = new List&lt;V1ContainerPort&gt; { new V1ContainerPort(80) } } } } } }, Status = new V1DeploymentStatus() { Replicas = 1 } }; var result = client.CreateNamespacedDeployment(deployment, namespace); </code></pre> <p>I want to know the proper way to create a Kubernetes deployment using the kubernetes-client, and also the cause of this issue.</p>
<p>For full clarity and for future visitors, it's worth mentioning what exactly is behind this bad request error (code 400) returned by the API server when using your code sample:</p> <pre><code>"the API version in the data (extensions/v1beta1) does not match the expected API version (apps/v1)" </code></pre> <p>Solution:</p> <pre><code> ApiVersion = "extensions/v1beta1" -&gt; ApiVersion = "apps/v1" </code></pre> <p>Full code sample:</p> <pre><code> private static void Main(string[] args) { var k8SClientConfig = new KubernetesClientConfiguration { Host = "http://127.0.0.1:8080" }; IKubernetes client = new Kubernetes(k8SClientConfig); ListDeployments(client); V1Deployment deployment = new V1Deployment() { ApiVersion = "apps/v1", Kind = "Deployment", Metadata = new V1ObjectMeta() { Name = "nepomucen", NamespaceProperty = null, Labels = new Dictionary&lt;string, string&gt;() { { "app", "nepomucen" } } }, Spec = new V1DeploymentSpec { Replicas = 1, Selector = new V1LabelSelector() { MatchLabels = new Dictionary&lt;string, string&gt; { { "app", "nepomucen" } } }, Template = new V1PodTemplateSpec() { Metadata = new V1ObjectMeta() { CreationTimestamp = null, Labels = new Dictionary&lt;string, string&gt; { { "app", "nepomucen" } } }, Spec = new V1PodSpec { Containers = new List&lt;V1Container&gt;() { new V1Container() { Name = "nginx", Image = "nginx:1.7.9", ImagePullPolicy = "Always", Ports = new List&lt;V1ContainerPort&gt; { new V1ContainerPort(80) } } } } } }, Status = new V1DeploymentStatus() { Replicas = 1 } }; var result = client.CreateNamespacedDeployment(deployment, "default"); } </code></pre>
<p>I have a Kubernetes Job that has, for instance, parallelism set to 4. When this job is created, I might want to scale this out to, say, 8. But it seems like <code>edit</code>ing the Job and setting parallelism to 8 doesn't actually create more pods in the Job.</p> <p>Am I missing something? Or is there no way to scale out a Job?</p>
<p>So as per the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/jobs" rel="noreferrer">job documentation</a> you can still scale a Job by running the following command:</p> <pre><code>kubectl scale job my-job --replicas=[VALUE] </code></pre> <p>A simple test shows that this option currently works as expected, but it is deprecated and will be removed in the future:</p> <blockquote> <p>kubectl scale job is DEPRECATED and will be removed in a future version.</p> <p>The ability to use kubectl scale jobs is deprecated. All other scale operations remain in place, but the ability to scale jobs will be removed in a future release.</p> </blockquote> <p>The reason is: <a href="https://github.com/kubernetes/kubernetes/pull/60139" rel="noreferrer">Deprecate kubectl scale job</a></p> <p>Use the Job yaml below as an example to create a job:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: test-job spec: template: spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2010)"] restartPolicy: Never completions: 1000 parallelism: 5 </code></pre> <p>Now let's test the behavior:</p> <pre><code>kubectl describe jobs.batch test-job Parallelism: 5 Completions: 1000 Start Time: Fri, 17 May 2019 16:58:36 +0200 Pods Statuses: 5 Running / 21 Succeeded / 0 Failed kubectl get pods | grep test-job | grep Running test-job-98mlv 1/1 Running 0 13s test-job-fs2hb 1/1 Running 0 8s test-job-l8n6v 1/1 Running 0 16s test-job-lbh46 1/1 Running 0 13s test-job-m8btl 1/1 Running 0 2s </code></pre> <p>Changing parallelism with <code>kubectl scale</code>:</p> <pre><code>kubectl scale jobs.batch test-job --replicas=10 kubectl describe jobs.batch test-job Parallelism: 10 Completions: 1000 Start Time: Fri, 17 May 2019 16:58:36 +0200 Pods Statuses: 10 Running / 87 Succeeded / 0 Fail kubectl get pods | grep test-job | grep Running test-job-475zf 1/1 Running 0 10s test-job-5k45h 1/1 Running 0 14s test-job-8p99v 1/1 Running 0 22s test-job-jtssp 1/1 Running 0 4s test-job-ltx8f 1/1 
Running 0 12s test-job-mwnqb 1/1 Running 0 16s test-job-n7t8b 1/1 Running 0 20s test-job-p4bfs 1/1 Running 0 18s test-job-vj8qw 1/1 Running 0 18s test-job-wtjdl 1/1 Running 0 10s </code></pre> <p>And the last step, which I believe will be the most interesting for you: you can always edit your job using the <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="noreferrer">kubectl patch</a> command:</p> <pre><code>kubectl patch job test-job -p '{"spec":{"parallelism":15}}' kubectl describe jobs.batch test-job Parallelism: 15 Completions: 1000 Start Time: Fri, 17 May 2019 16:58:36 +0200 Pods Statuses: 15 Running / 175 Succeeded / 0 Failed kubectl get pods | grep test-job | grep Running | wc -l 15 </code></pre>
<p>We are looking for a viable option to map an external Windows file share inside Docker containers hosted on <strong>Kubernetes (AWS EKS)</strong>, and have considered a few options. The Windows file share, being in the same VPN, is accessible via IP address.</p> <p>In the absence of anything natively supported by Kubernetes, especially on EKS, we are trying FlexVolumes along with persistent volumes. But that would need installation of CIFS drivers on the nodes, which as I understand EKS doesn't allow, these being managed nodes.</p> <p>Is there any option that doesn't require node-level installation of custom drivers, including CIFS etc.?</p>
<p>You could modify the cloudformation stack to install the drivers after startup, see <a href="https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/windows-public-preview/amazon-eks-cfn-quickstart-windows.yaml" rel="nofollow noreferrer">https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/windows-public-preview/amazon-eks-cfn-quickstart-windows.yaml</a> </p> <p>It references <a href="https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/windows-public-preview/amazon-eks-windows-nodegroup.yaml" rel="nofollow noreferrer">https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/windows-public-preview/amazon-eks-windows-nodegroup.yaml</a> which contains the following powershell startup lines</p> <pre><code>&lt;powershell&gt; [string]$EKSBinDir = "$env:ProgramFiles\Amazon\EKS" [string]$EKSBootstrapScriptName = 'Start-EKSBootstrap.ps1' [string]$EKSBootstrapScriptFile = "$EKSBinDir\$EKSBootstrapScriptName" [string]$cfn_signal = "$env:ProgramFiles\Amazon\cfn-bootstrap\cfn-signal.exe" &amp; $EKSBootstrapScriptFile -EKSClusterName ${ClusterName} ${BootstrapArguments} 3&gt;&amp;1 4&gt;&amp;1 5&gt;&amp;1 6&gt;&amp;1 $LastError = if ($?) { 0 } else { $Error[0].Exception.HResult } &amp; $cfn_signal --exit-code=$LastError ` --stack="${AWS::StackName}" ` --resource="NodeGroup" ` --region=${AWS::Region} &lt;/powershell&gt; </code></pre> <p>Add your custom installation requirements and use this new stack when launching your nodes</p>
<p>Below is my Python script to update a secret so I can deploy to Kubernetes using kubectl, and it works fine. But I want to create a Kubernetes cron job that runs a Docker container to update the secret from within the cluster. How do I do that? The AWS secret lasts only 12 hours, so I have to regenerate it from within the cluster so that I can still pull if a pod crashes, etc.</p> <p>Is there an internal API I have access to within Kubernetes?</p> <pre><code>cmd = """aws ecr get-login --no-include-email --region us-east-1 &gt; aws_token.txt""" run_bash(cmd) f = open('aws_token.txt').readlines() TOKEN = f[0].split(' ')[5] SECRET_NAME = "%s-ecr-registry" % (self.region) cmd = """kubectl delete secret --ignore-not-found %s -n %s""" % (SECRET_NAME,namespace) print (cmd) run_bash(cmd) cmd = """kubectl create secret docker-registry %s --docker-server=https://%s.dkr.ecr.%s.amazonaws.com --docker-username=AWS --docker-password="%s" --docker-email="david.montgomery@gmail.com" -n %s """ % (SECRET_NAME,self.aws_account_id,self.region,TOKEN,namespace) print (cmd) run_bash(cmd) cmd = "kubectl describe secrets/%s-ecr-registry -n %s" % (self.region,namespace) print (cmd) run_bash(cmd) cmd = "kubectl get secret %s-ecr-registry -o yaml -n %s" % (self.region,namespace) print (cmd) </code></pre>
<p>As it happens I literally just got done doing this.</p> <p>Below is everything you need to set up a cronjob to roll your AWS docker login token, and then re-login to ECR, every 6 hours. Just replace the {{ variables }} with your own actual values.</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: {{ namespace }} name: ecr-cred-helper rules: - apiGroups: [""] resources: - secrets - serviceaccounts - serviceaccounts/token verbs: - 'delete' - 'create' - 'patch' - 'get' --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: ecr-cred-helper namespace: {{ namespace }} subjects: - kind: ServiceAccount name: sa-ecr-cred-helper namespace: {{ namespace }} roleRef: kind: Role name: ecr-cred-helper apiGroup: "" --- apiVersion: v1 kind: ServiceAccount metadata: name: sa-ecr-cred-helper namespace: {{ namespace }} --- apiVersion: batch/v1beta1 kind: CronJob metadata: annotations: name: ecr-cred-helper namespace: {{ namespace }} spec: concurrencyPolicy: Allow failedJobsHistoryLimit: 1 jobTemplate: metadata: creationTimestamp: null spec: template: metadata: creationTimestamp: null spec: serviceAccountName: sa-ecr-cred-helper containers: - command: - /bin/sh - -c - |- TOKEN=`aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6` echo "ENV variables setup done." kubectl delete secret -n {{ namespace }} --ignore-not-found $SECRET_NAME kubectl create secret -n {{ namespace }} docker-registry $SECRET_NAME \ --docker-server=https://{{ ECR_REPOSITORY_URL }} \ --docker-username=AWS \ --docker-password="${TOKEN}" \ --docker-email="${EMAIL}" echo "Secret created by name. $SECRET_NAME" kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}' -n {{ namespace }} echo "All done." 
env: - name: AWS_DEFAULT_REGION value: eu-west-1 - name: AWS_SECRET_ACCESS_KEY value: '{{ AWS_SECRET_ACCESS_KEY }}' - name: AWS_ACCESS_KEY_ID value: '{{ AWS_ACCESS_KEY_ID }}' - name: ACCOUNT value: '{{ AWS_ACCOUNT_ID }}' - name: SECRET_NAME value: '{{ imagePullSecret }}' - name: REGION value: 'eu-west-1' - name: EMAIL value: '{{ ANY_EMAIL }}' image: odaniait/aws-kubectl:latest imagePullPolicy: IfNotPresent name: ecr-cred-helper resources: {} securityContext: capabilities: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: Default hostNetwork: true restartPolicy: Never schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 schedule: 0 */6 * * * successfulJobsHistoryLimit: 3 suspend: false </code></pre>
<p>We have a Spring Boot (2.0.4) application exposing a number of endpoints, one of which enables clients to retrieve sometimes very large files (~200 GB). The application is exposed in a Pod via a Kubernetes deployment configured with the rolling-update strategy. </p> <p>When we update our deployment by setting the image to the latest version the pods get destroyed and new ones spun up. Our service provision is seamless for new requests. However current requests can and do get severed and this can be annoying for clients in the middle of downloading very large files.</p> <p>We can configure Container Lifecycle Pre-Stop hooks in our deployment spec to inject a pause before sending shutdown signals to the app via it's PID. This helps prevent any new traffic going to pods which have been set to Terminate. Is there a way to then pause the application shutdown process until all current requests have been completed (this may take tens of minutes)?</p> <p>Here's what we have tried from within the Spring Boot application:</p> <ul> <li><p>Implementing a shutdown listener which intercepts <code>ContextCloseEvents</code>; unfortunately we can't reliably retrieve a list of active requests. Any Actuator metrics which may have been useful are unavailable at this stage of the shutdown process. </p></li> <li><p>Count active sessions by implementing a <code>HttpSessionListener</code> and overriding <code>sessionCreated/Destroy</code> methods to update a counter. This fails because the methods are not invoked on a separate thread so always report the same value in the shutdown listener.</p></li> </ul> <p>Any other strategy we should try? From within the app itself, or the container, or directly through Kubernetes resource descriptors? Advice/Help/Pointers would be much appreciated.</p> <p>Edit: We manage the cluster so we're only trying to mitigate service outages to currently connected clients during a <em>managed update of our deployment via a modified pod spec</em> </p>
<p>You could increase <code>terminationGracePeriodSeconds</code>; the default is 30 seconds. But unfortunately, there's nothing to prevent a cluster admin from force-deleting your pod, and there are all sorts of reasons the whole node could go away.</p>
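<p>A minimal sketch of both settings in the pod template, assuming illustrative values (an hour-long grace period and a hypothetical <code>my-app</code> image):</p> <pre><code>spec:
  # time between SIGTERM and SIGKILL; long enough for large downloads
  terminationGracePeriodSeconds: 3600
  containers:
  - name: app
    image: my-app:latest
    lifecycle:
      preStop:
        exec:
          # short pause so the endpoint is removed from the Service
          # before the application begins shutting down
          command: ["sh", "-c", "sleep 10"]
</code></pre> <p>Note the application still receives SIGTERM after the preStop hook completes, so it must keep serving in-flight requests and only exit once they finish (or the grace period runs out).</p>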
<p>Say we have the following deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: ... spec: replicas: 2 template: spec: containers: - image: ... ... resources: requests: cpu: 100m memory: 50Mi limits: cpu: 500m memory: 300Mi </code></pre> <p>And we also create a <code>HorizontalPodAutoscaler</code> object which automatically scales up/down the number of pods based on CPU average utilization. I know that the HPA will compute the number of pods based on the resource <strong>requests</strong>, but what if I want the containers to be able to request more resources before scaling horizontally?</p> <p>I have two questions:</p> <p>1) Are resource <strong>limits</strong> even used by K8s when a HPA is defined?</p> <p>2) Can I tell the HPA to scale based on resource <strong>limits</strong> rather than requests? Or as a means of implementing such a control, can I set the <code>targetUtilization</code> value to be more than 100%?</p>
<p>No, the HPA does not look at limits at all. You can set the target utilization to any value, even higher than 100%.</p>
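<p>For illustration, with the 100m request above, a target of 300% means the pods may average 300m CPU (still under the 500m limit) before a scale-out is triggered. A sketch, with hypothetical names:</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  # percentage is relative to the CPU *request* (100m), not the limit
  targetCPUUtilizationPercentage: 300
</code></pre>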
<p>I have a k8s template for deploying pods and services. I am using this template to deploy different services based on some parameters (different names, labels) on <strong>AKS</strong>.</p> <p>Some services get their external IP, but for a few services the <strong>External-IP</strong> is always <strong>in pending state</strong>.</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) service/ca1st-orgc LoadBalancer 10.0.25.227 &lt;pending&gt; 7054:30907/TCP 17m service/ca1st-orgc-db-mysql LoadBalancer 10.0.97.81 52.13.67.9 3306:31151/TCP 17m service/kafka1st ClusterIP 10.0.15.90 &lt;none&gt; 9092/TCP,9093/TCP 17m service/kafka2nd ClusterIP 10.0.17.22 &lt;none&gt; 9092/TCP,9093/TCP 17m service/kafka3rd ClusterIP 10.0.02.07 &lt;none&gt; 9092/TCP,9093/TCP 17m service/kubernetes ClusterIP 10.0.0.1 &lt;none&gt; 443/TCP 20m service/orderer1st-orgc LoadBalancer 10.0.17.19 &lt;pending&gt; 7050:30971/TCP 17m service/orderer2nd-orgc LoadBalancer 10.0.02.15 13.06.27.31 7050:31830/TCP 17m service/peer1st-orga LoadBalancer 10.0.10.19 &lt;pending&gt; 7051:31402/TCP,7052:32368/TCP,7053:31786/TCP,5984:30721/TCP 17m service/peer1st-orgb LoadBalancer 10.0.218.48 13.06.25.13 7051:31892/TCP,7052:30326/TCP,7053:31419/TCP,5984:31882/TCP 17m service/peer2nd-orga LoadBalancer 10.0.86.64 &lt;pending&gt; 7051:30590/TCP,7052:31870/TCP,7053:30362/TCP,5984:30036/TCP 17m service/peer2nd-orgb LoadBalancer 10.0.195.212 52.13.58.3 7051:30476/TCP,7052:30091/TCP,7053:30099/TCP,5984:32614/TCP 17m service/zookeeper1st ClusterIP 10.0.57.192 &lt;none&gt; 2888/TCP,3888/TCP,2181/TCP 17m service/zookeeper2nd ClusterIP 10.0.174.25 &lt;none&gt; 2888/TCP,3888/TCP,2181/TCP 17m service/zookeeper3rd ClusterIP 10.0.210.166 &lt;none&gt; 2888/TCP,3888/TCP,2181/TCP 17m </code></pre> <p>The funny thing is, it's the same template that is being used to deploy all the related services. 
For an instance, services which are prefixed with <strong>peer</strong>, being deployed by same template.</p> <p>Has anyone faced this?</p> <blockquote> <p>Deployment template for an <strong>orderer</strong> Pod</p> </blockquote> <pre><code>apiVersion: v1 kind: Pod metadata: name: {{ orderer.name }} labels: k8s-app: {{ orderer.name }} type: orderer {% if (project_version is version('1.4.0','&gt;=') or 'stable' in project_version or 'latest' in project_version) and fabric.metrics is defined and fabric.metrics %} annotations: prometheus.io/scrape: 'true' prometheus.io/path: /metrics prometheus.io/port: '8443' prometheus.io/scheme: 'http' {% endif %} spec: {% if creds %} imagePullSecrets: - name: regcred {% endif %} restartPolicy: OnFailure volumes: - name: task-pv-storage persistentVolumeClaim: claimName: fabriccerts affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: labelSelector: matchExpressions: - key: type operator: In values: - orderer topologyKey: kubernetes.io/hostname containers: - name: {{ orderer.name }} image: {{ fabric.repo.url }}fabric-orderer:{{ fabric.baseimage_tag }} {% if 'latest' in project_version or 'stable' in project_version %} imagePullPolicy: Always {% else %} imagePullPolicy: IfNotPresent {% endif %} env: {% if project_version is version('1.3.0','&lt;') %} - { name: "ORDERER_GENERAL_LOGLEVEL", value: "{{ fabric.logging_level | default('ERROR') | lower }}" } {% elif project_version is version('1.4.0','&gt;=') or 'stable' in project_version or 'latest' in project_version %} - { name: "FABRIC_LOGGING_SPEC", value: "{{ fabric.logging_level | default('ERROR') | lower }}" } {% endif %} - { name: "ORDERER_GENERAL_LISTENADDRESS", value: "0.0.0.0" } - { name: "ORDERER_GENERAL_GENESISMETHOD", value: "file" } - { name: "ORDERER_GENERAL_GENESISFILE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/genesis.block" } - { name: "ORDERER_GENERAL_LOCALMSPID", value: "{{ orderer.org }}" } - { name: 
"ORDERER_GENERAL_LOCALMSPDIR", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/msp" } - { name: "ORDERER_GENERAL_TLS_ENABLED", value: "{{ tls | lower }}" } {% if tls %} - { name: "ORDERER_GENERAL_TLS_PRIVATEKEY", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.key" } - { name: "ORDERER_GENERAL_TLS_CERTIFICATE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.crt" } - { name: "ORDERER_GENERAL_TLS_ROOTCAS", value: "[/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/ca.crt]" } {% endif %} {% if (project_version is version_compare('2.0.0','&gt;=') or ('stable' in project_version or 'latest' in project_version)) and fabric.consensus_type is defined and fabric.consensus_type == 'etcdraft' %} - { name: "ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.key" } - { name: "ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE", value: "/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/server.crt" } - { name: "ORDERER_GENERAL_CLUSTER_ROOTCAS", value: "[/etc/hyperledger/fabric/artifacts/keyfiles/{{ orderer.org }}/orderers/{{ orderer.name }}.{{ orderer.org }}/tls/ca.crt]" } {% elif fabric.consensus_type | default('kafka') == 'kafka' %} - { name: "ORDERER_KAFKA_RETRY_SHORTINTERVAL", value: "1s" } - { name: "ORDERER_KAFKA_RETRY_SHORTTOTAL", value: "30s" } - { name: "ORDERER_KAFKA_VERBOSE", value: "true" } {% endif %} {% if mutualtls %} {% if project_version is version('1.1.0','&gt;=') or 'stable' in project_version or 'latest' in project_version %} - { name: "ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED", value: "true" } {% else %} - { name: 
"ORDERER_GENERAL_TLS_CLIENTAUTHENABLED", value: "true" } {% endif %} - { name: "ORDERER_GENERAL_TLS_CLIENTROOTCAS", value: "[{{ rootca | list | join (", ")}}]" } {% endif %} {% if (project_version is version('1.4.0','&gt;=') or 'stable' in project_version or 'latest' in project_version) and fabric.metrics is defined and fabric.metrics %} - { name: "ORDERER_OPERATIONS_LISTENADDRESS", value: ":8443" } - { name: "ORDERER_OPERATIONS_TLS_ENABLED", value: "false" } - { name: "ORDERER_METRICS_PROVIDER", value: "prometheus" } {% endif %} {% if fabric.orderersettings is defined and fabric.orderersettings.ordererenv is defined %} {% for pkey, pvalue in fabric.orderersettings.ordererenv.items() %} - { name: "{{ pkey }}", value: "{{ pvalue }}" } {% endfor %} {% endif %} {% include './resource.j2' %} volumeMounts: - { mountPath: "/etc/hyperledger/fabric/artifacts", name: "task-pv-storage" } command: ["orderer"] </code></pre> <blockquote> <p>Deployment config for LoadBalancer</p> </blockquote> <pre><code>kind: Service apiVersion: v1 metadata: labels: k8s-app: {{ orderer.name }} name: {{ orderer.name }} spec: selector: k8s-app: {{ orderer.name }} {% if fabric.k8s.exposeserviceport %} type: LoadBalancer {% endif %} ports: - name: port1 port: 7050 {% if fabric.metrics is defined and fabric.metrics %} - name: scrapeport port: 8443 {% endif %} </code></pre> <blockquote> <p>Interesting thing is, I don't see any Events(on running kubectl describe service orderer1st-orgc) for the services which haven't got their External-IP</p> </blockquote> <pre><code>Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>Please share your thoughts.</p>
<p>There was an issue with my cluster. I am not sure what it was, but <strong>the same set of LoadBalancers never got their public IPs</strong>, no matter how many times I cleaned up all the PVCs, services and pods. I deleted the cluster and re-created one. Everything works as expected in the new cluster.</p> <p>All the LoadBalancers get their public IPs.</p>
<p>I have the following deployment running in Google Cloud Platform (GCP):</p> <pre><code> kind: Deployment apiVersion: extensions/v1beta1 metadata: name: mybackend labels: app: backendweb spec: replicas: 1 selector: matchLabels: app: backendweb tier: web template: metadata: labels: app: backendweb tier: web spec: containers: - name: mybackend image: eu.gcr.io/teststuff/backend:latest ports: - containerPort: 8081 </code></pre> <p>This uses the following service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mybackend labels: app: backendweb spec: type: NodePort selector: app: backendweb tier: web ports: - port: 8081 targetPort: 8081 </code></pre> <p>Which uses this ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.global-static-ip-name: backend-ip labels: app: backendweb spec: backend: serviceName: mybackend servicePort: 8081 </code></pre> <p>When I spin all this up in Google Cloud Platform however, I get the following error message on my ingress:</p> <pre><code>All backend services are in UNHEALTHY state </code></pre> <p>I've looked through my pod logs with no indication about the problem. Grateful for any advice!</p>
<p>Most likely this problem is caused by your pod not returning 200 on route <code>'/'</code>: by default, the GCP load balancer health-checks the backend with a GET on <code>/</code> and expects a 200. Please check your pod configuration. If you don't want to return 200 at route <code>'/'</code>, you can add a readiness probe so the health check uses that path instead, like this:</p> <pre><code>readinessProbe: httpGet: path: /healthz port: 80 initialDelaySeconds: 5 periodSeconds: 5 </code></pre>
<p>I have n instances of my micro-service running as kubernetes pods but as there's some scheduling logic in the application code, I would like only one of these pods to execute the code.</p> <p>In Spring applications, a common approach is to activate <strong>scheduled</strong> profile <code>-Dspring.profiles.active=scheduled</code> for only instance &amp; leave it deactivated for the remaining instances. I'd like to know how one can accomplish this in kubernetes. </p> <hr> <p><strong>Note:</strong> I am familiar with the approach where a kubernetes cron job can invoke an end point so that only one instances picked by load balancer executes the scheduled code. However, I would like to know if it's possible to configure kubernetes specification in such a way that only one pod has an environment variable set.</p>
<p>You can create one deployment with 1 replica that has the required environment variable, and another deployment with as many replicas as you want without that variable. You may also set the same labels on both deployments so that a Service can load-balance traffic between pods from both deployments if you need it.</p>
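<p>Here is a minimal sketch of that layout (deployment names, labels and the image are assumptions; the env var matches the Spring example from the question). Each deployment gets its own <code>role</code> label so their selectors don't overlap, while the shared <code>app</code> label lets one Service cover both:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-scheduler   # hypothetical: the single scheduling instance
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
      role: scheduler
  template:
    metadata:
      labels:
        app: my-service        # shared label for the Service
        role: scheduler
    spec:
      containers:
      - name: app
        image: my-service:latest   # assumed image
        env:
        - name: SPRING_PROFILES_ACTIVE   # only this deployment activates the profile
          value: scheduled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-workers     # hypothetical: the remaining instances
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
      role: worker
  template:
    metadata:
      labels:
        app: my-service
        role: worker
    spec:
      containers:
      - name: app
        image: my-service:latest
</code></pre> <p>A Service selecting only <code>app: my-service</code> would then load-balance across pods of both deployments.</p>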
<p>I want to compare two Kubernetes API objects (e.g. <code>v1.PodSpec</code>s): one of them was created manually (expected state), the other one was received from the Kubernetes API/client (actual state). The problem is that even if the two objects are semantically equal, the manually created struct has zerovalues for unspecified fields where the other struct has default values, and so the two doesn't match. It means that a simple <code>reflect.DeepEqual()</code> call is not sufficient for comparison.</p> <p>E.g. after this:</p> <pre><code>expected := &amp;v1.Container{ Name: "busybox", Image: "busybox", } actual := getContainerSpecFromApi(...) </code></pre> <p><code>expected.ImagePullPolicy</code> will be <code>""</code>, while <code>actual.ImagePullPolicy</code> will be <code>"IfNotPresent"</code> (the default value), so the comparison fails.</p> <p>Is there an idiomatic way to replace zerovalues with default values in Kubernetes API structs specifically? Or alternatively is a constructor function that initializes the struct with default values available for them somewhere?</p> <p>EDIT: Currently I am using handwritten equality tests for each K8s API object types, but this doesn't seem to be maintainable to me. I am looking for a simple (set of) function(s) that "knows" the default values for all built-in Kubernetes API object fields (maybe somewhere under <code>k8s.io/api*</code>?). Something like this:</p> <pre><code>expected = api.ApplyContainerDefaults(expected) if !reflect.DeepEqual(expected, actual) { reconcile(expected, actual) } </code></pre>
<p>There are helpers to fill in default values in place of empty/zero ones.</p> <p>Look at <a href="https://godoc.org/k8s.io/kubernetes/pkg/apis/apps/v1#SetObjectDefaults_Deployment" rel="noreferrer">SetObjectDefaults_Deployment</a> for Deployment, for instance.</p> <p>It looks like the proper way to call it is via <a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime#Scheme.Default" rel="noreferrer"><code>(*runtime.Scheme).Default</code></a>. Below is a snippet to show the general idea:</p> <pre><code>import ( "reflect" appsv1 "k8s.io/api/apps/v1" "k8s.io/client-go/kubernetes/scheme" ) func compare() { scheme := scheme.Scheme // fetch the existing &amp;appsv1.Deployment via API actual := ... expected := &amp;appsv1.Deployment{} // fill in the fields to generate your expected state // ... scheme.Default(expected) // now you should have your empty values filled in if !reflect.DeepEqual(expected.Spec, actual.Spec) { reconcile(expected, actual) } } </code></pre> <p>If you need a less strict comparison, for instance if you need to tolerate some injected containers, then something more relaxed should be used, like <a href="https://github.com/amaizfinance/redis-operator/blob/master/pkg/controller/redis/deepcontains.go" rel="noreferrer">this</a>.</p>
<p>I have a Pod or Job yaml spec file (I can edit it) and I want to launch it from my local machine (e.g. using <code>kubectl create -f my_spec.yaml</code>)</p> <p>The spec declares a volume mount. There would be a file in that volume that I want to use as value for an environment variable.</p> <p>I want to make it so that the volume file contents ends up in the environment variable (without me jumping through hoops by somehow "downloading" the file to my local machine and inserting it in the spec).</p> <p>P.S. It's obvious how to do that if you have control over the <code>command</code> of the <code>container</code>. But in case of launching arbitrary image, I have no control over the <code>command</code> attribute as I do not know it.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: generateName: puzzle spec: template: spec: containers: - name: main image: arbitrary-image env: - name: my_var valueFrom: &lt;Contents of /mnt/my_var_value.txt&gt; volumeMounts: - name: my-vol path: /mnt volumes: - name: my-vol persistentVolumeClaim: claimName: my-pvc </code></pre>
<p>You can create a deployment running an endless <code>kubectl</code> loop which constantly polls the volume and updates a configmap from it. After that you can mount the created configmap into your pod. It's a little bit hacky, but it works and updates your configmap automatically. The only requirement is that the PV must be ReadWriteMany or ReadOnlyMany (but in that case you can mount it in read-only mode to all pods).</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: cm-creator namespace: default --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: default name: cm-creator rules: - apiGroups: [""] resources: ["configmaps"] verbs: ["create", "update", "get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: cm-creator namespace: default subjects: - kind: User name: system:serviceaccount:default:cm-creator apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: cm-creator apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: name: cm-creator namespace: default labels: app: cm-creator spec: replicas: 1 selector: matchLabels: app: cm-creator template: metadata: labels: app: cm-creator spec: serviceAccountName: cm-creator containers: - name: cm-creator image: bitnami/kubectl command: - /bin/bash - -c args: - while true; do kubectl create cm myconfig --from-file=my_var=/mnt/my_var_value.txt --dry-run -o yaml | kubectl apply -f -; sleep 60; done volumeMounts: - name: my-vol mountPath: /mnt readOnly: true volumes: - name: my-vol persistentVolumeClaim: claimName: my-pvc </code></pre>
<p>I installed kubeadm to deploy a multi-node kubernetes cluster and added two nodes, which are ready. I am able to run my app using a node port service. But when I try to access the dashboard I am facing an issue. I followed the steps to install the dashboard in this <a href="http://www.assistanz.com/steps-to-install-kubernetes-dashboard/" rel="nofollow noreferrer">link</a>:</p> <pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml dash-admin.yaml: apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard labels: k8s-app: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kube-system kubectl create -f dashboard-admin.yaml nohup kubectl proxy --address="172.20.22.101" -p 443 --accept-hosts='^*$' &amp; </code></pre> <p>It's running well and saving the output in <code>nohup.out</code>.</p> <p>When I try to access the site using the url <code>172.20.22.101:443/api/v1/namespaces/kube-system/services/….</code> it shows <code>connection refused</code>. I observed the output in <code>nohup.out</code>; it shows the below error:</p> <blockquote> <p>I1203 12:28:05.880828 15591 log.go:172] http: proxy error: dial tcp [::1]:8080: connect: connection refused –</p> </blockquote>
<p>You are not running it with root or sudo permission.</p> <p>I have encountered this issue, and after running it as root I was able to access it with no errors.</p>
<h3><strong>Issue in dns lookup for statefulsets srv records</strong></h3> <p><strong><em>My yaml file</em></strong></p> <pre><code>kind: List apiVersion: v1 items: - apiVersion: v1 kind: Service metadata: name: sfs-svc labels: app: sfs-app spec: ports: - port: 80 name: web clusterIP: None selector: app: sfs-app - apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: selector: matchLabels: app: sfs-app # has to match .spec.template.metadata.labels serviceName: "sfs-svc" replicas: 3 template: metadata: labels: app: sfs-app # has to match .spec.selector.matchLabels spec: terminationGracePeriodSeconds: 10 containers: - name: test-container image: nginx imagePullPolicy: IfNotPresent command: [ "sh", "-c"] args: - while true; do printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE &gt;&gt; /var/sl/output.txt; printenv MY_POD_IP &gt;&gt; /var/sl/output.txt; date &gt;&gt; var/sl/output.txt; cat /var/sl/output.txt; sleep 999999; done; env: - name: MY_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP volumeMounts: - name: www mountPath: /var/sl volumeClaimTemplates: - metadata: name: www spec: accessModes: [ "ReadWriteOnce" ] #storageClassName: classNameIfAny resources: requests: storage: 1Mi </code></pre> <p><strong>$ Kubectl cluster-info</strong></p> <blockquote> <p>Kubernetes master is running at <a href="https://192.168.99.100:8443" rel="nofollow noreferrer">https://192.168.99.100:8443</a> KubeDNS is running at <a href="https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy" rel="nofollow noreferrer">https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy</a></p> </blockquote> <p><strong>$ kubectl version</strong></p> <blockquote> <p>Client Version: version.Info{Major:"1", Minor:"13", 
GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}</p> </blockquote> <p><strong>$ kubectl get po,svc,statefulset</strong></p> <pre><code>&gt; NAME READY STATUS RESTARTS AGE &gt; pod/web-0 1/1 Running 0 45m &gt; pod/web-1 1/1 Running 0 45m &gt; pod/web-2 1/1 Running 0 45m &gt; &gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE &gt; service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 46m &gt; service/sfs-svc ClusterIP None &lt;none&gt; 80/TCP 45m &gt; &gt; NAME READY AGE &gt; statefulset.apps/web 3/3 45m &gt; </code></pre> <h3><strong>PROBLEM: I am not getting DNS address for statefulset headless service</strong></h3> <p><strong>When I try $ nslookup sfs-svc.default.svc.cluster.local</strong> </p> <pre><code>&gt; Server: 127.0.0.53 &gt; Address: 127.0.0.53#53 &gt; **&gt; ** server can't find sfs-svc.default.svc.cluster.local: SERVFAIL** &gt; </code></pre>
<p>My first guess is that you are running <code>nslookup</code> from <code>localhost</code> instead of from inside of a <code>pod</code> (your output shows <code>Server: 127.0.0.53</code>, the local stub resolver, not the cluster DNS).</p> <p>I tried the yaml and I can only reproduce this problem when I run <code>nslookup sfs-svc.default.svc.cluster.local</code> from localhost.</p> <p>Anyway, to check the DNS entries of a Service, run <code>nslookup</code> from inside of a pod. Here is an example:</p> <pre><code>~ $ kubectl run -it --rm --restart=Never dnsutils2 --image=tutum/dnsutils --command -- bash root@dnsutils2:/# nslookup sfs-svc.default.svc.cluster.local Server: 10.96.0.10 Address: 10.96.0.10#53 Name: sfs-svc.default.svc.cluster.local Address: 172.17.0.6 Name: sfs-svc.default.svc.cluster.local Address: 172.17.0.5 Name: sfs-svc.default.svc.cluster.local Address: 172.17.0.4 root@dnsutils2:/# exit </code></pre>
<p>I am trying to run a simple wordcount application in Spark on Kubernetes. I am getting following issue.</p> <pre><code>Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [spark-wordcount-1545506479587-driver] in namespace: [non-default-namespace] failed. at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62) at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184) at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator$$anonfun$1.apply(ExecutorPodsAllocator.scala:57) at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator$$anonfun$1.apply(ExecutorPodsAllocator.scala:55) at scala.Option.map(Option.scala:146) at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.&lt;init&gt;(ExecutorPodsAllocator.scala:55) at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:89) at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2788) ... 20 more Caused by: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) </code></pre> <p>I have followed all the steps mentioned in the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#rbac" rel="noreferrer">RBAC setup</a>. Only thing I could not do was I could not create clusterbinding spark-role since I don't have access to the default namespace. 
Instead I created a rolebinding.</p> <pre><code>kubectl create rolebinding spark-role --clusterrole=edit --serviceaccount=non-default-namespace:spark --namespace=non-default-namespace </code></pre> <p>I am using the following spark-submit command.</p> <pre><code>spark-submit \ --verbose \ --master k8s://&lt;cluster-ip&gt;:&lt;port&gt; \ --deploy-mode cluster --supervise \ --name spark-wordcount \ --class WordCount \ --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-test \ --conf spark.kubernetes.driver.limit.cores=1 \ --conf spark.kubernetes.executor.limit.cores=1 \ --conf spark.executor.instances=1 \ --conf spark.kubernetes.container.image=&lt;image&gt; \ --conf spark.kubernetes.namespace=non-default-namespace \ --conf spark.kubernetes.driver.pod.name=spark-wordcount-driver \ local:///opt/spark/work-dir/spark-k8s-1.0-SNAPSHOT.jar </code></pre> <p>Update: I was able to fix the first SocketTimeoutException issue. I did not have a network policy defined, so the driver and executors were not able to talk to each other, which is why it was timing out. I changed the network policy from default-deny-all to allow-all for ingress and egress and the timeout exception went away. However, I am still getting the Operation get for kind Pod error, with the following exception:</p> <pre><code>Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again </code></pre> <p>Any suggestion or help will be appreciated.</p>
<p>This is because your DNS is unable to resolve <code>kubernetes.default.svc</code>, which in turn could be an issue with your networking and iptables.</p> <p>Run this on the specific node:</p> <pre><code>kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools </code></pre> <p>and check:</p> <pre><code>nslookup kubernetes.default.svc </code></pre> <p>Edit: I had this issue because, in my case, flannel was using a different network (10.244.x.x) and my kubernetes cluster was configured with networking (172.x.x.x).</p> <p>I blindly ran the default one from <a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</a>, inside which the pod network is configured to 10.244.x.x. To fix it, I downloaded the file, changed it to the correct pod network and applied it.</p>
<p>How does an ingress forward https traffic to port 443 of the service(eventually to 8443 on my container)? Do I have to make any changes to my ingress or is this done automatically. </p> <p>On GCP, I have a layer 4 balancer -> nginx-ingress controller -> ingress</p> <p>My ingress is:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-keycloak annotations: kubernetes.io/ingress.class: "nginx" certmanager.k8s.io/issuer: "letsencrypt-prod" certmanager.k8s.io/acme-challenge-type: http01 spec: tls: - hosts: - mysite.com secretName: staging-iam-tls rules: - host: mysite.com http: paths: - path: /auth backend: serviceName: keycloak-http servicePort: 80 </code></pre> <p>I searched online but I don't see explicit examples of hitting 443. It's always 80(or 8080)</p> <p>My service <code>keycloak-http</code> is(elided and my container is actually listening at 8443) </p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: 2019-05-15T12:45:58Z labels: app: keycloak chart: keycloak-4.12.0 heritage: Tiller release: keycloak name: keycloak-http namespace: default .. spec: clusterIP: .. externalTrafficPolicy: Cluster ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: 8443 selector: app: keycloak release: keycloak sessionAffinity: None type: NodePort status: loadBalancer: {} </code></pre>
<p>Try adding the <code>backend-protocol</code> annotation, which tells the nginx ingress controller to proxy to the backend over HTTPS, and point the ingress at the service's 443 port (which your service maps to 8443 in the container):</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-keycloak annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" certmanager.k8s.io/issuer: "letsencrypt-prod" certmanager.k8s.io/acme-challenge-type: http01 spec: tls: - hosts: - mysite.com secretName: staging-iam-tls rules: - host: mysite.com http: paths: - path: /auth backend: serviceName: keycloak-http servicePort: 443 </code></pre>
<p>I want to add some annotations to the metadata block of a service within an existing helm chart (I have to add an annotation for Prometheus so that the service is auto discovered). The chart (it is the neo4j chart) does not offer me a configuration that I can use to set annotations. I also looked into the yaml files and noticed that there is no variable I can use to insert something in the metadata block. The only solution I can see is that I have to fork the chart, insert the annotation data to the correct place and create my own chart out of it. Is that really the only solution or is there some trick I am missing that allows me to modify the helm chart without creating a new one?</p>
<p>In Helm 2, you are correct. Either you would have to fork the chart, or pass the rendered output through another tool, such as Kustomize. Helm 3 has some planned features to improve this in the future.</p>
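<p>As a sketch of the second option, the chart can be rendered with <code>helm template</code> and post-processed with Kustomize (the file names and annotation values here are just examples, not taken from the neo4j chart):</p> <pre><code># kustomization.yaml
# Assumes the chart output was rendered first, e.g.:
#   helm template ./neo4j-chart &gt; rendered.yaml
resources:
- rendered.yaml

# commonAnnotations adds these annotations to every rendered object;
# use a targeted patch instead if only the Service should be annotated.
commonAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "2004"   # hypothetical metrics port
</code></pre> <p>Running <code>kustomize build</code> on this directory then yields the chart's manifests with the annotations applied, ready for <code>kubectl apply</code>.</p>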
<p>I was experimenting with something with Kubernetes Persistent Volumes, I can't find a clear explanation in Kubernetes documentation and the behaviour is not the one I am expecting so I like to ask here.</p> <p>I configured following Persistent Volume and Persistent Volume Claim.</p> <pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolume apiVersion: v1 metadata: name: store-persistent-volume namespace: test spec: storageClassName: hostpath capacity: storage: 2Gi accessModes: - ReadWriteOnce hostPath: path: "/Volumes/Data/data" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: store-persistent-volume-claim namespace: test spec: storageClassName: hostpath accessModes: - ReadWriteOnce resources: requests: storage: 1Gi </code></pre> <p>and the following Deployment and Service configuration.</p> <pre class="lang-yaml prettyprint-override"><code>kind: Deployment apiVersion: apps/v1beta2 metadata: name: store-deployment namespace: test spec: replicas: 1 selector: matchLabels: k8s-app: store template: metadata: labels: k8s-app: store spec: volumes: - name: store-volume persistentVolumeClaim: claimName: store-persistent-volume-claim containers: - name: store image: localhost:5000/store ports: - containerPort: 8383 protocol: TCP volumeMounts: - name: store-volume mountPath: /data --- #------------ Service ----------------# kind: Service apiVersion: v1 metadata: labels: k8s-app: store name: store namespace: test spec: type: LoadBalancer ports: - port: 8383 targetPort: 8383 selector: k8s-app: store </code></pre> <p>As you can see I defined '/Volumes/Data/data' as Persistent Volume and expecting that to mount that to '/data' container.</p> <p>So I am assuming whatever in '/Volumes/Data/data' in the host should be visible at '/data' directory at container. Is this assumption correct? 
Because this is definitely not happening at the moment.</p> <p>My second assumption is that whatever I save at '/data' should be visible at the host, which is also not happening.</p> <p>I can see from the Kubernetes console that everything started correctly (Persistent Volume, Claim, Deployment, Pod, Service...).</p> <p>Am I understanding the persistent volume concept correctly at all?</p> <p>PS. I am trying this on a Mac with Docker (18.05.0-ce-mac67(25042) -Channel edge); maybe it does not work on Mac?</p> <p>Thx for answers</p>
<p>Assuming you are using a multi-node Kubernetes cluster, you should be able to see the data mounted locally at /Volumes/Data/data <strong>on the specific worker node that the pod is running on</strong>.</p> <p>You can check on which worker your pod is scheduled by using the command <code>kubectl get pods -o wide -n test</code>.</p> <p>Please note that, as per the kubernetes docs on <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">PersistentVolume</a> types, HostPath is for single-node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster.</p> <p>It does work in my case.</p>
<p>I'm using the excellent Redis based <a href="https://github.com/OptimalBits/bull" rel="noreferrer">Bull.js</a> as a job queue on Kubernetes.</p> <p>It's configured as a cluster:</p> <p><a href="https://i.stack.imgur.com/yZ4qm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yZ4qm.png" alt="enter image description here"></a></p> <p>When Kubernetes restarts upon deployments, I run into this following error:</p> <pre><code>BRPOPLPUSH { ReplyError: MOVED 2651 &lt;IP_ADDRESS&gt;:6379 at parseError (/usr/src/app/node_modules/ioredis/node_modules/redis-parser/lib/parser.js:179:12) at parseType (/usr/src/app/node_modules/ioredis/node_modules/redis-parser/lib/parser.js:302:14) command: { name: 'brpoplpush', args: [ '{slack}:slack notifications:wait', '{slack}:slack notifications:active', '5' ] } } </code></pre> <p>Where <code>&lt;IP_ADDRESS&gt;</code> is, I <em>think</em> the cluster IP? I didn't configure this, but I'm trying to debug this. I want to know if I need to enable cluster mode for Bull.js or if this is a configuration issue outside of the Bull.js project?</p> <p>Or is it a networking issue with K8s? </p> <p>Would enabling: <a href="https://github.com/OptimalBits/bull#cluster-support" rel="noreferrer">https://github.com/OptimalBits/bull#cluster-support</a> be the solution? 
Is this the right approach?</p> <p>Here is my code:</p> <pre><code>import Queue from 'bull'; import config from 'config'; import { run as slackRun } from './tasks/send-slack-message'; import { run as emailRun } from './tasks/send-email'; const redisConfig = { redis: { host: config.redis.host, port: config.redis.port } }; const slackQueue = new Queue('slack notifications', { ...redisConfig, ...{ prefix: '{slack}' } }); const emailQueue = new Queue('email notifications', { ...redisConfig, ...{ prefix: '{email}' } }); slackQueue.process(slackRun); emailQueue.process(emailRun); emailQueue.on('completed', (job, result) =&gt; { job.remove(); }); export { emailQueue, slackQueue }; </code></pre> <pre><code>import { emailQueue, slackQueue } from 'worker/worker'; const queueOptions = { attempts: 2, removeOnComplete: true, backoff: { type: 'exponential', delay: 60 * 1000 } }; emailQueue.add( { params: { from: email, fromname: name, text: body } }, queueOptions ); slackQueue.add( { channelId: SLACK_CHANNELS.FEEDBACK, attachments: [ { text: req.body.body } ] }, queueOptions ); </code></pre> <p>This is the configmap:</p> <pre><code>Name: redis-cluster-config Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Data ==== update-node.sh: ---- #!/bin/sh REDIS_NODES="/data/nodes.conf" sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES} exec "$@" redis.conf: ---- cluster-enabled yes cluster-require-full-coverage no cluster-node-timeout 15000 cluster-config-file nodes.conf cluster-migration-barrier 1 appendonly yes # Other cluster members need to be able to connect protected-mode no Events: &lt;none&gt; </code></pre>
<p><a href="https://stackoverflow.com/questions/56140999/redis-master-slave-setup-on-kubernetes-throwing-error-brpoplpush-replyerror?noredirect=1#comment99043715_56140999">Hitobat</a> is right: the <code>MOVED</code> reply means you are talking to a Redis Cluster with a client that is not cluster-aware. You can hand Bull a cluster-aware ioredis client through the <code>createClient</code> option:</p> <pre class="lang-js prettyprint-override"><code>const Redis = require('ioredis');
...
// Bull calls createClient for every connection type it needs.
// The ordinary 'client' connection can be shared; blocking ('bclient')
// and 'subscriber' connections should each get their own.
const ioCluster = new Redis.Cluster([redisConfig.redis]);

const createClient = (type) =&gt;
  type === 'client' ? ioCluster : new Redis.Cluster([redisConfig.redis]);

const slackQueue = new Queue('slack notifications', {
  prefix: '{slack}',
  createClient
});

const emailQueue = new Queue('email notifications', {
  prefix: '{email}',
  createClient
});
</code></pre> <p>If that doesn't help, I would try going without ioredis, or downgrading the Redis engine to 4.x.</p>
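<p>One caveat worth spelling out: Bull's <code>createClient</code> callback is invoked with a connection type (<code>'client'</code>, <code>'subscriber'</code> or <code>'bclient'</code>), and the blocking and subscriber connections should not reuse a shared connection. A minimal, runnable sketch of that sharing logic, where <code>factory</code> stands in for <code>() =&gt; new Redis.Cluster([...])</code>:</p>

```javascript
// Bull calls createClient(type) whenever it needs a connection.
// 'client' connections are safe to share; 'subscriber' and 'bclient'
// (blocking) connections each need a dedicated one.
function makeCreateClient(factory) {
  const shared = factory(); // one shared connection for ordinary commands
  return (type) => (type === 'client' ? shared : factory());
}

// Demonstration with a counting stand-in for new Redis.Cluster([...]):
let made = 0;
const createClient = makeCreateClient(() => ({ id: ++made }));
const a = createClient('client');
const b = createClient('client');
const c = createClient('bclient');
console.log(a === b, a === c, made); // true false 2
```

<p>With a real cluster, <code>factory</code> would open a new <code>Redis.Cluster</code> connection each time it is called.</p>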
<p>I'm new to Apache Camel K. I have installed Kubernetes on the master machine, then downloaded the "kamel" binary and placed it in "/usr/bin". My version is:</p> <pre><code>Camel K Client 0.3.3 </code></pre> <p>My kubernetes master and kubeDNS are running fine. When I tried to install Camel K on the kubernetes cluster with the command "kamel install" as per the documentation, I got the following error:</p> <pre><code>Error: cannot find automatically a registry where to push images </code></pre> <p>I don't know what this new command does:</p> <pre><code>"kamel install --cluster-setup" </code></pre> <p>After running the above command, the response is:</p> <pre><code>Camel K cluster setup completed successfully </code></pre> <p>I then tried to run a small integration script:</p> <pre><code>"kamel run hello.groovy --dev" </code></pre> <p>My groovy file is:</p> <pre><code>from("timer:tick?period=3s") .setBody().constant("Hello World from Camel K!!!") .to("log:message") </code></pre> <p>but the operator pod hangs; its status stays Pending.</p> <pre><code>camel-k-operator-587b579567-92xlk 0/1 Pending 0 26m </code></pre> <p>Can you please help me in this regard? Thanks a lot for your time.</p> <p>References I used: <a href="https://github.com/apache/camel" rel="nofollow noreferrer">https://github.com/apache/camel</a></p>
<p>You need to set the container registry where Camel K can publish and retrieve images. You can do this by editing Camel K's integration platform:</p> <pre><code>oc edit integrationplatform camel-k </code></pre> <p>or upon installation:</p> <pre><code>kamel install --registry=... </code></pre>
<p>How do I configure the IBM Cloud Kubernetes Service CLI? I am receiving an error when I export the environment variable for mycluster.</p> <p><code>SET KUBECONFIG=C:\Users\myname\.bluemix\plugins\container-service\clusters\mycluster\kube-config-hou02-mycluster.yml</code></p> <p>When I execute the command below,</p> <p><code>export KUBECONFIG=C:\users\myname\.bluemix\plugins\container-service\clusters\mycluster\kube-config-hou02-mycluster.yml</code></p> <p>I receive the error below.</p> <pre><code>export : The term 'export' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + export KUBECONFIG=C:\users\myname\.bluemix\plugins\container-service ... + ~~~~~~ + CategoryInfo : ObjectNotFound: (export:String) [], CommandNotF oundException + FullyQualifiedErrorId : CommandNotFoundException </code></pre>
<p>When setting up the Kubernetes CLI, there are <a href="https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install#cs_cli_configure" rel="nofollow noreferrer">special instructions for Windows environments</a>:</p> <blockquote> <p>Windows PowerShell users: Instead of copying and pasting the SET command from the output of ibmcloud ks cluster-config, you must set the KUBECONFIG environment variable by running, for example, $env:KUBECONFIG = "C:\Users\.bluemix\plugins\container-service\clusters\mycluster\kube-config-prod-dal10-mycluster.yml".</p> </blockquote>
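<p>For completeness: <code>export</code> is the POSIX-shell equivalent, so if you run the CLI from Git Bash, WSL, or another bash-style shell on the same machine, the command from the question works there. A sketch, with the Windows path from the question rewritten for a POSIX shell (adjust the path to your own cluster):</p>

```shell
# PowerShell uses `$env:KUBECONFIG = "..."`; POSIX shells use `export` instead.
export KUBECONFIG="$HOME/.bluemix/plugins/container-service/clusters/mycluster/kube-config-hou02-mycluster.yml"
echo "$KUBECONFIG"
```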