<p>I want two pods (say pod A and pod B), running in the same or in different namespaces, to communicate regardless of the protocol (say http, https, akka.tcp), with a valid NetworkPolicy applied.</p> <p><strong><em>Solutions tried:</em></strong></p> <ol> <li><p>Applied a network policy to both pods and used the service name “my-svc.my-namespace.svc.cluster.local” to make pod B communicate with pod A, which runs the service “my-svc”, but they failed to communicate.</p></li> <li><p>Added the IP address and host mapping of pod A to pod B during its deployment; pod B was then able to communicate with pod A, but the inverse communication still fails.</p></li> </ol> <p>How can I fix this?</p>
<p>By default, pods can communicate with each other by their IP address, regardless of the namespace they're in.</p> <p>You can see the IP address of each pod with:</p> <pre><code>kubectl get pods -o wide --all-namespaces </code></pre> <p>However, the normal way to communicate within a cluster is through Service resources.</p> <p>A Service also has an IP address and additionally a DNS name. A Service is backed by a set of pods. The Service forwards requests to itself to one of the backing pods.</p> <p>The fully qualified DNS name of a Service is:</p> <pre><code>&lt;service-name&gt;.&lt;service-namespace&gt;.svc.cluster.local </code></pre> <p>This can be resolved to the IP address of the Service from anywhere in the cluster (regardless of namespace).</p> <p>For example, if you have:</p> <ul> <li>Namespace <code>ns-a</code>: Service <code>svc-a</code> → set of pods A</li> <li>Namespace <code>ns-b</code>: Service <code>svc-b</code> → set of pods B</li> </ul> <p>Then a pod of set A can reach a pod of set B by making a request to:</p> <pre><code>svc-b.ns-b.svc.cluster.local </code></pre>
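<p>One caveat on the NetworkPolicy part of the question: as soon as any NetworkPolicy selects a pod, all traffic to that pod that is not explicitly allowed is blocked. A minimal sketch (the label names are illustrative, not taken from the question) that lets pods labeled <code>app: b</code> in namespace <code>ns-b</code> reach the pods behind <code>svc-a</code> on any port and protocol:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-b
  namespace: ns-a
spec:
  podSelector:
    matchLabels:
      app: a               # the pods backing svc-a
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ns-b       # assumes the ns-b namespace carries this label
      podSelector:
        matchLabels:
          app: b
    # no "ports" list, so all ports and protocols are allowed
</code></pre>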
<p>I have a flask app with uwsgi and gevent.<br> Here is my <code>app.ini</code>. How could I write a readinessProbe and a livenessProbe on Kubernetes to check the flask app?</p> <pre><code>[uwsgi]
socket = /tmp/uwsgi.sock
chdir = /usr/src/app/
chmod-socket = 666
module = flasky
callable = app
master = false
processes = 1
vacuum = true
die-on-term = true
gevent = 1000
listen = 1024
</code></pre>
<p>You can configure <code>uWSGI</code> to serve a <code>uwsgi-socket</code> alongside an <code>http-socket</code>, and only expose the <code>uwsgi-socket</code> to the <code>k8s service</code>.</p> <p>In this case your <code>uwsgi.ini</code> would look something like:</p> <pre><code>[uwsgi]
socket = /tmp/uwsgi.sock
chdir = /usr/src/app/
chmod-socket = 666
module = flasky
callable = app
master = false
processes = 1
vacuum = true
die-on-term = true
gevent = 1000
listen = 1024
http-socket = 0.0.0.0:5050
</code></pre> <p>And assuming you have a <code>/health</code> endpoint in your app, your <code>k8s manifest</code> can be something like:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: myapp
    image: myimage
    livenessProbe:
      httpGet:
        path: /health
        port: 5050
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 5
      periodSeconds: 5
</code></pre> <p>In this case, your app stays reachable as the upstream at <code>socket = /tmp/uwsgi.sock</code> via your <code>k8s service</code>, while the kubelet health checks can reach your container directly at <code>http-socket: 5050</code>.</p>
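<p>The question asks for a <code>readinessProbe</code> as well; it can point at the same assumed <code>/health</code> endpoint, for example:</p> <pre><code>    readinessProbe:
      httpGet:
        path: /health
        port: 5050
      initialDelaySeconds: 5
      periodSeconds: 5
</code></pre> <p>The difference is that a failing readiness probe only removes the pod from the Service endpoints, while a failing liveness probe restarts the container.</p>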
<p>Our Scala application (Kubernetes deployment) constantly experiences Akka Cluster heartbeat delays of ≈3s.</p> <p>Once we even had a 200s delay, which also manifested itself in the following graph:</p> <p><a href="https://i.stack.imgur.com/XpS0E.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XpS0Em.jpg" alt="grafana-200s"></a></p> <p>Can someone suggest things to investigate further?</p> <h3>Specs</h3> <ul> <li>Kubernetes 1.12.5</li> </ul> <pre><code>requests.cpu = 16
# limits.cpu not set
</code></pre> <ul> <li>Scala 2.12.7</li> <li>Java 11.0.4+11</li> </ul> <pre><code>-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:+AlwaysPreTouch
-Xlog:gc*,safepoint,gc+ergo*=trace,gc+age=trace:file=/data/gc.log:time,level,tags:filecount=4,filesize=256M
-XX:+PerfDisableSharedMem
</code></pre> <ul> <li>Akka Cluster 2.5.25</li> </ul> <h3>Java Flight Recording</h3> <p>Some examples:</p> <pre><code>timestamp     delay_ms
06:24:55.743  2693
06:30:01.424  3390
07:31:07.495  2487
07:36:12.775  3758
</code></pre> <p>There were 4 suspicious time points where lots of <strong>Java Thread Park</strong> events were registered simultaneously for Akka threads (actors &amp; remoting), and all of them correlate to heartbeat issues:</p> <p><a href="https://i.stack.imgur.com/RpBE9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RpBE9m.jpg" alt="jfr-thread-park-1"></a> <a href="https://i.stack.imgur.com/fAbdY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fAbdYm.jpg" alt="jfr-thread-park-2"></a></p> <p>Around <code>07:05:39</code> there were no "heartbeat was delayed" logs, but there was this one:</p> <pre><code>07:05:39,673 WARN PhiAccrualFailureDetector heartbeat interval is growing too large for address SOME_IP: 3664 millis
</code></pre> <p>No correlation with halt events or blocked threads was found during the Java Flight Recording session, only two <strong>Safepoint Begin</strong> events in proximity to delays:</p> <p><a href="https://i.stack.imgur.com/iB8od.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iB8odm.jpg" alt="jfr-safepoint-begin"></a></p> <h3>CFS throttling</h3> <p>The application's CPU usage is low, so we thought it could be related to <a href="https://github.com/kubernetes/kubernetes/issues/67577" rel="nofollow noreferrer">how K8s schedules our application node for CPU</a>. But turning off CPU limits hasn't improved things much, though the <code>kubernetes.cpu.cfs.throttled.second</code> metric disappeared.</p> <h3>Separate dispatcher</h3> <p>Using a separate dispatcher seems to be unnecessary since delays happen even when there is no load; we also built a stripped-down application similar to our own which does nothing but heartbeats, and it still experiences these delays.</p> <h3>K8s cluster</h3> <p>From our observations it happens far more frequently on a couple of K8s nodes in a large K8s cluster shared with many other apps, when our application isn't loaded much.</p> <p>A separate dedicated K8s cluster where our app is load tested has almost no issues with heartbeat delays.</p>
<p>Have you been able to rule out garbage collection? In my experience, that's the most common cause for delayed heartbeats in JVM distributed systems (and the CFS quota in a Kubernetes/Mesos environment can make non-Stop-The-World GCs effectively STW, especially if you're not using a really recent (later than release 212 of JDK8) version of openjdk).</p> <p>Every thread parking before "Safepoint begin" does lead me to believe that GC is in fact the culprit. Certain GC operations (e.g. rearranging the heap) require every thread to be in a safepoint, so every so often when not blocked, threads will check if the JVM wants them to safepoint; if so the threads park themselves in order to get to a safepoint.</p> <p>If you've ruled out GC, are you running in a cloud environment (or on VMs where you can't be sure that the CPU or network aren't oversubscribed)? The akka-cluster documentation suggests increasing the <code>akka.cluster.failure-detector.threshold</code> value, which defaults to a value suitable for a more controlled LAN/bare-metal environment: 12.0 is recommended for cloud environments. This won't prevent delayed heartbeats, but it will decrease the chances of a spurious downing event because of a single long heartbeat (and also delay responses to genuine node loss events). If you want to tolerate a spike in heartbeat inter-arrival times from 1s to 200s, though, you'll need a really high threshold.</p>
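<p>For reference, that setting lives under <code>akka.cluster.failure-detector</code> in <code>application.conf</code>. A sketch with illustrative values (tune them to your environment):</p> <pre><code>akka.cluster.failure-detector {
  # phi threshold; the default is 8.0, 12.0 is suggested for cloud environments
  threshold = 12.0
  # extra grace period for missed heartbeats before a node is marked unreachable
  acceptable-heartbeat-pause = 5s
}
</code></pre>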
<p>I have deployed a <a href="https://www.kubeflow.org/" rel="nofollow noreferrer">Kubeflow</a> on an EKS cluster, but want to configure HTTPS listening with a custom ACM certificate. Kubeflow utilizes Istio's ingress gateway to receive external traffic, and by default is only configured for HTTP traffic.</p> <p>When I inspect the <code>ingress.yaml</code> file that creates the Ingress object, I see that it is configured for only HTTP:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
  name: istio-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: istio-ingressgateway
          servicePort: 80
        path: /*
</code></pre> <p>The same exists for the Istio Gateway:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kubeflow-gateway
  namespace: kubeflow
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
</code></pre> <p>The only article I can find on accepting TLS traffic comes from the <a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/#configure-traffic-for-the-bookinfo-com-host" rel="nofollow noreferrer">Istio documentation tutorial for its book service</a>, but its configuration appears to be using a certificate that is mounted directly into the file system at <code>/etc/istio/ingressgateway-bookinfo-certs/tls.crt</code>:</p> <pre><code>$ kubectl apply -f - &lt;&lt;EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 443
      name: https-bookinfo
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-bookinfo-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-bookinfo-certs/tls.key
    hosts:
    - "bookinfo.com"
EOF
</code></pre>
<p>The problem is, when I typically configure an AWS ELB to listen to HTTPS traffic, I typically have to specify the SSL certificate from ACM (Amazon Certificate Manager). For example, this is a screenshot of me manually configuring the load balancer created by Istio to listen for HTTPS traffic: <a href="https://i.stack.imgur.com/qtBl8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtBl8.png" alt="enter image description here"></a> This is the certificate I acquired for my hosted zones purchased from Route 53. However, I honestly don't have any idea how I am supposed to specify this certificate within my Istio YAML configurations, or even how to mount it. I know this is a somewhat basic question, but could someone point me in the direction of a tutorial or walkthrough for how to use HTTPS listeners with ACM for Istio ingress?</p>
<p>I think you simply need to add an annotation. You can specify which certificate your load balancer should use like this (replace the ARN with the one you get from ACM):</p> <pre><code>alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:1231234564:certificate/4564abc12-d3c2-4455-8c39-45354cddaf03
</code></pre> <p>And I believe you may need to listen on port 443.</p> <p>Docs on the available annotations are at <a href="https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/</a></p>
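<p>Putting both suggestions together, the Ingress metadata from the question would look something like this sketch (the ARN is a placeholder):</p> <pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:REGION:ACCOUNT_ID:certificate/CERTIFICATE_ID
  name: istio-ingress
</code></pre>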
<p>I'm using the MyHealthClinic app (<a href="https://azuredevopslabs.com/labs/vstsextend/kubernetes/" rel="noreferrer">https://azuredevopslabs.com/labs/vstsextend/kubernetes/</a>), which is a .NET Core frontend and backend Kubernetes cluster, and deploying to Google Kubernetes Engine trying to connect to a SQL Server VM, but I receive the following error with CrashLoopBackOff when the pod tries to start up after pulling the image that I pushed:</p> <pre><code>Unhandled Exception: System.Data.SqlClient.SqlException: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: TCP Provider, error: 35 - An internal exception was caught)
 ---&gt; System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Net.Security.SslState.StartSendAuthResetSignal(ProtocolToken ...
</code></pre> <p>I've checked my appsettings.json and it seems correct in that I have it set as:</p> <pre><code>"DefaultConnection": "Server={my-external-IP},1433;Initial Catalog=mhcdb;Persist Security Info=False;User ID={sqlusername};Password={sqlpassword};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
</code></pre> <p>I've also confirmed:</p> <ul> <li>SQL VM is accessible from all external IPs I have for the front-end cluster and my local machine</li> <li>Firewall on the machine and for the VPC network has port 1433 allowed</li> <li>I can connect successfully from my local machine using the same IP for the SQL VM and creds</li> <li>IP for connection string is specified without http/https</li> </ul> <p>Is there anywhere else I can check to try and fix this? I'm able to deploy the cluster without issues to AKS in Azure but not sure if GKE may be blocking outbound connections from the clusters. The only similar questions I've found are related to SMTP servers so far. I'm a bit new to GKE so any ideas will help.</p> <p>If it helps, here's my deployment YAML file (kept the same for the AKS cluster, so not sure if something needs to be changed specifically for GKE):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mhc-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mhc-back
    spec:
      containers:
      - name: mhc-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: mhc-back
spec:
  ports:
  - port: 6379
  selector:
    app: mhc-back
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mhc-front
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: mhc-front
    spec:
      containers:
      - name: mhc-front
        image: {gcr.io/[Project-Id]}/myhealth.web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: REDIS
          value: "mhc-back"
---
apiVersion: v1
kind: Service
metadata:
  name: mhc-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: mhc-front
</code></pre>
<p>After I started looking into why the remote (SQL) certificate was invalid, I changed my connection string to include TrustServerCertificate=True. Since this is a demo environment and I kept Encrypt=True, it looks like this fixed everything! If anyone thinks bypassing validation of the server certs is a bad idea, let me know too.</p> <pre><code>"DefaultConnection": "Server={my-external-IP},1433;Initial Catalog=mhcdb;Persist Security Info=False;User ID={sqlusername};Password={sqlpassword};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"
</code></pre>
<p>Kubernetes on Google Cloud Platform configures a default CPU request and limit.</p> <p>I make use of daemonsets, and daemonset pods should use as much CPU as possible.</p> <p>Manually increasing the upper limit is possible, but it must be reconfigured whenever new nodes are added, and it must be set much lower than what is available on the node so that rolling updates can still schedule pods.</p> <p>This requires a lot of manual actions, and some resources are just not used most of the time. Is there a way to completely remove the default CPU limit so that pods can use all available CPU?</p>
<p>GKE, by default, creates a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#limitrange-v1-core" rel="nofollow noreferrer">LimitRange</a> object named <code>limits</code> in the <code>default</code> namespace looking like this:</p> <pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - defaultRequest:
      cpu: 100m
    type: Container
</code></pre> <p>So, if you want to change this, you can either edit it:</p> <pre><code>kubectl edit limitrange limits
</code></pre> <p>Or you can delete it altogether:</p> <pre><code>kubectl delete limitrange limits
</code></pre> <p><em>Note: the policies in the LimitRange objects are enforced by the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#limitranger" rel="nofollow noreferrer">LimitRanger admission controller</a>, which is enabled by default in GKE.</em></p>
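<p>One caveat worth noting: LimitRange defaults are applied at pod admission time, so deleting the object only affects pods created afterwards; existing pods keep their requests. You can inspect the current object first with:</p> <pre><code>kubectl get limitrange limits -n default -o yaml
</code></pre>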
<p>I've deployed a stateful MongoDB setup in my k8s cluster. Every time I scale up a new pod, I need to add it from the MongoDB console using the rs.add() command. Is there any way I can orchestrate this? Also, how can I expose my MongoDB service outside my k8s cluster? Changing the service type to NodePort didn't work for me. Please help.</p> <p>Below is the StatefulSet yaml file I used to deploy MongoDB.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.4
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - 0.0.0.0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
</code></pre>
<p>As you are using Kubernetes, which is a container orchestration platform, you can always scale your <code>deployment/statefulset</code> using <code>$ kubectl scale deployment [deployment_name] --replicas=X</code> or <code>$ kubectl scale statefulset [statefulset-name] --replicas=X</code>, where X is how many pods in total you want to have in the deployment. Pods will then be created automatically based on your deployment settings. If you don't want to scale manually, you should read about Kubernetes autoscaling - <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">HPA</a>.</p> <p>To expose an application outside Kubernetes you have to use a <code>Service</code>. More information can be found <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">here</a>. I am not sure if <code>NodePort</code> is right in this scenario; you can check the <a href="https://medium.com/@pczarkowski/kubernetes-services-exposed-86d45c994521" rel="nofollow noreferrer">ServiceType</a> description.</p> <p>However, I am not very familiar with MongoDB on Kubernetes, but maybe these tutorials will help you: <a href="https://medium.com/faun/scaling-mongodb-on-kubernetes-32e446c16b82" rel="nofollow noreferrer">Scaling MongoDB on Kubernetes</a>, <a href="https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes" rel="nofollow noreferrer">Running MongoDB as a Microservice with Docker and Kubernetes</a>, <a href="https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets/" rel="nofollow noreferrer">Running MongoDB on Kubernetes with StatefulSets</a>.</p> <p>Hope it will help.</p>
<p>I am trying to push an update to my codebase for my docker image. I have the docker image on Kubernetes on GCP and I followed the way it was mentioned in the document <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="noreferrer">here</a>. I even pushed the image with the :v2 tag to the container registry, and that image is visible in the console as well. But now when I try to run:</p> <pre><code>kubectl set image deployment/clustername myImageName=gcr.io/${PROJECT_ID}/myImageName:v2
</code></pre> <p>it gives me the following error:</p> <pre><code>error: unable to find container myImageName
</code></pre> <p>I know that the image is there because I built it with</p> <pre><code>docker build -t gcr.io/${PROJECT_ID}/myImageName:v2 .
</code></pre> <p>I have also fixed the issue of <code>Error from server (NotFound): deployments.extensions</code>. I am stuck at this very moment. Can anyone throw some light on it?</p>
<p>The error message specifies that your deployment doesn't have a container named <code>myImageName</code>.</p> <p>The syntax for <code>kubectl set image</code> is:</p> <pre><code>kubectl set image deployment-name container-name=image-name </code></pre> <p><code>container-name</code> must be the name of a container you specified inside your deployment under <code>spec.template.spec.containers.name</code>.</p>
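<p>To list the container names actually defined in the deployment (using the deployment name from the question), you can run something like:</p> <pre><code>kubectl get deployment clustername \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
</code></pre> <p>Whatever name that prints is what goes on the left-hand side of the <code>=</code> in <code>kubectl set image</code>.</p>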
<p>During a Kubernetes upgrade from 1.14 to 1.15 with kubespray, my team had a blocking issue with the following message during the "Upgrade first master" step:</p> <pre><code>[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced:
[failed to renew certificates for component "kube-apiserver": failed to renew certificate apiserver-kubelet-client: unable to sign certificate: must specify at least one ExtKeyUsage,
rename /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-09-19-09-06-27/kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml: no such file or directory]
</code></pre> <p>Trying to isolate the task and manually running the kubeadm command line leads to the same error message:</p> <pre><code># /usr/local/bin/kubeadm upgrade apply -y v1.15.3 --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=all --allow-experimental-upgrades --allow-release-candidate-upgrades --etcd-upgrade=false -v 6
</code></pre> <p>As does trying to just manually renew the certificate:</p> <pre><code>/etc/kubernetes/pki# kubeadm alpha certs renew apiserver-kubelet-client -v 9
I0919 14:42:11.515503   18597 initconfiguration.go:105] detected and using CRI socket: /var/run/dockershim.sock
I0919 14:42:11.515897   18597 interface.go:384] Looking for default routes with IPv4 addresses
I0919 14:42:11.515916   18597 interface.go:389] Default route transits interface "eth0"
I0919 14:42:11.516284   18597 interface.go:196] Interface eth0 is up
(...)
I0919 14:42:11.516835   18597 feature_gate.go:216] feature gates: &amp;{map[]}
failed to renew certificate apiserver-kubelet-client: unable to sign certificate: must specify at least one ExtKeyUsage
</code></pre> <p>The solution was found eventually and posted below.</p>
<p>The issue comes from kubeadm, which reuses the old certificates when it has to renew them. But when these initial certificates are too old or were generated manually, they may not include some mandatory fields that need to be there.</p> <p>In the error message, <code>ExtKeyUsage</code> refers to the <code>X509v3 Extended Key Usage</code> field.</p> <p>You can check that by looking at your certificates. Two certificates are involved: <code>apiserver-kubelet-client.crt</code> and <code>front-proxy-client.crt</code>.</p> <p>They are located on the master hosts at <code>/etc/kubernetes/pki</code>.</p> <p>You can check them with</p> <p><code># openssl x509 -in apiserver-kubelet-client.crt -text -noout</code></p> <p>If they do not contain the following (near the end), then kubeadm will be totally unable to renew the certificate:</p> <pre><code>(...)
X509v3 extensions:
    X509v3 Key Usage: critical
        Digital Signature, Key Encipherment
    X509v3 Extended Key Usage:
        TLS Web Client Authentication
</code></pre> <hr> <p>TL;DR;</p> <p>The solution is simply to create brand new certificates with the following procedure:</p> <pre><code>######### Backup your certificates (just in case)
master01:/etc/kubernetes/pki# cp -a /etc/kubernetes/pki /root/backup_cert/

######### Delete incomplete certificates
master01:/etc/kubernetes/pki# rm apiserver-kubelet-client.*
master01:/etc/kubernetes/pki# rm front-proxy-client.*

######### Then recreate them
master01:/etc/kubernetes/pki# kubeadm init phase certs apiserver-kubelet-client
master01:/etc/kubernetes/pki# kubeadm init phase certs front-proxy-client
</code></pre> <p>You can now restart your upgrade procedure, which should succeed. (Note: if your cluster was left in a state where your first master has a SchedulingDisabled status, do not forget to uncordon the host, because the kubespray playbook won't fix that.)</p>
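<p>After recreating them, you can verify that the new certificates carry the required extension before retrying the upgrade:</p> <pre><code>openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -text -noout \
  | grep -A1 'X509v3 Extended Key Usage'
</code></pre>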
<p>I have a private registry, and when I try to pull images I get a DNS lookup error. This is after following the steps in the Kubernetes <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">documentation</a>.</p> <p>I just wanted to know what steps Kubernetes performs to pull images from a container registry. Will the Kubernetes master pull the images and deploy them to the nodes, or is pulling the required images also scheduled onto the nodes?</p>
<p>No, the master won't access your registry. The nodes pull the images when they need them (i.e. when a pod that uses the image is scheduled on them).</p> <p>In your case, you must check that the registry is reachable from the nodes (you can try a manual <code>docker pull</code> there).</p> <p>If your registry needs some authentication, you will also have to add credentials to your namespace's secrets (but your "DNS lookup error" suggests that is not your actual issue).</p>
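<p>If the registry does turn out to require authentication, the usual setup is a sketch like this (registry address and credentials are placeholders):</p> <pre><code># Create the credential secret in the pod's namespace
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=USER \
  --docker-password=PASS

# Then reference it from the pod spec:
#   spec:
#     imagePullSecrets:
#     - name: regcred
</code></pre>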
<p>Is it possible to use a Spring Cloud OAuth2 server with a Kubernetes API gateway ingress?</p> <p>I have used it with Zuul to authenticate the user before making a call. Can I do something similar with an ingress?</p> <p><strong>Edit 1:</strong></p> <p>To explain more clearly what I am trying to achieve:</p> <p>I am using the token-based OAuth2 implementation provided by Spring Cloud.</p> <ol> <li>The OAuth server runs as one of the services behind Zuul.</li> <li>Zuul has routes mapped for the OAuth server and the resource server.</li> <li>The client calls the auth server via Zuul and gets the token.</li> <li>The client calls the resource server via Zuul, passing the token.</li> <li>Zuul is configured to validate the token before making a call to the resource server.</li> </ol> <p>This way we can stop any downstream traffic that lacks a valid token.</p> <p>Can we do token validation in an ingress, with the auth server running within the cluster?</p>
<p>I have not used Spring Cloud OAuth 2, but as OAuth is a standard I believe you can set it up. If you are using Nginx Ingress as the ingress controller, you can specify an external OAuth provider (as OAuth generally has the same flow) like this on your ingress:</p> <pre><code>...
metadata:
  name: application
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
...
</code></pre> <p>You can find more information <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/auth/oauth-external-auth" rel="nofollow noreferrer">here</a>, with an example of using GitHub as an OAuth provider.</p>
<p>I've deployed a stateful MongoDB setup in my k8s cluster. Every time I scale up a new pod, I need to add it from the MongoDB console using the rs.add() command. Is there any way I can orchestrate this? Also, how can I expose my MongoDB service outside my k8s cluster? Changing the service type to NodePort didn't work for me. Please help.</p> <p>Below is the StatefulSet yaml file I used to deploy MongoDB.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.4
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip"
        - 0.0.0.0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
</code></pre>
<p>As @PjoterS suggests, you can scale the MongoDB replicas (pods) inside Kubernetes using HPA.</p> <p>But with that you also have to take care of the volume mounting, and of data latency between replicas.</p> <p>I would suggest first checking the native cluster-scaling options provided by MongoDB itself and configuring those; for example, you can use an operator for MongoDB such as: <code>https://docs.mongodb.com/kubernetes-operator/master/tutorial/install-k8s-operator/</code></p> <p>Otherwise, if your current config follows a native cluster setup and supports scaling replicas and copying data between replicas, you can go for HPA.</p> <p>You can also have a look at this: <code>https://medium.com/faun/scaling-mongodb-on-kubernetes-32e446c16b82</code></p>
<p>I want to get an alert when the recent job of a cronjob fails. The expr <code>kube_job_status_failed{job_name=~"cronjobname.*"}==1</code> works for most of time. But if a job fails and it's kept, even the next job succeeds, I still get an alert because there are two records in prometheus, one of which is the failure record, the other one is the success record.</p> <p>I found I can get the latest job timestamp from <code>kube_cronjob_status_last_schedule_time{cronjob="cronjobname"}</code>, then use <code>kube_job_status_failed{job_name="cronjobname-TIMESTAMP"}</code> to query the last job status.</p> <p>I wonder whether we have a way in one query to concatenate the <code>jobname</code> from the result of the first query and filter in the second? like <code>kube_job_status_failed{job_name=string_concatenate("cronjobname-", kube_cronjob_status_last_schedule_time{cronjob="cronjobname"})} </code></p>
<p>With PromQL, you won't be able to do it the way you describe. Moreover, I am not sure the last schedule time is always the same as the job start time; there could be slowness or a reschedule somewhere, for example.</p> <p>You can follow the approach indicated in <a href="https://medium.com/@tristan_96324/prometheus-k8s-cronjob-alerts-94bee7b90511" rel="nofollow noreferrer">this article</a>. An alternative would be using the <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/job-metrics.md" rel="nofollow noreferrer">job metrics</a> to determine:</p> <p>the timestamp of the last failed job per cronjob</p> <pre><code>- record: job_cronjob:kube_job_status_start_time:last_failed
  expr: max((kube_job_status_start_time AND kube_job_status_failed == 1) * ON(job, namespace) GROUP_LEFT(label_cronjob) kube_job_labels{label_cronjob!=""}) BY (label_cronjob)
</code></pre> <p>the timestamp of the last successful job per cronjob</p> <pre><code>- record: job_cronjob:kube_job_status_start_time:last_succeeded
  expr: max((kube_job_status_start_time AND kube_job_status_succeeded == 1) * ON(job, namespace) GROUP_LEFT(label_cronjob) kube_job_labels{label_cronjob!=""}) BY (label_cronjob)
</code></pre> <p>And alert if the failed one is more recent than the successful one:</p> <pre><code>- alert: CronJobStatusFailed
  expr: job_cronjob:kube_job_status_start_time:last_failed &gt; job_cronjob:kube_job_status_start_time:last_succeeded
  for: 1m
  annotations:
    description: '{{ $labels.label_cronjob }} last run has failed.'
</code></pre>
<p>I created a managed Postgres database in Google Cloud. This database got an external IP address. In a second step I created a Kubernetes cluster. From k8s I want to access this external database. Therefore I created a service without a label selector but with an external endpoint pointing to my Postgres database.</p> <p>I had to allow the Postgres database to accept access from the (three) cluster nodes. I configured that in the Google Cloud Console (SQL).</p> <p>My first question: Is this the right way to integrate an external database? Especially this IP access configuration?</p> <p>To test my connection against the database, my first try was to establish a port-forwarding from my local host. My idea was to access this database via my database IDE (DataGrip). However, when trying to establish a port forwarding I get the following error:</p> <pre><code>error: cannot attach to *v1.Service: invalid service 'postgres-bla-bla': Service is defined without a selector
</code></pre> <p>Second question: How to access this service locally?</p> <p>In a third step I created a pod with the 'partlab/ubuntu-postgresql' docker image. I did a 'kubectl exec -it ...' and could access my Postgres database with</p> <pre><code>psql -h postgres-bla-bla ...
</code></pre> <p>So basically it works. But I'm sure my solution has some flaws. What can I do better? How to fix the problem from question 2?</p>
<p>The problem was discussed <a href="https://github.com/txn2/kubefwd/issues/35" rel="noreferrer">here</a> and there is a solution to set up port forwarding to a service without selector/pod (e.g. ExternalName service) by deploying a proxy pod inside K8s:</p> <pre><code>kubectl -n production run mysql-tunnel-$USER -it --image=alpine/socat --tty --rm --expose=true --port=3306 \
  tcp-listen:3306,fork,reuseaddr tcp-connect:your-internal-mysql-server:3306

kubectl -n production port-forward svc/mysql-tunnel-$USER 3310:3306
</code></pre> <p>In the example above the MySQL server at <code>your-internal-mysql-server:3306</code> will be available on <code>localhost:3310</code> on your machine.</p>
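<p>For the Postgres case in the question, the selector-less Service itself is a standard pattern: pair it with a manually maintained Endpoints object of the same name. A sketch (service name taken from the question's error message; the IP is a placeholder for the managed database's external address):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres-bla-bla
spec:
  ports:
  - port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-bla-bla   # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.5           # placeholder: external IP of the managed Postgres instance
  ports:
  - port: 5432
</code></pre> <p>Port-forwarding to such a Service still fails for the reason above, so the socat tunnel trick applies unchanged, just with port 5432.</p>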
<p>Using helm I know you can disable a dependency in <strong>requirements.yaml</strong> using a <strong>condition</strong>.</p> <p>However, I have a dependency copied into the <strong>charts/</strong> directory and expanded because I need more control over it.</p> <p>Is there any way to conditionally include this dependency, given that it's not included via <strong>requirements.yaml</strong> but rather through expanding it in the charts/ folder?</p>
<p>Yes, according to <a href="https://github.com/helm/helm/blob/master/docs/helm/helm_dependency.md" rel="nofollow noreferrer">helm dependency docs</a>:</p> <blockquote> <p>Starting from <strong>2.2.0</strong>, repository can be defined as the path to the directory of the dependency charts stored locally. The path should start with a prefix of "file://". For example,</p> <pre><code># requirements.yaml
dependencies:
- name: nginx
  version: "1.2.3"
  repository: "file://../dependency_chart/nginx"
</code></pre> <p>If the dependency chart is retrieved locally, it is not required to have the repository added to helm by "helm repo add". Version matching is also supported for this case.</p> </blockquote>
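<p>Combining this with the <strong>condition</strong> mechanism mentioned in the question, a sketch could look like the following (the <code>nginx.enabled</code> flag is made up — any values path works):</p> <pre><code># requirements.yaml
dependencies:
- name: nginx
  version: "1.2.3"
  repository: "file://../dependency_chart/nginx"
  condition: nginx.enabled

# values.yaml
nginx:
  enabled: false
</code></pre> <p>So instead of keeping the expanded copy directly in <strong>charts/</strong>, you reference its directory via <code>file://</code> in <strong>requirements.yaml</strong> and let the condition decide whether it gets included.</p>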
<p>I have a number of deployment objects in my namespace. I'd like to run <code>kubectl rollout status</code> for all of them at the same time, i.e. I'd like the command to exit only when all rollouts are complete, or when an error has occurred. How can I achieve that?</p> <p>The only thing I've come up with so far is:</p> <pre><code>kubectl get deploy -o name | xargs -n1 -t kubectl rollout status
</code></pre> <p>But I hope there's something smarter.</p>
<p>You can use this simple check:</p> <pre><code>#!/bin/bash
deploy=$(kubectl get deploy -o name)
for i in $deploy; do
  kubectl rollout status $i -w --timeout=30s
done
</code></pre> <p>You can also build a more advanced script using clues in this <a href="https://stackoverflow.com/questions/37448357/ensure-kubernetes-deployment-has-completed-and-all-pods-are-updated-and-availabl/37472801#37472801">post</a>.</p>
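<p>A variant of the same idea, sketched below, keeps checking the remaining deployments even after one of them fails and still exits non-zero at the end, which is handy in CI (same <code>kubectl</code> flags as above):</p> <pre><code>#!/bin/sh
# Wait for every deployment's rollout; remember failures instead of
# aborting on the first one, and report an overall non-zero status.
wait_for_rollouts() {
  failed=0
  for d in $(kubectl get deploy -o name); do
    kubectl rollout status "$d" -w --timeout=30s || failed=1
  done
  return "$failed"
}
</code></pre> <p>Call <code>wait_for_rollouts || exit 1</code> as the last step of the deploy script.</p>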
<p>I'm working with Google Cloud Platform GKE, and using kubernetes now. I am trying to use OpenTSDB through Google Bigtable and it's QA time.</p> <p>But I've hit an unexpected problem: when I put some data, it is not shown by queries, and it never shows up even after a long time.</p> <p>I used this image and tutorial (version and env are the same): <a href="https://github.com/GoogleCloudPlatform/opentsdb-bigtable" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/opentsdb-bigtable</a></p> <p>Here are my steps.</p> <p>1) Put data with an HTTP POST request, e.g. <code>http://localhost:4242/api/put?details=true</code>, with the JSON object</p> <pre><code>{
  "metrics": "foo.bar",
  "timestamp": "1483401600000",
  "value": 1.0,
  "tags": {
    "foo": "bar"
  }
}
</code></pre> <p>2) Confirm log, hbase console</p> <p>3) Use the GET API to query it, e.g. <code>http://localhost:4242/api/query?start=1483401600000&amp;ms=true&amp;m=sum:foo.bar</code></p> <p>4) Cannot see anything, just an empty vector</p>
<p><a href="https://cloud.google.com/bigtable/docs/integrations#opentsdb" rel="nofollow noreferrer">https://cloud.google.com/bigtable/docs/integrations#opentsdb</a> has links to more documentation on OpenTSDB and Cloud Bigtable integration, including <a href="https://cloud.google.com/solutions/opentsdb-cloud-platform" rel="nofollow noreferrer">Using OpenTSDB to Monitor Time-Series Data</a>. You can try updating to the latest version of <a href="https://github.com/OpenTSDB/asyncbigtable" rel="nofollow noreferrer">AsyncBigtable</a> and see if it works.</p>
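<p>One more thing that may be worth double-checking, independent of the library versions: the OpenTSDB HTTP <code>/api/put</code> endpoint documents the field name as <code>metric</code> (singular), while the payload in the question uses <code>metrics</code>, and <code>timestamp</code> is documented as a number rather than a string. A payload matching the documented shape would be:</p> <pre><code>{
  "metric": "foo.bar",
  "timestamp": 1483401600000,
  "value": 1.0,
  "tags": {
    "foo": "bar"
  }
}
</code></pre>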
<p>I have configured amazon certificate manager, ALB Ingress Controller and a domain name for my application. I can access my application through port 80 and port 443 (all certificates work just fine). However I would like to redirect all incoming traffic from HTTP to HTTPS automatically, so that people who type the domain name by itself are redirected to HTTPS. I have followed <a href="https://kubernetes-sigs.github.io/aws-alb-ingress-controller/" rel="noreferrer">this page</a> and <a href="https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/" rel="noreferrer">this one</a> but I cannot make it work.</p> <p>this is my ingress.yaml file:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: metabase
  namespace: bigdata
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:***:certificate/***
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: metabase
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: metabase
          servicePort: 3000
</code></pre> <p>this is my service:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: metabase
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:****:certificate/****
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
  namespace: bigdata
  labels:
    app: metabase
spec:
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: http-server
  - name: http
    protocol: TCP
    port: 80
    targetPort: http-server
  selector:
    app: metabase
  type: LoadBalancer
</code></pre> <p>and this is my deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: metabase-deployment
  namespace: bigdata
  labels:
    app: metabase
spec:
  replicas: 2
  selector:
    matchLabels:
      app: metabase
  template:
    metadata:
      labels:
        app: metabase
    spec:
      containers:
      - name: metabase
        image: metabase/metabase
        ports:
        - containerPort: 3000
          name: http-server
        resources:
          limits:
            cpu: "1"
            memory: "2Gi"
</code></pre> <p>thanks for your support! :-)</p>
<p>I was able to make it work!! Basically I modified the ingress.yaml and service.yaml files.</p> <p>ingress.yaml looks like this:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: metabase
  namespace: bigdata
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:***:certificate/****
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group: metabase  # name of my app
  labels:
    app: metabase
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: metabase
          servicePort: 443
</code></pre> <p>and my service looks like this:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: metabase
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:***:certificate/***
  namespace: bigdata
  labels:
    app: metabase
spec:
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: http-server
  - name: http
    protocol: TCP
    port: 80
    targetPort: http-server
  selector:
    app: metabase
  type: LoadBalancer
</code></pre>
<p>After setting up TLS successfully, I'm not able to reach my web application through the ingress, since it always redirects to the 404 default backend.</p> <p>This is my ingress configuration:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: 'letsencrypt-prod'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
  - hosts:
    - xxxx.com
    - www.xxxx.com
    secretName: xxxx-com
  rules:
  - host: xxxx.com
  - http:
      paths:
      - path: /(.*)
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 3000
  - host: www.xxxx.com
  - http:
      paths:
      - path: /(.*)
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 3000
</code></pre> <p>Removing the TLS options makes it work fine (though it is not using TLS anymore):</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /(.*)
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 3000
</code></pre>
<p>Nevermind, after many hours I found that my configuration had an error:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: 'letsencrypt-prod'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
  - hosts:
    - xxxx.com
    - www.xxxx.com
    secretName: xxxx-com
  rules:
  - host: xxxx.com
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 3000
  - host: www.xxxx.com
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: web-cluster-ip-service
          servicePort: 3000
</code></pre> <p>Hyphens should be removed before <code>http</code> in each host block, so that <code>http</code> is part of the same list item as <code>host</code> instead of a separate one.</p> <p>That was... hard to find. A warning message when applying a possibly weird/wrong configuration would not have hurt anyone.</p>
<p>I am setting up a container through Google Cloud Platform (GCP) Kubernetes Engine. I have a requirement to mount multiple volumes, as the containers are created that way. These volumes have to be persistent and hence I went with an NFS approach. I have a VM where the NFS service is running, and it exports a couple of directories.</p> <p>I am giving sample yaml files below.</p> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-branch
  labels:
    component: myapp-branch
spec:
  template:
    metadata:
      labels:
        component: myapp-branch
    spec:
      imagePullSecrets:
        - name: myprivatekey
      containers:
        - name: myapp-branch
          image: mydockerrepo/myapp/webapp:6.6
          command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 100 ; done"]
          env:
            - name: myapp_UID
              value: "1011"
            - name: myapp_GID
              value: "1011"
            - name: myapp_USER
              value: "myapp_branch"
            - name: myapp_XMS_G
              value: "1"
            - name: myapp_XMX_G
              value: "6"
          volumeMounts:
            - mountPath: /mypath1/path1
              name: pvstorestorage
            - mountPath: /mypath2/path2
              name: mykeys
      volumes:
        - name: pvstorestorage
          persistentVolumeClaim:
            claimName: standalone
        - name: mykeys
          persistentVolumeClaim:
            claimName: conf
</code></pre> <p><strong>PVAndPVC.yaml</strong></p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: standalone
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: conf
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conf
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
</code></pre> <p>After applying them, I see that both volume mounts of the container (/mypath1/path1 and /mypath2/path2) are mounted to the same NFS export (/exports/path2, the second one). This is happening with persistentVolumeClaim; I tried EmptyDir and it works fine. If anyone has tried this approach and knows the solution, it would be really helpful.</p>
<p>You must add a rule in your PVC (PersistentVolumeClaim) definitions to make them match their correct respective PV (PersistentVolume).</p> <p>Having the same name is not enough.</p> <p>Change your PV and PVC definitions into something like (untested):</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: standalone
  labels:
    type: standalone
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: conf
  labels:
    type: conf
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.2.1.6
    path: "/exports/path2"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: standalone
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conf
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: conf
</code></pre> <p>(typically, I added a metadata.labels.type in PVs and a spec.selector.matchLabels in PVCs)</p> <p>Also, use <code>kubectl get pv</code> and <code>kubectl get pvc</code> to see how it is working and ease debugging</p>
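<p>As a side note, an alternative to the label selector approach is to pin each claim directly to its volume by name with <code>spec.volumeName</code> (equally untested, reusing the names above):</p> <pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone
spec:
  volumeName: standalone   # bind explicitly to the PV named "standalone"
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
</code></pre> <p>With an explicit <code>volumeName</code>, a claim can only ever bind to that one PersistentVolume, so the two claims cannot end up on the same NFS export.</p>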
<p>I am trying to pass an environment variable to a kubernetes container.</p> <p><strong>What have I done so far?</strong></p> <ul> <li><p>Create a deployment</p> <p><code>kubectl create deployment foo --image=foo:v1</code></p></li> <li><p>Create a NodePort service and expose the port</p> <p><code>kubectl expose deployment/foo --type=NodePort --port=9000</code></p></li> <li><p>See the pods</p> <p><code>kubectl get pods</code></p></li> <li><p>Dump the configurations (so as to add the environment variable)</p> <p><code>kubectl get deployments -o yaml &gt; dev/deployment.yaml</code></p> <p><code>kubectl get svc -o yaml &gt; dev/services.yaml</code></p> <p><code>kubectl get pods -o yaml &gt; dev/pods.yaml</code></p></li> <li><p>Add the env variable to the pods:</p> <pre><code>env:
- name: FOO_KEY
  value: "Hellooooo"
</code></pre></li> <li><p>Delete the svc, pods, deployments</p> <p><code>kubectl delete -f dev/ --recursive</code></p></li> <li><p>Apply the configuration</p> <p><code>kubectl apply -f dev/ --recursive</code></p></li> <li><p>Verify env parameters</p> <p><code>kubectl describe pods</code></p></li> </ul> <p><strong>Something weird</strong></p> <p>If I manually changed the meta information of the pod yaml and hard-coded the name of the pod, it got the env variable. However, this time two pods come up: one with the hard-coded name and one with a hash appended. For example, if the name I hardcoded was "foo", two pods, namely foo and foo-12314faf (example), would appear in "kubectl get pods". Can you explain why?</p> <p><strong>Question</strong></p> <p>Why does the verification step not show the environment variable?</p>
<p>As the issue was resolved in the comment section.</p> <p>If you want to set an <strong>env</strong> on pods, I would suggest you use the <strong>set</strong> subcommand.</p> <p><code>kubectl set env --help</code> will provide more detail, such as listing the env vars and creating new ones:</p> <pre><code>Examples:
  # Update deployment 'registry' with a new environment variable
  kubectl set env deployment/registry STORAGE_DIR=/local

  # List the environment variables defined on a deployments 'sample-build'
  kubectl set env deployment/sample-build --list
</code></pre> <p>A Deployment enables declarative updates for Pods and ReplicaSets. Pods are not typically launched directly on a cluster. Instead, pods are usually managed by a ReplicaSet, which is managed by a Deployment.</p> <p>The following thread discusses <a href="https://stackoverflow.com/questions/41325087/what-is-the-difference-between-a-pod-and-a-deployment">what-is-the-difference-between-a-pod-and-a-deployment</a></p>
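<p>If you manage the deployment declaratively instead, the same variable goes under the container spec in the deployment manifest — a minimal sketch reusing the image and variable from the question:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: foo
        image: foo:v1
        env:
        - name: FOO_KEY
          value: "Hellooooo"
</code></pre> <p>Applying this manifest replaces the pods, so <code>kubectl describe pods</code> afterwards shows the variable.</p>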
<p>I want to set up the elastic stack (elasticsearch, logstash, beats and kibana) for monitoring my kubernetes cluster, which is running on on-prem bare metal. I need some recommendations on the following 2 approaches, i.e. which one would be more robust, fault-tolerant and production grade. Let's say I have a K8 cluster named K8-abc.</p> <p>Approach 1 - Would it be good to set up the elastic stack outside the kubernetes cluster?</p> <p>In this approach, all the logs from pods running in the kube-system namespace and user-defined namespaces would be fetched by beats (running on K8-abc) and put into the ES cluster, which is configured on Linux bare metal, via Logstash (which is also running on VMs). And for fetching the kubernetes node logs, the beats running on the respective VMs (which participate in forming K8-abc) would fetch the logs and put them into the ES cluster configured on the VMs. The thing to note here is that the VMs used for forming the ES cluster are not part of K8-abc.</p> <p>Approach 2 - Would it be good to set up the elastic stack on the kubernetes cluster K8-abc itself?</p> <p>In this approach, all the logs from pods running in the kube-system namespace and user-defined namespaces would be sent to the Elasticsearch cluster configured on K8-abc via logstash and beats (both running on K8-abc). For fetching the K8-abc node logs, the beats running on the VMs (which participate in forming K8-abc) would put the logs into the ES running on K8-abc via logstash, which is running on K8-abc.</p> <p>Can someone help me in evaluating the pros and cons of the two approaches mentioned above? It would be helpful even if just the relevant links to blogs and case studies are provided.</p>
<p>I would be more inclined to the <strong>second solution</strong>. It has many advantages over the first one, although it may seem more complex when it comes to the initial setup. You can actually ask a similar question when it comes to migrating any other type of workload to <strong>Kubernetes</strong>. It has many advantages over VMs. To name just a few:</p> <ol> <li><strong><a href="https://www.stratoscale.com/blog/kubernetes/auto-healing-containers-kubernetes/" rel="nofollow noreferrer">self-healing cluster</a></strong>,</li> <li><strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">service discovery</a></strong> and integrated <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer"><strong>load balancing</strong></a>,</li> <li>Such a solution is much easier to scale (<a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer"><strong>HPA</strong></a>) in comparison with VMs,</li> <li>Storage orchestration.
<strong>Kubernetes</strong> allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">many more</a>, including the <strong><a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Volume Provisioning</a></strong> mechanism.</li> </ol> <p>All the above points could easily be applied to any other workload and may be seen as <strong>Kubernetes</strong> advantages in general, so let's look at why to use it for implementing the <strong>Elastic Stack</strong>:</p> <ol start="5"> <li>It looks like <strong>Elastic</strong> is actively promoting the use of <strong>Kubernetes</strong> on their <a href="https://www.elastic.co/elasticsearch-kubernetes" rel="nofollow noreferrer">website</a>. See also <a href="https://www.elastic.co/blog/alpha-helm-charts-for-elasticsearch-kibana-and-cncf-membership" rel="nofollow noreferrer">this</a> article.</li> <li>They also provide an official <strong><a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch" rel="nofollow noreferrer">elasticsearch helm chart</a></strong>, so it is already quite well supported by <strong>Elastic</strong>.</li> </ol> <p>Probably there are many other reasons in favour of the <strong>Kubernetes</strong> solution I didn't mention here. <a href="https://medium.com/faun/https-medium-com-thakur-vaibhav23-ha-es-k8s-7e655c1b7b61" rel="nofollow noreferrer">Here</a> you can find a hands-on article about setting up <strong>Highly Available and Scalable Elasticsearch on Kubernetes</strong>.</p>
<p>I created a PV and a PVC on docker-desktop and even after removing the pv and pvc the file still remains. When I re-create it, it attaches the same mysql database to new pods. How do you manually delete the files created by the hostPath? I suppose one way is to just reset Kubernetes in the preferences but there has to be another less nuclear option.</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim2
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
</code></pre>
<p>According to the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycle" rel="noreferrer">docs</a>, "...<code>Recycle</code> reclaim policy performs a basic scrub (<code>rm -rf /thevolume/*</code>) on the volume and makes it available again for a new claim". <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy" rel="noreferrer">Also</a>, "...Currently, only NFS and HostPath support recycling". So, try changing</p> <pre><code>persistentVolumeReclaimPolicy: Delete </code></pre> <p>to</p> <pre><code>persistentVolumeReclaimPolicy: Recycle </code></pre>
<p>As part of our CI pipeline, we have a deployment script for a number of web services that looks something like this:</p> <pre><code>kubectl apply -f deployment1.yml
kubectl apply -f deployment2.yml
</code></pre> <p>The problem is that the next stage of the pipeline sometimes fails because the services are not ready by the time it starts.</p> <p>I would like to add a line in the script that says something like:</p> <pre><code>Wait until all deployments are in the Ready state, or fail if more than 30 seconds has elapsed.
</code></pre> <p>I thought that the following would work but unfortunately it seems that the timeout flag is not available:</p> <pre><code>kubectl rollout status deployment deployment1 --timeout=30s
kubectl rollout status deployment deployment2 --timeout=30s
</code></pre> <p>I don't want to run "kubectl rollout status" without a timeout as that will cause our build to hang if there is a failure in one of the deployments.</p>
<p>I found a solution that works well. Set the property <code>.spec.progressDeadlineSeconds</code> to a value such as 30 (the default is 600, i.e. ten minutes) and <code>kubectl rollout status deployment</code> will wait for this amount of time before displaying an error message and exiting with a non-zero exit code:</p> <pre><code>$ kubectl rollout status deploy/nginx
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
error: deployment "nginx" exceeded its progress deadline
</code></pre> <p>Documentation is here: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#failed-deployment" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#failed-deployment</a></p>
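<p>For reference, the field sits at the top level of the deployment spec. A minimal sketch (names are placeholders matching the question's <code>deployment1.yml</code>):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
spec:
  progressDeadlineSeconds: 30   # rollout status gives up after 30s without progress
  replicas: 1
  selector:
    matchLabels:
      app: deployment1
  template:
    metadata:
      labels:
        app: deployment1
    spec:
      containers:
      - name: app
        image: nginx
</code></pre>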
<p>I have followed <a href="https://darienmt.com/kubernetes/2019/03/31/kubernetes-at-home.html" rel="nofollow noreferrer">this web site</a> to configure kubernetes on an AWS Ubuntu (18.04) EC2 instance. I followed the same steps as in the above web page, but after applying the network overlay, the coredns pod's state does not change to running. <a href="https://i.stack.imgur.com/jDHoR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jDHoR.png" alt="enter image description here"></a></p> <p>My installed kubernetes versions are as below:</p> <ul> <li>kubeadm - GitVersion:"v1.16.0"</li> <li>kubectl - GitVersion:"v1.16.0"</li> <li>kubelet - Kubernetes v1.16.0</li> </ul> <p>To resolve this issue I have tried this answer on <a href="https://stackoverflow.com/questions/44086826/kubeadm-master-node-never-ready">stackoverflow</a>.</p> <p>How can I resolve this issue?</p> <p>The output of <code>kubectl get nodes</code>:</p> <p><a href="https://i.stack.imgur.com/xWA7E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xWA7E.png" alt="enter image description here"></a></p> <p><strong>Output of</strong> <code>kubectl describe pod coredns-644d7b6d9-nv9mj -n kube-system</code>:</p> <pre><code>ubuntu@master:~$ sudo kubectl describe pod coredns-644d7b6d9-nv9mj -n kube-system
Name:                 coredns-644d7b6d9-nv9mj
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 &lt;none&gt;
Labels:               k8s-app=kube-dns
                      pod-template-hash=644d7b6d9
Annotations:          &lt;none&gt;
Status:               Pending
IP:
IPs:                  &lt;none&gt;
Controlled By:        ReplicaSet/coredns-644d7b6d9
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.6.2
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  &lt;none&gt;
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-54rgm (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-54rgm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-54rgm
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  &lt;unknown&gt;  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  &lt;unknown&gt;  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate
</code></pre>
<p>So basically there are some problems with flannel working on kubernetes v1.16 and you need to add <code>"cniVersion": "0.2.0"</code> to the <a href="https://github.com/coreos/flannel/blob/d893bcbfe6b04791054aea6c7569dea4080cc289/Documentation/kube-flannel.yml#L108" rel="noreferrer">kube-flannel.yml</a> file:</p> <pre><code>---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unsed in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.2.0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
namespace: kube-system labels: tier: node app: flannel spec: selector: matchLabels: app: flannel template: metadata: labels: tier: node app: flannel spec: hostNetwork: true nodeSelector: beta.kubernetes.io/arch: s390x tolerations: - operator: Exists effect: NoSchedule serviceAccountName: flannel initContainers: - name: install-cni image: quay.io/coreos/flannel:v0.11.0-s390x command: - cp args: - -f - /etc/kube-flannel/cni-conf.json - /etc/cni/net.d/10-flannel.conflist volumeMounts: - name: cni mountPath: /etc/cni/net.d - name: flannel-cfg mountPath: /etc/kube-flannel/ containers: - name: kube-flannel image: quay.io/coreos/flannel:v0.11.0-s390x command: - /opt/bin/flanneld args: - --ip-masq - --kube-subnet-mgr resources: requests: cpu: "100m" memory: "50Mi" limits: cpu: "100m" memory: "50Mi" securityContext: privileged: false capabilities: add: ["NET_ADMIN"] env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace volumeMounts: - name: run mountPath: /run/flannel - name: flannel-cfg mountPath: /etc/kube-flannel/ volumes: - name: run hostPath: path: /run/flannel - name: cni hostPath: path: /etc/cni/net.d - name: flannel-cfg configMap: name: kube-flannel-cfg </code></pre> <p>I added this there so everything you need to do is use:</p> <pre><code>sudo kubeadm reset sudo kubeadm init --pod-network-cidr=10.244.0.0/16 mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config sudo sysctl net.bridge.bridge-nf-call-iptables=1 </code></pre> <p>use nano or vi to create a kube-flannel.yaml file and copy above yaml to it.</p> <pre><code>sudo kubectl apply -f kube-flannel.yaml </code></pre>
<p>I'm new to GKE and I'm testing some scaling features. I started with a simple example: 1 pod inside 1 pool with 1 node.</p> <p>When I scaled the pool to have 2 nodes and the pod to replicas=2, to my surprise the 2 pods were allocated on the same node.</p> <p>Is it a problem for redundancy? Can I ensure that my replicas are spread across all nodes?</p>
<p>The place where Pods are scheduled is decided by the <a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/" rel="nofollow noreferrer">Kubernetes scheduler</a>. As mentioned in the documentation, the scheduler first finds eligible nodes in a filtering phase. Following that, the scheduler finds the most adequate node using the scoring criteria mentioned in <a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#scoring" rel="nofollow noreferrer">this section</a>. Among other factors, image locality and fitting Pods into as few nodes as possible could be the reason both Pods were allocated on the same node.</p> <blockquote> <p>Is it a problem for redundancy? Can I assure that my replicas are spread to all nodes?</p> </blockquote> <p>This can be an issue for redundancy. If one node goes out, then your entire service becomes unavailable (if you use resources like Deployments and such, the Pods will eventually be scheduled on the other node though).</p> <p>In order to favor Pod spread among nodes, you can customize the scheduler or use mechanisms such as <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">affinity and anti-affinity</a>.</p>
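<p>As a sketch of the anti-affinity approach (the Deployment name, label <code>app: my-app</code>, and image below are placeholders, not from the question), a "preferred" pod anti-affinity rule asks the scheduler to spread replicas across nodes while still scheduling them all if only one node is available:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" (soft) rule: spread replicas of this app across
          # nodes when possible; use requiredDuringScheduling... for a hard rule
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: my-app
      containers:
      - name: my-app
        image: nginx        # placeholder image
</code></pre>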
<p>I enabled monitoring for my project in the Rancher UI, and it installed successfully. But when I click <em>"Go to grafana"</em> at my workload (such as <code>nginx</code>), it moves to the Grafana dashboard, but Grafana shows nothing: <code>0 CPU, 0 memory, 0 networking</code>, ... </p> <ul> <li>Why doesn't it have data?</li> <li>And how can I know the consumed quota of my resources (workload, service, pod)?</li> </ul> <p>Please see my screenshots:</p> <p><a href="https://i.stack.imgur.com/OI1AL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OI1AL.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/eoXPF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eoXPF.png" alt="enter image description here"></a> Many thanks</p>
<p>Prometheus is a necessary component for using Grafana.</p> <p>You can check this blog post:</p> <blockquote> <p><a href="https://rancher.com/blog/2018/2018-10-18-monitoring-kubernetes/" rel="nofollow noreferrer">Kubernetes Monitoring with Prometheus, Grafana and Rancher</a></p> <p><a href="https://prometheus.io" rel="nofollow noreferrer">Prometheus</a> is an open-source application for monitoring systems and generating alerts. ... Prometheus will scrape targets at designated intervals and store the information in a time-series database.</p> <p><a href="https://grafana.com" rel="nofollow noreferrer">Grafana</a> is also open source and runs as a web application. ... Grafana makes it easy to create graphs and assemble those graphs into dashboards. </p> </blockquote> <p>Check if <code>Prometheus</code> is turned on and whether it is configured correctly:</p> <blockquote> <p><a href="https://rancher.com/docs/rancher/v2.x/en/project-admin/tools/monitoring/#configuring-project-monitoring" rel="nofollow noreferrer">Configuring Project Monitoring</a></p> <p>From the Global view, navigate to the project that you want to configure project monitoring.</p> <ul> <li><p>Select Tools > Monitoring in the navigation bar.</p></li> <li><p>Select Enable to show the Prometheus configuration options. Enter your desired configuration options.</p></li> <li><p>Click Save.</p></li> </ul> </blockquote> <p>Also check these settings: </p> <blockquote> <p><a href="https://rancher.com/docs/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/" rel="nofollow noreferrer">Prometheus Configuration</a></p> <ul> <li><p><a href="https://rancher.com/docs/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/#persistent-storage" rel="nofollow noreferrer">Enable Persistent Storage</a> for Prometheus: whether or not to configure storage for Prometheus so that metrics can be retained even if the Prometheus pod fails. 
</p></li> <li><p><a href="https://rancher.com/docs/rancher/v2.x/en/cluster-admin/tools/monitoring/prometheus/#persistent-storage" rel="nofollow noreferrer">Enable Persistent Storage</a> for Grafana Whether or not to configure storage for Grafana so that the Grafana dashboards and configuration can be retained even if the Grafana pod fails. </p></li> </ul> </blockquote>
<p>Akka app on Kubernetes is facing delayed heartbeats, even when there is no load. </p> <p>There is also constantly the following warning: </p> <pre><code>heartbeat interval is growing too large for address ... </code></pre> <p>I tried to add a custom dispatcher for the cluster, even for every specific actor but did not help. I am not doing any blocking operations, since it is just a simple Http server.</p> <p>When the cluster has load, the nodes get Unreachable. </p> <p>I created a repository which can be used to reproduce the issue : <a href="https://github.com/CostasChaitas/Akka-Demo" rel="nofollow noreferrer">https://github.com/CostasChaitas/Akka-Demo</a></p>
<p>First, thanks for the well documented reproducer. I did find one minor glitch with a dependency you included, but it was easy to resolve.</p>

<p>That said, I was unable to reproduce your errors. Everything worked fine on my local machine and on my dev cluster. You don't include your load generator, so maybe I just wasn't generating as sustained a load, but I got no heartbeat delays at all.</p>

<p>I suspect this is a duplicate of <a href="https://stackoverflow.com/questions/58015699/akka-cluster-heartbeat-delays-on-kubernetes">Akka Cluster heartbeat delays on Kubernetes</a>. If so, it sounds like you've already checked for my usual suspects of GC and CFS. And if you are able to reproduce locally, that also makes it improbable that it's my other common problem of badly configured K8s networking. (I had one client that was having problems with Akka clustering on K8s, and it turned out to be just a badly configured cluster: the network was dropping and delaying packets between pods.)</p>

<p>Since you say this is load testing, perhaps you are just running out of sockets/files? You don't have much in the way of HTTP server configuration. (Nor any JVM options.)</p>

<p>I think my next debugging step would be to connect to one of the running containers and try to test the network between the pods.</p>
<p>I'd like to run a 'job' per node, one pod on a node at a time.</p>

<ul>
<li>I've scheduled a bunch of jobs</li>
<li>I have a whole bunch of pending pods now</li>
<li>I'd like these pending pods to now trigger a node scaling up event (which does <strong>NOT</strong> happen)</li>
</ul>

<hr>

<p>Very much like this issue (made by myself): <a href="https://stackoverflow.com/questions/54314246/kubernetes-reports-pod-didnt-trigger-scale-up-it-wouldnt-fit-if-a-new-node-i">Kubernetes reports &quot;pod didn&#39;t trigger scale-up (it wouldn&#39;t fit if a new node is added)&quot; even though it would?</a></p>

<p>However in this case it should indeed fit on a new node.</p>

<p>How can I diagnose why Kubernetes has determined that a node scaling event is not possible?</p>

<p>My job yaml:</p>

<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: example-job-${job_id}
  labels:
    job-in-progress: job-in-progress-yes
spec:
  template:
    metadata:
      name: example-job-${job_id}
    spec:
      # this bit ensures a job/container does not get scheduled along side another - 'anti' affinity
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchExpressions:
              - key: job-in-progress
                operator: NotIn
                values:
                - job-in-progress-yes
      containers:
      - name: buster-slim
        image: debian:buster-slim
        command: ["bash"]
        args: ["-c", "sleep 60; echo ${echo_param}"]
      restartPolicy: Never
</code></pre>

<p>Autoscaler logs:</p>

<pre><code>I0920 19:27:58.190751       1 static_autoscaler.go:128] Starting main loop
I0920 19:27:58.261972       1 auto_scaling_groups.go:320] Regenerating instance to ASG map for ASGs: []
I0920 19:27:58.262003       1 aws_manager.go:152] Refreshed ASG list, next refresh after 2019-09-20 19:28:08.261998185 +0000 UTC m=+302.102284346
I0920 19:27:58.262092       1 static_autoscaler.go:261] Filtering out schedulables
I0920 19:27:58.264212       1 static_autoscaler.go:271] No schedulable pods
I0920 19:27:58.264246       1 scale_up.go:262] Pod default/example-job-21-npv6p is unschedulable
I0920 19:27:58.264252       1 scale_up.go:262] Pod default/example-job-28-zg4f8 is unschedulable
I0920 19:27:58.264258       1 scale_up.go:262] Pod default/example-job-24-fx9rd is unschedulable
I0920 19:27:58.264263       1 scale_up.go:262] Pod default/example-job-6-7mvqs is unschedulable
I0920 19:27:58.264268       1 scale_up.go:262] Pod default/example-job-20-splpq is unschedulable
I0920 19:27:58.264273       1 scale_up.go:262] Pod default/example-job-25-g5mdg is unschedulable
I0920 19:27:58.264279       1 scale_up.go:262] Pod default/example-job-16-wtnw4 is unschedulable
I0920 19:27:58.264284       1 scale_up.go:262] Pod default/example-job-7-g89cp is unschedulable
I0920 19:27:58.264289       1 scale_up.go:262] Pod default/example-job-8-mglhh is unschedulable
I0920 19:27:58.264323       1 scale_up.go:304] Upcoming 0 nodes
I0920 19:27:58.264370       1 scale_up.go:420] No expansion options
I0920 19:27:58.264511       1 static_autoscaler.go:333] Calculating unneeded nodes
I0920 19:27:58.264533       1 utils.go:474] Skipping ip-10-0-1-118.us-west-2.compute.internal - no node group config
I0920 19:27:58.264542       1 utils.go:474] Skipping ip-10-0-0-65.us-west-2.compute.internal - no node group config
I0920 19:27:58.265063       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-25-g5mdg", UID:"d2e0e48c-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7256", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265090       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-8-mglhh", UID:"c7d3ce78-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7267", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265101       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-6-7mvqs", UID:"c6a5b0e4-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7273", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265110       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-20-splpq", UID:"cfeb9521-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7259", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265363       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-21-npv6p", UID:"d084c067-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7275", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265384       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-16-wtnw4", UID:"ccbe48e0-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7265", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265490       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-28-zg4f8", UID:"d4afc868-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7269", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265515       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-24-fx9rd", UID:"d24975e5-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7271", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
I0920 19:27:58.265685       1 static_autoscaler.go:360] Scale down status: unneededOnly=true lastScaleUpTime=2019-09-20 19:23:23.822104264 +0000 UTC m=+17.662390361 lastScaleDownDeleteTime=2019-09-20 19:23:23.822105556 +0000 UTC m=+17.662391653 lastScaleDownFailTime=2019-09-20 19:23:23.822106849 +0000 UTC m=+17.662392943 scaleDownForbidden=false isDeleteInProgress=false
I0920 19:27:58.265910       1 factory.go:33] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"example-job-7-g89cp", UID:"c73cfaea-dbd9-11e9-a9e2-024e7db9d360", APIVersion:"v1", ResourceVersion:"7263", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added):
</code></pre>
<p>I had the wrong parameters defined on the autoscaler.</p>

<p>I had to modify the <code>node-group-auto-discovery</code> and <code>nodes</code> parameters.</p>

<pre><code>- ./cluster-autoscaler
- --cloud-provider=aws
- --namespace=default
- --scan-interval=25s
- --scale-down-unneeded-time=30s
- --nodes=1:20:terraform-eks-demo20190922161659090500000007--terraform-eks-demo20190922161700651000000008
- --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/example-job-runner
- --logtostderr=true
- --stderrthreshold=info
- --v=4
</code></pre>

<p>When installing the cluster autoscaler it is not enough to simply apply the example config, e.g.:</p>

<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
</code></pre>

<p>As documented in <a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html" rel="nofollow noreferrer">the user guide</a>, that config has a placeholder for your EKS cluster name in the value of <code>node-group-auto-discovery</code>, and you must either replace it before applying or update it after deploying.</p>
<p>I am trying to perform a simple operation of logging into my cluster to update the image of a deployment. I am stuck at the first step. I get an error that the connection to <code>localhost:8080</code> is refused. Please help. </p>

<pre><code>$ chmod u+x kubectl &amp;&amp; mv kubectl /bin/kubectl
$ $KUBE_CERT &gt; ca.crt
$ kubectl config set-cluster cfc --server=$KUBE_URL --certificate-authority=ca.crt
Cluster "cfc" set.
$ kubectl config set-context cfc --cluster=cfc
Context "cfc" created.
$ kubectl config set-credentials gitlab-admin --token=$KUBE_TOKEN
User "gitlab-admin" set.
$ kubectl config set-context cfc --user=gitlab-admin
Context "cfc" modified.
$ kubectl config use-context cfc
Switched to context "cfc".
$ echo "Deploying dashboard with version extracted from tag ${CI_COMMIT_TAG}"
Deploying dashboard with version extracted from tag dev-1.0.4-22
$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>The reason you get the connection refused error is that your proxy is not started. Try executing the command below so kubectl can access the cluster via the proxy (localhost:8080):</p>

<pre><code>kubectl proxy --address 0.0.0.0 --accept-hosts '.*' &amp;
</code></pre>

<p>Another approach is to use <code>curl</code> and operate on your cluster directly, as in the following example:</p>

<pre><code>curl --cacert /path/to/cert -H "Authorization: Bearer {your token}" "${KUBE_URL}/api"
</code></pre>
<p>I've created a chart which requires a Helm chart from <a href="https://github.com/helm/charts/tree/master/stable" rel="noreferrer">stable/charts</a>, let's say <a href="https://github.com/helm/charts/tree/master/stable/mongodb" rel="noreferrer">mongodb</a>. Although you can define a <a href="https://github.com/helm/helm/blob/master/docs/helm/helm_dependency_list.md" rel="noreferrer">requirements.yaml</a> file, I was wondering if it is possible to define a chart in here?</p>

<p>Furthermore, to make things easier (not) I need to replace some values for the mongodb chart as well. Not sure if I'm pushing the limits here too much :) It looks like a subchart relation, so it sounds possible.</p>

<p>The alternative, which I'm using now, is to first install the stable/chart and then my custom chart. But it would be an improvement if I could reduce this to one <code>helm</code> command just by adding this dependency to my custom chart.</p>

<p><strong>Update</strong>: </p>

<p>I will update this post with my findings and hopefully this will eventually lead to the solution :) The <code>requirements.yaml</code> so far:</p>

<pre><code>dependencies:
  - name: "mongodb"
    version: "4.x.x"
    repository: "@stable"
    import-values:
      - child: default.data
        parent: myimports
</code></pre>

<p>Not sure yet how I can use <code>import-values</code> to customize mongodb, but for now I'm stuck on the following error:</p>

<pre><code>Error: found in requirements.yaml, but missing in charts/ directory: mongodb
</code></pre>

<p>Also, because I'm using <code>stable</code> I'm not sure what to do with the <code>version</code> field.</p>

<p>Useful links: <a href="https://helm.sh/docs/developing_charts/#using-the-child-parent-format" rel="noreferrer">link</a>, <a href="https://stackoverflow.com/questions/55748639/set-value-in-dependency-of-helm-chart">link</a></p>
<p>Since Helm 2.2.0, you can store the mongodb chart in a sibling directory and refer to it in your <code>requirements.yaml</code> file as:</p>

<pre><code>dependencies:
  - name: mongodb
    version: "&lt;the_mongo_chart_version&gt;"
    repository: "file://../mongodb"
</code></pre>

<p>That way you can play with the values and deploy the whole thing with one helm command.</p>

<p>For more information refer to</p>

<p><a href="https://github.com/helm/helm/blob/master/docs/helm/helm_dependency.md" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/helm/helm_dependency.md</a></p>
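<p>Once the dependency resolves, subchart values can be overridden from the parent chart's <code>values.yaml</code> under a top-level key matching the dependency name. A sketch (the specific mongodb keys below are illustrative placeholders; check the subchart's own <code>values.yaml</code> for the real ones):</p>

<pre><code># parent chart's values.yaml
mongodb:                          # must match the dependency name
  mongodbRootPassword: changeme   # hypothetical subchart keys -- consult
  persistence:                    # the mongodb chart's values.yaml
    enabled: true
    size: 8Gi
</code></pre>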
<p>I need to reduce the retry count for an unavailable/deleted Kubernetes cluster using kubernetes-client-python; currently, by default, it is 3.</p>

<pre><code>WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('&lt;urllib3.connection.VerifiedHTTPSConnection object at 0x00000000096E3860&gt;: Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)': /api/v1/pods
WARNING Retrying (Retry(total=1,....... /api/v1/pods
WARNING Retrying (Retry(total=0,....... /api/v1/pods
</code></pre>

<p>After 3 retries it throws an exception.</p>

<p>Is there any way to reduce the count?</p>

<p>Example code:</p>

<pre><code>from kubernetes import client, config

config.load_kube_config(config_file='location-for-kube-config')

v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces()
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
<p>Sadly, it seems that it's not possible, because:</p>

<p>The Python client uses a urllib3 PoolManager to make requests, as you can see here:</p>

<p><a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L162" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L162</a></p>

<pre><code>r = self.pool_manager.request(method, url,
                              body=request_body,
                              preload_content=_preload_content,
                              timeout=timeout,
                              headers=headers)
</code></pre>

<p>and under the hood it uses urlopen with default parameters, as you can see here:</p>

<p><a href="https://urllib3.readthedocs.io/en/1.2.1/pools.html#urllib3.connectionpool.HTTPConnectionPool.urlopen" rel="nofollow noreferrer">https://urllib3.readthedocs.io/en/1.2.1/pools.html#urllib3.connectionpool.HTTPConnectionPool.urlopen</a></p>

<p><code>urlopen(..., retries=3, ...)</code></p>

<p>so there is no way to pass another value here; you must fork the official lib to achieve that.</p>
<p>I'm new on Kubernetes. </p> <p>I couldn't get deployments using kubectl, but I can see all deployments on the Kubernetes dashboard. How can I fix this problem? </p> <pre><code>user@master:~$ kubectl get deployments
error: the server doesn't have a resource type "deployments"
</code></pre> <p><strong>kubernetes version:</strong> 1.12</p> <p><strong>kubectl version:</strong> 1.13</p> <p><strong>kubectl api-versions:</strong></p> <p>apiregistration.k8s.io/v1</p> <p>apiregistration.k8s.io/v1beta1</p> <p>v1</p> <p><strong>api-resources:</strong></p> <pre><code>user@master:~$ kubectl api-resources
NAME                     SHORTNAMES   APIGROUP                 NAMESPACED   KIND
bindings                                                       true         Binding
componentstatuses        cs                                    false        ComponentStatus
configmaps               cm                                    true         ConfigMap
endpoints                ep                                    true         Endpoints
events                   ev                                    true         Event
limitranges              limits                                true         LimitRange
namespaces               ns                                    false        Namespace
nodes                    no                                    false        Node
persistentvolumeclaims   pvc                                   true         PersistentVolumeClaim
persistentvolumes        pv                                    false        PersistentVolume
pods                     po                                    true         Pod
podtemplates                                                   true         PodTemplate
replicationcontrollers   rc                                    true         ReplicationController
resourcequotas           quota                                 true         ResourceQuota
secrets                                                        true         Secret
serviceaccounts          sa                                    true         ServiceAccount
services                 svc                                   true         Service
apiservices                           apiregistration.k8s.io   false        APIService
</code></pre> <p>Thanks for your help.</p> <p><strong>-----------Edit 1-----------</strong></p> <p>Hello @EduardoBaitello, thank you for the quick reply. 
The problem is not related to permissions.</p> <pre><code>user@master:~$ kubectl auth can-i get deployments
Warning: the server doesn't have a resource type 'deployments'
yes
user@master:~$ kubectl auth can-i get deployment
Warning: the server doesn't have a resource type 'deployment'
yes
user@master:~$ kubectl auth can-i get namespaces
yes
user@master:~$ kubectl auth can-i get pods
yes
</code></pre> <p>So I think this is not a duplicated question.</p> <pre><code>user@master:~$ kubectl get po --namespace=kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7c6b876df8-nk7nm   1/1     Running   2          118d
calico-node-8lt9f                          1/1     Running   3          118d
calico-node-d9r9l                          1/1     Running   2          118d
calico-node-ffqlj                          1/1     Running   2          118d
dns-autoscaler-57ff59dd4c-c9tjv            1/1     Running   2          118d
kube-apiserver-node1                       1/1     Running   3          118d
kube-controller-manager-node1              1/1     Running   6          118d
kube-dns-84467597f5-hf2fn                  3/3     Running   6          118d
kube-dns-84467597f5-sttgx                  3/3     Running   9          118d
kube-proxy-node1                           1/1     Running   3          118d
kube-proxy-node2                           1/1     Running   2          118d
kube-proxy-node3                           1/1     Running   2          118d
kube-scheduler-node1                       1/1     Running   6          118d
kubernetes-dashboard-5db4d9f45f-gkl6w      1/1     Running   3          118d
nginx-proxy-node2                          1/1     Running   2          118d
nginx-proxy-node3                          1/1     Running   2          118d
tiller-deploy-6f6fd74b68-27fqc             1/1     Running   0          16d
</code></pre> <pre><code>user@master:~$ kubectl get componentstatus
NAME                 STATUS    MESSAGE
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
</code></pre>
<p>The first step would be to increase the verbosity level to help in finding out the root cause:</p>

<blockquote>
  <p><code>kubectl get deployments --v=99</code></p>
</blockquote>

<p>Overall, there are a few things that might cause it:</p>

<ol>
<li>You might have run the commands below as the root user, not a regular one. So run them as a regular user:</li>
</ol>

<blockquote>
  <p>cp -i /etc/kubernetes/admin.conf $HOME/.kube/config </p>
  
  <p>chown $(id -u):$(id -g) $HOME/.kube/config</p>
</blockquote>

<p>As suggested here: <a href="https://github.com/kubernetes/kubernetes/issues/52636" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/52636</a></p>

<ol start="2">
<li>Certificates in the kubectl config file expired, or, if the cluster is on AWS EKS, the IAM access keys might be inactive.</li>
</ol>

<p>In my case, when running <code>kubectl get deployments --v=99</code>, in addition to "the server doesn't have a resource type deployments" it showed that: </p>

<pre><code>Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401
</code></pre>

<p>If that is the case, then check the certificates in your kubectl config file (they might not be there, be expired, need to be recreated, etc.) or, if on EKS, whether the IAM keys are issued/activated.</p>

<ol start="3">
<li>Lack of RBAC permissions, so that the user/group for whom the certificates/keys were issued/signed is not allowed to view specific resources.</li>
</ol>
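<p>If expired certificates are the suspect, a quick check is to decode the client certificate embedded in the kubeconfig and print its expiry date. This sketch assumes the first user entry in your kubeconfig carries an embedded <code>client-certificate-data</code> field; adjust the jsonpath otherwise:</p>

<pre><code>kubectl config view --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 --decode \
  | openssl x509 -noout -enddate
</code></pre>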
<p>I have a gRPC microservice running within Kubernetes. Currently there is only one pod, and I want to increase the number of pods for the microservice. In order to do that, I want to always route the same key to the same pod, so as to avoid race conditions. I would like to understand how to achieve this functionality.</p>
<p>In this case you need a layer 7 (L7) load balancer, because L7 load balancers operate at the application layer and can inspect traffic in order to make routing decisions. Most importantly, they can support the HTTP/2 protocol (which gRPC uses).</p>

<p>Nginx and HAProxy are among the options for L7 load balancing, but they can be too heavyweight for a microservice architecture. You can think about <a href="https://www.envoyproxy.io/docs/envoy/latest/start/start" rel="nofollow noreferrer">Envoy</a> and <a href="https://linkerd.io/2/getting-started/" rel="nofollow noreferrer">Linkerd</a>; both have support for gRPC.</p>

<p>Pods can be replicated to provide scaling and are wrapped in abstractions known as Services, which provide a stable IP address for accessing the underlying pods. Since Kubernetes 1.2, the default behaviour on hitting a Service IP is that a random backend pod will be returned. However, you can reconfigure your Services to be headless, so that the Service DNS name will instead return the entire list of available pod IPs, allowing you to perform your own service discovery (for example, consistent hashing on your key).</p>

<p>You can easily start with Linkerd with this <a href="https://linkerd.io/2/getting-started/" rel="nofollow noreferrer">official tutorial</a>.</p>
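<p>For the headless-service approach, setting <code>clusterIP: None</code> makes the Service's DNS name resolve to the individual pod IPs instead of a single virtual IP. A sketch (the service name, label, and port below are placeholders):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-grpc-svc        # hypothetical name
spec:
  clusterIP: None          # headless: DNS returns the individual pod IPs
  selector:
    app: my-grpc-app       # placeholder label matching your pods
  ports:
  - name: grpc
    port: 50051            # common gRPC port; adjust as needed
</code></pre>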
<p>I am using kubernetes v1.11.1 configured using kubeadm consisting of five nodes and hundreds of pods are running. How can I enable or configure cluster autoscaling based on the total memory utilization of the cluster? </p>
<p>A K8s cluster can be scaled with the help of the Cluster Autoscaler (CA); see the <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">cluster autoscaler GitHub page</a>, where you will also find info on the AWS CA.</p> <p>It does not scale the cluster based on “total memory utilization” but based on “pending pods” in the cluster, i.e. pods that cannot be scheduled because there are not enough available cluster resources to meet their CPU and memory requests. Basically, the Cluster Autoscaler checks for pending (unschedulable) pods every 10 seconds and, if it finds any, requests the AWS Auto Scaling Group (ASG) API to increase the number of instances in the ASG. When a node is added to the ASG, it joins the cluster and becomes ready to serve pods. After that, the K8s scheduler allocates the pending pods to the new node. </p> <p>For scale-down, the CA checks every 10 seconds which nodes are unneeded; a node is considered for removal if the sum of the CPU and memory requests of all its pods is smaller than 50% of the node’s capacity, its pods can be moved to other nodes, and it has no scale-down-disabled annotation. </p> <p>If the K8s cluster on AWS is administered with kubeadm, all of the above holds true. So in a nutshell (intricate details omitted; refer to the CA docs): </p> <blockquote> <p>Create an Auto Scaling Group (ASG): <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html" rel="nofollow noreferrer">aws ASG doc</a>.</p> <p>Add tags to the ASG such as k8s.io/cluster-autoscaler/enabled (mandatory) and k8s.io/cluster-autoscaler/&lt;cluster-name&gt; (optional).</p> <p>Launch the CA in the cluster following the official doc.</p> </blockquote>
<ol> <li>I have no idea what is going on in my ubuntu VM. It was working as a 'worker k8s node' once upon a time, but I have dismantled that k8s cluster TOTALLY, so there is no master corresponding to this VM now.</li> <li>But still, I see the below containers running in my VM (which I would still call a worker VM):</li> </ol> <pre><code> ~$ docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
2481e38e9a7a        k8s.gcr.io/kubernetes-dashboard-amd64      "/dashboard --inse..."   2 minutes ago       Up 2 minutes                            k8s_kubernetes-dashboard_kubernetes-dashboard-77d8b98585-c257p_kube-system_ecd335b0-7469-11e8-b4d7-fa163e1912b6_1
8ca28af62b8f        k8s.gcr.io/k8s-dns-sidecar-amd64           "/sidecar --v=2 --..."   3 minutes ago       Up 3 minutes                            k8s_sidecar_kube-dns-54cccfbdf8-tj5rv_kube-system_edee9d35-7469-11e8-b4d7-fa163e1912b6_0
9207d2eb56b2        k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     "/dnsmasq-nanny -v..."   3 minutes ago       Up 3 minutes                            k8s_dnsmasq_kube-dns-54cccfbdf8-tj5rv_kube-system_edee9d35-7469-11e8-b4d7-fa163e1912b6_0
031dab934bfa        k8s.gcr.io/k8s-dns-kube-dns-amd64          "/kube-dns --domai..."   3 minutes ago       Up 3 minutes                            k8s_kubedns_kube-dns-54cccfbdf8-tj5rv_kube-system_edee9d35-7469-11e8-b4d7-fa163e1912b6_1
13fe0b47078a        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kube-dns-54cccfbdf8-tj5rv_kube-system_edee9d35-7469-11e8-b4d7-fa163e1912b6_0
b8560cb57ecb        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_kubernetes-dashboard-77d8b98585-c257p_kube-system_ecd335b0-7469-11e8-b4d7-fa163e1912b6_0
</code></pre> <ol start="3"> <li>I tried to kill the running containers and stop them using kubeadm reset; I even tried removing all the k8s packages. The packages are all removed, and the kubectl, kubeadm, and kubelet commands no longer work, but as soon as I start the docker service, the above containers are created again.</li> <li>I was also using minikube earlier, so I removed the minikube packages.</li> <li>On the previous master node, kubectl get nodes doesn't list this node.</li> <li>How do I know which master is trying to contact this VM?</li> </ol> <p>I am not sure how to stop all those containers now. (Ultimately I want to make this VM part of another k8s cluster.)</p> <p>Any help?</p>
<blockquote> <p>/etc/kubernetes/kubelet.conf</p> </blockquote> <p>This file contains the address of the master the kubelet on this node was configured to contact earlier. While the kubelet service is still installed and running with this configuration, it can keep recreating those containers once Docker starts, so stop and disable the kubelet (and remove this file) before reusing the VM in another cluster.</p>
<p>Is it possible to pass different args to pods based on their ordinal index in StatefulSets? Didn't find the answer on the StatefulSets documentation. Thanks!</p>
<p>The recommended way — see <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/#statefulset" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/#statefulset</a>:</p> <pre class="lang-sh prettyprint-override"><code># Generate server_id from the pod's ordinal index.
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# ${ordinal} now holds the replica number
# (note: hyphens are not valid in bash variable names, hence server_id)
server_id=$((100 + ordinal))
# Branch on the ordinal, e.g. to copy the appropriate conf.d files
# from a config map to an emptyDir.
if [[ $ordinal -eq 0 ]]; then
  : # do something on the first replica
else
  : # do something on the other replicas
fi
</code></pre>
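<p>To wire this in, such a script typically runs as the container's (or an init container's) command in the StatefulSet pod template. A sketch with an illustrative container name and image — note the image must actually ship bash, since the regex test uses <code>[[ =~ ]]</code>:</p>

```yaml
initContainers:
  - name: init-ordinal           # illustrative name
    image: bash:5                # any image that provides bash
    command:
      - bash
      - "-c"
      - |
        [[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
        ordinal=${BASH_REMATCH[1]}
        echo "this is replica ${ordinal}"
```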
<p>I have a cronjob that should process events occurred since the last operation, for that I use DB to persist this timestamp, my question is whether or not it is possible to directly pass Kubernetes <strong>status.lastScheduleTime</strong> into the cronjob object as an environment variable? </p>
<p>The easiest way to accomplish your goal is via the <code>kubectl set env</code> command, injecting into the particular <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Cronjob</a> template the desired <code>status.lastScheduleTime</code> field as an <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">environment</a> variable <code>LAST_SCHEDULE</code>:</p> <p><code>kubectl set env cronjob/name LAST_SCHEDULE=$(kubectl get cronjob name -o jsonpath='{.status.lastScheduleTime}')</code></p> <pre><code>env:
- name: LAST_SCHEDULE
  value: "2019-09-23T08:56:00Z"
</code></pre> <p>A more comprehensive approach is to patch the <code>Cronjob</code> resource, supplying the target environment variable in the corresponding template with the most recent <code>lastScheduleTime</code> value via <em><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a></em> or through <code>postStart</code>/<code>preStop</code> <em><a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">handlers</a></em>.</p>
<p>We've set up an AWS S3 bucket as a Helm repo.</p> <p>But with development going on, more and more package files are uploaded to S3.</p> <p>We want to clean up/delete older files in S3. Of course, I know we can't directly delete them from S3, as there is some mapping info stored in index.yaml.</p> <p>I checked <code>helm help</code> but found little information about this. Is there a formal way to delete older Helm packages?</p>
<p>As mentioned in <a href="https://chartmuseum.com/docs/#helm-chart-repository" rel="nofollow noreferrer">https://chartmuseum.com/docs/#helm-chart-repository</a>, you can use the ChartMuseum API to do that, e.g.:</p> <pre><code># curl -XDELETE http://helm.chartrepo.url/api/charts/chart-name/version
</code></pre> <p>Pay attention to the versions: you have to delete one version per curl call. That means if a chart has two versions and you want to delete them all, you have to do it per version:</p> <pre><code># curl -XDELETE http://helm.chartrepo.url/api/charts/chart-name/1.0.1
# curl -XDELETE http://helm.chartrepo.url/api/charts/chart-name/1.0.2
</code></pre> <p>P.S.: you can list your charts' versions by hitting </p> <pre><code># curl http://helm.chartrepo.url/index.yaml
</code></pre>
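<p>If a chart has accumulated many versions, you can script the per-version deletion by scraping the repo's <code>index.yaml</code>. A rough sketch (the function name is mine, and the naive <code>grep</code> assumes the index only contains this one chart's <code>version:</code> entries — for anything more complex, parse the YAML properly):</p>

```shell
# delete_chart_versions REPO_URL CHART
# Issues one DELETE per chart version listed in the repo's index.yaml.
delete_chart_versions() {
  local repo="$1" chart="$2"
  curl -s "$repo/index.yaml" \
    | grep 'version:' \
    | awk '{print $NF}' \
    | while read -r v; do
        curl -XDELETE "$repo/api/charts/$chart/$v"
      done
}

# Example: delete_chart_versions http://helm.chartrepo.url chart-name
```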
<p>My spring-boot application fails after start with an NPE in the tomcat server when running on azure kubernetes service with SSL.</p> <p>First, what does work successfully: I have a minimal Spring-boot application created with Spring Initializr:</p> <ul> <li>Gradle project</li> <li>Java 11</li> <li>Spring Boot 2.1.8</li> <li>The only dependency: Spring Web </li> </ul> <p>I want to use it with a kubernetes cluster on the azure cloud. For this, I have an Azure Kubernetes Service (AKS). And I use jib for building the docker image. Up to this point, everything works fine. I can deploy my docker image to the azure docker registry, start a kubernetes cluster (with 1 node) and I get my "hello world" after calling "http://<em>public-ip</em>:8443/hello". I use port 8443 but with normal http to make sure that this port is not the source of my problem.</p> <p>I want my application to support HTTPS, therefore I made a self-signed certificate with keytool (like described here: <a href="https://stackoverflow.com/questions/49324700/enable-https-with-self-signed-certificate-in-spring-boot-2-0">Enable HTTPS with self-signed certificate in Spring Boot 2.0</a>). This works fine on localhost. It also works inside minikube running on localhost. But when I use that docker image on AKS, my application fails a few seconds after startup.</p> <p>I activated</p> <pre><code>logging.level.org.apache.tomcat=DEBUG
logging.level.org.apache.catalina=DEBUG
</code></pre> <p>in my application.properties. The output of my application is:</p> <pre><code>2019-09-19 08:22:51.291  INFO 1 --- [           main] com.example.demo.DemoApplication         : Started DemoApplication in 9.623 seconds (JVM running for 10.419)
2019-09-19 08:22:53.775 DEBUG 1 --- [o-8443-Acceptor] o.apache.tomcat.util.threads.LimitLatch  : Counting up[https-jsse-nio-8443-Acceptor] latch=1
2019-09-19 08:22:58.764 DEBUG 1 --- [o-8443-Acceptor] o.apache.tomcat.util.threads.LimitLatch  : Counting up[https-jsse-nio-8443-Acceptor] latch=2
2019-09-19 08:22:58.793 DEBUG 1 --- [nio-8443-exec-1] org.apache.tomcat.util.modeler.Registry  : Managed= Tomcat:type=RequestProcessor,worker="https-jsse-nio-8443",name=HttpRequest1
2019-09-19 08:22:58.793 DEBUG 1 --- [nio-8443-exec-1] org.apache.tomcat.util.modeler.Registry  : Looking for descriptor
2019-09-19 08:22:58.793 DEBUG 1 --- [nio-8443-exec-1] org.apache.tomcat.util.modeler.Registry  : Introspecting
2019-09-19 08:22:58.793 DEBUG 1 --- [nio-8443-exec-1] m.m.MbeansDescriptorsIntrospectionSource : Introspected attribute virtualHost public java.lang.String org.apache.coyote.RequestInfo.getVirtualHost() null
&lt;snipped 20 lines of "Introspected attribute xyz"&gt;
2019-09-19 08:22:58.796 DEBUG 1 --- [nio-8443-exec-1] m.m.MbeansDescriptorsIntrospectionSource : Introspected attribute lastRequestProcessingTime public long org.apache.coyote.RequestInfo.getLastRequestProcessingTime() public void org.apache.coyote.RequestInfo.setLastRequestProcessingTime(long)
2019-09-19 08:22:58.796 DEBUG 1 --- [nio-8443-exec-1] m.m.MbeansDescriptorsIntrospectionSource : Introspected attribute errorCount public int org.apache.coyote.RequestInfo.getErrorCount() public void org.apache.coyote.RequestInfo.setErrorCount(int)
2019-09-19 08:22:58.796 DEBUG 1 --- [nio-8443-exec-1] m.m.MbeansDescriptorsIntrospectionSource : Setting name: org.apache.coyote.RequestInfo
2019-09-19 08:22:58.797 DEBUG 1 --- [nio-8443-exec-1] o.a.tomcat.util.modeler.BaseModelMBean   : preRegister org.apache.coyote.RequestInfo@21c6ab25
Tomcat:type=RequestProcessor,worker="https-jsse-nio-8443",name=HttpRequest1
2019-09-19 08:22:58.797 ERROR 1 --- [nio-8443-exec-1] o.a.coyote.http11.Http11NioProtocol      : Error reading request, ignored

java.lang.NullPointerException: null
    at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.getSslSupport(NioEndpoint.java:1392) ~[tomcat-embed-core-9.0.24.jar:9.0.24]
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853) ~[tomcat-embed-core-9.0.24.jar:9.0.24]
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1593) ~[tomcat-embed-core-9.0.24.jar:9.0.24]
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.24.jar:9.0.24]
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.24.jar:9.0.24]
    at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
</code></pre> <p>Can anyone give me a hint what my problem might be? Thanks!</p>
<p>That is a <a href="https://bz.apache.org/bugzilla/show_bug.cgi?id=63706" rel="noreferrer">known bug</a> in Apache Tomcat 9.0.24. It is triggered by sending plain text to an TLS enabled connector. It is safe to ignore but since it is rather noisy you probably want to upgrade to the recently released 9.0.26 where this has been fixed.</p>
<p>I am trying to run a command in a mongo deployment file like below:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: mongo-pod
  template:
    metadata:
      labels:
        app: mongo-pod
    spec:
      containers:
        - name: mongo-container
          image: mongo
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
          command:
            - "bash"
            - "-c"
            - |
              mongo --eval "db.getSiblingDB('admin').createUser({user : \"$user123\", pwd : \"$pass\", roles: [ { role: 'root', db: 'admin' } ]});"
      volumes:
        - name: mongo-persistent-storage
          persistentVolumeClaim:
            claimName: mongo-pv-claim
</code></pre> <p>As you can see, I want to set up access control for multiple users in the command, but I get the following error in the pod logs and the pod goes into an error state:</p> <pre><code>MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&amp;gssapiServiceName=mongodb
2019-09-21T15:42:39.549+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-09-21T15:42:39.550+0000 F  -        [main] exception: connect failed
2019-09-21T15:42:39.550+0000 E  -        [main] exiting with code 1
</code></pre> <p>Any idea why I keep getting this?</p> <p>Thanks</p>
<p>Try this:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: mongo-pod
  template:
    metadata:
      labels:
        app: mongo-pod
    spec:
      containers:
        - name: mongo-container
          image: mongo
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: user
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: password
      volumes:
        - name: mongo-persistent-storage
          persistentVolumeClaim:
            claimName: mongo-pv-claim
</code></pre> <p>It will create the user <code>user</code> with password <code>password</code> for the <code>admin</code> database; you can find more on <a href="https://hub.docker.com/_/mongo" rel="nofollow noreferrer">dockerhub</a>.</p>
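<p>For anything beyond local testing, you would typically source these values from a Secret rather than hard-coding them in the manifest. A sketch of the same <code>env</code> section (the Secret name and keys are illustrative):</p>

```yaml
env:
  - name: MONGO_INITDB_ROOT_USERNAME
    valueFrom:
      secretKeyRef:
        name: mongo-credentials   # illustrative Secret name
        key: username
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mongo-credentials
        key: password
```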
<p>I have a celery worker deployed on Kubernetes pods which executes a task (not very CPU intensive but takes some time to complete due to some HTTP calls). Is there any way to autoscale the pods in K8s based on the number of tasks in the task queue?</p>
<p>Yes, by using the Kubernetes metrics registry and <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">Horizontal Pod Autoscaler</a>.</p> <p><strong>First, you need to collect the "queue length" metric from Celery and expose it through one of the Kubernetes metric APIs. You can do this with a Prometheus-based pipeline:</strong></p> <ol> <li>Since Celery doesn't expose Prometheus metrics, you need to install an exporter that exposes some information about Celery (including the queue length) as Prometheus metrics. For example, <a href="https://github.com/zerok/celery-prometheus-exporter" rel="noreferrer">this exporter</a>.</li> <li>Install <a href="https://prometheus.io/" rel="noreferrer">Prometheus</a> in your cluster and configure it to collect the metric corresponding to the task queue length from the Celery exporter.</li> <li>Install the <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="noreferrer">Prometheus Adapter</a> in your cluster and configure it to expose the "queue length" metric through the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="noreferrer">Custom Metrics API</a> by pulling its value from Prometheus.</li> </ol> <p><strong>Now you can configure the Horizontal Pod Autoscaler to query this metric from the Custom Metrics API and autoscale your app based on it.</strong></p> <p>For example, to scale the app between 1 and 10 replicas based on a target value for the queue length of 5:</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: mycelery_queue_length
        target:
          type: Value
          value: 5
        describedObject:
          apiVersion: apps/v1
          kind: Deployment
          name: mycelery
</code></pre>
<p>I am referring to <a href="https://cloud.google.com/community/tutorials/cloud-iot-prometheus-monitoring" rel="nofollow noreferrer">this doc</a>.</p> <p>I want to send data from my device and visualize it in Grafana. So, how do I connect Prometheus (deployed in a cluster in GCP) to GCP Pub/Sub?</p>
<p>Prometheus is <a href="https://prometheus.io/docs/introduction/faq/#why-do-you-pull-rather-than-push" rel="nofollow noreferrer">pull-based</a> rather than push-based. So, whatever the metrics source is, it must expose the metrics in Prometheus format, and Prometheus will periodically query them with HTTP request.</p> <p>If directly exposing the metrics is not possible, the metrics source can push the metrics to some intermediate component which exposes the metrics in Prometheus format so that Prometheus can query them.</p> <p>It seems this is the approach taken by the <a href="https://cloud.google.com/community/tutorials/cloud-iot-prometheus-monitoring" rel="nofollow noreferrer">document</a> you're referring to. The metrics are submitted from the source via PubSub to a Metrics Telemetry Converter pod running in the Kubernetes cluster, which exposes them in Prometheus format.</p> <p>You then have to <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/" rel="nofollow noreferrer">configure</a> Prometheus to scrape the metrics from this pod, as you would configure it for any other job.</p>
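<p>On the Prometheus side, scraping such an intermediate component is just an ordinary scrape job in the Prometheus configuration. A sketch — the job name, target host, and port below are placeholders, not values from the tutorial:</p>

```yaml
scrape_configs:
  - job_name: 'telemetry-converter'                      # placeholder job name
    metrics_path: /metrics
    static_configs:
      - targets: ['metrics-converter.default.svc:9090']  # placeholder service/port
```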
<p>I am using kubebuilder to create a Kubernetes operator. When an object of my kind is initiated I have to parse the spec and update the objects based on a few calculations. </p> <p>From what I can tell I can either update the status of the object, the metadata, or a managed field (I may be wrong?). It appears that the <code>sigs.k8s.io/controller-runtime/pkg/client</code> library is responsible for how to update these fields (I'm not completely sure). I am having trouble understanding the <a href="https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client#Patch" rel="nofollow noreferrer">docs</a>. </p> <p>I have the following questions:</p> <ul> <li>Are there a guide to best practices about where to store configuration on the object between status, metadata (labels or annotations), and managed fields?</li> <li>How do I update/patch the annotations of an object similar to how I would use <code>r.Status().Update(ctx, &amp;thing);</code> to update the status?</li> </ul>
<p>The <a href="https://book.kubebuilder.io/" rel="nofollow noreferrer">Kubebuilder docs</a> are a bit raw but nonetheless are a handy guide when building CRDs and controllers with Kubebuilder. It walks you through a fairly detailed example which is great to study and refer back to, to see how to do certain things.</p> <p>The answer to your question generally is, "it depends." What values are you calculating, and why? Why do you need to store them on the object? Is the lifecycle of this data coupled to the lifecycle of this object, or might this computed data need to live on and be used by other controllers even when the object is deleted? In general, is anything going to interact with those values? What is it going to do with them?</p> <p>If nothing else aside from the reconciliation controller for the CRD is going to interact with the data you're putting, consider putting it within the object's Status.</p> <p>Doing <code>r.Status().Update(ctx, &amp;thing)</code> will avoid triggering any side-effects as it will only persist changes you've made to the object's Status subresource, rather than its spec or metadata.</p> <p>A common thing to do with custom resources is to set and remove finalizers, which live in the object's metadata.</p>
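<p>To make the second question concrete: annotations (like finalizers) live in the object's metadata, i.e. on the main resource rather than the Status subresource, so you persist them with a regular update or patch on the object itself. A sketch, assuming <code>r</code> embeds a controller-runtime <code>client.Client</code>, <code>thing</code> is the object fetched in <code>Reconcile</code>, and the annotation key is illustrative:</p>

```go
// Snapshot the object before mutating it so the patch contains only the diff.
patch := client.MergeFrom(thing.DeepCopy())

if thing.Annotations == nil {
    thing.Annotations = map[string]string{}
}
thing.Annotations["example.com/computed-value"] = computedValue // illustrative key

// r.Patch (as opposed to r.Status().Update) persists spec/metadata changes.
if err := r.Patch(ctx, &thing, patch); err != nil {
    return ctrl.Result{}, err
}
```

<p>Patching against a <code>MergeFrom</code> snapshot keeps the request minimal, which reduces the chance of conflicts with concurrent writers.</p>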
<p>I set up kubernetes with the master and node on the same hardware (Ubuntu 18) using this <a href="https://powerodit.ch/2017/10/29/all-in-one-kubernetes-cluster-with-kubeadm/" rel="nofollow noreferrer">tutorial</a>.</p> <p>Kubernetes 1.15.3, docker 19.03.2</p> <p>The container I created runs emulation software that needs root privileges with write access to the /proc/sys/kernel directory. When kubernetes starts the container, I get an error inside the service script /etc/init.d/myservicescript indicating that it can't write to /proc/sys/kernel/xxx. The container runs on Ubuntu 14.</p> <p>I tried to set "runAsUser: 0" in the pod's yaml file.</p> <p>I tried to set "USER 0" in the Dockerfile.</p> <p>Neither works. Any suggestion on how to get this working? </p>
<p>Changing the user inside the container does not give you any privilege on the host. In order to get elevated privilege, you must set <code>privileged: true</code> in the security context.</p> <p>For example:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "999"
    securityContext:
      privileged: true
</code></pre>
<p>In minikube there is a nginx ingress plugin which uses a hostPort of 80 and 443. Apparently traffic from the outside can reach these ports just fine. However if you create a pod with a container with <code>hostPort: 9999</code> for example <code>telnet $(minikube ip) 9999</code> gets the following result:</p> <pre><code>Trying 192.168.99.165...
telnet: connect to address 192.168.99.165: Connection refused
telnet: Unable to connect to remote host
</code></pre> <p>Is nginx ingress controller doing some special magic? If so what is said magic that I can also use? </p> <p>Please no answers about using NodePort. </p>
<p>There's no special magic. I suspect you're getting that telnet response because there's nothing listening on port 9999 within the container on which <code>hostPort: 9999</code> is set.</p> <p>Run <code>minikube ssh</code> and look at <code>netstat -nlt</code> and you'll see your port 9999 there. Try running a real service listening on an open hostPort and it should work, e.g.</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      run: redis
  template:
    metadata:
      labels:
        run: redis
    spec:
      containers:
      - image: redis
        imagePullPolicy: Always
        name: redis
        ports:
        - containerPort: 6379
          hostPort: 6379
          protocol: TCP
</code></pre> <p>From my terminal:</p> <pre><code>&gt; telnet $(minikube ip) 6379
Trying 192.168.99.189...
Connected to 192.168.99.189.
Escape character is '^]'.
</code></pre> <p>If there <em>is</em> something listening on port 9999, then there may be an issue with the way Kubernetes is setting up the proxy between the host and the container. You could look for the <code>docker-proxy</code> process to check that:</p> <pre><code>$ ps aux | grep docker-proxy
root      3579  0.0  0.0   3668  1768 ?     Sl   14:43   0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 10000 -container-ip 172.17.0.2 -container-port 10000
root     19862  0.0  0.0   9240   476 pts/1 S+   16:21   0:00 grep docker-proxy
root     23466  0.0  0.0   3668  1768 ?     Sl   15:20   0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 18080 -container-ip 172.17.0.9 -container-port 18080
root     23480  0.0  0.0   3668  1768 ?     Sl   15:20   0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.17.0.9 -container-port 443
root     23494  0.0  0.0   3668  1676 ?     Sl   15:20   0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.9 -container-port 80
root     25840  0.0  0.0   3668  1768 ?     Sl   15:24   0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9999 -container-ip 172.17.0.10 -container-port 9999
$
</code></pre>
<p>I have run a simple command:</p> <pre><code>kubectl api-resources
</code></pre> <p>I was a bit surprised to get duplicates (albeit in different Api groups), e.g.:</p> <pre><code>NAME        SHORTNAMES   APIGROUP            NAMESPACED   KIND
ingresses   ing          extensions          true         Ingress
ingresses   ing          networking.k8s.io   true         Ingress
</code></pre> <p>Is it the same resource, just listed twice because it belongs to different api groups at the same time and the result of the following calls are always the same?</p> <pre><code>kubectl get ingress
kubectl get ingress.networking.k8s.io
kubectl get ingress.extensions
</code></pre>
<p>These are actually different versions of the same resource in different API groups.</p> <p>In general, when new resource types are introduced to Kubernetes, they are first managed in the <code>extensions</code> API group (iterating through multiple versions). Once the resources are regarded stable, they are moved to a "stable" API group, such as <code>apps</code> or <code>networking.k8s.io</code> (explained, for example, <a href="https://kubernetes.io/blog/2019/09/18/kubernetes-1-16-release-announcement/" rel="nofollow noreferrer">here</a>).</p> <p>The fact that Ingress is listed in both API groups means that your API server can understand versions of this resource from both API groups.</p> <p>The specifications of the specific resource versions may differ. You can compare them, for example, with:</p> <pre><code>kubectl explain --api-version=extensions/v1beta1 ingress </code></pre> <p>vs.</p> <pre><code>kubectl explain --api-version=networking.k8s.io/v1beta1 ingress </code></pre> <p>In-depth explanations of how the API server handles different versions of a resource can be found in this <a href="https://blog.openshift.com/kubernetes-deep-dive-api-server-part-1/" rel="nofollow noreferrer">article series</a>.</p>
<p>I have been able to successfully set up kubernetes on my Centos 7 server. On trying to get the dashboard working after following the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">documentation</a> and running 'kubectl proxy', it attempts to serve on 127.0.0.1:9001 and not my server IP. Does this mean I cannot access the kubernetes dashboard from outside the server? I need help getting the dashboard running using my public IP.</p>
<p>You can specify on which address you want to run <code>kubectl proxy</code>, i.e.</p> <pre><code>kubectl proxy --address &lt;EXTERNAL-IP&gt; -p 9001
Starting to serve on 100.105.***.***:9001
</code></pre> <p>You can also use port forwarding to access the dashboard.</p> <pre><code>kubectl port-forward --address 0.0.0.0 pod/dashboard 8888:80
</code></pre> <p>This will listen on port 8888 on all addresses and route traffic directly to your pod.</p> <p>For instance:</p> <pre><code>rsha:~$ kubectl port-forward --address 0.0.0.0 deploy/webserver 8888:80
Forwarding from 0.0.0.0:8888 -&gt; 80
</code></pre> <p>In another terminal, running </p> <pre><code>rsha:~$ curl 100.105.***.***:8888
&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;
</code></pre>
<p>We have python services running in pods in a kubernetes cluster. The services are setup to receive their log-level from an environment variable. Those env vars are set during the deployment of the service in a gitlab pipeline. For debugging purposes I want to be able to just change the env var on a single pod and restart it, without having to redeploy the service from gitlab.</p> <p>Before we moved to kubernetes, we were running our containers in rancher, where the described change was very easy to do in the GUI. Change the env var -&gt; hit update -&gt; container restarts automatically.</p> <p>I found <a href="https://medium.com/@tszumowski/how-to-inject-environment-variables-into-kubernetes-pods-after-deployment-489306fff1a8" rel="nofollow noreferrer">this article</a> that suggest to change the replica set using a command like</p> <pre class="lang-sh prettyprint-override"><code>kubectl set env rs [REPLICASET_NAME] [ENV_VAR]=[VALUE] </code></pre> <p>And then terminating the pod, after which it will be recreated with the env var set accordingly.</p> <p>But it also states to</p> <blockquote> <p>Never do it on a production system.</p> <p>Never even do this on a dev environment without taking care in how it may impact your deployment workflow.</p> </blockquote> <p>Is that the only / best way to achieve my goal of quickly changing an env var in a running pod for debug purposes?</p>
<blockquote> <p>Is that the only / best way to achieve my goal of quickly changing an env var in a running pod for debug purposes?</p> </blockquote> <p>Short answer: Yes.</p> <p>Long answer: I've never used or read up on Rancher, but I suspect that it was also changing the ReplicaSet or Deployment template env var, which triggered a Pod update. It's really the only way to change an env var in a Pod. You can't change the env vars on a running container or a running Pod. You can't do that in Docker containers, and you can't do it in Kubernetes, so I assume that you can't do it in Rancher. You can only restart a Pod with a different spec. </p> <p>Why?</p> <p>Because containers are just processes running on the host machine. Once the process is started, it's not possible to change a process's environment without <a href="https://unix.stackexchange.com/questions/38205/change-environment-of-a-running-process">resorting to nasty hacks</a>.</p> <p>If you're just concerned about the warnings that state to not do this in dev or prod, I would say that the same warnings apply to the Rancher workflow you described, so if you were willing to take the risks there, it won't be any different here.</p>
<p>I've deployed my first app on my Kubernetes prod cluster a month ago.</p> <p>I could deploy my 2 services (front / back) from the gitlab registry.</p> <p>Now, I pushed a new docker image to the gitlab registry and would like to redeploy it in prod.</p> <p>Here is my deployment file:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
  labels:
    app: espace-client-client
  name: espace-client-client
  namespace: espace-client
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: espace-client-client
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: espace-client-client-env
        image: registry.gitlab.com/xxx/espace_client/client:latest
        name: espace-client-client
        ports:
        - containerPort: 3000
        resources: {}
      restartPolicy: Always
      imagePullSecrets:
      - name: gitlab-registry
</code></pre> <p>I have no clue what is inside <code>gitlab-registry</code>. I didn't do it myself, and the people who did it left the crew :( Nevertheless, I have all the permissions, so I only need to know what to put in the secret, and maybe delete it and recreate it.</p> <p>It seems that the secret is based on my .docker/config.json:</p> <pre><code>➜  espace-client git:(k8s) ✗ kubectl describe secrets gitlab-registry
Name:         gitlab-registry
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  174 bytes
</code></pre> <p>I tried to delete the existing secret and log out with </p> <pre><code>docker logout registry.gitlab.com
kubectl delete secret gitlab-registry
</code></pre> <p>Then login again:</p> <pre><code>docker login registry.gitlab.com -u myGitlabUser
Password:
Login Succeeded
</code></pre> <p>and pull the image with:</p> <pre><code>docker pull registry.gitlab.com/xxx/espace_client/client:latest
</code></pre> <p>which worked.</p> <p>The file <code>~/.docker/config.json</code> looks weird:</p> <pre><code>{
    "auths": {
        "registry.gitlab.com": {}
    },
    "HttpHeaders": {
        "User-Agent": "Docker-Client/18.09.6 (linux)"
    },
    "credsStore": "secretservice"
}
</code></pre> <p>It doesn't seem to contain any credentials... </p> <p>Then I recreated my secret:</p> <pre><code>kubectl create secret generic gitlab-registry \
    --from-file=.dockerconfigjson=/home/julien/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson
</code></pre> <p>I also tried to do: </p> <pre><code>kubectl create secret docker-registry gitlab-registry --docker-server=registry.gitlab.com --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt;
</code></pre> <p>and deployed again:</p> <pre><code>kubectl rollout restart deployment/espace-client-client -n espace-client
</code></pre> <p>but I still have the same error:</p> <pre><code>Error from server (BadRequest): container "espace-client-client" in pod "espace-client-client-6c8b88f795-wcrlh" is waiting to start: trying and failing to pull image
</code></pre>
<p>You have to update the <code>gitlab-registry</code> secret, because this item is what lets the kubelet pull the <em>protected</em> image using credentials.</p> <p>Delete the old secret with <code>kubectl -n yournamespace delete secret gitlab-registry</code> and recreate it with the correct credentials:</p> <pre><code>kubectl -n yournamespace create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD[ --docker-email=DOCKER_EMAIL] </code></pre> <p>where:</p> <ul> <li><code>DOCKER_REGISTRY_SERVER</code> is the GitLab Docker registry instance</li> <li><code>DOCKER_USER</code> is the username of the robot account used to pull images</li> <li><code>DOCKER_PASSWORD</code> is the password attached to the robot account</li> </ul> <p>You can omit <code>docker-email</code> since it's not mandatory (note the square brackets).</p>
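<p>Filled in with the registry and namespace from the question (the username and token here are placeholders you would replace with a real GitLab account or deploy token), the command would look roughly like:</p>

```shell
kubectl -n espace-client create secret docker-registry gitlab-registry \
  --docker-server=registry.gitlab.com \
  --docker-username=myGitlabUser \
  --docker-password=<personal-access-token-or-deploy-token>
```

<p>Note the secret name <code>gitlab-registry</code> and the namespace match what the Deployment's <code>imagePullSecrets</code> references.</p>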
<p>I am trying to use nginx as a reverse proxy with a simple default.conf. Here is my docker-compose file:</p> <pre><code>version: '3' services: authorizationmicroservice: image: gcr.io/root-booking-245613/authorizationmicroservice:v1 container_name: authorizationmicroservice restart: always labels: - "kompose.service.type=LoadBalancer" ports: - "3002:${PORT:-3002}" networks: - backend musicmicroservice: image: gcr.io/root-booking-245613/musicmicroservice:v1 container_name: musicmicroservice restart: always ports: - "3001:${PORT:-3001}" networks: - backend labels: - "kompose.service.type=LoadBalancer" nginx: image: nginx:latest networks: - backend ports: - "8080:${PORT:-8080}" volumes: - ./nginxProxy:/etc/nginx/conf.d depends_on: - authorizationmicroservice - musicmicroservice labels: - "kompose.service.type=LoadBalancer" networks: backend: </code></pre> <p>Everything works fine if I run <code>docker-compose up</code>, but when I try to deploy it with Kubernetes, I get the following error log in the nginx pod:</p> <pre><code>Warning Failed 34m kubelet, gke-hello-cluster-default-pool-8c57f061-7hd8 Error: failed to start container "nginx": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:424: container init caused \"rootfs_linux.go:58: mounting \\\"/home/artemda4/kubernetes-engine-samples/muniverse/home/artemda4/kubernetes-engine-samples/muniverse/nginxProxy/default.conf\\\" to rootfs \\\"/var/lib/docker/overlay2/751acda10027acdcca21d16df3be48197170c04dd3520cd7fa8aeb083b5b6bc1/merged\\\" at \\\"/var/lib/docker/overlay2/751acda10027acdcca21d16df3be48197170c04dd3520cd7fa8aeb083b5b6bc1/merged/etc/nginx/conf.d/default.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type </code></pre> <p>How should I mount volumes in Kubernetes? 
This is my nginx config; it is placed in <code>nginxProxy</code>:</p> <pre><code>server { listen 8080; client_max_body_size 50m; fastcgi_send_timeout 600; fastcgi_read_timeout 600; proxy_connect_timeout 600; proxy_send_timeout 600; proxy_read_timeout 600; send_timeout 600; root /srv/www/static; location /api/authorization { proxy_pass http://authorizationmicroservice:3002; } location /api/music { proxy_pass http://musicmicroservice:3001; } location /api/playlist { proxy_pass http://musicmicroservice:3001; } #this is where socket.io requests will be handled location /socket.io { proxy_pass http://musicmicroservice:3001/socket.io/; } } </code></pre>
<p>In Kubernetes, typically you store configuration data like this in a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a>. It looks like <a href="https://github.com/kubernetes/kompose/issues/1060" rel="nofollow noreferrer">Kompose won't automatically create a ConfigMap for you</a> so you need to create it by hand.</p> <p>I'd suggest <a href="http://kompose.io/user-guide/#kubernetes" rel="nofollow noreferrer">converting the Docker Compose YAML file to Kubernetes YAML files</a></p> <pre><code>kompose convert </code></pre> <p>The easiest thing to do here is to create a ConfigMap from your existing config file. (For longer-term use, I'd package it into a YAML file next to the other files Kompose created.)</p> <pre><code>kubectl create configmap nginx --from-file=./nginxProxy/ </code></pre> <p>In the generated <code>nginx-deployment.yaml</code> file you are looking for two things, one to make the ConfigMap available in the pod and one to mount the resulting storage into the container.</p> <pre class="lang-yaml prettyprint-override"><code># Inside the deployment template spec: volumes: - name: nginx-config configMap: name: nginx # matches `kubectl create` command containers: - name: nginx # other standard container settings volumeMounts: - name: nginx-config # matches `volumes:` above mountPath: /etc/nginx/conf.d </code></pre> <p>Then when you run <code>kubectl apply -f .</code> to install these objects into the cluster, it will read the configuration from the in-cluster ConfigMap, and not depend on any host-specific paths (which Kompose isn't generating anyways).</p>
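<p>For the longer-term YAML approach mentioned above, the same ConfigMap could be written out by hand, roughly like this (an abbreviated sketch; the <code>data</code> key must match the filename nginx expects under <code>conf.d</code>):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx            # matches the name referenced in the volumes: section
data:
  default.conf: |
    server {
      listen 8080;
      location /api/authorization {
        proxy_pass http://authorizationmicroservice:3002;
      }
      # ... the remaining location blocks from the original default.conf
    }
```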
<p>I have the <em>postgres</em> container running in a <code>Pod</code> on <strong>GKE</strong> and a <code>PersistentVolume</code> set up to store the data. However, all of the data in the database is lost if the cluster reboots or if the <code>Pod</code> is deleted. </p> <p>If I run <code>kubectl delete &lt;postgres_pod&gt;</code> to delete the existing <code>Pod</code>, the newly created <code>Pod</code> (created by <em>kubernetes</em> to replace the deleted one) does not have the data that the database had before the <code>Pod</code> was deleted.</p> <p>Here are the <em>yaml</em> files I used to deploy <em>postgres</em>. </p> <pre><code>apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-storage parameters: type: pd-standard provisioner: kubernetes.io/gce-pd reclaimPolicy: Retain volumeBindingMode: Immediate --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-volume-claim spec: storageClassName: custom-storage accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> <p>deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: postgres spec: selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:11.5 resources: {} ports: - containerPort: 5432 env: - name: POSTGRES_DB value: "dbname" - name: POSTGRES_USER value: "user" - name: POSTGRES_PASSWORD value: "password" volumeMounts: - mountPath: /var/lib/postgresql/ name: postgresdb volumes: - name: postgresdb persistentVolumeClaim: claimName: postgres-volume-claim </code></pre> <p>I double checked that the <code>persistentVolumeReclaimPolicy</code> has value <code>Retain</code>. </p> <p>What am I missing?</p>
<p>Is the cluster creating a new volume each time you delete a pod? Check with <code>kubectl get pv</code>. </p> <p>Is this a multi-zone cluster? Your storage class is not provisionig regional disks, so you might be getting a new disk when the pod moves from one zone to another.</p> <p>Possibly related to your problem, the postgres container reference recommends mounting at <code>/var/lib/postgresql/data/pgdata</code> and setting the <code>PGDATA</code> env variable: <a href="https://hub.docker.com/_/postgres#pgdata" rel="noreferrer">https://hub.docker.com/_/postgres#pgdata</a></p>
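<p>If you follow that recommendation, the relevant parts of the Deployment from the question would change roughly like this (a sketch; only the affected fields are shown):</p>

```yaml
containers:
- name: postgres
  image: postgres:11.5
  env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata   # point postgres at a subdirectory of the mount
  volumeMounts:
  - mountPath: /var/lib/postgresql/data      # mount the volume at the data directory
    name: postgresdb
```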
<p>I'm writing a custom controller with Kubebuilder framework, in one method, I got an object of type <a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime#Object" rel="nofollow noreferrer"><code>runtime.Object</code></a>, I know I should be able to convert it to the specific type say <code>MyCustomResource</code>, but I cannot figure out how from the doc.</p>
<p>It should be as easy as this:</p> <pre class="lang-golang prettyprint-override"><code>func convertToMyCustomResource(obj runtime.Object) *v1alpha1.MyCustomResource { myobj := obj.(*v1alpha1.MyCustomResource) return myobj } </code></pre> <p>If this produces an error (e.g. Impossible type assertion), make sure <code>MyCustomResource</code> satisfies the <a href="https://godoc.org/k8s.io/apimachinery/pkg/runtime#Object" rel="nofollow noreferrer"><code>runtime.Object</code></a> interface; i.e.</p> <ol> <li><p>Run the controller-gen tool to generate the <code>DeepCopyObject</code> method</p> <pre><code>go run vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go paths=./api/... object </code></pre></li> <li><p>Add the <code>"k8s.io/apimachinery/pkg/apis/meta/v1".TypeMeta</code> field to the <code>MyCustomResource</code> struct, which implements the <code>GetObjectKind</code> method.</p></li> </ol> <pre class="lang-golang prettyprint-override"><code> import ( metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) type MyCustomResource struct { metav1.TypeMeta `json:",inline"` // ... other stuff } </code></pre>
<p>I'm running Prometheus in a kubernetes cluster. All is running fine and my UI pods are counting visitors. </p> <p><a href="https://i.stack.imgur.com/Eb8l4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Eb8l4.png" alt="enter image description here"></a></p> <p>Please ignore the title; what you see here is the query at the bottom of the image. It's a counter. The gaps in the graph are due to pods restarting. I have two pods running simultaneously!</p> <p>Now suppose I would like to count the total number of visitors, so I need to sum over all the pods:</p> <p><a href="https://i.stack.imgur.com/FA5qY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FA5qY.png" alt="enter image description here"></a></p> <p>This is what I expect considering the first image, right?</p> <p>However, I don't want the graph to drop when a pod restarts. I would like to have something cumulative over a specified amount of time (somehow ignoring pods restarting). Hope this makes sense. Any suggestions?</p> <p><strong>UPDATE</strong></p> <p>Below it is suggested to do the following:</p> <p><a href="https://i.stack.imgur.com/H1u52.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H1u52.png" alt="enter image description here"></a></p> <p>It's a bit hard to see because I've plotted everything there, but the suggested answer <code>sum(rate(NumberOfVisitors[1h])) * 3600</code> is the continuous green line. What I don't understand now is why its value is 3. Also, why does the value increase after 21:55, when I can see some values before that? </p> <p>As the approach seems to be OK, I noticed that the increase is actually 3, going from 1 to 4. In the graph below I've used just one time series to reduce noise.</p> <p><a href="https://i.stack.imgur.com/DZrmm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DZrmm.png" alt="enter image description here"></a></p>
<p>Rate, then sum, then multiply by the time range in seconds. That will handle rollovers on counters too.</p>
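<p>With the metric name from the question, that recipe comes out as the following query, where 3600 is the <code>[1h]</code> range expressed in seconds:</p>

```
sum(rate(NumberOfVisitors[1h])) * 3600
```

<p><code>rate()</code> treats a drop in a counter as a reset and compensates for it, which is why this expression keeps rising across pod restarts instead of dipping.</p>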
<p>We have a cluster where nodes are running out of resources, leading to slowness and over-committing issues. This is forcing us to restart the nodes frequently. We are planning to enforce a node allocatable resource policy to split the available CPU, memory, and ephemeral storage between the system, the kubelet, and application pods.</p> <p>I came across some guidelines on allocatable resource calculation <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture" rel="nofollow noreferrer">here</a>. </p> <p>They do not specify what the split between system and kubelet resources should be. Also, we are on OpenShift, so I am not sure how much of this is applicable.</p>
<p>As you mentioned, you are using <code>OpenShift</code>, while the docs you provided are from GCP. Default requirements and parameters might be different due to cloud provider specifics.</p> <p>Unfortunately I am not an <code>OpenShift</code> user, but you can find in the OpenShift <a href="https://docs.openshift.com/enterprise/3.2/admin_guide/allocating_node_resources.html" rel="nofollow noreferrer">documentation</a> that: </p> <p>Resources reserved for node components are based on two node settings: <code>kube-reserved</code> and <code>system-reserved</code>.</p> <p>You can set these in the <code>kubeletArguments</code> section of the node configuration file (the <em>/etc/origin/node/node-config.yaml</em> file by default) using a set of <code>key=value</code> pairs (e.g., <code>cpu=200m,memory=512Mi</code>). </p> <p><strong>How to compute Allocatable Resources?</strong></p> <p>The allocatable amount of a resource is computed based on the following formula:</p> <p><code>[Allocatable] = [Node Capacity] - [kube-reserved] - [system-reserved]</code></p> <p>If <code>[Allocatable]</code> is negative, it is set to 0.</p> <p>Please check these OpenShift docs: <a href="https://docs.okd.io/latest/admin_guide/allocating_node_resources.html" rel="nofollow noreferrer">Allocating node resources</a>, <a href="https://blog.openshift.com/full-cluster-capacity-management-monitoring-openshift/" rel="nofollow noreferrer">Capacity management</a>, <a href="https://docs.openshift.com/container-platform/3.7/scaling_performance/cluster_limits.html" rel="nofollow noreferrer">Cluster Limits</a>, <a href="https://docs.openshift.com/enterprise/3.1/dev_guide/limits.html" rel="nofollow noreferrer">Resource Limits</a>.</p> <p>A lot depends on exactly which pods/images you want to run. 
Some images might request 0.1 CPU but others might need 1 CPU to start.</p> <p>You can limit this by creating a <a href="https://docs.openshift.com/online/dev_guide/compute_resources.html" rel="nofollow noreferrer">Quota</a> and setting <code>Pod</code> <a href="https://www.magalix.com/blog/kubernetes-resource-requests-and-limits-101" rel="nofollow noreferrer">requests and limits</a>.</p> <p>Please keep in mind that you can always check the current requests/limits of each Pod, under <code>Containers.&lt;containerName&gt;.Requests</code>:</p> <pre><code>$ oc describe pod &lt;pod-name&gt; </code></pre> <p>Or the requested resources / limits on a node:</p> <pre><code>$ oc describe node &lt;node-name&gt; </code></pre> <p>At the bottom of this description you should see all pods' requests and limits:</p> <pre><code>Non-terminated Pods: (6 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- default nginx-7cdbd8cdc9-b94r9 100m (10%) 0 (0%) 0 (0%) 0 (0%) 6m2s default nginx-7cdbd8cdc9-nlsw7 100m (10%) 0 (0%) 0 (0%) 0 (0%) 6m2s kube-system fluentd-gcp-v3.2.0-lwnqn 100m (10%) 1 (106%) 200Mi (7%) 500Mi (18%) 5h22m kube-system kube-proxy-gke-stc1-default-pool-094e5c74-4dzj 100m (10%) 0 (0%) 0 (0%) 0 (0%) 5h22m kube-system prometheus-to-sd-lbj57 1m (0%) 3m (0%) 20Mi (0%) 20Mi (0%) 5h22m kube-system traefik-749d86f748-frs7q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 158m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 401m (42%) 1003m (106%) memory 220Mi (8%) 520Mi (19%) ephemeral-storage 0 (0%) 0 (0%) attachable-volumes-gce-pd 0 0 </code></pre> <p>Hope it helps.</p>
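<p>In the node configuration file mentioned above, the two reserved settings could be declared roughly like this (a sketch; the values are purely illustrative, not a sizing recommendation, and the exact syntax may vary by OpenShift version):</p>

```yaml
kubeletArguments:
  kube-reserved:
    - "cpu=200m,memory=512Mi"
  system-reserved:
    - "cpu=200m,memory=512Mi"
```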
<p>I have a Ansible <code>group_vars</code> directory with the following file within it:</p> <pre><code>$ cat inventory/group_vars/env1 ... ... ldap_config: !vault | $ANSIBLE_VAULT;1.1;AES256 31636161623166323039356163363432336566356165633232643932623133643764343134613064 6563346430393264643432636434356334313065653537300a353431376264333463333238383833 31633664303532356635303336383361386165613431346565373239643431303235323132633331 3561343765383538340a373436653232326632316133623935333739323165303532353830386532 39616232633436333238396139323631633966333635393431373565643339313031393031313836 61306163333539616264353163353535366537356662333833653634393963663838303230386362 31396431636630393439306663313762313531633130326633383164393938363165333866626438 ... ... </code></pre> <p>This Ansible encrypted string has a Kubernetes secret encapsulated within it. A base64 blob that looks something like this:</p> <pre><code>IyMKIyBIb3N0IERhdGFiYXNlCiMKIyBsb2NhbGhvc3QgaXMgdXNlZCB0byBjb25maWd1cmUgdGhlIGxvb3BiYWNrIGludGVyZmFjZQojIHdoZW4gdGhlIHN5c3RlbSBpcyBib290aW5nLiAgRG8gbm90IGNoYW5nZSB0aGlzIGVudHJ5LgojIwoxMjcuMC4wLjEJbG9jYWxob3N0CjI1NS4yNTUuMjU1LjI1NQlicm9hZGNhc3Rob3N0Cjo6MSAgICAgICAgICAgICBsb2NhbGhvc3QKIyBBZGRlZCBieSBEb2NrZXIgRGVza3RvcAojIFRvIGFsbG93IHRoZSBzYW1lIGt1YmUgY29udGV4dCB0byB3b3JrIG9uIHRoZSBob3N0IGFuZCB0aGUgY29udGFpbmVyOgoxMjcuMC4wLjEga3ViZXJuZXRlcy5kb2NrZXIuaW50ZXJuYWwKIyBFbmQgb2Ygc2VjdGlvbgo= </code></pre> <p>How can I decrypt this in a single CLI?</p>
<p>We can use an Ansible adhoc command to retrieve the variable of interest, <code>ldap_config</code>. To start we're going to use this adhoc to retrieve the Ansible encrypted vault string:</p> <pre><code>$ ansible -i "localhost," all \ -m debug \ -a 'msg="{{ ldap_config }}"' \ --vault-password-file=~/.vault_pass.txt \ -e@inventory/group_vars/env1 localhost | SUCCESS =&gt; { "msg": "ABCD......." </code></pre> <p>Make note that we're: </p> <ul> <li>using the <code>debug</code> module and having it print the variable, <code>msg={{ ldap_config }}</code></li> <li>giving <code>ansible</code> the path to the secret to decrypt encrypted strings</li> <li>using the notation <code>-e@&lt; ...path to file...&gt;</code> to pass the file with the encrypted vault variables</li> </ul> <p>Now we can use Jinja2 filters to do the rest of the parsing:</p> <pre><code>$ ansible -i "localhost," all \ -m debug \ -a 'msg="{{ ldap_config | b64decode | from_yaml }}"' \ --vault-password-file=~/.vault_pass.txt \ -e@inventory/group_vars/env1 localhost | SUCCESS =&gt; { "msg": { "apiVersion": "v1", "bindDN": "uid=readonly,cn=users,cn=accounts,dc=mydom,dc=com", "bindPassword": "my secret password to ldap", "ca": "", "insecure": true, "kind": "LDAPSyncConfig", "rfc2307": { "groupMembershipAttributes": [ "member" ], "groupNameAttributes": [ "cn" ], "groupUIDAttribute": "dn", "groupsQuery": { "baseDN": "cn=groups,cn=accounts,dc=mydom,dc=com", "derefAliases": "never", "filter": "(objectclass=groupOfNames)", "scope": "sub" }, "tolerateMemberNotFoundErrors": false, "tolerateMemberOutOfScopeErrors": false, "userNameAttributes": [ "uid" ], "userUIDAttribute": "dn", "usersQuery": { "baseDN": "cn=users,cn=accounts,dc=mydom,dc=com", "derefAliases": "never", "scope": "sub" } }, "url": "ldap://192.168.1.10:389" } } </code></pre> <p><strong>NOTE:</strong> The above section <code>-a 'msg="{{ ldap_config | b64decode | from_yaml }}"</code> is what's doing the heavy lifting in terms of converting from Base64 to 
YAML.</p> <h3>References</h3> <ul> <li><a href="https://stackoverflow.com/questions/37652464/how-to-run-ansible-without-hosts-file">How to run Ansible without hosts file</a></li> <li><a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#filters-for-formatting-data" rel="nofollow noreferrer">https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#filters-for-formatting-data</a></li> <li><a href="https://stackoverflow.com/questions/39047657/base64-decode-string-in-jinja">Base64 Decode String in jinja</a></li> <li><a href="https://stackoverflow.com/questions/43467180/how-to-decrypt-string-with-ansible-vault-2-3-0">How to decrypt string with ansible-vault 2.3.0</a></li> </ul>
<p>Assuming I have a <code>Deployment</code> with a specific value set to the <code>.spec.strategy.rollingUpdate.maxUnavailable</code> field.</p> <p>Then I deploy a <code>PodDisruptionBudget</code> attached to the deployment above, setting its <code>spec.maxUnavailable</code> field to a value different to the above.</p> <p>Which one will prevail?</p>
<p>By interpreting the documentation, it seems that it depends on the event.</p> <p>For a rolling update, the Deployment's <code>maxUnavailable</code> will be in effect, even if the PodDisruptionBudget specifies a smaller value.</p> <p>But for an eviction, the PodDisruptionBudget's <code>maxUnavailable</code> will prevail, even if the Deployment specifies a smaller value.</p> <hr> <p>The documentation does not explicitly compare these two settings, but from the way the documentation is written, it can be deduced that these are separate settings for different events that don't interact with each other.</p> <p>For example:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="noreferrer">Updating a Deployment</a></li> <li>Output of <code>kubectl explain deploy.spec.strategy.rollingUpdate.maxUnavailable</code></li> <li><a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="noreferrer">Specifying a PodDisruptionBudget</a></li> <li>Output of <code>kubectl explain pdb.spec.maxUnavailable</code></li> </ul> <p>Also, this is more in the spirit of how Kubernetes works. The Deployment Controller is not going to read a field of a PodDisruptionBudget, and vice versa.</p> <p>But to be really sure, you would just need to try it out.</p>
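<p>To make the separation concrete, the two fields live on two different objects and are read by two different mechanisms (a sketch; metadata, selectors, and other required fields omitted):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    rollingUpdate:
      maxUnavailable: 1   # consulted by the Deployment controller during rollouts
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
spec:
  maxUnavailable: 1       # consulted by the eviction API (e.g. kubectl drain)
```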
<p>I am new to fluentd. I have configured the basic fluentd setup I need and deployed this to my kubernetes cluster as a daemon set. I'm seeing logs shipped to my 3rd party logging solution. However I now want to deal with some logs that are coming in as multiple entries when they really should be one. The logs from the node look like they are json and are formatted like</p> <pre><code>{"log":"2019-09-23 18:54:42,102 [INFO] some message \n","stream":"stderr","time":"2019-09-23T18:54:42.102Z"} {"log": "another message \n","stream":"stderr","time":"2019-09-23T18:54:42.102Z"} </code></pre> <p>I have a config map that looks like </p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: fluent-config-map namespace: logging labels: k8s-app: fluentd-logzio data: fluent.conf: |- @include "#{ENV['FLUENTD_SYSTEMD_CONF'] || 'systemd'}.conf" @include kubernetes.conf @include conf.d/*.conf &lt;match fluent.**&gt; # this tells fluentd to not output its log on stdout @type null &lt;/match&gt; # here we read the logs from Docker's containers and parse them &lt;source&gt; @id fluentd-containers.log @type tail path /var/log/containers/*.log pos_file /var/log/es-containers.log.pos time_format %Y-%m-%dT%H:%M:%S.%NZ tag raw.kubernetes.* format json read_from_head true &lt;/source&gt; # Detect exceptions in the log output and forward them as one log entry. 
&lt;match raw.kubernetes.**&gt; @id raw.kubernetes @type detect_exceptions remove_tag_prefix raw message log stream stream multiline_flush_interval 5 max_bytes 500000 max_lines 1000 &lt;/match&gt; # Enriches records with Kubernetes metadata &lt;filter kubernetes.**&gt; @id filter_kubernetes_metadata @type kubernetes_metadata &lt;/filter&gt; &lt;match kubernetes.**&gt; @type logzio_buffered @id out_logzio endpoint_url "https://listener-ca.logz.io?token=####" output_include_time true output_include_tags true &lt;buffer&gt; # Set the buffer type to file to improve the reliability and reduce the memory consumption @type file path /var/log/fluentd-buffers/stackdriver.buffer # Set queue_full action to block because we want to pause gracefully # in case of the off-the-limits load instead of throwing an exception overflow_action block # Set the chunk limit conservatively to avoid exceeding the GCL limit # of 10MiB per write request. chunk_limit_size 2M # Cap the combined memory usage of this buffer and the one below to # 2MiB/chunk * (6 + 2) chunks = 16 MiB queue_limit_length 6 # Never wait more than 5 seconds before flushing logs in the non-error case. flush_interval 5s # Never wait longer than 30 seconds between retries. retry_max_interval 30 # Disable the limit on the number of retries (retry forever). retry_forever true # Use multiple threads for processing. flush_thread_count 2 &lt;/buffer&gt; &lt;/match&gt; </code></pre> <p>My question is how do I get these log messages shipped as one single entry instead of multiple?</p>
<p>There are at least two ways:</p> <h1><code>multiline</code> plugin</h1> <p>Thanks to <a href="https://stackoverflow.com/users/9150934/rickerp">@rickerp</a>, he suggested <a href="https://docs.fluentd.org/parser/multiline" rel="nofollow noreferrer"><code>multiline</code></a> plugin.</p> <blockquote> <p>The multiline parser plugin parses multiline logs. This plugin is the multiline version of regexp parser.</p> <p>The multiline parser parses log with formatN and format_firstline parameters. format_firstline is for detecting the start line of the multiline log. formatN, where N's range is [1..20], is the list of Regexp format for multiline log.</p> <p>Unlike other parser plugins, this plugin needs special code in input plugin e.g. handle format_firstline. So, currently, in_tail plugin works with multiline but other input plugins do not work with it.</p> </blockquote> <h1><code>fluent-plugin-concat</code> plugin</h1> <p>As per <a href="https://docs.fluentd.org/container-deployment/docker-logging-driver#additional-step-2-concatenate-multiple-lines-log-messages" rel="nofollow noreferrer"><code>fluentd</code> documentation</a>, <a href="https://github.com/fluent-plugins-nursery/fluent-plugin-concat" rel="nofollow noreferrer"><code>fluent-plugin-concat</code></a> solves this:</p> <blockquote> <h3>Concatenate multiple lines log messages</h3> <p>Application log is stored into <code>&quot;log&quot;</code> field in the records. 
You can concatenate these logs by using <a href="https://github.com/fluent-plugins-nursery/fluent-plugin-concat" rel="nofollow noreferrer">fluent-plugin-concat</a> filter before send to destinations.</p> <pre><code>&lt;filter docker.**&gt; @type concat key log stream_identity_key container_id multiline_start_regexp /^-e:2:in `\/'/ multiline_end_regexp /^-e:4:in/ &lt;/filter&gt; </code></pre> <p>Original events:</p> <pre><code>2016-04-13 14:45:55 +0900 docker.28cf38e21204: {&quot;container_id&quot;:&quot;28cf38e212042225f5f80a56fac08f34c8f0b235e738900c4e0abcf39253a702&quot;,&quot;container_name&quot;:&quot;/romantic_dubinsky&quot;,&quot;source&quot;:&quot;stdout&quot;,&quot;log&quot;:&quot;-e:2:in `/'&quot;} 2016-04-13 14:45:55 +0900 docker.28cf38e21204: {&quot;source&quot;:&quot;stdout&quot;,&quot;log&quot;:&quot;-e:2:in `do_division_by_zero'&quot;,&quot;container_id&quot;:&quot;28cf38e212042225f5f80a56fac08f34c8f0b235e738900c4e0abcf39253a702&quot;,&quot;container_name&quot;:&quot;/romantic_dubinsky&quot;} 2016-04-13 14:45:55 +0900 docker.28cf38e21204: {&quot;source&quot;:&quot;stdout&quot;,&quot;log&quot;:&quot;-e:4:in `&lt;main&gt;'&quot;,&quot;container_id&quot;:&quot;28cf38e212042225f5f80a56fac08f34c8f0b235e738900c4e0abcf39253a702&quot;,&quot;container_name&quot;:&quot;/romantic_dubinsky&quot;} </code></pre> <p>Filtered events:</p> <pre><code>2016-04-13 14:45:55 +0900 docker.28cf38e21204: {&quot;container_id&quot;:&quot;28cf38e212042225f5f80a56fac08f34c8f0b235e738900c4e0abcf39253a702&quot;,&quot;container_name&quot;:&quot;/romantic_dubinsky&quot;,&quot;source&quot;:&quot;stdout&quot;,&quot;log&quot;:&quot;-e:2:in `/'\n-e:2:in `do_division_by_zero'\n-e:4:in `&lt;main&gt;'&quot;} </code></pre> </blockquote> <p>With the plugin, you'll want to fix some regexes.</p>
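<p>Adapted to the log format in the question, where each new entry begins with a timestamp, the filter could look roughly like this (the start regex is an assumption based on the sample lines, and the filter should sit between the source and the output match):</p>

```
<filter raw.kubernetes.**>
  @type concat
  key log
  multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/
</filter>
```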
<p>New to kubernetes. I was following a tutorial on kubernetes the other day. I forgot what I was doing. Running docker ps shows many containers of k8s*.</p> <pre><code>$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3ce38bbf370b f3591b2cb223 "/api-server --kubec…" 3 hours ago Up 3 hours k8s_compose_compose-api-57ff65b8c7-cc6qf_docker_460bc96e-dcfe-11e9-9213-025000000001_6 222239366ae5 eb516548c180 "/coredns -conf /etc…" 3 hours ago Up 3 hours k8s_coredns_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_6 0e4a5a5c23bd eb516548c180 "/coredns -conf /etc…" 3 hours ago Up 3 hours k8s_coredns_coredns-fb8b8dccf-h7tvr_kube-system_35edfd50-dcfe-11e9-9213-025000000001_6 332d3d26c082 9946f563237c "kube-apiserver --ad…" 3 hours ago Up 3 hours k8s_kube-apiserver_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_4 5778a63798ab k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-apiserver-docker-desktop_kube-system_7c4f3d43558e9fadf2d2b323b2e78235_3 a0a26d6a2d08 2c4adeb21b4f "etcd --advertise-cl…" 3 hours ago Up 3 hours k8s_etcd_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_3 e6038e717c64 ac2ce44462bc "kube-controller-man…" 3 hours ago Up 3 hours k8s_kube-controller-manager_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_4 10e962e90703 004666307c5b "/usr/local/bin/kube…" 3 hours ago Up 3 hours k8s_kube-proxy_kube-proxy-pq4f7_kube-system_35ac91f0-dcfe-11e9-9213-025000000001_4 21b4a7aa37d0 953364a3ae7a "kube-scheduler --bi…" 3 hours ago Up 3 hours k8s_kube-scheduler_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_4 d9447c41bc55 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-proxy-pq4f7_kube-system_35ac91f0-dcfe-11e9-9213-025000000001_4 65248416150d k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-controller-manager-docker-desktop_kube-system_9c58c6d32bd3a2d42b8b10905b8e8f54_3 
4afff5745b79 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_kube-scheduler-docker-desktop_kube-system_124f5bab49bf26c80b1c1be19641c3e8_3 d6db038ea9b3 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_etcd-docker-desktop_kube-system_3773efb8e009876ddfa2c10173dba95e_3 9ca30180ab45 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_compose-api-57ff65b8c7-cc6qf_docker_460bc96e-dcfe-11e9-9213-025000000001_4 338d226f12d9 a8c3d87a58e7 "/compose-controller…" 3 hours ago Up 3 hours k8s_compose_compose-6c67d745f6-9v5k5_docker_461b37ab-dcfe-11e9-9213-025000000001_3 6e23ff5c4b86 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-fb8b8dccf-7vp79_kube-system_35ecd610-dcfe-11e9-9213-025000000001_5 258ced5c1498 k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_coredns-fb8b8dccf-h7tvr_kube-system_35edfd50-dcfe-11e9-9213-025000000001_4 0ee3d792d79e k8s.gcr.io/pause:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_compose-6c67d745f6-9v5k5_docker_461b37ab-dcfe-11e9-9213-025000000001_4 </code></pre> <p>I also ran kubectl with --namespace provided. When I only execute kubectl get pods, it says no resource found.</p> <pre><code>$ kubectl get pods --namespace kube-system NAME READY STATUS RESTARTS AGE coredns-5644d7b6d9-frlhd 1/1 Running 1 9m30s coredns-5644d7b6d9-xmdtg 1/1 Running 1 9m30s etcd-minikube 1/1 Running 1 8m29s kube-addon-manager-minikube 1/1 Running 1 8m23s kube-apiserver-minikube 1/1 Running 1 8m42s kube-controller-manager-minikube 1/1 Running 1 8m22s kube-proxy-48kxn 1/1 Running 1 9m30s kube-scheduler-minikube 1/1 Running 1 8m32s storage-provisioner 1/1 Running 1 9m27s </code></pre> <p>I also tried stopping the containers using <code>docker stop</code>. It stopped but few seconds later, the containers started again.</p> <p>I also ran minikube delete but it only deleted minikube. The command docker ps still showed the containers.</p> <p>I'd like to start from beginning again.</p>
<p>Don't try to delete the pause containers.</p> <pre><code>k8s.gcr.io/pause:3.1 "/pause" </code></pre> <p>You can run multiple containers in a k8s pod, and they share the same network namespace.</p> <p>The pause containers are the mechanism that lets them share that network namespace.</p> <p>That's <a href="https://stackoverflow.com/questions/48651269/what-are-the-pause-containers">how</a> a k8s pod is created.</p> <p>For more info, please go through <a href="https://www.ianlewis.org/en/almighty-pause-container" rel="nofollow noreferrer">this</a>.</p>
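<p>As an illustration of that shared network namespace, here is a hedged sketch (the pod and container names are made up): two containers in one pod can reach each other over <code>localhost</code>, which works precisely because both join the network namespace held by the pod's pause container.</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx        # listens on port 80 inside the pod's network namespace
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```

<p>After applying it, <code>kubectl exec shared-netns-demo -c sidecar -- wget -qO- localhost:80</code> should return the nginx welcome page, served by the other container in the same pod.</p>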
<p>I am new to Kubernetes.<br> Over the last few days, I did a performance test for an API Gateway to compare both environments, Kubernetes and Native.<br> I expected to get similar results in both types of tests, but the second test showed a 20% tps loss.</p> <p>How to stress: RESTful API requests through the API Gateway.<br> Env 1: <strong>Native</strong> (CentOS) - result: approximately <strong>1700 tps</strong><br> Env 2: <strong>Kubernetes</strong> - result: approximately <strong>1350 tps</strong><br> * Hardware specifications were the same.</p> <p>I wonder if this is a common result.<br> In my opinion, 20% is too much loss. </p> <p>Sorry for my stupid question, but I am confused. </p>
<p>According to the <a href="https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/" rel="nofollow noreferrer">k8s documentation</a>:</p> <blockquote> <p>Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.</p> <p>The name Kubernetes originates from Greek, meaning helmsman or pilot.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Kubernetes" rel="nofollow noreferrer">Wikipedia</a> also says:</p> <blockquote> <p>Kubernetes (κυβερνήτης, Greek for "governor", "helmsman" or "captain")</p> </blockquote> <p>Obviously, a captain's strengths lie not in speed but in the predictability of the entire complex system as a whole.</p> <p>K8s strengths are:</p> <ul> <li>Service discovery and load balancing </li> <li>Storage orchestration </li> <li>Automated rollouts and rollbacks </li> <li>Automatic bin packing </li> <li>Self-healing </li> <li>Secret and configuration management </li> </ul> <p>As you can see, performance gains are not mentioned here. Some throughput loss is expected, since requests typically traverse extra layers on their way to a pod (kube-proxy NAT rules, an overlay network), though how much is lost depends heavily on your network setup.</p> <p>P.S. There is a <a href="https://news.ycombinator.com/item?id=21047676" rel="nofollow noreferrer">great comment</a> in an HN discussion about the Serverless hype (TL;DR: slower and costlier):</p> <blockquote> <p>It's concerning how typical the hype machine is in IT. I believe Serverless has its place and value. 
So does Kubernetes or many other <em>products</em> that are often discussed on HN.</p> <p>But let's be clear, we are talking about commercial products and there is a capital interest in <em>selling</em> these services to all of us devs and engineers.</p> <p>So while use cases exists and benefits wait to be reaped, as a consultant I strongly feel that we should be MUCH more insistent in pointing out when a product does not make sense instead of jumping onto the hype train.</p> <p>I am basically surrounded by "LETS TRANSFORM EVERYTHING TO KUBERNETES THIS WEEK!" exclamations, conferences are basically "DID YOU ALREADY HEAR ABOUT KUBERNETES?" and so on ...</p> <p>It reminds me of Ruby on Rails, a mature and well-developed framework used by global tech firms (correct me if wrong: Airbnb, ~Stack Overflow~, Github) to handle parts of their backend in 2019. But for half a decade now even tiny companies have been screaming around about <em>FancyHTTPTechThisYear (tm)</em> because scale while reporting 1/500th of the traffic of some famous Rails users.</p> <p>This is not engineering with objectives in mind, it's more akin to the gaming community yelling for a new console.</p> </blockquote>
<p>I am trying to create a rook cluster inside k8s cluster.</p> <p>Set up - 1 master node, 1 worker node</p> <p>These are the steps I have followed</p> <pre><code>Master node: sudo kubeadm init --pod-network-cidr=10.244.0.0/16 sudo sysctl net.bridge.bridge-nf-call-iptables=1 sudo sysctl net.bridge.bridge-nf-call-ip6tables=1 mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/32a765fd19ba45b387fdc5e3812c41fff47cfd55/Documentation/kube-flannel.yml kubeadm token create --print-join-command Worker node: kubeadm join {master_ip_address}:6443 --token {token} --discovery-token-ca-cert-hash {hash} --apiserver-advertise-address={worker_private_ip} Master node - Install rook - (reference - https://rook.github.io/docs/rook/master/ceph-quickstart.html): kubectl create -f ceph/common.yaml kubectl create -f ceph/operator.yaml kubectl create -f ceph/cluster-test.yaml </code></pre> <p>Error while creating rook-ceph-operator pod:</p> <pre><code>(combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4a901f12e5af5340f2cc48a976e10e5c310c01a05a4a47371f766a1a166c304f" network for pod "rook-ceph-operator-fdfbcc5c5-jccc9": networkPlugin cni failed to set up pod "rook-ceph-operator-fdfbcc5c5-jccc9_rook-ceph" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.1.1/24 </code></pre> <p><a href="https://i.stack.imgur.com/ExUDb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ExUDb.png" alt="enter image description here"></a></p> <p>Can anybody help me with this issue?</p>
<p>This issue starts if you did a <code>kubeadm reset</code> and after that reinitialized Kubernetes with <code>kubeadm init</code>: the old <code>cni0</code> bridge keeps its previous address, which is exactly what the error message reports. Clean up the stale network state:</p> <pre><code>kubeadm reset systemctl stop kubelet systemctl stop docker rm -rf /var/lib/cni/ rm -rf /var/lib/kubelet/* rm -rf /etc/cni/ ifconfig cni0 down ifconfig flannel.1 down ifconfig docker0 down </code></pre> <p>After this, start docker and kubelet again, then rerun kubeadm.</p> <p><strong>Workaround</strong></p> <p>You can also try this as a simple, easy solution:</p> <pre><code>ip link delete cni0 ip link delete flannel.1 </code></pre> <p>Which interfaces to delete depends on which network plugin you are using inside k8s.</p>
<p>I install kubernetes 1000 times but now it does not work. I install kubectl kubeadm kubelet then </p> <pre><code> sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=185.73.114.92 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml </code></pre> <p>but I see coredns is in pending state</p> <pre><code>kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-5644d7b6d9-492q4 0/1 Pending 0 13m kube-system coredns-5644d7b6d9-cvwjg 0/1 Pending 0 13m kube-system etcd-amghezi 1/1 Running 0 12m kube-system kube-apiserver-amghezi 1/1 Running 0 12m kube-system kube-controller-manager-amghezi 1/1 Running 0 12m kube-system kube-flannel-ds-amd64-fkxnf 1/1 Running 0 12m kube-system kube-proxy-pspw2 1/1 Running 0 13m kube-system kube-scheduler-amghezi 1/1 Running 0 12m </code></pre> <p>and then I get describe of coredns</p> <pre><code>kubectl describe pods coredns-5644d7b6d9-492q4 -n kube-system Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling &lt;unknown&gt; default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. Warning FailedScheduling &lt;unknown&gt; default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. 
</code></pre> <p>I taint the node by </p> <pre><code>kubectl taint nodes amghezi node-role.kubernetes.io/master- </code></pre> <p>It did not work I see at </p> <pre><code>journalctl -xe message:docker: network plugin is not ready: cni config uninitialized service docker status ● docker.service - Docker Application Container Engine Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled) Active: active (running) since Sun 2019-09-22 17:29:45 CEST; 34min ago Docs: https://docs.docker.com Main PID: 987 (dockerd) Tasks: 20 CGroup: /system.slice/docker.service └─987 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock Sep 22 17:29:45 ubuntu systemd[1]: Started Docker Application Container Engine. Sep 22 17:29:45 ubuntu dockerd[987]: time="2019-09-22T17:29:45.728818467+02:00" level=info msg="API listen on /var/run/docker.sock" Sep 22 17:29:45 ubuntu dockerd[987]: time="2019-09-22T17:29:45.757401709+02:00" level=warning msg="failed to retrieve runc version: unknown output format: runc version spec: 1.0.1-dev\n" Sep 22 17:29:45 ubuntu dockerd[987]: time="2019-09-22T17:29:45.786776798+02:00" level=warning msg="failed to retrieve runc version: unknown output format: runc version spec: 1.0.1-dev\n" Sep 22 17:29:46 ubuntu dockerd[987]: time="2019-09-22T17:29:46.296798944+02:00" level=warning msg="failed to retrieve runc version: unknown output format: runc version spec: 1.0.1-dev\n" Sep 22 17:29:46 ubuntu dockerd[987]: time="2019-09-22T17:29:46.364459982+02:00" level=warning msg="failed to retrieve runc version: unknown output format: runc version spec: 1.0.1-dev\n" Sep 22 17:30:06 ubuntu dockerd[987]: time="2019-09-22T17:30:06.996299645+02:00" level=warning msg="failed to retrieve runc version: unknown output format: runc version spec: 1.0.1-dev\n" Sep 22 17:30:41 ubuntu dockerd[987]: time="2019-09-22T17:30:41.633452599+02:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete 
type="*events.TaskDelete" Sep 22 17:30:41 ubuntu dockerd[987]: time="2019-09-22T17:30:41.633831003+02:00" level=warning msg="d72e19bd0e929513a1c9092ec487e5dc3f3e009bdaa4d33668b610e86cdadf9e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d72e19bd0e929513a1c9092ec487e5dc3f3e009bdaa4d33668b610e86cdadf9e/mounts/shm, flags: 0x2 Sep 22 17:30:41 ubuntu dockerd[987]: time="2019-09-22T17:30:41.903058543+02:00" level=warning msg="Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap." </code></pre> <p>and let us see kubelet status</p> <pre><code>Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized </code></pre>
<p>Judging from the given outputs, I assume the problem comes from the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">Kubelet</a> node agent, since <code>kubelet</code> depends primarily on the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins" rel="nofollow noreferrer"><em>CNI network plugin</em></a> installation. In order to automatically configure networking for the Pods, <code>kubelet</code> invokes the <em>CNI addon</em> prior to each Pod creation to set up the pod's network interface. Furthermore, the <a href="https://kubernetes.io/docs/tasks/administer-cluster/coredns/" rel="nofollow noreferrer"><em>CoreDNS</em></a> discovery service relies on the overlay container network being reachable from all cluster nodes.</p> <p>Although you've installed the <a href="https://github.com/coreos/flannel" rel="nofollow noreferrer">Flannel</a> CNI provider and the <code>flannel</code> Pod is up and running, <code>kubelet</code> can't create the container interface for the <code>CoreDNS</code> Pods due to the missing CNI configuration, so I would recommend resetting the <code>kubeadm</code> cluster and purging the leftover component folders:</p> <p><code>$ sudo kubeadm reset</code></p> <p><code>$ sudo systemctl stop docker &amp;&amp; sudo systemctl stop kubelet</code></p> <p><code>$ sudo rm -rf /etc/kubernetes/</code></p> <p><code>$ sudo rm -rf .kube/</code></p> <p><code>$ sudo rm -rf /var/lib/kubelet/</code></p> <p><code>$ sudo rm -rf /var/lib/cni/</code></p> <p><code>$ sudo rm -rf /etc/cni/</code></p> <p><code>$ sudo rm -rf /var/lib/etcd/</code></p> <p>Bootstrap the K8s cluster via <code>kubeadm</code>:</p> <p><code>$ sudo systemctl start docker &amp;&amp; sudo systemctl start kubelet</code></p> <p><code>$ sudo kubeadm init ...</code></p> <p>Then remove the <code>node-role.kubernetes.io/master</code> taint and apply the Flannel addon:</p> <p><code>$ kubectl taint nodes --all node-role.kubernetes.io/master-</code></p> <p><code>$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code></p> <p>You might also find some useful <code>kubeadm</code> <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/" rel="nofollow noreferrer"><em>troubleshooting</em></a> steps in the official K8s documentation.</p>
<p>Let me describe the situation what I faced today:</p> <ol> <li>I received NodeJS application image from Developers and published it to AKS(Azure Kubernetes Services)</li> </ol> <p>There is nothing specific in manifest of this application, this is simple deployment with service on 80 port.</p> <ol start="2"> <li><p>I have configured Ingress using helm package and installed common one from helm repo: stable/nginx-ingress . When it was installed - I have started to configure Ingress.</p></li> <li><p>Below my yaml:</p></li> </ol> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: app-ingress annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - backend: serviceName: ui-service servicePort: 80 path: /(.*) - backend: serviceName: ui-service servicePort: 80 path: /test-service/ui(/|$)(.*) </code></pre> <p>Okay, I'm trying to open page: <a href="http://1.2.3.4/" rel="nofollow noreferrer">http://1.2.3.4/</a> - everything works fine, I see web page which redirects me to:</p> <p><a href="http://1.2.3.4/page1" rel="nofollow noreferrer">http://1.2.3.4/page1</a> , in case of I clicked something - <a href="http://1.2.3.4/page2" rel="nofollow noreferrer">http://1.2.3.4/page2</a> etc.</p> <p>However, when I'm trying to open the same web application using:</p> <p><a href="http://1.2.3.4/test-service/ui" rel="nofollow noreferrer">http://1.2.3.4/test-service/ui</a> , I got blank page and errors in console:</p> <pre><code>Resource interpreted as Stylesheet but transferred with MIME type text/html: "http://1.2.3.4/test-services/ui/static/css/test.css". </code></pre> <p>What a difference that I found:</p> <p>In the second case all JS and CSS files has content-type: text/html.</p>
<p>Let me describe how I managed to resolve the issue:</p> <p>In the YAML settings you can see <code>nginx.ingress.kubernetes.io/rewrite-target: /$1</code>, which applies to every path rule in the Ingress.</p> <p>So, for applications served from the root path, it should be:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: / </code></pre> <p>For example.com/example/, it should be:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$1 </code></pre> <p>etc.</p> <p>Now the web page is showing correctly.</p>
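<p>A hedged sketch of how the two rules from the question could be split so each path gets the right rewrite (the Ingress names here are made up; the split is an assumption, needed because the <code>rewrite-target</code> annotation applies to the whole Ingress resource rather than to a single path):</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress-root
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /   # root app: pass paths through unchanged
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: ui-service
          servicePort: 80
        path: /
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress-ui
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1  # strip the /test-service/ui prefix
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: ui-service
          servicePort: 80
        path: /test-service/ui/(.*)
```

<p>With the prefix stripped before the request reaches the backend, the CSS and JS assets resolve to real files instead of the index page, so the <code>text/html</code> MIME errors disappear.</p>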
<p>I'm trying to make a backup from Mongo on K8S with this script:</p> <pre><code>export POD=$(kubectl get pods -l app=mongo-client -o custom-columns=:metadata.name -n espace-client) kubectl exec "$POD" sh -c 'mongodump --archive' &gt; ~/backup/mongo/$(date +%F).db.dump </code></pre> <p>I get this error:</p> <pre><code>Error from server (NotFound): pods "\nmongo-client-79667cc85d-tsg72" not found </code></pre> <p>When I check the pods, I can see <code>mongo-client-79667cc85d-tsg72</code>.</p> <p>When I put the name in without the variable, it works well, so it might be because of the initial <code>\n</code>. How can I avoid it? How can I remove it from the name?</p>
<p>Your <code>kubectl get pods</code> command is constrained by a namespace selector, <code>-n espace-client</code>. Your <code>kubectl exec</code> command also needs the namespace flag.</p>
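<p>A sketch of the adjusted commands (paths and labels are taken from the question; adding <code>--no-headers</code> is an assumption on my part — it suppresses the blank header row that <code>custom-columns</code> emits, which is where the leading <code>\n</code> in the pod name comes from):</p>

```shell
# both get and exec now carry the namespace flag
export POD=$(kubectl get pods -l app=mongo-client -n espace-client \
  -o custom-columns=:metadata.name --no-headers)
kubectl exec -n espace-client "$POD" -- sh -c 'mongodump --archive' \
  > ~/backup/mongo/$(date +%F).db.dump
```

<p>Note that <code>$( )</code> command substitution strips trailing newlines but keeps leading ones, which is why the blank header line survives into <code>$POD</code> unless it is suppressed at the source.</p>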
<p>I am trying to install helm chart on kubernates cluster. When i try to initialize the helm using init command, it is throwing error as <code>"error installing: the server could not find the requested resource"</code></p> <p>provider.helm v2.14.3</p> <p>provider.kubernetes v1.16</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <pre class="lang-sh prettyprint-override"><code>$ helm version Client: &amp;version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Error: could not find tiller </code></pre> <pre class="lang-sh prettyprint-override"><code>$ helm init Creating /home/cloud_admin/.helm Creating /home/cloud_admin/.helm/repository Creating /home/cloud_admin/.helm/repository/cache Creating /home/cloud_admin/.helm/repository/local Creating /home/cloud_admin/.helm/plugins Creating /home/cloud_admin/.helm/starters Creating /home/cloud_admin/.helm/cache/archive Creating /home/cloud_admin/.helm/repository/repositories.yaml Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com Adding local repo with URL: http://127.0.0.1:8879/charts $HELM_HOME has been configured at /home/cloud_admin/.helm. 
Error: error installing: the server could not find the requested resource </code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl get node -n kube-system NAME STATUS ROLES AGE VERSION openamvmimsload0 Ready master 5h11m v1.16.0 openamvmimsload1 Ready &lt;none&gt; 5h1m v1.16.0 </code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl config get-clusters NAME kubernetes </code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl cluster-info Kubernetes master is running at https://172.16.128.40:6443 KubeDNS is running at https://172.16.128.40:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. </code></pre> <pre class="lang-sh prettyprint-override"><code>$ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 5h15m </code></pre>
<p>This seems to be a bug with Helm 2.14.3 (and previous) and Kubernetes 1.16 <a href="https://github.com/helm/helm/issues/6374" rel="noreferrer">Helm init fails on Kubernetes 1.16.0 bug report on GitHub</a>.</p> <p>The ticket lists some workarounds - the simplest one is:</p> <pre><code>helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f - </code></pre> <p>or with RBAC enabled and <code>tiller</code> service account:</p> <pre><code>helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f - </code></pre>
<p>I am using <code>minikube</code> to run kubernetes cluster. I followed the setup given in <a href="https://istio.io/docs/setup/getting-started/" rel="nofollow noreferrer">Istio Docs</a> to install istio plugin. I am able to access the dashboard using <code>localhost:3000</code> but unable to access using <code>ip:3000</code>. </p> <p>Checked firewall. It isn’t blocking any port.</p>
<p>Unfortunately, if you are using <code>istio</code> it's not so easy to just change the service type. You have to configure an Istio <code>VirtualService</code> and create a <code>Gateway</code>.</p> <p>I am posting this answer as community wiki; it relates to another Stack Overflow case: <a href="https://stackoverflow.com/a/56019381/11148139">https://stackoverflow.com/a/56019381/11148139</a> Since the referenced question has a downvote and might be deleted in the future, I am reposting the content here.</p> <p>============================================</p> <p>You can create an <a href="https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway" rel="nofollow noreferrer">Istio Gateway and VirtualService</a> in order to forward your requests to the <code>grafana</code> service, which runs on port 3000 by default.</p> <p>Firstly, let's check the <code>grafana</code> and <code>istio-ingressgateway</code> services:</p> <pre><code>kubectl get svc grafana istio-ingressgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE grafana ClusterIP 100.71.67.105 &lt;none&gt; 3000/TCP 18h istio-ingressgateway LoadBalancer 100.64.42.106 &lt;Public IP address&gt; 15020:31766/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32576/TCP,15030:30728/TCP,15031:31037/TCP,15032:31613/TCP,15443:32501/TCP 18h </code></pre> <p>So, we have the <code>grafana</code> service listening on port 3000, and the default <code>istio-ingressgateway</code> LoadBalancer service running with an assigned public IP address.</p> <p>Then we create a <code>gateway</code> that uses this default LoadBalancer.</p> <pre><code>$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: grafana-gateway namespace: istio-system # Use same namespace with backend service spec: selector: istio: ingressgateway # use Istio default gateway implementation servers: - port: number: 80 name: HTTP protocol: HTTP hosts: - "*" EOF </code></pre> <p>Then configure the route to the <code>grafana</code> service for traffic entering via this gateway:</p> <pre><code>$ kubectl apply -f - &lt;&lt;EOF apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: grafana namespace: istio-system # Use same namespace with backend service spec: hosts: - "*" gateways: - grafana-gateway # define gateway name http: - match: - uri: prefix: "/" route: - destination: port: number: 3000 # Backend service port host: grafana # Backend service name EOF </code></pre> <p>Then hit <code>http://&lt;public_ip_istio_ingressgateway&gt;</code> and you should see the grafana dashboard.</p> <p>I hope it will be helpful for you.</p>
<p>I want to change the base url of my JupyterHub, how can I do this with the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/index.html#" rel="nofollow noreferrer">Zero to Jupyterhub with Kubernetes project</a>?</p> <p>I know it is possible to change the hub's base url, it is stated on the bottom of this page: <a href="https://jupyterhub.readthedocs.io/en/stable/getting-started/networking-basics.html" rel="nofollow noreferrer">https://jupyterhub.readthedocs.io/en/stable/getting-started/networking-basics.html</a>. But I can't find it as config parameter in the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/reference.html" rel="nofollow noreferrer">Configuration Reference</a>.</p>
<p>You can put this in the <code>config.yaml</code> you pass to helm:</p> <pre><code>hub: baseUrl: /jupyter </code></pre>
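<p>Then apply it by upgrading the release, roughly like this (the release name <code>jhub</code>, the namespace, and the chart version are placeholders — substitute the ones from your own Zero-to-JupyterHub install):</p>

```shell
helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub \
  --version 0.8.2 \
  --values config.yaml
```

<p>After the upgrade completes, the hub should be served under <code>/jupyter</code> instead of <code>/</code>.</p>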
<p>While installing Kubernetes with, I'm stuck at CNI plugin installation and configuration part. I have installed Flannel but I see error in kubelet logs due to which coredns pods are in pending state.</p> <p>OS: Centos7 k8s version: 1.16 Kubeadm is being used to setup the cluster.</p> <p>I had installed the plugin using: kubectl apply -f <a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="noreferrer">https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</a></p> <p>This is error I can see in Kubelet logs:</p> <pre><code>Sep 21 04:47:29 peteelizalde2c kubelet: W0921 04:47:29.897759 17817 cni.go:202] Error validating CNI config &amp;{cbr0 false [0xc000fb3ee0 0xc000fb3f60] [123 10 32 32 34 110 97 109 101 34 58 32 34 99 98 114 48 34 44 10 32 32 34 112 108 117 103 105 110 115 34 58 32 91 10 32 32 32 32 123 10 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 102 108 97 110 110 101 108 34 44 10 32 32 32 32 32 32 34 100 101 108 101 103 97 116 101 34 58 32 123 10 32 32 32 32 32 32 32 32 34 104 97 105 114 112 105 110 77 111 100 101 34 58 32 116 114 117 101 44 10 32 32 32 32 32 32 32 32 34 105 115 68 101 102 97 117 108 116 71 97 116 101 119 97 121 34 58 32 116 114 117 101 10 32 32 32 32 32 32 125 10 32 32 32 32 125 44 10 32 32 32 32 123 10 32 32 32 32 32 32 34 116 121 112 101 34 58 32 34 112 111 114 116 109 97 112 34 44 10 32 32 32 32 32 32 34 99 97 112 97 98 105 108 105 116 105 101 115 34 58 32 123 10 32 32 32 32 32 32 32 32 34 112 111 114 116 77 97 112 112 105 110 103 115 34 58 32 116 114 117 101 10 32 32 32 32 32 32 125 10 32 32 32 32 125 10 32 32 93 10 125 10]}: [plugin flannel does not support config version ""] Sep 21 04:47:29 peteelizalde2c kubelet: W0921 04:47:29.897824 17817 cni.go:237] Unable to update cni config: no valid networks found in /etc/cni/net.d Sep 21 04:47:32 peteelizalde2c kubelet: E0921 04:47:32.007379 17817 kubelet.go:2187] Container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized </code></pre> <p>Here is the pods:</p> <pre><code>kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-5644d7b6d9-n4h5x 0/1 Pending 0 114m kube-system coredns-5644d7b6d9-t2q54 0/1 Pending 0 114m kube-system etcd-ip-10-29-89-124 1/1 Running 0 113m kube-system kube-apiserver-ip-10-29-89-124 1/1 Running 0 113m kube-system kube-controller-manager-ip-10-29-89-124 1/1 Running 0 113m kube-system kube-flannel-ds-amd64-dqpzj 1/1 Running 0 110m kube-system kube-proxy-vzlqb 1/1 Running 0 114m kube-system kube-scheduler-ip-10-29-89-124 1/1 Running 0 113m </code></pre> <p>There is a file in <code>/etc/cni/net.d</code> named <code>10-flannel.conflist</code>. Its contents are:</p> <pre><code>{ "name": "cbr0", "plugins": [ { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } }, { "type": "portmap", "capabilities": { "portMappings": true } } ] } </code></pre>
<p>The accepted solution is incomplete and will cause headaches down the line.</p> <p>The proper solution to make this change permanent is to edit the ConfigMap created by flannel in your Kubernetes cluster. Otherwise, the file will be recreated the next time the flannel pod volumes are populated from the ConfigMap (e.g. on node reboot).</p> <p>Use <code>kubectl edit cm -n kube-system kube-flannel-cfg</code> to edit the ConfigMap provided by flannel, and add the missing <code>cniVersion</code> line:</p> <pre><code>apiVersion: v1 data: cni-conf.json: | { "name": "cbr0", "cniVersion": "0.2.0", "plugins": [ </code></pre> <p>Reboot the node, or alternatively make the change manually in <code>/etc/cni/net.d/10-flannel.conflist</code> and run <code>systemctl restart kubelet</code> afterwards to skip the reboot.</p>
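<p>For reference, the full corrected <code>/etc/cni/net.d/10-flannel.conflist</code> is simply the file from the question with the <code>cniVersion</code> field added:</p>

```json
{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

<p>The missing version field is what produced the kubelet error <code>plugin flannel does not support config version ""</code>.</p>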
<p>While there's a lot of documentation and examples on Kubernetes RBAC and also the available verbs for different resources, I couldn't find any rules on whether certain verbs are always used in combination or whether there are use cases to using them individually. In particular, I'm wondering about the verbs get, list, and watch. What uses are for combining them, and especially not combining them?</p> <ul> <li>are there uses for allowing get on resources, but not list?</li> <li>au contraire, are there uses for list without allowing get? Maybe along the lines of information sparseness?</li> <li>get and list, but no watch? To restrict only trusted subjects and service accounts to put more strain on the API server and etcd?</li> <li>watch without list or get? Wouldn't that cripple most clients because they're listwatchers?</li> </ul>
<p>Interesting question! Here are some ideas and examples of usages in practice.</p> <p>There are many more examples in practice. For example, you can inspect the default ClusterRoles by browsing through <code>kubectl describe clusterroles</code>. And to see which API requests kubectl makes under the hood, you can increase the log verbosity, for example, <code>kubectl get pods -w -v 10</code>.</p> <h3><em>get</em> but not <em>list</em></h3> <p>You want someone to be able to read resources they know by name but not discover what other resources exist. For example, it allows doing <code>kubectl get pod mypod</code>, but not <code>kubectl get pods</code>.</p> <p>Examples:</p> <ul> <li>The <code>system:node</code> ClusterRole has <em>get</em> but not <em>list</em> permissions on Endpoints, PVs, and PVCs.</li> <li>The <code>system:coredns</code> ClusterRole has <em>get</em> but not <em>list</em> permissions on Nodes.</li> <li>The <code>system:controller:expand-controller</code> ClusterRole has <em>get</em> but not <em>list</em> permissions on Endpoints, Secrets, and Services.</li> </ul> <h3><em>list</em> but not <em>get</em></h3> <p>Allows doing, for example, <code>kubectl get pods</code> but not <code>kubectl get pod mypod</code>. It doesn't make much sense, because all the information you can get with <em>get</em> is also included in <em>list</em>. Nevertheless, there are some usages of this in practice. 
</p> <p>Examples:</p> <ul> <li>The <code>system:kube-dns</code> ClusterRole has <em>list</em> and <em>watch</em> permissions for Endpoints and Services, but not <em>get</em>.</li> <li>The <code>system:controller:daemon-set-controller</code> ClusterRole has <em>list</em> and <em>watch</em> permissions for Nodes, but not <em>get</em>.</li> <li>The <code>system:coredns</code> ClusterRole has <em>list</em> and <em>watch</em> permissions for Endpoints, Namespaces, Pods, and Services, but not <em>get</em>.</li> </ul> <h3><em>get</em> and <em>list</em>, but not <em>watch</em></h3> <p>In practice, in most cases where there is <em>list</em> there is also <em>watch</em>. You could deprive someone of <em>watch</em> to reduce the number of watchers on etcd. Users can do <code>kubectl get pods</code> and <code>kubectl get pods mypod</code>, but not use the <code>-w</code> option.</p> <p>It also makes sense if the API does not support <em>watch</em> operations, like, for example, the optional metric APIs.</p> <p>Examples:</p> <ul> <li>The <code>system:controller:persistent-volume-binder</code> ClusterRole has <em>get</em> and <em>list</em> permissions for Nodes, but not <em>watch</em></li> </ul> <h3><em>watch</em>, but not <em>get</em> and <em>list</em></h3> <p>Regarding the use case, it doesn't make much sense, because all the information you can get with <em>get</em> and <em>list</em> is also included in <em>watch</em>. I don't know of any concrete usage of this in practice.</p> <p>However, technically, it's possible. For example, if you have <em>watch</em> permissions for Pods, but not <em>get</em> and <em>list</em>, you can do:</p> <pre><code>✅ kubectl get --raw="/api/v1/watch/namespaces/default/pods" ✅ kubectl get --raw="/api/v1/watch/namespaces/default/pods/mypod" </code></pre> <p>And it works. However, these <code>watch</code> endpoints are deprecated and you should use the <em>list</em> endpoint with a <code>watch</code> parameter instead. 
But this also works:</p> <pre><code>✅ kubectl get --raw="/api/v1/namespaces/default/pods?watch=true" </code></pre> <p>However, you can't watch a single Pod like this, because the <em>get</em> endpoint doesn't have a <code>watch</code> parameter. So, the following is invalid:</p> <pre><code>❌ kubectl get --raw="/api/v1/namespaces/default/pods/mypod?watch=true" </code></pre> <p>And you can't watch resources with kubectl at all. The following fails:</p> <pre><code>❌ kubectl get pods -w ❌ kubectl get pods mypod -w </code></pre> <p>Because kubectl makes a <em>list</em> and <em>get</em> request, respectively, before the <em>watch</em> request, most probably to get the <code>resourceVersion</code> of the resources which will then be included in the subsequent <em>watch</em> request.</p> <p>Note: that means, if you have <em>list</em> and <em>watch</em>, then <code>kubectl get pods -w</code> works, but <code>kubectl get pods mypod -w</code> doesn't, and if you have <em>get</em> and <em>watch</em>, then <code>kubectl get pods mypod -w</code> works but <code>kubectl get pods -w</code> doesn't.</p>
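<p>As a concrete sketch, a Role granting <em>get</em> but not <em>list</em> or <em>watch</em> on Pods could look like this (the namespace and Role name are made up for illustration):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader-by-name
rules:
- apiGroups: [""]
  resources: ["pods"]
  # no "list", no "watch": kubectl get pod mypod works,
  # but kubectl get pods is forbidden
  verbs: ["get"]
```

<p>Bound to a subject via a RoleBinding, this lets them read Pods they already know by name without being able to enumerate the rest of the namespace.</p>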
<p>I couldn't find an official one which suggests that maybe it's a bad idea? My idea was to be able to spin up the whole of minikube and deploy an app to it so that developers are not required to install minikube manually.</p> <p>If this is indeed possible, is there an already created official minikube docker image?</p>
<p>Minikube is not available as a Docker image. It's a VM-based tool, not container-based.</p> <p>Quoting a statement from <a href="https://github.com/kubernetes/minikube/issues/3192#issuecomment-425915749" rel="nofollow noreferrer">this</a> link:</p> <blockquote> <p>Hypervisor based minikube doesn't really work from a docker container and the None driver won't really work either as it relies on systemd to run the kubelet on your machine.</p> </blockquote> <p>For more info, check <a href="https://github.com/kubernetes/minikube/issues/3192" rel="nofollow noreferrer">this</a>.</p>
<p>Consider the following <code>ConfigMap</code> definition, in, say, <code>foo.yml</code>:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: foo-data data: foo.json: |- { "foo": "foo", "bar": 42 } </code></pre> <p>Is there a way to load the <code>foo.json</code> data from an external file, instead of inlining it in the template? It would be nice if I could put the data in a file <code>foo.json</code> next to <code>foo.yml</code>, reference it somehow, and have K8s figure it out when I apply the template.</p> <p>Is that possible? How?</p> <p>If not, is the feature on the roadmap?</p>
<p>Afaik there is no built-in templating as of now, but you can use Ansible's templating capabilities with Kubernetes YAML files, either on their own or together with e.g. the <code>k8s</code> plugin.</p> <p>Works like a charm :)</p>
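<p>For example, a minimal sketch with Ansible's <code>k8s</code> module (the file names and play layout here are illustrative, not taken from the question):</p> <pre><code># playbook.yml - hypothetical example
- hosts: localhost
  tasks:
    - name: Apply a ConfigMap rendered from a Jinja2 template
      k8s:
        state: present
        definition: "{{ lookup('template', 'foo-configmap.yml.j2') }}"
</code></pre> <p>where <code>foo-configmap.yml.j2</code> pulls the JSON in from the neighbouring file:</p> <pre><code># foo-configmap.yml.j2 - hypothetical example
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo-data
data:
  foo.json: |-
    {{ lookup('file', 'foo.json') | indent(4) }}
</code></pre>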
<p>When provisioning a k8 cluster, in the digital ocean's dashboard, under choose cluster capacity, i see an option to either add nodes in a pool or add a new pool altogether. I don't quite understand the difference.</p> <p>DO gives the following suggestion which is quite ambiguous to me -</p> <blockquote> <p>Increasing the number of nodes in a pool lets you run more instances of the scheduled services. Adding more node pools allows you to schedule pods to different node pools so each pod has the RAM, CPU, and storage it requires. You can add and remove nodes and node pools at any time.</p> </blockquote> <p>Can anyone give me an example when one increases nodes in a pool and when one creates a different pools altogether?</p> <p>Thanks!</p>
<p>From GCP's documentation:</p> <blockquote> <p>A node pool is a group of nodes within a cluster that all have the same configuration.</p> </blockquote> <p>Node Pools have a specific node type, which tells them what type of virtual machines they can use as their nodes. For example, I can have these three node pools:</p> <p>highmem: 4 vCPUs, 32G of RAM and no GPU</p> <p>highcpu: 8 vCPUs, 16G of RAM and no GPU</p> <p>gpu: 4 vCPUs, 8G of RAM, 1 Nvidia Tesla GPU</p> <p>I can scale up each of these node pools independently. I can have 3 highmem nodes in my pool for processes that use a lot of memory, but less CPU, while I have only 1 node in my highcpu pool for processes that use a lot of CPU, but less RAM. Meanwhile I have a node pool for GPU processes such as Tensorflow or Image processing programs. <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="noreferrer">Assigning pods to node pools</a> is done through node selector or affinity options in resource definitions.</p> <p>So, having different node pools gives you the ability to use fine-tuned VMs for different purposes. If you were limited to a single pool, you would have to use large VMs with a lot of RAM, CPU and GPU, and that would be extremely expensive and inefficient.</p> <p>Please note that node pools are not meant to allow "isolation" of applications. In order to logically separate applications, namespaces can be used rather than node pools. Node pools are only meant to be used for managing resources and virtual machine types used.</p>
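<p>As a concrete sketch of the node selector approach (assuming GKE, where each node carries a <code>cloud.google.com/gke-nodepool</code> label with its pool name; other providers use different labels, and the pod/image names here are illustrative):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: memory-hungry-pod
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: highmem   # schedule only onto the highmem pool
  containers:
  - name: app
    image: myapp:latest
</code></pre>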
<p>I am looking for a way to delete all pods of specific deployments of specific namespaces. In other words, given: </p> <blockquote> <p>x - list of namespaces<br> y - list of deployments </p> </blockquote> <pre><code>for i in x: for j in y: delete all pods of deployment j in namespace i </code></pre> <p>I would be happy if someone knows how to do it in bash commands. </p> <p>Thanks!</p>
<p>I believe something like this will do it:</p> <pre><code>#!/bin/bash

NAMESPACES=( n1 n2 n3 )
DEPLOYMENTS=( dep1 dep2 dep3 )

for i in "${NAMESPACES[@]}"
do
    for x in "${DEPLOYMENTS[@]}"
    do
        # Get the pods in the deployment; pod names are prefixed with the
        # deployment name, so anchor the match to avoid catching pods of
        # other deployments with similar names
        PODS=$(kubectl -n "$i" get pods --no-headers | awk '{print $1}' | grep "^$x-" | tr '\n' ' ')
        kubectl -n "$i" delete pods $PODS
    done
done
</code></pre>
<p><strong>What I Did:</strong><br> I installed Helm with</p> <pre><code>curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash helm init --history-max 200 </code></pre> <p><br> <strong>Getting an error:</strong></p> <pre><code>$HELM_HOME has been configured at /root/.helm. Error: error installing: the server could not find the requested resource </code></pre> <ol> <li>what does that error mean?</li> <li>How should I install Helm and tiller?</li> </ol> <p>Ubuntu version: 18.04<br> Kubernetes version: 1.16<br> Helm version: </p> <pre><code>helm version Client: &amp;version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Error: could not find tiller </code></pre> <p><strong>Update:</strong><br> I tried @shawndodo's answer but still tiller not installed</p> <pre><code>helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f - </code></pre> <p><strong>Update 2:</strong><br> <code>helm init --history-max 200</code> working in kubernetes version 1.15</p>
<p>I met the same problem, then I found this reply <a href="https://github.com/helm/helm/issues/6374#issuecomment-533427268" rel="noreferrer">here</a>.</p> <pre><code>helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
</code></pre> <p>It works for me. You can see the details in that issue.</p>
<p>I am trying to deploy the next frontend-deployment.yaml in Kubernetes using Jenkins, but I am having the next error:</p> <pre><code>[frontend] Running shell script + kubectl apply -f IKonnekt/frontend-deployment.yaml error: unable to recognize "IKonnekt/frontend-deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1" </code></pre> <p>If I run <code>kubectl apply -f IKonnekt/frontend-deployment.yaml</code> from a machine inside the Kubernetes cluster it works fine.</p> <pre><code>Kubernetes Client Version: 1.12.1 Kubernetes Server Version: 1.11.0 </code></pre> <p>This is my frontend-deployment.yaml:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ikonnekt-frontend-deployment spec: replicas: 1 template: metadata: labels: app: ikonnekt-frontend spec: containers: - name: ikonnekt-frontend image: ikonnektfrontend imagePullPolicy: Always env: - name: REACT_APP_API value: "http://IP:Port" - name: REACT_APP_AUTH_ENDPOINT value: "http://IP:Port/auth" ports: - containerPort: 80 imagePullSecrets: - name: regcred </code></pre>
<p>Just change <strong>apiVersion</strong> to this: <code>apiVersion: apps/v1</code></p>
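<p>Note that <code>apps/v1</code> also requires an explicit <code>spec.selector</code>, so the top of the manifest would need to look roughly like this (labels taken from the question's pod template):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: ikonnekt-frontend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ikonnekt-frontend
  template:
    metadata:
      labels:
        app: ikonnekt-frontend
    # ... rest of the pod template unchanged
</code></pre>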
<p>I'm looking for the go equivalent of:</p> <pre><code>kubectl get some-custom-resource-kind some-custom-resource -o yaml &gt; file.yaml Modify the yaml file... kubectl apply -f file.yaml </code></pre> <p>Kubernetes has a client go library for <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">standard resource</a> kinds.</p> <p>And various vendors have client go libraries for their <a href="https://github.com/openshift/client-go" rel="nofollow noreferrer">custom resources</a>.</p> <p>But I need to get/update a resource kind that doesn't have a publicly available client go library. The logic is implemented in bash script today and I'm trying to move that function to a go controller. </p> <p>Seems like it there should be a straightforward way in go to do the equivalent of kubectl.</p> <p>Thanks, Paul</p>
<p>For any type, including your CRDs, use <a href="https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client#Client" rel="noreferrer"><code>client.Client</code></a>. </p> <p>From the documentation:</p> <pre class="lang-golang prettyprint-override"><code>// Using a typed object. pod := &amp;corev1.Pod{} // c is a created client. _ = c.Get(context.Background(), client.ObjectKey{ Namespace: "namespace", Name: "name", }, pod) pod.SetFinalizers(append(pod.GetFinalizers(), "new-finalizer")) _ = c.Update(context.Background(), pod) // Using a unstructured object. u := &amp;unstructured.Unstructured{} u.SetGroupVersionKind(schema.GroupVersionKind{ Group: "apps", Kind: "Deployment", Version: "v1", }) _ = c.Get(context.Background(), client.ObjectKey{ Namespace: "namespace", Name: "name", }, u) u.SetFinalizers(append(u.GetFinalizers(), "new-finalizer")) _ = c.Update(context.Background(), u) </code></pre> <p>You could just as easily swap in <code>SomeCustomResourceKind</code>:</p> <pre><code>myCR := &amp;v1alpha1.SomeCustomResourceKind{} // c is a created client.Client _ = c.Get(context.TODO(), client.ObjectKey{ Namespace: "namespace", Name: "some-custom-resource", }, myCR) myCR.MyProperty = "NewValue" _ = c.Update(context.TODO(), myCR) </code></pre> <p>You mentioned you're trying to move this functionality from a bash script to a Go controller, so it would be worth checking out the <a href="https://github.com/kubernetes-sigs/kubebuilder" rel="noreferrer">Kubebuilder</a> project, which can scaffold out a controller for you (and any additional APIs you might need). It creates fully functional controllers with the controller-runtime Client and wires up all the reconciliation logic to manage your CRDs. </p>
<p>Had created a simple hello world Azure Functions QueueTrigger and when tried to deploy using the below command, got the highlighted error</p> <pre><code>$ sudo func kubernetes deploy --name myfunction --namespace functions-ns --registry myregistry </code></pre> <blockquote> <p>error: unable to recognize "STDIN": no matches for kind "ScaledObject" in version "keda.k8s.io/v1alpha1"</p> </blockquote>
<p>Running the below command resolved the issue:</p> <p><code>func kubernetes install --namespace {a namespace}</code></p> <p>More details <a href="https://github.com/kedacore/keda/wiki/Using-Azure-Functions-with-Keda-and-Osiris" rel="nofollow noreferrer">here</a>.</p>
<p>How to configure Kubernetes deployment to run only one replica of pod always ? We can configure replicas as 1 in the deployment file but this will not prevent someone from scaling up pods using kubectl. </p>
<p>You can't directly restrict a Deployment resource to not allow more than one replica.</p> <p>But you could create a custom <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook" rel="nofollow noreferrer">ValidatingAdmissionWebhook</a> admission controller that rejects requests that try to modify the <code>replicas</code> field of this Deployment.</p> <p>Or you could create a Custom Resource Definition (CRD) that has a similar behaviour as a Deployment, but has the number of replicas fixed to 1.</p> <p>In general, if you just want to prevent certain users from scaling a replica, you could restrict the RBAC permissions for these users to prevent them from updating Deployment objects.</p>
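<p>For the RBAC approach, a sketch of a namespaced Role that lets users inspect but not scale or edit Deployments (the namespace and role name are illustrative):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: deployment-reader
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]   # no update/patch, and no "deployments/scale" subresource
</code></pre>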
<p>I am basically looking for mechanics similair to init containers with a caveat, that I want it to run after pod is ready (responds to readinessProbe for instance). Are there any hooks that can be applied to readinessProbe, so that it can fire a job after first sucessfull probe?</p> <p>thanks in advance</p>
<p>You can use a lifecycle hook on the pod, or rather on the container.</p> <p>For example:</p> <pre><code>lifecycle:
  postStart:
    exec:
      command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;echo In postStart &gt; /dev/termination-log&quot;]
</code></pre> <p>It's a postStart hook, so I thought it would work.</p> <p>But the postStart hook is asynchronous: it is triggered as soon as the container is created, and may even fire before the container's entrypoint runs.</p> <p><strong>Update</strong></p> <p>The <strong>postStart</strong> hook above runs as soon as the container is <strong>created</strong>, not when it's <strong>Ready</strong>.</p> <p>So if you are looking for the moment the container becomes ready, you have to use a <strong>startup probe</strong>.</p> <p>A <strong>startup probe</strong> is like a readiness or liveness probe, but it stops once it has succeeded: it checks the application until it is ready, and from then on the <strong>liveness probe</strong> takes its place.</p> <p>Read more about the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">startup probe</a>.</p> <p>So from the <strong>startup probe</strong> you can invoke a <strong>Job</strong> or run any kind of <strong>shell script</strong>. It succeeds only after your application returns <strong>200</strong> on the <strong>/healthz</strong> <strong>endpoint</strong>.</p> <pre><code>startupProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - ./run-after-ready.sh
  failureThreshold: 30
  periodSeconds: 10
</code></pre> <p>File <strong>run-after-ready.sh</strong> in the container:</p> <pre><code>#!/bin/sh
curl -f -s -I &quot;http://localhost/healthz&quot; &gt;/dev/null 2&gt;&amp;1 &amp;&amp; echo OK || echo FAIL
.
.
# your extra code or logic; you can now handle waits, sleeps, anything
.
</code></pre> <p>You can add more checks or conditions to the <strong>shell script</strong>, for example against <strong>some API</strong>, to decide whether the application is really <strong>Ready</strong>, as per need.</p>
<p>We have deployed some of the services (like Web API) on Azure Kubernetes. When we logged onto the Azure Kubernetes Pods and execute the printenv on the terminal, it shows environment variable on the screen including the service we deployed like</p> <pre><code>&lt;SERVICENAME&gt;_PORT= &lt;SERVICENAME&gt;_HOST=` ..... </code></pre> <p>How can we read above variable value in the .Net code? I tried with the below code but it didn't work</p> <pre><code> var builder = new ConfigurationBuilder(); builder.AddJsonFile("appsettings_prod.json", true, true).AddEnvironmentVariables(); Configuration = builder.Build(); var port = Configuration["&lt;SERVICENAME&gt;_PORT"] </code></pre>
<p>To access environment variables from k8s pods, you need to provide them through the pod's <code>spec.containers[].env[]</code>.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: mycontainer
    image: demo/new-image
    env:
    # define env from k8s secret (used especially for credentials)
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    # define env from configmap
    - name: SPECIAL_CREDENTIALS
      valueFrom:
        configMapKeyRef:
          name: configmap-name
          key: config.json
    # define value directly
    - name: DEMO_GREETING
      value: "Hello from the environment"
</code></pre>
<p>I am working on a system that is spread across both Digital Ocean and AWS. Their Node.js instances are on a Kubernetes cluster on Digital Ocean and their databases and S3 spaces are hosted on AWS. I was able to connect to the Kubernetes cluster using kubectl. Then, I was able to do a 'terraform init' with no issues. But, when I tried to do a 'terraform plan', I got this error.</p> <p>Error: Error retrieving Kubernetes cluster: GET <a href="https://api.digitalocean.com/v2/kubernetes/clusters/1234" rel="nofollow noreferrer">https://api.digitalocean.com/v2/kubernetes/clusters/1234</a>: 401 Unable to authenticate you.</p> <p>I am new to both Kubernetes and Terraform. Does Terraform expect the Kubernetes config information to be in a different place then where kubectl expects it? </p>
<p>You need a token so that Digital Ocean’s servers know that you have permission to access your account. Follow the steps in the instruction <a href="https://www.digitalocean.com/docs/api/create-personal-access-token/" rel="nofollow noreferrer">creating-access-token</a> and copy the token to your clipboard.Remember to store it as an environment variable: <code>export TF_VAR_do_token=your-token</code>.</p> <p>Set environment variables:</p> <pre><code>export TF_VAR_do_token=your_digital_ocean_token export TF_VAR_do_cluster_name=your_cluster_name </code></pre> <p>Otherwise problem is with the API token. Create a new token and then the operation will succeeded.</p> <p>Useful blog article about setting up <a href="https://web.archive.org/web/20190314161613/https://ponderosa.io/blog/kubernetes/2019/03/13/terraform-cluster-create/" rel="nofollow noreferrer">Kubernetes cluster wit Digital Ocean and Terraform</a>.</p>
<p>I'm trying to restrict all pods except few from running with the privileged mode.</p> <p>So I created two Pod security policies: One allowing running privileged containers and one for restricting privileged containers.</p> <pre><code>[root@master01 vagrant]# kubectl get psp NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES privileged true RunAsAny RunAsAny RunAsAny RunAsAny false * restricted false RunAsAny RunAsAny RunAsAny RunAsAny false * </code></pre> <p>Created the Cluster role that can use the pod security policy "restricted" and binded that role to all the serviceaccounts in the cluster</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: psp-restricted rules: - apiGroups: ['policy'] resources: ['podsecuritypolicies'] verbs: ['use'] resourceNames: - restricted --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: psp-restricted roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: psp-restricted subjects: - kind: Group name: system:serviceaccounts apiGroup: rbac.authorization.k8s.io </code></pre> <p>Now I deploying a pod with "privileged" mode. But it is getting deployed. The created pod annotation indicating that the psp "privileged" was validated during the pod creation time. Why is that ? the restricted PSP should be validated right ? </p> <pre><code>apiVersion: v1 kind: Pod metadata: name: psp-test-pod namespace: default spec: serviceAccountName: default containers: - name: pause image: k8s.gcr.io/pause securityContext: privileged: true </code></pre> <pre><code>[root@master01 vagrant]# kubectl create -f pod.yaml pod/psp-test-pod created </code></pre> <pre><code>[root@master01 vagrant]# kubectl get pod psp-test-pod -o yaml |grep kubernetes.io/psp kubernetes.io/psp: privileged </code></pre> <p>Kubernetes version: v1.14.5</p> <p>Am i missing something here ? Any help is appreciated.</p>
<p>Posting the answer to my own question. Hope it will help someone.</p> <p>All my PodSecurityPolicy configurations are correct. The issue was that I tried to deploy a pod on its own, not via any controller manager like a Deployment/ReplicaSet/DaemonSet etc. Most Kubernetes pods are not created directly by users. Instead, they are typically created indirectly as part of a Deployment, ReplicaSet or other templated controller via the controller manager.</p> <p>In the case of a pod deployed on its own, the pod is created by kubectl, not by the controller manager.</p> <p>In Kubernetes there is one superuser role named "cluster-admin". In my case, kubectl is running with the superuser role "cluster-admin". This "cluster-admin" role has access to all the pod security policies, because to associate a pod security policy with a role, we need the 'use' verb on the 'podsecuritypolicies' resource.</p> <p>In the cluster-admin role, 'resources': '*' includes 'podsecuritypolicies' and 'verbs': '*' includes 'use'. So all policies are also usable by the cluster-admin role.</p> <pre><code>[root@master01 vagrant]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
</code></pre> <p>pod.yaml</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
  - name: pause
    image: k8s.gcr.io/pause
    securityContext:
      privileged: true
</code></pre> <p>I deployed the above pod.yaml using the command <code>kubectl create -f pod.yaml</code>. Since I had created two pod security policies, one for restriction and one for privileges, the cluster-admin role has access to both policies.
So the above pod launches fine with kubectl, because the cluster-admin role has access to the privileged policy (<code>privileged: false</code> also works, because the admin role has access to the restricted policy as well). This situation happens only if either a pod is created directly by kubectl rather than by the kube controller managers, or a pod has access to the "cluster-admin" role via its serviceaccount.</p> <p>In the case of a pod created by a Deployment/ReplicaSet etc., kubectl first passes control to the controller manager, and the controller then tries to deploy the pod after validating the permissions (serviceaccount, podsecuritypolicies).</p> <p>In the below Deployment file, the pod is trying to run with privileged mode. In my case, this deployment will fail because I already set the "restricted" policy as the default policy for all the serviceaccounts in the cluster. So no pod will be able to run with privileged mode. If a pod needs to run with privileged mode, allow the serviceaccount of that pod to use the "privileged" policy.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-deploy-privileged
  namespace: kube-system
  labels:
    app: pause
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
    spec:
      serviceAccountName: default
      containers:
      - name: pause
        image: k8s.gcr.io/pause
        securityContext:
          privileged: true
</code></pre>
<p>I'm trying to get my logz.io set up and running and during that process I'm noticing that my ingress controller pod is spitting out a lot of logs. I don't know if its normal or not, but I see a TON of entries in the logs that look like this</p> <pre><code>[06/Sep/2019:21:27:14 +0000]TCP200004.999 [06/Sep/2019:21:27:17 +0000]TCP200005.000 [06/Sep/2019:21:27:19 +0000]TCP200005.001 [06/Sep/2019:21:27:22 +0000]TCP200004.999 [06/Sep/2019:21:27:24 +0000]TCP200005.001 [06/Sep/2019:21:27:27 +0000]TCP200005.000 . . . </code></pre> <p>Is this normal? Is my ingress configured wrong? I don't want to see thousands of these entries in my logz.io instance.</p>
<p>Usually, these log records reflect the <em><a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">TCP/UDP</a></em> services configuration for upstream backends: the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer"><em>Nginx Ingress controller</em></a> applies stream proxying for TCP/UDP connections from the relevant <code>ConfigMap</code> by adding a specific lua block to the nested <code>Nginx</code> ingress controller Pod.</p> <p>Find related discussion in the <a href="https://github.com/kubernetes/ingress-nginx/issues/3612" rel="nofollow noreferrer">#3612</a> Github thread.</p>
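<p>For reference, such a <code>ConfigMap</code> for TCP services maps an exposed port to a <code>namespace/service:port</code> backend, roughly like this (the port and service names are illustrative):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-service:8080"
</code></pre> <p>Each such entry produces a <code>stream</code> proxy block in the generated Nginx configuration, and traffic through those stream blocks is what shows up as the bare TCP access-log lines in the question.</p>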
<p>Google course on Kubernetes proposes: </p> <pre><code>$ kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080 kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead. </code></pre> <p>Now I'm trying to use the new syntax: </p> <pre><code>$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 $ kubectl expose deployment/kubernetes-bootcamp --port=8080 type=NodePort </code></pre> <p>I'm sure that they are not the same. Could anyone help me?</p> <p>Rr</p> <p>Not duplicate: I'm asking about equivalence of two commands</p>
<p>You can see what these commands do by executing a dry-run and inspecting the generated manifests:</p> <pre><code>kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080 \ --dry-run -o yaml &gt; kubectl-run.yaml </code></pre> <p>And</p> <pre><code>kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 \ --dry-run -o yaml &gt;kubectl-create.yaml </code></pre> <p>The only substantial difference between the two manifests is that there is no <code>ports</code> field in <code>kubectl-create.yaml</code>.</p> <p>The <code>kubectl create deployment</code> command has no option for specifying any exposed ports of the container, so there's probably no easy way to achieve exactly the same output as <code>kubectl run</code> with a <code>kubectl create</code> command.</p> <p>The <code>kubectl expose</code> command that you're using just creates a Service resource and doesn't modify the Deployment resource.</p> <p>In general, <code>kubectl create &lt;resource&gt;</code> commands are rather limited. They're mostly useful to either create a resource with only default settings, or to create the base for a manifest with <code>--dry-run -o yaml</code> that you can then customise.</p>
<p>I am trying to deploy service using helm chart on kubernetes cluster. It is throwing error as </p> <blockquote> <p>"Error: Non-absolute URLs should be in form of repo_name/path_to_chart, got: guestbook"</p> </blockquote> <p>Here is the guestbook service that i am deploying <a href="https://github.com/phcollignon/helm/tree/master/lab5_helm_chart_version1/" rel="nofollow noreferrer">https://github.com/phcollignon/helm/tree/master/lab5_helm_chart_version1/</a></p> <p>provider.helm v2.14.3</p> <p>provider.kubernetes v1.16</p> <pre><code>$ kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <pre><code>$ helm install guestbook Error: failed to download "guestbook" (hint: running `helm repo update` may help) </code></pre> <pre><code>$ helm version Client: &amp;version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} </code></pre> <pre><code>$ helm install guestbook --debug [debug] Created tunnel using local port: '39069' [debug] SERVER: "127.0.0.1:39069" [debug] Original chart version: "" Error: Non-absolute URLs should be in form of repo_name/path_to_chart, got: guestbook </code></pre>
<p>There are five different ways you can express the chart you want to install:</p> <ol> <li>By chart reference: <code>helm install stable/mariadb</code></li> <li>By path to a packaged chart: <code>helm install ./nginx-1.2.3.tgz</code></li> <li>By path to an unpacked chart directory: <code>helm install ./nginx</code></li> <li>By absolute URL: <code>helm install https://example.com/charts/nginx-1.2.3.tgz</code></li> <li>By chart reference and repo url: <code>helm install --repo https://example.com/charts/ nginx</code></li> </ol> <p>Here is an example using option number <strong>3</strong>.</p> <p>Download the GitHub repository using this command:</p> <pre><code>git clone https://github.com/phcollignon/helm
</code></pre> <p>Then go to the <strong>lab5_helm_chart_version1</strong> directory:</p> <pre><code>cd helm/lab5_helm_chart_version1
</code></pre> <p>And simply use helm install to create guestbook:</p> <pre><code>helm install chart/guestbook/ --name guestbook
</code></pre>
<p>I have a single node Kafka broker running inside a pod on a single node kubernetes environment. I am using this image for kafka: <a href="https://hub.docker.com/r/wurstmeister/kafka" rel="nofollow noreferrer">https://hub.docker.com/r/wurstmeister/kafka</a></p> <p><strong>kafka version</strong> = 1.1.0</p> <p>Kubernetes cluster is running inside a VM on a server. The VM has the following IP on the active interface ens32 - <code>192.168.3.102</code></p> <p><strong>Kafka.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: namespace: casb-deployment name: kafkaservice spec: replicas: 1 template: metadata: labels: app: kafkaservice spec: hostname: kafkaservice containers: - name: kafkaservice imagePullPolicy: IfNotPresent image: wurstmeister/kafka:1.1.0 env: - name: KAFKA_BROKER_ID value: "1" # - name: KAFKA_ADVERTISED_HOST_NAME # value: "kafkaservice" - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP value: "INTERNAL_PLAINTEXT:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT" - name: KAFKA_LISTENERS value: "INTERNAL_PLAINTEXT://0.0.0.0:9092,EXTERNAL_PLAINTEXT://0.0.0.0:9093" - name: KAFKA_ADVERTISED_LISTENERS value: "INTERNAL_PLAINTEXT://kafkaservice:9092,EXTERNAL_PLAINTEXT://192.168.3.102:9093" - name: KAFKA_INTER_BROKER_LISTENER_NAME value: "INTERNAL_PLAINTEXT" - name: KAFKA_CREATE_TOPICS value: "topic-1:100:1,topic-2:1:1" - name: KAFKA_ZOOKEEPER_CONNECT value: "zookeeper:2181" ports: - name: port9092 containerPort: 9092 - name: port9093 containerPort: 9093 volumeMounts: - mountPath: /kafka/kafka-logs-kafkaservice name: kafka-volume volumes: - name: kafka-volume hostPath: path: /home/volume/kafka-logs --- apiVersion: v1 kind: Service metadata: namespace: casb-deployment name: kafkaservice labels: app: kafkaservice spec: selector: app: kafkaservice ports: - name: port9092 port: 9092 targetPort: 9092 protocol: TCP --- apiVersion: v1 kind: Service metadata: namespace: casb-deployment name: kafkaservice-external labels: app: kafkaservice-external 
spec:
  selector:
    app: kafkaservice
  ports:
  - name: port9093
    port: 9093
    protocol: TCP
    nodePort: 30035
  type: NodePort
</code></pre> <p>I am able to ping the VM i.e. the kubernetes node from my local machine <code>ping 192.168.3.102</code> and I am using nodePort to expose the service.</p> <p>I can also telnet <code>telnet 192.168.3.102 30035</code> and it gives:</p> <pre><code>Trying 192.168.3.102...
Connected to 192.168.3.102.
Escape character is '^]'.
</code></pre> <p>I tried running kafka console consumer and producer from my local machine:</p> <p><strong>Consumer:</strong></p> <pre><code>bin/kafka-console-consumer.sh --bootstrap-server 192.168.3.102:30035 --topic topic-1 --from-beginning
</code></pre> <p><strong>Output:</strong></p> <pre><code>[2019-09-25 12:30:40,716] WARN [Consumer clientId=consumer-1, groupId=console-consumer-20551] Connection to node 1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
</code></pre> <p><strong>Producer:</strong></p> <pre><code>bin/kafka-console-producer.sh --broker-list 192.168.3.102:30035 --topic topic-1
</code></pre> <p><strong>Output:</strong></p> <pre><code>[2019-09-25 12:32:07,958] WARN [Producer clientId=console-producer] Connection to node 1 could not be established. Broker may not be available.
(org.apache.kafka.clients.NetworkClient) </code></pre> <p><strong>Netstat:</strong></p> <pre><code>netstat -tulpn | grep 30035 tcp6 0 0 :::30035 :::* LISTEN 113545/kube-proxy </code></pre> <p>I tried running a python based consumer i.e <a href="https://pypi.org/project/kafka-python/" rel="nofollow noreferrer">kafka-python==1.4.2</a> it gave me the following logs:</p> <pre><code>[2019-09-25T12:15:39+0500] INFO kafka.client Bootstrapping cluster metadata from [('192.168.3.102', 30035, &lt;AddressFamily.AF_INET: 2&gt;)] [2019-09-25T12:15:39+0500] INFO kafka.conn &lt;BrokerConnection node_id=bootstrap host=192.168.3.102:30035 &lt;connecting&gt; [IPv4 ('192.168.3.102', 30035)]&gt;: connecting to 192.168.3.102:30035 [('192.168.3.102', 30035) IPv4] [2019-09-25T12:15:39+0500] INFO kafka.conn &lt;BrokerConnection node_id=bootstrap host=192.168.3.102:30035 &lt;connecting&gt; [IPv4 ('192.168.3.102', 30035)]&gt;: Connection complete. [2019-09-25T12:15:39+0500] INFO kafka.client Bootstrap succeeded: found 1 brokers and 26 topics. [2019-09-25T12:15:39+0500] INFO kafka.conn &lt;BrokerConnection node_id=bootstrap host=192.168.3.102:30035 &lt;connected&gt; [IPv4 ('192.168.3.102', 30035)]&gt;: Closing connection. [2019-09-25T12:15:39+0500] INFO kafka.conn &lt;BrokerConnection node_id=1 host=192.168.3.102:9093 &lt;connecting&gt; [IPv4 ('192.168.3.102', 9093)]&gt;: connecting to 192.168.3.102:9093 [('192.168.3.102', 9093) IPv4] [2019-09-25T12:15:39+0500] INFO kafka.conn Probing node 1 broker version [2019-09-25T12:15:39+0500] ERROR kafka.conn Connect attempt to &lt;BrokerConnection node_id=1 host=192.168.3.102:9093 &lt;connecting&gt; [IPv4 ('192.168.3.102', 9093)]&gt; returned error 111. Disconnecting. [2019-09-25T12:15:39+0500] INFO kafka.conn &lt;BrokerConnection node_id=1 host=192.168.3.102:9093 &lt;connecting&gt; [IPv4 ('192.168.3.102', 9093)]&gt;: Closing connection. 
ConnectionError: 111 ECONNREFUSED [2019-09-25T12:15:40+0500] INFO kafka.conn &lt;BrokerConnection node_id=1 host=192.168.3.102:9093 &lt;connecting&gt; [IPv4 ('192.168.3.102', 9093)]&gt;: connecting to 192.168.3.102:9093 [('192.168.3.102', 9093) IPv4] [2019-09-25T12:15:40+0500] ERROR kafka.conn Connect attempt to &lt;BrokerConnection node_id=1 host=192.168.3.102:9093 &lt;connecting&gt; [IPv4 ('192.168.3.102', 9093)]&gt; returned error 111. Disconnecting. [2019-09-25T12:15:40+0500] INFO kafka.conn &lt;BrokerConnection node_id=1 host=192.168.3.102:9093 &lt;connecting&gt; [IPv4 ('192.168.3.102', 9093)]&gt;: Closing connection. ConnectionError: 111 ECONNREFUSED [2019-09-25T12:15:40+0500] INFO Activity URL collector Exception in activity url collector: NoBrokersAvailable </code></pre> <p>From the logs it seems like the connection was made i.e.</p> <pre><code>&lt;connecting&gt; [IPv4 ('192.168.3.102', 30035)]&gt;: Connection complete. Bootstrap succeeded: found 1 brokers and 26 topics. </code></pre> <p>But then it got disconnected.</p> <p>Please help me out in figuring out what am I missing and how can I resolve this. Thanks.</p>
<p>You're bootstrapping to port 30035 and getting the initial connection, but then the broker's advertised port of 9093, not 30035, is returned for all subsequent connections.</p> <p>You need the NodePort and the advertised port to be the same, or at least both to be externally routable. Additionally, you need port forwarding on your VM if your code is running on your host machine.</p> <p>Note: Confluent and Strimzi Helm Charts exist for setting up Kafka in Kubernetes.</p>
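<p>In practice that usually means giving the broker two listeners and advertising the node address for the external one. A rough sketch of the broker container's environment, assuming an image that maps <code>KAFKA_*</code> env vars onto <code>server.properties</code> settings — the listener names and values here are illustrative, not taken from the question's deployment:</p> <pre><code>containers:
- name: kafkaservice
  env:
  # bind an internal and an external listener inside the container
  - name: KAFKA_LISTENERS
    value: "INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093"
  # advertise the node IP + NodePort so external clients get a routable address
  - name: KAFKA_ADVERTISED_LISTENERS
    value: "INTERNAL://kafkaservice:9092,EXTERNAL://192.168.3.102:30035"
  - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
    value: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
  - name: KAFKA_INTER_BROKER_LISTENER_NAME
    value: "INTERNAL"
</code></pre> <p>With something like this, the bootstrap metadata response hands external clients <code>192.168.3.102:30035</code> instead of the unreachable <code>9093</code>, while in-cluster clients keep using the internal listener.</p>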
<p>In my cluster I'm using <a href="https://github.com/fluxcd/flux" rel="nofollow noreferrer">weave flux</a> along with their flux-helm-operator to manage my cluster the GitOps way.</p> <p>However, when I make a change to the chart in the flux git repository, I often come across the following error message:</p> <pre><code>ts=2019-09-25T11:54:37.604506452Z caller=chartsync.go:328 component=chartsync warning="unable to proceed with release" resource=mychart:helmrelease/mychart release=mychart err="release requires a rollback before it can be upgraded (FAILED)"
</code></pre> <p>I'm not sure what it means in Helm, but in any case I am not supposed to run any helm commands myself since the releases are managed by flux, so I am wondering what the correct way to deal with this error in production is</p> <p>(besides deleting the release and waiting for flux to recreate it).</p> <p>A well-explained answer will be very much appreciated. Thanks.</p>
<h1>Let's dive into the code of <a href="https://github.com/fluxcd/helm-operator" rel="nofollow noreferrer"><code>helm-operator</code></a></h1> <p>Warning <code>unable to proceed with release</code> arises after <a href="https://github.com/fluxcd/helm-operator/blob/801c64c5fde1e0aa228e5012f2f9514311bff6a9/pkg/chartsync/chartsync.go#L325" rel="nofollow noreferrer"><code>GetUpgradableRelease</code></a></p> <blockquote> <pre><code> // GetUpgradableRelease returns a release if the current state of it // allows an upgrade, a descriptive error if it is not allowed, or // nil if the release does not exist. </code></pre> </blockquote> <p>It returns error <code>release requires a rollback before it can be upgraded</code> if release has <code>Status_FAILED</code> state (see <a href="https://github.com/fluxcd/helm-operator/blob/68b5f64c1ba2a4859581a82fa6b16bbc313c02f6/pkg/release/release.go#L89" rel="nofollow noreferrer">release.go#89</a> )</p> <h1><code>UNHEALTHY</code> state blocks release</h1> <p>As <code>flux</code> developers <a href="https://github.com/fluxcd/flux/issues/2265" rel="nofollow noreferrer">mentioned in #2265</a>, there is no way to roll to <code>UNHEALTHY</code> state.</p> <blockquote> <p>This is not a bug but I can see where your expectation is coming from.</p> <p>Flux will only move healthy releases forward, one of the reasons for this is to ensure we do not end up in a loop of failure, the <code>--force</code> flag is thus not intended to be used to force the upgrade of an unhealthy resource (you should use the <a href="https://github.com/fluxcd/flux/blob/8292179855e15370fb3d3b03135a61b54f00ae42/site/helm-integration.md#rollbacks" rel="nofollow noreferrer">rollback feature</a> for this) but was developed to make it possible to upgrade charts with e.g. 
backwards incompatible changes (changes on immutable fields for example, which require a resource to be removed first, see <a href="https://github.com/fluxcd/flux/pull/1760" rel="nofollow noreferrer">#1760</a>).</p> <p><strong>Conclusion:</strong> the <code>forceUpgrade</code> is honoured, but can not be used to force the upgrade of a release in an <code>UNHEALTHY</code> state. </p> </blockquote> <h1><a href="https://github.com/fluxcd/flux/blob/8292179855e15370fb3d3b03135a61b54f00ae42/site/helm-integration.md#rollbacks" rel="nofollow noreferrer">Rollback</a></h1> <p>As the author recommends, you should use the <a href="https://github.com/fluxcd/flux/blob/8292179855e15370fb3d3b03135a61b54f00ae42/site/helm-integration.md#rollbacks" rel="nofollow noreferrer"><code>rollback</code></a> feature:</p> <blockquote> <p>From time to time a release made by the Helm operator may fail, it is possible to automate the rollback of a failed release by setting <code>.spec.rollback.enable</code> to true on the HelmRelease resource.</p>
timeout: 300 # If set, will wait until all Pods, PVCs, Services, and minimum # number of Pods of a Deployment are in a ready state before # marking the release as successful. It will wait for as long # as the set timeout. wait: false </code></pre>
<p>I have <code>minikube</code> and <code>kubectl</code> installed:</p> <pre><code>$ minikube version minikube version: v1.4.0 commit: 7969c25a98a018b94ea87d949350f3271e9d64b6 $ kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I have then followed the instructions from <a href="https://helm.sh/docs/using_helm/" rel="noreferrer">https://helm.sh/docs/using_helm/</a>:</p> <ol> <li><p>I have downloaded <a href="https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz" rel="noreferrer">https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz</a></p></li> <li><p>I have run</p></li> </ol> <pre><code>$ tar -xzvf Downloads/helm-v2.13.1-linux-amd64.tar.gz linux-amd64/ linux-amd64/LICENSE linux-amd64/tiller linux-amd64/helm linux-amd64/README.md </code></pre> <p>But now, if I check my <code>helm</code> version, I get this:</p> <pre><code>$ helm version Client: &amp;version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"} Error: could not find tiller </code></pre> <p>I have tried running <code>helm init</code>, but get the following:</p> <pre><code>$ helm init $HELM_HOME has been configured at /home/SERILOCAL/&lt;my-username&gt;/.helm. Error: error installing: the server could not find the requested resource </code></pre> <p>How can I get <code>helm</code> to initialise correctly?</p>
<p>The current helm version does not work with kubernetes version 1.16.0.</p> <p>You can downgrade kubernetes to version 1.15.3:</p> <pre><code>minikube start --kubernetes-version 1.15.3
helm init
</code></pre> <p>or use my solution to fix it on version 1.16.0.</p> <p>You have to create a tiller <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer"><strong>Service Account</strong></a> and a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="noreferrer"><strong>ClusterRoleBinding</strong></a>.</p> <p>You can simply do that by using these commands:</p> <pre><code>kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
</code></pre> <p>And then install tiller itself:</p> <pre><code>helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -
</code></pre>
<p>I know that I can define something like this to get the pod name in my containers:</p> <pre><code>env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
</code></pre> <p>But when I run <code>$ kubectl get pods</code></p> <p>I get a different result, something like <code>uipod-5d6795db87-cxxkg</code>, which corresponds to <code>&lt;replicaset name&gt;-cxxkg</code>. </p> <p>Is it possible to get that full name (<code>uipod-5d6795db87-cxxkg</code>) as an environment variable, instead of only the pod name (<code>uipod</code>)?</p> <p>Thanks a lot</p>
<p>You don't need to explicitly set an environment variable with the pod name; it's already present inside the pod as an environment variable called <code>HOSTNAME</code>.</p> <p>For example:</p> <pre><code>$ kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
$
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7bb7cd8db5-kddbs   1/1     Running   0          12s
$
$ kubectl exec -it nginx-7bb7cd8db5-kddbs bash
root@nginx-7bb7cd8db5-kddbs:/#
root@nginx-7bb7cd8db5-kddbs:/# env | grep HOSTNAME
HOSTNAME=nginx-7bb7cd8db5-kddbs
root@nginx-7bb7cd8db5-kddbs:/#
</code></pre> <p><em>NOTE: As you can see, the <code>HOSTNAME</code> environment variable already has the exact pod name set.</em></p>
<p>We are currently setting up a microservice architecture using Spring Boot and Netflix components, and for deployment we are planning to go with an AWS Kubernetes (EKS) setup. We are in a dilemma over whether to use Netflix's Eureka &amp; Ribbon services on Kubernetes for service discovery or to use Kubernetes' own service discovery mechanism. The advantage I see in using k8s service discovery is that horizontal scaling becomes easy. Any thoughts on this would really help us take it in the right direction.</p> <p>Thanks. </p>
<p>If you don't have a strong case for client-side load balancing I would not use Eureka and Ribbon. I am getting the load balancing and service discovery from my platform (k8s) for free. If not for client-side load balancing, there is no value that Eureka and Ribbon bring other than beefing up your resume :). On the other hand, they add another stack that you need to maintain in the long term.</p> <p><a href="https://stackoverflow.com/a/40568412/6785908">This</a> is a related question</p>
<p>I would like to develop a playbook rule for addressing how to manage socket securing for Docker and Kubernetes from either the standpoint of Docker For Mac or MiniKube- after auto-updates to any of the pieces of the puzzle. Maybe we need to throw out there the LLVM or (VM in question if we say use Virtual Box <em>and</em> a particular Unix/Linux flavor. Then we may also need look at enforcing SELinux or AppArmor or Seccomp. I just want to see where this rabbit hole may end us up in today, 2019-09-25, as I just embarked on the Kubernetes quest with minishift, co, and micro-services architecture with RHEL (I may sway away from the OS image to Alpine or something someone may suggest as a suitable alternative). The goal here is to provide support to a team from a system administration point of view, potentially from the stance of calming some long lived traditional experience in the world concerns with real manageable solutions to infrastructure migrations for larger business. </p> <p>RHEL D0092 course work. Months of reading documentation with Docker and watching the past four updates on my development machine go by without a workable solution, knowing the inevitability was to get a Kubernetes cluster going after chewing on Kerrisk's bible for a while on the subject matter- Datagrams and Stream sockets and the like (and how people like to port it well for their individual use cases). I'm a novice system admin here, don't let me fool you. 
I just am not afraid to ride the big waves, that's all.</p> <pre class="lang-sh prettyprint-override"><code>kubectl --namespace=kubedemo set image deployment/myvertex myvertex=burr/myvertx:v2 </code></pre> <p>or </p> <pre class="lang-py prettyprint-override"><code>import subprocess import sys import os if len(sys.argv) &gt; 1: name = sys.argv[1] else: # fd = input(the_path_passed_stdin) name = input("Update path/name for docker build:") # test this with a "dryrun" first, you don't need a bunch of image clutter or control -c's # No such file or directory trace means we need to call docker directly proc = subprocess.run([f"error_test docker build -it {name} ."], encoding='utf-8', stdout=subprocess.PIPE) </code></pre> <p>Just a thought on automation of a playbook rule in some type of sequential fashion- in python if I can get suggestions, but bash is fine too.</p>
<p>Kubernetes works over an HTTP API and normal IP sockets, not a local domain socket. So there isn't really anything to lock down. DfM's control socket is already tightly locked down to the active GUI user because DfM runs inside a Mac app container (totally different use of the term "container"). Basically there isn't anything you need to do, it's already as safe as it's getting.</p>
<p>I am aware that we can specify CPU and memory allocation for a Container using <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">requests and limits</a> in Kubernetes. But I would like to know how much disk space gets allocated to each container? Also, is there a way to specify this similar to CPU and memory allocation. </p> <p>Thanks in advance.</p>
<p>Similarly to the CPU and memory resources, you can use <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#local-ephemeral-storage" rel="nofollow noreferrer">ephemeral storage</a> to specify the disk resources a container may use. Requests and limits can also be used with ephemeral storage.</p> <p><strong>Update</strong>:</p> <p>Please refer to <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">volumes</a>: </p> <blockquote> <p>On-disk files in a Container are ephemeral, which presents some problems for non-trivial applications when running in Containers. First, when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state. Second, when running Containers together in a Pod it is often necessary to share files between those Containers. The Kubernetes Volume abstraction solves both of these problems.</p> </blockquote> <p>To store your data persistently, use one of the volume types other than the ephemeral ones (such as <code>emptyDir</code>).</p>
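<p>For the first part, a minimal sketch of requesting and capping local disk for a container — the pod name and sizes here are arbitrary placeholders:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: storage-demo   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        ephemeral-storage: "1Gi"  # scheduler reserves this much node disk
      limits:
        ephemeral-storage: "2Gi"  # pod may be evicted if it uses more
</code></pre>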
<p>The github repo of Prometheus Operator <a href="https://github.com/coreos/prometheus-operator/" rel="noreferrer">https://github.com/coreos/prometheus-operator/</a> project says that</p> <blockquote> <p>The Prometheus Operator makes the Prometheus configuration Kubernetes native and manages and operates Prometheus and Alertmanager clusters. It is a piece of the puzzle regarding full end-to-end monitoring.</p> <p>kube-prometheus combines the Prometheus Operator with a collection of manifests to help getting started with monitoring Kubernetes itself and applications running on top of it.</p> </blockquote> <p>Can someone elaborate this?</p>
<p>I've always had this exact same question/repeatedly bumped into both, but tbh reading the above answer didn't clarify it for me/I needed a short explanation. I found this github issue that just made it crystal clear to me. <a href="https://github.com/coreos/prometheus-operator/issues/2619" rel="noreferrer">https://github.com/coreos/prometheus-operator/issues/2619</a></p> <p>Quoting nicgirault of GitHub:</p> <blockquote> <p>At last I realized that prometheus-operator chart was packaging kube-prometheus stack but it took me around 10 hours playing around to realize this.</p> </blockquote> <p>**Here's my summarized explanation: <br> &quot;kube-prometheus&quot; and &quot;Prometheus Operator Helm Chart&quot; both do the same thing: <br></p> <ol> <li>Basically the Ingress/Ingress Controller Concept, applied to Metrics/Prometheus Operator. <br></li> <li>Both are a means of easily configuring, installing, and managing a huge distributed application (Kubernetes Prometheus Stack) on Kubernetes:** <br> <br><strong>What is the Entire Kube Prometheus Stack you ask?</strong> <br>Prometheus, Grafana, AlertManager, CRDs (Custom Resource Definitions), Prometheus Operator(software bot app), IaC Alert Rules, IaC Grafana Dashboards, IaC ServiceMonitor CRDs (which auto-generate Prometheus Metric Collection Configuration and auto hot import it into Prometheus Server) <br>(Also when I say easily configuring I mean 1,000-10,000++ lines of easy for humans to understand config that generates and auto manage 10,000-100,000 lines of machine config + stuff with sensible defaults + monitoring configuration self-service, distributed configuration sharding with an operator/controller to combine config + generate verbose boilerplate machine-readable config from nice human-readable config.</li> </ol> <p><strong>If they achieve the same end goal, you might ask what's the difference between them?</strong> <br> <a href="https://github.com/coreos/kube-prometheus" 
rel="noreferrer">https://github.com/coreos/kube-prometheus</a> <br> <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/prometheus-operator</a> <br> Basically, CoreOS's kube-prometheus deploys the Prometheus Stack using Ksonnet. <br> Prometheus Operator Helm Chart wraps kube-prometheus / achieves the same end result but with Helm.</p> <p><strong>So which one to use?</strong> <br> Doesn't matter + they achieve the same end result + shouldn't be crazy difficult to start with 1 and switch to the other. <br><br> Helm tends to be faster to learn/develop basic mastery of. <br><br> Ksonnet is harder to learn/develop basic mastery of, but: <br></p> <ul> <li>it's more idempotent (better for CICD automation) (but it's only a difference of 99% idempotent vs 99.99% idempotent.)<br></li> <li>has built-in templating which means that if you have multiple clusters you need to manage / that you want to always keep consistent with each other. Then you can leverage ksonnet's templating to manage multiple instances of the Kube Prometheus Stack (for multiple envs) using a DRY code base with lots of code reuse. (If you only have a few envs and Prometheus doesn't need to change often it's not completely unreasonable to keep 4 helm values files in sync by hand. I've also seen Jinja2 templating used to template out helm values files, but if you're going to bother with that you may as well just consider ksonnet.)</li> </ul>
<p>As per Prometheus' storage.md, the recommendation is not to use NFS storage as a persistent volume for Prometheus.</p> <p>But solutions like the prometheus operator and OpenShift show examples which use NFS as persistent volumes for Prometheus.</p> <p>So what am I missing here? If NFS is not recommended, then why do these tools share examples using NFS as the storage option for Prometheus?</p> <p>Does anyone know what the NFS alternative would be with NetApp/Trident for Prometheus?</p>
<p>The example in the prom-operator docs is just a hypothetical to show how to manually control the storage provisioning. NFS is generally an option of last resort in all cases :) Check out <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a> for more general information on how to use each of the various PV plugins (or if none of those work, look up the CSI stuff), but for NetApp you would probably use the iSCSI interface.</p>
<p>I want to get a comma-separated list of all kubernetes nodes for use in the <code>NO_PROXY</code> env variable, and wrote the following command:</p> <pre><code>kgnoowide | grep -v NAME | awk '{printf "%s,", $1}'
</code></pre> <p>This almost works, with one little detail missing: with three nodes like</p> <pre><code>node1
node2
node3
</code></pre> <p>my awk command returns <code>node1,node2,node3,</code></p> <p>So the trailing comma is wrong; the perfect output would be <code>node1,node2,node3</code>. I know this is a minor problem that could easily be solved by simply removing the comma when pasting, but I'm interested in how to solve this with <code>awk</code>, which is itself a very powerful tool, so I assume it should be possible. It's also relevant for later usage in scripts that e.g. generate a <code>~/.bashrc</code> entry with the correct <code>NO_PROXY</code> variable. </p>
<p>You don't need grep there</p> <pre><code>kgnoowide | awk '!/NAME/{out=(out sep $0);sep=","} END{print out}' </code></pre>
<p>I'd like to direct traffic from a load balancer within Kubernetes to a deployment. However, rather than attempting to achieve a uniform load across all pods of a deployment, I'd like each client to establish and maintain a connection to one specific pod. I'll be sending GRPC requests to a stateful instance on the pod, and it's critical that the client's GRPC requests are not sent to other pods.</p> <p>My current implementation is probably needlessly complex. Here's the pseudocode:</p> <ol> <li>Cluster initialized with a custom python scheduler.</li> <li>Several pods with the stateful application are created, each with a node port service and unique node port.</li> <li>Client talks to the python scheduler using a socket interface and is assigned a port.</li> <li>Client talks to the pod using the assigned nodeport.</li> <li>Client (or the scheduler) terminates the pod.</li> </ol> <p>I'm limited by the number of ports and am unable to direct traffic using AKS due to their node port limitations. The advantage of the scheduler is that the client can request pods with varying resources, but it's too much to test and maintain.</p> <p>Is there a better solution to direct external traffic to individual stateful pods?</p>
<p>The default iptables service proxy implementation uses a very simple randomized round-robin algorithm for selecting which pod to use. If you use the IPVS implementation instead, that does offer a lot more options, though it is unlikely to be available on a hosted provider like AKS. That would leave you with a userspace proxy that supports gRPC, like Traefik or Istio Ingress. Picking one is out of scope for SO, but most of those proxies support some form of connection stickiness.</p>
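<p>As an aside on the kube-proxy options mentioned above: the iptables mode already keeps a single TCP connection pinned to one backend for the connection's lifetime, and a plain Service can additionally pin all connections from the same client IP to the same pod via <code>sessionAffinity</code>. A minimal sketch, with a made-up service name and port:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: grpc-backend         # hypothetical name
spec:
  selector:
    app: grpc-backend
  sessionAffinity: ClientIP  # same source IP goes to the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # affinity window (the default, 3 hours)
  ports:
  - port: 50051
    targetPort: 50051
</code></pre> <p>This only gives L4 stickiness by source IP; it won't survive pod restarts or let clients pick a pod, which is where the gRPC-aware proxies come in.</p>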
<p>I have a node.js application that sends out gmail notifications for certain events. We recently switched from being directly hosted on DO to a Kubernetes cluster. After we switched, we started getting invalid login exceptions in the Node.js app and critical security alerts from Google. After researching the issue, we turned the "Less secure app access" setting on. Now, we are getting the error message that says "Please log in via your web browser and then try again."</p> <p>I'm not sure where to go from here since I can't log in with a web browser from my Kubernetes cluster.</p> <p>My code looks like this.</p> <pre><code>const nodemailer = require('nodemailer'); const mailer = nodemailer.createTransport(config.email); ... req.app.locals.mailer.sendMail({ from: 'myaddress@gmail.com', to: emails, subject: subject + " " + serverName, text: message, }); </code></pre> <p>Note that the code was working before the move to kubernetes.</p> <p>Thanks in advance for your help.</p>
<p>Answered in comments, user did need to log into Google to acknowledge a blocker message.</p>
<p>I am trying to create a few <code>Airflow</code> connections by executing the command in the <code>entrypoint.sh</code> file, which is passed as the entrypoint in the <code>dockerfile</code>. As these database credentials are very sensitive, is it possible to store them securely in <code>kubernetes</code> or some other place on <code>GCP</code>, and populate them in <code>entrypoint.sh</code> based on whether the environment is <code>local</code>, <code>staging</code> or <code>production</code>?</p>
<p>You can store them as a Kubernetes Secret and expose them as environment variables that will be accessible by entrypoint.sh.</p> <p>Remember that a Kubernetes Secret just encodes the data as base64, not encrypting it; otherwise you can use <a href="https://github.com/bitnami-labs/sealed-secrets" rel="nofollow noreferrer">sealed secrets</a>.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
  restartPolicy: Never
</code></pre>
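<p>For completeness, a sketch of the <code>mysecret</code> object that pod references — the values are placeholders, and <code>stringData</code> lets you write them in plain text while the API server stores them base64-encoded:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  username: airflow    # placeholder
  password: changeme   # placeholder
</code></pre> <p>You would then keep one such Secret per environment (local, staging, production) and apply the right one to each cluster, instead of baking credentials into the image.</p>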
<p>I have a configmap which contains a toml file</p> <p>something like </p> <pre><code>apiVersion: v1 kind: ConfigMap data: burrow.toml: | [zookeeper] servers=[abc.2181, cde.2181] timeout=6 root-path="/burrow" </code></pre> <p>When I am trying to create a helm chart to generate this configmap, I was putting something like:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: burrow.toml: | [zookeeper] servers={{ .Values.config.zookeeperServers }} timeout=6 root-path="/burrow" </code></pre> <p>and in the values.yaml, I put:</p> <pre><code> zookeeperServers: [ "abc.2181", "cde.2181"] </code></pre> <p>However, the rendered value became:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: burrow.toml: | [zookeeper] servers=[abc.2181 cde.2181] timeout=6 root-path="/burrow" </code></pre> <p>The comma is missing. Is there a good way to template this correctly? Thanks!</p>
<p>Here is one solution: in the values.yaml, put</p> <pre><code>zookeeperServers: |
  [ "abc.2181", "cde.2181"]
</code></pre> <p>The block scalar makes the value a literal string, so it is interpolated verbatim, which solves the problem.</p>
<p>I have a configmap which contains a toml file</p> <p>something like </p> <pre><code>apiVersion: v1 kind: ConfigMap data: burrow.toml: | [zookeeper] servers=[abc.2181, cde.2181] timeout=6 root-path="/burrow" </code></pre> <p>When I am trying to create a helm chart to generate this configmap, I was putting something like:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: burrow.toml: | [zookeeper] servers={{ .Values.config.zookeeperServers }} timeout=6 root-path="/burrow" </code></pre> <p>and in the values.yaml, I put:</p> <pre><code> zookeeperServers: [ "abc.2181", "cde.2181"] </code></pre> <p>However, the rendered value became:</p> <pre><code>apiVersion: v1 kind: ConfigMap data: burrow.toml: | [zookeeper] servers=[abc.2181 cde.2181] timeout=6 root-path="/burrow" </code></pre> <p>The comma is missing. Is there a good way to template this correctly? Thanks!</p>
<p>Try this, <code>servers=[{{ .Values.config.zookeeperServers | join "," }}]</code>. Quoting could get weird if you put TOML metachars in those values, but for simple things it should work.</p>
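<p>Dropped into the ConfigMap template from the question, that would look something like this (<code>join</code> and <code>toJson</code> come from Helm's built-in template function library):</p> <pre><code>data:
  burrow.toml: |
    [zookeeper]
    # join renders the list unquoted: servers=[abc.2181,cde.2181]
    servers=[{{ .Values.config.zookeeperServers | join "," }}]
    # if you need quoted strings, toJson emits ["abc.2181","cde.2181"],
    # which is also a valid TOML array:
    # servers={{ .Values.config.zookeeperServers | toJson }}
    timeout=6
    root-path="/burrow"
</code></pre>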