<p>Hi, I am trying to expose 5 ports for an Informix container which is within a StatefulSet. It has a headless service attached, to allow other internal stateless sets to communicate with it internally. </p>
<p>I can ping the headless service <code>informix-set-service</code> from my <code>informix-0</code> pod and other pods, but when I try <code>nmap -p 9088 informix-set-service</code> the port is listed as closed. I am assuming this is because my YAML is wrong, but I can't for the life of me find where it's wrong. </p>
<p>It appears that the headless service is indeed attached and pointing at the correct StatefulSet, and within the minikube dashboard everything looks correct.</p>
<p><a href="https://i.stack.imgur.com/kdW5M.png" rel="nofollow noreferrer">Service minikube dash screenshot</a></p>
<pre><code>informix@informix-0:/$ nmap -p 9088 informix-set-service
Starting Nmap 6.47 ( http://nmap.org ) at 2019-08-20 03:50 UTC
Nmap scan report for informix-set-service (172.17.0.7)
Host is up (0.00011s latency).
rDNS record for 172.17.0.7: informix-0.informix.default.svc.cluster.local
PORT STATE SERVICE
9088/tcp closed unknown
Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds
informix@informix-0:/$ nmap -p 9088 localhost
Starting Nmap 6.47 ( http://nmap.org ) at 2019-08-20 03:50 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00026s latency).
Other addresses for localhost (not scanned): 127.0.0.1
PORT STATE SERVICE
9088/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
</code></pre>
<p>Anyone got any ideas?</p>
<h1>Deployment yaml snippet:</h1>
<pre><code>###############################################################################
# Informix Container
###############################################################################
#
# Headless service for the Informix container StatefulSet.
# A headless service (clusterIP set to None) creates
# DNS records for the Informix container hosts.
#
apiVersion: v1
kind: Service
metadata:
name: informix-set-service
labels:
component: informix-set-service
provider: IBM
spec:
clusterIP: None
ports:
- port: 9088
name: informix
- port: 9089
name: informix-dr
- port: 27017
name: mongo
- port: 27018
name: rest
- port: 27883
name: mqtt
selector:
component: informix-set-service
---
#
# Service for the Informix container StatefulSet.
# This is used as an external entry point for
# the ingress controller.
#
apiVersion: v1
kind: Service
metadata:
name: informix-service
labels:
component: informix-service
provider: 4js
spec:
ports:
- port: 9088
name: informix
- port: 9089
name: informix-dr
- port: 27017
name: mongo
- port: 27018
name: rest
- port: 27883
name: mqtt
selector:
component: informix-set-service
---
#
# StatefulSet for Informix cluster.
# A StatefulSet gives pods predictable hostnames, and external storage
# is bound to the pods within the StatefulSet for their lifetime.
# Replica count configures number of Informix Server containers.
#
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: informix
labels:
app: informix
component: db
release: "12.10"
provider: IBM
spec:
serviceName: informix
#replicas: 2 #keep it simple for now...
selector:
matchLabels:
component: informix-set-service
template:
metadata:
labels:
component: informix-set-service
spec:
containers:
- name: informix
image: ibmcom/informix-innovator-c:12.10.FC12W1IE
tty: true
securityContext:
privileged: true
env:
- name: LICENSE
value: "accept"
- name: DBDATE
value: "DMY4"
- name: SIZE
value: "custom"
- name: DB_USER
value: "db_root"
- name: DB_NAME
value: "db_main"
- name: DB_PASS
value: "db_pass123"
ports:
- containerPort: 9088
name: informix
- containerPort: 9089
name: informix-dr
- containerPort: 27017
name: mongo
- containerPort: 27018
name: rest
- containerPort: 27883
name: mqtt
volumeMounts:
- name: data
mountPath: /opt/ibm/data
- name: bind-dir-mnt
mountPath: /mnt
- name: bind-patch-informix-setup-sqlhosts
mountPath: /opt/ibm/scripts/informix_setup_sqlhosts.sh
- name: bind-file-dbexport
mountPath: /opt/ibm/informix/bin/dbexport
- name: bind-file-dbimport
mountPath: /opt/ibm/informix/bin/dbimport
- name: bind-file-ontape
mountPath: /opt/ibm/informix/bin/ontape
- name: bind-file-informix-config
mountPath: /opt/ibm/data/informix_config.custom
- name: bind-file-sqlhosts
mountPath: /opt/ibm/data/sqlhosts
volumes:
- name: data
persistentVolumeClaim:
claimName: ifx-data
- name: bind-dir-mnt
hostPath:
path: <PROJECTDIR>/resources/informix
type: DirectoryOrCreate
- name: bind-patch-informix-setup-sqlhosts
hostPath:
path: <PROJECTDIR>/containers/informix/resources/scripts/informix_setup_sqlhosts.sh
type: File
- name: bind-file-dbexport
hostPath:
path: <PROJECTDIR>/containers/informix/resources/bin/dbexport
type: File
- name: bind-file-dbimport
hostPath:
path: <PROJECTDIR>/containers/informix/resources/bin/dbimport
type: File
- name: bind-file-ontape
hostPath:
path: <PROJECTDIR>/containers/informix/resources/bin/ontape
type: File
- name: bind-file-informix-config
hostPath:
path: <PROJECTDIR>/containers/informix/resources/informix_config.custom
type: File
- name: bind-file-sqlhosts
hostPath:
path: <PROJECTDIR>/containers/informix/resources/sqlhosts.k8s
type: File
---
</code></pre>
<p><strong>Edit 1: (added output of ss -lnt)</strong></p>
<pre><code>informix@informix-0:/$ ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 0 127.0.0.1:9088 *:*
LISTEN 0 0 127.0.0.1:9089 *:*
LISTEN 0 0 172.17.0.7:27017 *:*
LISTEN 0 0 172.17.0.7:27018 *:*
LISTEN 0 0 172.17.0.7:27883 *:*
LISTEN 0 0 *:22 *:*
LISTEN 0 0 :::22 :::*
</code></pre>
| <p>From the <code>ss</code> output, you are listening on 127.0.0.1, rather than all interfaces:</p>
<pre><code>informix@informix-0:/$ ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 0 127.0.0.1:9088 *:*
LISTEN 0 0 127.0.0.1:9089 *:*
</code></pre>
<p>You need to adjust your application configuration to listen on something like <code>0.0.0.0</code> to enable it to be accessed from outside of the pod.</p>
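<p>A minimal sketch (in Python, purely illustrative — Informix itself is configured through its own settings, typically the hostname used in the sqlhosts entries) of why the bind address matters:</p>

```python
import socket

def reachable(bind_addr: str, connect_addr: str) -> bool:
    """Bind a listener to bind_addr, then try to connect via connect_addr."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, 0))            # port 0: let the OS pick a free port
    port = srv.getsockname()[1]
    srv.listen(1)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.settimeout(0.5)
    try:
        cli.connect((connect_addr, port))
        ok = True
    except OSError:
        ok = False
    finally:
        cli.close()
        srv.close()
    return ok

# A listener bound to 0.0.0.0 accepts connections on every interface,
# including the pod IP that the Service DNS name resolves to; a listener
# bound to 127.0.0.1 only accepts loopback connections, which is exactly
# what the ss output above shows for ports 9088 and 9089.
```
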
|
<p>I am deploying simple hello-world microservice that listens on port given by following variable:</p>
<pre><code>PORT = os.environ.get("TEST_SERVICE_PORT", "").strip() or "50001"
</code></pre>
<p>I deployed it without configuring any variables on the container, and expected it to serve on the default port 50001, but instead got this error: </p>
<pre><code>socket.gaierror: [Errno -8] Servname not supported for ai_socktype
</code></pre>
<p>When I logged into the container and checked the environment, I found that it is filled with different variables (some of them belong to other services), and the <code>TEST_SERVICE_PORT</code> variable exists but definitely does not contain a port:</p>
<pre><code>root@test-service-697464787c-xpd6k:/opt/app/src# env | grep TEST
TEST_SERVICE_PORT_7002_TCP_ADDR=10.145.23.43
TEST_SERVICE_SERVICE_PORT_GRPC_API=7002
TEST_SERVICE_PORT_7002_TCP_PORT=7002
TEST_SERVICE_PORT=tcp://10.145.23.43:7002
TEST_SERVICE_SERVICE_HOST=10.145.23.43
TEST_SERVICE_PORT_7002_TCP=tcp://10.145.23.43:7002
TEST_SERVICE_PORT_7002_TCP_PROTO=tcp
TEST_SERVICE_SERVICE_PORT=7002
</code></pre>
<p>I have the following questions and was not able to find answers to them in the documentation:</p>
<p>What created these variables? Could I somehow isolate the container from them? Or are they set intentionally by Kubernetes, and serve some purpose I don't know about? How should I name my configuration variables to avoid naming collisions? Should I use those variables instead of using service names as hostnames? </p>
<p>There is <a href="https://kubernetes.io/docs/concepts/containers/container-environment-variables/" rel="nofollow noreferrer">the following documentation</a>, but it only explains the variables <code>TEST_SERVICE_SERVICE_PORT</code> and <code>TEST_SERVICE_SERVICE_HOST</code>. What do <code>TEST_SERVICE_PORT</code> and the others mean then? What adds <code>TEST_SERVICE_SERVICE_PORT_GRPC_API</code>?</p>
<p>There is also Istio and Ambassador gateway installed on cluster that I'm using.</p>
| <blockquote>
<p>Q: What created this variables? </p>
</blockquote>
<p> <strong>A</strong>: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">discovery-service</a> (more at the end)</p>
<blockquote>
<p>Q: Could I somehow isolate container from them? </p>
</blockquote>
<p> <strong>A</strong>: If you want to disable that, you can set <code>enableServiceLinks: false</code> on your <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#podspec-v1-core" rel="nofollow noreferrer">PodSpec</a></p>
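<p>A minimal sketch of where that field goes, assuming a Deployment layout like the asker's (the names and image are illustrative):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-service
spec:
  template:
    spec:
      enableServiceLinks: false   # suppress the injected *_PORT / *_SERVICE_* variables
      containers:
      - name: test-service
        image: test-service:latest   # illustrative image name
```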
<blockquote>
<p>Q: Or are they set intentionally by kubernetes, and serve some purpose I don't know about? </p>
</blockquote>
<p> <strong>A</strong>: No, they are just there to give you options besides DNS names; Kubernetes itself does not use them</p>
<blockquote>
<p>Q: How should I name my configuration variables to avoid naming collisions? </p>
</blockquote>
<p> <strong>A</strong>: Either use <code>enableServiceLinks: false</code> or use a naming pattern that does not conflict with the pattern described in the docs; I usually prefer a suffix like <code>_SVC_PORT</code> when I need something like this </p>
<blockquote>
<p>Q: Should I use that variables istead of using services names as hostnames?</p>
</blockquote>
<p> <strong>A</strong>: From the docs: "You can (and almost always should) set up a DNS service for your Kubernetes cluster"</p>
<blockquote>
<p>Q: There is following documentation, but it only explains variable TEST_SERVICE_SERVICE_PORT and TEST_SERVICE_SERVICE_HOST. What TEST_SERVICE_PORT and others mean then? What adds TEST_SERVICE_SERVICE_PORT_GRPC_API?</p>
</blockquote>
<p> <strong>A</strong>: You have a named port called <code>grpc-api</code>; in that case the variable uses the port name instead of protocol + port number. Note: I could not find any references in the docs for that, so I had to dig into the <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/envvars/envvars.go#L53" rel="nofollow noreferrer">code</a></p>
<hr>
<p>From docs <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">discovery-service</a> </p>
<blockquote>
<p>When a Pod is run on a Node, the kubelet adds a set of environment
variables for each active Service. ... simpler {SVCNAME}_SERVICE_HOST
and {SVCNAME}_SERVICE_PORT variables, where the Service name is
upper-cased and dashes are converted to underscores... </p>
<p>For example,
the Service "redis-master" which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11, produces the following
environment variables:</p>
<pre><code>REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
</code></pre>
</blockquote>
<p>From k8s api <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#podspec-v1-core" rel="nofollow noreferrer">PodSpec</a>/EnableServiceLinks:</p>
<blockquote>
<p><strong>EnableServiceLinks</strong> indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true.</p>
</blockquote>
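<p>As a practical workaround for the original bug, the port lookup can also tolerate a Docker-link style value (a sketch; only the variable name <code>TEST_SERVICE_PORT</code> is taken from the question):</p>

```python
import os

def service_port(name: str, default: int) -> int:
    """Read a port from the environment, tolerating the Docker-link style
    value (e.g. 'tcp://10.145.23.43:7002') that Kubernetes injects."""
    raw = os.environ.get(name, "").strip()
    if not raw:
        return default
    if "://" in raw:
        # 'tcp://10.145.23.43:7002' -> 7002
        return int(raw.rsplit(":", 1)[1])
    return int(raw)
```
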
|
<p><strong>Current flow:</strong></p>
<blockquote>
<p>incoming request (/sso-kibana) --> Envoy proxy --> /sso-kibana</p>
</blockquote>
<p><strong>Expected flow:</strong></p>
<blockquote>
<p>incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper
-->
keycloak </p>
<p>--> If not logged in --> keycloak login page --> /sso-kibana</p>
<p>--> If Already logged in --> /sso-kibana</p>
</blockquote>
<p>I deployed keycloak-gatekeeper as a k8s cluster which has the following configuration:</p>
<p><strong>keycloak-gatekeeper.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: keycloak-gatekeeper
name: keycloak-gatekeeper
spec:
selector:
matchLabels:
app: keycloak-gatekeeper
replicas: 1
template:
metadata:
labels:
app: keycloak-gatekeeper
spec:
containers:
- image: keycloak/keycloak-gatekeeper
imagePullPolicy: Always
name: keycloak-gatekeeper
ports:
- containerPort: 3000
args:
- "--config=/keycloak-proxy-poc/keycloak-gatekeeper/gatekeeper.yaml"
- "--enable-logging=true"
- "--enable-json-logging=true"
- "--verbose=true"
volumeMounts:
-
mountPath: /keycloak-proxy-poc/keycloak-gatekeeper
name: secrets
volumes:
- name: secrets
secret:
secretName: gatekeeper
</code></pre>
<p><strong>gatekeeper.yaml</strong> </p>
<pre><code>discovery-url: https://keycloak/auth/realms/MyRealm
enable-default-deny: true
listen: 0.0.0.0:3000
upstream-url: https://kibana.k8s.cluster:5601
client-id: kibana
client-secret: d62e46c3-2a65-4069-b2fc-0ae5884a4952
</code></pre>
<p><strong>Envoy.yaml</strong></p>
<pre><code>- name: kibana
hosts: [{ socket_address: { address: keycloak-gatekeeper, port_value: 3000}}]
</code></pre>
<p><strong>Problem:</strong> </p>
<p>I am able to invoke the keycloak login on /Kibana, but after login the user does not go to the /Kibana URL, i.e. the Kibana dashboard is not loading.</p>
<p><strong>Note:</strong> Kibana is also running as k8s cluster.</p>
<p><strong>References:</strong><br>
<a href="https://medium.com/@vcorreaniche/securing-serverless-services-in-kubernetes-with-keycloak-gatekeeper-6d07583e7382" rel="nofollow noreferrer">https://medium.com/@vcorreaniche/securing-serverless-services-in-kubernetes-with-keycloak-gatekeeper-6d07583e7382</a> </p>
<p><a href="https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d" rel="nofollow noreferrer">https://medium.com/stakater/proxy-injector-enabling-sso-with-keycloak-on-kubernetes-a1012c3d9f8d</a></p>
<p><strong>Update 1:</strong></p>
<p>I'm able to invoke the keycloak login on /sso-kibana, but after entering credentials it gives a 404. The flow is the following:</p>
<p><strong>Step 1.</strong> Clicked on <a href="http://something/sso-kibana" rel="nofollow noreferrer">http://something/sso-kibana</a><br>
<strong>Step 2.</strong> Keycloak login page opens at <a href="https://keycloak/auth/realms/THXiRealm/protocol/openid-connect/auth" rel="nofollow noreferrer">https://keycloak/auth/realms/THXiRealm/protocol/openid-connect/auth</a>?...<br>
<strong>Step 3.</strong> After entering credentials redirected to this URL <a href="https://something/sso-kibana/oauth/callback?state=890cd02c-f" rel="nofollow noreferrer">https://something/sso-kibana/oauth/callback?state=890cd02c-f</a>...<br>
<strong>Step 4.</strong> 404</p>
<p><strong>Update 2:</strong></p>
<p>404 error was solved after I added a new route in Envoy.yaml</p>
<p><strong>Envoy.yaml</strong> </p>
<pre><code> - match: { prefix: /sso-kibana/oauth/callback }
route: { prefix_rewrite: "/", cluster: kibana.k8s.cluster }
</code></pre>
<p>Therefore, Expected flow (as shown below) is working fine now.</p>
<blockquote>
<p>incoming request (/sso-kibana) --> Envoy proxy --> keycloak-gatekeeper
--> keycloak</p>
<p>--> If not logged in --> keycloak login page --> /sso-kibana</p>
<p>--> If Already logged in --> /sso-kibana</p>
</blockquote>
| <p>In your config you explicitly enabled <code>enable-default-deny</code> which is explained in the documentation as:</p>
<blockquote>
<p>enables a default denial on all requests, you have to explicitly say what is permitted (recommended)</p>
</blockquote>
<p>With that enabled, you will need to specify URLs, methods etc. either via <code>resources</code> entries as shown in [1] or via a command-line argument [2]. In the case of Kibana, you can start with:</p>
<pre><code>resources:
- uri: /app/*
</code></pre>
<p>[1] <a href="https://www.keycloak.org/docs/latest/securing_apps/index.html#example-usage-and-configuration" rel="nofollow noreferrer">https://www.keycloak.org/docs/latest/securing_apps/index.html#example-usage-and-configuration</a></p>
<p>[2] <a href="https://www.keycloak.org/docs/latest/securing_apps/index.html#http-routing" rel="nofollow noreferrer">https://www.keycloak.org/docs/latest/securing_apps/index.html#http-routing</a></p>
|
<p>I am running kubectl on:
<code>Microsoft Windows [Version 10.0.14393]</code></p>
<p>Pointing to Kubernetes cluster deployed in Azure.</p>
<p>A <code>kubectl version</code> command with verbose logging, preceded by a time echo, shows a delay of ~2 minutes before any activity appears in the API calls.</p>
<p>Note that the first log line appears 2 minutes after invoking the command.</p>
<pre><code>C:\tmp>echo 19:12:50.23
19:12:50.23
C:\tmp>kubectl version --kubeconfig=C:/Users/jbafd/.kube/config-hgfds-acs-containerservice-1 -v=20
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2
017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"windows/amd64"}
I0610 19:14:58.311364    9488 loader.go:354] Config loaded from file C:/Users/jbafd/.kube/config-hgfds-acs-containerservice-1
I0610 19:14:58.313864 9488 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl.exe/v1.6.4 (windows/amd64) kub
ernetes/d6f4332" https://xxjjmaster.australiasoutheast.cloudapp.azure.com/version
I0610 19:14:58.519869 9488 round_trippers.go:417] GET https://xxjjmaster.australiasoutheast.cloudapp.azure.com/version in 206 milliseconds
</code></pre>
<p>Other kubectl commands (get nodes etc.) exhibit the same delay.
Flushing the DNS cache didn't resolve the issue, and the API requests themselves appear responsive. Running the command as admin didn't help either.
What other operations is kubectl attempting before loading the config?</p>
| <p>There could be two reasons for the latency:</p>
<ol>
<li>kubectl itself is on a network drive (often the <strong>H:</strong> drive), so kubectl is first copied to your system and then run</li>
<li>the .kube/config file is on a network drive</li>
</ol>
<p>To summarise: if either of these is on a network drive, you will see this delay.</p>
<p><em>If moving them locally doesn't work out, you can run the kubectl command with -v=20; this will show the time taken by each step.</em></p>
<p><a href="https://cymbeline.ch/2018/04/10/fix-slow-kubectl-on-windows/" rel="nofollow noreferrer">reference</a> </p>
|
<p>With stackdriver's kubernetes engine integration, I can view real-time information on my pods and services, including how many are ready. I can't find any way to monitor this, however.</p>
<p>Is there a way to set up an alerting policy that triggers if no pods in a deployment or service are ready? I can set up a log-based metric, but this seems like a crude workaround for information that stackdriver logging seems to already have access to.</p>
| <p>I am not sure about Stackdriver's support for this feature; however, you can try creating the following alert as a workaround:</p>
<ol>
<li>In Alerting policy creation user interface, select resource type as
"k8s_container", also select a metric that always exists ( for
example, 'CPU usage time').</li>
<li>Define a "filter", or use "group by", to select the pods whose state should trigger the alert condition.</li>
<li>In aggregation, choose "count" aggregator.</li>
</ol>
|
<p>With stackdriver's kubernetes engine integration, I can view real-time information on my pods and services, including how many are ready. I can't find any way to monitor this, however.</p>
<p>Is there a way to set up an alerting policy that triggers if no pods in a deployment or service are ready? I can set up a log-based metric, but this seems like a crude workaround for information that stackdriver logging seems to already have access to.</p>
| <p>Based on the <a href="https://cloud.google.com/monitoring/api/metrics_kubernetes" rel="nofollow noreferrer">Kubernetes metrics</a> documentation, there doesn't seem to be such a metric in place.</p>
<p>It does however look like a potential <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">Feature Request</a>.</p>
|
<p>We have a service with multiple replicas which works with storage without transactions and blocking approaches. So we need somehow to synchronize concurrent requests between multiple instances by some "sharding" key. Right now we host this service in Kubernetes environment as a ReplicaSet.</p>
<p>Do you know of any simple out-of-the-box approaches to do this, so we don't have to implement it from scratch? </p>
<p>Here are several of our ideas on how to do this:</p>
<ol>
<li><p>Deploy the service as a StatefulSet and implement some proxy API which will route traffic to the specific pod in this StatefulSet by sharding key from the HTTP request. In this scenario, all requests which should be synchronized will be handled by one instance and it wouldn't be a problem to handle this case.</p></li>
<li><p>Deploy the service as a StatefulSet and implement some custom logic in the same service to re-route traffic to the specific instance (or process on that exact instance). As I understand it, this can't be an abstract implementation and would work only in a Kubernetes environment. </p></li>
<li><p>Somehow expose each pod IP outside the cluster and implement routing logic on the client-side.</p></li>
<li><p>Just implement synchronization between instances through some third-party service like Redis.</p></li>
</ol>
<p>I would like to try to route traffic to specific pods. If you know standard approaches for handling this case, it would be much appreciated.</p>
<p>Thank you a lot in advance!</p>
| <p>Another approach would be to put a message queue (like Kafka or RabbitMQ) in front of your service.
Then your pods will subscribe to the MQ topic/stream, and each pod will decide whether it should process a given message or not.</p>
<p>Also, try looking into service meshes like <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> or <a href="https://linkerd.io/" rel="nofollow noreferrer">Linkerd</a>.
They might have an OOTB solution for your use-case, although I wasn't able to find one.</p>
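<p>The "pod decides whether the message is its own" part can be done with a stable hash of the sharding key (a sketch; in a StatefulSet the ordinal index would come from the pod hostname, e.g. <code>myapp-2</code>):</p>

```python
import hashlib

def owner_index(sharding_key: str, replica_count: int) -> int:
    """Stable mapping from sharding key to replica ordinal: every replica
    computes the same owner for a given key, with no coordination."""
    digest = hashlib.sha256(sharding_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % replica_count

def should_process(sharding_key: str, my_index: int, replica_count: int) -> bool:
    # Each replica consumes the whole topic but only acts on its own keys,
    # so requests for one key are serialized onto one instance.
    return owner_index(sharding_key, replica_count) == my_index
```

<p>Note that with a plain modulo, changing the replica count reshuffles most keys; consistent hashing reduces that churn if you need to scale the set.</p>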
|
<p>How to change the default nodeport range on Mac (docker-desktop)?</p>
<p>I'd like to change the default nodeport range on Mac. Is it possible? I'm glad to have found this article: <code>http://www.thinkcode.se/blog/2019/02/20/kubernetes-service-node-port-range</code>. Since I can't find <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> in my environment, I tried to achieve what I want to do by running <code>sudo kubectl edit pod kube-apiserver-docker-desktop --namespace=kube-system</code> and add the parameter <code>--service-node-port-range=443-22000</code>. But when I tried to save it, I got the following error:</p>
<pre><code># pods "kube-apiserver-docker-desktop" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
</code></pre>
<p>(I get the same error even if I don't touch port 443.) Can someone please share his/her thoughts or experience? Thanks!</p>
<p>Append:</p>
<pre><code>skwok-mbp:kubernetes skwok$ kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
docker compose 1/1 1 1 15d
docker compose-api 1/1 1 1 15d
ingress-nginx nginx-ingress-controller 1/1 1 1 37m
kube-system coredns 2/2 2 2 15d
skwok-mbp:kubernetes skwok$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default fortune-configmap-volume 2/2 Running 4 14d
default kubia-2qzmm 1/1 Running 2 15d
docker compose-6c67d745f6-qqmpb 1/1 Running 2 15d
docker compose-api-57ff65b8c7-g8884 1/1 Running 4 15d
ingress-nginx nginx-ingress-controller-756f65dd87-sq6lt 1/1 Running 0 37m
kube-system coredns-fb8b8dccf-jn8cm 1/1 Running 6 15d
kube-system coredns-fb8b8dccf-t6qhs 1/1 Running 6 15d
kube-system etcd-docker-desktop 1/1 Running 2 15d
kube-system kube-apiserver-docker-desktop 1/1 Running 2 15d
kube-system kube-controller-manager-docker-desktop 1/1 Running 29 15d
kube-system kube-proxy-6nzqx 1/1 Running 2 15d
kube-system kube-scheduler-docker-desktop 1/1 Running 30 15d
</code></pre>
| <p><strong>Update</strong>: The <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#increasing-the-nodeport-range" rel="nofollow noreferrer">example</a> from the documentation shows a way to adjust apiserver parameters during Minikube start:</p>
<pre><code>minikube start --extra-config=apiserver.service-node-port-range=1-65535
</code></pre>
<p><strong>--extra-config</strong>: A set of <code>key=value</code> pairs that describe configuration that may be passed to different components. The key should be '.' separated, and the first part before the dot is the component to apply the configuration to. Valid components are: <code>kubelet</code>, <code>apiserver</code>, <code>controller-manager</code>, <code>etcd</code>, <code>proxy</code>, <code>scheduler</code>. <a href="https://github.com/kubernetes/minikube/blob/8611a455cab8bce3cf218c8c276a97ebaa730c3c/docs/minikube_start.md" rel="nofollow noreferrer"><sup>link</sup></a></p>
<p>The list of available options could be found in <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/" rel="nofollow noreferrer">CLI documentation</a></p>
<hr />
<p>Another way to change <code>kube-apiserver</code> parameters for Docker-for-desktop on Mac:</p>
<ol>
<li><p>login to Docker VM:</p>
<pre><code> $ screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
#(you can also use privileged container for the same purpose)
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
#or
docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh
# as suggested here: https://forums.docker.com/t/is-it-possible-to-ssh-to-the-xhyve-machine/17426/5
# in case of minikube use the following command:
$ minikube ssh
</code></pre>
</li>
<li><p>Edit kube-apiserver.yaml (it's one of static pods, they are created by kubelet using files in /etc/kubernetes/manifests)</p>
<pre><code> $ vi /etc/kubernetes/manifests/kube-apiserver.yaml
# for minikube
$ sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
</code></pre>
</li>
<li><p>Add the following line to the pod spec:</p>
<pre><code> spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.65.3
...
- --service-node-port-range=443-22000 # <-- add this line
...
</code></pre>
</li>
<li><p>Save and exit. Pod kube-apiserver will be restarted with new parameters.</p>
</li>
<li><p>Exit Docker VM (for <code>screen</code>: <code>Ctrl-a,k</code> , for container: <code>Ctrl-d</code> )</p>
</li>
</ol>
<p>Check the results:</p>
<pre><code>$ kubectl get pod kube-apiserver-docker-desktop -o yaml -n kube-system | less
</code></pre>
<p>Create simple deployment and expose it with service:</p>
<pre><code>$ kubectl run nginx1 --image=nginx --replicas=2
$ kubectl expose deployment nginx1 --port 80 --type=NodePort
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14d
nginx1 NodePort 10.99.173.234 <none> 80:14966/TCP 5s
</code></pre>
<p>As you can see NodePort was chosen from the new range.</p>
<p>There are other <a href="https://stackoverflow.com/a/54345488/9521610">ways</a> to expose your container: <a href="http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="nofollow noreferrer">HostNetwork, HostPort</a>, <a href="https://metallb.universe.tf" rel="nofollow noreferrer">MetalLB</a></p>
<p>You need to add the correct <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">security context</a> for that purpose, check out how the ingress addon in minikube works, for example.</p>
<pre><code>...
ports:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
...
securityContext:
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
</code></pre>
|
<p>I am looking for a syntax/condition of percentage decrease threshold to be inserted in HPA.yaml file which would allow the Horizontal Pod Autoscaler to start decreasing the pod replicas when the CPU utilization falls that particular percentage threshold.</p>
<p>Consider this scenario:
I set the option targetCPUUtilizationPercentage to 50, minReplicas to 1 and maxReplicas to 5.
Now let's assume the CPU utilization goes above 50% and climbs to 100%, making the HPA create a second replica. Even if the utilization then decreases to 51%, the HPA will not terminate a pod replica.</p>
<p>Is there any way to condition the scale-down on the percentage decrease in CPU utilization?</p>
<p>Just like targetCPUUtilizationPercentage, I would like to be able to specify something like targetCPUUtilizationPercentageDecrease and assign it the value 30, so that when the CPU utilization falls from 100% to 70%, the HPA terminates one pod replica, and on a further 30% decrease, when it reaches 40%, the other remaining extra pod replica gets terminated.</p>
| <p>As per online resources, this topic is still under community discussion: "<a href="https://github.com/kubernetes/kubernetes/pull/74525" rel="nofollow noreferrer">Configurable HorizontalPodAutoscaler options</a>" </p>
<p>I didn't try it, but as a workaround you can create custom metrics, e.g. using the <a href="https://github.com/helm/charts/tree/master/stable/prometheus-adapter" rel="nofollow noreferrer">Prometheus Adapter</a> (see <a href="https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.2.0/manage_cluster/hpa.html" rel="nofollow noreferrer">Horizontal pod auto scaling by using custom metrics</a>),
in order to have more control over the provided limits.</p>
<p>At the moment you can use <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">horizontal-pod-autoscaler-downscale-stabilization</a>:</p>
<blockquote>
<p>The --horizontal-pod-autoscaler-downscale-stabilization flag controls the downscale delay.</p>
<p>The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).</p>
</blockquote>
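<p>The behaviour in the question's scenario also follows directly from the scaling formula in the HPA documentation. A simplified sketch (the 10% tolerance band is the documented default):</p>

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float,
                     tolerance: float = 0.1) -> int:
    """Simplified HPA rule: desired = ceil(current * currentMetric / targetMetric),
    skipped while the ratio is within the tolerance band around 1.0."""
    ratio = current_utilization / target_utilization
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas          # within tolerance: no scaling action
    return math.ceil(current_replicas * ratio)
```

<p>So with 2 replicas and a 50% target, utilization has to fall to 25% or below before <code>ceil(2 * ratio)</code> drops to 1, which is why a decrease to 51% (or even 40%) removes nothing.</p>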
<p>From another point of view, this behaviour is expected given the design basis of the HPA:</p>
<blockquote>
<p>Applications that process very important data events. These should scale up as fast as possible (to reduce the data processing time), and scale down as soon as possible (to reduce cost).</p>
</blockquote>
<p>Hope this helps.</p>
|
<p>I'm using <a href="https://hub.docker.com/r/splunk/splunk/" rel="nofollow noreferrer">this</a> Splunk image on Kubernetes (testing locally with minikube).</p>
<p>After applying the code below I'm facing the following error:</p>
<blockquote>
<p>ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe
$SPLUNK_HOME or $SPLUNK_ETC is set wrong?</p>
</blockquote>
<p>My Splunk deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: splunk
labels:
app: splunk-app
tier: splunk
spec:
selector:
matchLabels:
app: splunk-app
track: stable
replicas: 1
template:
metadata:
labels:
app: splunk-app
tier: splunk
track: stable
spec:
volumes:
- name: configmap-inputs
configMap:
name: splunk-config
containers:
- name: splunk-client
image: splunk/splunk:latest
imagePullPolicy: Always
env:
- name: SPLUNK_START_ARGS
value: --accept-license --answer-yes
- name: SPLUNK_USER
value: root
- name: SPLUNK_PASSWORD
value: changeme
- name: SPLUNK_FORWARD_SERVER
value: splunk-receiver:9997
ports:
- name: incoming-logs
containerPort: 514
volumeMounts:
- name: configmap-inputs
mountPath: /opt/splunk/etc/system/local/inputs.conf
subPath: "inputs.conf"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: splunk-config
data:
inputs.conf: |
[monitor:///opt/splunk/var/log/syslog-logs]
disabled = 0
index=my-index
</code></pre>
<p>I tried to add also this env variables - with no success:</p>
<pre><code> - name: SPLUNK_HOME
value: /opt/splunk
- name: SPLUNK_ETC
value: /opt/splunk/etc
</code></pre>
<p>I've tested the image with the following <strong>docker</strong> configuration - <strong>and it ran successfully</strong>:</p>
<pre><code>version: '3.2'
services:
splunk-forwarder:
hostname: splunk-client
image: splunk/splunk:latest
environment:
SPLUNK_START_ARGS: --accept-license --answer-yes
SPLUNK_USER: root
SPLUNK_PASSWORD: changeme
ports:
- "8089:8089"
- "9997:9997"
</code></pre>
<hr>
<p>Saw <a href="https://answers.splunk.com/answers/728533/getting-error-couldnt-read-optsplunketcsplunk-laun.html" rel="nofollow noreferrer">this</a> on Splunk forum but the answer did not help in my case. </p>
<p>Any ideas?</p>
<hr>
<p>Edit #1: </p>
<p>Minikube version: Upgraded from <code>v0.33.1</code> to <code>v1.2.0</code>.</p>
<p>Full error log:</p>
<pre><code>$kubectl logs -l tier=splunk
splunk_common : Set first run fact -------------------------------------- 0.04s
splunk_common : Set privilege escalation user --------------------------- 0.04s
splunk_common : Set current version fact -------------------------------- 0.04s
splunk_common : Set splunk install fact --------------------------------- 0.04s
splunk_common : Set docker fact ----------------------------------------- 0.04s
Execute pre-setup playbooks --------------------------------------------- 0.04s
splunk_common : Setting upgrade fact ------------------------------------ 0.04s
splunk_common : Set target version fact --------------------------------- 0.04s
Determine captaincy ----------------------------------------------------- 0.04s
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?
</code></pre>
<p>Edit #2: Adding config map to the code (was removed from the original question for the sake of brevity). <strong>This is the cause of failure</strong>.</p>
<p>Based on the direction pointed out by @Amit-Kumar-Gupta, I'll also try to give a full solution.</p>
<p>So <a href="https://github.com/kubernetes/kubernetes/pull/58720" rel="nofollow noreferrer">this PR</a> change makes it so that containers cannot write to <code>secret</code>, <code>configMap</code>, <code>downwardAPI</code> and projected volumes since the runtime will now <em>mount them as read-only</em>. <br/>
This change has been in effect since <code>v1.9.4</code> and can lead to issues for various applications that chown or otherwise manipulate their configs.</p>
<p>When Splunk boots, it registers all the config files in various locations on the filesystem under <code>${SPLUNK_HOME}</code> which is in our case <code>/opt/splunk</code>.<br/>
The error quoted in my question reflects that Splunk failed to manipulate the relevant files in the <code>/opt/splunk/etc</code> directory because of the change in the mounting mechanism.</p>
<hr>
<p>Now for the solution.</p>
<p>Instead of mounting the configuration file directly inside the <code>/opt/splunk/etc</code> directory we'll use the following setup:</p>
<p>We'll start the docker container with a <code>default.yml</code> file which will be mounted in <code>/tmp/defaults/default.yml</code>.</p>
<p>For that, we'll create the <code>default.yml</code> file with: <br/> <code>docker run splunk/splunk:latest create-defaults > ./default.yml</code></p>
<p>Then, we'll go to the <code>splunk:</code> block and add a <code>conf:</code> sub-block under it:</p>
<pre><code>splunk:
conf:
inputs:
directory: /opt/splunk/etc/system/local
content:
monitor:///opt/splunk/var/log/syslog-logs:
disabled : 0
index : syslog-index
outputs:
directory: /opt/splunk/etc/system/local
content:
        tcpout:splunk-receiver:
          server: splunk-receiver:9997
</code></pre>
<p>This setup will generate two files with a <code>.conf</code> suffix (remember that the sub-block starts with <code>conf:</code>), which will be owned by the correct Splunk user and group.</p>
<p>The <code>inputs:</code> section will produce an <code>inputs.conf</code> with the following content:</p>
<pre><code>[monitor:///opt/splunk/var/log/syslog-logs]
disabled = 0
index=syslog-index
</code></pre>
<p>In a similar way, the <code>outputs:</code> block will resemble the following:</p>
<pre><code>[tcpout:splunk-receiver]
server=splunk-receiver:9997
</code></pre>
<p>This is instead of passing an environment variable directly, as I did in the original code:</p>
<pre><code>SPLUNK_FORWARD_SERVER: splunk-receiver:9997
</code></pre>
<p><strong>Now everything is up and running (:</strong></p>
<hr>
<p>Full setup of the <code>forwarder.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: splunk-forwarder
labels:
app: splunk-forwarder-app
tier: splunk
spec:
selector:
matchLabels:
app: splunk-forwarder-app
track: stable
replicas: 1
template:
metadata:
labels:
app: splunk-forwarder-app
tier: splunk
track: stable
spec:
volumes:
- name: configmap-forwarder
configMap:
name: splunk-forwarder-config
containers:
- name: splunk-forwarder
image: splunk/splunk:latest
imagePullPolicy : Always
env:
- name: SPLUNK_START_ARGS
value: --accept-license --answer-yes
- name: SPLUNK_PASSWORD
valueFrom:
secretKeyRef:
name: splunk-secret
key: password
volumeMounts:
- name: configmap-forwarder
mountPath: /tmp/defaults/default.yml
subPath: "default.yml"
</code></pre>
<hr>
<p>For further reading:</p>
<p><a href="https://splunk.github.io/docker-splunk/ADVANCED.html" rel="nofollow noreferrer">https://splunk.github.io/docker-splunk/ADVANCED.html</a></p>
<p><a href="https://github.com/splunk/docker-splunk/blob/develop/docs/ADVANCED.md" rel="nofollow noreferrer">https://github.com/splunk/docker-splunk/blob/develop/docs/ADVANCED.md</a></p>
<p><a href="https://www.splunk.com/blog/2018/12/17/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-1.html" rel="nofollow noreferrer">https://www.splunk.com/blog/2018/12/17/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-1.html</a></p>
<p><a href="https://splunk.github.io/splunk-ansible/ADVANCED.html#inventory-script" rel="nofollow noreferrer">https://splunk.github.io/splunk-ansible/ADVANCED.html#inventory-script</a></p>
<p><a href="https://static.rainfocus.com/splunk/splunkconf18/sess/1521146368312001VwQc/finalPDF/FN1089_DockerizingSplunkatScale_Final_1538666172485001Loc0.pdf" rel="nofollow noreferrer">https://static.rainfocus.com/splunk/splunkconf18/sess/1521146368312001VwQc/finalPDF/FN1089_DockerizingSplunkatScale_Final_1538666172485001Loc0.pdf</a></p>
|
<p>I am trying to optimize the CPU resources allocated to a pod based on previous runs of that pod. </p>
<p>The only problem is that I have only been able to find how much CPU is allocated to a given pod, not how much CPU a pod is actually using. </p>
| <p>That information is not stored anywhere in Kubernetes. You typically can get the 'current' CPU utilization from a metrics endpoint.</p>
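<p>For a one-off look at current usage (rather than history), the Metrics API can be queried with <code>kubectl top</code>; a minimal sketch, assuming metrics-server (or Heapster on older clusters) is installed, with made-up pod and namespace names:</p>
<pre><code># current CPU/memory per pod in a namespace (requires metrics-server)
kubectl top pod -n my-namespace

# per-container breakdown for one pod
kubectl top pod my-pod -n my-namespace --containers
</code></pre>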
<p>You will have to use another system/database to store that information through time. The most common one to use is the open-source time series DB <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a>. You can also visualize its content using another popular tool: <a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a>. There are other open-source alternatives too. For example, <a href="https://www.influxdata.com/" rel="nofollow noreferrer">InfluxDB</a>.</p>
<p>Also, there are a ton of commercial solutions that support Kubernetes metrics. For example:</p>
<ul>
<li><a href="https://www.datadoghq.com/" rel="nofollow noreferrer">Datadog</a></li>
<li><a href="https://sysdig.com/" rel="nofollow noreferrer">Sysdig</a></li>
<li><a href="https://www.signalfx.com/" rel="nofollow noreferrer">SignalFX</a></li>
<li><a href="https://newrelic.com/" rel="nofollow noreferrer">New Relic</a></li>
<li><a href="https://www.dynatrace.com/" rel="nofollow noreferrer">Dynatrace</a> </li>
<li>Etc...</li>
</ul>
|
<p>I have an app running with Docker and the .git directory is ignored to reduced the project's size.</p>
<p>The problem is that every time an artisan command is ran this message is displayed and stored inside the logs of Kubernetes. Moreover, in some cases is the reason why some kubernetes tasks cannot reach the HEALTHY status.</p>
<p>I have a Cronjob with kubernetes which reaches just 2/3 and the only message that the logs displayed is this one.</p>
| <p><a href="https://github.com/monicahq/monica/pull/950" rel="noreferrer">monicahq/monica PR 950</a> is an example of a workaround where the Sentry configuration is modified to test for the presence of the Git repository, ensuring <code>php artisan config:cache</code> is run only once.</p>
<pre><code>// capture release as git sha
'release' => is_dir(__DIR__.'/../.git') ? trim(exec('git log --pretty="%h" -n1 HEAD')) : null,
</code></pre>
|
<p>I tried to add a logic that will send slack notification when the pipeline terminated due to some error. I tried to implement this with <code>ExitHandler</code>. But, seems the <code>ExitHandler</code> can’t dependent on any op. Do you have any good idea?</p>
<p>I found a solution which uses <code>ExitHandler</code>. I post my code below; hope it can help someone else.</p>
<pre class="lang-py prettyprint-override"><code>
import kfp.dsl as dsl
from kubernetes.client import V1EnvVar, V1EnvVarSource, V1ConfigMapKeySelector


def slack_notification(slack_channel: str, status: str, name: str, is_exit_handler: bool = False):
    """
    performs slack notifications
    """
    send_slack_op = dsl.ContainerOp(
        name=name,
        image='wenmin.wu/slack-cli:latest',
        is_exit_handler=is_exit_handler,
        command=['sh', '-c'],
        arguments=["/send-message.sh -d {} '{}'".format(slack_channel, status)]
    )
    send_slack_op.add_env_variable(V1EnvVar(
        name='SLACK_CLI_TOKEN',
        value_from=V1EnvVarSource(
            config_map_key_ref=V1ConfigMapKeySelector(name='workspace-config',
                                                      key='SLACK_CLI_TOKEN'))))
    return send_slack_op


@dsl.pipeline(
    name='forecasting-supply',
    description='forecasting supply ...'
)
def ml_pipeline(
    param1,
    param2,
    param3,
):
    exit_task = slack_notification(
        slack_channel=slack_channel,  # slack_channel is defined elsewhere in my module
        name="supply-forecasting",
        status="Kubeflow pipeline: {{workflow.name}} has {{workflow.status}}!",
        is_exit_handler=True
    )

    with dsl.ExitHandler(exit_task):
        pass  # put other tasks here
</code></pre>
|
<p>I have a Kubernetes cluster and I have tested submitting 1,000 jobs at a time and the cluster has no problem handling this. I am interested in submitting 50,000 to 100,000 jobs and was wondering if the cluster would be able to handle this?</p>
<p>Yes you can, as long as you don't run out of resources and don't exceed these <a href="https://kubernetes.io/docs/setup/best-practices/cluster-large/#support" rel="nofollow noreferrer">criteria</a> for building large clusters.</p>
<p>Usually you will want to limit your jobs in some way in order to better manage memory and CPU, or to adjust them in any other way according to your needs.</p>
<p>So the best practice in your case would be to:</p>
<ul>
<li>set as many jobs as you want (bear in mind the building large clusters criteria)</li>
<li>observe the resource usage</li>
<li>if needed use for example <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Resource Quotas</a> in order to limit resources used by the jobs</li>
</ul>
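<p>As an illustration of that last point (all names and numbers here are hypothetical), a <code>ResourceQuota</code> can cap both the number of Job objects in a namespace and the aggregate resources their pods may request:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: job-quota
  namespace: batch-jobs
spec:
  hard:
    count/jobs.batch: "1000"   # max Job objects in the namespace
    pods: "500"                # max pods at any one time
    requests.cpu: "200"
    requests.memory: 400Gi
</code></pre>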
<p>I hope you find this helpful. </p>
|
<p>I have a bare metal cluster with 3 servers which are publicly available but each one has a totally different ip address. I have my DNS entry to point to all 3 hosts - so traffic can received by any host. I have all of them connected via a vlan. Now I would like that traffic coming from internet is forwarded to the k8s node, where the service is running:</p>
<pre><code>
browse "mysrv.mydomain.com"
|
.-~|~-.
.- ~ ~-( | )_ _
/ v ~ -.
| | \
\ | .'
~- . _____|_______ . -~
|
| request can go to any service
|
+----------->--------------+------------>-------------+
v v v
| | |
| | |
+---------------------+ +---------------------+ +---------------------+
| host01 | | host02 | | host03 |
| k8s master | | k8s node | | k8s node |
| | | | | |
| Pub.Ip: x.x.x.x | | Pub.Ip: y.y.y.y | | Pub.Ip: z.z.z.z |
| | | | | |
| VlanIp: 192.168.0.1 |----| VlanIp: 192.168.0.2 |----| VlanIp: 192.168.0.3 |
+---------------------+ +---------------------+ +---------------------+
</code></pre>
<p>I understand that I have to use metallb, thus I followed the setup description here: <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#a-pure-software-solution-metallb</a>. I can successfully deploy the nginx load balancer but I am not user how to configure metallb.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- x.x.x.x,y.y.y.y,z.z.z.z
- name: internal
protocol: layer2
addresses:
- 192.168.0.200-192.168.0.210
</code></pre>
<p>Using <code>default</code> results in a <code>nginx</code> services which never gets an external Ip</p>
<pre><code>> kubectl get svc -n ingress-nginx ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.107.136.170 <pending> 80:30152/TCP,443:30276/TCP 43h
</code></pre>
<p>Using <code>internal</code> works but only when I am connected to the internal vlan.</p>
<p>Any hint would be useful on how I can make my scenario work.</p>
| <p>Your environment is not a good fit for MetalLB - you would need IPs that can be assigned to any node (floating IP / service IP).
For your situation a reverse proxy, which is called Ingress in Kubernetes, would be a better solution.</p>
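<p>A minimal sketch of that direction (names are illustrative): run the nginx ingress controller on every node (e.g. via hostNetwork or a NodePort service) so any of the three public IPs can accept the traffic, and let a single Ingress rule route the hostname to the in-cluster Service:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mysrv
spec:
  rules:
  - host: mysrv.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: mysrv
          servicePort: 80
</code></pre>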
|
<p>I'm planning to use Google Cloud Composer (Apache Airflow) to manage our data pipelines. Certain processing steps are defined in a Docker image that I'd like to trigger to run on Google Kubernetes Engine. These processing steps are often resource-intensive jobs and I'm wondering what's the best way to approach scheduling them.</p>
<p>I looked into the Kubernetes Operator to build a Docker image hosted on Google Container Registry. However, it's my understanding that this workload will be created within the existing Cloud Composer Kubernetes cluster. Therefore, the resources available to run the workload are limited by the amount of resources allocated to the Cloud Composer cluster. It seems wasteful to allocate a huge amount of resources to the Cloud Composer cluster to only be available when this certain task runs. Is there any type of autoscaling at the Cloud Composer cluster-level that could handle this?</p>
<p>As an alternative, I was thinking that Cloud Composer could have a DAG that creates an external Kubernetes cluster with the proper resources to run this step, and then tear down when completed. Does this sound like a valid approach? What would be the best way to implement this? I was thinking to use the BashOperator with gcloud commands to kubectl.</p>
<p>TLDR: Is it a valid pattern to use Cloud Composer to manage external Kubernetes clusters as a way to handle resource-intensive processing steps?</p>
| <p>I think it's a good practice to separate your own pods on different nodes than the existing Airflow pods (executed on the default node pool of your Cloud Composer Kubernetes cluster). Doing so, you won't interfere with the existing Airflow pods in any manner.</p>
<p>If you don't want to use an external Kubernetes cluster, you can create a node pool directly inside your Cloud Composer Kubernetes cluster, with a minimum of 0 nodes and auto-scaling enabled. When no pod is running, there will be no node in the node pool (you won't pay). When you start a pod (using node affinity), a node will automatically be started. Another advantage is that you can choose the node pool's machine type depending on your needs.</p>
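<p>Such a scale-to-zero node pool can be created with something like the following (the pool name, cluster name, zone, machine type and node counts are placeholders to adapt):</p>
<pre><code>gcloud container node-pools create heavy-jobs-pool \
  --cluster <composer-gke-cluster> \
  --zone <composer-zone> \
  --machine-type n1-highmem-8 \
  --num-nodes 0 \
  --enable-autoscaling --min-nodes 0 --max-nodes 3
</code></pre>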
<p>To schedule a pod on a specific node pool, use the <code>KubernetesPodOperator</code>'s <code>affinity</code> parameter:</p>
<pre><code>KubernetesPodOperator(
task_id=task_id,
namespace='default',
image=image,
arguments=arguments,
name=task_id.replace('_', '-'),
affinity={
'nodeAffinity': {
'requiredDuringSchedulingIgnoredDuringExecution': {
'nodeSelectorTerms': [{
'matchExpressions': [{
'key': 'cloud.google.com/gke-nodepool',
'operator': 'In',
'values': [
'<name_of_your_node_pool>',
]
}]
}]
}
}
},
is_delete_operator_pod=True,
hostnetwork=False,
)
</code></pre>
<p>I'm using this in production and it works correctly.</p>
|
<p>I have this spec for a Kubernetes CronJob:</p>
<pre><code>---
kind: CronJob
apiVersion: batch/v1beta1
metadata:
name: do-registry-cleanup
spec:
schedule: "* * * * *"
successfulJobsHistoryLimit: 2
failedJobsHistoryLimit: 4
jobTemplate:
spec:
template:
spec:
automountServiceAccountToken: false
restartPolicy: OnFailure
containers:
- name: podtest2
image: alpine
args:
- wget
- http://some_real_url/test/pod/2
imagePullPolicy: Always
lifecycle:
postStart:
exec:
command:
- "sh"
- "-c"
- "sleep 2s;"
</code></pre>
<p>When I do <code>kubectl describe pod some_pod_name</code> I get this output (truncated):</p>
<pre><code>Normal Pulling 105s kubelet, general-rl8c pulling image "alpine"
Normal Pulled 105s kubelet, general-rl8c Successfully pulled image "alpine"
Normal Created 105s kubelet, general-rl8c Created container
Normal Started 104s kubelet, general-rl8c Started container
Warning FailedPostStartHook 104s kubelet, general-rl8c Exec lifecycle hook ([sh -c sleep 2s;]) for Container "podtest2" in Pod "do-registry-cleanup-1566391980-dvjdn_default(9d87fe8a-c412-11e9-8744-d2e7c0045fbd)" failed - error: command 'sh -c sleep 2s;' exited with 137: , message: ""
Normal Killing 104s kubelet, general-rl8c Killing container with id docker://podtest2:FailedPostStartHook
</code></pre>
<p>As a result, in this example the wget request is made, and I know the sleep command itself is executed and not broken.
My questions are:</p>
<ol>
<li>Why does this occur?</li>
<li>What are the side effects of this?</li>
</ol>
<hr>
<p>Some additional info: if the command is <code>cmd1; sleep; cmd2</code>, then <code>cmd2</code> is not executed. So for some reason the sleep command triggers an error in the container.</p>
| <p>Refering to the official documentation:</p>
<h3><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Pod lifecycle</a>:</h3>
<blockquote>
<p>Once a container enters into <strong>Running</strong> state, <strong>postStart</strong> hook (if any) is executed. </p>
<p>A container enters into <strong>Terminated</strong> state when it has successfully completed execution or when it has failed for some reason. Regardless, a reason and exit code is displayed, as well as the container’s start and finish time. Before a container enters into Terminated, <strong>preStop</strong> hook (if any) is executed.</p>
</blockquote>
<h3><a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">Lifecycle hooks</a>:</h3>
<blockquote>
<p>There are two hooks that are exposed to Containers:</p>
<p><strong>PostStart</strong></p>
<p>This hook executes immediately after a container is created. However, there is no guarantee that the hook will execute before the container ENTRYPOINT. No parameters are passed to the handler.</p>
<p><strong>PreStop</strong></p>
<p>This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler</p>
</blockquote>
<p>Actually, what is written for PreStop works for PostStart also.</p>
<p>Basically, the kubelet doesn't wait until all hooks are finished. It just terminates everything after the main container exits.</p>
<p>For <strong>PreStop</strong> we can only increase the grace period, but for <strong>PostStart</strong> we can make the main container wait until the hook has finished.
Here is an example:</p>
<pre><code>kind: CronJob
apiVersion: batch/v1beta1
metadata:
name: test1
spec:
schedule: "* * * * *"
successfulJobsHistoryLimit: 2
failedJobsHistoryLimit: 4
jobTemplate:
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: test1
image: nginx
command: ["bash", "-c", "touch file1; while [ ! -f file2 ] ; do ls file*; sleep 1 ; done; ls file*"]
lifecycle:
postStart:
exec:
command: ["bash", "-c", "sleep 10; touch file2"]
</code></pre>
<p>If you check the logs of the pod, you'll see that the hook created the file before the main container was terminated. You can also see that the loop ran 12 times instead of 10, which means PostStart started about 2 seconds after the main container began running. In other words, the container enters the Running state with some delay after start.</p>
<pre><code>$ kubectl describe cronjob/test1 | grep Created
Normal SuccessfulCreate 110s cronjob-controller Created job test1-1566402420
$ kubectl describe job/test1-1566402420 | grep Created
Normal SuccessfulCreate 2m28s job-controller Created pod: test1-1566402420-d5lfr
$ kubectl logs pod/test1-1566402420-d5lfr -c test1
file1
file1
file1
file1
file1
file1
file1
file1
file1
file1
file1
file1
file2
</code></pre>
|
<p>I'm currently using the Kubernetes Client Plugin in Jenkins but it's still confusing to configure, because it looks for the Kubernetes config, and those credentials are not available even though I configured them in the credentials section.</p>
<p>Please see the screenshot below for my credentials, which are configured in my Jenkins.</p>
<p><a href="https://i.stack.imgur.com/2C6rN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2C6rN.png" alt="enter image description here"></a></p>
<p>When I try to add those credentials from the Jenkins side, they are not listed under the Kubernetes credentials. The red colored area shows no list of my credentials.</p>
<p><a href="https://i.stack.imgur.com/EIlik.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EIlik.png" alt="enter image description here"></a></p>
<p>How can I configure this Kubernetes plugin in Jenkins ? or any other alternative methods to configure Jenkins + Amazon EKS ?</p>
<p>Thanks.</p>
<p>Plugin : <a href="https://wiki.jenkins.io/display/JENKINS/Kubernetes+Plugin" rel="nofollow noreferrer">https://wiki.jenkins.io/display/JENKINS/Kubernetes+Plugin</a></p>
| <p>To reproduce your issue I have installed EKS using <a href="https://logz.io/blog/amazon-eks/" rel="nofollow noreferrer">Deploying a Kubernetes Cluster with Amazon EKS</a> article.</p>
<p>After adding worker nodes next steps were performed:</p>
<p>1) install helm</p>
<p>2) install jenkins from stable/jenkins chart.</p>
<pre><code>curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
helm search jenkins
helm install stable/jenkins --name myjenkins
# If you want to change any parameters before deploying the helm chart,
# you can first download and edit the values:
helm inspect values stable/jenkins > /tmp/jenkins.values
helm install stable/jenkins --values /tmp/jenkins.values --name myjenkins
</code></pre>
<p>Wait until everything will be deployed, you can check via <code>watch kubectl get all --all-namespaces</code></p>
<pre><code>kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/myjenkins-c9bc6bbbc-hvdzg 1/1 Running 0 19m
kube-system pod/aws-node-5swq5 1/1 Running 0 21m
kube-system pod/aws-node-h5vl7 1/1 Running 0 20m
kube-system pod/aws-node-ttkgn 1/1 Running 0 21m
kube-system pod/coredns-7fb855c998-7lglx 1/1 Running 0 48m
kube-system pod/coredns-7fb855c998-h7stl 1/1 Running 0 48m
kube-system pod/kube-proxy-drvc2 1/1 Running 0 21m
kube-system pod/kube-proxy-gfwh8 1/1 Running 0 20m
kube-system pod/kube-proxy-kscm8 1/1 Running 0 21m
kube-system pod/tiller-deploy-5d6cc99fc-7mv88 1/1 Running 0 45m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 48m
default service/myjenkins LoadBalancer 10.100.9.131 ***********************************-*******.eu-west-1.elb.amazonaws.com 8080:30878/TCP 19m
default service/myjenkins-agent ClusterIP 10.100.28.95 <none> 50000/TCP 19m
kube-system service/kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 48m
kube-system service/tiller-deploy ClusterIP 10.100.250.226 <none> 44134/TCP 45m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/aws-node 3 3 3 3 3 <none> 48m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 <none> 48m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/myjenkins 1/1 1 1 19m
kube-system deployment.apps/coredns 2/2 2 2 48m
kube-system deployment.apps/tiller-deploy 1/1 1 1 45m
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/myjenkins-c9bc6bbbc 1 1 1 19m
kube-system replicaset.apps/coredns-7fb855c998 2 2 2 48m
kube-system replicaset.apps/tiller-deploy-5d6cc99fc 1 1 1 45m
</code></pre>
<p>Next </p>
<pre><code>1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default myjenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace default -w myjenkins'
export SERVICE_IP=$(kubectl get svc --namespace default myjenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin
</code></pre>
<ul>
<li><p>Open browser, login and go to "Manage Jenkins"-->"Configure System"--> Cloud Section</p></li>
<li><p>Click Add - Jenkins</p></li>
<li><p>Choose <code>Kubernetes Service Account</code> and click Add.
<a href="https://i.stack.imgur.com/OcNO5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OcNO5.png" alt="enter image description here"></a></p></li>
<li><p>Next choose <code>Secret Text</code> in the drop-down menu to the left of the Add button, test the connection, then apply and save.
<a href="https://i.stack.imgur.com/Q5SKA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q5SKA.png" alt="enter image description here"></a></p></li>
</ul>
<ul>
<li><p>Check the credentials:
<a href="https://i.stack.imgur.com/cVahj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cVahj.png" alt="enter image description here"></a></p></li>
<li><p>Check again "Manage Jenkins" --> "Configure System" --> Cloud Section:
<a href="https://i.stack.imgur.com/NPUqK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NPUqK.png" alt="enter image description here"></a></p></li>
</ul>
<p>Hope it helps...</p>
|
<p>I have a very simple java jar application that fetches DB properties from the env variables.
I've dockerized it and migrated to Kubernetes. I also created a config map and a secret with parameters for my DB, so I have access to these properties in the container.
Is it possible to fetch these properties and inject them in the Dockerfile? How can I do it?</p>
<pre><code>FROM openjdk:8-jre
ADD target/shopfront-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8010
ENTRYPOINT ["java","-Ddb.host=**value-from-env-variable**","-jar","/app.jar"]
</code></pre>
| <p>You can use them like this</p>
<pre><code>ENTRYPOINT ["java", "-jar", "-Ddb.host=${DB_HOST}", "/app.jar"]
</code></pre>
<p>where <code>DB_HOST</code> should be defined in the config map you have created.</p>
<p>I have tried this in my Spring Boot Application for setting Spring Profile.</p>
|
<p>I have a very simple java jar application that fetches DB properties from the env variables.
I've dockerized it and migrated to Kubernetes. I also created a config map and a secret with parameters for my DB, so I have access to these properties in the container.
Is it possible to fetch these properties and inject them in the Dockerfile? How can I do it?</p>
<pre><code>FROM openjdk:8-jre
ADD target/shopfront-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8010
ENTRYPOINT ["java","-Ddb.host=**value-from-env-variable**","-jar","/app.jar"]
</code></pre>
<p>The array or <a href="https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example" rel="nofollow noreferrer">"exec form"</a> of entrypoint uses <code>exec</code> to run the specified binary rather than a shell. Without a shell, the string <code>$DB_HOST</code> is passed to your program as a literal argument.</p>
<h3>Shell form</h3>
<pre><code>ENTRYPOINT java -Ddb.host="${DB_HOST}" -jar /app.jar
</code></pre>
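<p>The difference is easy to check with a plain shell, outside Docker; a small sketch (the hostname value is made up):</p>

```shell
# Exec form passes arguments verbatim (no shell involved), so ${DB_HOST}
# would reach the JVM as the literal string; shell form goes through
# /bin/sh, which expands the variable first.
export DB_HOST=db.example.com
literal='java -Ddb.host=${DB_HOST} -jar /app.jar'
expanded=$(sh -c 'echo java -Ddb.host=${DB_HOST} -jar /app.jar')
echo "$literal"
echo "$expanded"
```

<p>The first line prints the un-expanded string, which is exactly what the JVM would receive from the exec form.</p>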
<h3>Shell script</h3>
<p>If your startup gets more complex, you can also use an <code>ENTRYPOINT</code> script.</p>
<pre><code>ENTRYPOINT ["/launch.sh"]
</code></pre>
<p>Then <code>launch.sh</code> contains:</p>
<pre><code>#!/bin/sh -uex
java -Ddb.host="${DB_HOST}" -jar /app.jar
</code></pre>
|
<p>I'm trying to patch a deployment, but I keep hitting <code>deployment.extensions/velero not patched</code>.</p>
<p>I've tried a few different variations of the following: </p>
<pre><code>kubectl patch deployment velero -n velero -p '{"spec":{"containers":[{"env":[{"name":"AWS_CLUSTER_NAME","value":"test-cluster"}]}]}}'
</code></pre>
<p>My initial deployment.yaml file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: velero
labels:
app.kubernetes.io/name: velero
app.kubernetes.io/instance: velero
app.kubernetes.io/managed-by: Tiller
helm.sh/chart: velero-2.1.1
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/instance: velero
app.kubernetes.io/name: velero
template:
metadata:
labels:
app.kubernetes.io/name: velero
app.kubernetes.io/instance: velero
app.kubernetes.io/managed-by: Tiller
helm.sh/chart: velero-2.1.1
spec:
restartPolicy: Always
serviceAccountName: velero-server
containers:
- name: velero
image: "gcr.io/heptio-images/velero:v1.0.0"
imagePullPolicy: IfNotPresent
command:
- /velero
args:
- server
volumeMounts:
- name: plugins
mountPath: /plugins
- name: cloud-credentials
mountPath: /credentials
- name: scratch
mountPath: /scratch
env:
- name: AWS_SHARED_CREDENTIALS_FILE
value: /credentials/cloud
- name: VELERO_SCRATCH_DIR
value: /scratch
volumes:
- name: cloud-credentials
secret:
secretName: cloud-credentials
- name: plugins
emptyDir: {}
- name: scratch
emptyDir: {}
</code></pre>
<p>I'm a bit stuck right now and fear I may be going about this the wrong way. Any suggestions would be much appreciated.</p>
| <p>Apart from <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#patching-resources" rel="noreferrer">kubectl patch</a> command, you can also make use of <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-env-em-" rel="noreferrer">kubectl set env</a> to update environment variable of k8s deployment.</p>
<pre><code>kubectl set env deployment/velero AWS_CLUSTER_NAME=test-cluster
</code></pre>
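<p>For reference, the patch in the question most likely failed because the <code>containers</code> array lives under <code>spec.template.spec</code>, and the container needs to be identified by <code>name</code> for the strategic merge to target it; a sketch of the corrected command:</p>
<pre><code>kubectl patch deployment velero -n velero -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"velero","env":[{"name":"AWS_CLUSTER_NAME","value":"test-cluster"}]}]}}}}'
</code></pre>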
<p>Hope this helps.</p>
|
<h2>Summary</h2>
<p>I am unable to issue a valid certificate for my terraform kubernetes cluster on azure aks. The domain and certificate is successfully created (cert is created according to <a href="https://crt.sh/" rel="nofollow noreferrer">crt.sh</a>), however the certificate is not applied to my domain and my browser reports "Kubernetes Ingress Controller Fake Certificate" as the applied certificate. </p>
<p>The terraform files are converted to the best of my abilities from a working set of yaml files (that issues certificates just fine). See my terraform code <a href="https://github.com/Krande/terraform_aks_minimum" rel="nofollow noreferrer">here</a>.</p>
<p>UPDATE! In the original question I was also unable to create certificates. This was fixed by using the "tls_cert_request" resource from <a href="https://www.terraform.io/docs/providers/acme/r/certificate.html" rel="nofollow noreferrer">here</a>. The change is included in my updated code below.</p>
<p>Here a some things I have checked out and found NOT to be the issue</p>
<ul>
<li>The number of issued certificates from acme letsencrypt is not above rate-limits for either staging or prod. </li>
<li>I get the same "Fake certificate" error using both staging or prod certificate server.</li>
</ul>
<p>Here are some areas that I am currently investigating as potential sources for the error.</p>
<ul>
<li>I do not see a terraform equivalent of the letsencrypt yaml input "privateKeySecretRef", and consequently do not know what the value of my deployment ingress annotation "certmanager.k8s.io/cluster-issuer" should be.</li>
</ul>
<p>If anyone have any other suggestions, I would really appreciate to hear them (as this has been bugging me for quite some time now)!</p>
<h2>Certificate Resources</h2>
<pre><code>provider "acme" {
server_url = var.context.cert_server
}
resource "tls_private_key" "reg_private_key" {
algorithm = "RSA"
}
resource "acme_registration" "reg" {
account_key_pem = tls_private_key.reg_private_key.private_key_pem
email_address = var.context.email
}
resource "tls_private_key" "cert_private_key" {
algorithm = "RSA"
}
resource "tls_cert_request" "req" {
key_algorithm = "RSA"
private_key_pem = tls_private_key.cert_private_key.private_key_pem
dns_names = [var.context.domain_address]
subject {
common_name = var.context.domain_address
}
}
resource "acme_certificate" "certificate" {
account_key_pem = acme_registration.reg.account_key_pem
certificate_request_pem = tls_cert_request.req.cert_request_pem
dns_challenge {
provider = "azure"
config = {
AZURE_CLIENT_ID = var.context.client_id
AZURE_CLIENT_SECRET = var.context.client_secret
AZURE_SUBSCRIPTION_ID = var.context.azure_subscription_id
AZURE_TENANT_ID = var.context.azure_tenant_id
AZURE_RESOURCE_GROUP = var.context.azure_dns_rg
}
}
}
</code></pre>
<h2>Pypiserver Ingress Resource</h2>
<pre><code>resource "kubernetes_ingress" "pypi" {
metadata {
name = "pypi"
namespace = kubernetes_namespace.pypi.metadata[0].name
annotations = {
"kubernetes.io/ingress.class" = "inet"
"kubernetes.io/tls-acme" = "true"
"certmanager.k8s.io/cluster-issuer" = "letsencrypt-prod"
"ingress.kubernetes.io/ssl-redirect" = "true"
}
}
spec {
tls {
hosts = [var.domain_address]
}
rule {
host = var.domain_address
http {
path {
path = "/"
backend {
service_name = kubernetes_service.pypi.metadata[0].name
service_port = "http"
}
}
}
}
}
}
</code></pre>
<p>Let me know if more info is required, and I will update my question text with whatever is missing. And lastly I will let the terraform code git repo stay up and serve as help for others.</p>
| <p>The answer to my question was that I had to add a cert-manager to my cluster, and as far as I can tell there are no native terraform resources to create one. I ended up using Helm for my ingress and cert-manager.</p>
<p>The setup ended up a bit more complex than I initially imagined, and as it stands now it needs to be run twice. This is due to the kubeconfig not being updated (I have to run "set KUBECONFIG=.kubeconfig" before running "terraform apply" a second time). So it's not pretty, but it "works" as a minimum example to get your deployment up and running.</p>
<p>There definitively are ways of simplifying the pypi deployment part using native terraform resources, and there is probably an easy fix to the kubeconfig not being updated. But I have not had time to investigate further.</p>
<p>If anyone have tips for a more elegant, functional and (probably most of all) secure minimum terraform setup for a k8s cluster I would love to hear it!</p>
<p>Anyways, for those interested, the resulting terraform code can be found <a href="https://github.com/Krande/terraform_aks_minimum" rel="nofollow noreferrer">here</a></p>
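<p>As a rough sketch, installing cert-manager from terraform via the Helm provider can look like the following. The repository URL, chart name and namespace are assumptions; check the cert-manager docs for current values:</p>

```hcl
# Sketch only: installs cert-manager through the terraform Helm provider.
# Repository URL, chart version handling and namespace are assumptions.
resource "kubernetes_namespace" "cert_manager" {
  metadata {
    name = "cert-manager"
  }
}

resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = kubernetes_namespace.cert_manager.metadata[0].name

  set {
    name  = "installCRDs"
    value = "true"
  }
}
```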
|
<p>I'm setting up a bare-metal kubernetes cluster for a web application on a google cloud instance, and I am connecting to microservices through an ingress controller. How do I access the ingress controller from all incoming hosts?</p>
<p>There is a pod running angular web application and another pod running a node api microservice. Angular Web Application has been exposed globally. When accessing the microservice externally and passing the header with the hostname I was able to get the expected response. On removing the host in the ingress yaml I am not able to access the ingress.</p>
<pre><code>kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/rewrite-target: nginx
creationTimestamp: "2019-08-12T07:41:37Z"
generation: 7
name: test
namespace: default
resourceVersion: "546400"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/test
uid: 374836d2-34c3-4053-b0e3-9fe3f63167cc
spec:
rules:
- host: bar.com
http:
paths:
- backend:
serviceName: login-service
servicePort: 3000
path: /login-service
- backend:
serviceName: organization-service
servicePort: 3000
path: /organization-service
status:
loadBalancer:
ingress:
- ip: 10.128.0.16
- ip: 203.0.113.2
</code></pre>
<p>I except the ingress to be accessed from all the hosts other than the specified host(bar.com) in ingress.</p>
<p>Any other way to access the API microservice from the outside cluster(globally)?</p>
| <p>In order to access the API service from outside the cluster (globally):</p>
<p>Create an nginx proxy server and expose its port. From the web application, send requests to the proxy server through its external IP and exposed port. The proxy server will pass each request to the respective API microservice and return the expected response.</p>
<p>Edit the nginx.conf file.</p>
<pre><code>location /<your_requested_URL> {
proxy_pass http://service_name:port;
}
</code></pre>
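<p>In context, the location block sits inside a server block along the lines of the following sketch. The listen port and upstream service names/ports are placeholders taken from the question's manifests:</p>

```nginx
# Sketch: in-cluster reverse proxy. Port and upstream service
# names are placeholders; adjust to your own services.
server {
    listen 8080;

    location /login-service/ {
        proxy_pass http://login-service:3000/;
    }

    location /organization-service/ {
        proxy_pass http://organization-service:3000/;
    }
}
```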
|
<p>I am using Spring Cloud Kubernetes + Spring Cloud Gateway (SCG) and I am having some trouble deploying my app on GKE.
SCG does not find the k8s service; I keep getting this error:</p>
<pre><code>There was an unexpected error (type=Service Unavailable, status=503).
Unable to find instance for uiservice
</code></pre>
<p><code>uiservice</code> is an Angular app.</p>
<p>When I take a look at <code>.../actuator/gateway/routes</code> I have this result:</p>
<pre><code>[
{
"route_id": "CompositeDiscoveryClient_gateway",
"route_definition": {
"id": "CompositeDiscoveryClient_gateway",
"predicates": [
{
"name": "Path",
"args": {
"pattern": "/gateway/**"
}
}
],
"filters": [
{
"name": "RewritePath",
"args": {
"regexp": "/gateway/(?<remaining>.*)",
"replacement": "/${remaining}"
}
}
],
"uri": "lb://gateway",
"order": 0
},
"order": 0
},
{
"route_id": "CompositeDiscoveryClient_uiservice",
"route_definition": {
"id": "CompositeDiscoveryClient_uiservice",
"predicates": [
{
"name": "Path",
"args": {
"pattern": "/uiservice/**"
}
}
],
"filters": [
{
"name": "RewritePath",
"args": {
"regexp": "/uiservice/(?<remaining>.*)",
"replacement": "/${remaining}"
}
}
],
"uri": "lb://uiservice",
"order": 0
},
"order": 0
},
{
"route_id": "uiservice_route",
"route_definition": {
"id": "uiservice_route",
"predicates": [
{
"name": "Path",
"args": {
"_genkey_0": "/*"
}
}
],
"filters": [],
"uri": "lb://uiservice",
"order": 0
},
"order": 0
},
....
]
</code></pre>
<p>Please note that the services are discovered correctly, as shown by <code>"route_id": "CompositeDiscoveryClient_gateway"</code> and <code>"route_id": "CompositeDiscoveryClient_uiservice"</code>; those routes are not mine (I did not define them).</p>
<p>I took a look at this post:<a href="https://stackoverflow.com/questions/56170511/how-to-set-up-spring-cloud-gateway-application-so-it-can-use-the-service-discove">How to set up Spring Cloud Gateway application so it can use the Service Discovery of Spring Cloud Kubernetes?</a>
without success.</p>
<p>My configuration:</p>
<pre><code> spring:
profiles:
active: prod
cloud:
kubernetes:
reload:
enabled: true
gateway:
discovery:
locator:
enabled: true
lower-case-service-id: true
globalcors:
cors-configurations:
'[/**]':
allowedOrigins: uiservice
allowedMethods: "*"
allowCredentials: true
maxAge: 7200
allowedHeaders: "*"
exposedHeaders:
- "Access-Control-Allow-Origin"
- "Access-Control-Allow-Methods"
- "Access-Control-Max-Age"
- "Access-Control-Allow-Headers"
- "Cache-Control"
- "Authorization"
- "Content-Type"
routes:
#======UISERVICE========
- id: uiservice_route
uri: lb://uiservice
predicates:
- Path=/* #default route
- id: uiservice_route_assets
uri: lb://uiservice
predicates:
- Path=/assets/**
management:
endpoints:
web:
exposure:
include: "*"
endpoint:
restart:
enabled: true
</code></pre>
<p>Also, how can I disable the gateway autodiscovery? I don't want the <code>"route_id": "CompositeDiscoveryClient_gateway"</code> route.</p>
<p>Dependencies:</p>
<pre><code><dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-all</artifactId>
</dependency>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>Greenwich.SR2</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
</code></pre>
<p>Thanks for your help</p>
| <p>I finally found a solution after losing an afternoon. I think there is an issue with service discovery when using Ribbon. I now use k8s DNS service discovery instead of relying on Ribbon, so my new config is:</p>
<pre><code>routes:
- id: uiservice_route
uri: http://uiservice:4200 # switch 'lb://' to 'http://'
predicates:
- Path=/*
</code></pre>
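<p>With explicit routes in place, the auto-generated <code>CompositeDiscoveryClient_*</code> routes (which the question also asks about) can be switched off by disabling the discovery locator — a sketch based on the property path already used in the question's config:</p>

```yaml
# Sketch: turns off route auto-creation from the discovery client,
# which removes the CompositeDiscoveryClient_* routes.
spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: false
```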
<p>K8s uiservice config:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: uiservice
spec:
sessionAffinity: ClientIP
selector:
app: uiservice
ports:
- name: http
port: 4200
targetPort: ui-port
</code></pre>
<p>A new question arises: why use Ribbon to load-balance requests when k8s services natively do that?</p>
|
<p>I have a config map that defines some variables like environment that are then passed into alot of deployment configurations like this</p>
<pre><code>- name: ENV
valueFrom:
configMapKeyRef:
name: my-config-map
key: ENV
</code></pre>
<p>Secrets and some volumes, like SSL certs, are also common across the configs. Is there some kubernetes type that would let me create a base service deployment which normal deployments extend? Or some other way to deal with this? I am also using kustomize, so there might be an option there.</p>
| <p>You can use a PodPreset object to inject information like secrets, volume mounts, and environment variables etc into pods at creation time.</p>
<p>Before starting using PodPreset you need to take few steps:</p>
<ul>
<li>First, you need to enable the API type <code>settings.k8s.io/v1alpha1/podpreset</code>, which can be done by including <code>settings.k8s.io/v1alpha1=true</code> in the <code>--runtime-config</code> option for the API server</li>
<li>Enable the admission controller <code>PodPreset</code>. You can do this by including <code>PodPreset</code> in the <code>--enable-admission-plugins</code> option value specified for the API server</li>
<li>After that, define PodPreset objects in the namespace you will work in and create them with <code>kubectl apply -f preset.yaml</code></li>
</ul>
<p>Please refer to <a href="https://kubernetes.io/docs/concepts/workloads/pods/podpreset/" rel="nofollow noreferrer">official documentation</a> to see how it works.</p>
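<p>As a sketch, a PodPreset injecting the variables from the question's config map could look like the following. The selector label <code>role: backend</code> is an assumption — any label carried by your deployments' pods works:</p>

```yaml
# Sketch: injects env vars from my-config-map into every pod
# labelled role=backend. The label is an assumption.
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: common-env
spec:
  selector:
    matchLabels:
      role: backend
  envFrom:
    - configMapRef:
        name: my-config-map
```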
|
| <p>I deployed a helm chart (<code>helm install --name=my-release stable/kube-ops-view</code>) which created a svc with a clusterIP. I tried to create a route to it via a traefik ingress, but it's not working.</p>
<p>I have been able to route other applications (nginx) using similar ingress configurations</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: demo-ingress
namespace: kube-ops-view #svc is created in this namespace
spec:
rules:
- host:
http:
paths:
- path: /kube
backend:
serviceName: kube-ops-view
servicePort: 80
</code></pre>
<p>The ingress should have worked.</p>
| <p>I was able to make it work on a GKE cluster.
After cluster creation:</p>
<p>1) Installed Helm</p>
<p>2) Installed the <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">NGINX Ingress Controller</a></p>
<p>Next,
you should use a <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">Rewrite target</a> with the <code>nginx.ingress.kubernetes.io/rewrite-target</code> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">annotation</a>.</p>
<blockquote>
<p>Starting in Version 0.22.0, ingress definitions using the annotation
nginx.ingress.kubernetes.io/rewrite-target are not backwards
compatible with previous versions. In Version 0.22.0 and beyond, any
substrings within the request URI that need to be passed to the
rewritten path must explicitly be defined in a capture group.</p>
</blockquote>
<p>Your ingress should look like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: kubeops-kube-ops-view
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: kubeops-kube-ops-view
servicePort: 80
path: /kube(/?|$)(.*)
</code></pre>
<p>Another option: you can enable the ingress in the helm chart without writing the ingress yaml yourself.</p>
<p>My way:</p>
<pre><code>helm fetch stable/kube-ops-view --untar
</code></pre>
<p>edit values.yaml</p>
<pre><code>ingress:
enabled: true
path: /kube(/|$)(.*)
# hostname: kube-ops-view.local
annotations: {
kubernetes.io/ingress.class: nginx,
nginx.ingress.kubernetes.io/rewrite-target: /$2
}
tls: []
## Secrets must be manually created in the namespace
# - secretName: kube-ops-view.local-tls
# hosts:
# - kube-ops-view.local
</code></pre>
<p>edit templates/ingress.yaml</p>
<pre><code>{{- if .Values.ingress.enabled -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "kube-ops-view.fullname" . }}
labels:
app: {{ template "kube-ops-view.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
{{- if .Values.ingress.annotations }}
annotations:
{{ toYaml .Values.ingress.annotations | indent 4 }}
{{- end }}
spec:
rules:
- host:
http:
paths:
- path: {{ .Values.ingress.path }}
backend:
serviceName: {{ template "kube-ops-view.fullname" . }}
servicePort: {{ .Values.service.externalPort }}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
</code></pre>
<p>Verify and install:</p>
<pre><code>helm install --dry-run --debug kube-ops-view
helm install kube-ops-view --name kubeops
</code></pre>
<p>Check Result:</p>
<pre><code>curl -iLk http://Ingress-external-ip/kube/
HTTP/1.1 308 Permanent Redirect
Server: openresty/1.15.8.1
Date: Thu, 22 Aug 2019 09:21:21 GMT
Content-Type: text/html
Content-Length: 177
Connection: keep-alive
Location: https://Ingress-external-ip/kube/
HTTP/2 200
server: openresty/1.15.8.1
date: Thu, 22 Aug 2019 09:21:21 GMT
content-type: text/html; charset=utf-8
content-length: 1276
vary: Accept-Encoding
strict-transport-security: max-age=15724800; includeSubDomains
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Kubernetes Operational View 0.11</title>
<link rel="shortcut icon" href="static/favicon.ico">
<style>* {padding: 0; margin: 0} body { color: #aaaaff; background: #000; }</style>
<style>
/* latin */
@font-face {
font-family: 'ShareTechMono';
font-style: normal;
font-weight: 400;
/* ShareTechMono-Regular.ttf: Copyright (c) 2012, Carrois Type Design, Ralph du Carrois (www.carrois.com post@carrois.com), with Reserved Font Name 'Share'
License: SIL Open Font License, 1.1 */
src: local('Share Tech Mono'), local('ShareTechMono-Regular'), url(static/sharetechmono.woff2) format('woff2');
unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2212, U+2215, U+E0FF, U+EFFD, U+F000;
}
</style>
</head>
<body>
<!-- make sure the font is loaded -->
<div id="loading" style="font-family: ShareTechMono">Loading..</div>
<script src="static/build/app-ee71ac795470b05e7f76.js"></script>
<script>document.getElementById('loading').style.display = 'none'; const app = new App(); app.run()</script>
</body>
</html>
</code></pre>
<p><a href="https://i.stack.imgur.com/FN6ui.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FN6ui.png" alt="enter image description here"></a></p>
|
<p>I am trying to upload a jar file into a config map. I am able to upload a small jar file of size 1 MB, but not a file of size 4 MB. I am using the command below to create the configmap.</p>
<p><code>kubectl create configmap configmap-test --from-file=jarfile.jar</code></p>
<p>Is there a way to increase this size limit?</p>
<p>Client version is GitVersion:"v1.14.1" and server version is GitVersion:"v1.15.0"</p>
| <p>This is a constraint from etcd.</p>
<blockquote>
<p>The limit is 1MB because that's the limit for etcd: <a href="https://github.com/kubernetes/kubernetes/issues/19781" rel="nofollow noreferrer">size limit</a></p>
</blockquote>
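<p>One extra detail worth keeping in mind: binary data such as a jar is stored base64-encoded in the object, which inflates it by about a third before the limit applies. A quick sketch of the arithmetic:</p>

```python
import base64

LIMIT = 1 * 1024 * 1024  # the ~1 MB limit quoted above

jar = bytes(4 * 1024 * 1024)      # stand-in for a 4 MB jar file
encoded = base64.b64encode(jar)   # how binary data is stored in the object

print(len(jar), len(encoded))     # base64 adds roughly 33% overhead
print(len(encoded) > LIMIT)       # True: well over the limit
```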
|
<p>I have an API that describes itself through an openapi3 file. This app is contained in a pod that also has a sidecar app that is supposed to read this file at startup time.</p>
<p>My problem is: how can my sidecar app read the openapi file from the other container?</p>
<p>I know I could do it using a volume (emptyDir) and modifying the command so my api copies the file at startup time. I'd rather not go this route. I have been looking for a feature where I define a volume which is mapped to an existing folder in my app, but without being empty. Is there such a thing?</p>
| <p>One of the simplest approaches is to use <code>emptyDir</code>: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a></p>
<p>In your container that generates the file that needs to be shared, mount emptyDir volume with write access and copy the file there. In your sidecar that needs to read the file, mount the same volume as read only and read the file.
With this pattern, all containers in the pod can have access to the same file system with read / write as needed.</p>
|
<h2>Update</h2>
<p>Apparently this behavior is caused by <code>ServiceAccount</code>:
<a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller</a></p>
<p>Which uses something called an <code>AdmissionController</code>. I guess what I'm looking for is one of the following:</p>
<ul>
<li><p>Find a setting in the <code>AdmissionController</code> which skips the secrets mount for a given container (an initContainer, in my case)</p></li>
<li><p>Find an implementation of <code>AdmissionController</code> which has this flexibility</p></li>
<li><p>Change the location of secrets from /var/run/secrets to somewhere else</p></li>
</ul>
<h2>Original</h2>
<p>I have an initContainer which is part of a pod belonging to a StatefulSet. I am mounting some straightforward volumes (so I can create paths/permissions before my app container starts). However, as soon as I check the file system, I see a nested path with what appears to be kubernetes secrets.</p>
<p>How did this get mounted? Is this our own doing? Why this path? Can I stop the secrets from being mounted? Can i change the mount path?</p>
<pre><code>$ kubectl logs nmnode-0-0 -n test -c prep-hadoop-paths
drwxrwsrwt 4 root root 80 Aug 21 03:52 /run
/run:
total 0
drwxrwsr-x 2 1000 root 40 Aug 21 03:52 configmaps
drwxr-sr-x 3 root root 60 Aug 21 03:52 secrets
/run/configmaps:
total 0
/run/secrets:
total 0
drwxr-sr-x 3 root root 60 Aug 21 03:52 kubernetes.io
/run/secrets/kubernetes.io:
total 0
drwxrwsrwt 3 root root 140 Aug 21 03:51 serviceaccount
/run/secrets/kubernetes.io/serviceaccount:
total 0
lrwxrwxrwx 1 root root 13 Aug 21 03:51 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Aug 21 03:51 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Aug 21 03:51 token -> ..data/token
</code></pre>
<pre><code>
initContainers:
- command:
- sh
- -c
- umask 002; ls -ld /run; ls -lR /run; mkdir -p /var/run/secrets/credentials
; mkdir -p /var/opt/hdfs ; mkdir -p /var/run/configmaps ; mkdir -p /var/run/secrets/certificates
; ls -lR /var;
image: ubuntu:16.04
imagePullPolicy: IfNotPresent
name: prep-hadoop-paths
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/opt
name: data
subPath: hadoop/var/opt
- mountPath: /var/log
name: logs
subPath: hadoop
- mountPath: /var/run
name: var-run
subPath: hadoop
</code></pre>
<p>As you can see from the initContainer spec, there is nowhere that I specify or require any secrets to be mounted. However, they show up regardless.</p>
<p>The following is the volumes listing for the pod.</p>
<pre><code>
volumes:
- name: mssql-master-pool-secret
secret:
defaultMode: 420
secretName: mssql-master-pool-secret
- name: controller-internal-secret
secret:
defaultMode: 420
secretName: controller-internal-secret
- emptyDir:
medium: Memory
name: var-run
- configMap:
defaultMode: 420
name: mssql-hadoop-storage-0-configmap
name: hadoop-config-volume
- name: nmnode-0-agent-secret
secret:
defaultMode: 420
secretName: nmnode-0-agent-secret
- configMap:
defaultMode: 420
name: cluster-configmap
name: cluster-config-volume
</code></pre>
<p>If you need more parts of the yaml please let me know.</p>
| <p>You are on the right track, as the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">Admission controller</a> is the main mechanism for implementing various features via the Kubernetes API. As you mentioned above, the <a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller" rel="nofollow noreferrer">Service Account</a> is also propagated with some admission features in a Kubernetes cluster, bound to a particular admission plugin called <code>ServiceAccount</code>.</p>
<p>According to the official Kubernetes <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-controller" rel="nofollow noreferrer">documentation</a>, there are special flags <code>--enable-admission-plugins</code> and <code>--disable-admission-plugins</code> included in the <code>kube-apiserver</code> <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">configuration</a> that can be used to enable or disable admission plugins respectively.</p>
<p>By default, the <code>ServiceAccount</code> admission controller plugin is enabled, as described <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#which-plugins-are-enabled-by-default" rel="nofollow noreferrer">here</a>. Among other actions, this plugin mounts a volume with token data and a CA certificate into each Pod across the K8s cluster for <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">authentication</a> to the apiserver.</p>
<p>In order to deactivate the <code>ServiceAccount</code> admission plugin, you can inject <code>--disable-admission-plugins=ServiceAccount</code> into the <code>kube-apiserver</code> configuration.</p>
<p>Otherwise, if you want to </p>
<blockquote>
<p>Change the location of secrets from /var/run/secrets to somewhere else</p>
</blockquote>
<p>This is <a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#service-account-admission-controller" rel="nofollow noreferrer">where</a> you can change the path to whatever you want.</p>
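<p>Worth noting as a lighter-weight alternative to disabling the plugin cluster-wide: the token mount can be opted out per pod (or per service account) with the <code>automountServiceAccountToken</code> field. It applies to the whole pod, not a single initContainer, but it avoids touching the apiserver configuration. A sketch:</p>

```yaml
# Sketch: opts this pod out of the automatic service-account token
# mount without changing the apiserver admission configuration.
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false
  containers:
    - name: main
      image: ubuntu:16.04
```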
|
<p>I'm working on a project that has several event-driven microservices and also uses Kubernetes for load balancing. All the services are both publishers and listeners. When a microservice publishes an event, all the listeners catch the event (if they listen for that specific event) and do their job. There was no problem with this flow until this:</p>
<p>Let's say I have a microservice responsible for sending e-mails, and this service is duplicated 2 times by the load balancer due to high load. Now we have 3 instances of the e-mail service. When a "sendMail" event is published, all 3 instances catch the event and each sends an e-mail of its own. At the end of the day, 3 e-mails have been sent.</p>
<p>My question is: can I configure a cloud bus that allows me to publish events for both scenarios? I want to say to an event "when a single listener catches you, disappear" or "go to every listener waiting out there".</p>
<p>For example;</p>
<p>Microservices: A, B, C</p>
<p>Duplicated by load balancer: A1, A2, A3, B1...</p>
<p>Case 1: I want to publish an event for all the services' instances.</p>
<p>Case 2: I want to publish an event for service A instances.</p>
<p>Case 3: I want to publish an event for a single instance of A (don't
care which).</p>
<p>I have tried
giving destinations to events, but all the instances have the same bus name since they are identical duplicates. Even if I knew a single instance's bus name, I couldn't use it because that pod might die.</p>
<p>Event publishing;</p>
<pre class="lang-java prettyprint-override"><code>applicationContext().publishEvent(
new customEvent(
this, // Source
busProperties().getId(), // Origin Service
null // Destination Service (null: all)
)
);
</code></pre>
<p>Event listener;</p>
<pre class="lang-java prettyprint-override"><code>@Component
public class Listener implements ApplicationListener<CustomEvent> {
@Override
public void onApplicationEvent(CustomEvent event) {
Foo();
}
}
</code></pre>
| <p>I understand better now. Here is the <a href="https://images.squarespace-cdn.com/content/v1/56894e581c1210fead06f878/1532295877340-V08C7VRC02TR94J303GM/ke17ZwdGBToddI8pDm48kJMyHperWZxre3bsQoFNoPhZw-zPPgdn4jUwVcJE1ZvWEtT5uBSRWt4vQZAgTJucoTqqXjS3CfNDSuuf31e0tVGDclntk9GVn4cF1XFdv7wlNvED_LyEM5kIdmOo2jMRZpu3E9Ef3XsXP1C_826c-iU/KafkaPubSub.png?format=500w" rel="nofollow noreferrer">Image Link</a> I posted earlier; I'm sure you already know about it. This is a common issue, and the best way to handle it is to use Redis as a <a href="https://redis.io/commands/brpop" rel="nofollow noreferrer">blocking</a> (locking) mechanism. Since the message queue sends requests asynchronously to all the consumers, we cannot know which one receives and processes a given request, but we can ensure it isn't processed by all of them by using a lock: before doing any operation, check for the lock, and only process the request if it doesn't exist.</p>
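<p>A minimal sketch of the idea, with an in-memory set standing in for Redis (in production the claimed-message keys would live in Redis, e.g. via <code>SET key value NX</code> with an expiry, so all instances share them):</p>

```python
import threading

# Stand-in for Redis: a shared key set guarded by a lock. In production,
# an atomic SET-if-not-exists in Redis gives the same "claim" semantics
# across pods.
claimed = set()
claimed_guard = threading.Lock()
processed = []

def try_claim(message_id: str) -> bool:
    """Atomically claim a message; only the first caller wins."""
    with claimed_guard:
        if message_id in claimed:
            return False
        claimed.add(message_id)
        return True

def consumer(instance: str, message_id: str) -> None:
    # Every instance receives the event, but only the claimer processes it.
    if try_claim(message_id):
        processed.append((instance, message_id))

# Three instances of the e-mail service all receive the same event.
threads = [
    threading.Thread(target=consumer, args=(name, "sendMail-42"))
    for name in ("A1", "A2", "A3")
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(processed))  # exactly one e-mail is sent
```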
|
<p>I have a cluster with 3 control-planes. As any cluster my cluster also has a default <code>kubernetes</code> service. As any service it has a list of endpoints:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: 2017-12-12T17:08:34Z
name: kubernetes
namespace: default
resourceVersion: "6242123"
selfLink: /api/v1/namespaces/default/endpoints/kubernetes
uid: 161edaa7-df5f-11e7-a311-d09466092927
subsets:
- addresses:
- ip: 10.9.22.25
- ip: 10.9.22.26
- ip: 10.9.22.27
ports:
- name: https
port: 8443
protocol: TCP
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>Everything is ok, but I completely can't understand where these endpoints come from. It is logical to assume they come from the <code>Service</code> label selector, but there are no label selectors:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2017-12-12T17:08:34Z
labels:
component: apiserver
provider: kubernetes
name: kubernetes
namespace: default
resourceVersion: "6"
selfLink: /api/v1/namespaces/default/services/kubernetes
uid: 161e4f00-df5f-11e7-a311-d09466092927
spec:
clusterIP: 10.100.0.1
ports:
- name: https
port: 443
protocol: TCP
targetPort: 8443
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>So, could anybody explain how k8s services and endpoints work in the case of the built-in default <code>kubernetes</code> service?</p>
| <p>It's not clear how you created your multi-master cluster, but here is some research for you:</p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/highly-available-master/" rel="nofollow noreferrer">Set up High-Availability Kubernetes Masters</a> describes HA k8s creation, and it has <a href="https://kubernetes.io/docs/tasks/administer-cluster/highly-available-master/" rel="nofollow noreferrer">notes</a> about the default kubernetes service.</p>
<blockquote>
<p>Instead of trying to keep an up-to-date list of Kubernetes apiserver
in the Kubernetes service, the system directs all traffic to the
external IP:</p>
<p>in one master cluster the IP points to the single master,</p>
<p>in multi-master cluster the IP points to the load balancer in-front of
the masters.</p>
<p>Similarly, the external IP will be used by kubelets to communicate
with master</p>
</blockquote>
<p>So I would rather expect an LB IP instead of the 3 master IPs.</p>
<p>Service creation: <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L46-L83" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L46-L83</a></p>
<pre><code>const kubernetesServiceName = "kubernetes"
// Controller is the controller manager for the core bootstrap Kubernetes
// controller loops, which manage creating the "kubernetes" service, the
// "default", "kube-system" and "kube-public" namespaces, and provide the IP
// repair check on service IPs
type Controller struct {
ServiceClient corev1client.ServicesGetter
NamespaceClient corev1client.NamespacesGetter
EventClient corev1client.EventsGetter
healthClient rest.Interface
ServiceClusterIPRegistry rangeallocation.RangeRegistry
ServiceClusterIPInterval time.Duration
ServiceClusterIPRange net.IPNet
ServiceNodePortRegistry rangeallocation.RangeRegistry
ServiceNodePortInterval time.Duration
ServiceNodePortRange utilnet.PortRange
EndpointReconciler reconcilers.EndpointReconciler
EndpointInterval time.Duration
SystemNamespaces []string
SystemNamespacesInterval time.Duration
PublicIP net.IP
// ServiceIP indicates where the kubernetes service will live. It may not be nil.
ServiceIP net.IP
ServicePort int
ExtraServicePorts []corev1.ServicePort
ExtraEndpointPorts []corev1.EndpointPort
PublicServicePort int
KubernetesServiceNodePort int
runner *async.Runner
}
</code></pre>
<p>Service periodically updates: <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L204-L242" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L204-L242</a></p>
<pre><code>// RunKubernetesService periodically updates the kubernetes service
func (c *Controller) RunKubernetesService(ch chan struct{}) {
    // wait until process is ready
    wait.PollImmediateUntil(100*time.Millisecond, func() (bool, error) {
        var code int
        c.healthClient.Get().AbsPath("/healthz").Do().StatusCode(&code)
        return code == http.StatusOK, nil
    }, ch)

    wait.NonSlidingUntil(func() {
        // Service definition is not reconciled after first
        // run, ports and type will be corrected only during
        // start.
        if err := c.UpdateKubernetesService(false); err != nil {
            runtime.HandleError(fmt.Errorf("unable to sync kubernetes service: %v", err))
        }
    }, c.EndpointInterval, ch)
}

// UpdateKubernetesService attempts to update the default Kube service.
func (c *Controller) UpdateKubernetesService(reconcile bool) error {
    // Update service & endpoint records.
    // TODO: when it becomes possible to change this stuff,
    // stop polling and start watching.
    // TODO: add endpoints of all replicas, not just the elected master.
    if err := createNamespaceIfNeeded(c.NamespaceClient, metav1.NamespaceDefault); err != nil {
        return err
    }
    servicePorts, serviceType := createPortAndServiceSpec(c.ServicePort, c.PublicServicePort, c.KubernetesServiceNodePort, "https", c.ExtraServicePorts)
    if err := c.CreateOrUpdateMasterServiceIfNeeded(kubernetesServiceName, c.ServiceIP, servicePorts, serviceType, reconcile); err != nil {
        return err
    }
    endpointPorts := createEndpointPortSpec(c.PublicServicePort, "https", c.ExtraEndpointPorts)
    if err := c.EndpointReconciler.ReconcileEndpoints(kubernetesServiceName, c.PublicIP, endpointPorts, reconcile); err != nil {
        return err
    }
    return nil
}
</code></pre>
</code></pre>
<p>Endpoint update place: <a href="https://github.com/kubernetes/kubernetes/blob/72f69546142a84590550e37d70260639f8fa3e88/pkg/master/reconcilers/lease.go#L163" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/72f69546142a84590550e37d70260639f8fa3e88/pkg/master/reconcilers/lease.go#L163</a></p>
<p>An Endpoints object can also be created manually. Visit <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">Services without selectors</a> for more info.</p>
|
<p>I am looking for a way to get resource quotas for the namespace using client-go, similar to <code>kubectl describe ns my-namespace-name</code>.</p>
<p>I have tried <code>ns, err := k8client.CoreV1().Namespaces().Get("my-namespace-name", metav1.GetOptions{})</code> but it does not give quota info.</p>
<p>Also tried <code>ns, err := k8client.CoreV1().ResourceQuotas("my-namespace-name").Get("name", metav1.GetOptions{})</code> but I cannot figure out what to put as the <code>name</code> parameter in <code>.Get()</code>. I tried the namespace name and several resource types from <a href="https://kubernetes.io/docs/reference/kubectl/overview/#resource-types" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/overview/#resource-types</a>, but with no luck, getting errors like <code>resourcequotas "namespaces" not found</code> or
<code>resourcequotas "limits.cpu" not found</code>.</p>
<p>Tried <code>ns, err := k8client.CoreV1().ResourceQuotas("my-namespace-name").List(metav1.ListOptions{})</code> as well but it returned no result. </p>
<p>Any ideas on how to get it will be much appreciated!</p>
| <p>OK, after some debugging and going through the kubernetes and kubectl code, the way to get it is:
<code>ns, err := k8client.CoreV1().ResourceQuotas("my-namespace-name").List(metav1.ListOptions{})</code> </p>
<p>Not sure why it did not work the first time I tried it; I might have made a typo in the namespace name. </p>
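<p>For reference, the list call returns namespaced <code>ResourceQuota</code> objects. A quota manifest looks roughly like the following sketch (the name and limits here are illustrative, not taken from the question):</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota              # illustrative name
  namespace: my-namespace-name
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
</code></pre>
<p>The <code>name</code> parameter of <code>.Get()</code> is the name of such an object (e.g. <code>compute-quota</code> above), not a resource type, which explains the earlier "not found" errors.</p>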
|
<p>I have created a new docker image that I want to use to replace the current docker image. The application is on the kubernetes engine on google cloud platform.</p>
<p>I believe I am supposed to use the gcloud container clusters update command. Although, I struggle to see how it works and how I'm supposed to replace the old docker image with the new one.</p>
| <p>You may want to use <code>kubectl</code> to interact with your GKE cluster. The method of updating the image depends on how the Pod / Container was created.</p>
<p>For some example commands, see <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources</a></p>
<p>For example, <code>kubectl set image deployment/frontend www=image:v2</code> will do a rolling update of the "www" containers of the "frontend" deployment, updating the image.</p>
<p>Getting up and running on GKE: <a href="https://cloud.google.com/kubernetes-engine/docs/quickstart" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/quickstart</a></p>
|
<p>Is there a way in Kubernetes to create a service for an external service that should return multiple IP addresses? Currently I am hacking around this by creating an A record in my public DNS provider (route53) and then in Kubernetes creating a service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    app: rabbitmq
spec:
  type: ExternalName
  externalName: rabbitmq.mydomainhere.dev
</code></pre>
<p>Is there a way to create a service natively in Kubernetes that returns a fixed set of IP addresses that are not managed inside of the Kubernetes cluster without creating a public DNS record and using <code>externalName</code>?</p>
| <p>You can create a headless service without selectors and set clusterIP to None, then create an endpoint manually to have all your IPs set in the endpoint. You can take a look at the following example.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-es
spec:
  clusterIP: None
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-es
subsets:
  - addresses:
      - ip: 172.22.111.250
      - ip: 172.22.149.230
    ports:
      - port: 9200
</code></pre>
<p>nslookup output from one Pod</p>
<pre><code>root@curl-66bdcf564-8m6h7:/ ]$ nslookup my-es
Server: 169.254.25.10
Address 1: 169.254.25.10
Name: my-es
Address 1: 172.22.111.250 172-22-111-250.my-es.default.svc.cluster.local
Address 2: 172.22.149.230 172-22-149-230.my-es.default.svc.cluster.local
</code></pre>
|
<p>How can I find which yaml file was used for the deployment of any kubernetes resource?</p>
<p>I checked <code>kubectl describe</code>; it doesn't list this. Is there any way to know?</p>
<p>use case:</p>
<p>I want to update the yaml and redeploy. One option, I guess, is to generate the yaml from the running resource, update it, and redeploy.</p>
<p>any suggestions ?</p>
| <p>To get the YAML of a running Kubernetes resource, use the commands from the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#viewing-finding-resources" rel="noreferrer">kubectl cheat sheet</a>:</p>
<pre><code>kubectl get deploy my-deployment -o yaml --export
OR
kubectl get pod my-pod -o yaml --export
OR
kubectl get svc my-svc -o yaml --export
</code></pre>
<p>Note that <code>--export</code> was deprecated in kubectl 1.14 and later removed, so on newer versions drop that flag. Editing is also simple:</p>
<pre><code>kubectl get deploy my-deployment -o yaml --export > my-deployment.yml
<Edit the my-deployment.yml file and kubectl apply -f my-deployment.yml>
OR
kubectl edit deployment my-deployment
</code></pre>
<p>Hope this helps.</p>
|
<p>I have a simple kubernetes cluster setup on GKE. To persist the data for my express web app, I have a mongodb deployment, cluster-ip-service for the mongodb deployment and persistent volume claim running in the cluster. </p>
<p>Users' data is being stored and everything works fine, until I deleted the mongodb deployment in the GKE console. When I try to bring the mongodb deployment back with the command:</p>
<pre><code>kubectl apply -f mongodb-deployment.yaml
</code></pre>
<p>The mongodb deployment and PVC are running again but <strong>all the previous data was lost.</strong></p>
<p>My persistent volume claim yaml file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
</code></pre>
<p>My mongodb deployment yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: mongo
  template:
    metadata:
      labels:
        component: mongo
    spec:
      volumes:
        - name: mongo-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-storage
              mountPath: /var/lib/mongo/data
</code></pre>
<p>Since the data is stored in a persistent volume, which is outside of the cluster's lifecycle.</p>
<p><strong>Shouldn't the previous data persist and become available when the database deployment is up and running again?</strong> </p>
<p>I think I might be missing something here.</p>
| <p>Yes, it is possible with the reclaim policy. Dynamically provisioned volumes default to the <code>Delete</code> reclaim policy, so the underlying disk is removed together with the claim; set the policy to <code>Retain</code> to keep the data. Please refer to this <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming" rel="nofollow noreferrer">documentation</a>.</p>
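<p>As a sketch, a pre-provisioned PersistentVolume that keeps its data when the claim goes away would set the policy like this (the volume name and GCE disk below are illustrative):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv                          # illustrative name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the disk and its data when the claim is deleted
  gcePersistentDisk:
    pdName: mongo-disk                    # illustrative pre-created GCE disk
    fsType: ext4
</code></pre>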
|
<p>I have a folder of TFRecords on a network that I want to expose to multiple pods. The folder has been exported via NFS.</p>
<p>I have tried creating a Persistent Volume, followed by a Persistent Volume Claim. However, that just creates a folder inside the NFS mount, which I don't want. Instead, I want the Pod to <em>access</em> the folder with the TFRecords.</p>
<p>I have listed the manifests for the PV and PVC. </p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-tfrecord-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /media/veracrypt1/
    server: 1.2.3.4
    readOnly: false
</code></pre>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-tfrecord-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-tfrecord
  resources:
    requests:
      storage: 1Gi
</code></pre>
| <p>I figured it out. The issue was that I was looking at the problem the wrong way. I didn't need any <em>provisioning</em>. Instead, what was needed was to simply mount the NFS volume within the container:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: data
          mountPath: /mnt/data
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
  volumes:
    - name: data
      nfs:
        server: 1.2.3.4
        path: /media/foo/DATA
</code></pre>
|
| <p>I am trying to size our pods using the actuator metrics info, with the below K8s resource configuration:</p>
<pre><code>resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "512Mi"
</code></pre>
<p>We are observing that <code>jvm.memory.max</code> returns ~1455 MB. I understand that this value includes heap and non-heap. Further drilling into the API, (<code>jvm.memory.max?tag=area:nonheap</code>) and (<code>jvm.memory.max?tag=area:heap</code>) return ~1325 MB and ~129 MB respectively.</p>
<p>Obviously, with non-heap set to max out at a value greater than the K8s limit, the container is bound to get killed eventually. But why is the JVM (non-heap memory) not bounded by the memory configuration of the container (configured in K8s)?</p>
<p>The above observations are valid with Java 8 and Java 11. The below blog discusses the experimental options with Java 8, where CPU and heap configurations are discussed but non-heap is not mentioned. What are some suggestions to consider in sizing the pods?</p>
<p><code>-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap</code>
<a href="https://blogs.oracle.com/java-platform-group/java-se-support-for-docker-cpu-and-memory-limits" rel="nofollow noreferrer">Source</a></p>
| <p>Java 8 has a few flags that can help the runtime operate in a more container aware manner:</p>
<pre><code>java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -jar app.jar
</code></pre>
<p>Why do you get a maximum JVM heap of ~129 MB when the maximum container memory limit is set to 512 MB? The answer is that JVM memory consumption includes both heap and non-heap memory. The memory required for class metadata, JIT-compiled code, thread stacks, GC, and other processes is taken from non-heap memory. Therefore, based on the cgroup resource restrictions, the JVM reserves a portion of the memory for non-heap use to ensure system stability.
The exact amount of non-heap memory can vary widely, but a safe bet for resource planning is that the heap is about 80% of the JVM&rsquo;s total memory. So if you set the maximum heap to 1000 MB, you can expect the whole JVM to need around 1250 MB.</p>
<p>The JVM read that the container is limited to 512M and created a JVM with maximum heap size of ~129MB. Exactly 1/4 of the container memory as defined in the JDK ergonomic page.</p>
<p>If you dig into the <a href="https://docs.oracle.com/javase/9/gctuning/parallel-collector1.htm#JSGCT-GUID-74BE3BC9-C7ED-4AF8-A202-793255C864C4" rel="nofollow noreferrer">JVM Tuning guide</a> you will see the following.</p>
<blockquote>
<p>Unless the initial and maximum heap sizes are specified on the command line, they're calculated based on the amount of memory on the machine. The default maximum heap size is one-fourth of the physical memory while the initial heap size is 1/64th of physical memory. The maximum amount of space allocated to the young generation is one third of the total heap size.</p>
</blockquote>
<p>You can find more information about it <a href="https://developers.redhat.com/blog/2017/03/14/java-inside-docker/" rel="nofollow noreferrer">here</a>.</p>
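<p>A common way to keep non-heap headroom while still using most of the container limit is to set the heap fraction explicitly. A sketch of a container spec doing so (the image name is illustrative; <code>-XX:MaxRAMPercentage</code> requires Java 8u191+ or Java 10+):</p>
<pre><code>containers:
  - name: app
    image: my-registry/my-app:latest       # illustrative image
    env:
      - name: JAVA_TOOL_OPTIONS            # picked up automatically by the JVM at startup
        value: "-XX:MaxRAMPercentage=75.0"
    resources:
      requests:
        memory: "512Mi"
      limits:
        memory: "512Mi"
</code></pre>
<p>With 75% of the 512 MB limit given to the heap, roughly a quarter of the container memory is left for metaspace, thread stacks, and JIT code, instead of relying on the default 1/4-of-RAM heap ergonomics.</p>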
|
| <p>I have recently started working with Kubernetes and Docker and am still new to how it all works. I have made a ps1 script to run all the steps I need to build an image and execute it on Kubernetes. </p>
<p>What I see is that all steps work fine in the ISE, except this one: "kubectl exec -it test-runner pwsh". For this step alone, I have to run it in another PowerShell window. </p>
<p>When I run this step in ISE, the script keeps running without ever giving any error or stopping. </p>
<p>Does anyone know if it's a limitation of Kubernetes working in the ISE, or if there is a workaround to make it work? </p>
<p>Working with ISE is quick and saves me tons of time, so this really makes a difference when I have to copy, paste, enter this each time in a separate PowerShell window.</p>
<p>Thanks in advance for your help!</p>
<p>P.S: I looked at other suggested similar questions/answers and none of them seem to be related to Kubernetes not working on ISE. Hence this question. </p>
<p>command: </p>
<pre><code>kubectl exec -it test-runner pwsh
</code></pre>
<p>Expected (and actual when running from PowerShell console):</p>
<pre><code>PS C:\windows\system32> kubectl exec -it test-runner pwsh
PowerShell 6.2.2
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/pscore6-docs
Type 'help' to get help.
PS /test>
</code></pre>
<p>Actual (when running from PowerShell ISE):</p>
<pre><code>PS C:\SourceCodeTLM\Apollo> kubectl exec -it test-runner pwsh
PowerShell 6.2.2
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/pscore6-docs
Type 'help' to get help.
(with a blinking cursor and the script running without breaking and never changing to the new path)...
</code></pre>
| <p><strong>The PowerShell <em>ISE</em> doesn't support <em>interactive</em> console applications</strong>, which notably means that you cannot start other <em>shells</em> from it.</p>
<p>The ISE tries to anticipate that problem by refusing to start well-known shells.
For instance, trying to start <code>cmd.exe</code> fails with the following error message:</p>
<pre class="lang-none prettyprint-override"><code>Cannot start "cmd". Interactive console applications are not supported.
To run the application, use the Start-Process cmdlet or use
"Start PowerShell.exe" from the File menu.
</code></pre>
<p>Note:</p>
<ul>
<li><code>pwsh.exe</code>, the CLI of PowerShell (Core) 7+, is <em>not</em> among the well-known shells, which indicates the <strong><a href="https://learn.microsoft.com/en-us/powershell/scripting/components/ise/introducing-the-windows-powershell-ise#support" rel="nofollow noreferrer">ISE's obsolescent status</a></strong>. It is <strong>being superseded by <a href="https://code.visualstudio.com/" rel="nofollow noreferrer">Visual Studio Code</a> with the <a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell" rel="nofollow noreferrer">PowerShell extension</a></strong>. Obsolesence aside, there are other pitfalls - see the bottom section of <a href="https://stackoverflow.com/a/57134096/45375">this answer</a>.</li>
</ul>
<p>However, it is impossible for the ISE to detect all cases where a given command (ultimately) invokes an interactive console application; when it doesn't, <strong>invocation of the command is attempted, resulting in obscure error messages or, as in your case, hangs</strong>.</p>
<p>As the error message shown for <code>cmd.exe</code> implies, <strong>you must run interactive console applications <em>outside</em> the ISE, in a regular console window.</strong></p>
<p><strong>From the ISE</strong> you can <strong>use <a href="https://learn.microsoft.com/powershell/module/microsoft.powershell.management/start-process" rel="nofollow noreferrer"><code>Start-Process</code></a> to launch a program in a new, regular console window</strong>; in the case at hand:</p>
<pre><code>Start-Process kubectl 'exec -it test-runner pwsh'
</code></pre>
<p><strong>Alternatively</strong>, run your PowerShell sessions outside the ISE to begin with, such as in a regular console window, <strong>Windows Terminal</strong>, or in <strong>Visual Studio Code's integrated terminal</strong>.</p>
|
<p>I deployed mongodb in a Kubernetes cluster with this helm chart : <a href="https://github.com/helm/charts/tree/master/stable/mongodb" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/mongodb</a>. All is right. I can connect to mongo from within a replicatset container or from outside the cluster with a port-forward, or with a NodePort service. But I can't connect via an ingress.</p>
<p>When the ingress is deployed, I can curl mongodb and get this famous message: "It looks like you are trying to access MongoDB over HTTP on the native driver port.". But I can't connect with a mongo client; the connection gets stuck, and I can see in the mongodb logs that I never reach mongo.</p>
<p>Does someone have any information about accessing mongodb via an ingress object? Maybe it's a protocol problem?</p>
<p>The ingress manifests :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "mongodb.fullname" . }}
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: {{ .Values.ingress.hostName }}
      http:
        paths:
          - path: /
            backend:
              serviceName: "{{ template "mongodb.fullname" $ }}"
              servicePort: mongodb
  tls:
    - hosts:
        - {{ .Values.ingress.hostName }}
      secretName: secret
</code></pre>
<p>Thank you very much !</p>
| <p>Ingress controllers are designed for HTTP connections, as the error hinted, the ingress is not the way to access mongodb.</p>
<p>None of the information in an ingress definition makes much sense for a plain TCP connection; <code>host</code> names and HTTP URL <code>paths</code> don't apply to plain TCP connections. </p>
<p>Some ingress controllers (like <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md" rel="noreferrer">nginx-ingress</a>) can support plain TCP load balancers but not via an ingress definition. They use custom config maps. </p>
<p>Use a Service with <code>type: LoadBalancer</code> if your hosting environment supports it, or <code>type: NodePort</code> if not. There is an <a href="https://github.com/helm/charts/blob/ef50e8c7114b097dd1fdd497912e57274608aecc/stable/mongodb/templates/svc-standalone.yaml" rel="noreferrer">example in the stable mongodb helm chart</a> and its associated <a href="https://github.com/helm/charts/blob/ef50e8c7114b097dd1fdd497912e57274608aecc/stable/mongodb/values.yaml#L118-L141" rel="noreferrer">values</a>. </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: {{ template "mongodb.fullname" . }}
  labels:
    app: {{ template "mongodb.name" . }}
spec:
  type: LoadBalancer
  ports:
    - name: mongodb
      port: 27017
      targetPort: mongodb
    - name: metrics
      port: 9216
      targetPort: metrics
</code></pre>
|
<p>I want to add an extra DNS server as a pod (internal, for the other pods). </p>
<p>Kube-DNS works fine; the pods can resolve the short names of the others and can find the extra dns-pod via: #>host dns </p>
<p>So far so good. The resolv.conf on all of the pods has only the coredns entry (nameserver CLUSTER_IP). </p>
<p>If I manually add a second name server to resolv.conf, it does not work. Now I am thinking about an extra rule in Kube-DNS to forward requests to the dns-pod, but I have no idea if this is the right way to go. Furthermore, I do not know the IP of the dns-pod during the automatic creation of the complete setup (Terraform). </p>
<p>Would be nice to receive some tips on how this should be done. </p>
<p>best</p>
| <p>Set up a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> for your DNS pod, specifying a cluster IP in your allocated cluster service range.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-dns-service
spec:
  selector:
    app: dns-app
  ports:
    - protocol: UDP
      port: 53
      targetPort: 53
    - protocol: TCP
      port: 53
      targetPort: 53
  clusterIP: 10.233.8.8
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns" rel="nofollow noreferrer">Configure CoreDNS</a> to forward your custom domains to that cluster IP. </p>
<p>Modify the <code>coredns</code> ConfigMap to include the <a href="https://coredns.io/plugins/forward/" rel="nofollow noreferrer">forward plugin</a></p>
<pre><code>domain:53 {
    errors
    cache 30
    forward . 10.233.8.8
}
internal.domain:53 {
    errors
    cache 30
    forward . 10.233.8.8
}
</code></pre>
|
| <p>I need a piece of advice / recommendation / link to a tutorial.</p>
<p>I am designing a kubernetes cluster and one of the projects is a Wordpress site with lots of pictures (photo blog).</p>
<p>I want to be able to tear down and re-create my cluster within an hour, so all "persistent" pieces need to be hosted outside of cluster (say, separate linux instance).</p>
<p>It is doable with the database: I will just have a MySQL server running on that machine and will update the WP configs accordingly.</p>
<p>It is not trivial with filesystem storage. I am looking at Kubernetes volume providers, specifically NFS. I want to setup NFS server on a separate machine and have each WP pod use that NFS share through volume mechanism. In that case, I can rebuild my cluster any time and data will be preserved. Almost like database access, but filesystem.</p>
<p>The questions are the following. Does this solution seem feasible? Is there a better way to achieve the same goal? Does the Kubernetes NFS plugin support the functionality I need? What about authorization? </p>
| <p>So, I am using a very similar strategy for my cluster, where all my PVCs are placed on a standalone VM instance with a static IP that has an NFS server running, plus a simple nfs-client-provisioner helm chart on my cluster.</p>
<p>Here is what I did: </p>
<ol>
<li>Created a server (Ubuntu) and installed the NFS server on it. Reference <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-16-04" rel="nofollow noreferrer">here</a> </li>
<li><p>Installed a helm chart/app for the nfs-client-provisioner with parameters:</p>
<p><code>nfs.path = /srv ( the path on server which is allocated to NFS and shared)</code> </p>
<p><code>nfs.server = xx.yy.zz.ww ( IP of my NFS server configured above)</code></p></li>
</ol>
<p>And that's it: the chart creates an <code>nfs-client</code> storage class which you can use to create a PVC and attach to your pods. </p>
<p><strong>Note</strong> - Make sure to configure the <code>/etc/exports</code> file on the NFS server to look like this as mentioned in the reference digital ocean document.</p>
<p><code>/srv kubernetes_node_1_IP(rw,sync,no_root_squash,no_subtree_check)</code></p>
<p><code>/srv kubernetes_node_2_IP(rw,sync,no_root_squash,no_subtree_check)</code>
... and so on.</p>
<p>I am using the PVC for some PHP and Laravel applications; they seem to work well without any considerable delays, although you will have to check against your specific requirements. HTH.</p>
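<p>With the chart installed, a claim against the provisioner is just a normal PVC that references the <code>nfs-client</code> storage class (the claim name and size below are illustrative):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-uploads        # illustrative name
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany              # NFS allows mounting from several pods at once
  resources:
    requests:
      storage: 20Gi
</code></pre>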
|
<p>I have recently been using the nginxdemo/nginx-ingress controller.</p>
<p>As I understand it, this controller cannot do SSL passthrough (by that I mean pass the client certificate all the way through to the backend service for authentication), so instead I have been passing the client's subject DN through a header.</p>
<p>Ultimately I would prefer SSL-Passthrough and have been looking at the kubernetes/ingress-nginx project which apparently supports SSL passthrough.</p>
<p>Does anyone have an experience with this controller and SSL Passthrough.</p>
<p>The few Ingress examples showing passthrough that I have found leave the path setting blank.</p>
<p>Is this because passthrough has to take place at the TCP level (layer 4) rather than at HTTP (layer 7)?</p>
<p>Right now, I have a single host rule that services multiple paths.</p>
| <p>Completing lch's answer, I would like to add that I had the same problem recently and sorted it out by modifying the ingress-service deployment (I know, it should be a DaemonSet, but that's a different story).</p>
<p>The change was adding the parameter to spec.containers.args:</p>
<pre><code> --enable-ssl-passthrough
</code></pre>
<p>Then I've added the following annotations to my ingress:</p>
<pre><code>kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
</code></pre>
<p>The important ones are secure-backends and ssl-passthrough, but I think the rest are a good idea, provided you're not expecting HTTP traffic there.</p>
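<p>Putting it together, an ingress sketch with those annotations might look like this (the host, service name, and port are illustrative); note that passthrough is routed by SNI host at the TCP level, which is why examples leave the <code>path</code> blank:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-passthrough-ingress            # illustrative name
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: my-service.example.com
      http:
        paths:
          - backend:
              serviceName: my-service     # illustrative backend service
              servicePort: 443
</code></pre>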
|
<p>I need to delete a Pod on my GCP Kubernetes cluster. In the Kubernetes Engine API <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest/" rel="nofollow noreferrer">documentation</a> I can find only REST APIs for <code>projects.locations.clusters.nodePools</code>, but nothing for Pods.</p>
| <p>The GKE API is used to manage the cluster itself on an infrastructure level. To manage Kubernetes resources, you'd have to use the Kubernetes API. There are clients for various languages, but of course you can also directly call the API.</p>
<p>Deleting a Pod from within another or the same Pod:</p>
<pre class="lang-sh prettyprint-override"><code>PODNAME=ubuntu-xxxxxxxxxx-xxxx
curl https://kubernetes/api/v1/namespaces/default/pods/$PODNAME \
-X DELETE -k \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
</code></pre>
<p>From outside, you'd have to use the public Kubernetes API server URL and a valid token. Here's how you get those using <code>kubectl</code>:</p>
<pre class="lang-sh prettyprint-override"><code>APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
</code></pre>
<p>Here's more official information on accessing the <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">Kubernetes API server</a>.</p>
|
<p>I'm running a cluster on AWS EKS. The container (a StatefulSet Pod) that is currently running has Docker installed inside it. </p>
<p>I ran this image as Kubernetes StatefulSet in my cluster. Here is my yaml file,</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    run: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  selector:
    matchLabels:
      run: jenkins
  template:
    metadata:
      labels:
        run: jenkins
    spec:
      securityContext:
        fsGroup: 1000
      containers:
        - name: jenkins
          image: 99*****.dkr.ecr.<region>.amazonaws.com/<my_jenkins_image>:0.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: jenkins-port
</code></pre>
<p>Inside this Pod, I cannot run any docker command, which gives an ERROR:</p>
<blockquote>
<p>/etc/init.d/docker: 96: ulimit: error setting limit (Operation not permitted)</p>
</blockquote>
<p>In my research, I went through some articles which did not fix my issue.
I have listed below the solutions I tried that did not fix it in my case.</p>
<p><strong>First solution: (I ran inside the container)</strong>
<a href="https://stackoverflow.com/questions/24318543/how-to-set-ulimit-file-descriptor-on-docker-container-the-image-tag-is-phusion">article link</a></p>
<pre><code>$ sudo service docker stop
$ sudo bash -c "echo \"limit nofile 262144 262144\" >> /etc/init/docker.conf"
$ sudo service docker start
</code></pre>
<p><strong>Second solution: (I ran inside the container)</strong></p>
<pre><code>ulimit -n 65536 in /etc/init.d/docker
</code></pre>
<p><strong>Third solution:</strong> <a href="https://unix.stackexchange.com/questions/450799/getting-operation-not-permitted-error-when-setting-ulimit-for-memlock-in-a-doc">article link</a>
This seems a far better answer, but I could not add it to my configuration file.
It says to run the pod as privileged, but there is no way to add that option in a <strong>Kubernetes StatefulSet</strong>.
So I tried adding a SecurityContext (<code>securityContext: fsGroup: 1000</code>) like this inside the configuration file:</p>
<pre><code>spec:
  serviceName: jenkins
  replicas: 1
  selector:
    matchLabels:
      run: jenkins
  template:
    metadata:
      labels:
        run: jenkins
    spec:
      securityContext:
        fsGroup: 1000
</code></pre>
<p>Still, it does not work.</p>
<p><strong>Note: the same image worked on Docker Swarm.</strong></p>
<p>Anyhelp would be appreciated!</p>
| <p>I had this issue with Elasticsearch, and adding an <code>initContainer</code> worked. In this case it could be the solution: </p>
<pre><code>spec:
  .
  .
  .
  initContainers:
    - name: increase-fd-ulimit
      image: busybox
      command: ["sh", "-c", "ulimit -n 65536"]
      securityContext:
        privileged: true
</code></pre>
<p>If it doesn't work, there is a <a href="https://github.com/kubernetes/kubernetes/issues/3595#issuecomment-487919341" rel="nofollow noreferrer">second</a> way to solve this problem, which involves creating a new Dockerfile or changing the existing one:</p>
<pre><code>FROM 99*****.dkr.ecr.<region>.amazonaws.com/<my_jenkins_image>:0.0.3
RUN ulimit -n 65536
USER 1000
</code></pre>
<p>and change securityContext to:</p>
<pre><code>securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  capabilities:
    add: ["IPC_LOCK"]
</code></pre>
|
<p>I am trying to get the list of pods that are servicing a particular service</p>
<p>There are 3 pods associated with my service.</p>
<p>I tried to execute the below command</p>
<p><code>oc describe svc my-svc-1</code></p>
<p>I am expecting to see the pods associated with this service, but they do not show up. What command gets me just the list of pods associated with the service?</p>
| <p>A service selects its pods using a label selector. Look at the <code>Selector</code> shown for the service, and get the pods using that selector. For kubectl, the command looks like:</p>
<pre><code>kubectl get pods --selector <selector>
</code></pre>
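<p>As a sketch, given a service whose spec carries a selector like this (labels illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-svc-1
spec:
  selector:
    app: my-app        # shown as "Selector" in the describe output
  ports:
    - port: 80
</code></pre>
<p>the backing pods are listed with <code>kubectl get pods --selector app=my-app</code>; the same flag works with <code>oc get pods</code> on OpenShift.</p>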
|
<p>I have a custom resource that manages a deployment. I want my HPA to be able to scale the CR replica count based on the deployment CPU utilization instead of scaling the deployment directly. If it scales the deployment directly then when the reconciler loop is triggered it would just immediately see the discrepancy between the deployment replica count and the desired replica count as stated in the CR and update the deployment accordingly.</p>
<p>I am pretty close. I have the scale endpoint of my CR functioning properly and my HPA can even hit the endpoint. It just can't read the resource usage of the child.</p>
<p>I've also gotten it working if I have it scaling the deployment directly but as I stated above it's not a viable solution. More just proof that I have the metrics server functioning properly and the resource utilization is obtainable.</p>
<p>HPA YAML:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: {{.metadata.name}}
  namespace: {{.spec.namespace}}
spec:
  minReplicas: 1
  maxReplicas: 2
  metrics:
    - resource:
        name: cpu
        targetAverageUtilization: 2
      type: Resource
  scaleTargetRef:
    apiVersion: testcrds.group.test/v1alpha1
    kind: MyKind
    name: my-kind-1
</code></pre>
<p>And proof that the HPA is at least able to hit the scale endpoint of the CR:</p>
<pre><code>Name:               my-hpa
Namespace:          default
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"kind":"HorizontalPodAutoscaler","apiVersion":"autoscaling/v2beta1","metadata":{"name":"my-kind-1","namespace":"default","creationTimestamp":n...
CreationTimestamp:  Wed, 21 Aug 2019 17:22:11 -0400
Reference:          MyKind/my-kind-1
Metrics:            ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 2%
Min replicas:       1
Max replicas:       2
MLP pods:           0 current / 1 desired
Conditions:
  Type         Status  Reason            Message
  ----         ------  ------            -------
  AbleToScale  True    SucceededRescale  the HPA controller was able to update the target scale to 1
Events:
  Type    Reason             Age                   From                       Message
  ----    ------             ----                  ----                       -------
  Normal  SuccessfulRescale  3m54s (x80 over 23m)  horizontal-pod-autoscaler  New size: 1; reason: Current number of replicas below Spec.MinReplicas
</code></pre>
<p>As can be seen, no dice on retrieving the resource utilization (<code>&lt;unknown&gt;</code>)...</p>
| <p>I finally figured it out and wrote a brief Medium article, as there are a few steps to follow; the answer is in the article:
<a href="https://medium.com/@thescott111/autoscaling-kubernetes-custom-resource-using-the-hpa-957d00bb7993" rel="nofollow noreferrer">https://medium.com/@thescott111/autoscaling-kubernetes-custom-resource-using-the-hpa-957d00bb7993</a></p>
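<p>For anyone landing here, the crucial step (a sketch based on the question's CRD; the field paths are assumptions) is declaring a <code>scale</code> subresource with a <code>labelSelectorPath</code> on the CRD, so the HPA can discover the child Pods whose resource usage it should average:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mykinds.testcrds.group.test
spec:
  group: testcrds.group.test
  version: v1alpha1
  scope: Namespaced
  names:
    kind: MyKind
    plural: mykinds
  subresources:
    status: {}
    scale:
      specReplicasPath: .spec.replicas
      statusReplicasPath: .status.replicas
      labelSelectorPath: .status.selector   # HPA finds pod metrics via this selector
</code></pre>
<p>The controller then has to write the child Deployment's selector in string form (e.g. <code>app=my-kind-1</code>) into <code>.status.selector</code>; until that field is populated, the HPA reports <code>&lt;unknown&gt;</code> for the metric.</p>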
|
<p>Does someone know how to solve this issue: <code>WARN | main | o.s.c.k.c.ConfigMapPropertySource | Can't read configMap with name: [commons] in namespace:[dev]. Ignoring</code>
I have this configuration in my <code>bootstrap-prod.yml</code>:</p>
<pre><code>spring:
cloud:
kubernetes:
config:
name: ${spring.application.name}
sources:
- name: commons
namespace: dev
secrets:
name: commons-secret
reload:
enabled: true
</code></pre>
<p>But the application fails to start because of that error.
Same issue as described here: <a href="https://github.com/spring-cloud/spring-cloud-kubernetes/issues/138" rel="nofollow noreferrer">https://github.com/spring-cloud/spring-cloud-kubernetes/issues/138</a>
I bound the namespace's ServiceAccount to the cluster <code>view</code> role.</p>
<p>What's strange is that in the same namespace there are 2 applications; the first one (a Spring Cloud Gateway app) can read its ConfigMap but the second one (a simple Spring Boot web app) can't.
What am I missing?
The application is deployed on GKE.</p>
<pre><code>#:::::::::::::::::DEPLOYMENT::::::::::::::::::
apiVersion: apps/v1
kind: Deployment
metadata:
name: appservice
namespace: dev
spec:
...
</code></pre>
<p>And the ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: commons
namespace: dev
data:
application.yml: |-
server:
tomcat:
basedir: ..${file.separator}tomcat-${spring.application.name}
spring:
profiles:
active: prod
cache:
...
</code></pre>
<p>Thanks for your help</p>
| <p>I found the issue, or at least I think so: the problem came from malformed YAML.
If you take a look at the ConfigMap configuration, we have:</p>
<pre><code>...
data:
application.yml: |-
server:
tomcat:
basedir: ..${file.separator}tomcat-${spring.application.name} # issue is here, bad indentation
spring:
profiles:
active: prod
...
</code></pre>
<p>After changing that to:</p>
<pre><code>data:
application.yml: |-
server:
tomcat:
basedir: ..${file.separator}tomcat-${spring.application.name}
spring:
profiles:
active: prod
</code></pre>
<p>Everything seems to work fine. It's a bit strange that the error message doesn't point that out explicitly.</p>
|
<p>I am exploring the istio service mesh on my k8s cluster hosted on EKS(Amazon).</p>
<p>I tried deploying Istio 1.2.2 on a new k8s cluster with the demo.yml file used for the Bookinfo demonstration, and most of the use cases I understand properly.</p>
<p>Then, I deployed Istio using the Helm default profile (recommended for production) on my existing dev cluster with 100s of microservices running, and what I noticed is that my services can call HTTP endpoints but are not able to call external secure endpoints (<a href="https://www.google.com" rel="nofollow noreferrer">https://www.google.com</a>, etc.)</p>
<p>I am getting :</p>
<blockquote>
<p>curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong
version number</p>
</blockquote>
<p>Though I am able to call external https endpoints from my testing cluster.</p>
<p>To verify, I check the egress policy and it is <strong>mode: ALLOW_ANY</strong> in both the clusters.</p>
<p>Now, I removed Istio completely from my dev cluster and installed the demo.yml to test, but this is also not working.</p>
<p>I tried to relate my issue to the following thread, but without success:</p>
<p><a href="https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044" rel="nofollow noreferrer">https://discuss.istio.io/t/serviceentry-for-https-on-httpbin-org-resulting-in-connect-cr-srvr-hello-using-curl/2044</a> </p>
<p>I don't understand what I am missing or what I am doing wrong.</p>
<p>Note: I am referring to this setup: <a href="https://istio.io/docs/setup/kubernetes/install/helm/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/install/helm/</a></p>
| <p>This is most likely a bug in Istio (see for example <a href="https://github.com/istio/istio/issues/14520" rel="nofollow noreferrer">istio/istio#14520</a>): if you have any Kubernetes Service object, anywhere in your cluster, that listens on port 443 but whose name starts with <code>http</code> (not <code>https</code>), it will break all outbound HTTPS connections.</p>
<p>The instance of this I've hit involves configuring an AWS load balancer to do TLS termination. The Kubernetes Service needs to expose port 443 to configure the load balancer, but it receives plain unencrypted HTTP.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: breaks-istio
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:...
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
selector: ...
ports:
- name: http-ssl # <<<< THIS NAME MATTERS
port: 443
targetPort: http
</code></pre>
<p>When I've experimented with this, changing that <code>name:</code> to either <code>https</code> or <code>tcp-https</code> seems to work. Those <a href="https://istio.io/docs/setup/kubernetes/additional-setup/requirements/" rel="nofollow noreferrer">name prefixes</a> are significant to Istio, but I haven't immediately found any functional difference between telling Istio the port is HTTPS (even though it doesn't actually serve TLS) vs. plain uninterpreted TCP.</p>
<p>You do need to search your cluster and find <em>every</em> Service that listens to port 443, and make sure the port name doesn't start with <code>http-...</code>.</p>
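<p>For reference, a port spec that avoids the problem might look like this (same Service as above, only the name changed):</p>
<pre class="lang-yaml prettyprint-override"><code>  ports:
  - name: tcp-https   # "https" also works; either way Istio stops treating port 443 as plain HTTP
    port: 443
    targetPort: http
</code></pre>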
|
<pre class="lang-scala prettyprint-override"><code>package learn.spark
import org.apache.spark.{SparkConf, SparkContext}
object MasterLocal2 {
def main(args: Array[String]): Unit = {
val conf = new SparkConf()
conf.setAppName("spark-k8s")
conf.setMaster("k8s://https://192.168.99.100:16443")
conf.set("spark.driver.host", "192.168.99.1")
conf.set("spark.executor.instances", "5")
conf.set("spark.kubernetes.executor.request.cores", "0.1")
conf.set("spark.kubernetes.container.image", "spark:latest")
val sc = new SparkContext(conf)
println(sc.parallelize(1 to 5).map(_ * 10).collect().mkString(", "))
sc.stop()
}
}
</code></pre>
<p>I am trying to speed up running the Spark program locally, but I get some exceptions. I don't know how to configure Spark to ship my compiled classes to the executors.</p>
<pre class="lang-php prettyprint-override"><code>Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 8, 10.1.1.217, executor 4): java.lang.ClassNotFoundException: learn.spark.MasterLocal2$$anonfun$main$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
</code></pre>
| <p>Mount the IntelliJ IDEA build output directory into the executor, then set <code>spark.executor.extraClassPath</code> to the mount path.</p>
<pre class="lang-scala prettyprint-override"><code>conf.set("spark.kubernetes.executor.volumes.hostPath.anyname.options.path", "/path/to/your/project/out/production/examples")
conf.set("spark.kubernetes.executor.volumes.hostPath.anyname.mount.path", "/intellij-idea-build-out")
conf.set("spark.executor.extraClassPath", "/intellij-idea-build-out")
</code></pre>
<p>Make sure that your build output directory can actually be mounted into the executor container via a <a href="https://kubernetes.io/docs/concepts/storage/volumes" rel="nofollow noreferrer">K8S volume</a>; with a <code>hostPath</code> volume this only works if the directory is reachable from the Kubernetes node's filesystem (e.g. on a local minikube cluster).</p>
|
<p>I would like to be able to reference the current namespace in <code>values.yaml</code> to use it to suffix some values like this</p>
<pre><code># in values.yaml
someParam: someval-{{ .Release.Namespace }}
</code></pre>
<p>It's much nicer to define it this way instead of going into all my templates and adding <code>{{ .Release.Namespace }}</code>. If I can do it in <code>values.yaml</code> it's much clearer and only needs to be defined in one place.</p>
| <p>You can use <a href="https://helm.sh/docs/chart_template_guide/#named-templates" rel="nofollow noreferrer">named templates</a> to define re-usable helper templates. E.g.</p>
<p>In <code>templates/_helpers.tpl</code>:</p>
<pre><code>{{- define "myChart.someParam" -}}someval-{{ .Release.Namespace }}{{- end -}}
</code></pre>
<p>In <code>templates/configmap.yaml</code> (for example):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: something
data:
foo: {{ template "myChart.someParam" . }}
</code></pre>
<p>The result:</p>
<pre><code>$ helm template . --namespace=bar
---
# Source: helm/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: something
data:
foo: someval-bar
</code></pre>
|
<p>I have a NodeJS app that renders server-side ReactJS components.</p>
<p>When I populate the environment variables via the deployment file, it seems that my NodeJS server or ReactJS app does not see them.</p>
<p>How do envs work here?</p>
<p>When I build the image, I can set the envs there.</p>
<p>But that is being set via the Kubernetes Deployment file.</p>
<p>We are using <em>webpack</em> to build the server.</p>
<p>I tried building the image with the envs inside it.
I tried building the image without the envs inside it.</p>
<p>When I do a <code>console.log(process.env)</code>, I don't see the envs populated in the deployment file for Kubernetes.</p>
| <p>You can't read environment variables from the ReactJS app itself, since it runs in the client browser. You can set them at build time, or preprocess the JS files before the application starts with a script/init container:</p>
<p><a href="https://medium.com/@trekinbami/using-environment-variables-in-react-6b0a99d83cf5" rel="nofollow noreferrer">https://medium.com/@trekinbami/using-environment-variables-in-react-6b0a99d83cf5</a></p>
|
<p>It is my understanding that you're gonna have an NLB or ALB in front of your Istio Gateway anyway?</p>
<p>But I am confused because it seems like Istio Gateway does a lot of things ALB does for Layer 7 and even more?</p>
<p>So I read ALB -> Istio Gateway is ok, but isn't that redundant? What about NLB -> ALB -> Istio Gateway, which seems like too much?</p>
<p>It seems like it is best to have NLB -> Istio Gateway to let them handle Layer 4 and Layer 7 respectively like they do best, can anyone enlighten and confirm?</p>
| <p>If you are using Istio then yes: Istio was originally created with an ingress controller in mind, and Gateway + VirtualService basically enables what you want. Some ingress controllers are easier to use and have different pluses, but if Istio handles everything you want, then go for it.</p>
|
<p>I would like to be able to reference the current namespace in <code>values.yaml</code> to use it to suffix some values like this</p>
<pre><code># in values.yaml
someParam: someval-{{ .Release.Namespace }}
</code></pre>
<p>It's much nicer to define it this way instead of going into all my templates and adding <code>{{ .Release.Namespace }}</code>. If I can do it in <code>values.yaml</code> it's much clearer and only needs to be defined in one place.</p>
| <p>If you know that <code>someParam</code> might contain templating constructs, Helm includes a <a href="https://helm.sh/docs/developing_charts/#using-the-tpl-function" rel="nofollow noreferrer"><code>tpl</code></a> function that interprets it.</p>
<pre><code>- name: SOME_VARIABLE
  value: {{ tpl .Values.someParam . }}
</code></pre>
<p>If a chart allows this it generally documents it. As a specific example, the <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="nofollow noreferrer">helm/charts PostgreSQL chart</a> documents that its <code>configurationConfigMap</code> setting is</p>
<blockquote>
<p>ConfigMap with the PostgreSQL configuration files (Note: Overrides <code>postgresqlConfiguration</code> and <code>pgHbaConfiguration</code>). <strong>The value is evaluated as a template.</strong></p>
</blockquote>
<p>So for that specific value, you can put a template macro in the <code>values.yaml</code> file.</p>
|
<pre><code>NAME CPU(cores) MEMORY(bytes)
apache-757ddfbc75-khhfw 10m 61Mi
</code></pre>
| <p>The Kubernetes docs state:</p>
<blockquote>
<p>A Container might or might not be allowed to exceed its CPU limit for extended periods of time. However, it will not be killed for excessive CPU usage.</p>
</blockquote>
<p>In most cases, nothing will happen: CPU is a compressible resource, so a container that exceeds its CPU limit is throttled rather than killed.</p>
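<p>For reference, throttling only comes into play when a CPU limit is actually set on the container; a minimal sketch (the values are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>    resources:
      requests:
        cpu: 10m     # scheduling guarantee
      limits:
        cpu: 100m    # usage above this is throttled; the container is not killed
</code></pre>
<p>Memory is different: a container that exceeds its memory limit gets OOM-killed.</p>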
|
<p>I am trying to install Kubernetes 1.15 on CentOS 7, but <code>kubeadm init</code> keeps failing at "Waiting for the kubelet to boot up the control plane as static Pods from directory /etc/kubernetes/manifests":</p>
<pre><code>[root@kmaster manifests]# kubeadm init --apiserver-advertise-address=10.0.15.10 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
</code></pre>
<p>I can see a couple of warnings. For the cgroups, my understanding is that after 1.11 kubeadm should pick up the right cgroup driver; if not, kindly advise how to fix it, or whether it is related to the main issue.</p>
<pre><code>[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
[root@kmaster manifests]#
[root@kmaster manifests]# kubeadm init --apiserver-advertise-address=10.0.15.10 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.15.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [10.0.15.10 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [10.0.15.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
[root@kmaster manifests]#
[root@kmaster manifests]# journalctl -xeu kubelet
Aug 25 14:17:08 kmaster kubelet[24756]: E0825 14:17:08.068707 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:17:08 kmaster kubelet[24756]: E0825 14:17:08.169804 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:17:08 kmaster kubelet[24756]: E0825 14:17:08.270287 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:17:08 kmaster kubelet[24756]: E0825 14:17:08.370660 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:17:08 kmaster kubelet[24756]: E0825 14:17:08.471301 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:17:08 kmaster kubelet[24756]: E0825 14:17:08.571726 24756 kubelet.go:2248] node "kmaster" not found
[root@kmaster manifests]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sun 2019-08-25 13:58:18 EDT; 27min ago
Docs: https://kubernetes.io/docs/
Main PID: 24756 (kubelet)
Tasks: 16
Memory: 37.2M
CGroup: /system.slice/kubelet.service
└─24756 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --conf...
Aug 25 14:25:24 kmaster kubelet[24756]: E0825 14:25:24.825290 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:25:25 kmaster kubelet[24756]: E0825 14:25:25.071450 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:25:25 kmaster kubelet[24756]: E0825 14:25:25.172702   24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:25:25 kmaster kubelet[24756]: E0825 14:25:25.277570 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:25:25 kmaster kubelet[24756]: E0825 14:25:25.378218 24756 kubelet.go:2248] node "kmaster" not found
Aug 25 14:25:25 kmaster kubelet[24756]: E0825 14:25:25.478339 24756 kubelet.go:2248] node "kmaster" not found
</code></pre>
| <p>I don't know how you pre-configured the node prior to cluster initialization, but I can show you a way that works.</p>
<p>1) Reset cluster</p>
<pre><code>sudo kubeadm reset
rm -rf .kube/
sudo rm -rf /etc/kubernetes/
sudo rm -rf /var/lib/kubelet/
sudo rm -rf /var/lib/etcd
</code></pre>
<p>2) put SELinux into permissive mode (the <code>sed</code> makes the change persist across reboots):</p>
<pre><code>setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
</code></pre>
<p>3) enable <code>net.bridge.bridge-nf-call-ip6tables</code> and <code>net.bridge.bridge-nf-call-iptables</code>:</p>
<pre><code>modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/kube.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
</code></pre>
<p>4) Add the Kube repo for the <code>kubeadm</code>, <code>kubelet</code>, <code>kubectl</code> components:</p>
<pre><code>cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
</code></pre>
<p>5) Install and start Kube components and services:</p>
<pre><code>yum update && yum upgrade && yum install -y docker kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl start docker kubelet && systemctl enable docker kubelet
</code></pre>
<p>6)<code>kubeadm init</code></p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16 -v=9
</code></pre>
<p>Result:</p>
<pre><code>Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join *.*.*.*:6443 --token ******.****************** \
--discovery-token-ca-cert-hash sha256:*******************************************************
</code></pre>
<p>Next you should:</p>
<ul>
<li>apply a CNI plugin (<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#tabs-pod-install-6" rel="noreferrer">Flannel</a> if you use <code>--pod-network-cidr=10.244.0.0/16</code>)</li>
<li>join worker nodes</li>
</ul>
|
<p>I have been trying to implement a Kubernetes HPA using metrics from kafka-exporter. The HPA supports Prometheus, so we tried writing the metrics to a Prometheus instance. From there, we are unclear on the next steps. Is there an article that explains this in detail?</p>
<p>I followed <a href="https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07" rel="nofollow noreferrer">https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07</a></p>
<p>for the same in GCP, where we used Stackdriver, and the implementation worked like a charm. But we are struggling with the on-premise setup, as Stackdriver needs to be replaced by Prometheus.</p>
| <p>When I implemented a Kubernetes HPA using metrics from kafka-exporter, I had a few setbacks, which I solved by doing the following: </p>
<ol>
<li>I deployed the kafka-exporter container as a sidecar to the pods I
wanted to scale. I found that the HPA scales the pod it gets the
metrics from. </li>
<li><p>I used <a href="https://www.weave.works/docs/cloud/latest/tasks/monitor/configuration-k8s/#per-pod-prometheus-annotations" rel="nofollow noreferrer">annotations</a> to make Prometheus scrape the metrics from the pods with exporter.</p></li>
<li><p>Then I verified that the kafka-exporter metrics are getting to Prometheus. If it's not there you can't advance further.</p></li>
<li>I deployed <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">prometheus adapter</a> using its <a href="https://github.com/helm/charts/tree/master/stable/prometheus-adapter" rel="nofollow noreferrer">helm chart</a>. The adapter will "translate" Prometheus's metrics into custom Metrics
Api, which will make it visible to HPA.</li>
<li>I made sure that the metrics are visible in k8s by executing <code>kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1</code> from one of
the master nodes.</li>
<li>I created an hpa with the matching metric name.</li>
</ol>
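<p>The annotation step (2) looks roughly like this (a sketch; the port is kafka-exporter's default, and the image tag and broker address are assumptions):</p>
<pre class="lang-yaml prettyprint-override"><code>  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9308"    # kafka-exporter's default metrics port
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - name: app
        image: my-app:latest          # the container you want to scale
      - name: kafka-exporter
        image: danielqsj/kafka-exporter:latest
        args: ["--kafka.server=kafka:9092"]   # hypothetical broker address
</code></pre>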
<p>Here is a complete guide explaining <a href="https://medium.com/@ranrubin/horizontal-pod-autoscaling-hpa-triggered-by-kafka-event-f30fe99f3948" rel="nofollow noreferrer">how to implement Kubernetes HPA using Metrics from Kafka-exporter</a></p>
<p>Please comment if you have more questions</p>
|
<p>I'm new to Kubernetes. I try to scale my pods. First I started 3 pods:</p>
<pre><code>./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80
</code></pre>
<p>3 pods were started. First I tried to scale up/down by using a ReplicationController, but this did not exist. It seems to be a ReplicaSet now.</p>
<pre><code>./cluster/kubectl.sh get rs
NAME DESIRED CURRENT AGE
my-nginx-2494149703 3 3 9h
</code></pre>
<p>I tried to change the amount of replicas described in my replicaset:</p>
<pre><code>./cluster/kubectl.sh scale --replicas=5 rs/my-nginx-2494149703
replicaset "my-nginx-2494149703" scaled
</code></pre>
<p>But I still see my 3 original pods</p>
<pre><code>./cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
my-nginx-2494149703-04xrd 1/1 Running 0 9h
my-nginx-2494149703-h3krk 1/1 Running 0 9h
my-nginx-2494149703-hnayu 1/1 Running 0 9h
</code></pre>
<p>I would expect to see 5 pods. </p>
<pre><code>./cluster/kubectl.sh describe rs/my-nginx-2494149703
Name: my-nginx-2494149703
Namespace: default
Image(s): nginx
Selector: pod-template-hash=2494149703,run=my-nginx
Labels: pod-template-hash=2494149703
run=my-nginx
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
</code></pre>
<p>Why isn't it scaling up? Do I also have to change something in the deployment?</p>
<p>I see something like this when I describe my rs after scaling up:
(Here I try to scale from one running pod to 3 running pods.) But it remains at one running pod; the other 2 are started and killed immediately.</p>
<pre><code> 34s 34s 1 {replicaset-controller } Normal SuccessfulCreate Created pod: my-nginx-1908062973-lylsz
34s 34s 1 {replicaset-controller } Normal SuccessfulCreate Created pod: my-nginx-1908062973-5rv8u
34s 34s 1 {replicaset-controller } Normal SuccessfulDelete Deleted pod: my-nginx-1908062973-lylsz
34s 34s 1 {replicaset-controller } Normal SuccessfulDelete Deleted pod: my-nginx-1908062973-5rv8u
</code></pre>
| <p>This is working for me. Note that you scale the Deployment, not the ReplicaSet: the Deployment controller keeps reconciling its ReplicaSet back to the replica count in the Deployment spec, which is why scaling the ReplicaSet directly gets reverted.</p>
<pre><code>kubectl scale --replicas=<expected_replica_num> deployment <deployment_label_name> -n <namespace>
</code></pre>
<p>Example</p>
<pre><code># kubectl scale --replicas=3 deployment xyz -n my_namespace
</code></pre>
|
<p>I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same deployment to have different args used with the starting command.</p>
<p>I have this application which runs spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and spark app name.</p>
<p>I've read of stateful sets and searched the documentation but I didn't found a solution to my problem. Since every Pod needs to use a different port, I need that port to be declared in a service if I understood correctly, and also directly passed as an argument to the pod command in the args.</p>
<p>Is there a way to obtain this without using multiple deployments, one for each pod I need to create? That is the only solution I can think of, but it can't be scaled after being deployed.
I'm using Helm to deploy the application, so I can easily create as many deployments and/or services as needed, but I would like to find a solution which can scale at runtime, if possible.</p>
| <p>I don't think you can have a Deployment that creates Pods from different specs. Kubernetes doesn't support that, and Helm won't help here (since Helm is just a template manager over Kubernetes configurations).</p>
<p>What you can do is specify each Pod as a separate configuration (for a single Pod you don't necessarily need a Deployment) and let Helm manage them.</p>
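<p>A hedged sketch of that approach (the chart values, names, and port scheme are invented): a Helm template can stamp out one Pod per index, each with its own args, so "scaling" becomes changing a value and running <code>helm upgrade</code>:</p>
<pre class="lang-yaml prettyprint-override"><code># templates/workers.yaml
{{- range $i := until (int .Values.workerCount) }}
apiVersion: v1
kind: Pod
metadata:
  name: spark-app-{{ $i }}
spec:
  containers:
  - name: driver
    image: {{ $.Values.image }}
    args: ["--conf", "spark.app.name=spark-app-{{ $i }}", "--conf", "spark.driver.port={{ add 7000 $i }}"]
---
{{- end }}
</code></pre>
<p><code>until</code> and <code>add</code> are standard Sprig functions available in Helm templates; note the <code>$.Values</code> form to reach the root context from inside the <code>range</code>.</p>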
|
<p>I want to loop over a secret file I'm generating in my ci pipeline.</p>
<pre><code>helm template \
...
--set-file secretmap="secretmap.yaml" \
...
</code></pre>
<p>The <code>secretmap.yaml</code> is generated and contains the following:</p>
<pre><code>SEC_1: 111
SEC_2: 222
...
</code></pre>
<p>The secret.yaml looks like this:</p>
<pre><code>kind: Secret
...
data:
{{- range $key, $val := .Values.secretmap }}
{{ $key }}: {{ $val | b64enc | quote }}
{{- end }}
...
</code></pre>
<p>And the error I get is:</p>
<pre><code>Error: render error in ".../secret.yaml": template: .../secret.yaml:4:31: ... range can't iterate over SEC_1: 111
SEC_1: 222
...
</code></pre>
<p>The same configuration does work with configmap though,<br>
where I set the configmap from <code>-f chart/values/common.yaml</code></p>
| <p><code>--set-file</code> sets the contents of the variable to the text contents of the file; it doesn’t try to interpret it at all. (Somewhat described in <a href="https://helm.sh/docs/using_helm/#customizing-the-chart-before-installing" rel="nofollow noreferrer">this section of the Helm docs</a>; note the example there is a JavaScript script being embedded in a ConfigMap.) That means you need to tell Helm to parse the file. Helm includes a minimally-documented <a href="https://github.com/helm/helm/blob/main/pkg/engine/funcs.go#L98" rel="nofollow noreferrer"><code>fromYaml</code></a> function that can do this.</p>
<p>When you iterate through the contents of the value, try explicitly parsing it first:</p>
<pre><code>{{- range $key, $val := fromYaml .Values.secretmap }}
...
{{ end }}
</code></pre>
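<p>Putting it together, a full <code>secret.yaml</code> would look roughly like this (a sketch; the metadata name is a placeholder, and <code>toString</code> is added because values such as <code>111</code> are parsed as numbers, while <code>b64enc</code> expects a string):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
{{- range $key, $val := fromYaml .Values.secretmap }}
  {{ $key }}: {{ $val | toString | b64enc | quote }}
{{- end }}
</code></pre>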
|
<p>One of the points in the <code>kubectl</code> <a href="https://kubernetes.io/docs/reference/kubectl/conventions/#best-practices" rel="nofollow noreferrer">best practices section in Kubernetes Docs</a> state below:</p>
<blockquote>
<p>Pin to a specific generator version, such as <code>kubectl run
--generator=deployment/v1beta1</code></p>
</blockquote>
<p>But then a little down in the doc, we get to learn that except for Pod, the use of <code>--generator</code> option is deprecated and that it would be removed in future versions.</p>
<p>Why is this being done? Doesn't generator make life easier in creating a template file for resource definition of deployment, service, and other resources? What alternative is the kubernetes team suggesting? This isn't there in the docs :(</p>
| <p>For a Deployment you can try:</p>
<pre><code>kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
</code></pre>
<p><strong>Note</strong>: <code>kubectl run --generator</code> options other than <code>run-pod/v1</code> are deprecated as of v1.12.</p>
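<p>As for generating a template file, which is what the generators were handy for, <code>kubectl create</code> supports a dry run that emits the manifest without creating anything, for example:</p>
<pre><code>kubectl create deployment hello-node \
  --image=gcr.io/hello-minikube-zero-install/hello-node \
  --dry-run -o yaml > deployment.yaml
</code></pre>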
|
<p>I have a service that accepts POSTs with base64-encoded files in the body. I'm currently getting
<code>Error: 413 Request Entity Too Large</code> when I POST anything larger than 1MB; otherwise it works fine.</p>
<p>My setup has kong proxying to the service. I have the following annotation for the proxy's ingress installed via the stable kong helm chart :</p>
<pre><code> kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/proxy-body-size: 50m
</code></pre>
<p>I also added this to the kong env values:</p>
<p><code>client_max_body_size: 0</code></p>
<p>My understanding is this should update the nginx.conf</p>
<p>Kong has an nginx-ingress sitting in front of it which I installed with the stable helm chart. For the ingress-controller I have set:</p>
<pre><code>--set controller.config.proxy-body-size: "50m"
</code></pre>
<p>However none of these settings are working. Looking through the ingress-controller's pod logs I see: </p>
<pre><code>2019/08/02 15:01:34 [warn] 42#42: *810139 a client request body is buffered to a temporary file /tmp/client-body/0000000014, client: 1X.XXX.X0.X, server: example.com, request: "POST /endpoint HTTP/1.1", host: "example.com"
</code></pre>
<p>And the corresponding log in the kong pod:</p>
<pre><code>2019/08/02 15:01:39 [warn] 33#0: *1147388 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000017, client: XX.XXX.XXX.XX, server: kong, request: "POST /ENDPOINT HTTP/1.1", host: "example.com"
10.120.20.17 - - [02/Aug/2019:15:01:39 +0000] "POST /endpoint HTTP/1.1" 413 794 "-" "PostmanRuntime/7.15.2"
</code></pre>
<p>Is there another setting I am missing, or am I going about this wrong? How can I get this to work as expected?</p>
<ul>
<li>If I just POST to the pod directly using its IP, with no ingress controllers involved, I get the same 413 error. Does kubernetes have a default ingress somewhere that also needs to be changed?</li>
</ul>
| <p>The annotations turned out to be fine. The limit I was running into came from the code I was testing, which ran in Kubeless: Kubeless functions use Bottle, and the request-size limit I kept hitting was Bottle's. I increased it in a custom python3.7 image for Kubeless and everything worked fine.</p>
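<p>For context, Bottle caps the request body it will buffer via <code>bottle.BaseRequest.MEMFILE_MAX</code>, which is exactly the kind of framework-level limit that survives even when every ingress limit is raised. The sketch below is plain Python, not Bottle itself, and the 1 MiB cap is an assumption for illustration:</p>

```python
# Sketch only (plain Python, not Bottle itself): models how a framework-level
# body-size cap, like Bottle's bottle.BaseRequest.MEMFILE_MAX, turns large
# uploads into HTTP 413 no matter how high the ingress limits are set.
# The 1 MiB cap below is an assumption for illustration.

ONE_MIB = 1024 * 1024
MEMFILE_MAX = ONE_MIB  # hypothetical cap; raise it in a custom runtime image

def handle_upload(body: bytes, limit: int = MEMFILE_MAX) -> int:
    """Return the HTTP status code for a request body of the given size."""
    if len(body) > limit:
        return 413  # Request Entity Too Large
    return 200

# A 2 MiB payload is rejected until the cap is raised to, say, 50 MiB:
print(handle_upload(b"x" * (2 * ONE_MIB)))                      # 413
print(handle_upload(b"x" * (2 * ONE_MIB), limit=50 * ONE_MIB))  # 200
```

<p>Raising the equivalent cap in a custom runtime image, as described above, is what lets larger uploads through.</p>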
|
<p>I am trying to install a Kubernetes cluster on an Ubuntu 18.04 system.</p>
<p>While initializing the cluster, I get the error below:</p>
<pre><code> [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: access denied
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: access denied
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: access denied
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: access denied
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: access denied
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: access denied
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: access denied
, error: exit status 1
</code></pre>
<p>When i tried to run <code>wget https://k8s.gcr.io/v2/</code> i am getting below error:</p>
<pre><code>Proxy request sent, awaiting response... 401 Unauthorized
Username/Password Authentication Failed.
</code></pre>
<p>Can you please let me know what is the issue.</p>
| <p>In my case I have</p>
<pre><code>[root@instance-1 ~]# wget https://k8s.gcr.io/v2/
--2019-08-26 12:58:24-- https://k8s.gcr.io/v2/
Resolving k8s.gcr.io (k8s.gcr.io)... 64.233.167.82, 2a00:1450:400c:c06::52
Connecting to k8s.gcr.io (k8s.gcr.io)|64.233.167.82|:443... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Authorization failed.
</code></pre>
<p>That can be a proxy issue. Inspect <a href="https://github.com/kubernetes/kubeadm/issues/1201" rel="nofollow noreferrer">[Kubeadm] Failing to pull images</a>, and more concretely <a href="https://github.com/kubernetes/kubeadm/issues/1201#issuecomment-494686022" rel="nofollow noreferrer">this answer</a>.</p>
<p>So try to <a href="https://www.thegeekdiary.com/how-to-configure-docker-to-use-proxy/" rel="nofollow noreferrer">configure your docker to use proxy</a> and I hope your issue would be resolved.</p>
<p>Below is a summarized set of commands for both methods:</p>
<p>Method 1: Configuring proxy variables in the <code>/etc/sysconfig/docker</code> file</p>
<p>Add the following configuration in the <code>/etc/sysconfig/docker</code> file:</p>
<pre><code># cat /etc/sysconfig/docker
HTTP_PROXY="http://USERNAME:PASSWORD@[your.proxy.server]:[port]"
HTTPS_PROXY="https://USERNAME:PASSWORD@[your.proxy.server]:[port]"
</code></pre>
<p>Restart docker:</p>
<pre><code># service docker restart
</code></pre>
<p>Method 2:</p>
<p>1) Create a drop-in</p>
<pre><code># mkdir /etc/systemd/system/docker.service.d
</code></pre>
<p>2) Create a file with name <code>/etc/systemd/system/docker.service.d/http-proxy.conf</code> that adds the <code>HTTP_PROXY</code> environment variable:</p>
<pre><code>[Service]
Environment="HTTP_PROXY=http://user01:password@10.10.10.10:8080/"
Environment="HTTPS_PROXY=https://user01:password@10.10.10.10:8080/"
Environment="NO_PROXY= hostname.example.com,172.10.10.10"
</code></pre>
<p>3) reload the systemd daemon</p>
<pre><code># systemctl daemon-reload
</code></pre>
<p>4) restart docker</p>
<pre><code># systemctl restart docker
</code></pre>
<p>5) Verify that the configuration has been loaded:</p>
<pre><code># systemctl show docker --property Environment
Environment=GOTRACEBACK=crash HTTP_PROXY=http://10.10.10.10:8080/ HTTPS_PROXY=http://10.10.10.10:8080/ NO_PROXY= hostname.example.com,172.10.10.10
</code></pre>
|
<p>I understand that ImagePullBackOff or ErrImagePull happens when K8 cannot pull containers, but I do not think that this is the case here. I say this because this error is randomly thrown by only <em>some</em> of the pods as my service scales, while others come up perfectly fine, with OK status. </p>
<p>For instance, please refer to this replica set here. </p>
<p><a href="https://imgur.com/Irl3w7R" rel="nofollow noreferrer">Replica set screenshot</a></p>
<p>I retrieved the events from one such failed pod. </p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m45s default-scheduler Successfully assigned default/storefront-jtonline-prod-6dfbbd6bd8-jp5k5 to gke-square1-prod-clu-nap-n1-highcpu-2-82b95c00-p5gl
Normal Pulling 2m8s (x4 over 3m44s) kubelet, gke-square1-prod-clu-nap-n1-highcpu-2-82b95c00-p5gl pulling image "gcr.io/square1-2019/storefront-jtonline-prod:latest"
Warning Failed 2m7s (x4 over 3m43s) kubelet, gke-square1-prod-clu-nap-n1-highcpu-2-82b95c00-p5gl Failed to pull image "gcr.io/square1-2019/storefront-jtonline-prod:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Warning Failed 2m7s (x4 over 3m43s) kubelet, gke-square1-prod-clu-nap-n1-highcpu-2-82b95c00-p5gl Error: ErrImagePull
Normal BackOff 113s (x6 over 3m42s) kubelet, gke-square1-prod-clu-nap-n1-highcpu-2-82b95c00-p5gl Back-off pulling image "gcr.io/square1-2019/storefront-jtonline-prod:latest"
Warning Failed 99s (x7 over 3m42s) kubelet, gke-square1-prod-clu-nap-n1-highcpu-2-82b95c00-p5gl Error: ImagePullBackOff
</code></pre>
<p>The logs tell me it failed to pull the container because of incorrect credentials, which seems... confusing? This pod was created automatically when autoscaling exactly like the others. </p>
<p>I have a feeling this might have to do with resourcing. I have seen a much higher rate of these errors when the cluster spins up new nodes really fast due to a spike in traffic, or when I set lower resource requests in my deployment configurations.</p>
<p>How do I go about debugging this error, and what could be a possible reason this is happening? </p>
<p>Here is my configuation:</p>
<pre><code>apiVersion: "extensions/v1beta1"
kind: "Deployment"
metadata:
name: "storefront-_STOREFRONT-_ENV"
namespace: "default"
labels:
app: "storefront-_STOREFRONT-_ENV"
spec:
replicas: 10
selector:
matchLabels:
app: "storefront-_STOREFRONT-_ENV"
template:
metadata:
labels:
app: "storefront-_STOREFRONT-_ENV"
spec:
containers:
- name: "storefront-_STOREFRONT-_ENV"
image: "gcr.io/square1-2019/storefront-_STOREFRONT-_ENV"
ports:
- containerPort: 8080
readinessProbe:
httpGet:
path: /?healthz
port: 8080
initialDelaySeconds: 5
periodSeconds: 1
imagePullPolicy: Always
</code></pre>
<pre><code>apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
name: "storefront-_STOREFRONT-hpa"
namespace: "default"
labels:
app: "storefront-_STOREFRONT-_ENV"
spec:
scaleTargetRef:
kind: "Deployment"
name: "storefront-_STOREFRONT-_ENV"
apiVersion: "apps/v1beta1"
minReplicas: 10
maxReplicas: 1000
metrics:
- type: "Resource"
resource:
name: "cpu"
targetAverageUtilization: 75
</code></pre>
<p>EDIT: I have been able to verify that this is in fact an auth issue. This only happens for "some" pods, since it only occurs for pods scheduled on nodes created automatically because of vertical scaling. I do not know how to fix this yet, though.</p>
| <p>As we can read in the <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">Kubernetes docs regarding images</a>, there is no need to do anything if you are running the cluster on GKE:</p>
<blockquote>
<p><strong>Note:</strong> If you are running on Google Kubernetes Engine, there will already be a <code>.dockercfg</code> on each node with credentials for Google Container Registry. You cannot use this approach.</p>
</blockquote>
<p>But it's also stated that:</p>
<blockquote>
<p><strong>Note:</strong> This approach is suitable if you can control node configuration. <strong>It will not work reliably on GCE, and any other cloud provider that does automatic node replacement</strong>.</p>
</blockquote>
<p>Also in section <a href="https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod" rel="nofollow noreferrer">Specifying ImagePullSecrets on a Pod</a>.</p>
<blockquote>
<p><strong>Note:</strong> This approach is currently the recommended approach for Google Kubernetes Engine, GCE, and any cloud-providers <strong>where node creation is automated</strong>.</p>
</blockquote>
<p>Instead, it's recommended to create a Secret holding the Docker registry credentials.</p>
<p>This can be done in the following way:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
</code></pre>
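<p>The Secret is then referenced from the Pod spec (or the Deployment's Pod template) via <code>imagePullSecrets</code>; for example, with a hypothetical secret name <code>gcr-creds</code>:</p>
<pre><code>spec:
  imagePullSecrets:
    - name: gcr-creds
  containers:
    - name: storefront
      image: gcr.io/square1-2019/storefront-_STOREFRONT-_ENV
</code></pre>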
|
<p>We want to deploy a <em>laravel</em> project in k8s. For the database we have a <code>data.sql</code> file. When we created the docker image for this project, <code>data.sql</code> was easy to pass in a volume like:</p>
<pre><code>volumes:
  - ./data.sql:/docker-entrypoint-initdb.d/data.sql
</code></pre>
<p>but in k8s i'm not able to make connection between the database and .sql file
So do let me know, how to pass .sql file in kubernetes volume?</p>
<p>The .sql file contains a large amount of data; when we use a ConfigMap it gives an error:</p>
<pre><code>"command terminated with exit code 137"
</code></pre>
| <p>Considering the 1MB limit for ConfigMaps, I would run a sidecar (or init container) that pulls this SQL file into an <code>emptyDir</code> shared with the main DB container. The <code>gitRepo</code> volume method is deprecated, but these should help:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#gitrepo" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#gitrepo</a></li>
<li><a href="https://github.com/kubernetes/git-sync/blob/master/docs/ssh.md" rel="nofollow noreferrer">https://github.com/kubernetes/git-sync/blob/master/docs/ssh.md</a></li>
</ul>
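<p>A minimal sketch of that approach (the image, repository URL, and file paths are placeholders; the DB container picks up the seed file from <code>/docker-entrypoint-initdb.d</code> just as in the compose setup):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mysql-seeded
spec:
  volumes:
    - name: initdb
      emptyDir: {}
  initContainers:
    - name: fetch-seed
      image: alpine/git   # placeholder image with git installed
      command:
        - sh
        - -c
        - git clone https://example.com/seed.git /work && cp /work/data.sql /initdb/
      volumeMounts:
        - name: initdb
          mountPath: /initdb
  containers:
    - name: mysql
      image: mysql:5.7
      volumeMounts:
        - name: initdb
          mountPath: /docker-entrypoint-initdb.d
</code></pre>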
|
<p>Here is my <code>containerTemplate</code> snippet from Jenkinsfile which creates a pod and a container called mvn-c1 onto Kubernetes.</p>
<pre><code>containerTemplate(
name: 'mvn-c1',
image: 'mycompany.lab/build_images/mvn_c1_k8s:0.3',
privileged: true,
ttyEnabled: true,
command: 'cat',
imagePullSecrets: ['docker-repo'],
volumeMounts: [ name: 'maven-repo1' , mountPath: '/root/.m2' ],
volumes: [
nfsVolume( mountPath: '/root/.m2', serverAddress: 'nfs-server-ip',
serverPath: '/jenkins_data', readOnly: false ),
]
)
</code></pre>
<p>The problem is that the volume is not mounted into the container, yet the console doesn't show any parse errors either.</p>
<p>I have referred <a href="https://jenkins.io/doc/pipeline/steps/kubernetes/" rel="nofollow noreferrer">this documentation</a> to construct the <code>containerTemplate</code></p>
<p>Has anybody had luck trying this method?</p>
| <p>Welcome to StackOverflow @Vamshi</p>
<p>I think you have two issues with your current Pipeline definition:</p>
<ol>
<li><code>volumes</code> are part of <code>podTemplate</code>, not <code>containerTemplate</code>:</li>
</ol>
<blockquote>
<p>WARNING: Unknown parameter(s) found for class type
'org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate': volumes</p>
</blockquote>
<ol start="2">
<li>Usually the Kubernetes plugin spawns Jenkins slave Pods in a different namespace than the one your NFS server resides in; therefore it's safer to specify the NFS server's IP address as <code>serverAddress</code> for NFS volumes, instead of its Kubernetes Service name.</li>
</ol>
<p>Here is a fully working example:</p>
<pre><code>podTemplate(containers: [
containerTemplate(name: 'maven',
image: 'maven:3.3.9-jdk-8-alpine',
ttyEnabled: true,
command: 'cat')
],
volumes: [
nfsVolume( mountPath: '/root/.m2', serverAddress: '10.0.174.57',
serverPath: '/', readOnly: false ),
]
) {
node(POD_LABEL) {
stage('Get a Maven project') {
container('maven') {
stage('Build a Maven project') {
sh 'while true; do date > /root/.m2/index.html; hostname >> /root/.m2/index.html; sleep $(($RANDOM % 5 + 5)); done'
}
}
}
}
}
</code></pre>
<p>Verifying the correctness of the NFS-based Persistent Volume mounted inside the Pod:</p>
<pre><code>kubectl exec container-template-with-nfs-pv-10-cl42l-042tq-z3n7b -c maven -- cat /root/.m2/index.html
</code></pre>
<blockquote>
<p>Output:
Mon Aug 26 14:47:28 UTC 2019 nfs-busybox-9t7wx</p>
</blockquote>
|
<p>I'm trying to setup an VPC Peering from my MongoDB Atlas Cluster to my Kubernetes EKS Cluster on AWS. The Peering is established successfully but i get no connection to the cluster on my pod's.</p>
<p>The peering is setup.
<a href="https://i.stack.imgur.com/7qR8n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7qR8n.png" alt="Peering"></a></p>
<p>The default entry for the whitelist is added as well. Once the connection works I will replace it with a security group.
<a href="https://i.stack.imgur.com/KTysQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KTysQ.png" alt="IP Whitelist"></a></p>
<p>The peering on AWS is accepted and "DNS resolution from requester VPC to private IP" is enabled.
<a href="https://i.stack.imgur.com/TbPyF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TbPyF.png" alt="Peering AWS"></a></p>
<p>The route has been added to the Public Route Table of the K8S Cluster.
<a href="https://i.stack.imgur.com/OH7CH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OH7CH.png" alt="Route AWS"></a></p>
<p>When i connect to a pod and try to establish a connection with the following command:</p>
<pre><code># mongo "mongodb://x.mongodb.net:27017,y.mongodb.net:27017,z.mongodb.net:27017/test?replicaSet=Cluster0-shard-0" --ssl --authenticationDatabase admin --username JackBauer
</code></pre>
<p>I get "CONNECT_ERROR" for every endpoint.</p>
<p>What am I missing?</p>
<p>NOTE:
I've just created a new paid cluster and the VPC is working perfectly. Might this feature be limited to paid clusters only?</p>
| <p>Well... as the documentation states:</p>
<blockquote>
<p>You cannot configure Set up a Network Peering Connection on M0 Free
Tier or M2/M5 shared clusters.</p>
</blockquote>
<p>Peering does not work on shared clusters, which, after thinking about it, makes total sense.</p>
|
<p>We have added a dockerised build agent to our development Kubernetes cluster which we use to build our applications as part of our Azure Devops pipelines. We created our own image based on <a href="https://github.com/Microsoft/vsts-agent-docker" rel="noreferrer">the deprecated Microsoft/vsts-agent-docker on Github.</a></p>
<p>The build agent uses Docker outside of Docker (DooD) to create images on our development cluster.</p>
<p>This agent was working well for a few days but then an error would occasionally occur on the docker commands in our build pipeline:</p>
<blockquote>
<p>Error response from daemon: No such image: fooproject:ci-3284.2
/usr/local/bin/docker failed with return code: 1</p>
</blockquote>
<p>We realised that the build agent was creating tons of images that weren't being removed. These images were clogging up the build agent, and some were missing, which would explain the "no such image" error message.</p>
<p>By adding a step to our build pipelines with the following command we were able to get our build agent working again:</p>
<pre><code>docker system prune -f -a
</code></pre>
<p>But of course this then removes all our images, and they must be built from scratch every time, which causes our builds to take an unnecessarily long time.</p>
<p>I'm sure this must be a solved problem but I haven't been able to locate any documentation on the normal strategy for dealing with a dockerised build agent becoming clogged over time. Being new to docker and kubernetes I may simply not know what I am looking for. <strong>What is the best practice for creating a dockerised build agent that stays clean and functional, while maintaining a cache?</strong></p>
<p><strong>EDIT:</strong> Some ideas:</p>
<ul>
<li>Create a build step that cleans up all but the latest image for the given pipeline (this might still clog the build server though).</li>
<li>Have a cron job run that removes all the images every x days (this would result in slow builds the first time after the job is run, and could still clog the build server if it sees heavy usage.</li>
<li>Clear all images nightly and run all builds outside of work hours. This way builds would run quickly during the day. However heavy usage could still clog the build server.</li>
</ul>
<p><strong>EDIT 2:</strong></p>
<p>I found someone with a <a href="https://github.com/docker/cli/issues/625#issuecomment-342383849" rel="noreferrer">docker issue on Github</a> that seems to be trying to do exactly the same thing as me. He came up with a solution which he described as follows:</p>
<blockquote>
<p>I was exactly trying to figure out how to remove "old" images out of my automated build environment <em>without</em> removing my build dependencies. This means I can't just remove by age, because the nodejs image might not change for weeks, while my app builds can be worthless in literally minutes.</p>
<p><code>docker image rm $(docker image ls --filter reference=docker --quiet)</code></p>
<p>That little gem is exactly what I needed. I dropped my <em>repository name</em> in the <em>reference</em> variable (not the most self-explanatory.) Since I tag both the build number and <em>latest</em> the <code>docker image rm</code> command fails on the images I want to keep. I really don't like using daemon errors as a protection mechanism, but its effective.</p>
</blockquote>
<p>Trying to follow these directions, I have applied the <code>latest</code> tag to everything that is built during the process, and then run</p>
<p><code>docker image ls --filter reference=fooproject</code></p>
<p>If I try to remove these I get the following error:</p>
<blockquote>
<p>Error response from daemon: conflict: unable to delete b870ec9c12cc (must be forced) - image is referenced in multiple repositories</p>
</blockquote>
<p>Which prevents the latest one from being removed. However this is not exactly a clean way of doing this. There must be a better way?</p>
| <p>Probably you've already found a solution, but it might be useful for the rest of the community to have an answer here.</p>
<p><code>docker prune</code> has a limited purpose. It was created to address the issue with cleaning up all local Docker images. (As it was mentioned by <strong>thaJeztah</strong> <a href="https://github.com/docker/cli/issues/625#issuecomment-340067544" rel="noreferrer">here</a>)</p>
<p>To remove images in a more precise way, it's better to divide this task into two parts:</p>
<ol>
<li>select/filter images to delete</li>
<li>delete the list of selected images</li>
</ol>
<p>E.g: </p>
<pre><code>docker image rm $(docker image ls --filter reference=docker --quiet)
docker image rm $(sudo docker image ls | grep 1.14 | awk '{print $3}')
docker image ls --filter reference=docker --quiet | xargs docker image rm
</code></pre>
<p>It is possible to combine filter clauses to get exactly what you want:<br>
(I'm using a Kubernetes master node as an example environment)</p>
<pre><code>$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.14.2 5c24210246bb 3 months ago 82.1MB
k8s.gcr.io/kube-apiserver v1.14.2 5eeff402b659 3 months ago 210MB
k8s.gcr.io/kube-controller-manager v1.14.2 8be94bdae139 3 months ago 158MB
k8s.gcr.io/kube-scheduler v1.14.2 ee18f350636d 3 months ago 81.6MB # before
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 7 months ago 40.3MB # since
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 8 months ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 20 months ago 742kB
$ docker images --filter "since=eb516548c180" --filter "before=ee18f350636d"
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB
$ docker images --filter "since=eb516548c180" --filter "reference=quay.io/coreos/flannel"
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB
$ docker images --filter "since=eb516548c180" --filter "reference=quay*/*/*"
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB
$ docker images --filter "since=eb516548c180" --filter "reference=*/*/flan*"
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 6 months ago 52.6MB
</code></pre>
<p>As mentioned in the <a href="https://github.com/moby/moby/blob/10c0af083544460a2ddc2218f37dc24a077f7d90/docs/reference/commandline/images.md#filtering" rel="noreferrer">documentation</a>, the <code>images</code> / <code>image ls</code> filters are much richer than the <code>docker prune</code> filter, which supports only the <code>until</code> clause:</p>
<pre><code>The currently supported filters are:
• dangling (boolean - true or false)
• label (label=<key> or label=<key>=<value>)
• before (<image-name>[:<tag>], <image id> or <image@digest>) - filter images created before given id or references
• since (<image-name>[:<tag>], <image id> or <image@digest>) - filter images created since given id or references
</code></pre>
<p>If you need more than one filter, then pass multiple flags
(e.g., <code>--filter "foo=bar" --filter "bif=baz"</code>)</p>
<p>You can use other linux cli commands to filter <code>docker images</code> output:</p>
<pre><code>grep "something" # to include only specified images
grep -v "something" # to exclude images you want to save
sort [-k colN] [-r] [-g]] | head/tail -nX # to select X oldest or newest images
</code></pre>
<p>Combining them and putting the result into your CI/CD pipeline allows you to keep only the required images in the local cache without collecting a lot of garbage on your build server.</p>
<p>I've copied here a good example of using that approach provided by <strong>strajansebastian</strong> in the <a href="https://github.com/docker/cli/issues/625#issuecomment-507761506" rel="noreferrer">comment</a>:</p>
<pre><code>#example of deleting all builds except last 2 for each kind of image
#(the image kind is based on the Repository value.)
#If you want to preserve just last build modify to tail -n+2.
# delete dead containers
docker container prune -f
# keep last 2 builds for each image from the repository
for diru in `docker images --format "{{.Repository}}" | sort | uniq`; do
for dimr in `docker images --format "{{.ID}};{{.Repository}}:{{.Tag}};'{{.CreatedAt}}'" --filter reference="$diru" | sed -r "s/\s+/~/g" | tail -n+3`; do
img_tag=`echo $dimr | cut -d";" -f2`;
docker rmi $img_tag;
done;
done
# clean dangling images if any
docker image prune -f
</code></pre>
|
<p>I would like to be able to reference the current namespace in <code>values.yaml</code> to use it to suffix some values like this</p>
<pre><code># in values.yaml
someParam: someval-{{ .Release.Namespace }}
</code></pre>
<p>It's much nicer to define it this way instead of going into all my templates and adding <code>{{ .Release.Namespace }}</code>. If I can do it in <code>values.yaml</code> it's much clearer and only needs to be defined in one place.</p>
| <p>Just to clarify:</p>
<p>As described by community members <a href="https://stackoverflow.com/a/57641009/11207414">Amit Kumar Gupta</a> and <a href="https://stackoverflow.com/a/57641142/11207414">David Maze</a>, there is no good solution natively supported by <a href="https://helm.sh/docs/chart_best_practices/" rel="nofollow noreferrer">Helm</a> for this without modifying templates.
It looks like in your case (without modifying Helm templates) the best solution is just using <strong>--set</strong> parameters during <code>helm install</code>,</p>
<p>like:</p>
<pre><code>helm install --set foo=bar --set foo=newbar ./redis
</code></pre>
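<p>If modifying templates is acceptable after all, Helm's built-in <code>tpl</code> function is worth knowing about: it renders template syntax stored inside values, so the namespace reference can live in <code>values.yaml</code> (a sketch):</p>
<pre><code># values.yaml
someParam: someval-{{ .Release.Namespace }}

# in a template, render the value through tpl:
someParam: {{ tpl .Values.someParam . }}
</code></pre>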
|
<p>Regarding the logs below, which I obtained with <code>kubectl describe pod</code>: my pods are stuck in the Pending state due to "FailedCreatePodSandBox".</p>
<p>Some key notes:</p>
<ul>
<li>I use Calico as the CNI.</li>
<li>This log repeats multiple times; I pasted this one as a sample.</li>
<li>The IP 192.168.90.152 belongs to the ingress and .129 belongs to Tiller in the monitoring namespace of k8s, and I do not know why k8s tries to bind them to another pod.</li>
</ul>
<p>I googled this issue and got nothing, so here I am.</p>
<pre><code> Warning FailedCreatePodSandBox 2m56s kubelet, worker-dev Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2abca59b55efb476723ec9c4402ede6e3a6ee9aed67ecd19c3ef5c7719ae51f1" network for pod "service-stg-8d9d68475-2h4b8": NetworkPlugin cni failed to set up pod "service-stg-8d9d68475-2h4b8_stg" network: error adding host side routes for interface: cali670b0a20d66, error: route (Ifindex: 10688, Dst: 192.168.90.152/32, Scope: 253) already exists for an interface other than 'cali670b0a20d66'
Warning FailedCreatePodSandBox 2m53s kubelet, worker-dev Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ec155fd442c0ea09b282a13c4399ae25b97d5c3786f90f1a045449b52ced4cb7" network for pod "service-stg-8d9d68475-2h4b8": NetworkPlugin cni failed to set up pod "service-stg-8d9d68475-2h4b8_stg" network: error adding host side routes for interface: cali670b0a20d66, error: route (Ifindex: 10691, Dst: 192.168.90.129/32, Scope: 253) already exists for an interface other than 'cali670b0a20d66'
</code></pre>
<p>Can anyone help with this issue?</p>
| <p>As per the design of CNI network plugins, and according to the Kubernetes networking <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">model</a>, Calico defines a special IP pool <a href="https://www.keycdn.com/support/what-is-cidr" rel="nofollow noreferrer">CIDR</a>, <code>CALICO_IPV4POOL_CIDR</code>, to determine which IP ranges are valid for allocating pod IP addresses across the k8s cluster.</p>
<p>When you spin up a new Pod on a particular K8s node Calico plugin will do the following:</p>
<ul>
<li>Check whether the Pod exists on this node;</li>
<li>Allocates IP address for this Pod from within defined <a href="https://www.projectcalico.org/calico-ipam-explained-and-enhanced/" rel="nofollow noreferrer">IPAM</a>
range;</li>
<li>Creates an virtual interface on the node's host and appropriate
routing rules in order to bridge network traffic between Pod and
container;</li>
<li>Register Pod IP in K8s API server.</li>
</ul>
<p>You can fetch data about the Calico virtual interfaces on the relevant node, e.g.:</p>
<pre><code>$ ip link | grep cali
cali80d3ff89956@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
calie58f9d521fb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP mode DEFAULT group default
</code></pre>
<p>To investigate the current issue, you can query the logs of the <code>calico-node</code> container on the relevant node and retrieve the data about the Pod <code>service-stg-8d9d68475-2h4b8</code>, searching for the existing virtual interface mapping:</p>
<pre><code>kubectl logs $(kubectl get po -l k8s-app=calico-node -o jsonpath='{.items[0].metadata.name}' -n kube-system) -c calico-node -n kube-system| grep service-stg-8d9d68475-2h4b8_stg
</code></pre>
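<p>To see which interface currently owns the conflicting route named in the error (a debugging sketch to run on the affected node):</p>
<pre><code># list host routes for the contested Pod IPs
ip route show | grep -E '192.168.90.(152|129)'

# compare against the Calico interface named in the error
ip route show dev cali670b0a20d66
</code></pre>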
|
<p>I have <code>Elasticsearch</code> running on Kubernetes (EKS), with <code>filebeat</code> running as <code>daemonset</code> on Kubernetes.</p>
<p>Now I am trying to get the logs from other <code>EC2</code> machines (outside of the EKS cluster), so I have installed the exact same version of <code>filebeat</code> on <code>EC2</code> and configured it to send logs to the <code>Elasticsearch</code> running on Kubernetes.</p>
<p>But I am not able to see any logs in Elasticsearch (Kibana). Here are the filebeat logs:</p>
<pre><code>2019-08-26T18:18:16.005Z INFO instance/beat.go:292 Setup Beat: filebeat; Version: 7.2.1
2019-08-26T18:18:16.005Z INFO [index-management] idxmgmt/std.go:178 Set output.elasticsearch.index to 'filebeat-7.2.1' as ILM is enabled.
2019-08-26T18:18:16.005Z INFO elasticsearch/client.go:166 Elasticsearch url: http://elasticsearch.dev.domain.net:9200
2019-08-26T18:18:16.005Z INFO add_cloud_metadata/add_cloud_metadata.go:351 add_cloud_metadata: hosting provider type detected as aws, metadata={"availability_zone":"us-west-2a","instance":{"id":"i-0185e1d68306f95b4"},"machine":{"type":"t2.medium"},"provider":"aws","region":"us-west-2"}
2019-08-26T18:18:16.005Z INFO [publisher] pipeline/module.go:97 Beat name: dev-web1
2019-08-26T18:18:16.006Z INFO elasticsearch/client.go:166 Elasticsearch url: http://elasticsearch.dev.domain.net:9200
</code></pre>
<p>Not much info in the logs.</p>
<p>Then I notice :</p>
<pre><code>root@dev-web1:~# sudo systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-08-26 18:18:47 UTC; 18min ago
Docs: https://www.elastic.co/products/beats/filebeat
Main PID: 7768 (filebeat)
CGroup: /system.slice/filebeat.service
└─7768 /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs
Aug 26 18:35:38 dev-web1 filebeat[7768]: 2019-08-26T18:35:38.156Z ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://elasticsear
Aug 26 18:35:38 dev-web1 filebeat[7768]: 2019-08-26T18:35:38.156Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://elastic
Aug 26 18:35:38 dev-web1 filebeat[7768]: 2019-08-26T18:35:38.156Z INFO [publisher] pipeline/retry.go:189 retryer: send unwait-signal to consumer
Aug 26 18:35:38 dev-web1 filebeat[7768]: 2019-08-26T18:35:38.157Z INFO [publisher] pipeline/retry.go:191 done
Aug 26 18:35:38 dev-web1 filebeat[7768]: 2019-08-26T18:35:38.157Z INFO [publisher] pipeline/retry.go:166 retryer: send wait signal to consumer
Aug 26 18:35:38 dev-web1 filebeat[7768]: 2019-08-26T18:35:38.157Z INFO [publisher] pipeline/retry.go:168 done
Aug 26 18:35:47 dev-web1 filebeat[7768]: 2019-08-26T18:35:47.028Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitori
Aug 26 18:36:17 dev-web1 filebeat[7768]: 2019-08-26T18:36:17.028Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitori
root@dev-web1:~#
</code></pre>
<p>But I can't read the complete lines in the above status message (they are truncated).</p>
<p>So I tried:</p>
<pre><code>root@dev-web1:~# curl elasticsearch.dev.domain.net/_cat/health
1566844775 18:39:35 dev-eks-logs green 3 3 48 24 0 0 0 0 - 100.0%
root@dev-web1:~#
</code></pre>
<p>which worked, but not with the port:</p>
<pre><code>root@dev-web1:~# curl elasticsearch.dev.domain.net:9200/_cat/health
</code></pre>
<p>filebeat has the following config:</p>
<pre><code>output.elasticsearch:
hosts: ["elasticsearch.dev.domain.net"]
username: "elastic"
password: "changeme"
</code></pre>
<p>How can I fix this on the filebeat side?</p>
<p>Telnet test:</p>
<pre><code>root@dev-web1:~# telnet <ip> 5044
Trying <ip>...
telnet: Unable to connect to remote host: Connection refused
root@dev-web1:~# telnet localhost 5044
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
root@dev-web1:~#
</code></pre>
| <p><a href="https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html#hosts-option" rel="nofollow noreferrer">https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html#hosts-option</a> says:</p>
<blockquote>
<p><code>hosts</code>...If no port is specified, 9200 is used.</p>
</blockquote>
<p>Adding <code>hosts: ["elasticsearch.dev.domain.net:80"]</code> in the filebeat configuration should resolve the issue.</p>
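<p>For example, a minimal sketch of the corrected output section, reusing the host name and credentials from the question (port 80 assumes Elasticsearch is reachable on the default HTTP port behind that domain, as the curl test above suggests):</p>
<pre><code>output.elasticsearch:
  hosts: ["elasticsearch.dev.domain.net:80"]
  username: "elastic"
  password: "changeme"
</code></pre>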
|
<p>Question: How can I use raw devices attached to the host within the pod as block device.</p>
<p>I tried using "hostPath" with type "BlockDevice"</p>
<pre><code>volumes:
- my-data:
hostPath:
path: /dev/nvme1n2
type: BlockDevice
</code></pre>
<pre><code>containers:
.....
volumeDevices:
- name: my-data
devicePath: /dev/sda
</code></pre>
<p>This configuration gives me the below error.</p>
<pre><code>Invalid value: "my-data": can only use volume source type of PersistentVolumeClaim for block mode
</code></pre>
<p>Can I achieve this using <code>PersistentVolume</code> and <code>PersistentVolumeClaim</code> ? Can someone help me with an example config. Appreciate the help.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#raw-block-volume-support" rel="noreferrer">Support for block devices in K8s</a> allows users and admins to use PVs & PVCs for raw block devices to be mounted in Pods. The excerpts below show a small use case.</p>
<ul>
<li>Create a PV which refers to the raw device on the host, say <code>/dev/xvdf</code>.</li>
</ul>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: local-raw-pv
spec:
volumeMode: Block
capacity:
storage: 100Gi
local:
path: /dev/xvdf
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
</code></pre>
<ul>
<li>Create a PVC claiming the block device for applications</li>
</ul>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: block-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Block
resources:
requests:
storage: 10Gi
</code></pre>
<ul>
<li>Create a Pod with the above claim, which will mount the host device <code>/dev/xvdf</code> inside the pod at path <code>/dev/xvda</code></li>
</ul>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pod-with-block-volume
spec:
containers:
- name: some-container
image: ubuntu
command: ["/bin/sh", "-c"]
args: [ "tail -f /dev/null" ]
volumeDevices:
- name: data
devicePath: /dev/xvda
volumes:
- name: data
persistentVolumeClaim:
claimName: block-pvc
</code></pre>
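<p>One caveat: on multi-node clusters the <code>local</code> volume source typically also needs a <code>nodeAffinity</code> stanza so the scheduler knows which node actually holds the device. A hedged sketch of the addition under the PV's <code>spec</code> (the hostname <code>worker-1</code> is an assumption):</p>
<pre><code># added under the PV's spec:
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-1
</code></pre>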
|
<p>I am trying to pass a toleration when deploying to a chart located in stable. The toleration should be applied to a specific YAML file in the templates directory, NOT the values.yaml file as it is doing by default.</p>
<p>I've applied using patch and I can see that the change I need would work if it were applied to the right Service, which is a DaemonSet.</p>
<p>Currently I'm trying "helm install -f tolerations.yaml --name release_here"</p>
<p>This simply creates a one-off entry when running <code>get chart release_here</code>, and the toleration is not in the correct service YAML.</p>
| <p>Quoting your requirement </p>
<blockquote>
<p>The toleration should be applied to a specific YAML file in the
templates directory</p>
</blockquote>
<p>First, in order to make this happen, your particular Helm chart needs to allow such end-user customization.</p>
<p>Here is the example based on <a href="https://github.com/helm/charts/tree/9eadc46ed1d9b17d1e67d06c2516dddc3fdde2d3/stable/kiam" rel="nofollow noreferrer">stable/kiam</a> chart:</p>
<p>Definition of <a href="https://raw.githubusercontent.com/helm/charts/9eadc46ed1d9b17d1e67d06c2516dddc3fdde2d3/stable/kiam/templates/server-daemonset.yaml" rel="nofollow noreferrer">kiam/templates/server-daemonset.yaml</a></p>
<pre><code>{{- if .Values.server.enabled -}}
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
labels:
app: {{ template "kiam.name" . }}
chart: {{ template "kiam.chart" . }}
component: "{{ .Values.server.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kiam.fullname" . }}-server
spec:
selector:
matchLabels:
app: {{ template "kiam.name" . }}
component: "{{ .Values.server.name }}"
release: {{ .Release.Name }}
template:
metadata:
{{- if .Values.server.podAnnotations }}
annotations:
{{ toYaml .Values.server.podAnnotations | indent 8 }}
{{- end }}
labels:
app: {{ template "kiam.name" . }}
component: "{{ .Values.server.name }}"
release: {{ .Release.Name }}
{{- if .Values.server.podLabels }}
{{ toYaml .Values.server.podLabels | indent 8 }}
{{- end }}
spec:
serviceAccountName: {{ template "kiam.serviceAccountName.server" . }}
hostNetwork: {{ .Values.server.useHostNetwork }}
{{- if .Values.server.nodeSelector }}
nodeSelector:
{{ toYaml .Values.server.nodeSelector | indent 8 }}
{{- end }}
tolerations: <---- TOLERATIONS !
{{ toYaml .Values.server.tolerations | indent 8 }}
{{- if .Values.server.affinity }}
affinity:
{{ toYaml .Values.server.affinity | indent 10 }}
{{- end }}
volumes:
- name: tls
</code></pre>
<p>Override the default <code>values.yaml</code> with your <code>custom-values.yaml</code> to set a toleration in the Pod spec of the DaemonSet.</p>
<pre><code>server:
  enabled: true
  tolerations:
  - key: foo.bar.com/role
    operator: Equal
    value: master
    effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: foo.bar.com/role
            operator: In
            values:
            - master
</code></pre>
<p>Render the resulting manifest file to see how it would look when overriding the default values with an install/upgrade helm command using the --values/--set argument:</p>
<pre><code>helm template --name my-release . -x templates/server-daemonset.yaml --values custom-values.yaml
</code></pre>
<p>Rendered file (output truncated):</p>
<pre><code>---
# Source: kiam/templates/server-daemonset.yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: kiam
    chart: kiam-2.5.1
    component: "server"
    heritage: Tiller
    release: my-release
  name: my-release-kiam-server
spec:
  selector:
    matchLabels:
      app: kiam
      component: "server"
      release: my-release
  template:
    metadata:
      labels:
        app: kiam
        component: "server"
        release: my-release
    spec:
      serviceAccountName: my-release-kiam-server
      hostNetwork: false
      tolerations:
        - effect: NoSchedule
          key: foo.bar.com/role
          operator: Equal
          value: master
      affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: foo.bar.com/role
                  operator: In
                  values:
                  - master
      volumes:
...
</code></pre>
<p>I hope this will help you to solve your problem.</p>
|
<p>I want to deploy an application in AWS EKS using kubernetes. My application needs to access the SQS and AWS S3. I am not sure how to allow the kubernetes application to access the SQS and S3. I looked into RBAC but I guess RBAC only provides access to manage the cluster, namespace or pods.</p>
<p>I am currently passing the access key and secret key as secrets in environment variables to grant the permissions, but I don't think this is a good idea.</p>
<p>Is there any other way, like creating an IAM role and passing the role to the pods running the application?
If I provide the permission in the worker node role, then all the pods sharing that node will get the permission. I need a way for only the specific pod or container to get it.</p>
<p>I also tried to create the RBAC in which a role is assigned to the group and the group is bind to the namespace.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: default
data:
mapRoles: |
- rolearn: arn:aws:iam::xxxxxxxxx:role/EksWorkers-NodeInstanceRole-xxxxx
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::xxxxxxxxx:role/iam-role-for-application
groups:
- app-group
</code></pre>
<p>and</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: admins
namespace: default
subjects:
- kind: Group
name: app-group
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
| <p><code>aws-auth</code> configMap is used to map users/roles to the cluster.</p>
<p>Update: Here is how you can do it natively </p>
<p><a href="https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/</a>
<a href="https://docs.aws.amazon.com/en_pv/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/en_pv/eks/latest/userguide/iam-roles-for-service-accounts.html</a></p>
<p><s>
For giving an IAM role to pods you can use one of the below tools</p>
<p><a href="https://github.com/jtblin/kube2iam" rel="nofollow noreferrer">https://github.com/jtblin/kube2iam</a></p>
<p><a href="https://github.com/uswitch/kiam" rel="nofollow noreferrer">https://github.com/uswitch/kiam</a></p>
<p>Amazon team is working on to bring this natively</p>
<p><a href="https://github.com/aws/containers-roadmap/projects/1?card_filter_query=iam" rel="nofollow noreferrer">https://github.com/aws/containers-roadmap/projects/1?card_filter_query=iam</a>
<a href="https://github.com/aws/containers-roadmap/issues/23" rel="nofollow noreferrer">https://github.com/aws/containers-roadmap/issues/23</a> </s></p>
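<p>With the native IRSA approach, the IAM role is attached to a Kubernetes service account via an annotation, and only pods using that service account receive the role's permissions. A hedged sketch reusing the role ARN from the question (the service account name is an assumption, and the role's trust policy must reference the cluster's OIDC provider):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxxx:role/iam-role-for-application
</code></pre>
<p>The pod spec then sets <code>serviceAccountName: app-sa</code>, so the S3/SQS permissions go to those pods only, not to every pod on the node.</p>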
|
<p>I would like to ask you about some assistance:</p>
<p>Entrypoint to cluster for http/https is NGINX: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0 running as deamonset</p>
<p>I want to achieve 2 things:</p>
<ol>
<li>preserve source IP of client</li>
<li>direct traffic to the nginx replica on the
current server (so if a request is sent to server A, listed as an
externalIP address, nginx on node A should handle it)</li>
</ol>
<p>Questions:</p>
<ul>
<li>How is it possible?</li>
<li>Is it possible without nodeport? <em>Control plane can be started with custom --service-node-port-range so I can add nodeport for 80
and 443, but it looks a little bit like a hack (after reading about
nodeport intended usage)</em></li>
</ul>
<p>I was considering using <strong>metallb</strong>, but the layer2 configuration would cause a bottleneck (high traffic on the cluster). I am not sure if BGP will solve this problem.</p>
<ul>
<li>Kubernetes <strong>v1.15</strong></li>
<li>Bare-metal</li>
<li>Ubuntu <strong>18.04</strong></li>
<li>Docker <strong>(18.9)</strong> and WeaveNet <strong>(2.6)</strong></li>
</ul>
<p>You can preserve the client source IP by setting <code>externalTrafficPolicy</code> to <code>Local</code>; this proxies requests only to local endpoints. This is explained in <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport" rel="nofollow noreferrer">Source IP for Services with Type=NodePort</a>.</p>
<p>You should also have a look at <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">Using Source IP</a>.</p>
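<p>For example, a hedged sketch of a NodePort service for the ingress controller with the policy set (the name and ports here are assumptions):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>
<p>With <code>externalTrafficPolicy: Local</code>, a request arriving at node A is only handled by the ingress replica on node A, and the client source IP is preserved, which covers both of your goals.</p>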
<p>In case of <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a>:</p>
<blockquote>
<p>MetalLB respects the service’s <code>externalTrafficPolicy</code> option, and implements two different announcement modes depending on what policy you select. If you’re familiar with Google Cloud’s Kubernetes load balancers, you can probably skip this section: MetalLB’s behaviors and tradeoffs are identical.</p>
<h3><a href="https://v0-2-0--metallb.netlify.com/usage/#local-traffic-policy" rel="nofollow noreferrer" title="Permanent link">“Local” traffic policy</a></h3>
<p>With the <code>Local</code> traffic policy, nodes will only attract traffic if they are running one or more of the service’s pods locally. The BGP routers will load-balance incoming traffic only across those nodes that are currently hosting the service. On each node, the traffic is forwarded only to local pods by <code>kube-proxy</code>, there is no “horizontal” traffic flow between nodes.</p>
<p>This policy provides the most efficient flow of traffic to your service. Furthermore, because <code>kube-proxy</code> doesn’t need to send traffic between cluster nodes, your pods can see the real source IP address of incoming connections.</p>
<p>The downside of this policy is that it treats each cluster node as one “unit” of load-balancing, regardless of how many of the service’s pods are running on that node. This may result in traffic imbalances to your pods.</p>
<p>For example, if your service has 2 pods running on node A and one pod running on node B, the <code>Local</code> traffic policy will send 50% of the service’s traffic to each <em>node</em>. Node A will split the traffic it receives evenly between its two pods, so the final per-pod load distribution is 25% for each of node A’s pods, and 50% for node B’s pod. In contrast, if you used the <code>Cluster</code> traffic policy, each pod would receive 33% of the overall traffic.</p>
<p>In general, when using the <code>Local</code> traffic policy, it’s recommended to finely control the mapping of your pods to nodes, for example using <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">node anti-affinity</a>, so that an even traffic split across nodes translates to an even traffic split across pods.</p>
</blockquote>
<p>You need to take into account the limitations of the <a href="https://v0-2-0--metallb.netlify.com/usage/#limitations" rel="nofollow noreferrer">BGP routing protocol</a> for MetalLB.</p>
<p>Please also have a look at this blog post <a href="https://mauilion.dev/posts/kind-metallb/" rel="nofollow noreferrer">Using MetalLb with Kind</a>.</p>
|
<p>One of the points in the <code>kubectl</code> <a href="https://kubernetes.io/docs/reference/kubectl/conventions/#best-practices" rel="nofollow noreferrer">best practices section in Kubernetes Docs</a> state below:</p>
<blockquote>
<p>Pin to a specific generator version, such as <code>kubectl run
--generator=deployment/v1beta1</code></p>
</blockquote>
<p>But then a little down in the doc, we get to learn that except for Pod, the use of <code>--generator</code> option is deprecated and that it would be removed in future versions.</p>
<p>Why is this being done? Doesn't generator make life easier in creating a template file for resource definition of deployment, service, and other resources? What alternative is the kubernetes team suggesting? This isn't there in the docs :(</p>
| <p><code>kubectl create</code> is the recommended alternative if you want to use more than just a pod (like deployment).</p>
<p><a href="https://kubernetes.io/docs/reference/kubectl/conventions/#generators" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/conventions/#generators</a> says:</p>
<blockquote>
<p>Note: <code>kubectl run --generator</code> except for <code>run-pod/v1</code> is deprecated in v1.12.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/kubernetes/pull/68132" rel="nofollow noreferrer">This pull</a> request has the reason why generators (except <code>run-pod/v1</code>) were deprecated:</p>
<blockquote>
<p>The direction is that we want to move away from <code>kubectl run</code> because it's over bloated and complicated for both users and developers. We want to mimic <code>docker run</code> with <code>kubectl run</code> so that it <em>only</em> creates a pod, and if you're interested in other resources <code>kubectl create</code> is the intended replacement.</p>
</blockquote>
|
<p>I have defined application.properties files in a config dir. The config dir has the following structure:</p>
<pre><code>config
application.properties
application-test.properties
application-dev.properties
application-prod.properties
</code></pre>
<p>I have created a helm chart to create the configmap. Helm chart is defined as below</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-configmap
data:
{{- (.Files.Glob "config/*").AsConfig | nindent 2 }}
</code></pre>
<p>I see that the configmap is created. </p>
<p>We are consuming the ConfigMap via environment variables in a running container using the <code>envFrom</code> property. (This is in my deployment yaml file.)</p>
<pre><code>spec:
containers:
- envFrom:
- configMapRef:
name: nginx-configmap
</code></pre>
<p>I see that the values are stored as environment variables. However, the variable names are in lower case.</p>
<pre><code>server.port=8080
server.name=localhost
</code></pre>
<p>Since these are env variables, they have to be uppercase and . should be replaced with _. So, I have modified my chart as below:</p>
<pre><code>data:
{{- (.Files.Glob "config/*").AsConfig | nindent 2 | upper | replace "." "_" }}
</code></pre>
<p>The generated configmap is as below</p>
<pre><code> APPLICATION_PROPERTIES: |
SERVER_PORT = 8080
SERVER_NAME = LOCALHOST
</code></pre>
<p>Below is the env variables inside container</p>
<pre><code>APPLICATION_PROPERTIES=SERVER_PORT = 8080
SERVER_NAME = LOCALHOST
</code></pre>
<p>My requirement is that only the contents of the file should be upper case and . should be replaced with _. The filename should not be converted; it should stay as is.</p>
<p>Can this be achieved? </p>
| <p>Try this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap
data:
{{- $root := . }}
{{- range $path, $bytes := .Files.Glob "config/*" }}
  {{ base $path }}: |
{{ $root.Files.Get $path | upper | replace "." "_" | indent 4 }}
{{- end }}
</code></pre>
<p>Using a literal block scalar (<code>|</code>) keeps the newlines of the file contents intact, while <code>base $path</code> leaves the filename untouched; only the contents go through <code>upper</code> and <code>replace</code>.</p>
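<p>The <code>upper | replace "." "_"</code> pipeline applied to the file contents behaves like this shell equivalent, which you can sanity-check locally:</p>

```shell
# Mirror of the chart's transformation: uppercase the file contents,
# then turn dots into underscores.
printf 'server.port=8080\nserver.name=localhost\n' \
  | tr '[:lower:]' '[:upper:]' \
  | tr '.' '_'
# prints:
# SERVER_PORT=8080
# SERVER_NAME=LOCALHOST
```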
|
<p>I have deployed a spring boot application on a pod (pod1) on a node (node1). I have also deployed JMeter on another pod (pod2) on a different node (node2). I am trying to perform automated load testing from pod2. To perform the load testing, I need to restart pod1 for each test case. How do I restart pod1 from pod2?</p>
<p>To restart or delete a pod from another pod you have to access the API server.
There are many ways to do this; check this <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">link</a>. </p>
<p>You also have to authorize the pod's service account to do this by creating Role and RoleBinding objects.</p>
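<p>A hedged sketch of those RBAC objects (the names are assumptions, and the subject is the <code>default</code> service account the JMeter pod would run under):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-deleter
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-deleter-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-deleter
  apiGroup: rbac.authorization.k8s.io
</code></pre>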
|
<p>I am trying to install grafana helm chart with opsgenie notification like so</p>
<pre><code> helm install stable/grafana -n grafana --namespace monitoring --set-string notifiers."notifiers\.yaml"="notifiers:
- name: opsgenie-notifier
type: opsgenie
uid: notifier-1
settings:
apiKey: some-key
apiUrl: https://some-server/alerts"
</code></pre>
<p>When I check the config map I see the value is set with an extra pipe at the beginning --> <strong>|-</strong></p>
<pre><code>apiVersion: v1
data:
notifiers.yaml: |
|-
notifiers:
- name: opsgenie-notifier
type: opsgenie
uid: notifier-1
settings:
apiKey: some-key
apiUrl: https://some-server/alerts
kind: ConfigMap
metadata:
creationTimestamp: "2019-08-27T00:32:40Z"
labels:
app: grafana
chart: grafana-3.5.10
heritage: Tiller
release: grafana
name: grafana
namespace: monitoring
</code></pre>
<p>Checking the source code - <a href="https://github.com/helm/charts/blob/master/stable/grafana/templates/configmap.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/grafana/templates/configmap.yaml</a>, I can't figure out why. The below source code should print the values verbatim, but it's adding an extra line --> |-, causing the grafana server to crash as it is unable to read the configuration.</p>
<pre><code>{{- if .Values.notifiers }}
{{- range $key, $value := .Values.notifiers }}
{{ $key }}: |
{{ toYaml $value | indent 4 }}
{{- end -}}
{{- end -}}
</code></pre>
<p>I have tried with --set, --set-file and --set-string. It's the same behavior.</p>
<p>An easy way to achieve this is by using a values.yaml file as below:</p>
<pre><code>notifiers:
notifiers.yaml:
notifiers:
- name: opsgenie-notifier
type: opsgenie
uid: notifier-1
settings:
apiKey: some-key
apiUrl: https://some-server/alerts
</code></pre>
<p>and by installing as</p>
<p><code>helm install stable/grafana -n grafana --namespace monitoring --values values.yaml</code></p>
<p>You can also do it via the --set/--set-string flags as below:</p>
<pre><code>helm install stable/grafana -n grafana --namespace monitoring \
--set notifiers."notifiers\.yaml".notifiers[0].name="opsgenie-notifier" \
--set notifiers."notifiers\.yaml".notifiers[0].type="opsgenie" \
--set notifiers."notifiers\.yaml".notifiers[0].uid="notifier-1" \
--set notifiers."notifiers\.yaml".notifiers[0].settings.apiKey="some-key" \
--set notifiers."notifiers\.yaml".notifiers[0].settings.apiUrl="https://some-server/alerts"
</code></pre>
|
<p>I have deployed a spring boot application on a pod (pod1) on a node (node1). I have also deployed JMeter on another pod (pod2) on a different node (node2). I am trying to perform automated load testing from pod2. To perform the load testing, I need to restart pod1 for each test case. How do I restart pod1 from pod2?</p>
| <p><strong>Via kubectl:</strong></p>
<p>Install and configure kubectl in pod2, then run <code>kubectl delete pod pod1</code> via shell after every load test.</p>
<p><strong>Via Springboot:</strong></p>
<p>Add actuator dependency</p>
<pre><code><dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
</code></pre>
<p>Enable shutdown</p>
<p><code>management.endpoint.shutdown.enabled=true</code></p>
<p>Request to shutdown</p>
<p><code>curl -X POST IP:port/actuator/shutdown</code></p>
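<p>Note that on Spring Boot 2.x the shutdown endpoint must also be exposed over HTTP, so the properties would look like this (a sketch, assuming otherwise default actuator settings):</p>
<pre><code>management.endpoint.shutdown.enabled=true
management.endpoints.web.exposure.include=shutdown
</code></pre>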
|
<p>I am displaying the output of the "docker ps -a" command to list all the containers on my HTML page. I want to change the port of these containers using a button on the page itself. In docker, normally, if the container is running I would run a docker stop on the container id and restart it by adding -p HOSTPORT:CONTAINERPORT to the command. But since all the running containers are Kubernetes containers/pods, stopping them will cause a new pod/container to be re-created with a different name. So how do I change the port of the container/pod in such cases?</p>
<p>output of "docker ps -a command"</p>
<pre><code> NAMES CONTAINER ID STATUS
k8s_nginx_nginx-6cdb6c86d4-z7m7m 56711e6de1be Up 2 seconds
k8s_POD_nginx-6cdb6c86d4-z7m7m_d 70b21761cb74 Up 3 seconds
k8s_coredns_coredns-5c98db65d4-7 dfb21bb7c7f4 Up 7 days
k8s_POD_coredns-5c98db65d4-7djs8 a336be8230ce Up 7 days
k8s_POD_kube-proxy-9722h_kube-sy 5e290420dec4 Up 7 days
k8s_POD_kube-apiserver-wootz_kub a23dea72b38b Exited (255) 7 days ago
</code></pre>
<p>nginx.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
type: NodePort
ports:
- name: nginxport
port: 80
targetPort: 80
nodePort: 30000
selector:
app: nginx
tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
tier: frontend
template:
metadata:
labels:
app: nginx
tier: frontend
spec:
containers:
- image: suji165475/devops-sample:mxgraph
name: nginx
ports:
- containerPort: 80
name: nginxport
</code></pre>
<p>So how can I change the port of any of these containers/pods?</p>
| <p>Most of the attributes of a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#podspec-v1-core" rel="nofollow noreferrer">PodSpec</a> cannot be changed once the pod has been created. The port information is inside the <code>containers</code> array, and the linked documentation explicitly notes that <code>containers</code> "Cannot be updated." You <em>must</em> delete and recreate the pod if you want to change the ports it makes visible (or most of its other properties); there is no other way to do it.</p>
<p>You almost never directly deal with Pods (and for that matter you almost never mix plain Docker containers and Kubernetes on the same host). Typically you create a Deployment object, which can be updated in place, and it takes responsibility for creating and deleting Pods for you.</p>
<p>(The corollary to this is that if you're trying to manually delete and recreate Pods, in isolation, changing their properties, but these Pods are also managed by Deployments or StatefulSets or DaemonSets, the controller will notice that a replica is missing when you delete it and recreate it, with its original settings.)</p>
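<p>For example, to change the port you edit the Deployment's pod template (the values below reuse the manifest from the question) and re-apply it with <code>kubectl apply -f nginx.yaml</code>; the Deployment controller then replaces the Pods for you:</p>
<pre><code>      containers:
      - image: suji165475/devops-sample:mxgraph
        name: nginx
        ports:
        - containerPort: 8080   # changed from 80
          name: nginxport
</code></pre>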
|
<p>The readiness probe keeps the application in a non-ready state. While in this state, the application cannot connect to any kubernetes service.</p>
<p>I'm using Ubuntu 18 for both master and nodes for my kubernetes cluster. (The problem still appeared when I used only master in the cluster, so I don't think this is a master node kind of problem).</p>
<p>I set up my kubernetes cluster with a Spring application, which uses hazelcast in order to manage its cache. So, while using the readiness probe, the application can't access a kubernetes service I created in order to connect the applications via hazelcast using the <a href="https://github.com/hazelcast/hazelcast-kubernetes" rel="nofollow noreferrer">hazelcast-kubernetes</a> plugin.</p>
<p>When I take out the readiness probe, the application connects to the service as soon as it can, creating hazelcast clusters successfully, and everything works properly.</p>
<p>The readiness probe hits a REST API whose only response is a 200 code. However, while the application is starting up, it will start the hazelcast cluster and therefore try to connect to the kubernetes hazelcast service that links the app's cache with other pods, while the readiness probe hasn't been cleared and the pod is still in a non-ready state. This is when the application cannot connect to the kubernetes service, and it either fails or gets stuck depending on the configuration I add.</p>
<p>service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-app-cluster-hazelcast
spec:
selector:
app: my-app
ports:
- name: hazelcast
port: 5701
</code></pre>
<p>deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app-deployment
labels:
app: my-app-deployment
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 2
replicas: 2
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
terminationGracePeriodSeconds: 180
containers:
- name: my-app
image: my-repo:5000/my-app-container
imagePullPolicy: Always
ports:
- containerPort: 5701
- containerPort: 9080
readinessProbe:
httpGet:
path: /app/api/excluded/sample
port: 9080
initialDelaySeconds: 120
periodSeconds: 15
securityContext:
capabilities:
add:
- SYS_ADMIN
env:
- name: container
value: docker
</code></pre>
<p>hazelcast.xml:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<hazelcast
xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.11.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.jmx">false</property>
<property name="hazelcast.logging.type">slf4j</property>
</properties>
<network>
<port auto-increment="false">5701</port>
<outbound-ports>
<ports>49000,49001,49002,49003</ports>
</outbound-ports>
<join>
<multicast enabled="false"/>
<kubernetes enabled="true">
<namespace>default</namespace>
<service-name>my-app-cluster-hazelcast</service-name>
</kubernetes>
</join>
</network>
</hazelcast>
</code></pre>
<p>hazelcast-client.xml:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<hazelcast-client
xsi:schemaLocation="http://www.hazelcast.com/schema/client-config http://www.hazelcast.com/schema/client-config/hazelcast-client-config-3.11.xsd"
xmlns="http://www.hazelcast.com/schema/client-config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.logging.type">slf4j</property>
</properties>
<connection-strategy async-start="false" reconnect-mode="ON">
<connection-retry enabled="true">
<initial-backoff-millis>1000</initial-backoff-millis>
<max-backoff-millis>60000</max-backoff-millis>
</connection-retry>
</connection-strategy>
<network>
<kubernetes enabled="true">
<namespace>default</namespace>
<service-name>my-app-cluster-hazelcast</service-name>
</kubernetes>
</network>
</hazelcast-client>
</code></pre>
<p>Expected result:</p>
<p>The service is able to connect to the pods, creating endpoints in its description.</p>
<p>$ kubectl describe service my-app-cluster-hazelcast</p>
<pre><code>Name: my-app-cluster-hazelcast
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-app-cluster-hazelcast","namespace":"default"},"spec":{"ports...
Selector: app=my-app
Type: ClusterIP
IP: 10.244.28.132
Port: hazelcast 5701/TCP
TargetPort: 5701/TCP
Endpoints: 10.244.4.10:5701,10.244.4.9:5701
Session Affinity: None
Events: <none>
</code></pre>
<p>The application runs properly and shows two members in its hazelcast cluster and the deployment is shown as ready, the application can be fully accessed:</p>
<p>logs:</p>
<pre><code>2019-08-26 23:07:36,614 TRACE [hz._hzInstance_1_dev.InvocationMonitorThread] (com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor): [10.244.4.10]:5701 [dev] [3.11] Broadcasting operation control packets to: 2 members
</code></pre>
<p>$ kubectl get deployments</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
my-app-deployment 2/2 2 2 2m27s
</code></pre>
<p>Actual Result:</p>
<p>The service doesn't get any endpoint.</p>
<p>$ kubectl describe service my-app-cluster-hazelcast</p>
<pre><code>Name: my-app-cluster-hazelcast
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-app-cluster-hazelcast","namespace":"default"},"spec":{"ports...
Selector: app=my-app
Type: ClusterIP
IP: 10.244.28.132
Port: hazelcast 5701/TCP
TargetPort: 5701/TCP
Endpoints:
Session Affinity: None
Events: <none>
</code></pre>
<p>The application gets stuck with the connection-strategy enabled in hazelcast-client.xml with the following logs, keeping its own cluster with no communication and the deployment in a non-ready state forever:</p>
<p>logs:</p>
<pre><code>22:54:11.236 [hz.client_0.cluster-] WARN com.hazelcast.client.connection.ClientConnectionManager - hz.client_0 [dev] [3.11] Unable to get alive cluster connection, try in 57686 ms later, attempt 52 , cap retrytimeout millis 60000
22:55:02.036 [hz._hzInstance_1_dev.cached.thread-4] DEBUG com.hazelcast.internal.cluster.impl.MembershipManager - [10.244.4.8]:5701 [dev] [3.11] Sending member list to the non-master nodes:
Members {size:1, ver:1} [
Member [10.244.4.8]:5701 - 6a4c7184-8003-4d24-8023-6087d68e9709 this
]
22:55:08.968 [hz.client_0.cluster-] WARN com.hazelcast.client.connection.ClientConnectionManager - hz.client_0 [dev] [3.11] Unable to get alive cluster connection, try in 51173 ms later, attempt 53 , cap retrytimeout millis 60000
22:56:00.184 [hz.client_0.cluster-] WARN com.hazelcast.client.connection.ClientConnectionManager - hz.client_0 [dev] [3.11] Unable to get alive cluster connection, try in 55583 ms later, attempt 54 , cap retrytimeout millis 60000
</code></pre>
<p>$ kubectl get deployments</p>
<pre><code>NAME READY UP-TO-DATE AVAILABLE AGE
my-app-deployment 0/2 2 0 45m
</code></pre>
| <p>Just to clarify:</p>
<p>As described by <a href="https://stackoverflow.com/users/11019984/cristian-cordova">OP</a> with reference to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">readiness probe</a>: </p>
<blockquote>
<p>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. <strong>When a Pod is not ready, it is removed from Service load balancers</strong></p>
</blockquote>
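<p>For illustration, a minimal <code>readinessProbe</code> sketch on the application container (the path and timings are assumptions) — until the probe succeeds, the Pod is kept out of the service's endpoints, which matches the empty <code>Endpoints</code> shown in the question:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /health    # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
</code></pre>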
|
<p>Currently, I am migrating one of our microservices from a K8S Deployment to a StatefulSet.
While updating the Kubernetes deployment config I noticed StatefulSets don't support <code>revisionHistoryLimit</code> and <code>minReadySeconds</code>.</p>
<ol>
<li><code>revisionHistoryLimit</code> is used to keep the previous N ReplicaSets for rollback.</li>
<li><code>minReadySeconds</code> is the number of seconds a Pod should be ready without any of its containers crashing.</li>
</ol>
<p>I couldn't find any compatible settings for <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer" title="StatefulSets doc">StatefulSets</a>.</p>
<p>So my questions are:</p>
<ol>
<li>How long will the master wait to consider a stateful Pod ready?</li>
<li>How do I handle rollback of a stateful application?</li>
</ol>
| <ol start="2">
<li>After reverting the configuration, you must also delete any Pods that the StatefulSet had already attempted to run with the bad configuration. New Pods will then automatically spin up with the correct configuration.</li>
</ol>
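<p>A sketch of that workflow with <code>kubectl</code> (the StatefulSet name <code>web</code> and the pod ordinal are assumptions):</p>
<pre><code># revert the StatefulSet spec to its previous revision
kubectl rollout undo statefulset/web

# pods stuck with the bad configuration must be deleted manually;
# the controller recreates them from the reverted spec
kubectl delete pod web-2
</code></pre>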
|
<p>In a scenario where I set the context path of the Tomcat server by changing the <code>server.xml</code> file like this:</p>
<pre><code><Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Context path="${catalinaContextPath}" docBase="${catalina.home}/atlassian-jira" reloadable="false" useHttpOnly="true">
<Resource name="UserTransaction" auth="Container" type="javax.transaction.UserTransaction"
factory="org.objectweb.jotm.UserTransactionFactory" jotm.timeout="60"/>
<Manager pathname=""/>
<JarScanner scanManifest="false"/>
<Valve className="org.apache.catalina.valves.StuckThreadDetectionValve" threshold="120" />
</Context>
</Host>
</code></pre>
<p>If <code>catalinaContextPath</code> is set to <code>/my/new/context</code>, the server will start up in the Pod with the URL <code>localhost:8080/my/new/context</code>. How do I change the service so that it sends all traffic arriving on service port 80 to the container path <code><pod_ip>:8080/my/new/context</code>?</p>
<p>This is my current service: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: jira
name: jira
spec:
selector:
app: jira
component: jira
ports:
- protocol: TCP
name: serverport
port: 80
targetPort: 8080
</code></pre>
<p>My use case is, I am deploying this <a href="https://hub.docker.com/r/dchevell/jira-software" rel="nofollow noreferrer">this JIRA docker image</a> in a Pod and I set the context path using the environment variable <code>CATALINA_CONTEXT_PATH</code> as specified in <a href="https://hub.docker.com/r/dchevell/jira-software" rel="nofollow noreferrer">this documentation</a>. <strong>When I try to access it, it results in a 404. I assume this is because traffic is being redirected to <code><pod_ip>:8080</code> and nothing is running on <code><pod_ip>:8080</code> since tomcat has started up on <code><pod_ip>:8080/my/new/context</code></strong></p>
<p>EDIT:
This is the <code>ingress.yaml</code> I am using:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: {{ .Values.global.app }}
name: jira-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: myhost
http:
paths:
- path: /dev/jira(/|$)(.*)
backend:
serviceName: jira
servicePort: 80
- path: /prod/jira(/|$)(.*)
backend:
serviceName: jira
servicePort: 80
</code></pre>
<p>Whenever I visit <code>myhost/dev/jira</code> I need it to go to my JIRA instance.</p>
| <p>Since your application <em>"real"</em> root is <em><code>/my/new/context</code></em>, you can rewrite every incoming request matching the <em><code>/dev/jira</code></em> URI using <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="nofollow noreferrer">Nginx's <code>AppRoot</code></a>:</p>
<blockquote>
<p>If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for /.</p>
</blockquote>
<p>If you're using this approach, there is no need to use capture groups with <code>rewrite-target</code>.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: {{ .Values.global.app }}
name: jira-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/app-root: /my/new/context
spec:
rules:
- host: myhost
http:
paths:
- path: /dev/jira(/|$)(.*)
backend:
serviceName: jira
servicePort: 80
- path: /prod/jira(/|$)(.*)
backend:
serviceName: jira
servicePort: 80
</code></pre>
|
<p>I'm experiencing the same error that was answered here <a href="https://stackoverflow.com/questions/57488845/istio-manual-sidecar-injection-gives-an-error">Istio manual sidecar injection gives an error</a> </p>
<p>I can't manually inject istio sidecar into an existing deployment. The solution of making sure the versions match doesn't help me.</p>
<pre><code># kubectl apply -f <(istioctl kube-inject -f apps_deployment.yml --log_output_level debug) --v=4
I0827 11:17:38.661208 98950 decoder.go:224] decoding stream as YAML
Error: missing configuration map key "values" in "istio-sidecar-injector"
F0827 11:17:38.728899 98950 helpers.go:119] error: error parsing /dev/fd/63: error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context
# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.10", GitCommit:"7a578febe155a7366767abce40d8a16795a96371", GitTreeState:"clean", BuildDate:"2019-05-01T04:14:38Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.10-docker-1", GitCommit:"3ffbabe840b101d7c84263431c8a18b27785929a", GitTreeState:"clean", BuildDate:"2019-05-10T17:53:06Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
# istioctl version
client version: 1.2.4
ingressgateway version: 94746ccd404a8e056483dd02e4e478097b950da6-dirty
ingressgateway version: 94746ccd404a8e056483dd02e4e478097b950da6-dirty
pilot version: 1.2.4
pilot version: 1.2.4
</code></pre>
<p>Expected Results:
I expect it to inject the istio sidecar into my deployment.yml configuration.</p>
| <p>Istio 1.2.4 hasn't been tested against Kubernetes 1.11, as per the docs.</p>
<p>Reference:</p>
<p><a href="https://istio.io/docs/setup/kubernetes/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/</a>
<a href="https://web.archive.org/web/20190827164324/https://istio.io/docs/setup/kubernetes/" rel="nofollow noreferrer">https://web.archive.org/web/20190827164324/https://istio.io/docs/setup/kubernetes/</a></p>
|
<p>I have developed a Camel route with Spring Boot. Now I want to trace the route using Jaeger. I tried <a href="https://github.com/apache/camel/tree/master/examples/camel-example-opentracing" rel="nofollow noreferrer">this example</a> to trace the route using <code>camel-opentracing</code> component, but I am unable to get the traces to Jaeger.</p>
<p>I can only see it in the console. One thing I am not clear on is where to add the Jaeger URL.
Any working example would be helpful.</p>
| <p>What I eventually did was create a Jaeger <code>Tracer</code> and annotate it with <code>@Bean</code>, so that the camel-opentracing component picks it up from the Spring context.</p>
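<p>For illustration, a minimal sketch of such a bean, assuming the <code>jaeger-client</code> dependency is on the classpath. The service name below is a placeholder, and the Jaeger URL itself is supplied through the standard environment variables (<code>JAEGER_AGENT_HOST</code>/<code>JAEGER_AGENT_PORT</code>, or <code>JAEGER_ENDPOINT</code> for a collector URL) that <code>Configuration.fromEnv</code> reads:</p>
<pre><code>import io.jaegertracing.Configuration;
import io.opentracing.Tracer;
import org.springframework.context.annotation.Bean;

// inside a Spring @Configuration class
@Bean
public Tracer jaegerTracer() {
    // "my-camel-service" is a placeholder service name
    return Configuration.fromEnv("my-camel-service").getTracer();
}
</code></pre>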
|
<p>I am trying to follow the <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow noreferrer">Istio BookInfo example</a> for Kubernetes. But instead of installing the resources in the <code>default</code> namespace, I am using a namespace called <code>qa</code>. On step 5 is where I am running into an issue. When I try to curl the productpage I get the following response:</p>
<p><code>upstream connect error or disconnect/reset before headers. reset reason: connection termination</code></p>
<p>However, if I follow the same example but use the <code>default</code> namespace a get a successful response from the productpage.</p>
<p>Any ideas why this is breaking in my <code>qa</code> namespace?</p>
<p>Istio version:</p>
<pre><code>client version: 1.2.4
citadel version: 1.2.2
egressgateway version: 1.2.2
galley version: 1.2.2
ingressgateway version: 1.2.2
pilot version: 1.2.2
policy version: 1.2.2
sidecar-injector version: 1.2.2
telemetry version: 1.2.2
</code></pre>
<p>Kubernetes version (running in AKS):</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.7", GitCommit:"4683545293d792934a7a7e12f2cc47d20b2dd01b", GitTreeState:"clean", BuildDate:"2019-06-06T01:39:30Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>I would recommend the following steps in order to debug the reported issue:</p>
<ol>
<li>Check whether sidecar has injected into <code>qa</code> namespace:</li>
</ol>
<p><code>$ kubectl get namespace -L istio-injection| grep qa</code></p>
<pre><code>qa Active 83m enabled
</code></pre>
<ol start="2">
<li>Verify k8s <strong>Bookinfo</strong> app resources properly distributed and located in <code>qa</code> namespace:</li>
</ol>
<p><code>$ kubectl get all -n qa</code></p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/details-v1-74f858558f-vh97g 2/2 Running 0 29m
pod/productpage-v1-8554d58bff-5tpbl 2/2 Running 0 29m
pod/ratings-v1-7855f5bcb9-hhlds 2/2 Running 0 29m
pod/reviews-v1-59fd8b965b-w9lk5 2/2 Running 0 29m
pod/reviews-v2-d6cfdb7d6-hsjqq 2/2 Running 0 29m
pod/reviews-v3-75699b5cfb-vl7t9 2/2 Running 0 29m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/details ClusterIP IP_ADDR <none> 9080/TCP 29m
service/productpage ClusterIP IP_ADDR <none> 9080/TCP 29m
service/ratings ClusterIP IP_ADDR <none> 9080/TCP 29m
service/reviews ClusterIP IP_ADDR <none> 9080/TCP 29m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/details-v1 1/1 1 1 29m
deployment.apps/productpage-v1 1/1 1 1 29m
deployment.apps/ratings-v1 1/1 1 1 29m
deployment.apps/reviews-v1 1/1 1 1 29m
deployment.apps/reviews-v2 1/1 1 1 29m
deployment.apps/reviews-v3 1/1 1 1 29m
NAME DESIRED CURRENT READY AGE
replicaset.apps/details-v1-74f858558f 1 1 1 29m
replicaset.apps/productpage-v1-8554d58bff 1 1 1 29m
replicaset.apps/ratings-v1-7855f5bcb9 1 1 1 29m
replicaset.apps/reviews-v1-59fd8b965b 1 1 1 29m
replicaset.apps/reviews-v2-d6cfdb7d6 1 1 1 29m
replicaset.apps/reviews-v3-75699b5cfb 1 1 1 29m
</code></pre>
<p><code>$ kubectl get sa -n qa</code></p>
<pre><code>NAME SECRETS AGE
bookinfo-details 1 36m
bookinfo-productpage 1 36m
bookinfo-ratings 1 36m
bookinfo-reviews 1 36m
default 1 97m
</code></pre>
<ol start="3">
<li>Inspect Istio <a href="https://istio.io/docs/concepts/what-is-istio/#envoy" rel="nofollow noreferrer">Envoy</a> in particular Pod container, thus you can extract some essential data about proxy state and traffic routing information:</li>
</ol>
<p><code>kubectl logs $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}' -n qa) -c istio-proxy -n qa</code></p>
<p>I encourage you to look at the Istio traffic-management <a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/#troubleshooting" rel="nofollow noreferrer">troubleshooting</a> documentation to get some more insight.</p>
|
<p>I have defined application properties files in a config dir. Config dir is on the below structure.</p>
<pre><code>config
application.properties
application-test.properties
application-dev.properties
application-prod.properties
</code></pre>
<p>I have defined configmap as below </p>
<pre><code>{{ range $path, $bytes := .Files.Glob "config/*" }}
{{ base $path }}: '{{- $.Files.Get $path | nindent 2 | upper | replace "." "_" }}'
{{- end }}
</code></pre>
<p>We are consuming the ConfigMap via environment variables in a running container using the envFrom property. (This is in my deployment yaml file.)</p>
<pre><code>spec:
containers:
- envFrom:
- configMapRef:
name: nginx-configmap
</code></pre>
<p>Post running <code>helm install</code>, I see that configmap is generated.</p>
<pre><code>kubectl describe cm sample-configmap
Data
====
----
SERVER_PORT = 8080 SERVER_NAME = LOCALHOST SERVER_GREETING = GREETING-SERVICE
----
</code></pre>
<p>In the configmap, the contents are generated as a single string instead of key/value pairs.</p>
<p>Even inside the container, the values are not stored as key/value pairs; they are stored as one string.</p>
<pre><code>kubectl exec -it <pod> sh
/data # env | grep application.properties
application.properties= SERVER_PORT = 8080 SERVER_NAME = LOCALHOST SERVER_GREETING = GREETING-SERVICE SAMPLE_GROUPA_PARAM1 = VALUE1BASE SAMPLE_GROUPA_PARAM2 = VALUE2BASE SAMPLE_HIGHERPARAM = VALUEHIGHERBASE
/data # echo $application.properties
.properties
/data # echo $SERVER_PORT
</code></pre>
<p>I have also tried </p>
<pre><code>data:
{{- (.Files.Glob "config/*").AsConfig | nindent 2 | upper | replace "." "_" }}
</code></pre>
<p>and observed the same issue.</p>
<p>Is it possible to expose the contents of application.properties as key/value pairs in env inside the container?</p>
| <p>The issue is in the way you are reading your files and in the configmap configuration.
Assuming your file in the <em>config</em> folder looks like this (note that integers should be included as quoted strings):</p>
<p><strong>config.ini</strong></p>
<pre><code>SERVER_PORT = "8080"
SERVER_NAME = LOCALHOST
SERVER_GREETING = GREETING-SERVICE
SAMPLE_GROUPA_PARAM1 = VALUE1BASE
SAMPLE_GROUPA_PARAM2 = VALUE2BASE
SAMPLE_HIGHERPARAM = VALUEHIGHERBASE
</code></pre>
<p><strong>configmap.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
data:
{{ range $path, $bytes := .Files.Glob "config/*" }}
{{- $.Files.Get $path | indent 2 | upper | replace "." "_" | replace " = " ": " }}
{{- end }}
</code></pre>
<p><strong>pod.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "env" ]
envFrom:
- configMapRef:
name: special-config
</code></pre>
<p><strong>output:</strong></p>
<pre><code>$ kubectl logs test-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
SAMPLE_GROUPA_PARAM1=VALUE1BASE
SAMPLE_GROUPA_PARAM2=VALUE2BASE
HOSTNAME=test-pod
...
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
SERVER_GREETING=GREETING-SERVICE
SERVER_PORT=8080
SERVER_NAME=LOCALHOST
</code></pre>
<p>See <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables" rel="nofollow noreferrer">Configure all key-value pairs in a ConfigMap as container environment variables</a></p>
|
<p>I want to setup <code>statsd-exporter</code> as DaemonSet on my Kubernetes cluster. It exposes a UDP port 9125, on which applications can send metrics using statsD client library. <code>Prometheus</code> crawlers can crawl this exporter for application or system metrics. I want to send the metrics to the UDP server running in the exporter on port 9125. I have two options:</p>
<ol>
<li><p>Expose a service as <code>ClusterIP</code> for the DaemonSet and then configure the statsD clients to use that IP and Port for sending metrics</p></li>
<li><p>Make the <code>statsd-exporter</code> run on <code>hostNetwork</code>, and somehow enable the pods to send metrics to <code>exporter</code> running on the same node. </p></li>
</ol>
<p>Somehow, option 2 seems better, since my pods will be sending metrics to an exporter running on the same node, but I am not able to send metrics to the local pod of <code>statsd-exporter</code> since I don't have the IP of the node the pod is running on.</p>
<p>Can you please compare the pros and cons of both methods, and suggest how can I know the IP address of Node on which the pod is running along with the exporter. </p>
<p><strong>EDIT 1</strong></p>
<p>I am able to get the Node IP by adding the environment variable.</p>
<pre><code>- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
</code></pre>
<p>I still need clarity on which will be the better approach for setting this up: exposing a service of type <code>ClusterIP</code> and then using the <code>HOST:PORT</code> from the environment variable in the pod, or using <code>hostNetwork: true</code> in the pod spec and then accessing it via the <code>NODE_IP</code> environment variable. Does ClusterIP guarantee that the packet will be routed to the same node as the pod sending the packet?</p>
<p><strong>EDIT 2</strong></p>
<p>I explored <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service</a> and I think this comes closest to what I want. But I am not sure that the DNS resolution will return the local node IP as the first result in <code>nslookup</code>. </p>
| <p>I would suggest one of the two approaches below; both have their pros and cons.</p>
<ul>
<li>Fast, may not be entirely secure.</li>
</ul>
<p>DaemonSet using <code>hostPort</code>. It would be fast as both pods would be on the same node, but the statsd port would be exposed. (You need to secure statsd in some other way.)</p>
<ul>
<li>Not as fast as <code>hostPort</code> but secure</li>
</ul>
<p>Exposing a service and using the service DNS to connect (<code>servicename.namespace.svc.cluster.local</code>). Not as fast as the <code>hostPort</code>, as there is no way to reach a specific pod, but secure as no one from outside the cluster can hit statsd.</p>
<p>More details: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#communicating-with-daemon-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#communicating-with-daemon-pods</a></p>
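<p>A sketch of the <code>hostPort</code> variant on the DaemonSet container (image and names are assumptions); combined with the <code>status.hostIP</code> fieldRef from the question, application pods can then reach the node-local exporter at <code>$NODE_IP:9125</code>:</p>
<pre><code>containers:
- name: statsd-exporter
  image: prom/statsd-exporter
  ports:
  - name: statsd
    containerPort: 9125
    hostPort: 9125
    protocol: UDP
</code></pre>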
|
<p>I am unable to patch my statefulset to use a <code>RollingUpdate</code> strategy. </p>
<p>(Encountered while working through the "StatefulSet Basics" tutorial <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update" rel="nofollow noreferrer">here</a>)</p>
<pre><code>$ kubectl patch statefulset web -p '{"spec":{"strategy":{"type":"RollingUpdate"}}}'
statefulset "web" not patched
</code></pre>
<p>I wish <code>kubectl patch</code> would return more info as to the reason the statefulset could not be patched. </p>
<p><code>kubectl edit</code> tells me...</p>
<blockquote>
<p>found invalid field updateStrategy for v1beta1.StatefulSetSpec </p>
</blockquote>
<p>But I am not sure I put the key and value in the proper place to be sure this is the same issue <code>patch</code> is encountering.</p>
<p>How do tell my statefulset to use a RollingUpdate strategy?</p>
<p>To reproduce this issue just follow the Kubernetes tutorial here: <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/</a></p>
| <p>It's better to apply the changes in your YAML file directly. Note that the field is <code>updateStrategy</code> (not <code>strategy</code>), and the <code>v1beta1</code> StatefulSet spec from your error message doesn't have it — use <code>apps/v1</code> (or <code>apps/v1beta2</code>):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx-sts
spec:
serviceName: "nginx-headless"
replicas: 3
#podManagementPolicy: Parallel
selector:
matchLabels:
run: nginx-sts-demo
updateStrategy:
rollingUpdate:
partition: 0 #for full partition update
type: RollingUpdate
</code></pre>
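<p>If you'd rather keep using <code>kubectl patch</code>, note that the patch in the question targets the wrong key: StatefulSets nest the update strategy under <code>spec.updateStrategy</code> (<code>spec.strategy</code> is the Deployment field name). A sketch, assuming a current <code>apps</code> API version:</p>
<pre><code>kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
</code></pre>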
|
<p>I am trying to delete the entire kubernetes that created for my CI/CD pipeline R&D. So for deleting the cluster and everything I run the following command,</p>
<pre><code>kubectl config delete-cluster <cluster-name>
kubectl config delete-context <Cluster-context>
</code></pre>
<p>To make sure that the cluster is deleted, I built the Jenkins pipeline job again. I found that it is still deploying with the updated changes.</p>
<p>When I run the command "kubectl config view", I found the following result,</p>
<pre><code>docker@mildevdcr01:~$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users: []
docker@mildevdcr01:~$
</code></pre>
<p>My Spring Boot microservice is still deploying to the cluster with updated changes.</p>
<p>I created the Kubernetes cluster using kubespray tool that I got reference from Github:</p>
<p><a href="https://github.com/kubernetes-incubator/kubespray.git" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/kubespray.git</a></p>
<p>What do I need to do for the deletion of everything that I created for this Kubernetes cluster? I need to remove everything including master node.</p>
| <p>If you set up your cluster using Kubespray, you ran the whole installation using Ansible, so to delete the cluster you have to use it too.</p>
<p>But you can also reset the entire cluster for fresh installation:</p>
<pre><code>$ ansible-playbook -i inventory/mycluster/hosts.ini reset.yml
</code></pre>
<p>Remember to keep the “hosts.ini” updated properly.</p>
<p>You can remove nodes from your cluster one by one by adding the specific node to the <code>[kube-node]</code> section in the <code>inventory/mycluster/hosts.ini</code> file (your hosts file) and running:</p>
<pre><code> $ ansible-playbook -i inventory/mycluster/hosts.ini remove-node.yml
</code></pre>
<p>KubeSpray documentation: <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md" rel="noreferrer">kubespray</a>.</p>
<p>Useful articles: <a href="https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product" rel="noreferrer">kubespray-steps</a>, <a href="https://shadow-soft.com/kubernetes-ansible-kubespray/" rel="noreferrer">kubespray-ansible</a>.</p>
|
<p>I have to back up my Kubernetes cluster.
I chose etcd backup with Rancher RKE, but after restoring the backup I get many different errors on pods like Calico, nginx-ingress-controller, kube-dns, and so on.</p>
<p>Is etcd backup reliable?
What are the best backup tools for Kubernetes?</p>
<p>I tried the step-by-step restore from:
<a href="https://rancher.com/docs/rancher/v2.x/en/backups/restorations/ha-restoration/#4-restore-database" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.x/en/backups/restorations/ha-restoration/#4-restore-database</a></p>
| <p>One of the most popular is <a href="https://velero.io/docs/v1.1.0/" rel="nofollow noreferrer">Velero</a>, which gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes. You can run Velero with a cloud provider or on-premises. It is well documented. You can check the <a href="https://velero.io/docs/v1.1.0/install-overview/" rel="nofollow noreferrer">installation overview</a> for details.</p>
<p>You can also check this <a href="https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487" rel="nofollow noreferrer">tutorial</a> on how to start with Velero. If you have more questions, feel free to ask.</p>
|
<p>I want to automate the deployment procedure of my kubernetes pod manifest file through Bazel. I have gone through <code>k8s_object</code> in Bazel which takes information of the k8s context, cluster and kubeconfig file , not sure how i can make use of it to deploy my application in k8s tenant.</p>
| <p>You need to use the <a href="https://github.com/bazelbuild/rules_k8s#k8s_object" rel="nofollow noreferrer">k8s_object</a> to interact with Kubernetes cluster.</p>
<p>You can <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">create a Deployment</a> that will deploy your application to the cluster:</p>
<p>Here is an example <code>nginx-deployment.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
<p>You have to use that <code>yaml</code> file as a template in the Bazel <code>k8s_object</code> rule, where <code>kind</code> is the Kubernetes object kind, so it would look like this:
<code>k8s_object(name = "nginx", kind = "deployment", template = "nginx-deployment.yaml")</code></p>
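<p>A fuller <code>BUILD</code> file sketch (the target name and the <code>cluster</code> value are assumptions — <code>cluster</code> should match a context name in your kubeconfig):</p>
<pre><code>load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")

k8s_object(
    name = "nginx",
    kind = "deployment",
    # hypothetical kubectl context; adjust to your cluster
    cluster = "gke_my-project_us-central1-a_my-cluster",
    template = ":nginx-deployment.yaml",
)
</code></pre>
<p>This generates runnable targets such as <code>bazel run :nginx.create</code>, <code>:nginx.apply</code> and <code>:nginx.delete</code> that act against the configured cluster.</p>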
|
<p>I was playing around in minikube and installed the wrong version of istio. I ran:</p>
<pre><code>kubectl apply -f install/kubernetes/istio-demo-auth.yaml
</code></pre>
<p>instead of:</p>
<pre><code>kubectl apply -f install/kubernetes/istio-demo.yaml
</code></pre>
<p>I figured I would just undo it and install the right one.</p>
<p>But I cannot seem to find an <code>unapply</code> command.</p>
<p><strong>How do I <em>undo</em> a "kubectl apply" command?</strong></p>
| <p>One way would be <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete" rel="noreferrer"><code>kubectl delete -f <filename></code></a>, but it implies a few things:</p>
<ol>
<li><p>The resources were first created. It simply removes all of those, if you really want to "revert to the previous state" I'm not sure there are built-in tools in Kubernetes to do that (so you really would restore from a backup, if you have one)</p></li>
<li><p>The containers did not modify the host machines: containers may mount the root filesystem and change it, or modify kernel subsystems (iptables, etc.). The <code>delete</code> command would not revert any of that either, and in that case you really need to check the documentation for the product to see if it offers any official way to guarantee a proper cleanup</p></li>
</ol>
|
<p>I've been through a huge amount of stuff on <a href="https://kubernetes.io/docs/" rel="nofollow noreferrer">https://kubernetes.io/docs/</a> and just want to be able to find definitions of what fields I need for what K8s structures I'm trying to develop. I want to be able to create persistent volumes and understand what the options are and be able to encode that in a sensible way. Surely these are somewhere? </p>
<p>I get this is a terrible question, but I feel like I'm trying to code these manifests up with a mixture of random rubbish from Google and examples, which I'm pretty sure is a terrible way to develop production critical infrastructure. </p>
| <p>You can check the resource spec for Kubernetes by consulting the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15" rel="nofollow noreferrer">Web API reference</a> (this link is for v1.15).</p>
<p>If you do not want to leave the CLI, you can also use <code>kubectl explain</code>, for instance:</p>
<pre class="lang-sh prettyprint-override"><code># check description of a persistent volume spec
$ kubectl explain persistentvolume.spec
# print the whole field tree in one go
$ kubectl explain persistentvolume.spec --recursive
</code></pre>
|
<p>I have an environment where I can simply push images (created with Jib) to a local repository. I now want to be able to deploy this on Kubernetes, but from the "safety" of Maven.</p>
<p>I know I can spin up some Skaffold magic, but I don't like having it installed separately. Is there some Jib-Skaffold workflow I can use to continuously force Skaffold to redeploy on source change (without running it on the command line)?</p>
<p>Is there some Skaffold plugin? I really like what they have <a href="https://www.youtube.com/watch?v=H6gR_Cv4yWI&t=1219s" rel="nofollow noreferrer">here</a>, but proposed kubernetes-dev-maven-plugin is probably internal only.</p>
| <p>Skaffold can <a href="https://skaffold.dev/docs/workflows/dev/" rel="nofollow noreferrer">monitor your local code and detect changes that will trigger a build and deployment</a> in your cluster. This is built-in on Skaffold using the <code>dev</code> mode so it solves the <em>redeploy on source change</em> part.</p>
<p>As for the workflow, Jib is a <a href="https://skaffold.dev/docs/pipeline-stages/builders/jib/" rel="nofollow noreferrer">supported builder for Skaffold</a> so the same dynamic applies.</p>
<p>Although these features automate the tasks, it is still necessary to run it once with <code>skaffold dev</code> and let it run in the "background".</p>
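<p>A minimal <code>skaffold.yaml</code> sketch wiring Jib into the dev loop (the image name is a placeholder, and the exact Jib builder field name varies slightly between Skaffold versions):</p>
<pre><code>apiVersion: skaffold/v1beta17
kind: Config
build:
  artifacts:
  - image: my-registry/my-app   # hypothetical image name
    jibMaven: {}                # build with the Jib Maven plugin
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
</code></pre>
<p>Running <code>skaffold dev</code> with this file rebuilds the Jib image and redeploys the manifests on every source change.</p>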
|
<p>I am trying to put some kind of authenticator in front of my Kubernetes applications. I have been using the nginx-ldap-auth image with traditional applications in Docker containers.
But when on Kubernetes, exposing applications via NodePort services, what is the best way to put authentication around them?</p>
<p>Dex seems to be an authentication solution for Kubernetes as a whole, but does it also help in authenticating web UI services hosted on Kubernetes?</p>
| <p>Kubernetes provides a few fundamental <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authentication-strategies" rel="nofollow noreferrer">Authentication</a> concepts that manage the <em>access-control</em> function. <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="nofollow noreferrer">OpenID Connect</a>, as part of the authentication model, is a flexible way to handle ID-token-based verification of user identity through a variety of <em>Identity Provider</em> protocols like <a href="https://oauth.net/2/" rel="nofollow noreferrer">OAuth2</a>; however, K8s doesn't ship an OpenID Identity Provider in front of the cluster itself.</p>
<p><a href="https://github.com/dexidp/dex/blob/master/Documentation/kubernetes.md" rel="nofollow noreferrer">Dex</a> as OpenID service can be used for authentication purposes to Kubernetes API server through <em>OpenID K8s authentication <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#configuring-the-api-server" rel="nofollow noreferrer">plugin</a></em>, however hosted in Kubernetes web application needs to be supplied with any of OAuth2 client in order to determine user identity and obtain Token ID as described <a href="https://github.com/dexidp/dex/blob/master/Documentation/using-dex.md" rel="nofollow noreferrer">here</a>.</p>
<p>Assuming that you have exposed web application running on K8s cluster, <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource might extend L7 network features for the target application service, like Load balancing, SSL/TLS termination, network traffic routing, etc.; for that purpose <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a> has to be implemented in K8s cluster, thus all HTTP/HTTPS requests will be routed and processed according to specified rules inside Ingress object. </p>
<p>Go further, and searching for <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">NGINX</a> <em>Ingress Controller</em>, gives you an opportunity to adjust or extend some significant functionality of typical Ingress Controller via <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#annotations" rel="nofollow noreferrer">Annotations</a> and apply i.e. <a href="https://github.com/pusher/oauth2_proxy" rel="nofollow noreferrer">oauth2_proxy</a> as external authentication provider to handle user request identification on Ingress object, as described in Kubernetes dashboard <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/auth/oauth-external-auth" rel="nofollow noreferrer">example</a>.</p>
<p>By the way, <code>nginx-ldap-auth</code> module seems to be compatible with <em>NGINX Ingress Controller</em> as well, hence you can check it <a href="https://github.com/tiagoapimenta/nginx-ldap-auth" rel="nofollow noreferrer">tiagoapimenta/nginx-ldap-auth</a>.</p>
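<p>For illustration, the external-auth annotations on an Ingress look roughly like this (hostnames and service names are placeholders, and at the time of writing the Ingress API is still at <code>networking.k8s.io/v1beta1</code>):</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Every request is first sent to this endpoint; a 2xx response lets it through
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    # Unauthenticated users are redirected here to sign in
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
</code></pre>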
|
<p>I want to create labels for each node under a node pool, but I can't find this option in the Azure CLI. Can someone help me with this?</p>
<p>Expected:
be able to label node pools, which would help with autoscaling and pod scheduling.</p>
| <p>Adding labels is a feature that is still in progress (tracked here: <a href="https://github.com/Azure/AKS/issues/1088" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/1088</a>).</p>
<p>However, you can add Taints using ARM: <a href="https://learn.microsoft.com/en-us/rest/api/aks/agentpools/createorupdate#examples" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/rest/api/aks/agentpools/createorupdate#examples</a> or Terraform: <a href="https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#node_taints" rel="nofollow noreferrer">https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html#node_taints</a> (looks like the CLI still lacks the functionality).</p>
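<p>For reference, a taint on an AKS node pool in Terraform is just a list on the pool block; this is a sketch with placeholder values, and the exact block name (<code>agent_pool_profile</code> vs. a separate node pool resource) depends on your provider version:</p>
<pre><code>resource "azurerm_kubernetes_cluster" "example" {
  # ... name, location, resource_group_name, dns_prefix, service_principal ...

  agent_pool_profile {
    name        = "tainted"
    count       = 1
    vm_size     = "Standard_DS2_v2"
    node_taints = ["dedicated=gpu:NoSchedule"]   # key=value:effect
  }
}
</code></pre>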
|
<p>Is it possible to generate yaml with kubernetes kubectl command ? to clarify - I'm not talking about generating yaml from existing deployments like kubectl get XXXX -o yaml, but merely about generating yamls for the very first time for pod, service, ingress, etc. </p>
<p>PS There is a way to get yaml files from kubernetes.io site ( <a href="https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/simple_deployment.yaml" rel="noreferrer">1</a> , <a href="https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/update_deployment.yaml" rel="noreferrer">2</a> ) but I am looking if there is a way to generate yamls templates with kubectl only.</p>
| <p>There's the <code>create</code> command in <code>kubectl</code> that does the trick (it replaced the <code>run</code> command used in the past): let's imagine you want to create a <em>Deployment</em> running a <em>busybox</em> Docker image.</p>
<pre><code># kubectl create deployment my-deployment --image=busybox --dry-run=client --output=yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-deployment
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-deployment
    spec:
      containers:
      - image: busybox
        name: busybox
        resources: {}
status: {}
</code></pre>
<p>Let's analyze each parameter:</p>
<ul>
<li><code>my-deployment</code> is the <em>Deployment</em> name you chose (resource names must be valid DNS labels, so lowercase alphanumerics and dashes only)</li>
<li><code>--image</code> is the Docker image you want to deploy</li>
<li><code>--dry-run=client</code> won't execute the resource creation and is used mainly for validation (use <code>--dry-run=true</code> on older versions of kubectl). Neither <code>client</code> nor <code>server</code> will actually create the resource, but with <code>server</code> the request is submitted to the API server, which returns an error if the resource could not be created (e.g. it already exists). The difference is very subtle.</li>
<li><code>--output=yaml</code> prints to <em>standard output</em> the YAML definition of the <em>Deployment</em> resource.</li>
</ul>
<p>Note that you can only do this with a few default Kubernetes resource types:</p>
<pre><code># kubectl create
clusterrole Create a ClusterRole.
clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole
configmap Create a configmap from a local file, directory or literal value
deployment Create a deployment with the specified name.
job Create a job with the specified name.
namespace Create a namespace with the specified name
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass Create a priorityclass with the specified name.
quota Create a quota with the specified name.
role Create a role with single rule.
rolebinding Create a RoleBinding for a particular Role or ClusterRole
secret Create a secret using specified subcommand
service Create a service using specified subcommand.
serviceaccount Create a service account with the specified name
</code></pre>
<p>This way, you can render the template without having to deploy the resource first.</p>
|
<p>I would like to be able to programmatically query Kubernetes to find overcommitted nodes.</p>
<p>If I do <code>kubectl describe nodes</code>, I get human-readable output including information about resource usage that I'm after, e.g.</p>
<pre><code>Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests              Limits
  --------  --------              ------
  cpu       786m (40%)            5078m (263%)
  memory    8237973504500m (74%)  13742432Ki (126%)
</code></pre>
<p>However, <code>kubectl describe</code> doesn't support JSON or YAML output, and <code>kubectl get nodes -ojson</code> doesn't include the allocated resource stats. Is there any other way to access this information?</p>
| <p>If you run any <code>kubectl</code> command with the <code>--v=6</code> option, the output will include the Kubernetes API calls that produce it.</p>
<p>In the case of <code>kubectl describe nodes NODE</code> you will see an <code>api/v1/pods</code> request that selects the pods on the node and filters out some "not running" phases:</p>
<pre><code>I0828 13:44:29.310208 55233 round_trippers.go:438] GET https://kubernetes.docker.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddocker-desktop%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded 200 OK in 4 milliseconds
</code></pre>
<p>If you replay this request with authentication information from your <code>~/.kube/config</code> file, you can get the same data. The example below uses CA/cert/key authentication (the base64-decoded values from the kubeconfig) and <code>jq</code> to filter the output down to the <code>resources</code> component of each container spec. </p>
<pre><code>curl --cacert ~/.kube/docker-desktop.ca \
--cert ~/.kube/docker-desktop.cert \
--key ~/.kube/docker-desktop.key \
https://kubernetes.docker.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddocker-desktop%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded \
| jq '.items[].spec.containers[].resources'
</code></pre>
<pre><code>{}
{}
{
"limits": {
"memory": "170Mi"
},
"requests": {
"cpu": "100m",
"memory": "70Mi"
}
}
{
"limits": {
"memory": "170Mi"
},
"requests": {
"cpu": "100m",
"memory": "70Mi"
}
}
{}
{
"requests": {
"cpu": "250m"
}
}
{
"requests": {
"cpu": "200m"
}
}
{}
{
"requests": {
"cpu": "100m"
}
}
{}
{}
</code></pre>
<p>Running these calls and filters will generally be easier with one of the <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">kubernetes API clients</a> if you are regularly going to this level. </p>
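<p>If you go the API-client route, the aggregation itself is simple. The sketch below (plain Python; function and variable names are mine) sums container CPU requests per node from the JSON structure that the pods endpoint above returns. In practice you would feed it the output of <code>kubectl get pods -A -o json</code> or the client library's response instead of the embedded sample:</p>
<pre><code>import json
from collections import defaultdict

def parse_cpu(quantity):
    """Convert a Kubernetes CPU quantity ('250m', '1', '1.5') to millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def requests_per_node(pod_list):
    """Sum CPU requests (millicores) per node over a v1.PodList-shaped dict."""
    totals = defaultdict(int)
    for pod in pod_list["items"]:
        node = pod["spec"].get("nodeName", "(unscheduled)")
        for container in pod["spec"]["containers"]:
            cpu = container.get("resources", {}).get("requests", {}).get("cpu")
            if cpu:
                totals[node] += parse_cpu(cpu)
    return dict(totals)

# Embedded sample standing in for `kubectl get pods -A -o json` output
sample = json.loads("""
{"items": [
  {"spec": {"nodeName": "node-a",
            "containers": [{"resources": {"requests": {"cpu": "250m"}}},
                           {"resources": {}}]}},
  {"spec": {"nodeName": "node-a",
            "containers": [{"resources": {"requests": {"cpu": "100m"}}}]}},
  {"spec": {"nodeName": "node-b",
            "containers": [{"resources": {"requests": {"cpu": "1"}}}]}}
]}
""")

print(requests_per_node(sample))  # {'node-a': 350, 'node-b': 1000}
</code></pre>
<p>Comparing these sums against each node's <code>status.allocatable</code> then tells you which nodes are overcommitted; memory quantities would need an analogous parser for the <code>Ki</code>/<code>Mi</code>/<code>Gi</code> suffixes.</p>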
|