| Question | QuestionAuthor | Answer | AnswerAuthor |
|---|---|---|---|
<p>I am working on a Go application where I want to use the mount-utils package: <a href="https://pkg.go.dev/k8s.io/utils/mount" rel="nofollow noreferrer">https://pkg.go.dev/k8s.io/utils/mount</a></p>
<p>This package parses the /proc/mounts file instead of /proc/self/mountinfo.</p>
<p>How do I figure out the value of the mount source using this package? That piece of information is missing from the /proc/mounts file compared to /proc/self/mountinfo.</p>
|
codego123
|
<p><a href="https://pkg.go.dev/k8s.io/utils/mount#ParseMountInfo" rel="nofollow noreferrer">mount.ParseMountInfo</a> parses <code>/proc/&lt;pid&gt;/mountinfo</code>.</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
	"fmt"

	"k8s.io/utils/mount"
)

func main() {
	mounts, err := mount.ParseMountInfo("/proc/self/mountinfo")
	if err != nil {
		panic(err)
	}
	for _, m := range mounts {
		fmt.Printf("%-100s%s\n", m.MountPoint, m.Source)
	}
}
</code></pre>
<p>By the way, the package <code>k8s.io/utils/mount</code> has been moved to a new location; use <code>k8s.io/mount-utils</code> instead.</p>
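<p>For reference, the <code>Source</code> value comes from the field right after the filesystem type, which follows the <code>-</code> separator in each mountinfo line. A minimal stdlib-only sketch of that layout (the sample line below is illustrative, not from a real system):</p>

```go
package main

import (
	"fmt"
	"strings"
)

// parseLine pulls the mount point and source out of a single
// /proc/self/mountinfo line. Fields before the "-" separator:
// mount ID, parent ID, major:minor, root, mount point, options, ...
// Fields after it: filesystem type, source, super options.
func parseLine(line string) (mountPoint, source string, ok bool) {
	fields := strings.Fields(line)
	if len(fields) < 5 {
		return "", "", false
	}
	for i, f := range fields {
		if f == "-" && i+2 < len(fields) {
			return fields[4], fields[i+2], true
		}
	}
	return "", "", false
}

func main() {
	// Illustrative line only.
	line := "36 35 98:0 / / rw,noatime master:1 - ext4 /dev/root rw"
	mp, src, _ := parseLine(line)
	fmt.Println(mp, src) // prints "/ /dev/root"
}
```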
|
Zeke Lu
|
<p>I would like to be able to type</p>
<p><code>ibmcloud ks</code> TAB</p>
<p>with the API giving me options to choose from, the same as the kubectl autocomplete function.
I have seen this somewhere before but can't seem to find any documentation, so maybe it was a tweak.</p>
<p>Anyone have any solutions or said tweaks?
The official docs: <a href="https://cloud.ibm.com/docs/containers?topic=containers-cli-plugin-kubernetes-service-cli" rel="nofollow noreferrer">IBM Kubernetes Service Documentation</a></p>
<p>Cheers (I'm using zsh on Ubuntu 20.04)</p>
|
Lorberta
|
<p>Here's a <a href="https://cloud.ibm.com/docs/cli/reference/ibmcloud?topic=cli-shell-autocomplete#shell-autocomplete-linux" rel="nofollow noreferrer">link</a> to the official documentation you are looking for.</p>
<p><strong>For Linux or macOS</strong></p>
<p>Add the following line to your <code>~/.zshrc</code>:</p>
<pre><code>source /usr/local/ibmcloud/autocomplete/zsh_autocomplete
</code></pre>
<p>Once sourced, the output of <code>ibmcloud ks</code> TAB will be</p>
<p><a href="https://i.stack.imgur.com/3YMck.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3YMck.png" alt="enter image description here" /></a></p>
|
Vidyasagar Machupalli
|
<p>I've done quite a bit of searching and cannot seem to find anyone that shows a resolution to this problem.</p>
<p>I'm getting intermittent 111 Connection refused errors on my kubernetes clusters. It seems that about 90% of my requests succeed and the other 10% fail. If you "refresh" the page, a previously failed request will then succeed. I have 2 different Kubernetes clusters with the same exact setup both showing the errors.</p>
<p>This looks to be very close to what I am experiencing. I did install my setup onto a new cluster, but the same problem persisted:
<a href="https://stackoverflow.com/questions/58401610/kubernetes-clusterip-intermittent-502-connection-refused">Kubernetes ClusterIP intermittent 502 connection refused</a></p>
<p><strong>Setup</strong></p>
<ul>
<li>Kubernetes Cluster Version: 1.18.12-gke.1206</li>
<li>Django Version: 3.1.4</li>
<li>Helm to manage kubernetes charts</li>
</ul>
<p><strong>Cluster Setup</strong></p>
<p>Kubernetes nginx ingress controller that serves web traffic into the cluster:
<a href="https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke</a></p>
<p>From there I have 2 Ingresses defined that route traffic based on the referrer url.</p>
<ol>
<li>Stage Ingress</li>
<li>Prod Ingress</li>
</ol>
<p><strong>Ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: potr-tms-ingress-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    # this line below doesn't seem to have an effect
    # nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "100M"
    cert-manager.io/cluster-issuer: "letsencrypt-{{ .Values.environment }}"
spec:
  rules:
    - host: {{ .Values.ingress_host }}
      http:
        paths:
          - path: /
            backend:
              serviceName: potr-tms-service-{{ .Values.environment }}
              servicePort: 8000
  tls:
    - hosts:
        - {{ .Values.ingress_host }}
        - www.{{ .Values.ingress_host }}
      secretName: potr-tms-{{ .Values.environment }}-tls
</code></pre>
<p>These ingresses route to 2 services that I have defined for prod and stage:</p>
<p><strong>Service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: potr-tms-service-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
spec:
  type: ClusterIP
  ports:
    - name: potr-tms-service-{{ .Values.environment }}
      port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: potr-tms-{{ .Values.environment }}
</code></pre>
<p>These 2 services route to deployments that I have for both prod and stage:</p>
<p><strong>Deployment</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: potr-tms-deployment-{{ .Values.environment }}
  namespace: {{ .Values.environment }}
  labels:
    app: potr-tms-{{ .Values.environment }}
spec:
  replicas: {{ .Values.deployment_replicas }}
  selector:
    matchLabels:
      app: potr-tms-{{ .Values.environment }}
  strategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
      labels:
        app: potr-tms-{{ .Values.environment }}
    spec:
      containers:
        - command: ["gunicorn", "--bind", ":8000", "config.wsgi"]
          # - command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
          envFrom:
            - secretRef:
                name: potr-tms-secrets-{{ .Values.environment }}
          image: gcr.io/potrtms/potr-tms-{{ .Values.environment }}:latest
          name: potr-tms-{{ .Values.environment }}
          ports:
            - containerPort: 8000
          resources:
            requests:
              cpu: 200m
              memory: 512Mi
      restartPolicy: Always
      serviceAccountName: "potr-tms-service-account-{{ .Values.environment }}"
status: {}
</code></pre>
<p><strong>Error</strong>
This is the error that I'm seeing inside of my ingress controller logs:
<a href="https://i.stack.imgur.com/mRAki.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mRAki.png" alt="Error log" /></a></p>
<p>This seems pretty clear: if my deployment pods were failing or showing errors, they would be "unavailable" and the service would not be able to route traffic to them. To try to debug this, I increased my deployment resources and replica counts. The amount of web traffic to this app is pretty low, though: ~10 users.</p>
<p><strong>What I've Tried</strong></p>
<ol>
<li>I tried using a completely different ingress controller <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></li>
<li>Increasing deployment resources / replica counts (seems to have no effect)</li>
<li>Installing my whole setup on a brand new cluster (same results)</li>
<li>Restarting the ingress controller / deleting and reinstalling it</li>
<li>Potentially it sounds like this could be a Gunicorn problem. To test, I tried starting my pods with <code>python manage.py runserver</code>; the problem remained.</li>
</ol>
<p><strong>Update</strong></p>
<p>Raising the pod counts seems to have helped a little bit.</p>
<ul>
<li>deployment replicas: 15</li>
<li>cpu request: 200m</li>
<li>memory request: 512Mi</li>
</ul>
<p>Some requests do fail still though.</p>
|
Kris Tryber
|
<p>Did you find a solution to this? I am seeing something very similar on a minikube setup.</p>
<p>In my case, I believe I also see the nginx controller restarting after the 502. The 502 is intermittent, frequently the first access fails, then reload works.</p>
<p>The best idea I've found so far is to increase the Nginx timeout parameter, but I have not tried that yet. Still trying to search out all options.</p>
|
bearcat
|
<p>I would like to run a command to clone a script from a remote repository before running <code>skaffold dev</code>. I need to either somehow inject a <code>git clone</code> command, or put the <code>git clone</code> command and the corresponding arguments in a shell script and run that shell script with Skaffold.</p>
<p>From the Skaffold workflow point of view, this step should run before the build. I am using Jib for the build phase, and it appears that the Jib stage does not give me any way to run a script before the actual build. I don't know if I can add a new phase, like <code>pre-build</code>, to the Skaffold lifecycle. One solution that came to mind is to use a <code>custom</code> build instead of Jib and put all pre-build commands, as well as the Jib-related commands, in a single script to run. This approach probably works, but won't be very convenient. I was wondering if there is a better way to do this with Skaffold.</p>
<pre><code>build:
  artifacts:
    - image: gcr.io/k8s-skaffold/example
      custom:
        buildCommand: ./prebuild-and-build.sh
</code></pre>
|
Ali
|
<p>Skaffold supports lifecycle hooks, which allow running custom scripts before/after a build: <a href="https://skaffold.dev/docs/pipeline-stages/lifecycle-hooks/" rel="nofollow noreferrer">https://skaffold.dev/docs/pipeline-stages/lifecycle-hooks/</a></p>
<p>With this, you should be able to add a stanza in your skaffold.yaml similar to this:</p>
<pre><code>build:
  artifacts:
    - image: gcr.io/k8s-skaffold/example
      hooks:
        before:
          - command: ["sh", "-c", "./prebuild-and-build.sh"]
            os: [darwin, linux]
          # - command: # ...TODO
          #   os: [windows]
</code></pre>
<p>NOTE: this feature is relatively new; be sure to use a recent Skaffold version (v1.33.0 at the time of this writing) for this feature.</p>
|
aaron-prindle
|
<p>I am not able to communicate between two services.</p>
<p>post-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector
        tier: backend
    spec:
      containers:
        - name: python-web-pod
          image: sakshiarora2012/python-backend:v10
          ports:
            - containerPort: 5000
</code></pre>
<p>post-deployment2.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment2
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector2
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector2
        tier: backend
    spec:
      containers:
        - name: python-web-pod2
          image: sakshiarora2012/python-backend:v8
          ports:
            - containerPort: 5000
</code></pre>
<p>post-service.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  selector:
    app: python-web-selector
    tier: backend
  ports:
    - port: 5000
      nodePort: 30400
  type: NodePort
</code></pre>
<p>post-service2.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: python-data-service2
spec:
  selector:
    app: python-web-selector2
    tier: backend
  ports:
    - port: 5000
  type: ClusterIP
</code></pre>
<p>When I try to ping from one container to another, it fails:</p>
<pre><code>root@python-data-deployment-7bd65dc685-htxmj:/project# ping python-data-service.default.svc.cluster.local
PING python-data-service.default.svc.cluster.local (10.107.11.236) 56(84) bytes of data.
^C
--- python-data-service.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 139ms
</code></pre>
<p>The DNS entries show up correctly:</p>
<pre><code>sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service.default.svc.cluster.local
Address: 10.107.11.236
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service2
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service2.default.svc.cluster.local
Address: 10.103.97.40
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dnsutils 1/1 Running 0 5m54s 172.17.0.9 minikube <none> <none>
python-data-deployment-7bd65dc685-htxmj 1/1 Running 0 47m 172.17.0.6 minikube <none> <none>
python-data-deployment2-764744b97d-mc9gm 1/1 Running 0 43m 172.17.0.8 minikube <none> <none>
python-db-deployment-d54f6b657-rfs2b 1/1 Running 0 44h 172.17.0.7 minikube <none> <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service
Name: python-data-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service","namespace":"default"},"spec":{"ports":[{"no...
Selector: app=python-web-selector,tier=backend
Type: NodePort
IP: 10.107.11.236
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30400/TCP
Endpoints: 172.17.0.6:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service2
Name: python-data-service2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service2","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=python-web-selector2,tier=backend
Type: ClusterIP
IP: 10.103.97.40
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.17.0.8:5000
Session Affinity: None
Events: <none>
</code></pre>
<p>I think that if the DNS entry showed an IP in the 172.17.0.x range it would work, but I am not sure why the pod IP is not showing up in the DNS entry. Any pointers?</p>
|
Sakshi Ahuja
|
<p>In order to start debugging your services I would suggest the following steps:</p>
<p>Check that your service 1 is accessible as a Pod:</p>
<p><code>kubectl run test1 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://172.17.0.6:5000</code></p>
<p>Check that your service 2 is accessible as a Pod:</p>
<p><code>kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 172.17.0.8:5000</code></p>
<p>Then, check that your service 1 is accessible as a Service using the corresponding cluster IP and then DNS Name:</p>
<p><code>kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.107.11.236:5000</code></p>
<p><code>kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service:5000</code></p>
<p>Then, check that your service 2 is accessible as a Service using the corresponding cluster IP and then DNS Name:</p>
<p><code>kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.103.97.40:5000</code></p>
<p><code>kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service2:5000</code></p>
<p>Then, if needed, check that your service 1 is accessible through its node port (you need the IP address of the node where the service is exposed; for instance, on minikube the following should work):</p>
<pre><code>wget -O - http://192.168.99.101:30400
</code></pre>
<p>From your Service manifest, I can recommend as a good practice specifying both <code>port</code> and <code>targetPort</code>, as you can see at</p>
<p><a href="https://canterafonseca.eu/kubernetes/certification/application/developer/cncf/k8s/cloud/native/computing/ckad/deployments/services/preparation-k8s-ckad-exam-part4-services.html#-services" rel="nofollow noreferrer">https://canterafonseca.eu/kubernetes/certification/application/developer/cncf/k8s/cloud/native/computing/ckad/deployments/services/preparation-k8s-ckad-exam-part4-services.html#-services</a></p>
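<p>For instance, a minimal sketch of that practice, using the port from the Service in the question:</p>

```yaml
# Sketch: declare both the Service port and the container target port.
ports:
  - port: 5000        # port the Service exposes inside the cluster
    targetPort: 5000  # port the container actually listens on
```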
<p>On the other hand, if you only need to expose one of the services to the outside world, you can create a headless service (see also my blog post above).</p>
|
Jose Manuel Cantera
|
<p>I am working on Spring Boot and Kubernetes, and I have a really simple application that connects to a Postgres database. I want to get the datasource value from a configmap and the password from a secret as a mounted file.</p>
<p>Configmap file :</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: customer-config
data:
  application.properties: |
    server.forward-headers-strategy=framework
    spring.datasource.url=jdbc:postgresql://test/customer
    spring.datasource.username=postgres
</code></pre>
<p>Secrets File :</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
data:
  spring.datasource.password: cG9zdGdyZXM=
</code></pre>
<p>deployment file :</p>
<pre><code>spec:
  containers:
    - name: customerc
      image: localhost:8080/customer
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 8282
      volumeMounts:
        - mountPath: /workspace/config/default
          name: config-volume
        - mountPath: /workspace/secret/default
          name: secret-volume
  volumes:
    - name: config-volume
      configMap:
        name: customer-config
    - name: secret-volume
      secret:
        secretName: secret-demo
        items:
          - key: spring.datasource.password
            path: password
</code></pre>
<p>If I move the spring.datasource.password property from the secret to the configmap, it works fine; if I populate its value as an environment variable, it also works fine.
But as we know, neither of those is a secure way to do this. Can someone tell me what's wrong with my file mounting for secrets?</p>
|
keepmoving
|
<p>Spring Boot 2.4 added support for <a href="https://docs.spring.io/spring-boot/docs/2.4.3/reference/htmlsingle/#boot-features-external-config-files-configtree" rel="nofollow noreferrer">importing a config tree</a>. This support can be used to consume configuration from a volume mounted by Kubernetes.</p>
<p>As an example, let’s imagine that Kubernetes has mounted the following volume:</p>
<pre><code>etc/
  config/
    myapp/
      username
      password
</code></pre>
<p>The contents of the username file would be a config value, and the contents of password would be a secret.</p>
<p>To import these properties, you can add the following to your application.properties file:</p>
<pre><code>spring.config.import=optional:configtree:/etc/config/
</code></pre>
<p>This will result in the properties <code>myapp.username</code> and <code>myapp.password</code> being set. Their values will be the contents of <code>/etc/config/myapp/username</code> and <code>/etc/config/myapp/password</code> respectively.</p>
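<p>For example, a sketch of how such a tree could be produced from the <code>secret-demo</code> Secret in the question (the mount path and volume name here are assumptions, not requirements):</p>

```yaml
volumeMounts:
  - mountPath: /etc/config
    name: secret-volume
volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:
        - key: spring.datasource.password
          # Directory levels become dots in the property name, so this
          # file surfaces as the spring.datasource.password property.
          path: spring/datasource/password
```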
|
Andy Wilkinson
|
<p>I have an EKS cluster for which I want:</p>
<ul>
<li>one Load Balancer per cluster,</li>
<li>Ingress rules to direct traffic to the right namespace and the right service.</li>
</ul>
<p>I have been following this guide : <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p>
<p>My deployments:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: IMAGENAME
          ports:
            - containerPort: 8000
              name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
        - name: bleble
          image: IMAGENAME
          ports:
            - containerPort: 8000
              name: bleble
</code></pre>
<p>the service of those deployments:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8000
  selector:
    app: bleble
  type: NodePort
</code></pre>
<p>My Load balancer:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
</code></pre>
<p>My ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: internal-lb.aws.com
      http:
        paths:
          - path: /bleble
            backend:
              serviceName: bleble-svc
              servicePort: 80
          - path: /hello-world
            backend:
              serviceName: hello-world-svc
              servicePort: 80
</code></pre>
<p>I've set up the Nginx Ingress Controller with: <code>kubectl apply -f</code> <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml" rel="noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml</a></p>
<p>I am unsure why I get a 503 Service Temporarily Unavailable for one service and a 502 for the other... I would guess it's a problem with ports or namespaces? In the guide, they don't define a namespace for the deployment...</p>
<p>Every resource creates correctly, and I think the ingress is actually working but is getting confused about where to go.</p>
<p>Thanks for your help!</p>
|
shrimpy
|
<p>In general, use <code>externalTrafficPolicy: Cluster</code> instead of <code>Local</code>. You can gain some performance (latency) improvement by using <code>Local</code>, but you need to configure those pod allocations with a lot of effort, and you will hit 5xx errors with such misconfigurations. In addition, <code>Cluster</code> is the default option for <code>externalTrafficPolicy</code>.</p>
<p>In your <code>ingress</code>, make sure the backend <code>serviceName</code> matches your actual Service name (<code>bleble-svc</code>), and set <code>servicePort</code> to 8080, since that is the port you exposed in your service configuration.</p>
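<p>For example, the relevant part of the ingress with names and ports made consistent (a sketch based on the manifests in the question):</p>

```yaml
paths:
  - path: /bleble
    backend:
      serviceName: bleble-svc
      servicePort: 8080   # must match the Service's port
  - path: /hello-world
    backend:
      serviceName: hello-world-svc
      servicePort: 8080
```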
<p>For an internal service like <code>bleble-svc</code>, a <code>ClusterIP</code> service is good enough in your case, as it does not need external access.</p>
<p>Hope this helps.</p>
|
Fei
|
<p>I am exploring using Skaffold with our EKS cluster, and I wonder whether the tool is agnostic to the cloud vendor and can work with any k8s cluster.</p>
<p>Does it have any limitations regarding e.g. shared volumes and other resources?</p>
|
Matan Tubul
|
<p>Skaffold is a tool for deploying to any Kubernetes cluster, agnostic of cloud vendor. Skaffold can be used with any Kubernetes cluster, whether it is hosted on a cloud provider like Google's GKE, Amazon EKS, or running on-premises. Skaffold does not have any specific limitations regarding shared volumes or other resources, as it is simply a tool for deploying to a Kubernetes cluster. Any limitations you may encounter would be due to the limitations of Kubernetes itself, rather than Skaffold.</p>
<p>NOTE: Skaffold does poll resources it deploys for changes so API rate limits might be a possible concern but this isn't an issue for most users.</p>
<p>Disclaimer: I am a contributor to this project.</p>
|
aaron-prindle
|
<p>This is a very weird thing.</p>
<p>I created a <strong>private</strong> GKE cluster with a node pool of 3 nodes. Then I have a replica set with 3 pods; some of these pods are scheduled to one node.</p>
<p>One of these pods always gets <code>ImagePullBackOff</code>. I checked the error:</p>
<pre><code>Failed to pull image "bitnami/mongodb:3.6": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>And the pods scheduled to the remaining two nodes work well.</p>
<p>I ssh'd to that node, ran <code>docker pull</code>, and everything was fine. I cannot find another way to troubleshoot this error.</p>
<p>I tried to <code>drain</code> or <code>delete</code> that node and let the cluster recreate it, but it is still not working.</p>
<p>Help me, please.</p>
<p>Update:
According to the GCP <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#docker_hub" rel="nofollow noreferrer">documentation</a>, a private cluster can fail to pull images from Docker Hub.</p>
<p>BUT the weirdest thing is that ONLY ONE node is unable to pull the images.</p>
|
Chao
|
<p>There was a related bug reported in <a href="https://issuetracker.google.com/issues/119820482" rel="nofollow noreferrer">Kubernetes 1.11</a>.</p>
<p>Make sure it is not your case.</p>
|
Meir Tseitlin
|
<p>I need advice on how to dockerize and run a Node.js static-content app on a K8s cluster.</p>
<p>I have static web content; I run <code>npm run build</code> in the terminal, which generates <code>/build</code>, and I point my IIS webserver to <code>/build/Index.html</code>.</p>
<p>Now, I started creating a Dockerfile. How do I point my Node.js image to serve the <code>/build/Index.html</code> file?</p>
<pre><code>FROM node:carbon
WORKDIR /app
COPY /Core/* ./app
RUN npm run build
EXPOSE 8080
CMD [ "node", ".app/build/index.html" ]
</code></pre>
<p>How can I run this app only on Node v8.9.3 and npm 5.6.0?</p>
<p>Any inputs, please?</p>
|
Abraham Dhanyaraj Arumbaka
|
<p>You can specify the version of node specifically:</p>
<pre><code>FROM node:8.9.3
</code></pre>
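<p>A fuller sketch, assuming the build output is static HTML that needs a web server to serve it (<code>node</code> cannot execute an <code>.html</code> file; the <code>http-server</code> package used here is just one option, and the <code>Core/</code> layout is taken from the question):</p>

```dockerfile
FROM node:8.9.3
# Pin npm as well, since a specific npm version (5.6.0) was requested.
RUN npm install -g npm@5.6.0 http-server
WORKDIR /app
# Assumes package.json and sources live under Core/.
COPY Core/ .
RUN npm install && npm run build
EXPOSE 8080
# Serve the static build output on port 8080.
CMD ["http-server", "build", "-p", "8080"]
```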
|
Jim B.
|
<p>I have a pod with 3 containers.</p>
<ol>
<li>.Net core REST microservice</li>
<li>.Net core Reverse Proxy</li>
<li>Istio proxy</li>
</ol>
<p>Traffic comes into the Reverse Proxy, is validated, and then proxied onto the microservice. This is my most heavily used service and it starts to have this error after running for about a day.</p>
<pre><code>System.Net.Http.HttpRequestException: Cannot assign requested address
---> System.Net.Sockets.SocketException (99): Cannot assign requested address
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean allowHttp2, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.GetHttpConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.FinishSendAsyncUnbuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
</code></pre>
<p>Restarting the pod is my only fix right now. I've goofed around looking at tcp statistics on the nodes, but it doesn't make sense that that would be the issue as killing the pod and restarting makes the problem go away.</p>
<p>I've also messed around with the httpclient available in .net core using the best practices with no change.</p>
<p>Any thoughts would be appreciated.</p>
|
Robert Smith
|
<p>Upon further gathering of clues, I learned that these errors only show up while our REST microservice restarts (due to a memory leak). The error makes sense in context and I overestimated the severity of the issue.</p>
|
Robert Smith
|
<p>I don't know if this is an error from AWS or something. I created an IAM user and gave it full admin policies. I then used this user to create an EKS cluster with the <code>eksctl</code> CLI, but when I logged in to the AWS console as the <strong>root user</strong> I got the below error while trying to access the cluster nodes.</p>
<p><em><strong>Your current user or role does not have access to Kubernetes objects on this EKS cluster
This may be due to the current user or role not having Kubernetes RBAC permissions to describe cluster resources or not having an entry in the cluster’s auth config map.</strong></em></p>
<p>I have these questions</p>
<ol>
<li>Does the root user not have full access to view every resource from the console?</li>
<li>If the above is true, does it mean that when I create a resource from the CLI I must log in with the same user to view it?</li>
<li>Or is there a way I could attach policies to the root user? I didn't see anything like that in the console.</li>
</ol>
<p>AWS itself does not recommend creating access keys for the root user and using them for programmatic access, so I'm quite confused right now.</p>
<p>All questions I have seen so far and the link to the doc <a href="https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting_iam.html#security-iam-troubleshoot-cannot-view-nodes-or-workloads" rel="noreferrer">here</a> are talking about a user or role created in the AWS IAM and not the root user.</p>
|
benyusouf
|
<p>If you're logged in with the root user and get this error, run the below command to edit the <code>aws-auth</code> configMap:</p>
<pre><code>kubectl edit configmap aws-auth -n kube-system
</code></pre>
<p>Then go down to <code>mapUsers</code> and add the following (replace <code>[account_id]</code> with your Account ID)</p>
<pre><code>mapUsers: |
  - userarn: arn:aws:iam::[account_id]:root
    groups:
      - system:masters
</code></pre>
|
Idrizi.A
|
<p>I have a set of Kubernetes pods (Kafka). They were created by Terraform but somehow they "fell" out of the state (Terraform no longer recognizes them) and are wrongly configured (I don't need them anymore anyway).</p>
<p><a href="https://i.stack.imgur.com/zyuNP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zyuNP.png" alt="enter image description here" /></a></p>
<p>I now want to remove the pods from the cluster completely. The main problem is that even after I kill/delete them, they keep being recreated/restarting.</p>
<p>I tried:</p>
<pre><code>kubectl get deployments --all-namespaces
</code></pre>
<p>and then deleted the deployment the pods belonged to with</p>
<pre><code>kubectl delete -n &lt;NS&gt; deployment &lt;DEPLOY&gt;
</code></pre>
<p>This deployment got removed correctly. Still, if I now try to remove/kill the pods (forced and with cascade), they still reappear. In the events, I can see they are recreated by the kubelet, but I don't know why, nor how I can stop this behavior.</p>
<p><a href="https://i.stack.imgur.com/4KabW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4KabW.png" alt="enter image description here" /></a></p>
<p>I also checked</p>
<pre><code>kubectl get jobs --all-namespaces
</code></pre>
<p>But there are no resources found. And also</p>
<pre><code>kubectl get daemonsets.apps --all-namespaces
kubectl get daemonsets.extensions --all-namespaces
</code></pre>
<p><a href="https://i.stack.imgur.com/T79R4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T79R4.png" alt="enter image description here" /></a></p>
<p>But I don't think that one of those is relevant for the Kafka deployment at all.</p>
<p>What else can I try to remove those pods? Any help is welcome.</p>
|
Alex
|
<p>Ok, I was able to find the root cause.</p>
<p>With:</p>
<pre><code>kubectl get all --all-namespaces
</code></pre>
<p>I looked up everything related to the name of the pods. In this case, I found services that were related. After I deleted those services, the pods did not get recreated again.</p>
<p>I still think this is not a good solution to the problem ("Just delete everything that has the same name" ...) and I would be happy if someone can suggest a better solution to resolve this.</p>
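<p>A more systematic way to find what keeps recreating a pod is to look at its <code>ownerReferences</code>, which point at the controller (ReplicaSet, StatefulSet, DaemonSet, ...) that manages it; deleting that owner stops the recreation. A sketch (pod name and namespace are placeholders):</p>
<pre><code>kubectl get pod &lt;POD&gt; -n &lt;NS&gt; -o jsonpath='{.metadata.ownerReferences}'
</code></pre>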
|
Alex
|
<p>Trying to create a Load Balancer resource with Kubernetes (for an EKS cluster). It works normally with the Label Selector, but we want to have only one LB per cluster and let an Ingress direct traffic to the services.
Here is what I currently have:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
namespace: default
name: name
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
ports:
- port: 80
type: LoadBalancer
</code></pre>
<p>This creates a LB and gives it an internal DNS, but instances never get healthy (although they are).</p>
<p>Any advice?</p>
|
shrimpy
|
<p>Per discussion in <a href="https://stackoverflow.com/questions/56459127/kubernetes-ingress-not-creating-an-lb/56459410?noredirect=1#comment99510401_56459410">another question</a> you posted, I think what you want to achieve is <code>One Load Balancer Per Cluster</code>, referring to this: <a href="https://itnext.io/save-on-your-aws-bill-with-kubernetes-ingress-148214a79dcb" rel="nofollow noreferrer">Save on your AWS bill with Kubernetes Ingress</a>.</p>
<p>To achieve this, you would need to create:</p>
<ol>
<li>A <code>Load Balancer Service</code> with <code>Nginx-Ingress-Controller</code> pod as backend.</li>
<li>Your <code>Load balancer Service</code> would have an External IP, point all your cluster traffic to that IP.</li>
<li>Ingress rules that route all cluster traffic as you wish.</li>
</ol>
<p>So your traffic would go through the following pipeline:</p>
<blockquote>
<p>all traffic -> AWS LoadBalancer -> Node1:xxxx ->
Nginx-Ingress-Controller Service -> Nginx-Ingress-Controller Pod -> Your Service1 (based on your ingress rules) ->
Your Pod</p>
</blockquote>
<p>Here is an example how to bring up a Nginx-Ingress-Controller: <a href="https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45</a></p>
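<p>For illustration, a minimal fanout Ingress in front of the controller could look like the sketch below (service names, ports and paths are placeholders, not taken from your setup):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cluster-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /service1
        backend:
          serviceName: service1
          servicePort: 80
      - path: /service2
        backend:
          serviceName: service2
          servicePort: 80
</code></pre>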
|
Fei
|
<p>The official documentation of kubernetes (<a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a>) states at some point: "3. Call Kubectl with --token being the id_token OR add tokens to .kube/config" (just search for mentioned phrase in the provided doc url to get the context).</p>
<p><a href="https://i.stack.imgur.com/uANTH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uANTH.png" alt="kubernetes doc" /></a></p>
<p>Can anyone give me example where can I "add tokens to .kube/config" directly?</p>
<p>I am in a scenario, when it is needed for me, I can access my cluster with --token inline option but I need to go with adding it to .kube/config.</p>
<p>I am trying to do sth like this but doesn't work (still need to add --token inline option, doesn't work without it):</p>
<pre><code>users:
- user:
token: >-
ey...........
</code></pre>
|
supertramp
|
<p>Yeah... the rubber duck works... 5 sec after posting the question I noticed that the "context" stuff is the key factor here: the user referenced by the cluster's context needs to match the name of a user under <code>users</code> (I was missing the "name" field for my user, matching the correct cluster context...), e.g.:</p>
<pre><code>users:
- name: shoot-x
user:
token: >-
ey
</code></pre>
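<p>For completeness, here is a sketch of how the pieces have to line up in a kubeconfig (all names are placeholders; the point is that <code>context.user</code> must match a <code>name</code> under <code>users</code>):</p>
<pre><code>clusters:
- name: shoot-x
  cluster:
    server: https://api.example.invalid
contexts:
- name: shoot-x
  context:
    cluster: shoot-x
    user: shoot-x
current-context: shoot-x
users:
- name: shoot-x
  user:
    token: ey...
</code></pre>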
|
supertramp
|
<p>Trying to mount a local drive to my minikube host (seems to be a dupe of this <a href="https://stackoverflow.com/questions/57012731/minikube-mount-crashes-mount-system-call-fails">thread</a> but no solution was provided...)</p>
<p>Using:</p>
<p>OSX 10.14.3 and minikube (using HyperVisor)</p>
<pre class="lang-sh prettyprint-override"><code>$ minikube mount --ip 192.168.64.5 --v=7 ~/Documents/projects/docker_storage/tf:/mnt/vda1/data/tf
Found binary path at /usr/local/bin/docker-machine-driver-hyperkit
Launching plugin server for driver hyperkit
Plugin server listening at address 127.0.0.1:58272
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .DriverName
📁 Mounting host path /Users/XXXXXX/Documents/projects/docker_storage/tf into VM as /mnt/vda1/data/tf ...
💾 Mount options:
▪ Type: 9p
▪ UID: docker
▪ GID: docker
▪ Version: 9p2000.L
▪ MSize: 262144
▪ Mode: 755 (-rwxr-xr-x)
▪ Options: map[]
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
🚀 Userspace file server: (minikube) Calling .GetSSHUsername
ufs starting
🛑 Userspace file server is shutdown
💣 mount failed: mount: /mnt/vda1/data/tf: mount(2) system call failed: Connection refused.
: Process exited with status 32
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
</code></pre>
<p>Expected the mount to map the local path into the host, however the connection is refused. Might this be a firewall or proxy issue?</p>
|
Nicholas Cilfone
|
<p>I'll just leave this here since I had a similar issue on Fedora 32:</p>
<pre><code>mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=38811,trans=tcp,version=9p2000.L 192.168.39.1 /data/pvtheme" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=38811,trans=tcp,version=9p2000.L 192.168.39.1 /data/pvtheme": Process exited with status 32
stdout:
stderr:
mount: /data/pvtheme: mount(2) system call failed: Connection refused.
</code></pre>
<p>I solved thanks to this comment: <a href="https://github.com/kubernetes/minikube/issues/4726#issuecomment-510217223" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/4726#issuecomment-510217223</a></p>
<p>It was a <code>firewalld</code> issue:</p>
<pre><code>$ sudo firewall-cmd --permanent --zone=libvirt --add-rich-rule='rule family="ipv4" source address="192.168.39.0/24" accept'
$ sudo firewall-cmd --reload
</code></pre>
<p>After the changes, here's my <code>firewalld</code> configuration:</p>
<pre><code>firewall-cmd --zone=libvirt --list-all
libvirt (active)
target: ACCEPT
icmp-block-inversion: no
interfaces: crc virbr0 virbr1
sources:
services: dhcp dhcpv6 dns ssh tftp
ports:
protocols: icmp ipv6-icmp
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule family="ipv4" source address="192.168.39.0/24" accept
rule priority="32767" reject
</code></pre>
|
Ricardo Zanini
|
<p>I'm a Kubernetes amateur trying to use NGINX ingress controller on GKE. I'm following <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">this</a> google cloud documentation to setup NGINX Ingress for my services, but, I'm facing issues in accessing the NGINX locations.</p>
<p><strong>What's working?</strong></p>
<ol>
<li>Ingress-Controller deployment using Helm (RBAC enabled)</li>
<li>ClusterIP service deployments</li>
</ol>
<p><strong>What's not working?</strong></p>
<ol>
<li>Ingress resource to expose multiple ClusterIP services using unique paths (fanout routing)</li>
</ol>
<p><strong>K8S Services</strong></p>
<pre><code>[msekar@ebs kube-base]$ kubectl get services -n payment-gateway-7682352
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-controller LoadBalancer 10.35.241.255 35.188.161.171 80:31918/TCP,443:31360/TCP 6h
nginx-ingress-default-backend ClusterIP 10.35.251.5 <none> 80/TCP 6h
payment-gateway-dev ClusterIP 10.35.254.167 <none> 5000/TCP 6h
payment-gateway-qa ClusterIP 10.35.253.94 <none> 5000/TCP 6h
</code></pre>
<p><strong>K8S Ingress</strong></p>
<pre><code>[msekar@ebs kube-base]$ kubectl get ing -n payment-gateway-7682352
NAME HOSTS ADDRESS PORTS AGE
pgw-nginx-ingress * 104.198.78.169 80 6h
[msekar@ebs kube-base]$ kubectl describe ing pgw-nginx-ingress -n payment-gateway-7682352
Name: pgw-nginx-ingress
Namespace: payment-gateway-7682352
Address: 104.198.78.169
Default backend: default-http-backend:80 (10.32.1.4:8080)
Rules:
Host Path Backends
---- ---- --------
*
/dev/ payment-gateway-dev:5000 (<none>)
/qa/ payment-gateway-qa:5000 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/ssl-redirect":"false"},"name":"pgw-nginx-ingress","namespace":"payment-gateway-7682352"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"payment-gateway-dev","servicePort":5000},"path":"/dev/"},{"backend":{"serviceName":"payment-gateway-qa","servicePort":5000},"path":"/qa/"}]}}]}}
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
</code></pre>
<p>Last applied configuration in the annotations (ingress description output) shows the ingress resource manifest. But, I'm pasting it below for reference</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: pgw-nginx-ingress
namespace: payment-gateway-7682352
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- backend:
serviceName: payment-gateway-dev
servicePort: 5000
path: /dev/
- backend:
serviceName: payment-gateway-qa
servicePort: 5000
path: /qa/
</code></pre>
<p><strong>Additional Info</strong></p>
<p>The services I'm trying to access are Spring Boot services that use contexts, so the root location isn't a valid endpoint.</p>
<p>The container's readiness and liveliness probes are defined accordingly.</p>
<p>For example, "payment-gateway-dev" service is using a context /pgw/v1 context, so, the deployment can only be accessed through the context. To access application's swagger spec you would use the URL</p>
<p>http://<>/pgw/v1/swagger-ui.html</p>
<p><strong>Behaviour of my deployment</strong></p>
<blockquote>
<p>ingress-controller-LB-ip = 35.188.161.171</p>
</blockquote>
<ul>
<li>Accessing ingress controller load balancer "<a href="http://35.188.161.171" rel="nofollow noreferrer">http://35.188.161.171</a>" takes me to default 404 backend</li>
<li>Accessing ingress controller load balancer health "<a href="http://35.188.161.171/healthz" rel="nofollow noreferrer">http://35.188.161.171/healthz</a>" returns 200 HTTP response as expected</li>
<li>Trying to access the services using the URLs below returns "404: page not found" error
<ul>
<li><a href="http://35.188.161.171/dev/pgw/v1/swagger-ui.html" rel="nofollow noreferrer">http://35.188.161.171/dev/pgw/v1/swagger-ui.html</a></li>
<li><a href="http://35.188.161.171/qa/pgw/v1/swagger-ui.html" rel="nofollow noreferrer">http://35.188.161.171/qa/pgw/v1/swagger-ui.html</a></li>
</ul></li>
</ul>
<p>Any suggestions about or insights into what I might be doing wrong will be much appreciated.</p>
|
manikandan sekar
|
<p>+1 for this well asked question.</p>
<p>Your setup seemed right to me. From your explanation, I could see that your services require <code>http://<>/pgw/v1/swagger-ui.html</code> as context. However, in your setup the path submitted to the service will be <code>http://<>/qa/pgw/v1/swagger-ui.html</code> if your route is <code>/qa/</code>.</p>
<p>To remove the prefix, what you would need to do is to add a <code>rewrite</code> rule to your ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: pgw-nginx-ingress
namespace: payment-gateway-7682352
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- backend:
serviceName: payment-gateway-dev
servicePort: 5000
path: /dev/(.+)
- backend:
serviceName: payment-gateway-qa
servicePort: 5000
path: /qa/(.+)
</code></pre>
<p>After this, your services should receive the correct contexts.</p>
<p>Ref:</p>
<ul>
<li>Rewrite: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md</a></li>
<li>Ingress Route Matching: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/ingress-path-matching.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/ingress-path-matching.md</a></li>
</ul>
|
Fei
|
<p>I am deploying a Flask app in a docker container with Kubernetes. I am using uwsgi to serve it. Most documentation shows deploying Flask using a WSGI server and NGINX when I look it up, but is it necessary? Can I just use uWSGI? </p>
|
CrabbyPete
|
<p>You don't <em>need</em> an nginx proxy in front of a Flask application, but there are some benefits. If you have control over the k8s cluster, I would recommend using an nginx ingress and having it route the traffic to the service that the Flask app is running in; then if you have more services, it's trivial to add them to the nginx config of the ingress.</p>
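<p>As an illustration, an Ingress routing to a Flask service could look like the sketch below (names and ports are placeholders; this assumes the nginx ingress controller is already installed in the cluster):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: flask-app   # the Service in front of your uWSGI pods
          servicePort: 5000
</code></pre>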
|
reptilicus
|
<p>Helm 3 does not provide any way to create an action.Configuration structure if the code is running from within the cluster.</p>
<p>I have tried by building my own generic flags:</p>
<pre><code>config, err := rest.InClusterConfig()
if err != nil {
panic(err)
}
insecure := false
genericConfigFlag := &genericclioptions.ConfigFlags{
Timeout: stringptr("0"),
Insecure: &insecure,
APIServer: stringptr(config.Host),
CAFile: stringptr(config.CAFile),
BearerToken: stringptr(config.BearerToken),
ImpersonateGroup: &[]string{},
Namespace: stringptr(namespace),
}
actionConfig := &action.Configuration{
RESTClientGetter: genericConfigFlag,
KubeClient: kube.New(genericConfigFlag),
Log: log.Infof,
}
</code></pre>
<p>Unfortunately, this result in a SIGSEGV error later when running <code>action.NewList(actionConfig).Run()</code>.</p>
<p>Is it the right way to define an action config for Helm 3 from within a Kubernetes cluster?</p>
|
Arkon
|
<p>This is what I did, and it works fine for me (using helm 3.2.0 level SDK code):
<strong>imports</strong></p>
<pre class="lang-golang prettyprint-override"><code>import (
"log"
"helm.sh/helm/v3/pkg/action"
"k8s.io/cli-runtime/pkg/genericclioptions"
"k8s.io/client-go/rest"
)
</code></pre>
<p><strong>ActionConfig</strong></p>
<pre class="lang-golang prettyprint-override"><code>func getActionConfig(namespace string) (*action.Configuration, error) {
actionConfig := new(action.Configuration)
var kubeConfig *genericclioptions.ConfigFlags
// Create the rest config instance with ServiceAccount values loaded in them
config, err := rest.InClusterConfig()
if err != nil {
return nil, err
}
// Create the ConfigFlags struct instance with initialized values from ServiceAccount
kubeConfig = genericclioptions.NewConfigFlags(false)
kubeConfig.APIServer = &config.Host
kubeConfig.BearerToken = &config.BearerToken
kubeConfig.CAFile = &config.CAFile
kubeConfig.Namespace = &namespace
if err := actionConfig.Init(kubeConfig, namespace, os.Getenv("HELM_DRIVER"), log.Printf); err != nil {
return nil, err
}
return actionConfig, nil
}
</code></pre>
<p><strong>Usage</strong></p>
<pre class="lang-golang prettyprint-override"><code>actionConfig, kubeConfigFileFullPath, err := getActionConfig(namespace)
listAction := action.NewList(actionConfig)
releases, err := listAction.Run()
if err != nil {
log.Println(err)
}
for _, release := range releases {
log.Println("Release: " + release.Name + " Status: " + release.Info.Status.String())
}
</code></pre>
<p>It is not much different from what you originally did, except the initialization of the actionConfig. It could also be that a newer version fixed some issues. Let me know if this works for you.</p>
|
Darshan Karia
|
<p>I've been building fat jars for spark-submits for quite a while and they work like a charm.</p>
<p>Now I'd like to deploy spark-jobs on top of kubernetes. </p>
<p>The way described on the spark site (<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html</a>) just calls a script <code>docker-image-tool.sh</code> to bundle a basic jar into a docker container. </p>
<p>I was wondering:</p>
<p>Could this be nicer by using <code>sbt-native-packager</code> in combination with <code>sbt-assembly</code> to build docker images that contain all the code needed for starting the spark driver, running the code (with all libraries bundled) and perhaps offer a way to bundle classpath libraries (like postgres jar) into a single image. </p>
<p>This way running the pod would spin up the Spark k8s master (client mode or cluster mode, whatever works best), trigger the creation of worker pods, spark-submit the local jar (with all needed libraries included) and run until completion.</p>
<p>Maybe I'm missing why this can't work or is a bad idea, but I feel like configuration would be more centralised and straightforward than with the current approach?</p>
<p>Or are there other best practises?</p>
|
Tom Lous
|
<p>So in the end I got everything working using helm, the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="nofollow noreferrer">spark-on-k8s-operator</a> and <a href="https://github.com/marcuslonnberg/sbt-docker" rel="nofollow noreferrer">sbt-docker</a></p>
<p>First I extract some of the config into variables in the build.sbt, so they can be used by both the assembly and the docker generator.</p>
<pre><code>// define some dependencies that should not be compiled, but copied into docker
val externalDependencies = Seq(
"org.postgresql" % "postgresql" % postgresVersion,
"io.prometheus.jmx" % "jmx_prometheus_javaagent" % jmxPrometheusVersion
)
// Settings
val team = "hazelnut"
val importerDescription = "..."
val importerMainClass = "..."
val targetDockerJarPath = "/opt/spark/jars"
val externalPaths = externalDependencies.map(module => {
val parts = module.toString().split(""":""")
val orgDir = parts(0).replaceAll("""\.""","""/""")
val moduleName = parts(1).replaceAll("""\.""","""/""")
val version = parts(2)
var jarFile = moduleName + "-" + version + ".jar"
(orgDir, moduleName, version, jarFile)
})
</code></pre>
<p>Next I define the assembly settings to create the fat jar (which can be whatever you need):</p>
<pre><code>lazy val assemblySettings = Seq(
// Assembly options
assembly / assemblyOption := (assemblyOption in assembly).value.copy(includeScala = false),
assembly / assemblyMergeStrategy := {
case PathList("reference.conf") => MergeStrategy.concat
case PathList("META-INF", _@_*) => MergeStrategy.discard
case "log4j.properties" => MergeStrategy.concat
case _ => MergeStrategy.first
},
assembly / logLevel := sbt.util.Level.Error,
assembly / test := {},
pomIncludeRepository := { _ => false }
)
</code></pre>
<p>Then the docker settings are defined:</p>
<pre><code>lazy val dockerSettings = Seq(
imageNames in docker := Seq(
ImageName(s"$team/${name.value}:latest"),
ImageName(s"$team/${name.value}:${version.value}"),
),
dockerfile in docker := {
// The assembly task generates a fat JAR file
val artifact: File = assembly.value
val artifactTargetPath = s"$targetDockerJarPath/$team-${name.value}.jar"
externalPaths.map {
case (extOrgDir, extModuleName, extVersion, jarFile) =>
val url = List("https://repo1.maven.org/maven2", extOrgDir, extModuleName, extVersion, jarFile).mkString("/")
val target = s"$targetDockerJarPath/$jarFile"
Instructions.Run.exec(List("curl", url, "--output", target, "--silent"))
}
.foldLeft(new Dockerfile {
// https://hub.docker.com/r/lightbend/spark/tags
from(s"lightbend/spark:${openShiftVersion}-OpenShift-${sparkVersion}-ubuntu-${scalaBaseVersion}")
}) {
case (df, run) => df.addInstruction(run)
}.add(artifact, artifactTargetPath)
}
)
</code></pre>
<p>And I create some <code>Task</code> to generate some helm Charts / values:</p>
<pre><code>lazy val createImporterHelmChart: Def.Initialize[Task[Seq[File]]] = Def.task {
val chartFile = baseDirectory.value / "../helm" / "Chart.yaml"
val valuesFile = baseDirectory.value / "../helm" / "values.yaml"
val jarDependencies = externalPaths.map {
case (_, extModuleName, _, jarFile) =>
extModuleName -> s""""local://$targetDockerJarPath/$jarFile""""
}.toMap
val chartContents =
s"""# Generated by build.sbt. Please don't manually update
|apiVersion: v1
|name: $team-${name.value}
|version: ${version.value}
|description: $importerDescription
|""".stripMargin
val valuesContents =
s"""# Generated by build.sbt. Please don't manually update
|version: ${version.value}
|sparkVersion: $sparkVersion
|image: $team/${name.value}:${version.value}
|jar: local://$targetDockerJarPath/$team-${name.value}.jar
|mainClass: $importerMainClass
|jarDependencies: [${jarDependencies.values.mkString(", ")}]
|fileDependencies: []
|jmxExporterJar: ${jarDependencies.getOrElse("jmx_prometheus_javaagent", "null").replace("local://","")}
|""".stripMargin
IO.write(chartFile, chartContents)
IO.write(valuesFile, valuesContents)
Seq(chartFile, valuesFile)
}
</code></pre>
<p>Finally it all combines into a project definition in the build.sbt</p>
<pre><code>lazy val importer = (project in file("importer"))
.enablePlugins(JavaAppPackaging)
.enablePlugins(sbtdocker.DockerPlugin)
.enablePlugins(AshScriptPlugin)
.dependsOn(util)
.settings(
commonSettings,
testSettings,
assemblySettings,
dockerSettings,
scalafmtSettings,
name := "etl-importer",
Compile / mainClass := Some(importerMainClass),
Compile / resourceGenerators += createImporterHelmChart.taskValue
)
</code></pre>
<p>Finally together with values files per environment and a helm template:</p>
<pre><code>apiVersion: sparkoperator.k8s.io/v1beta1
kind: SparkApplication
metadata:
name: {{ .Chart.Name | trunc 64 }}
labels:
name: {{ .Chart.Name | trunc 63 | quote }}
release: {{ .Release.Name | trunc 63 | quote }}
revision: {{ .Release.Revision | quote }}
sparkVersion: {{ .Values.sparkVersion | quote }}
version: {{ .Chart.Version | quote }}
spec:
type: Scala
mode: cluster
image: {{ .Values.image | quote }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
mainClass: {{ .Values.mainClass | quote }}
mainApplicationFile: {{ .Values.jar | quote }}
sparkVersion: {{ .Values.sparkVersion | quote }}
restartPolicy:
type: Never
deps:
{{- if .Values.jarDependencies }}
jars:
{{- range .Values.jarDependencies }}
- {{ . | quote }}
{{- end }}
{{- end }}
...
</code></pre>
<p>I can now build packages using </p>
<p><code>sbt [project name]/docker</code></p>
<p>and deploy them using</p>
<p><code>helm install ./helm -f ./helm/values-minikube.yaml --namespace=[ns] --name [name]</code></p>
<p>It can probably be made prettier, but for now this works like a charm</p>
|
Tom Lous
|
<p>I have a k8s cluster on which i have deployed ELK my kibana deployment and service looks like</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
service: kibana
name: kibana
spec:
replicas: 1
selector:
matchLabels:
service: kibana
strategy:
type: RollingUpdate
template:
metadata:
labels:
service: kibana
spec:
containers:
- image: docker.elastic.co/kibana/kibana:6.6.0
name: kibana
ports:
- containerPort: 5601
resources:
requests:
memory: 1Gi
limits:
memory: 1Gi
restartPolicy: Always
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
labels:
service: kibana
name: kibana
spec:
ports:
- name: "5601"
port: 5601
targetPort: 5601
selector:
service: kibana
type: NodePort
</code></pre>
<p>and the nginx ingress looks like</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: loadbalancer-https
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: true
nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/app-root: /
nginx.org/ssl-services: "kibana"
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- kibana.some.com
secretName: secret
rules:
- host: kibana.some.com
http:
paths:
- path: /
backend:
serviceName: kibana
servicePort: 5601
</code></pre>
<p>But I get 502 Bad Gateway, and if I look at the nginx ingress logs I see</p>
<pre><code>2019/03/02 01:25:09 [error] 875#875: *470787 upstream prematurely closed connection while reading response header from upstream, client: 10.138.82.98, server: kibana.some.com, request: "GET /favicon.ico HTTP/2.0", upstream: "http://10.244.2.86:5601/favicon.ico", host: "kibana.some.com", referrer: "https://kibana.some.com/"
</code></pre>
<p>It seems pretty straightforward, but I don't know what I'm missing here. I would appreciate some help.</p>
<p>Thanks</p>
|
bitgandtter
|
<p>It should be related to the <code>nginx.ingress.kubernetes.io/secure-backends: "true"</code> configuration in your ingress. Did you configure HTTPS/TLS support for Kibana in the Pod? Otherwise I would suggest removing this annotation.</p>
<p>Also, please note that <code>nginx.ingress.kubernetes.io/secure-backends: "true"</code> annotation is deprecated.</p>
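<p>For illustration, the annotations could be reduced to something like the sketch below. Note also that annotation values must be quoted strings, so <code>ssl-redirect: true</code> should be <code>ssl-redirect: "true"</code>:</p>
<pre><code>annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>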
<p>Ref: </p>
<ul>
<li><a href="https://docs.giantswarm.io/guides/advanced-ingress-configuration/" rel="nofollow noreferrer">https://docs.giantswarm.io/guides/advanced-ingress-configuration/</a></li>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/3416" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/3416</a></li>
</ul>
|
Fei
|
<p>I am trying to send the stdout logs of my application running in k8s pods to a remote syslog server. I have the fluentd container running as a sidecar to my main application container, and it does send the logs to the remote syslog server. But without any formatting, the logs arrive at the remote syslog server exactly as they are written in the /var/log/containers/my_application*.log file. I just want the value of the "log" key, as a raw string, to be the output from fluentd and sent to the remote syslog server.</p>
<p>Logs written in the /var/log/containers/my_application*.log file:</p>
<pre><code>{"log":"I am application\n","stream":"stdout","time":"2023-04-29T15:11:50.728436003Z"}
</code></pre>
<p>I want to send just the "log" key, which actually contains the logs written by the application, to the remote syslog server.</p>
<p>I have the below fluentd config which I'm passing via a k8s config-map to the application pod:</p>
<pre><code><source>
@type tail
path "/var/log/containers/my_application*default*demo*.log"
pos_file "/var/log/my_app.log.pos"
read_from_head true
tag "app-development"
<parse>
@type none
</parse>
</source>
<filter app-development>
@type parser
key_name log
<parse>
@type none
</parse>
</filter>
<match app-development>
@type remote_syslog
@id out_kube_remote_syslog
host "#{ENV['SYSLOG_HOST']}"
port "#{ENV['SYSLOG_PORT']}"
severity debug
program app-development_p
hostname ${kubernetes_host}
facility daemon
protocol "#{ENV['SYSLOG_PROTOCOL'] || 'tcp'}"
tls "#{ENV['SYSLOG_TLS'] || 'false'}"
ca_file "#{ENV['SYSLOG_CA_FILE'] || ''}"
verify_mode "#{ENV['SYSLOG_VERIFY_MODE'] || ''}"
packet_size 65535
<buffer kubernetes_host>
</buffer>
<system>
file_permission 666
</system>
</match>
</code></pre>
<p>For some reason this doesn't work. I took a look at this documentation to add the filter section: <a href="https://docs.fluentd.org/filter/parser#key_name" rel="nofollow noreferrer">https://docs.fluentd.org/filter/parser#key_name</a> I get the below error from the fluentd container:</p>
<pre><code> #0 dump an error event: error_class=ArgumentError error="log does not exist" location=nil tag="app-development" time=2023-04-29 15:23:03.221790676 +0000 record={"message"=>"{\"log\":\"I am application\\n\",\"stream\":\"stdout\",\"time\":\"2023-04-29T15:23:02.448835229Z\"}"}
</code></pre>
<p>What am I doing wrong in the fluentd configuration?</p>
|
user1452759
|
<p>I finally think I have a solution for this after scanning through all fluentd-based questions on Stack Overflow.
<a href="https://stackoverflow.com/questions/58611351/using-fluentd-i-want-to-output-only-one-key-data-from-json-data">This</a>
was what finally helped me get what I was looking for:</p>
<pre><code><source>
@type tail
path "/var/log/containers/my_application*default*demo*.log"
pos_file "/var/log/my_app.log.pos"
read_from_head true
tag "app-development"
<parse>
@type json
</parse>
</source>
<match app-development>
@type remote_syslog
@id out_kube_remote_syslog
.
.
.
.
<buffer kubernetes_host>
</buffer>
#Added the below format section
<format>
@type single_value
message_key log
</format>
<system>
file_permission 666
</system>
</match>
</code></pre>
<p>The format section above finally gives me the below output from the "log" key</p>
<pre><code>I am application
</code></pre>
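<p>Outside fluentd, the effect of the <code>json</code> parser plus the <code>single_value</code> formatter with <code>message_key log</code> can be illustrated like this (a sketch, not part of the fluentd config):</p>

```python
import json

# One raw line, as the container runtime writes it to /var/log/containers/*.log
line = '{"log":"I am application\\n","stream":"stdout","time":"2023-04-29T15:23:02.448835229Z"}'

record = json.loads(line)  # what <parse> @type json does in the source
message = record["log"]    # what the single_value formatter with message_key log emits
print(message, end="")     # prints: I am application
```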
|
user1452759
|
<p>I have created a service account "serviceacc" in a namespace xyz and gave it the needed permissions. Yet it cannot list pods. Here are the steps I followed.</p>
<pre><code>$kubectl create namespace xyz
$kubectl apply -f objects.yaml
</code></pre>
<p>Where content of objects.yaml</p>
<pre><code>---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: xyz
name: listpodser
rules:
- apiGroups: [""]
resources: ["pod"]
verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: xyz
name: service-listpodser
subjects:
- kind: ServiceAccount
name: serviceacc
apiGroup: ""
roleRef:
kind: Role
name: listpodser
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: serviceacc
namespace: xyz
</code></pre>
<p>Then I checked if the service account has permission to list pods:</p>
<pre><code>$ kubectl auth can-i get pods --namespace xyz --as system:serviceaccount:xyz:serviceacc
no
$ kubectl auth can-i list pods --namespace xyz --as system:serviceaccount:xyz:serviceacc
no
</code></pre>
<p>As we can see from the output of above command, it cannot get/list pods.</p>
|
Nipun Talukdar
|
<p>Simple naming confusion. Use <code>pods</code> instead of <code>pod</code> in the resource list.</p>
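<p>With that change, the Role looks like this (everything else from the original manifest unchanged):</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: xyz
  name: listpodser
rules:
- apiGroups: [""]
  resources: ["pods"]   # plural, not "pod"
  verbs: ["get", "list"]
</code></pre>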
|
JulioHM
|
<h1>What I have:</h1>
<p>I have difficulty setting up an Ingress with Helm Chart on the cloud.<br></p>
<ul>
<li>I have a project with a Front, a Back and a MySQL database.</li>
<li>I setup two Ingress, one for my <strong>BackEnd</strong> and one for my <strong>FrontEnd</strong>, I can access it with an IP given by <strong>Google Cloud Platform</strong>.</li>
</ul>
<p>In the <strong>FrontEnd</strong> and <strong>BackEnd</strong> charts <code>values.yaml</code>:</p>
<pre><code>...
service:
type: LoadBalancer
port: 8000 # 4200 for the FrontEnd
targetPort: 8000 # 4200 for FrontEnd
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"
    nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,X-LANG,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-Api-Key,X-Device-Id,Access-Control-Allow-Origin"
hosts:
- paths:
- path: /
pathType: ImplementationSpecific
...
</code></pre>
<h1>My Issue:</h1>
<p>The <strong>FrontEnd</strong> needs to talk to the <strong>BackEnd</strong> through the Ingress.<br>
In the <strong>FrontEnd</strong> values.yaml, I need to have a value:<br>
<code>BACKEND_URL: XXX.XXX.XXX.XXX:8000</code><br>
But I don't know the URL of the <strong>BackEnd</strong> Ingress, at least not until I deploy the back.</p>
<ul>
<li>How can I variabilize it, to retrieve the URL ingress of the <strong>BackEnd</strong>?</li>
<li>Or at least, how can I find the ingress IP? (I've tried kubectl get ingress, it doesn't show the address).</li>
</ul>
|
Hamza Ince
|
<p>You have two options:</p>
<ol>
<li><p>Don't use the ingress but the service DNS name. This way your traffic doesn't even leave the cluster. If your backend service is called <code>api</code> and deployed in the <code>backend</code> namespace you can reach it internally using <code>api.backend</code>. <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#dns</a> has details about the mechanism.</p>
</li>
<li><p>You reserve the IP on the GCP side and pass it as a parameter to your helm charts. If you don't, each deletion and recreation of the service will result in a different IP being assigned by GCP, and clients with a cached DNS response will not be able to use your service until the cache entry has expired.</p>
</li>
</ol>
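<p>For the first option, the value from the question can point straight at the internal DNS name rather than an external IP. A minimal sketch, assuming the BackEnd service is called <code>api</code> and lives in the <code>backend</code> namespace (both names are placeholders; the fully qualified form is <code>api.backend.svc.cluster.local</code>):</p>
<pre><code>BACKEND_URL: "http://api.backend:8000"
</code></pre>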
<p>For GCP this snippet from the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">documentation</a> is correct.</p>
<blockquote>
<p>Some cloud providers allow you to specify the <code>loadBalancerIP</code>. In those cases, the load-balancer is created with the user-specified <code>loadBalancerIP</code>. If the <code>loadBalancerIP</code> field is not specified, the loadBalancer is set up with an ephemeral IP address. If you specify a <code>loadBalancerIP</code> but your cloud provider does not support the feature, the <code>loadbalancerIP</code> field that you set is ignored.</p>
</blockquote>
<p>So get a permanent IP and pass it as <code>loadBalancerIP</code> to the service.</p>
<pre><code>service:
  type: LoadBalancer
  port: 8000 # 4200 for the FrontEnd
  targetPort: 8000 # 4200 for FrontEnd
  loadBalancerIP: <the Global or Regional IP you got from GCP (depends on the LB)>
</code></pre>
|
Jonathan
|
<p>I'm trying to set a new default storage class in our Azure Kubernetes Service (1.15.10). I've tried a few things, but the behavior is strange to me. </p>
<p>I've created a new storage class <code>custom</code>, set it as the default storage class, and then removed the is-default-class annotation from the <code>default</code> StorageClass. </p>
<p>default-storage-class.yml:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: custom
parameters:
cachingmode: ReadOnly
kind: Managed
storageaccounttype: Standard_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
</code></pre>
<p>and the commands:</p>
<pre><code># create new storage class "custom"
kubectl apply -f ./default-storage-class.yml
# set storageclass as new default
kubectl patch storageclass custom -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
# remove default storage class from default
kubectl patch storageclass default -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'
</code></pre>
<p>At first, it seems to be working fine: </p>
<pre><code>$kubectl get sc
custom (default) kubernetes.io/azure-disk 6d23h
default kubernetes.io/azure-disk 14m
</code></pre>
<p>But within a minute, without changing anything: </p>
<pre><code>$kubectl get sc
custom (default) kubernetes.io/azure-disk 6d23h
default (default) kubernetes.io/azure-disk 16m
</code></pre>
<p>I'm probably missing something here, but no idea what. </p>
<p>If I run <code>kubectl describe sc default</code> in the minute before it changes back: </p>
<pre><code>storageclass.beta.kubernetes.io/is-default-class=false,storageclass.kubernetes.io/is-default-class=false
</code></pre>
<p>And a moment later: </p>
<pre><code>storageclass.beta.kubernetes.io/is-default-class=true,storageclass.kubernetes.io/is-default-class=false
</code></pre>
|
vieskees
|
<p>After a lot of testing, I found that the only way to keep the default StorageClass non-default was to update not only the <code>storageclass.beta.kubernetes.io/is-default-class</code> annotation but also the <code>kubectl.kubernetes.io/last-applied-configuration</code> annotation.</p>
<pre><code>kubectl patch storageclass default -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false", "kubectl.kubernetes.io/last-applied-configuration": "{\"allowVolumeExpansion\":true,\"apiVersion\":\"storage.k8s.io/v1beta1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.beta.kubernetes.io/is-default-class\":\"false\"},\"labels\":{\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"default\"},\"parameters\":{\"cachingmode\":\"ReadOnly\",\"kind\":\"Managed\",\"storageaccounttype\":\"StandardSSD_LRS\"},\"provisioner\":\"kubernetes.io/azure-disk\"}"}}}'
</code></pre>
<p>After applying this, the default StorageClass stayed non-default.</p>
|
Jawad
|
<p>Okay so I'm using a modified version of this repo: <a href="https://github.com/CaptTofu/mysql_replication_kubernetes/tree/master/galera_sync_replication" rel="nofollow noreferrer">https://github.com/CaptTofu/mysql_replication_kubernetes/tree/master/galera_sync_replication</a></p>
<p>modified files are:</p>
<p>service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ro-db
labels:
unit: pxc-cluster
spec:
ports:
- port: 3306
name: mysql
selector:
unit: pxc-cluster
</code></pre>
<p>pxc1 (pxc2 and pxc3 use the same replication controller, discovery service, and persistent volume claim, just with the numbers changed):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: pxc-node1
labels:
node: pxc-node1
spec:
ports:
- port: 3306
name: mysql
- port: 4444
name: state-snapshot-transfer
- port: 4567
name: replication-traffic
- port: 4568
name: incremental-state-transfer
selector:
node: pxc-node1
---
apiVersion: v1
kind: ReplicationController
metadata:
name: pxc-node1
spec:
replicas: 1
template:
metadata:
labels:
node: pxc-node1
unit: pxc-cluster
spec:
nodeSelector:
number: '1'
containers:
- image: capttofu/percona_xtradb_cluster_5_6:beta
name: pxc-node1
ports:
- containerPort: 3306
- containerPort: 4444
- containerPort: 4567
- containerPort: 4568
env:
- name: GALERA_CLUSTER
value: "true"
- name: WRSEP_ON
value: "true"
- name: WSREP_CLUSTER_ADDRESS
value: gcomm://
- name: WSREP_SST_USER
value: sst
- name: WSREP_SST_PASSWORD
value: sst
- name: MYSQL_USER
value: mysql
- name: MYSQL_PASSWORD
value: mysql
- name: MYSQL_ROOT_PASSWORD
value: c-krit
volumeMounts:
- name: mysql-persistent-storage-1
mountPath: /var/lib
securityContext:
capabilities: {}
privileged: true #privileged required for mount
volumes:
- name: mysql-persistent-storage-1
persistentVolumeClaim:
claimName: claim-galera-1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: claim-galera-1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
selector:
matchLabels:
name: pxc1
</code></pre>
<p>The thing is, it was working a few days ago. I did a lot of testing (bringing down pods and nodes and watching replication voting recover), but now that I'm integrating it with the app it just won't start, and I can't understand why, since it's the same configuration that was working. I've searched the internet, SO, and GitHub and tried the suggested fixes, but nothing works.</p>
<pre><code>2018-10-23 20:36:46 1 [Note] WSREP: (4be59ce1, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.244.2.61:4567
2018-10-23 20:36:47 1 [Note] WSREP: forgetting 49c4d2cf (tcp://10.244.2.61:4567)
2018-10-23 20:36:47 1 [Note] WSREP: (4be59ce1, 'tcp://0.0.0.0:4567') turning message relay requesting off
2018-10-23 20:36:47 1 [Warning] WSREP: no nodes coming from prim view, prim not possible
2018-10-23 20:36:47 1 [Note] WSREP: view(view_id(NON_PRIM,4be59ce1,5) memb {
4be59ce1,0
} joined {
} left {
} partitioned {
47f2860c,0
49c4d2cf,0
})
2018-10-23 20:36:50 1 [Note] WSREP: view((empty))
2018-10-23 20:36:50 1 [ERROR] WSREP: failed to open gcomm backend connection: 110: failed to reach primary view: 110 (Connection timed out)
at gcomm/src/pc.cpp:connect():162
2018-10-23 20:36:50 1 [ERROR] WSREP: gcs/src/gcs_core.cpp:long int gcs_core_open(gcs_core_t*, const char*, const char*, bool)():206: Failed to open backend connection: -110 (Connection timed out)
2018-10-23 20:36:50 1 [ERROR] WSREP: gcs/src/gcs.cpp:long int gcs_open(gcs_conn_t*, const char*, const char*, bool)():1379: Failed to open channel 'galera_kubernetes' at 'gcomm://pxc-node2,pxc-node3': -110 (Connection timed out)
2018-10-23 20:36:50 1 [ERROR] WSREP: gcs connect failed: Connection timed out
2018-10-23 20:36:50 1 [ERROR] WSREP: wsrep::connect(gcomm://pxc-node2,pxc-node3) failed: 7
2018-10-23 20:36:50 1 [ERROR] Aborting
2018-10-23 20:36:50 1 [Note] WSREP: Service disconnected.
2018-10-23 20:36:51 1 [Note] WSREP: Some threads may fail to exit.
2018-10-23 20:36:51 1 [Note] Binlog end
2018-10-23 20:36:51 1 [Note] mysqld: Shutdown complete
</code></pre>
<p>Any suggestions? It's been a few hours now and I just can't make it work.</p>
|
paltaa
|
<p>Percona XtraDB Cluster now has native support for Kubernetes. The PXC Operator went GA with version 1.0 several weeks ago. <a href="https://percona.com/doc/kubernetes-operator-for-pxc/index.html" rel="nofollow noreferrer">https://percona.com/doc/kubernetes-operator-for-pxc/index.html</a></p>
|
utdrmac
|
<p>Is there a way to install/download the library <code>jts-core</code> in the folder <code>SOLR_INSTALL/server/solr-webapp/webapp/WEB-INF/lib/</code>? As specified in the official guide: <a href="https://solr.apache.org/guide/8_10/spatial-search.html#jts-and-polygons-flat" rel="nofollow noreferrer">https://solr.apache.org/guide/8_10/spatial-search.html#jts-and-polygons-flat</a></p>
<p>Actually, I tried to add an <code>initContainer</code> that downloads that jar, but I obviously get a <code>Permission denied</code> from the Solr container, since only root can write in the final Solr container.</p>
<p>I tried also to set a <code>securityContext</code> only for my <code>initContainer</code>, in order to run as root, but that configuration has no effect in the <code>initContainer</code>, I think it is not seen by the Solr CRD.</p>
<pre><code>podOptions:
initContainers:
- name: "install-jts-core"
image: solr:8.9.0
command: ['sh', '-c', 'wget -O /opt/solr-8.9.0/server/solr-webapp/webapp/WEB-INF/lib/jts-core-1.15.1.jar https://repo1.maven.org/maven2/org/locationtech/jts/jts-core/1.15.1/jts-core-1.15.1.jar']
securityContext: <--- this has no effect on SolrCloud CRD
runAsUser: 0
</code></pre>
<p>Another desperate attempt was to set <code>podSecurityContext.runAsUser: 0</code>, i.e. for all containers in the pod, but Solr does not run as <code>root</code>, so I discarded that option.</p>
<p>Any hint/idea/solution please?</p>
<p>Thank you very much in advance.</p>
|
Fernando Aspiazu
|
<p>I have recently found a solution that may not be elegant but works well with any version of the Solr image; below is a configuration example:</p>
<pre class="lang-yaml prettyprint-override"><code>podOptions:
initContainers:
- name: "install-jts-lib"
image: solr:8.9.0
command:
- 'sh'
- '-c'
- |
wget -O /tmp-webinf-lib/jts-core-1.15.1.jar https://repo1.maven.org/maven2/org/locationtech/jts/jts-core/1.15.1/jts-core-1.15.1.jar
cp -R /opt/solr/server/solr-webapp/webapp/WEB-INF/lib/* /tmp-webinf-lib
volumeMounts:
- mountPath: /tmp-webinf-lib
name: web-inf-lib
volumes:
- name: web-inf-lib
source:
emptyDir: {}
defaultContainerMount:
name: web-inf-lib
mountPath: /opt/solr/server/solr-webapp/webapp/WEB-INF/lib
</code></pre>
<p>In this example, I create an <code>emptyDir</code> volume and mount it at an arbitrary directory in the <code>initContainer</code>, but in the final Solr container I mount it at the <code>target directory ($SOLR_HOME/server/solr-webapp/webapp/WEB-INF/lib)</code>.</p>
<p>This will empty the <code>../WEB-INF/lib</code> directory, but since I'm using <strong>the same Solr image</strong>, I can copy the content of <code>../WEB-INF/lib</code> (jars and folders) of the <code>initContainer</code> at the end.</p>
<p>The effect is that the final container will have all the content it should have had plus the <code>jts-core-1.15.1.jar</code> jar.</p>
<p>This works also with other files or libraries you want to bring in the Solr container.</p>
<p>Let me know what you think of this workaround 👍</p>
<p>Thank you.</p>
|
Fernando Aspiazu
|
<p>I'm trying to debug why a service for a perfectly working deployment is not answering (connection refused).</p>
<p>I've double- and triple-checked that the <code>port</code> and <code>targetPort</code> match (4180 for the container and 80 for the service)</p>
<p>when I list the my endpoints I get the following:</p>
<pre><code>$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.40.63.79:443 82d
oauth2-proxy 10.40.34.212:4180 33s // <--this one
</code></pre>
<p>and from a pod running in the same namespace:</p>
<pre><code># curl 10.40.34.212:4180
curl: (7) Failed to connect to 10.40.34.212 port 4180: Connection refused
</code></pre>
<p>(By the way, same happens if I try to curl the service)</p>
<p>yet, if I port forward directly to the pod, I get a response:</p>
<pre><code>$ kubectl port-forward oauth2-proxy-559dd9ddf4-8z72c 4180:4180 &
$ curl -v localhost:4180
* Rebuilt URL to: localhost:4180/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 4180 (#0)
> GET / HTTP/1.1
> Host: localhost:4180
> User-Agent: curl/7.58.0
> Accept: */*
>
Handling connection for 4180
< HTTP/1.1 403 Forbidden
< Date: Tue, 25 Jun 2019 07:53:19 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked
<
<!DOCTYPE html>
// more of the expected response
* Connection #0 to host localhost left intact
</code></pre>
<p>I also checked that I get the pods when I use the selector from the service (I copy pasted it from what I see in <code>kubectl describe svc oauth2-proxy</code>):</p>
<pre><code>$ kubectl describe svc oauth2-proxy | grep Selector
Selector: app.kubernetes.io/name=oauth2-proxy,app.kubernetes.io/part-of=oauth2-proxy
$ kubectl get pods --selector=app.kubernetes.io/name=oauth2-proxy,app.kubernetes.io/part-of=oauth2-proxy
NAME READY STATUS RESTARTS AGE
oauth2-proxy-559dd9ddf4-8z72c 1/1 Running 0 74m
</code></pre>
<p>I don't get why the endpoint is refusing the connection while using port forwarding gets a valid response. Anything else I should check?</p>
|
Tom Klino
|
<p>Alright, turns out that this specific service was listening on localhost only by default:</p>
<pre><code>$ netstat -tunap | grep LISTEN
tcp 0 0 127.0.0.1:4180 0.0.0.0:* LISTEN 1/oauth2_proxy
</code></pre>
<p>I had to add an argument (<code>-http-address=0.0.0.0:4180</code>) to tell it to listen on 0.0.0.0</p>
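<p>For reference, this is roughly how that argument looks in the Deployment manifest (container name and image are assumptions; the flag itself is the one mentioned above):</p>
<pre><code>containers:
  - name: oauth2-proxy
    image: quay.io/oauth2-proxy/oauth2-proxy
    args:
      - -http-address=0.0.0.0:4180
</code></pre>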
|
Tom Klino
|
<p>Assuming I have the following <code>skaffold.yaml</code></p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
metadata:
name: myapp
build:
local:
push: true
artifacts:
- image: pkaramol/my-image
docker:
dockerfile: Dockerfile
deploy:
helm:
releases:
- name: myapp
chartPath: charts
kubectl:
manifests:
- ./k8s/*
</code></pre>
<p>How can I instruct <code>skaffold</code> to avoid uploading (and then downloading) <code>pkaramol/my-image</code> to dockerhub, but rather build it locally and use it directly within the cluster I am connected to?</p>
|
pkaramol
|
<p>You can instruct Skaffold to build the image locally by using the <code>local</code> build mode in the <code>build</code> section of the skaffold.yaml file, like this:</p>
<pre><code>apiVersion: skaffold/v2beta12
kind: Config
metadata:
name: myapp
build:
local:
push: false
artifacts:
- image: pkaramol/my-image
docker:
dockerfile: Dockerfile
deploy:
helm:
releases:
- name: myapp
chartPath: charts
kubectl:
manifests:
- ./k8s/*
</code></pre>
<p>The <code>push</code> parameter should be set to <code>false</code> to prevent Skaffold from uploading the image to a registry. This will tell Skaffold to build the image locally and use it directly in the cluster that you are connected to.</p>
|
aaron-prindle
|
<p>Using Kubernetes v1.16 on AWS I am facing a weird issue while trying to reduce the time it takes to start a pod on a newly spawned node.</p>
<p>By default, a node AMI does not contain any pre-cached docker images, so when a pod is scheduled onto it, its first job is to pull the docker image.</p>
<p>Pulling large docker image can take a while, so the pod takes a long time to run.</p>
<p>Recently I came up with the idea of pre-pulling my large docker image right into the AMI, so that when a pod is scheduled onto a node, it won't have to download it. It turns out a lot of people have been doing this for a while, known as "baking an AMI":</p>
<ul>
<li><p><a href="https://runnable.com/blog/how-we-pre-bake-docker-images-to-reduce-infrastructure-spin-up-time" rel="noreferrer">https://runnable.com/blog/how-we-pre-bake-docker-images-to-reduce-infrastructure-spin-up-time</a></p></li>
<li><p><a href="https://github.com/kubernetes-incubator/kube-aws/issues/505" rel="noreferrer">https://github.com/kubernetes-incubator/kube-aws/issues/505</a></p></li>
<li><p><a href="https://github.com/kubernetes/kops/issues/3378" rel="noreferrer">https://github.com/kubernetes/kops/issues/3378</a></p></li>
</ul>
<p>My issue is that when I generate an AMI with my large image baked into it and use this AMI, everything works as expected: the docker image is not downloaded since it is already present, so the pod starts in almost 1 second, but the pod itself runs 1000 times slower than if the docker image was not pre-pulled on the AMI.</p>
<p>What I am doing:</p>
<ul>
<li>start a base working AMI on an EC2 instance</li>
<li>ssh onto it</li>
<li>run docker pull myreposiroty/myimage</li>
<li>right click on the EC2 instance from AWS console & generate an AMI</li>
</ul>
<p>If I don't pre-pull my docker image, it runs normally. Only when I pre-pull it, using the newly generated AMI, does the container become extremely slow, even though it starts within a second.</p>
<p>My docker image uses GPU resources and is based on the tensorflow/tensorflow:1.14.0-gpu-py3 image. It seems to be related to the combined use of nvidia-docker & TensorFlow on a GPU-enabled EC2 instance.</p>
<p>If anyone has an idea where this extreme runtime latency comes from, it would be much appreciated.</p>
<p><strong>EDIT #1</strong></p>
<p>Since then, I have switched to using Packer to build my AMI.
Here is my template file:</p>
<pre><code>{
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"ami_name": "compute-{{user `environment_name`}}-{{timestamp}}",
"region": "{{user `region`}}",
"instance_type": "{{user `instance`}}",
"ssh_username": "admin",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "debian-stretch-hvm-x86_64-gp2-*",
"root-device-type": "ebs"
},
"owners":"379101102735",
"most_recent": true
}
}
],
"provisioners": [
{
"execute_command": "sudo env {{ .Vars }} {{ .Path }}",
"scripts": [
"ami/setup_vm.sh"
],
"type": "shell",
"environment_vars": [
"ENVIRONMENT_NAME={{user `environment_name`}}",
"AWS_ACCOUNT_ID={{user `aws_account_id`}}",
"AWS_REGION={{user `region`}}",
"AWS_ACCESS_KEY_ID={{user `aws_access_key`}}",
"AWS_SECRET_ACCESS_KEY={{user `aws_secret_key`}}",
"DOCKER_IMAGE_NAME={{user `docker_image_name`}}"
]
}
],
"post-processors": [
{
"type": "manifest",
"output": "ami/manifest.json",
"strip_path": true
}
],
"variables": {
"aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"environment_name": "",
"region": "eu-west-1",
"instance": "g4dn.xlarge",
"aws_account_id":"",
"docker_image_name":""
}
}
</code></pre>
<p>and here is the associated script to configure the AMI for Docker & Nvidia Docker:</p>
<pre><code>#!/bin/bash
cd /tmp
export DEBIAN_FRONTEND=noninteractive
export APT_LISTCHANGES_FRONTEND=noninteractive
# docker
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable"
apt-get update
apt-get install -y docker-ce
usermod -a -G docker $USER
# graphical drivers
apt-get install -y software-properties-common
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/440.64/NVIDIA-Linux-x86_64-440.64.run
bash NVIDIA-Linux-x86_64-440.64.run -sZ
# nvidia-docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list
apt-get update
apt-get install -y nvidia-container-toolkit
apt-get install -y nvidia-docker2
cat > /etc/docker/daemon.json <<EOL
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
}
EOL
systemctl restart docker
# enable nvidia-persistenced service
cat > /etc/systemd/system/nvidia-persistenced.service <<EOL
[Unit]
Description=NVIDIA Persistence Daemon
Wants=syslog.target
[Service]
Type=forking
PIDFile=/var/run/nvidia-persistenced/nvidia-persistenced.pid
Restart=always
ExecStart=/usr/bin/nvidia-persistenced --verbose
ExecStopPost=/bin/rm -rf /var/run/nvidia-persistenced
[Install]
WantedBy=multi-user.target
EOL
systemctl enable nvidia-persistenced
# prepull docker
apt-get install -y python3-pip
pip3 install awscli --upgrade
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$DOCKER_IMAGE_NAME:$ENVIRONMENT_NAME
# Clean up
apt-get -y autoremove
apt-get -y clean
</code></pre>
<p>Anyway, as soon as I put this line:</p>
<pre><code>docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$DOCKER_IMAGE_NAME:$ENVIRONMENT_NAME
</code></pre>
<p>I am facing the same weird issue: when pods are scheduled on nodes booted from this AMI, it says "image already present on machine", so it doesn't pull it again, but then the container is extremely slow when using TensorFlow; e.g. tf.Session() takes something like a minute to run.</p>
<p><strong>EDIT #2</strong></p>
<p>Adding extra information regarding what is executed on the pod:</p>
<p>Dockerfile</p>
<pre><code>FROM tensorflow/tensorflow:1.14.0-gpu-py3
CMD ["python", "main.py"]
</code></pre>
<p>main.py</p>
<pre><code>import tensorflow as tf
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
session = tf.Session(graph=tf.Graph(), config=config)
</code></pre>
<p>With only those lines, TF Session initialization takes up to 1 minute when the image is pre-pulled vs. 1 second when it is not.</p>
|
repié
|
<p>This is most likely due to the new instance not having "fully downloaded" all parts of the disk. <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html</a> has details on this.</p>
<blockquote>
<p>For volumes that were created from snapshots, the storage blocks must be pulled down from Amazon S3 and written to the volume before you can access them. This preliminary action takes time and can cause a significant increase in the latency of I/O operations the first time each block is accessed. Volume performance is achieved after all blocks have been downloaded and written to the volume.</p>
</blockquote>
<p>When the image is fully downloaded from S3, I assume everything will return to normal speed.</p>
|
Jonathan
|
<p>I can create a regular GKE cluster and pull the docker image I need and get it running. When I create the GKE cluster with a routing rule through a NAT, my user no longer has permission to pull the docker image.</p>
<p>I start the cluster with these settings:</p>
<pre><code>resources:
######## Network ############
- name: gke-nat-network
type: compute.v1.network
properties:
autoCreateSubnetworks: false
######### Subnets ##########
######### For Cluster #########
- name: gke-cluster-subnet
type: compute.v1.subnetwork
properties:
network: $(ref.gke-nat-network.selfLink)
ipCidrRange: 172.16.0.0/12
region: us-east1
########## NAT Subnet ##########
- name: nat-subnet
type: compute.v1.subnetwork
properties:
network: $(ref.gke-nat-network.selfLink)
ipCidrRange: 10.1.1.0/24
region: us-east1
########## NAT VM ##########
- name: nat-vm
type: compute.v1.instance
properties:
zone: us-east1-b
canIpForward: true
tags:
items:
- nat-to-internet
      machineType: https://www.googleapis.com/compute/v1/projects/{{ env["project"] }}/zones/us-east1-b/machineTypes/f1-micro
disks:
- deviceName: boot
type: PERSISTENT
boot: true
autoDelete: true
initializeParams:
          sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-7-wheezy-v20150423
networkInterfaces:
      - network: projects/{{ env["project"] }}/global/networks/gke-nat-network
subnetwork: $(ref.nat-subnet.selfLink)
accessConfigs:
- name: External NAT
type: ONE_TO_ONE_NAT
metadata:
items:
- key: startup-script
value: |
#!/bin/sh
# --
# ---------------------------
# Install TCP DUMP
# Start nat; start dump
# ---------------------------
apt-get update
apt-get install -y tcpdump
apt-get install -y tcpick
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
nohup tcpdump -e -l -i eth0 -w /tmp/nat.pcap &
nohup tcpdump -e -l -i eth0 > /tmp/nat.txt &
echo 1 | tee /proc/sys/net/ipv4/ip_forward
########## FIREWALL RULES FOR NAT VM ##########
- name: nat-vm-firewall
type: compute.v1.firewall
properties:
allowed:
- IPProtocol : tcp
ports: []
sourceTags:
- route-through-nat
network: $(ref.gke-nat-network.selfLink)
- name: nat-vm-ssh
type: compute.v1.firewall
properties:
allowed:
- IPProtocol : tcp
ports: [22]
sourceRanges:
- 0.0.0.0/0
network: $(ref.gke-nat-network.selfLink)
########## GKE CLUSTER CREATION ##########
- name: nat-gke-cluster
type: container.v1.cluster
metadata:
dependsOn:
- gke-nat-network
- gke-cluster-subnet
properties:
cluster:
name: nat-gke-cluster
initialNodeCount: 1
network: gke-nat-network
subnetwork: gke-cluster-subnet
nodeConfig:
machineType: n1-standard-4
tags:
- route-through-nat
zone: us-east1-b
########## GKE MASTER ROUTE ##########
- name: master-route
type: compute.v1.route
properties:
destRange: $(ref.nat-gke-cluster.endpoint)
network: $(ref.gke-nat-network.selfLink)
    nextHopGateway: projects/{{ env["project"] }}/global/gateways/default-internet-gateway
priority: 100
tags:
- route-through-nat
########## NAT ROUTE ##########
- name: gke-cluster-route-through-nat
metadata:
dependsOn:
- nat-gke-cluster
- gke-nat-network
type: compute.v1.route
properties:
network: $(ref.gke-nat-network.selfLink)
destRange: 0.0.0.0/0
description: "route all other traffic through nat"
nextHopInstance: $(ref.nat-vm.selfLink)
tags:
- route-through-nat
priority: 800
</code></pre>
<p>When I try to pull and start a docker image I get:</p>
<blockquote>
<p>ImagePullBackOff error Google Kubernetes Engine</p>
</blockquote>
<p>When I do kubectl describe pod I get:</p>
<blockquote>
<p>Failed to pull image : rpc error: code = Unknown desc = unauthorized: authentication required</p>
</blockquote>
<p>Edit:</p>
<p>I have found out that the gcloud console command has changed since v1.10
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes</a></p>
<p>Basically certain roles are not allowed by default for these clusters which includes pulling an image from google storage.</p>
<p>I am still having trouble figuring out how to assign these roles while using</p>
<blockquote>
<p>gcloud deployment-manager deployments create gke-with-nat --config gke-with-nat-route.yml</p>
</blockquote>
|
Apothan
|
<p>So the reason the container images were not being pulled is that GKE clusters changed how they handle permissions. New clusters used to be granted the 'storage-ro' scope, allowing them to pull container images from the container registry; see <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/access-scopes</a>.</p>
<p>I had to add scopes to the YML cluster deployment as I create my deployment using</p>
<blockquote>
<p>gcloud deployment-manager deployments create gke-with-nat --config gke-with-nat-route.yml</p>
</blockquote>
<p>The new YML included these settings</p>
<pre><code>nodeConfig:
serviceAccount: thisuser@project-id.iam.gserviceaccount.com
oauthScopes:
- https://www.googleapis.com/auth/devstorage.read_only
</code></pre>
<p>If you are using cluster create I think you can use</p>
<blockquote>
<p>gcloud container clusters create example-cluster --scopes scope1,scope2</p>
</blockquote>
<p>If you are using the website UI I think you can choose to use the legacy setting using a checkbox in the UI. I am not sure how long this will be supported.</p>
|
Apothan
|
<p>I have a pod which keeps restarting because of failed liveness probes:</p>
<pre><code>Events:
... Container ... failed liveness probe, will be restarted
</code></pre>
<p>I suspect the liveness timeout of 1 second is the issue here.</p>
<p>The liveness probe is defined as follows:</p>
<pre><code>livenessProbe:
httpGet:
path: /health
port: 80
scheme: HTTP
</code></pre>
<p>Is there a way to simulate the HTTP request with <code>kubectl</code> to take a few samples on the response time?</p>
<p><code>exec</code>ing into the container and running <code>curl</code> is not an option, because the container is distroless.</p>
<p>Felix</p>
|
flix
|
<p>You can port-forward to the container with kubectl and then query the health API endpoint directly.</p>
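<p>With the port-forward running, a few response-time samples can be taken with a short script. The sketch below spins up a local stand-in server so it is self-contained; against the real pod you would set <code>url</code> to the forwarded address, e.g. <code>http://localhost:8080/health</code>:</p>

```python
import http.server
import threading
import time
import urllib.request

# Stand-in health endpoint; with a real pod you would instead run
# `kubectl port-forward <pod> 8080:80` and point `url` at localhost:8080/health.
class Health(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):  # keep request logging quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Health)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/health" % server.server_address[1]

# Take a few samples of the round-trip time to /health.
samples = []
for _ in range(5):
    start = time.perf_counter()
    urllib.request.urlopen(url).read()
    samples.append(time.perf_counter() - start)

print(["%.3fs" % s for s in samples])
server.shutdown()
```

<p>Any sample near or above 1 second would explain the failing probes, since the probe's <code>timeoutSeconds</code> defaults to 1.</p>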
<p>Also, relax your <code>livenessProbe</code> while you investigate by setting <code>initialDelaySeconds: 120</code> (in the <code>livenessProbe</code> object, outside the <code>httpGet</code>).</p>
<p>Like so:</p>
<pre class="lang-yaml prettyprint-override"><code>livenessProbe:
initialDelaySeconds: 120
httpGet:
path: /health
port: 80
scheme: HTTP
</code></pre>
|
Tom Klino
|
<p>I am new to Kubernetes and studying it out of my own interest. I am trying to create Jenkins jobs to deploy our application. I have one master and one worker machine; both are up and running, and I can ping each machine from the other.</p>
<p>As of now, I don't have any pods or deployments; my cluster is a fresh setup.
Right now the Jenkinsfile contains a Pod Template for Node.js and Docker, with a single stage to install NPM modules.</p>
<pre><code>def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(
cloud: 'kubernetes',
namespace: 'test',
imagePullSecrets: ['regcred'],
label: label,
containers: [
containerTemplate(name: 'nodejs', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'docker', image: 'nodejscn/node:latest', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'kubectl', image: 'k8spoc1/kubctl:latest', ttyEnabled: true, command: 'cat')
],
volumes: [
hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
hostPathVolume(hostPath: '/root/.m2/repository', mountPath: '/root/.m2/repository')
]
) {
node(label) {
def scmInfo = checkout scm
def image_tag
def image_name
sh 'pwd'
def gitCommit = scmInfo.GIT_COMMIT
def gitBranch = scmInfo.GIT_BRANCH
def commitId
commitId= scmInfo.GIT_COMMIT[0..7]
image_tag = "${scmInfo.GIT_BRANCH}-${scmInfo.GIT_COMMIT[0..7]}"
stage('NPM Install') {
container ('nodejs') {
withEnv(["NPM_CONFIG_LOGLEVEL=warn"]) {
sh 'npm install'
}
}
}
}
}
</code></pre>
<p>Now the question is: if I run the Jenkins job with the above code, will the Docker and Node.js images mentioned above be downloaded from the Docker registry and saved onto my local machine? How does this work? Can someone please explain?</p>
|
tp.palanisamy thangavel
|
<p>The above code is for jenkins plugin <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">https://github.com/jenkinsci/kubernetes-plugin</a>.</p>
<p>Running the above Jenkins pipeline would run the job on an agent or on the master, and the images would be downloaded to that agent/master. The above plugin is used to set up Jenkins agents, so if there are no agents, the job would run on the master.</p>
|
Shambu
|
<p>Is this a valid imperative command for creating a job? </p>
<pre><code>kubectl create job my-job --image=busybox
</code></pre>
<p>I see this in <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands</a>. But the command is not working. I am getting the error below:</p>
<blockquote>
<p>Error: unknown flag: --image</p>
</blockquote>
<p>What is the correct imperative command for creating job?</p>
|
Rad4
|
<p>Older kubectl versions do not support <code>kubectl create job</code> directly (your error shows yours is one of them); on those, try this one:</p>
<pre><code>kubectl create cronjob my-job --schedule="0,15,30,45 * * * *" --image=busybox
</code></pre>
|
ERK
|
<p>I have a problem with Kubernetes running in a CentOS virtual machine in CloudStack. My pods remain in the Pending state.
I get the following error message when I print the log for a pod:</p>
<pre><code> [root@kubernetes-master ~]# kubectl logs wildfly-rc-6a0fr
Error from server: Internal error occurred: Pod "wildfly-rc-6a0fr" in namespace "default" : pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending"
</code></pre>
<p>If I launch describe command on the pod, this is the result:</p>
<pre><code>[root@kubernetes-master ~]# kubectl describe pod wildfly-rc-6a0fr
Name: wildfly-rc-6a0fr
Namespace: default
Image(s): jboss/wildfly
Node: kubernetes-minion1/
Start Time: Sun, 03 Apr 2016 15:00:20 +0200
Labels: name=wildfly
Status: Pending
Reason:
Message:
IP:
Replication Controllers: wildfly-rc (2/2 replicas created)
Containers:
wildfly-rc-pod:
Container ID:
Image: jboss/wildfly
Image ID:
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Ready: False
Restart Count: 0
Environment Variables:
Volumes:
default-token-0dci1:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-0dci1
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
8m 8m 1 {kubelet kubernetes-minion1} implicitly required container POD Pulled Container image "registry.access.redhat.com/rhel7/pod-infrastructure:latest" already present on machine
8m 8m 1 {kubelet kubernetes-minion1} implicitly required container POD Created Created with docker id 97c1a3ea4aa5
8m 8m 1 {kubelet kubernetes-minion1} implicitly required container POD Started Started with docker id 97c1a3ea4aa5
8m 8m 1 {kubelet kubernetes-minion1} spec.containers{wildfly-rc-pod} Pulling pulling image "jboss/wildfly"
</code></pre>
<p>Kubelet has some errors that I print below. Could this be because the VM has only 5GB of storage?</p>
<pre><code>systemctl status -l kubelet
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since lun 2016-04-04 08:08:59 CEST; 9min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2112 (kubelet)
Memory: 39.3M
CGroup: /system.slice/kubelet.service
└─2112 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://kubernetes-master:8080 --address=0.0.0.0 --allow-privileged=false --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
apr 04 08:13:33 kubernetes-minion1 kubelet[2112]: W0404 08:13:33.877859 2112 kubelet.go:1690] Orphaned volume "167d0ead-fa29-11e5-bddc-064278000020/default-token-0dci1" found, tearing down volume
apr 04 08:13:53 kubernetes-minion1 kubelet[2112]: W0404 08:13:53.887279 2112 kubelet.go:1690] Orphaned volume "9f772358-fa2b-11e5-bddc-064278000020/default-token-0dci1" found, tearing down volume
apr 04 08:14:35 kubernetes-minion1 kubelet[2112]: I0404 08:14:35.341994 2112 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
apr 04 08:14:35 kubernetes-minion1 kubelet[2112]: E0404 08:14:35.397168 2112 manager.go:1867] Failed to create pod infra container: impossible: cannot find the mounted volumes for pod "wildfly-rc-oroab_default"; Skipping pod "wildfly-rc-oroab_default"
apr 04 08:14:35 kubernetes-minion1 kubelet[2112]: E0404 08:14:35.401583 2112 pod_workers.go:113] Error syncing pod 167d0ead-fa29-11e5-bddc-064278000020, skipping: impossible: cannot find the mounted volumes for pod "wildfly-rc-oroab_default"
apr 04 08:14:58 kubernetes-minion1 kubelet[2112]: E0404 08:14:58.076530 2112 manager.go:1867] Failed to create pod infra container: impossible: cannot find the mounted volumes for pod "wildfly-rc-1aimv_default"; Skipping pod "wildfly-rc-1aimv_default"
apr 04 08:14:58 kubernetes-minion1 kubelet[2112]: E0404 08:14:58.078292 2112 pod_workers.go:113] Error syncing pod 9f772358-fa2b-11e5-bddc-064278000020, skipping: impossible: cannot find the mounted volumes for pod "wildfly-rc-1aimv_default"
apr 04 08:15:23 kubernetes-minion1 kubelet[2112]: W0404 08:15:23.879138 2112 kubelet.go:1690] Orphaned volume "56257e55-fa2c-11e5-bddc-064278000020/default-token-0dci1" found, tearing down volume
apr 04 08:15:28 kubernetes-minion1 kubelet[2112]: E0404 08:15:28.574574 2112 manager.go:1867] Failed to create pod infra container: impossible: cannot find the mounted volumes for pod "wildfly-rc-43b0f_default"; Skipping pod "wildfly-rc-43b0f_default"
apr 04 08:15:28 kubernetes-minion1 kubelet[2112]: E0404 08:15:28.581467 2112 pod_workers.go:113] Error syncing pod 56257e55-fa2c-11e5-bddc-064278000020, skipping: impossible: cannot find the mounted volumes for pod "wildfly-rc-43b0f_default"
</code></pre>
<p>Could someone, kindly, help me?</p>
|
DarkSkull
|
<p>Run the command below to get the events. This will show why the pod has not been scheduled (along with all other events).</p>
<pre><code>kubectl get events
</code></pre>
|
Shambu
|
<p>I'm trying to find the above information but I cannot seem to find anything concrete.</p>
<p>I've tried looking at <a href="https://github.com/kubernetes/kubernetes" rel="nofollow noreferrer">k8s source code</a> but it's a bit hard to find relevant bits there and I'm not sure I can rely on such information going forward.</p>
<p>The reason for this is that I'd like to extract certain bits of data about pods without calling the API.</p>
<p>So having:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
grafana-79dcc6469f-zzgmh 2/2 Running 2 4d
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
collection-grafana-79dcc6469f 1 1 1 4d1h
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
collection-grafana 1/1 1 1 4d1h
</code></pre>
<p>I'd like to extract <code>zzgmh</code> as the pod ID and <code>79dcc6469f</code> as the replica set ID.</p>
|
Patryk
|
<blockquote>
<p>Where can I find information about kubernetes ID spec i.e. pod suffix ID length?</p>
</blockquote>
<p>The <code>simpleNameGenerator</code> function is used for name generation in many places.</p>
<pre class="lang-golang prettyprint-override"><code>$ cat kubernetes/staging/src/k8s.io/apiserver/pkg/storage/names/generate.go
var SimpleNameGenerator NameGenerator = simpleNameGenerator{}
const (
// TODO: make this flexible for non-core resources with alternate naming rules.
maxNameLength = 63
randomLength = 5
maxGeneratedNameLength = maxNameLength - randomLength
)
func (simpleNameGenerator) GenerateName(base string) string {
if len(base) > maxGeneratedNameLength {
base = base[:maxGeneratedNameLength]
}
return fmt.Sprintf("%s%s", base, utilrand.String(randomLength))
}
</code></pre>
<blockquote>
<p>replicaSet suffix ID length</p>
</blockquote>
<p><a href="https://github.com/kubernetes/kubernetes/pull/51538/files" rel="nofollow noreferrer">Since 1.8+</a>, the names have a <a href="https://github.com/DataDog/dd-agent/pull/3563/files#diff-39e718425aecc2d45eb92af886071ad2R390" rel="nofollow noreferrer">suffix</a> about 10 runes long (consonants + numbers).
That is controlled via <code>func ControllerRevisionName</code> from the <code>kubernetes/pkg/controller/history/controller_history.go</code> file.</p>
<pre class="lang-golang prettyprint-override"><code>$ cat pkg/controller/history/controller_history.go
// ControllerRevisionName returns the Name for a ControllerRevision in the form prefix-hash. If the length
// of prefix is greater than 223 bytes, it is truncated to allow for a name that is no larger than 253 bytes.
func ControllerRevisionName(prefix string, hash string) string {
if len(prefix) > 223 {
prefix = prefix[:223]
}
return fmt.Sprintf("%s-%s", prefix, rand.SafeEncodeString(strconv.FormatInt(int64(hash), 10)))
}
</code></pre>
<p>Additionally, if you are about to introduce your own changes to <code>ReplicaSet</code> and <code>Pod</code> naming, you might want to look at least into the <code>pkg/controller/deployment/sync.go</code> file, which describes the usage of the <code>podTemplateSpecHash</code>.</p>
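<p>Putting the two quoted generators together, the structure of a Deployment-owned pod name can be teased apart client-side. The sketch below relies only on the suffix lengths quoted above (5-character random pod suffix, roughly 8–10 character pod-template hash); the name format is not a stable API contract, so treat this as a heuristic, not an official parser:</p>

```python
import re

# <deployment>-<pod-template-hash (8-10 chars)>-<random suffix (5 chars)>
POD_NAME_RE = re.compile(
    r"^(?P<deploy>.+)-(?P<rs_hash>[a-z0-9]{8,10})-(?P<suffix>[a-z0-9]{5})$"
)

def parse_pod_name(name):
    """Split a pod name into (deployment, replicaset hash, random suffix),
    or return None when the name does not follow that shape."""
    m = POD_NAME_RE.match(name)
    if m is None:
        return None
    return m.group("deploy"), m.group("rs_hash"), m.group("suffix")

print(parse_pod_name("grafana-79dcc6469f-zzgmh"))
# ('grafana', '79dcc6469f', 'zzgmh')
```

<p>Names that do not belong to a Deployment (for example a StatefulSet pod like <code>consul-server-0</code>) simply fail the match, which is a useful sanity check.</p>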
<p>Additionally you might want to check <a href="https://github.com/wercker/stern" rel="nofollow noreferrer">Stern</a> and <a href="https://github.com/thecasualcoder/kube-fzf" rel="nofollow noreferrer">kube-fzf</a> tools if all you need is to navigate through pods quickly or to tail logs.</p>
|
Nick
|
<p>I created the csr and approved it - </p>
<pre><code>$ kubectl get csr
NAME AGE REQUESTOR CONDITION
parth-csr 28m kubernetes-admin Approved,Issued
</code></pre>
<p>Created the certificate using kubectl only with username parth and group devs</p>
<pre><code> Issuer: CN=kubernetes
Validity
Not Before: Dec 16 18:51:00 2019 GMT
Not After : Dec 15 18:51:00 2020 GMT
Subject: O=devs, CN=parth
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
</code></pre>
<p>Here, I want to do the authentication on the basis of group - devs.<br></p>
<p>Clusterrole.yaml is as follows - </p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: devs
rules:
- apiGroups: [""]
resources: ["nodes", "pods", "secrets", "pods", "pods/log", "configmaps", "services", "endpoints", "deployments", "jobs", "crontabs"]
verbs: ["get", "watch", "list"]
</code></pre>
<p>Clusterrolebinding.yaml as </p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: devs-clusterrolebinding
subjects:
- kind: Group
name: devs # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: devs
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Kubeconfig file is as follows -</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: XXXXXXXXXXXXX
server: https://XX.XX.XX.XX:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: parth
name: dev
current-context: "dev"
kind: Config
preferences: {}
users:
- name: parth
user:
client-certificate: /etc/kubernetes/access-credentials/parth/parth.crt
client-key: /etc/kubernetes/access-credentials/parth/parth.key
</code></pre>
<p>As I want to do auth using group only, I am getting the following error -</p>
<pre><code>$ kubectl get nodes
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>I am running k8s on bare-metal.
Group based auth reference from offical docs - <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding</a></p>
|
Parth Wadhwa
|
<p>I see you have given permissions to the group and not to a user. In that case you need to use impersonation as the group:</p>
<pre><code>kubectl get nodes --as-group=devs
</code></pre>
|
Shambu
|
<p>Given below is my command to install bitnami keycloak on my kubernetes cluster</p>
<pre><code>helm install kc --set auth.adminPassword=admin,auth.adminUser=admin,service.httpPort=8180 bitnami/keycloak -n my-namespace
</code></pre>
<p>I want to import realms (containing users, groups, clients and roles) into my Keycloak, but before I do that I need to enable the upload-scripts flag. Most of you might already know that on a standalone Keycloak installation we can do that using standalone.sh, as given below:</p>
<pre><code>bin/standalone.bat -Djboss.socket.binding.port-offset=10 -Dkeycloak.profile.featur
e.upload_scripts=enabled
</code></pre>
<p>Can someone help me achieve this with the <code>helm install</code> command by passing flags, just as I am doing for <code>auth.adminPassword=admin,auth.adminUser=admin,service.httpPort=8180</code>?</p>
<p>thanks in advance</p>
|
Girish Kumar
|
<p>In your Keycloak yaml file you need to add the field <code>extraEnvVars</code> and set the <code>KEYCLOAK_EXTRA_ARGS</code> environment variable as shown in the example below:</p>
<pre><code>keycloak:
enabled: true
auth:
adminUser: admin
adminPassword: secret
extraEnvVars:
- name: KEYCLOAK_EXTRA_ARGS
value: -Dkeycloak.profile.feature.upload_scripts=enabled
extraVolumeMounts:
...
</code></pre>
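<p>If you would rather keep everything on the <code>helm install</code> command line, the same environment variable can usually be passed with helm's <code>--set</code> index syntax. This is a sketch assuming your chart version exposes <code>extraEnvVars</code> (as recent Bitnami charts do) — verify against the values of the chart version you are installing:</p>

```shell
helm install kc bitnami/keycloak -n my-namespace \
  --set auth.adminUser=admin \
  --set auth.adminPassword=admin \
  --set service.httpPort=8180 \
  --set "extraEnvVars[0].name=KEYCLOAK_EXTRA_ARGS" \
  --set "extraEnvVars[0].value=-Dkeycloak.profile.feature.upload_scripts=enabled"
```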
<p>Bear in mind, however, that the <code>upload_scripts</code> feature will be removed from Keycloak in the future.</p>
<p>From <a href="https://www.keycloak.org/docs/latest/server_development/#using-keycloak-administration-console-to-upload-scripts" rel="nofollow noreferrer">Keycloak Documentation</a>:</p>
<blockquote>
<p>Ability to upload scripts through the admin console is deprecated and
will be removed in a future version of Keycloak</p>
</blockquote>
|
dreamcrash
|
<p>I am getting an error when I try to deploy a Kubernetes resource, as below:</p>
<pre><code>suv@Suvankars-MacBook-Pro[8:50:09]:~/thermeon/gke-staging-envs/charts$ helm install --name=postfix postfix
NAME: postfix
LAST DEPLOYED: Sun Jul 12 20:50:15 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME AGE
postfix 2s
==> v1beta2/Deployment
postfix 2s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
postfix-bdc88887f-4bp8q 0/1 ContainerCreating 0 2s
NOTES:
1. Get the application URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc -w postfix'
export SERVICE_IP=$(kubectl get svc --namespace default postfix -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:25
suv@Suvankars-MacBook-Pro[8:50:39]:~/thermeon/gke-staging-envs/charts$ kubectl get svc -w postfix
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postfix LoadBalancer 10.1.22.218 <pending> 25:31916/TCP 27s
^C% suv@Suvankars-MacBook-Pro[8:50:58]:~/thermeon/gke-staging-envs/charts$
suv@Suvankars-MacBook-Pro[8:50:59]:~/thermeon/gke-staging-envs/charts$
suv@Suvankars-MacBook-Pro[8:50:59]:~/thermeon/gke-staging-envs/charts$
suv@Suvankars-MacBook-Pro[8:51:03]:~/thermeon/gke-staging-envs/charts$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postfix-bdc88887f-4bp8q 0/1 CrashLoopBackOff 2 50s
suv@Suvankars-MacBook-Pro[8:51:21]:~/thermeon/gke-staging-envs/charts$ kubectl get logs postfix-bdc88887f-4bp8q
error: the server doesn't have a resource type "logs"
</code></pre>
|
Suvankar Chakraborty
|
<p><a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer">Kubectl's Official Documentation</a> covers kubectl syntax, describes the command operations, and provides common examples.</p>
<blockquote>
<p><code>logs</code> <code>kubectl logs POD [-c CONTAINER] [--follow] [flags]</code> Print the logs for a container in a pod.</p>
</blockquote>
<p>You can always check <code>kubectl</code>'s commands (and syntax examples) with commands like:</p>
<pre><code>$ kubectl --help
$ kubectl get --help
$ kubectl logs --help
</code></pre>
<p>and so on.</p>
<p>In this very case you need to run:</p>
<pre><code>kubectl logs postfix-bdc88887f-4bp8q
</code></pre>
<p>Hope that explains it and gives insight on where to get the further info.</p>
|
Nick
|
<p><strong>Environment</strong></p>
<p>I am running Cloud Composer cluster (composer-1.6.0-airflow-1.10.1) with 3 nodes with default yaml GKE files provided when creating composer environment. We have 3 worker celery nodes running 4 worker threads each (<code>celery.dag_concurrency</code>)</p>
<p><strong>The problem</strong></p>
<p>I have noticed that two celery worker pods are scheduled on the same cluster node (let's say node A), the third pod is on node B. Node C has some supporting pods running but its cpu and memory utilisation is marginal. </p>
<p>Previously, we used 10 worker threads per worker, which led to all three worker pods being scheduled on the same node, causing pods to be evicted every few minutes as the node ran out of memory. </p>
<p>I would expect that each pod is scheduled on a separate node for the best resource utilisation. </p>
<pre><code>GKE Master version - 1.11.10-gke.5
Total size - 3 nodes
Node spec:
Image type - Container-Optimised OS (cos)
Machine type - n1-standard-1
Boot disk type - Standard persistent disk
Boot disk size (per node) - 100 GB
Pre-emptible nodes - Disabled
</code></pre>
<p><strong>Workaround</strong></p>
<p>By default Cloud Composer doesn't specify requested memory for worker pods. By setting requested memory in such a way that prevents scheduling two worker pods on the same node kind of fixes the problem. In my case I set requested memory to <code>1.5Gi</code></p>
|
Pawel
|
<p>Cloud Composer's worker pods attempt to avoid co-scheduling by using pod anti-affinity during scheduling, but it is not always effective. For example, it is still possible for multiple pods to be scheduled on the same node when other nodes are not yet available (such as when the cluster is coming back online after a GKE version upgrade, or an Airflow upgrade, etc).</p>
<p>In these cases, the solution is to delete Airflow workers using the GKE workload interface, which leads to them being re-created and eventually balanced. Similarly, the evictions you've observed are somewhat disruptive, but also serve to eventually balance the workers across the nodes.</p>
<p>This is notably somewhat inconvenient, so it's being tracked in the public issue tracker under issue <a href="https://issuetracker.google.com/issues/136548942" rel="nofollow noreferrer">#136548942</a> as a feature request. I would recommend following along there.</p>
|
hexacyanide
|
<p>I want to configure a custom theme for login, register and forgot password pages in keycloak on kubernetes.</p>
<p>I am using the following url and configuration for keycloak on kubernetes.</p>
<p><a href="https://www.keycloak.org/getting-started/getting-started-kube" rel="nofollow noreferrer">https://www.keycloak.org/getting-started/getting-started-kube</a></p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: keycloak
labels:
app: keycloak
spec:
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: keycloak
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
namespace: default
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:12.0.4
env:
- name: KEYCLOAK_USER
value: "admin"
- name: KEYCLOAK_PASSWORD
value: "admin"
- name: PROXY_ADDRESS_FORWARDING
value: "true"
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
</code></pre>
<p>Please suggest any existing blog URL or existing solution.</p>
|
Santosh Shinde
|
<p>The approach that I have used in the past was to first create a .tar file (<em>e.g.,</em> <code>custom_theme.tar</code>) with the custom themes to be used in Keycloak, then mount a volume to the folder where the Keycloak themes are stored (<em>i.e.,</em> <code>/opt/jboss/keycloak/themes/my_custom_theme</code>), and copy the .tar file with the custom themes from a local folder into the Keycloak container.</p>
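<p>As a sketch, the <code>custom_theme.tar</code> can be produced from a local theme directory with standard tooling (equivalent to <code>tar -cf custom_theme.tar my_custom_theme/</code>; the directory name here is just an illustrative example):</p>

```python
import pathlib
import tarfile

def pack_theme(theme_dir, out_tar="custom_theme.tar"):
    """Archive a theme directory so its top-level folder name is preserved,
    matching what the initContainer's `tar -xvf` later unpacks."""
    theme_dir = pathlib.Path(theme_dir)
    with tarfile.open(out_tar, "w") as tar:
        for path in sorted(theme_dir.rglob("*")):
            # store paths relative to the theme directory's parent so the
            # archive contains e.g. my_custom_theme/login/theme.properties;
            # recursive=False because rglob already walks every entry
            tar.add(path, arcname=str(path.relative_to(theme_dir.parent)),
                    recursive=False)
    return out_tar
```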
<p>The helm chart folder structure:</p>
<pre><code>Chart.yaml custom_theme.tar templates values.yaml
</code></pre>
<p>The content of:</p>
<p><strong>values.yaml:</strong></p>
<pre><code>password: adminpassword
</code></pre>
<p>The template folder structure:</p>
<pre><code>customThemes-configmap.yaml ingress.yaml service.yaml
deployment.yaml secret.yaml
</code></pre>
<p>The content of:</p>
<p><strong>customThemes-configmap.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: customthemes-configmap
binaryData:
custom_theme.tar: |-
{{ .Files.Get "custom_theme.tar" | b64enc}}
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: keycloak
spec:
tls:
- hosts:
- keycloak-sprint01.demo
rules:
- host: keycloak-sprint01.demo
http:
paths:
- backend:
serviceName: keycloak
servicePort: 8080
</code></pre>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keycloak
labels:
app: keycloak
spec:
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: keycloak
type: LoadBalancer
</code></pre>
<p><strong>secret.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: keycloak-password
type: Opaque
stringData:
password: {{.Values.password}}
</code></pre>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
namespace: default
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:10.0.1
env:
- name: KEYCLOAK_USER
value: "admin"
- name: KEYCLOAK_PASSWORD
valueFrom:
secretKeyRef:
name: keycloak-password
key: password
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_VENDOR
value: "h2"
- name: JAVA_TOOL_OPTIONS
value: -Dkeycloak.profile.feature.scripts=enabled
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 8443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
volumeMounts:
- mountPath: /opt/jboss/keycloak/themes/my_custom_theme
name: shared-volume
initContainers:
- name: init-customtheme
image: busybox:1.28
command: ['sh', '-c', 'cp -rL /CustomTheme/custom_theme.tar /shared && cd /shared/ && tar -xvf custom_theme.tar && rm -rf custom_theme.tar']
volumeMounts:
- mountPath: /shared
name: shared-volume
- mountPath: /CustomTheme
name: theme-volume
volumes:
- name: shared-volume
emptyDir: {}
- name: theme-volume
configMap:
name: customthemes-configmap
</code></pre>
<hr />
<p>I am not claiming that this is the best way to do it, I am not an expert in Kubernetes or helm. A Git repo containing the aforementioned files can be found <a href="https://github.com/dreamcrash/StackOverflow-/tree/main/Keycloak/minikube" rel="noreferrer">here</a>.</p>
|
dreamcrash
|
<p>The microservices are getting registered to Eureka with the pod name as the hostname, which is causing an UnknownHostException when the Zuul API gateway tries to forward the request to the service.</p>
<p>The complete setup working fine with the docker-compose, the issues are happening when I am trying to use the Kubernetes.</p>
<p>For example, the order microservice is running with the pod name "<code>oc-order-6b64576f4b-7gp85</code>", and it registers itself to Eureka with "<code>oc-order-6b64576f4b-7gp85</code>" as the hostname, which causes: </p>
<pre><code>java.net.UnknownHostException: oc-order-6b64576f4b-7gp85: Name does not resolve
</code></pre>
<p>I am running just one instance of each micro-services as a POC and one instance of the Eureka server. How to fix, how micro-service can register them self with the right hostname.</p>
<p>I want to use the Eureka as discovery service as it is well integrated with spring boot and I do not want to do code changes for Kubernetes deployment.</p>
|
Krushna
|
<p>Add the below property to your spring properties file in each service project:</p>
<pre><code>eureka.instance.hostName=${spring.application.name}
</code></pre>
<p>or (if you want a different name than the spring application name)</p>
<pre><code>eureka.instance.hostName=<custom-host-name>
</code></pre>
<p>Spring boot will now register with the provided hostname in Eureka</p>
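<p>On Kubernetes specifically, another commonly used option is to register by pod IP instead of hostname, using the standard Spring Cloud Netflix property (shown here as a generic sketch, not something specific to this setup):</p>

```
eureka.instance.prefer-ip-address=true
```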
|
sajux
|
<p>I have a simple yaml file</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: transit-monitor
spec:
version: 7.9.2
http:
service:
spec:
type: LoadBalancer
nodeSets:
- name: elasticsearch
count: 1
config:
node.store.allow_mmap: false
</code></pre>
<p>I was noticing that when i say <code>kubectl get elasticsearch</code> it will display:</p>
<pre><code>NAME HEALTH NODES VERSION PHASE AGE
transit-monitor 7.9.2 8m18s
</code></pre>
<p>It doesn't give any health info or nodes.</p>
<p>I was trying to figure out what to do. I deleted the resources with a simple <code>kubectl delete elasticsearch</code> and then reapplied the yaml file, but it doesn't change anything.</p>
<p>Givens:</p>
<p>I am on a Mac, so I run this through <code>minikube</code>, which is fine as it connects to <code>kubectl</code>; I don't see any issue with that setup. My next step might be to delete and re-initialize the minikube VM.</p>
|
Fallenreaper
|
<p>I ended up deleting the minikube instance and went through the following steps before it was resolved.</p>
<pre><code>minikube delete
minikube start
# Added Definitions
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.1/all-in-one.yaml
kubectl apply -f .
</code></pre>
<p>Then everything was initializing as usual.</p>
|
Fallenreaper
|
<p>Is there any way to see the relationship between a Kubernetes v1.15.2 pod and its veth? Currently I can see the veths on the host but do not know which pod owns each one.</p>
<pre><code>vethe4297f4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
ether ba:01:db:4a:7d:d0 txqueuelen 0 (Ethernet)
RX packets 9999796 bytes 1671107011 (1.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9231477 bytes 2153738950 (2.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethf059d46: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
ether 6a:8f:a3:65:dd:4c txqueuelen 0 (Ethernet)
RX packets 11724557 bytes 5581499446 (5.1 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12847645 bytes 2142367255 (1.9 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethf9efebf: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
ether fa:c7:76:53:4a:36 txqueuelen 0 (Ethernet)
RX packets 11103657 bytes 2587046474 (2.4 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8993500 bytes 1816804215 (1.6 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
</code></pre>
<p>By the way, I am learning the flannel communication procedure from this architecture diagram:</p>
<p><a href="https://i.stack.imgur.com/ilGX5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ilGX5.png" alt="enter image description here"></a></p>
|
Dolphin
|
<blockquote>
<p>is there anyway to see the relationship of kubernetes v1.15.2 pod and veth?</p>
</blockquote>
<p>TL;DR :
Yes.<br />
There is a <a href="https://stackoverflow.com/questions/21724225">bunch of similar topics</a> on StackOverflow and even some <a href="https://github.com/micahculpepper/dockerveth" rel="noreferrer">scripts on Github</a>.</p>
<h1>Explanation:</h1>
<p>There is a <a href="https://dustinspecker.com/posts/how-do-kubernetes-and-docker-create-ip-addresses/" rel="noreferrer"> very good article</a> on Kubernetes (K8s) networking.</p>
<p>Oversimplified, "K8s networking" is handled by Linux’s network namespaces and virtual interfaces.</p>
<p>Below console output has been taken on my GKE cluster, but shall be applicable to standalone cluster as well.</p>
<pre><code>$ sudo ip link show | egrep "veth|docker" | awk -F":" '{print $1": "$2}'
3: docker0
5: vethcf35c1bb@if3
6: veth287168da@if3
7: veth5c70f15b@if3
11: veth62f193f7@if3
12: vetha38273b3@if3
14: veth240a8f81@if3
sudo docker ps --format '{{.ID}} {{.Names}} {{.Image}}' "$@" | wc -l
25
</code></pre>
<p>As you can see, I have 6 <code>veth</code>'s serving traffic for 25 docker containers. Let's find the <code>veth</code> that serves traffic for one of the pods.</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
server-go-7b57857cfb-p6m62 1/1 Running 0 7m41s
</code></pre>
<ol>
<li>Let's find the docker container id for the pod.</li>
</ol>
<pre><code>$ sudo docker ps --format '{{.ID}} {{.Pid}} {{.Names}} {{.Image}}' "$@" | grep POD_server
6aa1d952a9f3 k8s_POD_server-go-7b57857cfb-p6m62_default_02206a28-42e1-43a5-adb8-f6ab13258fb1_0 k8s.gcr.io/pause:3.1
</code></pre>
<ol start="2">
<li>Checking a <code>pid</code> for it:</li>
</ol>
<pre><code>$ sudo docker inspect --format '{{.State.Pid}}' 6aa1d952a9f3
4012085
</code></pre>
<ol start="3">
<li>Allowing system tools accessing the namespace of that <code>pid</code>:</li>
</ol>
<pre><code>$ sudo ln -sf /proc/${pid}/ns/net /var/run/netns/ns-${pid}
</code></pre>
<pre><code>#in my case the commands were :
$ if [ ! -d /var/run/netns ]; then sudo mkdir -p /var/run/netns; fi
$ sudo ln -sf /proc/4012085/ns/net /var/run/netns/ns-4012085
</code></pre>
<pre><code>$ sudo ip netns exec "ns-4012085" ip link show type veth | grep "eth0"
3: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP mode DEFAULT group default
</code></pre>
<ol start="4">
<li>Checking exact interface that serves traffic for the container.</li>
</ol>
<p>From that output (<code>eth0@if14</code>) we can say that the <code>eth0</code> of the <code>6aa1d952a9f3</code> docker container is linked to interface <code>14: veth240a8f81@if3</code> on the host machine.</p>
<p>Based on this example you can write your own script to match <code>veth</code> interfaces to Pods, containers, etc.</p>
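<p>For instance, the core of such a script is just extracting the peer interface index from the <code>eth0@ifN</code> part of the <code>ip link</code> output shown above; a minimal parsing sketch (the sample line below is illustrative):</p>

```python
import re

# Matches lines like "3: eth0@if14: <BROADCAST,...> mtu 1460 ..."
LINK_RE = re.compile(r"^(?P<idx>\d+):\s+(?P<name>[^@:]+)@if(?P<peer>\d+):")

def peer_ifindex(ip_link_line):
    """Return (interface name, peer ifindex) for one `ip link show` line.

    The container's eth0 is peered with the host veth whose own index
    equals this peer value (14 -> "14: veth240a8f81@if3" in the example).
    Lines without an "@ifN" part (e.g. loopback) return None."""
    m = LINK_RE.match(ip_link_line.strip())
    if m is None:
        return None
    return m.group("name"), int(m.group("peer"))

line = "3: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP"
print(peer_ifindex(line))  # ('eth0', 14)
```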
<p>Hope that helps.</p>
|
Nick
|
<p>We have installed Consul through Helm charts on a k8s cluster. Here, I have deployed one Consul server and the rest are Consul agents.</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
</code></pre>
<p>We see that the nodes are registered onto the Consul Server. <a href="http://XX.XX.XX.XX/ui/kube/nodes" rel="nofollow noreferrer">http://XX.XX.XX.XX/ui/kube/nodes</a></p>
<p>We have deployed a hello-world application onto the k8s cluster. This will bring up Hello-World </p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
sampleapp-69bf9f84-ms55k 2/2 Running 0 4h
</code></pre>
<p>Below is the yaml file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sampleapp
spec:
replicas: 1
selector:
matchLabels:
app: sampleapp
template:
metadata:
labels:
app: sampleapp
annotations:
"consul.hashicorp.com/connect-inject": "true"
spec:
containers:
- name: sampleapp
image: "docker-dev-repo.aws.com/sampleapp-java/helloworld-service:a8c9f65-65"
ports:
- containerPort: 8080
name: http
</code></pre>
<p>After a successful deployment of sampleapp, I see that sampleapp-proxy is registered in Consul and listed in the Kubernetes services. (This is because toConsul and toK8S are passed as true during installation.)</p>
<pre><code>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.test <none> 4h
consul-connect-injector-svc ClusterIP XX.XX.XX.XX <none> 443/TCP 4h
consul-dns ClusterIP XX.XX.XX.XX <none> 53/TCP,53/UDP 4h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 4h
consul-ui LoadBalancer XX.XX.XX.XX XX.XX.XX.XX 80:32648/TCP 4h
dns-test-proxy ExternalName <none> dns-test-proxy.service.test <none> 2h
fluentd-gcp-proxy ExternalName <none> fluentd-gcp-proxy.service.test <none> 33m
kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5d
sampleapp-proxy ExternalName <none> sampleapp-proxy.service.test <none> 4h
</code></pre>
<p>How can I access my sampleapp? Should I expose my application as kube service again? </p>
<p>Earlier, without Consul, we used to create a service for the sampleapp and expose the service as an ingress. Using the ingress load balancer, we used to access our application. </p>
|
Sunil Gajula
|
<p>Consul does not provide any new ways to expose your apps. You need to create an ingress/load balancer as before.</p>
|
dds
|
<p>I want one of my nodes to accept only certain kinds of pods.
So I wonder, is there a way to make a node accept only those pods with some specific labels? </p>
|
Jasonling
|
<p>You have two options: </p>
<ol>
<li><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature" rel="nofollow noreferrer">Node Affinity</a>: a property of Pods that attracts them to a set of nodes.</li>
<li><a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">Taints & Toleration</a>: the opposite of Node Affinity; taints allow a node to repel a set of Pods.</li>
</ol>
<p>Note that Node Affinity alone only steers the matching Pods towards the node; it does not stop other Pods from being scheduled there. To make a node accept <em>only</em> specific Pods, taint the node and add the matching toleration to those Pods.</p>
<p><strong>Using Node Affinity</strong> </p>
<ol>
<li><p>You need to label your nodes:
<code>kubectl label nodes node1 mylabel=specialpods</code></p></li>
<li><p>Then when you launch Pods specify the <code>affinity</code>: </p></li>
</ol>
<pre>
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: mylabel
operator: In
values:
- specialpods
containers:
- name: nginx-container
image: nginx
</pre>
<p><strong>Using Taint & Toleration</strong> </p>
<p>Taint & Toleration work together: you taint a node, and then specify the toleration for pod, only those Pods will be scheduled on node whose toleration "matches" taint: </p>
<ol>
<li><p>Taint: <code>kubectl taint nodes node1 mytaint=specialpods:NoSchedule</code></p></li>
<li><p>Add toleration in Pod Spec:</p></li>
</ol>
<pre>
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
tolerations:
- key: "mytaint"
operator: "Equal"
value: "specialpods"
effect: "NoSchedule"
containers:
- name: nginx-container
image: nginx
</pre>
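<p>If the goal is a node dedicated to these Pods only, the two mechanisms can be combined in one Pod spec. Here is a sketch reusing the label and taint names from the examples above:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod-dedicated
spec:
  tolerations:                  # lets the pod onto the tainted node
  - key: "mytaint"
    operator: "Equal"
    value: "specialpods"
    effect: "NoSchedule"
  affinity:                     # keeps the pod off every other node
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: mylabel
            operator: In
            values:
            - specialpods
  containers:
  - name: nginx-container
    image: nginx
```

<p>The toleration allows scheduling onto the tainted node, while the required node affinity prevents the Pod from landing anywhere else.</p>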
|
bits
|
<p>I have my current application batches developed on Spring Batch framework. Now I need to forklift the same to a Kubernetes platform for cloud nativity. Please help me with the following queries.</p>
<ol>
<li>How do I achieve auto scaling (HPA) for the spring batches.</li>
<li>Whether spring batch remote partitioning is the recommended approach for auto scaling in k8s
platform and any best practices for this approach. Like how to effectively scale-up and scale-down
etc.</li>
<li>What are the advantages of refactoring the current spring batch over Spring Cloud Task. Is this a
best practice for cloud compliance.</li>
</ol>
<p>Thanks</p>
<p><strong>UPDATE</strong><br />
When choosing Spring Batch remote partitioning, should worker containers be configured as a k8s Deployment (pods) or as k8s Jobs? Is there a recommended approach?</p>
<p>do we have hpa/autoscaling for k8s jobs ?</p>
<p>spring batch remote partitioning over k8s platform which is better - using MessagingPartitionHandler+k8s jobs(worker queue pattern) or (DeployerPartitionHandler+KubernetesTaskLauncher) ?</p>
|
Balu R
|
<blockquote>
<p>do we have hpa/autoscaling for k8s jobs ?</p>
</blockquote>
<p>No. There is a concept of Jobs Parallelism, but it is not quite the HPA.</p>
<p>If you have a continuous stream of background processing work to run (so work queue Jobs don't fit you), then consider running your background workers with a ReplicaSet instead, and consider running a background processing library such as <a href="https://github.com/resque/resque" rel="nofollow noreferrer">https://github.com/resque/resque</a>.</p>
<p>A ReplicaSet can also be a target for Horizontal Pod Autoscalers (HPA). That is, a ReplicaSet can be auto-scaled by an HPA.</p>
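<p>To illustrate, a minimal HPA sketch targeting a hypothetical <code>background-worker</code> Deployment (the name and thresholds are assumptions, not something from the question) could look like:</p>

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: background-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: background-worker   # hypothetical Deployment running the workers
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

<p>Jobs, by contrast, cannot be referenced by <code>scaleTargetRef</code>, because they do not implement the scale subresource; only scalable resources such as Deployments, ReplicaSets, and StatefulSets can.</p>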
|
Nick
|
<p>Asp.Net Core microservices on Docker/Kubernetes are disagreeing on the duration of inter-service calls between the caller and callee.</p>
<p>The caller logs can show anywhere from a few milliseconds up to 10 full seconds more than the callee. The problem worsens under heavy load, but is still present under light load. Many calls do agree between the caller and callee, but this discrepancy does happen frequent enough to make a real dent in performance overall.</p>
<p>The timestamps indicate that the time gap can either be <em>before</em> or <em>after</em> the callee has reported that its response is complete.</p>
<p><strong>Example logs (numbers from a real time discrepancy)</strong> </p>
<pre><code>ServiceB: [2018-10-11T22:41:41.374Z] S2S request complete to ServiceA, Duration: 11644
ServiceA: [2018-10-11T22:41:29.732Z] Request complete, Duration: 5
</code></pre>
<p><strong>Caller Timing (common class for all S2S calls)</strong> </p>
<pre><code>var timer = Stopwatch.StartNew();
var response = await _httpClientFactory.CreateClient().SendAsync(request);
timer.Stop();
Logger.Info($"S2S request complete to {service}, Duration: {timer.ElapsedMilliseconds}");
</code></pre>
<p><strong>Callee Timing (custom Asp.Net middleware)</strong></p>
<pre><code>var timer = Stopwatch.StartNew();
await _next(context);
timer.Stop();
Logger.Info($"Request complete, Duration: {timer.ElapsedMilliseconds}");
</code></pre>
<p>This middleware is registered as almost the first in the pipeline (second to only the ActivityId / TraceId middleware for log correlation).</p>
<p><strong>Troubleshooting Steps</strong> </p>
<ul>
<li>Not able to reproduce the issue on Windows development machine</li>
<li>Monitored CPU, Memory, Thread Count, GC Collects, Open Handles (all at reasonable levels)</li>
<li>Adjusted k8s spec CPU and Memory request / limit (various levels with some effect, but does not alleviate the problem)</li>
<li>Turned on Server GC with Environment Variable: COMPlus_gcServer=1</li>
<li>Issue occurs on services that are within resource limits and have not needed to autoscale</li>
<li>Changed to new Kestrel Socket Transport (instead of libuv)</li>
<li>Changed to new .Net Core 2.1 SocketsHttpHandler </li>
</ul>
<p><strong>System Topology</strong></p>
<p>Asp.Net Core 2.1 self-hosted Kestrel<br>
.Net Core 2.1.5 runtime<br>
Docker / Kubernetes 1.10.5<br>
K8s Addons: kube-proxy, weave, etcd, SkyDNS<br>
AWS c5.4xlarge </p>
<p><strong>Updates</strong> </p>
<ol>
<li>Found out that the time gap can sometimes be before or after the callee starts/completes</li>
</ol>
|
TylerOhlsen
|
<p>In this case, this issue was fixed by <em>removing</em> the k8s spec CPU limit. </p>
<p>Monitoring the <code>container_cpu_cfs_throttled_seconds_total</code> metric showed that one of the service containers was getting <em>paused</em> very frequently. These pauses were mostly on the caller side of the S2S calls, which increased the elapsed time reported by the caller.</p>
<p>Removing the CPU limit in the k8s spec prevents k8s from passing the <code>--cpu-quota</code> and <code>--cpu-period</code> <a href="https://docs.docker.com/config/containers/resource_constraints/#cpu" rel="nofollow noreferrer">docker parameters</a>, which are what control the container pauses.</p>
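<p>As a sketch (the values are illustrative, not from the original spec), a container resources section that keeps only a CPU request and a memory limit, so that no CFS quota is applied, might look like:</p>

```yaml
resources:
  requests:
    cpu: "500m"       # used for scheduling decisions, never throttles
    memory: "512Mi"
  limits:
    memory: "512Mi"   # no cpu limit here, so no --cpu-quota is set
```

<p>Keeping the CPU request preserves the scheduler's sizing information while avoiding throttling pauses.</p>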
|
TylerOhlsen
|
<p>My environment is that the Ignite client runs on Kubernetes and the Ignite server runs on a regular server.
In such an environment, TCP connections from the server to the client are not allowed,
so a CommunicationSpi (server -> client) connection cannot be established.
What I'm curious about is: what issues can occur when CommunicationSpi is not available?
And in this environment, is there a way to make a CommunicationSpi (server -> client) connection?</p>
|
Lee Changmyung
|
<p>In Kubernetes, the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> is used to communicate with pods.</p>
<p>The default service type in Kubernetes is ClusterIP</p>
<p>ClusterIP is an <strong>internal</strong> IP address reachable from inside of the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service.</p>
<p>To expose the pods outside the kubernetes cluster, you will need a k8s service of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a> type.</p>
<ul>
<li><p>NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A <code>ClusterIP</code> Service, to which the <code>NodePort</code> Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <code><NodeIP>:<NodePort></code> .</p>
<p>Please note that it is needed to have <strong>external</strong> IP address assigned to one of the nodes in cluster and a Firewall rule that allows ingress traffic to that port. As a result kubeproxy on Kubernetes node (the external IP address is attached to) will proxy that port to the pods selected by the service.</p>
</li>
<li><p>LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</p>
</li>
</ul>
<p>Alternatively it is possible to use <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>Ingress</code></a></p>
<p>There is a very good article on <a href="http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="nofollow noreferrer">acessing Kubernetes Pods from Outside of cluster</a> .</p>
<p>Hope that helps.</p>
<p>Edited on 09-Dec-2019</p>
<p>upon your comment I recall that it's possible to use hostNetwork and hostPort methods.</p>
<h3>hostNetwork</h3>
<p>The <code>hostNetwork</code> setting applies to the Kubernetes pods. When a pod is configured with <code>hostNetwork: true</code>, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
hostNetwork: true
containers:
- name: nginx
image: nginx
</code></pre>
<p>You can check that the application is running with: <code>curl -v http://kubenode01.example.com</code></p>
<p>Note that every time the pod is restarted Kubernetes can reschedule the pod onto a different node and so the application will change its IP address. Besides that two applications requiring the same port cannot run on the same node. This can lead to port conflicts when the number of applications running on the cluster grows.</p>
<p>What is the host networking good for? For cases where a direct access to the host networking is required.</p>
<h3>hostPort</h3>
<p>The <code>hostPort</code> setting applies to the Kubernetes containers. The container port will be exposed to the external network at <em>hostIP:hostPort</em>, where the <em>hostIP</em> is the IP address of the Kubernetes node where the container is running and the <em>hostPort</em> is the port requested by the user.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 8086
hostPort: 443
</code></pre>
<p>The hostPort feature allows to expose a single container port on the host IP. Using the hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section. The host IP can change when the container is restarted, two containers using the same hostPort cannot be scheduled on the same node.</p>
<p>What is the hostPort used for? For example, the nginx based <a href="https://github.com/kubernetes/ingress/tree/master/controllers/nginx" rel="nofollow noreferrer">Ingress controller</a> is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow the inbound traffic on these ports from the outside of the Kubernetes cluster.</p>
|
Nick
|
<p>I am using Google Cloud Composer and created a Composer environment. The Composer environment is ready (has a green tick); now I am trying to set variables used in the DAG Python code using Google Cloud Shell.</p>
<p>command to set variables:</p>
<pre><code> gcloud composer environments run test-environment \
--location us-central1 variables -- \
--set gcp_project xxx-gcp
</code></pre>
<p><strong>Exact error message:</strong></p>
<pre><code> ERROR: (gcloud.composer.environments.run) Desired GKE pod not found. If the environment was recently started, please wait and retry.
</code></pre>
<p>I tried following things as part of investigation, but got same error each time.
I have created a new environment using UI and not google shell commands.
I checked pods in kubernetes engine and all are green , did not see any issue.
I verified composer API, Billing kubernetes, all required API's are enabled.</p>
<p>I have 'Editor' role assigned.</p>
<p><strong>Added screenshots; the first time, I saw some failures</strong></p>
<p><a href="https://i.stack.imgur.com/bkF1M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bkF1M.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/SLfll.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SLfll.png" alt="enter image description here"></a></p>
<p><strong>Error with exit code 1:</strong>
The Google troubleshooting guide describes this as: if the exit code is 1, the container crashed because the application crashed.</p>
|
Tokci
|
<p>This is a side effect of Composer version 1.6.0 if you are using a <a href="https://github.com/google-cloud-sdk/google-cloud-sdk" rel="noreferrer">google-cloud-sdk</a> that is too old, because it now launches pods in namespaces other than <code>default</code>. The error you see is a result of <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/636d6c1b48f97c844aaa4cc2ddc8a5c2bb1be55e/lib/googlecloudsdk/command_lib/composer/util.py#L314" rel="noreferrer">looking for Kubernetes pods in the default namespace</a> and <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/636d6c1b48f97c844aaa4cc2ddc8a5c2bb1be55e/lib/googlecloudsdk/command_lib/composer/util.py#L275" rel="noreferrer">failing to find them</a>.</p>
<p>To fix this, run <code>gcloud components update</code>. If you cannot yet update, a workaround to execute Airflow commands is to manually SSH to a pod yourself and run <code>airflow</code>. To start, obtain GKE cluster credentials:</p>
<pre><code>$ gcloud container clusters get-credentials $COMPOSER_GKE_CLUSTER_NAME
</code></pre>
<p>Once you have the credentials, you should find which namespace the pods are running in (which you can also find using Cloud Console):</p>
<pre><code>$ kubectl get namespaces
NAME STATUS AGE
composer-1-6-0-airflow-1-9-0-6f89fdb7 Active 17h
default Active 17h
kube-public Active 17h
kube-system Active 17h
</code></pre>
<p>You can then SSH into any scheduler/worker pod, and run commands:</p>
<pre><code>$ kubectl exec \
--namespace=$NAMESPACE \
-it airflow-worker-569bc59df5-x6jhl airflow list_dags -r
</code></pre>
<p>You can also open a shell if you prefer:</p>
<pre><code>$ kubectl exec \
--namespace=$NAMESPACE \
-it airflow-worker-569bc59df5-x6jhl bash
airflow@airflow-worker-569bc59df5-x6jhl:~$ airflow list_dags -r
</code></pre>
<p>The failed <code>airflow-database-init-job</code> jobs are unrelated and will not cause problems in your Composer environment.</p>
|
hexacyanide
|
<p>It works on my Mac k8s instance, but not on my Raspberry Pi instance. Essentially I'm trying to set up a k8s cloud implementation of Pi-hole. That way, I can monitor it and keep it containerized, as opposed to running it outside the scope of the application. Ideally, I'm trying to containerize everything for cleanliness.</p>
<p>I am running on a 2 node Raspberry Pi 4, 4G /ea cluster.</p>
<p>When running the following file on my Mac it builds correctly, but on the Pi (named <code>master-pi</code>) it fails:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 44m default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Warning FailedScheduling 44m default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
</code></pre>
<p>The YAML i implemented was pretty simple seemingly:</p>
<pre><code>---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pihole-local-etc-volume
labels:
directory: etc
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local
local:
path: /home/pi/Documents/pihole/etc #Location where it will live.
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master-pi #docker-desktop # Hosthome where lives.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pihole-local-etc-claim
spec:
storageClassName: local
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi # Possibly update to 2Gi later.
selector:
matchLabels:
directory: etc
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pihole-local-dnsmasq-volume
labels:
directory: dnsmasq.d
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local
local:
path: /home/pi/Documents/pihole/dnsmasq #Location where it will live.
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- master-pi #docker-desktop # Hosthome where lives.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pihole-local-dnsmasq-claim
spec:
storageClassName: local
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
selector:
matchLabels:
directory: dnsmasq.d
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pihole
labels:
app: pihole
spec:
replicas: 1
selector:
matchLabels:
app: pihole
template:
metadata:
labels:
app: pihole
name: pihole
spec:
containers:
- name: pihole
image: pihole/pihole:latest
imagePullPolicy: Always
env:
- name: TZ
value: "America/New_York"
- name: WEBPASSWORD
value: "secret"
volumeMounts:
- name: pihole-local-etc-volume
mountPath: "/etc/pihole"
- name: pihole-local-dnsmasq-volume
mountPath: "/etc/dnsmasq.d"
volumes:
- name: pihole-local-etc-volume
persistentVolumeClaim:
claimName: pihole-local-etc-claim
- name: pihole-local-dnsmasq-volume
persistentVolumeClaim:
claimName: pihole-local-dnsmasq-claim
---
apiVersion: v1
kind: Service
metadata:
name: pihole
spec:
selector:
app: pihole
ports:
- port: 8000
targetPort: 80
name: pihole-admin
- port: 53
targetPort: 53
protocol: TCP
name: dns-tcp
- port: 53
targetPort: 53
protocol: UDP
name: dns-udp
externalIPs:
- 192.168.10.75 #Static IP I need to assign for the network.
</code></pre>
<p>Other notes:
I made sure I created the folders previously, and they are both chmod 777.
<code>df</code> produces:</p>
<pre><code>pi@master-pi:~/Documents/pihole$ df
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 383100 5772 377328 2% /run
/dev/mmcblk0p2 30450144 14283040 14832268 50% /
tmpfs 1915492 0 1915492 0% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 4096 0 4096 0% /sys/fs/cgroup
/dev/mmcblk0p1 258095 147696 110399 58% /boot/firmware
tmpfs 383096 116 382980 1% /run/user/1000
</code></pre>
<p>So I believe the location (<code>/home/pi/Documents/etc</code>) has the size needed: the claim is just 1Gi, and the disk is only half full, so ~15G is available.</p>
<p>I can give more information, but I am just confused as to why this is happening.</p>
|
Fallenreaper
|
<p>There were 2 things here to learn.</p>
<ol>
<li><p>Master nodes do not get pods scheduled onto them by default; they have enough going on just organizing the cluster. That said, a single-node cluster is both a Master and a Slave, whereas with 2 or more nodes, one is the Master and the rest are Slaves.</p>
</li>
<li><p>When assigning a path such as <code>/hello/world</code> for the volume in this case, Kubernetes will not create the path automatically on the host. This is really annoying, because with N pods you need ALL nodes to have that path, in case a pod is scheduled onto a different one. The master decides where pods run, so if it places a pod on a node that cannot handle it, the pod gets a backoff error. It is best to create the path on all nodes.</p>
</li>
</ol>
<p>The key takeaway is that I expected the cluster (master or otherwise) to create node paths automatically, which just isn't true. One would think that since it runs with root privileges it could say "mount this here", but it doesn't. I need to manually configure each node to have the required paths, otherwise provisioning errors occur.</p>
<p>If I need to spin up MORE nodes ad hoc, I need to ensure they are all provisioned accordingly, such as adding this particular path. You will need to add that to your own setup routine.</p>
<p>You can read more about hostPath for Volumes here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume</a></p>
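<p>As a side note, and only as a sketch (not something from the original setup), <code>hostPath</code> volumes support a <code>type: DirectoryOrCreate</code> setting that makes the kubelet create the directory on the node if it does not exist, which would avoid the manual pre-creation step described above:</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-etc-hostpath   # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/pi/Documents/pihole/etc
    type: DirectoryOrCreate   # kubelet creates the directory if it is absent
```

<p>The node-pinning caveat still applies: the directory is created only on the node where the pod actually lands.</p>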
<p>Websites state that hostPath is good for single-node clusters, but when dealing with production or more than one node, you should use NFS or some other storage mechanism.</p>
<p>Something else that would help is using Storage Classes for auto provisioning, which is what I personally wanted in the first place: <a href="https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/</a></p>
<p>It talks about how you define storage classes, as well as how to request a storage size of 30Gi, for example; that is then used with the claim instead. It is too late for me now, but I will attempt to write up a similar example for the base question.</p>
|
Fallenreaper
|
<p>I have a scenario where I have multiple mysql servers running in different namespaces in single kubernetes cluster. All mysql servers belong to different departments.</p>
<p>My requirement is I should be able to connect to different mysql servers using hostname, i.e.,</p>
<p>mysqlServerA running in ServerA namespace should be reachable from outside the cluster using:</p>
<pre><code>mysql -h mysqlServerA.mydomain.com -A
</code></pre>
<p>mysqlServerB running in ServerB namespace should be reachable from outside the cluster using:</p>
<pre><code>mysql -h mysqlServerB.mydomain.com -A
</code></pre>
<p>and so on.</p>
<p>I have tried TCP based routing using config maps of Nginx Ingress controller, where I am routing traffic from clients to different mysql servers by assigning different port numbers:</p>
<p>for mysqlServerA:</p>
<pre><code>mysql -h mysqlServer.mydomain.com -A -P 3301
</code></pre>
<p>for mysqlServerB:</p>
<pre><code>mysql -h mysqlServer.mydomain.com -A -P 3302
</code></pre>
<p>this works perfectly. But I want to know if hostname based routing is possible or not, because I don't want separate load balancer for each mysql service.</p>
<p>Thanks</p>
|
Ammar Taj
|
<p><strong>General info</strong></p>
<blockquote>
<p>I am routing traffic by different port numbers</p>
</blockquote>
<p>You are right, the reason for that is that connection to Mysql is done via TCP. That is why it is definitely not possible to have two simultaneous connections to two servers on the same <code>IP:port</code>.</p>
<p>Unlike HTTP, the TCP don't have headers that allows distinguishing the host the traffic shall be routed to. However, still there are at least two ways to achieve the functionality you'd like to achieve :) I'll describe that later.</p>
<blockquote>
<p>I want to know if hostname based routing is possible or not
I don't want separate load balancer for each mysql service.</p>
</blockquote>
<p>K8s allows <a href="http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="nofollow noreferrer">a few methods</a> for service to be reachable outside the cluster (namely <code>hostNetwork</code>, <code>hostPort</code>, <code>NodePort</code> , <code>LoadBalancer</code>, <code>Ingress</code> )</p>
<p>The <code>LoadBalancer</code> is the simplest way to serve traffic on LoadBalancerIP:port ; however, due to TCP nature of connection you'll have to use one <code>LoadBalancer</code> per one mysql instance.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: mysql
spec:
type: LoadBalancer
ports:
- port: 3306
selector:
name: my-mysql
</code></pre>
<p>The <code>NodePort</code> approach looks good, but it allows connecting only when you know the port (which can be tedious for clients).</p>
<p><strong>Proposed solutions</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer"><strong>External IPs</strong></a></li>
</ul>
<p>If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those <code>externalIPs</code>. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. <code>externalIPs</code> are not managed by Kubernetes and are the responsibility of the cluster administrator.</p>
<p>In the Service spec, <code>externalIPs</code> can be specified along with any of the <code>ServiceTypes</code>. In the example below, <code>mysql-1</code> can be accessed by clients on <code>1.2.3.4:3306</code> (<code>externalIP:port</code>) and <code>mysql-2</code> can be accessed by clients on <code>4.3.2.1:3306</code> </p>
<pre><code>$ cat stereo-mysql-3306.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql-1234-inst-1
spec:
selector:
app: mysql-prod
ports:
- name: mysql
protocol: TCP
port: 3306
targetPort: 3306
externalIPs:
- 1.2.3.4
---
apiVersion: v1
kind: Service
metadata:
name: mysql-4321-inst-1
spec:
selector:
app: mysql-repl
ports:
- name: mysql
protocol: TCP
port: 3306
targetPort: 3306
externalIPs:
- 4.3.2.1
</code></pre>
<p>Note: you need to have <code>1.2.3.4</code> and <code>4.3.2.1</code> assigned to your Nodes (and resolve <code>mysqlServerA</code> / <code>mysqlserverB</code> at <code>mydomain.com</code>to these IPs as well). I've tested that solution on my GKE cluster and it works :). </p>
<p>With that config, all requests for <code>mysqlServerA.mydomain.com:3306</code> that resolve to <code>1.2.3.4</code> are routed to the <code>Endpoints</code> of service <code>mysql-1234-inst-1</code> with the <code>app: mysql-prod</code> selector, and <code>mysqlServerB.mydomain.com:3306</code> (resolving to <code>4.3.2.1</code>) will be served by <code>app: mysql-repl</code>.</p>
<p>Of course it is possible to split that config for 2 namespaces (one namespace - one mysql - one service per one namespace).</p>
<ul>
<li><a href="https://itnext.io/use-helm-to-deploy-openvpn-in-kubernetes-to-access-pods-and-services-217dec344f13" rel="nofollow noreferrer"><strong>ClusterIP+OpenVPN</strong></a></li>
</ul>
<p>Taking into consideration that your mysql pods have ClusterIPs, it is possible to spawn additional VPN pod in cluster and connect to mysqls through it. </p>
<p>As a result, you can establish VPN connection and have access to all the cluster resources. That is very limited solution which requires establishing the VPN connection for anyone who needs access to mysql. </p>
<p>Good practice is to add a bastion server on top of that solution. That server will be responsible for providing access to cluster services via VPN.</p>
<p>Hope that helps. </p>
|
Nick
|
<p>How do I spin up a Cloud SQL proxy for a Cloud Composer cluster?</p>
<p>Currently we use Airflow to manage jobs and dynamic DAG creation. For this, a separate DAG checks a database table in PostgreSQL for existing rules; depending on whether a rule is active or inactive in PostgreSQL, we manually switch the corresponding dynamic DAGs on or off in Airflow. Now we are going to use Google's managed Cloud Composer, but the problem is that we don't have access to Cloud Composer's database. How can we use the Cloud SQL Proxy to resolve this problem?</p>
|
Aniruddha Dwivedi
|
<p>The Cloud Composer database is actually already accessible, because there is a Cloud SQL Proxy running within the environment's attached GKE cluster. You can use its service name <code>airflow-sqlproxy-service</code> to connect to it from within the cluster, using <code>root</code>. For example, on Composer 1.6.0, and if you have Kubernetes cluster credentials, you can list running pods:</p>
<pre><code>$ kubectl get po --all-namespaces
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-database-init-job-kprd5 0/1 Completed 0 1d
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-scheduler-78d889459b-254fm 2/2 Running 18 1d
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-worker-569bc59df5-x6jhl 2/2 Running 5 1d
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-worker-569bc59df5-xxqk7 2/2 Running 5 1d
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-worker-569bc59df5-z5lnj 2/2 Running 5 1d
default airflow-redis-0 1/1 Running 0 1d
default airflow-sqlproxy-668fdf6c4-vxbbt 1/1 Running 0 1d
default composer-agent-6f89fdb7-0a7a-41b6-8d98-2dbe9f20d7ed-j9d4p 0/1 Completed 0 1d
default composer-fluentd-daemon-g9mgg 1/1 Running 326 1d
default composer-fluentd-daemon-qgln5 1/1 Running 325 1d
default composer-fluentd-daemon-wq5z5 1/1 Running 326 1d
</code></pre>
<p>You can see that one of the worker pods is named <code>airflow-worker-569bc59df5-x6jhl</code>, and is running in the namespace <code>composer-1-6-0-airflow-1-9-0-6f89fdb7</code>. If I SSH to one of them and run the MySQL CLI, I have access to the database:</p>
<pre><code>$ kubectl exec \
-it airflow-worker-569bc59df5-x6jhl \
--namespace=composer-1-6-0-airflow-1-9-0-6f89fdb7 -- \
mysql \
-u root \
-h airflow-sqlproxy-service.default
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 27147
Server version: 5.7.14-google-log (Google)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
</code></pre>
<p>TL;DR for anything running in your DAGs, connect using <code>root@airflow-sqlproxy-service.default</code> with no password. This will connect to the Airflow metadata database through the Cloud SQL Proxy that's already running in your Composer environment.</p>
<hr>
<p>If you need to connect to a database that <em>isn't</em> the Airflow database running in Cloud SQL, then you can spin up another proxy by deploying a new proxy pod into GKE (like you would deploy anything else into a Kubernetes cluster).</p>
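<p>A minimal sketch of such a proxy Deployment (the instance connection name <code>my-project:us-central1:my-instance</code> is a placeholder, and credential setup via a service account key or Workload Identity is omitted here):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-cloudsql-proxy
  template:
    metadata:
      labels:
        app: my-cloudsql-proxy
    spec:
      containers:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.16
        command: ["/cloud_sql_proxy",
                  "-instances=my-project:us-central1:my-instance=tcp:0.0.0.0:3306"]
        ports:
        - containerPort: 3306
```

<p>You would typically put a ClusterIP Service in front of this Deployment so that DAG tasks can reach the proxy under a stable DNS name, just as <code>airflow-sqlproxy-service</code> works for the built-in proxy.</p>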
|
hexacyanide
|
<p>I'm new to Kubernetes and I'm trying to add a PVC in my <code>StatefulSet</code> on Minikube.
PV and PVC are shown here:</p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
neo4j-backups 5Gi RWO Retain Bound default/backups-claim manual 1h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
backups-claim Bound neo4j-backups 5Gi RWO manual 51m
</code></pre>
<p>Basically I want all pods of the StatefulSet to see the contents of that volume as backup files are stored there.</p>
<p>StatefulSet used can be found <a href="https://github.com/neo4j-contrib/kubernetes-neo4j/blob/master/cores/statefulset.yaml" rel="nofollow noreferrer">here</a></p>
<p>Minikube version:
<code>minikube version: v0.25.2</code><br />
Kubernetes version: <code>GitVersion:"v1.9.4"</code></p>
|
dimzak
|
<p>If you use <code>volumeClaimTemplates</code> in <code>StatefulSet</code> k8s will do dynamic provisioning & create one PVC and corresponding PV for each pod, so each one of them gets their own storage. </p>
<p>What you want is to create one PV and one PVC and use them in all replicas of the StatefulSet. </p>
<p>Below is an example on Kubernetes 1.10 of how you can do it, where <code>/var/www/html</code> will be shared by all three Pods; just change <code>/directory/on/host</code> to some local directory on your machine. I ran this example on minikube v0.26.0.</p>
<p>Of course, the below is just an example to illustrate the idea; in a real deployment the processes in the Pods should synchronize access to the shared storage. </p>
<hr>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv
spec:
capacity:
storage: 100Gi
# volumeMode field requires BlockVolume Alpha feature gate to be enabled.
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /directory/on/host
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minikube
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: example-local-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
---
apiVersion: "apps/v1beta1"
kind: StatefulSet
metadata:
name: nginx
spec:
serviceName: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx-container
image: "nginx:1.12.2"
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: localvolume
mountPath: /var/www/html
volumes:
- name: localvolume
persistentVolumeClaim:
claimName: example-local-claim
</code></pre>
|
bits
|
<p>I have a k8s cluster like below</p>
<pre>
#kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-ingress-controller-d78c45477-gxm59 1/1 Running 0 8d
pod/nginx-ingress-default-backend-5b967cf596-dc8ss 1/1 Running 0 8d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.245.0.1 &lt;none&gt; 443/TCP 9d
service/nginx-ingress-controller LoadBalancer 10.245.203.193 A.B.C.D 80:30033/TCP,443:31490/TCP 8d
service/nginx-ingress-default-backend ClusterIP 10.245.58.229 &lt;none&gt; 80/TCP 8d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-ingress-controller 1/1 1 1 8d
deployment.apps/nginx-ingress-default-backend 1/1 1 1 8d
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-ingress-controller-d78c45477 1 1 1 8d
replicaset.apps/nginx-ingress-default-backend-5b967cf596 1 1 1 8d
</pre>
<p>As above, I have an external ip A.B.C.D.</p>
<p>I also have two domains domainA.com and domainB.com.</p>
<p>My DNS setting is like below:</p>
<p>for domainA.com:</p>
<pre>
-----domain A----
A www.domainA.com A.B.C.D
-----domain B----
A www.domainB.com A.B.C.D
</pre>
<p>After I install two apps with helm</p>
<p>I got</p>
<pre>
# kubectl describe ingress
Name: app1
Namespace: default
Address: A.B.C.D
Default backend: default-http-backend:80 ()
Rules:
Host Path Backends
---- ---- --------
www.domainA.com
app1:80 (10.244.1.15:80,10.244.1.33:80)
Annotations:
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 10m nginx-ingress-controller Ingress default/app1
Normal UPDATE 9m48s nginx-ingress-controller Ingress default/app1
Name: app2
Namespace: default
Address: A.B.C.D
Default backend: default-http-backend:80 ()
Rules:
Host Path Backends
---- ---- --------
www.domainB.com
app2:80 (10.244.1.15:80,10.244.1.33:80)
Annotations:
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 8m24s nginx-ingress-controller Ingress default/app2
Normal UPDATE 7m49s nginx-ingress-controller Ingress default/app2
</pre>
<p>I don't know why the backends have two IPs.</p>
<p>www.domainA.com and www.domainB.com may route to the same IP (10.244.1.15:80), which I don't want.</p>
<p>I want a single external IP to route to different pods by host, like virtual servers:</p>
<pre>
www.domainA.com
app1:80 (10.244.1.15:800)
-----------------
www.domainB.com
app2:80 (10.244.1.33:80)
</pre>
<p>How could I fix my configuration?</p>
<p>Thank you</p>
|
Min-Yi Tsai
|
<p>If I understood the situation correctly, you would like to have a single external IP that domains A and B resolve to, a single Ingress, and two different apps (one per domain). At the moment the issue is that traffic is delivered to both apps instead of being routed properly. </p>
<p>Unfortunately, you haven't provided your Ingress and Services configs. That is why I'll have to explain all needed setup :)</p>
<p>What is needed here is to have 2 distinct apps (with different labels), 2 distinct services that route to the endpoints of each app, and a single Ingress that lists both domains.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: name-virtual-host-ingress
spec:
rules:
- host: domain-A
http:
paths:
      - backend:
          serviceName: service-a
          servicePort: 8080
- host: domain-B
http:
paths:
      - backend:
          serviceName: service-b
          servicePort: 8080
</code></pre>
<p>Here we've created an Ingress that routes traffic to 2 different services.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
selector:
app: nginx
...
</code></pre>
<p>I've omitted half of the service specs for clarity. The most important point is the "selector".</p>
<p>You need to create 2 services with 2 different selectors.
You can check the services and their corresponding Endpoints with <code>kubectl get svc -o wide</code> and <code>kubectl get ep</code>.</p>
<p>Needless to say, that both apps shall be deployed separately to have different labels.</p>
<p>Please check your config and compare with the above.</p>
<p>Hope that helps. Will be happy to elaborate further if needed.</p>
|
Nick
|
<p>We launched a Cloud Composer cluster and want to use it to move data from Cloud SQL (Postgres) to BQ. I followed the notes about doing this mentioned at these two resources:</p>
<p><a href="https://stackoverflow.com/questions/50154306/google-cloud-composer-and-google-cloud-sql">Google Cloud Composer and Google Cloud SQL</a></p>
<p><a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine</a></p>
<p>We launch a pod running the cloud_sql_proxy and a service to expose the pod. The problem is that Cloud Composer cannot see the service, reporting the following error when attempting an ad-hoc query to test:</p>
<p><code>could not translate host name "sqlproxy-service" to address: Name or service not known</code></p>
<p>Trying by the service IP address results in the page timing out.</p>
<p>The <code>-instances</code> passed to cloud_sql_proxy work when used in a local environment or cloud shell. The log files seem to indicate no connection is ever attempted.</p>
<pre><code>me@cloudshell:~ (my-proj)$ kubectl logs -l app=sqlproxy-service
2018/11/15 13:32:59 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2018/11/15 13:32:59 using credential file for authentication; email=my-service-account@service.iam.gserviceaccount.com
2018/11/15 13:32:59 Listening on 0.0.0.0:5432 for my-proj:my-ds:my-db
2018/11/15 13:32:59 Ready for new connections
</code></pre>
<p>I see a comment here <a href="https://stackoverflow.com/a/53307344/1181412">https://stackoverflow.com/a/53307344/1181412</a> that possibly this isn't even supported?</p>
<p><strong>Airflow</strong></p>
<p><a href="https://i.stack.imgur.com/TWWCZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TWWCZ.png" alt="enter image description here"></a></p>
<p><strong>YAML</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sqlproxy-service
namespace: default
labels:
app: sqlproxy
spec:
ports:
- port: 5432
protocol: TCP
targetPort: 5432
selector:
app: sqlproxy
sessionAffinity: None
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: sqlproxy
labels:
app: sqlproxy
spec:
selector:
matchLabels:
app: sqlproxy
template:
metadata:
labels:
app: sqlproxy
spec:
containers:
- name: cloudsql-proxy
ports:
- containerPort: 5432
protocol: TCP
image: gcr.io/cloudsql-docker/gce-proxy:latest
imagePullPolicy: Always
command: ["/cloud_sql_proxy",
"-instances=my-proj:my-region:my-db=tcp:0.0.0.0:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
securityContext:
runAsUser: 2 # non-root user
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
</code></pre>
|
JohnB
|
<p>The information you found in the answer you linked is correct - ad-hoc queries from the Airflow web server to cluster-internal services within the Composer environment are not supported. This is because the web server runs on App Engine flex using its own separate network (not connected to the GKE cluster), which you can see in the <a href="https://cloud.google.com/composer/docs/concepts/overview#architecture" rel="nofollow noreferrer">Composer architecture diagram</a>.</p>
<p>Since that is the case, your SQL proxy must be exposed on a public IP address for the Composer Airflow web server to connect to it. For any services/endpoints listening on RFC1918 addresses within the GKE cluster (i.e. not exposed on a public IP), you will need additional network configuration to accept external connections.</p>
<p>If this is a major blocker for you, consider running a <a href="https://cloud.google.com/composer/docs/how-to/managing/deploy-webserver" rel="nofollow noreferrer">self-managed Airflow web server</a>. Since this web server would run in the same cluster as the SQL proxy you set up, there would no longer be any issues with name resolution.</p>
|
hexacyanide
|
<p>I want to have a Kustomize manifest where the value for some attribute comes from the entire contents of a file or URI.</p>
<p>How can I do this?</p>
|
Ark-kun
|
<p>With kustomize, what you usually use is an <a href="https://github.com/kubernetes-sigs/kustomize#2-create-variants-using-overlays" rel="nofollow noreferrer">overlay and patches</a> (one or multiple files) that get merged into your base file; a patch overrides an attribute.
With those two features you predefine some probable manifest compositions and combine them right before you apply them to your cluster.</p>
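<p>As a rough sketch of that layout (the file names and the patch are hypothetical), an overlay's <code>kustomization.yaml</code> just points at the base and lists its patches:</p>
<pre><code># overlays/dev/kustomization.yaml (hypothetical paths)
bases:
- ../../base
patchesStrategicMerge:
- replica-patch.yaml   # e.g. overrides spec.replicas for this environment
</code></pre>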
<p>You can add or edit/set some specific attributes <a href="https://kubectl.docs.kubernetes.io/pages/app_management/container_images.html" rel="nofollow noreferrer">with patches</a> or with kustomize subcommands like so:</p>
<pre><code>kustomize edit set image your-registry.com:$image_tag
# just to identify version in metadata sections for service/deployment/pods - not just via image tag
kustomize edit add annotation appVersion:$image_tag
kustomize build . | kubectl -n ${your_ns} apply -f -
</code></pre>
<p>But if you want to have a single manifest file and manipulate the same attributes over and over again (on the fly), you should consider using <strong>helm's templating mechanism</strong>.
This is also an option if kustomize does not allow you to edit that single specific attribute you want to alter.</p>
<p>You just need a <em>values.yaml</em> file (containing key/value pairs) and a <em>template.yaml</em> file. You can pre-set some attributes in the <em>values.yaml</em> - on demand you can override them per CLI. The tool will generate you a k8s manifest with those values backed in.</p>
<p>template file:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ .Values.appSettings.appName }}
namespace: {{ .Values.appSettings.namespace }}
labels:
name: {{ .Values.appSettings.appName }}
spec:
replicas: 1
template:
metadata:
labels:
name: {{ .Values.appSettings.appName }}
spec:
containers:
- name: {{ .Values.appSettings.appName }}
image: "{{ .Values.appSettings.image }}"
ports:
- containerPort: 8080
[...]
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.appSettings.appName }}-svc
namespace: {{ .Values.appSettings.namespace }}
labels:
name: {{ .Values.appSettings.appName }}
spec:
ports:
- port: 8080
targetPort: 8080
selector:
name: {{ .Values.appSettings.appName }}
</code></pre>
<p>Values file:</p>
<pre><code>appSettings:
appName: your-fancy-app
appDomain: please_override_via_cli
namespace: please_override_via_cli
</code></pre>
<p>CLI:</p>
<pre><code>helm template \
  --set appSettings.image=your-registry.com/service:$(cat versionTag.txt) \
  --set appSettings.namespace=your-ns \
  --set appSettings.appDomain=your-domain.com \
  ./ -f ./values.yaml | kubectl apply -f -
</code></pre>
|
fuma
|
<p>I need to move from using the minio client to a docker image with the gcloud/gsutil and mysql images.</p>
<p>What I have currently:</p>
<ol>
<li>/tmp/mc alias set gcs1 <a href="https://storage.googleapis.com" rel="nofollow noreferrer">https://storage.googleapis.com</a> $ACCESS_KEY $SECRET_KEY</li>
<li>mysqldump --skip-lock-tables --triggers --routines --events --set-gtid-purged=OFF --single-transaction --host=$PXC_SERVICE -u root --all-databases | /tmp/mc pipe gcs1/mysql-test-dr/mdmpdb10.sql</li>
</ol>
<p>What I need to change to:
3. <something similar to line no.1 (authorization)>
4. mysqldump --skip-lock-tables --triggers --routines --events --set-gtid-purged=OFF --single-transaction --host=$PXC_SERVICE -u root --all-databases --skip-add-locks > $FILE_NAME && gsutil cp $FILE_NAME gs://$BUCKET_NAME/$FILE_NAME</p>
<p>Is there any replacement in gcloud/gsutil for the line no.1?
I was able to find gcloud auth activate-service-account [ACCOUNT] --key-file=[KEY_FILE]
But that would be the service account key. I need to authenticate to the bucket using hmac keys.</p>
|
Saloni Srivastava
|
<p>You'll need to generate a boto configuration file using <code>gsutil config -a</code>. If you also have gcloud auth credentials configured, you may have to tell gcloud to not pass those (non-HMAC) credentials to gsutil, as gsutil may not allow you to have multiple credential types active at once. You can do this by running this gcloud command:</p>
<pre><code>gcloud config set pass_credentials_to_gsutil false
</code></pre>
<p>This is all mentioned in the <code>gsutil config</code> docs page: <br />
<a href="https://cloud.google.com/storage/docs/gsutil/commands/config" rel="nofollow noreferrer">https://cloud.google.com/storage/docs/gsutil/commands/config</a></p>
|
mhouglum
|
<p>I use minikube on Windows 10 and am trying to create a Persistent Volume with the minikube dashboard. Below are the contents of my PV yaml file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: blog-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: blog-pv-claim
spec:
storageClassName: manual
volumeName: blog-pv
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi
</code></pre>
<p>But minikube dashboard throw the following errors.</p>
<pre><code>## Deploying file has failed
the server could not find the requested resource
</code></pre>
<p>But I can create the PV with kubectl by executing the following command:</p>
<pre><code>kubectl apply -f pod-pvc-test.yaml
</code></pre>
<p>For your information, the version of kubectl.exe is</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>How can I generate the Persistent Volume with minikube dashboard as well as kubectl command?</p>
<p><strong>== Updated Part==</strong></p>
<pre><code>> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
blog-pv 1Gi RWO Recycle Bound default/blog-pv-claim manual 5m1s
</code></pre>
|
Joseph Hwang
|
<p>I've managed to reproduce the issue you've been describing on my minikube with the <code>v2.0.0-beta8</code> dashboard.</p>
<pre><code>$ minikube version
minikube version: v1.9.1
$ kubectl version
Client Version: GitVersion:"v1.17.4"
Server Version: GitVersion:"v1.18.0"
</code></pre>
<p>Please note that the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">official guide</a> refers to <code>v2.0.0-beta8</code>, which is broken :). </p>
<p>Recently there were <a href="https://github.com/kubernetes/dashboard/issues/4999" rel="nofollow noreferrer">some fixes</a> for the broken functionality (they'd been merged to <code>master</code> branch).</p>
<p>Please <strong>update the version of the dashboard</strong> to at least <code>v2.0.0-rc6</code>. </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml
</code></pre>
<p>I was able to successfully create the PV and PVC (via the dashboard) from the yaml provided.</p>
<p>Hope that helps!</p>
|
Nick
|
<p>When I run the application locally the application is up and running but when i deploy the same application in the Kubernetes cluster i am getting the error</p>
<p><strong>Error</strong></p>
<pre><code>java.lang.NoClassDefFoundError: org/springframework/core/env/Profiles
at org.springframework.cloud.kubernetes.config.PropertySourceUtils.lambda$null$3(PropertySourceUtils.java:69)
at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:239)
at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:167)
at org.springframework.beans.factory.config.YamlProcessor.process(YamlProcessor.java:139)
at org.springframework.beans.factory.config.YamlPropertiesFactoryBean.createProperties(YamlPropertiesFactoryBean.java:135)
at org.springframework.beans.factory.config.YamlPropertiesFactoryBean.getObject(YamlPropertiesFactoryBean.java:115)
at org.springframework.cloud.kubernetes.config.PropertySourceUtils.lambda$yamlParserGenerator$4(PropertySourceUtils.java:77)
at java.util.function.Function.lambda$andThen$1(Function.java:88)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.processAllEntries(ConfigMapPropertySource.java:149)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.getData(ConfigMapPropertySource.java:100)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySource.<init>(ConfigMapPropertySource.java:78)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.getMapPropertySourceForSingleConfigMap(ConfigMapPropertySourceLocator.java:96)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.lambda$locate$0(ConfigMapPropertySourceLocator.java:79)
at java.util.ArrayList.forEach(ArrayList.java:1259)
at org.springframework.cloud.kubernetes.config.ConfigMapPropertySourceLocator.locate(ConfigMapPropertySourceLocator.java:78)
at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:94)
at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:628)
at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:364)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:305)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230)
at com.daimler.daivb.msl.MbappsSnapLocalSearchServiceApplication.main(MbappsSnapLocalSearchServiceApplication.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: java.lang.ClassNotFoundException: org.springframework.core.env.Profiles
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:93)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
</code></pre>
<p>Dependencies I am using in the application are</p>
<ol>
<li>spring-boot-starter-web - 2.0.8.RELEASE</li>
<li>gson - 2.3.1</li>
<li>json-lib - 2.3</li>
<li>spring-cloud-starter-kubernetes-config -1.1.10.RELEASE</li>
<li>json - 20230227</li>
<li>xmlrpc-client - 3.1.3</li>
<li>spring-security-oauth2-autoconfigure - 2.0.8.RELEASE</li>
<li>spring-security-config</li>
<li>spring-security-web</li>
<li>spring-cloud-starter-openfeign - 2.0.0.RELEASE</li>
<li>spring-cloud-starter-netflix-ribbon - 2.0.0.RELEASE</li>
<li>spring-boot-starter-actuator</li>
<li>commons-lang3 - 3.8.1</li>
<li>lombok</li>
<li>spring-cloud-starter-config - 2.0.3.RELEASE</li>
<li>micrometer-registry-prometheus - 1.2.2</li>
<li>micrometer-core - 1.2.2</li>
<li>spring-boot-starter-test</li>
<li>spring-cloud-dependencies - Finchley.SR3</li>
</ol>
|
Chaithra Shenoy
|
<p>The version of Spring Cloud Kubernetes that you are using (1.1.10.RELEASE) requires Spring Boot 2.2.x. You are using 2.0.x. This older version of Spring Boot uses an older version of Spring Framework that does not contain the <code>org.springframework.core.env.Profiles</code> class. It was introduced in Spring Framework 5.1 and Spring Boot 2.0.x uses Spring Framework 5.0.x.</p>
<p>You should update your dependency versions to ensure that they're compatible. To make it easier to do so, I would recommend using the <code>spring-cloud-dependencies</code> bom as shown on its <a href="https://spring.io/projects/spring-cloud" rel="nofollow noreferrer">project page</a>.</p>
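<p>For reference, importing the BOM looks roughly like this in the pom (the release-train version below is only an example; pick the one matching the Spring Boot line you settle on):</p>
<pre><code>&lt;dependencyManagement&gt;
  &lt;dependencies&gt;
    &lt;!-- example release train; align it with your Spring Boot version --&gt;
    &lt;dependency&gt;
      &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt;
      &lt;artifactId&gt;spring-cloud-dependencies&lt;/artifactId&gt;
      &lt;version&gt;Hoxton.SR12&lt;/version&gt;
      &lt;type&gt;pom&lt;/type&gt;
      &lt;scope&gt;import&lt;/scope&gt;
    &lt;/dependency&gt;
  &lt;/dependencies&gt;
&lt;/dependencyManagement&gt;
</code></pre>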
|
Andy Wilkinson
|
<p>I have 3 services in my ingress; the first 2 use the <code>default</code> namespace. The third service is the <strong>prometheus-server</strong> service, which is in the <code>ingress-nginx</code> namespace.
Now I want to map my prometheus DNS to the service, but I am getting an error because the ingress can't find the prometheus service in the <code>default</code> namespace.</p>
<p>How to deal with non-default namespace in ingress definition?</p>
|
Justinus Hermawan
|
<p>You would want to create a new <code>Ingress</code> in namespace <code>ingress-nginx</code> that would route your DNS to that service. For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example
namespace: ingress-nginx
spec:
rules:
- host: your.domain.com
http:
paths:
- path: /
backend:
serviceName: prometheus-server
servicePort: 80
</code></pre>
|
Fei
|
<p>The command <code>helm init</code> does not work any longer as of version 3.
Running <code>helm --help</code> lists all available commands, amongst which <code>init</code> is no longer present.</p>
<p>Why is that?</p>
|
aboger
|
<p>According to the <a href="https://helm.sh/blog/helm-v3-beta/" rel="noreferrer">official documentation</a>, the <code>helm init</code> command has been removed without replacement:</p>
<blockquote>
<p>The helm init command has been removed. It performed two primary functions. First, it installed Tiller. This is no longer needed. Second, it setup directories and repositories where Helm configuration lived. This is now automated. If the directory is not present it will be created.</p>
</blockquote>
<hr />
<p>There has been another notable change, which might trouble you next:</p>
<blockquote>
<p>The <code>stable</code> repository is no longer added by default. This repository will be deprecated during the life of Helm v3 and we are now moving to a distributed model of repositories that can be searched by the Helm Hub.</p>
</blockquote>
<p>However, according to the <a href="https://helm.sh/docs/intro/quickstart/" rel="noreferrer">official quickstart guide</a>, this can be done manually if desired:</p>
<pre><code>$ helm repo add stable https://charts.helm.sh/stable
$ helm repo update
</code></pre>
<p>⎈ Happy Helming!⎈</p>
|
aboger
|
<p>How do I deploy a Kubernetes service (type LoadBalancer) on on-prem VMs? When I use type=LoadBalancer it shows the external IP as "pending", but everything works fine with the same yaml if I deploy on GKE. My questions are: </p>
<p>Do we need a Load balancer if I use type=LoadBalcer on Onprem VMs?
Can I assign LoadBalncer IP manually in yaml?</p>
|
Vikas Kalra
|
<p>It might be helpful to check the <a href="https://banzaicloud.com/products/pke/" rel="nofollow noreferrer">Banzai Cloud Pipeline Kubernetes Engine (PKE)</a> that is "a simple, secure and powerful CNCF-certified Kubernetes distribution" platform. It was designed to work on any cloud, VM or <strong>on bare metal</strong> nodes to provide a scalable and secure foundation for private clouds. PKE is cloud-aware and includes an ever-increasing number of cloud and platform integrations.</p>
<blockquote>
<p>When I use type=LoadBalancer it shows the external IP as "pending", but everything works fine with the same yaml if I deploy on GKE.</p>
</blockquote>
<p>If you create a LoadBalancer service — for example try to expose your own TCP based service, or install an ingress controller — the cloud provider integration will take care of creating the needed cloud resources, and writing back the endpoint where your service will be available. If you don't have a cloud provider integration or a controller for this purpose, your Service resource will remain in Pending state.</p>
<p>In case of Kubernetes, LoadBalancer services are the easiest and most common way to expose a service (redundant or not) for the world outside of the cluster or the mesh — to other services, to internal users, or to the internet.</p>
<p>Load balancing as a concept can happen on different levels of the OSI network model, mainly on L4 (transport layer, for example TCP) and L7 (application layer, for example HTTP). In Kubernetes, Services are an abstraction for L4, while Ingresses are a generic solution for L7 routing.</p>
<blockquote>
<p>You need to set up MetalLB.</p>
</blockquote>
<p>MetalLB is one of the most popular on-prem replacements for LoadBalancer cloud integrations. The whole solution runs inside the Kubernetes cluster.</p>
<p>The main component is an in-cluster Kubernetes controller which watches LB service resources, and based on the configuration supplied in a ConfigMap, allocates and writes back IP addresses from a dedicated pool for new services. It maintains a leader node for each service, and depending on the working mode, advertises it via BGP or ARP (sending out unsolicited ARP packets in case of failovers).</p>
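<p>As an illustration, a minimal layer-2 configuration for MetalLB (in its ConfigMap-based configuration style) looks roughly like this; the address range is just an example and must come from your own network:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
</code></pre>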
<p>MetalLB can operate in two ways: either all requests are forwarded to pods on the leader node, or distributed to all nodes with kubeproxy.</p>
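<p>As for assigning the IP manually in yaml: once an implementation such as MetalLB is in place, a Service can request a specific address from the configured pool via <code>spec.loadBalancerIP</code> (a sketch; the names and the address are examples):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.240   # must belong to a pool the implementation manages
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-app
</code></pre>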
<p>Layer 7 (usually HTTP/HTTPS) load balancer appliances like F5 BIG-IP, or HAProxy and Nginx based solutions may be integrated with an <strong>applicable ingress-controller</strong>. If you have such, you won't need a LoadBalancer implementation in most cases.</p>
<p>Hope that sheds some light on a "LoadBalancer on bare metal hosts" question. </p>
|
Nick
|
<p>I am trying to deploy an application with a mariadb database on my k8s cluster. This is the deployment I use:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-back
labels:
app: app-back
namespace: dev
spec:
type: ClusterIP
ports:
- port: 8080
name: app-back
selector:
app: app-back
---
apiVersion: v1
kind: Service
metadata:
name: app-db
labels:
app: app-db
namespace: dev
spec:
type: ClusterIP
clusterIP: None
ports:
- port: 3306
name: app-db
selector:
app: app-db
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql
labels:
app: mysql
data:
60-server.cnf: |
[mysqld]
bind-address = 0.0.0.0
skip-name-resolve
connect_timeout = 600
net_read_timeout = 600
net_write_timeout = 600
max_allowed_packet = 256M
default-time-zone = +00:00
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-db
namespace: dev
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: app-db
template:
metadata:
labels:
app: app-db
spec:
containers:
- name: app-db
image: mariadb:10.5.8
env:
- name: MYSQL_DATABASE
value: app
- name: MYSQL_USER
value: app
- name: MYSQL_PASSWORD
value: app
- name: MYSQL_RANDOM_ROOT_PASSWORD
value: "true"
ports:
- containerPort: 3306
name: app-db
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "400Mi"
cpu: "200m"
volumeMounts:
- name: config-volume
mountPath: /etc/mysql/conf.d
volumes:
- name: config-volume
configMap:
name: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-back
namespace: dev
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: app-back
template:
metadata:
labels:
app: app-back
spec:
containers:
- name: app-back
image: private-repository/app/app-back:latest
env:
- name: spring.profiles.active
value: dev
- name: DB_HOST
value: app-db
- name: DB_PORT
value: "3306"
- name: DB_NAME
value: app
- name: DB_USER
value: app
- name: DB_PASSWORD
value: app
ports:
- containerPort: 8080
name: app-back
resources:
requests:
memory: "200Mi"
cpu: "100m"
limits:
memory: "200Mi"
cpu: "400m"
imagePullSecrets:
- name: docker-private-credentials
</code></pre>
<p>When i run this, the mariadb container log the following warning :</p>
<pre><code>2020-12-03 8:23:41 28 [Warning] Aborted connection 28 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 25 [Warning] Aborted connection 25 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 31 [Warning] Aborted connection 31 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
2020-12-03 8:23:41 29 [Warning] Aborted connection 29 to db: 'app' user: 'app' host: 'xxx.xxx.xxx.xxx' (Got an error reading communication packets)
...
</code></pre>
<p>My app is stuck trying to connect to the database. The application is a Spring Boot application built with this Dockerfile:</p>
<pre><code>FROM maven:3-adoptopenjdk-8 AS builder
WORKDIR /usr/src/mymaven/
COPY . .
RUN mvn clean package -e -s settings.xml -DskipTests
FROM tomcat:9-jdk8-adoptopenjdk-hotspot
ENV spring.profiles.active=dev
ENV DB_HOST=localhost
ENV DB_PORT=3306
ENV DB_NAME=app
ENV DB_USER=app
ENV DB_PASSWORD=app
COPY --from=builder /usr/src/mymaven/target/app.war /usr/local/tomcat/webapps/
</code></pre>
<p>Any idea?</p>
|
Scandinave
|
<p>Ok, I found the solution. This was not an error of mariadb. It is caused by apache breaking the connection when running inside a container with too little memory. Setting the memory limit to 1500Mi solved the problem.</p>
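<p>In terms of the deployment above, that means raising the resources of the <code>app-back</code> container, for example:</p>
<pre><code>resources:
  requests:
    memory: "1500Mi"
    cpu: "100m"
  limits:
    memory: "1500Mi"
    cpu: "400m"
</code></pre>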
|
Scandinave
|
<p>I have an existing system that uses a relational DBMS. I am unable to use a NoSQL database for various internal reasons.</p>
<p>The system is to get some microservices that will be deployed using Kubernetes and Docker with the intention to do rolling upgrades to reduce downtime. The back end data layer will use the existing relational DBMS. The micro services will follow good practice and "own" their data store on the DBMS. The one big issue with this seems to be how to deal with managing the structure of the database across this. I have done my research:</p>
<ul>
<li><a href="https://blog.philipphauer.de/databases-challenge-continuous-delivery/" rel="nofollow noreferrer">https://blog.philipphauer.de/databases-challenge-continuous-delivery/</a></li>
<li><a href="http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html" rel="nofollow noreferrer">http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html</a></li>
<li><a href="http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/" rel="nofollow noreferrer">http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/</a></li>
<li><a href="https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database" rel="nofollow noreferrer">https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database</a></li>
<li><a href="https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/" rel="nofollow noreferrer">https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/</a></li>
</ul>
<p>All of the discussions seem to stop around the point of adding/removing columns and data migration. There is no discussion of how to manage stored procedures, views, triggers etc.</p>
<p>The application is written in .NET Full and .NET Core with Entity Framework as the ORM.</p>
<p>Has anyone got any insights on how to do continuous delivery using a relational DBMS where it is a full production system? Is it back to the drawing board here, in as much as using a relational DBMS is "too hard" for rolling updates?</p>
<p>PS. Even though this is a continuous delivery problem, I have also tagged it with Kubernetes and Docker as that will be the underlying tech in use for the orchestration/container side of things.</p>
|
TheEdge
|
<p>All of the following under the assumption that I understand correctly what you mean by "rolling updates" and what its consequences are.</p>
<p>It has very little (as in : nothing at all) to do with "relational DBMS". Flatfiles holding XML will make you face the exact same problem. Your "rolling update" will inevitably cause (hopefully brief) periods of time during which your server-side components (e.g. the db) must interact with "version 0" as well as with "version -1" of (the client-side components of) your system.</p>
<p>Here "compatibility theory" (*) steps in. A "working system" is a system in which the set of offered services is a superset (perhaps a proper superset) of the set of required services. So backward compatibility is guaranteed if "services offered" is never ever reduced <em>and</em> "services required" is never extended. However, the latter is typically what always happens when the current "version 0" is moved to "-1" and a new "current version 0" is added to the mix. So the conclusion is that "rolling updates" are theoretically doable as long as the "services" offered on server side are only ever extended, and always in such a way as to be, and always remain, a superset of the services required on (any version currently in use on) the client side.</p>
<p>"Services" here is to be interpreted as something very abstract. It might refer to a guarantee to the effect that, say, if column X in this row of this table has value Y then I <em>will</em> find another row in that other table using a key computed such-and-so, and that other row might be guaranteed to have column values satisfying this-or-that condition.</p>
<p>If that "guarantee" is <em>introduced</em> as an <em>expectation</em> (i.e. requirement) on the (new version of the) client side, you must do something on the server side to comply. If that "guarantee" is <em>currently offered</em> but a <em>contradicting guarantee</em> is introduced as an expectation on the (new version of the) client side, then your rolling update scenario has by definition become unachievable.</p>
<p>(*) <a href="http://davidbau.com/archives/2003/12/01/theory_of_compatibility_part_1.html" rel="nofollow noreferrer">http://davidbau.com/archives/2003/12/01/theory_of_compatibility_part_1.html</a></p>
<p>There are also parts 2 and 3.</p>
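<p>As a concrete illustration (hypothetical schema and names, shown with SQLite for brevity): a column rename done the backward-compatible way keeps the old "service offered" available via a view, so version -1 clients that still select the old column keep working while version 0 clients use the new one:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Version 0 schema: the column is called "surname".
cur.execute("CREATE TABLE customer_v1 (id INTEGER PRIMARY KEY, surname TEXT)")
cur.execute("INSERT INTO customer_v1 VALUES (1, 'Smith')")

# "Expand" step of the rename: create the new schema (column "last_name"),
# migrate the data, and keep a view offering the old shape so that clients
# still on the previous version continue to work unchanged.
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, last_name TEXT)")
cur.execute("INSERT INTO customer SELECT id, surname FROM customer_v1")
cur.execute("DROP TABLE customer_v1")
cur.execute(
    "CREATE VIEW customer_v1 AS SELECT id, last_name AS surname FROM customer"
)

# Old client: still selects "surname" and still works.
old = cur.execute("SELECT surname FROM customer_v1 WHERE id = 1").fetchone()
# New client: uses the new column name.
new = cur.execute("SELECT last_name FROM customer WHERE id = 1").fetchone()
print(old[0], new[0])  # Smith Smith
```

<p>The old view can be dropped later ("contract"), once no client of the previous version remains.</p>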
|
Erwin Smout
|
<p>I installed contour locally on <code>minikube version: v1.5.0</code> with:</p>
<pre><code>kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
</code></pre>
<p>I check the details of the contour ingress controller with:</p>
<pre><code>$ kubectl get -n projectcontour service contour -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
contour ClusterIP 10.103.81.4 <none> 8001/TCP 4d12h app=contour
</code></pre>
<p>Now in the book it says:</p>
<blockquote>
<p>If you are using minikube, you probably won’t have anything listed for
EXTERNAL-IP. To fix this, you need to open a separate terminal window
and run minikube tunnel. This configures networking routes such that
you have unique IP addresses assigned to every service of type:
LoadBalancer. - Brendan Burns. “Kubernetes: Up and Running.”</p>
</blockquote>
<p>I ran <code>minikube tunnel</code> in a separate window but it still did not give me an <code>EXTERNAL-IP</code>.</p>
<p>How can I get this <code>EXTERNAL-IP</code> so I can point some hosts to it and test the ingress.</p>
<p><strong>Update</strong></p>
<p>I get all the services in the namespace and the <code>envoy</code> service has an external IP. I added those to <code>/etc/hosts</code> and it worked.</p>
<pre><code>$ kubectl get -n projectcontour service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
contour ClusterIP 10.103.81.4 <none> 8001/TCP 4d12h app=contour
envoy LoadBalancer 10.97.220.191 10.97.220.191 80:32162/TCP,443:31422/TCP 4d12h app=envoy
</code></pre>
<p><strong>Update 2</strong>:</p>
<p>Only where the <code>service.type = LoadBalancer</code> do you get an <code>externalIP</code>:</p>
<pre><code>$ kubectl describe service envoy -n projectcontour
Name: envoy
Namespace: projectcontour
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-backend-protocol":"tcp"},"nam...
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
Selector: app=envoy
Type: LoadBalancer
IP: 10.97.220.191
LoadBalancer Ingress: 10.97.220.191
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32162/TCP
Endpoints: 172.17.0.17:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31422/TCP
Endpoints: 172.17.0.17:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 31511
Events: <none>
</code></pre>
<p>and:</p>
<pre><code>$ kubectl describe service contour -n projectcontour
Name: contour
Namespace: projectcontour
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"contour","namespace":"projectcontour"},"spec":{"ports":[{"name":"...
Selector: app=contour
Type: ClusterIP
IP: 10.103.81.4
Port: xds 8001/TCP
TargetPort: 8001/TCP
Endpoints: 172.17.0.13:8001,172.17.0.14:8001
Session Affinity: None
Events: <none>
</code></pre>
|
tread
|
<p>Your service type would need to be <code>LoadBalancer</code> to get an external IP.</p>
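<p>For example, a Service of this shape (a sketch, trimmed to the relevant fields) gets an external IP assigned, while a <code>ClusterIP</code> Service such as <code>contour</code> above does not:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: envoy
spec:
  type: LoadBalancer   # ClusterIP (the default) gets no EXTERNAL-IP
  selector:
    app: envoy
  ports:
  - port: 80
</code></pre>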
|
Fei
|
<p>I have installed Apache Superset from its Helm Chart in a Google Cloud Kubernetes cluster. I need to <code>pip install</code> a package that is not installed when installing the Helm Chart. If I connect to the Kubernetes bash shell like this:</p>
<p><code>kubectl exec -it superset-4934njn23-nsnjd /bin/bash</code></p>
<p>Inside there's no python available, no pip and apt-get doesn't find most of the packages.</p>
<p>I understand that during the container installation process the packages are listed in the Dockerfile, I suppose that I need to fork the docker container, modify the Dockerfile, register the container to a container registry and make a new Helm Chart that will run this container.</p>
<p>But all this seems too complicated for a simple <code>pip install</code>, is there a simpler way to do this?</p>
<p>Links:</p>
<p>Docker- <a href="https://hub.docker.com/r/amancevice/superset/" rel="nofollow noreferrer">https://hub.docker.com/r/amancevice/superset/</a></p>
<p>Helm Chart - <a href="https://github.com/helm/charts/tree/master/stable/superset" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/superset</a></p>
|
Ben Gosub
|
<p>The Dockerfile appears to install the <code>python3</code> package.
Try <code>python3</code> or <code>pip3</code> instead of <code>python</code>/<code>pip</code>.</p>
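<p>For example, to install a package in the running container (a sketch; substitute your pod name and package):</p>
<pre><code>kubectl exec -it superset-4934njn23-nsnjd -- pip3 install &lt;some-package&gt;
</code></pre>
<p>Note that anything installed this way is lost when the pod restarts; baking the package into the image is the durable fix.</p>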
|
Murli
|
<p>Situation: We have a web app with 3 Docker containers (frontend, backend, database) (which I will call a 'Unit' in the following context) and we would like to make a demo site where each visitor gets a unique Unit which does not live longer than, for example, 20 minutes of inactivity.
The reason is that we want each potential user to have a clean-state experience on the demo page. And especially, we don't want anyone to read anything (for example racist) that another visitor has left behind.</p>
<p>Is something like this doable with Docker / k8s?</p>
<p>EDIT: It would not be a problem to hide this behind another website à la "Click here to start the demo" which would then start up a new Unit before navigating.</p>
<p>TLDR: How to deploy a unique docker container for each visitor of our webapp.</p>
|
dturnschek
|
<blockquote>
<p>Is something like this doable with Docker / k8s?</p>
</blockquote>
<p>Absolutely. I can see at least two ways of achieving that with k8s.</p>
<ol>
<li>The Easy, but potentially expensive way (if you run k8s in a cloud).</li>
</ol>
<p>You can combine these 3 containers (frontend, backend, database) into a single Pod and spin up a new pod for each user. One user - one Pod (or set of Pods if needed). This is easy, but can be very expensive (if you have a lot of users). The Pod can be later destroyed if it is not needed anymore.</p>
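<p>A sketch of what such a per-visitor Pod could look like (image names and the naming scheme are placeholders; the 20-minute cleanup would be handled by whatever component creates and deletes the Pod):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo-unit-42   # one Pod per visitor, e.g. suffixed with a visitor id
  labels:
    app: demo-unit
spec:
  containers:
  - name: frontend
    image: example/frontend:latest   # placeholder images
  - name: backend
    image: example/backend:latest
  - name: database
    image: example/database:latest
</code></pre>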
<ol start="2">
<li>More complex, but not that expensive way.</li>
</ol>
<p>If your software allows running a frontend and backend that are the same for all the users, then it is possible to move the "unique" content to the database, for example into some table/set of tables. In this case only a few pods will be constantly up instead of a "full set for each user". The data is stored in the database.</p>
<p>You can flush the database periodically, to wipe out all the assets your users create during tests. </p>
<p>As a result you may end up with much smaller "footprint" compared to the first option.</p>
<p>Hope that helps. </p>
|
Nick
|
<p>I was trying to understand the relation between the <code>Kubernetes</code> <code>Ingress Resource</code> and the <code>Ingress Controller</code>.
I read that the <code>Ingress</code> resource mainly holds the rules, and the controller Pods actually route the traffic according to those rules.</p>
<p>I'm confused: unlike other objects, why can't the <code>Ingress</code> resource spin up Pods on its own by specifying the image?</p>
<p>Secondly, how does the <code>Ingress Object</code> connect to the actual <code>Ingress Controller Pods</code> to get its work done (or the other way round)? I don't see any selector specified in the <code>Ingress Object</code>.</p>
<p>Thirdly, if the <code>Ingress Resource</code> gets its own IP address (internal or external), why does the <code>Ingress Controller</code> need an external IP address?</p>
<p>thanks</p>
<p>PS: I do not have a great knowledge of Kubernetes, please pardon if the questions sound silly.</p>
|
Learner
|
<p>Details with a diagram are posted in the GKE tutorial
<a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">Ingress with NGINX controller on Google Kubernetes Engine
</a></p>
|
Learner
|
<p>I have a 3-node cluster. What I am going to do is create a persistent volume with <strong>ReadWriteMany</strong> access mode for a MySQL Deployment. Also, the mount option is GCEPersistentDisk.
My question is: if I use ReadWriteMany access mode for the MySQL deployment, will it be an issue? Because the volume can be mounted by many nodes. If I am wrong please correct me.</p>
|
YMA
|
<p>GCE persistent disks do not support ReadWriteMany. You can see this here in the documentation:
<a href="https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#access_modes" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#access_modes</a></p>
<p>However, there is a workaround to achieve the same with a NFS server:<br />
<a href="https://medium.com/@Sushil_Kumar/readwritemany-persistent-volumes-in-google-kubernetes-engine-a0b93e203180" rel="nofollow noreferrer">https://medium.com/@Sushil_Kumar/readwritemany-persistent-volumes-in-google-kubernetes-engine-a0b93e203180</a></p>
<p>I wouldn't recommend this as the MySQL performance will be suboptimal, though. Consider using a <a href="https://cloud.google.com/sql" rel="nofollow noreferrer">Cloud SQL</a> instance instead, and connecting from multiple nodes to it using the MySQL protocol instead of accessing the disk.</p>
|
Vi Pau
|
<p>I am using CentOS Linux 7 (Core) and tried to install minikube and followed all steps provided at <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">Install Minikube</a>.</p>
<p>I have also installed VirtualBox on CentOS 7, which, as the logs show, is picked up correctly when starting Minikube, but it is still failing. Please see the full error logs below.</p>
<p>Can someone please help me figure out what I am missing?</p>
<p>kubectl is also installed but cannot connect to the server, as Minikube is not starting.</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p><strong>Minikube start command error logs</strong></p>
<pre><code>minikube v0.35.0 on linux (amd64)
> Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@ Downloading Minikube ISO ...
184.42 MB / 184.42 MB [============================================] 100.00% 0s
- "minikube" IP address is 192.168.99.102
- Configuring Docker as the container runtime ...
- Preparing Kubernetes environment ...
@ Downloading kubeadm v1.13.4
@ Downloading kubelet v1.13.4
- Pulling images required by Kubernetes v1.13.4 ...
- Launching Kubernetes v1.13.4 using kubeadm ...
! Error starting cluster: kubeadm init:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.102 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [192.168.99.102 127.0.0.1 ::1]
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
: Process exited with status 1
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
- https://github.com/kubernetes/minikube/issues/new
</code></pre>
|
kj007
|
<p>The kubelet will fail to start after v1.8.0 if swap is enabled. You can override this in your kubelet configuration.</p>
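<p>Two common ways to deal with this (a sketch; verify the exact steps against your distribution):</p>
<pre><code># Option 1: disable swap now, then comment it out of /etc/fstab
sudo swapoff -a
sudo sed -i.bak '/swap/ s/^/#/' /etc/fstab

# Option 2: tell the kubelet to tolerate swap
KUBELET_EXTRA_ARGS=--fail-swap-on=false
</code></pre>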
<p>Reference:</p>
<ol>
<li>Update Notice: <a href="https://github.com/apprenda/kismatic/blob/master/docs/upgrade/v1.6.0/kubelet-swap.md" rel="nofollow noreferrer">https://github.com/apprenda/kismatic/blob/master/docs/upgrade/v1.6.0/kubelet-swap.md</a></li>
<li>Reasons for disabling swap: <a href="https://serverfault.com/questions/881517/why-disable-swap-on-kubernetes">https://serverfault.com/questions/881517/why-disable-swap-on-kubernetes</a></li>
</ol>
|
Fei
|
<p>I'm trying to move from Flink 1.3.2 to 1.5. We have a cluster deployed with Kubernetes. Everything works fine with 1.3.2 but I cannot submit a job with 1.5. When I try to do that I just see a spinner spin around infinitely; the same happens via the REST API. I can't even submit the wordcount example job.
It seems my taskmanagers cannot connect to the jobmanager; I can see them in the Flink UI, but in the logs I see</p>
<blockquote>
<p>level=WARN akka.remote.transport.netty.NettyTransport - Remote connection to [null] failed with
org.apache.flink.shaded.akka.org.jboss.netty.channel.ConnectTimeoutException:
connection timed out:
flink-jobmanager-nonprod-2.rpds.svc.cluster.local/25.0.84.226:6123</p>
<p>level=WARN akka.remote.ReliableDeliverySupervisor - Association with remote system
[akka.tcp://flink@flink-jobmanager-nonprod-2.rpds.svc.cluster.local:6123]
has failed, address is now gated for [50] ms. Reason: [Association
failed with
[akka.tcp://flink@flink-jobmanager-nonprod-2.rpds.svc.cluster.local:6123]]
Caused by: [No response from remote for outbound association.
Associate timed out after [20000 ms].]</p>
<p>level=WARN akka.remote.transport.netty.NettyTransport - Remote
connection to [null] failed with
org.apache.flink.shaded.akka.org.jboss.netty.channel.ConnectTimeoutException:
connection timed out:
flink-jobmanager-nonprod-2.rpds.svc.cluster.local/25.0.84.226:6123</p>
</blockquote>
<p>But I can do telnet from taskmanager to jobmanager </p>
<p>Moreover, everything works locally if I start Flink in cluster mode (jobmanager + taskmanager).
In the 1.5 documentation I found the <strong>mode</strong> option which flips the mode between flip6 and legacy (default flip6), but if I set mode: legacy I don't see my taskmanagers registered at all.</p>
<p>Is there something specific about the k8s deployment and 1.5 that I need to do? I checked the 1.5 k8s config and it looks pretty much the same as ours, but we are using a customized Docker image for Flink (security, HA, checkpointing).</p>
<p>Thank you.</p>
|
Georgy Gobozov
|
<p>The issue is with jobmanager connectivity. The jobmanager Docker image cannot connect to the "flink-jobmanager" (${JOB_MANAGER_RPC_ADDRESS}) address.</p>
<p><strong>Just use afilichkin/flink-k8s Docker instead of flink:latest</strong></p>
<p>I've fixed it by adding a new host to the jobmanager Docker image. You can see it in my GitHub project:</p>
<p><a href="https://github.com/Aleksandr-Filichkin/flink-k8s/tree/master" rel="nofollow noreferrer">https://github.com/Aleksandr-Filichkin/flink-k8s/tree/master</a></p>
|
Aleksandr Filichkin
|
<p>We are going to be developing a client which subscribes to an AMQP channel, but the client is going to be clustered (in Kubernetes) and we want only one of the clustered client to process the subscribed message.</p>
<p>For example, if we have a replica set of 3, we only want one to get the message, not all 3.</p>
<p>In JMS 2.0 this is possible using the shared consumers: <a href="https://www.oracle.com/technical-resources/articles/java/jms2messaging.html" rel="nofollow noreferrer">https://www.oracle.com/technical-resources/articles/java/jms2messaging.html</a></p>
<pre><code>1 message is sent to RabbitMQ Channel 1:
Consumer 1 (with 3 replicas) <----- RabbitMQ Channel 1
Consumer 2 (with 3 replicas) <----- RabbitMQ Channel 1
Only 2 messages would be processed
</code></pre>
<p>Is something similar possible with AMQP? The client will be developed either in C# or MuleSoft.</p>
<p>Cheers,
Steve</p>
|
Steve
|
<p>AMQP is designed for this. If you have three clients consuming from the <em>same queue</em>, RabbitMQ will round-robin delivery of messages to them. You may also be interested in the <a href="https://www.rabbitmq.com/consumers.html#single-active-consumer" rel="nofollow noreferrer">Single Active Consumer</a> feature.</p>
<hr>
<p><sub><b>NOTE:</b> the RabbitMQ team monitors the <code>rabbitmq-users</code> <a href="https://groups.google.com/forum/#!forum/rabbitmq-users" rel="nofollow noreferrer">mailing list</a> and only sometimes answers questions on StackOverflow.</sub></p>
|
Luke Bakken
|
<p>I think <code>-o</code> is supposed to be a universal option for kubectl.
But somehow I get the following error when I run this kubectl command.</p>
<p>Can you please tell me why? Thank you.</p>
<pre><code>mamun$ kubectl describe secret -n development serviceaccount-foo -o yaml
Error: unknown shorthand flag: 'o' in -o
See 'kubectl describe --help' for usage.
</code></pre>
|
Mamun
|
<p><code>-o | --output</code> is not a universal flag, it is not included in the default <a href="https://kubernetes.io/docs/reference/kubectl/kubectl/" rel="noreferrer"><code>kubectl</code> flags</a> (<code>1.18</code>) and <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe" rel="noreferrer"><code>kubectl describe</code> does not support the <code>--output</code> (or shorthand <code>-o</code>) flag</a>.</p>
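<p>If you want the YAML of the secret, <code>kubectl get</code> does support <code>-o</code>:</p>
<pre><code>kubectl get secret -n development serviceaccount-foo -o yaml
</code></pre>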
|
masseyb
|
<p>I have a bunch of microservices running in a kubernetes cluster where each microservice implements a basic health check over HTTP.</p>
<p>e.g. for the endpoint <code>/health</code> each service will return an HTTP 200 response if that particular service is currently healthy, or some other HTTP 4xx / 5xx code (and possibly additional info) if not healthy.</p>
<p>I see Kubernetes has its own built in concepth of a HTTP health check <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request</a></p>
<p>Unfortunately it's not quite what I want. I'd like to be able to trigger an alert (and record the state of the health check request) in some database so I can quickly check what state all my services are in, as well as alerting on any services in an unhealthy state.</p>
<p>I'm wondering are there existing tools or approaches in Kubernetes I should use for this sort of thing? Or will need to build some custom solution for this. </p>
<p>Was considering having a general "HealthCheck" service which each microservice would register with when started. That way the "HealthCheck" service would monitor the health of each service as well as triggering alerts for any issues it finds.</p>
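<p>For concreteness, the probe such a "HealthCheck" service would run against each registered endpoint is small; a stdlib-only Python sketch (URL and timeout are arbitrary; recording to the database and alerting are omitted):</p>

```python
import urllib.request
import urllib.error


def check_health(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, DNS failure, or any non-2xx/3xx
        # HTTP status (HTTPError is a URLError) all count as unhealthy.
        return False
```

<p>The service would loop over every registered <code>/health</code> URL, store each result with a timestamp, and alert on <code>False</code>.</p>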
|
user805703
|
<p>I would caution against trying to build your own in-house monitoring solution. There are considerable drawbacks to that approach.</p>
<p>If all you need is external service HTTP health checks, then many existing monitoring solutions will do fine. You can either install a traditional IT solution like Zabbix or Nagios, or use a SaaS like <strong>Datadog</strong> and others.<br />
There are also blackbox extensions for <strong>Prometheus</strong>, which is very popular among K8s users.</p>
<p>Many of these options require a learning curve of some steepness.</p>
|
Adi Dembak
|
<p>I'm new to EKS, and am following the examples to set up a sample app that creates an ingress controller, ingress, service, and deployment from <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-alb-ingress-controller-fargate/" rel="nofollow noreferrer">How do I set up the ALB Ingress Controller on an Amazon EKS cluster for Fargate?</a>. I have everything created (deployements, pods, service, iam, service account, etc.) but my ingress controller is failing to come up with the error</p>
<pre><code>E0224 19:09:07.053006 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to build LoadBalancer configuration due to retrieval of subnets failed to resolve 2 qualified subnets. Subnets must contain the kubernetes.io/cluster/\u003ccluster name\u003e tag with a value of shared or owned and the kubernetes.io/role/elb tag signifying it should be used for ALBs Additionally, there must be at least 2 subnets with unique availability zones as required by ALBs. Either tag subnets to meet this requirement or use the subnets annotation on the ingress resource to explicitly call out what subnets to use for ALB creation. The subnets that did resolve were []" "controller"="alb-ingress-controller" "request"={"Namespace":"mynamespace","Name":"2048-ingress"}
</code></pre>
<p>I do have my VPCs and subnets tagged appropriately per <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">Application load balancing on Amazon EKS</a> and other pages that shows how to tag my VPCs and subnets.</p>
<p>One question I have, my ingress manifest has</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: "2048-ingress"
namespace: "mynamespace"
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
</code></pre>
|
Chris F
|
<p>So it turns out I had to add the tags to the public subnets of my VPC, even though this is a private-only cluster.</p>
|
Chris F
|
<p>I have built a sample application to deploy Jenkins on Kubernetes, exposed using Ingress. When I access the Jenkins pod via NodePort it works, but when I try to access it via the Ingress / Nginx setup I get a 404.</p>
<p>I googled around and tried a few workarounds but none have worked so far. Here are the details of the files:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose -f ../docker-compose.yml -f ../docker-compose.utils.yml -f
../docker-compose.demosite.yml convert
kompose.version: 1.17.0 (a74acad)
creationTimestamp: null
labels:
io.kompose.service: ci
name: ci
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: ci
spec:
containers:
- image: jenkins/jenkins:lts
name: almsmart-ci
ports:
- containerPort: 8080
env:
- name: JENKINS_USER
value: admin
- name: JENKINS_PASS
value: admin
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false
- name: JENKINS_OPTS
value: --prefix=/ci
imagePullPolicy: Always
resources: {}
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose -f ../docker-compose.yml -f ../docker-compose.utils.yml -f
../docker-compose.demosite.yml convert
kompose.version: 1.17.0 (a74acad)
creationTimestamp: null
labels:
io.kompose.service: ci
name: ci
spec:
type : NodePort
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
io.kompose.service: ci
status:
loadBalancer: {}
</code></pre>
<p>Here is my ingress definition </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: demo-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: Authorization, origin, accept
nginx.ingress.kubernetes.io/cors-allow-methods: GET, OPTIONS
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /ci
backend:
serviceName: ci
servicePort: 8080
</code></pre>
<p>When I checked the logs in nginx controller I am seeing the following </p>
<pre><code>I0222 19:59:45.826062 6 controller.go:172] Configuration changes detected, backend reload required.
I0222 19:59:45.831627 6 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"almsmart-ingress", UID:"444858e5-36d9-11e9-9e29-080027811fa3", APIVersion:"extensions/v1beta1", ResourceVersion:"198832", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/almsmart-ingress
I0222 19:59:46.063220 6 controller.go:190] Backend successfully reloaded.
[22/Feb/2019:19:59:46 +0000]TCP200000.000
W0222 20:00:00.870990 6 endpoints.go:76] Error obtaining Endpoints for Service "default/ci": no object matching key "default/ci" in local store
W0222 20:00:00.871023 6 controller.go:842] Service "default/ci" does not have any active Endpoint.
I0222 20:00:00.871103 6 controller.go:172] Configuration changes detected, backend reload required.
I0222 20:00:00.872556 6 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"almsmart-ingress", UID:"6fc5272c-36dc-11e9-9e29-080027811fa3", APIVersion:"extensions/v1beta1", ResourceVersion:"198872", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/almsmart-ingress
I0222 20:00:01.060291 6 controller.go:190] Backend successfully reloaded.
[22/Feb/2019:20:00:01 +0000]TCP200000.000
W0222 20:00:04.205398 6 controller.go:842] Service "default/ci" does not have any active Endpoint.
[22/Feb/2019:20:00:09 +0000]TCP200000.000
10.244.0.0 - [10.244.0.0] - - [22/Feb/2019:20:00:36 +0000] "GET /ci/ HTTP/1.1" 404 274 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" 498 0.101 [default-ci-8080] 10.244.1.97:8080 315 0.104 404 b5b849647749e2b626f00c011c15bc4e
10.244.0.0 - [10.244.0.0] - - [22/Feb/2019:20:00:46 +0000] "GET /ci HTTP/1.1" 404 274 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" 497 0.003 [default-ci-8080] 10.244.1.97:8080 315 0.004 404 ac8fbe2faa37413f5e533ed3c8d98a7d
10.244.0.0 - [10.244.0.0] - - [22/Feb/2019:20:00:49 +0000] "GET /ci/ HTTP/1.1" 404 274 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" 498 0.003 [default-ci-8080] 10.244.1.97:8080 315 0.004 404 865cb82af7f570f2144ef27fdea850c9
I0222 20:00:54.871828 6 status.go:388] updating Ingress default/almsmart-ingress status from [] to [{ }]
I0222 20:00:54.877693 6 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"almsmart-ingress", UID:"6fc5272c-36dc-11e9-9e29-080027811fa3", APIVersion:"extensions/v1beta1", ResourceVersion:"198974", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/almsmart-ingress
</code></pre>
<p>When I try </p>
<blockquote>
<p>kubectl get endpoints, I get the following</p>
</blockquote>
<pre><code>NAME ENDPOINTS AGE
ci 10.244.1.97:8080 31m
</code></pre>
<p>The default 404 page is served, so I assume the Ingress Controller is working fine, but I'm not sure why it is not able to reach the service. I have all the objects in the default namespace only, and they work, but I'm still unable to access the app using the NGINX ingress. </p>
|
Gaurav Sharma
|
<p>+1 for this well asked question.</p>
<p>Your setup seems OK to me. One problem is that you have <code>--prefix=/ci</code> configured for your Jenkins, but you configured <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> for your ingress. This causes a rewrite of your route: <code>xxx/ci => xxx/</code>. I think the 404 is returned by your Jenkins. </p>
<p>You could try to modify your rewrite rule to <code>nginx.ingress.kubernetes.io/rewrite-target: /ci</code> and see if this works for you.</p>
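<p>For reference, a sketch of the ingress with the adjusted annotation (the service name and port are taken from your manifest; trim to taste):</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: almsmart-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # rewrite to /ci so the rewritten path matches Jenkins' --prefix=/ci
    nginx.ingress.kubernetes.io/rewrite-target: /ci
spec:
  rules:
  - http:
      paths:
      - path: /ci
        backend:
          serviceName: ci
          servicePort: 8080
```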
|
Fei
|
<p>I'm trying to deploy a cassandra multinode cluster in minikube, I have followed this tutorial <a href="https://kubernetes.io/docs/tutorials/stateful-application/cassandra/" rel="nofollow noreferrer">Example: Deploying Cassandra with Stateful Sets</a> and made some modifications, the cluster is up and running and with kubectl I can connect via cqlsh, but I want to connect externally, I tried to expose the service via NodePort and test the connection with datastax studio (192.168.99.100:32554) but no success, also later I want to connect in spring boot, I supose that I have to use the svc name or the node ip.</p>
<pre><code>All host(s) tried for query failed (tried: /192.168.99.100:32554 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:32554] Cannot connect))
</code></pre>
<p><strong>[cassandra-0] /etc/cassandra/cassandra.yaml</strong></p>
<pre><code>rpc_port: 9160
broadcast_rpc_address: 172.17.0.5
listen_address: 172.17.0.5
# listen_interface: eth0
start_rpc: true
rpc_address: 0.0.0.0
# rpc_interface: eth1
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: "cassandra-0.cassandra.default.svc.cluster.local"
</code></pre>
<p>Here is minikube output for the svc and pods </p>
<pre><code>$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra NodePort 10.102.236.158 <none> 9042:32554/TCP 20m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cassandra-0 1/1 Running 0 20m 172.17.0.4 minikube <none> <none>
cassandra-1 1/1 Running 0 19m 172.17.0.5 minikube <none> <none>
cassandra-2 1/1 Running 1 19m 172.17.0.6 minikube <none> <none>
$ kubectl describe service cassandra
Name: cassandra
Namespace: default
Labels: app=cassandra
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"cassandra"},"name":"cassandra","namespace":"default"},"s...
Selector: app=cassandra
Type: NodePort
IP: 10.102.236.158
Port: <unset> 9042/TCP
TargetPort: 9042/TCP
NodePort: <unset> 32554/TCP
Endpoints: 172.17.0.4:9042,172.17.0.5:9042,172.17.0.6:9042
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
$ kubectl exec -it cassandra-0 -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.17.0.5 104.72 KiB 256 68.1% 680bfcb9-b374-40a6-ba1d-4bf7ee80a57b rack1
UN 172.17.0.4 69.9 KiB 256 66.5% 022009f8-112c-46c9-844b-ef062bac35aa rack1
UN 172.17.0.6 125.31 KiB 256 65.4% 48ae76fe-b37c-45c7-84f9-3e6207da4818 rack1
$ kubectl exec -it cassandra-0 -- cqlsh
Connected to K8Demo at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.11.4 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh>
</code></pre>
<p><strong>cassandra-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra
spec:
type: NodePort
ports:
- port: 9042
selector:
app: cassandra
</code></pre>
<p><strong>cassandra-statefulset.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
terminationGracePeriodSeconds: 1800
containers:
- name: cassandra
image: cassandra:3.11
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
resources:
limits:
cpu: "500m"
memory: 1Gi
requests:
cpu: "500m"
memory: 1Gi
securityContext:
capabilities:
add:
- IPC_LOCK
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- nodetool drain
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEEDS
value: "cassandra-0.cassandra.default.svc.cluster.local"
- name: CASSANDRA_CLUSTER_NAME
value: "K8Demo"
- name: CASSANDRA_DC
value: "DC1-K8Demo"
- name: CASSANDRA_RACK
value: "Rack1-K8Demo"
- name: CASSANDRA_START_RPC
value: "true"
- name: CASSANDRA_RPC_ADDRESS
value: "0.0.0.0"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the stateful pod volumes.
volumeMounts:
- name: cassandra-data
mountPath: /var/lib/cassandra
# These are converted to volume claims by the controller
# and mounted at the paths mentioned above.
# do not use these in production until ssd GCEPersistentDisk or other ssd pd
volumeClaimTemplates:
- metadata:
name: cassandra-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: fast
resources:
requests:
storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
type: pd-standard
</code></pre>
|
jhuamanchumo
|
<p>Just for anyone with this problem:
after reading the <a href="https://docs.datastax.com/en/dse/5.1/dse-dev/datastax_enterprise/studio/aboutStudio.html" rel="nofollow noreferrer">docs</a> on DataStax, I realized that DataStax Studio is meant for use with DataStax Enterprise. For local development and the community edition of Cassandra I'm using DataStax DevCenter instead, and it works.</p>
<p>For Spring Boot (Cassandra cluster running on minikube):</p>
<pre><code>spring.data.cassandra.keyspacename=mykeyspacename
spring.data.cassandra.contactpoints=cassandra-0.cassandra.default.svc.cluster.local
spring.data.cassandra.port=9042
spring.data.cassandra.schemaaction=create_if_not_exists
</code></pre>
<p>For DataStax DevCenter (Cassandra cluster running on minikube):</p>
<pre><code>ContactHost = 192.168.99.100
NativeProtocolPort: 30042
</code></pre>
<p>Updated cassandra-service</p>
<pre><code># ------------------- Cassandra Service ------------------- #
apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra
spec:
type: NodePort
ports:
- port: 9042
nodePort: 30042
selector:
app: cassandra
</code></pre>
|
jhuamanchumo
|
<p>When we use a "simple fanout" Ingress pattern, as described here: <a href="https://v1-18.docs.kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">https://v1-18.docs.kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout</a> (we're using v1.18), the Ingress performs a simple redirect, sending <code>302 - Found</code></p>
<p>This causes the HTTP method to change to GET.</p>
<p>I'm trying to get the Ingress to give either a 307 or a 308 response instead.</p>
<p>I've tried using NGINX/k8s annotations, and I've tried applying the sort of approach here: <a href="https://stackoverflow.com/questions/56177198/kubernetes-nginx-ingress-changes-http-request-from-a-post-to-a-get">Kubernetes NGINX Ingress changes HTTP request from a POST to a GET</a> but this doesn't work</p>
<p>We have 2 applications mapped to the same server using different ports behind the scenes.</p>
<p>Strangely enough, the POST is preserved fine with <code>curl</code></p>
|
pc3356
|
<p>In the end it's not really anything to do with K8S or NGINX:</p>
<p>The endpoint I wanted to use was mapped to "/" in the context (i.e. <code>/blah/</code>)</p>
<p>Since it's an API, it would be unusual to have something with a trailing slash, so I was referring to it by <code>/blah</code>, which the underlying application server rewrote to <code>/blah/</code> with a 302.</p>
<p>Fixed it by using <code>/v1</code> as the context (and ingress path), so I can have <code>/v1/blah</code> and it won't rewrite/redirect.</p>
|
pc3356
|
<p>Using the Nginx Ingress Controller, we would like to expose different paths of a Kubernetes service, with different security requirements.</p>
<ol>
<li><p><code>/</code> is open to the public</p></li>
<li><p><code>/white-list</code> only allows connections from a specific IP Address</p></li>
<li><p><code>/need-key</code> requires an API key</p></li>
</ol>
<p>I'm running in AWS EKS. Kubernetes version is as follows:<code>v1.12.6-eks-d69f1b</code>.</p>
<p>If we use annotations, they apply to the entire Ingress. Ideally I would like to apply an annotation only to a path. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-myServiceA
annotations:
# use the shared ingress-nginx
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: myServiceA.foo.org
http:
paths:
- path: /
backend:
serviceName: myServiceA
servicePort: 80
- path: /white-list
backend:
serviceName: myServiceA
servicePort: 80
**NEED SOMETHING HERE TO WHITELIST**
- path: /need-key
backend:
serviceName: myServiceA
servicePort: 80
**NEED SOMETHING HERE TO USE API-KEY**
</code></pre>
<p>Everything I've tried so far ends up applying to all the paths.
I can live without the API key, since I can implement that in code, but ideally I'd rather have it managed outside of the container.</p>
<p>Has anyone accomplished this with NGINX Ingress controller?</p>
|
Rolando Cintron
|
<p>To apply annotations per path, you can write one <code>Ingress</code> resource for each path. The NGINX Ingress Controller collects all of those <code>Ingress</code> rules by itself, merges them, and applies each resource's annotations accordingly. </p>
<p>For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-myServiceA-root
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: myServiceA.foo.org
http:
paths:
- path: /
backend:
serviceName: myServiceA
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-myServiceA-white-list
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/whitelist-source-range: X.X.X.X/32
spec:
rules:
- host: myServiceA.foo.org
http:
paths:
- path: /white-list
backend:
serviceName: myServiceA
servicePort: 80
...
</code></pre>
|
Fei
|
<p>My problem is that I need to increase the NATS <code>max_payload</code> in the production environment, but it runs on Kubernetes and I have no idea how to do that there. I tried to use a ConfigMap but had no success.</p>
<p>In the local/dev environment it uses a NATS config file with Docker, so it works fine.</p>
<p>Way to make it works local: <a href="https://stackoverflow.com/questions/65638631/nats-with-moleculer-how-can-i-change-nats-max-payload-value">NATS with moleculer. How can I change NATS max_payload value?</a></p>
<p>Code inside k8s/deployment.develop.yaml</p>
<pre><code>(...)
apiVersion: v1
kind: ServiceAccount
metadata:
name: nats
namespace: develop
labels:
account: nats
---
apiVersion: v1
kind: Service
metadata:
name: nats
namespace: develop
labels:
app: nats
service: nats
spec:
ports:
- port: 4222
targetPort: 4222
selector:
app: nats
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nats-v1
namespace: develop
labels:
app: nats
version: v1
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: nats
version: v1
template:
metadata:
labels:
app: nats
version: v1
spec:
serviceAccountName: nats
containers:
- image: 'nats:alpine'
name: nats
ports:
- containerPort: 4222
restartPolicy: Always
tolerations: []
affinity: {}
</code></pre>
<p>Thanks!</p>
|
Matheus Chignolli
|
<p>The payload value is hardcoded into the docker image (in the nats-server.conf file), so you cannot change it via a configmap. The only solution I found is to build my own nats image with a modified <code>nats-server.conf</code> file.</p>
<p>Here is how to do it.</p>
<ol>
<li>First, select an image flavor from <a href="https://github.com/nats-io/nats-docker" rel="nofollow noreferrer">the official repo</a> and download the content (for example <code>Dockerfile</code> and <code>nats-server.conf</code> from <a href="https://github.com/nats-io/nats-docker/tree/master/2.2.6/scratch" rel="nofollow noreferrer">here</a>).</li>
<li>Modify the <code>nats-server.conf</code> file to change the <code>max_payload</code> value : just add the following line (this file is NOT in JSON format)</li>
</ol>
<pre><code>max_payload: 4Mb
</code></pre>
<ol start="3">
<li>Then build the image: <code>docker image build .</code></li>
<li>Tag it, upload it to your own repo and use it instead of the official <code>nats</code> image</li>
</ol>
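<p>A minimal Dockerfile sketch of steps 1–3 (assuming the modified <code>nats-server.conf</code> sits next to the Dockerfile; the <code>/etc/nats</code> path mirrors the official image layout and may differ between versions):</p>

```dockerfile
# Sketch: custom NATS image with a larger max_payload.
# nats-server.conf is the official file with "max_payload: 4Mb" appended.
FROM nats:alpine
COPY nats-server.conf /etc/nats/nats-server.conf
```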
|
Fabien Quatravaux
|
<p>I have a Kafka StatefulSet. I need to expand the disk size; I tried without success to use the automatic resize feature of k8s 1.9.</p>
<p>Here : <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims</a></p>
<p>I did activate the feature gates and the admission plugin; I think it works because I can successfully change the size of the PVC after the modification.</p>
<p>But nothing happened after I modified the size of the PVC from 50Gi to 250Gi.</p>
<p>The capacity did change everywhere in the PVC, but not on AWS: the EBS volume is still 50GB, and a df -h in the pod still shows 50GB.</p>
<p>Did i miss something ? Do i have to manually resize on aws ?</p>
<p>thank you</p>
|
Guillaume Alouege
|
<p>I made the feature work, but in a very very dirty way. </p>
<ol>
<li>Modify the size of the PVC</li>
<li>Modify the size of the EBS manually</li>
<li>Force unmount the volume on AWS</li>
<li>The pod crashes and is rescheduled by the StatefulSet; when the pod is up again, the volume and partition have the correct size</li>
</ol>
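<p>For reference, steps 2 and 3 can be done with the AWS CLI (a sketch; the volume ID is a placeholder you would look up from the PersistentVolume's <code>spec.awsElasticBlockStore.volumeID</code>):</p>

```shell
# Step 2: grow the EBS volume manually (vol-0abc1234 is a placeholder)
aws ec2 modify-volume --volume-id vol-0abc1234 --size 250
# Step 3: force-detach the volume so the pod crashes and the
# StatefulSet reschedules it onto the freshly resized volume
aws ec2 detach-volume --volume-id vol-0abc1234 --force
```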
|
Guillaume Alouege
|
<p>I have a CentOS (v7.4) virtual machine running on OpenStack. In the virtual machine I deployed Kubernetes and 7 pods; the virtual machine can ping the external server. But one pod (uaa) could NOT connect to a database on the external network (app.xxx.com 11521), and the logs say:</p>
<pre><code> ... 49 common frames omitted
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:419) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:536) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:228) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:521) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:39) ~[commons-dbcp2-2.1.1.jar!/:2.1.1]
at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:256) ~[commons-dbcp2-2.1.1.jar!/:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2304) ~[commons-dbcp2-2.1.1.jar!/:2.1.1]
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2290) ~[commons-dbcp2-2.1.1.jar!/:2.1.1]
... 53 common frames omitted
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:375) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:422) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:678) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:238) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1054) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:308) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
... 61 common frames omitted
Caused by: java.net.UnknownHostException: app.xxx.com: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_111]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928) ~[na:1.8.0_111]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323) ~[na:1.8.0_111]
at java.net.InetAddress.getAllByName0(InetAddress.java:1276) ~[na:1.8.0_111]
at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[na:1.8.0_111]
at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[na:1.8.0_111]
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:171) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.net.nt.ConnOption.connect(ConnOption.java:123) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:353) ~[ojdbc6-11.2.0.1.0.jar!/:11.2.0.2.0]
... 66 common frames omitted
</code></pre>
<p>I exec into the pod: </p>
<pre><code>[centos@kube-master ingress2]$ sudo kubectl exec -it gearbox-rack-uaa-service -5b5fd58d87-5zsln -- /bin/bash
root@gearbox-rack-uaa-service-5b5fd58d87-5zsln:/# ping app.xxx.com
ping: unknown host
</code></pre>
<p>My question is: how do I let the pod connect to the external database?</p>
|
user84592
|
<p>In your spec file, try setting <code>spec.template.spec.hostNetwork: true</code>.
There are other things this affects, but it should get you past your ping issue.</p>
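<p>A sketch of where that field goes in the Deployment manifest (the container name and image are placeholders); note that with host networking you typically also need <code>dnsPolicy: ClusterFirstWithHostNet</code> to keep cluster DNS working:</p>

```yaml
spec:
  template:
    spec:
      hostNetwork: true                    # pod shares the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolvable with hostNetwork
      containers:
      - name: uaa
        image: my-uaa-image                # placeholder
```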
|
in need of help
|
<p>I'm developing an Electron app and want to distribute the back-end portion of the web application (PHP) via Docker and Kubernetes (using Helm charts). I plan to package the expanded dmg of Docker, but haven't found a way to configure Docker from the terminal. Is this possible: enabling Kubernetes and increasing the CPU and RAM allocation via the terminal?</p>
<p><strong>Edit:</strong> I don't want to just start Docker from the command line. I also want to configure the first installation, specifying the amount of resources the Docker daemon has access to and enabling Kubernetes. </p>
|
jth_92
|
<p>After continued research, I did find an answer to this. On Docker for Mac, the Docker daemon actually runs inside a HyperKit VM, and the Docker CLI just communicates with the Docker engine running in HyperKit. The configuration for this is at ~/Library/Group Containers/group.com.docker/settings.json. </p>
<pre><code>{
"proxyHttpMode" : "system",
"diskPath" : "~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2",
"diskSizeMiB" : 65536,
"cpus" : 5,
"defaultMachineMigrationStatus" : 4,
"memoryMiB" : 9216,
"displayedWelcomeWhale" : true,
"buildNumber" : "26764",
"autoStart" : true,
"kubernetesInitialInstallPerformed" : true,
"channelID" : "stable",
"checkForUpdates" : true,
"settingsVersion" : 1,
"kubernetesEnabled" : true,
"version" : "18.06.1-ce-mac73",
"displayedWelcomeMessage" : true,
"analyticsEnabled" : true,
"linuxDaemonConfigCreationDate" : "2017-10-24 15:59:40 +0000",
"dockerAppLaunchPath" : "/Applications/Docker.app"
}
</code></pre>
<p>When Docker starts up, it passes these settings to HyperKit as command-line arguments: <code>com.docker.hyperkit -A -u -F vms/0/hyperkit.pid -c 5 -m 9216M</code>.</p>
<p>By default, when running containers, Docker lets them consume all of HyperKit's memory and CPU, but this can be overridden with <code>docker run</code> arguments.</p>
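<p>So a first installation can in principle be pre-configured by writing this file before Docker.app is launched. For example (a sketch; the values are illustrative and the key names are taken from the settings file above):</p>

```shell
# Pre-seed Docker for Mac settings before first launch (sketch).
SETTINGS="$HOME/Library/Group Containers/group.com.docker/settings.json"
mkdir -p "$(dirname "$SETTINGS")"
cat > "$SETTINGS" <<'EOF'
{
  "cpus" : 4,
  "memoryMiB" : 8192,
  "kubernetesEnabled" : true,
  "autoStart" : true
}
EOF
# open -a Docker   # then launch Docker, which reads the settings file
```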
|
jth_92
|
<p>I just started with Kubernetes, with the help of Minikube on a Windows 10 machine with Hyper-V. I deployed and exposed an nginx container as follows, but could not access the deployed application through the Minikube IP or from inside the Minikube server.</p>
<pre><code> C:\WINDOWS\system32>kubectl run nginx --image=nginx --port=8090
deployment "nginx" created
C:\WINDOWS\system32>kubectl expose deployment nginx --type="NodePort"
service "nginx" exposed
C:\WINDOWS\system32>minikube service list
|-------------|----------------------|----------------------------|
| NAMESPACE | NAME | URL |
|-------------|----------------------|----------------------------|
| default | kubernetes | No node port |
| default | nginx | http://10.16.234.206:30684 |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | http://10.16.234.206:30000 |
|-------------|----------------------|----------------------------|
C:\WINDOWS\system32>minikube ssh "curl localhost:30684"
curl: (7) Failed to connect to localhost port 30684: Connection refused
$ curl "http://10.16.234.206:30684"
curl: (7) Failed to connect to 10.16.234.206 port 30684: Connection refused
</code></pre>
|
prabhakaran
|
<p>The nginx Dockerfile exposes port 80, but your pod is using port 8090. Running the deployment with the right port, 80, should fix it:</p>
<pre><code>kubectl run nginx --image=nginx --port=80
</code></pre>
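<p>After redeploying with port 80, re-exposing the deployment and curling the NodePort URL should then work; a sketch of the follow-up commands:</p>

```shell
kubectl expose deployment nginx --type=NodePort
# minikube prints the reachable URL, e.g. http://<minikube-ip>:<nodePort>
minikube service nginx --url
curl "$(minikube service nginx --url)"
```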
|
hc6
|
<p>I’m struggling with the last step of a configuration using MetalLB, Kubernetes, Istio on a bare-metal instance, and that is to have a web page returned from a service to the outside world via an Istio VirtualService route. I’ve just updated the instance to</p>
<ul>
<li>MetalLB (version 0.7.3)</li>
<li>Kubernetes (version 1.12.2)</li>
<li>Istio (version 1.0.3)</li>
</ul>
<p>I’ll start with what does work.</p>
<p>All complementary services have been deployed and most are working:</p>
<ol>
<li>Kubernetes Dashboard on <a href="http://localhost:8001" rel="nofollow noreferrer">http://localhost:8001</a></li>
<li>Prometheus Dashboard on <a href="http://localhost:10010" rel="nofollow noreferrer">http://localhost:10010</a> (I had something else on 9009)</li>
<li>Envoy Admin on <a href="http://localhost:15000" rel="nofollow noreferrer">http://localhost:15000</a></li>
<li>Grafana (Istio Dashboard) on <a href="http://localhost:3000" rel="nofollow noreferrer">http://localhost:3000</a></li>
<li>Jaeger on <a href="http://localhost:16686" rel="nofollow noreferrer">http://localhost:16686</a></li>
</ol>
<p>I say most because, since the upgrade to Istio 1.0.3, I've lost the telemetry from istio-ingressgateway in the Jaeger dashboard and I'm not sure how to bring it back. I've deleted the pod and re-created it, to no avail.</p>
<p>Outside of that, MetalLB and K8S appear to be working fine and the load-balancer is configured correctly (using ARP).</p>
<pre><code>kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.109.247.149 <none> 3000/TCP 9d
istio-citadel ClusterIP 10.110.129.92 <none> 8060/TCP,9093/TCP 28d
istio-egressgateway ClusterIP 10.99.39.29 <none> 80/TCP,443/TCP 28d
istio-galley ClusterIP 10.98.219.217 <none> 443/TCP,9093/TCP 28d
istio-ingressgateway LoadBalancer 10.108.175.231 192.168.1.191 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30805/TCP,8060:32514/TCP,853:30601/TCP,15030:31159/TCP,15031:31838/TCP 28d
istio-pilot ClusterIP 10.97.248.195 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 28d
istio-policy ClusterIP 10.98.133.209 <none> 9091/TCP,15004/TCP,9093/TCP 28d
istio-sidecar-injector ClusterIP 10.102.158.147 <none> 443/TCP 28d
istio-telemetry ClusterIP 10.103.141.244 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 28d
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP,5778/TCP 27h
jaeger-collector ClusterIP 10.104.66.65 <none> 14267/TCP,14268/TCP,9411/TCP 27h
jaeger-query LoadBalancer 10.97.70.76 192.168.1.193 80:30516/TCP 27h
prometheus ClusterIP 10.105.176.245 <none> 9090/TCP 28d
zipkin ClusterIP None <none> 9411/TCP 27h
</code></pre>
<p>I can expose my deployment using:</p>
<pre><code>kubectl expose deployment enrich-dev --type=LoadBalancer --name=enrich-expose
</code></pre>
<p>it all works perfectly fine and I can hit the webpage from the external load balanced IP address (I deleted the exposed service after this).</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
enrich-expose LoadBalancer 10.108.43.157 192.168.1.192 31380:30170/TCP 73s
enrich-service ClusterIP 10.98.163.217 <none> 80/TCP 57m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 36d
</code></pre>
<p>If I create a K8S Service in the default namespace (I've tried multiple)</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: enrich-service
labels:
run: enrich-service
spec:
ports:
- name: http
port: 80
protocol: TCP
selector:
app: enrich
</code></pre>
<p>followed by a gateway and a route (VirtualService), the only response I get from outside the mesh is a 404. You'll see that I'm using the reserved word <code>mesh</code> for the gateway, but I've tried both that and naming the specific gateway. I've also tried different match prefixes for specific URIs and the port you can see below.</p>
<p>Gateway</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: enrich-dev-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
</code></pre>
<p>VirtualService</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: enrich-virtualservice
spec:
hosts:
- "enrich-service.default"
gateways:
- mesh
http:
- match:
- port: 80
route:
- destination:
host: enrich-service.default
subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: enrich-destination
spec:
host: enrich-service.default
trafficPolicy:
loadBalancer:
simple: LEAST_CONN
subsets:
- name: v1
labels:
app: enrich
</code></pre>
<p>I've double checked it's not the DNS playing up because I can go into the shell of the ingress-gateway either via busybox or using the K8S dashboard</p>
<p><a href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/shell/istio-system/istio-ingressgateway-6bbdd58f8c-glzvx/?namespace=istio-system" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/shell/istio-system/istio-ingressgateway-6bbdd58f8c-glzvx/?namespace=istio-system</a></p>
<p>and do both an</p>
<pre><code>nslookup enrich-service.default
</code></pre>
<p>and</p>
<pre><code>curl -f http://enrich-service.default/
</code></pre>
<p>and both work successfully, so I know the ingress-gateway pod can see those. The sidecars are set for auto-injection in both the default namespace and the istio-system namespace.</p>
<p>The logs for the ingress-gateway show the 404:</p>
<pre><code>[2018-11-01T03:07:54.351Z] "GET /metadataHTTP/1.1" 404 - 0 0 1 - "192.168.1.90" "curl/7.58.0" "6c1796be-0791-4a07-ac0a-5fb07bc3818c" "enrich-service.default" "-" - - 192.168.224.168:80 192.168.1.90:43500
[2018-11-01T03:26:39.339Z] "GET /HTTP/1.1" 404 - 0 0 1 - "192.168.1.90" "curl/7.58.0" "ed956af4-77b0-46e6-bd26-c153e29837d7" "enrich-service.default" "-" - - 192.168.224.168:80 192.168.1.90:53960
</code></pre>
<p>192.168.224.168:80 is the IP address of the gateway.
192.168.1.90:53960 is the IP address of my external client.</p>
<p>Any suggestions? I've tried hitting this from multiple angles for a couple of days now and I feel I'm just missing something simple. Perhaps there are logs I should look at?</p>
|
sturmstrike
|
<p>Just to close this question out with the solution to the problem in my instance. The mistake in configuration started all the way back at the Kubernetes cluster initialisation. I had applied:</p>
<pre><code>kubeadm init --pod-network-cidr=n.n.n.n/n --apiserver-advertise-address 0.0.0.0
</code></pre>
<p>with the pod-network-cidr set to the same address range as the local LAN on which the Kubernetes installation was deployed, i.e. the Ubuntu host's desktop network used the same IP subnet as the one I had assigned to the container network.</p>
<p>For the most part, everything operated fine as detailed above, until the Istio proxy tried to route packets from an external load-balancer IP address to an internal IP address that happened to be on the same subnet. Project Calico with Kubernetes seemed able to cope with it, as that's effectively Layer 3/4 policy, but Istio had a problem with it at L7 (even though it was sitting on Calico underneath).</p>
<p>The solution was to tear down my entire Kubernetes deployment. I was paranoid and went so far as to uninstall Kubernetes, then redeploy with a pod network in the 172 range, which had nothing to do with my local LAN. I also made the same changes in the Project Calico configuration file to match the pod network. After that change, everything worked as expected.</p>
<p>I suspect that in a more public configuration where your cluster was directly attached to a BGP router as opposed to using MetalLB with an L2 configuration as a subset of your LAN wouldn't exhibit this issue either. I've documented it more in this post:</p>
<p><a href="https://blooprynt.io/blog/2018/11/12/start-to-finish-net-containers-deployed-in-on-premise-load-balanced-kubernetes-with-istio-mesh" rel="nofollow noreferrer">Microservices: .Net, Linux, Kubernetes and Istio make a powerful combination</a></p>
|
sturmstrike
|
<p>When I put the application into production on a pod-managed Kubernetes architecture with the possibility of scaling (today it has two servers running the same application), Hangfire recognizes both servers but returns a 500 error:</p>
<pre><code>Unable to refresh the statistics: the server responded with 500 (error). Try reloading the page manually, or wait for automatic reload that will happen in a minute.
</code></pre>
<p>But on stage, the testing environment where there is only one server, Hangfire works normally.</p>
<p>Hangfire Configuration:</p>
<pre class="lang-cs prettyprint-override"><code>Startup.cs
services.AddHangfire(x => x.UsePostgreSqlStorage(Configuration.GetConnectionString("DefaultConnection")));
app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] { new AuthorizationFilterHangfire() }
});
app.UseHangfireServer();
</code></pre>
<p><a href="https://i.stack.imgur.com/Z7w6s.png" rel="nofollow noreferrer">Error</a></p>
|
iago soares
|
<p>You can now add <code>IgnoreAntiforgeryToken</code> to your service which should resolve this issue.</p>
<p>According to <a href="https://github.com/HangfireIO/Hangfire/issues/1248#issuecomment-517357213" rel="nofollow noreferrer">this github post</a>, the issue occurred when you had multiple servers running the dashboard: due to load balancing, when your request went to a different server from the one that originally served you the page, you'd see the error.</p>
<p>Adding <code>IgnoreAntiforgeryToken = true</code> to the dashboard should resolve the issue.</p>
<p>Excerpt Taken from <a href="https://github.com/HangfireIO/Hangfire/issues/1248#issuecomment-517357213" rel="nofollow noreferrer">here</a></p>
<pre><code>app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
Authorization = new[] {new HangfireAuthFilter()},
IgnoreAntiforgeryToken = true // <--This
});
</code></pre>
|
Jawad
|
<p>I have a k8s ingress with more than 5 backend services behind it. The ingress spawns a GoogleCloud LoadBalancer.</p>
<p>Traffic is routed to each of the services by an HTTP <code>path</code> rule. E.g. one app is on <code>/foo</code>, another is on <code>/bar</code>, etc.
All of them work fine. Then I added a new app, with a backend service and routing rule, all set up the same way as the others.</p>
<p>But I'm constantly getting this error when I hit the URL of the new app:</p>
<pre><code>Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
</code></pre>
<p>When I open the ingress in GCP console, I can see this warning: <a href="https://i.stack.imgur.com/ZyA7I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZyA7I.png" alt="enter image description here" /></a></p>
<p>and the unhealthy service is the one from my newly added app.</p>
<p><strong>The weird thing is that the app actually does get traffic when I hit the URL. I can see it in the logs. But I still get that 502 error and the backend service is shown as unhealthy.</strong></p>
<p>I am not really sure how to debug this in order to figure out what the issue is.</p>
|
Milkncookiez
|
<p>So, the problem was that the LB health-check was hitting <code>/</code>, which was a non-existent endpoint on the app (i.e. it wasn't returning <code>200 OK</code>).</p>
<p>I added <code>readiness</code> probe to the k8s Deployment. According to the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="noreferrer">GCP Ingress docs</a>, if there is a <code>readiness</code> probe, the ingress will pick it up and use it as the LB health-check.</p>
<p>I also had to manually update the path to which the health-check object of the backend service was hitting. I guess the pod with readiness probe should exist before the ingress is set up, otherwise it doesn't update the health-check object automatically.</p>
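<p>For reference, a minimal readiness-probe sketch for the Deployment's container spec (the <code>/healthz</code> path and port <code>8080</code> are assumptions; point it at whatever endpoint your app actually answers with a <code>200</code>):</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz   # GCE ingress copies this path into the LB health-check
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
</code></pre>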
|
Milkncookiez
|
<pre><code> |--> service1:8081 --> pod1-a, pod1-b, pod1-c
UI -> load balancer -> ingress (mydomain.com)
|--> service2:8082 --> pod2-a, pod2-b, pod2-c
</code></pre>
<p>So from <code>service1</code>, I could call <code>service2</code> directly with <code>http://service2:8082</code>, but since this is not being done through the UI -> load balancer, how does this get load balanced? Should I not call <code>service2</code> directly, and call it through <code>mydomain.com/service2</code> instead so it would have to go through the flow?</p>
|
atkayla
|
<p>Invoking a service from another service hits the iptables rules (programmed by kube-proxy) on the node, which pick a service endpoint to route the traffic to. This is faster.</p>
<p>If you call it through mydomain.com/service2, the flow passes through the additional L7 ingress hop and will be comparatively slower.</p>
|
techuser soma
|
<p>I am trying to always keep the slave pod running. Unfortunately, when using the Kubernetes agent inside the pipeline, I am still struggling with setting <code>podRetention</code> to always.</p>
|
Ashish Kumar
|
<p>For a declarative pipeline you would use idleMinutes to keep the pod longer</p>
<pre><code>pipeline {
    agent {
        kubernetes {
            label "myPod"
            defaultContainer 'docker'
            yaml readTrusted('kubeSpec.yaml')
            idleMinutes 30
        }
    }
}
</code></pre>
<p>The idea is to keep the pod alive for a certain time for jobs that are triggered often, for instance the one watching the master branch. That way, if developers are on a rampage pushing to master, the builds will be fast. When the devs are done, we don't need the pod up forever and we don't want to pay for idle resources, so we let the pod kill itself.</p>
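<p>If you are on a scripted pipeline instead, the kubernetes plugin also exposes a <code>podRetention</code> option on <code>podTemplate</code>. A sketch (verify the exact syntax against your plugin version, and note that <code>always()</code> keeps the pod around until you delete it manually):</p>
<pre><code>podTemplate(label: 'myPod', yaml: readTrusted('kubeSpec.yaml'), podRetention: always()) {
    node('myPod') {
        container('docker') {
            // the pod is not torn down after the build finishes
            sh 'docker version'
        }
    }
}
</code></pre>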
|
fredericrous
|
<p>I just started using Istio and securing service to service communication and have two questions: </p>
<ol>
<li>When using nginx ingress, will Istio secure the data from the ingress controller to the service with TLS?</li>
<li>Is it possible to secure with TLS all the way to the pod?</li>
</ol>
|
ItFreak
|
<ol>
<li><p>With Envoy deployed as a sidecar container in both (a) the <strong>NGINX pod</strong> and (b) the <strong>application pod</strong>, Istio will ensure that the two services communicate with each other over TLS. </p></li>
<li><p>In fact, that's the whole idea behind using Istio, i.e. to secure all the communication all the way to the pod using the Envoy sidecar. Envoy will intercept all the traffic going in/out of the pod and perform TLS communication with its peer Envoy counterpart.</p></li>
</ol>
<p>All this is done in a transparent manner, i.e. transparent to the application container. The responsibility for TLS-layer jobs (e.g. handshake, encryption/decryption, peer discovery) is offloaded to the Envoy sidecar. </p>
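<p>As an illustration, in current Istio versions mutual TLS can be enforced for a namespace with a policy like the following (a sketch; older Istio releases used <code>MeshPolicy</code>/authentication policies instead, so check the docs for your version, and <code>my-namespace</code> is a placeholder):</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace
spec:
  mtls:
    mode: STRICT   # sidecars reject any plain-text traffic
</code></pre>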
|
piy26
|
<p>Currently I am using Kubernetes v1.11.6.
I deployed Kubernetes on AWS using kops.
In the k8s cluster, I deployed Kafka and Elasticsearch.</p>
<p>PVC for kafka and elasticsearch are EBS volumes in AWS.</p>
<p>My question is how to monitor PVC used and remaining available.</p>
<p>This did not work: <a href="https://stackoverflow.com/questions/44718268/how-to-monitor-disk-usage-of-kubernetes-persistent-volumes">How to monitor disk usage of kubernetes persistent volumes?</a>
The metrics no longer seem to be exposed starting from 1.12.</p>
<p>I thought of using aws cloudwatch but I am thinking kubernetes will have some answer for this generic problem.</p>
<p>I should be able to see PVC used and remaining available disk space</p>
|
Ram
|
<p>Generally speaking, you can monitor the following metrics:</p>
<pre><code>kubelet_volume_stats_capacity_bytes
kubelet_volume_stats_available_bytes
</code></pre>
<p>These metrics can be scraped from the kubelet endpoint on each node with tools like Prometheus :) </p>
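<p>For example, once those metrics are scraped, the fraction of space remaining per claim can be computed with a PromQL expression like this (the <code>persistentvolumeclaim</code> label name is the standard kubelet one, but verify it against your setup; <code>my-pvc</code> is a placeholder for your claim name):</p>
<pre><code>kubelet_volume_stats_available_bytes{persistentvolumeclaim="my-pvc"}
  / kubelet_volume_stats_capacity_bytes{persistentvolumeclaim="my-pvc"}
</code></pre>
<p>Alerting when this ratio drops below, say, 0.1 gives an early warning before the volume fills up.</p>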
|
Stepan Vrany
|
<p>I've got this ingress.yaml base configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
sia: aza
app: asap-ingress-internal
name: asap-ingress-internal
annotations:
kubernetes.io/ingress.class: "nginx-external"
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: the-host-value
http:
paths:
- path: /asap-srv-template/(.*)
backend:
serviceName: asap-srv-template
servicePort: 8080
</code></pre>
<p>I want to replace only the <code>spec.rules.host</code> value (and keep all <code>http.paths</code> as is).</p>
<p>So I create a env-var.yaml like this :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: asap-ingress-internal
spec:
rules:
- host: the.real.hostname
</code></pre>
<p>But the result is the following:</p>
<pre><code>$ kustomize build
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx-external
nginx.ingress.kubernetes.io/use-regex: "true"
labels:
app: asap-ingress-internal
env: dev
sia: aza
name: asap-ingress-internal
namespace: aza-72461-dev
spec:
rules:
- host: the.real.hostname
</code></pre>
<p>I have lost all http.paths configuration and I can't find out how to do.</p>
<p>I tried with patches: or patchesStrategicMerge in kustomization.yaml but the result is always the same.</p>
<p>Any help would be greatly appreciated</p>
|
jmcollin92
|
<p>You can use a json patch for this, below is an example.</p>
<p>Here is an example <code>kustomization.yaml</code>. It will call out a patch in the <code>patches</code> section:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/app1
patches:
- target:
kind: Ingress
name: my-ingress
path: ingress-patch.json
</code></pre>
<p>Here would be an example <code>ingress-patch.json</code>:</p>
<pre><code>[
{
"op": "replace",
"path": "/spec/rules/0/host",
"value": "the.real.hostname"
}
]
</code></pre>
|
mroma
|
<p>I'm trying to use this feature: <a href="https://cloud.ibm.com/docs/services/appid?topic=appid-kube-auth#kube-auth" rel="nofollow noreferrer">https://cloud.ibm.com/docs/services/appid?topic=appid-kube-auth#kube-auth</a></p>
<p>I've followed the steps in the documentation, but the authentication process is not triggered. Unfortunately I don't see any errors and don't know what else to do.</p>
<p>Here is my sample service (nginx.yaml):</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: default
labels:
app: nginx
spec:
ports:
- name: http
port: 80
protocol: TCP
selector:
app: nginx
type: NodePort
</code></pre>
<p>Here is my sample ingress (ingress.yaml). Replace 'niklas-heidloff-4' with your cluster name and 'niklas-heidloff-appid' with the name of your App ID service instance.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-with-app-id
annotations:
ingress.bluemix.net/appid-auth: "bindSecret=binding-niklas-heidloff-appid namespace=default requestType=web"
spec:
tls:
- hosts:
- niklas.niklas-heidloff-4.us-south.containers.appdomain.cloud
secretName: niklas-heidloff-4
rules:
- host: niklas.niklas-heidloff-4.us-south.containers.appdomain.cloud
http:
paths:
- path: /
backend:
serviceName: nginx
servicePort: 80
</code></pre>
<p>Here are the steps to reproduce the sample:</p>
<p>First create a new cluster with at least two worker nodes in Dallas as described in the documentation. Note that it can take some extra time to get a public IP for your cluster.</p>
<p>Then create a App ID service instance.</p>
<p>Then invoke the following commands (replace 'niklas-heidloff-4' with your cluster name):</p>
<pre><code>$ ibmcloud login -a https://api.ng.bluemix.net
$ ibmcloud ks region-set us-south
$ ibmcloud ks cluster-config niklas-heidloff-4 (and execute export....)
$ ibmcloud ks cluster-service-bind --cluster niklas-heidloff-4 --namespace default --service niklas-heidloff-appid
$ kubectl apply -f nginx.yaml
$ kubectl apply -f ingress.yaml
</code></pre>
<p>After this I could open '<a href="https://niklas.niklas-heidloff-4.us-south.containers.appdomain.cloud/" rel="nofollow noreferrer">https://niklas.niklas-heidloff-4.us-south.containers.appdomain.cloud/</a>' but the authentication process is not triggered and the page opens without authentication. </p>
|
Niklas Heidloff
|
<p>I tried the steps mentioned in the <a href="https://cloud.ibm.com/docs/services/appid?topic=appid-kube-auth#kube-auth" rel="nofollow noreferrer">link</a> and this is how it <strong>worked</strong> for me. </p>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
annotations:
ingress.bluemix.net/appid-auth: "bindSecret=binding-appid-ks namespace=default requestType=web serviceName=nginx idToken=false"
spec:
tls:
- hosts:
- test.vidya-think-cluster.us-south.containers.appdomain.cloud
secretName: vidya-think-cluster
rules:
- host: test.vidya-think-cluster.us-south.containers.appdomain.cloud
http:
paths:
- path: /
backend:
serviceName: nginx
servicePort: 80
</code></pre>
<p>I added the following web redirect URL in the <code>authentication settings</code> of App ID service - <code>http://test.vidya-think-cluster.us-south.containers.appdomain.cloud/appid_callback</code>.</p>
<p>Now, when you try accessing the app at <code>http://test.vidya-think-cluster.us-south.containers.appdomain.cloud/</code> you should see the redirection to App ID</p>
<p>Looks like <code>idToken=false</code> is a mandatory parameter, as there is an error when you run <code>kubectl describe ingress myingress</code> </p>
<p><strong>Error</strong>: <em>Failed to apply ingress.bluemix.net/appid-auth annotation. Error annotation format error : One of the mandatory fields not valid/missing for annotation ingress.bluemix.net/appid-auth</em></p>
|
Vidyasagar Machupalli
|
<ul>
<li><p>Since containers are lightweight os virtualization, can we get the same performance as native (host)?</p>
</li>
<li><p>What would be the difference in performance?</p>
</li>
</ul>
<p>Any leads are highly appreciated or if you have any analysis reports or any reference with host vs containers performance comparison will help.</p>
<p>TIA</p>
|
Containers-Intrest
|
<p>This question has already been <a href="https://stackoverflow.com/questions/21889053/what-is-the-runtime-performance-cost-of-a-docker-container">answered</a>. While that answer refers specifically to Docker, it can be generalized to other
OCI-compliant container technologies: they all use the same primitives, such as cgroups, Linux namespaces and (mostly) unionfs.</p>
<p>Keep in mind that (in the case of Docker) this only refers to containers running on Linux. If you operate Docker containers on Windows or Mac, you have a Linux virtualization layer, which causes extra (significant) performance loss.</p>
<p>Edit: there are other approaches to "containerization", such as KVM-based ones. As the question is tagged with "Docker", I assume it is specifically asking about the performance impact of Docker containers.</p>
|
cvoigt
|
<p>I want to use istio with existing jaeger tracing system in K8S, I began with installing jaeger system following <a href="https://github.com/jaegertracing/jaeger-kubernetes" rel="noreferrer">the official link</a> with cassandra as backend storage. Then installed istio by <a href="https://istio.io/docs/setup/kubernetes/helm-install/" rel="noreferrer">the helm way</a>, but with only some selected components enabled: </p>
<pre><code>helm upgrade istio -i install/kubernetes/helm/istio --namespace istio-system \
--set security.enabled=true \
--set ingress.enabled=false \
--set gateways.istio-ingressgateway.enabled=true \
--set gateways.istio-egressgateway.enabled=false \
--set galley.enabled=false \
--set sidecarInjectorWebhook.enabled=true \
--set mixer.enabled=false \
--set prometheus.enabled=false \
--set global.proxy.envoyStatsd.enabled=false \
--set pilot.sidecar=true \
--set tracing.enabled=false
</code></pre>
<p>Jaeger and istio are installed inside the same namespace <code>istio-sytem</code>, after all done, all pods inside it looks like this:</p>
<pre><code>kubectl -n istio-system get pods
NAME READY STATUS RESTARTS AGE
istio-citadel-5c9544c886-gr4db 1/1 Running 0 46m
istio-ingressgateway-8488676c6b-zq2dz 1/1 Running 0 51m
istio-pilot-987746df9-gwzxw 2/2 Running 1 51m
istio-sidecar-injector-6bd4d9487c-q9zvk 1/1 Running 0 45m
jaeger-collector-5cb88d449f-rrd7b 1/1 Running 0 59m
jaeger-query-5b5948f586-gxtk7 1/1 Running 0 59m
</code></pre>
<p>Then I followed <a href="https://istio.io/docs/examples/bookinfo/" rel="noreferrer">the link</a> to deploy the bookinfo sample into another namespace <code>istio-play</code>, which has label <code>istio-injection=enabled</code>, but no matter how I flush the <code>productpage</code> page, there's no tracing data be filled into jaeger.</p>
<p>I guess maybe tracing spans are sent to jaeger by mixer, like the way istio does all other telemetry stuff, so I set <code>--set mixer.enabled=true</code>, but unfortunately only some services like <code>istio-mixer</code> or <code>istio-telemetry</code> showed up. Finally I cleaned up all the above installation and followed <a href="https://istio.io/docs/tasks/telemetry/distributed-tracing/" rel="noreferrer">this task</a> step by step, but the tracing data of the bookinfo app is still not there.</p>
<p>My question is: how does istio send tracing data to jaeger? Does the sidecar proxy send it directly to jaeger-collector (<code>zipkin.istio-system:9411</code>) like <a href="https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/jaeger_tracing" rel="noreferrer">envoy does</a>, or does the data flow like this: <code>sidecar-proxy -> mixer -> jaeger-collector</code>? And how could I debug how the data flows between all the components inside the istio mesh? </p>
<p>Thanks for any help and info :-)</p>
<hr>
<p><strong>Update</strong>: I tried again by installing istio without helm: <code>kubectl -n istio-system apply -f install/kubernetes/istio-demo.yaml</code>, this time everything works just fine, there must be something different between <code>kubectl way</code> and <code>helm way</code>. </p>
|
shizhz
|
<p>Based on my experience and reading online, I found this interesting line in Istio <a href="https://istio.io/help/faq/mixer/" rel="nofollow noreferrer">mixer faq</a></p>
<blockquote>
<p>Mixer trace generation is controlled by command-line flags: trace_zipkin_url, trace_jaeger_url, and trace_log_spans. If any of those flag values are set, trace data will be written directly to those locations. If no tracing options are provided, Mixer will not generate any application-level trace information.</p>
</blockquote>
<p>Also, if you go deep into mixer <a href="https://github.com/istio/istio/blob/d6c3ebcaaffd7e45772beefeeb71708ef1588cb4/install/kubernetes/helm/subcharts/mixer/templates/deployment.yaml" rel="nofollow noreferrer">helm chart</a>, you will find traces of Zipkin and Jaeger signifying that it’s mixer that is passing trace info to Jaeger.</p>
<p>I also got confused while reading this line in one of the articles: </p>
<blockquote>
<p>Istio injects a sidecar proxy (Envoy) in the pod in which your application container is running. This sidecar proxy transparently intercepts (iptables magic) all network traffic going in and out of your application. Because of this interception, the sidecar proxy is in a unique position to automatically trace all network requests (HTTP/1.1, HTTP/2.0 & gRPC).</p>
</blockquote>
<p>Per the Istio Mixer documentation, the Envoy sidecar logically calls Mixer before each request to perform precondition checks, and after each request to report telemetry. The sidecar has local caching such that a large percentage of precondition checks can be performed from cache. Additionally, the sidecar buffers outgoing telemetry such that it only calls Mixer infrequently.</p>
<p><strong>Update:</strong> You can enable tracing to understand what happens to a request in Istio and also the role of mixer and envoy. Read more information <a href="https://istio.io/help/faq/telemetry/#life-of-a-request" rel="nofollow noreferrer">here</a></p>
|
Vidyasagar Machupalli
|
<p>I am trying to install istio in a minikube cluster</p>
<p>I followed the tutorial on this page <a href="https://istio.io/docs/setup/kubernetes/quick-start/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/quick-start/</a></p>
<p>I am trying to use Option 1 : <a href="https://istio.io/docs/setup/kubernetes/quick-start/#option-1-install-istio-without-mutual-tls-authentication-between-sidecars" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/quick-start/#option-1-install-istio-without-mutual-tls-authentication-between-sidecars</a></p>
<p>I can see that the services have been created but the deployment seems to have failed.</p>
<pre><code>kubectl get pods -n istio-system
No resources found
</code></pre>
<p>How can i troubleshoot this ?</p>
<p>Here are the results of get deployment</p>
<pre><code>kubectl get deployment -n istio-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
grafana 1 0 0 0 4m
istio-citadel 1 0 0 0 4m
istio-egressgateway 1 0 0 0 4m
istio-galley 1 0 0 0 4m
istio-ingressgateway 1 0 0 0 4m
istio-pilot 1 0 0 0 4m
istio-policy 1 0 0 0 4m
istio-sidecar-injector 1 0 0 0 4m
istio-telemetry 1 0 0 0 4m
istio-tracing 1 0 0 0 4m
prometheus 1 0 0 0 4m
servicegraph 1 0 0 0 4m
</code></pre>
|
sharpcodes
|
<p>This is what worked for me. Don't use the <code>--extra-config</code> flags when starting minikube; they crash kube-controller-manager-minikube, as it's not able to find the file: </p>
<blockquote>
<p>error starting controllers: failed to start certificate controller:
error reading CA cert file "/var/lib/localkube/certs/ca.crt": open
/var/lib/localkube/certs/ca.crt: no such file or directory</p>
</blockquote>
<p>Just start minikube with this command. I have minikube V0.30.0.</p>
<pre><code>minikube start
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
170.78 MB / 170.78 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubelet v1.10.0
Downloading kubeadm v1.10.0
Finished Downloading kubeadm v1.10.0
Finished Downloading kubelet v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
</code></pre>
<p>From the <code>istio-1.0.4</code> folder, run this command: </p>
<pre><code>kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
</code></pre>
<p>This should install all the required CRDs. </p>
<p>Run this command</p>
<pre><code>kubectl apply -f install/kubernetes/istio-demo.yaml
</code></pre>
<p>After successful creation of rules, services, deployments etc., Run this command</p>
<pre><code> kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-9cfc9d4c9-h2zn8 1/1 Running 0 5m
istio-citadel-74df865579-d2pbq 1/1 Running 0 5m
istio-cleanup-secrets-ghlbf 0/1 Completed 0 5m
istio-egressgateway-58df7c4d8-4tg4p 1/1 Running 0 5m
istio-galley-8487989b9b-jbp2d 1/1 Running 0 5m
istio-grafana-post-install-dn6bw 0/1 Completed 0 5m
istio-ingressgateway-6fc88db97f-49z88 1/1 Running 0 5m
istio-pilot-74bb7dcdd-xjgvz 0/2 Pending 0 5m
istio-policy-58878f57fb-t6fqt 2/2 Running 0 5m
istio-security-post-install-vqbzw 0/1 Completed 0 5m
istio-sidecar-injector-5cfcf6dd86-lr8ll 1/1 Running 0 5m
istio-telemetry-bf5558589-8hzcc 2/2 Running 0 5m
istio-tracing-ff94688bb-bwzfs 1/1 Running 0 5m
prometheus-f556886b8-9z6vp 1/1 Running 0 5m
servicegraph-55d57f69f5-fvqbg 1/1 Running 0 5m
</code></pre>
|
Vidyasagar Machupalli
|
<p>Since a couple of days ago, and without any change in the environment, one of the clusters running Kubernetes 1.19.9 on-prem has been showing errors regarding kubelet certificates.</p>
<p>A node is in NOT-READY state due to an expired certificate. Investigating a bit, I've found out that the CSRs are in pending state. I can approve them manually, but no certificate is issued at all.</p>
<p>I've tried to rejoin those nodes to the cluster, but I face the same situation with the CSR approval.</p>
<p>Example:</p>
<pre><code>NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-4dc9x 3m28s kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-ctrl-prod-1 Pending
csr-4xljn 18m kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-wk-stage-9 Pending
csr-6jdmg 3m19s kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-wk-stage-6 Pending
csr-9lr8n 18m kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-wk-stage-6 Pending
csr-g2pjt 3m35s kubernetes.io/kube-apiserver-client-kubelet system:node:vm-k8s-ctrl-prod-2 Pending
</code></pre>
<p>CSR example:</p>
<pre><code>apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
creationTimestamp: "2021-08-08T10:10:19Z"
generateName: csr-
managedFields:
- apiVersion: certificates.k8s.io/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:generateName: {}
f:spec:
f:request: {}
f:signerName: {}
f:usages: {}
manager: kubelet
operation: Update
time: "2021-08-08T10:10:19Z"
name: csr-4dc9x
resourceVersion: "775314577"
selfLink: /apis/certificates.k8s.io/v1/certificatesigningrequests/csr-4dc9x
uid: 8c51be15-4ec4-4dc7-8a7a-486e27c74607
spec:
groups:
- system:nodes
- system:authenticated
request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlIN01JR2lBZ0VBTUVBeEZUQVRCZ05WQkFvVERITjVjM1JsYlRwdWIyUmxjekVuTUNVR0ExVUVBeE1lYzNsegpkR1Z0T201dlpHVTZkbTB0YXpoekxXTjBjbXd0Y0hKdlpDMHhNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBECkFRY0RRZ0FFazNESFh2cTloVkZxZzB3bW5VeWd6Z3VGdmFRdDZFUkFCcHcrUmhRNHFCRlRqdkxTSGo3ZUxVK1oKT3JGaThaOGpYUjZqRE5nekVpUkxRQTloS1pxR0c2QUFNQW9HQ0NxR1NNNDlCQU1DQTBnQU1FVUNJUURObFJBcAphT0hFZWRteENDajZiK2tLMWJrNjVYVDc0aC9Nd1VCenVDSnBrUUlnU2F0U0Z3Rkp5ekNQaWtFZTRKQys0QStqClVtVUVWUzhlOWZRbkdXdjROTms9Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
signerName: kubernetes.io/kube-apiserver-client-kubelet
usages:
- digital signature
- key encipherment
- client auth
username: system:node:vm-k8s-ctrl-prod-1
status: {}
</code></pre>
<p>Did anyone face the same situation? i've checked all the certificates in the cluster and everything looks good to me.</p>
<pre><code>
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Jun 10, 2022 22:17 UTC 306d no
apiserver Jun 10, 2022 22:16 UTC 306d ca no
apiserver-kubelet-client Jun 10, 2022 22:16 UTC 306d ca no
controller-manager.conf Jun 10, 2022 22:17 UTC 306d no
front-proxy-client Jun 10, 2022 22:16 UTC 306d front-proxy-ca no
scheduler.conf Jun 10, 2022 22:17 UTC 306d no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Apr 07, 2029 17:39 UTC 7y no
front-proxy-ca Apr 07, 2029 17:39 UTC 7y no
</code></pre>
<p>Thanks in advance</p>
|
trookam
|
<p>Just in case anyone else faces this situation: the issue was a legacy configuration for kubelet on the master nodes.</p>
<p><a href="https://serverfault.com/questions/1065444/how-can-i-find-which-kubernetes-certificate-has-expired">https://serverfault.com/questions/1065444/how-can-i-find-which-kubernetes-certificate-has-expired</a></p>
<p>Manually reconfiguring kubelet.conf on the controllers and restarting the control plane fixed the issue.</p>
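<p>For anyone hitting the same symptom, two commands that help narrow it down (the certificate path is the kubelet default and may differ in your setup):</p>
<pre><code># check when the kubelet client certificate actually expires
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate

# approve all pending CSRs in one go while debugging
kubectl get csr -o name | xargs kubectl certificate approve
</code></pre>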
<p>Thanks</p>
|
trookam
|
<p>I'm new to Spark and doing a POC to download a file and then read it. However, I am facing an issue where the file doesn't exist.</p>
<blockquote>
<p>java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist</p>
</blockquote>
<p>But when I printed the path of the file I find out the file exists and the path is also correct.</p>
<p>This is the output</p>
<pre><code>23/02/19 13:10:46 INFO BlockManagerMasterEndpoint: Registering block manager 10.14.142.21:37515 with 2.2 GiB RAM, BlockManagerId(1, 10.14.142.21, 37515, None)
FILE IS DOWNLOADED
['/app/data-Feb-19-2023_131049.json']
23/02/19 13:10:49 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.
23/02/19 13:10:49 INFO SharedState: Warehouse path is 'file:/app/spark-warehouse'.
23/02/19 13:10:50 INFO InMemoryFileIndex: It took 39 ms to list leaf files for 1 paths.
23/02/19 13:10:51 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 206.6 KiB, free 1048.6 MiB)
23/02/19 13:10:51 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 35.8 KiB, free 1048.6 MiB)
23/02/19 13:10:51 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on experian-el-d41b428669cc1e8e-driver-svc.environments-quin-dev-1.svc:7079 (size: 35.8 KiB, free: 1048.8 MiB)
23/02/19 13:10:51 INFO SparkContext: Created broadcast 0 from json at <unknown>:0
23/02/19 13:10:51 INFO FileInputFormat: Total input files to process : 1
23/02/19 13:10:51 INFO FileInputFormat: Total input files to process : 1
23/02/19 13:10:51 INFO SparkContext: Starting job: json at <unknown>:0
23/02/19 13:10:51 INFO DAGScheduler: Got job 0 (json at <unknown>:0) with 1 output partitions
23/02/19 13:10:51 INFO DAGScheduler: Final stage: ResultStage 0 (json at <unknown>:0)
23/02/19 13:10:51 INFO DAGScheduler: Parents of final stage: List()
23/02/19 13:10:51 INFO DAGScheduler: Missing parents: List()
23/02/19 13:10:51 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at json at <unknown>:0), which has no missing parents
23/02/19 13:10:51 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 9.0 KiB, free 1048.6 MiB)
23/02/19 13:10:51 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.8 KiB, free 1048.5 MiB)
23/02/19 13:10:51 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on experian-el-d41b428669cc1e8e-driver-svc.environments-quin-dev-1.svc:7079 (size: 4.8 KiB, free: 1048.8 MiB)
23/02/19 13:10:51 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1513
23/02/19 13:10:51 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at json at <unknown>:0) (first 15 tasks are for partitions Vector(0))
23/02/19 13:10:51 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
23/02/19 13:10:51 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map()
23/02/19 13:10:52 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.14.142.21:37515 (size: 4.8 KiB, free: 2.2 GiB)
23/02/19 13:10:52 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.14.142.21:37515 (size: 35.8 KiB, free: 2.2 GiB)
23/02/19 13:10:52 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (10.14.142.21 executor 1): java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976)
at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStream(CodecStreams.scala:40)
at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStreamWithCloseResource(CodecStreams.scala:52)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.dataToInputStream(JsonDataSource.scala:195)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.createParser(JsonDataSource.scala:199)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.$anonfun$infer$4(JsonDataSource.scala:165)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$3(JsonInferSchema.scala:86)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2763)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$2(JsonInferSchema.scala:86)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator.isEmpty(Iterator.scala:387)
at scala.collection.Iterator.isEmpty$(Iterator.scala:387)
at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceLeftOption(TraversableOnce.scala:249)
at scala.collection.TraversableOnce.reduceLeftOption$(TraversableOnce.scala:248)
at scala.collection.AbstractIterator.reduceLeftOption(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceOption(TraversableOnce.scala:256)
at scala.collection.TraversableOnce.reduceOption$(TraversableOnce.scala:256)
at scala.collection.AbstractIterator.reduceOption(Iterator.scala:1431)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$1(JsonInferSchema.scala:103)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map()
23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 1]
23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map()
23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 2]
23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map()
23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 3]
23/02/19 13:10:52 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
23/02/19 13:10:52 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
23/02/19 13:10:52 INFO TaskSchedulerImpl: Cancelling stage 0
23/02/19 13:10:52 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled
23/02/19 13:10:52 INFO DAGScheduler: ResultStage 0 (json at <unknown>:0) failed in 1.128 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.14.142.21 executor 1): java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976)
at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStream(CodecStreams.scala:40)
at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStreamWithCloseResource(CodecStreams.scala:52)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.dataToInputStream(JsonDataSource.scala:195)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.createParser(JsonDataSource.scala:199)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.$anonfun$infer$4(JsonDataSource.scala:165)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$3(JsonInferSchema.scala:86)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2763)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$2(JsonInferSchema.scala:86)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator.isEmpty(Iterator.scala:387)
at scala.collection.Iterator.isEmpty$(Iterator.scala:387)
at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceLeftOption(TraversableOnce.scala:249)
at scala.collection.TraversableOnce.reduceLeftOption$(TraversableOnce.scala:248)
at scala.collection.AbstractIterator.reduceLeftOption(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceOption(TraversableOnce.scala:256)
at scala.collection.TraversableOnce.reduceOption$(TraversableOnce.scala:256)
at scala.collection.AbstractIterator.reduceOption(Iterator.scala:1431)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$1(JsonInferSchema.scala:103)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
</code></pre>
<p>This is my code to download the file and and print its path</p>
<pre><code>def find_files(self, filename, search_path):
    result = []
    # Walking top-down from the root
    for root, dir, files in os.walk(search_path):
        if filename in files:
            result.append(os.path.join(root, filename))
    return result

def downloadData(self, access_token, data):
    headers = {
        'Content-Type': 'application/json',
        'Charset': 'UTF-8',
        'Authorization': f'Bearer {access_token}'
    }
    try:
        response = requests.post(self.kyc_url, data=json.dumps(data), headers=headers)
        response.raise_for_status()
        logger.debug("received kyc data")
        response_filename = ("data-" + time.strftime('%b-%d-%Y_%H%M%S', time.localtime()) + ".json")
        # The with-block closes the file automatically, no explicit close needed
        with open(response_filename, 'w', encoding='utf-8') as f:
            json.dump(response.json(), f, ensure_ascii=False, indent=4)
        print("FILE IS DOWNLOADED")
        print(self.find_files(response_filename, "/"))
    except requests.exceptions.HTTPError as err:
        logger.error("failed to fetch kyc data")
        raise SystemExit(err)
    return response_filename
</code></pre>
<p>This is my code to read the file and upload to minio</p>
<pre><code>def load(spark: SparkSession, json_file_path: str, destination_path: str) -> None:
    df = spark.read.option("multiline", "true").json(json_file_path)
    df.write.format("delta").save(f"s3a://{destination_path}")
</code></pre>
<p>I'm running spark in k8s with spark operator.</p>
<p>This is my SparkApplication manifest</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: myApp
namespace: demo
spec:
type: Python
pythonVersion: "3"
mode: cluster
image: "myImage"
imagePullPolicy: Always
mainApplicationFile: local:///app/main.py
sparkVersion: "3.3.1"
restartPolicy:
type: OnFailure
onFailureRetries: 3
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
timeToLiveSeconds: 86400
deps:
packages:
- io.delta:delta-core_2.12:2.2.0
- org.apache.hadoop:hadoop-aws:3.3.1
driver:
env:
- name: NAMESPACE
value: demo
cores: 2
coreLimit: "2000m"
memory: "2048m"
labels:
version: 3.3.1
serviceAccount: spark-driver
executor:
cores: 4
instances: 1
memory: "4096m"
coreRequest: "500m"
coreLimit: "4000m"
labels:
version: 3.3.1
dynamicAllocation:
enabled: false
</code></pre>
<p>Can someone please point out what I am doing wrong?</p>
<p>Thank you</p>
|
Aman
|
<p>If you are running in cluster mode, your input files need to be on a shared filesystem such as <code>HDFS</code> or <code>S3</code>, not on the local FS, since both the driver and the executors must be able to access the input file.</p>
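<p>As a concrete illustration of this advice, the sketch below uploads the downloaded JSON to MinIO first and hands Spark the resulting <code>s3a://</code> URI instead of the local path. This is an assumption-laden sketch, not code from the question: the endpoint URL, bucket name, and credentials are placeholders, and <code>boto3</code> is an extra dependency that would need to be in the image.</p>

```python
import os


def s3a_uri(bucket: str, key: str) -> str:
    # Build the s3a:// path Spark should read instead of the local file path.
    return f"s3a://{bucket}/{key}"


def upload_to_minio(local_path: str, bucket: str,
                    endpoint_url: str, access_key: str, secret_key: str) -> str:
    """Upload the downloaded JSON to MinIO so every executor can read it."""
    # boto3 is an extra dependency (not stdlib); imported lazily so the
    # pure helper above stays usable without it.
    import boto3
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,  # e.g. "http://minio.demo.svc:9000" (placeholder)
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    key = os.path.basename(local_path)
    s3.upload_file(local_path, bucket, key)
    return s3a_uri(bucket, key)
```

<p>The driver would then call <code>load(spark, upload_to_minio(...), destination_path)</code>, so that the path resolves identically on every executor. All names here (bucket, endpoint, credential parameters) are illustrative, not taken from the question.</p>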
|
Islam Elbanna
|
<p>I've follow the documentation about <a href="https://cloud.google.com/iap/docs/enabling-kubernetes-howto" rel="nofollow noreferrer">how to enable IAP on GKE</a>.</p>
<p>I've:</p>
<ol>
<li>configured the consent screen</li>
<li>Create OAuth credentials</li>
<li>Add the universal redirect URL</li>
<li>Add myself as <code>IAP-secured Web App User</code></li>
</ol>
<p>And write my deployment like this:</p>
<pre class="lang-yaml prettyprint-override"><code>data:
client_id: <my_id>
client_secret: <my_secret>
kind: Secret
metadata:
name: backend-iap-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
name: grafana
spec:
ports:
- port: 443
protocol: TCP
targetPort: 3000
selector:
k8s-app: grafana
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: grafana
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: grafana
spec:
containers:
- env:
- name: GF_SERVER_HTTP_PORT
value: "3000"
image: docker.io/grafana/grafana:6.7.1
name: grafana
ports:
- containerPort: 3000
protocol: TCP
readinessProbe:
httpGet:
path: /api/health
port: 3000
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: backend-config-iap
spec:
iap:
enabled: true
oauthclientCredentials:
secretName: backend-iap-secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
name: monitoring-tls
spec:
domains:
- monitoring.foo.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
kubernetes.io/ingress.global-static-ip-name: monitoring
networking.gke.io/managed-certificates: monitoring-tls
name: grafana
spec:
backend:
serviceName: grafana
servicePort: 443
</code></pre>
<p>When I look at my ingress I've this:</p>
<pre class="lang-sh prettyprint-override"><code>$ k describe ingress
Name: grafana
[...]
Annotations: beta.cloud.google.com/backend-config: {"default": "backend-config-iap"}
ingress.kubernetes.io/backends: {"k8s-blabla":"HEALTHY"}
[...]
Events: <none>
$
</code></pre>
<p>I can connect to the web page without any problem and Grafana is up and running, but I can also connect without being authenticated (which is a problem).</p>
<p>So everything looks fine, but IAP is not activated. Why?</p>
<p>Worse, if I enable it manually it works, but if I redo <code>kubectl apply -f monitoring.yaml</code>, IAP is disabled again.</p>
<p>What am I missing ?</p>
<hr>
<p>Because my secret values are stored in Secret Manager (and retrieved at build time), I suspected my secrets of containing glitches (spaces, \n, etc.), so I added a script to test them:</p>
<pre class="lang-sh prettyprint-override"><code>gcloud compute backend-services update \
--project=<my_project_id> \
--global \
$(kubectl get ingress grafana -o json | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]') \
--iap=enabled,oauth2-client-id=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_client_id),oauth2-client-secret=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_secret)
</code></pre>
<p>And now IAP is properly enabled with the correct OAuth Client, so my secrets are "clean"</p>
<p>By the way, I also tried to rename the secret variables like this (from client_id):</p>
<ul>
<li>oauth_client_id</li>
<li>oauth-client-id</li>
<li>clientID (<a href="https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig#oauthclientcredentials_v1beta1_cloudgooglecom" rel="nofollow noreferrer">like in the backend documentation</a>)</li>
</ul>
<p>I've also write the value in the backend like this:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: BackendConfig
metadata:
name: backend-config-iap
spec:
iap:
enabled: true
oauthclientCredentials:
secretName: backend-iap-secret
clientID: <value>
clientSecret: <value>
</code></pre>
<p>But that doesn't work either.</p>
<hr>
<p><strong>Erratum</strong>:</p>
<p>The fact that IAP is disabled when I deploy again (after enabling it in the web UI) is due to my deployment script in this test (I ran a <code>kubectl delete</code> first).</p>
<p>But nevertheless, I can't enable IAP only with my backend configuration.</p>
<hr>
<p>As suggested I've filed a bug report: <a href="https://issuetracker.google.com/issues/153475658" rel="nofollow noreferrer">https://issuetracker.google.com/issues/153475658</a></p>
<hr>
<p>Solution given by <a href="https://stackoverflow.com/users/1432893/totem">Totem</a></p>
<p>Change given yaml with this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.global-static-ip-name: monitoring
networking.gke.io/managed-certificates: monitoring-tls
name: grafana
[...]
---
apiVersion: v1
kind: Service
metadata:
name: grafana
annotations:
beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
[...]
</code></pre>
<p>The backend config is associated with the Service, not the Ingress...
Now it works!</p>
|
Djabx
|
<p>You did everything right; just one small change is needed:
the annotation should be added to the Service resource.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
beta.cloud.google.com/backend-config: '{"ports": { "443":"backend-config-iap"}}'
name: grafana
</code></pre>
<p>Usually you need to associate it with a port, so I've added this example above; make sure it works with 443 as expected.</p>
<p>This is based on an internal example I'm using:</p>
<pre><code>beta.cloud.google.com/backend-config: '{"ports": {"3000": "be-cfg"}}'
</code></pre>
|
Totem
|