<p>I'm trying to mount a local folder as the <code>/data/db</code> of mongo in my minikube cluster. So far no luck :(</p>
<p>So, I followed <a href="https://leadwithoutatitle.wordpress.com/2018/03/05/how-to-deploy-mongodb-with-persistent-volume-in-kubernetes/" rel="nofollow noreferrer">these</a> steps. They describe how to create a persistent volume, a persistent volume claim, a service and a pod. </p>
<p>The config files make sense, but when I eventually spin up the pod, it restarts once due to an error and then keeps running. The log from the pod (<code>kubectl logs mongo-0</code>) is:</p>
<pre><code>2019-07-02T13:51:49.177+0000 I CONTROL [main] note: noprealloc may hurt performance in many applications
2019-07-02T13:51:49.180+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-0
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] db version v4.0.10
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] git version: c389e7f69f637f7a1ac3cc9fae843b635f20b766
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] modules: none
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] build environment:
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] distmod: ubuntu1604
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] distarch: x86_64
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-07-02T13:51:49.184+0000 I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0" }, storage: { mmapv1: { preallocDataFiles: false, smallFiles: true } } }
2019-07-02T13:51:49.186+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-07-02T13:51:49.186+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=483M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-07-02T13:51:51.913+0000 I STORAGE [initandlisten] WiredTiger message [1562075511:913047][1:0x7ffa7b8fca80], txn-recover: Main recovery loop: starting at 3/1920 to 4/256
2019-07-02T13:51:51.914+0000 I STORAGE [initandlisten] WiredTiger message [1562075511:914009][1:0x7ffa7b8fca80], txn-recover: Recovering log 3 through 4
2019-07-02T13:51:51.948+0000 I STORAGE [initandlisten] WiredTiger message [1562075511:948068][1:0x7ffa7b8fca80], txn-recover: Recovering log 4 through 4
2019-07-02T13:51:51.976+0000 I STORAGE [initandlisten] WiredTiger message [1562075511:976820][1:0x7ffa7b8fca80], txn-recover: Set global recovery timestamp: 0
2019-07-02T13:51:51.979+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-07-02T13:51:51.986+0000 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2019-07-02T13:51:51.986+0000 I CONTROL [initandlisten]
2019-07-02T13:51:51.986+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-07-02T13:51:51.986+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-07-02T13:51:51.986+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2019-07-02T13:51:51.986+0000 I CONTROL [initandlisten]
2019-07-02T13:51:52.003+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-07-02T13:51:52.005+0000 I NETWORK [initandlisten] waiting for connections on port 27017
</code></pre>
<p>If I connect to the MongoDB/pod, mongo is just running fine!
But, it is not using the persistent volume. Here is my pv.yaml:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/k8s/mongo"
</code></pre>
<p>Inside the mongo pod I see the mongo files in <code>/data/db</code>, but on my local machine (<code>/k8s/mongo</code>) the folder is empty.</p>
<p>Below I'll also list the persistent volume claim (PVC) and the pod/service YAML.</p>
<p>pvc.yaml:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>mongo.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  clusterIP: None
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
        - name: mongo-pv-storage
          persistentVolumeClaim:
            claimName: mongo-pv-claim
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-pv-storage
              mountPath: /data/db
</code></pre>
<p>I've also tried, instead of using <code>persistentVolumeClaim</code> to do</p>
<pre><code>volumes:
  - name: mongo-pv-storage
    hostPath:
      path: /k8s/mongo
</code></pre>
<p>This gives the same issue, except there is no error during creation.</p>
<p>Any suggestion what the problem might be or where to look next for more details?</p>
<p>Also, how are the PV and the PVC connected?</p>
| <p>Please try this </p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
  labels:
    app: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-volume
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
</code></pre>
<p>You can create a whole new PVC and use it here, or change the name. This works for me. I also faced the same issue when configuring MongoDB while passing commands; remove the commands and try it.</p>
<p>For more details check <a href="https://github.com/harsh4870/Single-pod-graylog-with-PVC/blob/master/templates/mongodb-deployment.yaml" rel="nofollow noreferrer">this</a> github</p>
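<p>As for the follow-up question on how the PV and the PVC are connected: the control plane binds a PersistentVolumeClaim to a PersistentVolume when their <code>storageClassName</code> values match, the PV offers the requested access modes, and the PV's capacity is at least the requested size. For the manifests in the question, the matching fields are (annotated sketch):</p>
<pre><code># pv.yaml                           # pvc.yaml
storageClassName: manual            storageClassName: manual          # must match
accessModes: [ReadWriteOnce]        accessModes: [ReadWriteOnce]      # PV must offer these
capacity.storage: 1Gi         &gt;=    resources.requests.storage: 1Gi   # PV capacity covers request
</code></pre>
<p>You can confirm the binding with <code>kubectl get pv,pvc</code>: a bound pair both show STATUS <code>Bound</code>, and the PV's CLAIM column names the claim.</p>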
|
<p>I am trying to create a server-snippet that returns a 503 for mobile users. I am doing that by checking the user agent.
The problem: the server-snippet is not returning 503 for a mobile user agent.
On the contrary, the 503 is returned when the user agent is NOT a mobile device, while mobile itself gets a 200.
I cannot understand what the ingress is doing. It seems as if the server-snippet code is somehow parsed "in a hard-coded way".
Maybe someone has had similar issues. Any hint on such server-snippets would be nice.
Thanks </p>
<p>I tried several server-snippets. Please check code below.</p>
<ol>
<li>Try: this is an official code snippet from GitHub.</li>
</ol>
<blockquote>
<p>nginx.ingress.kubernetes.io/server-snippet: set $agentflag 0;</p>
<p>if ($http_user_agent ~* "(Mobile|ios|android)" ){
set $agentflag 1; }</p>
<p>if ( $agentflag = 1 ) {
return 503; }</p>
</blockquote>
<p>As json:</p>
<pre><code>"nginx.ingress.kubernetes.io/server-snippet": "| set $agentflag 0; if ($http_user_agent ~* \"(Mobile|ios|android)\" ) { set $agentflag 1; } if ( $agentflag = 1 ) { return 503;}"
</code></pre>
<hr>
<ol start="2">
<li>Try</li>
</ol>
<blockquote>
<p>nginx.ingress.kubernetes.io/server-snippet:
if ($http_user_agent ~* "(Mobile|ios|android)" ){
return 503;
}</p>
</blockquote>
<p>As json:</p>
<pre><code> "nginx.ingress.kubernetes.io/server-snippet": " if ($http_user_agent ~* (Mobile|ios|android) ) {\n return 503; }"
</code></pre>
| <p>A short working solution is</p>
<pre><code>annotations:
  kubernetes.io/ingress.class: private-nginx
  kubernetes.io/tls-acme: "true"
  nginx.ingress.kubernetes.io/server-snippet: |
    if ($http_user_agent ~* "(Mobile)" ) {
      return 503;
    }
</code></pre>
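<p>You can sanity-check the user-agent pattern outside the cluster: <code>grep -Ei</code> is a reasonable approximation of nginx's case-insensitive regex operator <code>~*</code> (not identical, but close enough to catch pattern mistakes). A sketch:</p>

```shell
# Approximate nginx's `~*` case-insensitive regex match with grep -Ei
# to test the user-agent pattern locally.
pattern='(Mobile|ios|android)'

check_ua() {
  if printf '%s' "$1" | grep -Eiq "$pattern"; then
    echo "503"   # nginx would return 503 for this agent
  else
    echo "200"
  fi
}

check_ua "Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X) Mobile/16A366"   # prints 503
check_ua "Mozilla/5.0 (X11; Linux x86_64) Firefox/68.0"                           # prints 200
```

<p>This only checks the regex itself, not how the ingress controller renders the snippet into nginx.conf.</p>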
|
<p>I can't share a PVC with multiple pods on GCP (with the GCP CLI).</p>
<p>When I apply the config with <code>ReadWriteOnce</code>, it works at once</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: &lt;name&gt;
  namespace: &lt;namespace&gt;
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
</code></pre>
<p>But with <code>ReadWriteMany</code> the status hangs on pending</p>
<p>Any ideas?</p>
<p>It is expected that applying the config with <strong>ReadWriteOnce</strong> works right away: <strong>ReadWriteOnce</strong> is the most common use case for persistent disks and is the default access mode for most applications.</p>
<p>GCE persistent disks do not support <strong>ReadWriteMany</strong>!</p>
<p><a href="https://i.stack.imgur.com/8sjgv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8sjgv.png" alt="enter image description here"></a>
Instead of <strong>ReadWriteMany</strong>, you can use <strong>ReadOnlyMany</strong>.
You can find more information here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks" rel="nofollow noreferrer">persistentdisk</a>. But as you know, the result will not be what you want.</p>
<p>If you want to share volumes you could try some workarounds:</p>
<p>You can create services.</p>
<p>Your service should look after the data that is related to its area of concern and should allow access to this data to other services via an interface. Multi-service access to data is an anti-pattern akin to global variables in OOP.</p>
<p>If you were looking to write logs, you should have a log service which each service can call with the relevant data it needs to log. Writing directly to a shared disk means that you'd need to update every container if you change your log directory structure or add extra features.</p>
<p>Also, you could use <code>high-performance, fully managed file storage</code> for applications that require a file system interface and a shared file system.
You can find more information here: <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="nofollow noreferrer">access-fileshare</a>.</p>
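<p>Filestore shares are consumed from GKE as NFS volumes, and NFS volumes do support <code>ReadWriteMany</code>. A minimal sketch of a PersistentVolume backed by such a share (the server IP <code>10.0.0.2</code> and export path <code>/share1</code> are hypothetical placeholders for your own fileshare):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileserver-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany        # NFS volumes can be mounted read-write by many pods
  nfs:
    server: 10.0.0.2       # hypothetical Filestore instance IP
    path: /share1          # hypothetical export path
</code></pre>
<p>A PVC requesting <code>ReadWriteMany</code> that binds to this PV can then be mounted by several pods at once.</p>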
|
<p>When I do </p>
<pre><code>helm create my-app
</code></pre>
<p>I get default labels like below in generated templates (deployment, service, ingress yaml files):</p>
<pre><code>app.kubernetes.io/name: {{ include "my-app.name" . }}
helm.sh/chart: {{ include "my-app.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
</code></pre>
<p>Can I remove all of them and just use my own labels? Will it affect Helm features like rollback, etc.?</p>
| <p>Yeah they can all be removed - from <a href="https://github.com/helm/helm/blob/master/docs/chart_best_practices/labels.md#standard-labels" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Helm itself never requires that a particular label be present.</p>
</blockquote>
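<p>For example, a Deployment template with purely custom labels works, as long as the Deployment's <code>selector</code> still matches the pod template labels; that pairing, not the standard label names, is what Kubernetes itself relies on. A sketch with a hypothetical custom label:</p>
<pre><code>metadata:
  labels:
    team: payments                  # hypothetical custom label
    app: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}      # must match the pod template labels below
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
</code></pre>
<p>Note that a Deployment's selector is immutable, so settle on the label scheme before the first install rather than changing it on an upgrade.</p>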
|
<p>I have my <code>rook-ceph</code> cluster running on <code>AWS</code>, loaded up with data.
Is there any way to simulate a <strong>POWER FAILURE</strong> so that I can test the behaviour of my cluster?</p>
<p>It depends on the purpose of your crash test. I see two options:</p>
<ol>
<li><p>You want to test if you correctly deployed Kubernetes on AWS - then, I'd terminate the related AWS EC2 Instance (or set of Instances)</p></li>
<li><p>You want to test if your end application is resilient to Kubernetes Node failures - then I'd just check what PODs are running on the given Node and kill them all suddenly with:</p></li>
</ol>
<pre><code>kubectl delete pods <pod> --grace-period=0 --force
</code></pre>
|
<p>I was trying to display the metrics for 64 nodes on my k8s cluster. I found out that whenever I select more than 60 nodes in the variable dropdown,</p>
<p><a href="https://i.stack.imgur.com/VLQmc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VLQmc.png" alt="enter image description here"></a></p>
<p>Grafana throws a query error that looks like this:</p>
<p><a href="https://i.stack.imgur.com/PbmhX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PbmhX.png" alt="enter image description here"></a></p>
<p>The exception message is not particularly helpful, could somebody provide me more insights? Thanks!</p>
<p>I've had a similar problem after selecting too many variables. As long as the rest of your dashboard is able to pull the info successfully from Prometheus, you can disable the annotation query: go to the dashboard settings and remove the annotations. </p>
|
<p>I'm setting up a namespace in my kubernetes cluster to deny any outgoing network calls like <a href="http://company.com" rel="nofollow noreferrer">http://company.com</a> but to allow inter pod communication within my namespace like <a href="http://my-nginx" rel="nofollow noreferrer">http://my-nginx</a> where my-nginx is a kubernetes service pointing to my nginx pod.</p>
<p>How to achieve this using network policy. Below network policy helps in blocking all outgoing network calls </p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
    - Egress
  podSelector: {}
</code></pre>
<p>How to white list only the inter pod calls?</p>
| <p>This can be done using the following combination of network policies:</p>
<pre><code># The same as yours
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
    - Egress
  podSelector: {}
---
# allows connections to all pods in your namespace from all pods in your namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace-egress
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels: {}
---
# allows connections from all pods in your namespace to all pods in your namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace-internal
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels: {}
</code></pre>
<p>assuming your network policy implementation implements the full spec.</p>
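<p>One caveat, in case pods in the namespace reach each other by service name (like <code>http://my-nginx</code>): the deny-all egress policy also blocks DNS lookups, since the cluster DNS pods usually live in another namespace. A sketch of an extra egress rule for DNS, assuming the cluster DNS pods carry the common <code>k8s-app: kube-dns</code> label:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: sample
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}     # any namespace...
          podSelector:
            matchLabels:
              k8s-app: kube-dns     # ...but only the DNS pods
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
</code></pre>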
|
<p>Say we have a simple deployment.yml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ikg-api-demo
  name: ikg-api-demo
spec:
  selector:
    matchLabels:
      app: ikg-api-demo
  replicas: 3
  template:
    metadata:
      labels:
        app: ikg-api-demo
    spec:
      containers:
        - name: ikg-api-demo
          imagePullPolicy: Always
          image: example.com/main_api:private_key
          ports:
            - containerPort: 80
</code></pre>
<p>the problem is that this image/container depends on another image/container - it needs to cp data from the other image, or use some shared volume.</p>
<p>How can I tell kubernetes to download another image, run it as a container, and then copy data from it to the container declared in the above file?</p>
<p>It looks like <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">this article</a> explains how.</p>
<p>but it's not 100% clear how it works. It looks like you create some shared volume, launch the two containers, using that shared volume?</p>
<p>So, according to that link, I added this to my deployment.yml:</p>
<pre><code>spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/nltk_data:latest
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/main_api:private_key
      ports:
        - containerPort: 80
</code></pre>
<p>my primary hesitation is that mounting /nltk_data as a shared volume will overwrite what might be there already.</p>
<p>So I assume what I need to do is mount it at some other location, and then make the ENTRYPOINT for the source data container:</p>
<pre><code>ENTRYPOINT ['cp', '-r', '/nltk_data_source', '/nltk_data']
</code></pre>
<p>so that will write it to the shared volume, once the container is launched.</p>
<p><strong>So I have two questions:</strong></p>
<ol>
<li><p>How to run one container and finish a job, before another container starts using kubernetes?</p>
</li>
<li><p>How to write to a shared volume without having that shared volume overwrite what's in your image? In other words, if I have /xyz in the image/container, I don't want to have to copy <code>/xyz</code> to <code>/shared_volume_mount_location</code> if I don't have to.</p>
</li>
</ol>
| <h1>How to run one container and finish a job, before another container starts using kubernetes?</h1>
<p>Use initContainers - updated your deployment.yml, assuming <code>example.com/nltk_data:latest</code> is your data image</p>
<h1>How to write to a shared volume without having it overwrite existing data?</h1>
<p>As you know what is there in your image, you need to select an appropriate mount path. I would use <code>/mnt/nltk_data</code></p>
<h1>Updated deployment.yml with init containers</h1>
<pre><code>spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  initContainers:
    - name: init-ikg-api-demo
      imagePullPolicy: Always
      # You can use command, if you don't want to change the ENTRYPOINT
      command: ['sh', '-c', 'cp -r /nltk_data_source /mnt/nltk_data']
      volumeMounts:
        - name: shared-data
          mountPath: /mnt/nltk_data
      image: example.com/nltk_data:latest
  containers:
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/main_api:private_key
      ports:
        - containerPort: 80
</code></pre>
|
<p>I have the following ingress resource</p>
<pre><code>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-demo-ing
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: test.my-docker-kubernetes-demo.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: my-demo-service
              servicePort: 3000
</code></pre>
<p>My app was not accessible at <code>test.my-docker-kubernetes-demo.com</code>; I was getting a <code>too many redirects</code> error,</p>
<p>but when I changed the path from <code>path: /*</code> to <code>path: /</code>, it worked.</p>
<p>I am not able to find out how that fixed the problem; any help understanding or explaining this would be great.</p>
| <p>The meaning of <code>/</code> and <code>/*</code> depends on your <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">ingress implementation</a>, for example there are different ways of selecting a range of paths using the NGINX vs GCE ingress implementations:</p>
<ul>
<li>NGINX: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/</a></li>
</ul>
<blockquote>
<p>path: /foo/.*</p>
</blockquote>
<ul>
<li>GCE: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_6_optional_serving_multiple_applications_on_a_load_balancer" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_6_optional_serving_multiple_applications_on_a_load_balancer</a></li>
</ul>
<blockquote>
<p>path: /*</p>
</blockquote>
<p>You can choose the implementation to use by setting the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/" rel="nofollow noreferrer">kubernetes.io/ingress.class</a> annotation.</p>
<p>In your case, assuming you're using NGINX, <code>/*</code> isn't interpreted as a glob pattern so only allows connecting literally to <code>/*</code>. Anything else would be sent to the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/default-backend/" rel="nofollow noreferrer">default backend</a>.</p>
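<p>If you do want glob-like matching with the NGINX ingress, paths can be treated as regular expressions once regex mode is enabled via an annotation; a sketch (assuming your controller version supports <code>use-regex</code>):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-demo-ing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: test.my-docker-kubernetes-demo.com
      http:
        paths:
          - path: /.*               # regex: matches everything under /
            backend:
              serviceName: my-demo-service
              servicePort: 3000
</code></pre>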
|
<p>I am trying to monitor external service (which is exporter of cassandra metrics) in prometheus-operator. I installed prometheus-operator using helm 2.11.0. I installed it using this yaml:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
</code></pre>
<p>and these commands on my kubernetes cluster:</p>
<pre><code>kubectl create -f rbac-config.yml
helm init --service-account tiller --history-max 200
helm install stable/prometheus-operator --name prometheus-operator --namespace monitoring
</code></pre>
<p>Next, based on this article:
<a href="https://devops.college/prometheus-operator-how-to-monitor-an-external-service-3cb6ac8d5acb" rel="noreferrer">how to monitor an external service</a></p>
<p>I tried to follow the steps described in it. As suggested, I created an Endpoints, a Service and a ServiceMonitor with a label for the existing Prometheus. Here are my yaml files:</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
  name: cassandra-metrics80
  labels:
    app: cassandra-metrics80
subsets:
  - addresses:
      - ip: 10.150.1.80
    ports:
      - name: web
        port: 7070
        protocol: TCP
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cassandra-metrics80
  namespace: monitoring
  labels:
    app: cassandra-metrics80
    release: prometheus-operator
spec:
  externalName: 10.150.1.80
  ports:
    - name: web
      port: 7070
      protocol: TCP
      targetPort: 7070
  type: ExternalName
</code></pre>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: cassandra-metrics80
  labels:
    app: cassandra-metrics80
    release: prometheus-operator
spec:
  selector:
    matchLabels:
      app: cassandra-metrics80
      release: prometheus-operator
  namespaceSelector:
    matchNames:
      - monitoring
  endpoints:
    - port: web
      interval: 10s
      honorLabels: true
</code></pre>
<p>And on the Prometheus service discovery page I can see:
<a href="https://i.stack.imgur.com/8M40z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8M40z.png" alt="Service Discovery"></a></p>
<p>The service is shown as not active and all labels are dropped.
I did numerous things trying to fix this, like setting targetLabels and trying to relabel the ones that are discovered, as
described here: <a href="https://github.com/coreos/prometheus-operator/issues/1166#issuecomment-447404996" rel="noreferrer">prometheus relabeling</a>.
But unfortunately nothing works. What could be the issue, or how can I investigate it further?</p>
<p>Ok, I found out that the Service should be in the same namespace as the ServiceMonitor and the Endpoints; after that, Prometheus started to see some metrics from Cassandra.</p>
|
<p>Is there a good way to access the etcd datastore of a minikube cluster? I'm trying to create a watcher for events on kubernetes pods, but I need to look at the etcd changelog. So far, I have run <code>kubectl exec -it --namespace kube-system etcd-minikube sh</code>, which gives me a shell inside the etcd container, but from there I can't access <code>etcd</code>: <code>etcdctl</code> times out, and I can't even run <code>python</code>. Is there a clean way to do this? The links I found seem outdated.</p>
<p><a href="https://github.com/kubernetes/minikube/blob/master/docs/accessing_etcd.md" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/blob/master/docs/accessing_etcd.md</a> is outdated, as well as any other sources that reference localkube.</p>
<p>I was able to verify the minikube <a href="https://github.com/kubernetes/minikube/releases/tag/v1.2.0" rel="nofollow noreferrer">v1.2.0</a> etcd v3 connection channel through the <a href="https://github.com/etcd-io/etcd/blob/master/Documentation/dev-guide/api_grpc_gateway.md" rel="nofollow noreferrer">gRPC</a> messaging protocol, and it works fine.</p>
<p>I checked the connection in two ways: directly within the <code>etcd-minikube</code> Pod, and externally from my minikube host machine via an <a href="https://github.com/etcd-io/etcd/releases" rel="nofollow noreferrer">etcdctl</a> binary injected there.</p>
<p>The minikube <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/" rel="nofollow noreferrer">ETCD</a> certificates are located in <code>/var/lib/minikube/certs/etcd/</code>, and I used some of these certs to authenticate to the etcd storage.</p>
<p>You can use the query string below for a connection from within the <code>etcd-minikube</code> Pod, but first you have to distribute the etcd certs into this Pod; this can be done via the <code>kubectl cp</code> command:</p>
<p><code>sudo kubectl cp /var/lib/minikube/certs/etcd/ etcd-minikube:/SOME_PATH -n kube-system</code></p>
<p><code>ETCDCTL_API=3 etcdctl --cacert=/PATH/TO/ETCD/CERTS/ca.crt --key=/PATH/TO/ETCD/CERTS/server.key --cert=/PATH/TO/ETCD/CERTS/server.crt get / --prefix --keys-only</code></p>
<p>The query string from my minikube host machine, accessing the etcd endpoint:</p>
<p><code>ETCDCTL_API=3 etcdctl --cacert=/PATH/TO/ETCD/CERTS/ca.crt --key=/PATH/TO/ETCD/CERTS/server.key --cert=/PATH/TO/ETCD/CERTS/server.crt --endpoints "https://$(minikube ip):2379" get / --prefix --keys-only</code></p>
|
<p>I'd like to get some clarification for preparation for maintenance when you drain nodes in a Kubernetes cluster:</p>
<p>Here's what I know when you run <code>kubectl drain MY_NODE</code>:</p>
<ul>
<li>Node is cordoned</li>
<li>Pods are gracefully shut down</li>
<li>You can opt to ignore Daemonset pods because if they are shut down, they'll just be re-spawned right away again. </li>
</ul>
<p>I'm confused as to what happens when a node is drained though.</p>
<p><strong>Questions:</strong></p>
<ul>
<li>What happens to the pods? As far as I know, there's no 'live migration' of pods in Kubernetes. </li>
<li>Will the pod be shut down and then automatically started on another node? Or does this depend on my configuration? (i.e. could a pod be shut down via drain and not start up on another node)</li>
</ul>
<p>I would appreciate some clarification on this and any best practices or advice as well. Thanks in advance.</p>
| <p>By default <code>kubectl drain</code> is non-destructive, you have to override to change that behaviour. It runs with the following defaults:</p>
<blockquote>
<pre><code> --delete-local-data=false
--force=false
--grace-period=-1
--ignore-daemonsets=false
--timeout=0s
</code></pre>
</blockquote>
<p>Each of these safeguard deals with a different category of potential destruction (local data, bare pods, graceful termination, daemonsets). It also respects pod disruption budgets to adhere to workload availability. Any non-bare pod will be recreated on a new node by its respective controller (e.g. <code>daemonset controller</code>, <code>replication controller</code>).</p>
<p>It's up to you whether you want to override that behaviour (for example you might have a bare pod if running jenkins job. If you override by setting <code>--force=true</code> it will delete that pod and it won't be recreated). If you don't override it, the node will be in drain mode indefinitely (<code>--timeout=0s</code>)).</p>
|
<p>I am using kube-aws (v0.12.3) to create a Kubernetes cluster on AWS. I frequently get a "too many open files in the system" error on the worker nodes when I try to ssh into them, and the nodes become unresponsive and get restarted.</p>
<p>Because of this, the pods running on those nodes get rescheduled frequently onto different nodes and the application goes down for some time.</p>
<p>How can I resolve this issue?</p>
<pre><code>✗ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Worker node:</p>
<pre><code>core@ip-10-0-214-11 ~ $ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 251640
max locked memory       (kbytes, -l) 16384
max memory size         (kbytes, -m) unlimited
<strong>open files                      (-n) 1024</strong>
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 251640
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
</code></pre>
<p>As you can see, the maximum number of open files is set to a quite small value (<code>1024</code>). Perhaps this is inherited from the AWS template used for the worker node instance. </p>
<p>You should increase this value, but this should be done with a clear understanding of the level at which it should be set: </p>
<ul>
<li>globally or for a specific security principal; </li>
<li>what exact principal this limit has to be applied to: user/system/daemon account or a group;</li>
<li>login service (su, ssh, telnet, etc)</li>
</ul>
<p>Also, you should be careful not to exceed the kernel limit. </p>
<p>For a simple case, just add two lines like the ones below to the end of the <code>/etc/security/limits.conf</code> file: </p>
<pre><code>mike soft nofile 4096
mike hard nofile 65536
</code></pre>
<p>and then re-login or restart the service for the account you made the changes for.</p>
<p>You can find further explanations online;
one of many is available here: <a href="https://www.suse.com/documentation/sles-12/book_hardening/data/sec_sec_prot_dos.html" rel="nofollow noreferrer">Security and Hardening Guide</a></p>
<p>To keep those settings applied to your AWS instance during launch, you can compose a simple script like this: </p>
<pre><code>#!/bin/bash
cd /etc/security
cp limits.conf limits.conf.$(date "+%Y%m%d")
cat <<EndOfMyStrings >> limits.conf
mike soft nofile 4096
mike hard nofile 65536
EndOfMyStrings
</code></pre>
<p>and then add it into the "User data" field of the Launch Instance Wizard as described here: <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html" rel="nofollow noreferrer">Running Commands on Your Linux Instance at Launch</a> </p>
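<p>After applying such limit changes (and re-logging in), you can verify what the current session actually got; this quick check runs in any shell:</p>

```shell
# Read the soft and hard limits for open file descriptors
# in the current shell session.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft nofile limit: $soft"
echo "hard nofile limit: $hard"
```

<p>If the soft limit still shows the old value, the change was made for a different principal or the session was not restarted.</p>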
|
<p>I am trying to delete/remove all the pods running in my environment. When I issue </p>
<blockquote>
<p>docker ps</p>
</blockquote>
<p>I get the below output. This is a sample screenshot. As you can see that they are all K8s. I would like to delete all of the pods/remove them. </p>
<p><a href="https://i.stack.imgur.com/apVaT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/apVaT.png" alt="enter image description here"></a></p>
<p>I tried all the below approaches but they keep appearing again and again</p>
<pre><code>sudo kubectl delete --all pods --namespace=default/kube-public #returns "no resources found" for both default and kube-public namespaces
sudo kubectl delete --all pods --namespace=kube-system # shows "pod xxx deleted"
</code></pre>
<p><a href="https://i.stack.imgur.com/x6rht.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x6rht.png" alt="enter image description here"></a></p>
<pre><code>sudo kubectl get deployments # returns no resources found
</code></pre>
<p>Apart from the above, I also tried using <code>docker stop</code> and <code>docker rm</code> with the container IDs, but the containers just respawn. I want to clear all of them and start from the beginning.</p>
<p>I have logged out and logged back in multiple times but still I see those items.</p>
<p>How can I delete all the pods so that the output of <code>docker ps</code> no longer shows any Kubernetes-related containers like the ones above?</p>
| <blockquote>
<p><code>sudo kubectl get deployments # returns no resources found</code></p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer"><code>Deployments</code></a> are not the only Controllers in Kubernetes which may manage your Pods. There are many: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer"><code>StatefulSet</code></a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="noreferrer"><code>ReplicaSet</code></a>, etc. (check the <a href="https://kubernetes.io/docs/concepts/workloads/controllers" rel="noreferrer">Controllers doc</a> for details)</p>
<p>In short, a Controller is responsible for ensuring that all Pods it manages are running, and for creating them if necessary: when you delete all Pods, the associated Controller notices they are missing and simply re-creates them so that the actual state matches its specification.</p>
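<p>Conceptually, every Controller runs a reconcile loop along these lines (a toy sketch for illustration only, not the actual Kubernetes code; the names are made up):</p>

```go
package main

import "fmt"

// reconcile compares the observed Pods with the desired replica count and
// re-creates any missing ones, which is why deleting Pods directly never
// makes them go away for long.
func reconcile(desired int, running []string) []string {
	for i := len(running); i < desired; i++ {
		running = append(running, fmt.Sprintf("pod-%d", i))
	}
	return running
}

func main() {
	pods := []string{"pod-0", "pod-1", "pod-2"}
	pods = pods[:1] // someone runs "kubectl delete pod" on two of them
	pods = reconcile(3, pods)
	fmt.Println(len(pods)) // prints: 3 -- the controller restored them
}
```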
<p>If you want to effectively delete all Pods, you should delete all the related Controllers (or update them to set their replica to 0), such as:</p>
<pre><code># do NOT run this in the kube-system namespace, it may corrupt your cluster
# You can also specify --namespace xxx to delete in a specific namespace
kubectl delete deployment --all # deleting a Deployment also deletes its ReplicaSets and their associated Pods
kubectl delete statefulset --all
kubectl delete daemonset --all
kubectl delete job --all
kubectl delete cronjob --all
kubectl delete replicationcontroller --all # there should not be any ReplicationControllers, as Deployments should be used instead
# Then delete what you find
</code></pre>
<p>EDIT: as mentioned in <a href="https://stackoverflow.com/a/56868072/5465973">P Emkambaram's</a> answer, you can also delete the entire namespace with <code>kubectl delete namespace mynamespace</code> (don't delete <code>kube-system</code>, of course), but note that this also deletes other components in the namespace, such as Services</p>
<hr>
<p>Important notes:</p>
<ul>
<li>Take care when deleting Pods or objects in the <code>kube-system</code> namespace, as they are part of the internal plumbing of the cluster itself.</li>
<li>Do not delete Kubernetes components directly by removing their underlying containers with <code>docker</code> commands; this may have unexpected effects. Use <code>kubectl</code> instead.</li>
</ul>
|
<p>I have to install Kubernetes v1.13.7 on Ubuntu 18.04.2 LTS in an internal (air-gapped) network environment.</p>
<p>I can use Docker and a USB device, but I can't download files from the internet directly.</p>
<p>So far, I have loaded the API server / controller-manager / scheduler / etcd / CoreDNS / kube-proxy / flannel images through the <code>docker load</code> command.</p>
<p>But now I need to install kubeadm / kubelet / kubectl, and I don't have them yet.</p>
<p>How can I install Kubernetes?</p>
<p>Please share your experience or relevant websites.</p>
<p>Here are step-by-step <a href="https://gist.github.com/onuryilmaz/89a29261652299d7cf768223fd61da02#download-kubernetes-rpms" rel="nofollow noreferrer">instructions</a>.</p>
<p>As for the Kubernetes packages, you can download them on an online workstation. (The example URLs below are old v1.6.0 RPMs; substitute the versions you actually need. Also note that on Ubuntu you would download the corresponding <code>.deb</code> packages and install them with <code>dpkg -i</code> instead of <code>yum</code>.)</p>
<pre><code>wget https://packages.cloud.google.com/yum/pool/e6aef7b2b7d9e5bd4db1e5747ebbc9f1f97bbfb8c7817ad68028565ca263a672-kubectl-1.6.0.x86_64.rpm
wget https://packages.cloud.google.com/yum/pool/af8567f1ba6f8dc1d43b60702d45c02aca88607b0e721d76897e70f6a6e53115-kubelet-1.6.0.x86_64.rpm
wget https://packages.cloud.google.com/yum/pool/e7a4403227dd24036f3b0615663a371c4e07a95be5fee53505e647fd8ae58aa6-kubernetes-cni-0.5.1.x86_64.rpm
wget https://packages.cloud.google.com/yum/pool/5116fa4b73c700823cfc76b3cedff6622f2fbd0a3d2fa09bce6d93329771e291-kubeadm-1.6.0.x86_64.rpm
</code></pre>
<p>and then just copy it over to your offline server via internal network </p>
<pre><code>scp <folder_with_rpms>/*.rpm <user>@<server>:<path>/<to>/<remote>/<folder>
</code></pre>
<p>Lastly, install packages</p>
<pre><code>yum install -y *.rpm
systemctl enable kubelet && systemctl start kubelet
</code></pre>
|
<p>On k8s you can create services either through the <code>kubectl expose ..</code> command or <code>kubectl create service ...</code>, right?</p>
<p>With both of them I am hitting an issue that I don't understand.</p>
<p>k8s allows me to do <code>kubectl expose deploy demo --type ExternalName</code>, but it doesn't allow me to pass an <code>--external-name</code> flag to specify the CNAME.</p>
<pre><code>$ kubectl expose deploy demo --type ExternalName --port 80 --external-name google.com
</code></pre>
<blockquote>
<p>...</p>
<p>unknown flag: --external-name</p>
</blockquote>
<p>If I run it without the <code>--external-name</code> flag:</p>
<pre><code>$ kubectl expose deploy demo --type ExternalName --port 80
</code></pre>
<blockquote>
<p>The Service "demo" is invalid: spec.externalName: Required value</p>
</blockquote>
<hr>
<p>k8s also allows me to do <code>kubectl create service externalname demo --external-name example.com --tcp <port>:<target-port></code>, but when I check, the port and target port didn't go through.</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo ExternalName <none> google.com <none> 20m
</code></pre>
<p>I tried <code>--tcp=80:80</code>, <code>--tcp 80:80</code>, <code>--tcp=[80:80]</code>, etc. Nothing works!</p>
<p>Anyone can point out what's the issue? I think it might be a bug.</p>
<p>I'm on GKE, with a 1.13.7 k8s version.</p>
| <p>A service of type ExternalName is exactly that: a local DNS CNAME record to an external domain name.</p>
<p>Thus exposing a Deployment as ExternalName does not make sense. And since it is only a name, it does not have any ports either.</p>
<p>This is all it takes:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: stackoverflow
namespace: default
spec:
externalName: stackoverflow.com
type: ExternalName
</code></pre>
|
<p>I want to create multiple namespaces from the Kubernetes CLI (<code>kubectl</code>) without any YAML manifests:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create namespace app1,app2,app3
</code></pre>
<p>Can this be done?</p>
| <p><code>kubectl</code> expects exactly one namespace:</p>
<pre><code>➜ / kubectl create ns
error: exactly one NAME is required, got 0
</code></pre>
<p>Depending on your shell, you can wrap it in a loop. Here's an example for zsh (in bash, the equivalent is <code>for ns in ns1 ns2 ns3; do kubectl create ns $ns; done</code>):</p>
<pre><code>➜ / foreach ns (ns1 ns2 ns3); kubectl create ns $ns; end
</code></pre>
|
<p>I have several namespaces (iso, dev, sandbox, etc.) which correspond to my different environments. For each environment, and thus namespace, there is an associated db.
When I deploy my pod with Helm, I would like to inject the namespace into a value to get the appropriate password. </p>
<p>In my values file I have something that looks like this :</p>
<pre><code>db:
iso: passwordISO
dev: passwordDEV
sandbox: passwordSANDBOX
spec: passwordSPEC
val: passwordVAL
</code></pre>
<p>and in my consumer_config file I have this :</p>
<pre><code> db_host: DB_HOST-{{ .Release.Namespace }}
db_port: DB_PORT
db_name: DB_NAME
db_user: DB_PORT
db_password: {{ .Values.db.iso }}
</code></pre>
<p>I already tried to use the <code>{{- include }}</code> pattern but with no success. I also tried <code>{{ .Values.db.{{ .Release.Namespace }}}}</code> giving me the following error <code>unexpected <.> in operand</code></p>
<p>Any insight on how to do this or any workaround ?</p>
<p>As Helm is based on the Go template language, the functions available in that language can help:</p>
<pre><code>{{ index .Values.db .Release.Namespace }}
</code></pre>
<p>From the Go template docs:</p>
<blockquote>
<p>index
Returns the result of indexing its first argument by the
following arguments. Thus "index x 1 2 3" is, in Go syntax,
x[1][2][3]. Each indexed item must be a map, slice, or array.</p>
</blockquote>
<p>So in this case you index <code>.Values.db</code> with the key <code>.Release.Namespace</code>.</p>
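<p>You can see the same mechanism at work in plain Go, since Helm templates are evaluated by Go's <code>text/template</code> package. The values below are hypothetical, mirroring the question's <code>db</code> map:</p>

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render mimics Helm evaluating {{ index .Values.db .Release.Namespace }}
// against hypothetical chart values.
func render(namespace string) string {
	data := map[string]interface{}{
		"Values": map[string]interface{}{
			"db": map[string]string{
				"iso": "passwordISO",
				"dev": "passwordDEV",
			},
		},
		"Release": map[string]string{"Namespace": namespace},
	}
	t := template.Must(template.New("cfg").Parse(
		"db_password: {{ index .Values.db .Release.Namespace }}"))
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(render("dev")) // prints: db_password: passwordDEV
}
```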
|
<p>I have a pretty standard installation of Kubernetes running as a single-node cluster on Ubuntu. I am trying to configure CoreDNS to resolve all internal services within my Kubernetes cluster and SOME external domain names. So far, I have just been experimenting. I started by creating a busybox pod as seen here: <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a></p>
<p>Everything works as described in the guide until I make changes to the corefile. I am seeing a couple of issues:</p>
<ol>
<li>I edited the default corefile using <code>kubectl -n kube-system edit configmap coredns</code> and replaced <code>.:53</code> with <code>cluster.local:53</code>. After waiting, things look promising. <code>google.com</code> resolution began failing, while <code>kubernetes.default.svc.cluster.local</code> continued to succeed. However, <code>kubernetes.default</code> resolution began failing too. Why is that? There is still a <strong>search</strong> entry for <code>svc.cluster.local</code> in the busybox pod’s <code>/etc/resolv.conf</code>. All that changed was the corefile.</li>
<li><p>I tried to add an additional stanza/block to the corefile (again, by editing the config map). I added a simple block :</p>
<pre><code>.:53{
log
}
</code></pre>
<p>It seems that the corefile fails to compile or something. The pods seem healthy and don’t report any errors to the logs, but the requests all hang and fail. </p></li>
</ol>
<p>I have tried to add the log plugin, but this isn’t working since the plugin is only applied to domains matching the plugin, and either the domain name doesn’t match or the corefile is broken.</p>
<p>For transparency, this is my new corefile :</p>
<pre><code>cluster.local:53 {
errors
log
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
</code></pre>
<p>It looks like your Corefile somehow got corrupted while you were editing it through the <code>kubectl edit ...</code> command. It is probably the fault of your default text editor, but either way the resulting Corefile is invalid. (One likely culprit: the block you added, <code>.:53{</code>, is missing the space before the opening brace that the Corefile syntax requires.)</p>
<p>I would recommend backing up your current ConfigMap and then replacing it with a known-good one. Save the manifest below as <code>coredns_cm.yaml</code>, then run:</p>
<pre><code>kubectl get -n kube-system cm coredns --export -o yaml > coredns_cm_backup.yaml
kubectl replace -n kube-system -f coredns_cm.yaml
</code></pre>
<pre><code>#coredns_cm.yaml
apiVersion: v1
data:
  Corefile: |
    cluster.local:53 {
        log
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: coredns
</code></pre>
|
<p>I have a websocket server that hoards memory during days, till the point that Kubernetes eventually kills it. We monitor it using <a href="https://github.com/prometheus-net/prometheus-net" rel="noreferrer">prometheous-net</a>.</p>
<pre><code># dotnet --info
Host (useful for support):
Version: 2.1.6
Commit: 3f4f8eebd8
.NET Core SDKs installed:
No SDKs were found.
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.6 [/usr/share/dotnet/shared/Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.6 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.6 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
</code></pre>
<p>But when I connect remotely and take a memory dump (using <code>createdump</code>), suddenly the memory drops, without the service stopping, restarting or losing any connected user. See the green line in the picture.</p>
<p>I can see in the graphs, that GC is collecting regularly in all generations.</p>
<p>GC Server is disabled using:</p>
<pre><code><PropertyGroup>
<ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
</code></pre>
<p>Before disabling GC Server, the service used to grow memory way faster. Now it takes two weeks to get into 512Mb.</p>
<p>Other services using ASP.NET Core on request/response fashion do not show this problem. This uses Websockets, where each connection last usually around 10 minutes... so I guess everything related with the connection survives till Gen 2 easily.</p>
<p><a href="https://i.stack.imgur.com/G7wU8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/G7wU8.png" alt="enter image description here"></a>
Note that there are two pods showing the same behaviour, and then one (the green) drops suddenly in memory usage due to the taking of the memory dump.</p>
<p><a href="https://i.stack.imgur.com/7quXs.png" rel="noreferrer"><img src="https://i.stack.imgur.com/7quXs.png" alt="enter image description here"></a></p>
<p>The pods did not restart during the taking of the memory dump:
<a href="https://i.stack.imgur.com/UjyY3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UjyY3.png" alt="enter image description here"></a></p>
<p>No connection was lost or restarted.</p>
<p>Heap:</p>
<pre><code>(lldb) eeheap -gc
Number of GC Heaps: 1
generation 0 starts at 0x00007F8481C8D0B0
generation 1 starts at 0x00007F8481C7E820
generation 2 starts at 0x00007F852A1D7000
ephemeral segment allocation context: none
segment begin allocated size
00007F852A1D6000 00007F852A1D7000 00007F853A1D5E90 0xfffee90(268430992)
00007F84807D0000 00007F84807D1000 00007F8482278000 0x1aa7000(27947008)
Large object heap starts at 0x00007F853A1D7000
segment begin allocated size
00007F853A1D6000 00007F853A1D7000 00007F853A7C60F8 0x5ef0f8(6222072)
Total Size: Size: 0x12094f88 (302600072) bytes.
------------------------------
GC Heap Size: Size: 0x12094f88 (302600072) bytes.
(lldb)
</code></pre>
<p>Free objects:</p>
<pre><code>(lldb) dumpheap -type Free -stat
Statistics:
MT Count TotalSize Class Name
00000000010c52b0 219774 10740482 Free
Total 219774 objects
</code></pre>
<p>Is there any explanation to this behaviour?</p>
<p>The problem was the connection to RabbitMQ. Because we were using short-lived channels, the "auto-reconnect" feature of RabbitMQ.Client was keeping a lot of state about dead channels. We switched this configuration off, since we do not need the "perks" of the "auto-reconnect" feature, and everything started working normally. It was a pain, but we basically had to set up a Windows deployment and go through the usual memory-analysis process with Windows tools (JetBrains dotMemory in this case). Using lldb is not productive at all.</p>
|
<p>I have one spring boot microservice running on docker container, below is the Dockerfile</p>
<pre><code>FROM java:8-jre
MAINTAINER <>
WORKDIR deploy/
#COPY config/* /deploy/config/
COPY ./ms.console.jar /deploy/
CMD chmod +R 777 ./ms.console.jar
CMD ["java","-jar","/deploy/ms.console.jar","console"]
EXPOSE 8384
</code></pre>
<p>Here my configuration is stored in an external folder, i.e. <code>/config/console-server.yml</code>, and when I start the application it loads the config internally (Spring Boot functionality).</p>
<p>Now I want to separate this configuration using configmap, for that I simply created one configmap and storing all the configuration details.</p>
<blockquote>
<p>kubectl create configmap console-configmap
--from-file=./config/console-server.yml</p>
<p>kubectl describe configmap console-configmap</p>
</blockquote>
<p>below are the description details:</p>
<pre><code>Name: console-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
console-server.yml:
----
server:
http:
port: 8385
compression:
enabled: true
mime-types: application/json,application/xml,text/html,text/xml,text/plain,text/css,application/javascript
min-response-size: 2048
---
spring:
thymeleaf:
prefix: classpath:/static
application:
name: console-service
profiles:
active: native
servlet:
multipart:
max-file-size: 30MB
max-request-size: 30MB
---
host:
gateway: http://apigateway:4000
webhook: http://localhost:9000
</code></pre>
<p>my deployment yml is:</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: consoleservice1
spec:
selector:
matchLabels:
app: consoleservice
replicas: 1 # tells deployment to run 3 pods matching the template
template: # create pods using pod definition in this template
metadata:
labels:
app: consoleservice
spec:
containers:
- name: consoleservice
image: ms-console
ports:
- containerPort: 8384
imagePullPolicy: Always
envFrom:
- configMapRef:
name: console-configmap
imagePullSecrets:
- name: regcresd
</code></pre>
<p>My doubt is: I commented out the config folder in the Dockerfile, so when the pod runs it throws an exception because there is no configuration. How do I inject this console-configmap into my deployment? I shared what I already tried above, but I keep getting the same issues.</p>
| <p>First of all, how are you consuming the .yml file in your application? If you consume your yml file contents as environment variables, your config should just work fine. But I suspect that you want to consume the contents from the config file inside the container. If that is the case you have to create a volume out of the configmap as follows:</p>
<pre><code>
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: consoleservice1
spec:
selector:
matchLabels:
app: consoleservice
replicas: 1 # tells deployment to run 3 pods matching the template
template: # create pods using pod definition in this template
metadata:
labels:
app: consoleservice
spec:
containers:
- name: consoleservice
image: ms-console
ports:
- containerPort: 8384
imagePullPolicy: Always
volumeMounts:
- mountPath: /app/config
name: config
volumes:
- name: config
configMap:
name: console-configmap
imagePullSecrets:
- name: regcresd
</code></pre>
<p>The file will be available at the path <code>/app/config/console-server.yml</code>. Modify the mount path as per your needs, and make sure your Spring Boot application actually reads from that location (for example via <code>--spring.config.location</code>) if it is not already on the config search path.</p>
|
<p>I have a few cron jobs on GKE.</p>
<p>One of the pods did terminate and now I am trying to access the logs.</p>
<pre><code>➣ $ kubectl get events
LAST SEEN TYPE REASON KIND MESSAGE
23m Normal SuccessfulCreate Job Created pod: virulent-angelfish-cronjob-netsuite-proservices-15622200008gc42
22m Normal SuccessfulDelete Job Deleted pod: virulent-angelfish-cronjob-netsuite-proservices-15622200008gc42
22m Warning DeadlineExceeded Job Job was active longer than specified deadline
23m Normal Scheduled Pod Successfully assigned default/virulent-angelfish-cronjob-netsuite-proservices-15622200008gc42 to staging-cluster-default-pool-4b4827bf-rpnl
23m Normal Pulling Pod pulling image "gcr.io/my-repo/myimage:v8"
23m Normal Pulled Pod Successfully pulled image "gcr.io/my-repo/my-image:v8"
23m Normal Created Pod Created container
23m Normal Started Pod Started container
22m Normal Killing Pod Killing container with id docker://virulent-angelfish-cronjob:Need to kill Pod
23m Normal SuccessfulCreate CronJob Created job virulent-angelfish-cronjob-netsuite-proservices-1562220000
22m Normal SawCompletedJob CronJob Saw completed job: virulent-angelfish-cronjob-netsuite-proservices-1562220000
</code></pre>
<p>So at least one CJ run.</p>
<p>I would like to see the pod's logs, but there is nothing there</p>
<pre><code>➣ $ kubectl get pods
No resources found.
</code></pre>
<p>Given that in my cj definition, I have:</p>
<pre><code>failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 3
</code></pre>
<p>shouldn't <strong>at least</strong> one pod be there for me to do forensics?</p>
| <p><strong>Your pod is crashing or otherwise unhealthy</strong></p>
<p>First, take a look at the logs of the current container:</p>
<pre><code>kubectl logs ${POD_NAME} ${CONTAINER_NAME}
</code></pre>
<p>If your container has previously crashed, you can access the previous container’s crash log with:</p>
<pre><code>kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
</code></pre>
<p>Alternatively, you can run commands inside that container with exec:</p>
<pre><code>kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1} ${ARG2} ... ${ARGN}
</code></pre>
<p>Note: <code>-c ${CONTAINER_NAME}</code> is optional. You can omit it for pods that only contain a single container.</p>
<p>As an example, to look at the logs from a running Cassandra pod, you might run:</p>
<pre><code>kubectl exec cassandra -- cat /var/log/cassandra/system.log
</code></pre>
<p>If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.</p>
<p>Finally, check Logging on Google Stackdriver.</p>
<p><strong>Debugging Pods</strong></p>
<p>The first step in debugging a pod is taking a look at it. Check the current state of the pod and recent events with the following command:</p>
<pre><code>kubectl describe pods ${POD_NAME}
</code></pre>
<p>Look at the state of the containers in the pod. Are they all Running? Have there been recent restarts?</p>
<p>Continue debugging depending on the state of the pods.</p>
<p><strong>Debugging ReplicationControllers</strong></p>
<p>ReplicationControllers are fairly straightforward. They can either create pods or they can’t. If they can’t create pods, then please refer to the instructions above to debug your pods.</p>
<p>You can also use <code>kubectl describe rc ${CONTROLLER_NAME}</code> to inspect events related to the replication controller.</p>
<p>Hope it helps you find the exact problem.</p>
|
<p>I am unable to run any <code>kubectl</code> commands and I believe it is a result of an expired apiserver-etcd-client certificate.</p>
<pre><code>$ openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -text |grep ' Not '
Not Before: Jun 25 17:28:17 2018 GMT
Not After : Jun 25 17:28:18 2019 GMT
</code></pre>
<p>The log from the failed apiserver container shows:</p>
<pre><code>Unable to create storage backend: config (&{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true false 1000 0xc420363900 <nil> 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: getsockopt: connection refused)
</code></pre>
<p>I am using kubeadm 1.10, and would like to upgrade to 1.14. I was able to renew several expired certificates described by <a href="https://github.com/kubernetes/kubeadm/issues/581" rel="nofollow noreferrer">issue 581</a> on GitHub. Following the instructions updated the following keys & certs in <code>/etc/kubernetes/pki</code>:</p>
<pre><code>apiserver
apiserver-kubelet-client
front-proxy-client
</code></pre>
<p>Next, I tried:</p>
<pre><code>kubeadm --config kubeadm.yaml alpha phase certs apiserver-etcd-client
</code></pre>
<p>Where the <code>kubeadm.yaml</code> file is:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
advertiseAddress: 172.XX.XX.XXX
kubernetesVersion: v1.10.5
</code></pre>
<p>But it returns:</p>
<pre><code>failure loading apiserver-etcd-client certificate: the certificate has expired
</code></pre>
<p>Further, in the directory <code>/etc/kubernetes/pki/etcd</code> with the exception of the <code>ca</code> cert and key, all of the remaining certificates and keys are expired.</p>
<p>Is there a way to renew the expired certs without resorting to rebuilding the cluster?</p>
<p>Logs from the etcd container:</p>
<pre><code>$ sudo docker logs e4da061fc18f
2019-07-02 20:46:45.705743 I | etcdmain: etcd Version: 3.1.12
2019-07-02 20:46:45.705798 I | etcdmain: Git SHA: 918698add
2019-07-02 20:46:45.705803 I | etcdmain: Go Version: go1.8.7
2019-07-02 20:46:45.705809 I | etcdmain: Go OS/Arch: linux/amd64
2019-07-02 20:46:45.705816 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-07-02 20:46:45.705848 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-07-02 20:46:45.705871 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:45.705878 W | embed: The scheme of peer url http://localhost:2380 is HTTP while peer key/cert files are presented. Ignored peer key/cert files.
2019-07-02 20:46:45.705882 W | embed: The scheme of peer url http://localhost:2380 is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for this url.
2019-07-02 20:46:45.712218 I | embed: listening for peers on http://localhost:2380
2019-07-02 20:46:45.712267 I | embed: listening for client requests on 127.0.0.1:2379
2019-07-02 20:46:45.716737 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.718103 I | etcdserver: recovered store from snapshot at index 13621371
2019-07-02 20:46:45.718116 I | etcdserver: name = default
2019-07-02 20:46:45.718121 I | etcdserver: data dir = /var/lib/etcd
2019-07-02 20:46:45.718126 I | etcdserver: member dir = /var/lib/etcd/member
2019-07-02 20:46:45.718130 I | etcdserver: heartbeat = 100ms
2019-07-02 20:46:45.718133 I | etcdserver: election = 1000ms
2019-07-02 20:46:45.718136 I | etcdserver: snapshot count = 10000
2019-07-02 20:46:45.718144 I | etcdserver: advertise client URLs = https://127.0.0.1:2379
2019-07-02 20:46:45.842281 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 13629377
2019-07-02 20:46:45.842917 I | raft: 8e9e05c52164694d became follower at term 1601
2019-07-02 20:46:45.842940 I | raft: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 1601, commit: 13629377, applied: 13621371, lastindex: 13629377, lastterm: 1601]
2019-07-02 20:46:45.843071 I | etcdserver/api: enabled capabilities for version 3.1
2019-07-02 20:46:45.843086 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2019-07-02 20:46:45.843093 I | etcdserver/membership: set the cluster version to 3.1 from store
2019-07-02 20:46:45.846312 I | mvcc: restore compact to 13274147
2019-07-02 20:46:45.854822 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855232 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855267 I | etcdserver: starting server... [version: 3.1.12, cluster version: 3.1]
2019-07-02 20:46:45.855293 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:46.443331 I | raft: 8e9e05c52164694d is starting a new election at term 1601
2019-07-02 20:46:46.443388 I | raft: 8e9e05c52164694d became candidate at term 1602
2019-07-02 20:46:46.443405 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443419 I | raft: 8e9e05c52164694d became leader at term 1602
2019-07-02 20:46:46.443428 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443699 I | etcdserver: published {Name:default ClientURLs:[https://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2019-07-02 20:46:46.443768 I | embed: ready to serve client requests
2019-07-02 20:46:46.444012 I | embed: serving client requests on 127.0.0.1:2379
2019-07-02 20:48:05.528061 N | pkg/osutil: received terminated signal, shutting down...
2019-07-02 20:48:05.528103 I | etcdserver: skipped leadership transfer for single member cluster
</code></pre>
<p>systemd start-up script:</p>
<pre><code>sudo systemctl status -l kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Mon 2019-07-01 14:54:24 UTC; 1 day 23h ago
Docs: http://kubernetes.io/docs/
Main PID: 9422 (kubelet)
Tasks: 13
Memory: 47.0M
CGroup: /system.slice/kubelet.service
└─9422 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authentication-token-webhook=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cgroup-driver=cgroupfs --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.871276 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.872444 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.880422 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.871913 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.872948 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.880792 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.964989 9422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.966644 9422 kubelet_node_status.go:82] Attempting to register node ahub-k8s-m1.aws-intanalytic.com
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.967012 9422 kubelet_node_status.go:106] Unable to register node "ahub-k8s-m1.aws-intanalytic.com" with API server: Post https://172.31.22.241:6443/api/v1/nodes: dial tcp 172.31.22.241:6443: getsockopt: connection refused
</code></pre>
| <p>Background: kubectl uses a file called <code>config</code> (location on CentOS: /home/$USER/.kube/config) to identify itself (that is, you, the user) to the API server. This file is a copy of <code>admin.conf</code> (location on CentOS: /etc/kubernetes/admin.conf).</p>
<p>Note that upgrading from 1.10 to 1.14 and renewing certificates in a cluster whose certs have already expired are two different tasks; treat them separately so as not to complicate the process!</p>
<p>FYI, you should upgrade one minor version at a time (1.10 to 1.11, then 1.11 to 1.12, and so on); check <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/" rel="nofollow noreferrer">this</a> page, which is available per version. It is also VERY important to check the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md" rel="nofollow noreferrer">changelogs</a> and perform the <em>mandatory steps</em> before each hop.</p>
<p>Coming back to renewing certs: <code>kubeadm --config kubeadm.yaml alpha phase certs apiserver-etcd-client</code> alone is not enough; please run <code>kubeadm --config kubeadm.yaml alpha phase certs all</code>.<br/>
Note that after the above you also need to run <code>kubeadm --config kubeadm.yaml alpha phase kubeconfig all</code> on all nodes, as <code>ca.crt</code> has changed by now. <a href="https://stackoverflow.com/q/49885636/4451944">Ref</a></p>
<p>In the future you might want to consider <a href="https://kubernetes.io/docs/tasks/tls/certificate-rotation/" rel="nofollow noreferrer">certificate rotation</a>.</p>
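<p>Putting the above together, the renewal sequence might look like the following sketch; exact flags and phase names vary between kubeadm versions, and <code>kubeadm.yaml</code> is assumed to be your existing cluster configuration file:</p>
<pre><code># Renew ALL certificates, not only the etcd client cert
kubeadm --config kubeadm.yaml alpha phase certs all

# Regenerate the kubeconfig files on every node, since ca.crt has changed
kubeadm --config kubeadm.yaml alpha phase kubeconfig all

# Refresh your local kubectl credentials from the regenerated admin.conf
cp /etc/kubernetes/admin.conf $HOME/.kube/config
</code></pre>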
|
<p>I want to migrate an existing website hosted on WordPress to Kubernetes using GKE (or even GCE), but I do not know where to start. I haven't written any code yet. I tried to find solutions online but didn't find anything on migrating a website hosted on WordPress to Kubernetes.</p>
<ol>
<li>How can i fetch the database</li>
<li>What should the dockerfile look like</li>
<li>How many yaml files should be included</li>
<li>How many pods do i create</li>
</ol>
| <p>You can run everything in a single pod, but it depends on your website's traffic.</p>
<p>You can start with two pods initially: one for MySQL and another for the WordPress application itself.</p>
<p>You can create two YAML manifests for these (plus one Dockerfile if you need a custom image) and apply them to the Kubernetes cluster.</p>
<p>Follow this simple guide to get WordPress running on Kubernetes:</p>
<blockquote>
<p><a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/</a></p>
</blockquote>
|
<p>I wrote a simple Deployment YAML and it fails with the error:</p>
<pre><code>kubectl create -f deployment.yaml
error: error validating "deployment.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>This used to work in previous versions, and if I pass <code>--validate=false</code> it still works, but I would like to understand why the error occurs.</p>
<p>My deployment.yaml file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: httpd-deployment
labels:
app: httpd
spec:
replicas: 1
selector:
matchLabels:
app: httpd
template:
metadata:
labels:
app: httpd
spec:
containers:
- name: httpd
image: httpd:latest
ports:
- containerPort: 80
resources:
requests:
cpu: "0.3"
memory: "500Mi"
</code></pre>
<p>I'm running this on minikube, version v1.2.0.</p>
<p>Are there any standards we need to follow for creating the deployment YAML file in the new version?</p>
<p>No errors are displayed other than this warning, so troubleshooting is becoming a pain.</p>
<p>If there is anything that can help me fix this, please let me know.</p>
<p>Thank you</p>
| <p>This is an issue with kubectl validating what is about to be sent to the API server, rather than with Minikube itself.</p>
<p>The error is in the indentation: the <code>cpu</code> and <code>memory</code> properties should be nested under <code>requests</code>, <strong>not</strong> placed directly under <code>resources</code>:</p>
<pre><code>spec:
  containers:
  - name: httpd
    image: httpd:latest
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: "0.3"
        memory: "500Mi"
</code></pre>
<p>I've tested it using kubectl v1.15.0 and the error was correctly displayed:</p>
<pre><code>$ kubectl apply -f test.yaml
$ error: error validating "test.yaml": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].resources): unknown field "cpu" in io.k8s.api.core.v1.ResourceRequirements, ValidationError(Deployment.spec.template.spec.containers[0].resources): unknown field "memory" in io.k8s.api.core.v1.ResourceRequirements]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
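<p>As an aside, <code>kubectl explain</code> is a handy way to check the expected nesting of any field before applying a manifest, for example:</p>
<pre><code>kubectl explain deployment.spec.template.spec.containers.resources
</code></pre>
<p>It prints the fields that are valid at that path (here, <code>limits</code> and <code>requests</code>), which makes indentation mistakes like this one easy to spot.</p>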
|
<p>I have a mongodb server hosted outside GCP, I want to connect to it using Kubernetes endpoint service as shown here [<a href="https://www.youtube.com/watch?v=fvpq4jqtuZ8]" rel="nofollow noreferrer">https://www.youtube.com/watch?v=fvpq4jqtuZ8]</a>. How can I do that? Can you write a sample YAML file for the same?</p>
| <p>Use a <strong>static Kubernetes Service</strong> when you already have <strong>the internal IP address and port</strong> of the externally hosted service.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: mongo
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
</code></pre>
<p>As there is no pod selector on this Service, no Endpoints are created automatically, so we create an Endpoints object manually.</p>
<pre><code>kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
- addresses:
  - ip: 10.240.0.4 # Replace ME with your IP
  ports:
  - port: 27017
</code></pre>
<p>Make sure the Service and the Endpoints object have the same name (for example, <code>mongo</code>).</p>
<blockquote>
<p>If the IP address changes in the future, you can update the Endpoint with the new IP address, and your applications won’t need to make any changes.<a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="noreferrer">mapping-external-services</a></p>
</blockquote>
|
<p>So I have a private Kubernetes cluster hosted on GKE inside of a Cloud VPC network, essentially the same as discussed in <a href="https://cloud.google.com/nat/docs/gke-example" rel="nofollow noreferrer">Cloud NAT GKE example</a>.</p>
<p>Thats all working, and now I've setup an Nginx ingress inside the cluster, with setting the annotation:</p>
<pre><code>annotations:
cloud.google.com/load-balancer-type: "Internal"
</code></pre>
<p>This seems to work, as it eventually provisions an internal IP address within the VPC subnet range.</p>
<p>QUESTION:</p>
<p>How do I forward incoming traffic from the Cloud NAT gateway to that internal IP of the Nginx LoadBalancer service?</p>
<p>I want to have both ingress and egress happen on the same IP (so I don't have to expose the LoadBalancer service externally) that is linked to the Cloud NAT, if thats possible.</p>
<p>Thanks!</p>
| <p><a href="https://cloud.google.com/nat/docs/overview" rel="nofollow noreferrer">Cloud NAT</a> enables instances in a private subnet to connect to the internet, but not the other way around:</p>
<blockquote>
<p>Cloud NAT implements <strong>outbound</strong> NAT in conjunction with a default route
to allow your instances to reach the Internet. It does <strong>NOT</strong> implement
<strong>inbound</strong> NAT. Hosts outside of your VPC network can <strong>only respond</strong> to
established connections initiated by your instances; they <strong>cannot</strong>
initiate their own, new connections to your instances via NAT.</p>
</blockquote>
<p>In short, you cannot NAT incoming traffic with a Cloud NAT gateway.</p>
|
<p>How do I use the <code>can-i</code> command? It does not seem to be completely documented here: </p>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-can-i-em-" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-can-i-em-</a> (no mention of <code>--as</code>).</p>
<hr>
<p>All the below results seem nonsensical: </p>
<pre><code>kubectl auth can-i list pod --as=default3ueoaueo --as-group=system:authenticated --as-group=system:masters
yes
</code></pre>
<p>The above will return <code>yes</code> for anything after <code>--as=</code> - any user specified here.</p>
<p>On the other hand, the default user account (or any other I've tried) seems to have no permission at all:</p>
<pre><code>kubectl auth can-i list pod --as=default
no
</code></pre>
<p>and</p>
<pre><code>kubectl auth can-i list pod --as=default:serviceaccount:default
no
</code></pre>
<p>And according to <a href="https://github.com/kubernetes/kubernetes/issues/73123" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/73123</a> we just add <code>--as-group=system:authenticated</code> but that doesn't work either:</p>
<pre><code>kubectl auth can-i list pod --as=serviceaccount:default --as-group=system:authenticated
no
</code></pre>
| <p>Using the <code>--as</code> argument with <code>kubectl</code> is known as "user impersonation", and it is documented in the official documentation <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation" rel="noreferrer">here</a>.</p>
<p>If you are trying to impersonate a service account, the proper syntax is <code>--as=system:serviceaccount:&lt;namespace&gt;:&lt;name&gt;</code>, for example: <code>--as=system:serviceaccount:kube-system:default</code></p>
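<p>For example, to check whether the <code>default</code> service account in the <code>default</code> namespace can list pods:</p>
<pre><code>kubectl auth can-i list pods --as=system:serviceaccount:default:default
</code></pre>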
|
<p>Here's my situation: I have deployed some k8s resources via docker stack like this:</p>
<p><code>docker stack deploy --compose-file /path/to/docker-compose.yml mystack</code></p>
<p><code>docker stack services mystack</code></p>
<p>Now, I want to delete some of those resources, however, any resource I attempt to delete simply restarts itself immediately. </p>
<p>I have tried to delete the pods, corresponding replica sets, deployments and services but whatever resource I deleted, it restarted itself within a second.</p>
<p>What am I doing wrong?</p>
| <p>Deleting the pods, replica sets, deployments, or services individually doesn't help, because the stack orchestrator immediately reconciles them back. You should use <code>docker stack</code> to delete the whole stack:</p>
<pre><code>docker stack rm <stack-name>
</code></pre>
|
<p>I'd like to know how I can change the time interval of <code>kubectl top</code> for monitoring pods. At the moment it updates very slowly (I think more than 15s) and I want more up-to-date values for the resource usage of my pods.</p>
<p>I am aware that I can use other tools such as Grafana, Prometheus, etc. But when I work in the terminal I like to have a simple monitoring tool running in the same environment.</p>
| <p>There is no flag for <a href="https://github.com/kubernetes/kubernetes/blob/5a7b978c7401efad5672bcc876bcf5d3dfa71bd0/pkg/kubectl/cmd/top/top_pod.go#L42-L55" rel="noreferrer">kubectl top</a> for setting the refresh rate since it depends on the information collected by Heapster and Heapster depends on the <a href="https://groups.google.com/d/msg/kubernetes-users/D-xsnFtmeAk/_KvfvjJHAQAJ" rel="noreferrer">kubelet metrics scraping interval</a>.</p>
|
<p>I'm scaling ML prediction based on CPU and memory utilization. I have used an HPA for pod-level scaling, where we specified both CPU and memory metrics. While creating the deployment, I also specified the compute resource requests and limits. (I have pasted both the HPA configuration and the pod template configuration for reference.)</p>
<p>I observed that although we specified resource limits and requests, when I check the memory and CPU consumed by each pod, only one pod is consuming all the CPU and memory while the others consume very little. My understanding is that all pods should consume approximately equal resources, so we can say it scales; otherwise it's like running the code on a single machine without Kubernetes.</p>
<p>Note: I'm using the Python Kubernetes client for creating the deployment and services, not YAML.</p>
<p>I have tried tweaking the limits and requests parameters and observed that, because this is an ML pipeline, memory and CPU consumption spike exponentially at some point.</p>
<p>My HPA configuration:-</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  namespace: default
  name: #hpa_name
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: #deployment_name
  minReplicas: 1
  maxReplicas: 40
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 5Gi
</code></pre>
<p>My pod template code-</p>
<pre><code>container = client.V1Container(
    ports=[client.V1ContainerPort(container_port=8080)],
    env=[client.V1EnvVar(name="ABC", value=12345)],
    resources=client.V1ResourceRequirements(
        limits={"cpu": "2", "memory": "22Gi"},
        requests={"cpu": "1", "memory": "8Gi"},
    ),
)
<p>Output of <code>kubectl top pods</code></p>
<pre><code>NAME CPU(cores) MEMORY(bytes)
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-77c6ds 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7d5n4l 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7dq6c9 14236m 16721Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7f6nmh 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7fzdc4 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7gvqtj 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7h6ld7 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7j7gv4 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7kxlvh 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7nnn8x 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7pmtnj 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7qflkh 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7s26cj 1m 176Mi
deploy-24b32e5dc388456f8f2263be39ffb5f7-de19236511504877-7st5lt 1m 176Mi
</code></pre>
<p>From the above output it is clear that the third pod is using most of the resources, while the others sit at a constant, very low CPU and memory consumption.</p>
<p>My expectation was that each pod would consume roughly equal resources, based on the requests and limits in the pod template: in this case between 1 and 2 CPUs for CPU consumption, and between 8Gi and 22Gi for memory (possibly less than the requested resources, but never beyond the defined limits).</p>
<p>Thanks in advance for any points/help/hints.</p>
| <p>As per the RCA (root cause analysis) of this issue: we ran <code>ipvsadm -ln</code> while processing a workload in our Kubernetes cluster and found that the payload made only one TCP connection, which caused all of the workload to be concentrated in one pod even though other pods were available.</p>
<p>Our application is based on gRPC, and gRPC uses HTTP/2. HTTP/2 creates a single long-lived TCP connection and multiplexes requests over it to minimize connection-management overhead. Because of this, one long-lived TCP connection was attached to a single pod, and although the HPA scaled out the pods when memory and CPU spiked, the load did not get distributed. Thus we needed a mechanism to move one step beyond connection-level load balancing (the default in Kubernetes) to request-level load balancing.</p>
<p>Fortunately, we found the solution below; we followed it and it worked for us.</p>
<p><a href="https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/</a></p>
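<p>To see why request-level balancing matters, here is a small illustrative simulation (not our actual code; pod names are made up) contrasting connection-level balancing, where one long-lived HTTP/2 connection pins every request to a single pod, with request-level balancing, where a gRPC-aware proxy routes each request independently:</p>

```python
import random

PODS = ["pod-a", "pod-b", "pod-c"]

def connection_level(n_requests):
    """One long-lived TCP connection: kube-proxy picks a backend once,
    and every multiplexed HTTP/2 request lands on that single pod."""
    pod = random.choice(PODS)
    return {pod: n_requests}

def request_level(n_requests):
    """A gRPC-aware proxy balances each request individually,
    here with simple round-robin across the pods."""
    counts = {pod: 0 for pod in PODS}
    for i in range(n_requests):
        counts[PODS[i % len(PODS)]] += 1
    return counts

print(connection_level(300))  # all 300 requests pile onto one pod
print(request_level(300))     # 100 requests per pod
```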
|
<p>Is there a way to tell Kubernetes never to run two replicas of a pod on the same node? For example, I have two pod replicas and I want them always spread across <code>zone1</code>/<code>zone2</code>, never in the same zone together.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: testApp
  labels:
    app: testApp-front
  namespace:
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testApp-front
  template:
    metadata:
      labels:
        app: testApp-front
    spec:
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: zone1
</code></pre>
| <p>This can be done with inter-pod (anti-)affinity, for example:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: testApp-front
  replicas: 3
  template:
    metadata:
      labels:
        app: testApp-front
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - testApp-front
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-testApp-front
        image: nginx:1.12-alpine
</code></pre>
<p>you can see the full <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature" rel="nofollow noreferrer">example here</a></p>
|
<p>I have a GKE cluster (gke v1.13.6) and using istio (v1.1.7) with several services deployed and working successfully except one of them which always responds with HTTP 503 when calling through the gateway : upstream connect error or disconnect/reset before headers. reset reason: connection failure.</p>
<p>I've tried calling the pod directly from another pod with curl enabled and it ends up in 503 as well :</p>
<pre><code>$ kubectl exec sleep-754684654f-4mccn -c sleep -- curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.3.254.3...
* TCP_NODELAY set
* Connected to d-vine-machine-dev (10.3.254.3) port 8080 (#0)
> GET /d-vine-machine/swagger-ui.html HTTP/1.1
> Host: d-vine-machine-dev:8080
> User-Agent: curl/7.60.0
> Accept: */*
>
upstream connect error or disconnect/reset before headers. reset reason: connection failure< HTTP/1.1 503 Service Unavailable
< content-length: 91
< content-type: text/plain
< date: Thu, 04 Jul 2019 08:13:52 GMT
< server: envoy
< x-envoy-upstream-service-time: 60
<
{ [91 bytes data]
100 91 100 91 0 0 1338 0 --:--:-- --:--:-- --:--:-- 1378
* Connection #0 to host d-vine-machine-dev left intact
</code></pre>
<p>Setting the log level to TRACE at the istio-proxy level :</p>
<pre><code>$ kubectl exec -it -c istio-proxy d-vine-machine-dev-b8df755d6-bpjwl -- curl -X POST http://localhost:15000/logging?level=trace
</code></pre>
<p>I looked into the logs of the injected sidecar istio-proxy and found this :</p>
<pre><code>[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:381] [C119][S9661729384515860777] router decoding headers:
':authority', 'api-dev.d-vine.tech'
':path', '/d-vine-machine/swagger-ui.html'
':method', 'GET'
':scheme', 'http'
'cache-control', 'max-age=0'
'upgrade-insecure-requests', '1'
'user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
'accept-encoding', 'gzip, deflate'
'accept-language', 'fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7'
'x-forwarded-for', '10.0.0.1'
'x-forwarded-proto', 'http'
'x-request-id', 'e38a257a-1356-4545-984a-109500cb71c4'
'content-length', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/default/sa/default;Hash=8b6afba64efe1035daa23b004cc255e0772a8bd23c8d6ed49ebc8dabde05d8cf;Subject="O=";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account;DNS=istio-ingressgateway.istio-system'
'x-b3-traceid', 'f749afe8b0a76435192332bfe2f769df'
'x-b3-spanid', 'bfc4618c5cda978c'
'x-b3-parentspanid', '192332bfe2f769df'
'x-b3-sampled', '0'
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C121] connecting
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C121] connecting to 127.0.0.1:8080
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C121] connection in progress
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:811] [C119][S9661729384515860777] decode headers called: filter=0x4f118b0 status=1
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:384] [C119] parsed 1272 bytes
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C119] readDisable: enabled=true disable=true
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C121] socket event: 3
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C121] write ready
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:526] [C121] delayed connection error: 111
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C121] closing socket: 0
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C121] disconnect. resetting 0 pending requests
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:129] [C121] client disconnected, failure reason:
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:164] [C121] purge pending, failure reason:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:644] [C119][S9661729384515860777] upstream reset: reset reason connection failure
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0e5f0 status=0
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0edc0 status=0
[2019-07-04 07:30:41.353][24][debug][filter] [src/envoy/http/mixer/filter.cc:133] Called Mixer::Filter : encodeHeaders 2
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0f0e0 status=0
[2019-07-04 07:30:41.353][24][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1305] [C119][S9661729384515860777] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '91'
'content-type', 'text/plain'
'date', 'Thu, 04 Jul 2019 07:30:41 GMT'
'server', 'istio-envoy'
</code></pre>
<p>Has anyone encountered such an issue ? If you need more info about the configuration, I can provide.</p>
| <p>Thanks for your answer, Manvar. There was no problem with the curl-enabled pod, but thanks for the insight. It was a misconfiguration of our Tomcat port, which did not match the Service/VirtualService config.</p>
|
<p>I have an application.yml (Spring) file with almost 70 fields that I want to move to a ConfigMap.
While setting up the ConfigMap, I realized that all 70 fields have to be flattened, for example: webservice.endpoint.transferfund.
Converting all 70 fields to flat keys is going to be a painful task; is there an alternative?</p>
<p>Please suggest.</p>
<p>Below Config is working:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: configmapname
  namespace: default
data:
  webservice.endpoint.transferfund: http://www.customer-service.app/api/tf
  webservice.endpoint.getbalance: http://www.customer-service.app/api/balance
  webservice.endpoint.customerinfo: http://www.customer-service.app/api/customerinfo
</code></pre>
<p>Below config is not working, tried it as yml format.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: configmapname
  namespace: default
data:
  application.yaml: |-
    webservice:
      endpoint:
        transferfund: http://www.customer-service.app/api/tf
        getbalance: http://www.customer-service.app/api/balance
        customerinfo: http://www.customer-service.app/api/customerinfo
</code></pre>
<p>in src/main/resources/application.yml have below fields to access ConfigMap keys:</p>
<pre><code>webservice:
  endpoint:
    transferfund: ${webservice.endpoint.transferfund}
    getbalance: ${webservice.endpoint.getbalance}
    customerinfo: ${webservice.endpoint.customerinfo}
</code></pre>
<h2>Updated:</h2>
<p>ConfigMap Description:</p>
<pre><code>C:\Users\deskktop>kubectl describe configmap configmapname
Name:         configmapname
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
application.yaml:
----
webservice:
  endpoint:
    transferfund: http://www.customer-service.app/api/tf
    getbalance: http://www.customer-service.app/api/balance
    customerinfo: http://www.customer-service.app/api/customerinfo

Events:  <none>
</code></pre>
<p>Deployment script: (configMapRef name provided as configmap name as shown above)</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: configmap-sample
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: configmap-sample
    spec:
      containers:
      - name: configmap-sample
        image: <<image>>
        ports:
        - name: http-api
          containerPort: 9000
        envFrom:
        - configMapRef:
            name: configmapname
        resources:
          limits:
            memory: 1Gi
          requests:
            memory: 768Mi
        env:
        - name: JVM_OPTS
          value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -Xms768M"
</code></pre>
| <p>A <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/configmap" rel="nofollow noreferrer">ConfigMap</a> is a dictionary of configuration settings. It consists of key-value pairs of strings. Kubernetes then adds those values to your containers.</p>
<p>In your case (the Deployment consumes the ConfigMap via <code>envFrom</code>), you have to make the keys flat, because Kubernetes will not interpret the nested structure for you.</p>
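<p>If flattening is unavoidable in your setup, the conversion need not be done by hand. Here is a minimal sketch (the function and sample data are illustrative, not part of your project) that turns a nested mapping into ConfigMap-style dotted keys:</p>

```python
def flatten(config, parent_key=""):
    """Flatten a nested dict into dotted ConfigMap-style keys."""
    flat = {}
    for key, value in config.items():
        full_key = f"{parent_key}.{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, full_key))
        else:
            flat[full_key] = str(value)
    return flat

nested = {
    "webservice": {
        "endpoint": {
            "transferfund": "http://www.customer-service.app/api/tf",
            "getbalance": "http://www.customer-service.app/api/balance",
        }
    }
}
print(flatten(nested))
```

<p>Loading the nested YAML with a YAML parser first and piping the flattened result into the ConfigMap's <code>data</code> section would cover all 70 fields in one pass.</p>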
<p>You can read in the documentation about <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Creating ConfigMap</a> that:</p>
<blockquote>
<p><code>kubectl create configmap <map-name> <data-source></code></p>
<p>where is the name you want to assign to the ConfigMap and is the directory, file, or literal value to draw the data from.</p>
<p>The data source corresponds to a key-value pair in the ConfigMap, where</p>
<ul>
<li>key = the file name or the key you provided on the command line, and</li>
<li>value = the file contents or the literal value you provided on the command line.</li>
</ul>
<p>You can use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands/#describe" rel="nofollow noreferrer"><code>kubectl describe</code></a> or <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands/#get" rel="nofollow noreferrer"><code>kubectl get</code></a> to retrieve information about a ConfigMap.</p>
</blockquote>
<p><strong>EDIT</strong></p>
<p>You could create a ConfigMap from a file with defined key.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-the-key-to-use-when-creating-a-configmap-from-a-file" rel="nofollow noreferrer">Define the key to use when creating a ConfigMap from a file</a></p>
<p>Syntax might look like this:</p>
<p><code>kubectl create configmap my_configmap --from-file=<my-key-name>=<path-to-file></code></p>
<p>And the ConfigMap might look like the following:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: 2019-07-03T18:54:22Z
  name: my_configmap
  namespace: default
  resourceVersion: "530"
  selfLink: /api/v1/namespaces/default/configmaps/my_configmap
  uid: 05f8da22-d671-11e5-8cd0-68f728db1985
data:
  <my-key-name>: |
    key=value
    key=value
    key=value
    key=value
</code></pre>
<p>Also I was able to find <a href="https://github.com/tumblr/k8s-config-projector" rel="nofollow noreferrer">Create Kubernetes ConfigMaps from configuration files</a>.</p>
<blockquote>
<h2>Functionality</h2>
<p>The projector can:</p>
<ul>
<li>Take raw files and stuff them into a ConfigMap</li>
<li>Glob files in your config repo, and stuff ALL of them in your configmap</li>
<li>Extract fields from your structured data (yaml/json)</li>
<li>Create new structured outputs from a subset of a yaml/json source by pulling out some fields and dropping others</li>
<li>Translate back and forth between JSON and YAML (convert a YAML source to a JSON output, etc)</li>
<li>Support for extracting complex fields like objects+arrays from sources, and not just scalars!</li>
</ul>
</blockquote>
|
<p>I have created a GKE private cluster (<em>version: 1.13.6-gke.13</em>) using the following command:</p>
<pre><code>gcloud container clusters create a-cluster-with-user-pass \
--network vpc-name \
--subnetwork subnet-name \
--enable-master-authorized-networks \
--username random \
--password averylongpassword \
--enable-ip-alias \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr xxx.xx.xx.xx/28 \
--cluster-version 1.13.6-gke.13 \
--num-nodes 2 \
--zone asia-south1-a
</code></pre>
<p>I can see that the port (10255) is open in both the nodes (or we can say GCP compute instances) created in the above cluster.</p>
<p>If I create a simple GCP compute instance (so I have three VM instances in total) and try to access the internal IP of the GKE node on port 10255 from this VM, I can access it without any authentication or authorization.
Below is the command used to create the GCP compute instance:</p>
<pre><code>gcloud compute instances create vm-name \
--network vpc-name \
--subnetwork subnet-name \
--zone asia-south1-a
</code></pre>
<p>If I send a simple CURL GET request to (<strong><em>xxx.xx.xx.xx:10255/pods</em></strong>) I get tons of information about the pods and applications.
As I can see in the documentation of Kubernetes <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">here</a>, it is mentioned that:</p>
<pre><code>--read-only-port int32
The read-only port for the Kubelet to serve on with no authentication/authorization (set to 0 to disable) (default 10255)
</code></pre>
<p>I tried disabling the port by SSHing into the node, editing the <code>kube-config.yaml</code> file, and restarting the kubelet, and that worked. But is this a good approach? I believe there could be multiple issues when <strong><em>xxx.xx.xx.xx:10255/metrics</em></strong> is disabled. Is there a way to secure the port rather than disabling it?</p>
<p>I see this <a href="https://github.com/kubernetes/kubernetes/issues/44330" rel="noreferrer">github issue</a> and I am certain that there is a way to secure this port. I'm not sure how to do that.</p>
<p>I see Kubernetes documentation in general provides us with multiple ways to secure the port. How to do that in Google Kubernetes Engine?</p>
| <p>The kubelet exposes the collected node metrics on this port. Failing to expose these metrics might lead to unexpected behavior, as the system would essentially be flying blind.</p>
<p>Since GKE is a managed system, you're not really supposed to tweak the kubelet flags, as the settings will be reset when a node gets recreated (nodes are based on GCE instance templates that will not include your own configuration).</p>
<p>As for security, I think it is safe to leave that port as-is, since you're using a private cluster, meaning only resources in the same VPC are allowed to reach the nodes.</p>
|
<p>Currently I'm trying to get the API server connected to my Keycloak.</p>
<p>When I use the OIDC information from the user everything works fine, but the groups seem to be ignored.
The API server is running with the parameters</p>
<pre><code> --oidc-ca-file=/etc/kubernetes/ssl/ca.pem
--oidc-client-id=kubernetes
--oidc-groups-claim=groups
--oidc-groups-prefix=oidc:
--oidc-issuer-url=https://keycloak.example.com/auth/realms/master
--oidc-username-claim=preferred_username
--oidc-username-prefix=oidc:
</code></pre>
<p>I added a ClusterRole and ClusterRoleBinding</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer-role
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: developer-role
subjects:
- kind: User
  name: "oidc:myuser"
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>and for my user “myuser” everything works fine.</p>
<p>But when I change the ClusterRoleBinding subject to the group</p>
<pre><code>....
subjects:
- kind: User
  name: "oidc:group1"
  apiGroup: rbac.authorization.k8s.io
...
</code></pre>
<p>I receive forbidden.</p>
<p>I tried to debug the jwt token and the group seems to be included:</p>
<pre><code>{
  ...
  "groups": [
    "group1",
    "group2",
    "group3"
  ],
  ...
}
</code></pre>
<p>Any ideas why my groups are ignored/my ClusterRoleBinding not working?</p>
| <pre><code>....
subjects:
- kind: User
  name: "oidc:group1"
  apiGroup: rbac.authorization.k8s.io
...
</code></pre>
<p>should be:</p>
<pre><code>....
subjects:
- kind: Group
  name: "oidc:group1"
  apiGroup: rbac.authorization.k8s.io
...
</code></pre>
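<p>Once the binding uses <code>kind: Group</code>, you can verify the authorization without re-authenticating as the OIDC user, by impersonating the user and group (the names below reuse the question's examples):</p>

```shell
# Impersonate the OIDC user together with its group; with the
# corrected binding this should answer "yes"
kubectl auth can-i list pods --as="oidc:myuser" --as-group="oidc:group1"
```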
|
<p>I am learning more about Kubernetes Pod Security Policies, and while going through the list of fields, I could not find the default value for the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged" rel="nofollow noreferrer">Privileged</a> flag. Is this value dependent on the container runtime used? For example, <a href="https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities" rel="nofollow noreferrer">Docker containers</a> have this value set to false by default, so if I had a Kubernetes cluster with only Docker containers, would all the pods be unprivileged?</p>
| <p>There is a <code>privileged</code> flag on the <code>securityContext</code> of the container spec. The default is false.</p>
<p>For example, the template below sets it to true.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello-world-container
    # The container definition
    # ...
    securityContext:
      privileged: true
</code></pre>
|
<p>Is there a way to tell Kubernetes to never run two pods on the same node? For example, I have two pod replicas and I want them to always be distributed over <code>zone1/zone2</code> and never be in the same zone together.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: testApp
  labels:
    app: testApp-front
  namespace:
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testApp-front
  template:
    metadata:
      labels:
        app: testApp-front
    spec:
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: zone1
</code></pre>
| <p>I think you need the concept of pod anti-affinity. Within a single cluster it ensures that pods do not end up on the same worker node (or, with a zone topology key, in the same zone).
<a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity</a></p>
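<p>Applied to the question's deployment, a sketch of the anti-affinity block might look like this (it uses the zone label from the question as the topology key; with <code>kubernetes.io/hostname</code> instead it would spread the pods across nodes):</p>

```yaml
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          # never schedule two pods with this label in the same zone
          - labelSelector:
              matchLabels:
                app: testApp-front
            topologyKey: failure-domain.beta.kubernetes.io/zone
```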
|
<p>I use <code>cert-manager</code> to generate TLS certificates for my application on Kubernetes with Let's Encrypt.</p>
<p>It is running and I can see "ca.crt", "tls.crt" and "tls.key" inside the container of my application (in <code>/etc/letsencrypt/</code>). </p>
<p>But "ca.crt" is empty, and the application complains about it (<code>Error: Unable to load CA certificates. Check cafile "/etc/letsencrypt/ca.crt"</code>). The two other files look like normal certificates.</p>
<p>What does that mean?</p>
| <p>With cert-manager you typically use the <code>nginx-ingress</code> controller as the exposure point.</p>
<p>The ingress-nginx controller creates a load balancer, and you configure your application's TLS certificate on the Ingress resource.</p>
<p>cert-manager does not place certificate files inside your application's pod; the certificate it issues is stored in a Kubernetes Secret.</p>
<p>So set up nginx-ingress together with cert-manager: cert-manager will manage the TLS certificate and keep it in a Secret that the Ingress references.</p>
<p>Please follow this guide for more details:</p>
<blockquote>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p>
</blockquote>
|
<p>Is there a way to tell Kubernetes to never run two pods on the same node? For example, I have two pod replicas and I want them to always be distributed over <code>zone1/zone2</code> and never be in the same zone together.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: testApp
  labels:
    app: testApp-front
  namespace:
spec:
  replicas: 2
  selector:
    matchLabels:
      app: testApp-front
  template:
    metadata:
      labels:
        app: testApp-front
    spec:
      nodeSelector:
        failure-domain.beta.kubernetes.io/zone: zone1
</code></pre>
| <p>Very simple: you can use a DaemonSet to run one pod per node, or, as the others said, you can use pod anti-affinity.</p>
|
<p>Helm greenhorn. I have to "inject" a specific <code>hazelcast.xml</code> into the <code>configMap</code> with the <a href="https://github.com/hazelcast/charts/tree/master/stable/hazelcast%20helm" rel="nofollow noreferrer">chart</a>. I am supposed to <code>--set</code> <code>hazelcast.configurationFiles</code></p>
<p>I've tried several ways:</p>
<ol>
<li><pre><code>helm install stable/hazelcast --set cluster.memberCount=3 --set hazelcast.configurationFiles[0].val="$(cat k8s/hazelcast.xml)"
</code></pre></li>
<li><pre><code>helm install --name=ciao stable/hazelcast --set cluster.memberCount=3 --set hazelcast.configurationFiles[0]="\{ key: hazelcast.xml, val:$(cat k8s/hazelcast.xml) \}"
</code></pre></li>
<li><pre><code>helm install --name=ciao stable/hazelcast --set cluster.memberCount=3 --set hazelcast.configurationFiles[0]="$(cat k8s/hazelcast.xml)"
</code></pre></li>
</ol>
<p>None of them works, and I couldn't find or understand how to do it correctly. </p>
<p>I expect to get a <code>configMap</code> correctly configured like it should be:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: hazelcast-configuration
data:
  hazelcast.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>........
</code></pre>
<p>instead of this (closest result obtained with try nr 3):</p>
<pre><code>data:
  "0": |-
    <?xml version="1.0" encoding="UTF-8"?>
</code></pre>
| <p>Following the <a href="https://github.com/hazelcast/charts/blob/master/stable/hazelcast/README.md" rel="nofollow noreferrer">README</a> example,</p>
<p>you need to uncomment <code>configurationFiles</code> in the values and paste your own XML file content:</p>
<pre><code>configurationFiles:
  hazelcast.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.10.xsd"
               xmlns="http://www.hazelcast.com/schema/config"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <properties>
        <property name="hazelcast.discovery.enabled">true</property>
      </properties>
      <network>
        <join>
          <multicast enabled="false"/>
          <tcp-ip enabled="false" />
          <discovery-strategies>
            <discovery-strategy enabled="true" class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
            </discovery-strategy>
          </discovery-strategies>
        </join>
      </network>
      <management-center enabled="${hazelcast.mancenter.enabled}">${hazelcast.mancenter.url}</management-center>
      <!-- Custom Configuration Placeholder -->
    </hazelcast>
</code></pre>
<p>But if you don't want to paste the content inside <code>values.yaml</code>, you can use <code>.Files.Get</code> to read the content of a file in the chart directory:</p>
<pre><code>configurationFiles:
  hazelcast.xml: |-
{{ .Files.Get "hazelcast.xml" | indent 4 }}
</code></pre>
<p><a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/accessing_files.md" rel="nofollow noreferrer">Template guide</a></p>
<p>Remember you can copy the original <code>values.yaml</code> from the chart and use <code>-f</code> to specify your own values file instead of using <code>--set</code> every time.</p>
|
<p>After reading thru Kubernetes documents like this, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">deployment</a> , <a href="https://kubernetes.io/docs/concepts/services-networking/service/#motivation" rel="noreferrer">service</a> and <a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-kubernetes" rel="noreferrer">this</a> I still do not have a clear idea what the purpose of service is. </p>
<p>It seems that the service is used for 2 purposes:</p>
<ol>
<li>expose the deployment to the outside world (e.g using LoadBalancer), </li>
<li>expose one deployment to another deployment (e.g. using ClusterIP services). </li>
</ol>
<p>Is this the case? And what about the Ingress?</p>
<p>------ update ------</p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="noreferrer">Connect a Front End to a Back End Using a Service</a> is a good example of the service working with the deployment.</p>
| <p><strong>Service</strong></p>
<p>A deployment consists of one or more pods and replicas of those pods. Let's say we have 3 replicas of a pod running in a deployment, and assume there is no service. How do other pods in the cluster reach these pods? Through the pods' IP addresses. If one of the pods goes down, Kubernetes brings up another pod, the list of pod IPs changes, and every consumer would have to keep track of it. The same is the case when autoscaling is enabled and the number of pods grows or shrinks with demand. Services exist to avoid this problem: a service gives you a single stable address and keeps track of the changing list of pod IPs behind a deployment. </p>
<p>And yes, both uses that you listed in the question are correct.</p>
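<p>For illustration, a minimal Service that gives a deployment's pods a single stable address might look like this (all names are placeholders):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-backend           # placeholder service name
spec:
  type: ClusterIP            # internal-only; LoadBalancer would expose it externally
  selector:
    app: my-backend          # must match the pod labels of the deployment
  ports:
  - port: 80                 # port the service listens on
    targetPort: 8080         # port the pods listen on
```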
<p><strong>Ingress</strong></p>
<p>Ingress provides a single point of entry for the various services in your cluster. Take a simple scenario: your cluster has two services, one for a web app and another for a documentation service. If you use services alone without ingress, you need to maintain two load balancers, which can also cost more. To avoid this, an ingress sits on top of the services and routes to them based on the rules and paths defined in it. </p>
|
<p>I want to store the logs of pods in Kubernetes in one place, i.e. the output of <strong>kubectl logs podname</strong>.</p>
<p>I referred to this question, <a href="https://stackoverflow.com/questions/56489250/kubernetes-basic-pod-logging">Kubernetes basic pod logging</a>, which successfully gives me logs for the counter... How do I modify the <code>args</code> attribute in the spec so that the output of <strong>kubectl logs podname</strong> is stored in a file in one place?</p>
<p>Here is the pod.yaml I created, but I am not able to see any file at the location <strong>/tmp/logs/</strong>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
spec:
  ports:
  - name: spring-boot-pricing
    port: 8084
    targetPort: 8084
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
      - name: spring-boot-demo-pricing
        image: djtijare/a2ipricing12062019:v1
        imagePullPolicy: IfNotPresent
        # envFrom:
        # - configMapRef:
        #     name: spring-boot-demo-config-map
        resources:
          requests:
            cpu: 100m
            memory: 1Gi
        ports:
        - containerPort: 8084
        args: [/bin/sh, -c,
          'i=0; while true; do echo "$i: $(date)" >> /u01/kubernetes_prac/logs/log_output.txt; i=$((i+1)); sleep 1; done']
        volumeMounts:
        - name: varlog
          mountPath: /u01/kubernetes_prac/logs
      volumes:
      - name: varlog
        hostPath:
          path: /tmp/logs
</code></pre>
| <p>The right approach is to store logs outside the cluster. For example, on AWS there are provisions for shipping logs to CloudWatch, S3, etc. One such tool is <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="nofollow noreferrer">fluentd</a>, which can write to many destinations. The reason is that Kubernetes clusters run inside a VPC (virtual private cloud), and providing direct access to the nodes in the cluster is often not appropriate, so this approach is commonly followed. </p>
<p>This <a href="https://eksworkshop.com/logging/" rel="nofollow noreferrer">link</a> provides a guideline for shipping all containers' logs to EFK (Elasticsearch, Fluentd, and Kibana).</p>
|
<p>I am learning more about Kubernetes Pod Security Policies, and while going through the list of fields, I could not find the default value for the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged" rel="nofollow noreferrer">Privileged</a> flag. Is this value dependent on the container runtime used? For example, <a href="https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities" rel="nofollow noreferrer">Docker containers</a> have this value set to false by default, so if I had a Kubernetes cluster with only Docker containers, would all the pods be unprivileged?</p>
| <p>In a fresh cluster the default PSP is very permissive, which means that pretty much anything is allowed. So by default privileged containers are admitted if the <code>privileged</code> flag is set on the pod, and the same goes for root users, etc. </p>
<p>You will have to explicitly set <code>privileged: false</code> in a PSP (and enable PSP admission) if you want to enforce the restriction. I believe this is true regardless of the container runtime being used; it works with Windows containers too :) </p>
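<p>As a sketch, a minimal restrictive PodSecurityPolicy that rejects privileged pods might look like this (note it only takes effect once the PSP admission controller is enabled and the policy is usable by the pod's service account):</p>

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false          # pods requesting privileged: true are rejected
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```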
|
<p>When I try to check my k8s certs expiration states, I run the following command:</p>
<pre class="lang-sh prettyprint-override"><code>kubeadm alpha certs check-expiration
</code></pre>
<p>which ended up with:</p>
<pre><code>name: Invalid value: "alpha_53_116": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
</code></pre>
<p>I was wondering why it needs to check the node's hostname? As the hostname of my master node couldn't be altered, is there any way to solve this problem?</p>
<hr>
<p>complement:</p>
<p>OS: Centos 7.4<br>
kubeadm version: 1.15.0</p>
| <p>As you pointed out in your question, the problem is with your node name.
According to the <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#check-certificate-expiration" rel="nofollow noreferrer">documentation</a>, the command <code>kubeadm alpha certs</code></p>
<blockquote>
<p>The command shows expiration/residual time for the client certificates
in the /etc/kubernetes/pki folder and for the client certificate
embedded in the KUBECONFIG files used by kubeadm (admin.conf,
controller-manager.conf and scheduler.conf).</p>
</blockquote>
<p>Mentioned files can be found in <code>/etc/kubernetes</code>. You can also check kubeadm init configuration using <code>kubeadm config print init-defaults</code>. </p>
<p>Those files will contain your hostname, which is invalid in kubeadm/Kubernetes.
In short, as <code>kubeadm alpha certs</code> is based on the KUBECONFIG files and the pki folder, it will not pass validation due to the "_" sign.
Unfortunately it is a syntax issue, so there is no workaround.</p>
<p>Please keep in mind that <code>alpha</code> denotes experimental kubeadm sub-commands, so this might change in the future.</p>
|
<p>I have a couple of applications, which runs in Docker containers (all on the same VM).
In front of them, I have an nginx container as a reverse proxy.
Now I want to migrate that to Kubernetes. </p>
<p>When I start them with docker-compose locally, everything works as expected.
On Kubernetes it does not.</p>
<h3>nginx.conf</h3>
<pre><code>http {
    server {
        location / {
            proxy_pass http://app0:80;
        }
        location /app1/ {
            proxy_pass http://app1:80;
            rewrite ^/app1(.*)$ $1 break;
        }
        location /app2/ {
            proxy_pass http://app2:80;
            rewrite ^/app2(.*)$ $1 break;
        }
    }
}
</code></pre>
<h3>edit: nginx.conf is not used on kubernetes. I have to use ingress-controller for that:</h3>
deployment.yaml
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app0
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: app0
        image: appscontainerregistry1.azurecr.io/app0:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: nginx
---
#the other apps
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: apps-url.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app0
          servicePort: 80
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 80
      - path: /app2
        backend:
          serviceName: app2
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: ingress-nginx
</code></pre>
<p>I get the response on / (app0). Unfortunately, the subroutes are not working. What I´m doing wrong?</p>
<h3>EDIT</h3>
<p>I figured it out: I had missed installing the ingress controller. As described on this page (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a>), an Ingress doesn't work if no controller is installed.
I used ingress-nginx as the controller (<a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a>) because it had the best-described install guide I was able to find and I didn't want to use Helm.
I have one more question: how can I change my ingress so that subroutes work?
For example, k8url.com/app1/subroute always shows me the start page of app1.
And if I use domain-name proxying, it always rewrites the domain name to the IP. </p>
| <p>You have created the deployments successfully, but each application also needs a Service; the nginx ingress on Kubernetes routes traffic based on Services.</p>
<p>So the flow goes like</p>
<pre><code>nginx-ingress > service > deployment pod.
</code></pre>
<p>You are missing the Services for the applications: create one per app and point the corresponding paths in the Kubernetes ingress at them.</p>
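<p>For example, a Service for <code>app1</code> could be sketched like this (assuming the app1 pods carry the label <code>app: app1</code>):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app1                 # the serviceName referenced by the ingress
spec:
  selector:
    app: app1                # must match the labels in app1's pod template
  ports:
  - port: 80
    targetPort: 80
```

<p>Create one such Service per application, matching the <code>serviceName</code> entries in the ingress.</p>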
|
<p>I've read some pages about monitoring k8s, and I found <strong><em>kubernetes_sd_config</em></strong> (within prometheus), <strong><em>metrics-server</em></strong> (took the place of <em>heapster</em>) and <strong><em>kube-state-metrics</em></strong>. All of them could provides metrics, but what's the difference? </p>
<ol>
<li>Does kubernetes_sd_config (within prometheus) provide all the data those I can get using metrics-server and kube-state-metrics? </li>
<li>Is kubernetes_sd_config just enough for monitoring? </li>
<li>Is metrics-server just for providing data (less than kubernetes_sd_config) to the internal components(such as hpa controller)?</li>
<li>Is kube-state-metrics just for the objects (pod, deployment...) in k8s?</li>
<li>what is their own target respectively?</li>
</ol>
| <p><strong>1</strong> <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">Metrics-server</a> is a cluster level component which periodically scrapes container CPU and memory usage metrics from all Kubernetes nodes served by Kubelet through Summary API.</p>
<p>The Kubelet exports a "summary" API that aggregates stats from all pods.</p>
<pre><code>$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
$ curl localhost:8001/api/v1/proxy/nodes/${NODE}:10255/stats/summary
</code></pre>
<p><strong>Use-Cases:</strong></p>
<ul>
<li>Horizontal Pod Autoscaler.</li>
<li><code>kubectl top --help</code>: command.</li>
</ul>
<p><strong>2</strong> <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a></p>
<blockquote>
<p>is focused on generating completely new metrics from Kubernetes' object state (e.g. metrics based on deployments, replica sets, etc.). It holds an entire snapshot of Kubernetes state in memory and continuously generates new metrics based off of it</p>
</blockquote>
<p><strong>Use-Cases</strong></p>
<ul>
<li>count the number of k8s Objects.</li>
<li>How many namespaces are there ?</li>
</ul>
<p><a href="https://sysdig.com/blog/introducing-kube-state-metrics/" rel="nofollow noreferrer">sysdig-k8s-state-metrics</a> provides further information.</p>
<p><strong>3</strong> <strong><a href="https://github.com/prometheus/node_exporter" rel="nofollow noreferrer">Prometheus Node_Exporter</a></strong> − gets the host-level metrics and exposes them to Prometheus.</p>
<p><strong>Use-Cases</strong></p>
<ul>
<li>User and Kernel Space level information.</li>
</ul>
<p>Lastly, <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config" rel="nofollow noreferrer">kubernetes_sd_config</a> is the Prometheus configuration section that defines everything related to discovering scrape targets in Kubernetes.</p>
<p>You can decide in the config file what kind of information you want to gather and from whom.</p>
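<p>As a minimal sketch, a Prometheus scrape job using Kubernetes service discovery might look like this (the annotation filter is a common convention, not a requirement):</p>

```yaml
scrape_configs:
- job_name: kubernetes-pods
  kubernetes_sd_configs:
  - role: pod                # also available: node, service, endpoints, ingress
  relabel_configs:
  # keep only pods annotated with prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
```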
|
<p>Is there a way to create own boilerplate helm chart and then generate from it helm charts for my micro services (which will differ only in chart names)?</p>
| <p>Yes, you'd need to create a package of your base boilerplate chart and then reference it in the <code>requirements.yaml</code> of each chart that depends on it. </p>
<p>Ref: <a href="https://helm.sh/docs/developing_charts/#chart-dependencies" rel="nofollow noreferrer">https://helm.sh/docs/developing_charts/#chart-dependencies</a></p>
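<p>As a sketch, assuming the boilerplate chart is named <code>service-boilerplate</code> (a hypothetical name), each microservice chart would declare it like this and then run <code>helm dependency update</code>:</p>

```yaml
# requirements.yaml of a microservice chart
dependencies:
- name: service-boilerplate                     # hypothetical base chart name
  version: 0.1.0
  repository: "file://../service-boilerplate"   # or a chart repository URL
```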
|
<p><strong>TL;DR:</strong> How can I flexibly decode a k8s API object and inspect its top-level <code>metav1.ObjectMeta</code> struct without knowing the object's <code>Kind</code> in advance?</p>
<hr>
<p>I'm writing an admission controller endpoint that unmarshals a <code>metav1.AdmissionReview</code> object's <code>Request.Object.Raw</code> field into a concrete object based on the <code>Request.Kind</code> field - e.g.</p>
<pre class="lang-golang prettyprint-override"><code>if kind == "Pod" {
    var pod core.Pod
    // ...
    if _, _, err := deserializer.Decode(admissionReview.Request.Object.Raw, nil, &pod); err != nil {
        return nil, err
    }
    annotations := pod.ObjectMeta.Annotations
    // inspect/validate the annotations...
</code></pre>
<p>This requires knowing all possible types up front, or perhaps asking a user to supply a <code>map[kind]corev1.Object</code> that we can use to be more flexible.</p>
<p>What I'd like to instead achieve is something closer to:</p>
<pre class="lang-golang prettyprint-override"><code>var objMeta core.ObjectMeta
if _, _, err := deserializer.Decode(admissionReview.Request.Object.Raw, nil, &objMeta); err != nil {
    return nil, err
}
// if objMeta is populated, validate the fields, else
// assume it is an object that does not define an ObjectMeta
// as part of its schema.
</code></pre>
<p>Is this possible? The k8s API surface is fairly extensive, and I've crawled through the <a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Object" rel="noreferrer">metav1 godoc</a>, corev1 godoc & <a href="https://cs.k8s.io" rel="noreferrer">https://cs.k8s.io</a> for prior art without a decent example.</p>
<p>The closest I've found is possibly the <a href="https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#ObjectMetaAccessor" rel="noreferrer"><code>ObjectMetaAccessor</code></a> interface, but I'd need to get from an <code>AdmissionReview.Request.Object</code> (type <code>runtime.RawExtension</code>) to a <code>runtime.Object</code> first.</p>
| <p>I believe you can't find what you are looking for because, when decoding an object, Kubernetes uses <a href="https://github.com/kubernetes/apimachinery/blob/master/pkg/runtime/interfaces.go#L255" rel="nofollow noreferrer">GetObjectKind</a> and compares the result to a <a href="https://github.com/kubernetes/apimachinery/blob/master/pkg/runtime/scheme.go#L32" rel="nofollow noreferrer">Scheme</a> to convert the object to a concrete type, rather than using some <a href="https://en.wikipedia.org/wiki/Generic_programming" rel="nofollow noreferrer">generics</a>-like approach and interacting with the fields of an object without knowing its concrete type.</p>
<p>So you can use reflection on the decoded object instead, something like:</p>
<pre><code>// obj must be a pointer to the already-decoded concrete object
// (e.g. *core.Pod); reflection cannot operate on the raw JSON bytes.
k8sObjValue := reflect.ValueOf(obj).Elem()
k8sObjObjectMeta := k8sObjValue.FieldByName("ObjectMeta")
annotations, ok := k8sObjObjectMeta.FieldByName("Annotations").Interface().(map[string]string)
if !ok {
    panic("failed to retrieve annotations")
}
</code></pre>
<p>EDIT:</p>
<p>Or closer to your requirements, convert to an ObjectMeta object</p>
<pre><code>// obj is again a pointer to the decoded concrete object
k8sObjValue := reflect.ValueOf(obj).Elem()
objMeta, ok := k8sObjValue.FieldByName("ObjectMeta").Interface().(metav1.ObjectMeta)
if !ok {
    panic("failed to retrieve object metadata")
}
</code></pre>
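<p>If you'd rather avoid reflection entirely, another option is to unmarshal <code>Request.Object.Raw</code> into a small struct that only declares the metadata fields, ignoring the rest of the object. A self-contained sketch (the JSON literal below is a stand-in for the raw admission payload):</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// partialMeta captures only the metadata fields we care about; any
// other fields in the object's JSON are simply ignored.
type partialMeta struct {
	Metadata struct {
		Name        string            `json:"name"`
		Namespace   string            `json:"namespace"`
		Annotations map[string]string `json:"annotations"`
	} `json:"metadata"`
}

// extractMeta unmarshals the raw JSON of any Kubernetes object
// (e.g. admissionReview.Request.Object.Raw) into partialMeta.
func extractMeta(raw []byte) (partialMeta, error) {
	var m partialMeta
	err := json.Unmarshal(raw, &m)
	return m, err
}

func main() {
	raw := []byte(`{"kind":"Pod","metadata":{"name":"demo","namespace":"default","annotations":{"team":"core"}}}`)
	m, err := extractMeta(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Metadata.Name, m.Metadata.Annotations["team"])
}
```

<p>This works for any Kind, since every Kubernetes object that carries an <code>ObjectMeta</code> serializes it under the <code>metadata</code> key.</p>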
|
<p>I'm new to kubernetes and I'm following some tutorials here and there. I'm struggling to understand the different components and tools and one of these ones is kustomize. I saw different examples of kustomize usages and I don't understand one of them (even after searching it on the net). So can you explain to me the function of the following <code>kustomize</code> command:</p>
<pre><code>kustomize edit add configmap mnist-map-training --from-literal=trainSteps=200
</code></pre>
| <p>Kubernetes is a tool that basically orchestrates docker containers. We create deployments, services and ingresses using YAML, and these form the components of the cluster. Now let's say a team has come up with the Kubernetes cluster configuration YAML files for a development environment. For an integration environment the YAML files differ a little, and they differ even more for a production environment. The obvious and simple solution is to maintain three copies of the YAML files, but that might not work out in the long run: a change applied to one environment may need to be applied in the other environments, and missing such changes can cause issues.</p>
<p>Kustomize is a tool that addresses this issue. You create a <strong>base</strong> copy (per our example, let's assume the development environment) of the Kubernetes YAML config files along with a kustomization file. The kustomization file, in general, describes the resources (YAML files), configmaps and secrets to create. Then the diffs that produce the integration and production cluster configurations are created as <strong>overlays</strong>. You can use this <a href="https://kubectl.docs.k8s.io/pages/reference/kustomize.html" rel="nofollow noreferrer">link</a> as a reference; though it is not the latest, it might help. In addition there is documentation on <a href="https://github.com/kubernetes-sigs/kustomize/tree/master/examples" rel="nofollow noreferrer">GitHub</a> as well.</p>
<p>Now regarding this command,</p>
<pre class="lang-sh prettyprint-override"><code>kustomize edit add configmap mnist-map-training --from-literal=trainSteps=200
</code></pre>
<p>This command edits the kustomize file in the current directory, to create a snippet like this:</p>
<pre><code>configMapGenerator:
- name: mnist-map-training
  literals:
  - trainSteps=200
</code></pre>
<p>When the <code>kustomize build</code> command is run this creates a configmap yaml like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: mnist-map-training
data:
  trainSteps: "200"
</code></pre>
|
<p>I set up a k8s cluster on microk8s and I ported my application to it. I also added a horizontal auto-scaler which adds pods based on the cpu load. The auto-scaler works fine and it adds pods when there is load beyond the target and when I remove the load after some time it will kill the pods.</p>
<p>The problem is I noticed at the exact same moments that the auto-scaler is creating new pods some of the requests fail:</p>
<pre><code>POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST Response Code : 502
java.io.IOException: Server returned HTTP response code: 502 for URL: http://10.203.101.61/gateway/compile
POST Response Code : 502
java.io.IOException: Server returned HTTP response code: 502 for URL: http://10.203.101.61/gateway/compile
POST Response Code : 200
POST Response Code : 502
java.io.IOException: Server returned HTTP response code: 502 for URL: http://10.203.101.61/gateway/compile
POST Response Code : 502
java.io.IOException: Server returned HTTP response code: 502 for URL: http://10.203.101.61/gateway/compile
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
POST Response Code : 200
</code></pre>
<p>I like to know what is the reason for this and how I can fix it?</p>
<p>Update:
I think it is better I give you more information regarding my set up:</p>
<p>The traffic is coming from outside of the cluster but both the k8s node and the program that generates the requests are on one machine so there is no network problem. There is a custom nginx component which doesn't do load balancing and just act as a reverse proxy and sends the traffic to respective services.</p>
<p>I ran another test which gave me more info. I ran the same benchmarking test but this time instead of sending the requests to the reverse proxy (nginx) I used the IP address of that specific service and I had no failed request while auto-scaler did its job and launched multiple pods. I am not sure if the problem is Nginx or k8s?</p>
| <p>When the new pods are spawned, Kubernetes immediately starts to redirect traffic to them.
However, usually, it takes a bit of time for the pod to boot and become operational (ready).</p>
<p>To prevent this from happening, you can define a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">Readiness Probe</a> for your pods.
K8s will periodically call the pod on the readiness endpoint that you have provided, to determine if this pod is functional and ready to accept requests.
K8s won't route traffic to the pod until the readiness endpoint returns a successful result, with the exact check depending on the <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">type of probe</a> (check the "<em>Types of Probes</em>" section).</p>
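<p>A sketch of such a probe on a container (the path, port, and timings below are assumptions to adapt to your app):</p>

```yaml
containers:
- name: gateway                        # container from your deployment
  image: my-registry/gateway:latest    # placeholder image
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /healthz                   # assumed health endpoint of your app
      port: 80
    initialDelaySeconds: 5             # wait before the first check
    periodSeconds: 5                   # re-check every 5 seconds
```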
|
<p>I've built a docker image based on <a href="https://hub.docker.com/_/httpd" rel="nofollow noreferrer">httpd:2.4</a>. In my k8s deployment I've defined the following <code>securityContext</code>:</p>
<pre><code>securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 431
  allowPrivilegeEscalation: false
</code></pre>
<p>When I apply the deployment without this <code>securityContext</code> everything works fine, the server starts up correctly, etc. However when I add in the above <code>securityContext</code> my pod has the status <code>CrashLoopBackOff</code> and I get the following from <code>$ kubectl logs...</code></p>
<pre><code>(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
</code></pre>
<p>From searching around online I've found that this is because apache needs to be root in order to run, so running as non-root will fail.</p>
<p>I've also found that <code>httpd.conf</code> has the following</p>
<pre><code>#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User daemon
Group daemon
</code></pre>
<p>Which seems to suggest that if I don't use <code>runAsNonRoot</code> or <code>runAsUser</code> in <code>securityContext</code> it should automatically switch to whichever user I specify in <code>httpd.conf</code>. In my case I created a user/group <code>swuser</code> and edited <code>httpd.conf</code> accordingly. However when I run the image locally with <code>docker run -p 5000:80 my-registry/my-image:1.0.12-alpha</code> and then run <code>docker exec -it my-container whoami</code> it prints <code>root</code>.</p>
<p>So my question is: what can I do to run my container safely as non-root in k8s (and be sure it is non-root)?</p>
| <p>Run Apache on a port greater than 1024. </p>
<p>Ports below 1024 are privileged ports that only the root user may bind to.</p>
<p>Since you will have an ingress load balancer in front of it anyway, the internal port shouldn't matter :-)</p>
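<p>Concretely, that means changing the <code>Listen</code> directive in <code>httpd.conf</code> and matching it in the pod and Service specs; a sketch, with 8080 as the assumed port:</p>
<pre><code># httpd.conf
Listen 8080
</code></pre>
<p>Then set <code>containerPort: 8080</code> on the container and, in the Service, map <code>port: 80</code> to <code>targetPort: 8080</code>, so clients outside the pod still see port 80 while Apache only binds an unprivileged port.</p>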
|
<p>I have created a horizontal autoscaler based on CPU usage and it works fine. I want to know how I can configure the autoscaler so that it only scales up, without scaling down. The reason I want this: under high load/requests I create some operators, and I want to keep them alive even if they are idle for a while, but the autoscaler kills the pods, scaling down to the minimum replicas after some time without load.</p>
<p>My autoscaler:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: gateway
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: gateway
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 20
</code></pre>
<p>Edit:
By operator, I mean small applications/programs that are running in a pod.</p>
| <p>You can add the <code>--horizontal-pod-autoscaler-downscale-stabilization</code> flag to kube-controller-manager as described <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">in the docs</a>. The default delay is 5 minutes.</p>
<p>To add the flag, edit <code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code> on the master node; the static pod will then be recreated automatically.</p>
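<p>For example, to keep replicas around for 30 minutes after load drops (the value is an assumption; tune it to your workload), add the flag to the command list in that manifest:</p>
<pre><code>spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-downscale-stabilization=30m
    # ...existing flags unchanged...
</code></pre>
<p>This doesn't disable scale-down entirely (autoscaling/v1 has no supported scale-up-only mode), but it delays it long enough that short idle periods won't kill your pods.</p>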
|
<p>What is the difference between selecting the user to run as in the <code>securityContext.runAsUser</code> section of my k8s deployment, vs specifying the user using <code>USER myuser</code> in the Dockerfile? </p>
<p>I'm particularly interested in if there are security concerns associated with <code>USER myuser</code> that don't exist under <code>securityContext</code></p>
| <h3>MustRunAsNonRoot</h3>
<p><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups" rel="nofollow noreferrer">Users and groups</a></p>
<blockquote>
<p>Requires that the pod be submitted with a <code>non-zero runAsUser</code> or have the <code>USER directive defined</code> (using a numeric UID) in the image.
Pods which have specified neither runAsNonRoot nor runAsUser settings
will be mutated to set <code>runAsNonRoot=true</code>, thus requiring a defined
<code>non-zero numeric USER directive</code> in the container. No default
provided. Setting allowPrivilegeEscalation=false is strongly
recommended with this strategy.</p>
</blockquote>
<p>So <code>USER directive</code> is important when you want the container to be started as non-root.</p>
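<p>For example, to satisfy <code>runAsNonRoot</code> from the image side, the <code>USER</code> directive must be a numeric UID, because the kubelet cannot verify that a named user is non-root. A sketch (the UID 1000 is an arbitrary assumption):</p>
<pre><code># Dockerfile
RUN useradd -r -u 1000 myuser
USER 1000    # must be the numeric UID, not "myuser"
</code></pre>
<p>With a numeric <code>USER</code> baked into the image, <code>runAsNonRoot: true</code> can pass even without setting <code>runAsUser</code> in the pod spec; setting both is fine, and the pod-level <code>runAsUser</code> wins if they differ.</p>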
|
<p>Using the Python client, I've written a function to evict all pods on a node. How can I monitor/watch for all pods to be fully evicted?</p>
<p>I'm using the create_namespaced_pod_eviction method to evict all pods on a single node. While this works, it doesn't wait for the process to finish before continuing. I need the eviction process 100% complete before moving on in my script. How can I monitor the status of this process? Similar to kubectl, I'd like my function to wait for each pod to evict before returning. </p>
<pre class="lang-py prettyprint-override"><code># passes list of pods to evict function
def drain_node(self):
print("Draining node", self._node_name)
self.cordon_node()
pods = self._get_pods()
response = []
for pod in pods:
response.append(self._evict_pod(pod))
return response
# calls the eviction api on each pod
def _evict_pod(self, pod, delete_options=None):
name = pod.metadata.name
namespace = pod.metadata.namespace
body = client.V1beta1Eviction(metadata=client.V1ObjectMeta(name=name, namespace=namespace))
response = self._api.create_namespaced_pod_eviction(name, namespace, body)
return response
# gets list of pods to evict
def _get_pods(self):
all_pods = self._api.list_pod_for_all_namespaces(watch=True, field_selector='spec.nodeName=' + self._node_name)
user_pods = [p for p in all_pods.items
if (p.metadata.namespace != 'kube-system')]
return user_pods
</code></pre>
| <p>As given in this <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#create_namespaced_pod_eviction" rel="nofollow noreferrer">link</a>, the <code>create_namespaced_pod_eviction</code> call returns a <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1beta1Eviction.md" rel="nofollow noreferrer"><code>V1beta1Eviction</code></a> object. Its <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1ObjectMeta.md" rel="nofollow noreferrer"><code>ObjectMeta</code></a> contains the <code>deletion_timestamp</code> field, which you can use to determine whether the pod is already being deleted. Alternatively, polling the pod status should give you the same ObjectMeta.</p>
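<p>One way to make <code>drain_node</code> block is to poll the same field-selector query used in <code>_get_pods</code> until it comes back empty. A minimal sketch (the <code>wait_for_evictions</code> helper, its timeout, and its poll interval are my own assumptions, not part of the client API):</p>

```python
import time

def wait_for_evictions(get_remaining_pods, timeout=300, interval=5):
    """Block until get_remaining_pods() reports no pods left on the node.

    get_remaining_pods is any zero-argument callable returning the pods
    still scheduled on the node -- e.g. the _get_pods method above.
    Raises TimeoutError if pods remain after `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while True:
        remaining = get_remaining_pods()
        if not remaining:
            return True
        if time.monotonic() >= deadline:
            raise TimeoutError(
                "%d pod(s) still on node after %ss" % (len(remaining), timeout))
        time.sleep(interval)
```

<p>In <code>drain_node</code> you could call <code>wait_for_evictions(self._get_pods)</code> after the eviction loop. Note that pods managed by a DaemonSet are typically recreated on the same node after eviction, so you may need to exclude them from the query, the way <code>kubectl drain</code> does.</p>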
|
<p>I inherited a Kubernetes/Docker setup, and I accidentally crashed the pod by changing something relating to the DB password.</p>
<p>I am trying to troubleshoot this.</p>
<p>I don't have much Kubernetes or Docker experience, so I'm still learning how to do things.</p>
<p>The value is contained inside the db-user-pass credential I believe, which is an Opaque type secret.</p>
<p>I'm describing it:</p>
<pre><code>kubectl describe secrets/db-user-pass
Name: db-user-pass
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 16 bytes
username: 13 bytes
</code></pre>
<p>but I have no clue how to get any data from this secret. The example on the Kubernetes site seems to assume I'll have a base64 encoded string, but I can't even seem to get that. How do I get the value for this?</p>
| <p>You can use <code>kubectl get secrets/db-user-pass -o yaml</code> or <code>-o json</code> where you'll see the base64-encoded <code>username</code> and <code>password</code>. You can then copy the value and decode it with something like <code>echo <ENCODED_VALUE> | base64 -D</code> (Mac OS X).</p>
<p>A more compact one-liner for this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get secrets/db-user-pass --template={{.data.password}} | base64 -D
</code></pre>
<p>and likewise for the username:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get secrets/db-user-pass --template={{.data.username}} | base64 -D
</code></pre>
<p>Note: on GNU/Linux, the base64 flag is <code>-d</code>, not <code>-D</code>.</p>
|
<p>I'm trying to deploy a login consent provider with hydra.</p>
<p>Here is the <code>yaml</code> file.</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: login-consent-deployment
labels:
app: login-consent-nginx
spec:
replicas: 1
selector:
matchLabels:
app: login-consent-nginx
template:
metadata:
labels:
app: login-consent-nginx
spec:
containers:
- name: login-consent-nginx
image: ubergarm/openresty-nginx-jwt:latest
command: ["/usr/local/openresty/bin/openresty", "-g", "daemon off;", "-c", "/usr/local/openresty/nginx/conf/nginx.conf"]
ports:
- name: http-lc-port
containerPort: 80
resources:
limits:
cpu: "0.1"
memory: 32Mi
volumeMounts:
- mountPath: /etc/nginx/conf.d
name: login-consent-nginx-conf-volume
- mountPath: /usr/local/openresty/nginx/html
name: login-consent-www-resources
volumes:
- name: login-consent-nginx-conf-volume
configMap:
name: login-consent-conf
items:
- key: login-consent.conf
path: login-consent.conf
- name: login-consent-www-resources
hostPath:
path: C:\Users\myUser\www
type: Directory
---
apiVersion: v1
kind: Service
metadata:
name: login-consent-service
spec:
type: LoadBalancer
selector:
app: login-consent-nginx
ports:
- protocol: TCP
name: http-lc-nginx
port: 3000
targetPort: http-lc-port
</code></pre>
<p>The error I get in the pod description after deployment</p>
<pre><code>Warning FailedMount 2s (x4 over 6s) kubelet, docker-for-desktop MountVolume.SetUp failed for volume "login-consent-www-resources" : hostPath type check failed: C:\Users\myUser\www is not a directory
</code></pre>
<p><code>www</code> is a folder in my user home directory</p>
<pre><code>$docker run --rm -v c:/Users/myUser:/data alpine ls /data
3D Objects
...
...
...
...
www
</code></pre>
<p>I wonder what I'm doing wrong here? I'm using docker for windows with its own integrated Kubernetes and I have enabled my shared folder C inside dockers.</p>
<p>Any help?</p>
| <p>So, inside the Docker container, your <code>c:/Users/myUser</code> is now available as <code>/data</code>. Hence, you have to use <code>/data/www</code> as the <code>hostPath</code>.</p>
|
<p>I have already prepared my docker image.
My Dockerfile: </p>
<pre><code>FROM python:3.7-alpine
# Creating Application Source Code Directory
RUN mkdir -p /FogAPP/src
# Setting Home Directory for containers
WORKDIR /FogAPP/src
# Copying src code to Container
COPY fogserver.py /FogAPP/src
# Application Environment variables
ENV APP_ENV development
# Exposing Ports
EXPOSE 31700
# Setting Persistent data
VOLUME ["/app-data"]
#Running Python Application
CMD ["python", "fogserver.py"]
</code></pre>
<p>My source code fogserver.py (socket programming) :</p>
<pre><code>import socket
from datetime import datetime
import os
def ReceiveDATA():
hostname = socket.gethostname()
i=0
host = socket.gethostbyname(hostname)
port = 31700
while True:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # Create a socket object
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port)) # Bind to the port
s.listen(10) # Accepts up to 10 clientections.
print("############################# ",i+1," #################################")
print('Server listening.... on '+ str(host))
client, address = s.accept()
print('Connection from : ',address[0])
i+=1
date=str(datetime.now())
date=date.replace('-', '.')
date=date.replace(' ', '-')
date=date.replace(':', '.')
PATH = 'ClientDATA-'+date+'.csv'
print(date+" : File created")
f = open(PATH,'wb') #open in binary
# receive data and write it to file
l = client.recv(1024)
while (l):
f.write(l)
l = client.recv(1024)
f.close()
dt=str(datetime.now())
dt=dt.replace('-', '.')
dt=dt.replace(' ', '-')
dt=dt.replace(':', '.')
print(dt+' : '+'Successfully get the Data')
feedback = dt
client.send(feedback.encode('utf-8'))
client.close()
s.close()
if __name__ == '__main__':
ReceiveDATA()
</code></pre>
<p>My kubernetes cluster is ready : </p>
<pre><code>kubectl get nodes
NAME STATUS ROLES AGE VERSION
rpimanager Ready master 3d23h v1.15.0
rpiworker1 Ready worker 3d23h v1.15.0
rpiworker2 Ready worker 3d23h v1.15.0
</code></pre>
<p>Then I have deployed the docker image in 2 pods through the kubernetes dashboard :</p>
<pre><code>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cluster-fogapp NodePort 10.101.194.192 <none> 80:31700/TCP 52m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d23h
</code></pre>
<p>So actually the docker image is running in two pods: </p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
cluster-fogapp-c987dfffd-6zc2x 1/1 Running 0 56m
cluster-fogapp-c987dfffd-gq5k4 1/1 Running 0 56m
</code></pre>
<p>and I also have client source code, which is also socket programming. Here I ran into a problem: which address of the server in the cluster do I have to use?</p>
<p>This is my client code source : </p>
<pre><code>
host = "????????????"#Which Address should I set
port = 31700
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
PATH = GenerateDATA()
f = open (PATH, "rb")
l = f.read(1024)
while (l):
s.send(l)
l = f.read(1024)
print(dt+' : '+'Done sending')
</code></pre>
<p>I have tried the address of the master node and I get an error of Connection refused.</p>
<p>I would like just to clarify that I am working on a cluster composed of raspberry Pi3 and the client is on my own pc. The pc and the raspberry cards are connected to the same local network.</p>
<p>Thank you for helping me. </p>
| <p>You can access the service with a worker node's IP, since you exposed the service as NodePort.</p>
<p><code>WorkerNode:<NodePort></code></p>
<p>The problem with this approach is that if any of the nodes is dead, you might face issues. The ideal solution is to expose the service as a LoadBalancer, so that you can access the service from outside the cluster with an external IP or DNS.</p>
|
<p>I am attempting to run a docker-compose file using <code>kompose up</code>. I am on Mac OS High Sierra, running latest and greatest versions of everything (Docker CE 17.12.0, VirtualBox 5.2.8 and kompose 1.11.0). </p>
<p>My docker-compose file is:</p>
<pre><code>version: '2'
services:
es:
build: ./elastic-search
image: horcle/es
ports:
- "9200:9200"
- "9300:9300"
volumes:
- ./data:/elasticsearch/data
tab:
build: ./nlp-tab
image: horcle/nlptab
ports:
- "8000:8000"
volumes:
- ./data:/app/data
</code></pre>
<p>When I run <code>kompose up</code> the first image gets successfully built and pushed to Docker.io. However, I get the following error on the second image: <code>FATA Error while deploying application: k.Transform failed: Unable to build Docker image for service tab: Unable to create a tarball: archive/tar: write too long</code></p>
<p>I Googled this, and the issue appears to be with symlinks, which are nowhere in the directory I am using to build this image.</p>
<p>As a test, I did a <code>docker build -t horcle/nlptab .</code> followed by a successful push to Docker.io using <code>docker push horcle/nlptab</code>. Also, <code>docker-compose up</code> runs just fine, as well.</p>
<p>I'm not exactly sure why I cannot run a <code>kompose up</code> to do the same thing.</p>
| <p>I had this problem with a Node.js project. Deleting the node_modules folder (<code>rm -rf node_modules</code>) fixed it for me.</p>
|
<p>Can I know why kubectl run sometimes creates a deployment and sometimes a pod? </p>
<p>You can see the 1st one creates a pod and the 2nd one creates a deployment. The only diff is --restart=Never.</p>
<pre><code>
// 1
chams@master:~/yml$ kubectl run ng --image=ngnix --command --restart=Never --dry-run -o yaml
apiVersion: v1
kind: Pod
..
status: {}
//2
chams@master:~/yml$ kubectl run ng --image=ngnix --command --dry-run -o yaml
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: ng
name: ng
..
status: {}
</code></pre>
| <p>The flags are intentionally meant to create different kinds of objects. I am copying from the help of <code>kubectl run</code>:</p>
<pre><code> --restart='Always': The restart policy for this Pod. Legal values [Always,
OnFailure, Never]. If set to 'Always' a deployment is created, if set to
'OnFailure' a job is created, if set to 'Never', a regular pod is created. For
the latter two --replicas must be 1. Default 'Always', for CronJobs `Never`.
</code></pre>
<ul>
<li><code>Never</code> creates a bare pod with <code>restartPolicy: Never</code>; if it terminates, nothing restarts it.</li>
<li><code>OnFailure</code> creates a Job, which re-runs the pod until it exits successfully.</li>
<li><code>Always</code> creates a Deployment; the Deployment monitors the pod and restarts it in case of failure.</li>
</ul>
|
<p>I would like to copy the kube admin config file from the Kubernetes master host to the nodes using ansible synchronize but that fails due to a missing python interpreter, but I have already installed docker on all machines without any issue. </p>
<p>See my task </p>
<p><code>- name: admin conf file to nodes
environment:
ANSIBLE_PYTHON_INTERPRETER: python3
synchronize:
src: /home/{{ansible_ssh_user}}/.kube/config
dest: /home/{{ansible_ssh_user}}
delegate_to: "{{item}}"
loop: "{{groups.node}}"
</code></p>
| <p>You can use the synchronize module only when rsync is installed on both ends, i.e. on the source server (the kube master in your case) and on the kube nodes.</p>
<h3>Method 1: push from the master (needs rsync on the master)</h3>
<p>Synchronize uses <code>push</code> mode by default. </p>
<pre><code>- hosts: nodes
tasks:
- name: Transfer file from master to nodes
synchronize:
src: /etc/kubernetes/admin.conf
dest: $HOME/.kube/config
delegate_to: "{{ master }}"
</code></pre>
<h3>Method 2: to use fetch and copy modules</h3>
<pre><code>- hosts: all
  tasks:
    - name: Fetch the file from the master to the ansible controller
      run_once: yes
      fetch: src=/etc/kubernetes/admin.conf dest=temp/ flat=yes
      when: ansible_hostname == 'master'

    - name: Copy the file from the ansible controller to the nodes
      copy: src=temp/admin.conf dest=$HOME/.kube/config
      when: ansible_hostname != 'master'
</code></pre>
<p>Hope this helps. </p>
|
<p>I checked the pods in the kube-system namespace and noticed that some pods share the same IP address. The pods that share the same IP address appear to be on the same node.</p>
<p><a href="https://i.stack.imgur.com/OSiy0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OSiy0.png" alt="enter image description here"></a> </p>
<p>In the Kubernetes documentation it says that "Every pod gets its own IP address" (<a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/networking/</a>). I'm confused as to how the same IP for some pods came about.</p>
| <p>This was reported in <a href="https://github.com/kubernetes/kubernetes/issues/51322" rel="noreferrer">issue 51322</a> and can depend on the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="noreferrer">network plugin</a> you are using.</p>
<p>The issue was seen when using the basic <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet" rel="noreferrer">kubenet</a> network plugin on Linux.</p>
<p>Sometimes, a <a href="https://github.com/kubernetes/kubernetes/issues/51322#issuecomment-388517154" rel="nofollow noreferrer">reset/reboot can help</a></p>
<blockquote>
<p>I suspect nodes have been configured with overlapped podCIDRs for such cases.<br>
The pod CIDR could be checked by <code>kubectl get node -o jsonpath='{.items[*].spec.podCIDR}'</code></p>
</blockquote>
|
<p>Created a two node cluster with kubeadm. </p>
<p>Installed <strong>istio 1.1.11</strong></p>
<p><strong>kubectl version</strong></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Executed the commands as given in istio documentation</p>
<p>$<code>for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done</code></p>
<p>$ <code>kubectl apply -f install/kubernetes/istio-demo.yaml</code></p>
<p>Services got created.</p>
<p>$ <code>kubectl get pods -n istio-system</code></p>
<p><strong>Telemetry and policy pod status turned to CrashLoopBackOff status</strong></p>
<pre><code>istio-policy-648b5f5bb5-dv5np 1/2 **CrashLoopBackOff** 5 2m52s
istio-telemetry-57946b8569-9m7gd 1/2 **CrashLoopBackOff** 5 2m52s
</code></pre>
<p>While describing the pod, getting the following error</p>
<pre><code> Warning FailedMount 2m16s (x2 over 2m18s) kubelet, ip-xxx-xxx-xxx-xxx MountVolume.SetUp failed for volume "policy-adapter-secret" : couldn't propagate object cache: timed out waiting for the condition
</code></pre>
<p>Tried restarting the VM, restarted docker service. It did not help.</p>
<p>Because of the above error, the pod repeatedly try to restart and then crash.</p>
<p>Need your help in resolving this</p>
| <p>These Mixer services may be crashlooping if your node(s) don't have enough memory to run Istio. More and more, people use tools like <a href="https://meshery.layer5.io/docs/installation#platform-docker-" rel="nofollow noreferrer">Meshery</a> to install Istio (and other service meshes), because it highlights points of contention such as memory. When deploying either the <code>istio-demo</code> or <code>istio-demo-auth</code> configuration profiles, you'll want to ensure you have a <a href="https://meshery.layer5.io/docs/installation#platform-docker-" rel="nofollow noreferrer">minimum of 4GB RAM</a> per node (particularly if the Istio control plane is only deployed to one node).</p>
|
<p>I'm following the Kubernetes install instructions for Helm: <a href="https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html" rel="nofollow noreferrer">https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html</a>
With Cert-manager v0.81 on K8 v1.15, Ubuntu 18.04 on-premise.
When I get to testing the installation, I get these errors:</p>
<pre><code>error when creating "test-resources.yaml": Internal error occurred: failed calling webhook "issuers.admission.certmanager.k8s.io": the server is currently unable to handle the request
Error from server (InternalError): error when creating "test-resources.yaml": Internal error occurred: failed calling webhook "certificates.admission.certmanager.k8s.io": the server is currently unable to handle the request
</code></pre>
<p>If I apply the test-resources.yaml before installing with Helm, I'm not getting the errors but it is still not working.
These errors are new to me, as Cert-manager used to work for me on my previous install about a month ago, following the same installation instructions.
I've tried with Cert-Manager 0.72(CRD 0.7) as well as I think that was the last version I managed to get installed but its not working either.</p>
<p>What does these errors mean?</p>
<p><strong>Update</strong>: It turned out to be an internal CoreDNS issue on my cluster. Somehow not being configured correctly. Possible related to wrong POD_CIDR configuration.</p>
| <p>If you experience this problem, check the logs of CoreDNS (or KubeDNS) and you may see lots of errors related to contacting services. Unfortunately, I no longer have the exact errors, but this is how I figured out that my network setup was invalid.</p>
<p>I'm using Calico (this will apply to other network plugins as well) and its pool was not set to the same range as the POD_CIDR network that I initialized my Kubernetes with.</p>
<p>Example:
1. Set up K8s: </p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>
<ol start="2">
<li><p>Configure Calico.yaml: </p>
<pre><code>- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
</code></pre></li>
</ol>
|
<p>New to k8s.</p>
<p>Went through <a href="https://cloud.spring.io/spring-cloud-static/spring-cloud-kubernetes/2.1.0.RC1/multi/multi__kubernetes_propertysource_implementations.html" rel="nofollow noreferrer">https://cloud.spring.io/spring-cloud-static/spring-cloud-kubernetes/2.1.0.RC1/multi/multi__kubernetes_propertysource_implementations.html</a></p>
<p>I am having multi profiles in config map and want my app to pickup the properties based on the spring.profiles.active.</p>
<p><strong>Case 1:-</strong></p>
<p>My ConfigMap looks like,</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: example-configmap-overriding-new-02
data:
application.properties: |-
globalkey = global key value
TeamName = Team Name value
Purpose = Purpose value
RootFile = Root file value
Company = company value
Place = Place value
Country = Country value
---
spring.profiles = qa
globalkey = global key qa value
TeamName = Team Name qa value
Purpose = Purpose qa value
RootFile = Root file qa value
---
spring.profiles = prod
globalkey = global key prod value
Company = company prod value
Place = Place prod value
Country = Country prod value
</code></pre>
<p>My deployment file looks like,</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: demo-configmapsingleprofile
spec:
selector:
matchLabels:
app: demo-configmapsingleprofile
replicas: 1
template:
metadata:
labels:
app: demo-configmapsingleprofile
spec:
serviceAccountName: config-reader
containers:
- name: demo-configmapsingleprofile
image: image name
ports:
- containerPort: 8080
envFrom:
- configMapRef:
name: example-configmap-overriding-new-02
securityContext:
privileged: false
</code></pre>
<p>My Config file in spring boot looks like,</p>
<pre><code>@Configuration
public class ConfigConsumerConfig {
@Value(value = "${globalkey}")
private String globalkey;
@Value(value = "${TeamName}")
private String teamName;
@Value(value = "${Purpose}")
private String purpose;
@Value("${RootFile}")
private String rootFile;
@Value("${Company}")
private String company;
@Value("${Place}")
private String place;
@Value("${Country}")
private String country;
//With getters and setters
}
</code></pre>
<p>My application.properties looks like,</p>
<pre><code>spring.profiles.active=prod
spring.application.name=example-configmap-overriding-new-02
spring.cloud.kubernetes.config.name=example-configmap-overriding-new-02
spring.cloud.kubernetes.config.namespace=default
spring.cloud.kubernetes.config.sources[0].name=example-configmap-overriding-new-02
spring.cloud.kubernetes.config.enabled=true
</code></pre>
<p><strong>App is starting fine and it grabbed the values from the config map. But it is picking from wrong profile. Some values are picked up from qa profile.</strong> </p>
<p>I am getting the final result as:-</p>
<pre><code>{"globalkey":"global key prod value","teamName":"Team Name qa value","purpose":"Purpose qa value","rootFile":"Root file qa value","company":"company prod value","place":"Place prod value","country":"Country prod value"}
</code></pre>
<p><strong>Case 2:-</strong></p>
<p>But, when I used the config map with yaml, as given below,</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: example-configmap-overriding-new-02
data:
application.yml: |-
globalkey : global key value
TeamName : Team Name value
Purpose : Purpose value
RootFile : Root file value
Company : company value
Place : Place value
Country : Country value
---
spring:
profiles: qa
globalkey : global key qa value
TeamName : Team Name qa value
Purpose : Purpose qa value
RootFile : Root file qa value
---
spring:
profiles: prod
globalkey : global key prod value
Company : company prod value
Place : Place prod value
Country : Country prod value
</code></pre>
<p><strong>I am getting the result as expected. It picks from prod profile as expected.</strong></p>
<pre><code>{"globalkey":"global key prod value","teamName":"Team Name value","purpose":"Purpose value","rootFile":"Root file value","company":"company prod value","place":"Place prod value","country":"Country prod value"}
</code></pre>
<p><strong>Case 2 is working as expected but not the Case 1.</strong></p>
<p>Am I doing any mistake/misunderstanding with config map. Could some one advice here?</p>
<p>Thx.</p>
| <p>Given that there are no other answers yet...let me try to help you.</p>
<p>The <code>---</code> divider in a YAML file separates multiple YAML documents; not so in a properties file. I'm not sure how the properties file gets loaded at all with those <code>---</code>, but the way it is structured right now leads to duplicate keys, where each key overwrites the same key defined before it.</p>
<p>Create multiple key/value pairs - one for each profile - when using properties files. Something like this:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: example-configmap-overriding-new-02
data:
application.properties: |-
globalkey = global key value
TeamName = Team Name value
Purpose = Purpose value
RootFile = Root file value
Company = company value
Place = Place value
Country = Country value
application-qa.properties: |-
spring.profiles = qa
globalkey = global key qa value
TeamName = Team Name qa value
Purpose = Purpose qa value
RootFile = Root file qa value
application-prod.properties: |-
spring.profiles = prod
globalkey = global key prod value
Company = company prod value
Place = Place prod value
Country = Country prod value
</code></pre>
<p>They will "materialize" inside a running container as individual files at the mounted location.</p>
|
<p>In the context of Kubernetes or elsewhere, does it make sense to have one KSQL Server per application? Reading the capacity planning guide for KSQL Server, the basic settings seem aimed at running multiple queries on one server.
However, I feel that to have better control over scaling up and down with Kubernetes, it would make more sense to fix the number of threads per query and launch a server configured in kube with, say, 1 CPU, where only one application would run. I am not sure how heavyweight a KSQL Server is, and whether that makes actual sense or not. </p>
<p>Any recommendations? </p>
| <p>First of all, what you have mentioned is clearly doable. You can <a href="https://docs.confluent.io/current/ksql/docs/installation/install-ksql-with-docker.html" rel="nofollow noreferrer">run KSQL Server with Docker</a>, so you could have a container orchestrator such as Kubernetes or Swarm maintaining and scheduling those KSQL Server instances.</p>
<p>Here is how this would play out:</p>
<ol>
<li>Each KSQL Instance will join a group of other KSQL Instances with
the same <code>KSQL_SERVICE_ID</code> that use the same Kafka Cluster defined by <code>KSQL_KSQL_STREAMS_BOOTSTRAP_SERVERS</code></li>
<li>You can create several KSQL Server Clusters, i.e for different
applications, just use different <code>KSQL_SERVICE_ID</code> while using the
same Kafka Cluster. </li>
</ol>
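<p>A sketch of what actually separates two pools: only the service ID differs, using the same setting names as above (the values shown are assumptions):</p>
<pre><code># Deployment env for application A's KSQL pool
env:
- name: KSQL_SERVICE_ID
  value: "app_a_"
- name: KSQL_KSQL_STREAMS_BOOTSTRAP_SERVERS
  value: "kafka:9092"

# Deployment env for application B's KSQL pool (same Kafka cluster)
env:
- name: KSQL_SERVICE_ID
  value: "app_b_"
- name: KSQL_KSQL_STREAMS_BOOTSTRAP_SERVERS
  value: "kafka:9092"
</code></pre>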
<p>As a result, you now you have:</p>
<ol>
<li>Multiple Containerized KSQL Server Instances managed by a container
orchestrator such as Kubernetes.</li>
<li>All of the KSQL Instances are connected to the Same Kafka Cluster (you can also have different Kafka Clusters for different <code>KSQL_SERVICE_ID</code>)</li>
<li>The KSQL Server Instances can be grouped in different applications
(different <code>KSQL_SERVICE_ID</code>) in order to achieve separation of
concerns so that scalability, security and availability can be
better maintained.</li>
</ol>
<p>Regarding the coexistence of several KSQL Server Instances (maybe with different <code>KSQL_SERVICE_ID</code>) on the same server, you should know that the available machine resources can be monopolized by a greedy instance, causing problems for the less greedy instances. With Kubernetes you could set resource limits on your Pods to avoid this, but greedy instances will be limited and slowed down. </p>
<p>Confluent <a href="https://docs.confluent.io/current/ksql/docs/capacity-planning.html#general-guidelines" rel="nofollow noreferrer">advice regarding multi-tenancy</a>:</p>
<blockquote>
<p>We recommend against using KSQL in a multi-tenant fashion. For
example, if you have two KSQL applications running on the same node,
and one is greedy, you're likely to encounter resource issues related
to multi-tenancy. We recommend using a single pool of KSQL Server
instances per use case. You should deploy separate applications onto
separate KSQL nodes, because it becomes easier to reason about scaling
and resource utilization. Also, deploying per use case makes it easier
to reason about failovers and replication.</p>
</blockquote>
<p>A possible drawback is the overhead you'll have if you run multiple KSQL Server instances (each with a Java application footprint) in the same pool while having no work for them to do (i.e. no schedulable tasks due to a lack of partitions on your topic(s)), or simply because you have very little workload. You might be able to do the same job with fewer instances, avoiding idle or nearly idle instances. </p>
<p>Of course, stuffing all stream processing, maybe for completely different use cases or projects, onto a single KSQL Server or pool of KSQL Servers may bring its own internal concurrency issues, development cycle complexities, management overhead, etc. </p>
<p>I guess something in the middle will work fine: use a pool of KSQL Server instances for a single project or use case, which in turn might translate to a pipeline consisting of a topology of several sources, processors and sinks, implemented by a number of KSQL queries.</p>
<p>Also, don't forget about the scaling mechanisms of Kafka, Kafka Streams and KSQL (built on top of Kafka Streams) discussed in the <a href="https://stackoverflow.com/questions/56926562/ksql-query-number-of-thread/56926770#56926770">previous question you've posted</a>.</p>
<p>All of these mechanisms can be found here:</p>
<p><a href="https://docs.confluent.io/current/ksql/docs/capacity-planning.html" rel="nofollow noreferrer">https://docs.confluent.io/current/ksql/docs/capacity-planning.html</a>
<a href="https://docs.confluent.io/current/ksql/docs/concepts/ksql-architecture.html" rel="nofollow noreferrer">https://docs.confluent.io/current/ksql/docs/concepts/ksql-architecture.html</a>
<a href="https://docs.confluent.io/current/ksql/docs/installation/install-ksql-with-docker.html" rel="nofollow noreferrer">https://docs.confluent.io/current/ksql/docs/installation/install-ksql-with-docker.html</a></p>
|
<p>I'm using Azure DevOps to handle PBIs, repos, PRs, and builds, but all my infrastructure, including Kubernetes, is managed by AWS.</p>
<p>There's no documentation, nor a "right and easy way", for deploying to AWS EKS using Azure DevOps Tasks.</p>
<p>I found <a href="https://www.dragonspears.com/blog/a-ci-cd-pipeline-with-azure-devops-and-aws-managed-kubernetes" rel="noreferrer">this solution</a>; it's a good solution, but it would be awesome to know how you guys resolve it, or if there are more approaches.</p>
| <p>After some research and trial and error, I found another way to do it, without messing around with shell scripts.</p>
<p>You just need to apply the following to Kubernetes. It will create a ServiceAccount and bind it to a custom Role; that Role will have the permissions to create/delete deployments and pods (tweak it for permissions on services).</p>
<p><strong>deploy-robot-conf.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-robot
automountServiceAccountToken: false
---
apiVersion: v1
kind: Secret
metadata:
  name: deploy-robot-secret
  annotations:
    kubernetes.io/service-account.name: deploy-robot
type: kubernetes.io/service-account-token
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deploy-robot-role
  namespace: default
rules: # ## Customize these to meet your requirements ##
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: global-rolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: deploy-robot
  namespace: default
roleRef:
  kind: Role
  name: deploy-robot-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>This grants the minimum permissions needed for Azure DevOps to be able to deploy to the cluster.</p>
<p><strong>Note:</strong> Please tweak the rules in the Role resource to meet your needs, for instance permissions on Service resources.</p>
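<p>For instance, a sketch of an extra rule (appended under <code>rules:</code>) that would also allow the pipeline to manage Services; the verbs listed here are an assumption, adjust them to what your release actually does:</p>
<pre><code>- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "create", "update", "delete"]
</code></pre>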
<p>Then go to your release and create a Kubernetes Service Connection:</p>
<p><a href="https://i.stack.imgur.com/hvPD0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hvPD0.png" alt="Kubernetes Service Connection"></a></p>
<p>Fill in the boxes and follow the steps required to get the secret for your service account; remember that it is <strong>deploy-robot</strong> if you didn't change the yaml file.</p>
<p><a href="https://i.stack.imgur.com/aFwNf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/aFwNf.png" alt="Kubernetes Service Connection"></a></p>
<p>And then just use your Kubernetes Connection:</p>
<p><a href="https://i.stack.imgur.com/yr2se.png" rel="noreferrer"><img src="https://i.stack.imgur.com/yr2se.png" alt="Release with Kubernetes Connection"></a></p>
|
<p>As we can see in <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">this documentation</a>, a private cluster is accessible by the VMs (GCP compute instances) in the same subnet by default. Here is what is mentioned in the docs:</p>
<pre><code>From other VMs in the cluster's VPC network:
Other VMs can use kubectl to communicate with the private endpoint
only if they are in the same region as the cluster
and their internal IP addresses are included in
the list of master authorized networks.
</code></pre>
<p>I have tested this: <br></p>
<ul>
<li>the cluster is accessible from VMs in the same subnetwork as the cluster</li>
<li>the cluster is not accessible from the VMs in different subnets.</li>
</ul>
<p>How does this private cluster figure out which VMs to give access and which VMs to reject?</p>
| <p>It is not controlled by the private cluster itself.</p>
<p>It is controlled by the routing and firewall rules configured for the VPC's subnets. Even within the same VPC, you can disable communication between subnets by adding a rule. </p>
<p><a href="https://cloud.google.com/vpc/docs/vpc#affiliated_resources" rel="nofollow noreferrer">https://cloud.google.com/vpc/docs/vpc#affiliated_resources</a></p>
|
<p>I'm trying to setup Kubernetes cluster with multi master and external etcd cluster. Followed these steps as described in <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/" rel="nofollow noreferrer">kubernetes.io</a>. I was able to create static manifest pod files in all the 3 hosts at /etc/kubernetes/manifests folder after executing Step 7.</p>
<p>After that, when I executed the command '<strong>sudo kubeadm init</strong>', the initialization failed because of kubelet errors. I also verified the journalctl logs; the error says misconfiguration of the cgroup driver, which is similar to this <a href="https://stackoverflow.com/questions/45708175/kubelet-failed-with-kubelet-cgroup-driver-cgroupfs-is-different-from-docker-c">SO link</a>. </p>
<p>I tried as said in the above SO link but not able to resolve. </p>
<p>Please help me in resolving this issue.</p>
<p>For installation of docker, kubeadm, kubectl and kubelet, I followed kubernetes.io site only.</p>
<p><strong>Environment:</strong></p>
<p>Cloud: AWS </p>
<p>EC2 instance OS: Ubuntu 18.04</p>
<p>Docker version: 18.09.7</p>
<p>Thanks</p>
| <p>After searching a few links and doing a few trials, I was able to resolve this issue.</p>
<p>As given in the container runtime <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker" rel="nofollow noreferrer">setup</a>, the Docker cgroup driver is systemd. But the default cgroup driver of the kubelet is cgroupfs. As the kubelet alone cannot identify the cgroup driver automatically (as given in the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#providing-instance-specific-configuration-details" rel="nofollow noreferrer">kubernetes.io</a> docs), we have to provide the cgroup driver externally while running the kubelet, like below:</p>
<pre><code>cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --cgroup-driver=systemd --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests
Restart=always
EOF

systemctl daemon-reload
systemctl restart kubelet
</code></pre>
<p>Moreover, there is no need to run sudo kubeadm init: as we are providing --pod-manifest-path to the kubelet, it runs etcd as a static Pod.</p>
<p>For debugging, the logs of the kubelet can be checked using the below command:</p>
<pre><code>journalctl -u kubelet -r
</code></pre>
<p>Hope it helps. Thanks.</p>
|
<p>I have a local minikube setup using a hyperkit VM. I installed helm on my system and the client was installed successfully, but tiller is not installing in the cluster. The status of the pod is ImagePullBackOff.</p>
<p>I tried installing different versions of the image with the <code>--tiller-image</code> flag, but it still fails. If I do a normal docker pull of the same image on my system, it works.</p>
<p>The command <code>helm version</code> gives this result:</p>
<pre><code>Client: &version.Version{SemVer:"v2.12.1", GitCommit:"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e", GitTreeState:"clean"}
Error: could not find a ready tiller pod
</code></pre>
<p>The command <code>kubectl -n kube-system get pods</code> gives this result:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-bh6w7 1/1 Running 0 146m
coredns-fb8b8dccf-gskc8 1/1 Running 1 146m
etcd-minikube 1/1 Running 0 145m
kube-addon-manager-minikube 1/1 Running 0 145m
kube-apiserver-minikube 1/1 Running 0 145m
kube-controller-manager-minikube 1/1 Running 0 145m
kube-proxy-jqb9b 1/1 Running 0 146m
kube-scheduler-minikube 1/1 Running 0 145m
storage-provisioner 1/1 Running 0 146m
tiller-deploy-548df79d66-xsngk 0/1 ImagePullBackOff 0 27m
</code></pre>
<p>The command <code>kubectl -n kube-system describe pod tiller-deploy-548df79d66-xsngk</code> gives the below result:</p>
<pre><code>Name: tiller-deploy-548df79d66-xsngk
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.64.56
Start Time: Sat, 06 Jul 2019 22:41:54 +0530
Labels: app=helm
name=tiller
pod-template-hash=548df79d66
Annotations: <none>
Status: Pending
IP: 172.17.0.5
Controlled By: ReplicaSet/tiller-deploy-548df79d66
Containers:
tiller:
Container ID:
Image: gcr.io/kubernetes-helm/tiller:v2.12.1
Image ID:
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6w54d (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-6w54d:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6w54d
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21h default-scheduler Successfully assigned kube-system/tiller-deploy-548df79d66-xsngk to minikube
Warning Failed 21h kubelet, minikube Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.12.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.56:46818->192.168.64.1:53: read: connection refused
Warning Failed 21h kubelet, minikube Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.12.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.56:32929->192.168.64.1:53: read: connection refused
Warning Failed 21h kubelet, minikube Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.12.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.56:59446->192.168.64.1:53: read: connection refused
Warning Failed 21h (x4 over 21h) kubelet, minikube Error: ErrImagePull
Normal Pulling 21h (x4 over 21h) kubelet, minikube Pulling image "gcr.io/kubernetes-helm/tiller:v2.12.1"
Warning Failed 21h kubelet, minikube Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.12.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.56:58643->192.168.64.1:53: read: connection refused
Warning Failed 21h (x7 over 21h) kubelet, minikube Error: ImagePullBackOff
Normal BackOff 21h (x110 over 21h) kubelet, minikube Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.12.1"
</code></pre>
<p>I am not behind any proxy if that helps.</p>
<p><strong>Update 1:</strong></p>
<p>I ssh-ed into minikube and tried to pull the docker image manually but I got the error:</p>
<pre><code>Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.56:46912->192.168.64.1:53: read: connection refused
</code></pre>
<p><strong>Update 2:</strong></p>
<p>This issue is not only for helm, but for any image. I use Quay.io for images and the cluster can't reach that either; I tried Kafka and that fails too.
If I ssh into minikube, update resolv.conf and then manually do a docker pull inside minikube, the pod spins up.</p>
| <p><a href="https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube">Reuse the docker daemon for minikube</a> with the command,</p>
<pre class="lang-sh prettyprint-override"><code>eval $(minikube docker-env)
</code></pre>
<p>and then pull the tiller image like this,</p>
<pre class="lang-sh prettyprint-override"><code>docker pull gcr.io/kubernetes-helm/tiller:v2.12.1
</code></pre>
<p>Now minikube will have the image downloaded. Delete the existing tiller pod with this command,</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete pod tiller-deploy-548df79d66-xsngk -n kube-system
</code></pre>
<p>The explanation is this: since the <code>ImagePullPolicy</code> is not mentioned in the tiller deployment and the default ImagePullPolicy is IfNotPresent, minikube will not download the image and will instead use the already downloaded image. </p>
<p><strong>Update</strong>: The reason that helm was not able to install tiller was that the resolv.conf file of the minikube VM had <code>nameserver 192.168.64.1</code> left over from the previous network it was connected to while creating the cluster, so minikube was not able to pull the tiller docker image. It is probably due to the fact that docker sets up the nameserver based on the network you are connected to.</p>
<p>This <a href="https://github.com/kubernetes/minikube/blob/master/docs/http_proxy.md#unable-to-pull-imagesclienttimeout-exceeded-while-awaiting-headers" rel="nofollow noreferrer">link</a> may help.</p>
|
<p>A Python based <code>Flask</code> HTTP server is running on Google Kubernetes cluster as <code>Docker</code> container. It is implemented as a single pod <code>flask-http-deployment</code> and placed behind a <code>Load Balancer</code>.</p>
<p>The HTTP server's Python code is quite simple and does not support <code>HTTPS</code> protocol. But other applications will need to communicate with this server via HTTPS. So there is a need to implement a support for <code>HTTPS</code>.</p>
<p>From what I read (please correct me if I am wrong), the <code>HTTPS</code> support could be implemented by configuring the <code>flask-http-deployment</code> with a <code>secret</code>.</p>
<p>Here are the steps I followed:</p>
<ol>
<li>Generated the <code>my-cert.crt</code> and <code>my-key.key</code> files:</li>
</ol>
<p><code>openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout my-key.key -out my-cert.crt -subj '//CN=mydomain.com'</code></p>
<ol start="2">
<li>With <code>my-cert.crt</code> and <code>my-key.key</code> files in place I created Kubernetes <code>secret</code>:</li>
</ol>
<p><code>kubectl create secret tls my-secret --key=my-key.key --cert=my-cert.crt</code></p>
<p>How should I now modify the <code>flask-http-deployment</code> yaml file with the <code>secret</code> I've just created?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flask-http-deployment
spec:
  replicas: 5
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: flask-http-app
    spec:
      containers:
      - name: flask-http-container
        image: gcr.io/my-project-id/flask-http-container
        ports:
        - containerPort: 80
        imagePullPolicy: IfNotPresent
</code></pre>
<p>Here is the Load Balancer yaml, in case it is needed:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: flask-http-load-balancer
  labels:
    app: flask-http-app
spec:
  type: LoadBalancer
  ports:
  - port: 80
    nodePort: 30000
    protocol: TCP
    name: flask
  selector:
</code></pre>
| <p>I am not sure about the secret, or where you are using the secret that you created, but supporting HTTPS is the same as in the traditional world: put an HTTPS termination proxy in front of your app service, such as an nginx pod or an nginx ingress controller (use your secret there),</p>
<p>or terminate SSL/TLS in your external load balancer.</p>
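<p>For example, if you go the ingress controller route, the <code>my-secret</code> you already created could terminate TLS in an Ingress resource. A sketch (the host name is an assumption, and an ingress controller must already be installed in the cluster):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-https-ingress
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: my-secret   # the tls secret created earlier
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: flask-http-load-balancer
          servicePort: 80
</code></pre>
<p>The controller then serves HTTPS and forwards plain HTTP to your Flask pods, so the Python code does not need to change.</p>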
|
<p>Is it possible in GKE to create a cluster with its nodes located in multiple regions?</p>
<p>I am aware of Zonal and Regional clusters in which we can have nodes ( and even masters) in different zones. I am wondering if there is a way in GKE to create a multi-regional cluster?</p>
| <p>Not quite, but what you can do is front clusters in different regions with a single Google Cloud Load Balancing instance with multi-cluster Ingress:
<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress" rel="nofollow noreferrer">multi-cluster-ingress</a></p>
<p>The load balancer will route requests to the cluster with the lowest latency for the end user. The Google Cloud Load Balancer is also able to detect if one of the clusters is unavailable and route traffic to the other clusters.</p>
|
<p>I have a couple of applications which run in Docker containers (all on the same VM).
In front of them, I have an nginx container as a reverse proxy.
Now I want to migrate that to Kubernetes. </p>
<p>When I start them with docker-compose locally, it works as expected.
On Kubernetes it does not.</p>
<h3>nginx.conf</h3>
<pre><code>http {
  server {
    location / {
      proxy_pass http://app0:80;
    }
    location /app1/ {
      proxy_pass http://app1:80;
      rewrite ^/app1(.*)$ $1 break;
    }
    location /app2/ {
      proxy_pass http://app2:80;
      rewrite ^/app2(.*)$ $1 break;
    }
  }
}
</code></pre>
<h3>edit: nginx.conf is not used on Kubernetes. I have to use an ingress controller for that:</h3>
deployment.yaml
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app0
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: app0
        image: appscontainerregistry1.azurecr.io/app0:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          name: nginx
---
#the other apps
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: apps-url.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app0
          servicePort: 80
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 80
      - path: /app2
        backend:
          serviceName: app2
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: ingress-nginx
</code></pre>
<p>I get the response on / (app0). Unfortunately, the subroutes are not working. What I´m doing wrong?</p>
<h3>EDIT</h3>
<p>I figured it out. I missed installing the ingress controller. As described on this page (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a>), the ingress doesn't work if no controller is installed.
I used ingress-nginx as the controller (<a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a>) because it was the best-described install guide I was able to find and I didn't want to use Helm.
I have one more question: how can I change my ingress so that sub-routes work?
For example, k8url.com/app1/subroute shows me the start page of my app1 every time.
And if I use domain-name proxying, it rewrites the domain name to the IP every time. </p>
| <p>Add this : </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: ingress-nginx
</code></pre>
<p>Because you did not route the <code>LoadBalancer</code> <code>Service</code> to <code>targetPort</code> <code>80</code>.</p>
|
<p>I need to inject the container port from an environment variable inside my pod. How can I do that? </p>
<p>I have been through the documentation. Links:
1. <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/</a>
2. <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: $(MY_CONTAINER_PORT)
    env:
    - name: MY_CONTAINER_PORT
      value: 80
</code></pre>
<pre><code>error: error validating "nginx-pod-exposed-through-env.yaml": error validating data: ValidationError(Pod.spec.containers[0].ports[0].containerPort): invalid type for io.k8s.api.core.v1.ContainerPort.containerPort: got "string", expected "integer"; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
| <p>A way to accomplish this would be to use a templating tool such as <a href="https://get-ytt.io/" rel="nofollow noreferrer">ytt</a>. With ytt you would turn your manifest into a template like:</p>
<pre><code>#@ load("@ytt:data", "data")
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: #@ data.values.port
</code></pre>
<p>And then supply a <code>values.yml</code> like:</p>
<pre><code>#@data/values
---
port: 8080
</code></pre>
<p>Assuming the original template is named <code>test.yml</code> we could run <code>ytt</code> like so to generate the output:</p>
<pre><code>$ ytt -f test.yml -f values.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 8080
</code></pre>
<p>The ytt utility then lets us override the data values on the command line with <code>--data-value</code> (or <code>-v</code> for short). An example changing to port 80:</p>
<pre><code>$ ytt -v port=80 -f test.yml -f values.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
</code></pre>
<p>Your original question sounded like you wanted to use environment variables. This is supported with <code>--data-values-env</code>. An example using prefix <code>MYVAL</code>:</p>
<pre><code>$ export MYVAL_port=9000
$ ytt --data-values-env MYVAL -f test.yml -f values.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 9000
</code></pre>
<p>You can then combine <code>ytt</code> and <code>kubectl</code> to create and apply resources:</p>
<pre><code>ytt --data-values-env MYVAL -f test.yml -f values.yml | kubectl apply -f-
</code></pre>
<p>Additional information on injecting data into ytt templates is at <a href="https://github.com/k14s/ytt/blob/develop/docs/ytt-data-values.md" rel="nofollow noreferrer">https://github.com/k14s/ytt/blob/develop/docs/ytt-data-values.md</a>.</p>
|
<p>I am trying to set up Spinnaker with Kubernetes and I am getting an error: user cannot list namespaces.</p>
<p>I don't have access to list namespaces at cluster scope. Is it possible to set up and apply the hal configuration without access to list namespaces at cluster scope? If yes, please let me know the steps.</p>
<p>Below I mention the command out for reference:</p>
<pre><code>hal deploy apply
+ Get current deployment
Success
- Prep deployment
Failure
Problems in default.provider.kubernetes.my-k8s-account:
! ERROR Unable to communicate with your Kubernetes cluster: Failure
executing: GET at: https://<company>/api/v1/namespaces. Message:
Forbidden! User apc doesn't have permission. namespaces is forbidden: User
"system:anonymous" cannot list namespaces at the cluster scope..
? Unable to authenticate with your Kubernetes cluster. Try using
kubectl to verify your credentials.
- Failed to prep Spinnaker deployment
</code></pre>
<hr>
<pre><code>$ kubectl get ns
No resources found.
Error from server (Forbidden): namespaces is forbidden: User "ds:uid:2319639648" cannot list namespaces at the cluster scope
</code></pre>
<hr>
<p>Regards,
Ajaz</p>
| <p>You can do it without a ClusterRole.
Tested; it works as expected.</p>
<p>See <a href="https://docs.armory.io/spinnaker-install-admin-guides/install-on-eks/" rel="nofollow noreferrer">instruction</a>.</p>
<p><a href="https://docs.armory.io/spinnaker-install-admin-guides/install-on-eks/#configure-spinnaker-to-install-in-kubernetes" rel="nofollow noreferrer">Configure Spinnaker to install in Kubernetes</a></p>
<p>Important: This will by default limit your Spinnaker to deploying to the namespace specified. If you want to be able to deploy to other namespaces, either add a second cloud provider target or remove the <code>--namespaces</code> flag.</p>
<p>Use the Halyard hal command line tool to configure Halyard to install Spinnaker in your Kubernetes cluster</p>
<pre><code>hal config deploy edit \
  --type distributed \
  --account-name ${ACCOUNT_NAME} \
  --location ${NAMESPACE}
</code></pre>
|
<p>I want to know how I can start my deployments in a specific order. I am aware of <code>initContainers</code> but that is not working for me. I have a huge platform with around 20 deployments and 5 statefulsets that each of them has their own service, environment variables, volumes, horizontal autoscaler, etc. So it is not possible (or I don't know how) to define them in another yaml deployment as <code>initContainers</code>.</p>
<p>Is there another option to launch deployments in a specific order?</p>
| <p>It's possible to order the launch of initContainers in a Pod, or of Pods that belong to the same StatefulSet. However, those solutions do not apply to your case.</p>
<p>This is because ordering initialization is not the standard approach for solving your issue. In a microservices architecture, and more specifically Kubernetes, you would write your containers such that they try to call the services they depend on (whether they are up or not) and if they aren't available, you let your containers crash. This works because Kubernetes provides a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="noreferrer">self-healing mechanism</a> that automatically restarts containers if they fail. This way, your containers will try to connect to the services they depend on, and if the latter aren't available, the containers will crash and try again later using exponential back-off.</p>
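<p>As an illustration of that pattern, a Deployment's pod template uses <code>restartPolicy: Always</code> by default, so a container that exits when a dependency is unreachable is simply restarted with back-off. A sketch (the names and image are hypothetical):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      restartPolicy: Always        # the default; failed containers restart with exponential back-off
      containers:
      - name: orders
        image: example/orders:1.0  # hypothetical image that exits non-zero if its dependencies are down
</code></pre>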
<p>By removing unnecessary dependencies between services, you simplify the deployment of your application and reduce coupling between different services.</p>
|
<p>I'm very new to Kubernetes and all I want to do at this point is restart my cluster and have it run an updated Dockerfile. I'm running Kubernetes on Google Cloud Platform, by the way. </p>
| <p>kubectl from version 1.15 should contain <code>kubectl rollout restart</code>
(according to this comment <a href="https://github.com/kubernetes/kubernetes/issues/33664#issuecomment-497242094" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/33664#issuecomment-497242094</a>)</p>
|
<p>I was wondering if it is possible to use helm charts just to generate the k8s objects themselves.</p>
<p>We are using multiple k8s clusters managed by <code>openshift</code>.<br>
Helm doesn't work so well with multiple OpenShift clusters,<br>
so we figured that using <code>oc</code> would work better (mostly for authentication and authorization on multiple clusters).</p>
<p>Helm's dynamic k8s objects are very powerful and we would like to keep using them.<br>
Is it possible to tell helm to generate the objects from the given <code>yaml</code> files and <code>values</code> file, and pass them along to <code>oc replace</code>?</p>
| <p>OpenShift has its own internal templating engine that overlaps significantly with helm, but is more tightly integrated.</p>
<p>You can find documentation on this in the <a href="https://docs.openshift.com/container-platform/3.11/dev_guide/templates.html" rel="nofollow noreferrer">Templates</a> section of the official OpenShift documentation.</p>
<p>When using these templates you can generate object definitions from a parameterized template by using the <code>oc process</code> command.</p>
<pre><code>$ oc process -f my-rails-postgresql \
-p POSTGRESQL_USER=bob \
-p POSTGRESQL_DATABASE=mydatabase
</code></pre>
<p>You can create the resulting objects in-line using <code>oc create</code></p>
<pre><code>$ oc process -f my-rails-postgresql \
-p POSTGRESQL_USER=bob \
-p POSTGRESQL_DATABASE=mydatabase \
| oc create -f -
</code></pre>
<p>Personally I find helm to be overkill when using OpenShift as the out-of-the-box templating engine is usually sufficient.</p>
<p>If you need something more sophisticated than that I tend to reach towards packaging my deployment in an ansible playbook, jinja2 templates, and the <a href="https://docs.ansible.com/ansible/latest/modules/k8s_module.html" rel="nofollow noreferrer">k8s</a> ansible module instead of running helm and tiller.</p>
|
<p>Getting this error message after <code>kubectl apply -f .</code></p>
<pre><code>error: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{"include (print $.Template.BasePath \"/configmap.yaml\") . | sha256sum":interface {}(nil)}
</code></pre>
<p>I've tried putting <code>checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}</code> in different places, but I don't really understand YAML or JSON well enough to figure out the issue.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: cloudnatived/demo:hello-config-env
        ports:
        - containerPort: 8888
        env:
        - name: GREETING
          valueFrom:
            configMapKeyRef:
              name: demo-config
              key: greeting
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
</code></pre>
<p>I just want to be able to update my pods when the config is changed. I'm supposed to <code>helm upgrade</code> here somewhere but I'm not sure what arguments to give it.</p>
| <p>You can't use the <code>{{ ... }}</code> syntax with <code>kubectl apply</code>. That syntax belongs to the <a href="https://helm.sh" rel="noreferrer">Helm</a> package manager. Without Helm rendering the template first, the <code>{{ ... }}</code> looks like a YAML map key to the parser, which is what produces the <code>invalid map key</code> error you're seeing.</p>
<p><code>annotations:</code> belong under <code>metadata:</code>, next to <code>labels:</code>; in your manifest that means under the pod template's <code>metadata:</code>, not dangling at the end of the container spec. The <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/" rel="noreferrer">Annotations</a> page in the Kubernetes documentation is useful reading here.</p>
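<p>Concretely, in your Deployment the pod-template annotation would sit next to the labels like this (the checksum value is a placeholder, since plain <code>kubectl</code> has no way to compute it):</p>
<pre><code>spec:
  template:
    metadata:
      labels:
        app: demo
      annotations:
        checksum/config: "placeholder-hash-value"
    spec:
      containers:
      - name: demo
        image: cloudnatived/demo:hello-config-env
</code></pre>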
<blockquote>
<p>I just want to be able to update my pods without restarting them.</p>
</blockquote>
<p>Kubernetes doesn't work that way, with some very limited exceptions.</p>
<p>If you're only talking about configuration data and not code, you can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="noreferrer">Add ConfigMap data to a Volume</a>; then if the ConfigMap changes, the files the pod sees will also change. The syntax you're stumbling over is actually a workaround to force a pod to restart when the ConfigMap data changes: it is the opposite of what you're trying for, and you should delete these two lines.</p>
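<p>For reference, here is what that workaround does when Helm actually renders it: hashing the config file means the annotation value, and therefore the pod template, changes whenever the config changes. A shell sketch (the file names are made up for illustration):</p>

```shell
# Simulate the checksum/config annotation: hash two versions of a config file
printf 'greeting: hello\n'   > /tmp/configmap-v1.yaml
printf 'greeting: bonjour\n' > /tmp/configmap-v2.yaml

sum1=$(sha256sum /tmp/configmap-v1.yaml | cut -d' ' -f1)
sum2=$(sha256sum /tmp/configmap-v2.yaml | cut -d' ' -f1)

echo "v1 checksum: $sum1"
echo "v2 checksum: $sum2"
# Because the annotation value changes, the Deployment sees a new pod
# template and performs a rolling restart.
```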
<p>For routine code changes, the standard path is to build and push a new Docker image, then update your Deployment object with the new image tag. (It must be a different image tag string than you had before, just pushing a new image with the same tag isn't enough.) Then Kubernetes will automatically start new pods with the new image, and once those start up, shut down pods with the old image. Under some circumstances Kubernetes can even delete and recreate pods on its own.</p>
|
<p>After deploying Istio 1.1.2 on OpenShift there is an istio-ingressgateway route with its associated service and pod.</p>
<p>I have successfully used that ingress gateway to access an application, configuring a Gateway and a VirtualService using * as hosts.</p>
<p>However I would like to configure a domain, e.g insuranceinc.es, to access the application. According to the documentation I have this Istio config:</p>
<p><strong>Gateway:</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: insuranceinc-gateway
namespace: istio-insuranceinc
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "insuranceinc.es"
</code></pre>
<p><strong>VirtualService</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: insuranceinc
namespace: istio-insuranceinc
spec:
hosts:
- insuranceinc.es
gateways:
- insuranceinc-gateway
http:
- route:
- destination:
host: insuranceinc-web
port:
number: 8080
</code></pre>
<p>If I make this curl invocation...</p>
<p><code>curl http://istio-ingressgateway-istio-system.apps.mycluster.com/login</code></p>
<p>... I can see a 404 error in the ingress-gateway pod:</p>
<pre><code>[2019-04-12T15:27:51.765Z] "GET /login HTTP/1.1" 404 NR "-" 0 0 1 - "xxx" "curl/7.54.0" "xxx" "istio-ingressgateway-istio-system.apps.mycluster.com" "-" - - xxx -
</code></pre>
<p>This makes sense, since it isn't coming from an <code>insuranceinc.es</code> host. So I changed the curl to send a <code>Host: insuranceinc.es</code> header:</p>
<p><code>curl -H "Host: insuranceinc.es" http://istio-ingressgateway-istio-system.apps.mycluster.com/login</code></p>
<p>Now I am getting a 503 error and there are no logs in the istio-ingressgateway pod.</p>
<blockquote>
<h1>Application is not available</h1>
<p>The application is currently not serving requests at this endpoint. It may not have been started or is still starting.</p>
</blockquote>
<p>This means the request hasn't been processed by that istio-ingressgateway route -> service -> pod.</p>
<p>Since it is an <code>Openshift Route</code> it must be needing a Host header containing the route host <code>istio-ingressgateway-istio-system.apps.mycluster.com</code>. In fact if I send <code>curl -H "Host: istio-ingressgateway-istio-system.apps.mycluster.com" http://istio-ingressgateway-istio-system.apps.mycluster.com/login</code> it is processed by the istio ingress gateway returning a 404.</p>
<p>So, how can I send my Host insuranceinc.es header and also reach the istio ingress gateway (which is actually an OpenShift route)?</p>
| <p>You need to create an OpenShift route in the <code>istio-system</code> namespace that matches the hostname you configured in your Gateway.</p>
<p>For example:</p>
<pre><code>oc -n istio-system get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
gateway1-lvlfn insuranceinc.es istio-ingressgateway <all> None
</code></pre>
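<p>If you need to create it from a manifest instead, something along these lines should produce that entry; treat the service name and target port as assumptions and check them against your actual <code>istio-ingressgateway</code> Service:</p>
<pre><code>apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: insuranceinc
  namespace: istio-system
spec:
  host: insuranceinc.es
  to:
    kind: Service
    name: istio-ingressgateway
  port:
    targetPort: http2
</code></pre>
<p>With the route in place, curl with <code>Host: insuranceinc.es</code> should reach the Istio ingress gateway instead of being swallowed by the OpenShift router.</p>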
|
<p>I am running a go app that is creating Prometheus metrics that are node-specific, and I want to be able to add the node IP as a label.</p>
<p>Is there a way to capture the Node IP from within the Pod?</p>
| <p>The accepted answer didn't work for me; it seems <code>fieldPath</code> is required now:</p>
<pre><code> env:
- name: NODE_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
</code></pre>
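<p>For context, a minimal pod spec with that block in place looks like this (the pod and image names are placeholders); your Go app can then read <code>NODE_IP</code> from the environment and attach it as a metric label:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: metrics-app
spec:
  containers:
  - name: app
    image: example/metrics-app:latest
    env:
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
</code></pre>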
|
<p>As we can see in <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">this documentation</a>. A private cluster is accessible by the VMs (GCP compute instances) in the same subnet by default. Here is what is mentioned in the docs :</p>
<pre><code>From other VMs in the cluster's VPC network:
Other VMs can use kubectl to communicate with the private endpoint
only if they are in the same region as the cluster
and their internal IP addresses are included in
the list of master authorized networks.
</code></pre>
<p>I have tested this: <br></p>
<ul>
<li>the cluster is accessible from VMs in the same subnetwork as the cluster</li>
<li>the cluster is not accessible from the VMs in different subnets.</li>
</ul>
<p>How does this private cluster figure out which VMs to give access and which VMs to reject?</p>
| <p>The Compute Engine instances (or <em>nodes</em>) in a private cluster are isolated from the internet and have access to <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#overview" rel="nofollow noreferrer">the Master API server endpoint</a> for authentication, that is publicly exposed in the Google-managed project. However, resources outside the VPC aren't, by default, allowed to reach said endpoint.</p>
<p>Master Authorized Networks are used to allow the GKE Master API available to the whitelisted external networks/addresses that want to authenticate against it. Is not related to disallow communication within the compute resources in the cluster VPC. For that, you can simply use <a href="https://cloud.google.com/vpc/docs/firewalls#ingress_cases" rel="nofollow noreferrer">VPC level firewall rules</a>.</p>
|
<p>Hi I am new to Kubernetes and Helm Chart. A similar question has been asked and answered here (<a href="https://stackoverflow.com/questions/48457047/how-to-set-prometheus-rules-in-stable-prometheus-chart-values-yaml">How to set prometheus rules in stable/prometheus chart values.yaml?</a>)</p>
<p>But I am looking for a way to have the rules defined in another file, and then include the file in the <code>values.yaml</code>, for easier maintenance (since I have more than 2000+ lines of alerts...)</p>
<p>In particular, this is what I have in my <code>values.yaml</code>:</p>
<pre><code>serverFiles:
alerts:
groups:
- name: kubernetes-apps
rules:
- alert: KubePodCrashLooping
annotations:
message: Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container
}}) is restarting {{ printf "%.2f" $value }} times / 5 minutes.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodcrashlooping
expr: rate(kube_pod_container_status_restarts_total{component="kube-state-metrics"}[15m])
* 60 * 5 > 0
for: 1h
labels:
severity: critical
...
<2000 more lines>
...
rules: {}
prometheus.yml:
rule_files:
- /etc/config/rules
- /etc/config/alerts
</code></pre>
<p>And this is what I'd like to achieve in the new <code>values.yaml</code>:</p>
<pre><code>serverFiles:
alerts: {{ include from values-alerts.yaml }}
rules: {}
prometheus.yml:
rule_files:
- /etc/config/rules
- /etc/config/alerts
</code></pre>
<p>And this is the <code>values-alerts.yaml</code> file that I'd like to include in <code>values.yaml</code>:</p>
<pre><code>alerts:
groups:
- name: kubernetes-apps
rules:
- alert: KubePodCrashLooping
annotations:
message: Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container
}}) is restarting {{ printf "%.2f" $value }} times / 5 minutes.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubepodcrashlooping
expr: rate(kube_pod_container_status_restarts_total{component="kube-state-metrics"}[15m])
* 60 * 5 > 0
for: 1h
labels:
severity: critical
...
<2000 more lines>
...
</code></pre>
<p>Please advise if this is doable, or if there is other better approach to do so.</p>
<p>Thank you,</p>
| <p>My understanding is that you want to keep the alert definitions from your last code snippet in a separate file, rather than inline them as default values, am I right?</p>
<p>If that's the case, then you shouldn't use the <code>values.yaml</code>.
The <code>values.yaml</code> file should contain default values for things like key-value-pairs for your templates.</p>
<p>However, you can include <code>templates</code> in <code>templates</code> as outlined in the <a href="https://helm.sh/docs/chart_template_guide/#declaring-and-using-templates-with-define-and-template" rel="nofollow noreferrer">helm guide</a> (comments after ###):</p>
<pre><code>### Define a template (this can be a seperate file)
{{- define "mychart.labels" }}
labels:
generator: helm
date: {{ now | htmlDate }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
{{- template "mychart.labels" }} ### Include the template
</code></pre>
<p>Yields:</p>
<pre><code># Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: running-panda-configmap
labels:
generator: helm
date: 2016-11-02
</code></pre>
<p>So instead of including <code>values-alerts.yaml</code> in <code>values.yaml</code>, you can define the content in a file under <code>templates/</code> and pull it in with <code>{{ template }}</code> wherever you need it.</p>
<p><strong>Don't forget the indentations, and that <code>helm template</code> is your friend!</strong></p>
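<p>If you control the chart's templates, another option worth knowing for the 2000+ lines of alerts is Helm's <code>.Files.Get</code>, which reads a file shipped inside the chart directory. A sketch with an assumed file path (note that <code>.Files</code> cannot read files outside the chart):</p>
<pre><code>serverFiles:
  alerts: |-
{{ .Files.Get "files/alerts.yaml" | indent 4 }}
</code></pre>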
|
<p>Say I have an in-memory Deployment object, what is the correct way of testing if it is fully ready? (not in the progress of rollout, upgrade or rollback).</p>
| <p>I can't comment, so it will have to be an answer.</p>
<hr>
<p>I don't think there is a <strong><em>right way</em></strong> of doing this, as it depends on a number of variables. Such as what languages you are proficient with etc.</p>
<p>Where I work, we run <code>kubectl get pods</code> and grep out the relevant information (in this case, whether the pod is ready or not). This is all run through <code>bash</code> as part of a startup script:</p>
<pre><code>function not_ready_count() {
kubectl ${1} get pods -o json | jq -r '.items[].status.conditions[].status' | grep False | wc -l | awk '{ print $1 }'
}
function not_running_count() {
kubectl ${1} get pods -o json | jq -r '.items[].status.phase' | grep -v Running | wc -l | awk '{ print $1 }'
}
function wait_until_running_and_ready() {
sleep 2
while [[ "$(not_running_count ${1})" != "0" ]]; do
echo "waiting for $(not_running_count ${1}) pods to start"
sleep 3
done
while [[ "$(not_ready_count ${1})" != "0" ]]; do
echo "waiting for $(not_ready_count ${1}) status probes to pass"
sleep 3
done
sleep 2
}
</code></pre>
|
<p>I have followed the steps from this <a href="https://docs.aws.amazon.com/efs/latest/ug/creating-using.html" rel="nofollow noreferrer">guide</a> and this <a href="https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/README.md" rel="nofollow noreferrer">guide</a> to deploy the <code>efs-provider</code> for Kubernetes and bind an EFS filesystem, but I have not succeeded.</p>
<p>I am implementing Kubernetes with Amazon EKS and I use EC2 instances as worker nodes, all are deployed using <code>eksctl</code>.</p>
<p>After I applied this adjusted <a href="https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/manifest.yaml" rel="nofollow noreferrer">manifest file</a>, the result is:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS
efs-provisioner-#########-##### 1/1 Running 0
$ kubectl get pvc
NAME STATUS VOLUME
test-pvc Pending efs-storage
</code></pre>
<p>No matter how much time I wait, the status of my PVC stucks in <code>Pending</code>.</p>
<p>After the creation of the Kubernetes cluster and worker nodes and the configuration of the EFS filesystem, I apply the <code>efs-provider</code> manifest with all the variables pointing to the EFS filesystem. In the <code>StorageClass</code> configuration file, the <code>spec.AccessModes</code> field is specified as <code>ReadWriteMany</code>.</p>
<p>At this point my <code>efs-provider</code> pod is running without errors and the status of the <code>PVC</code> is <code>Pending</code>. What can it be? How can I configure the <code>efs-provider</code> to use the EFS filesystem? How long should I wait for the <code>PVC</code> status to become <code>Bound</code>?</p>
<hr>
<h1>Update</h1>
<p>About the configuration of the Amazon Web Services, these is what I have done:</p>
<ul>
<li>After the creation of the EFS filesystem, I have created a <strong>mount point</strong> for each subnet where my nodes are.</li>
<li>To each <strong>mount point</strong> is attached a <strong>security group</strong> with a inbound rule to grant the access to the NFS port (2049) from the security group of each nodegroup.</li>
</ul>
<p>The description of my EFS security group is:</p>
<pre><code>{
"Description": "Communication between the control plane and worker nodes in cluster",
"GroupName": "##################",
"IpPermissions": [
{
"FromPort": 2049,
"IpProtocol": "tcp",
"IpRanges": [],
"Ipv6Ranges": [],
"PrefixListIds": [],
"ToPort": 2049,
"UserIdGroupPairs": [
{
"GroupId": "sg-##################",
"UserId": "##################"
}
]
}
],
"OwnerId": "##################",
"GroupId": "sg-##################",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": []
}
],
"VpcId": "vpc-##################"
}
</code></pre>
<h2>Deployment</h2>
<p>The output of the <code>kubectl describe deploy ${DEPLOY_NAME}</code> command is:</p>
<pre><code>$ DEPLOY_NAME=efs-provisioner; \
> kubectl describe deploy ${DEPLOY_NAME}
Name: efs-provisioner
Namespace: default
CreationTimestamp: ####################
Labels: app=efs-provisioner
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"efs-provisioner","namespace":"default"},"spec"...
Selector: app=efs-provisioner
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=efs-provisioner
Service Account: efs-provisioner
Containers:
efs-provisioner:
Image: quay.io/external_storage/efs-provisioner:latest
Port: <none>
Host Port: <none>
Environment:
FILE_SYSTEM_ID: <set to the key 'file.system.id' of config map 'efs-provisioner'> Optional: false
AWS_REGION: <set to the key 'aws.region' of config map 'efs-provisioner'> Optional: false
DNS_NAME: <set to the key 'dns.name' of config map 'efs-provisioner'> Optional: true
PROVISIONER_NAME: <set to the key 'provisioner.name' of config map 'efs-provisioner'> Optional: false
Mounts:
/persistentvolumes from pv-volume (rw)
Volumes:
pv-volume:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: fs-#########.efs.##########.amazonaws.com
Path: /
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: efs-provisioner-576c67cf7b (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 106s deployment-controller Scaled up replica set efs-provisioner-576c67cf7b to 1
</code></pre>
<h2>Pod Logs</h2>
<p>The output of the <code>kubectl logs ${POD_NAME}</code> command is:</p>
<pre><code>$ POD_NAME=efs-provisioner-576c67cf7b-5jm95; \
> kubectl logs ${POD_NAME}
E0708 16:03:46.841229 1 efs-provisioner.go:69] fs-#########.efs.##########.amazonaws.com
I0708 16:03:47.049194 1 leaderelection.go:187] attempting to acquire leader lease default/kubernetes.io-aws-efs...
I0708 16:03:47.061830 1 leaderelection.go:196] successfully acquired lease default/kubernetes.io-aws-efs
I0708 16:03:47.062791 1 controller.go:571] Starting provisioner controller kubernetes.io/aws-efs_efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5!
I0708 16:03:47.062877 1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"kubernetes.io-aws-efs", UID:"f7c682cd-a199-11e9-80bd-1640944916e4", APIVersion:"v1", ResourceVersion:"3914", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5 became leader
I0708 16:03:47.162998 1 controller.go:620] Started provisioner controller kubernetes.io/aws-efs_efs-provisioner-576c67cf7b-5jm95_f7c5689f-a199-11e9-a152-def1285e1be5!
</code></pre>
<h2>StorageClass</h2>
<p>The output of the <code>kubectl describe sc ${STORAGE_CLASS_NAME}</code> command is:</p>
<pre><code>$ STORAGE_CLASS_NAME=aws-efs; \
> kubectl describe sc ${STORAGE_CLASS_NAME}
Name: aws-efs
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"aws-efs"},"provisioner":"aws-efs"}
Provisioner: aws-efs
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
</code></pre>
<h2>PersistentVolumeClaim</h2>
<p>The output of the <code>kubectl describe pvc ${PVC_NAME}</code> command is:</p>
<pre><code>$ PVC_NAME=efs; \
> kubectl describe pvc ${PVC_NAME}
Name: efs
Namespace: default
StorageClass: aws-efs
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"aws-efs"},"name":"...
volume.beta.kubernetes.io/storage-class: aws-efs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 43s (x12 over 11m) persistentvolume-controller no volume plugin matched
Mounted By: <none>
</code></pre>
<blockquote>
<h2>About the questions</h2>
<ol>
<li><p><strong>Do you have the EFS filesystem id properly configured for your <code>efs-provisioner</code>?</strong> </p>
<ul>
<li>Yes, both (from the fs and the configured) match.</li>
</ul></li>
<li><p><strong>Do you have the proper IAM credentials to access this EFS?</strong> </p>
<ul>
<li>Yes, my user has them, and the <code>eksctl</code> tool also configures them.</li>
</ul></li>
<li><p><strong>Does that EFS path specified for your provisioner exist?</strong> </p>
<ul>
<li>Yes, it is only the root (/) path.</li>
</ul></li>
<li><p><strong>Did you add an EFS endpoint to the subnet that your worker node(s) are running on, or ensure that your EFS subnets have an
Internet Gateway attached?</strong> </p>
<ul>
<li>Yes, I have added the EFS endpoints to the subnet that my worker
node(s) are running on.</li>
</ul></li>
<li><p><strong>Did you set your security group to allow the Inbound for NFS port(s)?</strong> </p>
<ul>
<li>Yes.</li>
</ul></li>
</ol>
</blockquote>
| <p>I have solved my issue by changing the provisioner name in my <code>StorageClass</code> from <code>kubernetes.io/aws-efs</code> to just <code>aws-efs</code>.</p>
<p>As we can read on <a href="https://github.com/kubernetes-incubator/external-storage/issues/1111#issuecomment-469307428" rel="nofollow noreferrer">this issue comment</a> on Github posted by <a href="https://github.com/kubernetes-incubator/external-storage/issues/1111#issuecomment-469307428" rel="nofollow noreferrer">wongma7</a>:</p>
<blockquote>
<p>The issue is that provisioner is <code>kubernetes.io/aws-efs</code>. It can't
begin with <code>kubernetes.io</code> as that is reserved by kubernetes.</p>
</blockquote>
<p>That solves the <code>ProvisioningFailed</code> event produced on the <code>PersistentVolumeClaim</code> by the <code>persistentvolume-controller</code>.</p>
|
<p>Can someone explain why <code>kubectl run</code> sometimes creates a Deployment and sometimes a Pod?</p>
<p>You can see the 1st one creates a Pod and the 2nd one creates a Deployment. The only difference is <code>--restart=Never</code>.</p>
<pre><code>
// 1
chams@master:~/yml$ kubectl run ng --image=nginx --command --restart=Never --dry-run -o yaml
apiVersion: v1
kind: Pod
..
status: {}
//2
chams@master:~/yml$ kubectl run ng --image=nginx --command --dry-run -o yaml
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
run: ng
name: ng
..
status: {}
</code></pre>
| <p>By default the <code>kubectl run</code> command creates a <strong>Deployment</strong>.</p>
<p>With <code>kubectl run</code> you can create and run a particular image, possibly replicated; it creates a deployment or job to manage the created containers.</p>
<p>The difference in your case is that the first command includes the restart-policy argument.</p>
<p>If the restart policy is set to '<strong>Never</strong>', a regular <strong>pod</strong> is created instead (for '<strong>Never</strong>' and '<strong>OnFailure</strong>', <code>--replicas</code> must be 1). The default is '<strong>Always</strong>', which yields a Deployment.</p>
<p>Try using the command:</p>
<pre><code>$ kubectl run --generator=run-pod/v1 ng --image=nginx --command --dry-run -o yaml
</code></pre>
<p>instead of</p>
<pre><code>$ kubectl run ng --image=nginx --command --dry-run -o yaml
</code></pre>
<p>to avoid the deprecation warning
<code>"kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead."</code></p>
<p>More information can be found here: <a href="https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/" rel="nofollow noreferrer">docker-kubectl</a>, <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">kubectl-run</a>.</p>
|
<p>I use <a href="https://k3s.io/" rel="nofollow noreferrer">K3S</a> for my Kubernetes cluster. It's really fast and efficient. By default K3S use <a href="https://traefik.io/" rel="nofollow noreferrer">Traefik</a> for ingress controller which also work well til now.</p>
<p>The only issue I have is, I want to have HTTP2 server push. The service I have is behind the ingress, generates <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Link" rel="nofollow noreferrer">Link header</a> which in the case of <a href="https://www.nginx.com/" rel="nofollow noreferrer">NGINX</a> I can simply turn it into the HTTP2 server push (explained <a href="https://www.nginx.com/blog/nginx-1-13-9-http2-server-push/" rel="nofollow noreferrer">here</a>). Is there any same solution for Traefik? Or is it possible to switch to NGINX in K3S?</p>
| <p>I don't know about HTTP/2 server push in Traefik, but you can simply tell K3s not to start Traefik and deploy your ingress controller of choice:</p>
<p><a href="https://github.com/rancher/k3s#traefik" rel="nofollow noreferrer">https://github.com/rancher/k3s#traefik</a></p>
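<p>At the time of writing that looks roughly like the one-liner below; the flag has changed across K3s releases (newer versions use <code>--disable traefik</code>), so check the docs for your version:</p>
<pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy traefik" sh -
</code></pre>
<p>You can then install an NGINX ingress controller into the cluster the same way you would on any other Kubernetes distribution.</p>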
|
<p>I am trying to deploy a docker image on my minikube using a specific image:</p>
<p>But I always get this error </p>
<pre><code>Failed to pull image "levm38/server:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: EOF
</code></pre>
<p>I am behind a firewall, but I already pushed the image, so I am not sure what's failing.</p>
| <p>There are a couple of problems and solutions related to</p>
<ol>
<li><p>Firewall</p></li>
<li><p>Proxy</p></li>
<li><p>Restarting and re-configuring docker</p></li>
</ol>
<p>in this post:</p>
<p><a href="https://forums.docker.com/t/error-response-from-daemon-get-https-registry-1-docker-io-v2/23741/13" rel="nofollow noreferrer">https://forums.docker.com/t/error-response-from-daemon-get-https-registry-1-docker-io-v2/23741/13</a></p>
<p>also,</p>
<p>do <code>minikube ssh</code> into the VM and see if you can connect to and pull from the registry on the minikube VM.</p>
|
| <p>I am just doing a POC on basic Helm chart installation on my Google Kubernetes Engine. When I create a new Helm chart, it creates a certain file and folder structure automatically. Now my requirement is to create only the Deployment and Pod, not the Kubernetes Service. Is there any way to avoid the Kubernetes Service creation? For Ingress I can see an <code>enabled: false</code> property, but for the Service it does not work.</p>
| <p>The <a href="https://github.com/helm/helm/blob/5859403fd92bfb319ae865fcc2466701607da334/docs/helm/helm_create.md#helm-create" rel="nofollow noreferrer"><code>helm create chart</code></a> command by default creates the file hierarchy below:</p>
<pre><code>chart/
|
|- .helmignore # Contains patterns to ignore when packaging Helm charts.
|
|- Chart.yaml # Information about your chart
|
|- values.yaml # The default values for your templates
|
|- charts/ # Charts that this chart depends on
|
|- templates/ # The template files
|
|- templates/tests/ # The test files
</code></pre>
<p>So yes, you can delete the object templates you don't need from the <code>chart/templates</code> directory to avoid creating those objects during <code>helm install</code>.</p>
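<p>Removing an object is just deleting its template file. The sketch below fakes a freshly generated chart layout rather than touching a real one, to show the idea:</p>

```shell
# Stand-in for a chart generated by `helm create mychart`
mkdir -p mychart/templates
touch mychart/templates/deployment.yaml mychart/templates/service.yaml

# Delete the Service template so `helm install` renders only the Deployment
rm mychart/templates/service.yaml

ls mychart/templates/
```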
<p>The GitHub source code responsible for creating a chart directory along with the common files and directories used in a chart is here:
<a href="https://github.com/helm/helm/blob/5859403fd92bfb319ae865fcc2466701607da334/cmd/helm/create.go" rel="nofollow noreferrer">https://github.com/helm/helm/blob/5859403fd92bfb319ae865fcc2466701607da334/cmd/helm/create.go</a></p>
|
<p>I set up a simple redis ClusterIP service to be accessed by a php LoadBalancer service inside the Cluster. The php log shows the connection timeout error. The redis service is not accessible. </p>
<pre><code>'production'.ERROR: Operation timed out {"exception":"[object] (RedisException(code: 0):
Operation timed out at /var/www/html/vendor/laravel/framework/src/Illuminate/Redis
/html/vendor/laravel/framework/src/Illuminate/Redis/Connectors/PhpRedisConnector.php(109):
Redis->connect('redis-svc', '6379', 0, '', 0, 0)
</code></pre>
<p>My redis service is quite simple so I don't know what went wrong:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
io.kompose.service: redis
name: redis
spec:
replicas: 1
strategy: {}
template:
metadata:
labels:
io.kompose.service: redis
spec:
containers:
- image: redis:alpine
name: redis
resources: {}
ports:
- containerPort: 6379
restartPolicy: Always
status: {}
---
kind: Service
apiVersion: v1
metadata:
name: redis-svc
spec:
selector:
app: redis
ports:
- protocol: TCP
port: 6379
targetPort: 6379
type: ClusterIP
</code></pre>
<p>I verified redis-svc is running, so why can't it be accessed by the other service?</p>
<pre><code>kubectl get service redis-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-svc ClusterIP 10.101.164.225 <none> 6379/TCP 22m
</code></pre>
<p>This SO <a href="https://stackoverflow.com/questions/50852542/kubernetes-cannot-ping-another-service/50853306">kubernetes cannot ping another service</a> said ping doesn't work with service's cluster IP(indeed) how do I verify whether redis-svc can be accessed or not ?</p>
<p>---- update ----</p>
<p>My first question was a silly mistake, but I still don't know how to verify whether the service can be accessed or not (by its name). For example, I changed the service name to be the same as the deployment name and found that php failed to access redis again.</p>
<p><code>kubectl get endpoints</code> did not help this time.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: redis
...
status: {}
---
kind: Service
apiVersion: v1
metadata:
name: redis
...
</code></pre>
<p>my php is another service with env set the redis's service name</p>
<pre><code>spec:
containers:
- env:
- name: REDIS_HOST # the php code access this variable
value: redis-svc #changed to "redis" when redis service name changed to "redis"
</code></pre>
<p>----- update 2------ </p>
<p>The reason I can't set my redis service name to "redis" is because "<a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">kubelet adds a set of environment variables for each active Service</a>", so with the name "redis" there will be a <code>REDIS_PORT=tcp://10.101.210.23:6379</code> which overwrites my own <code>REDIS_PORT=6379</code>.
But my php app just expects the value of REDIS_PORT to be 6379.</p>
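<p>The collision is easy to reproduce in a plain shell; kubelet injects a service-link variable shaped like the first line below (the address is made up), which clobbers a scalar <code>REDIS_PORT</code>:</p>

```shell
# What kubelet would set for a Service named "redis" (illustrative value):
REDIS_PORT="tcp://10.101.210.23:6379"

# An app expecting a bare port number has to strip the prefix itself:
port="${REDIS_PORT##*:}"
echo "$port"
```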
| <p>I ran the yaml configuration given by you and it created the deployment and service. However when I run the below commands:</p>
<pre class="lang-sh prettyprint-override"><code>>>> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d14h
redis-svc ClusterIP 10.105.31.201 <none> 6379/TCP 109s
>>>> kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.99.116:8443 5d14h
redis-svc <none> 78s
</code></pre>
<p>As you see, the endpoints for redis-svc are none, which means the service has no endpoint to connect to. You are using the selector label <code>app: redis</code> in the redis-svc, but <strong>the pods don't have the selector label defined in the service.</strong> Adding the label <code>app: redis</code> to the pod template will make it work. The complete working yaml configuration of the deployment will look like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
io.kompose.service: redis
name: redis
spec:
replicas: 1
strategy: {}
template:
metadata:
labels:
io.kompose.service: redis
app: redis
spec:
containers:
- image: redis:alpine
name: redis
resources: {}
ports:
- containerPort: 6379
restartPolicy: Always
status: {}
</code></pre>
|
<p>Hi, I am trying to mount a directory from the pod where CouchDB is running. The directory is <code>/opt/couchdb/data</code>, and for mounting in Kubernetes I am using this config for the deployment.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: couchdb0-peer0org1
spec:
ports:
- port: 5984
targetPort: 5984
type: NodePort
selector:
app: couchdb0-peer0org1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: couchdb0-peer0org1
spec:
selector:
matchLabels:
app: couchdb0-peer0org1
strategy:
type: Recreate
template:
metadata:
labels:
app: couchdb0-peer0org1
spec:
containers:
- image: hyperledger/fabric-couchdb
imagePullPolicy: IfNotPresent
name: couchdb0
env:
- name: COUCHDB_USER
value: admin
- name: COUCHDB_PASSWORD
value: admin
ports:
- containerPort: 5984
name: couchdb0
volumeMounts:
- name: datacouchdbpeer0org1
mountPath: /opt/couchdb/data
subPath: couchdb0
volumes:
- name: datacouchdbpeer0org1
persistentVolumeClaim:
claimName: worker1-incoming-volumeclaim
</code></pre>
<p>After applying this deployment, I always get this result for the pods:</p>
<pre><code>couchdb0-peer0org1-b89b984cf-7gjfq 0/1 CrashLoopBackOff 1 9s
couchdb0-peer0org2-86f558f6bb-jzrwf 0/1 CrashLoopBackOff 1 9s
</code></pre>
<p>The strange thing is that if I change the mounted directory from <code>/opt/couchdb/data</code> to <code>/var/lib/couchdb</code>, it works fine. But I need to store the CouchDB data in a stateful manner.</p>
| <p><strong>Edit your <code>/etc/exports</code> with the following content:</strong></p>
<pre><code>/path/exported/directory *(rw,sync,no_subtree_check,no_root_squash)
</code></pre>
<p>and then restart NFS server:</p>
<pre><code>sudo /etc/init.d/nfs-kernel-server restart
</code></pre>
<p><em>When <code>no_root_squash</code> is used, remote root users are able to change any file on the shared filesystem. This is a quick solution but has some security concerns.</em></p>
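<p>Once the export is live, a <code>PersistentVolume</code> can point at it and be bound by the claim used in the question. A minimal sketch (the server address, path, and sizes are assumptions; substitute your own values):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: worker1-incoming-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10            # NFS server address (assumption)
    path: /path/exported/directory  # the directory exported above
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: worker1-incoming-volumeclaim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""              # bind to the pre-provisioned PV above
  resources:
    requests:
      storage: 5Gi
</code></pre>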
|
<p>I have set up a kubernetes cluster on Azure with nginx ingress. I am getting a 404 error when navigating to a particular path.</p>
<p>I have set up some sample applications that return a simple echo, and they work perfectly fine. My ban API app always returns a 404 error.</p>
<p>When I navigate to the ingress path e.g. </p>
<pre><code>http://51.145.17.105/apple
</code></pre>
<p>It works fine. However when I navigate to the api application, I get a 404 error using the URL:</p>
<pre><code>http://51.145.17.105/ban-simple
</code></pre>
<p>If I log into the cluster and curl the ban-simple service (not the ingress ip) e.g. </p>
<pre><code>curl -L 10.1.1.40
</code></pre>
<p>I get the correct response. When I try it using the nginx ingress I get the 404 error.</p>
<p>The ingress mapping looks right to me. Here is a copy of the ingress yaml containing the paths.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: fruit-ingress
namespace: ban
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$
spec:
rules:
- http:
paths:
- path: /apple
backend:
serviceName: apple-service
servicePort: 5678
- path: /ban-simple
backend:
serviceName: ban-service
servicePort: 80
</code></pre>
<p>A copy of the "good" service is:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: apple-app
namespace: ban
labels:
app: apple
spec:
containers:
- name: apple-app
image: hashicorp/http-echo
args:
- "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
name: apple-service
namespace: ban
spec:
selector:
app: apple
ports:
- port: 5678 # Default port for image
</code></pre>
<p>The service that does not work is:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: ban-simple
namespace: ban
labels:
app: ban-simple
spec:
containers:
- name: ban-simple
image: xxxxx.azurecr.io/services/ban
---
kind: Service
apiVersion: v1
metadata:
name: ban-simple-service
namespace: ban
spec:
selector:
app: ban-simple
ports:
- port: 80 # Default port for image
</code></pre>
<p>I have run the container locally and it works on my machine. It does redirect to localhost/index.html if that makes a difference.</p>
<p>Any suggestions are appreciated.</p>
| <p>It was <code>nginx.ingress.kubernetes.io/rewrite-target: /$</code> that was causing the issue. I commented it out and it resolved the issue.</p>
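<p>For reference, if prefix stripping is actually needed later, recent nginx-ingress releases expect <code>rewrite-target</code> to reference a regex capture group from the path rather than a bare <code>/$</code>. A hedged sketch of that pattern, reusing the names from the question:</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /ban-simple(/|$)(.*)   # $2 captures everything after the prefix
        backend:
          serviceName: ban-service
          servicePort: 80
</code></pre>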
|
<p>I am using a Mac. When I try <code>kubectl edit</code> it opens the default vi editor, but I can't save after making any change. I am not new to the vi editor, but with <code>kubectl edit</code> it just does not let me save my changes.</p>
<p>I want to configure <code>kubectl edit</code> to open in Sublime Text.
Please let me know how to make it work.</p>
| <p>Sorry for the question; I found the answer myself.
All I needed was to add <code>KUBE_EDITOR=/usr/local/bin/sub</code>
to my <code>~/.bash_profile</code>.</p>
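<p>For a GUI editor like Sublime Text, the editor command usually has to block until the file is closed, otherwise <code>kubectl edit</code> reads the temp file back before you have saved. Assuming Sublime's CLI helper is installed as <code>subl</code> (the path is an assumption; adjust to your install), the <code>~/.bash_profile</code> line would look like:</p>
<pre><code>export KUBE_EDITOR="/usr/local/bin/subl -w"   # -w waits for the window to close
</code></pre>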
|
<p>I am using Nodejs to index some data into Elasticsearch(hosted on kubernetes),I am using <code>client.create()</code> method to index document in Elasticsearch. If I run the code on localhost and kubernetes Elasticsearch endpoint, Everything is working fine. But after deploying the same code when I tried indexing a document, I am getting an error:</p>
<blockquote>
<p>"[invalid_type_name_exception] Document mapping type name can't start
with '_', found: [_create] :: {"path":"/index_name/_create/docId"]".</p>
</blockquote>
<p>The Elasticsearch version is "6.3.0" and the node module version is "^16.0.0".
It was working initially but stopped working a few days ago.
I think the issue is with some compatibility or configuration; can anyone please help?</p>
<p>I tried using <code>client.index</code> instead of <code>client.create</code> and it works fine.
I have already matched all configuration and compatibility files on local and server. Everything seems OK to me.</p>
<pre><code>const elasticsearchDoc = {
index: "school",
type: "_doc",
id: 12345,
body: { name:"raj",marks:40 }
};
const result = await client.create(elasticsearchDoc);
...
</code></pre>
| <p>I think the problem is that you're using the <a href="https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/16.x/index.html" rel="noreferrer">old JS client</a> library instead of <a href="https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/index.html" rel="noreferrer">the new one</a>. </p>
<p>With 16.0.0 you need <a href="https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/16.x/configuration.html#config-options" rel="noreferrer">to explicitly set the <code>apiVersion</code> parameter</a> to the 6.x version, because 7.0 is the default and that's probably why you're seeing this error (since you're using ES 6.3.0):</p>
<pre><code>apiVersion: '6.7'
</code></pre>
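<p>Put together, the client configuration would look something like this (the host URL is an assumption; point it at your cluster's Elasticsearch endpoint):</p>
<pre><code>const elasticsearch = require('elasticsearch'); // legacy 16.x client

const client = new elasticsearch.Client({
  host: 'http://elasticsearch.default.svc.cluster.local:9200',
  apiVersion: '6.7' // pin to the 6.x API instead of the 7.0 default
});
</code></pre>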
|
<pre><code>apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: consoleservice1
spec:
selector:
matchLabels:
app: consoleservice1
replicas: 3 # tells deployment to run 3 pods matching the template
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template: # create pods using pod definition in this template
metadata:
labels:
app: consoleservice1
spec:
containers:
- name: consoleservice
image: chintamani/insightvu:ms-console1
readinessProbe:
httpGet:
path: /
port: 8385
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
ports:
- containerPort: 8384
imagePullPolicy: Always
volumeMounts:
- mountPath: /deploy/config
name: config
volumes:
- name: config
configMap:
name: console-config
</code></pre>
<p>For creating configmap I am using this command:</p>
<pre><code>kubectl create configmap console-config --from-file=deploy/config
</code></pre>
<p>While changing in configmap it doesn't reflect automatically, every time I have to restart the pod. How can I do it automatically?</p>
| <p>Thank you guys, I was able to fix it. I am using Reloader, which restarts pods whenever a watched ConfigMap changes:</p>
<p><code>kubectl apply -f</code> <a href="https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml</a></p>
<p>Then add the annotation inside your <code>deployment.yml</code> file:</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: consoleservice1
annotations:
configmap.reloader.stakater.com/reload: "console-config"
</code></pre>
<p>It will restart your pods gradually.</p>
|
<p>I am new to Kubernetes and I am trying to create a simple front-end back-end application where front-end and back-end will have its own services. For some reason, I am not able to access back-end service by its name from front-end service.</p>
<p>Just for simplicity, the front-end can be created like this:<br/>
<code>kubectl run curl --image=radial/busyboxplus:curl -i --tty</code></p>
<p>When I do a nslookup I get the following:<br/></p>
<pre><code>[ root@curl-66bdcf564-rbx2k:/ ]$ nslookup msgnc-travel
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: msgnc-travel
Address 1: 10.100.171.209 msgnc-travel.default.svc.cluster.local
</code></pre>
<p>The service is available by its name <code>msgnc-travel</code>, but when I try to curl it:<br/>
<code>curl msgnc-travel</code><br/>
it just keeps waiting and no response is received. I have also tried<br/>
<code>curl 10.100.171.209</code> and <code>curl msgnc-travel.default.svc.cluster.local</code>, but I see the same behaviour.<br/><br/></p>
<p>Any ideas why this issue is occurring?</p>
<p>I have successfully managed to work around it using an Ingress, but I am curious why I can't access my Spring Boot backend service directly just by providing its name.</p>
<p><strong>deployment.yml</strong> looks like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: msgnc-travel-deployment
labels:
name: msgnc-travel-deployment
app: msgnc-travel-app
spec:
template:
metadata:
name: msgnc-travel-pod
labels:
name: msgnc-travel-pod
app: msgnc-travel-app
spec:
containers:
- name: msgnc-travel
image: bdjordjevic/msgnc-travel
ports:
- containerPort: 8080
replicas: 1
selector:
matchExpressions:
- {key: name, operator: In, values: [msgnc-travel-pod]}
- {key: app, operator: In, values: [msgnc-travel-app]}
</code></pre>
<p><strong>service.yml</strong> looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: msgnc-travel
labels:
name: msgnc-travel-service
app: msgnc-travel-app
spec:
ports:
- port: 8080
targetPort: 8080
selector:
name: msgnc-travel-pod
app: msgnc-travel-app
</code></pre>
| <p>You are defining the service to listen on port 8080, so you are supposed to execute <code>curl msgnc-travel:8080</code>.</p>
<p>I tried running wget and this is the output I got:</p>
<pre class="lang-sh prettyprint-override"><code>wget msgnc-travel:8080
Connecting to msgnc-travel:8080 (10.98.81.45:8080)
wget: server returned error: HTTP/1.1 404
</code></pre>
|
<p>I am studying services in k8s from <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">here</a></p>
<p>I have created a service without a selector and with one endpoint. What I am trying to do: I have installed Apache, which is running on port 80, and I have created a NodePort service on port 31000. This service should now forward ip:31000 to ip:80.</p>
<p>It does this for the internal IP of the service, but not for the external IP.</p>
<p><strong>my-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 9376
targetPort: 80
nodePort: 31000
type: NodePort
</code></pre>
<p><strong>my-endpoint.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: my-service
subsets:
- addresses:
- ip: <IP>
ports:
- port: 80
</code></pre>
<p>Output for <strong>kubectl get service -o wide</strong> </p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 53m <none>
my-service NodePort 10.111.205.207 <none> 9376:31000/TCP 30m <none>
</code></pre>
| <p>First, you need to run a pod inside your cluster, then put that pod's IP and port into the Endpoints YAML. Services expose pods within or outside the cluster, so you must use either a selector or the address of a pod so that the service can attach itself to that particular pod.</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: my-service
subsets:
- addresses:
- ip: <ip address of the pod>
ports:
- port: <port of the pod>
</code></pre>
<p>One more thing: use a StatefulSet in place of a Deployment to run the pods.</p>
|
<p>I have deployed Prometheus on a Kubernetes cluster (EKS). I was able to successfully scrape <code>prometheus</code> and <code>traefik</code> with the following:</p>
<pre><code>scrape_configs:
# A scrape configuration containing exactly one endpoint to scrape:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: 'prometheus'
# Override the global default and scrape targets from this job every 5 seconds.
scrape_interval: 5s
static_configs:
- targets: ['prometheus.kube-monitoring.svc.cluster.local:9090']
- job_name: 'traefik'
static_configs:
- targets: ['traefik.kube-system.svc.cluster.local:8080']
</code></pre>
<p>But node-exporter, deployed as a <code>DaemonSet</code> with the following definition, is not exposing the node metrics:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-exporter
namespace: kube-monitoring
spec:
selector:
matchLabels:
app: node-exporter
template:
metadata:
name: node-exporter
labels:
app: node-exporter
spec:
hostNetwork: true
hostPID: true
containers:
- name: node-exporter
image: prom/node-exporter:v0.18.1
args:
- "--path.procfs=/host/proc"
- "--path.sysfs=/host/sys"
ports:
- containerPort: 9100
hostPort: 9100
name: scrape
resources:
requests:
memory: 30Mi
cpu: 100m
limits:
memory: 50Mi
cpu: 200m
volumeMounts:
- name: proc
readOnly: true
mountPath: /host/proc
- name: sys
readOnly: true
mountPath: /host/sys
tolerations:
- effect: NoSchedule
operator: Exists
volumes:
- name: proc
hostPath:
path: /proc
- name: sys
hostPath:
path: /sys
</code></pre>
<p>and the following <code>scrape_configs</code> in Prometheus:</p>
<pre><code>scrape_configs:
- job_name: 'kubernetes-nodes'
scheme: http
kubernetes_sd_configs:
- role: node
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.kube-monitoring.svc.cluster.local:9100
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
</code></pre>
<p>I also tried to <code>curl http://localhost:9100/metrics</code> from one of the containers, but got <code>curl: (7) Failed to connect to localhost port 9100: Connection refused</code>.</p>
<p>What am I missing here in the configuration?</p>
<p>Following the suggestion to install Prometheus with Helm, I installed it on a test cluster and compared my original configuration with the Helm-installed Prometheus.</p>
<p>The following pods were running:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-prometheus-oper-alertmanager-0 2/2 Running 0 4m33s
prometheus-grafana-66c7bcbf4b-mh42x 2/2 Running 0 4m38s
prometheus-kube-state-metrics-7fbb4697c-kcskq 1/1 Running 0 4m38s
prometheus-prometheus-node-exporter-6bf9f 1/1 Running 0 4m38s
prometheus-prometheus-node-exporter-gbrzr 1/1 Running 0 4m38s
prometheus-prometheus-node-exporter-j6l9h 1/1 Running 0 4m38s
prometheus-prometheus-oper-operator-648f9ddc47-rxszj 1/1 Running 0 4m38s
prometheus-prometheus-prometheus-oper-prometheus-0 3/3 Running 0 4m23s
</code></pre>
<p>I didn't find any configuration for node-exporter in the pod <code>prometheus-prometheus-prometheus-oper-prometheus-0</code> at <code>/etc/prometheus/prometheus.yml</code>.</p>
| <p>The previous advice to use Helm is very valid; I would also recommend it.</p>
<p>Regarding your issue: you are not scraping the nodes directly, you're using node-exporter for that. So <code>role: node</code> is incorrect; you should instead use <code>role: endpoints</code>. For that you also need to create a Service covering the pods of your DaemonSet.</p>
<p>Here is working example from my environment (installed by Helm):</p>
<pre><code>- job_name: monitoring/kube-prometheus-exporter-node/0
scrape_interval: 15s
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- monitoring
relabel_configs:
- source_labels: [__meta_kubernetes_service_label_app]
separator: ;
regex: exporter-node
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_endpoint_port_name]
separator: ;
regex: metrics
replacement: $1
action: keep
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: service
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_service_name]
separator: ;
regex: (.*)
target_label: job
replacement: ${1}
action: replace
- separator: ;
regex: (.*)
target_label: endpoint
replacement: metrics
action: replace
</code></pre>
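<p>The scrape config above only keeps endpoints whose Service carries the expected label and port name, so the DaemonSet needs a matching Service. A sketch adapted to the question's manifests (the label value and port name are assumptions and must line up with whatever your relabel rules keep):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: kube-monitoring
  labels:
    app: node-exporter   # must match the keep-regex in your relabel_configs
spec:
  clusterIP: None        # headless: endpoints resolve straight to the pods
  selector:
    app: node-exporter
  ports:
  - name: metrics        # the port *name* the relabel rules filter on
    port: 9100
    targetPort: 9100
</code></pre>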
|
<p>I am evaluating Apache airflow for production use in a data environment and I would like to know if with airflow you can run operators in self contained docker environments on an auto scaling Kubernetes cluster.</p>
<p>I found the following operator: <code>KubernetesPodOperator</code> which seems to do the job, but the only examples I have found have been on Google Cloud. I would like to run this on AWS, however I haven't found any examples of how this would be done. I believe AWS EKS or AWS fargate might fit the bill but not sure.</p>
<p>Can anyone with airflow experience please let me know if this is possible? I have looked online and haven't found anything clear yet. </p>
| <p>We have been using Fargate and Airflow in production and the experience so far has been good. </p>
<p>We have been using it for transient workloads and it is turning out to be cheaper for us than having a dedicated Kubernetes cluster. Also, there is no management overhead of any kind.</p>
<p><a href="https://github.com/ishan4488/airflow-fargate-example/tree/master/airflow-fargate-example" rel="nofollow noreferrer">Github — Airflow DAG with ECSOperatorConfig</a></p>
|
<p>How do I change system time (not timezone) for all containers deployed on Azure Kubernetes cluster?</p>
<p>Can this be changed from inside container / pods? I guess it should be changeable from host machine. How to do that?</p>
| <p>I don't believe this is possible.</p>
<p>Time comes from the underlying kernel and that is not something that you will be able to adjust from code that runs in a pod.</p>
<p>Even if you could, I suspect it would cause a whole heap of trouble; the pod time and api-server time would be inconsistent and that won't end well!</p>
|