<p>I am trying to delete "Provisioned Services", or rather ServiceInstances, in OpenShift 3.11. Because of a problem with two ClusterServiceClasses sharing the same name, I had to delete the old ClusterServiceClass. Unfortunately, that ClusterServiceClass still had some ServiceInstances. Now I cannot delete these ServiceInstances; they are only "marked for deletion". Why can't I delete these ServiceInstances?</p>
<p>I have already tried to delete the ServiceInstances using the <code>oc</code> utility, but that did not work either.</p>
| <p>After some research with a colleague I found out that the ServiceInstance has a finalizer. Unfortunately, the finalizer is normally removed by the ClusterServiceClass, which in this case no longer exists. To delete the ServiceInstance, the finalizer needs to be removed manually using <code>oc edit</code>.</p>
<p>Finalizer:</p>
<pre><code>metadata:
  finalizers:
  - kubernetes-incubator/service-catalog
</code></pre>
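<p>Instead of <code>oc edit</code>, the finalizer can also be cleared non-interactively with <code>oc patch</code>. A minimal sketch — the instance and project names are placeholders for your own:</p>

```shell
# Hypothetical names: replace "my-instance" and "my-project" with yours.
# Clearing the finalizer list lets the pending deletion complete.
PATCH='{"metadata":{"finalizers":[]}}'
if command -v oc >/dev/null 2>&1; then
  oc patch serviceinstance my-instance -n my-project --type merge -p "$PATCH"
fi
```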
|
<p><strong>Use case</strong></p>
<p>Let's say I've got 3 different node pools: <code>default</code>, <code>kafka</code> and <code>team-a</code>. I want to make sure that only Kafka-relevant deployments, plus things like daemonsets and Kubernetes system services, run on this node pool. I do so by simply adding a node selector to my Kafka deployments, so that they can only be scheduled on the kafka node pool:</p>
<pre><code>nodeSelector:
  cloud.google.com/gke-nodepool: kafka
</code></pre>
<p><strong>The problem</strong></p>
<p>When I have further deployments or statefulsets which <strong>do not</strong> have any node selector specified, they might get scheduled on that kafka node pool. Instead, I want all deployments without a node selector to be scheduled on my default node pool.</p>
<p><strong>Worded as generic question</strong></p>
<p>How can I make sure that all deployments & statefulsets without a node selector will be scheduled inside of a specific nodepool?</p>
| <p>Use a <code>taint</code> on the kafka node pool and a matching toleration on your Kafka workloads; pods without the toleration — i.e. everything else — will then be kept off that pool. Follow: <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</a></p>
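<p>As a sketch of the taint approach: taint the kafka pool (for example <code>kubectl taint nodes -l cloud.google.com/gke-nodepool=kafka dedicated=kafka:NoSchedule</code> — the key/value pair <code>dedicated=kafka</code> is just an assumed convention), then give only the Kafka workloads a matching toleration. Everything without the toleration lands on the untainted default pool:</p>

```yaml
# Pod template of the Kafka deployment/statefulset: the toleration lets it
# onto the tainted pool, the nodeSelector pins it there.
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: kafka
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "kafka"
    effect: "NoSchedule"
```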
|
<p>I am trying to cut the costs of running the kubernetes cluster on Google Cloud Platform.</p>
<p>I moved my node-pool to preemptible VM instances. I have 1 pod for Postgres and 4 nodes for web apps.
For Postgres, I've created StorageClass to make data persistent.</p>
<p>Surprisingly (or maybe not), all storage data was erased after a day.</p>
<p>How can I make a specific node in GCP not preemptible?
Or could you advise what to do in that situation?</p>
| <p>I guess I found a solution.</p>
<ol>
<li>Create a disk on gcloud via:</li>
</ol>
<pre><code>gcloud compute disks create --size=10GB postgres-disk
gcloud compute disks create --size=[SIZE] [NAME]
</code></pre>
<ol start="2">
<li>Delete any StorageClasses, PV, PVC</li>
<li>Configure deployment file:</li>
</ol>
<pre><code>apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
        role: postgres
    spec:
      containers:
      - name: postgres
        image: postgres
        env:
          ...
        ports:
          ...
        # Especially this part should be configured!
        volumeMounts:
        - name: postgres-persistent-storage
          mountPath: /var/lib/postgresql
      volumes:
      - name: postgres-persistent-storage
        gcePersistentDisk:
          # This GCE PD must already exist.
          pdName: postgres-disk
          fsType: ext4
</code></pre>
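<p>As a variant of the same idea, the pre-created disk can also be wrapped in a PersistentVolume whose reclaim policy is <code>Retain</code>, so the data survives even if a claim bound to it is deleted. A sketch (resource names are illustrative):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  # Retain keeps the underlying disk (and its data) when the claim goes away.
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: postgres-disk
    fsType: ext4
```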
|
<p>How to pass the <code>fullname</code> of a dependant chart into another chart in the <code>values.yaml</code>?</p>
<p>My <code>values.yaml</code> looks like this:</p>
<pre><code>##
## Prisma chart configuration
##
prisma:
  enabled: true
  image:
    pullPolicy: Always
  auth:
    enabled: true
    secret: scret
  database:
    host: {{ template "postgresql.fullname" . }}
    port: 5432
    password: dbpass
##
## PostgreSQL chart configuration
##
postgresql:
  enabled: true
  imagePullPolicy: Always
  postgresqlUsername: prisma
  postgresqlPassword: dbpass
  persistence:
    enabled: true
    storageClass: storage-0
</code></pre>
<p>In there, I need to pass the name of the <code>postgresql</code> instance to <code>prisma</code>.</p>
<p>If I try to install this, it gives me the following error:</p>
<pre><code>error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{"template \"postgresql.fullname\" .":interface {}(nil)}
</code></pre>
| <p>If your chart layout looks like this:</p>
<pre><code>charts
--- prisma
----- templates
------- prisma.yaml
----- values.yaml
--- postgresql
----- templates
------- postgresql.yaml
----- values.yaml
requirements.yaml
values.yaml
</code></pre>
<p>In prisma's <code>values.yaml</code>, define a default:</p>
<pre><code>dbhost: defaultdbhost
</code></pre>
<p>Then you can override it in the global <code>values.yaml</code>:</p>
<pre><code>prisma:
  dbhost: mydbhost
</code></pre>
<p>And in <code>prisma.yaml</code>, use:</p>
<pre><code>prisma:
  enabled: true
  image:
    pullPolicy: Always
  auth:
    enabled: true
    secret: scret
  database:
    host: {{ .Values.dbhost }}
    port: 5432
    password: dbpass
</code></pre>
<p>To understand how value overriding works, <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md" rel="nofollow noreferrer">read this document</a>.</p>
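<p>The underlying reason for the original error is worth stating: <code>values.yaml</code> is plain YAML and is <em>not</em> run through the template engine — only files under <code>templates/</code> are — so a <code>{{ template ... }}</code> call there is parsed as a literal, invalid map key. A sketch of the workaround in the top-level <code>values.yaml</code>; the value shown assumes the common <code>&lt;release-name&gt;-postgresql</code> naming, so verify the actual service name with <code>kubectl get svc</code>:</p>

```yaml
prisma:
  # Hard-code the rendered name here, since template calls are not
  # evaluated inside values files.
  dbhost: myrelease-postgresql
```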
|
<p><strong>Environment:</strong></p>
<p>I have:</p>
<p>1- <strong>NGINX Ingress controller version:</strong> 1.15.9, image: 0.23.0</p>
<p>2- <strong>Kubernetes version:</strong></p>
<blockquote>
<p>Client Version: version.Info{Major:"1", Minor:"13",
GitVersion:"v1.13.4",
GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1",
GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z",
GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}</p>
<p>Server Version: version.Info{Major:"1", Minor:"13",
GitVersion:"v1.13.4",
GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1",
GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z",
GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}</p>
</blockquote>
<p><strong>Cloud provider or hardware configuration:</strong> Virtual Machines on KVM</p>
<p><strong>OS (e.g. from /etc/os-release):</strong></p>
<blockquote>
<p>NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel
fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"</p>
<p>CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"</p>
</blockquote>
<p>Kernel (e.g. uname -a):</p>
<blockquote>
<p>Linux node01 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC
2019 x86_64 x86_64 x86_64 GNU/Linux</p>
</blockquote>
<p><strong>Install tools:</strong> kubeadm</p>
<p><strong>More details:</strong></p>
<p>CNI : WEAVE </p>
<p><strong>Setup:</strong></p>
<ol>
<li>2 Resilient HA Proxy, 3 Masters, 2 infra, and worker nodes.</li>
<li>I am exposing all the services as node ports, where the HA-Proxy re-assign them to a public virtual IP.</li>
<li>Dedicated project hosted on the infra node carrying the monitoring and logging tools (Grafana, Prometheus, EFK, etc)</li>
<li>Backend NFS storage as persistent storage</li>
</ol>
<p>What happened:
I want to use external names rather than node ports, so instead of accessing Grafana via vip + 3000, I want to access it via <a href="http://grafana.wild-card-dns-zone" rel="nofollow noreferrer">http://grafana.wild-card-dns-zone</a></p>
<p><strong>Deployment</strong></p>
<ol>
<li>I have created a new namespace called ingress </li>
<li>I deployed it as follows:</li>
</ol>
<pre><code>
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
      name: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        node-role.kubernetes.io/infra: infra
      terminationGracePeriodSeconds: 60
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        args:
        - /nginx-ingress-controller
        - --default-backend-service=ingress/ingress-controller-nginx-ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --v=3
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          # www-data -> 33
          runAsUser: 33
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.3.1
    component: default-backend
  name: ingress-controller-nginx-ingress-default-backend
  namespace: ingress
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: default-backend
      release: ingress-controller
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        component: default-backend
        release: ingress-controller
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: infra
      containers:
      - image: k8s.gcr.io/defaultbackend:1.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nginx-ingress-default-backend
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.3.1
    component: default-backend
  name: ingress-controller-nginx-ingress-default-backend
  namespace: ingress
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: nginx-ingress
    component: default-backend
    release: ingress-controller
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<nginx>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-nginx"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrolebinding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-rolebinding
  namespace: ingress
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress
</code></pre>
<p><strong>INGRESS SETUP:</strong></p>
<p><strong>services</strong></p>
<pre><code>
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-03-25T16:03:01Z"
  labels:
    app: jaeger
    app.kubernetes.io/component: query
    app.kubernetes.io/instance: jeager
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: jeager-query
    app.kubernetes.io/part-of: jaeger
  name: jeager-query
  namespace: monitoring-logging
  resourceVersion: "3055947"
  selfLink: /api/v1/namespaces/monitoring-logging/services/jeager-query
  uid: 778550f0-4f17-11e9-9078-001a4a16021e
spec:
  externalName: jaeger.example.com
  ports:
  - port: 16686
    protocol: TCP
    targetPort: 16686
  selector:
    app: jaeger
    app.kubernetes.io/component: query
    app.kubernetes.io/instance: jeager
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: jeager-query
    app.kubernetes.io/part-of: jaeger
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
---
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-03-25T15:40:30Z"
  labels:
    app: grafana
    chart: grafana-2.2.4
    heritage: Tiller
    release: grafana
  name: grafana
  namespace: monitoring-logging
  resourceVersion: "3053698"
  selfLink: /api/v1/namespaces/monitoring-logging/services/grafana
  uid: 51b9d878-4f14-11e9-9078-001a4a16021e
spec:
  externalName: grafana.example.com
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
    release: grafana
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
</code></pre>
<p><strong>INGRESS</strong> </p>
<p>Ingress 1</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: "true"
  creationTimestamp: "2019-03-25T21:13:56Z"
  generation: 1
  labels:
    app: jaeger
    app.kubernetes.io/component: query-ingress
    app.kubernetes.io/instance: jeager
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: jeager-query
    app.kubernetes.io/part-of: jaeger
  name: jaeger-query
  namespace: monitoring-logging
  resourceVersion: "3111683"
  selfLink: /apis/extensions/v1beta1/namespaces/monitoring-logging/ingresses/jaeger-query
  uid: e6347f6b-4f42-11e9-9e8e-001a4a16021c
spec:
  rules:
  - host: jaeger.example.com
    http:
      paths:
      - backend:
          serviceName: jeager-query
          servicePort: 16686
status:
  loadBalancer: {}
</code></pre>
<p>Ingress 2</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"labels":{"app":"grafana"},"name":"grafana","namespace":"monitoring-logging"},"spec":{"rules":[{"host":"grafana.example.com","http":{"paths":[{"backend":{"serviceName":"grafana","servicePort":3000}}]}}]}}
  creationTimestamp: "2019-03-25T17:52:40Z"
  generation: 1
  labels:
    app: grafana
  name: grafana
  namespace: monitoring-logging
  resourceVersion: "3071719"
  selfLink: /apis/extensions/v1beta1/namespaces/monitoring-logging/ingresses/grafana
  uid: c89d7f34-4f26-11e9-8c10-001a4a16021d
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
status:
  loadBalancer: {}
</code></pre>
<p><strong>EndPoints</strong></p>
<p>Endpoint 1</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: "2019-03-25T15:40:30Z"
  labels:
    app: grafana
    chart: grafana-2.2.4
    heritage: Tiller
    release: grafana
  name: grafana
  namespace: monitoring-logging
  resourceVersion: "3050562"
  selfLink: /api/v1/namespaces/monitoring-logging/endpoints/grafana
  uid: 51bb1f9c-4f14-11e9-9e8e-001a4a16021c
subsets:
- addresses:
  - ip: 10.42.0.15
    nodeName: kuinfra01.example.com
    targetRef:
      kind: Pod
      name: grafana-b44b4f867-bcq2x
      namespace: monitoring-logging
      resourceVersion: "1386975"
      uid: 433e3d21-4827-11e9-9e8e-001a4a16021c
  ports:
  - name: http
    port: 3000
    protocol: TCP
</code></pre>
<p>Endpoint 2</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: "2019-03-25T16:03:01Z"
  labels:
    app: jaeger
    app.kubernetes.io/component: service-query
    app.kubernetes.io/instance: jeager
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: jeager-query
    app.kubernetes.io/part-of: jaeger
  name: jeager-query
  namespace: monitoring-logging
  resourceVersion: "3114702"
  selfLink: /api/v1/namespaces/monitoring-logging/endpoints/jeager-query
  uid: 7786d833-4f17-11e9-9e8e-001a4a16021c
subsets:
- addresses:
  - ip: 10.35.0.3
    nodeName: kunode02.example.com
    targetRef:
      kind: Pod
      name: jeager-query-7d9775d8f7-2hwdn
      namespace: monitoring-logging
      resourceVersion: "3114693"
      uid: fdac9771-4f49-11e9-9e8e-001a4a16021c
  ports:
  - name: query
    port: 16686
    protocol: TCP
</code></pre>
<p>I am able to curl the endpoints from inside the ingress-controller pod:</p>
<pre><code># kubectl exec -it nginx-ingress-controller-5dd67f88cc-z2g8s -n ingress -- /bin/bash
www-data@nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ curl -k https://localhost
<a href="/login">Found</a>.
www-data@nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ curl http://localhost
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.9</center>
</body>
</html>
www-data@nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ exit
</code></pre>
<p>But from outside, when I try to reach jaeger.example.com or grafana.example.com, I get a 502 Bad Gateway and the following error log:</p>
<pre><code>10.39.0.0 - [10.39.0.0] - - [25/Mar/2019:16:40:32 +0000] "GET /search HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" 514 0.001 [monitoring-logging-jeager-query-16686] vip:16686, vip:16686, vip:16686 0, 0, 0 0.001, 0.000, 0.000 502, 502, 502 b7c813286fccf27fffa03eb6564edfd1
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
10.39.0.0 - [10.39.0.0] - - [25/Mar/2019:16:40:32 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "http://jeager.example.com/search" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" 494 0.001 [monitoring-logging-jeager-query-16686] vip:16686, vip:16686, vip:16686 0, 0, 0 0.000, 0.001, 0.000 502, 502, 502 9e582912614e67dfee6be1f679de5933
I0325 16:40:32.497868 8 socket.go:225] skiping metric for host jeager.example.com that is not being served
I0325 16:40:32.497886 8 socket.go:225] skiping metric for host jeager.example.com that is not being served
</code></pre>
| <p>First, thanks to cookiedough for the clue about the service issue. Later I hit a problem creating the service with an external name, and thanks to the user "Long" on Slack I found my mistake: I was using a Service of type ExternalName when it should have been type ClusterIP. Here are the steps that solved the problem (the HTTPS issue is a separate matter):</p>
<ol>
<li>Create a wildcard DNS zone pointing to the public IP.</li>
<li>Create the service with type ClusterIP.</li>
<li>In the namespace of the service, create an Ingress using the following example (YAML):</li>
</ol>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: grafana-namespace
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
</code></pre>
<p>Then apply it with <code>kubectl apply -f grafana-ingress.yaml</code>.
Now you can reach your Grafana at <a href="http://grafana.example.com" rel="nofollow noreferrer">http://grafana.example.com</a></p>
|
<p>I have a service that has HTTP Basic Auth. In front of it I have nginx Ingress, who also has basic-auth. How can I attach Authorization header with the credentials after Sign In with the Ingress, to achieve Single-Sign-On?</p>
<p>This is the configuration of my Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required
    nginx.ingress.kubernetes.io/auth-secret: kibana-user-basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
  name: kibana-user
  namespace: {{.Release.Namespace}}
spec:
  tls:
  - secretName: kibana-tls
    hosts:
    - {{.Values.ingress.user.host}}
  rules:
  - host: {{.Values.ingress.user.host}}
    http:
      paths:
      - backend:
          serviceName: kibana-logging
          servicePort: {{ .Values.kibana.service.internalPort }}
        path: /
</code></pre>
| <p>You could use the annotation <code>nginx.ingress.kubernetes.io/configuration-snippet: proxy_set_header Authorization $http_authorization;</code> to forward the <code>Authorization</code> header to the backend service.</p>
<p>The Ingress resource should look like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-realm: Authentication Required
    nginx.ingress.kubernetes.io/auth-secret: kibana-user-basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/configuration-snippet: "proxy_set_header Authorization $http_authorization;"
  name: kibana-user
  namespace: {{.Release.Namespace}}
spec:
  tls:
  - secretName: kibana-tls
    hosts:
    - {{.Values.ingress.user.host}}
  rules:
  - host: {{.Values.ingress.user.host}}
    http:
      paths:
      - backend:
          serviceName: kibana-logging
          servicePort: {{ .Values.kibana.service.internalPort }}
        path: /
</code></pre>
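<p>As a quick sanity check that the header actually reaches the backend, you can construct the Basic credential by hand — it is just the base64 encoding of <code>user:password</code> — and send it with curl (host and credentials below are placeholders):</p>

```shell
# Build the Basic credential for the hypothetical pair user:pass.
AUTH=$(printf 'user:pass' | base64)
echo "Authorization: Basic $AUTH"
# Then, against your real host:
#   curl -H "Authorization: Basic $AUTH" https://kibana.example.com/
```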
|
<p>If I start up a fresh clean empty minikube and <code>helm install</code> the latest <code>stable/prometheus-operator</code> with strictly default settings I see four active Prometheus alarms.</p>
<p>In this super simplified scenario where I have a clean fresh minikube that is running absolutely nothing other than Prometheus, there should be no problems and no alarms. Are these alarms bogus or broken? Is something wrong with my setup or should I submit a bug report and disable these alarms for the time being?</p>
<p>Here are my basic setup steps:</p>
<pre><code>minikube delete
# Any lower memory/cpu settings will experience problems
minikube start --memory 10240 --cpus 4 --kubernetes-version v1.12.2
eval $(minikube docker-env)
helm init
helm repo update
# wait a minute for Helm Tiller to start up.
helm install --name my-prom stable/prometheus-operator
</code></pre>
<p>Wait several minutes for everything to start up, then run port forwarding on Prometheus server and on Grafana:</p>
<pre><code>kubectl port-forward service/my-prom-prometheus-operato-prometheus 9090:9090
kubectl port-forward service/my-prom-grafana 8080:80
</code></pre>
<p>Then go to <code>http://localhost:9090/alerts</code> and see:</p>
<pre><code>DeadMansSwitch (1 active)
KubeControllerManagerDown (1 active)
KubeSchedulerDown (1 active)
TargetDown (1 active)
</code></pre>
<p>Are these bogus? Is something genuinely wrong? Should I disable these?</p>
<p>Two of these alarms are missing metrics:</p>
<ul>
<li>KubeControllerManagerDown: <code>absent(up{job="kube-controller-manager"} == 1)</code></li>
<li>KubeSchedulerDown: <code>absent(up{job="kube-scheduler"} == 1)</code></li>
</ul>
<p>In <code>http://localhost:9090/config</code>, I don't see either job configured, but I do see very closely related jobs with <code>job_name</code> values of <code>default/my-prom-prometheus-operato-kube-controller-manager/0</code> and <code>default/my-prom-prometheus-operato-kube-scheduler/0</code>. This suggests that <code>job_name</code> values are supposed to match and there is a bug where they do not match. I also don't see any collected metrics for either job. Are slashes allowed in job names?</p>
<p>The other two alarms:</p>
<ul>
<li>DeadMansSwitch: The alarm expression is <code>vector(1)</code>. I have no idea what this is.</li>
<li>TargetDown: This alarm is being triggered over <code>up{job="kubelet"}</code> which has two metric values, one up with a value of 1.0 and one down with a value of 0.0. The up value is for <code>endpoint="http-metrics"</code> and the down value is for <code>endpoint="cadvisor"</code>. Is that latter endpoint supposed to be up? Why wouldn't it be?</li>
</ul>
<p>I go to <code>http://localhost:9090/graph</code> and run <code>sum(up) by (job)</code> I see <code>1.0</code> values for all of:</p>
<pre><code>{job="node-exporter"}
{job="my-prom-prometheus-operato-prometheus"}
{job="my-prom-prometheus-operato-operator"}
{job="my-prom-prometheus-operato-alertmanager"}
{job="kubelet"}
{job="kube-state-metrics"}
{job="apiserver"}
</code></pre>
<p>fyi, <code>kubectl version</code> shows:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:16Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>The <a href="https://github.com/coreos/prometheus-operator/blob/773090e44e162e9508de0073efefc1152e6cf3d9/contrib/kube-prometheus/manifests/prometheus-rules.yaml#L870-L877" rel="noreferrer"><code>Watchdog</code></a> alert (formerly named as <code>DeadManSwitch</code>) is:</p>
<blockquote>
<p>An alert meant to ensure that the entire alerting pipeline is functional.
This alert is always firing, therefore it should always be firing in Alertmanager
and always fire against a receiver.</p>
</blockquote>
<p>In Minikube, the <code>kube-controller-manager</code> and <code>kube-scheduler</code> listen by default on <em>127.0.0.1</em>, so Prometheus cannot scrape metrics from them. You need to start Minikube with these components listening on all interfaces:</p>
<pre><code>minikube start --kubernetes-version v1.12.2 \
--bootstrapper=kubeadm \
--extra-config=scheduler.address=0.0.0.0 \
--extra-config=controller-manager.address=0.0.0.0
</code></pre>
<p>Another cause of <code>TargetDown</code> is that the default service selectors created by Prometheus Operator helm chart don’t match the labels used by Minikube components. You need to match them by setting the <code>kubeControllerManager.selector</code> and <code>kubeScheduler.selector</code> helm parameters.</p>
<p>Take a look at this article: <a href="https://medium.com/@eduardobaitello/trying-prometheus-operator-with-helm-minikube-b617a2dccfa3" rel="noreferrer">Trying Prometheus Operator with Helm + Minikube</a>. It addresses all these problems, how to solve them and much more.</p>
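<p>A sketch of the selector overrides mentioned above, as helm values. The parameter paths follow the answer; in some chart versions they live under <code>service.selector</code> instead, and the label values here are assumptions — check the real labels with <code>kubectl get pods -n kube-system --show-labels</code>:</p>

```yaml
# Values passed to the prometheus-operator chart, e.g.
#   helm install --name my-prom stable/prometheus-operator -f values.yaml
kubeControllerManager:
  selector:
    component: kube-controller-manager
kubeScheduler:
  selector:
    component: kube-scheduler
```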
|
<p>I have a service in Kubernetes which I have to expose on multiple ports over HTTP.
I use Nginx-Ingress and was able to expose my service over Port 80 successfully.(<code>http://serviceA.example.com --> service-a:80</code>)</p>
<p>However, I am not able to use a different port than 80 for HTTP.
How can I tell nginx-ingress to listen on port 7049 as well?</p>
<p>I've already tried to expose Port 7049 on the nginx Service and added the annotation <code>nginx.org/listen-ports: "80,7049"</code> to the nginx controller. Neither worked for me.</p>
<p>I expect the following output:</p>
<pre><code>http://serviceA.example.com --> service-a:80
http://serviceA.example.com:7049 --> service-a:7049
</code></pre>
<p>ingress-service.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
</code></pre>
<p>my-service.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  - port: 7049
    targetPort: 7049
    name: symbols
  selector:
    app: my-service
</code></pre>
<p>my-service-ingress.yml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service
spec:
  rules:
  - host: myservice.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>
| <p>The Ingress object is used to expose applications <strong>only</strong> for HTTP and HTTPS traffic. </p>
<blockquote>
<p>Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">what-is-ingress</a></p>
<p>You can have different types of routing, such as path-based or hostname-based routing, but the port for nginx will be 80 or 443.</p>
<p>If you want to expose your application on a port other than 80 or 443, you need to use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> type service</p>
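<p>Following that advice, one option is a second Service of type <code>LoadBalancer</code> that forwards the extra port straight to the backend, bypassing the Ingress entirely. A minimal sketch (the service name here is illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service-symbols
spec:
  type: LoadBalancer
  ports:
  - port: 7049
    targetPort: 7049
    protocol: TCP
    name: symbols
  selector:
    app: my-service
</code></pre>
<p>This keeps the existing Ingress for port 80/443 traffic and provisions a separate cloud load balancer for the non-standard port.</p>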
|
<p>Docker has a mechanism for retrieving Docker registry passwords from a remote store, instead of just storing them in a config file - this mechanism is called a <a href="https://docs.docker.com/engine/reference/commandline/login/#credentials-store" rel="nofollow noreferrer">Credentials Store</a>. It has a similar mechanism, called <a href="https://docs.docker.com/engine/reference/commandline/login/#credential-helpers" rel="nofollow noreferrer">Credential Helpers</a>, that is used to retrieve a password for a specific registry.</p>
<p>Basically, it involves defining a value in <code>~/.docker/config.json</code> that is interpreted as the name of an executable.</p>
<pre><code>{
"credsStore": "osxkeychain"
}
</code></pre>
<p>The value of the <code>credsStore</code> key has the prefix <code>docker-credential-</code> prepended to it, and if that executable (e.g. <code>docker-credential-osxkeychain</code>) exists on the path then it will be executed and is expected to echo the username and password to <code>stdout</code>, which Docker will use to log in to a private registry. The idea is that the executable reaches out to a store and retrieves your password for you, so you don't have to have lots of files lying around in your cluster with your username/password encoded in them.</p>
<p>I can't get a Kubernetes kubelet to make use of this credential store. It seems to just ignore it and when Kubernetes attempts to download from a private registry I get a "no basic auth credentials" error. If I just have a <code>config.json</code> with the username / password in it then kubelet works ok.</p>
<p>Does Kubernetes support Docker credential stores/credential helpers and if so, how do I get them to work?</p>
<p>For reference, kubelet is running through <code>systemd</code>, the credential store executable is on the path and the <code>config.json</code> file is being read.</p>
| <p>As of the time of writing, Kubernetes v1.14 does not support credential helpers, as per the official docs: <a href="https://kubernetes.io/docs/concepts/containers/images/#configuring-nodes-to-authenticate-to-a-private-registry" rel="noreferrer">Configuring Nodes to Authenticate to a Private Registry</a></p>
<blockquote>
<p>Note: Kubernetes as of now only supports the auths and HttpHeaders section of docker config. This means credential helpers (credHelpers or credsStore) are not supported.</p>
</blockquote>
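<p>Since only a literal <code>auths</code> section is honoured, the workaround is to materialise the credentials the helper would normally fetch into a static <code>config.json</code>. The <code>auth</code> field is just base64 of <code>username:password</code> - a quick sketch of generating that section (the registry host and credentials below are placeholders):</p>

```python
import base64
import json

# Kubelet only reads a literal "auths" entry; its "auth" field is
# base64("username:password") -- the values here are placeholders.
user, password = "myuser", "mypassword"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
config = {"auths": {"registry.example.com": {"auth": token}}}
print(json.dumps(config, indent=2))
```

<p>The resulting JSON can be written to the kubelet's Docker config file, or supplied to the cluster as an image pull secret.</p>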
|
<p>I created a secret in kubernetes using the command below - </p>
<pre><code>kubectl create secret generic -n mynamespace test --from-file=a.txt
</code></pre>
<p>Now I try to view it using the commands below, but am unsuccessful - </p>
<pre><code>kubectl describe secrets/test
kubectl get secret test -o json
</code></pre>
<p>This is the error I get in either case -</p>
<pre><code>Error from server (NotFound): secrets "test" not found
</code></pre>
<p>What can be the cause? I am using GCP for the kubernetes setup. Can the trial version of GCP be the cause for it?</p>
| <p>Try to access the secret in the namespace where it was created:</p>
<pre><code>kubectl -n mynamespace describe secrets/test
kubectl -n mynamespace get secret test -o json
</code></pre>
|
<p>I'm trying to setup some basic Authorization and Authentication for various users to access a shared K8s cluster.</p>
<p><em>Requirement</em>: Multiple users can have access to multiple namespaces, with a separate set of certs and keys for each of them.</p>
<p><em>Proposal</em>:</p>
<pre><code>openssl genrsa -out $PRIV_KEY 2048
# Generate CSR
openssl req -new -key $PRIV_KEY -out $CSR -subj "/CN=$USER"
# Create k8s CSR
K8S_CSR=user-request-$USER-$NAMESPACE_NAME-admin
cat <<EOF >./$K8S_CSR.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: $K8S_CSR
spec:
groups:
- system:authenticated
request: $(cat $CSR | base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- client auth
EOF
kubectl create -n $NAMESPACE_NAME -f $K8S_CSR.yaml
# Approve K8s CSR
kubectl certificate approve $K8S_CSR
# Fetch User Certificate
kubectl get csr $K8S_CSR -o jsonpath='{.status.certificate}' | base64 -d > $USER-$NAMESPACE_NAME-admin.crt
# Create Admin Role Binding
kubectl create rolebinding $NAMESPACE_NAME-$USER-admin-binding --clusterrole=admin --user=$USER --namespace=$NAMESPACE_NAME
</code></pre>
<p><em>Problem</em>:
The user cert and/or private key are not specific to this namespace.
If I just create another rolebinding for the same user in a different namespace, they will be able to authenticate. How do I prevent that from happening?</p>
| <p>The CA in Kubernetes is a cluster-wide CA (not namespaced) so you won't be able to create certs tied to a specific namespace. </p>
<p><code>system:authenticated</code> and <code>system:unauthenticated</code> are built-in groups in Kubernetes that identify whether the requesting user is authenticated. And you <a href="https://stackoverflow.com/a/44764979/2989261">can't directly manage groups or users in Kubernetes</a>. You will have to configure an alternative cluster authentication method to take advantage of users and groups, for example a <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-token-file" rel="nofollow noreferrer">static token file</a> or <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens" rel="nofollow noreferrer">OpenID</a></p>
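<p>For reference, a static token file is just a CSV passed to the API server via <code>--token-auth-file</code>, one line per user: token, user name, user UID, and optional quoted groups. A sketch with placeholder values:</p>
<pre><code>31ada4fd-adec-460c-809a-9e56ceb75269,alice,1001,"team-a,team-b"
</code></pre>
<p>Users then authenticate with <code>Authorization: Bearer &lt;token&gt;</code>, and the groups listed here can be referenced in <code>RoleBindings</code>.</p>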
<p>Then you can restrict users or groups defined in your identity provider to a <code>Role</code> that doesn't allow them to create another <code>Role</code> or <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer"><code>RoleBinding</code></a>; that way they are not able to give themselves access to other namespaces, and only the cluster-admin decides which <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer"><code>RoleBindings</code></a> (or namespaces) that specific user is part of.</p>
<p>For example, in your <code>Role</code>:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: mynamespace
name: myrole
rules:
- apiGroups: ["extensions", "apps"]
resources: ["deployments"] <== never include role, clusterrole, rolebinding, and clusterrolebinding.
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>Another alternative is to restrict access based on a <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-token-file" rel="nofollow noreferrer">Service Account Token</a> by creating a <code>RoleBinding</code> to the service account.</p>
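<p>A sketch of that service-account route (all names here are illustrative): a service account's token only grants access where a binding references it, so the resulting credentials are naturally namespace-scoped.</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-a-admin
  namespace: mynamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admin-binding
  namespace: mynamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: team-a-admin
  namespace: mynamespace
</code></pre>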
|
<p>I'm using Apache Ignite .Net v2.7. While resolving another issue (<a href="https://stackoverflow.com/questions/55388489/how-to-use-tcpdiscoverykubernetesipfinder-in-apache-ignite-net/55392149">How to use TcpDiscoveryKubernetesIpFinder in Apache Ignite .Net</a>) I've added a Spring configuration file where the Kubernetes configuration is specified. All other configuration goes in the C# code. </p>
<p>The content of the config file is below (taken from the docs): </p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="ignite"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
</code></pre>
<p>The C# code which references the file is as follows:</p>
<pre><code>var igniteConfig = new IgniteConfiguration
{
SpringConfigUrl = "./kubernetes.config",
JvmClasspath = string.Join(";",
new string[] {
"ignite-kubernetes-2.7.0.jar",
"jackson-core-2.9.6.jar",
"jackson-databind-2.9.6.jar"
}
.Select(c => System.IO.Path.Combine(Environment.CurrentDirectory, "Libs", c)))}
</code></pre>
<p>The Ignite node starts fine locally but when deployed to a Kubernetes cluster, it fails with this error: </p>
<pre><code> INFO: Loading XML bean definitions from URL [file:/app/./kubernetes.config]
Mar 28, 2019 10:43:55 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.context.support.GenericApplicationContext@1bc6a36e: startup date [Thu Mar 28 22:43:55 UTC 2019]; root of context hierarchy
Unhandled Exception: Apache.Ignite.Core.Common.IgniteException: Java exception occurred [class=java.lang.NoSuchFieldError, message=logger] ---> Apache.Ignite.Core.Com
mon.JavaException: java.lang.NoSuchFieldError: logger
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:727)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:381)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
at org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
at org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:751)
at org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:809)
at org.apache.ignite.internal.processors.platform.PlatformIgnition.configuration(PlatformIgnition.java:153)
at org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:68)
at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.ExceptionCheck()
at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.IgnitionStart(Env env, String cfgPath, String gridName, Boolean clientMode, Boolean userLogger, Int64 igniteId,
Boolean redirectConsole)
at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
--- End of inner exception stack trace ---
at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
at UtilityClick.ProductService.Initializer.<>c__DisplayClass0_0.<Init>b__1(IServiceProvider sp) in /src/ProductService/Initializer.cs:line 102
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitFactory(FactoryCallSite factoryCallSite, ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitScoped(ScopedCallSite scopedCallSite, ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteRuntimeResolver.VisitSingleton(SingletonCallSite singletonCallSite, ServiceProviderEngineScope sc
ope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSite(IServiceCallSite callSite, TArgument argument)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.DynamicServiceProviderEngine.<>c__DisplayClass1_0.<RealizeService>b__0(ServiceProviderEngineScope scope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.GetService(Type serviceType, ServiceProviderEngineScope serviceProviderEngineScope)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngine.GetService(Type serviceType)
at Microsoft.Extensions.DependencyInjection.ServiceProvider.GetService(Type serviceType)
at Microsoft.Extensions.DependencyInjection.ServiceProviderServiceExtensions.GetService[T](IServiceProvider provider)
at UtilityClick.ProductService.Initializer.Init(IServiceCollection serviceCollection) in /src/ProductService/Initializer.cs:line 123
at UtilityClick.ProductService.ApiStartup.ConfigureServices(IServiceCollection services) in /src/ProductService/ApiStartup.cs:line 50
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.ConfigureServices(IServiceCollection services)
at Microsoft.AspNetCore.Hosting.Internal.WebHost.EnsureApplicationServices()
at Microsoft.AspNetCore.Hosting.Internal.WebHost.Initialize()
at Microsoft.AspNetCore.Hosting.WebHostBuilder.Build()
at UtilityClick.ProductService.Program.Main(String[] args) in /src/ProductService/Program.cs:line 14
</code></pre>
<p>Do you have any idea why it might happen? Which logger is it complaining about? </p>
<p>Kubernetes is running Linux containers, locally I'm using Windows 10. </p>
<p>Two more observations: </p>
<ol>
<li><p>When I specify the config file name as <code>kubernetes.config</code>, the node launches successfully locally, but in Kubernetes it fails with an error which suggests that the URL "kubernetes.config" is missing the schema part.</p></li>
<li><p>JvmClasspath: I have to add "ignite-kubernetes-2.7.0.jar", otherwise the JAR is not found although it is located in the same dir as the rest of the Ignite classes. Adding the next two entries does not make any difference.</p></li>
</ol>
| <p>It looks like you have different versions of <code>spring-core</code> and <code>spring-beans</code> in your Java classpath for some reason. There's no reason for them not to match exactly. The full classpath should be printed somewhere; you can look it up.</p>
<p>In 2.7 Apache Ignite ships a differing version of the Spring libs in the <code>ignite-spring-data_2.0</code> submodule. Maybe it got into your classpath by accident? Please remove that dir for good - you'll not need it when using .Net.</p>
<p><em>UPD:</em> With your reproducer project, it looks like I'm starting successfully:</p>
<pre><code>
Directories in the working directory:
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/libs
Files in the working directory:
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.dll
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/log4net.config
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.deps.json
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.pdb
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.runtimeconfig.dev.json
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/appsettings.json
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/ApacheIgniteNetKubernetesExample.runtimeconfig.json
/home/gridgain/w/ignitenet-kubernetes-example/ApacheIgniteNetKubernetesExample/bin/Debug/netcoreapp2.2/appsettings.development.json
log4j:WARN No appenders could be found for logger (org.springframework.core.env.StandardEnvironment).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2019-04-03 17:29:02,095][INFO ][main][IgniteKernal]
>>> __________ ________________
>>> / _/ ___/ |/ / _/_ __/ __/
>>> _/ // (7 7 // / / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/
>>>
>>> ver. 2.7.0#20181130-sha1:256ae401
>>> 2018 Copyright(C) Apache Software Foundation
>>>
>>> Ignite documentation: http://ignite.apache.org
[2019-04-03 17:29:02,096][INFO ][main][IgniteKernal] Config URL: n/a
[2019-04-03 17:29:02,110][INFO ][main][IgniteKernal] IgniteConfiguration [igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, igfsPoolSize=8, dataStreamerPoolSize=8, utilityCachePoolSize=8, utilityCacheKeepAliveTime=60000, p2pPoolSize=2, qryPoolSize=8, igniteHome=/home/gridgain/Downloads/apache-ignite-2.7.0-bin, igniteWorkDir=/home/gridgain/Downloads/apache-ignite-2.7.0-bin/work, mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@3e58a80e, nodeId=f5a4c49b-82c9-44df-ba8b-5ff97cad0a1f, marsh=BinaryMarshaller [], marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000, sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=10000, metricsUpdateFreq=2000, metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null], segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=10000, commSpi=TcpCommunicationSpi [connectGate=null, connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@4cc8eb05, enableForcibleNodeKill=false, enableTroubleshootingLog=false, locAddr=null, locHost=null, locPort=47100, locPortRange=100, shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=600000, connTimeout=5000, maxConnTimeout=600000, reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=null, shmemSrv=null, usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteTimeout=2000, boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null, ctxInitLatch=java.util.concurrent.CountDownLatch@51f116b8[Count = 1], stopping=false], evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@19d481b, 
colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [], indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7690781, addrRslvr=null, encryptionSpi=org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi@77eca502, clientMode=false, rebalanceThreadPoolSize=1, txCfg=TransactionConfiguration [txSerEnabled=false, dfltIsolation=REPEATABLE_READ, dfltConcurrency=PESSIMISTIC, dfltTxTimeout=0, txTimeoutOnPartitionMapExchange=0, pessimisticTxLogSize=0, pessimisticTxLogLinger=10000, tmLookupClsName=null, txManagerFactory=null, useJtaSync=false], cacheSanityCheckEnabled=true, discoStartupDelay=60000, deployMode=SHARED, p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100, failureDetectionTimeout=10000, sysWorkerBlockedTimeout=null, clientFailureDetectionTimeout=30000, metricsLogFreq=60000, hadoopCfg=null, connectorCfg=ConnectorConfiguration [jettyPath=null, host=null, port=11211, noDelay=true, directBuf=false, sndBufSize=32768, rcvBufSize=32768, idleQryCurTimeout=600000, idleQryCurCheckFreq=60000, sndQueueLimit=0, selectorCnt=4, idleTimeout=7000, sslEnabled=false, sslClientAuth=false, sslCtxFactory=null, sslFactory=null, portRange=100, threadPoolSize=8, msgInterceptor=null], odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null, grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=PlatformDotNetConfiguration [binaryCfg=null], binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040, sysRegionMaxSize=104857600, pageSize=0, concLvl=0, dfltDataRegConf=DataRegionConfiguration [name=default, maxSize=6720133529, initSize=268435456, swapPath=null, pageEvictionMode=DISABLED, evictionThreshold=0.9, emptyPagesPoolSize=100, metricsEnabled=false, metricsSubIntervalCount=5, metricsRateTimeInterval=60000, persistenceEnabled=false, checkpointPageBufSize=0], dataRegions=null, storagePath=null, checkpointFreq=180000, 
lockWaitTime=10000, checkpointThreads=4, checkpointWriteOrder=SEQUENTIAL, walHistSize=20, maxWalArchiveSize=1073741824, walSegments=10, walSegmentSize=67108864, walPath=db/wal, walArchivePath=db/wal/archive, metricsEnabled=false, walMode=LOG_ONLY, walTlbSize=131072, walBuffSize=0, walFlushFreq=2000, walFsyncDelay=1000, walRecordIterBuffSize=67108864, alwaysWriteFullPages=false, fileIOFactory=org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory@59af0466, metricsSubIntervalCnt=5, metricsRateTimeInterval=60000, walAutoArchiveAfterInactivity=-1, writeThrottlingEnabled=false, walCompactionEnabled=false, walCompactionLevel=1, checkpointReadLockTimeout=null], activeOnStart=true, autoActivation=true, longQryWarnTimeout=3000, sqlConnCfg=null, cliConnCfg=ClientConnectorConfiguration [host=null, port=10800, portRange=100, sockSndBufSize=0, sockRcvBufSize=0, tcpNoDelay=true, maxOpenCursorsPerConn=128, threadPoolSize=8, idleTimeout=0, jdbcEnabled=true, odbcEnabled=true, thinCliEnabled=true, sslEnabled=false, useIgniteSslCtxFactory=true, sslClientAuth=false, sslCtxFactory=null], mvccVacuumThreadCnt=2, mvccVacuumFreq=5000, authEnabled=false, failureHnd=null, commFailureRslvr=null]
[2019-04-03 17:29:02,110][INFO ][main][IgniteKernal] Daemon mode: off
[2019-04-03 17:29:02,110][INFO ][main][IgniteKernal] OS: Linux 4.15.0-46-generic amd64
[2019-04-03 17:29:02,110][INFO ][main][IgniteKernal] OS user: gridgain
[2019-04-03 17:29:02,111][INFO ][main][IgniteKernal] PID: 8539
[2019-04-03 17:29:02,111][INFO ][main][IgniteKernal] Language runtime: Java Platform API Specification ver. 1.8
[2019-04-03 17:29:02,111][INFO ][main][IgniteKernal] VM information: Java(TM) SE Runtime Environment 1.8.0_144-b01 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.144-b01
[2019-04-03 17:29:02,112][INFO ][main][IgniteKernal] VM total memory: 0.48GB
[2019-04-03 17:29:02,112][INFO ][main][IgniteKernal] Remote Management [restart: off, REST: on, JMX (remote: off)]
[2019-04-03 17:29:02,113][INFO ][main][IgniteKernal] Logger: Log4JLogger [quiet=false, config=/home/gridgain/Downloads/apache-ignite-2.7.0-bin/config/ignite-log4j.xml]
[2019-04-03 17:29:02,113][INFO ][main][IgniteKernal] IGNITE_HOME=/home/gridgain/Downloads/apache-ignite-2.7.0-bin
[2019-04-03 17:29:02,113][INFO ][main][IgniteKernal] VM arguments: [-Djava.net.preferIPv4Stack=true, -Xms512m, -Xmx512m, -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true, -DIGNITE_QUIET=false]
[2019-04-03 17:29:02,113][INFO ][main][IgniteKernal] System cache's DataRegion size is configured to 40 MB. Use DataStorageConfiguration.systemRegionInitialSize property to change the setting.
[2019-04-03 17:29:02,119][INFO ][main][IgniteKernal] Configured caches [in 'sysMemPlc' dataRegion: ['ignite-sys-cache']]
[2019-04-03 17:29:02,122][INFO ][main][IgniteKernal] 3-rd party licenses can be found at: /home/gridgain/Downloads/apache-ignite-2.7.0-bin/libs/licenses
[2019-04-03 17:29:02,123][INFO ][main][IgniteKernal] Local node user attribute [service=ProductService]
[2019-04-03 17:29:02,157][INFO ][main][IgnitePluginProcessor] Configured plugins:
[2019-04-03 17:29:02,157][INFO ][main][IgnitePluginProcessor] ^-- None
[2019-04-03 17:29:02,157][INFO ][main][IgnitePluginProcessor]
[2019-04-03 17:29:02,158][INFO ][main][FailureProcessor] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]]]
[2019-04-03 17:29:02,186][INFO ][main][TcpCommunicationSpi] Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
[2019-04-03 17:29:07,195][WARN ][main][TcpCommunicationSpi] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[2019-04-03 17:29:07,232][WARN ][main][NoopCheckpointSpi] Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
[2019-04-03 17:29:07,254][WARN ][main][GridCollisionManager] Collision resolution is disabled (all jobs will be activated upon arrival).
[2019-04-03 17:29:07,293][INFO ][main][IgniteKernal] Security status [authentication=off, tls/ssl=off]
[2019-04-03 17:29:07,430][WARN ][main][IgniteCacheDatabaseSharedManager] DataRegionConfiguration.maxWalArchiveSize instead DataRegionConfiguration.walHistorySize would be used for removing old archive wal files
[2019-04-03 17:29:07,444][INFO ][main][PartitionsEvictManager] Evict partition permits=2
[2019-04-03 17:29:07,593][INFO ][main][ClientListenerProcessor] Client connector processor has started on TCP port 10800
[2019-04-03 17:29:07,624][INFO ][main][GridTcpRestProtocol] Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
[2019-04-03 17:29:07,643][WARN ][main][PlatformProcessorImpl] Marshaller is automatically set to o.a.i.i.binary.BinaryMarshaller (other nodes must have the same marshaller type).
[2019-04-03 17:29:07,675][INFO ][main][IgniteKernal] Non-loopback local IPs: 172.17.0.1, 172.25.4.188
[2019-04-03 17:29:07,675][INFO ][main][IgniteKernal] Enabled local MACs: 0242929A3D04, D481D72208BB
[2019-04-03 17:29:07,700][INFO ][main][TcpDiscoverySpi] Connection check threshold is calculated: 10000
[2019-04-03 17:29:07,702][INFO ][main][TcpDiscoverySpi] Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=f5a4c49b-82c9-44df-ba8b-5ff97cad0a1f]
[2019-04-03 17:29:07,844][ERROR][main][TcpDiscoverySpi] Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries).
class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses.
at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
</code></pre>
<p>(but that's expected)</p>
<p>Have you tried moving <code>ignite-kubernetes</code> from <code>libs/optional/</code> to <code>libs/</code> instead of manually adding its JARs to the classpath? Do you have anything else in your libs/?</p>
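<p>If the stray Spring version did come from <code>libs/optional</code>, the fix is to enable the Kubernetes module properly and drop the conflicting submodule. A sketch (a scratch directory stands in for the real install here - point <code>IGNITE_HOME</code> at your actual Ignite directory instead):</p>

```shell
# Scratch layout standing in for a real install; set IGNITE_HOME to use yours.
IGNITE_HOME=${IGNITE_HOME:-/tmp/ignite-demo}
mkdir -p "$IGNITE_HOME/libs/optional/ignite-kubernetes" \
         "$IGNITE_HOME/libs/optional/ignite-spring-data_2.0"

# Move the optional Kubernetes module onto the default classpath...
mv "$IGNITE_HOME/libs/optional/ignite-kubernetes" "$IGNITE_HOME/libs/"
# ...and remove the submodule that ships its own conflicting Spring jars.
rm -rf "$IGNITE_HOME/libs/optional/ignite-spring-data_2.0"
ls "$IGNITE_HOME/libs"
```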
|
<p>I used the default installation (kubernetes on docker for windows):</p>
<pre><code>helm install --name mymssql stable/mssql-linux --set acceptEula.value=Y --set edition.value=Developer
</code></pre>
<p>I can see that persistent volume exists
<a href="https://i.stack.imgur.com/77aIF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/77aIF.png" alt="I can see that persistent volume exists "></a></p>
<p>but
<a href="https://i.stack.imgur.com/HYfyD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HYfyD.png" alt="enter image description here"></a></p>
<hr>
<p>edit:</p>
<p><strong>kubectl describe pvc</strong></p>
<pre><code>Name: mymssql-mssql-linux-backup
Namespace: default
StorageClass: hostpath
Status: Bound
Volume: pvc-0a556593-3fa4-11e9-a695-00155dd56102
Labels: app=mymssql-mssql-linux
chart=mssql-linux-0.7.0
heritage=Tiller
release=mymssql
Annotations: control-plane.alpha.kubernetes.io/leader={"holderIdentity":"b34171f9-3f99-11e9-b4b6-f496341cc924","leaseDurationSeconds":15,"acquireTime":"2019-03-06T00:08:59Z","renewTime":"2019-03-06T00:09:01Z","lea...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
volume.beta.kubernetes.io/storage-provisioner=docker.io/hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 24m (x3 over 24m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
Normal Provisioning 24m docker.io/hostpath DESKTOP-BH16SJ1 b34171f9-3f99-11e9-b4b6-f496341cc924 External provisioner is provisioning volume for claim "default/mymssql-mssql-linux-backup"
Normal ProvisioningSucceeded 24m docker.io/hostpath DESKTOP-BH16SJ1 b34171f9-3f99-11e9-b4b6-f496341cc924 Successfully provisioned volume pvc-0a556593-3fa4-11e9-a695-00155dd56102
Name: mymssql-mssql-linux-data
Namespace: default
StorageClass: hostpath
Status: Bound
Volume: pvc-0a56634b-3fa4-11e9-a695-00155dd56102
Labels: app=mymssql-mssql-linux
chart=mssql-linux-0.7.0
heritage=Tiller
release=mymssql
Annotations: control-plane.alpha.kubernetes.io/leader={"holderIdentity":"b34171f9-3f99-11e9-b4b6-f496341cc924","leaseDurationSeconds":15,"acquireTime":"2019-03-06T00:08:59Z","renewTime":"2019-03-06T00:09:02Z","lea...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
volume.beta.kubernetes.io/storage-provisioner=docker.io/hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 24m (x3 over 24m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
Normal Provisioning 24m docker.io/hostpath DESKTOP-BH16SJ1 b34171f9-3f99-11e9-b4b6-f496341cc924 External provisioner is provisioning volume for claim "default/mymssql-mssql-linux-data"
Normal ProvisioningSucceeded 24m docker.io/hostpath DESKTOP-BH16SJ1 b34171f9-3f99-11e9-b4b6-f496341cc924 Successfully provisioned volume pvc-0a56634b-3fa4-11e9-a695-00155dd56102
Name: mymssql-mssql-linux-master
Namespace: default
StorageClass: hostpath
Status: Bound
Volume: pvc-0a574f80-3fa4-11e9-a695-00155dd56102
Labels: app=mymssql-mssql-linux
chart=mssql-linux-0.7.0
heritage=Tiller
release=mymssql
Annotations: control-plane.alpha.kubernetes.io/leader={"holderIdentity":"b34171f9-3f99-11e9-b4b6-f496341cc924","leaseDurationSeconds":15,"acquireTime":"2019-03-06T00:08:59Z","renewTime":"2019-03-06T00:09:02Z","lea...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
volume.beta.kubernetes.io/storage-provisioner=docker.io/hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 24m (x4 over 24m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
Normal Provisioning 24m docker.io/hostpath DESKTOP-BH16SJ1 b34171f9-3f99-11e9-b4b6-f496341cc924 External provisioner is provisioning volume for claim "default/mymssql-mssql-linux-master"
Normal ProvisioningSucceeded 24m docker.io/hostpath DESKTOP-BH16SJ1 b34171f9-3f99-11e9-b4b6-f496341cc924 Successfully provisioned volume pvc-0a574f80-3fa4-11e9-a695-00155dd56102
Name: mymssql-mssql-linux-translog
Namespace: default
StorageClass: hostpath
Status: Bound
Volume: pvc-0a587716-3fa4-11e9-a695-00155dd56102
Labels: app=mymssql-mssql-linux
chart=mssql-linux-0.7.0
heritage=Tiller
release=mymssql
Annotations: control-plane.alpha.kubernetes.io/leader={"holderIdentity":"b34171f9-3f99-11e9-b4b6-f496341cc924","leaseDurationSeconds":15,"acquireTime":"2019-03-06T00:08:59Z","renewTime":"2019-03-06T00:09:02Z","lea...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
volume.beta.kubernetes.io/storage-provisioner=docker.io/hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 24m (x4 over 24m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
Normal Provisioning 24m docker.io/hostpath DESKTOP-BH16SJ1 b34171f9-3f99-11e9-b4b6-f496341cc924 External provisioner is provisioning volume for claim "default/mymssql-mssql-linux-translog"
Normal ProvisioningSucceeded 24m docker.io/hostpath DESKTOP-BH16SJ1 b34171f9-3f99-11e9-b4b6-f496341cc924 Successfully provisioned volume pvc-0a587716-3fa4-11e9-a695-00155dd56102
</code></pre>
<p><strong>kubectl describe pv:</strong></p>
<pre><code>Name: pvc-0a556593-3fa4-11e9-a695-00155dd56102
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by=docker.io/hostpath
Finalizers: [kubernetes.io/pv-protection]
StorageClass: hostpath
Status: Bound
Claim: default/mymssql-mssql-linux-backup
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /host_mnt/c/Users/User/.docker/Volumes/mymssql-mssql-linux-backup/pvc-0a556593-3fa4-11e9-a695-00155dd56102
HostPathType:
Events: <none>
Name: pvc-0a56634b-3fa4-11e9-a695-00155dd56102
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by=docker.io/hostpath
Finalizers: [kubernetes.io/pv-protection]
StorageClass: hostpath
Status: Bound
Claim: default/mymssql-mssql-linux-data
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /host_mnt/c/Users/User/.docker/Volumes/mymssql-mssql-linux-data/pvc-0a56634b-3fa4-11e9-a695-00155dd56102
HostPathType:
Events: <none>
Name: pvc-0a574f80-3fa4-11e9-a695-00155dd56102
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by=docker.io/hostpath
Finalizers: [kubernetes.io/pv-protection]
StorageClass: hostpath
Status: Bound
Claim: default/mymssql-mssql-linux-master
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /host_mnt/c/Users/User/.docker/Volumes/mymssql-mssql-linux-master/pvc-0a574f80-3fa4-11e9-a695-00155dd56102
HostPathType:
Events: <none>
Name: pvc-0a587716-3fa4-11e9-a695-00155dd56102
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by=docker.io/hostpath
Finalizers: [kubernetes.io/pv-protection]
StorageClass: hostpath
Status: Bound
Claim: default/mymssql-mssql-linux-translog
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /host_mnt/c/Users/User/.docker/Volumes/mymssql-mssql-linux-translog/pvc-0a587716-3fa4-11e9-a695-00155dd56102
HostPathType:
Events: <none>
</code></pre>
<p><strong>kubectl describe storageclass</strong></p>
<pre><code>Name: hostpath
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: docker.io/hostpath
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
</code></pre>
| <p>I tried to make this work on Kubernetes on Docker for Windows and was not able to complete it. There seems to be a problem with volumes on Windows. I tried tinkering with the paths, with no real effect. After some time I noticed a problem with the size of the volumes, so I downloaded the chart and added 1Gi to the PVs, but then I got an error about broken paths. Still no luck.
Since I am no expert in Windows and this looks like either a Windows or an application problem, I can only advise running the chart another way (I tested it on Minikube on Ubuntu, on GKE, and on Compute Engine instances joined with kubeadm, and the chart works like a charm without any additional configuration).</p>
<p>If you want to keep trying, run <code>kubectl logs pod_name</code> and follow the problems described there.</p>
|
<p>community:</p>
<p>I used kubeadm to set up a kubernetes.</p>
<p>I used a YAML file to create serviceaccount, role and rolebindings to the serviceaccount.</p>
<p>Then I curl the pods in the default namespace, the kubernetes always returns "Unauthorized"</p>
<p>I do not know what exactly I got wrong here.</p>
<p>The yaml file is like below:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: pzhang-test
namespace: default
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
namespace: default
subjects:
- kind: ServiceAccount
name: pzhang-test
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>the secrets and token like below:</p>
<pre><code>root@robota:~# kubectl get secrets
NAME TYPE DATA AGE
default-token-9kg87 kubernetes.io/service-account-token 3 2d6h
pzhang-test-token-wz9zj kubernetes.io/service-account-token 3 29m
root@robota:~# kubectl get secrets pzhang-test-token-wz9zj -o yaml
apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1ETXlOekF6TkRjd04xb1hEVEk1TURNeU5EQXpORGN3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFpHCkUxajJaQnBrdzhsSmRRa2lDSlI1N1J6N0lXendudmtvNVlUK1BneXhqZm5NUjJXaWV3M3F0QWZEdi9oOWI3OUcKN1dTRFFCWG9qcmFkQjNSTTROVHhTNktCQlg1SlF6K2JvQkNhTG5hZmdQcERueHc3T082VjJLY1crS2k5ZHlOeApDQ1RTNTVBRWc3OTRkL3R1LzBvcDhLUjhhaDlMdS8zeVNRdk0zUzFsRW02c21YSmVqNVAzRGhDbUVDT1RnTHA1CkZQSStuWFNNTS83cWpYd0N4WjUyZFZSR3U0Y0NYZVRTWlBSM1R0UzhuU3luZ2ZiN1NVM1dYbFZvQVIxcXVPdnIKb2xqTmllbEFBR1lIaDNUMTJwT01HMUlOakRUNVVrdEM5TWJYeDZoRFc5ZkxwNTFkTEt4bnN5dUtsdkRXVDVOWQpwSDE5UTVvbDJRaVpnZzl0K2Y4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHb2RnL0ozMUhRQkVOZVNrdEF4dS9WRU43NmQKZ1k2NG5aRTdDQTZJY2pXaEJBNmlpblh3S0E1Q0JuVEVxS05QVEdWTG5KWVd4WDlwVE9jZkhWTkROT3RDV0N4NApDN3JMWjZkRDJrUGs2UUVYSEg3MXJ4cUdKS21WbFJ0UytrOC9NdHAzNmxFVE5LTUc5VXI1RUNuWDg0dVZ5dUViCnRWWlRUcnZPQmNDSzAyQ2RSN3Q0V3pmb0FPSUFkZ1ZTd0xxVGVIdktRR1orM1JXOWFlU2ZwWnpsZDhrQlZZVzkKN1MvYXU1c3NIeHIwVG85ZStrYlhqNDl5azJjU2hKa1Y2M3JOTjN4SEZBeHdzUUVZYTNMZ1ZGWFlHYTJFWHdWdwpLbXRUZmhoQWE0Ujd5dW5SdkIrdndac3ZwbHc2RmhQQldHVTlpQnA3aU9vU0ZWVmNlMUltUks3VkRqbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
namespace: ZGVmYXVsdA==
token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluQjZhR0Z1WnkxMFpYTjBMWFJ2YTJWdUxYZDZPWHBxSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW5CNmFHRnVaeTEwWlhOMElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVlURTNPR1F3T1RrdE5USXdZaTB4TVdVNUxUa3lNMlF0TURBd1l6STVZbVJrTlRBMklpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZjSHBvWVc1bkxYUmxjM1FpZlEubnNlY1lPTjJkRUIwLVVSdXFJNm1tQVJlOHBSRGlES01STXJvRHc5SThIU24wNE9Qd0JvUzdhSDRsNjlSQ19SMDFNNUp0Rm9OcVFsWjlHOGJBNW81MmsxaVplMHZJZnEzNVkzdWNweF95RDlDT2prZ0xCU2k1MXgycUtURkE5eU15QURoaTFzN2ttT2d0VERDRVpmS1l3ME1vSjgtQUZPcXJkVndfZU15a2NGU3ZpYWVEQTRYNjFCNzhXYWpYcUttbXdfTUN1XzZVaG4wdklOa3pqbHBLaGs5anRlb0JvMFdGX0c3b1RzZXJVOTRuSGNCWkYwekRQcEpXTzlEVlc1a1B0Mm1Fem1NeWJoeVBfNTBvS0NKMTB4NGF4UzlIdXlwOTZ4SzV0NmNZZVNrQkx4bmVEb19wNzlyUlNXX1FLWFZCWm1UaWI1RHlZUHZxSGdSRFJiMG5B
kind: Secret
metadata:
annotations:
kubernetes.io/service-account.name: pzhang-test
kubernetes.io/service-account.uid: a178d099-520b-11e9-923d-000c29bdd506
creationTimestamp: "2019-03-29T10:15:51Z"
name: pzhang-test-token-wz9zj
namespace: default
resourceVersion: "77488"
selfLink: /api/v1/namespaces/default/secrets/pzhang-test-token-wz9zj
uid: a179dae0-520b-11e9-923d-000c29bdd506
type: kubernetes.io/service-account-token
# the TOKEN is:
root@robota:~# echo $TOKEN
ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluQjZhR0Z1WnkxMFpYTjBMWFJ2YTJWdUxYZDZPWHBxSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW5CNmFHRnVaeTEwWlhOMElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVlURTNPR1F3T1RrdE5USXdZaTB4TVdVNUxUa3lNMlF0TURBd1l6STVZbVJrTlRBMklpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZjSHBvWVc1bkxYUmxjM1FpZlEubnNlY1lPTjJkRUIwLVVSdXFJNm1tQVJlOHBSRGlES01STXJvRHc5SThIU24wNE9Qd0JvUzdhSDRsNjlSQ19SMDFNNUp0Rm9OcVFsWjlHOGJBNW81MmsxaVplMHZJZnEzNVkzdWNweF95RDlDT2prZ0xCU2k1MXgycUtURkE5eU15QURoaTFzN2ttT2d0VERDRVpmS1l3ME1vSjgtQUZPcXJkVndfZU15a2NGU3ZpYWVEQTRYNjFCNzhXYWpYcUttbXdfTUN1XzZVaG4wdklOa3pqbHBLaGs5anRlb0JvMFdGX0c3b1RzZXJVOTRuSGNCWkYwekRQcEpXTzlEVlc1a1B0Mm1Fem1NeWJoeVBfNTBvS0NKMTB4NGF4UzlIdXlwOTZ4SzV0NmNZZVNrQkx4bmVEb19wNzlyUlNXX1FLWFZCWm1UaWI1RHlZUHZxSGdSRFJiMG5B
</code></pre>
<p>I use this command:</p>
<pre><code>root@robota:~# curl --cacert ./ca.crt --header "Authorization: Bearer $TOKEN" https://192.16.208.142:6443/api/v1/namespaces/default/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
</code></pre>
<p>As you can see the curl returns:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
</code></pre>
<p>I expected the output to be a list of the pods in my <code>default</code> namespace</p>
<pre><code>root@robota:~# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
my-nginx-64fc468bd4-8fq6j 1/1 Running 0 2d6h
my-nginx-64fc468bd4-ffkhb 1/1 Running 0 2d6h
</code></pre>
| <p>The <code>token:</code> value in the Secret YAML is base64-encoded, so decode it before using it as a bearer token. Try:</p>
<pre><code>TOKEN=$(kubectl get secret pzhang-test-token-wz9zj -o yaml | grep "token:" | awk '{print $2}' | base64 -d)
</code></pre>
<pre><code>kubectl get secret pzhang-test-token-wz9zj -o yaml | grep "ca.crt" | awk '{print $2}' | base64 -d > ca.crt
</code></pre>
<pre><code>curl -H "Authorization: Bearer $TOKEN" --cacert ca.crt https://192.16.208.142:6443/api/v1/namespaces/default/pods
</code></pre>
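<p>The underlying cause of the 401 is that the <code>token:</code> field in the Secret YAML (and the <code>$TOKEN</code> shown in the question) is still base64-encoded, so the <code>Authorization</code> header carried an encoded token. A self-contained illustration of the decode step (the sample value is made up for the sketch; a real decoded service account JWT also starts with <code>eyJ</code> and contains two dots):</p>

```shell
# What `kubectl get secret -o yaml` stores vs. what curl must send:
sample='eyJhbGciOiJSUzI1NiJ9'                   # decoded form (a JWT header segment)
encoded=$(printf '%s' "$sample" | base64)       # form stored in the Secret YAML
decoded=$(printf '%s' "$encoded" | base64 -d)   # form to put in the bearer header
echo "$decoded"
```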
|
<p>I am using GKE Kubernetes in GCP. I am new to GCP, GKE, and kubectl. I am trying to create new Kubernetes users in order to assign them ClusterRoleBindings, and then login (kubectl) as those users.</p>
<p>I do not see the relationship between GCP users and Kubernetes "users" (I do understand there's no User object type in Kubernetes).</p>
<p>According to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/security-overview" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/security-overview</a> , Kubernetes user accounts are Google Accounts.</p>
<p>Accordingly, I created some Google accounts and then associated them with my GCP account via IAM. I can see these accounts fine in IAM.</p>
<p>Then I performed gcloud auth login on those new users, and I could see them in gcloud auth list. I then tried accessing gcloud resources (gcloud compute disks list) as my various users. This worked as expected - the GCP user permissions were respected.</p>
<p>I then created a Kubernetes UserRole. Next step was to bind those users to those Roles, with a UserRoleBinding.</p>
<p>ClusterRole.yaml (creates fine):</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: cluster-role-pod-reader-1
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
</code></pre>
<p>ClusterRoleBinding.yaml (creates fine):</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cluster-role-binding-pod-reader-1
subjects:
- kind: User
name: MYTESTUSER@gmail.com # not real userid
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: cluster-role-pod-reader-1
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>In Kubernetes, I could create bindings, but my first problem is that I could create a UserRoleBinding between an existing UserRole and a <em>non</em>-existent user. I would have thought that would fail. It means I'm missing something important.</p>
<p>My second problem is I do not know how to login to kubectl as one of the new users.</p>
<p>Overall I'm missing the connection between GCP/IAM users and GKE users. Help would be much appreciated!</p>
| <p>Kubernetes doesn't have a user database. Users live outside the cluster and are usually controlled by the cloud provider. </p>
<p>If you're using GKE, the users are controlled by the GCP IAM. Therefore you can't list users with <code>kubectl</code>.</p>
<p>You can create <a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#user-accounts-vs-service-accounts" rel="noreferrer">service accounts</a> though. However, it is important to understand the difference between service accounts and users. Users are for real people while service accounts are for processes inside and outside kubernetes.</p>
<p>When you create a ClusterRoleBinding this means to kubernetes: </p>
<p><em>If a user with the username <strong>MYTESTUSER@gmail.com</strong> enters the cluster, bind him to the ClusterRole <strong>cluster-role-pod-reader-1</strong></em></p>
<p>To use kubernetes with the GCP IAM users, you have to do the following: </p>
<ul>
<li>add the user to IAM</li>
<li>add him to the role <code>roles/container.viewer</code></li>
<li>create the RoleBinding/ClusterRoleBinding of your choice</li>
</ul>
<p>You can list the respective IAM roles (not to be confused with RBAC roles) with this command:</p>
<pre><code>gcloud iam roles list | grep 'roles/container\.' -B2 -A2
</code></pre>
<p>With the principle of least privilege in mind, you should grant your user only the minimal rights needed to log in to the cluster. The broader IAM roles (except for <code>roles/container.clusterAdmin</code>) automatically grant higher-privileged access to objects inside <strong>all</strong> clusters of your project.</p>
<p>RBAC can only add privileges, so you should choose the IAM role with the least privileges and layer the required permissions on top via RBAC.</p>
|
<p>I have a Kubernetes-orchestraded infrastructure that is served on an AWS-hosted cluster. I'd like to have routines that would allow me to spawn similar infrastructures. The difference between the original infrastructure and the newly-spawned ones would mainly be the DNS's used and the images that I would serve.</p>
<p>My question is: What is the most appropriate place for this similar-infrastructure-spawning code to reside: Kubernetes ? My CI/CD tool, Drone? Some other DevOps stack component of which I'm not even aware?</p>
| <p>Have you considered Infrastructure-as-Code (IaC) tooling?</p>
<p>You can directly develop your infrastructure using code like:</p>
<ul>
<li>Cloudformation (AWS)</li>
<li>Terraform (multi provider)</li>
<li>Ansible</li>
<li>...</li>
</ul>
<p>With these you can configure all of your provider's services (not only your cluster).</p>
<p>You can then deploy the whole stack with a single command and parameterize it per environment.</p>
<p>Alternatively, you can use a tool like Kops (<a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">https://github.com/kubernetes/kops</a>) to automate the deployment of your K8s cluster.</p>
<p>Once you choose the right tool, you can keep the infrastructure definition under version control in a Git repository.</p>
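<p>As an illustration of the approach, a minimal Ansible playbook that parameterizes the image and DNS name per environment might look like this (the variable names and template file are assumptions for the sketch; it relies on the <code>kubernetes.core.k8s</code> module):</p>
<pre><code># deploy.yml -- render and apply a Deployment to the target cluster
- hosts: localhost
  vars:
    app_image: myrepo/app:1.2.3   # differs per spawned environment
    app_host: app.example.com     # differs per spawned environment
  tasks:
    - name: Apply templated Deployment manifest
      kubernetes.core.k8s:
        state: present
        template: deployment.yaml.j2
</code></pre>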
|
<p>I am using <a href="https://gitlab.com/charts/gitlab" rel="noreferrer">https://gitlab.com/charts/gitlab</a> to deploy certain components included in the chart on an Openshift cluster. For now I just want to deploy the included Prometheus chart. I accomplished this, having an specific <code>values.yaml</code> configuration.</p>
<p>I want to extend the Gitlab helm chart, to do so I am adding it as a requirement of my own chart. The problem comes whenever I add the previous <code>values.yaml</code> as subpart of my values.</p>
<p>Deploying upstream Gitlab chart works with:</p>
<pre><code>global:
registry:
enabled: false
# Disabling minio still requires to disable gitlab.minio or it will complain about "A valid backups.objectStorage.config.secret is needed"
minio:
enabled: false
ingress:
enabled: false
configureCertmanager: false
nginx-ingress:
enabled: false
registry:
enabled: false
certmanager:
install: false
rbac:
create: false
...
</code></pre>
<p>Deploying my chart including configuration as a subchart <strong>does not work</strong>:</p>
<pre><code>global:
registry:
enabled: false
# Disabling minio still requires to disable gitlab.minio or it will complain about "A valid backups.objectStorage.config.secret is needed"
minio:
enabled: false
ingress:
enabled: false
configureCertmanager: false
test:
nginx-ingress:
enabled: false
registry:
enabled: false
certmanager:
install: false
rbac:
create: false
...
</code></pre>
<p>I added the Gitlab upstream chart as a requirement:</p>
<pre><code>dependencies:
- name: gitlab
# Upgrade manually. Check https://gitlab.com/charts/gitlab/blob/master/requirements.yaml for the new Prometheus chart version.
version: 1.7.1
repository: https://charts.gitlab.io/
alias: test
</code></pre>
<p>It seems that it is not fully checking my configuration, so this creates objects that the serviceAccount does not have permissions to, failing in the process. It still tries to create objects related to <code>certmanager</code> even if it is disabled and was correctly disabled when deploying Gitlab chart directly.</p>
| <p>Found it. The condition values that control a subchart's requirements have to be specified at the first level of <code>values.yaml</code>, not nested under the subchart's alias.</p>
<p>If chart A has chart B as a subchart requirement, then to set the conditions for B's requirements you have to place them at A's level:</p>
<pre><code>global:
registry:
enabled: false
# Disabling minio still requires to disable gitlab.minio or it will complain about "A valid backups.objectStorage.config.secret is needed"
minio:
enabled: false
ingress:
enabled: false
configureCertmanager: false
test:
nginx-ingress:
enabled: false
registry:
enabled: false
...
certmanager:
install: false
rbac:
create: false
...
</code></pre>
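<p>This matches how Helm resolves <code>condition:</code> fields: a condition path declared in a chart's <code>requirements.yaml</code> appears to be looked up against the top-level values of the release, not under the alias. Schematically (this is an illustration, not the actual GitLab chart contents):</p>
<pre><code># requirements.yaml inside the gitlab chart (schematic)
dependencies:
  - name: certmanager
    version: x.y.z                  # illustrative
    repository: https://charts.example.com/
    condition: certmanager.install  # resolved at the top level of values
</code></pre>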
|
<p>I have an nginx ingress with multiple domain definitions. But nginx ingress is wrongly configuring the service name. I have open an issue <a href="https://github.com/kubernetes/ingress-nginx/issues/3940" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/3940</a> with the info. </p>
<p>Is this an actual bug? Or am I missing something?</p>
| <p>Community wiki answer for anyone who faces the same issue in the future.
The issue described by @bitgandtter was fixed in the <a href="https://github.com/kubernetes/ingress-nginx/pull/3864" rel="nofollow noreferrer">ing.Service with multiple hosts fix</a> pull request. The fix will be included in the next release in a couple of weeks, and in the meantime you can try the dev image:</p>
<pre><code>quay.io/kubernetes-ingress-controller/nginx-ingress-controller:dev
</code></pre>
|
<p>I am spinning up a Pod (comes up with Non Root user) that needs to write data to a volume. The volume comes from a PVC.</p>
<p>The pod definition is simple</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: test-pvc
containers:
- name: task-pv-container
image: jnlp/jenkins-slave:latest
command: ["/bin/bash"]
args: ["-c", "sleep 500"]
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
</code></pre>
<p>When I <code>exec</code> into the Pod and try to write into <code>/usr/share/nginx/html</code></p>
<p>I get</p>
<pre><code>jenkins@task-pv-pod:/usr/share/nginx/html$ touch test
touch: cannot touch ‘test’: Permission denied
</code></pre>
<p>Looking at the permissions of the directory</p>
<pre><code>jenkins@task-pv-pod:~$ ls -ld /usr/share/nginx/html
drwxr-xr-x 3 root root 4096 Mar 29 15:52 /usr/share/nginx/html
</code></pre>
<p>Its clear that ONLY root user can write to <code>/usr/share/nginx/html</code> but thats not what I want.</p>
<p>Is there a way to change the permissions for mounted volumes ?</p>
| <p>You can use an <strong>initContainer</strong> to mount your volume and change its permissions. The initContainer runs before the main container(s) start up. The usual pattern is a busybox image (~22 MB) that mounts the volume and runs a chown or chmod on the directory. By the time your pod's primary container runs, the volume(s) will have the correct ownership/access privileges.</p>
<p>Alternatively, you can consider using the initContainer to inject the proper files as shown in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="nofollow noreferrer">this example</a>.</p>
<p>Hope this helps!</p>
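<p>A minimal sketch of that pattern applied to the Pod in the question (the uid/gid below are placeholders; use the values your <code>jenkins</code> user reports via <code>id</code>):</p>
<pre><code>spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: test-pvc
  initContainers:
    - name: fix-permissions
      image: busybox
      # chown the mount so the non-root runtime user can write to it
      command: ["sh", "-c", "chown -R 1000:1000 /usr/share/nginx/html"]
      volumeMounts:
        - name: task-pv-storage
          mountPath: "/usr/share/nginx/html"
  containers:
    - name: task-pv-container
      image: jnlp/jenkins-slave:latest
      volumeMounts:
        - name: task-pv-storage
          mountPath: "/usr/share/nginx/html"
</code></pre>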
|
<p>I am trying to run an deployment config on OpenShift. Part of my deployment config runs an init container which sets up permissions on persistent volume with chown. When the init-container fires up, it fails and the logs print out "permission denied"</p>
<p>Here is my init-container:</p>
<pre><code> - apiVersion: v1
kind: DeploymentConfig
metadata:
name: ${NAME}-primary
namespace: ${NAMESPACE}
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: ${NAME}-primary
test-app: ${NAME}
spec:
serviceAccountName: ${SERVICE_ACCOUNT}
initContainers:
- name: set-volume-ownership
image: alpine:3.6
command: ["sh", "-c", "chown -R 1030:1030 /var/opt"]
volumeMounts:
- name: test-app-data
mountPath: /var/opt
</code></pre>
<p>I also have chmod 777 set on the nfs mount which has my persistent volume.</p>
<p>So, I know OpenShift runs the pod as a random UID by default. I know I can add the service account from my deployment config to scc anyuid and this will work but I dont want to do that as that is a security concern and my cluster admin will not allow that. How can I get around this? I have been reading about fsGroups but they havent made sense to me. Any opinions?</p>
| <p>The OpenShift documentation talks a little about this in the <a href="https://docs.okd.io/latest/creating_images/guidelines.html#openshift-specific-guidelines" rel="nofollow noreferrer">Support Arbitrary User IDs</a> section.</p>
<p>The issue is that the user your init container is running as does not have write permissions on that directory <code>/var/opt</code>.</p>
<p>If you have your initContainer run the <code>id</code> command you will see that your uid and gid should be 1000000000+:0</p>
<p>The typical strategy expected in this situation is to grant write permissions to the root group anywhere you will need to write during runtime. This will allow your runtime user to access the files because despite the fact the uid is randomly generated, the group is always 0. </p>
<p>Unfortunately, many public container images do not have this configured out of the box.</p>
<p>You can see examples of this approach in the Red Hat base images. There is a script baked into each base container image called <a href="https://github.com/sclorg/s2i-base-container/blob/master/core/root/usr/bin/fix-permissions#L21" rel="nofollow noreferrer">fix-permissions</a> that is run on any directory the application/user is expected to write to later.</p>
<p>In the above case, the following code is used to tweak the permissions so that arbitrary users with ids 1000000000+:0 can later write to them.</p>
<pre><code>find $SYMLINK_OPT "$1" ${CHECK_OWNER} \! -gid 0 -exec chgrp 0 {} +
find $SYMLINK_OPT "$1" ${CHECK_OWNER} \! -perm -g+rw -exec chmod g+rw {} +
find $SYMLINK_OPT "$1" ${CHECK_OWNER} -perm /u+x -a \! -perm /g+x -exec chmod g+x {} +
find $SYMLINK_OPT "$1" ${CHECK_OWNER} -type d \! -perm /g+x -exec chmod g+x {} +
</code></pre>
<p>If you want to tweak these values at the cluster level the configuration for UIDAllocatorRange is set in the master-config.yml on the master hosts in the <a href="https://docs.okd.io/latest/install_config/master_node_configuration.html#master-node-config-project-config" rel="nofollow noreferrer">Project Config</a> section and the <a href="https://docs.okd.io/latest/install_config/master_node_configuration.html#security-allocator-configuration" rel="nofollow noreferrer">Security Allocator Configuration</a> section.</p>
<p>You can also modify the behavior of the generated uids via the openshift.io/sa.scc.uid-range annotation. Documentation discussing this can be found in the <a href="https://docs.okd.io/latest/architecture/additional_concepts/authorization.html#admission" rel="nofollow noreferrer">Understanding Pre-allocated Values and Security Context Constraints</a> section.</p>
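<p>Applied to the volume in the question, the build-time equivalent of the fix-permissions approach would be along these lines (the path is taken from the question; the rest is illustrative):</p>
<pre><code># Dockerfile fragment: give the root group (gid 0) write access so an
# arbitrary uid running in group 0 can write at runtime
RUN chgrp -R 0 /var/opt && \
    chmod -R g+rwX /var/opt
</code></pre>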
|
<p>I see guides like <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">this</a> where setting up an Nginx Ingress (with ssl) requires you to enter the host, i.e. <code>echo1.example.com</code>. I don't understand how you would be able to use the host specified if you didn't have the IP address (in your DNS, how would you know where to point <code>echo.example.com</code>?).</p>
<p>When I set up an ingress like this, <code>echo.example.com</code> would show up as the ingress <code>ADDRESS</code>, so I don't know the IP. If I don't specify it, the <code>ADDRESS</code> is just empty. With this, how am I suppose to know what IP I'm suppose to point my domain name?</p>
<p>I'm running on GKE.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: echo-ingress
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
tls:
- hosts:
- echo1.example.com
- echo2.example.com
secretName: letsencrypt-staging
rules:
- host: echo1.example.com
http:
paths:
- backend:
serviceName: echo1
servicePort: 80
- host: echo2.example.com
http:
paths:
- backend:
serviceName: echo2
servicePort: 80
</code></pre>
| <p>The IPs of your Kubernetes nodes should be considered ephemeral. Do not point hostnames at them for purposes of hosting websites and services. </p>
<ol>
<li><a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address" rel="nofollow noreferrer">Reserve an external static IP address</a></li>
<li>Create a load balancer on ports 80 and 443. Use <a href="https://cloud.google.com/load-balancing/docs/https/setting-up-https" rel="nofollow noreferrer">HTTPS</a> if you want the load balancer to handle TLS. Use <a href="https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp" rel="nofollow noreferrer">TCP</a> if you want nginx to handle TLS</li>
<li><a href="https://cloud.google.com/load-balancing/docs/target-pools" rel="nofollow noreferrer">Configure Target Pools</a> on the load balancer to point to your K8s node pools on the correct nginx ingress port</li>
<li>Point all hostnames served by the cluster at the external static IP address</li>
</ol>
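<p>On GKE specifically, an alternative to hand-building the load balancer is to pin the ingress controller's Service to the reserved address and let GKE create the forwarding rules for you (the IP below is a placeholder for your reserved regional static IP):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 34.123.12.123   # the reserved static IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
</code></pre>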
|
<p>So I want to use Cachex to share temporary data across the cluster. Cachex takes a <code>:nodes</code> list in the config file. That worked great in testing, as I could just hard code <code>[:"a@My-Laptop", :"b@My-Laptop"]</code> and it worked, but with Kubernetes the names are dynamic. How would I set these?</p>
| <p>I looked at this for my day job a while back and Cachex doesn't have a solution for dynamic node membership. It expects you to have all of your node names predefined and static. You can do this in Kubernetes with stateful sets, but you'll throw away all future possibility of autoscaling if you want to keep your cache distributed across all nodes.</p>
<p>A better solution IMO is <a href="https://hexdocs.pm/nebulex/Nebulex.html" rel="noreferrer">Nebulex</a>. Nebulex is not concerned about managing the connection to your nodes or their membership. It is only concerned with keeping a cache distributed across <code>[Node.self() | Node.list()]</code> (all of the nodes in your cluster). It all runs over PG2 which is a distributed pubsub implementation. It has the ability to transfer data as nodes join or leave and will shard it so that you have duplicate data across your nodes. It even allows you to match on node names to store your cached data.</p>
<p>To manage connecting your nodes to each other in a dynamic way, there's a library called <a href="https://hexdocs.pm/libcluster/readme.html" rel="noreferrer">libcluster</a> that actually has Kubernetes support built in. Libcluster will either use Kubernetes DNS or you can create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="noreferrer">headless service</a> that will serve the IP addresses of all your elixir pods. Libcluster will ping this service on a polling interval and try to join the erlang cluster.</p>
<p>When you have both set up, libcluster will manage keeping your nodes in a cluster and Nebulex will manage keeping your data synced across all of them. It's pretty cool and we haven't had any problems with it yet.</p>
<p>I'm not going to write any source here because the documentation for all of these things is really well written. Let me know if you have any questions though.</p>
|
<p>I would like to create an nginx-ingress that I can link to a reserved IP address. The main reason being, that I want to minimize manual steps. Currently, the infrastructure is automatically set-up with Terraform, but I cannot get nginx-ingress to use the reserved IP with it. I already have nginx-ingress working, but it creates its own IP address.</p>
<p>According to the nginx-ingress site (<a href="https://kubernetes.github.io/ingress-nginx/examples/static-ip/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/static-ip/</a>), this should be possible. First, one should create a load-balancer service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress-lb
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
externalTrafficPolicy: Local
type: LoadBalancer
loadBalancerIP: 34.123.12.123
ports:
- port: 80
name: http
targetPort: 80
- port: 443
name: https
targetPort: 443
selector:
# Selects nginx-ingress-controller pods
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>However, then one can update the IP via <code>nginx-ingress-controller.yaml</code> file with the <code>--publish-service</code> flag. However, I install this via helm:</p>
<pre><code>helm install stable/nginx-ingress --name my-nginx --set rbac.create=true
</code></pre>
<p>How can I link the publish service to nginx-ingress-lb in my helm installation (or upgrade).</p>
| <p>Assuming your cloud provider supports LBs with static IPs (AWS, for example, will give you a CNAME instead of an IP): </p>
<p>You can set it as a chart value, as shown below. Once you do this, add the annotation <code>kubernetes.io/ingress.class: nginx</code> to your Ingress resources and they will automatically be served from the same IP address.</p>
<pre><code>helm install stable/nginx-ingress --set controller.service.loadBalancerIP=XXXX,rbac.create=true
</code></pre>
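<p>Equivalently, those flags can live in a values file so the IP survives future upgrades (the address is a placeholder):</p>
<pre><code># values.yaml for stable/nginx-ingress
rbac:
  create: true
controller:
  service:
    loadBalancerIP: 34.123.12.123
</code></pre>
<p>and then <code>helm install stable/nginx-ingress --name my-nginx -f values.yaml</code>.</p>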
|
<p>I'm running a <a href="https://microk8s.io/" rel="nofollow noreferrer">microk8s</a> cluster on an Ubuntu server at home, and I have it connected to a local NAS server for persistent storage. I've been using it as my personal proving grounds for learning Kubernetes, but I seem to encounter problem after problem at just about every step of the way.</p>
<p>I've got the <a href="https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner" rel="nofollow noreferrer">NFS Client Provisioner</a> Helm chart installed which I've confirmed works - it will dynamically provision PVCs on my NAS server. I later was able to successfully install the <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="nofollow noreferrer">Postgres</a> Helm chart, or so I thought. After creating it I was able to connect to it using a SQL client, and I was feeling good.</p>
<p>Until a couple of days later, I noticed the pod was showing 0/1 containers ready. Although interestingly, the nfs-client-provisioner pod was still showing 1/1. Long story short: I've deleted/purged the Postgres Helm chart, and attempted to reinstall it, but now it no longer works. In fact, nothing new that I try to deploy works. Everything looks as though it's going to work, but then just hangs on either Init or ContainerCreating forever.</p>
<p>With Postgres in particular, the command I've been running is this:</p>
<pre><code>helm install --name postgres stable/postgresql -f postgres.yaml
</code></pre>
<p>And my <code>postgres.yaml</code> file looks like this:</p>
<pre><code>persistence:
storageClass: nfs-client
accessMode: ReadWriteMany
size: 2Gi
</code></pre>
<p>But if I do a <code>kubectl get pods</code> I see still see this:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
nfs-client-provisioner 1/1 Running 1 11d
postgres-postgresql-0 0/1 Init:0/1 0 3h51m
</code></pre>
<p>If I do a <code>kubectl describe pod postgres-postgresql-0</code>, this is the output:</p>
<pre><code>Name: postgres-postgresql-0
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: stjohn/192.168.1.217
Start Time: Thu, 28 Mar 2019 12:51:02 -0500
Labels: app=postgresql
chart=postgresql-3.11.7
controller-revision-hash=postgres-postgresql-5bfb9cc56d
heritage=Tiller
release=postgres
role=master
statefulset.kubernetes.io/pod-name=postgres-postgresql-0
Annotations: <none>
Status: Pending
IP:
Controlled By: StatefulSet/postgres-postgresql
Init Containers:
init-chmod-data:
Container ID:
Image: docker.io/bitnami/minideb:latest
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
chown -R 1001:1001 /bitnami
if [ -d /bitnami/postgresql/data ]; then
chmod 0700 /bitnami/postgresql/data;
fi
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 250m
memory: 256Mi
Environment: <none>
Mounts:
/bitnami/postgresql from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-h4gph (ro)
Containers:
postgres-postgresql:
Container ID:
Image: docker.io/bitnami/postgresql:10.7.0
Image ID:
Port: 5432/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 250m
memory: 256Mi
Liveness: exec [sh -c exec pg_isready -U "postgres" -h localhost] delay=30s timeout=5s period=10s #success=1 #failure=6
Readiness: exec [sh -c exec pg_isready -U "postgres" -h localhost] delay=5s timeout=5s period=10s #success=1 #failure=6
Environment:
PGDATA: /bitnami/postgresql
POSTGRES_USER: postgres
POSTGRES_PASSWORD: <set to the key 'postgresql-password' in secret 'postgres-postgresql'> Optional: false
Mounts:
/bitnami/postgresql from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-h4gph (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-postgres-postgresql-0
ReadOnly: false
default-token-h4gph:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-h4gph
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>And if I do a <code>kubectl get pod postgres-postgresql-0 -o yaml</code>, this is the output:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-03-28T17:51:02Z"
generateName: postgres-postgresql-
labels:
app: postgresql
chart: postgresql-3.11.7
controller-revision-hash: postgres-postgresql-5bfb9cc56d
heritage: Tiller
release: postgres
role: master
statefulset.kubernetes.io/pod-name: postgres-postgresql-0
name: postgres-postgresql-0
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: postgres-postgresql
uid: 0d3ef673-5182-11e9-bf14-b8975a0ca30c
resourceVersion: "1953329"
selfLink: /api/v1/namespaces/default/pods/postgres-postgresql-0
uid: 0d4dfb56-5182-11e9-bf14-b8975a0ca30c
spec:
containers:
- env:
- name: PGDATA
value: /bitnami/postgresql
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
key: postgresql-password
name: postgres-postgresql
image: docker.io/bitnami/postgresql:10.7.0
imagePullPolicy: Always
livenessProbe:
exec:
command:
- sh
- -c
- exec pg_isready -U "postgres" -h localhost
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: postgres-postgresql
ports:
- containerPort: 5432
name: postgresql
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- exec pg_isready -U "postgres" -h localhost
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 250m
memory: 256Mi
securityContext:
procMount: Default
runAsUser: 1001
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/postgresql
name: data
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-h4gph
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: postgres-postgresql-0
initContainers:
- command:
- sh
- -c
- |
chown -R 1001:1001 /bitnami
if [ -d /bitnami/postgresql/data ]; then
chmod 0700 /bitnami/postgresql/data;
fi
image: docker.io/bitnami/minideb:latest
imagePullPolicy: Always
name: init-chmod-data
resources:
requests:
cpu: 250m
memory: 256Mi
securityContext:
procMount: Default
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/postgresql
name: data
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-h4gph
readOnly: true
nodeName: stjohn
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
serviceAccount: default
serviceAccountName: default
subdomain: postgres-postgresql-headless
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: data
persistentVolumeClaim:
claimName: data-postgres-postgresql-0
- name: default-token-h4gph
secret:
defaultMode: 420
secretName: default-token-h4gph
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-03-28T17:51:02Z"
message: 'containers with incomplete status: [init-chmod-data]'
reason: ContainersNotInitialized
status: "False"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-03-28T17:51:02Z"
message: 'containers with unready status: [postgres-postgresql]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-03-28T17:51:02Z"
message: 'containers with unready status: [postgres-postgresql]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-03-28T17:51:02Z"
status: "True"
type: PodScheduled
containerStatuses:
- image: docker.io/bitnami/postgresql:10.7.0
imageID: ""
lastState: {}
name: postgres-postgresql
ready: false
restartCount: 0
state:
waiting:
reason: PodInitializing
hostIP: 192.168.1.217
initContainerStatuses:
- image: docker.io/bitnami/minideb:latest
imageID: ""
lastState: {}
name: init-chmod-data
ready: false
restartCount: 0
state:
waiting:
reason: PodInitializing
phase: Pending
qosClass: Burstable
startTime: "2019-03-28T17:51:02Z"
</code></pre>
<p>I don't see anything obvious in these to be able to pinpoint what might be going on. And I've already rebooted the server just to see if that might help. Any thoughts? Why won't my containers start?</p>
| <p>It looks like your <strong>initContainer</strong> is stuck in the <em>PodInitializing</em> state. The most likely scenario is that your PVCs are not ready. I recommend you <code>describe</code> your <code>data-postgres-postgresql-0</code> PVC to make sure that the volume has actually been provisioned and is in the <code>READY</code> state. Your NFS provisioner may be working, but that specific PV/PVC may not have been created due to an error. I have run into similar phenomena with the EFS provisioner with AWS.</p>
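<p>A quick way to verify this (resource names taken from the question) is to inspect the claim, its events, and the provisioner logs:</p>
<pre><code>kubectl get pvc data-postgres-postgresql-0
kubectl describe pvc data-postgres-postgresql-0
kubectl logs nfs-client-provisioner
</code></pre>
<p>If the PVC is stuck in <code>Pending</code>, the describe output and provisioner logs will usually show why the volume could not be provisioned (e.g. an NFS mount or permission error).</p>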
|
<p>I tried to create a k8s cluster on AWS using kops.</p>
<p>After creating the cluster with the default definition, I saw that a LoadBalancer had been created.</p>
<pre><code>apiVersion: kops/v1alpha2
kind: Cluster
metadata:
name: bungee.staging.k8s.local
spec:
api:
loadBalancer:
type: Public
....
</code></pre>
<p>I'm just wondering about the reason for creating the LoadBalancer along with the cluster.</p>
<p>Appreciate !</p>
| <p>In the type of cluster that kops creates the apiserver (referred to as api above, a component of the Kubernetes master, aka control plane) <em>may</em> not have a static IP address. Also, kops can create a HA (replicated) control plane, which means there <strong>will</strong> be multiple IPs where the apiserver is available.</p>
<p>The apiserver functions as a central connection hub for all other Kubernetes components; for example, all the nodes connect to it, but human operators also connect to it via kubectl. For one, these configuration files do not support multiple IP addresses for the apiserver (which would be needed to make use of the HA setup). Plus, updating the configuration files every time the apiserver IP address(es) change would be difficult.</p>
<p>So the load balancer functions as a front for the apiserver(s) with a single, static IP address (an anycast IP with AWS/GCP). This load balancer IP is specified in the configuration files of Kubernetes components instead of actual apiserver IP(s).</p>
<p>Actually, it is also possible to solve this problem by using a DNS name that resolves to the IP(s) of the apiserver(s), coupled with a mechanism that keeps this record updated. This solution can't react to changes of the underlying IP(s) as fast as a load balancer can, but it does save you a couple of bucks, plus it is slightly less likely to fail and creates less dependency on the cloud provider. This can be configured like so:</p>
<pre><code>spec:
api:
dns: {}
</code></pre>
<p>See <a href="https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md#api" rel="noreferrer">specification</a> for more details.</p>
|
<p>I have a brand new kubernetes cluster on AKS.</p>
<p>I disabled the addons with the azure-cli as described in <a href="https://learn.microsoft.com/en-us/azure/aks/http-application-routing#remove-http-routing" rel="nofollow noreferrer">documentation</a>:</p>
<pre><code>az aks disable-addons --addons http_application_routing --name myAKSCluster --resource-group myResourceGroup --no-wait
</code></pre>
<p>The portal shows no domain associated to the cluster.</p>
<p>But with <code>kubectl</code> I still see all the pods and deployments related to the addon.</p>
<p>I tried to delete the deployments and related resources with kubectl, but the deployments recreate themselves.</p>
<p>Have anybody experienced the same?</p>
<p>Thanks!</p>
| <p>There is a known issue with 1.12.6:</p>
<pre><code>Unable to disable addons on deployed clusters
AKS Engineering is diagnosing an issue around existing/deployed clusters being unable to disable Kubernetes addons within the addon-manager. When we have identified and repaired the issue we will roll out the required hot fix to all regions.
This impacts all addons including monitoring, http application routing, etc.
</code></pre>
<p><a href="https://github.com/Azure/AKS/releases" rel="nofollow noreferrer">https://github.com/Azure/AKS/releases</a></p>
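<p>Once the fix is rolled out, you can confirm the addon is actually disabled (resource and cluster names are from the question) with:</p>
<pre><code>az aks show --resource-group myResourceGroup --name myAKSCluster --query addonProfiles
</code></pre>
<p>The <code>httpApplicationRouting</code> profile should then report <code>"enabled": false</code>.</p>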
|
<p>I have several celery workers running in minikube, and they are working on tasks passed via RabbitMQ. Recently I updated some of the code for the celery workers and changed the image. When I do <code>helm upgrade release_name chart_path</code>, all the existing worker pods are terminated and all the unfinished tasks are abandoned. I was wondering if there is a way to upgrade the helm chart without terminating the old pods.</p>
<ol>
<li>I know that <code>helm install -n new_release_name chart_path</code> will give me a new set of celery workers; however, due to some limitations, I am not allowed to deploy pods in a new release.</li>
<li>I tried running <code>helm upgrade release_name chart_path --set deployment.name=worker2</code> because I thought having a new deployment name would stop helm from deleting the old pods, but this won't work either.</li>
</ol>
| <p>This is just how Kubernetes Deployments work. What you should do is fix your Celery worker image so that it tries to complete whatever tasks are pending before actually shutting down. This should probably already be the case, unless you did something funky such that the SIGTERM isn't making it to Celery. See <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods</a> for details.</p>
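<p>If your tasks can take a while to finish, you may also need to extend the grace period between SIGTERM and SIGKILL in the worker's pod spec (a sketch; the 300-second value is an assumption you should tune to your longest task, and the container name/image are placeholders):</p>
<pre><code>spec:
  template:
    spec:
      terminationGracePeriodSeconds: 300   # default is 30 seconds
      containers:
      - name: celery-worker
        image: my-celery-image             # placeholder
</code></pre>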
|
<p>I read about MetalLB in <a href="http://blog.cowger.us/2018/07/25/using-kubernetes-externaldns-with-a-home-bare-metal-k8s.html" rel="nofollow noreferrer">http://blog.cowger.us/2018/07/25/using-kubernetes-externaldns-with-a-home-bare-metal-k8s.html</a>, where the writers said:</p>
<blockquote>
<p>Bare metal cluster operators are left with two lesser tools to bring
user traffic into their clusters, “NodePort” and “externalIPs”
services. Both of these options have <strong>significant downsides</strong> for
production use, which makes bare metal clusters second class citizens
in the Kubernetes ecosystem.</p>
</blockquote>
<p>I want to know what these significant downsides are.</p>
| <p>A Service with <code>type: NodePort</code> would open the same port on all of the nodes, enabling clients to direct their traffic to any of the nodes; kube-proxy can balance the traffic between Pods from that point on. You face 3 problems here:</p>
<ol>
<li>Unless you are happy with depending on a single node you'd need to create your own load balancing solution to target multiple (or even all) nodes. This is doable of course but you need extra software or hardware plus configuration</li>
<li>For configuration above you also need a mechanism to discover the IP addresses of the nodes, keep that list updated and monitor for health of nodes. Again, doable but extra pain</li>
<li>NodePort only supports picking a port number from a specific range (default is 30000-32767). The range can be modified but you won't be able to pick your favourite ports like 80 or 443 this way. Again, not a huge problem if you have an external load balancing solution which will hide this implementation detail</li>
</ol>
<p>As for a Service with <code>type: ClusterIP</code> (default) and <code>externalIPs: [...]</code> (you must specify the IP address(es) of node(s) there), your problems will be:</p>
<ol>
<li>You need some method to pick some nodes that are healthy and keep the Service object updated with that list. Doable but requires extra automation.</li>
<li>Same as 1. for NodePort</li>
<li>Although you get to pick arbitrary port numbers here (so 80, 443, 3306 are okay) your will need do some housekeeping to avoid attempting to use the same port number on the same node from two different Service objects. Once again, doable but you probably have something better to do</li>
</ol>
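<p>For reference, a minimal <code>NodePort</code> Service illustrating the port-range constraint from point 3 (all names and ports are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # cluster-internal port
    targetPort: 8080  # container port
    nodePort: 30080   # must fall within 30000-32767 by default
</code></pre>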
|
<p>I am spinning up a Pod (comes up with Non Root user) that needs to write data to a volume. The volume comes from a PVC.</p>
<p>The pod definition is simple</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: test-pvc
containers:
- name: task-pv-container
image: jnlp/jenkins-slave:latest
command: ["/bin/bash"]
args: ["-c", "sleep 500"]
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
</code></pre>
<p>When I <code>exec</code> into the Pod and try to write into <code>/usr/share/nginx/html</code></p>
<p>I get</p>
<pre><code>jenkins@task-pv-pod:/usr/share/nginx/html$ touch test
touch: cannot touch ‘test’: Permission denied
</code></pre>
<p>Looking at the permissions of the directory</p>
<pre><code>jenkins@task-pv-pod:~$ ls -ld /usr/share/nginx/html
drwxr-xr-x 3 root root 4096 Mar 29 15:52 /usr/share/nginx/html
</code></pre>
<p>It's clear that only the root user can write to <code>/usr/share/nginx/html</code>, but that's not what I want.</p>
<p>Is there a way to change the permissions for mounted volumes ?</p>
| <p>A security context defines privilege and access control settings for a Pod or Container. Try adding a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context" rel="nofollow noreferrer">securityContext</a> with <code>fsGroup</code>, which makes the mounted volume group-owned (and writable) by that GID:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: task-pv-pod
spec:
securityContext:
fsGroup: $jenkins_uid
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: test-pvc
...
</code></pre>
|
<p>I'm setting up an
<strong><code>EKS</code></strong> cluster on <strong><code>AWS</code></strong> and I want to secure all the data in flight in the Kubernetes cluster (and also between EKS and external systems, but that's out of scope here).</p>
<p>In Kubernetes are different protocols in use between the different pods, mainly <strong><code>Rest/HTTP</code></strong> but also communication between <strong><code>microservice-pods with a KafkaBroker</code></strong> and JDBC between <strong><code>microservice-pods and database pods</code></strong> (Zalando Postgres), between a <strong><code>filebeat-pod and elasticsearch</code></strong>, ...</p>
<p>I see several options but I don't like any of them.</p>
<ul>
<li>Encrypt every communication individually --> too much work. operational nightmare</li>
<li>Istio or Linkerd --> works only for HTTP and gRPC, not for Kafka communication. Heavy control plane.</li>
<li>a CNI like WeaveNet --> no support for SecurityGroups, FlowLogs, ACLs</li>
</ul>
<p>Are there better alternatives?
Would you recommend any of these options?</p>
| <p>One possible solution could be using an <a href="https://medium.com/@santhoz/nginx-sidecar-reverse-proxy-for-performance-http-to-https-redirection-in-kubernetes-dd9dbe2fd0c7" rel="nofollow noreferrer">nginx sidecar reverse proxy</a> on all your pods to capture all outbound traffic, in conjunction with nginx's <a href="https://chrislea.com/2014/03/20/using-proxy-protocol-nginx/" rel="nofollow noreferrer">proxy_protocol</a> directive to operate with ELBs and other load balancers. </p>
<p>You can accomplish this by modifying the iptables (or whatever SDN/pattern you choose to use in your setup) to force all outbound traffic into the reverse proxy instead of sending it out to the internet. You then use directives within <code>proxy_protocol</code> to <a href="https://docs.nginx.com/nginx/admin-guide/security-controls/securing-tcp-traffic-upstream/" rel="nofollow noreferrer">force all upstream TCP connections to use SSL</a> using a certificate defined by the reverse proxy for encryption.</p>
<p>This should work for the protocols you defined since they're all based on TCP.</p>
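<p>A minimal sketch of the sidecar's upstream-encryption half, assuming nginx is built with the <code>stream</code> module and that iptables redirects the pod's outbound traffic to a local port (all addresses, ports, and file paths below are placeholders):</p>
<pre><code>stream {
    server {
        listen 127.0.0.1:6000;                 # iptables redirects pod egress here
        proxy_pass db.example.internal:5432;   # hypothetical upstream
        proxy_ssl on;                          # wrap the upstream TCP connection in TLS
        proxy_ssl_certificate         /etc/nginx/client.pem;
        proxy_ssl_certificate_key     /etc/nginx/client.key;
        proxy_ssl_trusted_certificate /etc/nginx/ca.pem;
        proxy_ssl_verify on;
    }
}
</code></pre>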
|
<p>I'm looking to redirect all traffic from </p>
<p><a href="http://example.com" rel="noreferrer">http://example.com</a> -> <a href="https://example.com" rel="noreferrer">https://example.com</a> like how nearly all websites do.</p>
<p>I've looked at this link with no success:
<a href="https://stackoverflow.com/questions/40763718/kubernetes-https-ingress-in-google-container-engine">Kubernetes HTTPS Ingress in Google Container Engine</a></p>
<p>And have tried the following annotations in my ingress.yaml file.</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
if ($http_x_forwarded_proto != 'https') {
return 301 https://$host$request_uri;
}
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.allow-http: "false"
</code></pre>
<p>All without any success. To be clear, I can access <a href="https://example.com" rel="noreferrer">https://example.com</a> and <a href="http://example.com" rel="noreferrer">http://example.com</a> without any errors, I need the http call to redirect to https. </p>
<p>Thanks</p>
| <p>Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks etc) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use the App Engine, which is magical but stupidly expensive. For GKE, here's two options:</p>
<ul>
<li>ingress with a google-managed SSL cert and additional NGINX server configuration in front of your app/site</li>
<li>the NGINX ingress controller with self-managed/third-party SSL certs</li>
</ul>
<p>The following is steps to a working setup using the former.</p>
<h1>1 The door to your app</h1>
<p><strong>nginx.conf:</strong> (ellipses represent other non-relevant, non-compulsory settings)</p>
<pre><code>user nginx;
worker_processes auto;
events {
worker_connections 1024;
}
http {
...
keepalive_timeout 620s;
## Logging ##
...
## MIME Types ##
...
## Caching ##
...
## Security Headers ##
...
## Compression ##
....
server {
listen 80;
## HTTP Redirect ##
if ($http_x_forwarded_proto = "http") {
return 301 https://[YOUR DOMAIN]$request_uri;
}
location /health/liveness {
access_log off;
default_type text/plain;
return 200 'Server is LIVE!';
}
location /health/readiness {
access_log off;
default_type text/plain;
return 200 'Server is READY!';
}
root /usr/src/app/www;
index index.html index.htm;
server_name [YOUR DOMAIN] www.[YOUR DOMAIN];
location / {
try_files $uri $uri/ /index.html;
}
}
}
</code></pre>
<p>NOTE: <strong>One serving port only</strong>. The global forwarding rule adds the <a href="https://cloud.google.com/load-balancing/docs/https/#components" rel="noreferrer">http_x_forwarded_proto</a> header to all traffic that passes through it. Because ALL traffic to your domain is now passing through this rule (remember, one port on the container, service and ingress), this header will (crucially!) always be set. Note the check and redirect above: it only continues with serving if the header value is 'https'. The root, index, and location values may differ depending on your project (this is an Angular project). keepalive_timeout is set to the value <a href="https://cloud.google.com/load-balancing/docs/https/#timeouts_and_retries" rel="noreferrer">recommended by Google</a>. I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is imported into the main nginx.conf http block using an <em>include</em> statement. The comments highlight where other settings you may want to add once everything is working will go, such as gzip/brotli, security headers, where logs are saved, and so on.</p>
<p><strong>Dockerfile:</strong></p>
<pre><code>...
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>NOTE: only the final two lines. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD starts a light server.</p>
<h1>2 Create a deployment manifest and apply/create</h1>
<p><strong>deployment.yaml:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: uber-dp
spec:
replicas: 1
selector:
matchLabels:
app: uber
template:
metadata:
labels:
app: uber
spec:
containers:
- name: uber-ctr
image: gcr.io/uber/beta:v1 // or some other registry
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 60
httpGet:
path: /health/liveness
port: 80
scheme: HTTP
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 30
httpGet:
path: /health/readiness
port: 80
scheme: HTTP
ports:
- containerPort: 80
imagePullPolicy: Always
</code></pre>
<p>NOTE: only one specified port is necessary, as we're going to point all (HTTP and HTTPS) traffic to it. For simplicity we're using the same path for liveness and readiness probes; these checks will be dealt with on the NGINX server, but you can and should add checks that probe the health of your app itself (eg a dedicated page that returns a 200 if healthy).The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.</p>
<h1>3 Create a service manifest and apply/create</h1>
<p><strong>service.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: uber-svc
labels:
app: uber
spec:
ports:
- name: default-port
port: 80
selector:
app: uber
sessionAffinity: None
type: NodePort
</code></pre>
<p>NOTE: default-port specifies port 80 on the container.</p>
<h1>4 Get a static IP address</h1>
<p>On GCP in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP or create a new one. Take note of the name and address.</p>
<h1>5 <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates" rel="noreferrer">Create an SSL cert</a> and default zone</h1>
<p>In the hamburger menu: Network Service -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of the name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, <a href="https://cloud.google.com/dns/docs/quickstart" rel="noreferrer">create a default zone</a> for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name value and your static IP as the IPV4. Save.</p>
<h1>6 Create an ingress manifest and apply/create</h1>
<p><strong>ingress.yaml:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: mypt-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
kubernetes.io/ingress.allow-http: "true"
ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
backend:
serviceName: mypt-svc
servicePort: 80
</code></pre>
<p>NOTE: backend property points to the service, which points to the container, which contains your app 'protected' by a server. Annotations connect your app with SSL, and force-allow http for the health checks. Combined, the service and ingress configure the <a href="https://cloud.google.com/load-balancing/docs/https/" rel="noreferrer">G7 load balancer</a> (combined global forwarding rule, backend and frontend 'services', SSL certs and target proxies etc).</p>
<h1>7 Make a cup of tea or something</h1>
<p>Everything needs ~10 minutes to configure. Clear cache and test your domain with various browsers (Tor, Opera, Safari, IE etc). Everything will serve over https.</p>
<p><strong>What about the NGINX Ingress Controller?</strong>
I've seen discussion of it being better because it's cheaper/uses fewer resources and is more flexible. It isn't cheaper: it requires an additional deployment/workload and service (GCE L4). And you need to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility, namely allowing you to get on with more pressing matters.</p>
|
<p>I have a Kubernetes cluster in vagrant (1.14.0) and installed calico.</p>
<p>I have installed the kubernetes dashboard. When I use <code>kubectl proxy</code> to visit the dashboard:</p>
<pre><code>Error: 'dial tcp 192.168.1.4:8443: connect: connection refused'
Trying to reach: 'https://192.168.1.4:8443/'
</code></pre>
<p>Here are my pods (dashboard is restarting frequently):</p>
<pre><code>$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-etcd-cj928 1/1 Running 0 11m
calico-node-4fnb6 1/1 Running 0 18m
calico-node-qjv7t 1/1 Running 0 20m
calico-policy-controller-b9b6749c6-29c44 1/1 Running 1 11m
coredns-fb8b8dccf-jjbhk 1/1 Running 0 20m
coredns-fb8b8dccf-jrc2l 1/1 Running 0 20m
etcd-k8s-master 1/1 Running 0 19m
kube-apiserver-k8s-master 1/1 Running 0 19m
kube-controller-manager-k8s-master 1/1 Running 0 19m
kube-proxy-8mrrr 1/1 Running 0 18m
kube-proxy-cdsr9 1/1 Running 0 20m
kube-scheduler-k8s-master 1/1 Running 0 19m
kubernetes-dashboard-5f7b999d65-nnztw 1/1 Running 3 2m11s
</code></pre>
<p>logs of the dasbhoard pod:</p>
<pre><code>2019/03/30 14:36:21 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
</code></pre>
<p>I can telnet from both master and nodes to 10.96.0.1:443.</p>
<p>What is configured wrongly? The rest of the cluster seems to work fine, although I see this logs in kubelet:</p>
<pre><code>failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml"
</code></pre>
<p>kubelet seems to run fine on the master.
The cluster was created with this command:</p>
<pre><code>kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
</code></pre>
| <p>You should define your hostname in <code>/etc/hosts</code>:</p>
<pre><code># hostname           <- check your current hostname
YOUR_HOSTNAME
# nano /etc/hosts    <- add an entry mapping your IP to it
YOUR_IP    YOUR_HOSTNAME
</code></pre>
<p>If you set the hostname on your master but it still did not work, try:</p>
<pre><code># systemctl stop kubelet
# systemctl stop docker
# iptables --flush
# iptables -tnat --flush
# systemctl start kubelet
# systemctl start docker
</code></pre>
<p><strong>Also, install the dashboard before joining the worker nodes</strong> and <strong>disable your firewall</strong>.</p>
<p>You should also check that you have enough free RAM.</p>
|
<p>I have a Kubernetes cluster and everything works fine. After some time I drained my worker nodes, reset them, and joined them to the master again, but:</p>
<pre><code>#kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu Ready master 159m v1.14.0
ubuntu1 Ready,SchedulingDisabled <none> 125m v1.14.0
ubuntu2 Ready,SchedulingDisabled <none> 96m v1.14.0
</code></pre>
<p>What should I do?</p>
| <p>To prevent a node from scheduling new pods use:</p>
<pre><code>kubectl cordon <node-name>
</code></pre>
<p>Which will cause the node to be in the status: <code>Ready,SchedulingDisabled</code>.</p>
<p>To tell is to resume scheduling use:</p>
<pre><code>kubectl uncordon <node-name>
</code></pre>
<p>More information about draining a node can be found <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="noreferrer">here</a>. And manual node administration <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration" rel="noreferrer">here</a></p>
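<p>To cordon a node and evict its existing pods in one step, drain it, then resume scheduling once maintenance is done:</p>
<pre><code>kubectl drain <node-name> --ignore-daemonsets
kubectl uncordon <node-name>
</code></pre>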
|
<p>Is there a way to provide timeout to kubernetes CronJob?</p>
<p>I need to schedule a task that runs on a cron schedule, but I need to limit execution of this task to 20 seconds. If the task runs longer than 20 seconds, it should be terminated.
I tried using <code>.spec.startingDeadlineSeconds</code> but this didn't help.</p>
| <p>Use <code>cronjob.spec.jobTemplate.spec.activeDeadlineSeconds</code>:</p>
<blockquote>
<p>FIELDS: </p>
<p>activeDeadlineSeconds
Specifies the duration in seconds relative to the startTime that the job
may be active before the system tries to terminate it; value must be
positive integer</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="noreferrer">From documentation:</a></p>
<blockquote>
<p>Another way to terminate a Job is by setting an active deadline. Do
this by setting the .spec.activeDeadlineSeconds field of the Job to a
number of seconds.</p>
<p>The activeDeadlineSeconds applies to the duration of the job, no
matter how many Pods are created. Once a Job reaches
activeDeadlineSeconds, all of its Pods are terminated and the Job
status will become type: Failed with reason: DeadlineExceeded.</p>
</blockquote>
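<p>For reference, a minimal sketch of where this field sits in a CronJob manifest (the name, schedule, and image are placeholders; the <code>apiVersion</code> may differ depending on your cluster version):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-task
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 20   # terminate the job's pods after 20 seconds
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: task
            image: busybox
            command: ["sh", "-c", "echo working; sleep 60"]
</code></pre>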
|
<p>Does <code>LoadBalancer</code> use <code>kube-proxy</code> as mentioned <a href="https://gardener.cloud/050-tutorials/content/howto/service-access/" rel="nofollow noreferrer">in this article</a> or is it using <code>NodePort</code> as mentioned <a href="https://rancher.com/blog/2018/2018-06-08-load-balancing-user-apps-with-rancher/" rel="nofollow noreferrer">here</a>?</p>
<p>If it's in fact using <code>NodePort</code>, then why are multiple sources such as <a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> claiming that using <code>NodePort</code> in production is not a good idea?</p>
<p><code>Ingress</code> also operates using <code>NodePort + nginx</code>. What's so special about <code>LoadBalancer</code> then?</p>
| <p>It depends on the cloud provider implementation:</p>
<blockquote>
<p>Traffic from the external load balancer will be directed at the
backend Pods, though exactly how that works depends on the cloud
provider.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/#loadbalancer</a></p>
<p>In case of Google Kubernetes Engine (GKE), <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ext-lb-details" rel="nofollow noreferrer">this is how it works</a>:</p>
<blockquote>
<p>When using the external load balancer, arriving traffic is initially
routed to a node using a forwarding rule associated with the GCP
network. After the traffic reaches the node, the node uses its
iptables NAT table to choose a Pod. kube-proxy manages the iptables
rules on the node.</p>
</blockquote>
|
<p>I am new to kubernetes and microservices, there are 2 objects, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployments</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noreferrer">ReplicaSet</a>.</p>
<p>I can't wrap my head around even after reading its documentations and other articles.</p>
<p>If we have <code>Deployments</code>, why do we need a <code>ReplicaSet</code>, given that you can specify the replica count in the <code>Deployment</code>? And when I delete the pods, new pods are spawned based on the ReplicaSet, just like with a Deployment.</p>
<p>What is an actual use-case where we only need a <code>ReplicaSet</code> but not a <code>Deployment</code>?</p>
| <p>Deployment works one level above the ReplicaSet object; a Deployment is recommended for application services.</p>
<p>With a Deployment you are able to do rolling upgrades or rollbacks, for example updating an image from v1 to v2.</p>
<p>With a ReplicaSet you only define the number of replicas you want to run for a particular service, and that many replicas are kept running.</p>
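<p>To illustrate, a minimal Deployment already embeds the ReplicaSet behaviour through its <code>replicas</code> field and adds rollout management on top (names and image are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3            # the Deployment creates a ReplicaSet that keeps 3 pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v1   # changing this to my-app:v2 triggers a rolling update
</code></pre>
<p>Updating the image (e.g. <code>kubectl set image deployment/my-app my-app=my-app:v2</code>) performs a rolling update, and <code>kubectl rollout undo deployment/my-app</code> rolls it back; neither is possible with a bare ReplicaSet.</p>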
|
<p>My understanding is that, if a pod is configured to have one container, that container will run its "main" process as PID 1 and that's pretty much it. My pods have only one container each, and they very often have multiple processes running (always copies of the same process) - why does this happen?</p>
<p>On one cluster I have:</p>
<pre><code>USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.3 0.4 276668 76076 ? Ssl 16:50 0:48 python manage.py drive_consumer_worker
root 19 0.0 0.0 34432 2756 ? Rs 20:28 0:00 ps aux
</code></pre>
<p>On another cluster (running the same <code>Deployment</code>), I have:</p>
<pre><code>USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 1.2 0.3 1269420 104388 ? Ssl Mar16 240:12 python manage.py drive_consumer_worker
root 26 0.0 0.2 1044312 84160 ? S Mar16 0:01 python manage.py drive_consumer_worker
root 30 0.0 0.0 34440 2872 ? Rs 20:30 0:00 ps aux
</code></pre>
<p>As you can see, the memory size is significant enough to indicate that it's a "real" process, but I don't know what I can do to continue debugging. I don't see any pattern with the number of pod replicas defined and the process count.</p>
<p>Snippet from the deployment definition:</p>
<pre><code> containers:
- args:
- newrelic-admin run-program python manage.py drive_consumer_worker
command:
- /bin/bash
- -c
</code></pre>
<p>What is going on here?</p>
| <p>It really depends on the parent process: if it doesn't spawn any children, process <code>1</code> is all you'll have in the container. In this case, it looks like <code>python manage.py drive_consumer_worker</code> is spawning child processes, so it is up to the application whether more processes appear in the container.</p>
|
<p>I am trying to deploy my angular application with kubernetes inside a container with nginx.</p>
<p>I create my docker file:</p>
<pre><code>FROM node:10-alpine as builder
COPY package.json package-lock.json ./
RUN npm ci && mkdir /ng-app && mv ./node_modules ./ng-app
WORKDIR /ng-app
COPY . .
RUN npm run ng build -- --prod --output-path=dist
FROM nginx:1.14.1-alpine
COPY nginx/default.conf /etc/nginx/conf.d/
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /ng-app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>My nginx config:</p>
<pre><code>server {
listen 80;
sendfile on;
default_type application/octet-stream;
gzip on;
gzip_http_version 1.1;
gzip_disable "MSIE [1-6]\.";
gzip_min_length 1100;
gzip_vary on;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
gzip_comp_level 9;
root /usr/share/nginx/html;
location / {
try_files $uri $uri/ /index.html =404;
}
location /api {
proxy_pass https://my-api;
}
}
</code></pre>
<p>If I launch this image locally it works perfectly, but when I deploy this container inside a kubernetes cluster the site loads fine but all API requests show the error <code>ERR_CONNECTION_REFUSED</code>.</p>
<p>I'm deploying on GCP: I build the image and then publish it via the GCP dashboard.</p>
<p>Any idea about this <code>ERR_CONNECTION_REFUSED</code>?</p>
| <p>I found the solution. The problem was with my requests: I was using <code>localhost</code> in the URL, and with that I took the wrong pod IP. I just changed the requests to use the service IP directly, and that sorted out my problem.</p>
|
<p><strong>The issue:</strong></p>
<p>When running skaffold and update watched files, I see the file sync update occur and nodemon restart the server, but refreshing the page doesn't show the change. It's not until after I stop skaffold entirely and restart that I see the change.</p>
<pre><code>Syncing 1 files for test/dev-client:e9c0a112af09abedcb441j4asdfasfd1cf80f2a9bc80342fd4123f01f32e234cfc18
Watching for changes every 1s...
[client-deployment-656asdf881-m643v client] [nodemon] restarting due to changes...
[client-deployment-656asdf881-m643v client] [nodemon] starting `node bin/server.js`
</code></pre>
<p><strong>The setup:</strong></p>
<p>I have a simple microservices application. It has a server side (flask/python) and a client side (react) with express handling the dev server. I have nodemon on with the legacy watch flag as true (For Chokidar polling). On development I'm using Kubernetes via Docker for Mac.</p>
<p><strong>Code:</strong></p>
<p>I'm happy to post my code to assist. Just let me know which ones are most needed.</p>
<p>Here's some starters:</p>
<p>Skaffold.yaml:</p>
<pre><code>apiVersion: skaffold/v1beta7
kind: Config
build:
local:
push: false
artifacts:
- image: test/dev-client
docker:
dockerfile: Dockerfile.dev
context: ./client
sync:
'**/*.css': .
'**/*.scss': .
'**/*.js': .
- image: test/dev-server
docker:
dockerfile: Dockerfile.dev
context: ./server
sync:
'**/*.py': .
deploy:
kubectl:
manifests:
- k8s-test/client-ip-service.yaml
- k8s-test/client-deployment.yaml
- k8s-test/ingress-service.yaml
- k8s-test/server-cluster-ip-service.yaml
- k8s-test/server-deployment.yaml
</code></pre>
<p>The relevant part from Package.json:</p>
<pre><code> "start": "nodemon -L bin/server.js",
</code></pre>
<p>Dockerfile.dev (Client side):</p>
<pre><code># base image
FROM node:10.8.0-alpine
# setting the working directory
# may have to run this depending on environment
# RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# add '/usr/src/app/node_modules/.bin' to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install
# copy over everything else
COPY . .
# start the app.
CMD ["npm", "run", "start"]
</code></pre>
| <p>It turns out I was using the wrong pattern for my file syncs. **/*.js doesn't sync the directory properly.</p>
<p>After changing</p>
<pre><code>sync:
'**/*.css': .
'**/*.scss': .
'**/*.js': .
</code></pre>
<p>to </p>
<pre><code>sync:
'***/*.css': .
'***/*.scss': .
'***/*.js': .
</code></pre>
<p>It immediately began working.</p>
<hr>
<p>Update:
On the latest versions of skaffold, this pattern no longer works as skaffold abandoned flattening by default. You can now use **/* patterns and get the same results.</p>
|
<p>For testing purpose, I have enabled pod scheduling on kubernetes master node with the following command</p>
<p><code>kubectl taint nodes --all node-role.kubernetes.io/master-</code></p>
<p>Now I have added worker node to the cluster and I would like to stop scheduling pods on master. How do I do that?</p>
| <p>You simply taint the node again.</p>
<pre><code>kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule
</code></pre>
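<p>You can verify the taint took effect with something like:</p>
<pre><code>kubectl describe node master | grep Taints
</code></pre>
<p>which should show <code>node-role.kubernetes.io/master:NoSchedule</code> again.</p>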
|
<ul>
<li>Since versions 2.6 (Apache Hadoop) <strong>Yarn handles docker containers</strong>. Basically it distributes the requested amount of containers on a Hadoop cluster, restart failed containers and so on.</li>
<li><strong>Kubernetes</strong> seemed to do the <strong>same</strong>.</li>
</ul>
<p>Where are the major differences?</p>
| <p>Kubernetes was developed almost from a clean slate to extend the Docker container kernel into a platform, taking a bottom-up approach. It has good support for specifying per-container/pod resource requirements, but it lacks an effective global scheduler that can partition resources into logical groupings. The Kubernetes design allows multiple schedulers to run in the cluster, each managing resources within its own pods. However, a Kubernetes cluster can suffer from instability when applications demand more resources than the physical systems can handle; it works best when infrastructure capacity exceeds application demand. The Kubernetes scheduler will attempt to fill up idle nodes with incoming application requests and terminate low-priority and starving containers to improve resource utilization. Kubernetes containers can integrate with external storage systems like S3 to provide resilience for data. The Kubernetes framework uses etcd to store cluster data. The etcd cluster nodes and the Hadoop Namenode are both single points of failure in their respective platforms. Etcd can have more replicas than the Namenode, so from a reliability point of view theory seems to favor Kubernetes. However, Kubernetes security is open by default unless RBAC with fine-grained role bindings is defined and the security context is set correctly for pods. If omitted, the primary group of the pod will default to root, which can be problematic for system administrators trying to secure the infrastructure.</p>
<p>Apache Hadoop YARN was developed to run isolated Java processes for big-data workloads and was later improved to support Docker containers. YARN provides global-level resource management, such as capacity queues for partitioning physical resources into logical units. Each business unit can be assigned a percentage of the cluster resources. The capacity-sharing system is designed to guarantee resource availability for enterprise priorities rather than to squeeze every available physical resource. YARN scores more points on security: there are more security features in Kerberos, access control for privileged/non-privileged containers, trusted Docker images, and placement policy constraints. Most Docker-related security settings default to closed, and a system admin needs to manually turn on flags to grant more power to containers. Large enterprises tend to run Hadoop more than Kubernetes because securing the system costs less. There are more distributed SQL engines built on top of YARN, including Hive, Impala, SparkSQL and IBM BigSQL. The database options make YARN attractive because of the ability to run online transaction processing in containers alongside online analytical processing using batch workloads. The Hadoop developer toolchain can be overwhelming: MapReduce, Hive, Pig, Spark and so on each have their own style of development, the user experience is inconsistent, and it takes a while to learn them all. Kubernetes feels less obstructive by comparison because it only deploys Docker containers. With the introduction of YARN services to run Docker container workloads, YARN can feel less wordy than Kubernetes.</p>
<p>If your plan is to outsource IT operations to a public cloud, pick Kubernetes. If your plan is to build a private/hybrid/multi-cloud, pick Apache YARN.</p>
|
<p>I am fairly new to networkpolicies on Calico. I have created the following NetworkPolicy on my cluster:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: nginxnp-po
namespace: default
spec:
podSelector:
matchLabels:
run: nginxnp
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
acces: frontend
ports:
- port: 80
</code></pre>
<p>This is how I read it: All pods that have the selector <code>run=nginxnp</code> are only accessible on port 80 from every pod that has the selector <code>access=frontend</code>.</p>
<p>Here is my nginx pod (with a running nginx in it):</p>
<pre><code>$ kubectl get pods -l run=nginxnp
NAME READY STATUS RESTARTS AGE
nginxnp-9b49f4b8d-tkz6q 1/1 Running 0 36h
</code></pre>
<p>I created a busybox container like this:</p>
<pre><code>$ kubectl run busybox --image=busybox --restart=Never --labels=access=frontend -- sleep 3600
</code></pre>
<p>I can see that it matches the selector <code>access=frontend</code>:</p>
<pre><code>$ kubectl get pods -l access=frontend
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 6m30s
</code></pre>
<p>However when I exec into the busybox pod and try to wget the nginx pod, the connection is still refused.</p>
<p>I also tried setting an egress rule that allows the traffic the other way round, but this didn't do anything as well. As I understood networkpolicies: When no rule is set, nothing is blocked. Hence, when I set no egress rule, egress should not be blocked.</p>
<p>If I delete the networkpolicy it works. Any pointers are highly appreciated.</p>
| <p>There is a typo in the NetworkPolicy template: <code>acces: frontend</code> should be <code>access: frontend</code>.</p>
<pre><code> ingress:
- from:
- podSelector:
matchLabels:
acces: frontend
</code></pre>
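<p>With the typo fixed, the ingress block should read:</p>
<pre><code> ingress:
  - from:
    - podSelector:
        matchLabels:
          access: frontend
</code></pre>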
|
<p>I am trying to set a deployment of containers in Kubernetes. I want the resource utilization to be controlled. I am referring <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit" rel="nofollow noreferrer">this</a>.</p>
<p>An example config from the docs - </p>
<pre><code>resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
</code></pre>
<p>But I am not able to clearly understand the differences between <code>requests</code> and <code>args</code> fields. <code>limits</code> is somewhat clear that the container should not be using more than the limit amount of resource. </p>
<p>What purpose does <code>args</code> serve exactly. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#specify-a-memory-request-and-a-memory-limit" rel="nofollow noreferrer">Here</a>, it is stated that this is the resource the container would start with. Then how is it different from <code>requests</code> ?</p>
| <pre><code>resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
</code></pre>
<p>The <code>resources</code> section has <code>requests</code> and <code>limits</code> fields.<br>
The request means a minimum of 100Mi of memory should be allocated to the container, and this amount is sufficient to run it. In case of a spike in traffic, it can burst memory consumption up to 200Mi, which is a kind of upper bound. If it exceeds 200Mi, the container will get killed/restarted.</p>
<p>Args are passed to the command (the stress container) as command-line arguments.</p>
<p><a href="http://people.seas.harvard.edu/~apw/stress/" rel="nofollow noreferrer">Stress Tool Docs</a><br>
<a href="https://hub.docker.com/r/progrium/stress/" rel="nofollow noreferrer">DockerImageForStress</a></p>
<p>It looks like stress is consuming 150M of memory, passed as the <code>--vm-bytes</code> arg.</p>
<p>I think with the help of the stress tool, the docs are trying to show that the container can consume memory between the request and limit values.</p>
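<p>In other words, a container spec along the lines of the docs effectively runs the command line <code>stress --vm 1 --vm-bytes 150M --vm-hang 1</code>, and the 150M it allocates sits between the 100Mi request and the 200Mi limit (the image name here is just an example of an image containing the stress tool):</p>
<pre><code>containers:
- name: memory-demo
  image: polinux/stress        # example image containing the stress tool
  resources:
    requests:
      memory: "100Mi"          # scheduler guarantees at least this much
    limits:
      memory: "200Mi"          # exceeding this gets the container killed
  command: ["stress"]
  args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
</code></pre>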
|
<p>I have a Scala application that is using Spark 2.1 in standalone mode. The application will run for 2 hours and finish. It should be run once a month.</p>
<p>I found several approaches to combine Spark and Kubernetes: </p>
<ol>
<li>Use Apache Spark Helm Chart: <a href="https://github.com/kubernetes/charts/tree/master/stable/spark" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/spark</a></li>
<li>There is a special branch of Spark for Kubernetes: <a href="https://github.com/apache-spark-on-k8s/spark" rel="nofollow noreferrer">https://github.com/apache-spark-on-k8s/spark</a> </li>
<li>Build my own Docker image of my application including the Spark binary: <a href="http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/" rel="nofollow noreferrer">http://blog.madhukaraphatak.com/scaling-spark-with-kubernetes-part-5/</a> Code example: <a href="https://github.com/phatak-dev/kubernetes-spark" rel="nofollow noreferrer">https://github.com/phatak-dev/kubernetes-spark</a></li>
</ol>
<p>Most of the documentation describe how to run a Spark cluster on Kubernetes. What is the approach for running Spark standalone on Kubernetes?</p>
| <p>Check my <a href="https://github.com/radanalyticsio/spark-operator" rel="nofollow noreferrer">https://github.com/radanalyticsio/spark-operator</a></p>
<p>It deploys standalone spark on Kubernetes and OpenShift and supports also spark-on-k8s native scheduler. The default Spark version is 2.4.0</p>
<p>You can find the very quick start in the project's readme file, however here is a way to deploy the spark cluster using the operator:</p>
<pre><code># create operator
kubectl apply -f https://raw.githubusercontent.com/radanalyticsio/spark-operator/master/manifest/operator.yaml
# create cluster
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: SparkCluster
metadata:
name: my-cluster
spec:
worker:
instances: "2"
EOF
</code></pre>
|
<p>kube-controller-manager has the following property</p>
<pre><code>-deployment-controller-sync-period duration Default: 30s
Period for syncing the deployments.
</code></pre>
<p>What does this actually control and what does <code>period for syncing the deployments</code> mean?</p>
| <p>Haha most curious thing. You'd expect it does something like controlling how often the controller checks whether the status of Deployment objects are compatible with spec or if there is a change needed.</p>
<p>However currently the controller-manager is notified on changes by the apiserver so it always inherently knows this information already.</p>
<p>There is <a href="https://github.com/kubernetes/kubernetes/issues/71510" rel="nofollow noreferrer">Issue #71510</a> where someone points out that parameter seems to be unused. I've done my own <a href="https://github.com/kubernetes/kubernetes/search?utf8=%E2%9C%93&q=deployment-controller-sync-period&type=" rel="nofollow noreferrer">search for the parameter</a> and a related <a href="https://github.com/kubernetes/kubernetes/search?utf8=%E2%9C%93&q=DeploymentControllerSyncPeriod&type=" rel="nofollow noreferrer">search for the variable</a>. As far as I can tell all of these uses are for copying this value around, conversions, declarations, etc, and none of them actually use it for anything at all.</p>
<p>A good test would be setting it to a year and see what happens. I haven't done that though.</p>
|
<p>How can I expose a StatefulSet service (cassandra, mysql, etc...) with <code>ClusterIP=None</code> on Kubernetes in Google Cloud Platform?</p>
<p>Do I need to change the ClusterIP config? Or do I need to configure Google Cloud NAT? Or do I need to change something else?</p>
<p>Thanks</p>
<p>EDIT: I want to connect to cassandra from an external IP, from anyplace on the internet</p>
<p>EDIT2: I guess that the solution is to use <code>LoadBalance</code> instead of <code>ClusterIP</code>, but when I use <code>LoadBalance</code>, the Cassandra nodes can't find the seed node. Then I sill using <code>ClusterIP=None</code> to Cassandra cluster, and I created another POD with <code>type=LoadBalance</code> to connect to Cassandra and to have connections to exterior. And now it's working :)</p>
| <p>If by "<strong><em>expose</em></strong>" you mean ability to reach your service endpoints without cluster IP , then just use <code>selector</code> in your headless service, i.e.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: cassandra
spec:
clusterIP: None
selector:
app: cassandra
ports:
- port: 80
targetPort: 80
</code></pre>
<p>For more details refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#with-selectors" rel="noreferrer">documentation</a> </p>
<p>Otherwise, if you want to expose your deployments outside of the cluster, you won't be able to do it with headless service.</p>
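<p>As a sketch of that second case: to reach the pods from outside the cluster you can keep the headless service for internal cluster traffic and add a separate <code>LoadBalancer</code> service selecting the same pods (the service name is a placeholder, and the port assumes Cassandra's default CQL port):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cassandra-external
spec:
  type: LoadBalancer
  selector:
    app: cassandra
  ports:
  - port: 9042        # Cassandra CQL native transport port
    targetPort: 9042
</code></pre>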
|
<p>I am new to helm and I have tried to deploy a few tutorial charts. Had a couple of queries:</p>
<ol>
<li><p>I have a Kubernetes job which I need to deploy. Is it possible to deploy a job via helm?</p>
</li>
<li><p>Also, currently my kubernetes job is deployed from my custom docker image and it runs a bash script to complete the job. I wanted to pass a few parameters to this chart/job so that the bash commands takes the input parameters. That's the reason I decided to move to helm because it provided a more flexibility. Is that possible?</p>
</li>
</ol>
| <p>You can use <code>Helm Hooks</code> to run jobs. Depending on how you set up your annotations you can run a different type of hook (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, crd-install). An example from the <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">doc</a> is as follows:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: "{{.Release.Name}}"
labels:
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: "{{.Release.Name}}"
labels:
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
spec:
restartPolicy: Never
containers:
- name: post-install-job
image: "alpine:3.3"
command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
</code></pre>
<p>You can pass your parameters as <code>secrets</code> or <code>configMaps</code> to your job as you would to a pod.</p>
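<p>For example, the <code>sleepyTime</code> value referenced in the template above can be supplied at install time (the chart path is a placeholder):</p>
<pre><code>helm install ./my-chart --set sleepyTime=30
</code></pre>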
|
<p>I started working with kubernetes a few months ago; I created services and replicasets for different backend applications.</p>
<p>Now I want to use an ingress service to access my different applications from outside with my domain name. As it's not in production yet and I use just one node, I don't want to use a load balancer.</p>
<p>Is it possible to set a Service with NodePort and attach it to an Ingress without using a load balancer?</p>
<p>This is my configuration: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-replicaset-service
labels:
app: nginx-demo
spec:
externalIPs:
- XXX.XXX.XXX.XXX
selector:
app: nginx-replica
type: NodePort
ports:
- name: http
port: 80
</code></pre>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: www.mydomain.com
http:
paths:
- path: /
backend:
serviceName: nginx-replicaset-service
servicePort: 80
- host: test.mydomain.com
http:
paths:
- path: /test
backend:
serviceName: other-service
servicePort: 80
</code></pre>
<p>When i check the ingress information on my cluster i have the following: </p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
my-ingress www.mydomain.com,test.mydomain.com 80 4d22h
</code></pre>
<p>The Address field is always empty.</p>
| <p>If your cluster is running on GCP or GKE you can </p>
<p>1) create a global IP address</p>
<pre><code>gcloud compute addresses create ingress-static-ip --global
</code></pre>
<p>2) set the <strong>kubernetes.io/ingress.global-static-ip-name</strong> annotation on the Ingress config</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: ingress-static-ip
labels:
app: nginx-replica
</code></pre>
<p>3) And a service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-replicaset-service
labels:
app: nginx-demo
spec:
selector:
app: nginx-replica
type: NodePort
ports:
- name: http
port: 80
</code></pre>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip</a></p>
|
<p>I am using Feign through Spring Feign, and load balancing seems off. I have one instance of service A and 2 instances of service B. Service A calls service B through a Feign client. I plotted incoming requests on service B and they all seem to hit the same node, and after some time they switch to the other node and all hit that node again. Not really what I want. I use kubernetes DNS to get a node.
Am I missing some part of the puzzle? Does Feign get the IP and use it for a while?</p>
<p>I am using the latest spring cloud, but am using httpclient instead of the standard client.</p>
<p>My spring feign annotation looks like:</p>
<pre><code>@FeignClient(name = "serviceB", url="http://serviceb:8080")
</code></pre>
<p>where serviceb is the name of the service in kubernetes DNS.</p>
| <p>By node, do you mean a Pod?</p>
<p>To test your theory, you can continuously make calls to serviceb and abruptly bring down one of the pods to see if the other pod gets the requests!</p>
<p>k8s seems to follow a random algorithm for load balancing, so there is a chance that it sends requests to the same pod; I have also seen this when there are not enough requests. When you send multiple concurrent requests continuously for a certain duration, I have seen the requests distributed across all the pods.</p>
|
<p>What is the simplest way to configure parameter max_prepared_transactions=100 in docker kubernetes?</p>
<p>I am using image:</p>
<p><a href="https://hub.docker.com/_/postgres/" rel="nofollow noreferrer">https://hub.docker.com/_/postgres/</a></p>
<p>Which has postgresql.conf file at /var/lib/postgresql/data</p>
<p>In my kubernetes deployment, that directory is externally mounted so I can't copy postgresql.conf using Dockerfile so I need to specify that parameter as a ENV parameter in Kubernetes .yml file, or changing the location of postgresql.conf file to, for example, /etc/postgresql.conf (how can I do this as a ENV parameter too?)</p>
<p>Thanks</p>
| <p>You can set this config as runtime flag when you start your docker container. Something like this:</p>
<pre><code>$ docker run -d postgres --max_prepared_transactions=100
</code></pre>
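<p>In a Kubernetes manifest the same runtime flag can be passed through the container <code>args</code>; a sketch of the relevant part of a pod/deployment spec (the container name is a placeholder):</p>
<pre><code>containers:
- name: postgres
  image: postgres
  args: ["--max_prepared_transactions=100"]
</code></pre>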
|
<p>I'm using <a href="https://k3s.io/" rel="nofollow noreferrer">k3s</a> to test my k8s configurations. Sadly, <code>imagePullSecrets</code> seems not to work properly.</p>
<p>I've tested the same configuration in <a href="https://kubernetes.io/docs/setup/minikube/" rel="nofollow noreferrer">minikube</a> and it works fine.</p>
<p>Example:</p>
<p>I create the secret with:</p>
<pre><code>kubectl create secret generic myreg --from-file=.dockerconfigjson=$HOME/.docker/config.json
</code></pre>
<p>And this is a daemonset example:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: foo
namespace: default
labels:
app: foo
spec:
selector:
matchLabels:
name: foo
template:
metadata:
labels:
name: foo
spec:
imagePullSecrets:
- name: myreg
containers:
- name: foo
image: whatever/foo:latest
</code></pre>
<p>The status stays as <code>ErrImagePull</code> and running <code>describe</code> over the pod it says:</p>
<pre><code> Normal BackOff 2s kubelet, localhost Back-off pulling image "whatever/foo:latest"
Warning Failed 2s kubelet, localhost Error: ImagePullBackOff
</code></pre>
<p>Why Does it not work?</p>
| <p>Finally I found the answer in the issue <a href="https://github.com/rancher/k3s/issues/167" rel="nofollow noreferrer">Document image preloading</a>.</p>
<p>The imagePullSecrets are not implemented in k3s, but there is an undocumented feature, and you can pull the image manually to get it to work.</p>
<p>To do it (as root):</p>
<pre><code># docker pull whatever/foo:latest
# docker save whatever/foo:latest -o /var/lib/rancher/k3s/agent/images/foo-latest.tgz
</code></pre>
<p>And then the image will be "downloaded" and installed into k3s.</p>
<p><strong>Remember</strong> to restart k3s after downloading it.</p>
|
<p>I have launched a cluster using AWS EKS successfully and applied aws-auth, but the nodes are not joining the cluster. I checked the log messages of a node and found this - </p>
<pre><code>Dec 4 08:09:02 ip-10-0-8-187 kubelet: E1204 08:09:02.760634 3542 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Unauthorized
Dec 4 08:09:03 ip-10-0-8-187 kubelet: W1204 08:09:03.296102 3542 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 4 08:09:03 ip-10-0-8-187 kubelet: E1204 08:09:03.296217 3542 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 4 08:09:03 ip-10-0-8-187 kubelet: E1204 08:09:03.459361 3542 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Unauthorized`
</code></pre>
<p>I am not sure about this. I have attached eks full access to these instance node roles. </p>
| <p>If you are using terraform, or modifying tags and name variables, make sure the cluster name matches <em>in the tags</em>!</p>
<p>A node must be "owned" by a certain cluster; the nodes will only join a cluster they're supposed to. I overlooked this, but there isn't a lot of documentation to go on when using Terraform. Make sure the variables match. This is the node tag naming the parent cluster to join:</p>
<pre><code> tag {
key = "kubernetes.io/cluster/${var.eks_cluster_name}-${terraform.workspace}"
value = "owned"
propagate_at_launch = true
}
</code></pre>
|
<p><strong>UPDATE:</strong> I'm deploying on AWS cloud with the help of kops.</p>
<p>I'm in the process of applying HPA to one of my Kubernetes deployments.
While testing the sample app, deployed in the default namespace, I can see the metrics being exposed as below (showing current utilisation of 0%):</p>
<pre><code>$ kubectl run busybox --image=busybox --port 8080 -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p 8080; done"
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
busybox Deployment/busybox 0%/20% 1 4 1 14m
</code></pre>
<p>But when I deploy with the custom namespace ( example: test), the current utilisation is showing unknown </p>
<pre><code> $ kubectl get hpa --namespace test
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
busybox Deployment/busybox <unknown>/20% 1 4 1 25m
</code></pre>
<p>Can someone please suggest whats wrong here?</p>
| <p>For the future: you need to meet a few conditions for HPA to work. You need to have the metrics server or Heapster running on your cluster. It is also important to set resource requests on a per-namespace basis.</p>
<p>You did not say in what environment your cluster is running, but in GKE by default you have a CPU resource request set (100m); you need to specify it in new namespaces:</p>
<blockquote>
<p>Please note that if some of the pod’s containers do not have the
relevant resource request set, CPU utilization for the pod will not be
defined and the autoscaler will not take any action for that metric.</p>
</blockquote>
<p>In your case I am not sure why it does work after a redeploy, as there is not enough information. But for the future, remember to:</p>
<p>1) keep the object that you want to scale and the HPA in the same namespace</p>
<p>2) set CPU resource requests per namespace, or simply add <code>--requests=cpu=value</code> so the HPA will be able to scale based on them. </p>
<p><strong>UPDATE</strong>:</p>
<p>for your particular case:</p>
<p>1) <code>kubectl run busybox --image=busybox --port 8080 -n test --requests=cpu=200m -- sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
env | grep HOSTNAME | sed 's/.*=//g'; } | nc -l -p 8080; done"</code></p>
<p>2) <code>kubectl autoscale deployment busybox --cpu-percent=50 --min=1 --max=10 -n test</code></p>
|
<p>Let's say that I recently installed or upgraded a helm release, e.g.:</p>
<pre><code>helm upgrade ... config.yaml ...
</code></pre>
<p>Is there any way for me to retrieve the <code>config.yaml</code> via helm CLI? I need to verify a config value.</p>
| <p>If you want only the info about the values file used for a given release, use <a href="https://helm.sh/docs/helm/helm_get_values/" rel="noreferrer"><code>helm get values</code></a>:</p>
<p><code>helm get values RELEASE_NAME</code></p>
<p>Take a look at <a href="https://helm.sh/docs/helm/helm_get/" rel="noreferrer"><code>helm get</code></a> docs to other options:</p>
<blockquote>
<p>This command consists of multiple subcommands which can be used to get extended information about the release, including:</p>
<ul>
<li>The values used to generate the release</li>
<li>The generated manifest file</li>
<li>The notes provided by the chart of the release</li>
<li>The hooks associated with the release</li>
</ul>
</blockquote>
|
<p>I am trying to connect to a private IP from within a pod. Ping to that IP from the pod returns unreachable. However, I am able to ping that IP from the host system. What is the best way to route the traffic from the pod to the destination private IP?</p>
| <p>Pods are not allowed to connect directly outside of the Kubernetes network. You can find more details <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">here</a>. To connect to an external IP you have to define an <code>Endpoints</code> object, and Kubernetes will redirect requests from inside the pod to that IP. If your private IP needs any extra setup, such as DNS configuration, that is outside of Kubernetes; from the Kubernetes side you only need to define the <code>Endpoints</code>. Create your <code>Endpoints</code>:</p>
<pre><code>kind: Endpoints
apiVersion: v1
metadata:
name: local-ip
subsets:
- addresses:
- ip: 10.240.0.4 # IP of your desire end point
ports:
- port: 27017 # Port that you want to access
</code></pre>
<p>Now you can connect from inside your pods using the <code>Endpoints</code> name. But it is better to access the <code>Endpoints</code> through a <code>Service</code>. You can find more details <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">here</a>.
You can find a similar answer and flow diagram <a href="https://stackoverflow.com/questions/54464722/calling-an-external-service-from-within-minikube/54465759#54465759">here</a>.</p>
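<p>As a sketch (the <code>local-ip</code> name and port are carried over from the <code>Endpoints</code> above), a matching selector-less <code>Service</code> could look like this; Kubernetes links a <code>Service</code> without a selector to an <code>Endpoints</code> object of the same name:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: local-ip   # must match the Endpoints name above
spec:
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
</code></pre>
<p>Pods can then reach the external IP via <code>local-ip:27017</code>.</p>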
|
<p>I am using Google cloud composer ,and created composer environment.Composer environment is ready(has green tick), now I am trying to set variables used in DAG python code using google cloud shell.</p>
<p>command to set variables:</p>
<pre><code> gcloud composer environments run test-environment \
--location us-central1 variables -- \
--set gcp_project xxx-gcp
</code></pre>
<p><strong>Exact error message:</strong></p>
<pre><code> ERROR: (gcloud.composer.environments.run) Desired GKE pod not found. If the environment was recently started, please wait and retry.
</code></pre>
<p>I tried the following things as part of my investigation, but got the same error each time:</p>
<ul>
<li>I created a new environment using the UI and not Google Cloud Shell commands.</li>
<li>I checked the pods in Kubernetes Engine and all are green; I did not see any issue.</li>
<li>I verified that the Composer API, Billing, Kubernetes, and all other required APIs are enabled.</li>
</ul>
<p>I have 'Editor' role assigned.</p>
<p><strong>added screenshot I saw first time some failures</strong></p>
<p><a href="https://i.stack.imgur.com/bkF1M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bkF1M.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/SLfll.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SLfll.png" alt="enter image description here"></a></p>
<p><strong>Error with exit code 1</strong>:
the Google troubleshooting guide describes it as follows: if the exit code is 1, the container crashed because the application crashed.</p>
| <p>This is a side effect of Composer version 1.6.0 if you are using a <a href="https://github.com/google-cloud-sdk/google-cloud-sdk" rel="noreferrer">google-cloud-sdk</a> that is too old, because it now launches pods in namespaces other than <code>default</code>. The error you see is a result of <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/636d6c1b48f97c844aaa4cc2ddc8a5c2bb1be55e/lib/googlecloudsdk/command_lib/composer/util.py#L314" rel="noreferrer">looking for Kubernetes pods in the default namespace</a> and <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/636d6c1b48f97c844aaa4cc2ddc8a5c2bb1be55e/lib/googlecloudsdk/command_lib/composer/util.py#L275" rel="noreferrer">failing to find them</a>.</p>
<p>To fix this, run <code>gcloud components update</code>. If you cannot yet update, a workaround to execute Airflow commands is to manually SSH to a pod yourself and run <code>airflow</code>. To start, obtain GKE cluster credentials:</p>
<pre><code>$ gcloud container clusters get-credentials $COMPOSER_GKE_CLUSTER_NAME
</code></pre>
<p>Once you have the credentials, you should find which namespace the pods are running in (which you can also find using Cloud Console):</p>
<pre><code>$ kubectl get namespaces
NAME STATUS AGE
composer-1-6-0-airflow-1-9-0-6f89fdb7 Active 17h
default Active 17h
kube-public Active 17h
kube-system Active 17h
</code></pre>
<p>You can then SSH into any scheduler/worker pod, and run commands:</p>
<pre><code>$ kubectl exec \
--namespace=$NAMESPACE \
-it airflow-worker-569bc59df5-x6jhl airflow list_dags -r
</code></pre>
<p>You can also open a shell if you prefer:</p>
<pre><code>$ kubectl exec \
--namespace=$NAMESPACE \
-it airflow-worker-569bc59df5-x6jhl bash
airflow@airflow-worker-569bc59df5-x6jhl:~$ airflow list_dags -r
</code></pre>
<p>The failed <code>airflow-database-init-job</code> jobs are unrelated and will not cause problems in your Composer environment.</p>
|
| <p>I am trying to learn how to use Kubernetes with Minikube and have the following deployment and service:</p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: exampleservice
spec:
selector:
app: myapp
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8081
# Port to forward to inside the pod
targetPort: 8080
# Port accessible outside cluster
nodePort: 30002
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myappdeployment
spec:
replicas: 5
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: tutum/hello-world
ports:
- containerPort: 8080
</code></pre>
<p>I expect to be able to hit this service from my local machine at </p>
<pre><code>http://192.168.64.2:30002
</code></pre>
<p>As per the command: <code>minikube service exampleservice --url</code> but when I try to access this from the browser I get a site cannot be reached error.</p>
<p>Some information that may help debugging:</p>
<p><code>kubectl get services --all-namespaces</code>:</p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default exampleservice LoadBalancer 10.104.248.158 <pending> 8081:30002/TCP 26m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
default user-service-service LoadBalancer 10.110.181.202 <pending> 8080:30001/TCP 42m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2h
kube-system kubernetes-dashboard ClusterIP 10.110.65.24 <none> 80/TCP 2h
</code></pre>
<p>I am running minikube on OSX.</p>
| <p>You can use <strong>External IPs</strong>, as described in <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">k8s docs</a>. Update the service as shown below (where I assume <code>192.168.64.2</code> is your minikube IP):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: exampleservice
spec:
selector:
app: myapp
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8081
# Port to forward to inside the pod
targetPort: 80
# Port accessible outside cluster
nodePort: 30002
type: LoadBalancer
externalIPs:
- 192.168.64.2
</code></pre>
<p>Now you can access your application at <a href="http://192.168.64.2:8081/" rel="nofollow noreferrer">http://192.168.64.2:8081/</a></p>
<hr />
<p>If you need to access the application at 30002, you can use it like this</p>
<pre><code> kind: Service
apiVersion: v1
metadata:
name: exampleservice
spec:
selector:
app: myapp
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8081
# Port to forward to inside the pod
targetPort: 80
# Port accessible outside cluster
nodePort: 30002
type: NodePort
</code></pre>
<hr />
<p>Your deployment file does not look correct to me.</p>
<p>Delete it:
<code>kubectl delete deploy/myappdeployment</code></p>
<p>Use this to create it again:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
app: myapp
name: myappdeployment
spec:
replicas: 5
selector:
matchLabels:
app: myapp
strategy: {}
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: tutum/hello-world
name: myapp
ports:
- containerPort: 80
</code></pre>
|
<p>I'm currently working on a project using the Firebase Admin Go SDK to handle auth and to use the real time database. The project works correctly when I run it locally (by just running <code>go run main.go</code>). When I run it in Minikube via a docker image (or GKE, I've tested both) I get this error whenever I try to make any Firestore calls: </p>
<pre><code>transport: authentication handshake failed: x509: certificate signed by unknown authority
</code></pre>
<p>Here is the code I'm using on the server to make the call to the DB: </p>
<pre class="lang-go prettyprint-override"><code>// Initialize the app
opt := option.WithCredentialsFile("./serviceAccountKey.json")
app, err := firebase.NewApp(context.Background(), nil, opt)
// This is the first call I attempt to make, and where the error is thrown
// Create the client
client, err := app.Firestore(context.Background())
iter := client.Collection("remoteModels").Documents(context.Background())
snaps, err := iter.GetAll()
if err != nil {
logger.Log.Warn("Error getting all remoteModels")
fmt.Println(err)
return err
}
</code></pre>
<p>And here is my Dockerfile that adds the service account key Firebase provided me from the console: </p>
<pre><code>FROM scratch
ADD main /
ADD serviceAccountKey.json /
EXPOSE 9090
ENTRYPOINT ["/main", "-grpc-port=9090", "-http-port=9089", "-env=prod"]
</code></pre>
<p>I can't find anything in the documentation about running in Kubernetes.<br>
Is there anything I need to do to be able to connect to Firestore from Kubernetes?</p>
| <p>It looks like a TLS error: the image is missing a CA certificate bundle. If you are using Alpine-based images, try running <code>apk add ca-certificates</code>.<br>
Installing the CA certificates should resolve the issue.</p>
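<p>Since the image in the question is built <code>FROM scratch</code>, which contains no CA bundle at all, one option is a multi-stage build that copies the certificates in. This is a sketch based on the Dockerfile from the question; the alpine tag is an assumption:</p>
<pre><code># Stage used only to obtain the system CA bundle
FROM alpine:3.9 AS certs
RUN apk add --no-cache ca-certificates

FROM scratch
# Go's TLS stack looks for the system bundle under /etc/ssl/certs on Linux
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ADD main /
ADD serviceAccountKey.json /
EXPOSE 9090
ENTRYPOINT ["/main", "-grpc-port=9090", "-http-port=9089", "-env=prod"]
</code></pre>
<p>With the bundle present, the gRPC client should be able to verify Google's certificate during the handshake.</p>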
|
<p>I am setting up Vault in Kubernetes and enabling the Kubernetes Auth method. It needs the Kubernetes CA Certificate. How do I obtain that? I couldn't find much on duckduckgo's search results.</p>
<p><a href="https://i.stack.imgur.com/cnNbB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNbB.png" alt="enter image description here"></a></p>
<p>Running kubernetes inside Docker for mac on MacOS Mojave:</p>
<p><a href="https://i.stack.imgur.com/16JSX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/16JSX.png" alt="enter image description here"></a>
Thank you.</p>
| <p>This can be found in your <code>kube-system</code> (or any other) namespace by running the following on your <code>default-token</code> secret:</p>
<pre><code>kubectl get secret <secret name> -n <namespace> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
</code></pre>
<p>To find the <code>secret name</code> run <code>kubectl get secret -n kube-system</code> and find the secret that starts with <code>default-token</code>.</p>
<p>This will give you something like:</p>
<pre><code>-----BEGIN CERTIFICATE-----
XXXXXXX
XXXX....
-----END CERTIFICATE-----
</code></pre>
<p>When you are entering this certificate, make sure to enter the BEGIN and END header and footer.</p>
|
| <p>I am trying to create a template for a Kubernetes cluster having 1 master and 2 worker nodes. I have installed all the prerequisite software and have run <code>kubeadm init</code> on my master node. But when I try to run the <code>kubeadm join</code> command, which I get as an output of the init command, I am getting an error.</p>
<blockquote>
<pre class="lang-sh prettyprint-override"><code>[discovery] Created cluster-info discovery client, requesting info
from "https://10.31.2.33:6443" [discovery] Requesting info from
"https://10.31.2.33:6443" again to validate TLS against the pinned
public key [discovery] Cluster info signature and contents are valid
and TLS certificate validates against pinned roots, will use API
Server "10.31.2.33:6443" [discovery] Successfully established
connection with API Server "10.31.2.33:6443" [kubelet] Downloading
configuration for the kubelet from the "kubelet-config-1.12" ConfigMap
in the kube-system namespace [kubelet] Writing kubelet configuration
to file "/var/lib/kubelet/config.yaml" [kubelet] Writing kubelet
environment file with flags to file
"/var/lib/kubelet/kubeadm-flags.env" [preflight] Activating the
kubelet service [tlsbootstrap] Waiting for the kubelet to perform the
TLS Bootstrap... [patchnode] Uploading the CRI Socket information
"/var/run/dockershim.sock" to the Node API object "<workernode2>" as
an annotation error uploading crisocket: timed out waiting for the
condition```
</code></pre>
</blockquote>
<p>I have done a <code>swapoff -a</code> before running this on worker node 2.</p>
<p>I was able to run the join once, but after that, as a part of a script, I ran <code>kubeadm reset</code> followed by init and join a few times, which is when this started showing up.</p>
<p>Not able to figure out what or where I am doing a mistake.</p>
<p>My main intent is to put all the commands in the form of a shell script (on master node) so that it can be run on a cluster to create a network.</p>
| <p>I encountered the following issue after the node was rebooted:</p>
<pre class="lang-sh prettyprint-override"><code>[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8smaster" as an annotation
[kubelet-check] Initial timeout of 40s passed.
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
</code></pre>
<p>Steps to get rid of this issue:</p>
<ol>
<li><p>Check the hostname again, after reboot it might have changed.</p>
<pre class="lang-sh prettyprint-override"><code>sudo vi /etc/hostname
sudo vi /etc/hosts
</code></pre>
</li>
<li><p>Perform the following clean-up actions</p>
<p>Code:</p>
<pre class="lang-sh prettyprint-override"><code>sudo kubeadm reset
sudo rm -rf /var/lib/cni/
systemctl daemon-reload
systemctl restart kubelet
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
</code></pre>
</li>
<li><p>Execute the init action with the special tag as below</p>
<p>Code:</p>
<pre class="lang-sh prettyprint-override"><code>sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=10.10.10.2 --ignore-preflight-errors=all
</code></pre>
<p>(where 10.10.10.2 is the IP of master node and 192.168.0.0/16 is the private subnet assigned for Pods)</p>
</li>
</ol>
|
<p>I have a v1.8.4 deployment running nginx ingress controller. I had an ingress which works fine. But now I am trying to enable sticky sessions in it. I used <code>kubectl edit ing mying</code> to add these annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: md5
nginx.ingress.kubernetes.io/session-cookie-name: foobar
</code></pre>
<p>But sticky sessions are still not working. Nginx config does not have anything about sticky sessions. Also, <code>kubectl describe ing mying</code> does not show the annotations. What is going wrong here?</p>
<p>I also tried the example for sticky sessions <a href="https://github.com/kubernetes/ingress-nginx/tree/27c863085e2503e5f3249b596f24529fc2488baa/docs/examples/affinity/cookie" rel="nofollow noreferrer">here</a>.
Describing the ingress does not show the annotations.</p>
| <p>Because the <code>host</code> item (in ingress.yml) cannot be empty or a wildcard (*.example.com).</p>
<p>Make sure your host is something like test.example.com (if you don't have DNS, configure it in your local hosts file), then test:</p>
<pre><code>curl -I http://test.example.com/test/login.jsp
</code></pre>
<p>then you will see</p>
<pre><code>Set-Cookie: route=ebfcc90982e244d1d7ce029b98f8786b; Expires=Sat, 03-Jan-70 00:00:00 GMT; Max-Age=172800; Domain=test.example.com; Path=/test; HttpOnly
</code></pre>
<p>The official example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-test
annotations:
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
rules:
- host: stickyingress.example.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /
</code></pre>
|
<p>I have deployed Prometheus on Kubernetes and I provide the <code>prometheus.yml</code> configuration file as a config map resource.
The file is mounted in the Prometheus pod as volume.</p>
<p>After the config map is changed in the cluster, I hit the Prometheus server endpoint with an empty POST request, in order to reload it (as described in the documentation)</p>
<p>When I make changes to the config map however and redeploy it, I experience a 'lag' of around 30 secs until the <code>prometheus.yml</code> file is updated inside the pod.</p>
<p>I read <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">here</a> that this is expected.</p>
<p>However, there are projects that try to remedy this, eg</p>
<ul>
<li><a href="https://github.com/pusher/wave" rel="nofollow noreferrer">https://github.com/pusher/wave</a></li>
<li><a href="https://github.com/stakater/Reloader" rel="nofollow noreferrer">https://github.com/stakater/Reloader</a></li>
<li><a href="https://github.com/jimmidyson/configmap-reload" rel="nofollow noreferrer">https://github.com/jimmidyson/configmap-reload</a></li>
</ul>
<p>These, as I understand, kill the pods and replace them in order to update the configuration.</p>
<p>My question is, is there a way to make a 'hot' reconfiguration? </p>
<p>Essentially speed up on-demand the volume update inside the pod, without the need of killing any pod.</p>
| <p>I'm using Reloader right now; the pods are always killed, but they are recreated immediately after you change an associated ConfigMap.</p>
<p>But you cannot avoid the fact that you need to kill the pod to re-mount the volume with the new config right away, and it also depends on whether your workload is able to load a new configuration without stopping the main process inside the container.</p>
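<p>For reference, a sketch of wiring Reloader up (the annotation name is taken from the stakater/Reloader README; adjust it to the version you deploy):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  annotations:
    reloader.stakater.com/auto: "true"   # roll the pods whenever a referenced ConfigMap/Secret changes
</code></pre>
<p>With that annotation, changing the <code>prometheus.yml</code> ConfigMap triggers a rolling update of the Deployment instead of waiting for the kubelet's sync period to propagate the mounted file.</p>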
|
| <p>I'm currently setting up a Kubernetes cluster with 3 nodes on 3 different VMs, and each node runs 1 pod with the following Docker image: ethereum/client-go:stable.</p>
<p>The problem is that I want to do a health check using a bash script (because I have to test a lot of things), but I don't understand how I can get this file into each container that is deployed with my YAML deployment file.</p>
<p>I've tried adding a <code>wget</code> command in the YAML file to download my health check script from my GitHub repo, but it wasn't very clean from my point of view. Maybe there is another way?</p>
<p><strong>My current deployment file:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: goerli
name: goerli-deploy
spec:
replicas: 3
selector:
matchLabels:
app: goerli
template:
metadata:
labels:
app: goerli
spec:
containers:
- image: ethereum/client-go:stable
name: goerli-geth
args: ["--goerli", "--datadir", "/test2"]
env:
- name: LASTBLOCK
value: "0"
- name: FAILCOUNTER
value: "0"
ports:
- containerPort: 30303
name: geth
livenessProbe:
exec:
command:
- /bin/sh
- /test/health.sh
initialDelaySeconds: 60
periodSeconds: 100
volumeMounts:
- name: test
mountPath: /test
restartPolicy: Always
volumes:
- name: test
hostPath:
path: /test
</code></pre>
<p>I expect to put health check script in /test/health.sh</p>
<p>Any ideas ?</p>
| <p>This could be a perfect use case for an init container. Since the init container and the application container can use different images, they have different file systems inside the pod, so we need to use an <strong>emptyDir</strong> volume in order to share state.</p>
<p>For further details, follow the link: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init-containers</a></p>
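<p>A sketch of how this could look for the deployment in the question (the script URL is a placeholder you would point at your repo, and the <code>emptyDir</code> replaces the <code>hostPath</code> volume):</p>
<pre><code>    spec:
      initContainers:
      - name: fetch-healthcheck
        image: busybox
        # Placeholder URL: replace with the raw URL of your health.sh
        command: ["wget", "-O", "/test/health.sh", "https://example.com/your-repo/health.sh"]
        volumeMounts:
        - name: test
          mountPath: /test
      containers:
      - name: goerli-geth
        image: ethereum/client-go:stable
        volumeMounts:
        - name: test
          mountPath: /test
      volumes:
      - name: test
        emptyDir: {}
</code></pre>
<p>The init container runs to completion before <code>goerli-geth</code> starts, so <code>/test/health.sh</code> is already in place when the liveness probe first fires.</p>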
|
<p>I am trying to project the serviceAccount token into my pod as described in this k8s doc - <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection</a>.</p>
<p>I create a service account using below command</p>
<pre><code>kubectl create sa acct
</code></pre>
<p>Then I create the pod</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: nginx
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /var/run/secrets/tokens
name: vault-token
serviceAccountName: acct
volumes:
- name: vault-token
projected:
sources:
- serviceAccountToken:
path: vault-token
expirationSeconds: 7200
</code></pre>
<p>It fails due to - <code>MountVolume.SetUp failed for volume "vault-token" : failed to fetch token: the server could not find the requested resource</code></p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m15s default-scheduler Successfully assigned default/nginx to minikube
Warning FailedMount 65s (x10 over 5m15s) kubelet, minikube MountVolume.SetUp failed for volume "vault-token" : failed to fetch token: the server could not find the requested resource
</code></pre>
<p>My minikube version: v0.33.1</p>
<p>kubectl version : 1.13</p>
<p><strong>Question:</strong></p>
<ul>
<li>What am i doing wrong here?</li>
</ul>
| <p>I tried this on kubeadm, and was able to succeed.
@Aman Juneja was right, you have to add the API flags as described in the documentation. </p>
<p>You can do that by creating the serviceaccount and then adding this flags to the kubeapi:</p>
<p><code>sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml</code></p>
<pre><code>- --service-account-issuer=api
- --service-account-signing-key-file=/etc/kubernetes/pki/apiserver.key
- --service-account-api-audiences=api
</code></pre>
<p>After that apply your pod.yaml and it will work. As you will see in describe pod:</p>
<pre><code>Volumes:
vault-token:
Type: Projected (a volume that contains injected data from multiple sources)
</code></pre>
<p>[removed as not working solution]</p>
<p>Unfortunately, in my case minikube did not want to start with these flags; it got stuck on <code>waiting for pods: apiserver</code>. Soon I will try to debug again.</p>
<p><strong>UPDATE</strong></p>
<p>It turns out you just have to pass the arguments to minikube with paths from inside the minikube VM, not from the outside as I did in the previous example (i.e. the .minikube directory), so it will look like this:</p>
<pre><code>minikube start \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-api-audiences=api
</code></pre>
<p>After that, creating the ServiceAccount and applying pod.yaml works. </p>
|
| <p>I installed Kubernetes on bare-metal Ubuntu and deployed 1 master and 3 workers, then deployed Rook, and everything works fine. But when I want to deploy WordPress on it, I get this error:</p>
<blockquote>
<p>Unable to mount volumes for pod
"wordpress-mysql-b78774f44-lxtfv_default(ffb4ff12-553e-11e9-a229-52540076d16c)": timeout expired waiting for volumes to attach or mount for pod
"default"/"wordpress-mysql-b78774f44-lxtfv". list of unmounted
volumes=[mysql-persistent-storage]. list of unattached
volumes=[mysql-persistent-storage default-token-nj8xw]</p>
</blockquote>
<pre><code>#kubectl describe pods wordpress-mysql-b78774f44-lxtfv
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m11s (x4 over 7m18s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
Normal Scheduled 7m5s default-scheduler Successfully assigned default/wordpress-mysql-b78774f44-lxtfv to worker3
Warning FailedMount 2m46s (x2 over 5m1s) kubelet, worker3 Unable to mount volumes for pod "wordpress-mysql-b78774f44-lxtfv_default(ffb4ff12-553e-11e9-a229-52540076d16c)": timeout expired waiting for volumes to attach or mount for pod "default"/"wordpress-mysql-b78774f44-lxtfv". list of unmounted volumes=[mysql-persistent-storage]. list of unattached volumes=[mysql-persistent-storage default-token-nj8xw]
Normal Pulled 107s kubelet, worker3 Container image "mysql:5.6" already present on machine
Normal Created 104s kubelet, worker3 Created container mysql
Normal Started 101s kubelet, worker3 Started container mysql
</code></pre>
<p>And my pods are running:</p>
<pre><code>#kubectl get pods
NAME READY STATUS RESTARTS AGE
wordpress-595685cc49-2bmzk 1/1 Running 4 15m
wordpress-mysql-b78774f44-lxtfv 1/1 Running 0 15m
</code></pre>
<p>my pv and pvc</p>
<pre><code>#kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-0f6722a0-553f-11e9-a229-52540076d16c 20Gi RWO Delete Bound default/wp-pv-claim rook-ceph-block 15m
persistentvolume/pvc-e9797517-553b-11e9-a229-52540076d16c 20Gi RWO Delete Released default/wp-pv-claim rook-ceph-block 37m
persistentvolume/pvc-ff52a22e-553e-11e9-a229-52540076d16c 20Gi RWO Delete Bound default/mysql-pv-claim rook-ceph-block 16m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/mysql-pv-claim Bound pvc-ff52a22e-553e-11e9-a229-52540076d16c 20Gi RWO rook-ceph-block 16m
persistentvolumeclaim/wp-pv-claim Bound pvc-0f6722a0-553f-11e9-a229-52540076d16c 20Gi RWO rook-ceph-block 15m
</code></pre>
<p>my wordpress yaml file</p>
<pre><code># cat wordpress.yaml
apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
storageClassName: rook-ceph-block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:4.6.1-apache
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
value: changeme
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim
</code></pre>
<p>And my mysql yaml file:</p>
<pre><code># cat mysql.yaml
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
storageClassName: rook-ceph-block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: changeme
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
| <p>It looks like your deployment has progressed successfully! The error you see occurred 2m46s from the time that you did the <code>describe</code>. This could be due to ceph provisioning the PV/PVC, or some other situation in which your PVC is not ready to be mounted. However, according to the events, your container eventually successfully mounts and is started. All the pod health checks pass and the Kube scheduler progresses it into the READY state. </p>
<p>Is there any evidence that your container and the mount are not working properly after start? </p>
|
| <p>I know how I can use one Ingress for one domain, but if I have more than one domain, as shown below, what should I do?
How should I handle DNS for the Ingress? <strong>I do not want to write the domains in ingress.yml</strong><a href="https://i.stack.imgur.com/n4cJB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n4cJB.png" alt="enter image description here"></a></p>
<p>The <code>Ingress</code> element on the diagram is the <code>ingress-controller</code>, but nothing forbids creating individual Ingress resources for each route.</p>
<p>As an alternative solution, you can expose the service as a LoadBalancer and configure an external DNS service to route traffic to the Kubernetes LoadBalancer Service. Check the <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">ExternalDNS</a> project for more information.</p>
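<p>A minimal sketch of how that combination fits together, assuming ExternalDNS is already deployed in the cluster (the service name and the hostname <code>app.example.com</code> are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # ExternalDNS watches this annotation and creates the DNS record
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>ExternalDNS then points the record at the LoadBalancer's external address, so no domain needs to appear in any Ingress resource.</p>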
<p>MetalLB and kube-router also could be useful for Bare-Metal/On-Premise K8s setup.</p>
<p>On my opinion, Helm/Ksonnet/Kustomize will help you with Ingress resource management too.</p>
|
<p>On my cluster I use <code>traefik</code> as an ingress-controller, but now also want to provide an <code>nginx</code> controller.</p>
<p>I don't want my developers to think about how exactly their application is exposed. Therefore I would like to make traefik the "default" controller and only use nginx if the developer explicitly requests that controller by setting the proper <code>ingress.class</code>.</p>
<p>Unfortunately it looks like setting <em>no</em> class will result in both controllers fighting about that ingress. :(
Is there a way to tell a controller to <em>only</em> handle a ingress object if has the correct <code>ingress.class</code>?</p>
<p>If that is not possible, I was thinking about writing a MutatingAdmissionWebhook which will insert the traefik class in case no class is set. - Does this make sense, or is there a better way?</p>
<p>How an ingress controller handles <code>no class</code> is an arbitrary implementation decision.
You typically pass the desired class to the controller binary, which then filters Ingress events for that class:
<a href="https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-deployment.yaml#L60" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-deployment.yaml#L60</a></p>
<p>As far as I know, no one handles <code>no class</code>, nor do I recommend it, as it would be error-prone. Someone will forget to add the class and their service will implicitly be exposed where they did not intend. </p>
<p>Mutating hook is a way to go, as it will add an explicit note of what ingress this belongs to. Try <a href="https://github.com/HotelsDotCom/kube-graffiti" rel="nofollow noreferrer">https://github.com/HotelsDotCom/kube-graffiti</a> </p>
<p>The simplest way would be just to register traefik to listen on <code>ingress.class: default</code> or <code>dev</code> and ask developers to put this in all their templates. This way, you abstract them from the particular ingress choice underneath.</p>
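<p>A sketch of that setup with Traefik 1.x (the <code>default</code> class value is an assumption; the flag comes from Traefik's Kubernetes provider):</p>
<pre><code># Traefik container args: only watch Ingresses carrying this class
args:
- --kubernetes
- --kubernetes.ingressclass=default
---
# Developers annotate every Ingress with the same class
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: default
spec:
  backend:
    serviceName: my-app
    servicePort: 80
</code></pre>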
|
<p>I have a setup with <code>scdf-server</code> on <code>kubernetes</code> working fine, it deploys each task in an on-demand pod on the very same default namespace, the one that hosts the <code>scdf-server</code> pod. </p>
<p>Now, I need to deploy a pod in another namespace and I can't find the argument/property to use in the <code>scdf server dashboard</code> for the pod to be created in the given namespace. Does anybody know how to find that? I tried <code>spring.cloud.deployer.kubernetes.namespace, deployer.kubernetes.namespace, spring.cloud.deployer.kubernetes.environmentVariables, deployer.<app>.kubernetes.namespace, spring.cloud.dataflow.task.platform.kubernetes.namespace, scheduler.kubernetes.environmentVariables SPRING_CLOUD_SCHEDULER_KUBERNETES_NAMESPACE</code>... as both 'properties' and 'arguments' text boxes...</p>
| <p>This seems like a duplicate thread that was posted in SCDF gitter channel. The properties were described and pointed out in the commentary - more details <a href="https://gitter.im/spring-cloud/spring-cloud-dataflow?at=5ca3960cf851ee043d4dae32" rel="nofollow noreferrer">here</a>.</p>
|
<p>While running the <code>helm init</code> I was getting an error:</p>
<pre><code>Error: error installing: the server could not find the requested resource (post deployments.extensions)
</code></pre>
<p>But I solved it by running :</p>
<pre><code>helm init --client-only
</code></pre>
<p>But when I run:</p>
<pre><code>helm upgrade --install --namespace demo demo-databases-ephemeral charts/databases-ephemeral --wait
</code></pre>
<p>I'm getting:</p>
<pre><code>Error: serializer for text/html; charset=utf-8 doesn't exist
</code></pre>
<p>I found nothing convincing as a solution and I'm not able to proceed forward in the setup.</p>
<p>Any help would be appreciated.</p>
<p>Check if your <code>~/.kube/config</code> exists and is properly set up. If not, run the following command:</p>
<pre><code>sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
</code></pre>
<p>Now check whether kubectl is properly set up using:</p>
<pre><code>kubectl version
</code></pre>
<p>This answer is specific to the issue you are getting. If this does not resolve the issue, please provide more error log.</p>
|
<p>I am trying to run one Ansible playbook for deploying Kubernetes cluster using the tool kubespray on Ubuntu 16.04 OS. I have one base machine which is installed with Ansible and cloned kubespray Git repository. And one master and two worker nodes containing in cluster.</p>
<p><strong>My host (Updated) file like the followig screenshot,</strong></p>
<pre><code>[all]
MILDEVKUB020 ansible_ssh_host=MILDEVKUB020 ip=192.168.16.173 ansible_user=uName ansible_ssh_pass=pwd
MILDEVKUB030 ansible_ssh_host=MILDEVKUB030 ip=192.168.16.176 ansible_user=uName ansible_ssh_pass=pwd
MILDEVKUB040 ansible_ssh_host=MILDEVKUB040 ip=192.168.16.177 ansible_user=uName ansible_ssh_pass=pwd
[kube-master]
MILDEVKUB020
[etcd]
MILDEVKUB020
[kube-node]
MILDEVKUB020
MILDEVKUB030
MILDEVKUB040
[k8s-cluster:children]
kube-master
kube-node
</code></pre>
<p>Location of hosts.ini file is /inventory/sample. And I am trying the following Ansible command</p>
<pre><code>sudo ansible-playbook -i inventory/sample/hosts.ini cluster.yml --user=uName --extra-vars "ansible_sudo_pass=pwd"
</code></pre>
<p>And I am using the playbook "cluster.yml" from the following link</p>
<p><a href="https://github.com/kubernetes-sigs/kubespray/blob/master/cluster.yml" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/blob/master/cluster.yml</a></p>
<p><strong>And my /etc/hosts file containing the entries ,</strong></p>
<pre><code>127.0.0.1 MILDEVDCR01.Milletech.us MILDEVDCR01
192.168.16.173 MILDEVKUB020.Milletech.us MILDEVKUB020
192.168.16.176 MILDEVKUB030.Milletech.us MILDEVKUB030
192.168.16.177 MILDEVKUB040.Milletech.us MILDEVKUB040
</code></pre>
<p><strong>Updated error</strong></p>
<pre><code>TASK [adduser : User | Create User Group]
Thursday 04 April 2019 11:34:55 -0400 (0:00:00.508) 0:00:33.383 ********
fatal: [MILDEVKUB040]: FAILED! => {"changed": false, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
fatal: [MILDEVKUB020]: FAILED! => {"changed": false, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
fatal: [MILDEVKUB030]: FAILED! => {"changed": false, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
</code></pre>
<p>I am getting error like this even if I am able to connect all machine from base machine using ssh. How can I trace what is my issue for running this command to deploy Kubernetes cluster?</p>
<p>You may need to specify the SSH user or key.</p>
<ul>
<li>Add username to inventory with</li>
</ul>
<pre><code>ansible_ssh_user=<USERNAME>
</code></pre>
<ul>
<li>Add password with:</li>
</ul>
<pre><code>ansible_ssh_pass=<PASSWORD>
</code></pre>
<p>If that does not help, share the SSH command that works.</p>
|
<p>I am trying to find out how can I isolate my Kubernetes Secrets to specific Service.</p>
<p>For example, let say I have two secrets with name <code>private-key</code> and <code>public-key</code> and two Kubernetes Services <strong>auth-service</strong> and <strong>gateway-service</strong>. </p>
<p>I want to provide <code>private-key</code> secret to <strong>auth-service</strong> to generate token and provide <code>public-key</code> to <strong>gateway-service</strong> to validate generated token. All Secrets and Services are in same namespace.</p>
<p>How can I restrict access of <code>private-key</code> to only <strong>auth-service</strong>?</p>
<p>There is no way to achieve that; this is by design in Kubernetes. Secrets in Kubernetes are per namespace, and any pod in that namespace can mount them. So the only way to achieve this is by using separate namespaces. BTW, not only Secrets but also RBAC permissions are per namespace - you cannot limit user permissions to a specific object, only to the entire namespace.</p>
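<p>A sketch of that namespace-based separation (namespace names and key contents are illustrative assumptions):</p>
<pre><code># Only pods in the "auth" namespace (auth-service) can mount this
apiVersion: v1
kind: Secret
metadata:
  name: private-key
  namespace: auth
type: Opaque
data:
  key: BASE64_PRIVATE_KEY
---
# Only pods in the "gateway" namespace (gateway-service) can mount this
apiVersion: v1
kind: Secret
metadata:
  name: public-key
  namespace: gateway
type: Opaque
data:
  key: BASE64_PUBLIC_KEY
</code></pre>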
<p>Also, from a security point of view, you might want to consider a more secure solution for the private key used to sign tokens, like <a href="https://en.wikipedia.org/wiki/Hardware_security_module" rel="nofollow noreferrer">HSM</a>. There are a few cloud options, like Azure KeyVault or AWS CloudHSM that provide this feature.</p>
<p>On a final comment, this is one of the reasons we ended up building our own secrets encryption solution - <a href="https://github.com/Soluto/kamus" rel="nofollow noreferrer">Kamus</a>. Kamus let you encrypt secrets for a specific service, and only this service can decrypt them. This allows us to have a better granularity of secrets permissions, which Kubernetes secrets mechanism did not provide.</p>
|
<p>I am setting up 3 node Cassandra cluster on Azure using Statefull set Kubernetes and not able to mount data location in azure file share.</p>
<p>I am able to do using default kubenetes storage but not with Azurefile share option.</p>
<p>I have tried the following steps given below, finding difficulty in volumeClaimTemplates</p>
<pre><code>apiVersion: "apps/v1"
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
containers:
- name: cassandra
image: cassandra
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
env:
- name: CASSANDRA_SEEDS
value: cassandra-0.cassandra.default.svc.cluster.local
- name: MAX_HEAP_SIZE
value: 256M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_CLUSTER_NAME
value: "Cassandra"
- name: CASSANDRA_DC
value: "DC1"
- name: CASSANDRA_RACK
value: "Rack1"
- name: CASSANDRA_ENDPOINT_SNITCH
value: GossipingPropertyFileSnitch
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /var/lib/cassandra/data
name: pv002
volumeClaimTemplates:
- metadata:
name: pv002
spec:
storageClassName: default
accessModes:
- ReadWriteOnce
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv002
accessModes:
- ReadWriteOnce
azureFile:
secretName: storage-secret
shareName: xxxxx
readOnly: false
claimRef:
namespace: default
name: az-files-02
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: az-files-02
spec:
accessModes:
- ReadWriteOnce
---
apiVersion: v1
kind: Secret
metadata:
name: storage-secret
type: Opaque
data:
azurestorageaccountname: xxxxx
azurestorageaccountkey: jjbfjbsfljbafkljasfkl;jf;kjd;kjklsfdhjbsfkjbfkjbdhueueeknekneiononeojnjnjHBDEJKBJBSDJBDJ==
</code></pre>
<p>I should able to mount data folder of each cassandra node into azure file share.</p>
<p>For using an Azure file share in a StatefulSet, I think you could follow this example: <a href="https://github.com/andyzhangx/demo/blob/master/linux/azurefile/attach-stress-test/statefulset-azurefile1-2files.yaml" rel="nofollow noreferrer">https://github.com/andyzhangx/demo/blob/master/linux/azurefile/attach-stress-test/statefulset-azurefile1-2files.yaml</a></p>
|
<p>I have tried now so many times setting up this pipeline in Azure devops where I want to deploy a AKS cluster and place istio on top.</p>
<p>Deploying the AKS using Terraform works great.</p>
<p>After this I try to install istio using helm but the command I use gives forbidden error.</p>
<pre><code>helm.exe install --namespace istio-system --name istio-init --wait C:\Istio\install\kubernetes\helm\istio
</code></pre>
<p>I used the local path since this was the only good way I could find for helm to find the istio chart i have on the build agent.</p>
<p>The error message</p>
<pre><code>Error: release istio-init failed: clusterroles.rbac.authorization.k8s.io "istio-galley-istio-system" is forbidden: attempt to grant extra privileges: [{[*] [admissionregistration.k8s.io] [validatingwebhookconfigurations] [] []} {[get] [config.istio.io] [*] [] []} {[list] [config.istio.io] [*] [] []} {[watch] [config.istio.io] [*] [] []} {[get] [*] [deployments] [istio-galley] []} {[get] [*] [endpoints] [istio-galley] []}] user=&{system:serviceaccount:kube-system:tillerserviceaccount 56632fa4-55e7-11e9-a4a1-9af49f3bf03a [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found, clusterroles.rbac.authorization.k8s.io "system:discovery" not found, clusterroles.rbac.authorization.k8s.io "cluster-admin" not found, clusterroles.rbac.authorization.k8s.io "system:discovery" not found, clusterroles.rbac.authorization.k8s.io "system:discovery" not found, clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]]
</code></pre>
<p>The serviceaccount I use (system:serviceaccount:kube-system:tillerserviceaccount as you can see in error message) are configured using this rbac config:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: tillerserviceaccount
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tillerbinding
roleRef:
apiGroup: ""
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tillerserviceaccount
namespace: kube-system
</code></pre>
<p>Still the error message says in the ruleResolutionErrors that it looks for cluster-admin but it is not found.</p>
<p>I even tried the extreme and set all service accounts as cluster admins to test:</p>
<pre><code>kubectl create clusterrolebinding serviceaccounts-admins --clusterrole=cluster-admin --group=system:serviceaccounts
</code></pre>
<p>But even after that I get the same error with the same ruleResolutionErrors.</p>
<p>I am stuck and appriciate any help in what I can do differently.</p>
| <p>this is the role binding we are using in dev clusters:</p>
<pre><code>---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tillerbinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tillerserviceaccount
namespace: kube-system
</code></pre>
<p>Edit: in this case the error was due to the AKS cluster being created without RBAC enabled.</p>
|
<p>I'm installing up a helm chart using <code>helm install</code> command. I have <code>values.yaml</code> which takes a few inputs from the user. One of the keys in <code>values.yaml</code> is <em>action</em>, which can only take three predefined values (let's say <em>action1</em>, <em>action2</em> and <em>action3</em>) as an input. Any other value other than this is invalid.</p>
<p>When a user provides the value to action field in <code>values.yaml</code> and trigger the <code>helm install</code> command, the first thing I need to check is that if the <em>action</em> key has a valid value or not. If the <em>action</em> value is invalid, I want the release to be failed with a proper error message.</p>
<p>e.g.: In case the user has given <code>action: action4</code>, this is not valid and release should fail as <code>.Values.action</code> can only be <em>action1</em>, <em>action2</em>, or <em>action3</em>.</p>
<p>How I can achieve this use case and which file should be best to handle this validation considering the helm structure?</p>
| <p>I was able to achieve this use case with the changes below. Add the following code to <code>_helpers.tpl</code>:</p>
<pre><code>{{- define "actionValidate" -}}
{{- $action := .Values.action -}}
{{- if or (eq $action "action1") (eq $action "action2") (eq $action "action3") -}}
true
{{- end -}}
{{- end -}}
</code></pre>
<p>Invoked this function from a .tpl file like this:-</p>
<pre><code>{{ include "actionValidate" . | required "Action value is incorrect. The valid values are 'action1', 'action2', 'action3' " }}
</code></pre>
|
<p>Im trying to create a pod using my local docker image as follow.</p>
<p>1.First I run this command in terminal </p>
<pre><code>eval $(minikube docker-env)
</code></pre>
<p>2.I created a docker image as follow</p>
<pre><code>sudo docker image build -t my-first-image:3.0.0 .
</code></pre>
<p>3.I created the pod.yml as shown below and I run this command</p>
<pre><code>kubectl -f create pod.yml.
</code></pre>
<p>4.then i tried to run this command</p>
<pre><code>kubectl get pods
</code></pre>
<p>but it shows following error </p>
<pre><code>
NAME READY STATUS RESTARTS AGE
multiplication-b47499db9-phpb7 0/1 ImagePullBackOff 0 23h
my-first-pod 0/1 ErrImagePull 0 7s
</code></pre>
<p>5.i get the pods logs</p>
<pre><code>kubectl describe pod my-first-pod
</code></pre>
<pre><code>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 99s default-scheduler Successfully assigned default/my-first-pod to minikube
Warning Failed 41s (x3 over 94s) kubelet, minikube Failed to pull image "my-first-image:3.0.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for my-first-image, repository does not exist or may require 'docker login'
Warning Failed 41s (x3 over 94s) kubelet, minikube Error: ErrImagePull
Normal BackOff 12s (x4 over 93s) kubelet, minikube Back-off pulling image "my-first-image:3.0.0"
Warning Failed 12s (x4 over 93s) kubelet, minikube Error: ImagePullBackOff
Normal Pulling 0s (x4 over 98s) kubelet, minikube pulling image "my-first-image:3.0.0"
</code></pre>
<pre><code>Dockerfile
FROM node:carbon
WORKDIR /app
COPY . .
CMD [ "node", "index.js" ]
</code></pre>
<pre><code>pods.yml
kind: Pod
apiVersion: v1
metadata:
name: my-first-pod
spec:
containers:
- name: my-first-container
image: my-first-image:3.0.0
</code></pre>
<pre><code>index.js
var http = require('http');
var server = http.createServer(function(request, response) {
response.statusCode = 200;
response.setHeader('Content-Type', 'text/plain');
response.end('Welcome to the Golden Guide to Kubernetes
Application Development!');
});
server.listen(3000, function() {
console.log('Server running on port 3000');
});
</code></pre>
| <h2>Reason</h2>
<p>This is because Kubernetes can't pull the Docker image defined in your pod definition file. By default, it pulls required images from <a href="https://hub.docker.com/" rel="noreferrer">DockerHub</a>.</p>
<h2>Way 1</h2>
<p>So after creating your <code>my-first-image:3.0.0</code> image you have to publish it at DockerHub. For that create an account at DockerHub and login from terminal using <a href="https://docs.docker.com/engine/reference/commandline/login/" rel="noreferrer">login</a> command</p>
<pre><code>sudo docker login
</code></pre>
<p>After successful login, rebuild your docker image with your DockerHub username in tag and push it to DockerHub (<a href="https://docs.docker.com/engine/reference/commandline/image_push/" rel="noreferrer">more details</a>)</p>
<pre><code>sudo docker image build -t YOUR_DOCKERHUB_USERNAME/my-first-image:3.0.0 .
sudo docker push YOUR_DOCKERHUB_USERNAME/my-first-image:3.0.0
</code></pre>
<p>Update your image in <code>pod.yml</code> as <code>YOUR_DOCKERHUB_USERNAME/my-first-image:3.0.0</code> and create your pods as before.</p>
<h2>Way 2</h2>
<p>You can instruct Kubernetes to use the required Docker image from your local machine instead of downloading it from DockerHub. To do so, add <code>imagePullPolicy: Never</code> to your <code>pod.yml</code> file under the specific container description. Below is an example of your <code>pod.yml</code> file showing where to define the image pull policy:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: my-first-pod
spec:
containers:
- name: my-first-container
image: YOUR_DOCKERHUB_USERNAME/my-first-image:3.0.0
imagePullPolicy: Never
</code></pre>
<p>Note that this only works if the image exists inside Minikube's Docker daemon. Since <code>sudo</code> does not preserve exported environment variables, running <code>sudo docker image build</code> after <code>eval $(minikube docker-env)</code> likely built the image against the host daemon instead - rebuild without <code>sudo</code> in the same shell.</p>
|
<p>What is most simple way to start Kubernetes job with the http request (webhook)? I need to build docker image after push to github and have to do it inside cluster.</p>
| <p>I think you are looking for <a href="https://www.knative.dev/docs/" rel="nofollow noreferrer">KNative</a>. Mainly the <a href="https://github.com/knative/build" rel="nofollow noreferrer">Build</a> part of it.</p>
<p>KNative is still on early stages, but is pretty much what you need. If the build features does not attend your needs, you can still use the other features like <a href="https://www.knative.dev/docs/serving/" rel="nofollow noreferrer">Serving</a> to trigger the container image from http calls and run the tools you need.</p>
<p>Here is the description from the Build Docs:</p>
<blockquote>
<p>A <strong>Knative Build</strong> extends Kubernetes and utilizes existing Kubernetes
primitives to provide you with the ability to run on-cluster container
builds from source. For example, you can write a build that uses
Kubernetes-native resources to obtain your source code from a
repository, build a container image, then run that image.</p>
<p>While Knative builds are optimized for building, testing, and
deploying source code, you are still responsible for developing the
corresponding components that:</p>
<ul>
<li>Retrieve source code from repositories.</li>
<li>Run multiple sequential jobs against a shared filesystem, for example:
<ul>
<li>Install dependencies.</li>
<li>Run unit and integration tests.</li>
</ul></li>
<li>Build container images.</li>
<li>Push container images to an image registry, or deploy them to a cluster.</li>
</ul>
<p>The goal of a Knative build is to provide a standard, portable,
reusable, and performance optimized method for defining and running
on-cluster container image builds. By providing the “boring but
difficult” task of running builds on Kubernetes, Knative saves you
from having to independently develop and reproduce these common
Kubernetes-based development processes.</p>
</blockquote>
|
<p>I'm trying to set up a MongoDB replica set on my Kubernetes cluster but the Secondary member keeps restarting after a few seconds.</p>
<p>Here's a couple of things but might be useful to know:</p>
<ul>
<li>The Mongo server (and client) version is <code>4.0.6</code></li>
<li>I'm using the official helm <a href="https://github.com/helm/charts/tree/master/stable/mongodb-replicaset" rel="nofollow noreferrer">mongodb-replicaset</a> chart to set up the replica set and the only custom setting I'm using is <code>enableMajorityReadConcern: false</code></li>
<li>Oplog size is configured to ~1228MB (only 4.7 used)</li>
<li>It happens with both a Primary-Secondary arch and with a PSA architecture where the Arbiter dies repeatedly like the Secondary member whilst the Primary is always up and running</li>
<li>This happens both on my minikube and on a staging cluster on GCP with plenty of free resources (I'm deploying this with no resources limits, see right below for cluster status)</li>
</ul>
<p>Staging cluster status (4 nodes):</p>
<pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-staging-pool-1-********-**** 551m 28% 3423Mi 60%
gke-staging-pool-1-********-**** 613m 31% 3752Mi 66%
gke-staging-pool-1-********-**** 960m 49% 2781Mi 49%
gke-staging-pool-1-********-**** 602m 31% 3590Mi 63%
</code></pre>
<p>At the moment since the Primary seems to be able to stay up and running I managed to keep the cluster live by removing the <code>votes</code> to all members but the primary. This way Mongo doesn't relinquish the primary for not being able to see a majority of the set and my app can still do writes.</p>
<p>If I turn the <code>logLevel</code> to <code>5</code> on the Secondary the only error I get is this:</p>
<pre><code>2019-04-02T15:11:42.233+0000 D EXECUTOR [replication-0] Executing a task on behalf of pool replication
2019-04-02T15:11:42.233+0000 D EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-04-02T15:12:11.980+0000
2019-04-02T15:11:42.233+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled
2019-04-02T15:11:42.235+0000 D NETWORK [RS] Decompressing message with snappy
2019-04-02T15:11:42.235+0000 D ASIO [RS] Request 114334 finished with response: { cursor: { nextBatch: [], id: 46974224885, ns: "local.oplog.rs" }, ok: 1.0, operationTime: Timestamp(1554217899, 1), $replData: { term: 11536, lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpVisible: { ts: Timestamp(1554217899, 1), t: 11536 }, configVersion: 666752, replicaSetId: ObjectId('5c8a607380091703c787b3ff'), primaryIndex: 0, syncSourceIndex: -1 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpApplied: { ts: Timestamp(1554217899, 1), t: 11536 }, rbid: 1, primaryIndex: 0, syncSourceIndex: -1 }, $clusterTime: { clusterTime: Timestamp(1554217899, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2019-04-02T15:11:42.235+0000 D EXECUTOR [RS] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [], id: 46974224885, ns: "local.oplog.rs" }, ok: 1.0, operationTime: Timestamp(1554217899, 1), $replData: { term: 11536, lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpVisible: { ts: Timestamp(1554217899, 1), t: 11536 }, configVersion: 666752, replicaSetId: ObjectId('5c8a607380091703c787b3ff'), primaryIndex: 0, syncSourceIndex: -1 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpApplied: { ts: Timestamp(1554217899, 1), t: 11536 }, rbid: 1, primaryIndex: 0, syncSourceIndex: -1 }, $clusterTime: { clusterTime: Timestamp(1554217899, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Executing a task on behalf of pool replication
2019-04-02T15:11:42.235+0000 D REPL [replication-5] oplog fetcher read 0 operations from remote oplog
2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Scheduling remote command request: RemoteCommand 114336 -- target:foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017 db:local expDate:2019-04-02T15:11:47.285+0000 cmd:{ getMore: 46974224885, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 50, term: 11536, lastKnownCommittedOpTime: { ts: Timestamp(1554217899, 1), t: 11536 } }
2019-04-02T15:11:42.235+0000 D ASIO [replication-5] startCommand: RemoteCommand 114336 -- target:foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017 db:local expDate:2019-04-02T15:11:47.285+0000 cmd:{ getMore: 46974224885, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 50, term: 11536, lastKnownCommittedOpTime: { ts: Timestamp(1554217899, 1), t: 11536 } }
2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Not reaping because the earliest retirement date is 2019-04-02T15:12:11.980+0000
2019-04-02T15:11:42.235+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled
2019-04-02T15:11:42.235+0000 D NETWORK [RS] Compressing message with snappy
2019-04-02T15:11:42.235+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled
2019-04-02T15:11:42.235+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled
2019-04-02T15:11:42.235+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled
</code></pre>
<p>Given the network error I verified if all members could connect to each other and they can (it's explicitly showed in the logs of all three members).</p>
<p><strong>ADDITIONAL INFO:</strong></p>
<pre><code>foodchain_rs:PRIMARY> rs.status()
{
"set" : "foodchain_rs",
"date" : ISODate("2019-04-02T15:35:02.640Z"),
"myState" : 1,
"term" : NumberLong(11536),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1554219299, 1),
"t" : NumberLong(11536)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1554219299, 1),
"t" : NumberLong(11536)
},
"appliedOpTime" : {
"ts" : Timestamp(1554219299, 1),
"t" : NumberLong(11536)
},
"durableOpTime" : {
"ts" : Timestamp(1554219299, 1),
"t" : NumberLong(11536)
}
},
"members" : [
{
"_id" : 0,
"name" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 4376,
"optime" : {
"ts" : Timestamp(1554219299, 1),
"t" : NumberLong(11536)
},
"optimeDate" : ISODate("2019-04-02T15:34:59Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1554214927, 1),
"electionDate" : ISODate("2019-04-02T14:22:07Z"),
"configVersion" : 666752,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "foodchain-backend-mongodb-replicaset-1.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 10,
"optime" : {
"ts" : Timestamp(1554219299, 1),
"t" : NumberLong(11536)
},
"optimeDurable" : {
"ts" : Timestamp(1554219299, 1),
"t" : NumberLong(11536)
},
"optimeDate" : ISODate("2019-04-02T15:34:59Z"),
"optimeDurableDate" : ISODate("2019-04-02T15:34:59Z"),
"lastHeartbeat" : ISODate("2019-04-02T15:35:01.747Z"),
"lastHeartbeatRecv" : ISODate("2019-04-02T15:35:01.456Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
"syncSourceHost" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 666752
}
],
"ok" : 1,
"operationTime" : Timestamp(1554219299, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1554219299, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
foodchain_rs:PRIMARY> rs.printReplicationInfo()
configured oplog size: 1228.8701171875MB
log length start to end: 1646798secs (457.44hrs)
oplog first event time: Thu Mar 14 2019 14:08:51 GMT+0000 (UTC)
oplog last event time: Tue Apr 02 2019 15:35:29 GMT+0000 (UTC)
now: Tue Apr 02 2019 15:35:34 GMT+0000 (UTC)
foodchain_rs:PRIMARY> db.getReplicationInfo()
{
"logSizeMB" : 1228.8701171875,
"usedMB" : 4.7,
"timeDiff" : 1646838,
"timeDiffHours" : 457.46,
"tFirst" : "Thu Mar 14 2019 14:08:51 GMT+0000 (UTC)",
"tLast" : "Tue Apr 02 2019 15:36:09 GMT+0000 (UTC)",
"now" : "Tue Apr 02 2019 15:36:11 GMT+0000 (UTC)"
}
foodchain_rs:PRIMARY> rs.conf()
{
"_id" : "foodchain_rs",
"version" : 666752,
"protocolVersion" : NumberLong(1),
"writeConcernMajorityJournalDefault" : true,
"members" : [
{
"_id" : 0,
"host" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "foodchain-backend-mongodb-replicaset-1.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 0,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 0
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 100,
"catchUpTimeoutMillis" : -1,
"catchUpTakeoverDelayMillis" : 30000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("5c8a607380091703c787b3ff")
}
}
</code></pre>
| <p>The issue was a too short <a href="https://docs.mongodb.com/manual/reference/replica-configuration/#rsconf.settings.electionTimeoutMillis" rel="nofollow noreferrer">electionTimeoutMillis</a> setting.</p>
<blockquote>
<p>Lower values result in faster failover, but increased sensitivity to primary node or network slowness or spottiness.</p>
</blockquote>
<p>In my case it was set to <code>100ms</code> and that wasn't enough time for my Secondary to find the Primary member so it was unable to sync and thus unavailable.</p>
<p>I think it's also worth noting that the process was not being killed. The <code>mongod</code> PID was always <code>1</code>, and the uptime shown in <code>top</code> did not coincide with the uptime reported by <code>rs.status()</code> in the mongo shell.</p>
<p>What I was doing was monitoring the Secondary uptime via the mongo shell like this:</p>
<pre><code>watch -n 1.0 "kubectl -n foodchain exec -it foodchain-backend-mongodb-replicaset-0 -- mongo --eval='rs.status().members.map(m => m.uptime)'"
</code></pre>
<p>With that command I could see that the Secondary uptime was never longer than 10s, so I assumed it was restarting itself or being OOM killed or something. Instead, I think it was trying to fire an election but didn't have the votes to do so, and went silent while restarting. In fact, what really confused me was the lack of information in that regard despite having set the <code>logLevel</code> to <code>5</code>.</p>
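<p>For reference, this is roughly how the setting can be raised again from the mongo shell on the PRIMARY (a sketch - 10000 ms is simply the MongoDB default, adjust as needed):</p>
<pre><code>cfg = rs.conf()
cfg.settings.electionTimeoutMillis = 10000  // MongoDB default; was 100 in my case
rs.reconfig(cfg)
</code></pre>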
|
<p>Is there is a way to automatically delay <em>all</em> Kubernetes pod deletion requests such that the endpoint deregistration is signaled, but the pod's SIGTERM is delayed by several seconds?</p>
<p>It would be preferable, but not required, if the delay only affected pods with an Endpoint/Service.</p>
<p>Background:</p>
<p>It is <a href="https://stackoverflow.com/questions/45083275/kubernetes-nginx-how-to-have-zero-downtime-deployments">well</a> <a href="https://stackoverflow.com/questions/40545581/do-kubernetes-pods-still-receive-requests-after-receiving-sigterm">established</a> that some traffic can continue to a Pod after a pod has been sent the SIGTERM termination signal due to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">asynchronous</a> nature of endpoint deregistration and the deletion signal. The recommended <a href="https://freecontent.manning.com/handling-client-requests-properly-with-kubernetes/" rel="nofollow noreferrer">mitigation</a> is to introduce a few seconds delay in the pod's <code>preStop</code> lifecycle hook by invoking <code>sleep</code>.</p>
<p>The difficulty rapidly arises where the pod's deployment may be done via helm or other upstream source, or else there are large numbers of deployments and containers to be managed. Modifying many deployments in such a way may be difficult, or even impossible (e.g. the container may not have a sleep binary, shell, or anything but the application executable).</p>
<p>I briefly explored a mutating admission controller, but that seems unworkable to dynamically add a <code>preStop</code> hook, as all images do not have a <code>/bin/sleep</code> or already have a <code>preStop</code> that could need image-specific knowledge to merge.</p>
<p>(Of course, all of this could be avoided if the K8S API made the endpoint deregistration synchronous with a timeout to avoid deadlock (hint, hint), but I haven't seen any discussions of such a change. Yes, there are tons of reasons why this isn't synchronous, but that doesn't mean something can't be done.)</p>
| <p>Kubernetes lifecycle has following steps.</p>
<ul>
<li>Pod is set to the “Terminating” State and removed from the endpoints list of all Services</li>
<li>preStop hook is executed</li>
<li>SIGTERM signal is sent to the pod</li>
<li>Kubernetes waits for a grace period, default is 30 seconds</li>
<li>SIGKILL signal is sent to pod, and the pod is removed</li>
</ul>
<p>Grace period is what you need.
It's important to note that this grace period runs in parallel to the preStop hook and the SIGTERM signal.</p>
<blockquote>
<p>A call to the preStop hook fails if the container is already in terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent.</p>
</blockquote>
<p>Here you can read more about <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">Container Lifecycle Hooks</a>.</p>
<p>So for example you could set the <code>terminationGracePeriodSeconds: 90</code> and this might look like the following:</p>
<pre><code>spec:
terminationGracePeriodSeconds: 90
containers:
- name: myApplication
</code></pre>
<p>You can read the Kubernetes docs regarding <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="nofollow noreferrer">Termination of Pods</a>. I also recommend great blog post <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">Kubernetes best practices: terminating with grace</a>.</p>
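<p>If you also need to delay the SIGTERM itself, a common pattern (assuming the container image ships a shell and a <code>sleep</code> binary, which, as the question notes, is not always the case) is to combine the grace period with a <code>preStop</code> sleep, so endpoint deregistration has time to propagate before the signal is delivered:</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 90
  containers:
  - name: myApplication
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]
</code></pre>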
|
<p>I have a kubernetes cluster set up and I would like to use local images. I have configured .yaml file so that it contains (in containers -> image -section) "imagePullPolicy: Never" like this:</p>
<pre><code>spec:
containers:
- image: <name>:<version>
name: <name>
imagePullPolicy: Never
resources: {}
restartPolicy: Always
</code></pre>
<p>I have deployed this service to kubernetes but image cannot be pulled (getting ImagePullBackOff -error when viewing pods with kubectl get pod) since image cannot be found from internet/registry and for some unknown reason imagePullPolicy is in Always-value. This can be seen e.g. from /var/log/messages from text:</p>
<pre><code>"spec":{"containers":[{"image":"<name>","imagePullPolicy":"Always","name":"<name>","
</code></pre>
<p>So my question is: Why is this imagePullPolicy in Always-value though I have set imagePullPolicy to Never in my .yaml file (which has of course been taken into use)? Is there some default value for imagePullPolicy that runs over the value described in .yaml file?</p>
<p>My environment is Centos7 and I'm using Kontena Pharos 2.2.0 (uses e.g. docker 1.13.1 (Apache License 2.0) and kubernetes 1.13.2 (Apache License 2.0)).</p>
<p>I expected that when I set "imagePullPolicy: Never" in .yaml file the value should now be Never (and not Always).</p>
<p>Thank you so much for helping!</p>
| <p>welcome to StackOverflow.</p>
<p>This happens because your Kubernetes cluster presumably has the admission control plugin 'AlwaysPullImages' enabled in the API server; its role is to overwrite (mutate) objects prior to storing them in the Kubernetes data store - etcd.</p>
<p>This is a default behavior of clusters bootstrapped with Kontena Pharos since version v2.0.0-alpha.2.</p>
<p>You can disable this admission plugin in your main cluster.yml config file:</p>
<pre><code>...
addons:
ingress-nginx:
enabled: true
admission_plugins:
- name: AlwaysPullImages
enabled: false
...
</code></pre>
<p>You should then expect to see pods failing with a different status reason if the image is not found in the local registry:</p>
<pre><code>client-deployment-99699599d-lfqmr 0/1 ErrImageNeverPull 0 42s
</code></pre>
<p>Please read more on using of Admission Controllers <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">here</a> </p>
|
<p>I have a configmap like this</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: develop
name: test-config
data:
app.conf: |
connection conn1
address 127.0.0.1:8080
user cluster1
password: <HERE COPY PASS FROM SECRET>
</code></pre>
<p>app.conf section is mounted as a configuration file inside the container using</p>
<pre><code>spec:
replicas: 1
...
spec:
containers:
- name: container-name
...
volumeMounts:
- name: test-config-vol
mountPath: /etc/app/app.conf
subPath: app.conf
volumes:
- name: test-config-vol
configMap:
name: test-config
</code></pre>
<p>the app reading that file is NOT able to read environment variables, and I cannot change that behavior, it is a third party app.</p>
<p>I wonder how can I inject password from a secret into password field during pod deployment</p>
| <p>You can't inject a secret value into a ConfigMap directly. What you can do is mount a <a href="https://kubernetes.io/docs/concepts/configuration/secret" rel="nofollow noreferrer">Kubernetes secret</a> with the password - see the documentation <a href="https://kubernetes.io/docs/concepts/storage/volumes/#secret" rel="nofollow noreferrer">here</a>. So you'll end up having 2 files - one from the config map and one from the secret.</p>
<p>If your app does not support multiple config files, you'll have to store the entire config file on the secret instead of the config map.</p>
<p>Also, please note that Kubernetes secrets should not be stored in source control, as the secret data is merely encoded using base64, not encrypted (see <a href="https://kubernetes.io/docs/concepts/configuration/secret/#security-properties" rel="nofollow noreferrer">here</a> for more details). There are multiple solutions for this problem, check out my <a href="https://itnext.io/can-kubernetes-keep-a-secret-it-all-depends-what-tool-youre-using-498e5dee9c25" rel="nofollow noreferrer">post</a></p>
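<p>A minimal sketch of the two-files approach (the secret name and key below are illustrative, not taken from your setup):</p>
<pre><code>spec:
  containers:
  - name: container-name
    volumeMounts:
    - name: test-config-vol
      mountPath: /etc/app/app.conf
      subPath: app.conf
    - name: test-secret-vol
      mountPath: /etc/app/password
      subPath: password
  volumes:
  - name: test-config-vol
    configMap:
      name: test-config
  - name: test-secret-vol
    secret:
      secretName: app-password
</code></pre>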
|
<p>I have an application deployed in Google Kubernetes Engine. It uses Ingress load balancer and currently there are 2 pods running my application among which API requests are distributed. The application is a Spring Boot application. I can view what API calls made to each pod individually using access logs provided by Spring Boot, but I want to view all the requests at once. I think that's possible because all the requests are first intercepted by load balancer before they get distributed. Is there a way I can do that?</p>
| <p>Since you are getting the API calls from your application pods, it sounds like putting logs from both pods (assuming they are part of the same app) would help:</p>
<pre><code>kubectl logs -l app=<app name> -n <namespace>
</code></pre>
|
<p>I have a Kubernetes pod (let's call it <strong>POD-A</strong>) and I want it to use a certain config file to perform some actions using k8s API. The config file will be a YAML or JSON which will be parsed by the application inside the pod.</p>
<p>The config file is hosted by an application server on cloud and the latest version of it can be pulled based on a trigger. The config file contains configuration details of all the deployments in the k8s cluster and will be used to update deployments using k8s API in <strong>POD-A</strong>.</p>
<p>Now what I am thinking is to save this config file in a config-map and every time a new config file is pulled a new config-map is created by the pod which is using the k8s API.</p>
<p>What I want to do is to update the previous config map with a certain flag (a key and a value) which will basically help the application to know which is the current version of deployment. So let's say I have a running k8s cluster with multiple pods in it, a config-map is there which has all the configuration details against those pods (image version, namespace, etc.) and a flag notifying that this the current deployment and the application inside <strong>POD-A</strong> will know that by loading the config-map. Now when a new config-file is pulled a new config-map is created and the flag for current deployment is set to false for the previous config map and is set to true for the latest created config map. Then that config map is used to update all the pods in the cluster.</p>
<p>I know there are a lot of details but I had to explain them to ask the following questions:</p>
<p>1) Can <code>configmaps</code> be used for this purpose?</p>
<p>2) Can I update <code>configmaps</code> or do I have to rewrite them completely? I am thinking of writing a file in the <code>configmap</code> because that would be much simpler.</p>
<p>3) I know <code>configmaps</code> are stored in etcd but are they persisted on disk or are kept in memory?</p>
<p>4) Let's say <strong>POD-A</strong> goes down will it have any effect on the <code>configmaps</code>? Are they in any way associated with the life cycle of a pod?</p>
<p>5) If the k8s cluster itself goes down what happens to the `configmaps? Since they are in <strong>etcd</strong> and if they are persisted then will they be available again?</p>
<p>Note: There is also a <a href="https://github.com/kubernetes/kubernetes/issues/19781#issuecomment-172553264" rel="nofollow noreferrer">limit on the size of configmaps</a> so I have to keep that in mind. Although I am guessing 1MB is a fair enough size to save a config file since it would usually be in a few bytes.</p>
| <blockquote>
<p>1) I think you should not use it in this way.</p>
<p>2) ConfigMaps are kubernetes resources. You can update them.</p>
<p>3) If etcd backups to disk are enabled.</p>
<p>4) No. A pod's lifecycle should not affect configmaps, unless pod mutates(deletes) the configmap. </p>
<p>5) If the cluster itself goes down. Assuming etcd is also running on the same cluster, etcd will not be available till the cluster comes back up again. ETCD has an option to persist backups to disk. If this is enabled, when the etcd comes back up, it will have restored the values that were on the backup. So it should be available once the cluster & etcd is up.</p>
</blockquote>
<p>There are multiple ways to mount a configMap in a pod, e.g. as env variables or as files.
Note the difference in behavior when a configMap changes: values exposed as env variables are set at container startup and are never updated, while values mounted as files are eventually refreshed by the kubelet (unless mounted via <code>subPath</code>). Either way, the process running in the pod would have to detect the change itself and take some action.</p>
<p>So I think the system will be too complex. </p>
<p>Instead trigger a deployment that kills the old pods and brings up a new pod which uses the updated configMaps.</p>
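<p>For example (assuming kubectl 1.15+), a rollout restart recreates the pods so they pick up the updated configMap:</p>
<pre><code>kubectl rollout restart deployment/<deployment-name>
</code></pre>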
|
<p>I'm deploying a simple app in Kubernetes (on AKS) which is sat behind an Ingress using Nginx, deployed using the Nginx helm chart. I have a problem that for some reason Nginx doesn't seem to be passing on the full URL to the backend service. </p>
<p>For example, my Ingress is set up with the URL of <a href="http://app.client.com" rel="nofollow noreferrer">http://app.client.com</a> and a path of /app1 - going to <a href="http://app.client.com/app1" rel="nofollow noreferrer">http://app.client.com/app1</a> works fine. However, if I try to go to <a href="http://app.client.com/app1/service1" rel="nofollow noreferrer">http://app.client.com/app1/service1</a> I just end up at <a href="http://app.client.com/app1" rel="nofollow noreferrer">http://app.client.com/app1</a>; it seems to be stripping everything after the path.</p>
<p>My Ingress looks like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
creationTimestamp: "2019-04-03T12:44:22Z"
generation: 1
labels:
chart: app-1.1
component: app
hostName: app.client.com
release: app
name: app-ingress
namespace: default
resourceVersion: "1789269"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress
uid: 34bb1a1d-560e-11e9-bd46-9a03420914b9
spec:
rules:
- host: app.client.com
http:
paths:
- backend:
serviceName: app-service
servicePort: 8080
path: /app1
tls:
- hosts:
- app.client.com
secretName: app-prod
status:
loadBalancer:
ingress:
- {}
</code></pre>
<p>If I port forward to the service and hit that directly it works.</p>
| <p>So I found the answer to this. It seems that as of nginx-ingress controller v0.22.0 you are required to use capture groups to capture any substrings in the request URI. Prior to 0.22.0, using just <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> worked for any substring. Now it does not. I needed to amend my ingress to use this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
creationTimestamp: "2019-04-03T12:44:22Z"
generation: 1
labels:
chart: app-1.1
component: app
hostName: app.client.com
release: app
name: app-ingress
namespace: default
resourceVersion: "1789269"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/app-ingress
uid: 34bb1a1d-560e-11e9-bd46-9a03420914b9
spec:
rules:
- host: app.client.com
http:
paths:
- backend:
serviceName: app-service
servicePort: 8080
path: /app1/?(.*)
tls:
- hosts:
- app.client.com
secretName: app-prod
status:
loadBalancer:
ingress:
- {}
</code></pre>
|
<p>I have a kubernetes cluster deployed with rke which is composed of 3 nodes on 3 different servers, and on each of those servers there is 1 pod running yatsukino/healthereum, a personal modification of ethereum/client-go:stable.
The problem is that I don't understand how to add an external ip to send requests to the pods.</p>
<p>My pods could be in 3 states:</p>
<ol>
<li>they are syncing the ethereum blockchain</li>
<li>they restarted because of a sync problem</li>
<li>they are synced and everything is fine</li>
</ol>
<p>I don't want my load balancer to transfer requests to pods in the first 2 states; only the third state should be considered up to date.</p>
<p>I've been searching in the kubernetes docs but (maybe because of a misunderstanding) I only found load balancing for pods inside a single node.</p>
<p><strong>Here is my deployment file:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: goerli
name: goerli-deploy
spec:
replicas: 3
selector:
matchLabels:
app: goerli
template:
metadata:
labels:
app: goerli
spec:
containers:
- image: yatsukino/healthereum
name: goerli-geth
args: ["--goerli", "--datadir", "/app", "--ipcpath", "/root/.ethereum/geth.ipc"]
env:
- name: LASTBLOCK
value: "0"
- name: FAILCOUNTER
value: "0"
ports:
- containerPort: 30303
name: geth
- containerPort: 8545
name: console
livenessProbe:
exec:
command:
- /bin/sh
- /app/health.sh
initialDelaySeconds: 20
periodSeconds: 60
volumeMounts:
- name: app
mountPath: /app
initContainers:
- name: healthcheck
image: ethereum/client-go:stable
command: ["/bin/sh", "-c", "wget -O /app/health.sh http://my-bash-script && chmod 544 /app/health.sh"]
volumeMounts:
- name: app
mountPath: "/app"
restartPolicy: Always
volumes:
- name: app
hostPath:
path: /app/
</code></pre>
| <p>The answers above explain the concepts, but regarding your question about services and external IPs: you must declare the service, for example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: goerli
spec:
selector:
app: goerli
ports:
- port: 8545
type: LoadBalancer
</code></pre>
<p>The <code>type: LoadBalancer</code> will assign an external address in a public cloud, or if you use something like <a href="https://metallb.universe.tf/" rel="noreferrer">metallb</a>. Check your address with <code>kubectl get svc goerli</code>. If the external address is "pending" you have a problem...</p>
<p>If this is your own setup you can use <code>externalIPs</code> to assign your own external ip;</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: goerli
spec:
selector:
app: goerli
ports:
- port: 8545
externalIPs:
- 222.0.0.30
</code></pre>
<p>The <code>externalIPs</code> can be used from outside the cluster but you must route traffic to <em>any</em> node yourself, for example;</p>
<pre><code>ip route add 222.0.0.30/32 \
nexthop via 192.168.0.1 \
nexthop via 192.168.0.2 \
nexthop via 192.168.0.3
</code></pre>
<p>Assuming your k8s nodes have IPs 192.168.0.x. This will set up ECMP routes to your nodes. When you make a request from outside the cluster to 222.0.0.30:8545, k8s will load-balance between your <em>ready</em> PODs.</p>
|
<p><strong>Short version:</strong> PostgreSQL deployed via Helm is persisting data between deployments unintentionally. How do I make sure data is cleared?</p>
<p><strong>Long version:</strong> I'm currently deploying PostgreSQL via Helm this way, using it for a local development database for an application I'm building:</p>
<pre><code>helm install stable/postgresql -n testpg \
--set global.postgresql.postgresqlDatabase=testpg \
--set global.postgresql.postgresqlUsername=testpg \
--set global.postgresql.postgresqlPassword=testpg \
--set global.postgresql.servicePort=5432 \
--set service.type=LoadBalancer
</code></pre>
<p>When I'm done (or if I mess up the database so bad and need to clear it), I uninstall it:</p>
<pre><code>helm del --purge testpg
</code></pre>
<p>(which confirms removal, and <code>kubectl get all</code> confirms it worked)</p>
<p>However, when I spin the database up again, I'm surprised to see that the data and schema are still there when it has spun up.</p>
<p><strong>How is the data persisting and how do I make sure I have a clean database each time?</strong></p>
<p>Other details:</p>
<ul>
<li>My Kubernetes Cluster is running in Docker Desktop v2.0.0.3</li>
</ul>
| <p>Your cluster may have a default volume provisioner configured.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#defaulting-behavior" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#defaulting-behavior</a></p>
<p>So even if you have no storage class configured a volume will be assigned.</p>
<p>You need to set helm value persistence.enabled to false.</p>
<p>The value is true by default:</p>
<p><a href="https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml</a></p>
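<p>So for a throwaway development database the install could look like this. Note that <code>helm del --purge</code> does not remove PersistentVolumeClaims, so any PVC left over from an earlier release has to be deleted explicitly - the label selector below is a guess based on the chart's conventions and may vary by chart version:</p>
<pre><code>helm install stable/postgresql -n testpg \
  --set persistence.enabled=false \
  --set global.postgresql.postgresqlDatabase=testpg \
  --set global.postgresql.postgresqlUsername=testpg \
  --set global.postgresql.postgresqlPassword=testpg

# remove the PVC kept from the previous release
kubectl delete pvc -l release=testpg
</code></pre>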
|
<p>I am running my kubernetes cluster on AWS EKS which runs kubernetes 1.10.
I am following this guide to deploy elasticsearch in my Cluster
<a href="https://github.com/pires/kubernetes-elasticsearch-cluster" rel="nofollow noreferrer">elasticsearch Kubernetes</a></p>
<p>The first time I deployed it, everything worked fine. Now, when I redeploy, it gives me the following error.</p>
<pre><code>ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2018-08-24T18:07:28,448][INFO ][o.e.n.Node ] [es-master-6987757898-5pzz9] stopping ...
[2018-08-24T18:07:28,534][INFO ][o.e.n.Node ] [es-master-6987757898-5pzz9] stopped
[2018-08-24T18:07:28,534][INFO ][o.e.n.Node ] [es-master-6987757898-5pzz9] closing ...
[2018-08-24T18:07:28,555][INFO ][o.e.n.Node ] [es-master-6987757898-5pzz9] closed
</code></pre>
<p>Here is my deployment file.</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es-master
labels:
component: elasticsearch
role: master
spec:
replicas: 3
template:
metadata:
labels:
component: elasticsearch
role: master
spec:
initContainers:
- name: init-sysctl
image: busybox:1.27.2
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-master
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.2
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NUMBER_OF_MASTERS
value: "2"
- name: NODE_MASTER
value: "true"
- name: NODE_INGEST
value: "false"
- name: NODE_DATA
value: "false"
- name: HTTP_ENABLE
value: "false"
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
- name: NETWORK_HOST
value: "0.0.0.0"
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
volumeMounts:
- name: storage
mountPath: /data
volumes:
- emptyDir:
medium: ""
name: "storage"
</code></pre>
<p>I have seen a lot of posts talking about increasing the value but I am not sure how to do it. Any help would be appreciated. </p>
| <p>Just want to append to <a href="https://github.com/awslabs/amazon-eks-ami/issues/193" rel="nofollow noreferrer">this issue</a>:</p>
<p>If you create EKS cluster by <a href="https://github.com/weaveworks/eksctl" rel="nofollow noreferrer">eksctl</a> then you can append to NodeGroup creation yaml:</p>
<pre><code> preBootstrapCommand:
- "sed -i -e 's/1024:4096/65536:65536/g' /etc/sysconfig/docker"
- "systemctl restart docker"
</code></pre>
<p>This will solve the problem for newly created cluster by fixing docker daemon config.</p>
|
<p>I am trying to run an Ansible playbook for deploying a Kubernetes cluster using the tool kubespray on Ubuntu 16.04. I have one base machine with Ansible installed and the kubespray Git repository cloned, and the cluster contains one master and two worker nodes.</p>
<p><strong>My hosts file (updated) looks like the following,</strong></p>
<pre><code>[all]
MILDEVKUB020 ansible_ssh_host=MILDEVKUB020 ip=192.168.16.173 ansible_user=uName ansible_ssh_pass=pwd
MILDEVKUB030 ansible_ssh_host=MILDEVKUB030 ip=192.168.16.176 ansible_user=uName ansible_ssh_pass=pwd
MILDEVKUB040 ansible_ssh_host=MILDEVKUB040 ip=192.168.16.177 ansible_user=uName ansible_ssh_pass=pwd
[kube-master]
MILDEVKUB020
[etcd]
MILDEVKUB020
[kube-node]
MILDEVKUB020
MILDEVKUB030
MILDEVKUB040
[k8s-cluster:children]
kube-master
kube-node
</code></pre>
<p>Location of hosts.ini file is /inventory/sample. And I am trying the following Ansible command</p>
<pre><code>sudo ansible-playbook -i inventory/sample/hosts.ini cluster.yml --user=uName --extra-vars "ansible_sudo_pass=pwd"
</code></pre>
<p>And I am using the playbook "cluster.yml" from the following link</p>
<p><a href="https://github.com/kubernetes-sigs/kubespray/blob/master/cluster.yml" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/blob/master/cluster.yml</a></p>
<p><strong>And my /etc/hosts file containing the entries ,</strong></p>
<pre><code>127.0.0.1 MILDEVDCR01.Milletech.us MILDEVDCR01
192.168.16.173 MILDEVKUB020.Milletech.us MILDEVKUB020
192.168.16.176 MILDEVKUB030.Milletech.us MILDEVKUB030
192.168.16.177 MILDEVKUB040.Milletech.us MILDEVKUB040
</code></pre>
<p><strong>Updated error</strong></p>
<pre><code>TASK [adduser : User | Create User Group]
Thursday 04 April 2019 11:34:55 -0400 (0:00:00.508) 0:00:33.383 ********
fatal: [MILDEVKUB040]: FAILED! => {"changed": false, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
fatal: [MILDEVKUB020]: FAILED! => {"changed": false, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
fatal: [MILDEVKUB030]: FAILED! => {"changed": false, "msg": "groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n", "name": "kube-cert"}
</code></pre>
<p>I am getting this error even though I am able to connect to all machines from the base machine using ssh. How can I trace the cause of this issue when running the command to deploy the Kubernetes cluster?</p>
| <p>If you are using a user/password combination to log in, the user with which Ansible is executed should be present in the sudoers file in order to switch to root or another privileged user.</p>
<p>Check the sudoers file and try to manually do a <code>sudo su root</code> on the target server.</p>
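<p>For example, make sure the playbook escalates privileges and prompts for the sudo password:</p>
<pre><code>ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
  --become --ask-become-pass
</code></pre>
<p>Or, if you prefer passwordless sudo, add a line like this via <code>visudo</code> on each target node (the username here is yours, purely illustrative):</p>
<pre><code>uName ALL=(ALL) NOPASSWD: ALL
</code></pre>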
|
<p>We're trying to set up a local development environment with several microservices app under Skaffold. We managed to do it with base Skaffold, using a (slightly outdated) tutorial at <a href="https://github.com/ahmetb/skaffold-from-laptop-to-cloud" rel="noreferrer">https://github.com/ahmetb/skaffold-from-laptop-to-cloud</a>. And to get Skaffold to push images to a local repository without Helm, all I had to do was set up the imageName to use something like localhost:5000/image_name. </p>
<p>But with Helm, well.... I set up a very crude Helm install (DISCLAIMER: I am not much familiar with Helm yet), just changing the skaffold YAML to use Helm and dumping all the .YAML deployment and service files into the Helm chart's /templates directory, and that bombed. </p>
<p>Skaffold then successfully creates any pods that rely on a stock external image (like redis), but then whenever anything uses an image that would be generated from a local Dockerfile, it gets stuck and throws this error: </p>
<blockquote>
<p>Failed to pull image "localhost:5000/k8s-skaffold/php-test": rpc
error: code = Unknown desc = Error response from daemon: Get
<a href="http://localhost:5000/v2/" rel="noreferrer">http://localhost:5000/v2/</a>: dial tcp [::1]:5000: connect: connection
refused</p>
</blockquote>
<p>As far as I can tell, that's the error that comes when we haven't initialized a local Docker image repository - but with the non-Helm version, we don't need to start up a local image repository, Skaffold just makes that magic happen. Which is part of the appeal OF Skaffold. </p>
<p>So how do we automagically get Skaffold to create Helm charts that create and pull from a local repository? (As noted, this may be my unfamiliarity with Helm. If so, I apologize.) </p>
<p>The Skaffold YAML is this: </p>
<pre><code>apiVersion: skaffold/v1beta7
kind: Config
build:
tagPolicy:
sha256: {}
artifacts:
- image: localhost:5000/k8s-skaffold/php-test
context: voting-app/php-test
deploy:
helm:
releases:
- name: php-help-test
chartPath: helm
#wait: true
#valuesFiles:
#- helm-skaffold-values.yaml
values:
image: localhost:5000/k8s-skaffold/php-test
#recreatePods will pass --recreate-pods to helm upgrade
#recreatePods: true
#overrides builds an override values.yaml file to run with the helm deploy
#overrides:
# some:
# key: someValue
#setValues get appended to the helm deploy with --set.
#setValues:
#some.key: someValue
</code></pre>
<p>And the Helm Chart values.yaml is the default provided by a generated chart. I can also provide the Dockerfile if needed, but it's just pulling from that image.</p>
| <p>You can't use <code>localhost</code> in your image definition. For the sake of testing you can try to use the ip of the host where your private registry is running, say if the host has address 222.0.0.2, then use <code>image: 222.0.0.2:5000/k8s-skaffold/php-test</code>.</p>
<p>It is of course undesirable to hard-code an address so a better way is to omit the "host" part entirely;</p>
<pre><code> image: k8s-skaffold/php-test:v0.1
</code></pre>
<p>In this case your CRI (Container Runtime Interface) plugin will try a sequence of servers, for instance <code>docker.io</code>. The servers are configurable but unfortunately I don't know how to configure it for "docker" since I use <code>cri-o</code> myself.</p>
|
<p>I am using the following job template:</p>
<pre><code> apiVersion: batch/v1
kind: Job
metadata:
name: rotatedevcreds2
spec:
template:
metadata:
name: rotatedevcreds2
spec:
containers:
- name: shell
image: akanksha/dsserver:v7
env:
- name: DEMO
value: "Hello from the environment"
- name: personal_AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: rotatecreds-env
key: personal_aws_secret_access_key
- name: personal_AWS_SECRET_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: rotatecreds-env
key: personal_aws_secret_access_key_id
- name: personal_GIT_TOKEN
valueFrom:
secretKeyRef:
name: rotatecreds-env
key: personal_git_token
command:
- "bin/bash"
- "-c"
- "whoami; pwd; /root/rotateCreds.sh"
restartPolicy: Never
imagePullSecrets:
- name: regcred
</code></pre>
<p>The shell script runs some ansible tasks which results in:</p>
<pre><code> TASK [Get the existing access keys for the functional backup ID] ***************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "aws iam list-access-keys --user-name ''", "failed_when_result": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
</code></pre>
<p>However if I spin a pod using the same iamge using the following </p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: rotatedevcreds3
spec:
template:
metadata:
name: rotatedevcreds3
spec:
containers:
- name: shell
image: akanksha/dsserver:v7
env:
- name: DEMO
value: "Hello from the environment"
- name: personal_AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: rotatecreds-env
key: personal_aws_secret_access_key
- name: personal_AWS_SECRET_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: rotatecreds-env
key: personal_aws_secret_access_key_id
- name: personal_GIT_TOKEN
valueFrom:
secretKeyRef:
name: rotatecreds-env
key: personal_git_token
command:
- "bin/bash"
- "-c"
- "whoami; pwd; /root/rotateCreds.sh"
restartPolicy: Never
imagePullSecrets:
- name: regcred
</code></pre>
<p>This creates a POD and I am able to login to the pod and run <code>/root/rotateCreds.sh</code></p>
<p>While running the job, it seems it is not able to recognize the aws cli. I tried debugging: <code>whoami</code> and <code>pwd</code> return <code>root</code> and <code>/</code> respectively, and that is fine. Any pointers on what is missing? I am new to jobs.</p>
<p>For further debugging, in the job template I added a sleep of <code>10000</code> seconds so that I could log in to the container and see what's happening. I noticed that after logging in I was able to run the script manually too; the <code>aws</code> command was recognised properly.</p>
| <p>It is likely that your <code>PATH</code> is not set correctly in the job's non-interactive shell.
A quick fix is to use the absolute path of the aws CLI, e.g. <code>/usr/local/bin/aws</code>, in the <code>/root/rotateCreds.sh</code> script.</p>
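<p>A sketch of that fix — the install locations below are assumptions, so check <code>command -v aws</code> in an interactive shell first — is to prepend the usual CLI directories to <code>PATH</code> at the top of the script:</p>

```shell
# Prepend the directories where aws-cli is commonly installed (assumed
# locations) so a non-interactive job shell can find the binary.
export PATH="/usr/local/bin:/usr/local/aws/bin:$PATH"
case ":$PATH:" in
  *:/usr/local/bin:*) echo "PATH ok" ;;
  *)                  echo "PATH still missing /usr/local/bin" ;;
esac
```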
|
<p>I'd like to expose the default port (1883) and WS port (9001) of an MQTT server on an Azure Kubernetes Cluster. </p>
<p>Anyway, here is the deployment I currently wrote: </p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mqtt-server
spec:
replicas: 1
selector:
matchLabels:
app: mqtt-server
template:
metadata:
labels:
app: mqtt-server
type: backend
spec:
containers:
- name: mqtt-server
image: eclipse-mosquitto:1.5.4
resources:
requests:
cpu: 250m
memory: 256Mi
ports:
- name: mqtt-dflt-port
containerPort: 1883
- name: mqtt-ws-port
containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
name: mqtt-server-service
spec:
selector:
app: mqtt-server
type: LoadBalancer
ports:
- name: mqtt-dflt-port
protocol: TCP
port: 1883
targetPort: 1883
- name: mqtt-ws-port
protocol: TCP
port: 1884
targetPort: 9001</code></pre>
</div>
</div>
</p>
<p>And when I deploy it, everything is fine but the MQTT broker is unreachable, and my service is described like this: </p>
<pre><code>mqtt-server-service LoadBalancer 10.0.163.167 51.143.170.64 1883:32384/TCP,1884:31326/TCP 21m
</code></pre>
<p>Why aren't ports 1883/9001 forwarded like they should be? </p>
| <ul>
<li>First, make sure you’re connecting to the service’s cluster IP from within the cluster, not from the outside.</li>
<li>Don’t bother pinging the service IP to figure out if the service is accessible (remember, the service’s cluster IP is a virtual IP and pinging it will never work).</li>
<li>If you’ve defined a readiness probe, make sure it’s succeeding; otherwise the pod won’t be part of the service.</li>
<li>To confirm that a pod is part of the service, examine the corresponding Endpoints object with <code>kubectl get endpoints</code>.</li>
<li>If you’re trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn’t work, see if you can access it using its cluster IP instead of the FQDN.</li>
<li>Check whether you’re connecting to the port exposed by the service and not the target port.</li>
<li>Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.</li>
<li>If you can’t even access your app through the pod’s IP, make sure your app isn’t only binding to localhost.</li>
</ul>
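<p>The endpoints check is usually the fastest one to script. As a minimal sketch — the sample IP list below is hypothetical; in practice it would come from <code>kubectl get endpoints mqtt-server-service -o jsonpath='{.subsets[*].addresses[*].ip}'</code>:</p>

```shell
# Fail fast when a service has no ready endpoints, which usually means the
# selector matches no pods or a readiness probe is failing.
endpoint_ips="10.244.1.5 10.244.2.7"   # hypothetical captured output
if [ -n "$endpoint_ips" ]; then
  echo "service has ready endpoints: $endpoint_ips"
else
  echo "no ready endpoints - check the selector and readiness probes"
fi
```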
|
<p>How to spin up cloud proxy for cloud composer cluster</p>
<p>Currently we use Airflow to manage jobs and dynamic DAG creation. For this, one separate DAG is written to check a database table in PostgreSQL for existing rules; depending on whether a rule is active or inactive in PostgreSQL, we manually turn the corresponding dynamic DAGs on or off in Airflow. Now we are going to use Google-managed Cloud Composer, but the problem is that we don't have access to the database of Cloud Composer. How can we use Cloud SQL Proxy to resolve this problem?</p>
| <p>The Cloud Composer database is actually already accessible, because there is a Cloud SQL Proxy running within the environment's attached GKE cluster. You can use its service name <code>airflow-sqlproxy-service</code> to connect to it from within the cluster, using <code>root</code>. For example, on Composer 1.6.0, and if you have Kubernetes cluster credentials, you can list running pods:</p>
<pre><code>$ kubectl get po --all-namespaces
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-database-init-job-kprd5 0/1 Completed 0 1d
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-scheduler-78d889459b-254fm 2/2 Running 18 1d
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-worker-569bc59df5-x6jhl 2/2 Running 5 1d
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-worker-569bc59df5-xxqk7 2/2 Running 5 1d
composer-1-6-0-airflow-1-9-0-6f89fdb7 airflow-worker-569bc59df5-z5lnj 2/2 Running 5 1d
default airflow-redis-0 1/1 Running 0 1d
default airflow-sqlproxy-668fdf6c4-vxbbt 1/1 Running 0 1d
default composer-agent-6f89fdb7-0a7a-41b6-8d98-2dbe9f20d7ed-j9d4p 0/1 Completed 0 1d
default composer-fluentd-daemon-g9mgg 1/1 Running 326 1d
default composer-fluentd-daemon-qgln5 1/1 Running 325 1d
default composer-fluentd-daemon-wq5z5 1/1 Running 326 1d
</code></pre>
<p>You can see that one of the worker pods is named <code>airflow-worker-569bc59df5-x6jhl</code>, and is running in the namespace <code>composer-1-6-0-airflow-1-9-0-6f89fdb7</code>. If I exec into one of them and run the MySQL CLI, I have access to the database:</p>
<pre><code>$ kubectl exec \
-it airflow-worker-569bc59df5-x6jhl \
--namespace=composer-1-6-0-airflow-1-9-0-6f89fdb7 -- \
mysql \
-u root \
-h airflow-sqlproxy-service.default
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 27147
Server version: 5.7.14-google-log (Google)
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
</code></pre>
<p>TL;DR for anything running in your DAGs, connect using <code>root@airflow-sqlproxy-service.default</code> with no password. This will connect to the Airflow metadata database through the Cloud SQL Proxy that's already running in your Composer environment.</p>
<hr>
<p>If you need to connect to a database that <em>isn't</em> the Airflow database running in Cloud SQL, then you can spin up another proxy by deploying a new proxy pod into GKE (like you would deploy anything else into a Kubernetes cluster).</p>
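<p>For a quick interactive check, the connection boils down to a standard MySQL client invocation — no password, per the above (the database name varies per environment; check <code>sql_alchemy_conn</code> in <code>airflow.cfg</code> if you need it):</p>

```shell
mysql -u root -h airflow-sqlproxy-service.default -P 3306
```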
|
<p>I'm trying to run a kubectl command from ansible.<br>
Basically the command will tell me if at least one pod is running from a deployment. </p>
<pre><code>kubectl get deploy sample-v1-deployment -o json -n sample | jq '.status.conditions[] | select(.reason == "MinimumReplicasAvailable") | .status' | tr -d '"'
</code></pre>
<p>I tried to run it from a playbook but I'm getting </p>
<blockquote>
<p>Unable to connect to the server: net/http: TLS handshake timeout</p>
</blockquote>
<p>This is my playbook:</p>
<pre><code>- hosts: master
gather_facts: no
become: true
tasks:
- name: test command
shell: kubectl get deploy sample-v1-deployment -o json -n sample | jq '.status.conditions[] | select(.reason == "MinimumReplicasAvailable") | .status' | tr -d '"'
register: result
</code></pre>
<p>This is the output from ansible:</p>
<pre><code>changed: [k8smaster01.test.com] => {
"changed": true,
"cmd": "kubectl get deploy sample-v1-deployment -o json -n sample | jq '.status.conditions[] | select(.reason == \"MinimumReplicasAvailable\") | .status' | tr -d '\"'",
"delta": "0:00:10.507704",
"end": "2019-04-02 20:59:17.882277",
"invocation": {
"module_args": {
"_raw_params": "kubectl get deploy sample-v1-deployment -o json -n sample | jq '.status.conditions[] | select(.reason == \"MinimumReplicasAvailable\") | .status' | tr -d '\"'",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"rc": 0,
"start": "2019-04-02 20:59:07.374573",
"stderr": "Unable to connect to the server: net/http: TLS handshake timeout",
"stderr_lines": [
"Unable to connect to the server: net/http: TLS handshake timeout"
],
"stdout": "",
"stdout_lines": []
}
</code></pre>
<p>I can run the command manually on the master server without problems. I was also able to use the <code>k8s</code> module to create different things on my kubernetes cluster.<br>
I know there is a kubectl module for ansible, could it be the problem?</p>
<p>Thanks</p>
| <p>I found a couple of workarounds.<br>
One was to use the k8s_facts module</p>
<pre><code>- name: Ensure running application
k8s_facts:
namespace: sample
kind: Pod
label_selectors:
- app=sample-v1-app
register: pod_list
until: pod_list.resources[0].status.phase == 'Running'
delay: 10
retries: 3
</code></pre>
<p>It's simple and gets the work done. </p>
<p>The second workaround was to use the raw module instead of shell or command</p>
<pre><code>- name: Get running status
raw: kubectl get deploy sample-v1-deployment -o json -n sample | jq -r '.status.conditions[] | select(.reason == "MinimumReplicasAvailable") | .status'
</code></pre>
<p>I'm not sure about using raw. It looks like a hammer for a simple task.<br>
But reading about the module makes me think this problem is related to the syntax (quotes, double quotes, <code>|</code>) more than the command itself.</p>
<blockquote>
<p>Executes a low-down and dirty SSH command, not going through the
module subsystem. This is useful and should only be done in a few
cases. A common case is installing python on a system without python
installed by default. Another is speaking to any devices such as
routers that do not have any Python installed.</p>
</blockquote>
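<p>If you'd rather stay with <code>shell</code> than fall back to <code>raw</code>, one way to take quoting out of the equation is a YAML block scalar, so Ansible receives the whole pipeline verbatim — an untested sketch of the task from the question:</p>

```yaml
- name: Get running status
  shell: |
    kubectl get deploy sample-v1-deployment -o json -n sample \
      | jq -r '.status.conditions[] | select(.reason == "MinimumReplicasAvailable") | .status'
  register: result
```

<p>The literal block (<code>|</code>) means no escaping of the inner quotes or pipes is needed.</p>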
|
<p>I have created a prometheus and grafana setup in Kubernetes like described here: <a href="https://github.com/ContainerSolutions/k8s-deployment-strategies" rel="nofollow noreferrer">https://github.com/ContainerSolutions/k8s-deployment-strategies</a>.</p>
<p>Install prometheus:</p>
<pre><code>helm install \
--namespace=monitoring \
--name=prometheus \
--set server.persistentVolume.enabled=false \
--set alertmanager.persistentVolume.enabled=false \
stable/prometheus
</code></pre>
<p>Setup grafana:</p>
<pre><code>helm install \
--namespace=monitoring \
--name=grafana \
--version=1.12.0 \
--set=adminUser=admin \
--set=adminPassword=admin \
--set=service.type=NodePort \
stable/grafana
kubectl get pods -n monitoring
NAME READY STATUS RESTARTS AGE
grafana-6d4f6ff6d5-vw8r2 1/1 Running 0 17h
prometheus-alertmanager-6cb6bc6b7-76fs4 2/2 Running 0 17h
prometheus-kube-state-metrics-5ff476d674-c7mpt 1/1 Running 0 17h
prometheus-node-exporter-4zhmk 1/1 Running 0 17h
prometheus-node-exporter-g7jqm 1/1 Running 0 17h
prometheus-node-exporter-sdnwg 1/1 Running 0 17h
prometheus-pushgateway-7967b4cf45-j24hx 1/1 Running 0 17h
prometheus-server-5dfc4f657d-sl7kv 2/2 Running 0 17h
</code></pre>
<p>I can curl from inside the grafana container to my prometheus container <a href="http://prometheus-server" rel="nofollow noreferrer">http://prometheus-server</a> (It replies "found").</p>
<p>Grafana config:</p>
<pre><code>Name: prometheus
Type: Prometheus
http://prometheus-server
</code></pre>
<p>I see metrics in a default dashboard called Prometheus 2.0 stats.
I've created my own dashboard (also described in the GitHub link).</p>
<pre><code>sum(rate(http_requests_total{app="my-app"}[5m])) by (version)
</code></pre>
<p>I've deployed my-app which is running and I curl it a lot but see nothing in my dashboard.</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-7bd4b55cbd-8zm8b 1/1 Running 0 17h
my-app-7bd4b55cbd-nzs2p 1/1 Running 0 17h
my-app-7bd4b55cbd-zts78 1/1 Running 0 17h
</code></pre>
<p>curl</p>
<pre><code>while sleep 0.1; do curl http://192.168.50.10:30513/; done
Host: my-app-7bd4b55cbd-8zm8b, Version: v2.0.0
Host: my-app-7bd4b55cbd-zts78, Version: v2.0.0
Host: my-app-7bd4b55cbd-nzs2p, Version: v2.0.0
Host: my-app-7bd4b55cbd-8zm8b, Version: v2.0.0
Host: my-app-7bd4b55cbd-zts78, Version: v2.0.0
Host: my-app-7bd4b55cbd-zts78, Version: v2.0.0
Host: my-app-7bd4b55cbd-8zm8b, Version: v2.0.0
Host: my-app-7bd4b55cbd-8zm8b, Version: v2.0.0
</code></pre>
<p>How can I debug this or what am I doing wrong?</p>
<p>Update:
my-app deployment config</p>
<pre><code>kubectl describe deployment my-app
Name: my-app
Namespace: default
CreationTimestamp: Tue, 02 Apr 2019 22:17:31 +0200
Labels: app=my-app
Annotations: deployment.kubernetes.io/revision: 2
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"my-app"},"name":"my-app","namespace":"default"},...
Selector: app=my-app
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=my-app
version=v2.0.0
Annotations: prometheus.io/port: 9101
prometheus.io/scrape: true
Containers:
my-app:
Image: containersol/k8s-deployment-strategies
Ports: 8080/TCP, 8086/TCP
Host Ports: 0/TCP, 0/TCP
Liveness: http-get http://:probe/live delay=5s timeout=1s period=5s #success=1 #failure=3
Readiness: http-get http://:probe/ready delay=0s timeout=1s period=5s #success=1 #failure=3
Environment:
VERSION: v2.0.0
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: my-app-7bd4b55cbd (3/3 replicas created)
Events: <none>
</code></pre>
<p>yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 3
strategy:
type: Recreate
# The selector field tell the deployment which pod to update with
# the new version. This field is optional, but if you have labels
# uniquely defined for the pod, in this case the "version" label,
# then we need to redefine the matchLabels and eliminate the version
# field from there.
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
version: v2.0.0
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9101"
spec:
containers:
- name: my-app
image: containersol/k8s-deployment-strategies
ports:
- name: http
containerPort: 8080
- name: probe
containerPort: 8086
env:
- name: VERSION
value: v2.0.0
livenessProbe:
httpGet:
path: /live
port: probe
initialDelaySeconds: 5
periodSeconds: 5
readinessProbe:
httpGet:
path: /ready
port: probe
periodSeconds: 5
</code></pre>
| <p>In your deployment you told Prometheus to scrape port 9101, but you did not expose this port on your container.</p>
<p>Is your Prometheus endpoint actually on port 9101, or on 8080/8086?</p>
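<p>In other words, the container spec would need the annotated metrics port added — a sketch, assuming the app really serves metrics on 9101:</p>

```yaml
ports:
- name: http
  containerPort: 8080
- name: probe
  containerPort: 8086
- name: metrics      # matches the prometheus.io/port: "9101" annotation
  containerPort: 9101
```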
|
<p>The default Helm Chart for PostgreSQL (i.e. <code>stable/postgresql</code>) defines an <code>initdbScripts</code> parameter that allows initialization scripts to be run. However, I can't seem to get the format correct on how to issue it via the command line.</p>
<p><strong>Could someone provide an example of how to populate this command line parameter?</strong></p>
<p>Here's what I'm issuing, minus a working version of the <code>initdbScripts</code> parameter.</p>
<pre><code>helm install stable/postgresql -n testpg \
--set global.postgresql.postgresqlDatabase=testpg \
--set global.postgresql.postgresqlUsername=testpg \
--set global.postgresql.postgresqlPassword=testpg \
--set global.postgresql.servicePort=5432 \
--set initdbScripts=(WHAT GOES HERE TO RUN "sql/init.sql"??) \
--set service.type=LoadBalancer
</code></pre>
| <p>According to <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="noreferrer">stable/postgresql</a> helm chart, <code>initdbScripts</code> is a dictionary of init script names which are multi-line variables:</p>
<blockquote>
<pre><code>## initdb scripts
## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
# my_init_script.sh:|
# #!/bin/sh
# echo "Do something."
</code></pre>
</blockquote>
<p>Let's assume that we have the following <code>init.sql</code> script:</p>
<pre><code>CREATE USER helm;
CREATE DATABASE helm;
GRANT ALL PRIVILEGES ON DATABASE helm TO helm;
</code></pre>
<p>When we inject multi-line text into values, we need to deal with indentation in YAML.</p>
<p>For this particular case it is:</p>
<pre><code>helm install stable/postgresql -n testpg \
--set global.postgresql.postgresqlDatabase=testpg \
--set global.postgresql.postgresqlUsername=testpg \
--set global.postgresql.postgresqlPassword=testpg \
--set global.postgresql.servicePort=5432 \
--set initdbScripts."init\.sql"="CREATE USER helm;
CREATE DATABASE helm;
GRANT ALL PRIVILEGES ON DATABASE helm TO helm;" \
--set service.type=LoadBalancer
</code></pre>
<p>Some explanation of the above example:</p>
<ol>
<li>If the script's name contains a <code>.</code>, it should be escaped, like <code>"init\.sql"</code>.</li>
<li>The script's content is in double quotes, because it's a multi-line string variable.</li>
</ol>
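<p>An alternative that avoids the escaping entirely is to put the script into a values file and pass it with <code>-f</code> — an equivalent sketch:</p>

```yaml
# values.yaml
initdbScripts:
  init.sql: |
    CREATE USER helm;
    CREATE DATABASE helm;
    GRANT ALL PRIVILEGES ON DATABASE helm TO helm;
```

<p>Then run <code>helm install stable/postgresql -n testpg -f values.yaml</code> together with the other <code>--set</code> flags.</p>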
|
<p>I am working with multiple Kubernetes clusters at Azure, so I need to switch quickly from one cluster to another without keeping various files at my path <code>C:\Users\username\.kube</code>, because otherwise I have to rename or replace the file whenever I wish to change to another.</p>
| <p>I suggest that you use the following tools and tricks:</p>
<ul>
<li>Use <a href="https://github.com/asdf-vm/asdf" rel="noreferrer"><code>asdf</code></a> to manage multiple <code>kubectl</code> versions</li>
<li><strong><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable" rel="noreferrer">Set the <code>KUBECONFIG</code></a> env var to change between multiple <code>kubeconfig</code> files</strong></li>
<li>Use <a href="https://github.com/jonmosco/kube-ps1" rel="noreferrer"><code>kube-ps1</code></a> to keep track of your current context/namespace</li>
<li>Use <a href="https://github.com/ahmetb/kubectx" rel="noreferrer"><code>kubectx</code> and <code>kubens</code></a> to change fast between clusters/namespaces</li>
<li>Use aliases to combine them all together</li>
</ul>
<p>Take a look at this article, it explains how to accomplish this: <a href="https://medium.com/@eduardobaitello/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b" rel="noreferrer">Using different kubectl versions with multiple Kubernetes clusters</a></p>
<p>I also recommend this read: <a href="https://medium.com/@ahmetb/mastering-kubeconfig-4e447aa32c75" rel="noreferrer">Mastering the KUBECONFIG file</a></p>
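<p>A minimal sketch of the <code>KUBECONFIG</code> trick — the <code>~/.kube/configs</code> directory name is an assumption, any folder of kubeconfig files works:</p>

```shell
# Build a colon-separated KUBECONFIG from every file in a directory so
# kubectl sees all contexts at once, instead of renaming files by hand.
merge_kubeconfigs() {
  ls "$1"/*.yaml 2>/dev/null | paste -sd: -
}
KUBECONFIG="$(merge_kubeconfigs "$HOME/.kube/configs")"
export KUBECONFIG
```

<p>After that, <code>kubectl config get-contexts</code> lists every cluster and <code>kubectl config use-context</code> (or <code>kubectx</code>) switches between them.</p>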
|
<p>I'm trying to use this feature: <a href="https://cloud.ibm.com/docs/services/appid?topic=appid-kube-auth#kube-auth" rel="nofollow noreferrer">https://cloud.ibm.com/docs/services/appid?topic=appid-kube-auth#kube-auth</a></p>
<p>I've followed the steps in the documentation, but the authentication process is not triggered. Unfortunately I don't see any errors and don't know what else to do.</p>
<p>Here is my sample service (nginx.yaml):</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
namespace: default
labels:
app: nginx
spec:
ports:
- name: http
port: 80
protocol: TCP
selector:
app: nginx
type: NodePort
</code></pre>
<p>Here is my sample ingress (ingress.yaml). Replace 'niklas-heidloff-4' with your cluster name and 'niklas-heidloff-appid' with the name of your App ID service instance.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-with-app-id
annotations:
ingress.bluemix.net/appid-auth: "bindSecret=binding-niklas-heidloff-appid namespace=default requestType=web"
spec:
tls:
- hosts:
- niklas.niklas-heidloff-4.us-south.containers.appdomain.cloud
secretName: niklas-heidloff-4
rules:
- host: niklas.niklas-heidloff-4.us-south.containers.appdomain.cloud
http:
paths:
- path: /
backend:
serviceName: nginx
servicePort: 80
</code></pre>
<p>Here are the steps to reproduce the sample:</p>
<p>First create a new cluster with at least two worker nodes in Dallas as described in the documentation. Note that it can take some extra time to get a public IP for your cluster.</p>
<p>Then create an App ID service instance.</p>
<p>Then invoke the following commands (replace 'niklas-heidloff-4' with your cluster name):</p>
<pre><code>$ ibmcloud login -a https://api.ng.bluemix.net
$ ibmcloud ks region-set us-south
$ ibmcloud ks cluster-config niklas-heidloff-4 (and execute export....)
$ ibmcloud ks cluster-service-bind --cluster niklas-heidloff-4 --namespace default --service niklas-heidloff-appid
$ kubectl apply -f nginx.yaml
$ kubectl apply -f ingress.yaml
</code></pre>
<p>After this I could open '<a href="https://niklas.niklas-heidloff-4.us-south.containers.appdomain.cloud/" rel="nofollow noreferrer">https://niklas.niklas-heidloff-4.us-south.containers.appdomain.cloud/</a>' but the authentication process is not triggered and the page opens without authentication. </p>
| <p>I tried the steps mentioned in the <a href="https://cloud.ibm.com/docs/services/appid?topic=appid-kube-auth#kube-auth" rel="nofollow noreferrer">link</a> and this is how it <strong>worked</strong> for me. </p>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
annotations:
ingress.bluemix.net/appid-auth: "bindSecret=binding-appid-ks namespace=default requestType=web serviceName=nginx idToken=false"
spec:
tls:
- hosts:
- test.vidya-think-cluster.us-south.containers.appdomain.cloud
secretName: vidya-think-cluster
rules:
- host: test.vidya-think-cluster.us-south.containers.appdomain.cloud
http:
paths:
- path: /
backend:
serviceName: nginx
servicePort: 80
</code></pre>
<p>I added the following web redirect URL in the <code>authentication settings</code> of App ID service - <code>http://test.vidya-think-cluster.us-south.containers.appdomain.cloud/appid_callback</code>.</p>
<p>Now, when you try accessing the app at <code>http://test.vidya-think-cluster.us-south.containers.appdomain.cloud/</code> you should see the redirection to App ID</p>
<p>Looks like <code>idToken=false</code> is a mandatory parameter, as there is an error when you run <code>kubectl describe ingress myingress</code> </p>
<p><strong>Error</strong>: <em>Failed to apply ingress.bluemix.net/appid-auth annotation. Error annotation format error : One of the mandatory fields not valid/missing for annotation ingress.bluemix.net/appid-auth</em></p>
|
<p>I have installed a k8s cluster on AWS with kops and I have followed instructions to deploy api-platform on that cluster with helm.</p>
<p>I don't understand why the php pod log shows a 405 when the php pod tries to invalidate the cache in the varnish pod.</p>
<p>In the Varnish pod inside /usr/local/etc/varnish/default.vcl my whitelist is the default one</p>
<pre><code># Hosts allowed to send BAN requests
acl invalidators {
"localhost";
"php";
}
</code></pre>
<p>UPDATE: I think the problem can be generalized this way: from a pod A inside a service A I want to call a service B. I need the request received in pod B to preserve the IP of service A, not the IP of pod A. </p>
| <p>Here's an easier fix from api-platform: <a href="https://github.com/api-platform/demo/blob/master/api/docker/varnish/conf/default.vcl#L22-L25" rel="nofollow noreferrer">https://github.com/api-platform/demo/blob/master/api/docker/varnish/conf/default.vcl#L22-L25</a></p>
<p>I think they will update the helm chart with this one</p>
|
<p>I have set up an EKS cluster with "private access" enabled and set up one instance in the same VPC to communicate with EKS. The issue is that if I enable "public access", I can access the API endpoint. But if I disable public access and enable private access, I can't access the API endpoints.</p>
<p>When private access is enabled:</p>
<pre><code>kubectl get svc
Unable to connect to the server: dial tcp: lookup randomstring.region.eks.amazonaws.com on 127.0.0.53:53: no such host
</code></pre>
<p>When public access is enabled:</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 57m
</code></pre>
| <p>I had to enable <code>enableDnsHostnames</code> and <code>enableDnsSupport</code> for my VPC.</p>
<p>When enabling the private access of a cluster, EKS creates a private hosted zone and associates with the same VPC. It is managed by AWS itself and you can't view it in your aws account. So, this private hosted zone to work properly, your VPC must have <code>enableDnsHostnames</code> and <code>enableDnsSupport</code> set to <code>true</code>.</p>
<p>Note: Wait for a while for changes to be reflected(about 5 minutes).</p>
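<p>For reference, both attributes can be flipped from the AWS CLI (replace the VPC ID with your cluster's):</p>

```shell
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support '{"Value": true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value": true}'
```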
|
<p>I have seen many posts set <code>livenessProbe</code> to health endpoint for java web application. Take springboot as an example:</p>
<pre><code>livenessProbe:
httpGet:
path: /actuator/health
port: http
</code></pre>
<p>But as I know the following command which is the last line of the dockerfile already guarantees the same as livenessProbe implicitly.</p>
<p><code>CMD ["-c", "/usr/bin/java ${DOCKER_JAVA_OPTS} -jar YOUR_JAR.jar"]</code></p>
<p>That said, if the java application crashes, a new pod will be created, so is there no need to set a <code>livenessProbe</code> for a java application?</p>
<p>For me it also makes no sense to set <code>readinessProbe</code> the same as <code>livenessProbe</code>, because when the <code>livenessProbe</code> passes, the <code>readinessProbe</code> will pass for sure, and the <code>readinessProbe</code> is more useful because after the container runs, the application has to initialize some connections for example, and waiting for that initialization is necessary.</p>
<p>One example I saw: many posts use both <code>livenessProbe</code> and <code>readinessProbe</code>, and both call the same health endpoint.</p>
<pre><code> livenessProbe:
httpGet:
path: /actuator/health
port: http
readinessProbe:
httpGet:
path: /actuator/health
port: http
</code></pre>
| <p>Kubernetes will only restart the pod if the main container process ends, not when it for example stops being responsive or hangs. The livenessProbe is intended to restart unresponsive containers.</p>
<p>I would not advise having the readinessProbe and livenessProbe exactly the same. You should at least use different values for initialDelaySeconds (with the delay less than your startup time for the readinessProbe and more than your startup time for the livenessProbe). If the delay of your livenessProbe is too short, your application will get killed and restarted before it manages to fully start.</p>
<p>The readinessProbe should indicate whether your container is ready to respond to requests. Failure of the livenessProbe should indicate that the application hangs and should be restarted. For a lot of simple application that do not have complicated health checks, this situation is the same and hence they use the same tests. If you have more fine-grained health checks, you could have a different test for both probes. Another example of a difference could be in the case that you know your health check will occasionally fail due to some intermittent issue that resolves on its own. In that case I would advise you to increase the failureThreshold of the livenessProbe.</p>
<p>So in general, I would make readinessProbe more strict than the livenessProbe. This way, you ensure that requests only get sent to pods that respond, but avoid unnecessary restarts. Obviously, you need to take the specifics of your application into account.</p>
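<p>A sketch of what that looks like for the Spring Boot probes from the question — the delay and threshold numbers are assumptions, tune them to your actual startup time:</p>

```yaml
readinessProbe:
  httpGet:
    path: /actuator/health
    port: http
  initialDelaySeconds: 10   # shorter than the app's startup time
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /actuator/health
    port: http
  initialDelaySeconds: 60   # comfortably longer than startup
  periodSeconds: 10
  failureThreshold: 5       # tolerate intermittent health-check blips
```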
|
<p>I am going through the kubernetes tutorial at Udacity. When i run the the nginx image using the following command </p>
<pre><code>kubectl run nginx --image=nginx:1.10.0
</code></pre>
<p>It gives me the error </p>
<blockquote>
<p>error: failed to discover supported resources: Get <a href="http://localhost:8080/apis/extensions/v1beta1" rel="noreferrer">http://localhost:8080/apis/extensions/v1beta1</a>: dial tcp 127.0.0.1:8080: getsockopt: connection refused</p>
</blockquote>
<p>If i try to get pods using the following command</p>
<pre><code>kubectl get pods
</code></pre>
<p>it says</p>
<blockquote>
<p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
</blockquote>
<p>The nginx server is running; I can tell because I can get the appropriate output by running curl <a href="http://127.0.0.1" rel="noreferrer">http://127.0.0.1</a></p>
<p>I am not able to figure out what the issue is, and there are not a lot of resources on the internet for this problem. Can anyone please tell me how to resolve it?</p>
| <p>This issue often occurs when <code>kubectl</code> can not find the configuration credential for the intended cluster.</p>
<p>Check <code>$HOME/.kube/config</code> for the cluster configuration. If the configuration is empty or set for the wrong cluster, regenerate it by running,</p>
<pre><code>gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE>
</code></pre>
<p>This will update the configuration credential in <code>$HOME/.kube/config</code>.</p>
<p>Now, everything should work as expected.</p>
<p>Reference: <a href="https://github.com/googlecodelabs/feedback/issues/537" rel="noreferrer">https://github.com/googlecodelabs/feedback/issues/537</a></p>
|