<p>I have the following alert configured in prometheus:</p>
<pre><code>alert: ClockSkewDetected
expr: abs(node_timex_offset_seconds{job="node-exporter"})
> 0.03
for: 2m
labels:
severity: warning
annotations:
message: Clock skew detected on node-exporter {{ $labels.namespace }}/{{ $labels.pod }}. Ensure NTP is configured correctly on this host.
</code></pre>
<p>This alert is <a href="https://github.com/coreos/kube-prometheus/blob/773a9c73e9e1a817d079cf6db13603a1cb830c8b/jsonnet/kube-prometheus/alerts/node.libsonnet#L38-L50" rel="nofollow noreferrer">part of</a> the default <code>kube-prometheus</code> stack which I am using.</p>
<p>I find this alert fires for around 10 mins every day or two.</p>
<p>I'd like to know how to deal with this problem (the alert firing!). It's suggested in <a href="https://stackoverflow.com/questions/48792742/how-do-i-set-the-correct-time-in-my-google-kubernetes-engine-container-do-i-nee">this answer</a> that I shouldn't need to run NTP (via a daemonset I guess) myself on GKE.</p>
<p>I'm also keen to use the <code>kube-prometheus</code> defaults where possible - so I'm unsure about increasing the <code>0.03</code> value.</p>
| <p>As pointed out in that answer, <a href="https://cloud.google.com/compute/docs/instances/configure-ntp" rel="nofollow noreferrer">instances in GCP are preconfigured to sync with Google's internal NTP servers</a>, so there shouldn't be any need to use a DaemonSet to configure NTP manually.</p>
<p>It might be the case that the clock skews during <a href="https://cloud.google.com/compute/docs/instances/live-migration" rel="nofollow noreferrer">live migrations</a> and catches up automatically, but not without triggering the alert. However, this theory only applies to non-preemptible instances.</p>
<p><a href="https://github.com/GoogleCloudPlatform/compute-image-packages/issues/634" rel="nofollow noreferrer">Some events on GCE instances are supposed to trigger the Clock Skew Daemon</a> that will eventually correct changes initiated by the user (or a process action on behalf of the user), so if this is happening in your nodes, that's another possibility.</p>
<p>Regardless of the aforementioned theories and since nodes are managed resources in GKE, I think you have a pretty solid case for the <a href="https://cloud.google.com/support/" rel="nofollow noreferrer">GKE support</a> to investigate as this might be an implementation detail.</p>
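<p>Before opening a support case, it can help to quantify the skew. A couple of example PromQL queries (assuming the default <code>node-exporter</code> job label from <code>kube-prometheus</code>) to see how large and how frequent the offsets are:</p>

```promql
# Current absolute clock offset per node
abs(node_timex_offset_seconds{job="node-exporter"})

# Worst offset over the last day per node (subquery, requires Prometheus >= 2.7)
max_over_time(abs(node_timex_offset_seconds{job="node-exporter"})[1d:1m])
```

<p>If the spikes correlate with live-migration events on the instances, that supports the migration theory above.</p>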
|
<p>I have a springboot app which I want to deploy on Kubernetes (I'm using minikube) with a custom context path taken from the environment variables.</p>
<p>I've compiled an app.war file and exported an environment variable in Linux as follows:</p>
<p><code>export SERVER_SERVLET_CONTEXT_PATH=/app</code> </p>
<p>And then started my app on my machine as follows:</p>
<p><code>java -jar app.war --server.servlet.context-path=$(printenv CONTEXT_PATH)</code></p>
<p>and it works as expected; I can access my app through the browser using the URL <strong>localhost:8080/app/</strong></p>
<p>I want to achieve the same thing on minikube so I prepared those config files:</p>
<hr>
<ul>
<li><p>Dockerfile:</p>
<pre><code>FROM openjdk:8
ADD app.war app.war
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.war", "--server.servlet.context-path=$(printenv CONTEXT_PATH)"]
</code></pre></li>
</ul>
<hr>
<ul>
<li><p>deployment config file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: esse-deployment-1
labels:
app: esse-1
spec:
replicas: 1
selector:
matchLabels:
app: esse-1
template:
metadata:
labels:
app: esse-1
spec:
containers:
- image: mysql:5.7
name: esse-datasource
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: root
- image: esse-application
name: esse-app
imagePullPolicy: Never
ports:
- containerPort: 8080
env:
- name: server.servlet.context-path
value: /esse-1
volumes:
- name: esse-1-mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-persistent-storage-claim
---
apiVersion: v1
kind: Service
metadata:
name: esse-service-1
labels:
app: esse-1
spec:
selector:
app: esse-1
ports:
- protocol: TCP
port: 8080
targetPort: 8080
type: NodePort
</code></pre></li>
</ul>
<p>However, the Java container inside the pod fails to start, and here's the exception thrown by Spring:</p>
<blockquote>
<p>Initialization of bean failed; nested exception is
java.lang.IllegalArgumentException: ContextPath must start with '/'
and not end with '/'</p>
</blockquote>
| <p>Make use of <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noreferrer">configmaps</a>.</p>
<p>The ConfigMap will hold the <code>application.properties</code> of your Spring Boot application.</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: esse-config
data:
application-dev.properties: |
spring.application.name=my-esse-service
server.port=8080
server.servlet.context-path=/esse-1
</code></pre>
<p><strong>NOTE:</strong> <em><code>server.servlet.context-path=/esse-1</code> will override the context-path of your Spring Boot application.</em></p>
<p>Now refer this configmap in your deployment yaml.</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: esse-deployment-1
labels:
app: esse-1
spec:
replicas: 1
selector:
matchLabels:
app: esse-1
template:
metadata:
labels:
app: esse-1
spec:
containers:
- image: mysql:5.7
name: esse-datasource
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: root
- image: esse-application
name: esse-app
imagePullPolicy: Never
command: [ "java", "-jar", "app.war", "--spring.config.additional-location=/config/application-dev.properties" ]
ports:
- containerPort: 8080
volumeMounts:
- name: esse-application-config
mountPath: "/config"
readOnly: true
volumes:
- name: esse-application-config
configMap:
name: esse-config
items:
- key: application-dev.properties
path: application-dev.properties
- name: esse-1-mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-persistent-storage-claim
</code></pre>
<p><strong>NOTE:</strong> <em>Here we mount the ConfigMap inside your Spring Boot application container at the <code>/config</code> folder. <code>--spring.config.additional-location=/config/application-dev.properties</code> points to that mounted properties file.</em></p>
<p>In the future, if you want to add new config or update the value of an existing entry, just make the change in the ConfigMap and <code>kubectl apply</code> it. Then, to pick up the new values, scale the deployment down and back up.</p>
<p>Hope this helps.</p>
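<p>As a lighter-weight alternative (a sketch, not tested against your app): Spring Boot's relaxed binding maps the environment variable <code>SERVER_SERVLET_CONTEXT_PATH</code> to <code>server.servlet.context-path</code> automatically, so you may be able to drop the command-line argument and set the variable directly in the container spec:</p>

```yaml
# Hypothetical fragment of the esse-app container definition; Spring Boot
# picks up SERVER_SERVLET_CONTEXT_PATH via relaxed binding, no extra args needed.
- image: esse-application
  name: esse-app
  imagePullPolicy: Never
  ports:
  - containerPort: 8080
  env:
  - name: SERVER_SERVLET_CONTEXT_PATH
    value: /esse-1
```

<p>This avoids the shell-expansion problem in the Dockerfile's exec-form <code>ENTRYPOINT</code> entirely.</p>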
|
<p>I work on an open source system that is comprised of a Postgres database and a tomcat server. I have docker images for each component. We currently use docker-compose to test the application.</p>
<p>I am attempting to model this application with kubernetes.</p>
<p>Here is my first attempt.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dspace-pod
spec:
volumes:
- name: "pgdata-vol"
emptyDir: {}
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- image: dspace/dspace:dspace-6_x
name: dspace
ports:
- containerPort: 8080
name: http
protocol: TCP
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
#
- image: dspace/dspace-postgres-pgcrypto
name: dspacedb
ports:
- containerPort: 5432
name: http
protocol: TCP
volumeMounts:
- mountPath: "/pgdata"
name: "pgdata-vol"
env:
- name: PGDATA
value: /pgdata
</code></pre>
<p>I have a configMap that is setting the hostname to the name of the pod.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: local-config-map
namespace: default
data:
local.cfg: |-
dspace.dir = /dspace
db.url = jdbc:postgresql://dspace-pod:5432/dspace
dspace.hostname = dspace-pod
dspace.baseUrl = http://dspace-pod:8080
solr.server=http://dspace-pod:8080/solr
</code></pre>
<p>This application has a number of tasks that are run from the command line. </p>
<p>I have created a 3rd Docker image that contains the jars that are needed on the command line.</p>
<p>I am interested in modeling these command line tasks as <strong>Jobs</strong> in Kubernetes. Assuming that is an appropriate way to handle these tasks, how do I specify that a job should run within a Pod that is already running?</p>
<p>Here is my first attempt at defining a job. </p>
<pre><code>apiVersion: batch/v1
kind: Job
#https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
metadata:
name: dspace-create-admin
spec:
template:
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- name: dspace-cli
image: dspace/dspace-cli:dspace-6_x
command: [
"/dspace/bin/dspace",
"create-administrator",
"-e", "test@test.edu",
"-f", "test",
"-l", "admin",
"-p", "admin",
"-c", "en"
]
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
restartPolicy: Never
</code></pre>
| <p>The following configuration has allowed me to start my services (tomcat and postgres) as I hoped.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: local-config-map
namespace: default
data:
# example of a simple property defined using --from-literal
#example.property.1: hello
#example.property.2: world
# example of a complex property defined using --from-file
local.cfg: |-
dspace.dir = /dspace
db.url = jdbc:postgresql://dspacedb-service:5432/dspace
dspace.hostname = dspace-service
dspace.baseUrl = http://dspace-service:8080
solr.server=http://dspace-service:8080/solr
---
apiVersion: v1
kind: Service
metadata:
name: dspacedb-service
labels:
app: dspacedb-app
spec:
type: NodePort
selector:
app: dspacedb-app
ports:
- protocol: TCP
port: 5432
# targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dspacedb-deploy
labels:
app: dspacedb-app
spec:
selector:
matchLabels:
app: dspacedb-app
template:
metadata:
labels:
app: dspacedb-app
spec:
volumes:
- name: "pgdata-vol"
emptyDir: {}
containers:
- image: dspace/dspace-postgres-pgcrypto
name: dspacedb
ports:
- containerPort: 5432
name: http
protocol: TCP
volumeMounts:
- mountPath: "/pgdata"
name: "pgdata-vol"
env:
- name: PGDATA
value: /pgdata
---
apiVersion: v1
kind: Service
metadata:
name: dspace-service
labels:
app: dspace-app
spec:
type: NodePort
selector:
app: dspace-app
ports:
- protocol: TCP
port: 8080
targetPort: 8080
name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dspace-deploy
labels:
app: dspace-app
spec:
selector:
matchLabels:
app: dspace-app
template:
metadata:
labels:
app: dspace-app
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- image: dspace/dspace:dspace-6_x-jdk8-test
name: dspace
ports:
- containerPort: 8080
name: http
protocol: TCP
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
</code></pre>
<p>After applying the configuration above, I have the following results.</p>
<pre><code>$ kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dspace-service NodePort 10.104.224.245 <none> 8080:32459/TCP 3s app=dspace-app
dspacedb-service NodePort 10.96.212.9 <none> 5432:30947/TCP 3s app=dspacedb-app
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h <none>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
</code></pre>
<p>I was pleased to see that the service name can be used for port forwarding.</p>
<pre><code>$ kubectl port-forward service/dspace-service 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
</code></pre>
<p>I am also able to run the following job using the defined service names in the configMap.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: dspace-create-admin
spec:
template:
spec:
volumes:
- name: "assetstore"
emptyDir: {}
- name: my-local-config-map
configMap:
name: local-config-map
containers:
- name: dspace-cli
image: dspace/dspace-cli:dspace-6_x
command: [
"/dspace/bin/dspace",
"create-administrator",
"-e", "test@test.edu",
"-f", "test",
"-l", "admin",
"-p", "admin",
"-c", "en"
]
volumeMounts:
- mountPath: "/dspace/assetstore"
name: "assetstore"
- mountPath: "/dspace/config/local.cfg"
name: "my-local-config-map"
subPath: local.cfg
restartPolicy: Never
</code></pre>
<p>Results</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dspace-create-admin-kl6wd 0/1 Completed 0 5m
dspace-deploy-c59b77bb8-mr47k 1/1 Running 0 10m
dspacedb-deploy-58dd85f5b9-6v2lf 1/1 Running 0 10m
</code></pre>
<p>I still have some work to do persisting the volumes.</p>
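<p>For the remaining persistence work, a minimal <code>PersistentVolumeClaim</code> sketch (the claim name and storage size are assumptions) that could back the database instead of <code>emptyDir</code>:</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

<p>The <code>dspacedb-deploy</code> volume entry would then use <code>persistentVolumeClaim: {claimName: pgdata-claim}</code> in place of <code>emptyDir: {}</code>.</p>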
|
<p>I'm trying to play with autoscaling scenarios (currently with microk8s single node personal cluster).</p>
<p>Basic CPU scaling works fine.</p>
<p>For the more complex scenarios, I'm trying to follow the guide at <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics</a>, but I can't figure out how or where the possible pod metrics / object metrics are defined and documented. For example, where is "packets-per-second" documented?</p>
<p>I can kind of navigate via kubectl or manually exercising the REST APIs but there has to be a better way.</p>
<p>Thanks</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: php-apache
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
        type: Utilization
averageUtilization: 50
- type: Pods
pods:
metric:
name: packets-per-second ====> where is this name defined/documented ?
      target:
        type: AverageValue
        averageValue: 1k
- type: Object
object:
metric:
name: requests-per-second ====> where is this name defined/documented ?
describedObject:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
name: main-route
target:
        type: Value
value: 10k
</code></pre>
| <p>CPU or memory usage for a <code>Resource</code> metric is <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/stats/summary.go" rel="noreferrer">provided by the kubelet</a> and collected by the <a href="https://github.com/kubernetes-incubator/metrics-server" rel="noreferrer">metrics-server</a>.</p>
<p>But for <code>packets-per-second</code> and <code>requests-per-second</code> there is no official provider, so these names can be any value, depending on the (non-official) custom metrics API implementation you have deployed.</p>
<p>Some popular custom metrics API are listed at <a href="https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md" rel="noreferrer">https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md</a></p>
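<p>To discover which metric names your own cluster actually serves (rather than guessing), you can list the aggregated metrics APIs directly — for example (assuming a custom metrics adapter such as prometheus-adapter is installed, and <code>jq</code> for readability):</p>

```shell
# Resource metrics (CPU/memory), served by metrics-server
kubectl get --raw "/apis/metrics.k8s.io/v1beta1" | jq .

# Custom metrics, served by whatever adapter you deployed;
# the names listed here are the ones usable in the HPA manifest
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
```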
|
<p>I ran a Job in Kubernetes overnight. When I check it in the morning, it had failed. Normally, I'd check the pod logs or the events to determine why. However, the pod was deleted and there are no events.</p>
<pre><code>kubectl describe job topics-etl --namespace dnc
</code></pre>
<p>Here is the <code>describe</code> output:</p>
<pre><code>Name: topics-etl
Namespace: dnc
Selector: controller-uid=391cb7e5-b5a0-11e9-a905-0697dd320292
Labels: controller-uid=391cb7e5-b5a0-11e9-a905-0697dd320292
job-name=topics-etl
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"name":"topics-etl","namespace":"dnc"},"spec":{"template":{"spec":{"con...
Parallelism: 1
Completions: 1
Start Time: Fri, 02 Aug 2019 22:38:56 -0500
Pods Statuses: 0 Running / 0 Succeeded / 1 Failed
Pod Template:
Labels: controller-uid=391cb7e5-b5a0-11e9-a905-0697dd320292
job-name=topics-etl
Containers:
docsund-etl:
Image: acarl005/docsund-topics-api:0.1.4
Port: <none>
Host Port: <none>
Command:
./create-topic-data
Requests:
cpu: 1
memory: 1Gi
Environment:
AWS_ACCESS_KEY_ID: <set to the key 'access_key_id' in secret 'aws-secrets'> Optional: false
AWS_SECRET_ACCESS_KEY: <set to the key 'secret_access_key' in secret 'aws-secrets'> Optional: false
AWS_S3_CSV_PATH: <set to the key 's3_csv_path' in secret 'aws-secrets'> Optional: false
Mounts:
/app/state from topics-volume (rw)
Volumes:
topics-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: topics-volume-claim
ReadOnly: false
Events: <none>
</code></pre>
<p>Here is the job config yaml. It has <code>restartPolicy: OnFailure</code>, but it never restarted. I also have no TTL set so pods should never get cleaned up.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: topics-etl
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: docsund-etl
image: acarl005/docsund-topics-api:0.1.6
command: ["./create-topic-data"]
env:
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: aws-secrets
key: access_key_id
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: aws-secrets
key: secret_access_key
- name: AWS_S3_CSV_PATH
valueFrom:
secretKeyRef:
name: aws-secrets
key: s3_csv_path
resources:
requests:
cpu: 1
memory: 1Gi
volumeMounts:
- name: topics-volume
mountPath: /app/state
volumes:
- name: topics-volume
persistentVolumeClaim:
claimName: topics-volume-claim
</code></pre>
<p><strong>How can I debug this?</strong></p>
| <p>The TTL would clean up the Job itself and all its child objects. <code>ttlSecondsAfterFinished</code> is unset, so the Job hasn't been cleaned up.</p>
<p>From the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#handling-pod-and-container-failures" rel="noreferrer">job docco</a></p>
<blockquote>
<p><strong>Note:</strong> If your job has <code>restartPolicy = "OnFailure"</code>, keep in mind that your container running the Job will be terminated once the job backoff limit has been reached. This can make debugging the Job’s executable more difficult. We suggest setting <code>restartPolicy = "Never"</code> when debugging the Job or using a logging system to ensure output from failed Jobs is not lost inadvertently.</p>
</blockquote>
<p>The Job spec you posted doesn't set a <code>backoffLimit</code>, so it defaults to 6 and the underlying task is retried up to 6 times.</p>
<p>If the container process exits with a non-zero status it will fail, which can be entirely silent in the logs.</p>
<p>The spec doesn't define <code>activeDeadlineSeconds</code> either, so I'm not sure what type of timeout you end up with. I assume this was a hard failure in the container, so a timeout doesn't come into play.</p>
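<p>To make the next failure observable, a hedged variant of the Job spec (only the changed fields are shown; the deadline value is a guess) that bounds retries and keeps failed pods around:</p>

```yaml
spec:
  backoffLimit: 2              # stop after a few attempts instead of the default 6
  activeDeadlineSeconds: 3600  # hard cap on total Job runtime
  template:
    spec:
      restartPolicy: Never     # each retry is a fresh pod, so failed pods (and logs) remain
```

<p>With <code>restartPolicy: Never</code>, <code>kubectl logs</code> on the failed pods still works after the Job gives up.</p>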
|
<p>Anybody know why I keep getting service unavailable for the below YAML code. The ingress points to the service which points to the container and it should be working. For this example I am just using an NGINX container to test.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontend
spec:
selector:
k8s-app: traefik-ingress-lb
app: nginx
ports:
- name: web
port: 80
targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend
spec:
rules:
- host: test.example.services
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.15.4
ports:
- containerPort: 80
---
</code></pre>
| <p>The Service had a selector label, <code>k8s-app: traefik-ingress-lb</code>, which the app's pods didn't have. Once that label was removed from the selector, everything worked fine.</p>
<p>Thanks to Daniel Tomcej on the Traefik Slack channel for helping me out here.</p>
|
<p>I am trying to follow this tutorial <a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service</a></p>
<p>What confuses me is </p>
<pre><code>kubectl expose deployment hello-node --type=LoadBalancer --port=8080
</code></pre>
<p>Can some explain if this will balance load across the pods in the node? I want to, say, make 5 requests to the service of a deployment with 5 pods and want each pod to handle each request in parallel. How can I make minikube equally distribute requests across pods in a node?</p>
<p>Edit: There is also <code>--type=NodePort</code> how does it differ from type LoadBalancer above? Do any of these distribute incoming requests across pods on its own?</p>
| <p>A Service is the way to expose your Deployment to external requests. Type <code>LoadBalancer</code> gives the Service an external IP which forwards your requests to the Deployment's pods. The distribution defaults to round-robin (based on the docs). A <code>NodePort</code> Service exposes the same pods on a static port on each node; the distribution across pods works the same way, only the path traffic takes into the cluster differs. If you want different types of load balancing, use Istio or another service mesh.</p>
|
<p>I am trying to deploy two MySQL pods with the same PVC, but I am getting <code>CrashLoopBackoff</code> state when I create the second pod with the error in logs: "innoDB check that you do not already have another mysqld process using the same innodb log files". How to resolve this error?</p>
| <p>There are different options to achieve high availability. If you are running Kubernetes on infrastructure that can provision the volume on different nodes (e.g. in the cloud) and your pod/node crashes, Kubernetes will restart the database on a different node with the same volume. Aside from a short downtime, the database will be back up and running relatively quickly.
The volume is mounted into a single running MySQL pod to prevent data corruption from concurrent access. (This is what MySQL notices in your scenario as well, since it is not designed for shared storage as an HA solution.)</p>
<p>If you need more you can use the built in replication of mysql to create a mysql 'cluster' which can be used even if one node/pod should fail. Each instance of the mysql cluster will have an individual volume in that case. Look at the kubernetes stateful set example for this scenario: <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/</a></p>
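<p>The per-pod storage piece of that StatefulSet approach can be sketched like this (image, storage size, and the replication setup are simplified assumptions — the linked tutorial covers the full configuration):</p>

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:        # one PVC per pod: data-mysql-0, data-mysql-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

<p>Because every replica gets its own volume, no two <code>mysqld</code> processes ever share the same InnoDB files.</p>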
|
<p>I have a Kubernetes Cron Job for running a scheduled task every 5 minutes. I want to make sure that when a new pod is created at the next scheduled time, the earlier pod has already been terminated. The earlier pod should get terminated before the new one is created. Can Kubernetes terminate the earlier pod before creating a new one?</p>
<p>My yaml is:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-scheduled
spec:
schedule: "*/5 * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- name: cmm-callout
env:
- name: SCHEDULED
value: "true"
livenessProbe:
httpGet:
path: /myapp/status
port: 7070
scheme: HTTPS
initialDelaySeconds: 120
timeoutSeconds: 30
periodSeconds: 120
image: gcr.io/projectid/folder/my-app:9.0.8000.34
restartPolicy: Never
</code></pre>
<p>How can I make sure the earlier pod is terminated before new is created?</p>
| <p>If I understood your case correctly (the earlier pod should be terminated before a new one is created):</p>
<p><strong>1</strong>. Please use <strong>spec.jobTemplate.spec.activeDeadlineSeconds</strong> instead.</p>
<p>By setting this parameter, once a Job reaches its <strong>activeDeadlineSeconds</strong>, all of its running Pods are terminated and the Job status becomes <code>type: Failed</code> with <code>reason: DeadlineExceeded</code>.</p>
<p>example:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
activeDeadlineSeconds: 60
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster && sleep 420
restartPolicy: Never
</code></pre>
<p><strong>2</strong>. The second solution is to set <strong>concurrencyPolicy: Replace</strong>, which replaces the currently running job with a new one.</p>
<p>example:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/2 * * * *"
concurrencyPolicy: Replace
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster && sleep 420
restartPolicy: Never
</code></pre>
<p>Resources:</p>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="noreferrer">Job Termination</a></p></li>
<li><p><a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="noreferrer">Concurrency Policy</a></p></li>
</ul>
|
<p>How can I write the following in namespaceSelector: </p>
<p>key=val OR key1=val1?</p>
<p>The following example is for AND; how can I write something for the OR option?</p>
<pre><code>matchExpressions:
- {key: app, operator: In, values: [application]}
- {key: app1, operator: In, values: [application1]}
</code></pre>
<p>I found an option to do OR between values (see below) but not between keys.</p>
<pre><code>{key: app, operator: In, values: [application, app2, app3]}
</code></pre>
| <p>For the <code>matchExpressions</code> the definition is that all expressions must match (AND). In official <strong>Kubernetes</strong> <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements" rel="nofollow noreferrer">documentation</a> you can read:</p>
<blockquote>
<p>All of the requirements, from both matchLabels and matchExpressions
are ANDed together – they must all be satisfied in order to match.</p>
</blockquote>
<p>You could add multiple <code>from</code> terms to your network policy; only one of them needs to match (OR). More on the <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="nofollow noreferrer">behavior of to and from selectors</a>.</p>
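<p>A sketch of that OR in a NetworkPolicy, using the label keys from your question — the entries in a single <code>from</code> list are independent, so traffic matching either selector is allowed:</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-either-namespace
spec:
  podSelector: {}              # applies to all pods in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:       # matches namespaces with app=application ...
        matchLabels:
          app: application
    - namespaceSelector:       # ... OR namespaces with app1=application1
        matchLabels:
          app1: application1
```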
|
<p>I have two Kubernetes clusters representing dev and staging environments.</p>
<p>Separately, I am also deploying a custom DevOps dashboard which will be used to monitor these two clusters. On this dashboard I will need to show information such as:</p>
<ul>
<li>RAM/HD Space/CPU usage of each deployed Pod in each environment</li>
<li>Pod health (as in if it has too many container restarts etc)</li>
<li>Pod uptime</li>
</ul>
<p>All these stats have to be at a cluster level and also per namespace, preferably. As in, if I query a for a particular namespace, I have to get all the resource usages of that namespace.</p>
<p>So the webservice layer of my dashboard will send a service request to the master node of my respective cluster in order to fetch this information.</p>
<p>Another thing I need is to implement real time notifications in my DevOps dashboard. Every time a container fails, I need to catch that event and notify relevant personnel.</p>
<p>I have been reading around and two things that pop up a lot are Prometheus and Metric Server. Do I need both or will one do? I set up Prometheus on a local cluster but I can't find any endpoints it exposes which could be called by my dashboard service. I'm also trying to set up Prometheus AlertManager but so far it hasn't worked as expected. Trying to fix it now. Just wanted to check if these technologies have the capabilities to meet my requirements.</p>
<p>Thanks!</p>
| <p>I don't know why you are considering building your own custom monitoring system. The <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">Prometheus Operator</a> provides all the functionality you mentioned.
You would end up with your own Grafana dashboard containing all the required information.</p>
<p>If you need custom notifications, you can set them up in <a href="https://prometheus.io/docs/alerting/configuration/#configuration" rel="nofollow noreferrer">Alertmanager</a> by creating the appropriate <code>prometheusrules.monitoring.coreos.com</code> resources; you can find a lot of preconfigured rules in <a href="https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/runbook.md#group-name-kubernetes-apps" rel="nofollow noreferrer">kubernetes-mixin</a>.
Using labels and namespaces in Alertmanager you can set up a route that notifies the person responsible for a given deployment.</p>
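<p>For the "notify when a container fails" requirement, a hedged <code>PrometheusRule</code> sketch (the metric assumes kube-state-metrics is scraped, as it is in the default operator setup; the thresholds are illustrative):</p>

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: container-restart-alerts
  labels:
    role: alert-rules
spec:
  groups:
  - name: container.restarts
    rules:
    - alert: ContainerFrequentlyRestarting
      # Fires when a container restarted more than twice within 15 minutes
      expr: increase(kube_pod_container_status_restarts_total[15m]) > 2
      for: 5m
      labels:
        severity: warning
      annotations:
        message: '{{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently.'
```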
<p><code>Do I need both or will one do?</code> Yes, you need both: <code>Prometheus</code> collects and aggregates metrics, while the <code>metrics-server</code> exposes metrics from your cluster nodes for Prometheus to scrape.</p>
<p>If you have problems with Prometheus, Alertmanager and so on, consider using the <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">helm chart</a> as an entry point.</p>
|
<p>I'm trying to connect to a postgres container running in docker on my mac, from my minikube setup in virtualbox. But I'm running into dns resolve issues.</p>
<p>I'm running postgres as a container on docker</p>
<pre><code>> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a794aca3a6dc postgres "docker-entrypoint.s…" 3 days ago Up 3 days 0.0.0.0:5432->5432/tcp postgres
</code></pre>
<p>On my Mac / VirtualBox / Minikube setup I create a service</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: postgres-svc
spec:
type: ExternalName
externalName: 10.0.2.2
ports:
- port: 5432
</code></pre>
<p><code>10.0.2.2</code> is alias to host interface (found this information <a href="https://stackoverflow.com/questions/9808560/why-do-we-use-10-0-2-2-to-connect-to-local-web-server-instead-of-using-computer/34732276#34732276">here</a>)</p>
<pre><code>> kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21d
hazelnut postgres-svc ExternalName <none> 10.0.2.2 5432/TCP 27m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 21d
kube-system kubernetes-dashboard ClusterIP 10.108.181.235 <none> 80/TCP 19d
kube-system tiller-deploy ClusterIP 10.101.218.56 <none> 44134/TCP 20d
</code></pre>
<p>(our namespace is <code>hazelnut</code>, don't ask:-)</p>
<p>In my deployment, if I connect to 10.0.2.2 directly, it connects to the postgres without issue, but if I try to resolve the hostname of the kubernetes service it doesn't work.
So it's not a firewall or routing issue, pure dns.</p>
<p>I've tried <code>postgres-svc.hazelnut.cluster.local</code>,
<code>postgres-svc</code>, <code>postgres-svc.hazelnut.svc.cluster.local</code>, <code>postgres-svc.hazelnut</code> all resulting in NXDOMAIN</p>
<p><code>kubernetes.default</code> works though.</p>
<pre><code>> nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
</code></pre>
<p>In this <a href="https://stackoverflow.com/questions/52356455/kubernetes-externalname-service-not-visible-in-dns">post</a> they mention that using kube-dns should solve it, but I'm using it and to no avail</p>
<pre><code>> kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 21d
...
</code></pre>
<p>Any idea how I can get this to work properly?</p>
| <p>For the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName service type</a> the <code>externalName</code> should be FQDN, not an IP address, e.g. </p>
<pre><code>kind: Service
...
metadata:
  name: postgres-svc
spec:
  type: ExternalName
  externalName: mydb.mytestdomain
</code></pre>
<p>The host machine should be able to resolve that FQDN. You might add a record to <code>/etc/hosts</code> on the Mac host to achieve that: </p>
<pre><code>10.0.2.2 mydb.mytestdomain
</code></pre>
<p>Actually, CoreDNS uses the name resolver configured in <code>/etc/resolv.conf</code> in the Minikube VM. That points to the name resolver of the VirtualBox NAT network (10.0.2.3). In turn, VirtualBox relies on the host's name resolving mechanism, which looks through the local <code>/etc/hosts</code> file. </p>
<p>Tested for:
MacOS 10.14.3,
VBox 6.0.10,
kubernetes 1.15.0,
minikube 1.2.0,
coredns</p>
|
<p>It looks like I might need to traverse the <code>v1.Node->NodeStatus->Conditions[]</code> slice and sort by transition time and find if the most recent timed condition is <code>NodeConditionType == "Ready"</code>. I am wondering if there is a better way or if that approach is flawed?</p>
| <p>You are looking in the right place, but Conditions may not work exactly the way your question implies they do. Conditions shouldn't be seen as time-based events, but rather current states. To quote the <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties" rel="noreferrer">API conventions documentation</a>:</p>
<blockquote>
<p>Conditions represent the latest available observations of an object's state. </p>
</blockquote>
<p>As such, it isn't necessary to look for the latest Condition, but rather <em>the</em> condition for the type of state that you are interested in observing. There should only be one whose <code>NodeConditionType</code> is <code>Ready</code>, but you will need to check the <code>.Status</code> field on <code>NodeCondition</code> to confirm if its value is <code>True</code>, <code>False</code> or <code>Unknown</code>.</p>
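<p>In Go terms that means scanning <code>node.Status.Conditions</code> for the entry whose <code>Type</code> is <code>Ready</code> and checking its <code>Status</code>, with no sorting by transition time. Here is a minimal sketch of the check, written over the JSON (dict) form of the conditions; the sample values are illustrative:</p>

```python
def node_is_ready(conditions):
    """Return True when the condition of type "Ready" has status "True".

    `conditions` stands in for .status.conditions of a Node, given here
    as plain dicts (the JSON form of the API objects).
    """
    for cond in conditions:
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # the kubelet has not reported a Ready condition yet


sample = [
    {"type": "MemoryPressure", "status": "False"},
    {"type": "DiskPressure", "status": "False"},
    {"type": "Ready", "status": "True"},
]
print(node_is_ready(sample))  # True
```

<p>Note that a missing Ready condition is treated the same as a not-ready node here; depending on your use case you may want to distinguish <code>Unknown</code> from <code>False</code> as well.</p>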
|
<p>I'm running a dev kubernetes cluster on Docker Machine with GCE as provider. Cluster was setup using this tutorial: <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md</a>. Everything's working fine except when I try to do <code>port-forward</code>. I get:</p>
<pre><code>E1104 00:58:23.210982 18552 portforward.go:310] An error occurred forwarding 650 -> 650: Error forwarding port 650 to pod pfsd-rc-7xrq1_default, uid : Unable to do port forwarding: socat not found.
I1104 00:58:23.220147 18552 portforward.go:251] Handling connection for 650
E1104 00:58:23.480593 18552 portforward.go:310] An error occurred forwarding 650 -> 650: Error forwarding port 650 to pod pfsd-rc-7xrq1_default, uid : Unable to do port forwarding: socat not found.
I1104 00:58:23.481531 18552 portforward.go:251] Handling connection for 650
E1104 00:58:23.851200 18552 portforward.go:310] An error occurred forwarding 650 -> 650: Error forwarding port 650 to pod pfsd-rc-7xrq1_default, uid : Unable to do port forwarding: socat not found.
I1104 00:58:23.852122 18552 portforward.go:251] Handling connection for 650
</code></pre>
<p>I've tried installing locally, on the GCE machine and inside the container and nothing did the trick. Anyone else hit this?</p>
| <p>It's a bit late but still, I think it will be helpful for other people.</p>
<p>It says <code>socat</code> isn't installed. Running <code>apt-get -y install socat</code> on the host machine resolves the problem. It worked for me.</p>
|
<p>I want to create a kubernetes cluster using KOPS that uses a private topology completely (all master/worker nodes are on private subnets, the API ELB is internal). </p>
<p>Once the cluster is created - how do I configure kubectl to use an ssh tunnel via my bastion server ?</p>
| <p>I found a way to make <code>kubectl</code> run through an SSH tunnel. It's not ideal, but until I find something better, here it is.</p>
<p>First create the tunnel:</p>
<pre><code>ssh -f user@XX.XX.XXX.XX -L 6443:localhost:6443 -N
</code></pre>
<p>Then copy the <code>~/.kube/config</code> file on your local machine and change the cluster <code>server</code> in order to point to 127.0.0.1 instead of the server URL or IP address.</p>
<p>As the certificates are made for the server where the master node has been created, you'll get the following error:</p>
<pre><code>Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.0.0.1, not 127.0.0.1
</code></pre>
<p>You have to pass the <code>--insecure-skip-tls-verify=true</code> flag:</p>
<pre><code>kubectl --insecure-skip-tls-verify=true version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I hope this helps, and I hope to find a better way to avoid this <code>--insecure-skip-tls-verify=true</code> flag.</p>
<hr>
<h1>Update</h1>
<p>Since my comment, I found the <a href="https://gravitational.com/teleport/" rel="noreferrer">Teleport</a> project from <a href="https://gravitational.com" rel="noreferrer">Gravitational</a>. It started as an SSH tool for passwordless authentication (you log in once with an <a href="https://en.wikipedia.org/wiki/One-time_password" rel="noreferrer">OTP</a>, and a certificate with a time-limited validity is issued for your user and used to authenticate to the allowed servers), and it is also Kubernetes compatible.</p>
<p>Basically you have to :</p>
<ol>
<li>deploy their binary and configure it (quite easy).</li>
<li>login using <code>tsh login --proxy https://yourserveripaddress:3080</code></li>
<li>use <code>kubectl</code> to access your cluster.</li>
</ol>
<p>The magic thing here is that Teleport will update your <code>~/.kube/config</code> file in order to access your cluster.</p>
<p>It really works well and you should consider giving it a try.</p>
<p>In the case you're using <a href="https://www.chef.io" rel="noreferrer">Chef</a>, I have made <a href="https://gitlab.com/pharmony/teleport-ce-cookbook/" rel="noreferrer">a cookbook for Teleport</a>.</p>
|
<p>I need to setup mutual tls communication from kubernetes pod to external service. My system is running with istio system.</p>
<p>I found reference about this.</p>
<p><a href="https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/#TLSSettings" rel="nofollow noreferrer">https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/#TLSSettings</a></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-mtls
spec:
  host: "*.external.com"
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
</code></pre>
<p>According to this document, all I need to do is set mode MUTUAL (not ISTIO_MUTUAL) and set the certificate files. As you can see, <strong>clientCertificate</strong>, <strong>privateKey</strong> and <strong>caCertificates</strong> are local file paths.</p>
<p>I think they should be on the Envoy proxy's disk, but I couldn't find a way to put my certificate files into the Envoy proxy's volume.</p>
<p>How can I do that?</p>
| <p>I found a solution.</p>
<ol>
<li>create secret or config map</li>
</ol>
<pre><code>kubectl create secret generic my-cert --from-file=cert1.crt --from-file=cert2.crt
</code></pre>
<ol start="2">
<li>annotate pod or deployment with <strong>sidecar.istio.io/userVolumeMount</strong>, <strong>sidecar.istio.io/userVolume</strong></li>
</ol>
<pre><code>annotations:
  sidecar.istio.io/userVolumeMount: '[{"name":"my-cert", "mountPath":"/etc/my-cert", "readonly":true}]'
  sidecar.istio.io/userVolume: '[{"name":"my-cert", "secret":{"secretName":"my-cert"}}]'
</code></pre>
<p>Documentation on these and other annotations: <a href="https://preliminary.istio.io/docs/reference/config/annotations/" rel="noreferrer">https://preliminary.istio.io/docs/reference/config/annotations/</a></p>
<p>Done. The certificate files are now mounted into the Envoy sidecar.</p>
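<p>One gotcha with these annotations is that the values are JSON arrays embedded in YAML strings, so quoting mistakes are easy to make. A small sketch (using the same names as above) that generates the values with <code>json.dumps</code> so they are guaranteed to be valid JSON:</p>

```python
import json

# Values for the Istio sidecar volume annotations, built as Python
# structures and serialized, rather than hand-written JSON strings.
user_volume_mount = [{"name": "my-cert", "mountPath": "/etc/my-cert", "readonly": True}]
user_volume = [{"name": "my-cert", "secret": {"secretName": "my-cert"}}]

annotations = {
    "sidecar.istio.io/userVolumeMount": json.dumps(user_volume_mount),
    "sidecar.istio.io/userVolume": json.dumps(user_volume),
}

for key, value in annotations.items():
    print("%s: '%s'" % (key, value))
```

<p>The printed lines can be pasted under the pod's <code>annotations:</code> block as-is.</p>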
|
<p>I would like to use a <code>Secret</code> inside a <code>ConfigMap</code>. Is this possible? </p>
<p><strong>Example:</strong></p>
<p>An example where this might be required is if you would like to write from <a href="https://docs.fluentd.org/output/s3" rel="nofollow noreferrer">Fluentd to S3</a>. In the configuration you have to add your AWS credentials.</p>
<p><strong>Alternatives:</strong> </p>
<p>Using <strong>environment variables</strong> on the cluster itself. I do not like this idea, because the variable would still contain the secret as plain text.</p>
<p>Passing the password during set-up. If you are using <strong>deployment tools</strong> it might be possible to pass the secret during the deployment of your application. This is also not a nice solution since you are still passing the secret as plain text to the deployment tool. An advantage of this approach is that you do not accidentally check-in your secret to git. </p>
| <p>Try to <a href="https://medium.com/merapar/securing-iam-access-in-kubernetes-cfbcc6954de" rel="nofollow noreferrer">avoid</a> using AWS credentials in Kubernetes.</p>
<p>As you can <a href="https://docs.fluentd.org/output/s3#parameters" rel="nofollow noreferrer">see</a> <code>aws_key_id</code> and <code>aws_sec_key</code> are the optional fields.</p>
<p>Make use of <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html" rel="nofollow noreferrer">AWS IAM role</a> and assign it to the kubernetes nodes.</p>
<p>And then try to run your fluentd application without aws credentials in its <a href="https://docs.fluentd.org/output/s3#example-configuration" rel="nofollow noreferrer">config</a>.</p>
<p>Just give it a try.</p>
<p>Hope this helps.</p>
<p><strong>Update:</strong></p>
<p><a href="https://medium.com/merapar/securing-iam-access-in-kubernetes-cfbcc6954de" rel="nofollow noreferrer">This</a> article explain different ways to use aws iam for kubernetes.</p>
<p><a href="https://github.com/jtblin/kube2iam" rel="nofollow noreferrer">Kube2iam</a> and many <a href="https://medium.com/building-ibotta/moving-from-kube2iam-to-kiam-a000639b839e" rel="nofollow noreferrer">other</a> tools like this, might help. Give it a try.</p>
|
<p>I have a program which has multiple independent<sup>1</sup> components. </p>
<p>It is trivial to add a liveness probe in all of the components, however it's not easy to have a <em>single</em> liveness probe which would determine the health of all of the program's components. </p>
<p>How can I make kubernetes look at multiple liveness probes and restart the container when any of those are defunct?</p>
<p>I know it can be achieved by adding more software, for example an additional bash script which does the liveness checks, but I am looking for a native way to do this.</p>
<hr/>
<p><sup>1</sup>By independent I mean that failure of one component does not make the other components fail.</p>
| <p>The Kubernetes API allows one liveness and one readiness probe per container. I recommend creating a service endpoint that centralizes the validation of all your components, and pointing an HTTP probe at it:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /monitoring/alive
    port: 3401
    httpHeaders:
    - name: X-Custom-Header
      value: Awesome
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
</code></pre>
<p>or use a bash script for the same task, like:</p>
<pre><code>livenessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - ./liveness.sh
  initialDelaySeconds: 220
  timeoutSeconds: 5
</code></pre>
<p>liveness.sh</p>
<pre><code>#!/bin/sh
# grep for "[j]ava" so the grep process itself is not counted
if [ "$(ps -ef | grep '[j]ava' | wc -l)" -ge 1 ]; then
  exit 0
else
  echo "Nothing happens!" 1>&2
  exit 1
fi
</code></pre>
<p>If the probe fails, you will see the failure message in the pod events:
"Warning Unhealthy Pod Liveness probe failed: Nothing happens!"</p>
<p>Hope this helps</p>
|
<p>I had been trying to implement Kubernetes HPA using metrics from Kafka-exporter. HPA supports Prometheus, so we tried writing the metrics to a Prometheus instance. From there, we are unclear on the steps to follow. Is there an article that explains this in detail?</p>
<p>I followed <a href="https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07" rel="nofollow noreferrer">https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07</a></p>
<p>for the same in GCP using Stackdriver, and the implementation worked like a charm. But we are struggling with the on-premise setup, as Stackdriver needs to be replaced by Prometheus.</p>
| <p>In order to scale based on custom metrics, Kubernetes needs an API it can query for those metrics, and that API needs to implement the custom metrics interface.</p>
<p>So for Prometheus, you need to set up an API that exposes Prometheus metrics through the custom metrics API. Luckily, there already is an <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">adapter</a>.</p>
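<p>Once an adapter is registered, the HPA controller fetches metrics from paths under the <code>custom.metrics.k8s.io</code> group. A sketch of how such a query path is assembled (the namespace and metric name here are illustrative):</p>

```python
def custom_metric_path(namespace, resource, name, metric):
    """Build a custom metrics API path of the kind the HPA controller queries."""
    return ("/apis/custom.metrics.k8s.io/v1beta1"
            "/namespaces/%s/%s/%s/%s" % (namespace, resource, name, metric))


# e.g. a per-pod consumer lag metric exposed through the adapter
print(custom_metric_path("default", "pods", "*", "kafka_consumergroup_lag"))
```

<p>Querying such a path with <code>kubectl get --raw</code> is a quick way to confirm the adapter is serving the metric before wiring up the HPA.</p>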
|
<p>I am very new to Kuberetes and I have done some work with docker previously. I am trying to accomplish following:</p>
<ol>
<li>Spin up Minikube</li>
<li>Use Kube-ctl to spin up a docker image from docker hub.</li>
</ol>
<p>I started minikube and things look like they are up and running. Then I pass following command</p>
<p>kubectl run nginx --image=nginx (Please note I do not have this image anywhere on my machine and I am expecting k8 to fetch it for me)</p>
<p>Now, when I do that, it spins up the pod but the status is <code>ImagePullBackOff</code>. So I ran <code>kubectl describe pod</code> command on it and the results look like following:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m default-scheduler Successfully assigned default/ngix-67c6755c86-qm5mv to minikube
Warning Failed 8m kubelet, minikube Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:52133->192.168.64.1:53: read: connection refused
Normal Pulling 8m (x2 over 8m) kubelet, minikube Pulling image "nginx"
Warning Failed 8m (x2 over 8m) kubelet, minikube Error: ErrImagePull
Warning Failed 8m kubelet, minikube Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:40073->192.168.64.1:53: read: connection refused
Normal BackOff 8m (x3 over 8m) kubelet, minikube Back-off pulling image "nginx"
Warning Failed 8m (x3 over 8m) kubelet, minikube Error: ImagePullBackOff
</code></pre>
<p>Then I searched around to see if anyone has faced similar issues and it turned out that some people have and they did resolve it by restarting minikube using some more flags which look like below:</p>
<p><code>minikube start --vm-driver="xhyve" --insecure-registry="$REG_IP":80</code></p>
<p>when I do <code>nslookup</code> inside Minikube, it does resolve with following information:</p>
<pre><code>Server: 10.12.192.22
Address: 10.12.192.22#53
Non-authoritative answer:
hub.docker.com canonical name = elb-default.us-east-1.aws.dckr.io.
elb-default.us-east-1.aws.dckr.io canonical name = us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com.
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 52.205.36.130
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 3.217.62.246
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 35.169.212.184
</code></pre>
<p>still no luck. Is there anything that I am doing wrong here?</p>
| <p>The error message suggests that the Docker daemon running in the minikube VM can't resolve the <code>registry-1.docker.io</code> hostname because the DNS nameserver it's configured to use for DNS resolution (<code>192.168.64.1:53</code>) is refusing connection. It's strange to me that the Docker daemon is trying to resolve <code>registry-1.docker.io</code> via a nameserver at <code>192.168.64.1</code> but when you <code>nslookup</code> on the VM it's using a nameserver at <code>10.12.192.22</code>. I did an Internet search for "minkube Get registry-1.docker.io/v2: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53" and found an issue where someone made <a href="https://github.com/docker/for-mac/issues/1906#issuecomment-361833738" rel="noreferrer">this comment</a>, which seems identical to your problem and seems specific to <code>xhyve</code>.</p>
<p>In that comment the person says:</p>
<blockquote>
<p>This issue does look like an xhyve issue not seen with virtualbox.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Switching to <strong>virtualbox</strong> fixed this issue for me.</p>
<p>I stopped minikube, deleted it, started it without <code>--vm-driver=xhyve</code> (minikube uses virtualbox driver by default), and then <code>docker build -t hello-node:v1 .</code> worked fine without errors</p>
</blockquote>
|
<p>I am using python with python-kubernetes with a minikube running locally, e.g there are no cloud issues. </p>
<p>I am trying to create a job and provide it with data to run on. I would like to provide it with a mount of a directory with my local machine data. </p>
<p>I am using <a href="https://github.com/kubernetes-client/python/blob/master/examples/job_examples.py" rel="nofollow noreferrer">this</a> example and trying to add a mount volume
This is my code after adding the keyword volume_mounts (I tried multiple places, multiple keywords and nothing works) </p>
<pre><code>from os import path
import yaml
from kubernetes import client, config

JOB_NAME = "pi"


def create_job_object():
    # Configureate Pod template container
    container = client.V1Container(
        name="pi",
        image="perl",
        volume_mounts=["/home/user/data"],
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"])
    # Create and configurate a spec section
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pi"}),
        spec=client.V1PodSpec(restart_policy="Never",
                              containers=[container]))
    # Create the specification of deployment
    spec = client.V1JobSpec(
        template=template,
        backoff_limit=0)
    # Instantiate the job object
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=JOB_NAME),
        spec=spec)
    return job


def create_job(api_instance, job):
    # Create job
    api_response = api_instance.create_namespaced_job(
        body=job,
        namespace="default")
    print("Job created. status='%s'" % str(api_response.status))


def update_job(api_instance, job):
    # Update container image
    job.spec.template.spec.containers[0].image = "perl"
    # Update the job
    api_response = api_instance.patch_namespaced_job(
        name=JOB_NAME,
        namespace="default",
        body=job)
    print("Job updated. status='%s'" % str(api_response.status))


def delete_job(api_instance):
    # Delete job
    api_response = api_instance.delete_namespaced_job(
        name=JOB_NAME,
        namespace="default",
        body=client.V1DeleteOptions(
            propagation_policy='Foreground',
            grace_period_seconds=5))
    print("Job deleted. status='%s'" % str(api_response.status))


def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()
    batch_v1 = client.BatchV1Api()
    # Create a job object with client-python API. The job we
    # created is same as the `pi-job.yaml` in the /examples folder.
    job = create_job_object()
    create_job(batch_v1, job)
    update_job(batch_v1, job)
    delete_job(batch_v1)


if __name__ == '__main__':
    main()
</code></pre>
<p>I get this error </p>
<blockquote>
<p>HTTP response body:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Job
in version \"v1\" cannot be handled as a Job: v1.Job.Spec:
v1.JobSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers:
[]v1.Container: v1.Container.VolumeMounts: []v1.VolumeMount:
readObjectStart: expect { or n, but found \", error found in #10 byte
of ...|ounts\": [\"/home/user|..., bigger context ...| \"image\":
\"perl\", \"name\": \"pi\", \"volumeMounts\": [\"/home/user/data\"]}],
\"restartPolicy\": \"Never\"}}}}|...","reason":"BadRequest","code":400</p>
</blockquote>
<p>What am i missing here? </p>
<p>Is there another way to expose data to the job?</p>
<p>edit: trying to use client.V1VolumeMount.
I am trying to add this code, and add the mount object in different init functions, e.g.</p>
<pre><code>mount = client.V1VolumeMount(mount_path="/data", name="shai")
client.V1Container
client.V1PodTemplateSpec
client.V1JobSpec
client.V1Job
</code></pre>
<p>under multiple keywords, it all results in errors, is this the correct object to use? How shell I use it if at all? </p>
<p>edit: trying to pass volume_mounts as a list with the following code suggested in the answers: </p>
<pre><code>def create_job_object():
    # Configureate Pod template container
    container = client.V1Container(
        name="pi",
        image="perl",
        volume_mounts=["/home/user/data"],
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"])
    # Create and configurate a spec section
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pi"}),
        spec=client.V1PodSpec(restart_policy="Never",
                              containers=[container]))
    # Create the specification of deployment
    spec = client.V1JobSpec(
        template=template,
        backoff_limit=0)
    # Instantiate the job object
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=JOB_NAME),
        spec=spec)
    return job
</code></pre>
<p>And still getting a similar error </p>
<blockquote>
<p>kubernetes.client.rest.ApiException: (422) Reason: Unprocessable
Entity HTTP response headers: HTTPHeaderDict({'Content-Type':
'application/json', 'Date': 'Tue, 06 Aug 2019 06:19:13 GMT',
'Content-Length': '401'}) HTTP response body:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Job.batch
\"pi\" is invalid:
spec.template.spec.containers[0].volumeMounts[0].name: Not found:
\"d\"","reason":"Invalid","details":{"name":"pi","group":"batch","kind":"Job","causes":[{"reason":"FieldValueNotFound","message":"Not
found:
\"d\"","field":"spec.template.spec.containers[0].volumeMounts[0].name"}]},"code":422}</p>
</blockquote>
| <p>The <code>V1Container</code> call expects a list of <code>V1VolumeMount</code> objects for the <code>volume_mounts</code> parameter, but you passed in a list of strings. Note also that a <code>V1VolumeMount</code> needs a <code>name</code> that matches a <code>V1Volume</code> declared in the pod spec; that is what the <code>Not found</code> error in your update is complaining about.</p>
<p>Code:</p>
<pre><code>def create_job_object():
    # A volume mount needs a name that matches a volume declared in the
    # pod spec; a hostPath volume is used here as an example for minikube.
    volume = client.V1Volume(
        name="data-volume",
        host_path=client.V1HostPathVolumeSource(path="/home/user/data"))
    volume_mount = client.V1VolumeMount(
        name="data-volume",
        mount_path="/data"
        # other optional arguments, see the volume mount doc link below
    )
    # Configure the Pod template container
    container = client.V1Container(
        name="pi",
        image="perl",
        volume_mounts=[volume_mount],
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"])
    # Create and configure a spec section
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pi"}),
        spec=client.V1PodSpec(restart_policy="Never",
                              volumes=[volume],
                              containers=[container]))
    ....
</code></pre>
<p>references:</p>
<ul>
<li><p><a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Container.md" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Container.md</a></p></li>
<li><p><a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1VolumeMount.md" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1VolumeMount.md</a></p></li>
</ul>
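<p>The <code>Not found</code> error in the question's update comes from a related constraint: every volume mount must reference, by <code>name</code>, a volume declared in the pod spec. A stdlib-only sketch of that validation over the plain-dict form of a pod spec (the names are illustrative):</p>

```python
def undeclared_mounts(pod_spec):
    """Return the volumeMount names that no volume in the pod spec declares."""
    declared = {v["name"] for v in pod_spec.get("volumes", [])}
    missing = []
    for container in pod_spec.get("containers", []):
        for mount in container.get("volumeMounts", []):
            if mount["name"] not in declared:
                missing.append(mount["name"])
    return missing


spec = {
    "volumes": [{"name": "data-volume", "hostPath": {"path": "/home/user/data"}}],
    "containers": [{
        "name": "pi",
        "volumeMounts": [{"name": "data-volume", "mountPath": "/data"}],
    }],
}
print(undeclared_mounts(spec))  # [] -- every mount is backed by a volume
```

<p>The API server performs the same check on submission, which is why a mount whose name has no matching volume is rejected with a 422.</p>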
|
<p>I want to execute data migration through Kubernetes cronjob using Luigi pipeline. My luigi task receives <code>--start</code> parameter which I want to pass through cronjob command.</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: migration
spec:
  schedule: "0 0 */1 * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: spark
          containers:
          - name: migratecronjob
            image: latest-image
            imagePullPolicy: Always
            env:
            - name: TAG
              value: latest-image
            - name: MIGRATION_TAG
              value: migration-v05
            command:
            - "luigi"
            - "--module"
            - "module.task"
            - "Migrate10Days"
            - "--start"
            - $(date +%Y-%m-%dT%H)
            - "--workers"
            - "10"
          restartPolicy: OnFailure
</code></pre>
<p>The cronjob does not expand <code>$(date +%Y-%m-%dT%H)</code> as a shell command substitution; it passes the literal string to the luigi task.</p>
| <p>I am not sure what you are trying to achieve, but this should work:</p>
<pre><code>- command:
  - sh
  - -c
  - 'exec luigi --module module.task Migrate10Days --start $(date +%Y-%m-%dT%H) --workers 10'
</code></pre>
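<p>For reference, <code>$(date +%Y-%m-%dT%H)</code> expands to an hour-resolution timestamp. The equivalent formatting in Python, in case you ever need to reproduce the <code>--start</code> value outside the shell:</p>

```python
from datetime import datetime

# `date +%Y-%m-%dT%H` and strftime share the same format codes.
start = datetime(2019, 8, 6, 14).strftime("%Y-%m-%dT%H")
print(start)  # 2019-08-06T14
```

<p>This is the format Luigi will receive as its <code>--start</code> parameter each time the cron job fires.</p>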
|
<p>I'm trying to setup an ingress controller(nginx) to forward some TCP traffic to a kubernetes service(GCP). There's <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-ingress-guide-nginx-example.html" rel="noreferrer">this tutorial</a> that shows how to route HTTP traffic to services (based on the paths) using nginx. I want to have a similar setup to forward TCP traffic.</p>
<p>In my cluster, I have a pod running a TCP echo server written in python using sockets. There is a service attached to the pod. If I set the service type of this service to LoadBalancer, I can run my client as follows and get the echo from the cluster. </p>
<pre class="lang-sh prettyprint-override"><code>python client.py --host <EXTERNAL-IP-OF-LOAD-BALANCER> --port <PORT>
</code></pre>
<p>Similar to the echo server, I have other TCP services in my cluster that serves other pods. Currently I have set all of them to LoadBalancers. So, they have external IPs and listen for traffic on different ports. However, I do not want to create LoadBalancers to all of these services. How would I use the nginx to route the TCP traffic to different services based on the port numbers. If nginx cannot do this, are there other options that I can use to achieve this?</p>
<hr>
<p>UPDATE:
Following the <a href="https://stackoverflow.com/users/4666002/hang-du">HangDu</a>'s answer I created the following files.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: default
data:
  9000: "default/echo-service:50000"
</code></pre>
<p>and</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: default
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: proxied-tcp-9000
      port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>Then I used <code>kubectl create -f <FILE_NAME></code> to create the config map and the service. So I was hoping I could use the external IP of the newly created service and the port 9000 and run <code>python client.py --host <EXTERNAL-IP-OF-LOAD-BALANCER> --port 9000</code> to run my echo client. However, I get a connection refused error when I do that. Am I doing something wrong?</p>
| <p>I answered a similar question on another thread. <a href="https://stackoverflow.com/questions/57301394/how-to-use-nginx-ingress-tcp-service-on-different-namespace/57309255#57309255">How to use nginx ingress TCP service on different namespace</a></p>
<p>Basically, you can specify the port and backend for your service in configmap.</p>
<p>The following is the link to the document.
<a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md</a></p>
|
<p>I deploy my Helm deploys to isolated namespaces.</p>
<p>Deleting a namespace deletes all the resources in it - except the Helm deployment.</p>
<p>Deleting a Helm deployment deletes all resource in it - except the namespace.</p>
<p>I have to do this which seems redundant:</p>
<pre><code>helm del `helm ls NAMESPACE --short` --purge
kubectl delete namespace NAMESPACE
</code></pre>
<p>I'd rather just delete my namespace and have the Helm deploy also purged - is this possible?</p>
| <blockquote>
<p>Deleting a namespace deletes all the resources in it- except the helm deployment</p>
</blockquote>
<p>This can't be (deleting a namespace implies deleting everything in it, there aren't any exceptions), and must means that the state representing Helm's concept of a deployment doesn't live in that namespace. Helm stores these as config maps in the <code>TILLER_NAMESPACE</code>. See <a href="https://helm.sh/docs/architecture/#implementation" rel="nofollow noreferrer">here</a> and <a href="https://v3.helm.sh/docs/faq/#release-names-are-now-scoped-to-the-namespace" rel="nofollow noreferrer">here</a>.</p>
<p>It's not surprising that if you create some resources with <code>helm</code> and then go "under the hood" and delete those resources directly via <code>kubectl</code>, Helm's state of the world won't result in that deployment disappearing.</p>
<blockquote>
<p>Deleting a helm deployment deletes all resource in it- except the namespace</p>
</blockquote>
<p>That sounds like expected behaviour. Presumably you created the namespace out of band with <code>kubectl</code>, it's not part of your Helm deployment. So deleting the Helm deployment wouldn't delete that namespace.</p>
<p>If you <code>kubectl create namespace NS</code> and <code>helm install CHART --namespace NS</code> then it's not surprising that to clean up, you need to <code>helm delete</code> the release and then <code>kubectl delete</code> the namespace.</p>
<p>The only way I could imagine to do that would be for the Helm chart itself to both create a namespace and create all subsequent namespace-scoped resources within that namespace. <a href="https://github.com/helm/charts/tree/5e00179b109d7290eb05a2b9705839fcf666d29a/stable/magic-namespace" rel="nofollow noreferrer">Here</a> is an example that appears to do such a thing.</p>
|
<p>I am trying to install Kubernetes on Debian 9 (stretch) server, which is on cloud and therefore can't do virtualization. And it doesn't have systemd. Also, I'm trying for really minimal configuration, not big cluster.</p>
<p>I've found Minikube, <a href="https://docs.gitlab.com/charts/development/minikube/index.html" rel="nofollow noreferrer">https://docs.gitlab.com/charts/development/minikube/index.html</a> which is supposed to run without virtualization using docker, but it requires systemd, as mentioned here <a href="https://github.com/kubernetes/minikube/issues/2704" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2704</a> (and yes I get the related error message).</p>
<p>I also found k3s, <a href="https://github.com/rancher/k3s" rel="nofollow noreferrer">https://github.com/rancher/k3s</a> which can run either on systemd or openrc, but when I install openrc using <a href="https://wiki.debian.org/OpenRC" rel="nofollow noreferrer">https://wiki.debian.org/OpenRC</a> I don't have the "net" service it depends on.</p>
<p>Then I found microk8s, <a href="https://microk8s.io/" rel="nofollow noreferrer">https://microk8s.io/</a> which needs systemd simply because snapd needs systemd.</p>
<p>Is there some other alternative or solution to mentioned problems? Or did Poettering already bribed everyone?</p>
| <p>Since you are well off the beaten path, you can probably just run things by hand with k3s. It's a single executable AFAIK. See <a href="https://github.com/rancher/k3s#manual-download" rel="nofollow noreferrer">https://github.com/rancher/k3s#manual-download</a> as a simple starting point. You will eventually want some kind of service monitor to restart things if they crash, if not systemd then perhaps Upstart (which is not packaged for Deb9) or Runit (which itself usually runs under supervision).</p>
|
<p>Trying to use the python kubernetes API to stream the output of kubernetes pod logs. (eventual goal is to stream the logs out via websocket)</p>
<p>Based off this <a href="https://github.com/kubernetes-client/python-base/pull/93" rel="noreferrer">PR</a> that was merged into the python kubernetes module, I thought watch would work with read_namespaced_pod_log?</p>
<pre><code>v1 = client.CoreV1Api()
w = watch.Watch()
for e in w.stream(v1.read_namespaced_pod_log, name=pod, namespace=namespace, follow=True, tail_lines=1, limit_bytes=560, _preload_content=False):
print(e)
</code></pre>
<p>But I get the error below. Am I missing something that needs to be passed to watch, or to read_namespaced_pod_log?</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 132, in stream
resp = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 18538, in read_namespaced_pod_log
(data) = self.read_namespaced_pod_log_with_http_info(name, namespace, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 18576, in read_namespaced_pod_log_with_http_info
" to method read_namespaced_pod_log" % key
TypeError: Got an unexpected keyword argument 'watch' to method read_namespaced_pod_log
</code></pre>
| <p>You should just do:</p>
<pre><code>from kubernetes import client, watch

v1 = client.CoreV1Api()
w = watch.Watch()
for e in w.stream(v1.read_namespaced_pod_log, name=pod, namespace=namespace):
    print(e)
</code></pre>
|
<p>I try to get Total and Free disk space on my Kubernetes VM so I can display % of taken space on it. I tried various metrics that included "filesystem" in name but none of these displayed correct total disk size. Which one should be used to do so?</p>
<p>Here is a list of metrics I tried</p>
<pre><code>node_filesystem_size_bytes
node_filesystem_avail_bytes
node:node_filesystem_usage:
node:node_filesystem_avail:
node_filesystem_files
node_filesystem_files_free
node_filesystem_free_bytes
node_filesystem_readonly
</code></pre>
| <p>According to my Grafana dashboard, the following expression works nicely for alerting on disk usage:</p>
<p><code>100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"})</code></p>
<p>The formula gives the percentage of used space on the selected mount point, so you can alert when it crosses a threshold. Make sure you include the <code>mountpoint</code> and <code>fstype</code> labels in the metrics.</p>
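<p>As a sanity check, the arithmetic behind that expression can be sketched in a few lines of Python (the byte counts below are made up):</p>
<pre><code>
```python
# 100 - (avail * 100 / size): percentage of the filesystem already used
size_bytes = 100 * 1024**3   # node_filesystem_size_bytes (example value)
avail_bytes = 25 * 1024**3   # node_filesystem_avail_bytes (example value)

used_pct = 100 - (avail_bytes * 100 / size_bytes)
print(used_pct)  # 75.0
```
</code></pre>
<p>Prometheus evaluates the same arithmetic once per time series, i.e. per mountpoint/fstype label combination.</p>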
|
<p>I try to understand the Kubernetes network and I am now wondering if a CNI is used per default when installing Minikube and when yes which one?</p>
<p>In other words, can I run Minikube without a CNI provider like Flannel or Calico?</p>
| <p>By default Minikube runs on Docker's default bridge network, so the answer is yes: you can run it without a separate CNI provider. You have to install other CNI providers, like Flannel, separately.</p>
<p>For specific networking requirements, <a href="https://minikube.sigs.k8s.io/docs/" rel="nofollow noreferrer">Minikube docs</a> could be checked out.</p>
|
<p>I'm currently using Docker 19.03 and Kubernetes 1.13.5 and Rancher 2.2.4. Since 19.03, Docker has officially support natively NVIDIA GPUs just by passing <code>--gpus</code> option. Example (from <a href="https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(Native-GPU-Support)#usage" rel="noreferrer">NVIDIA/nvidia-docker github</a>):</p>
<pre><code> docker run --gpus all nvidia/cuda nvidia-smi
</code></pre>
<p>But in Kubernetes, there's no option to pass Docker CLI options. So if I need to run a GPU instance, I have to install <code>nvidia-docker2</code>, which is not convenient to use.</p>
<p>Is there any way to pass the Docker CLI options, or to use the NVIDIA runtime, without installing <code>nvidia-docker2</code>?</p>
| <p><a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/" rel="nofollow noreferrer">GPU's are scheduled</a> via <a href="https://kubernetes.io/docs/concepts/cluster-administration/device-plugins" rel="nofollow noreferrer">device plugins</a> in Kubernetes.</p>
<blockquote>
<p>The <a href="https://github.com/NVIDIA/k8s-device-plugin" rel="nofollow noreferrer">official NVIDIA GPU device</a> plugin has the following requirements:</p>
<ul>
<li>Kubernetes nodes have to be pre-installed with NVIDIA drivers.</li>
<li>Kubernetes nodes have to be pre-installed with <a href="https://github.com/NVIDIA/k8s-device-plugin#preparing-your-gpu-nodes" rel="nofollow noreferrer">nvidia-docker 2.0</a></li>
<li>nvidia-container-runtime must be configured as the <a href="https://github.com/NVIDIA/k8s-device-plugin#preparing-your-gpu-nodes" rel="nofollow noreferrer">default runtime</a> for docker instead of runc.</li>
<li>NVIDIA drivers ~= 361.93</li>
</ul>
</blockquote>
<p>Once the nodes are setup GPU's become another resource in your spec like <code>cpu</code> or <code>memory</code>.</p>
<pre><code>spec:
containers:
- name: gpu-thing
image: whatever
resources:
limits:
nvidia.com/gpu: 1
</code></pre>
|
<p>I have autoscaling enabled on Google Kubernetes Cluster and one of the pods I can see the usage is much lower</p>
<p><a href="https://i.stack.imgur.com/jHLLT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jHLLT.png" alt="enter image description here"></a></p>
<p>I have a total of 6 nodes and I expect at least this node to be terminated. I have gone through the following:
<a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node</a></p>
<p>I have added this annotation to all my pods </p>
<pre><code>cluster-autoscaler.kubernetes.io/safe-to-evict: true
</code></pre>
<p>However, the cluster autoscaler scales up correctly, but doesn't scale down as I expect it to.</p>
<p>I have the following logs</p>
<pre><code>$ kubectl logs kube-dns-autoscaler-76fcd5f658-mf85c -n kube-system
autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:90: Failed to list *v1.Node: Get https://10.55.240.1:443/api/v1/nodes?resourceVersion=0: dial tcp 10.55.240.1:443: getsockopt: connection refused
E0628 20:34:36.187949 1 reflector.go:190] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:90: Failed to list *v1.Node: Get https://10.55.240.1:443/api/v1/nodes?resourceVersion=0: dial tcp 10.55.240.1:443: getsockopt: connection refused
E0628 20:34:47.191061 1 reflector.go:190] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:90: Failed to list *v1.Node: Get https://10.55.240.1:443/api/v1/nodes?resourceVersion=0: net/http: TLS handshake timeout
I0628 20:35:10.248636 1 autoscaler_server.go:133] ConfigMap not found: Get https://10.55.240.1:443/api/v1/namespaces/kube-system/configmaps/kube-dns-autoscaler: net/http: TLS handshake timeout, will create one with default params
E0628 20:35:17.356197 1 autoscaler_server.go:95] Error syncing configMap with apiserver: configmaps "kube-dns-autoscaler" already exists
E0628 20:35:18.191979 1 reflector.go:190] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:90: Failed to list *v1.Node: Get https://10.55.240.1:443/api/v1/nodes?resourceVersion=0: dial tcp 10.55.240.1:443: i/o timeout
</code></pre>
<p>I am not sure the above are the relevant logs, what is the correct way to debug this issue?</p>
<p>My pods have got local storage. I have been trying to debug this issue using </p>
<pre><code>kubectl drain gke-mynode-d57ded4e-k8tt
error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): fluentd-gcp-v3.1.1-qzdzs, prometheus-to-sd-snqtn; pods with local storage (use --delete-local-data to override): mydocs-585879b4d5-g9flr, istio-ingressgateway-9b889644-v8bgq, mydocs-585879b4d5-7lmzk
</code></pre>
<p>I think it's safe to ignore the <code>daemonsets</code>, as CA should be OK to evict them; however, I am not sure how to make the CA understand that mydocs is OK to be evicted and moved to another node after adding the annotation.</p>
<h1>EDIT</h1>
<p>The min and the max nodes have been set correctly as seen on the GCP console
<a href="https://i.stack.imgur.com/SNZUa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SNZUa.png" alt="enter image description here"></a></p>
| <p>The <code>kubectl logs</code> command is for the DNS autoscaler, not the cluster autoscaler. It will give you information on the number of kube-dns replicas in the cluster, not the number of nodes or scaling decisions.</p>
<p>From the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node" rel="nofollow noreferrer">cluster autoscaler FAQ</a> (and taking into account what you wrote in your question):</p>
<blockquote>
<p>Kube-system pods that:</p>
<ul>
<li>are not run on the node by default</li>
<li>Pods with local storage</li>
</ul>
</blockquote>
<p>And additionally, restrictive <code>Pod Disruption Budgets</code>. However since is not stated in the question, I'll assume you haven't set any.</p>
<p>Although you have pods with local storage, you added the annotation to make them safe to evict so that leaves the system pods not run by default in the nodes.</p>
<p>Since system pods in GKE are managed by the addon manager's <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager" rel="nofollow noreferrer">reconciliation loop</a>, you can't add this annotation to them, which might be preventing their eviction.</p>
<p>In this scenario, you may consider using a <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-scale-my-cluster-to-just-1-node" rel="nofollow noreferrer"><code>Pod Disruption Budget</code> configured to allow the autoscaler to evict them</a>.</p>
<p>This <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-to-set-pdbs-to-enable-ca-to-move-kube-system-pods" rel="nofollow noreferrer"><code>Pod Disruption Budget</code></a> can include DNS and logging pods that aren't run by default in the nodes.</p>
<p>Unfortunately, GKE is a managed option so there isn't much to apply from the autoscaler FAQ. However, if you want to go further, you might as well consider a <a href="https://en.wikipedia.org/wiki/Bin_packing_problem" rel="nofollow noreferrer">pod binpacking strategy</a> using <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">Affinity and anti-affinity</a>, <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">Taints and tolerations</a> and <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">requests and limits</a> to fit them properly, making the downscaling easier whenever possible.</p>
<p>Finally, on GKE you can use the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-check-what-is-going-on-in-ca-" rel="nofollow noreferrer"><code>cluster-autoscaler-status</code> ConfigMap</a> to check what decisions the autoscaler is making.</p>
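<p>If you go the <code>Pod Disruption Budget</code> route, a minimal sketch might look like the following (the <code>k8s-app: kube-dns</code> label and the budget value are assumptions you should adapt to your cluster):</p>

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kube-dns-pdb
  namespace: kube-system
spec:
  maxUnavailable: 1        # allow the autoscaler to evict one replica at a time
  selector:
    matchLabels:
      k8s-app: kube-dns    # assumed label; check the labels on your kube-dns pods
```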
|
<p>Is there a configuration in Kubernetes horizontal pod autoscaling to specify a minimum delay for a pod to be running or created before scaling up/down?</p>
<p>For example with something like:</p>
<pre><code># I am looking for a flag like this
--horizontal-pod-autoscale-initial-upscale-delay=5m0s
# Similar to these existing flags
--horizontal-pod-autoscaler-downscale-delay=2m0s
--horizontal-pod-autoscaler-upscale-delay=2m0s
</code></pre>
<p>Having as a result:</p>
<ul>
<li>Wait for 5 min before any upscale occur</li>
<li>After 5 min, perform a downscale at most every 2 min</li>
</ul>
<p>I have a situation where a Pod consumes lots of resources on start-up for bootstrapping (which is expected) but I don't want it scaled during this time, and once bootstrap is done it may be eligible for autoscaling.</p>
| <p>This flag actually exists: <code>--horizontal-pod-autoscaler-cpu-initialization-period</code>.
In addition, you need to consider the readiness delay, <code>--horizontal-pod-autoscaler-initial-readiness-delay</code>, and the metric loop time, <code>--horizontal-pod-autoscaler-sync-period</code>, to calculate the total (max/min/average) delay.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a></p>
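<p>For reference, these are <code>kube-controller-manager</code> flags, so on a self-managed control plane they would be set on its manifest; a hedged sketch for a kubeadm-style static Pod (the values are illustrative, not recommendations):</p>

```yaml
# fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml (sketch)
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-cpu-initialization-period=5m0s
    - --horizontal-pod-autoscaler-initial-readiness-delay=30s
    - --horizontal-pod-autoscaler-sync-period=15s
```

<p>On managed offerings (GKE, EKS, AKS) the controller manager is not user-accessible, so these flags generally can't be changed there.</p>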
|
<p>I have two Kubernetes clusters representing dev and staging environments.</p>
<p>Separately, I am also deploying a custom DevOps dashboard which will be used to monitor these two clusters. On this dashboard I will need to show information such as:</p>
<ul>
<li>RAM/HD Space/CPU usage of each deployed Pod in each environment</li>
<li>Pod health (as in if it has too many container restarts etc)</li>
<li>Pod uptime</li>
</ul>
<p>All these stats have to be at a cluster level and also per namespace, preferably. As in, if I query a for a particular namespace, I have to get all the resource usages of that namespace.</p>
<p>So the webservice layer of my dashboard will send a service request to the master node of my respective cluster in order to fetch this information.</p>
<p>Another thing I need is to implement real time notifications in my DevOps dashboard. Every time a container fails, I need to catch that event and notify relevant personnel.</p>
<p>I have been reading around and two things that pop up a lot are Prometheus and Metric Server. Do I need both or will one do? I set up Prometheus on a local cluster but I can't find any endpoints it exposes which could be called by my dashboard service. I'm also trying to set up Prometheus AlertManager but so far it hasn't worked as expected. Trying to fix it now. Just wanted to check if these technologies have the capabilities to meet my requirements.</p>
<p>Thanks!</p>
| <p><a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> + <a href="https://grafana.com/" rel="nofollow noreferrer">Grafana</a> are a pretty standard setup. </p>
<p>Installing <a href="https://github.com/coreos/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a> or <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">prometheus-operator</a> via <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a> will give you
Grafana, Alertmanager, <code>node-exporter</code>, and <code>kube-state-metrics</code> by default, all set up for Kubernetes metrics.</p>
<p>Configure alertmanager to <a href="https://prometheus.io/docs/alerting/configuration/" rel="nofollow noreferrer">do something with the alerts</a>. SMTP is usually the first thing set up, but I would recommend some sort of event manager if this is a service people need to rely on.</p>
<p>Although a dashboard isn't part of your requirements, this will inform how you can connect into prometheus as a data source. <a href="https://prometheus.io/docs/visualization/grafana/#creating-a-prometheus-data-source" rel="nofollow noreferrer">There is docco on adding prometheus data source for grafana</a>.</p>
<p>There are a number of <a href="https://grafana.com/grafana/dashboards?dataSource=prometheus&page=1&search=kubernetes" rel="nofollow noreferrer">prebuilt charts available to add to Grafana</a>. There are some <a href="https://grafana.com/grafana/dashboards/9578" rel="nofollow noreferrer">charts to visualise alertmanager</a> too.</p>
<p>Your external service won't be querying the metrics sources directly; it will be querying the collected data stored in Prometheus inside your cluster. To access the API externally you will need to set up an external path to the Prometheus service. This can be configured via an ingress controller in the helm deployment:</p>
<pre><code>prometheus.ingress.enabled: true
</code></pre>
<p>You can do the same for the alertmanager API and grafana if needed. </p>
<pre><code>alertmanager.ingress.enabled: true
grafana.ingress.enabled: true
</code></pre>
<p>You could use Grafana outside the cluster as your dashboard via the same prometheus ingress if it proves useful.</p>
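<p>Putting those toggles together, a <code>values.yaml</code> sketch for the chart could look like this (hostnames are placeholders, and exact keys can vary between chart versions):</p>

```yaml
prometheus:
  ingress:
    enabled: true
    hosts:
      - prometheus.example.com
alertmanager:
  ingress:
    enabled: true
    hosts:
      - alertmanager.example.com
grafana:
  ingress:
    enabled: true
    hosts:
      - grafana.example.com
```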
|
<p>I am trying to load <code>elasticsearch.yml</code> file using <code>ConfigMap</code> while installing ElasticSearch using Kubernetes.</p>
<pre><code>kubectl create configmap elastic-config --from-file=./elasticsearch.yml
</code></pre>
<p>The <code>elasticsearch.yml</code> file is loaded in the container with <code>root</code> as its owner and read-only permission (<a href="https://github.com/kubernetes/kubernetes/issues/62099" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/62099</a>). Since Elasticsearch will not start with <code>root</code> ownership, the pod crashes.</p>
<p>As a work-around, I tried to mount the <code>ConfigMap</code> to a different file and then copy it to the <code>config</code> directory using an <code>initContainer</code>. However, the file in the <code>config</code> directory does not seem to be updated.
Is there anything that I am missing or is there any other way to accomplish this?</p>
<p><strong>ElasticSearch Kubernetes StatefulSet:</strong></p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
labels:
app: elasticservice
spec:
serviceName: elasticsearch
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: elastic-config-vol
mountPath: /tmp/elasticsearch
- name: elastic-storage
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: docker-elastic
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.zen.ping.unicast.hosts
value: "elastic-service"
- name: discovery.zen.minimum_master_nodes
value: "1"
- name: node.master
value: "true"
- name: node.data
value: "true"
- name: ES_JAVA_OPTS
value: "-Xmx256m -Xms256m"
volumes:
- name: elastic-config-vol
configMap:
name: elastic-config
items:
- key: elasticsearch.yml
path: elasticsearch.yml
- name: elastic-config-dir
emptyDir: {}
- name: elastic-storage
emptyDir: {}
initContainers:
# elasticsearch will not run as non-root user, fix permissions
- name: fix-vol-permission
image: busybox
command:
- sh
- -c
- chown -R 1000:1000 /usr/share/elasticsearch/data
securityContext:
privileged: true
volumeMounts:
- name: elastic-storage
mountPath: /usr/share/elasticsearch/data
- name: fix-config-vol-permission
image: busybox
command:
- sh
- -c
- cp /tmp/elasticsearch/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
securityContext:
privileged: true
volumeMounts:
- name: elastic-config-dir
mountPath: /usr/share/elasticsearch/config
- name: elastic-config-vol
mountPath: /tmp/elasticsearch
# increase default vm.max_map_count to 262144
- name: increase-vm-max-map-count
image: busybox
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
- name: increase-the-ulimit
image: busybox
command:
- sh
- -c
- ulimit -n 65536
securityContext:
privileged: true
</code></pre>
| <p>I use:</p>
<pre><code>...
volumeMounts:
- name: config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
volumes:
- name : config
configMap:
name: es-configmap
</code></pre>
<p>without any permission problems, but you can also set permissions with <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#configmap-v1-core" rel="nofollow noreferrer">defaultMode</a>.</p>
|
<p>I want to get the value of a specific field of a secret in a shell script.</p>
<p>From the <code>kubectl get secret</code> documentation, it seems the standard way to get a secret returns the whole thing, in a specified format, with the values base64 encoded.</p>
<p>So, to get the <code>bar</code> field of the <code>foo</code> secret, output as an unencoded string, I'm doing this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get secret foo -o json | jq -r ".data.bar" | base64 --decode
</code></pre>
<p>That is</p>
<ul>
<li>get the whole <code>foo</code> secret as JSON</li>
<li>pipe to <code>jq</code> to read the <code>bar</code> field from the JSON</li>
<li>decode the value using <code>base64</code></li>
</ul>
<p><strong>Is there a way to do this only using <code>kubectl</code>?</strong> </p>
<p>Or an elegant way in POSIX-compliant shell that doesn't rely on any dependencies like <code>jq</code>?</p>
| <p>Try this:</p>
<pre><code>kubectl get secret foo --template={{.data.bar}} | base64 --decode
</code></pre>
<p>No need for <code>jq</code>.</p>
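<p>To see the decode step in isolation, here is a small shell sketch using a sample base64 value (not a real secret):</p>

```shell
# sample value in the form the API returns secret data (base64 of "supersecret")
encoded="c3VwZXJzZWNyZXQ="
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"   # supersecret
```

<p>Note that some non-GNU <code>base64</code> implementations spell the flag <code>-d</code> or <code>-D</code> instead of <code>--decode</code>.</p>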
|
<p>I am attempting to start a RabbitMQ image in my AKS cluster. The VMs comprising the cluster are on a private VNET and have firewall rules in place.</p>
<p>What needs to be allowed through the firewall isn't clear (or if it's even the problem).</p>
<p>Here's the output when the pod starts:</p>
<blockquote>
<h1>BOOT FAILED</h1>
<p>Config file generation failed: Failed to create dirty io scheduler
thread 6, error = 11</p>
<p>Crash dump is being written to:
/var/log/rabbitmq/erl_crash.dump...Segmentation fault (core dumped)</p>
<p>{"init terminating in do_boot",generate_config_file} init terminating
in do_boot (generate_config_file)</p>
<p>Crash dump is being written to:
/var/log/rabbitmq/erl_crash.dump...done</p>
</blockquote>
<p>I have attached persistent volumes to /var/log and /var/lib/rabbitmq but there's no log files or anything else that aids in debugging this issue. Schema, lost+found, and other rabbitmq folders and files are created, so it's reading/writing fine.</p>
<p>Here's the YAML I'm using to create the pod:</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mayan-broker
spec:
replicas: 1
template:
metadata:
labels:
app: mayan-broker
spec:
containers:
- name: mayan-broker
image: rabbitmq:3
volumeMounts:
- name: broker-storage
mountPath: /var/lib/rabbitmq
- name: broker-logging
mountPath: /var/log/rabbitmq
ports:
- containerPort: 5672
env:
- name: RABBITMQ_DEFAULT_USER
value: mayan
- name: RABBITMQ_DEFAULT_PASS
value: mayan
- name: RABBITMQ_DEFAULT_VHOST
value: mayan
volumes:
- name: broker-storage
persistentVolumeClaim:
claimName: rabbit-claim
- name: broker-logging
persistentVolumeClaim:
claimName: logging-claim
</code></pre>
<p>YAML without volumes and mounts per request, yielding the same result:</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mayan-broker
spec:
replicas: 1
template:
metadata:
labels:
app: mayan-broker
spec:
containers:
- name: mayan-broker
image: rabbitmq:3
ports:
- containerPort: 5672
env:
- name: RABBITMQ_DEFAULT_USER
value: mayan
- name: RABBITMQ_DEFAULT_PASS
value: mayan
- name: RABBITMQ_DEFAULT_VHOST
value: MAYAN
</code></pre>
| <p>I had the same problem with AKS (I'm starting to think it's an AKS thing).</p>
<p>Basically AKS limits the number of threads a pod can use, and RabbitMQ (and everything Erlang in general) uses <em>a lot</em> of threads.</p>
<p>You can use env vars in your yaml to reduce the number of threads, like in my config:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
spec:
serviceName: "rabbitmq"
replicas: 1
selector:
matchLabels:
app: rabbitmq
template:
metadata:
labels:
app: rabbitmq
spec:
containers:
- name: rabbitmq
image: rabbitmq:3.7-management
env:
# this needs to be there because AKS (as of 1.14.3)
# limits the number of thread a pod can use
- name: RABBITMQ_IO_THREAD_POOL_SIZE
value: "30"
ports:
- containerPort: 5672
name: amqp
resources:
limits:
memory: 4Gi
requests:
cpu: "1"
memory: 1Gi
</code></pre>
<p>I'm using a StatefulSet, but the fix is the same for a Deployment.</p>
|
<p>I have never understood what an external Istio service is. Is it an application external to Kubernetes, or is it an application that does not inject a sidecar?</p>
| <p>There are three kinds of configuration for external services: HTTP, TLS and TCP. </p>
<p>In many cases, not all the parts of a microservices-based application reside in a service mesh or ingress. Sometimes, the microservices-based applications use functionality provided by legacy systems that reside outside the mesh.</p>
<p>You would like to migrate these systems to the service mesh gradually. Until these systems are migrated, they must be accessed by the applications inside the mesh. In other cases, the applications use web services provided by third parties.</p>
<p>This is a nice example to help understand it: <a href="https://istio.io/blog/2018/egress-https/" rel="nofollow noreferrer">https://istio.io/blog/2018/egress-https/</a></p>
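<p>Concretely, an external service is declared to the mesh with a <code>ServiceEntry</code>; a minimal sketch for an external HTTPS API (the hostname is a placeholder) could be:</p>

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com        # placeholder external host
  location: MESH_EXTERNAL  # the workload lives outside the mesh
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
```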
|
<p>We have a kube cluster with 5 masters and 50 worker nodes. The kube cluster is running on bare-metal servers, not as containers. During cluster initialization, we update the parameters for the kube-api-server/scheduler/controller as required for our environment, but not all options.
Just as we can get the currently applied kubelet configs for a node using the API - :/api/v1/nodes//proxy/configz - is there a way to get the current configs (like kube-api-qps, kube-api-burst etc.) for the master components (controller, scheduler)?</p>
<p>I could get the metrics and healthz for controller and scheduler on 10252 and 10251. However, I could not find how to get the current config for these components via an API. </p>
| <p>Not from the API (as of this writing). The component configs are typically passed in through the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#kube-apiserver" rel="nofollow noreferrer">command line</a> and/or a Pod YAML manifest file. The manifest files are usually under <code>/etc/kubernetes/manifests</code>:</p>
<ul>
<li>kube-apisever -> <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code></li>
<li>kube-controller-manager -> <code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code></li>
<li>kube-scheduler -> <code>/etc/kubernetes/manifests/kube-scheduler.yaml</code></li>
</ul>
<p>For example, minikube <code>kube-apiserver</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.64.6
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/var/lib/minikube/certs/ca.crt
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
- --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
- --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --insecure-port=0
- --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
- --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
...
</code></pre>
<p>The kube-proxy is a bit different.</p>
<ul>
<li>kube-proxy -> kube-proxy ConfigMap YAML in the kube-system namespace</li>
</ul>
<p>To get a sense of what you can do with the API you can use the <code>kubectl get --raw</code> command. For example:</p>
<pre><code>$ kubectl get --raw /
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1",
...
</code></pre>
|
<p>Can someone please tell me what the issue is with my YAML file for the deployment? When I remove the readiness probe, I can see my deployment in <code>kubectl get deployments</code> as available. But with the readiness probe, it remains unavailable as below.</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
neg-demo-app 1 1 1 0 2m33s
</code></pre>
<p>Below is my yaml file </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: neg-demo-app # Label for the Deployment
name: neg-demo-app # Name of Deployment
spec: # Deployment's specification
minReadySeconds: 60 # Number of seconds to wait after a Pod is created and its status is Ready
selector:
matchLabels:
run: neg-demo-app
template: # Pod template
metadata:
labels:
run: neg-demo-app # Labels Pods from this Deployment
spec: # Pod specification; each Pod created by this Deployment has this specification
containers:
- image: container_name # Application to run in Deployment's Pods
name: hostname # Container name
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
readinessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
terminationGracePeriodSeconds: 60 # Number of seconds to wait for connections to terminate before shutting down Pods
</code></pre>
| <p>I think the problem is that you have added</p>
<pre><code>minReadySeconds: 60 # Number of seconds to wait after a Pod is created and its status is Ready
</code></pre>
<blockquote>
<p>minReadySeconds is an optional field that specifies the minimum
number of seconds for which a newly created Pod should be ready
without any of its containers crashing, for it to be considered
available. This defaults to 0 (the Pod will be considered available as
soon as it is ready).</p>
</blockquote>
<p>So your newly created app Pod has to stay ready for <code>minReadySeconds</code> (60 seconds here) to be considered available. By contrast, <code>initialDelaySeconds</code> is the number of seconds after the container has started before the liveness or readiness probes are initiated.</p>
<blockquote>
<p>So initialDelaySeconds comes before minReadySeconds.</p>
</blockquote>
<p>For example: the container in the Pod starts at 5 seconds. The readiness probe is then initiated at 5 + <code>initialDelaySeconds</code> seconds. Assume the Pod becomes ready at 7 seconds (7 &gt; 5 + <code>initialDelaySeconds</code>). That Pod becomes available at 7 + <code>minReadySeconds</code> seconds.</p>
<p>Please try adding an <code>initialDelaySeconds</code> to the readiness and liveness probes; otherwise, try removing <code>minReadySeconds</code>.</p>
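<p>A small Python sketch of that timeline, with example values (the 5s/7s figures are illustrative):</p>

```python
container_start = 5   # seconds after Pod creation (example)
initial_delay = 1     # initialDelaySeconds on the readiness probe (example)
min_ready = 60        # minReadySeconds on the Deployment

probe_starts = container_start + initial_delay
pod_ready = 7         # assume the first passing probe lands here
pod_available = pod_ready + min_ready

print(probe_starts, pod_available)  # 6 67
```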
|
<p>The <code>docker-compose</code> will automatically read the <code>.env</code> file if it resides in a same with the <code>docker-compose.yml</code> directory. With this in mind, I go ahead and in <code>.env</code> file define the <code>app-container</code> environment variable to store the container URL, like so:</p>
<pre><code>app-container=12345.dkr.ecr.us-west-2.amazonaws.com/container-image:v003
</code></pre>
<p>In a <code>docker-compose.yml</code> file I then substitute the container URL with the environment variable <code>app-container</code>.</p>
<pre><code> version: '3'
services:
app1:
build:
context: .
dockerfile: my.dockerfile
image: ${app-container}
</code></pre>
<p>It would be great if I could pass the same <code>app-container</code> environment variable to substitute the container's URL defined in Helm's <code>values.yaml</code> file. So, instead of storing the container's URL explicitly</p>
<pre><code>app:
image: 12345.dkr.ecr.us-west-2.amazonaws.com/container-image:v003
</code></pre>
<p>the Helm's <code>values.yaml</code> file could use the same <code>app-container</code> environment variable that was already defined in <code>.env</code> file and used by <code>docker-compose</code>. The Helm's <code>values.yaml</code> file could be then simply defined as:</p>
<pre><code>app:
image: ${app-container}
</code></pre>
<p>I wonder if sharing the same <code>env</code> file between <code>docker-compose</code> and Helm is possible?</p>
| <p>Try this; it's a little hacky:</p>
<ul>
<li>First, set the environment variables from the <code>.env</code> file.</li>
</ul>
<pre><code>$ cat .env
appcontainer=12345.dkr.ecr.us-west-2.amazonaws.com/container-image:v003
$ while read LINE; do export "$LINE"; done < .env
</code></pre>
<p><strong><em>NOTE:</em></strong> You need to remove <code>-</code> from the <code>app-container</code> name; otherwise you will get this error while setting the environment variable: <code>bash: export: 'app-container=12345.dkr.ecr.us-west-2.amazonaws.com/container-image:v003': not a valid identifier</code></p>
<ul>
<li>Now explicitly pass that variable to the chart with the <code>--set</code> option.</li>
</ul>
<pre><code>helm install --set app.image=$appcontainer ./mychart
</code></pre>
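<p>As a one-shot sketch combining both steps (the file contents and chart path are examples, and the <code>helm</code> command is only echoed here rather than executed):</p>

```shell
# Write a sample .env, load it safely, then build the helm command.
printf 'appcontainer=12345.dkr.ecr.us-west-2.amazonaws.com/container-image:v003\n' > .env
set -a        # auto-export every variable sourced below
. ./.env      # only works when names are valid identifiers (no dashes)
set +a
echo "helm install --set app.image=${appcontainer} ./mychart"
```

<p>The <code>set -a</code>/<code>set +a</code> pair avoids the per-line <code>export</code> loop while still skipping nothing that a plain <code>source</code> would load.</p>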
<p>Ideally, though, it would be better to keep separate environment files for your <code>docker-compose</code> and Helm chart.</p>
<p>Hope this helps.</p>
|
<p>I am trying to expose an application in my cluster by creating a service type as load balancer. The reason for this is that I want this app to have a separate channel for communication. I have a KOPS cluster. I want to use AWS's network load balancer so that it gets a static IP. When I create the Service with port 80 mapped to the port that the app is running on everything works but when I try to add port 443 it just times out.</p>
<p>Here is the configuration that works - </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: abc
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
labels:
app: abc
spec:
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: 9050
selector:
app: abc
type: LoadBalancer
</code></pre>
<p>As soon as I add TLS support to the config file and deploy it, the connection to the load balancer times out. How do I add TLS support to the load balancer?
I want to do it through the service and not through an ingress.
This is the configuration that doesn't work for me; when I paste the link in the browser, it times out.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: abc
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxx
labels:
app: abc
spec:
externalTrafficPolicy: Local
ports:
- name: http
port: 443
protocol: TCP
targetPort: 9050
selector:
app: abc
type: LoadBalancer
</code></pre>
| <p>You can use TLS/SSL termination:</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: test-service
annotations:
# Note that the backend talks over HTTP.
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# TODO: Fill in with the ARN of your certificate.
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
# Only run SSL on the port named "https" below.
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
selector:
app: test-pod
ports:
- name: http
port: 80
targetPort: 8080
- name: https
port: 443
targetPort: 8080
type: LoadBalancer
</code></pre>
<p>You can add the TLS certificate in AWS Certificate Manager and reference its ARN in the Kubernetes service.</p>
<p>This way, the HTTPS connection is terminated at the load balancer and the backend uses plain HTTP only.</p>
<p>you can also check this out : <a href="https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/</a> </p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/73297" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/73297</a></p>
<p>EDIT :1</p>
<pre><code>service.beta.kubernetes.io/aws-load-balancer-type: nlb
</code></pre>
<p>If that does not work, try adding this annotation as well, as appropriate for your load balancer type.</p>
|
| <p>So I heard about <code>initContainers</code>, which allow you to do pre-app-container initialization. However, I want initialization done at the cluster level, the statefulset level, or even for the whole pod.</p>
<p>For instance, I want to perform a one-time Hadoop namenode format on my persistent volumes and be done with it. After that is done, my namenode statefulset and my datanode replicasets can proceed each time.</p>
<p>Does <code>kubernetes</code> have anything to accommodate this?</p>
<p>How about its Extensions?</p>
| <p>Kubernetes itself provides <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> for ad hoc executions. Jobs do not integrate very tightly with existing Pods/Deployments/Statefulsets. </p>
<p><a href="https://v3.helm.sh/" rel="nofollow noreferrer">Helm</a> is a deployment orchestrator and includes <a href="https://helm.sh/docs/charts_hooks/" rel="nofollow noreferrer"><code>pre</code> and <code>post</code> hooks</a> that can be used during an <code>install</code> or <code>upgrade</code>. </p>
<p>The <a href="https://helm.sh/docs/charts_hooks/#writing-a-hook" rel="nofollow noreferrer">Helm documentation provides a Job example</a> run <code>post-install</code> via annotations. </p>
<pre><code>metadata:
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
</code></pre>
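<p>Putting the hook together, a minimal sketch of a complete Job manifest could look like the following (the image and command are placeholders, not from the Helm docs — for the Hadoop case you would substitute an image that runs the namenode format):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-post-install-init"
  annotations:
    # This is what defines this resource as a hook.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: one-time-init
        image: busybox    # placeholder -- e.g. a hadoop image running `hdfs namenode -format`
        command: ["sh", "-c", "echo run one-time initialization here"]
```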
<p>If you have more complex requirements you could do the same with a manager or Jobs that query the kubernetes API's to check on cluster state. </p>
<h3>Helm 3</h3>
<p>A warning: Helm is moving to v3.x soon, which rearchitects away many significant problems from v2. If you are just getting started with Helm, keep an eye out for the v3 beta. It's only in alpha as of 08/2019.</p>
|
| <p>I have a K8s deployment with one pod running, among others, a container with Envoy software. I have defined the image in such a way that if an environment variable EXTRA_OPTS is defined, it will be appended to the command line used to start Envoy.
I want to use that variable to override default configuration as explained in
<a href="https://www.envoyproxy.io/docs/envoy/latest/operations/cli#cmdoption-config-yaml" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/operations/cli#cmdoption-config-yaml</a></p>
<p>The environment variable works fine for other command options, such as "-l debug" for example.
Also, I have tested the expected final command line directly and it works.</p>
<p>Dockerfile set Envoy to run in this way:</p>
<pre class="lang-sh prettyprint-override"><code>CMD ["/bin/bash", "-c", "envoy -c envoy.yaml $EXTRA_OPTS"]
</code></pre>
<p>What I want is to set this:</p>
<pre><code> ...
- image: envoy-proxy:1.10.0
imagePullPolicy: IfNotPresent
name: envoy-proxy
env:
- name: EXTRA_OPTS
value: ' --config-yaml "admin: { address: { socket_address: { address: 0.0.0.0, port_value: 9902 } } }"'
...
</code></pre>
<p>I have succesfully tested running envoy with final command line:</p>
<pre class="lang-sh prettyprint-override"><code>envoy -c /etc/envoy/envoy.yaml --config-yaml "admin: { address: { socket_address: { address: 0.0.0.0, port_value: 9902 } } }"
</code></pre>
<p>And I have also tested a "simpler" option in EXTRA_OPTS and it works:</p>
<pre><code> ...
- image: envoy-proxy:1.10.0
imagePullPolicy: IfNotPresent
name: envoy-proxy
env:
- name: EXTRA_OPTS
value: ' -l debug'
...
</code></pre>
<p>I would expect Envoy running with this new admin port, instead I'm having param errors:</p>
<pre class="lang-sh prettyprint-override"><code>PARSE ERROR: Argument: {
Couldn't find match for argument
</code></pre>
<p>It looks like the quotes are not being passed through to the actual environment variable inside the container... </p>
<p>Any clue??? </p>
<p>Thanks to all</p>
| <p>You should set <code>["/bin/bash", "-c", "envoy -c envoy.yaml"]</code> as an ENTRYPOINT in your Dockerfile, or use <code>command</code> in Kubernetes and then use <code>args</code> to add additional arguments. </p>
<p>You can find more information in <a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#entrypoint" rel="nofollow noreferrer">docker documentation</a></p>
<p>Let me explain by example:</p>
<p><code>$ docker build -t fl3sh/test:bash .</code></p>
<pre><code>$ cat Dockerfile
FROM ubuntu
RUN echo '#!/bin/bash' > args.sh && \
echo 'echo "$@"' >> args.sh && \
chmod -x args.sh
CMD ["args","from","docker","cmd"]
ENTRYPOINT ["/bin/bash", "args.sh", "$ENV_ARG"]
</code></pre>
<pre><code>cat args.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: args
name: args
spec:
containers:
- args:
- args
- from
- k8s
image: fl3sh/test:bash
name: args
imagePullPolicy: Always
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
Output:
pod/args $ENV_ARG args from k8s
</code></pre>
<pre><code>cat command-args.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: command-args
name: command-args
spec:
containers:
- command:
- /bin/bash
- -c
args:
- 'echo args'
image: fl3sh/test:bash
imagePullPolicy: Always
name: args
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
Output:
pod/command-args args
</code></pre>
<pre><code>cat command-env-args.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: command-env-args
name: command-env-args
spec:
containers:
- env:
- name: ENV_ARG
value: "arg from env"
command:
- /bin/bash
- -c
- exec echo "$ENV_ARG"
image: fl3sh/test:bash
imagePullPolicy: Always
name: args
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
Output:
pod/command-env-args arg from env
</code></pre>
<pre><code>cat command-no-args.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: command-no-args
name: command-no-args
spec:
containers:
- command:
- /bin/bash
- -c
- 'echo "no args";echo "$@"'
image: fl3sh/test:bash
name: args
imagePullPolicy: Always
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
Output:
pod/command-no-args no args
#notice ^ empty line above
</code></pre>
<pre><code>cat no-args.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: no-args
name: no-args
spec:
containers:
- image: fl3sh/test:bash
name: no-args
imagePullPolicy: Always
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
Output:
pod/no-args $ENV_ARG args from docker cmd
</code></pre>
<p>If you need to recreate my example you can use this loop to get this output like above:</p>
<pre><code>for p in `kubectl get po -oname`; do echo cat ${p#*/}.yaml; echo ""; \
cat ${p#*/}.yaml; echo -e "\nOutput:"; printf "$p "; \
kubectl logs $p;echo "";done
</code></pre>
<p>Conclusion if you need to pass env as arguments use:</p>
<pre><code> command:
- /bin/bash
- -c
- exec echo "$ENV_ARG"
</code></pre>
<p>I hope it is clear now.</p>
|
<p>Can I inherit node labels to pod labels?</p>
<p>i.e I want to have <code>zone</code> and <code>instance-type</code> labels from node to pods</p>
| <p>This feature is not yet supported.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/62078" rel="noreferrer">Here</a> is an open feature request on kubernetes.</p>
<p>Though there are <a href="https://stackoverflow.com/questions/36690446/inject-node-labels-into-kubernetes-pod">few</a> workarounds.</p>
<p>You can also refer to <a href="https://gmaslowski.com/kubernetes-node-label-to-pod/" rel="noreferrer">this</a>, where an <code>initContainer</code> is used to read the node labels and assign them as pod labels.</p>
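<p>As a rough sketch of that <code>initContainer</code> workaround (the image, label key, and names here are assumptions, and the pod's service account needs RBAC permission to read nodes and label pods):</p>

```yaml
spec:
  initContainers:
  - name: copy-node-labels
    image: bitnami/kubectl          # any image containing kubectl works
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName  # downward API: the node this pod landed on
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    command:
    - sh
    - -c
    - |
      # read a label off the node and copy it onto this pod
      ZONE=$(kubectl get node "$NODE_NAME" \
        -o jsonpath='{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}')
      kubectl label pod "$POD_NAME" zone="$ZONE" --overwrite
```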
<p>Hope this helps.</p>
|
<p>Cluster created in Rancher with Amazon EKS.</p>
<p>MongoDB replicaset was created as a catalog app in Rancher.</p>
<p>Services in the cluster can successfully connect to the database with this connection string.</p>
<p><code>mongodb://mongodb-replicaset.mongodb-replicaset.svc.cluster.local:27017/tradeit_system?replicaSet=rs</code></p>
<p>I want to view and edit data in the db. With a local db you can do it easily with the command <code>mongo --port 27017</code>. </p>
<p>Similarly is there a way to connect to the one one on kubernetes. Either from the terminal or using an application like Robo 3t?</p>
<p>EDIT</p>
<p>The replicaset doesn't show up when I run:</p>
<p><code>kubectl get deployments --all-namespaces</code></p>
<p><code>kubectl get pods --all-namespaces</code></p>
<p>shows that it runs in 3 pods: mongodb-replicaset-0, mongodb-replicaset-1 and mongodb-replicaset-2.</p>
| <ol>
<li>Run <code>kubectl get services -n <namespace></code>. This will list the replicaset service.</li>
<li>execute <code>kubectl port-forward svc/mongodb-replicaset -n mongoNamespace 27018:27017</code></li>
</ol>
<p>where </p>
<p><strong>mongodb-replicaset</strong> = mongodb service name</p>
<p><strong>mongoNamespace</strong> = namespace</p>
<p>and <strong>27018</strong> = your local port</p>
<p>As a best practice, you should always connect to services, not pods. Since pods are automatically recreated/restarted, you would get a new pod name each time; connecting to the service saves you from reconnecting and finding the primary pod of your MongoDB replica set.</p>
|
| <p>Is there any place where I can find the UML representation of Kubernetes templates and constructs - Pod, ReplicaSet, Job etc.? </p>
| <p>There are a few diagrams you can find.</p>
<p>For example, how Container Deployment works:
<a href="https://i.stack.imgur.com/jnFYu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jnFYu.jpg" alt="Container Deployment"></a>
How Kubernetes Objects work:</p>
<p><a href="https://i.stack.imgur.com/KNvLf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KNvLf.jpg" alt="Kubernetes Object"></a>
How the architecture works:</p>
<p><a href="https://i.stack.imgur.com/KC4EP.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KC4EP.jpg" alt="architecture"></a>
<a href="https://i.stack.imgur.com/mj0hH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mj0hH.png" alt="architecture"></a></p>
<p>I also recommend reading <a href="https://medium.com/@tsuyoshiushio/kubernetes-in-three-diagrams-6aba8432541c" rel="nofollow noreferrer">Kubernetes in three diagrams</a> by <a href="https://medium.com/@tsuyoshiushio" rel="nofollow noreferrer">Tsuyoshi Ushio</a> and <a href="https://dzone.com/refcardz/kubernetes-essentials?chapter=1" rel="nofollow noreferrer">Getting Started With Kubernetes</a>.</p>
|
| <p>We have a pod in Kubernetes that regularly processes some jobs. Some of these jobs take more than 2 or 3 minutes. When deploying this pod, we don't want to kill the processes currently running on it. How can we make the pod wait for termination until all processes have completed? I read some topics about <code>terminationGracePeriodSeconds</code>. But is there an upper limit in seconds for that property, and how does it work?</p>
| <p>You can use the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer">wait command</a>, which takes multiple resources and waits until the specified condition is met.</p>
<p>For example, assume you create a job called worker that prints the word 'printsth' and pauses for 3 seconds, ten times.</p>
<pre><code>$ kubectl create ns waitplayground
$ kubectl -n waitplayground \
create job worker \
--image centos:7 -- \
sh -c \
'for i in {1..10} ; do echo printsth ; sleep 3; done'
</code></pre>
<p>The kubectl wait command returns something similar to this:</p>
<pre><code>$ kubectl -n waitplayground \
wait --for=condition=complete --timeout=40s \
job/worker
job.batch/worker condition met
</code></pre>
<p>I hope this helps.</p>
|
<p>I have Kubernetes Cluster with Ingress/Traefik controller</p>
<p>Also, I installed the dashboard using the standard config from here: <a href="https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml" rel="noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml</a></p>
<p>I'm trying to access the Dashboard through Ingress, but I get a 404 error</p>
<pre><code>404 page not found
</code></pre>
<p>My ingress.yml file looks like this</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "traefik"
name: app-ingress-system
namespace: kube-system
spec:
tls:
- hosts:
- dashboard.domain.com
secretName: kubernetes-dashboard-certs
rules:
- host: dashboard.domain.com
http:
paths:
- path: /
backend:
serviceName: kubernetes-dashboard
servicePort: 443
</code></pre>
<p>I've tried different <code>path:</code> values (like /dashboard, /proxy), with the same result.</p>
| <p>This occurs because <code>kubernetes-dashboard-certs</code> does not contain the files <code>tls.crt</code> and <code>tls.key</code>, which are expected by Traefik. You should see this in the Traefik logs.</p>
<p>The next problem will be between the Traefik certificates and the dashboard certificates. I still do not fully understand how to fix this properly, so I configure Traefik with the option:</p>
<pre><code> ssl.insecureSkipVerify: "true"
</code></pre>
<p>The last issue I had was that the HTTP endpoint does not accept logins, so I finally declared the ingress to redirect HTTP to HTTPS like this:</p>
<pre><code>kubectl apply -f - << EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/ssl-redirect: "true"
spec:
rules:
- host: dashboard.domain.com
http:
paths:
- path: /
backend:
serviceName: kubernetes-dashboard
servicePort: 443
EOF
</code></pre>
|
<p>I have set up the notification for cloud build CI/CD which is pushing notification to a respective slack channel.</p>
<p>After a successful build, the image is pushed to the Kubernetes cluster, and the deployment follows a rolling update strategy.</p>
<p>So I want to push a notification when the new pods become ready and the old pods are terminated, so that I get an idea of when the new changes have been applied to the deployment.</p>
<p><strong>Note</strong>: I am using a GKE cluster but have not installed Prometheus due to resource limits.</p>
| <p>There are multiple ways of doing this, I can think of two ways right now:</p>
<ol>
<li>Use Prometheus + Alert manager to send you a slack notification when pods became ready.</li>
<li>Use the CI/CD pipeline to continuously check the status of the pods; once they are updated successfully, send a notification.</li>
</ol>
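<p>The second option can be sketched like this (the deployment name and webhook URL are placeholders; only the payload-building part runs here, while the cluster-facing step is shown in comments):</p>

```shell
# Build the Slack message for a finished rollout (pure shell).
DEPLOYMENT=myapp                                   # placeholder
PAYLOAD=$(printf '{"text": "Rollout of %s finished: new pods are ready."}' "$DEPLOYMENT")
echo "$PAYLOAD"
# In the pipeline you would then run (not executed in this sketch):
#   kubectl rollout status deployment/"$DEPLOYMENT" --timeout=300s && \
#     curl -s -X POST -H 'Content-type: application/json' \
#          --data "$PAYLOAD" "$SLACK_WEBHOOK_URL"
```

<p><code>kubectl rollout status</code> blocks until the new pods are ready and the old ones are gone, which is exactly the moment the question wants to be notified about.</p>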
<p>Hope this answers your question.</p>
<p>EDIT:
If you would like to stick to using stackdriver, then there is a solution for it as well: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/events-stackdriver/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/events-stackdriver/</a></p>
|
| <p>In my Kubernetes cluster, the master nodes often have DiskPressure problems due to /var partition usage.
I noticed that the /var/lib/kubelet folder takes up a lot of space, and I was going to move this folder by pointing kubelet at a directory on a larger partition.
I've already done this with docker and etcd, but I can't figure out how to repoint kubelet; has anyone done it yet?</p>
| <p>Here is the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">documentation</a> of kubelet flags. On that page, search for <code>root-dir</code>. <br/></p>
<p>If you are using a CentOS system, follow the steps below:</p>
<p>Step 1.a: edit (for older k8s version) /etc/systemd/system/kubelet.service.d/10-kubeadm.conf as shown over <a href="https://stackoverflow.com/a/46065250/4451944">here</a> <br/>
Step 1.b: edit (for newer k8s version) /etc/sysconfig/kubelet as shown over <a href="https://stackoverflow.com/a/53228571/4451944">here</a><br/>
Step 2: <br/><code>systemctl daemon-reload</code><br/>
<code>systemctl restart kubelet</code></p>
<p>Explanation: the <code>--root-dir</code> flag is what you need to specify your new directory. This flag needs to be mentioned as <em>extra args</em> in the kubelet drop-in file, but the structure of the drop-in file changed in newer kubelet versions, as shown in steps 1.a and 1.b. </p>
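<p>For example, on a newer kubeadm-based install the change can be as small as this fragment (the path <code>/data/kubelet</code> is an example directory on the larger partition; remember to stop kubelet and move the existing <code>/var/lib/kubelet</code> contents there before restarting):</p>

```ini
# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--root-dir=/data/kubelet
```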
|
| <p>Following the project from <a href="https://towardsdatascience.com/kubernetesexecutor-for-airflow-e2155e0f909c" rel="nofollow noreferrer">here</a>, I am trying to integrate the airflow kubernetes executor using an NFS server as the backing storage PV. I have a PV <code>airflow-pv</code> which is linked to the NFS server. The airflow webserver and scheduler use a PVC <code>airflow-pvc</code> which is bound to <code>airflow-pv</code>. I've placed my dag files on the NFS server at <code>/var/nfs/airflow/development/<dags/logs></code>. I can see the newly added DAGs in the webserver UI as well. However, when I execute a DAG from the UI, the scheduler fires a new pod for the task, BUT the new worker pod fails to run, saying </p>
<p><code>Unable to mount volumes for pod "tutorialprintdate-3e1a4443363e4c9f81fd63438cdb9873_development(976b1e64-b46d-11e9-92af-025000000001)": timeout expired waiting for volumes to attach or mount for pod "development"/"tutorialprintdate-3e1a4443363e4c9f81fd63438cdb9873". list of unmounted volumes=[airflow-dags]. list of unattached volumes=[airflow-dags airflow-logs airflow-config default-token-hjwth]</code></p>
<p>here is my webserver and scheduler deployment files;</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: airflow-webserver-svc
namespace: development
spec:
type: NodePort
ports:
- name: web
protocol: TCP
port: 8080
selector:
app: airflow-webserver-app
namespace: development
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: airflow-webserver-dep
namespace: development
spec:
replicas: 1
selector:
matchLabels:
app: airflow-webserver-app
namespace: development
template:
metadata:
labels:
app: airflow-webserver-app
namespace: development
spec:
restartPolicy: Always
containers:
- name: airflow-webserver-app
image: airflow:externalConfigs
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
args: ["-webserver"]
env:
- name: AIRFLOW_KUBE_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: AIRFLOW__CORE__FERNET_KEY
valueFrom:
secretKeyRef:
name: airflow-secrets
key: AIRFLOW__CORE__FERNET_KEY
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: airflow-secrets
key: MYSQL_PASSWORD
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: airflow-secrets
key: MYSQL_PASSWORD
- name: DB_HOST
value: mysql-svc.development.svc.cluster.local
- name: DB_PORT
value: "3306"
- name: MYSQL_DATABASE
value: airflow
- name: MYSQL_USER
value: airflow
- name: MYSQL_PASSWORD
value: airflow
- name: AIRFLOW__CORE__EXECUTOR
value: "KubernetesExecutor"
volumeMounts:
- name: airflow-config
mountPath: /usr/local/airflow/airflow.cfg
subPath: airflow.cfg
- name: airflow-files
mountPath: /usr/local/airflow/dags
subPath: airflow/development/dags
- name: airflow-files
mountPath: /usr/local/airflow/plugins
subPath: airflow/development/plugins
- name: airflow-files
mountPath: /usr/local/airflow/logs
subPath: airflow/development/logs
- name: airflow-files
mountPath: /usr/local/airflow/temp
subPath: airflow/development/temp
volumes:
- name: airflow-files
persistentVolumeClaim:
claimName: airflow-pvc
- name: airflow-config
configMap:
name: airflow-config
</code></pre>
<p>The scheduler yaml file is exactly the same except the container args is <code>args: ["-scheduler"]</code>. Here is my airflow.cfg file, </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: "airflow-config"
namespace: development
data:
airflow.cfg: |
[core]
airflow_home = /usr/local/airflow
dags_folder = /usr/local/airflow/dags
base_log_folder = /usr/local/airflow/logs
executor = KubernetesExecutor
plugins_folder = /usr/local/airflow/plugins
load_examples = false
[scheduler]
child_process_log_directory = /usr/local/airflow/logs/scheduler
[webserver]
rbac = false
[kubernetes]
airflow_configmap =
worker_container_repository = airflow
worker_container_tag = externalConfigs
worker_container_image_pull_policy = IfNotPresent
delete_worker_pods = true
dags_volume_claim = airflow-pvc
dags_volume_subpath =
logs_volume_claim = airflow-pvc
logs_volume_subpath =
env_from_configmap_ref = airflow-config
env_from_secret_ref = airflow-secrets
in_cluster = true
namespace = development
[kubernetes_node_selectors]
# the key-value pairs to be given to worker pods.
# the worker pods will be scheduled to the nodes of the specified key-value pairs.
# should be supplied in the format: key = value
[kubernetes_environment_variables]
//the below configs gets overwritten by above [kubernetes] configs
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM = airflow-pvc
AIRFLOW__KUBERNETES__DAGS_VOLUME_SUBPATH = var/nfs/airflow/development/dags
AIRFLOW__KUBERNETES__LOGS_VOLUME_CLAIM = airflow-pvc
AIRFLOW__KUBERNETES__LOGS_VOLUME_SUBPATH = var/nfs/airflow/development/logs
[kubernetes_secrets]
AIRFLOW__CORE__SQL_ALCHEMY_CONN = airflow-secrets=AIRFLOW__CORE__SQL_ALCHEMY_CONN
AIRFLOW_HOME = airflow-secrets=AIRFLOW_HOME
[cli]
api_client = airflow.api.client.json_client
endpoint_url = https://airflow.crunchanalytics.cloud
[api]
auth_backend = airflow.api.auth.backend.default
[admin]
# ui to hide sensitive variable fields when set to true
hide_sensitive_variable_fields = true
</code></pre>
<p>After firing a manual task, the scheduler logs tell me that KubernetesExecutorConfig() executed with all values as None. It seems like it didn't pick up the configs? I've tried almost everything I know of, but cannot manage to make it work. Could someone tell me what I am missing? </p>
<pre><code>[2019-08-01 14:44:22,944] {jobs.py:1341} INFO - Sending ('kubernetes_sample', 'run_this_first', datetime.datetime(2019, 8, 1, 13, 45, 51, 874679, tzinfo=<Timezone [UTC]>), 1) to executor with priority 3 and queue default
[2019-08-01 14:44:22,944] {base_executor.py:56} INFO - Adding to queue: airflow run kubernetes_sample run_this_first 2019-08-01T13:45:51.874679+00:00 --local -sd /usr/local/airflow/dags/airflow/development/dags/k8s_dag.py
[2019-08-01 14:44:22,948] {kubernetes_executor.py:629} INFO - Add task ('kubernetes_sample', 'run_this_first', datetime.datetime(2019, 8, 1, 13, 45, 51, 874679, tzinfo=<Timezone [UTC]>), 1) with command airflow run kubernetes_sample run_this_first 2019-08-01T13:45:51.874679+00:00 --local -sd /usr/local/airflow/dags/airflow/development/dags/k8s_dag.py with executor_config {}
[2019-08-01 14:44:22,949] {kubernetes_executor.py:379} INFO - Kubernetes job is (('kubernetes_sample', 'run_this_first', datetime.datetime(2019, 8, 1, 13, 45, 51, 874679, tzinfo=<Timezone [UTC]>), 1), 'airflow run kubernetes_sample run_this_first 2019-08-01T13:45:51.874679+00:00 --local -sd /usr/local/airflow/dags/airflow/development/dags/k8s_dag.py', KubernetesExecutorConfig(image=None, image_pull_policy=None, request_memory=None, request_cpu=None, limit_memory=None, limit_cpu=None, gcp_service_account_key=None, node_selectors=None, affinity=None, annotations={}, volumes=[], volume_mounts=[], tolerations=None))
[2019-08-01 14:44:23,042] {kubernetes_executor.py:292} INFO - Event: kubernetessamplerunthisfirst-7fe05ddb34aa4cb9a5604e420d5b60a3 had an event of type ADDED
[2019-08-01 14:44:23,046] {kubernetes_executor.py:324} INFO - Event: kubernetessamplerunthisfirst-7fe05ddb34aa4cb9a5604e420d5b60a3 Pending
[2019-08-01 14:44:23,049] {kubernetes_executor.py:292} INFO - Event: kubernetessamplerunthisfirst-7fe05ddb34aa4cb9a5604e420d5b60a3 had an event of type MODIFIED
[2019-08-01 14:44:23,049] {kubernetes_executor.py:324} INFO - Event: kubernetessamplerunthisfirst-7fe05ddb34aa4cb9a5604e420d5b60a3 Pending
</code></pre>
<p>for reference, here is my PV and PVC; </p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: airflow-pv
labels:
mode: local
environment: development
spec:
persistentVolumeReclaimPolicy: Retain
storageClassName: airflow-pv
capacity:
storage: 4Gi
accessModes:
- ReadWriteMany
nfs:
server: 10.105.225.217
path: "/"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: airflow-pvc
namespace: development
spec:
storageClassName: airflow-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
selector:
matchLabels:
mode: local
environment: development
</code></pre>
<p>Using Airflow version: <strong>1.10.3</strong></p>
| <p>Since there is no answer yet, I'll share my findings so far. In my airflow.cfg, under the <code>kubernetes</code> section, we pass the following values: </p>
<pre><code>dags_volume_claim = airflow-pvc
dags_volume_subpath = airflow/development/dags
logs_volume_claim = airflow-pvc
logs_volume_subpath = airflow/development/logs
</code></pre>
<p>The scheduler creates a new pod from the above configs as follows (mentioning only the volumes and volumeMounts):</p>
<pre><code>"volumes": [
{
"name": "airflow-dags",
"persistentVolumeClaim": {
"claimName": "airflow-pvc"
}
},
{
"name": "airflow-logs",
"persistentVolumeClaim": {
"claimName": "airflow-pvc"
}
}],
"containers": [
{ ...
"volumeMounts": [
{
"name": "airflow-dags",
"readOnly": true,
"mountPath": "/usr/local/airflow/dags",
"subPath": "airflow/development/dags"
},
{
"name": "airflow-logs",
"mountPath": "/usr/local/airflow/logs",
"subPath": "airflow/development/logs"
}]
...}]
</code></pre>
<p>K8s DOESN'T like multiple volumes pointing to the same PVC (airflow-pvc). To fix this, I had to create two PVCs (and PVs) for dags and logs, <code>dags_volume_claim = airflow-dags-pvc</code> and <code>logs_volume_claim = airflow-log-pvc</code>, which works fine. </p>
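<p>The corresponding <code>[kubernetes]</code> fragment in airflow.cfg then becomes (claim names matching the two separate PVCs described above):</p>

```ini
dags_volume_claim = airflow-dags-pvc
dags_volume_subpath = airflow/development/dags
logs_volume_claim = airflow-log-pvc
logs_volume_subpath = airflow/development/logs
```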
<p>I don't know if this has already been addressed in a newer version of airflow (I am using 1.10.3). The airflow scheduler should handle this case when people use the same PVC, by creating a pod with a single volume and 2 volumeMounts referring to that volume, e.g. </p>
<pre><code>"volumes": [
{
"name": "airflow-dags-logs", <--just an example name
"persistentVolumeClaim": {
"claimName": "airflow-pvc"
}
}
"containers": [
{ ...
"volumeMounts": [
{
"name": "airflow-dags-logs",
"readOnly": true,
"mountPath": "/usr/local/airflow/dags",
"subPath": "airflow/development/dags" <--taken from configs
},
{
"name": "airflow-dags-logs",
"mountPath": "/usr/local/airflow/logs",
"subPath": "airflow/development/logs" <--taken from configs
}]
...}]
</code></pre>
<p>I deployed a pod with the above configurations and it works!</p>
|
<p>I am trying to delete persistent volumes on a Kubernetes cluster. I ran the following command:</p>
<pre><code>kubectl delete pv pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2 pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2
</code></pre>
<p>However it showed:</p>
<pre><code>persistentvolume "pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2" deleted
</code></pre>
<p>But the command did not exit, so I pressed <code>CONTROL+C</code> to force-exit it. After a few minutes, I ran:</p>
<pre><code>kubectl get pv
</code></pre>
<p>And the status is <code>Terminating</code>, but the volumes don't appear to be deleting.</p>
<p>How can I delete these persistent volumes?</p>
| <p>It is not recommended to delete PVs manually; this should be handled by the cloud provisioner. If you need to remove a PV, just delete the pod bound to the claim and then the PVC. After that, the cloud provisioner should remove the PV as well.</p>
<pre><code>kubectl delete pvc --all
</code></pre>
<p>It can sometimes take a while, so be patient.</p>
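<p>If a PV stays stuck in <code>Terminating</code> even after its PVC is gone, it usually still carries a protection finalizer; clearing it lets deletion complete. A hedged sketch (the PV name is taken from the question, and the command is only echoed for inspection rather than run against a cluster):</p>

```shell
# Clearing finalizers on a stuck PV -- review before running for real.
PV_NAME=pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2
PATCH='{"metadata":{"finalizers":null}}'
echo kubectl patch pv "$PV_NAME" -p "$PATCH"
```

<p>Use this only as a last resort, since it bypasses the storage-protection mechanism.</p>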
|
<p>I am trying <code>helm del --purge</code>, which is not deleting one pod that is defined separately in a yaml, but it is deleting all the pods that are created by deployments or that are stateful pods.</p>
| <p>helm del --purge <code><helm chart name></code> will delete only the pods that are related to that particular helm chart.</p>
<p>You can check all the helm charts installed on the cluster via : <code>helm ls</code></p>
<p>You can delete a helm chart using <code>helm del <chart name></code>.</p>
<p>Pods that were created separately and are not part of any helm chart will not be deleted when you run <code>helm del --purge</code>.</p>
<p>So, for example, you installed MySQL using helm : <code>helm install --name mysql stable/mysql</code></p>
<p>So when you run <code>helm del --purge mysql</code>, it will delete only the mysql pods.</p>
<p>To delete a pod that was created separately, you can run <code>kubectl delete pod <pod name></code>.</p>
<blockquote>
<p>To delete the deployment : <code>kubectl delete deployment <deployment name></code></p>
<p>To delete statefulsets : <code>kubectl delete statefulset <statefulset name></code></p>
</blockquote>
|
<p>I am migrating a docker-compose file for a tomcat + postgres application to kubernetes. My test environment is Dockers for Windows with Kubernetes enabled.</p>
<p>If I use an emptyDir volume for postgres, I am able to test my application successfully, but I am unable to persist postgres data.</p>
<pre><code> volumes:
- name: "pgdata-vol"
emptyDir: {}
</code></pre>
<p>On MacOS, I am able to persist postgres data using a hostPath volume.</p>
<pre><code> volumes:
- name: "pgdata-vol"
hostPath:
path: /tmp/vols/pgdata
</code></pre>
<p>Unfortunately, if I try the same solution on Windows, I encounter a permission error. This was discussed in a previous question. See <a href="https://stackoverflow.com/questions/57227990/kubernetes-in-docker-for-windows-volume-configuration-for-postgres">Kubernetes (in Docker for Windows) Volume Configuration for Postgres</a>.</p>
<p>I do not need to access my volumes through the host file system. I simply want my volume to persist from one run to the next. I am able to achieve this behavior when running <code>docker-compose ... up</code> and <code>docker-compose ... down</code> using the following volume definition.</p>
<pre><code>volumes:
pgdata:
</code></pre>
<p>Volume reference</p>
<pre><code> environment:
- PGDATA=/pgdata
volumes:
- pgdata:/pgdata
</code></pre>
<p>Is there a good name to use to describe this type of volume? Is there a way to translate this type of volume into kubernetes? </p>
| <p>An emptyDir volume is indeed only for temporary data.
If you need persistent data you can use a hostPath volume, as in Docker, but if you have multiple nodes and a pod gets scheduled onto a different node, the data will be missing. </p>
<p>A better solution is therefore to use a persistent volume. This must be provided by your infrastructure and is therefore specific to your environment.</p>
<p>See for details: <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/</a></p>
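<p>A minimal sketch of what that could look like (the claim name and size are assumptions; on a single-node test setup the default StorageClass usually provisions the backing volume for you):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p>and in the pod spec, replace the emptyDir/hostPath volume with:</p>
<pre><code>volumes:
  - name: "pgdata-vol"
    persistentVolumeClaim:
      claimName: pgdata
</code></pre>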
<p>For the permission problem on Docker for Windows, you might want to consider using minikube instead. </p>
|
<p>While following the tutorial found <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">here</a>, I saw that I need autoscaling/v2beta2 for having custom metrics for the Horizontal Pod Autoscaler, but don't know how to enable it.</p>
<p><strong>Environment details:</strong></p>
<ul>
<li>Google Cloud platform</li>
<li>Kubernetes version 1.13.7-gke.8 (latest)</li>
</ul>
<p><strong>What did I try:</strong></p>
<ul>
<li><p>Using "kubectl api-versions", I checked what API groups I have enable for autoscaling and have only the following:
autoscaling/v1
autoscaling/v2beta1</p></li>
<li><p>I found the following <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="nofollow noreferrer">documentation</a> that says "Certain resources and API groups are enabled by default. They can be enabled or disabled by setting --runtime-config on apiserver. --runtime-config accepts comma separated values.". Taking a look at the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver command documentation</a>, I could not find a way to use this command with gcloud</p></li>
</ul>
| <p><strong>Update</strong>: upgrade your cluster to at least 1.15.5 to get <code>autoscaling/v2beta2</code>.</p>
<hr />
<p>Currently, GCP does not support <code>autoscaling/v2beta2</code>. This issue has already been reported to Google; more details and progress can be observed at: <a href="https://issuetracker.google.com/135624588" rel="nofollow noreferrer">https://issuetracker.google.com/135624588</a></p>
<p>However, you can use it freely in Minikube and Kubeadm.</p>
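<p>Once <code>autoscaling/v2beta2</code> shows up in <code>kubectl api-versions</code>, a custom-metrics HPA looks roughly like this (a sketch based on the walkthrough linked in the question; the deployment name, metric name, and target value are assumptions):</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: packets-per-second
      target:
        type: AverageValue
        averageValue: 1k
</code></pre>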
|
<p>I have two kubernetes clusters that do not talk to one another in any way. The idea is to maintain one prometheus instance (in a 3rd cluster) that can scrape endpoints from both clusters.</p>
<p>I created a service account in each cluster, gave it cluster role & clusterrolebinding and took an yaml file of the secret. I then imported the same secret in the 3rd cluster where I have prometheus running. Using these mounted secrets, I was able to pull data from all pods in cluster 1 and 2.</p>
<p>Are there any better options to achieve this use case?
I am, in a way, transferring secrets from one cluster to another to reuse the same ca.crt and token.</p>
| <p>I think it is not safe to share secrets between clusters.</p>
<p>What about Prometheus federation? One Prometheus instance can expose some of its data, which can then be scraped by an external Prometheus instance.</p>
<blockquote>
<p>For example, a cluster scheduler running multiple services might expose resource usage information (like memory and CPU usage) about service instances running on the cluster. On the other hand, a service running on that cluster will only expose application-specific service metrics. Often, these two sets of metrics are scraped by separate Prometheus servers.</p>
</blockquote>
<p>Or deploy an exporter that the external Prometheus can scrape, e.g. <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">https://github.com/kubernetes/kube-state-metrics</a> (though it does not provide CPU/memory usage of pods).</p>
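<p>A sketch of the federation scrape job on the central Prometheus (the target addresses and the <code>match[]</code> selector are assumptions; each cluster-local Prometheus exposes a <code>/federate</code> endpoint, and <code>match[]</code> selects which series to pull):</p>
<pre><code>scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="kubernetes-pods"}'
    static_configs:
      - targets:
          - 'prometheus-cluster1.example.com:9090'
          - 'prometheus-cluster2.example.com:9090'
</code></pre>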
|
<p>There is an option to mount volumes for dags and logs under the <code>kubernetes</code> section of airflow.conf, e.g.: </p>
<pre><code>[kubernetes]
airflow_configmap = airflow_config
worker_container_repository = airflow
worker_container_tag = runner2
worker_container_image_pull_policy = IfNotPresent
delete_worker_pods = true
dags_volume_claim = airflow-dags-pvc
dags_volume_subpath = airflow/development/dags
logs_volume_claim = airflow-logs-pvc
logs_volume_subpath = airflow/development/logs
namespace = development
</code></pre>
<p>This works as expected, i.e. I can see worker pods successfully mounting both of these volumes, with the relevant volumeMounts inside the container.</p>
<pre><code> "volumeMounts": [
{
"name": "airflow-dags",
"readOnly": true,
"mountPath": "/usr/local/airflow/dags",
"subPath": "airflow/development/dags"
},
{
"name": "airflow-logs",
"mountPath": "/usr/local/airflow/logs",
"subPath": "airflow/development/logs"
},
</code></pre>
<p>BUT, my worker pods depend on picking up custom airflow plugins from the directories <code>airflow/development/plugins</code> and <code>airflow/development/libs</code>. Because of this, I need to add more volumeMounts to the worker pod, with the relevant subPaths from the NFS server. How can I achieve that? I tried searching for a relevant config value but couldn't find any. </p>
<p><strong>Update:</strong> I was passing <code>executor_config</code> into one of the dag's sensor tasks as <code>executor_config={"KubernetesExecutor": {"image": "airflow:runner2"}}</code>. Looking at the code in <code>airflow/contrib/kubernetes/worker_configuration.py</code>, it seems that if I pass in volumes and volumeMounts as part of this config, it should work. I will try this and update here. </p>
| <p>Sorry, I had to answer my own question; maybe it will help someone. Inside the dag file, where I define the task, I simply had to add the <code>executor_config</code> as follows:</p>
<pre><code> IngestionStatusSensor(
task_id=...,
executor_config={"KubernetesExecutor": {
"image": "airflow:runner2",
"volume_mounts": [
{
"name": "airflow-dags",
"mountPath": "/usr/local/airflow/libs",
"subPath": "airflow/development/libs"
},
{
"name": "airflow-dags",
"mountPath": "/usr/local/airflow/plugins",
"subPath": "airflow/development/plugins"
}],
}
},
dag=dag,
ingestion_feed=table,
poke_interval=60 * 30,
offset=0,
start_date=start_date
)
</code></pre>
<p>where the <code>airflow-dags</code> volume is already defined by the pod creator, which claims the PVC configured under the <code>kubernetes</code> section of airflow.conf, e.g. <code>dags_volume_claim = airflow-pvc</code>.</p>
|
<p>The whole point of my setup is to achieve (if possible) the following:</p>
<ul>
<li>I have multiple k8s nodes</li>
<li>When I contact an IP address (from my company's network), it should be routed to one of my container/pod/service/whatever.</li>
<li>I should be able to easily setup that IP (like in my service .yml definition)</li>
</ul>
<p>I'm running a small Kubernetes cluster (built with kubeadm) in order to evaluate if I can move my Docker (old)Swarm setup to k8s. The feature I absolutely need is the ability to assign IP to containers, like I do with MacVlan.</p>
<p>In my current docker setup, I'm using MacVlan to assign IP addresses from my company's network to some containers so I can reach directly (without reverse-proxy) like if it's any physical server. I'm trying to achieve something similar with k8s.</p>
<p>I found out that:</p>
<ul>
<li>I have to use <em>Service</em></li>
<li>I can't use the <em>LoadBalancer</em> type, as it's only for compatible cloud providers (like GCE or AWS).</li>
<li>I should use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer"><em>ExternalIPs</em></a></li>
<li><em>Ingress Resources</em> are some kind of reverse proxy ?</li>
</ul>
<p>My yaml file is :</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
nodeSelector:
kubernetes.io/hostname: k8s-slave-3
---
kind: Service
apiVersion: v1
metadata:
name: nginx-service
spec:
type: ClusterIP
selector:
app: nginx
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
externalIPs:
- A.B.C.D
</code></pre>
<p>I was hoping that my service would get the IP A.B.C.D (which is on my company's network). My deployment is working, as I can reach my nginx container from inside the k8s cluster using its <em>ClusterIP</em>.</p>
<p>What am I missing? Or at least, where can I find information on my network traffic in order to see if packets are arriving?</p>
<pre><code>$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.96.0.1 <none> 443/TCP 6d
nginx-service 10.102.64.83 A.B.C.D 80/TCP 23h
</code></pre>
| <p>First of all run this command: </p>
<pre><code>kubectl get -n namespace services
</code></pre>
<p>Above command will return output like this:</p>
<pre><code> NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 <none> 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 <none> 3000:30017/TCP 13h
</code></pre>
<p>It is clear from the above output that external IPs are not assigned to the services yet. To assign an external IP to the <em>backend</em> service, run the following command:</p>
<pre><code> kubectl patch svc backend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
</code></pre>
<p>and to assign an external IP to the <em>frontend</em> service, run this command:</p>
<pre><code> kubectl patch svc frontend -p '{"spec":{"externalIPs":["192.168.0.194"]}}'
</code></pre>
<p>Now get the namespace services again to check whether the external IPs were assigned:</p>
<pre><code>kubectl get -n namespace services
</code></pre>
<p>We get an output like this:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend NodePort 10.100.44.154 192.168.0.194 9400:3003/TCP 13h
frontend NodePort 10.107.53.39 192.168.0.194 3000:30017/TCP 13h
</code></pre>
<p>Cheers! The Kubernetes external IPs are now assigned.</p>
|
<p>I use the kubernetes ingress-nginx controller and a set custom errors on GKE but I have some problems.</p>
<p>Goal:
If a 50x error occurs in <code>something-web-app</code>, I'll return the HTTP status code 200 and JSON <code>{"status":200, "message":"ok"}</code></p>
<p>Problems:</p>
<ol>
<li><p>I have read the <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-errors/" rel="nofollow noreferrer">custom-errors document</a> but there is no example of how to customize the <code>default-backend</code>.</p></li>
<li><p>I do not understand the difference between ConfigMap and Annotation.</p></li>
<li><p>How does ingress-nginx controller work in the first place.</p></li>
</ol>
| <p>You can do it in two ways:</p>
<ol>
<li>Adding annotations to the Ingress</li>
<li>Changing the ingress controller ConfigMap (which applies more to the backend)</li>
</ol>
<p><strong>1.</strong> Try adding these annotations to the Kubernetes Ingress:</p>
<blockquote>
<p>nginx.ingress.kubernetes.io/default-backend: nginx-errors-svc</p>
<p>nginx.ingress.kubernetes.io/custom-http-errors: 404,503</p>
</blockquote>
<p>(note that the <code>default-backend</code> annotation can only appear once; point it at whichever error service you deploy)</p>
<p>If that doesn't work, add this server snippet along with the annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
location @custom_503 {
return 404;
}
error_page 503 @custom_503;
</code></pre>
<p><strong>2.</strong> ConfigMap editing</p>
<p>You can apply this ConfigMap to the ingress controller:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration-ext
data:
  custom-http-errors: 502,503,504
  proxy-next-upstream-tries: "2"
  server-tokens: "false"
</code></pre>
<p>You can also refer to this blog: <a href="https://habr.com/ru/company/flant/blog/445596/" rel="nofollow noreferrer">https://habr.com/ru/company/flant/blog/445596/</a></p>
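<p>For the custom default-backend (problem 1), the <code>nginx-errors-svc</code> referenced by the annotation is just an ordinary Deployment/Service pair serving the error pages. A sketch; the image is an assumption based on the custom-errors example from the ingress-nginx docs:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-errors
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-errors
  template:
    metadata:
      labels:
        app: nginx-errors
    spec:
      containers:
      - name: nginx-error-server
        # image name is an assumption; see the ingress-nginx custom-errors example
        image: quay.io/kubernetes-ingress-controller/custom-error-pages-amd64:1.0.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-errors-svc
spec:
  selector:
    app: nginx-errors
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>Any backend that inspects the <code>X-Code</code>/<code>X-Format</code> headers set by the controller can then return your custom JSON body and status.</p>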
|
| <p>I have an HA cluster (say 3 masters, with 1 kube-scheduler pod on each master). There is a container running in each kube-scheduler pod.
Inside this container, there are two problems that need to be solved:</p>
<ol>
<li>How to know which kube-scheduler pod the container is running on?</li>
<li>How to know if the kube-scheduler this container is running on is a leader?</li>
</ol>
<p>I know the "holderIdentity" field of the "...kubernetes.io/leader" annotation of the pod will tell the ID of the leader. Then the only question is how to know which pod the container is running on.
Or there is a way I can simply know if the pod I'm running is a leader.</p>
| <p>You can check the logs of <code>kube-scheduler</code>.
You will see <code>lock is held by <HolderIdentity> and has not yet expired</code> in the logs of the non-leader pods, and <code>successfully acquired lease</code> or <code>successfully renewed lease</code> in the leader pod's logs.</p>
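<p>For the first question, the container can learn its own pod name via the Downward API and compare it against the <code>holderIdentity</code> from the leader annotation mentioned in the question. A sketch of the env injection (container name is illustrative):</p>
<pre><code>containers:
  - name: my-sidecar
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
</code></pre>
<p>The pod is the leader when <code>POD_NAME</code> matches the <code>holderIdentity</code> value.</p>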
|
<p>I am using Ubuntu 18 with minikube and VirtualBox, and I am trying to mount the host's directory in order to provide the input data my pod needs. </p>
<p>I found that minikube has issues with mounting host directories, but depending on your OS and VM driver, there are directories that are mounted by <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="noreferrer">default</a></p>
<p>I can't find those on my pods. They are simply not there. </p>
<p>I tried to create a persistent volume; it works, I can see it on my dashboard, but I can't mount it to the pod. I used this yaml to create the volume: </p>
<pre><code>{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "pv0003",
"selfLink": "/api/v1/persistentvolumes/pv0001",
"uid": "28038976-9ee4-414d-8478-b312a24a6b94",
"resourceVersion": "2030",
"creationTimestamp": "2019-08-08T10:48:23Z",
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"PersistentVolume\",\"metadata\":{\"annotations\":{},\"name\":\"pv0001\"},\"spec\":{\"accessModes\":[\"ReadWriteOnce\"],\"capacity\":{\"storage\":\"5Gi\"},\"hostPath\":{\"path\":\"/data/pv0001/\"}}}\n"
},
"finalizers": [
"kubernetes.io/pv-protection"
]
},
"spec": {
"capacity": {
"storage": "6Gi"
},
"hostPath": {
"path": "/user/data",
"type": ""
},
"accessModes": [
"ReadWriteOnce"
],
"persistentVolumeReclaimPolicy": "Retain",
"volumeMode": "Filesystem"
},
"status": {
"phase": "Available"
}
}
</code></pre>
<p>And this yaml to create the job. </p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi31
spec:
template:
spec:
containers:
- name: pi
image: perl
command: ["sleep"]
args: ["300"]
volumeMounts:
- mountPath: /data
name: pv0003
volumes:
- name: pv0003
hostPath:
path: /user/data
restartPolicy: Never
backoffLimit: 1
</code></pre>
<p>I also tried to create the volumes according to the so-called default mount paths, but with no success.</p>
<p>I tried to add the volume claim to the job creation yaml, still nothing.</p>
<p>When I mount the drives and create them in the job creation yaml files, the jobs are able to see the data that other jobs create, but it's invisible to the host, and the host's data is invisible to them.</p>
<p>I am running minikube from my main user, and I checked the logs in the dashboard; I am not getting any permissions errors.</p>
<p>Is there any way to get data into this minikube without setting up NFS? I am trying to use it for an MVP, the entire idea is for it to be simple...</p>
| <p>It's not that easy, since minikube runs inside a VM created by VirtualBox; that's why with hostPath you see the VM's file system instead of your PC's.</p>
<p>I would really recommend using the <code>minikube mount</code> command; you can find the description <a href="https://github.com/kubernetes/minikube/blob/master/docs/host_folder_mount.md" rel="noreferrer">there</a></p>
<p>From docs:</p>
<blockquote>
<p>minikube mount /path/to/dir/to/mount:/vm-mount-path is the recommended
way to mount directories into minikube so that they can be used in
your local Kubernetes cluster.</p>
</blockquote>
<p>So after that you can share your host's files inside minikube Kubernetes.</p>
<p>Edit:</p>
<p>Here is a step-by-step log of how to test it:</p>
<pre><code>➜ ~ minikube start
* minikube v1.3.0 on Ubuntu 19.04
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing virtualbox VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.15.2 on Docker 18.09.6 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
➜ ~ mkdir -p /tmp/test-dir
➜ ~ echo "test-string" > /tmp/test-dir/test-file
➜ ~ minikube mount /tmp/test-dir:/test-dir
* Mounting host path /tmp/test-dir into VM as /test-dir ...
- Mount type: <no value>
- User ID: docker
- Group ID: docker
- Version: 9p2000.L
- Message Size: 262144
- Permissions: 755 (-rwxr-xr-x)
- Options: map[]
* Userspace file server: ufs starting
* Successfully mounted /tmp/test-dir to /test-dir
* NOTE: This process must stay alive for the mount to be accessible ...
</code></pre>
<p>Now open another console:</p>
<pre><code>➜ ~ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ cat /test-dir/test-file
test-string
</code></pre>
<p>Edit 2:</p>
<p>example job.yml</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: test
spec:
template:
spec:
containers:
- name: test
image: ubuntu
command: ["cat", "/testing/test-file"]
volumeMounts:
- name: test-volume
mountPath: /testing
volumes:
- name: test-volume
hostPath:
path: /test-dir
restartPolicy: Never
backoffLimit: 4
</code></pre>
|
<p>I want to get the specific value of an annotation into a kubectl custom columns field. I can get all the current annotations on a resource like so:</p>
<pre><code>kubectl get pvc -o custom-columns=NAME:.metadata.name,"ANNOTATIONS":.metadata.annotations -n monitoring
</code></pre>
<p>This returns a map:</p>
<pre><code>NAME ANNOTATIONS
prometheus-k8s-db-prometheus-k8s-0 map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:kubernetes.io/aws-ebs]
prometheus-k8s-db-prometheus-k8s-1 map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:kubernetes.io/aws-ebs]
</code></pre>
<p>And considering <code>kubectl -o custom-columns</code> uses JSONpath to the best of my knowledge, I figured I could do this:</p>
<pre><code>kubectl get pvc -o custom-columns=NAME:.metadata.name,"ANNOTATIONS":".metadata.annotations['pv.kubernetes.io/bind-completed']" -n monitoring
</code></pre>
<p>But it seems not. Is there a way to do this?</p>
| <p>Okay, I figured this out. It's easier than I thought.</p>
<p>Annotations is a standard JSON element when it's returned. The problem is that <code>kubectl</code>'s JSONPath parser has problems with dots in elements, so you just have to escape them. Here's an example:</p>
<pre><code>kubectl get pvc -o custom-columns=NAME:.metadata.name,"ANNOTATIONS":".metadata.annotations.pv\.kubernetes\.io/bind-completed" -n monitoring
NAME ANNOTATIONS
prometheus-k8s-db-prometheus-k8s-0 yes
prometheus-k8s-db-prometheus-k8s-1 yes
</code></pre>
|
<p>We have a Kubernetes cluster which spins up 4 instances of our application. We'd like them to share a Hazelcast data grid and keep it in sync between these nodes. According to <a href="https://github.com/hazelcast/hazelcast-kubernetes" rel="nofollow noreferrer">https://github.com/hazelcast/hazelcast-kubernetes</a> the configuration is straightforward. We'd like to use the DNS approach rather than the kubernetes api.</p>
<p>With DNS we are supposed to be able to add the DNS name of our app as described <a href="https://github.com/kubernetes/kubernetes/tree/v1.0.6/cluster/addons/dns" rel="nofollow noreferrer">here</a>. So this would be something like myservice.mynamespace.svc.cluster.local.</p>
<p>The problem is that although we have 4 VMs spun up, only one Hazelcast network member is found; thus we see the following in the logs:</p>
<pre><code>Members [1] {
Member [192.168.187.3]:5701 - 50056bfb-b710-43e0-ad58-57459ed399a5 this
}
</code></pre>
<p>There don't seem to be any errors; it just doesn't see any of the other network members.</p>
<p>Here's my configuration. I've tried both using an xml file, like the example on the hazelcast-kubernetes git repo, and doing it programmatically. Neither attempt appears to work.</p>
<p>I'm using hazelcast 3.8.</p>
<hr>
<p>Using hazelcast.xml:</p>
<pre><code><hazelcast>
<properties>
<!-- only necessary prior Hazelcast 3.8 -->
<property name="hazelcast.discovery.enabled">true</property>
</properties>
<network>
<join>
<!-- deactivate normal discovery -->
<multicast enabled="false"/>
<tcp-ip enabled="false" />
<!-- activate the Kubernetes plugin -->
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.HazelcastKubernetesDiscoveryStrategy">
<properties>
<!-- configure discovery service API lookup -->
<property name="service-dns">myapp.mynamespace.svc.cluster.local</property>
<property name="service-dns-timeout">10</property>
</properties>
</discovery-strategy>
</discovery-strategies>
</join>
</network>
</hazelcast>
</code></pre>
<p>Using the XmlConfigBuilder to construct the instance.</p>
<pre><code>Properties properties = new Properties();
XmlConfigBuilder builder = new XmlConfigBuilder();
builder.setProperties(properties);
Config config = builder.build();
this.instance = Hazelcast.newHazelcastInstance(config);
</code></pre>
<hr>
<p>And Programmatically (personal preference if I can get it to work):</p>
<pre><code>Config cfg = new Config();
NetworkConfig networkConfig = cfg.getNetworkConfig();
networkConfig.setPort(hazelcastNetworkPort);
networkConfig.setPortAutoIncrement(true);
networkConfig.setPortCount(100);
JoinConfig joinConfig = networkConfig.getJoin();
joinConfig.getMulticastConfig().setEnabled(false);
joinConfig.getTcpIpConfig().setEnabled(false);
DiscoveryConfig discoveryConfig = joinConfig.getDiscoveryConfig();
HazelcastKubernetesDiscoveryStrategyFactory factory = new HazelcastKubernetesDiscoveryStrategyFactory();
DiscoveryStrategyConfig strategyConfig = new DiscoveryStrategyConfig(factory);
strategyConfig.addProperty("service-dns", kubernetesSvcsDnsName);
strategyConfig.addProperty("service-dns-timeout", kubernetesSvcsDnsTimeout);
discoveryConfig.addDiscoveryStrategyConfig(strategyConfig);
this.instance = Hazelcast.newHazelcastInstance(cfg);
</code></pre>
<hr>
<p>Is anyone familiar with this setup? I have ports 5701 - 5800 open. It seems Hazelcast starts up and recognizes that discovery mode is on, but only finds the one (local) node.</p>
<p>Here's a snippet from the logs for what it's worth. This was while using the xml file for config:</p>
<pre><code>2017-03-15 08:15:33,688 INFO [main] c.h.c.XmlConfigLocator [StandardLoggerFactory.java:49] Loading 'hazelcast-default.xml' from classpath.
2017-03-15 08:15:33,917 INFO [main] c.g.a.c.a.u.c.HazelcastCacheClient [HazelcastCacheClient.java:112] CONFIG: Config{groupConfig=GroupConfig [name=dev, password=********], properties={}, networkConfig=NetworkConfig{publicAddress='null', port=5701, portCount=100, portAutoIncrement=true, join=JoinConfig{multicastConfig=MulticastConfig [enabled=true, multicastGroup=224.2.2.3, multicastPort=54327, multicastTimeToLive=32, multicastTimeoutSeconds=2, trustedInterfaces=[], loopbackModeEnabled=false], tcpIpConfig=TcpIpConfig [enabled=false, connectionTimeoutSeconds=5, members=[127.0.0.1, 127.0.0.1], requiredMember=null], awsConfig=AwsConfig{enabled=false, region='us-west-1', securityGroupName='hazelcast-sg', tagKey='type', tagValue='hz-nodes', hostHeader='ec2.amazonaws.com', iamRole='null', connectionTimeoutSeconds=5}, discoveryProvidersConfig=com.hazelcast.config.DiscoveryConfig@3c153a1}, interfaces=InterfacesConfig{enabled=false, interfaces=[10.10.1.*]}, sslConfig=SSLConfig{className='null', enabled=false, implementation=null, properties={}}, socketInterceptorConfig=SocketInterceptorConfig{className='null', enabled=false, implementation=null, properties={}}, symmetricEncryptionConfig=SymmetricEncryptionConfig{enabled=false, iterationCount=19, algorithm='PBEWithMD5AndDES', key=null}}, mapConfigs={default=MapConfig{name='default', inMemoryFormat=BINARY', backupCount=1, asyncBackupCount=0, timeToLiveSeconds=0, maxIdleSeconds=0, evictionPolicy='NONE', mapEvictionPolicy='null', evictionPercentage=25, minEvictionCheckMillis=100, maxSizeConfig=MaxSizeConfig{maxSizePolicy='PER_NODE', size=2147483647}, readBackupData=false, hotRestart=HotRestartConfig{enabled=false, fsync=false}, nearCacheConfig=null, mapStoreConfig=MapStoreConfig{enabled=false, className='null', factoryClassName='null', writeDelaySeconds=0, writeBatchSize=1, implementation=null, factoryImplementation=null, properties={}, initialLoadMode=LAZY, writeCoalescing=true}, 
mergePolicyConfig='com.hazelcast.map.merge.PutIfAbsentMapMergePolicy', wanReplicationRef=null, entryListenerConfigs=null, mapIndexConfigs=null, mapAttributeConfigs=null, quorumName=null, queryCacheConfigs=null, cacheDeserializedValues=INDEX_ONLY}}, topicConfigs={}, reliableTopicConfigs={default=ReliableTopicConfig{name='default', topicOverloadPolicy=BLOCK, executor=null, readBatchSize=10, statisticsEnabled=true, listenerConfigs=[]}}, queueConfigs={default=QueueConfig{name='default', listenerConfigs=null, backupCount=1, asyncBackupCount=0, maxSize=0, emptyQueueTtl=-1, queueStoreConfig=null, statisticsEnabled=true}}, multiMapConfigs={default=MultiMapConfig{name='default', valueCollectionType='SET', listenerConfigs=null, binary=true, backupCount=1, asyncBackupCount=0}}, executorConfigs={default=ExecutorConfig{name='default', poolSize=16, queueCapacity=0}}, semaphoreConfigs={default=SemaphoreConfig{name='default', initialPermits=0, backupCount=1, asyncBackupCount=0}}, ringbufferConfigs={default=RingbufferConfig{name='default', capacity=10000, backupCount=1, asyncBackupCount=0, timeToLiveSeconds=0, inMemoryFormat=BINARY, ringbufferStoreConfig=RingbufferStoreConfig{enabled=false, className='null', properties={}}}}, wanReplicationConfigs={}, listenerConfigs=[], partitionGroupConfig=PartitionGroupConfig{enabled=false, groupType=PER_MEMBER, memberGroupConfigs=[]}, managementCenterConfig=ManagementCenterConfig{enabled=false, url='http://localhost:8080/mancenter', updateInterval=3}, securityConfig=SecurityConfig{enabled=false, memberCredentialsConfig=CredentialsFactoryConfig{className='null', implementation=null, properties={}}, memberLoginModuleConfigs=[], clientLoginModuleConfigs=[], clientPolicyConfig=PermissionPolicyConfig{className='null', implementation=null, properties={}}, clientPermissionConfigs=[]}, liteMember=false}
2017-03-15 08:15:33,949 INFO [main] c.h.i.DefaultAddressPicker [StandardLoggerFactory.java:49] [LOCAL] [dev] [3.8] Prefer IPv4 stack is true.
2017-03-15 08:15:33,960 INFO [main] c.h.i.DefaultAddressPicker [StandardLoggerFactory.java:49] [LOCAL] [dev] [3.8] Picked [192.168.187.3]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2017-03-15 08:15:34,000 INFO [main] c.h.system [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Hazelcast 3.8 (20170217 - d7998b4) starting at [192.168.187.3]:5701
2017-03-15 08:15:34,001 INFO [main] c.h.system [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Copyright (c) 2008-2017, Hazelcast, Inc. All Rights Reserved.
2017-03-15 08:15:34,001 INFO [main] c.h.system [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Configured Hazelcast Serialization version : 1
2017-03-15 08:15:34,507 INFO [main] c.h.s.i.o.i.BackpressureRegulator [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Backpressure is disabled
2017-03-15 08:15:35,170 INFO [main] c.h.i.Node [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Creating MulticastJoiner
2017-03-15 08:15:35,339 INFO [main] c.h.s.i.o.i.OperationExecutorImpl [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Starting 8 partition threads
2017-03-15 08:15:35,342 INFO [main] c.h.s.i.o.i.OperationExecutorImpl [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Starting 5 generic threads (1 dedicated for priority tasks)
2017-03-15 08:15:35,351 INFO [main] c.h.c.LifecycleService [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] [192.168.187.3]:5701 is STARTING
2017-03-15 08:15:37,463 INFO [main] c.h.system [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8] Cluster version set to 3.8
2017-03-15 08:15:37,466 INFO [main] c.h.i.c.i.MulticastJoiner [StandardLoggerFactory.java:49] [192.168.187.3]:5701 [dev] [3.8]
Members [1] {
Member [192.168.187.3]:5701 - 50056bfb-b710-43e0-ad58-57459ed399a5 this
}
</code></pre>
| <p>I know this happened quite a long time ago, but the problem here was using the wrong class name for the discovery strategy.</p>
<p>It should be <code>com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy</code></p>
|
<p>I am working on integrating <strong>Jenkins</strong> with <strong>Google Kubernetes</strong>. I want a pipeline for my project that performs the following steps:</p>
<ul>
<li>Build JAR from maven </li>
<li>Build docker image and push to google registry</li>
<li><strong>kubectl apply -f</strong> commands or <strong>Helm commands</strong> to run for redeploys, upgrades, downgrades, etc.</li>
</ul>
<p>I am familiar with the above commands individually, but I am new to this type of pipeline, where every time jenkins reminds me that I am running in a container, not on kubernetes.</p>
<p>I have a google cloud shell and i cannot directly install on that machine. So i have to somehow find a way to integrate jenkins to pass those commands directly to GKE environment.</p>
<p>i just want a proper step by step guide for anyone not familiar with how to achieve following points:</p>
<blockquote>
<ul>
<li>Have GKE</li>
<li>Have jenkins pod on it </li>
<li>Know all commands to execute (docker, gcloud, kubectl, helm etc)</li>
<li><strong>Just need an integration between jenkins and GKE to utilize all above</strong></li>
</ul>
</blockquote>
<p>Kindly bear with me if I am unable to explain it well. Ask for anything else you need to resolve this. Thanks</p>
| <p>If your Jenkins pod is running inside GKE, just go for <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">accessing the api from a pod</a> and wrap your last step into a small python/go script.</p>
<p>If you prefer to use <code>kubectl</code>, you could
set up the correct RBAC, for example (this one allows listing pods and services):</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: list
rules:
- apiGroups: [""]
resources: ["services", "pods"]
verbs: ["get", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: list
subjects:
- kind: ServiceAccount
name: default
namespace: default
roleRef:
kind: ClusterRole
  name: list
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>and then</p>
<pre><code>kubectl run --restart=Never --rm -it ubuntu --image ubuntu:18.04
</code></pre>
<pre><code>{
apt-get update
apt-get install -y apt-transport-https curl gnupg
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF | tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubectl
}
</code></pre>
<p><code>kubectl get po,svc</code></p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/ubuntu 1/1 Running 0 16m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 122d
</code></pre>
|
<p>I have a tomcat + postgres application that I test with docker-compose. I am trying to package the application in a kubernetes config file.</p>
<p>For now, I am running kubernetes (and kubectl) using my Docker Desktop for Windows installation. Eventually, I want to deploy to other environments.</p>
<p>I am currently trying to replicate some of the volume functionality in docker-compose within the following config file.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: pg-pod
spec:
volumes:
- name: "pgdata-vol"
#emptyDir: {}
hostPath:
path: /c/temp/vols/pgdata
containers:
- image: postgres
name: db
ports:
- containerPort: 5432
name: http
protocol: TCP
volumeMounts:
- mountPath: "/pgdata"
name: "pgdata-vol"
env:
- name: PGDATA
value: /pgdata
</code></pre>
<p>When postgres launches, I get see the following error.</p>
<pre><code>fixing permissions on existing directory /pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
2019-07-26 20:43:41.844 UTC [78] FATAL: data directory "/pgdata" has wrong ownership
2019-07-26 20:43:41.844 UTC [78] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/pgdata"
running bootstrap script ...
</code></pre>
<p>I presume that I either need to provide some additional parameters to my volume definition or I need to try a different type of volume config (<strong>local</strong> vs <strong>hostPath</strong>).</p>
| <p>I found a partial solution to this issue.</p>
<p>Interestingly, if I assign a linux-style path as my host-path (on Windows), then my pgdata-vol persists until Docker Desktop is restarted.</p>
<p>Instead of mounting to a real windows location</p>
<pre><code> volumes:
- name: "pgdata-vol"
hostPath:
path: /c/temp/vols/pgdata
</code></pre>
<p>I use a "linux" location as my Windows hostPath</p>
<pre><code> volumes:
- name: "pgdata-vol"
hostPath:
path: /tmp/vols/pgdata
</code></pre>
<p>Curiously, I cannot actually find this path from Windows. I presume this /tmp is local to my Docker Desktop instance.</p>
<p>This solution does not offer true persistence, but it has helped me to work around a roadblock that was impacting testing.</p>
|
<p>In my k8s cluster I have two pods, podA and podB, both in the same cluster. The microservice on pod B is a Spring Boot REST API. The microservice on pod A has the IP and port of pod B in its application.yaml. Now every time podB is recreated, the IP changes, which forces us to change the IP in the application.yml of podA. Please suggest a better way.</p>
<p>My limitation is : I can't change the code of podA.</p>
| <p>A <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> will provide a consistent DNS name for accessing Pods.</p>
<p>An application should never address a Pod directly unless you have a specific reason to (custom load balancing is one I can think of, or StatefulSets where pods have an identity).</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- protocol: TCP
port: 80
targetPort: 9376
</code></pre>
<p>You will then have a consistent DNS name to access any Pods that match the <code>selector</code>:</p>
<pre><code>my-service.default.svc.cluster.local
</code></pre>
|
<p>I am new to kubernetes and trying to understand when to use kubectl autoscale and kubectl scale commands</p>
| <p><strong>Scale</strong> in a deployment tells how many pods should always be running to ensure proper working of the application. You have to specify it manually.
In YAML you define it in <code>spec.replicas</code> like in the example below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
<p>Second way to specify scale (replicas) of deployment is use command.</p>
<pre><code>$ kubectl run nginx --image=nginx --replicas=3
deployment.apps/nginx created
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 3 3 3 3 11s
</code></pre>
<p>It means that the deployment will have 3 pods running and Kubernetes will always try to maintain this number of pods (if any of the pods crashes, K8s will recreate it). You can always change it in <code>spec.replicas</code> and use <code>kubectl apply -f <name-of-deployment></code> or via command</p>
<pre><code>$ kubectl scale deployment nginx --replicas=10
deployment.extensions/nginx scaled
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 10 10 10 10 4m48s
</code></pre>
<p>Please read in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">documentation</a> about scaling and replicasets.</p>
<p><strong>Horizontal Pod Autoscaling</strong> (HPA) was invented to scale a deployment based on metrics produced by pods. For example, if your application receives about 300 HTTP requests per minute and each of your pods can handle 100 HTTP requests per minute, it will be ok. However, if you receive a huge number of HTTP requests, ~1000, 3 pods will not be enough and 70% of the requests will fail. When you use <code>HPA</code>, the deployment will autoscale to run 10 pods to handle all requests. After some time, when the number of requests drops to 500/minute, it will scale down to 5 pods. Later it might go up or down depending on the request number and your HPA configuration.</p>
<p>Easiest way to apply autoscale is:</p>
<pre><code>$ kubectl autoscale deployment <your-deployment> --<metrics>=value --min=3 --max=10
</code></pre>
<p>It means that the autoscaler will automatically scale based on metrics up to a maximum of 10 pods, and later it will downscale to a minimum of 3.
A very good example is shown in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#run-expose-php-apache-server" rel="noreferrer">HPA documentation</a> with CPU usage.</p>
<p>Please keep in mind that Kubernetes can use many types of metrics based on API (HTTP/HTTP request, CPU/Memory load, number of threads, etc.)</p>
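<p>The <code>kubectl autoscale</code> command above can also be written declaratively as a manifest. A minimal sketch based on CPU usage (the deployment name and thresholds are placeholders):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
</code></pre>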
<p>Hope this helps you understand the difference between Scale and Autoscaling.</p>
|
<p>I was <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">reading the Kubernetes documentation about jobs and retries</a>. I found this:</p>
<blockquote>
<p>There are situations where you want to fail a Job after some amount of
retries due to a logical error in configuration etc. To do so, set
.spec.backoffLimit to specify the number of retries before considering
a Job as failed. The back-off limit is set by default to 6. Failed
Pods associated with the Job are recreated by the Job controller with
an exponential back-off delay (10s, 20s, 40s …) capped at six minutes.
The back-off count is reset if no new failed Pods appear before the
Job’s next status check.</p>
</blockquote>
<p>I had two questions about the above quote:</p>
<ol>
<li>The back-off limit value is on minutes or number of retries? The documentation example using the value <strong>6 (six)</strong> is confuse, because he initially affirms that the value is the number of retries but after that said "capped at <strong>six</strong> minutes".</li>
<li>There is a way to define the back-off delay time? As I understand, this behavior (10s, 20s, 40s …) is default and can't be changed.</li>
</ol>
| <p>No confusion here: <code>.spec.backoffLimit</code> is the number of retries.</p>
<p>The Job controller recreates the failed Pods (associated with the Job) with an exponential back-off delay (10s, 20s, 40s, ..., 360s). This delay sequence is fixed by the Job controller.</p>
<ul>
<li>If the Pod fails, after 10s new Pod will be created</li>
<li>If fails again, after 20s new one will be created</li>
<li>If fails again, after 40s new one comes</li>
<li>If fails again, next one comes after 80s (1m 20s)</li>
<li>If fails again, next one comes after 160s (2m 40s)</li>
<li>If fails again, after 320s (5m 20s), new Pod comes</li>
<li>If fails again, after 360s (not 640s, because that would exceed the 360s / 6m cap) you will see the next one </li>
</ul>
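<p>For reference, here is a minimal sketch of where <code>backoffLimit</code> lives in a Job spec (image and command are placeholders; this Job always fails so you can watch the back-off delays):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: backoff-demo
spec:
  backoffLimit: 4          # mark the Job as failed after 4 retries
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "exit 1"]
</code></pre>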
|
<p>I have a Kubernetes cluster deployed locally to a node prepped by kubeadm.
I am experimenting with one of the pods. This pod fails to deploy, however I can't locate the cause of it. I have guesses as to what the problem is but I'd like to see something related in the Kubernetes logs</p>
<p>Here's what i have tried:</p>
<pre><code>$kubectl logs nmnode-0-0 -c hadoop -n test
</code></pre>
<pre><code>Error from server (NotFound): pods "nmnode-0-0" not found
</code></pre>
<pre><code>$ kubectl get event -n test | grep nmnode
(empty results here)
</code></pre>
<pre><code>$ journalctl -m |grep nmnode
</code></pre>
<p>and I get a bunch of repeated entries like the following. It talks about killing the pod but it gives no reason whatsoever for it</p>
<pre><code>Aug 08 23:10:15 jeff-u16-3 kubelet[146562]: E0808 23:10:15.901051 146562 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"nmnode-0-0.15b92c3ff860aed6", GenerateName:"", Namespace:"test", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"test", Name:"nmnode-0-0", UID:"743d2876-69cf-43bc-9227-aca603590147", APIVersion:"v1", ResourceVersion:"38152", FieldPath:"spec.containers{hadoop}"}, Reason:"Killing", Message:"Stopping container hadoop", Source:v1.EventSource{Component:"kubelet", Host:"jeff-u16-3"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf4b616dacae12d6, ext:2812562895486, loc:(*time.Location)(0x781e740)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf4b616dacae12d6, ext:2812562895486, loc:(*time.Location)(0x781e740)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "nmnode-0-0.15b92c3ff860aed6" is forbidden: unable to create new content in namespace test because it is being terminated' (will not retry!)
</code></pre>
<p>The shorted version of the above message is this:</p>
<pre><code>Reason:"Killing", Message:"Stopping container hadoop",
</code></pre>
<p>The cluster is still running. Do you know how I can get to the bottom of this?</p>
| <p>Try to execute command below:</p>
<pre><code>$ kubectl get pods --all-namespaces
</code></pre>
<p>Take a look if your pod was not created in a different namespace.</p>
<p>Most common reasons for pod failures:</p>
<p><strong>1.</strong> The container was never created because it failed to pull the image.</p>
<p><strong>2.</strong> The container never existed in the runtime, and the error reason is not in the "special error list", so the containerStatus was never set and kept as "no state".</p>
<p><strong>3.</strong> The container was then treated as "Unknown" and the pod was reported as Pending without any reason.
The containerStatus was always "no state" after each syncPod(), so the status manager could never delete the pod even though the DeletionTimestamp was set.</p>
<p>Useful article: <a href="https://kukulinski.com/10-most-common-reasons-kubernetes-deployments-fail-part-1/" rel="nofollow noreferrer">pod-failure</a>.</p>
|
<p>I have my company S3 (<code>companys3</code>) bucket with multiple files for example <code>file1</code>, <code>file2</code> and <code>file3</code>. And client S3 bucket (<code>clients3</code>) with some files that i don't know.</p>
<p>What I want is the solution for opening only <code>file2</code> from <code>companys3</code> to <code>clients3</code>.</p>
<p>I found solutions about how to copy/clone whole buckets. But couldn't find any that copy only specific files.</p>
<p>Until now we have copied files through Kubernetes pods, but the files have become too large to handle this way (over 20 GB for a single file), so I am searching for a solution that allows us to stop using Kubernetes pods to transfer files to clients. </p>
| <p>You can use S3 command line (awscli).</p>
<pre><code>aws s3 cp s3://COMPANY_BUCKET/filename s3://CLIENT_BUCKET/filename
</code></pre>
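<p>Alternatively, if the goal is only to make that one file readable by the client, you can skip copying entirely and attach a bucket policy to <code>companys3</code> that grants the client's AWS account read access to just that key (the account ID below is a placeholder):</p>
<pre><code>{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowClientToReadFile2",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::CLIENT_ACCOUNT_ID:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::companys3/file2"
    }
  ]
}
</code></pre>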
|
<p>Looking if the below scenario is possible or not -</p>
<p>Lets say user(<code>user1</code>) have access only to namespaces <code>default</code> and <code>marketing</code>. </p>
<p>When we perform <code>kubectl get ns</code> it should display both namespaces.</p>
<p>No other namespaces should be displayed even if they exists because the <code>user1</code> does not have access to any other namespaces.</p>
<p>We could relate this scenario with the databases where a user can see only the databases they have access to when <code>show databases</code> is performed</p>
| <p>This isn't possible in Kubernetes. Namespaces are the resources providing the scoping mechanism to limit visibility into other resources. There's no meta-namespace that provides scoping rules for namespaces.</p>
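<p>For completeness: per-namespace access itself is granted with RoleBindings, e.g. a sketch giving <code>user1</code> read access in <code>default</code> (repeat for <code>marketing</code>). Note that this restricts what the user can do inside each namespace but does not filter the output of <code>kubectl get ns</code>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user1-view
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
</code></pre>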
|
<p>From <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">this</a> article, I can specify 'userspace' as my proxy-mode, but I am unable to understand what command I need to use for it and at what stage? Like after creating deployment or service?
I am running a minikube cluster currently.</p>
| <p><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer"><code>kube-proxy</code></a> is a process that runs on each kubernetes node to <a href="https://supergiant.io/blog/understanding-kubernetes-kube-proxy/" rel="nofollow noreferrer">manage network connections coming into and out of kubernetes</a>.</p>
<p>You don't run the command as such, but your deployment method (usually kubeadm) configures the options for it to run. </p>
<p>As @Hang Du mentioned, in minikube you can modify it's options by editing the <code>kube-proxy</code> configmap and changing <code>mode</code> to <code>userspace</code></p>
<pre><code>kubectl -n kube-system edit configmap kube-proxy
</code></pre>
<p>Then delete the Pod. </p>
<pre><code>kubectl -n kube-system get pod
kubectl -n kube-system delete pod kube-proxy-XXXXX
</code></pre>
|
<p>I see some strange logs in my kong container, which internally uses nginx:</p>
<pre><code>2019/08/07 15:54:18 [info] 32#0: *96775 client closed connection while SSL handshaking, client: 10.244.0.1, server: 0.0.0.0:8443
</code></pre>
<p>This happens every 5 secs, like some sort of diagnostic is on.
In my kubernetes descriptor I set no readiness or liveness probe, so I can't understand why there are those calls and how I can prevent them from appearing, as they only dirty my logs...</p>
<p><strong>edit</strong>:
It seems it's the LoadBalancer service: I tried deleting it and I get no logs anymore...how to get rid of those logs though?</p>
| <p>This has already been discussed on the Kong forum in the <a href="https://discuss.konghq.com/t/stopping-logs-generated-by-the-aws-elb-health-check/3752" rel="nofollow noreferrer">Stopping logs generated by the AWS ELB health check</a> thread:
the same behaviour, with an LB healthcheck every few seconds.</p>
<blockquote>
<p>Make Kong listen on plain HTTP port, open that port up only to the
subnet in which ELB is running (public most probably), and then don’t
open up port 80 on the ELB. So ELB will be able to Talk on port 80 for
health-check but there won’t be a HTTP port available to external
world.</p>
<p>Use L4 proxying (stream_listen) in kong, open up the port and
then make ELB healthcheck that port.</p>
</blockquote>
<p>Both of solutions are reasonable.</p>
|
<p>I have deployed minikube on MacOS using the instructions here <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-minikube/</a></p>
<p>The brew install was ok and the minikube status shows</p>
<pre><code> $ minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.102
</code></pre>
<p>I am able to interact with the cluster using kubectl</p>
<pre><code>$kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
</code></pre>
<p>Viewing the pods is also ok</p>
<pre><code>$kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-minikube-856979d68c-glhsx 1/1 Running 0 18m
</code></pre>
<p>But when i try to launch the kubectl dashboard, i get 503 error</p>
<pre><code>$minikube dashboard
Temporary Error: unexpected response code: 503
Temporary Error: unexpected response code: 503
</code></pre>
<p>The Dashboard service seems to present</p>
<pre><code> $kubectl -n kube-system get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3h19m k8s-app=kube-dns
kubernetes-dashboard ClusterIP 10.109.210.119 <none> 80/TCP 119m app=kubernetes-dashboard
</code></pre>
<p>below is the kubectl version info</p>
<pre><code> $kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T16:57:42Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Any pointers on what is missing? How can I get the dashboard working?</p>
<p>Thanks<br>
Praveen</p>
| <p>Check <code>kubectl cluster-info</code>, you can find more <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#discovering-builtin-services" rel="nofollow noreferrer">here</a></p>
<pre><code>kubectl -n kube-system port-forward svc/kubernetes-dashboard 8080:80
</code></pre>
<p>Your dashboard should be accessible on <code>http://localhost:8080</code>, keep in mind that the dashboard is deprecated so you can check <a href="https://github.com/vmware/octant" rel="nofollow noreferrer">octant</a>.</p>
|
<p>I have an application that is running inside of Kubernetes which needs to display a map using Leaflet, the map data comes from Openstreetmap. </p>
<p>The code I use to set up the map looks like this:</p>
<pre class="lang-js prettyprint-override"><code>map = L.map('mapid', {
center: [lat, long],
zoom: 19
});
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
attribution: '&copy; <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors'
}).addTo(map);
</code></pre>
<p>What is bothering me right now is the url <code>{s}.tile.openstreetmap.org</code>. As the openstreetmap data resides outside the k8s cluster, I need to create a Service in Kubernetes. </p>
<p>I tried to define these services: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: a.tile.openstreetmap.org
spec:
type: ExternalName
externalName: a.tile.openstreetmap.org
---
apiVersion: v1
kind: Service
metadata:
name: b.tile.openstreetmap.org
spec:
type: ExternalName
externalName: b.tile.openstreetmap.org
---
apiVersion: v1
kind: Service
metadata:
name: c.tile.openstreetmap.org
spec:
type: ExternalName
externalName: c.tile.openstreetmap.org
</code></pre>
<p>However that resulted in the following error message when deploying the service:</p>
<pre><code>Error from server (Invalid): error when creating "openstreetmap-service.yaml": Service "a.tile.openstreetmap.org" is invalid: metadata.name: Invalid value: "a.tile.openstreetmap.org": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation is '[a-z]([-a-z0-9]*[a-z0-9])?')
Error from server (Invalid): error when creating "openstreetmap-service.yaml": Service "b.tile.openstreetmap.org" is invalid: metadata.name: Invalid value: "b.tile.openstreetmap.org": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation is '[a-z]([-a-z0-9]*[a-z0-9])?')
Error from server (Invalid): error when creating "openstreetmap-service.yaml": Service "c.tile.openstreetmap.org" is invalid: metadata.name: Invalid value: "c.tile.openstreetmap.org": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation is '[a-z]([-a-z0-9]*[a-z0-9])?')
</code></pre>
<p>I understand that I am not allowed to use dots in .metadata.name, but is there a different possibility to achieve this? As far as I can see from the description of the leaflet source, the URL in TileLayer needs to be something like <code>http://{s}.somedomain.com/blabla/{z}/{x}/{y}{r}.png</code></p>
| <p>Calling an external service by its name shouldn't be a problem. <code>ExternalName</code> can be used if the service needs to be available as a Kubernetes resource, for example to proxy an ingress route to an external service.</p>
<p>If you still want to use the <code>ExternalName</code> service instead of the real FQDN, simply use valid names like a-tile-openstreetmap-org and replace the domain name in your client script with that name. You might have to set a correct HTTP host header though to avoid problems with the target server.</p>
<p>I'd still suggest using the real name, as it's simple and straight forward and there's no benefit in aliasing it.</p>
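<p>As a sketch, one of the services from the question rewritten with a valid DNS-1035 name (the name itself is arbitrary):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: a-tile-openstreetmap-org
spec:
  type: ExternalName
  externalName: a.tile.openstreetmap.org
</code></pre>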
|
<p>I'm building a dashboard in Grafana, using data from Prometheus, to monitor a namespace in a Kubernetes cluster. I need all this to see what happens during a load test.</p>
<p>Now I've spent half my day looking for information about the different metrics in Prometheus. I've read through <a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="nofollow noreferrer">Prometheus docs</a> and <a href="https://github.com/kubernetes/kube-state-metrics/tree/master/docs" rel="nofollow noreferrer">kube state metrics docs</a> (which gets the data from our cluster) but I did not find any descriptions about which metric does what. I only can guess based on query results and examples found here and there, but that's slow and way more insecure than I'd like.</p>
<p>However, I've come upon <a href="https://stackoverflow.com/questions/40327062/how-to-calculate-containers-cpu-usage-in-kubernetes-with-prometheus-as-monitori">this</a> SO answer so I assume the quote must have been copied from somewhere. Anybody please?</p>
| <p>I found the answer for my question. <a href="https://stackoverflow.com/a/57417515/7160820">yogesh's answer</a> gave me the hint to have a look at exporters and I found the other half of the answer <a href="https://prometheus.io/docs/guides/node-exporter/" rel="noreferrer">here</a>.</p>
<p>So on Prometheus UI, there is a list of exporters and their scraped endpoints (Status > Targets). If I call one of the endpoints, the response contains a description and the type of each metric offered by the endpoint.</p>
<p>Call:</p>
<pre class="lang-sh prettyprint-override"><code>curl http://exporter.endpoint:9100/metrics
</code></pre>
<p>Sample from a response:</p>
<pre><code># HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} 16310
http_response_size_bytes{handler="prometheus",quantile="0.9"} 16326
http_response_size_bytes{handler="prometheus",quantile="0.99"} 16337
http_response_size_bytes_sum{handler="prometheus"} 1.46828673e+09
http_response_size_bytes_count{handler="prometheus"} 90835
# HELP node_arp_entries ARP entries by device
# TYPE node_arp_entries gauge
node_arp_entries{device="cali6b3dc03715a"} 1
node_arp_entries{device="eth0"} 15
</code></pre>
<p>It wasn't trivial for me how to get this done. I logged into the cluster and I found <code>curl</code> didn't get any response for any of the endpoints. The solution was to add the base of the endpoint urls to the <code>no_proxy</code> variable (my machine and the server are both sitting behind the corporate proxy).</p>
<p>But anyway, this is how one can read the description of the Prometheus metrics.</p>
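<p>If you want to turn those endpoint responses into a quick reference of metric descriptions, a few lines of scripting are enough. A minimal sketch (not an official Prometheus client) that collects the <code># HELP</code> and <code># TYPE</code> comment lines:</p>

```python
def parse_metric_docs(text):
    """Collect each metric's HELP description and TYPE from
    Prometheus text exposition output."""
    docs = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# HELP "):
            name, _, help_text = line[len("# HELP "):].partition(" ")
            docs.setdefault(name, {})["help"] = help_text
        elif line.startswith("# TYPE "):
            name, _, metric_type = line[len("# TYPE "):].partition(" ")
            docs.setdefault(name, {})["type"] = metric_type
    return docs

sample = """\
# HELP node_arp_entries ARP entries by device
# TYPE node_arp_entries gauge
node_arp_entries{device="eth0"} 15
"""
print(parse_metric_docs(sample))
# → {'node_arp_entries': {'help': 'ARP entries by device', 'type': 'gauge'}}
```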
|
<p>I need to be able to log the source IP address trying to access my application.
The problem is that I have the current setup:</p>
<pre><code>service load balancer-->kong-->myy application
</code></pre>
<p>Kong is very good and automatically adds an "X-Forwarded-for" header to requests coming to my application so that I don't get Kong's IP...but this header only contains the load balancer IP!</p>
<p>What should happen is that in that header there should be the client's IP...is that possible?</p>
| <p>You need to set <code>externalTrafficPolicy: Local</code> in the service definition, this will preserve clients ip.</p>
<p>Reading <a href="https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies" rel="nofollow noreferrer">here</a> and <a href="https://github.com/Azure/AKS/issues/607" rel="nofollow noreferrer">here</a> </p>
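<p>A sketch of where the field goes in a LoadBalancer Service (ports and selector are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>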
|
<p>This question is relevant to local cluster (kubernetes 1.10+ version, Linux). After creating a StatefulSet to manage several pods, I met a use case that requires setting up FQDN identity for the pods.</p>
<p>For instance, I wish to have the first pod with FQDN identity of <code><some-statefulset>-0.<some-subdomain>.default.svc.cluster.local</code>. Desired behavior can be achieved by adding <code><IP-address> <some-statefulset>-0.<some-subdomain>.default.svc.cluster.local</code> into <code>/etc/hosts</code> file, but I want to have the FQDN to be identified automatically in the local cluster without changing the <code>/etc/hosts</code> file.</p>
<p>Not sure whether I need to enable some settings when starting the local cluster.</p>
| <p>By default, <code>cluster.local</code> is the internal domain. To reach a service from pods within the cluster, use:</p>
<blockquote>
<p>svc_name.namespace_name.svc.cluster.local</p>
</blockquote>
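<p>To get per-pod FQDNs like <code><some-statefulset>-0.<some-subdomain>.default.svc.cluster.local</code>, the StatefulSet must reference a headless Service through <code>serviceName</code>. A minimal sketch (all names are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: some-subdomain          # must match the StatefulSet's serviceName
spec:
  clusterIP: None               # headless: each pod gets its own DNS record
  selector:
    app: some-app
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: some-statefulset
spec:
  serviceName: some-subdomain
  replicas: 2
  selector:
    matchLabels:
      app: some-app
  template:
    metadata:
      labels:
        app: some-app
    spec:
      containers:
      - name: main
        image: nginx
</code></pre>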
|
<p>I'm running a <code>Kubernetes</code> cluster with <code>minikube</code> and my deployment (or individual Pods) won't stay running even though I specify in the <code>Dockerfile</code> that it should leave a terminal open (I've also tried it with <code>sh</code>). They keep getting restarted and sometimes they get stuck on a <code>CrashLoopBackOff</code> status before restarting again:</p>
<pre><code>FROM ubuntu
EXPOSE 8080
CMD /bin/bash
</code></pre>
<p>My deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sleeper-deploy
spec:
replicas: 10
selector:
matchLabels:
app: sleeper-world
minReadySeconds: 10
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: sleeper-world
spec:
containers:
- name: sleeper-pod
image: kubelab
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
</code></pre>
<p>All in all, my workflow follows (<code>deploy.sh</code>):</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
# Cleaning
kubectl delete deployments --all
kubectl delete pods --all
# Building the Image
sudo docker build \
-t kubelab \
.
# Deploying
kubectl apply -f sleeper_deployment.yml
</code></pre>
<p>By the way, I've tested the Docker Container solo using <code>sudo docker run -dt kubelab</code> and <em>it does stay up</em>. Why doesn't it stay up within <code>Kubernetes</code>? Is there a parameter (in the YAML file) or a flag I should be using for this special case?</p>
| <h1>1. Original Answer (but edited...)</h1>
<p>If you are familiar with Docker, check <a href="https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/" rel="nofollow noreferrer">this</a>.</p>
<p>If you are looking for an equivalent of <code>docker run -dt kubelab</code>, try <code>kubectl run -it kubelab --restart=Never --image=ubuntu /bin/bash</code>. In your case, with the Docker <code>-t</code> flag: <a href="https://docs.docker.com/engine/reference/run/" rel="nofollow noreferrer">Allocate a pseudo-tty</a>. That's why your Docker Container stays up.</p>
<p>Try:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run kubelab \
--image=ubuntu \
--expose \
--port 8080 \
-- /bin/bash -c 'while true;do sleep 3600;done'
</code></pre>
<p>Or:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run kubelab \
--image=ubuntu \
--dry-run -oyaml \
--expose \
--port 8080 \
-- /bin/bash -c 'while true;do sleep 3600;done'
</code></pre>
<h1>2. Explaining what's going on (Added by Philippe Fanaro):</h1>
<p>As stated by @David Maze, the <code>bash</code> process is going to exit immediately because the artificial terminal won't have anything going into it, a slightly different behavior from Docker.</p>
<p>If you change the <code>restart</code> Policy, it will still terminate, the difference is that the Pod won't regenerate or restart.</p>
<p>One way of doing it is (pay attention to the tabs of <code>restartPolicy</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kubelab-pod
labels:
zone: prod
version: v1
spec:
containers:
- name: kubelab-ctr
image: kubelab
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
restartPolicy: Never
</code></pre>
<p>However, this will <strong>not</strong> work if it is specified inside a <code>deployment</code> YAML. And that's because deployments force regeneration, trying to always get to the <em>desired state</em>. This can be confirmed in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment Documentation Webpage</a>:</p>
<blockquote>
<p>Only a <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer"><code>.spec.template.spec.restartPolicy</code></a> equal to <code>Always</code> is allowed, which is the default if not specified.</p>
</blockquote>
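<p>If you really need run-to-completion semantics while still using a controller, a <code>Job</code> is the resource designed for that, since it allows <code>restartPolicy: Never</code> or <code>OnFailure</code>. A minimal sketch, assuming the same <code>kubelab</code> image:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: kubelab-job
spec:
  template:
    spec:
      containers:
      - name: kubelab-ctr
        image: kubelab
        imagePullPolicy: IfNotPresent
      restartPolicy: Never
</code></pre>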
<h1>3. If you really wish to force the Docker Container to Keep Running</h1>
<p>In this case, you will need something that doesn't exit. A <em>server-like process</em> is one example. But you can also try something mentioned in <a href="https://stackoverflow.com/a/35770783/4756173">this StackOverflow answer</a>:</p>
<pre><code>CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
</code></pre>
<blockquote>
<p>This will keep your container alive until it is told to stop. Using trap and wait will make your container <strong>react immediately to a stop request</strong>. Without trap/wait stopping will take a few seconds.</p>
</blockquote>
|
<p>I'm trying to run android emulator container, over GKE. For this, I'm using <a href="https://github.com/budtmo/docker-android" rel="nofollow noreferrer">budtmo/docker-android</a> open source.</p>
<p>First, I tried to run it locally over docker:</p>
<pre><code>$ sudo docker run --privileged -d -p 6080:6080 -p 5554:5554 -p 4723:4723 -p 5555:5555 -e DEVICE="Samsung Galaxy S6" --name android-container budtmo/docker-android-x86-8.1
</code></pre>
<p>Then I connected to the device, using:</p>
<pre><code>$ adb connect localhost:5555
</code></pre>
<p>And I saw the device:</p>
<pre><code>>> $ adb devices
List of devices attached
localhost:5555 device
</code></pre>
<p><strong>works great!</strong></p>
<p><strong>Now I'm trying to do same thing over GKE:</strong></p>
<p>This is the pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: android
labels:
app: android
spec:
containers:
- name: android
image: budtmo/docker-android-x86-8.1
securityContext:
privileged: true
ports:
- containerPort: 6080
- containerPort: 5554
- containerPort: 5555
- containerPort: 4723
env:
- name: DEVICE
value: "Samsung Galaxy S6"
</code></pre>
<p>This is the service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: android-service
spec:
ports:
- port: 6080
name: serving
protocol: TCP
- port: 5555
name: srv
protocol: TCP
- port: 5554
name: srv2
protocol: TCP
- port: 4723
name: novnc
protocol: TCP
selector:
app: android
type: LoadBalancer
loadBalancerIp: "35.X.X.X"
</code></pre>
<p>Then I'm trying to connect to the emulator (from my computer), but after it claims it connected, I don't see any devices attached:</p>
<pre><code>>> $ adb connect 35.X.X.X:5555
connected to 35.X.X.X:5555
>> $ adb devices
List of devices attached
*empty*
</code></pre>
<p>I also tried to connect locally from the GKE terminal:</p>
<pre><code>adb connect 35.X.X.X:5555
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
connected to 35.X.X.X:5555
</code></pre>
<p>Then again:</p>
<pre><code>>> $ adb devices
List of devices attached
*empty*
</code></pre>
<p>Any idea what is the problem, and how can I fix it?</p>
| <p>I reproduced your steps with the only difference that I haven't specified <code>loadBalancerIp</code> under the service:</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: android
labels:
app: android
spec:
containers:
- name: android
image: budtmo/docker-android-x86-8.1
securityContext:
privileged: true
ports:
- containerPort: 6080
- containerPort: 5554
- containerPort: 5555
- containerPort: 4723
env:
- name: DEVICE
value: "Samsung Galaxy S6"
---
apiVersion: v1
kind: Service
metadata:
name: android-service
spec:
ports:
- port: 6080
name: serving
protocol: TCP
- port: 5555
name: srv
protocol: TCP
- port: 5554
name: srv2
protocol: TCP
- port: 4723
name: novnc
protocol: TCP
selector:
app: android
type: LoadBalancer
</code></pre>
<p>Result is everything works as expected with provided yaml:</p>
<pre><code>kubectl get po,svc
NAME READY STATUS RESTARTS AGE
pod/android 1/1 Running 0 14m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/android-service LoadBalancer 10.0.1.238 *.*.*.54 6080:31952/TCP,5555:30822/TCP,5554:30806/TCP,4723:30248/TCP 14m
</code></pre>
<p>From local pc:</p>
<pre><code>adb connect *.*.*.54:5555
already connected to *.*.*.54:5555
adb devices
List of devices attached
*.*.*.54:5555 offline
emulator-5554 device
localhost:5555 device
</code></pre>
|
<p>I have implemented a gRPC client server in Go. I have now set them up in Kubernetes as client and server pods where the client connects to the server. I've set this cluster up using vagrant (centos/7) on my PC. My issue is that the client wants to hit the port 8090 (the server serves on this port) and send a message to the server, but as they are in different pods the client cannot hit localhost 8090 and therefore the pod fails. How do I solve this problem?</p>
<pre class="lang-golang prettyprint-override"><code>func main() {
conn, err := grpc.Dial(":8090", grpc.WithInsecure())
if err != nil {
log.Fatalf("did not connect :%v", err)
}
cli := proto.NewPingClient(conn)
</code></pre>
<p>as seen it tries to dial 8090, but not able to map it to the localhost of the server pod.</p>
| <p>The most standard solution for this problem is to forget about connecting to <code>localhost</code> and instead create a ClusterIP Service connected to the selected server pod in the same namespace:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: server-service
labels:
app: your-label
spec:
type: ClusterIP
ports:
- port: 8090
targetPort: 8090
protocol: TCP
selector:
app: your-label
</code></pre>
<p>Remember to use the same <code>metadata.labels.app</code> value in your pod description, otherwise the two will not be connected to each other.<br>
If you have your cluster configured properly, and CoreDNS up and running, you should be able to access the server using <code>server-service:8090</code> if you are running your client and server pods in the same namespace.</p>
|
<p>How can I configure GCP to send me alerts when node events (create / shutdown) happen?
I would like to receive an email alerting me about the cluster scaling.</p>
<p>Thanks.</p>
| <p>First, note that you can retrieve such events in <a href="https://cloud.google.com/logging/docs/view/overview" rel="nofollow noreferrer">Stackdriver Logging</a> by using the following filter :</p>
<pre><code>logName="projects/[PROJECT_NAME]/logs/cloudaudit.googleapis.com%2Factivity" AND
(
protoPayload.methodName="io.k8s.core.v1.nodes.create" OR
protoPayload.methodName="io.k8s.core.v1.nodes.delete"
)
</code></pre>
<p>This filter will retrieve only audit activity log entries (<code>cloudaudit.googleapis.com%2Factivity</code>) in your project <code>[PROJECT_NAME]</code>, corresponding to a node creation event (<code>io.k8s.core.v1.nodes.create</code>) or deletion (<code>io.k8s.core.v1.nodes.delete</code>).</p>
<p>To be alerted when such a log is generated, there are multiple possibilities.</p>
<p>You could configure a <a href="https://cloud.google.com/logging/docs/export/" rel="nofollow noreferrer">sink</a> to a Pub/Sub topic based on this filter, and then <a href="https://cloud.google.com/functions/docs/calling/pubsub" rel="nofollow noreferrer">trigger a Cloud Function</a> when a filtered log entry is created. This Cloud Function will define the logic to send you a mail. This is probably the solution I'd choose, since this use case <a href="https://cloud.google.com/functions/docs/calling/logging" rel="nofollow noreferrer">is described in the documentation</a>.</p>
<p>Otherwise, you could define a <a href="https://cloud.google.com/logging/docs/logs-based-metrics/" rel="nofollow noreferrer">logs-based metric</a> based on this filter (or one logs-based metric for creation and another for deletion), and configure an <a href="https://cloud.google.com/monitoring/alerts/" rel="nofollow noreferrer">alert in Stackdriver Monitoring</a> when this log-based metric is increased. This alert could be configured to send an email. However, I won't suggest you to implement this, because this is not a real "alert" (in the sense of "something went wrong"), but rather an information. You probably don't want to have incidents opened in Stackdriver Monitoring every time a node is created or deleted. But you can keep the idea of one/multiple logs-based metric and process it/them with a custom application.</p>
|
<p>I'm building a cluster visualization tool for Kubernetes that runs inside users' clusters. </p>
<p>My goal is to make this tool freely available. The most obvious way to distribute it is to tell people to <code>kubectl apply -f www.ourgithub/our-configs.yaml</code>, which pulls our images and voila. </p>
<p>That's all fine. Now the problem is how do we push updates? </p>
<p>I've considered these options but none seem very good:</p>
<ul>
<li><p>Using something like <a href="https://github.com/chartmuseum/helm-push" rel="nofollow noreferrer">https://github.com/chartmuseum/helm-push</a></p></li>
<li><p>Having the apps themselves check for updates and "restart" themselves (i.e <code>imagePullPolicy=always</code> scale to 0)</p></li>
<li><p>Having users download an executable on their machines that periodically checks for updates</p></li>
</ul>
<p>I want to be able to push updates reliably so I want to make sure I'm using the most robust method there is. </p>
<p>What is the best practice for this?</p>
| <p>Separate CI/CD pipeline for building and testing docker images and separate pipeline for deploying.</p>
<p>Your pipeline should deploy the application version that is already running on the environment, deploy the new one, run e2e tests to verify everything is correct, and then push the new version to the desired cluster.</p>
|
<p>On Kubernetes, one can blindly create a resource (such as a deployment, service, pods, etc.). The resource will be created only if there are enough system resources on the Kubernetes nodes/cluster.</p>
<ol>
<li>how does one check that there are enough node resources before creating kubernetes resources (check whether a scale-out and\or scale-up are required)?</li>
<li>what is the best practice to check and asses that the kubernetes cluster can sustain a newly created resource before the resource is created (without doing any manual calculations)?</li>
</ol>
| <p>What you are looking for in (1) is possible with a <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">custom admission controller</a> - but I personally think that is too complex and not a great idea from the user experience point of view. </p>
<p>What would be ideal is that once a resource is created - and if the scheduler sees that there are not enough resources - then the cluster is autoscaled. This is possible with <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">Kubernetes Autoscaler</a> or <a href="https://github.com/atlassian/escalator" rel="nofollow noreferrer">Escalator</a>. Both of them allow scaling cluster based on certain conditions - but are suited for different use cases.</p>
<p>You can find detailed information on how the autoscaler adds nodes when it sees that a pod could not be scheduled <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work" rel="nofollow noreferrer">here</a>.</p>
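<p>In either case, the scheduler's decision that a pod "does not fit" (and therefore the autoscaler's scale-up trigger) is driven by the resource <em>requests</em> declared on your pods, so make sure they are set. A sketch with placeholder values:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
</code></pre>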
|
<p>I run a small GKE cluster with a couple of node pools (2-8 nodes in each, some preemptible). I am beginning to see a lot of health issues with the nodes themselves and experiencing pod operations taking a very long time (30+ mins). This includes terminating pods, starting pods, starting initContainers in pods, starting main containers in pods, etc. Examples below.
The cluster runs some NodeJS, PHP and Nginx containers, and a single Elastic, Redis and NFS pod. Also, a few PHP-based CronJobs. Together, they make up a website which sits behind a CDN.</p>
<ul>
<li>My question is: How do I go about debugging this on GKE, and what can be the cause? </li>
</ul>
<p>I've tried to SSH into the VM instances backing the nodes to check logs, but my SSH connection always times out, not sure if this is normal.</p>
<p><strong>Symptom: Nodes flapping between <code>Ready</code> and <code>NotReady</code>:</strong></p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-cluster-default-pool-4fa127c-l3xt Ready <none> 62d v1.13.6-gke.13
gke-cluster-default-pool-791e6c2-7b01 NotReady <none> 45d v1.13.6-gke.13
gke-cluster-preemptible-0f81875-cc5q Ready <none> 3h40m v1.13.6-gke.13
gke-cluster-preemptible-0f81875-krqk NotReady <none> 22h v1.13.6-gke.13
gke-cluster-preemptible-0f81875-mb05 Ready <none> 5h42m v1.13.6-gke.13
gke-cluster-preemptible-2453785-1c4v Ready <none> 22h v1.13.6-gke.13
gke-cluster-preemptible-2453785-nv9q Ready <none> 134m v1.13.6-gke.13
gke-cluster-preemptible-2453785-s7r2 NotReady <none> 22h v1.13.6-gke.13
</code></pre>
<p><strong>Symptom: Nodes are sometimes rebooted:</strong></p>
<pre><code>2019-08-09 14:23:54.000 CEST
Node gke-cluster-preemptible-0f81875-mb05 has been rebooted, boot id: e601f182-2eab-46b0-a953-7787f95d438
</code></pre>
<p><strong>Symptom: Cluster is unhealthy:</strong></p>
<pre><code>2019-08-09T11:29:03Z Cluster is unhealthy
2019-08-09T11:33:25Z Cluster is unhealthy
2019-08-09T11:41:08Z Cluster is unhealthy
2019-08-09T11:45:10Z Cluster is unhealthy
2019-08-09T11:49:11Z Cluster is unhealthy
2019-08-09T11:53:23Z Cluster is unhealthy
</code></pre>
<p><strong>Symptom: Various PLEG health errors in Node logs</strong> (there are many, many, many entries of this type):</p>
<pre><code>12:53:10.573176 1315163 kubelet.go:1854] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m26.30454685s ago; threshold is 3m0s]
12:53:18.126428 1036 setters.go:520] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-08-09 12:53:18.126363615 +0000 UTC m=+3924434.187952856 LastTransitionTime:2019-08-09 12:53:18.126363615 +0000 UTC m=+3924434.187952856 Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m5.837134315s ago; threshold is 3m0s}
12:53:38.627284 1036 kubelet.go:1854] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m26.338024015s ago; threshold is 3m0s]
</code></pre>
<p><strong>Symptom: Pods are issuing 'Network not ready' errors:</strong></p>
<pre><code>2019-08-09T12:42:45Z network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
2019-08-09T12:42:47Z network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
2019-08-09T12:42:49Z network is not ready: [runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
</code></pre>
<p><strong>Symptom: Pods complaining about "context deadline exceeded":</strong></p>
<pre><code>2019-08-09T08:04:07Z error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2019-08-09T08:04:15Z error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2019-08-09T08:04:20Z error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
2019-08-09T08:04:26Z error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
</code></pre>
<p>There is obviously something particularly odd going on, but with a fairly trivial number of IOPS, ingress requests, cpu/memory saturation .. I would expect some symptoms that pointed me in some direction where I could debug this further. But it seems like these errors are all over the place.</p>
| <p>Given that GKE is a managed solution and there are many systems involved in its operation, I think it might be best for you to reach out to the <a href="https://cloud.google.com/support/#support-plans" rel="nofollow noreferrer">GCP support team</a>.</p>
<p>They have specific tools to locate issues on the nodes (if any) and can dig a bit deeper into logging to determine the root cause of this.</p>
<p>As of now, the logs you've showed may point to <a href="https://github.com/kubernetes/kubernetes/issues/45419" rel="nofollow noreferrer">this old issue</a> apparently related to Docker and also an issue with the CNI not ready, preventing the Nodes from reporting to the master, which deems them as unready.</p>
<p>Please consider this as mere speculation as the support team would be able to dig deeper and provide more accurate advise.</p>
|
<p>I have a Minikube Kubernetes cluster running a cockroachdb which looks like:</p>
<pre><code>kubectl get pods
test-cockroachdb-0 1/1 Running 17 95m
test-cockroachdb-1 1/1 Running 190 2d
test-cockroachdb-2 1/1 Running 160 2d
test-cockroachdb-init-m8rzp 0/1 Completed 0 2d
cockroachdb-client-secure 1/1 Running 0 2d
</code></pre>
<p>I want to get a connection string that I can use in my application.</p>
<p>To verify my connection string, I am using the tool DBeaver.</p>
<p>My database name is configured to 'defaultdb' which exists on my cluster, and the user with the relevant password. The port is accurate as well (default cockroachdb minikube port).</p>
<p>However as to the certificate aspect of connecting I am at a loss. How do I generate/gather the certificates I need to successfully connect to my cluster? How do I connect to my cluster using DBeaver?</p>
<p><strong>Edit:</strong></p>
<pre><code>$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/myname-cockroachdb-0 1/1 Running 27 156m
pod/myname-cockroachdb-1 1/1 Running 197 2d1h
pod/myname-cockroachdb-2 1/1 Running 167 2d1h
pod/myname-cockroachdb-init-m8rzp 0/1 Completed 0 2d1h
pod/myname-client-secure 1/1 Running 0 2d1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/myname-cockroachdb ClusterIP None <none> 26257/TCP,8080/TCP 2d1h
service/myname-cockroachdb-public ClusterIP 10.xxx.xxx.xx <none> 26257/TCP,8080/TCP 2d1h
service/kubernetes ClusterIP 10.xx.0.1 <none> 443/TCP 2d1h
NAME READY AGE
statefulset.apps/myname-cockroachdb 3/3 2d1h
NAME COMPLETIONS DURATION AGE
job.batch/myname-cockroachdb-init 1/1 92s 2d1h
</code></pre>
| <p>Like @<a href="https://stackoverflow.com/users/2898919/fl3sh">FL3SH</a> already said.</p>
<p>You can use <code>kubectl port-forward &lt;pod_name&gt; &lt;port&gt;</code>.</p>
<p>This is nicely explained in the Cockroach documentation, <a href="https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#step-4-access-the-admin-ui" rel="nofollow noreferrer">Step 4. Access the Admin UI</a>; please use it as an example and set different ports.</p>
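<p>A concrete invocation for the cluster above might look like this (26257 is CockroachDB's default SQL port):</p>
<pre><code>kubectl port-forward pod/myname-cockroachdb-0 26257:26257
</code></pre>
<p>DBeaver can then connect to <code>localhost:26257</code>.</p>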
<p>As for the certificates:</p>
<blockquote>
<p>As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.</p>
<p>Get the name of the Pending CSR for the first pod:</p>
<p><code>kubectl get csr</code></p>
</blockquote>
<pre><code>NAME AGE REQUESTOR CONDITION
default.node.cockroachdb-0 1m system:serviceaccount:default:default Pending
node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 4m kubelet Approved,Issued
node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 4m kubelet Approved,Issued
node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 5m kubelet Approved,Issued
</code></pre>
<p>If you do not see a Pending CSR, wait a minute and try again.</p>
<p>You can inspect the CSR with <code>kubectl describe csr default.node.cockroachdb-0</code>.</p>
<p>It might look like this:</p>
<pre><code>Name: default.node.cockroachdb-0
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500
Requesting User: system:serviceaccount:default:default
Status: Pending
Subject:
Common Name: node
Serial Number:
Organization: Cockroach
Subject Alternative Names:
DNS Names: localhost
cockroachdb-0.cockroachdb.default.svc.cluster.local
cockroachdb-public
IP Addresses: 127.0.0.1
10.48.1.6
Events: <none>
</code></pre>
<p>If it does then you can approve the certificate using:</p>
<p><code>kubectl certificate approve default.node.cockroachdb-0</code></p>
<p>Please do follow the <a href="https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html" rel="nofollow noreferrer">Orchestrate CockroachDB in a Single Kubernetes Cluster</a> guide.</p>
<p>Let me know if you need any further help.</p>
|
<p>I have implemented a gRPC client server in Go. I have now set them up in Kubernetes as client and server pods where the client connects to the server. I've set this cluster up using vagrant (centos/7) on my PC. My issue is that the client wants to hit the port 8090 (the server serves on this port) and send a message to the server, but as they are in different pods the client cannot hit localhost 8090 and therefore the pod fails. How do I solve this problem?</p>
<pre class="lang-golang prettyprint-override"><code>func main() {
conn, err := grpc.Dial(":8090", grpc.WithInsecure())
if err != nil {
log.Fatalf("did not connect :%v", err)
}
cli := proto.NewPingClient(conn)
</code></pre>
<p>as seen it tries to dial 8090, but not able to map it to the localhost of the server pod.</p>
| <p><code>localhost</code> can only be used to address the group of containers running in a single Pod. </p>
<p>Pod to Pod communications should go via a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a>. </p>
<p>The following example service will forward traffic to any Pods labelled with <code>app: grpc-app</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: grpc-service
spec:
selector:
app: grpc-app
ports:
- protocol: TCP
port: 8090
targetPort: 8090
</code></pre>
<p>You will then have a DNS hostname to access the service with. </p>
<p><code>grpc-service.&lt;namespace&gt;.svc.&lt;cluster-domain&gt;</code>,
which is usually<br> <code>grpc-service.default.svc.cluster.local</code></p>
<pre><code>func main() {
	conn, err := grpc.Dial("grpc-service.default.svc.cluster.local:8090", grpc.WithInsecure())
if err != nil {
log.Fatalf("did not connect :%v", err)
}
cli := proto.NewPingClient(conn)
</code></pre>
|
<p>I had a working example of a project a year back, which is not working anymore.</p>
<p>It's basically related to change in the behavior of <code>nginx.ingress.kubernetes.io/rewrite-target</code> property mentioned here - <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite</a> </p>
<p>I have 3 application and I want to route based on conditions.</p>
<ul>
<li><code>/*</code> to frontend-cluster-ip-service</li>
<li><code>/api/battleship/*</code> to battleship-cluster-ip-service</li>
<li><code>/api/connect4/*</code> to connect-four-cluster-ip-service</li>
</ul>
<p>The working example that was working an year back was </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontend-cluster-ip-service
servicePort: 3000
- path: /api/connect4/
backend:
serviceName: connect-four-cluster-ip-service
servicePort: 8080
- path: /api/battleship/
backend:
serviceName: battleship-cluster-ip-service
servicePort: 8080
</code></pre>
<p>However, this is not working anymore and only routing to <code>/</code> , i.e to frontend-cluster-ip-service is working. Routing to other serives fails and I get 404.</p>
<p>Then I came to know about the change in <code>nginx.ingress.kubernetes.io/rewrite-target</code>.</p>
<p>I tried following then</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: frontend-cluster-ip-service
servicePort: 3000
- path: /api/connect4(/|$)(.*)
backend:
serviceName: connect-four-cluster-ip-service
servicePort: 8080
- path: /api/battleship(/|$)(.*)
backend:
serviceName: battleship-cluster-ip-service
servicePort: 8080
</code></pre>
<p>Now the routing to <code>connect-four-cluster-ip-service</code> and <code>battleship-cluster-ip-service</code> is working but <code>frontend-cluster-ip-service</code> is not working and few js files loads are showing error:</p>
<p><a href="https://i.stack.imgur.com/mBR23.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mBR23.png" alt="enter image description here"></a></p>
| <p>I had the same issue with a bit more complicated rewrite (it was only for one different path).</p>
<p>Creating a separate Ingress for each path worked for me, though it might not be the cleanest solution.</p>
<p>My ingress definition:
<a href="https://github.com/FORTH-ICS-INSPIRE/artemis/blob/master/artemis-chart/templates/ingresses.yaml" rel="nofollow noreferrer">https://github.com/FORTH-ICS-INSPIRE/artemis/blob/master/artemis-chart/templates/ingresses.yaml</a></p>
|
<p>I'm setting up a kubernetes cluster for development with Docker for Mac. I have a few private grpc microservices and a couple Rails public REST APIs that use SSL. I want my dev environment to be as close to production as possible so I'd prefer to not need a special Dockerfile for building for dev. Can someone give me a set-by-step approach for getting SSL working with my two REST services, including how to trust the self-signed certificates so the two services can talk to each other? I'm not very familiar with SSL or TLS. </p>
<p>I got the HTTPS servers working with the browser using a self-signed certificate generated with:</p>
<pre><code>openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./app1.localhost.key -out ./app1.localhost.crt -subj "/CN=app1.localhost"
</code></pre>
<p>and then</p>
<pre><code>kubectl create secret tls app1-tls --key ./app1.localhost.key --cert ./app1.localhost.crt
</code></pre>
<p>Then I have <code>nginx-ingress</code> running and an Ingress configured like</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app1
spec:
rules:
- host: app1.localhost
http:
paths:
- path: /
backend:
serviceName: app1
servicePort: 443
tls:
- hosts:
- app1.localhost
secretName: app1-tls
</code></pre>
<p>I've updated my host file to map the domains to localhost. </p>
<p>This is working in the browser by accepting the cert in Chrome through the advanced link in the "Your connection is not private" page. However, because the certificate is not trusted in general, the services can't talk to each other. I get a <code>RestClient::SSLCertificateNotVerified</code> error when attempting to make a request from one to the other. </p>
<p>A lot of the stuff online I've found refer to the client certificate and server certificate, but I'm not sure how to generate the two certs (above I get a .crt and a .key file, is that for the client or server?). And I'm not sure how to trust them.</p>
| <blockquote>
<p>how to trust the self-signed certificates so the two services can talk to each other?</p>
</blockquote>
<p>It depends on your HTTP client; for Ruby it might look like:</p>
<pre class="lang-rb prettyprint-override"><code># Note: assigning inside File.open blocks would leave cert/key
# local to the block; read the files directly instead.
cert = File.read("client_certificate.pem")
key = File.read("client_key.pem")
http_session.cert = OpenSSL::X509::Certificate.new(cert)
http_session.key = OpenSSL::PKey::RSA.new(key, nil)
</code></pre>
<p><code>Ingress</code> TLS only secures the server side, and using a self-signed certificate is a bad idea.</p>
<p>If you want your two services to talk to each other with TLS, and don't want to configure the CA/cert/key manually (call it certificate distribution), I recommend <a href="https://istio.io/docs/concepts/security/#high-level-architecture" rel="nofollow noreferrer">istio</a>; it supports transparent TLS proxying between two pods.</p>
|
<p>Am trying to setup <code>kubernetes</code> in <code>centos</code> machine, kubelets start is giving me this error.</p>
<blockquote>
<p>Failed to get kubelets cgroup: cpu and memory cgroup hierarchy not
unified. Cpu:/, memory: /system.slice/kubelet.service.</p>
</blockquote>
<p>The cgroup driver I mentioned is systemd for both docker and kubernetes</p>
<p><code>Docker</code> version 1.13.1
<code>Kubernetes</code> version 1.15.2</p>
<p>Can any one suggest the solution.</p>
| <p>This <a href="https://github.com/kubernetes/kubernetes/issues/78950" rel="nofollow noreferrer">issue</a> is fixed in a commit that has not been merged yet; see <a href="https://github.com/kubernetes/kubernetes/pull/80121" rel="nofollow noreferrer">this PR</a>.</p>
<p>You may try this workaround:</p>
<pre><code>sudo vim /etc/sysconfig/kubelet
</code></pre>
<p>add at the end of DAEMON_ARGS string:</p>
<pre><code> --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
</code></pre>
<p>restart:</p>
<pre><code>sudo systemctl restart kubelet
</code></pre>
<p>or :</p>
<p>adding a file in : <code>/etc/systemd/system/kubelet.service.d/11-cgroups.conf</code></p>
<p>which contains:</p>
<pre><code>[Service]
CPUAccounting=true
MemoryAccounting=true
</code></pre>
<p>then reload and restart</p>
<pre><code>systemctl daemon-reload && systemctl restart kubelet
</code></pre>
|
<p>GKE Ingress: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress</a></p>
<p>Nginx Ingress: <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/</a></p>
<p><strong>Why GKE Ingress</strong></p>
<p>GKE Ingress can be used along with Google's managed SSL certificates. These certificates are deployed in edge servers of load balancer which results in very low TTFB (time to first byte)</p>
<p><strong>What's wrong about GKE Ingress</strong></p>
<p>The HTTP/domain routing is done in the load balancer using 'forward rules' which is very pricy. Costs around $7.2 per rule. Each domain requires one rule. </p>
<p><strong>Why Nginx Ingress</strong></p>
<p>Nginx Ingress also creates (TCP/UP) load balancer where we can specify routing of HTTP/domain using ingress controller. Since the routing is done inside the cluster there are no additional costs on adding domains into the rules</p>
<p><strong>What's wrong about Nginx Ingress</strong></p>
<p>To enable SSL, we can use cert-manager. But as I mentioned above, Google's managed certificate deploy certificates in edge servers which results in very low latency</p>
<p><strong>My Question</strong></p>
<p>Is it possible to use both of them together? So that HTTPS requests first hit GKE ingress which will terminate SSL and route the traffic to Nginx ingress which will route it to corresponding pods</p>
| <p>It is not possible to point an <code>Ingress</code> to another <code>Ingress</code>. Furthermore, in your particular case, it is also not possible to point a <code>GCE ingress class</code> to Nginx, since it <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress" rel="nofollow noreferrer">relies on an HTTP(S) Load Balancer</a>, which can only have GCE instances/<a href="https://cloud.google.com/compute/docs/instance-groups/" rel="nofollow noreferrer">instance groups</a> (basically the node pools in GKE) or <a href="https://cloud.google.com/storage/docs/json_api/v1/buckets" rel="nofollow noreferrer">GCS buckets</a> as <a href="https://cloud.google.com/load-balancing/docs/backend-service" rel="nofollow noreferrer">backends</a>.</p>
<p>If you were to deploy an Nginx ingress on GKE, it would spin up a <a href="https://cloud.google.com/load-balancing/docs/network/" rel="nofollow noreferrer">Network Load Balancer</a>, which is not a valid backend for the HTTP(S) Load Balancer.</p>
<p>So this is possible neither via <code>Ingress</code> nor via GCP infrastructure features. However, if you need the <code>GCE ingress class</code> to be hit first and then manage further routing with Nginx, you might want to consider running Nginx as a Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a>/<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> to manage the incoming traffic once it is within the cluster network.</p>
<p>You can create a <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service" rel="nofollow noreferrer">ClusterIP</a> service for internally accessing your Nginx deployment and from there, using cluster-local <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">hostnames</a> to redirect to other services/applications within the cluster.</p>
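<p>As an illustrative sketch of that internal setup (the names here are assumptions, not taken from your cluster), the Nginx deployment could be exposed inside the cluster with a plain ClusterIP service:</p>
<pre><code># Internal-only Service in front of an Nginx deployment; the GCE ingress
# forwards traffic into the cluster network rather than to another ingress.
apiVersion: v1
kind: Service
metadata:
  name: internal-nginx     # illustrative name
spec:
  type: ClusterIP
  selector:
    app: nginx             # must match your Nginx deployment's pod labels
  ports:
    - port: 80
      targetPort: 80
</code></pre>
<p>Other workloads can then reach it at the cluster-local hostname <code>internal-nginx.&lt;namespace&gt;.svc.cluster.local</code>.</p>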
|
<p>I am currently trying to access a file mounted in my Kubernetes container from a docker image. I need to pass the file in with a flag when my docker image is run.</p>
<p>The docker image is usually run (outside a container) using the command:</p>
<p><code>docker run -p 6688:6688 -v ~/.chainlink-ropsten:/chainlink -it --env-file=.env smartcontract/chainlink local n -p /chainlink/.password -a /chainlink/.api</code></p>
<p>Now I have successfully used the following config to mount my env, password and api files at /chainlink, but when attempting to access the files during the docker run I get the error:</p>
<p><code>flag provided but not defined: -password /chainlink/.password</code></p>
<p>The following is my current Kubernetes Deployment file</p>
<pre><code>kind: Deployment
metadata:
name: chainlink-deployment
labels:
app: chainlink-node
spec:
replicas: 1
selector:
matchLabels:
app: chainlink-node
template:
metadata:
labels:
app: chainlink-node
spec:
containers:
- name: chainlink
image: smartcontract/chainlink:latest
args: [ "local", "n", "--password /chainlink/.password", "--api /chainlink/.api"]
ports:
- containerPort: 6689
volumeMounts:
- name: config-volume
mountPath: /chainlink/.env
subPath: .env
- name: api-volume
mountPath: /chainlink/.api
subPath: .api
- name: password-volume
mountPath: /chainlink/.password
subPath: .password
volumes:
- name: config-volume
configMap:
name: node-env
- name: api-volume
configMap:
name: api-env
- name: password-volume
configMap:
name: password-env
</code></pre>
<p>Is there some definition I am missing in my file that allows me to access the mounted volumes when running my docker image?</p>
| <p>Change your <code>args</code> to:</p>
<pre><code>args: [ "local", "n", "--password", "/chainlink/.password", "--api", "/chainlink/.api"]
</code></pre>
<p>The way you currently have it, it thinks the whole string <code>--password /chainlink/.password</code>, including the space, is a single flag. That's what the error:</p>
<pre><code>flag provided but not defined: -password /chainlink/.password
</code></pre>
<p>is telling you.</p>
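<p>Alternatively, if the binary accepts <code>--flag=value</code> syntax (many Go CLIs do; this is an assumption about the chainlink binary), each flag can stay a single array element:</p>
<pre><code>args: ["local", "n", "--password=/chainlink/.password", "--api=/chainlink/.api"]
</code></pre>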
|
<p>I have a Kubernetes deployment on GCP and a ClusterIP service to discover pods in this deployment. The deployment contains multiple replica set pods which come and go based on our horizontal pod scalar configuration (based on CPU Utilization). </p>
<p>Now, when a new replica set pod is created, it takes some time for the application to start servicing. But the ClusterIP already starts distributing requests to new replica set pod before the application is ready, which causes the requests to be not serviced.</p>
<p>ClusterIP service yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: service-name
tier: backend
environment: "dev"
creator: internal
name: service-name
spec:
clusterIP: None
ports:
- name: https
protocol: TCP
port: 7070
targetPort: 7070
selector:
app: dep-name
tier: "backend"
environment: "dev"
creator: "ME"
type: ClusterIP
</code></pre>
<p>How can the ClusterIP be told to start distributing requests to the new pod after the application starts? Can there be any initial delay or liveness probe set for this purpose?</p>
| <p>Kubernetes provides readiness probes for this. With readiness probes, Kubernetes will not send traffic to a pod until the probe succeeds. When updating a deployment, it will also leave the old replica(s) running until the probes succeed on the new replica. That means that if your new pods are broken in some way, they'll never see traffic; the old pods will continue to serve all traffic for the deployment.</p>
<p>You need to update the deployment file with a readiness probe like the following:</p>
<pre><code>readinessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<p>If your application exposes an HTTP health endpoint, you can set the readiness probe in HTTP mode as well.</p>
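<p>As a sketch in HTTP mode (the path is an assumption; use whatever health endpoint your application exposes):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /healthz        # illustrative health endpoint
    port: 7070            # the targetPort from the Service in the question
  initialDelaySeconds: 5
  periodSeconds: 5
</code></pre>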
<p>For more information on how to use readiness probes, refer to:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes</a></p>
|
<p>I searched for hours, but I can't find any solution for my problem.</p>
<p>Short Version: Is it possible to generate new <code>.Value</code> properties at templating time?</p>
<p>Long Version: I want to deploy the ElasticStack with Logstash, Kibana, Elasticsearch and I want to use the offical helm templates for Kibana and Elasticsearch.</p>
<p>My Idea is to create a new Chart (elk) with the 3 subcharts.</p>
<pre><code>elk
charts
elasticsearch (official helm template)
values.yaml
kibana (official helm template)
values.yaml
logstash
values.yaml
templates
values.yaml
</code></pre>
<p>My problem is the multiple declaration of the same property in the top-level <code>values.yaml</code></p>
<p>My <code>elk/values.yaml</code> looks like the following</p>
<pre><code>elasticsearch:
clusterName: "elasticsearchtest"
imageTag: "7.3.0"
replicas: 3
minimumMasterNodes: 2
volumeClaimTemplate:
storageClassName: gp2-resize
kibana:
elasticsearchHosts: "http://elasticsearchtest-master:9200"
imageTag: "7.3.0"
logstash:
elasticsearchHosts: "http://elasticsearchtest-master:9200"
imageTag: "7.3.0"
</code></pre>
<p>Note the repetition if I want to change the <code>clusterName</code> or specify the <code>imageTag</code>. It feels really bad to overwrite the subcharts' values in this way. </p>
<p>Is it possible to create a top-level <code>values.yaml</code> like this:</p>
<pre><code>clusterName: "elasticsearchtest"
imageTag: "7.3.0"
</code></pre>
<p>and have it overwrite the subcharts' values at templating time?</p>
| <p>There is no way to template values.yaml unless you use external tools (e.g. ytt):</p>
<p><a href="https://github.com/helm/helm/issues/2492" rel="nofollow noreferrer">https://github.com/helm/helm/issues/2492</a></p>
<p>However, if you can modify the subcharts, you can have them read global values and define those once in the parent Helm chart.</p>
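<p>As a sketch (this assumes you can edit the official subcharts' templates, which do not use globals out of the box), the parent chart would define the shared values once under <code>global:</code>, which Helm makes visible to every subchart:</p>
<pre><code># elk/values.yaml (parent chart) -- shared values defined once
global:
  clusterName: "elasticsearchtest"
  imageTag: "7.3.0"
</code></pre>
<p>and a subchart template could then reference them (illustrative):</p>
<pre><code>image: "docker.elastic.co/elasticsearch/elasticsearch:{{ .Values.global.imageTag }}"
</code></pre>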
|
<p>I am now testing the deployment into different namespace using Kubernetes. Here I am using Kubernetes Helm Chart for that. In my chart, I have deployment.yaml and service.yaml.</p>
<p>When I define the "namespace" parameter with the Helm command <code>helm install --upgrade</code>, it is not working. When I read about that, I found the statement that "Helm 2 is not overwritten by the --namespace parameter".</p>
<p>I tried the following command:</p>
<pre><code>helm upgrade --install kubedeploy --namespace=test pipeline/spacestudychart
</code></pre>
<p><strong>NB</strong> Here my service is deploying with <strong>default</strong> namespace.</p>
<p>Screenshot of describe pod:</p>
<p><a href="https://i.stack.imgur.com/UUv8a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UUv8a.png" alt="enter image description here"></a></p>
<p>Here my "helm version" command output is like follows:</p>
<pre><code>docker@mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3",
GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
</code></pre>
<p><strong>Because of this, I tried to add this to deployment.yaml</strong>, under metadata.namespace, like the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "spacestudychart.fullname" . }}
namespace: test
</code></pre>
<p>I created two namespaces, test and prod. But this is not working either: when I add the namespace like this, my service does not come up and is not accessible, yet there is no error in the Jenkins console. When I defined the namespace in the <code>helm install --upgrade</code> command it at least deployed, but to the default namespace; here it does not deploy at all.</p>
<p>After this, I removed the namespace from deployment.yaml and added it under metadata.namespace in the same way. There I am also not able to access the deployed service, but the Jenkins console output still shows success.</p>
<p>Why is the namespace not working with my Helm deployment? What changes do I need to make to deploy to test/prod instead of the default namespace?</p>
| <p>Remove <code>namespace: test</code> from all of your chart files and <code>helm install --namespace=namespace2 ...</code> should work.</p>
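<p>For example, after removing the hard-coded <code>namespace:</code> field from the chart templates (the command-line flag is ignored when the manifests set their own namespace):</p>
<pre><code># Deploy into the test namespace (Tiller creates it if it does not exist)
helm upgrade --install kubedeploy --namespace test pipeline/spacestudychart

# Verify where the pods actually landed
kubectl get pods --namespace test
</code></pre>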
|
<p>We have a service that sends tons of events in bulks. It basically opens multiple http POST connections.</p>
<p>Since we moved the service to Kubernetes, we're getting <code>getaddrinfo: Temporary failure in name resolution</code> errors from time to time (most calls work, but some fail, which is weird).</p>
<p>Can anyone explain why and how to fix?</p>
| <p>Check the tinder post, they had a similar problem:</p>
<p><a href="https://medium.com/tinder-engineering/tinders-move-to-kubernetes-cda2a6372f44" rel="nofollow noreferrer">https://medium.com/tinder-engineering/tinders-move-to-kubernetes-cda2a6372f44</a></p>
<p>and the source for their dns info:</p>
<p><a href="https://www.weave.works/blog/racy-conntrack-and-dns-lookup-timeouts" rel="nofollow noreferrer">https://www.weave.works/blog/racy-conntrack-and-dns-lookup-timeouts</a></p>
<p>TL;DR: check your hosts' ARP table cache <code>gc_*</code> parameters, try disabling AAAA queries in the containers' /etc/gai.conf, move DNS to a DaemonSet, and inject the host IP as the DNS server into the pods.</p>
<p>Also, to help with this and speed up DNS resolution, add a trailing dot to all external domains (e.g. database.example.com.), so CoreDNS will try that query directly (one query, two with IPv6) instead of walking the whole Kubernetes search-domain list (about 5 queries, 10 with IPv6). Only leave the dot out where you are querying Kubernetes resources, or in apps that cannot handle it (strictly speaking all DNS names end with a dot; we usually omit it, but the dotted form is the correct one and must not fail, so an app that rejects it has a bug).</p>
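<p>If adding trailing dots everywhere is impractical, a pod-level <code>dnsConfig</code> can get a similar effect by lowering <code>ndots</code>: any name containing a dot is then tried as an absolute name first, skipping the search-domain walk. A sketch of the pod spec fragment:</p>
<pre><code># Pod spec fragment: with ndots:1, names with at least one dot are
# resolved as absolute names before the Kubernetes search list is tried.
spec:
  dnsConfig:
    options:
      - name: ndots
        value: "1"
</code></pre>
<p>Cluster-internal short names (no dot) still resolve via the search list, so in-cluster service discovery keeps working.</p>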
|
<p>I’ve created a Helm chart which works as expected; however, I want to change the names of the deployed application. Currently each deployment gets a different (random) name, and I want it to be a fixed name. How can I do that?</p>
<p>This is the helper</p>
<pre><code>{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "unleash.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some K8S name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "unleash.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
</code></pre>
<p>This is the outcome of the name after deployment</p>
<pre><code>crabby-ibex-postgresql-0 0/1 Pending 0 1s
crabby-ibex-unleash-86775cdffd-xt575 0/1 ContainerCreating 0 1s
</code></pre>
<p>This is the names from the values yaml</p>
<pre><code>replicaCount: 1
namespace: unleash
restartPolicy: Never
name: a-unleash
nameOverride: unleash
</code></pre>
<p>e.g. I want it instead of </p>
<pre><code>crabby-ibex-unleash-86775cdffd-xt575
</code></pre>
<p>to be like</p>
<pre><code>unleash-service
uleash-postgressql
</code></pre>
<p><strong>update</strong></p>
<p>I've added the following to the <code>_helper.tpl</code></p>
<pre><code>{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 26 | trimSuffix "-" -}}
{{- end -}}
</code></pre>
<p>and put the following in the <code>values.yml</code>
<code>fullnameOverride: apps</code></p>
<p>I expect the artifacts to start with <code>apps</code>, but it doesn't work.</p>
| <p>Based on the name <code>crabby-ibex-unleash-86775cdffd-xt575</code> I guess you are using <code>kind: Deployment</code> for this application. If you change <code>kind</code> to <code>StatefulSet</code> in your yaml you will end up with a pod named <code>unleash-postgresql-0</code>, but because of Helm you have an additional prefix. You could use <code>--name=your_release_name</code>, which will create the pod <code>your_release_name-unleash-postgresql-0</code>.</p>
<p>If you really want to get rid of the Helm chart prefix, you have to set <code>fullnameOverride</code> for every chart you are deploying.</p>
<p>EDIT:
To make use of <code>fullnameOverride</code> you have to define it in your <code>_helpers.tpl</code> file.</p>
<pre><code>{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "unleash.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some K8S name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "unleash.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 26 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
</code></pre>
|
<p>I have EKS Cluster with three worker nodes. Now I would like to list all pulled docker images, how can I do it?</p>
<p>I searched across the documentation and couldn't find any direct way to access it. I could possibly attach a key pair to the worker nodes and try to SSH into them, but I would like to avoid that.</p>
| <p>For all nodes</p>
<pre><code>kubectl get node -o json | jq -r '.items[].status.images[].names'
</code></pre>
<p>For 'worker-node'</p>
<pre><code>kubectl get node worker-node -o json | jq -r '.status.images[].names'
</code></pre>
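<p>The commands above require <code>jq</code>. Without it, kubectl's built-in jsonpath output gives a rougher but dependency-free listing:</p>
<pre><code># List image names across all nodes using only kubectl (no jq required)
kubectl get nodes -o jsonpath="{.items[*].status.images[*].names[*]}" | tr ' ' '\n'
</code></pre>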
|
<p>I'm trying to upgrade a kube cluster from Ubuntu 16 to 18. After the upgrade the kube-dns pod is constantly crashing. The problem appears only on U18; if I roll back to U16 everything works fine.</p>
<p>Kube version "v1.10.11"</p>
<p>kube-dns pod events:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28m default-scheduler Successfully assigned kube-dns-75966d58fb-pqxz4 to
Normal SuccessfulMountVolume 28m kubelet, MountVolume.SetUp succeeded for volume "kube-dns-config"
Normal SuccessfulMountVolume 28m kubelet, MountVolume.SetUp succeeded for volume "kube-dns-token-h4q66"
Normal Pulling 28m kubelet, pulling image "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10"
Normal Pulled 28m kubelet, Successfully pulled image "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10"
Normal Started 28m kubelet, Started container
Normal Created 28m kubelet, Created container
Normal Pulling 28m kubelet, pulling image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10"
Normal Pulling 28m kubelet, pulling image "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10"
Normal Pulled 28m kubelet, Successfully pulled image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10"
Normal Created 28m kubelet, Created container
Normal Pulled 28m kubelet, Successfully pulled image "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10"
Normal Started 28m kubelet, Started container
Normal Created 25m (x2 over 28m) kubelet, Created container
Normal Started 25m (x2 over 28m) kubelet, Started container
Normal Killing 25m kubelet, Killing container with id docker://dnsmasq:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 25m kubelet, Container image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10" already present on machine
Warning Unhealthy 4m (x26 over 27m) kubelet, Liveness probe failed: HTTP probe failed with statuscode: 503
</code></pre>
<p>kube-dns sidecar container logs:</p>
<pre><code>kubectl logs kube-dns-75966d58fb-pqxz4 -n kube-system -c sidecar
I0809 16:31:26.768964 1 main.go:51] Version v1.14.8.3
I0809 16:31:26.769049 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
I0809 16:31:26.769079 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
I0809 16:31:26.769117 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
W0809 16:31:33.770594 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:49305->127.0.0.1:53: i/o timeout
W0809 16:31:40.771166 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:49655->127.0.0.1:53: i/o timeout
W0809 16:31:47.771773 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:53322->127.0.0.1:53: i/o timeout
W0809 16:31:54.772386 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:58999->127.0.0.1:53: i/o timeout
W0809 16:32:01.772972 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:35034->127.0.0.1:53: i/o timeout
W0809 16:32:08.773540 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:33250->127.0.0.1:53: i/o timeout
</code></pre>
<p>kube-dns dnsmasq container logs:</p>
<pre><code>kubectl logs kube-dns-75966d58fb-pqxz4 -n kube-system -c dnsmasq
I0809 16:29:51.596517 1 main.go:74] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --dns-forward-max=150 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/in6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0809 16:29:51.596679 1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --dns-forward-max=150 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/in6.arpa/127.0.0.1#10053]
I0809 16:29:52.135179 1 nanny.go:119]
W0809 16:29:52.135211 1 nanny.go:120] Got EOF from stdout
I0809 16:29:52.135277 1 nanny.go:116] dnsmasq[20]: started, version 2.78 cachesize 1000
I0809 16:29:52.135293 1 nanny.go:116] dnsmasq[20]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0809 16:29:52.135303 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in6.arpa
I0809 16:29:52.135314 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0809 16:29:52.135323 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0809 16:29:52.135329 1 nanny.go:116] dnsmasq[20]: reading /etc/resolv.conf
I0809 16:29:52.135334 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in6.arpa
I0809 16:29:52.135343 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0809 16:29:52.135348 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0809 16:29:52.135353 1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.53#53
I0809 16:29:52.135397 1 nanny.go:116] dnsmasq[20]: read /etc/hosts - 7 addresses
I0809 16:31:28.728897 1 nanny.go:116] dnsmasq[20]: Maximum number of concurrent DNS queries reached (max: 150)
I0809 16:31:38.746899 1 nanny.go:116] dnsmasq[20]: Maximum number of concurrent DNS queries reached (max: 150)
</code></pre>
<p>I have deleted the existing pods but newly created getting same error after some time. Not sure why this is happening only on Ubuntu 18. Any ideas how to fix this?</p>
| <p>In my case I found that in Ubuntu 18 the resolv.conf was pointing to <code>/etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf</code>
and it had a <code>nameserver 127.0.0.53</code> entry.
At the same time, under /run/systemd/resolve you should have another resolv.conf:</p>
<pre><code>/run/systemd/resolve$ ll
total 8
drwxr-xr-x 2 systemd-resolve systemd-resolve 80 Aug 12 13:24 ./
drwxr-xr-x 23 root root 520 Aug 12 11:54 ../
-rw-r--r-- 1 systemd-resolve systemd-resolve 607 Aug 12 13:24 resolv.conf
-rw-r--r-- 1 systemd-resolve systemd-resolve 735 Aug 12 13:24 stub-resolv.conf
</code></pre>
<p>In my case that resolv.conf contains the private IP nameserver 172.27.0.2.
Just relink /etc/resolv.conf to ../run/systemd/resolve/resolv.conf on all cluster machines and restart the kube-dns pods.</p>
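<p>A sketch of the relink (run as root on each affected node; paths as above, and the <code>k8s-app=kube-dns</code> label is the usual one for kube-dns pods):</p>
<pre><code># Point /etc/resolv.conf at the full resolver config instead of the systemd stub
ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# Recreate the kube-dns pods so they re-read resolv.conf
kubectl -n kube-system delete pod -l k8s-app=kube-dns
</code></pre>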
|
<p>I’ve created a Helm chart which works as expected; however, I want to change the names of the deployed application. Currently each deployment gets a different (random) name, and I want it to be a fixed name. How can I do that?</p>
<p>This is the helper</p>
<pre><code>{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "unleash.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some K8S name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "unleash.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
</code></pre>
<p>This is the outcome of the name after deployment</p>
<pre><code>crabby-ibex-postgresql-0 0/1 Pending 0 1s
crabby-ibex-unleash-86775cdffd-xt575 0/1 ContainerCreating 0 1s
</code></pre>
<p>This is the names from the values yaml</p>
<pre><code>replicaCount: 1
namespace: unleash
restartPolicy: Never
name: a-unleash
nameOverride: unleash
</code></pre>
<p>e.g. I want it instead of </p>
<pre><code>crabby-ibex-unleash-86775cdffd-xt575
</code></pre>
<p>to be like</p>
<pre><code>unleash-service
uleash-postgressql
</code></pre>
<p><strong>update</strong></p>
<p>I've added the following to the <code>_helper.tpl</code></p>
<pre><code>{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 26 | trimSuffix "-" -}}
{{- end -}}
</code></pre>
<p>and put the following in the <code>values.yml</code>
<code>fullnameOverride: apps</code></p>
<p>I expect the artifacts to start with <code>apps</code>, but it doesn't work.</p>
| <p>I don't know why nobody has posted it yet. You can pass the name of the Helm release to the <code>helm install</code> command:</p>
<p><code>helm install <your_Chart.yaml_directory> -n <release_name></code></p>
|
<p>I am unable to run any <code>kubectl</code> commands and I believe it is a result of an expired apiserver-etcd-client certificate.</p>
<pre><code>$ openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -text |grep ' Not '
Not Before: Jun 25 17:28:17 2018 GMT
Not After : Jun 25 17:28:18 2019 GMT
</code></pre>
<p>The log from the failed apiserver container shows:</p>
<pre><code>Unable to create storage backend: config (&{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true false 1000 0xc420363900 <nil> 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: getsockopt: connection refused)
</code></pre>
<p>I am using kubeadm 1.10, and would like to upgrade to 1.14. I was able to renew several expired certificates described by <a href="https://github.com/kubernetes/kubeadm/issues/581" rel="nofollow noreferrer">issue 581</a> on GitHub. Following the instructions updated the following keys & certs in <code>/etc/kubernetes/pki</code>:</p>
<pre><code>apiserver
apiserver-kubelet-client
front-proxy-client
</code></pre>
<p>Next, I tried:</p>
<pre><code>kubeadm --config kubeadm.yaml alpha phase certs apiserver-etcd-client
</code></pre>
<p>Where the <code>kubeadm.yaml</code> file is:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
advertiseAddress: 172.XX.XX.XXX
kubernetesVersion: v1.10.5
</code></pre>
<p>But it returns:</p>
<pre><code>failure loading apiserver-etcd-client certificate: the certificate has expired
</code></pre>
<p>Further, in the directory <code>/etc/kubernetes/pki/etcd</code> with the exception of the <code>ca</code> cert and key, all of the remaining certificates and keys are expired.</p>
<p>Is there a way to renew the expired certs without resorting to rebuilding the cluster?</p>
<p>Logs from the etcd container:</p>
<pre><code>$ sudo docker logs e4da061fc18f
2019-07-02 20:46:45.705743 I | etcdmain: etcd Version: 3.1.12
2019-07-02 20:46:45.705798 I | etcdmain: Git SHA: 918698add
2019-07-02 20:46:45.705803 I | etcdmain: Go Version: go1.8.7
2019-07-02 20:46:45.705809 I | etcdmain: Go OS/Arch: linux/amd64
2019-07-02 20:46:45.705816 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-07-02 20:46:45.705848 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-07-02 20:46:45.705871 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:45.705878 W | embed: The scheme of peer url http://localhost:2380 is HTTP while peer key/cert files are presented. Ignored peer key/cert files.
2019-07-02 20:46:45.705882 W | embed: The scheme of peer url http://localhost:2380 is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for this url.
2019-07-02 20:46:45.712218 I | embed: listening for peers on http://localhost:2380
2019-07-02 20:46:45.712267 I | embed: listening for client requests on 127.0.0.1:2379
2019-07-02 20:46:45.716737 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.718103 I | etcdserver: recovered store from snapshot at index 13621371
2019-07-02 20:46:45.718116 I | etcdserver: name = default
2019-07-02 20:46:45.718121 I | etcdserver: data dir = /var/lib/etcd
2019-07-02 20:46:45.718126 I | etcdserver: member dir = /var/lib/etcd/member
2019-07-02 20:46:45.718130 I | etcdserver: heartbeat = 100ms
2019-07-02 20:46:45.718133 I | etcdserver: election = 1000ms
2019-07-02 20:46:45.718136 I | etcdserver: snapshot count = 10000
2019-07-02 20:46:45.718144 I | etcdserver: advertise client URLs = https://127.0.0.1:2379
2019-07-02 20:46:45.842281 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 13629377
2019-07-02 20:46:45.842917 I | raft: 8e9e05c52164694d became follower at term 1601
2019-07-02 20:46:45.842940 I | raft: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 1601, commit: 13629377, applied: 13621371, lastindex: 13629377, lastterm: 1601]
2019-07-02 20:46:45.843071 I | etcdserver/api: enabled capabilities for version 3.1
2019-07-02 20:46:45.843086 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2019-07-02 20:46:45.843093 I | etcdserver/membership: set the cluster version to 3.1 from store
2019-07-02 20:46:45.846312 I | mvcc: restore compact to 13274147
2019-07-02 20:46:45.854822 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855232 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855267 I | etcdserver: starting server... [version: 3.1.12, cluster version: 3.1]
2019-07-02 20:46:45.855293 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:46.443331 I | raft: 8e9e05c52164694d is starting a new election at term 1601
2019-07-02 20:46:46.443388 I | raft: 8e9e05c52164694d became candidate at term 1602
2019-07-02 20:46:46.443405 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443419 I | raft: 8e9e05c52164694d became leader at term 1602
2019-07-02 20:46:46.443428 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443699 I | etcdserver: published {Name:default ClientURLs:[https://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2019-07-02 20:46:46.443768 I | embed: ready to serve client requests
2019-07-02 20:46:46.444012 I | embed: serving client requests on 127.0.0.1:2379
2019-07-02 20:48:05.528061 N | pkg/osutil: received terminated signal, shutting down...
2019-07-02 20:48:05.528103 I | etcdserver: skipped leadership transfer for single member cluster
</code></pre>
<p>kubelet systemd service status:</p>
<pre><code>sudo systemctl status -l kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Mon 2019-07-01 14:54:24 UTC; 1 day 23h ago
Docs: http://kubernetes.io/docs/
Main PID: 9422 (kubelet)
Tasks: 13
Memory: 47.0M
CGroup: /system.slice/kubelet.service
└─9422 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authentication-token-webhook=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cgroup-driver=cgroupfs --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.871276 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.872444 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.880422 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.871913 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.872948 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.880792 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.964989 9422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.966644 9422 kubelet_node_status.go:82] Attempting to register node ahub-k8s-m1.aws-intanalytic.com
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.967012 9422 kubelet_node_status.go:106] Unable to register node "ahub-k8s-m1.aws-intanalytic.com" with API server: Post https://172.31.22.241:6443/api/v1/nodes: dial tcp 172.31.22.241:6443: getsockopt: connection refused
</code></pre>
<p>In Kubernetes 1.14 and above, you can simply run <code>sudo kubeadm alpha certs renew all</code> and reboot the master. For older versions, the manual steps are:</p>
<pre class="lang-sh prettyprint-override"><code>sudo -sE  # switch to root
# Check certs on master to see expiration dates
echo -n /etc/kubernetes/pki/{apiserver,apiserver-kubelet-client,apiserver-etcd-client,front-proxy-client,etcd/healthcheck-client,etcd/peer,etcd/server}.crt | xargs -d ' ' -I {} bash -c "ls -hal {} && openssl x509 -in {} -noout -enddate"
# Move existing keys/config files so they can be recreated
mv /etc/kubernetes/pki/apiserver.key{,.old}
mv /etc/kubernetes/pki/apiserver.crt{,.old}
mv /etc/kubernetes/pki/apiserver-kubelet-client.crt{,.old}
mv /etc/kubernetes/pki/apiserver-kubelet-client.key{,.old}
mv /etc/kubernetes/pki/apiserver-etcd-client.crt{,.old}
mv /etc/kubernetes/pki/apiserver-etcd-client.key{,.old}
mv /etc/kubernetes/pki/front-proxy-client.crt{,.old}
mv /etc/kubernetes/pki/front-proxy-client.key{,.old}
mv /etc/kubernetes/pki/etcd/healthcheck-client.crt{,.old}
mv /etc/kubernetes/pki/etcd/healthcheck-client.key{,.old}
mv /etc/kubernetes/pki/etcd/peer.key{,.old}
mv /etc/kubernetes/pki/etcd/peer.crt{,.old}
mv /etc/kubernetes/pki/etcd/server.crt{,.old}
mv /etc/kubernetes/pki/etcd/server.key{,.old}
mv /etc/kubernetes/kubelet.conf{,.old}
mv /etc/kubernetes/admin.conf{,.old}
mv /etc/kubernetes/controller-manager.conf{,.old}
mv /etc/kubernetes/scheduler.conf{,.old}
# Regenerate keys and config files
kubeadm alpha phase certs apiserver --config /etc/kubernetes/kubeadm.yaml
kubeadm alpha phase certs apiserver-etcd-client --config /etc/kubernetes/kubeadm.yaml
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client
kubeadm alpha phase certs etcd-healthcheck-client
kubeadm alpha phase certs etcd-peer
kubeadm alpha phase certs etcd-server
kubeadm alpha phase kubeconfig all --config /etc/kubernetes/kubeadm.yaml
# then need to restart the kubelet and services, but for the master probably best to just reboot
</code></pre>
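<p>Before rebooting, it is worth confirming that the regenerated certificates actually carry a fresh expiry. The snippet below is a minimal sketch of the same <code>openssl x509 -enddate</code> check used in the one-liner above, exercised here on a throwaway self-signed certificate so it can be run anywhere (the <code>/tmp</code> paths are illustrative; on the master you would point at <code>/etc/kubernetes/pki/apiserver.crt</code> instead):</p>

```shell
# Create a throwaway self-signed cert to demonstrate the expiry check
# (stand-in for /etc/kubernetes/pki/apiserver.crt on a real master)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 365 -nodes -subj "/CN=demo" 2>/dev/null

# Prints a "notAfter=..." line; after a renewal this date should be
# roughly one year in the future, not in the past
openssl x509 -in /tmp/demo.crt -noout -enddate
```

<p>If the <code>notAfter</code> date on the real <code>apiserver.crt</code> is still in the past after running the phases above, the regeneration did not take effect and the kubelet will keep logging the <code>connection refused</code> errors shown in the question.</p>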
|