<p>I'm setting up a new K8S Cluster (1.13.7-gke.8) on GKE and I want the Google cloud logging API to properly report namespace and instance names. </p>
<p>This is executed in a new GKE cluster with workload-identity enabled. </p>
<p>I started a workload container to test access to the metadata service, and these are the results:</p>
<pre><code>kubectl run -it --generator=run-pod/v1 --image google/cloud-sdk --namespace prod --rm workload-identity-test
</code></pre>
<p>And from the container after executing:</p>
<pre><code>curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/" -H "Metadata-Flavor: Google"
</code></pre>
<p>I expect the output of cluster-name, container-name, and namespace-id, but the actual output is only cluster-name.</p>
| <p>I was getting the same result, but when I ran the following the metadata showed up:</p>
<pre><code>gcloud beta container node-pools create [NODEPOOL_NAME] \
--cluster=[CLUSTER_NAME] \
--workload-metadata-from-node=EXPOSED
</code></pre>
<p>However, you will only get <code>cluster-name</code> from the metadata. For example,</p>
<pre><code>root@workload-identity-test:/# curl "http://metadata.google.internal/computeMetadata/v1/instance/attributes/" -H "Metadata-Flavor: Google"
cluster-location
cluster-name
cluster-uid
configure-sh
created-by
disable-legacy-endpoints
enable-oslogin
gci-ensure-gke-docker
gci-update-strategy
google-compute-enable-pcid
instance-template
kube-env
kube-labels
kubelet-config
user-data
</code></pre>
<p>If you are looking at getting namespaces and containers, I suggest you look at talking directly to the <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="nofollow noreferrer">Kubernetes API</a>, which is essentially what the 'Workloads' tab on GKE does. I'm not really sure what you are trying to do with the 'Google cloud logging API', but maybe you can elaborate in a different question.</p>
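<p>If all that's needed inside the workload is its own namespace and pod name, the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Downward API</a> is a lighter alternative to calling the Kubernetes API. A minimal sketch (the container shown is illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: workload-identity-test
  namespace: prod
spec:
  containers:
  - name: main
    image: google/cloud-sdk
    env:
    # Injected by the kubelet; no API server round-trip needed
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
</code></pre>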
|
<p>I need some help on debugging the error: <code>0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.</code> Can someone please help?</p>
<p>I am trying to run a pod on Mac (first) using the Docker Desktop flavor of Kubernetes, version 2.1.0.1 (37199). I'd like to try using hostNetwork mode because of its efficiency and the number of ports that need to be opened (in the thousands). With only <code>hostNetwork: true</code> set, there is no error, but I also don't see the ports being opened on the host, nor the host network interface inside the container. Since I also need to open port 443, I added the capability <code>NET_BIND_SERVICE</code>, and that is when it started throwing the error.</p>
<p>I've run <code>lsof -i</code> inside the container (ubuntu:18.04) and then <code>sudo lsof -i</code> on my Mac, and I saw no conflict. Then, I've also looked at <code>/var/lib/log/containers/kube-apiserver-docker-desktop_kube-system_kube-apiserver-*.log</code> and I saw no clue. Thanks!</p>
<p>Additional Info:
I've run the following inside the container:</p>
<pre><code># ss -nltp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 5 0.0.0.0:10024 0.0.0.0:* users:(("pnnsvr",pid=1,fd=28))
LISTEN 0 5 0.0.0.0:2443 0.0.0.0:* users:(("pnnsvr",pid=1,fd=24))
LISTEN 0 5 0.0.0.0:10000 0.0.0.0:* users:(("pnnsvr",pid=1,fd=27))
LISTEN 0 50 0.0.0.0:6800 0.0.0.0:* users:(("pnnsvr",pid=1,fd=14))
LISTEN 0 1 0.0.0.0:6802 0.0.0.0:* users:(("pnnsvr",pid=1,fd=13))
LISTEN 0 50 0.0.0.0:443 0.0.0.0:* users:(("pnnsvr",pid=1,fd=15))
</code></pre>
<p>Then, I ran <code>netstat</code> on my Mac (the host) and searched for those ports and I can't find a collision. I'm happy to supply the output of netstat (767 lines) if needed.</p>
<p>Here is the yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: pnnsvr
labels:
app: pnnsvr
env: dev
spec:
replicas: 1
selector:
matchLabels:
app: pnnsvr
env: dev
template:
metadata:
labels:
app: pnnsvr
env: dev
spec:
hostNetwork: true
containers:
- name: pnnsvr
image: dev-pnnsvr:0.92
args: ["--root_ip=192.168.15.194"]
# for using local images
imagePullPolicy: Never
ports:
- name: https
containerPort: 443
hostPort: 443
- name: cport6800tcp
containerPort: 6800
hostPort: 6800
protocol: TCP
- name: cport10000tcp
containerPort: 10000
hostPort: 10000
protocol: TCP
- name: cport10000udp
containerPort: 10000
hostPort: 10000
protocol: UDP
- name: cport10001udp
containerPort: 10001
hostPort: 10001
protocol: UDP
#test
- name: cport23456udp
containerPort: 23456
hostPort: 23456
protocol: UDP
securityContext:
capabilities:
add:
- SYS_NICE
- NET_BIND_SERVICE
- CAP_SYS_ADMIN
</code></pre>
| <p>I accidentally resolved this by bouncing the pod instead of using <code>kubectl apply -f ...</code>. Soon after bouncing the pod, the new pod comes up fine. My theory is that Kubernetes brings up the new pod and gets it fully ready before killing the old pod. Since the old pod still has the ports open, the new pod sees those ports as taken, and thus the error <code>0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports</code> is triggered.</p>
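<p>If that theory is right, a less accidental fix is to tell the Deployment to terminate the old pod before creating the new one, so the host ports are free by the time the replacement is scheduled. A sketch of the relevant fragment (assuming the Deployment from the question):</p>
<pre><code># With hostNetwork/hostPort, the default RollingUpdate strategy can collide
# on ports; Recreate kills the old pod first.
spec:
  strategy:
    type: Recreate
</code></pre>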
|
<p>I am using Stackdriver Monitoring API to get the metrics related to the containers. The JSON object returned from the API has the following details of the container.</p>
<p>Example:</p>
<pre><code>{
"metric": {
"type": "container.googleapis.com/container/cpu/utilization"
},
"resource": {
"type": "gke_container",
"labels": {
"zone": "us-central1-a",
"pod_id": "1138528c-c36e-11e9-a1a7-42010a800198",
"project_id": "auto-scaling-springboot",
"cluster_name": "load-test",
"container_name": "",
"namespace_id": "f0965889-c36d-11e9-9e00-42010a800198",
"instance_id": "3962380509873542383"
}
},
"metricKind": "GAUGE",
"valueType": "DOUBLE",
"points": [
{
"interval": {
"startTime": "2019-09-04T04:00:00Z",
"endTime": "2019-09-04T04:00:00Z"
},
"value": {
"doubleValue": 0.050707947222229495
}
}
]
}
</code></pre>
<p>When I execute <code>kubectl describe pod [pod name]</code>, I get none of this information unique to a container, so I am unable to match the results to a container.</p>
<p>Therefore, how do I get the pod ID so that I can identify it?</p>
| <h3>Use kubectl <code>jsonpath</code></h3>
<p>To get a specific pod's UID:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n <namespace> <pod-name> -o jsonpath='{.metadata.uid}'
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n kube-system kubedb-66f78 -o jsonpath='{.metadata.uid}'
275ecb36-5aa8-4c2a-9c47-d8bb681b9aff
</code></pre>
<h3>Use kubectl <code>custom-columns</code></h3>
<p>List all PodName along with its UID of a namespace:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n <namespace> -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n kube-system -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid
PodName PodUID
coredns-6955765f44-8kp9t 0ae5c03d-5fb3-4eb9-9de8-2bd4b51606ba
coredns-6955765f44-ccqgg 6aaa09a1-241a-4013-b706-fe80ae371206
etcd-kind-control-plane c7304563-95a8-4428-881e-422ce3e073e7
kindnet-jgb95 f906a249-ab9d-4180-9afa-4075e2058ac7
kube-apiserver-kind-control-plane 971165e8-6c2e-4f99-8368-7802c1e55e60
kube-controller-manager-kind-control-plane a0dce3a7-a734-485d-bfee-8ac3de6bb486
kube-proxy-27wgd d900c0b2-dc21-46b5-a97e-f30e830aa9be
kube-scheduler-kind-control-plane 9c6f2399-4986-4259-9cd7-875eff1d7198
</code></pre>
<h3>Use Unix/Linux command <code>grep</code></h3>
<p>You can use <code>kubectl get pods</code> along with <code>grep</code>.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n <namespace> <pod-name> -o yaml | grep uid
uid: bcfbdfb5-ce0f-11e9-b83e-080027d4916d
</code></pre>
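<p>If the goal is to correlate metrics from inside the workload itself, the Downward API can also expose the pod's UID directly to the container. A sketch of the env fragment (the variable name is arbitrary):</p>
<pre><code>env:
- name: POD_UID
  valueFrom:
    fieldRef:
      fieldPath: metadata.uid
</code></pre>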
|
<p>When I want to extract the current value of some container env variable I can use jsonpath with syntax like:</p>
<pre><code>kubectl get pods -l component='somelabel' -n somenamespace -o \
jsonpath='{.items[*].spec.containers[*].env[?(@.name=="SOME_ENV_VARIABLE")].value}'
</code></pre>
<p>That will return the value of the env variable named <code>SOME_ENV_VARIABLE</code>. The pod's container env section in JSON looks like this:</p>
<pre><code> "spec": {
"containers": [
{
"env": [
{
"name": "SOME_ENV_VARIABLE",
"value": "some_value"
},
{
"name": "ANOTHER_ENV_VARIABLE",
"value": "another_value"
}
],
</code></pre>
<p>When I want to patch some value in my deployment I'm using commands with syntax like:</p>
<pre><code>kubectl -n kube-system patch svc kubernetes-dashboard --type='json' -p="[{'op': 'replace', 'path': '/spec/ports/0/nodePort', 'value': $PORT}]"
</code></pre>
<p>But how can I patch a variable with <code>'op': 'replace'</code> in cases where I need to use an expression like <code>env[?(@.name=="SOME_ENV_VARIABLE")]</code>? Which syntax should I use?</p>
| <p>Rather than <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#patching-resources" rel="noreferrer">kubectl patch</a> command, you can make use of <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-env-em-" rel="noreferrer">kubectl set env</a> to update environment variable of k8s deployment.</p>
<pre><code>envvalue=$(kubectl get pods -l component='somelabel' -n somenamespace -o jsonpath='{.items[*].spec.containers[*].env[?(@.name=="SOME_ENV_VARIABLE")].value}')
kubectl set env deployment/my-app-deploy op=$envvalue
</code></pre>
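<p>If you do need <code>kubectl patch</code> itself: JSON Patch paths cannot contain filter expressions like <code>env[?(@.name==...)]</code>, so one workaround is to resolve the array index first and then patch by index. A sketch for a hypothetical deployment <code>my-app</code> with the variable in its first container; the deployment JSON is inlined as sample data here so the index logic is visible end to end — in practice you would pipe <code>kubectl get deploy my-app -o json</code> instead:</p>
<pre><code># JSON Patch has no [?(@.name==...)] filters, so find the index first
sample='{"spec":{"template":{"spec":{"containers":[{"env":[{"name":"OTHER","value":"x"},{"name":"SOME_ENV_VARIABLE","value":"some_value"}]}]}}}'
i=$(printf '%s' "$sample" | python3 -c '
import json, sys
env = json.load(sys.stdin)["spec"]["template"]["spec"]["containers"][0]["env"]
print(next(i for i, e in enumerate(env) if e["name"] == "SOME_ENV_VARIABLE"))')
echo "$i"   # index of SOME_ENV_VARIABLE in the env array
# ...then replace that exact array element by index:
# kubectl patch deploy my-app --type='json' \
#   -p="[{\"op\":\"replace\",\"path\":\"/spec/template/spec/containers/0/env/$i/value\",\"value\":\"new_value\"}]"
</code></pre>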
<p>Hope this helps.</p>
|
<p>When I create my.yaml, nginx shows failed (111: Connection refused) while connecting to upstream.</p>
<p>Any suggestions about this? Is the pod network not connecting?</p>
<p>uwsgi:</p>
<pre><code>[uwsgi]
module = myapp.wsgi
master = true
processes = 10
socket = 127.0.0.1:8001
chmod-socket = 777
vacuum = true
enable-threads = True
</code></pre>
<p>nginx:</p>
<pre><code>upstream django_api {
server 127.0.0.1:8001 max_fails=20 fail_timeout=10s;
}
server {
listen 80;
#location /media {
# alias /media; # your Django project's media files - amend as required
#}
location /static {
alias /usr/share/apps/static; # your Django project's static files - amend as required
}
location / {
uwsgi_read_timeout 60;
uwsgi_pass django_api;
include ./uwsgi_params; # the uwsgi_params file you installed
uwsgi_param Host $host;
uwsgi_param X-Real-IP $remote_addr;
uwsgi_param X-Forwarded-For $proxy_add_x_forwarded_for;
uwsgi_param X-Forwarded-Proto $http_x_forwarded_proto;
}
}
}
</code></pre>
| <p>So just a thought: maybe set your uwsgi to bind <code>0.0.0.0:8001</code>, then in your pod spec set the container port to 8001:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
labels:
component: web
spec:
containers:
- name: django
image: myimage
ports:
- containerPort: 8001
- name: nginx
image: nginx:alpine
....
</code></pre>
<p>This should make it available at localhost:8001 / 127.0.0.1:8001.</p>
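<p>For completeness, the uwsgi side of that change might look like this (a sketch; only the socket line differs from the config in the question):</p>
<pre><code>[uwsgi]
module = myapp.wsgi
master = true
processes = 10
; bind on all interfaces instead of loopback only
socket = 0.0.0.0:8001
chmod-socket = 777
vacuum = true
enable-threads = True
</code></pre>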
|
<p>As per this official <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">document</a>, Kubernetes Persistent Volumes support three types of access modes.</p>
<ol>
<li>ReadOnlyMany</li>
<li>ReadWriteOnce</li>
<li>ReadWriteMany</li>
</ol>
<p>The given definitions of them in the document is very high-level. It would be great if someone can explain them in little more detail along with some examples of different use cases where we should use one vs other.</p>
| <p>You should use <code>ReadWriteX</code> when you plan to have Pods that will need to <em>write</em> to the volume, and not only <em>read</em> data from the volume.</p>
<p>You should use <code>XMany</code> when you want the ability for Pods to access the given volume while those workloads are running on different nodes in the Kubernetes cluster. These Pods may be multiple replicas belonging to a Deployment, or may be completely different Pods. There are many cases where it's desirable to have Pods running on different nodes, for instance if you have multiple Pod replicas for a single Deployment, then having them run on different nodes can help ensure some amount of continued availability even if one of the nodes fails or is being updated.</p>
<p>If you don't use <code>XMany</code>, but you do have multiple Pods that need access to the given volume, that will force Kubernetes to schedule all those Pods to run on whatever node the volume gets mounted to first, which could overload that node if there are too many such pods, and can impact the availability of Deployments whose Pods need access to that volume as explained in the previous paragraph.</p>
<p>So putting all that together:</p>
<ul>
<li>If you need to write to the volume, and you may have multiple Pods needing to write to the volume where you'd prefer the flexibility of those Pods being scheduled to different nodes, and <code>ReadWriteMany</code> is an option given the volume plugin for your K8s cluster, use <code>ReadWriteMany</code>.</li>
<li>If you need to write to the volume but either you don't have the requirement that multiple pods should be able to write to it, or <code>ReadWriteMany</code> simply isn't an available option for you, use <code>ReadWriteOnce</code>.</li>
<li>If you only need to read from the volume, and you may have multiple Pods needing to read from the volume where you'd prefer the flexibility of those Pods being scheduled to different nodes, and <code>ReadOnlyMany</code> is an option given the volume plugin for your K8s cluster, use <code>ReadOnlyMany</code>.</li>
<li>If you only need to read from the volume but either you don't have the requirement that multiple pods should be able to read from it, or <code>ReadOnlyMany</code> simply isn't an available option for you, use <code>ReadWriteOnce</code>. In this case, you want the volume to be read-only but the limitations of your volume plugin have forced you to choose <code>ReadWriteOnce</code> (there's no <code>ReadOnlyOnce</code> option). As a good practice, consider the <code>containers.volumeMounts.readOnly</code> setting to <code>true</code> in your Pod specs for volume mounts corresponding to volumes that are intended to be read-only.</li>
</ul>
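<p>Putting the read-only case into a concrete sketch (all names are illustrative): a PVC requesting <code>ReadOnlyMany</code> where the volume plugin supports it, plus a read-only mount as a guard even when it doesn't:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadOnlyMany        # fall back to ReadWriteOnce if the plugin lacks this
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true    # enforce read-only regardless of the access mode
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-data
</code></pre>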
|
<p>I am configuring my EFK stack to keep all Kubernetes related logs including the events.
I searched and found the metricbeat config file and deployed it in my cluster.</p>
<p>Problem: All other metricbeat modules are working fine except for the "event" resource. I can see logs from state_pod, state_node, etc., but no logs are available for the event module.</p>
<p>ERROR : <code>2019/09/04 11:53:23.961693 watcher.go:52: ERR kubernetes: List API error kubernetes api: Failure 403 events is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "events" in API group "" at the cluster scope</code></p>
<p>My metricbeat.yml file:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-config
namespace: kube-system
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
data:
metricbeat.yml: |-
metricbeat.config.modules:
# Mounted `metricbeat-daemonset-modules` configmap:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: metricbeat-deployment-modules
namespace: kube-system
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
data:
# This module requires `kube-state-metrics` up and running under `kube-system` namespace
kubernetes.yml: |-
- module: kubernetes
metricsets:
- state_node
- state_deployment
- state_replicaset
- state_pod
- state_container
period: 10s
hosts: ["kube-state-metrics:5602"]
- module: kubernetes
enabled: true
metricsets:
- event
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: metricbeat
namespace: kube-system
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: metricbeat
image: docker.elastic.co/beats/metricbeat:6.0.1
args: [
"-c", "/etc/metricbeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: "elasticsearch"
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
runAsUser: 0
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/metricbeat.yml
readOnly: true
subPath: metricbeat.yml
- name: modules
mountPath: /usr/share/metricbeat/modules.d
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: metricbeat-config
- name: modules
configMap:
defaultMode: 0600
name: metricbeat-deployment-modules
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: metricbeat
subjects:
- kind: ServiceAccount
name: metricbeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: metricbeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: metricbeat
labels:
k8s-app: metricbeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- events
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metricbeat
namespace: kube-system
labels:
k8s-app: metricbeat
---
</code></pre>
| <p>You are running your deployment with the default service account.<br>
Set the name of the ServiceAccount in the spec.serviceAccountName field in the Deployment definition.</p>
<pre><code>kind: Deployment
metadata:
name: metricbeat
namespace: kube-system
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: metricbeat
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: metricbeat  # <-- here
</code></pre>
<p>Also, you may need to add the resource <code>pods/log</code> to your ClusterRole definition.</p>
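<p>Once the Deployment uses the <code>metricbeat</code> service account, the RBAC grant can be verified without redeploying anything (a sketch, assuming the ClusterRoleBinding from the manifest above is applied):</p>
<pre><code># Should print "yes" once the binding is in effect
kubectl auth can-i list events --as=system:serviceaccount:kube-system:metricbeat
</code></pre>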
|
<p>I have a StatefulSet in Openshift that keeps restarting, but only on a single node. I don't see anything in the pod logs. In <code>/var/log/messages</code> I see only messages that the container is restarting, the volume is unmounted, etc., and some more cryptic ones: <code>'error: Container is already stopped'</code> and <code>'cleanup: failed to unmount secrets: invalid argument'</code>.</p>
<p>However, when I look and Yaml for StatefulSet I see the following:</p>
<pre><code>status:
collisionCount: 1
currentReplicas: 1
</code></pre>
<p>I suppose this is the real cause.<br>
But how can I find out what generated that collision? </p>
| <p><code>StatefulSets</code> internally perform a snapshot of the data via <code>ControllerRevisions</code> and <a href="https://godoc.org/k8s.io/api/apps/v1#ControllerRevision" rel="nofollow noreferrer">generate a hash for each version</a>.</p>
<p>What the <code>collisionCount</code> indicates is that the <code>ControllerRevision</code> hash collided, likely due to an <a href="https://github.com/kubernetes/kubernetes/issues/61998#issuecomment-403955359" rel="nofollow noreferrer">implementation issue</a>.</p>
<p>You can try to rule this out by getting the controller revisions:</p>
<p><code>$ kubectl get controllerrevisions</code></p>
<p>Since this is an internal mechanism in the object, there is little to do other than recreate the object to generate new hashes that don't collide. There is a <a href="https://github.com/kubernetes/kubernetes/pull/66882" rel="nofollow noreferrer">merged PR</a> that suggests that newer versions shouldn't face this issue. However, <a href="https://github.com/kubernetes/kubernetes/issues/61998#issuecomment-512631430" rel="nofollow noreferrer">it might be the case</a> that you're running a version without this patch.</p>
|
<p>I know what PriorityClasses are in k8s, but after a lot of searching I could not find anything about the priority object.</p>
<p>So the problem is: I have created a PriorityClass object in my k8s cluster, set its value to -100000, and created a pod with this priorityClass. Now when I do kubectl describe pod I get two different fields: </p>
<pre><code>Priority: 0
PriorityClassName: imagebuild-priority
</code></pre>
<p>My admission-controller throws the following error</p>
<pre><code>Error from server (Forbidden): error when creating "/tmp/tmp.4tJSpSU0dy/app.yml":
pods "pod-name" is forbidden: the integer value of priority (0) must not be provided in pod spec;
priority admission controller computed -1000000 from the given PriorityClass name
</code></pre>
<p>Somewhere it is setting Priority to 0, and the PriorityClass is trying to set it to -1000000.</p>
<p>PriorityClass object has globalDefault: False</p>
<p>Command Run </p>
<p><code>kubectl create -f app.yml</code></p>
<p>Yaml file</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: image-builder-serviceacc
spec:
securityContext:
runAsUser: 0
serviceAccountName: {{ serviceaccount }}
automountServiceAccountToken: false
containers:
- name: container
image: ....
imagePullPolicy: Always
env:
- name: PATH
value: "$PATH:/bin:/busybox/"
command: [ "sh", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
initContainers:
- name: init-container
image: ....
imagePullPolicy: Always
env:
- name: PATH
value: "$PATH:/bin:/busybox/"
command: [ "sh", "-c", "--" ]
args: [ "ls" ]
restartPolicy: Always
</code></pre>
<p>A mutating controller will append the PriorityClass.</p>
| <p>As per <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>PriorityClass also has two optional fields: globalDefault and description. The globalDefault field indicates that the value of this PriorityClass should be used for Pods without a priorityClassName. <strong>Only one PriorityClass with globalDefault set to true</strong> can exist in the system. If there is no PriorityClass with globalDefault set, the priority of Pods with no priorityClassName is zero.</p>
</blockquote>
<p>This error means that you have a collision:</p>
<pre><code>the integer value of priority (0) must not be provided in pod spec;
priority admission controller computed -1000000 from the given PriorityClass name
</code></pre>
<p>You can fix it in two ways.</p>
<p>You can choose <strong>globalDefault: true</strong>:</p>
<p>PriorityClass:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
name: high-priority-minus
value: -2000000
globalDefault: True
description: "This priority class should be used for XYZ service pods only."
</code></pre>
<p>Pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx5
labels:
env: test
spec:
containers:
- name: nginx5
image: nginx
imagePullPolicy: IfNotPresent
priorityClassName: high-priority-minus
</code></pre>
<p>priorityClassName <strong>can be used here, but you don't need to</strong></p>
<p>Or with <strong>globalDefault: false</strong>:</p>
<p>You need to choose <strong>one option</strong>, priorityClassName or priority, in your pod, as described in your error message.</p>
<p>PriorityClass:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
name: high-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
</code></pre>
<p>Pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx7
labels:
env: test
spec:
containers:
- name: nginx7
image: nginx
imagePullPolicy: IfNotPresent
priorityClassName: high-priority
</code></pre>
|
<p>I have a Bare-Metal Kubernetes custom setup (manually setup cluster using Kubernetes the Hard Way). Everything seems to work, but I cannot access services externally.</p>
<p>I can get the list of services when curl:</p>
<pre><code>https://<ip-addr>/api/v1/namespaces/kube-system/services
</code></pre>
<p>However, when I try to proxy (using <code>kubectl proxy</code>, and also by using the <code><master-ip-address>:<port></code>):</p>
<pre><code>https://<ip-addr>/api/v1/namespaces/kube-system/services/toned-gecko-grafana:80/proxy/
</code></pre>
<p>I get:</p>
<pre><code>Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'
Trying to reach: 'http://10.44.0.16:3000/'
</code></pre>
<ul>
<li><p><strike>Even if I curl <code>http://10.44.0.16:3000/</code> directly I get the same error, even from inside the VM where Kubernetes is installed.</strike> Was able to resolve this; check below.</p></li>
<li><p>I can access my services externally using NodePort.</p></li>
<li><p>I can access my services if I expose them through Nginx-Ingress.</p></li>
<li><p>I am using Weave as CNI, and the logs were normal except a couple of log-lines at the beginning about it not being able to access Namespaces (RBAC error). Though logs were fine after that.</p></li>
<li><p>Using CoreDNS, logs look normal. APIServer and Kubelet logs look normal. Kubernetes-Events look normal, too.</p></li>
<li><p><strong><em>Additional Note</em></strong>: The DNS Service-IP I assigned is <code>10.3.0.10</code>, the service IP range is <code>10.3.0.0/24</code>, and the POD Network is <code>10.2.0.0/16</code>. I am not sure what <code>10.44.x.x</code> is or where it is coming from.</p>
<ul>
<li>Also, I am using Nginx-Ingress (Helm Chart: <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a>)</li>
</ul></li>
</ul>
<p>Here is output from one of the services:</p>
<pre><code>{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "kubernetes-dashboard",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
"uid": "5c8bb34f-c6a2-11e8-84a7-00163cb4ceeb",
"resourceVersion": "7054",
"creationTimestamp": "2018-10-03T00:22:07Z",
"labels": {
"addonmanager.kubernetes.io/mode": "Reconcile",
"k8s-app": "kubernetes-dashboard",
"kubernetes.io/cluster-service": "true"
},
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"kubernetes-dashboard\",\"kubernetes.io/cluster-service\":\"true\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"port\":443,\"targetPort\":8443}],\"selector\":{\"k8s-app\":\"kubernetes-dashboard\"}}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 443,
"targetPort": 8443,
"nodePort": 30033
}
],
"selector": {
"k8s-app": "kubernetes-dashboard"
},
"clusterIP": "10.3.0.30",
"type": "NodePort",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
}
}
}
</code></pre>
<p>I am not sure how to debug this, even some pointers to the right direction would help. If anything else is required, please let me know.</p>
<hr>
<p>Output from <code>kubectl get svc</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns-primary ClusterIP 10.3.0.10 <none> 53/UDP,53/TCP,9153/TCP 4h51m
kubernetes-dashboard NodePort 10.3.0.30 <none> 443:30033/TCP 4h51m
</code></pre>
<hr>
<p><strong>EDIT:</strong></p>
<p>Turns out I didn't have <code>kube-dns</code> service running for some reason, despite having CoreDNS running. It was as mentioned here: <a href="https://github.com/kubernetes/kubeadm/issues/1056#issuecomment-413235119" rel="noreferrer">https://github.com/kubernetes/kubeadm/issues/1056#issuecomment-413235119</a></p>
<p>Now I can curl from inside the VM successfully, but the proxy-access still gives me the same error: <code>No route to host</code>. I am not sure why or how this would fix the issue, since I don't see DNS being in play here, but it fixed the issue regardless. Would appreciate any possible explanation of this too.</p>
| <p>I encountered the same issue and resolved it by running the commands below:</p>
<pre><code>iptables --flush
iptables -t nat --flush
systemctl stop firewalld
systemctl disable firewalld
systemctl restart docker
</code></pre>
|
<p>I've started working with Docker images and have set up Kubernetes. I have fixed everything, but I am having problems with the timeout of pod recreation.</p>
<p>If a pod is running on a particular node and I shut that node down, it takes ~5 minutes to recreate the pod on another online node.</p>
<p>I've checked all the possible config files, also set all pod-eviction-timeout, horizontal-pod-autoscaler-downscale, horizontal-pod-autoscaler-downscale-delay flags but it is still not working.</p>
<p>Current kube controller manager config:</p>
<pre><code>spec:
containers:
- command:
- kube-controller-manager
- --address=192.168.5.135
- --allocate-node-cidrs=false
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=192.168.5.0/24
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --use-service-account-credentials=true
- --horizontal-pod-autoscaler-downscale-delay=20s
- --horizontal-pod-autoscaler-sync-period=20s
- --node-monitor-grace-period=40s
- --node-monitor-period=5s
- --pod-eviction-timeout=20s
- --use-service-account-credentials=true
- --horizontal-pod-autoscaler-downscale-stabilization=20s
image: k8s.gcr.io/kube-controller-manager:v1.13.0
</code></pre>
<p>Thank you.</p>
| <p>If <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#taint-based-evictions" rel="noreferrer">Taint Based Evictions</a> are present in the pod definition, controller manager will not be able to evict the pod that tolerates the taint. Even if you don't define an eviction policy in your configuration, it gets a default one since <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#defaulttolerationseconds" rel="noreferrer">Default Toleration Seconds</a> admission controller plugin is enabled by default.</p>
<p>Default Toleration Seconds admission controller plugin configures your pod like below:</p>
<pre><code>tolerations:
- key: node.kubernetes.io/not-ready
effect: NoExecute
tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
operator: Exists
effect: NoExecute
tolerationSeconds: 300
</code></pre>
<p>You can verify this by inspecting definition of your pod:</p>
<pre><code>kubectl get pods -o yaml -n <namespace> <pod-name>`
</code></pre>
<p>According to the above toleration, it takes more than 5 minutes to recreate the pod on another ready node, since the pod can tolerate the <code>not-ready</code> taint for up to 5 minutes. In this case, even if you set <code>--pod-eviction-timeout</code> to 20s, there is nothing the controller manager can do because of the tolerations.</p>
<p>But why does it take more than 5 minutes? Because the node will be considered down after <code>--node-monitor-grace-period</code>, which defaults to 40s. After that, the pod toleration timer starts.</p>
<hr>
<h3>Recommended Solution</h3>
<p>If you want your cluster to react faster to node outages, you should use taints and tolerations without modifying the component options. For example, you can define your pod like below:</p>
<pre><code>tolerations:
- key: node.kubernetes.io/not-ready
effect: NoExecute
tolerationSeconds: 0
- key: node.kubernetes.io/unreachable
effect: NoExecute
tolerationSeconds: 0
</code></pre>
<p>With the above toleration your pod will be recreated on a ready node just after the current node is marked as not ready. This should take less than a minute, since <code>--node-monitor-grace-period</code> defaults to 40s.</p>
<h3>Available Options</h3>
<p>If you want to be in control of these timings, below you will find plenty of options to do so. However, modifying these options should be avoided: tight timings create overhead on etcd, since every node will try to update its status very often.</p>
<p>In addition to this, it is currently not clear how to propagate changes in controller manager, api server and kubelet configuration to all nodes in a live cluster. Please see <a href="https://github.com/kubernetes/kubeadm/issues/970" rel="noreferrer">Tracking issue for changing the cluster</a> and <a href="https://github.com/kubernetes/enhancements/issues/281" rel="noreferrer">Dynamic Kubelet Configuration</a>. As of this writing, <a href="https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/" rel="noreferrer">reconfiguring a node's kubelet in a live cluster</a> is in beta.</p>
<p>You can configure control plane and kubelet during kubeadm init or join phase. Please refer to <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/" rel="noreferrer">Customizing control plane configuration with kubeadm</a> and <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/" rel="noreferrer">Configuring each kubelet in your cluster using kubeadm</a> for more details.</p>
<p>Assuming you have a single-node cluster, the relevant options per component are:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="noreferrer">controller manager</a> includes:
<ul>
<li><code>--node-monitor-grace-period</code> default 40s</li>
<li><code>--node-monitor-period</code> default 5s</li>
<li><code>--pod-eviction-timeout</code> default 5m0s</li>
</ul></li>
<li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="noreferrer">api server</a> includes:
<ul>
<li><code>--default-not-ready-toleration-seconds</code> default 300</li>
<li><code>--default-unreachable-toleration-seconds</code> default 300</li>
</ul></li>
<li><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">kubelet</a> includes:
<ul>
<li><code>--node-status-update-frequency</code> default 10s</li>
</ul></li>
</ul>
<p>If you set up the cluster with <code>kubeadm</code> you can modify:</p>
<ul>
<li><code>/etc/kubernetes/manifests/kube-controller-manager.yaml</code> for controller manager options.</li>
<li><code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> for api server options.</li>
</ul>
<p><strong>Note:</strong> Modifying these files will reconfigure and restart the respective pod in the node.</p>
<p>In order to modify the <code>kubelet</code> config, add the line below:</p>
<pre><code>KUBELET_EXTRA_ARGS="--node-status-update-frequency=10s"
</code></pre>
<p>to <code>/etc/default/kubelet</code> (for DEBs) or <code>/etc/sysconfig/kubelet</code> (for RPMs), and then restart the kubelet service:</p>
<pre><code>sudo systemctl daemon-reload && sudo systemctl restart kubelet
</code></pre>
|
<p>I have a Java client application that can connect to a Kubernetes cluster over SSL using <code>io.kubernetes.client.ApiClient</code> okay on its own. The same Java client application can also connect to an MQ cluster over SSL on its own. The same application however cannot connect to both the Kubernetes cluster over SSL and the MQ cluster over SSL at once.</p>
<p>I believe this may be due to the fact that only one SSL key/trust store can be configured on a JVM at any one time? But I do not know what the best way forward is in order to resolve this.</p>
<p>What would be the simplest way to allow a Java client to connect to both a Kubernetes cluster and an MQ cluster each over SSL?</p>
<p>The two configurations shown in this post result in the following error being thrown when both are run together:</p>
<pre><code>WARN io.kubernetes.client.util.credentials.ClientCertificateAuthentication - Could not create key manager for Client Certificate authentication.
java.security.UnrecoverableKeyException: Cannot recover key
</code></pre>
<p>The Kubernetes part of the client application connects to the Kubernetes cluster by configuring as follows:</p>
<pre><code>String kubeConfigPath = "~/.kube/config";
apiClient = ClientBuilder.kubeconfig(
KubeConfig.loadKubeConfig(new FileReader(kubeConfigPath))).build();
apiClient.getHttpClient().setReadTimeout(0, TimeUnit.SECONDS);
Configuration.setDefaultApiClient(apiClient);
</code></pre>
<p>The Mq part of the client application connects to the MQ cluster by configuring as follows:</p>
<pre><code>System.setProperty("javax.net.ssl.trustStore", "C:\\tmp\\ssl\\dev\\mgr_mq.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "password");
System.setProperty("javax.net.ssl.keyStore", "C:\\tmp\\ssl\\dev\\mgr_mq.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "password");
System.setProperty("com.ibm.mq.cfg.useIBMCipherMappings", "false");
java.net.URL ccdt = new URL("file:./config/qmgrs/mgr/AMQCLCHL.TAB");
MQQueueManager mqQueueManager = new MQQueueManager("*mgr", ccdt);
</code></pre>
| <p>The <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Kubernetes java client API code</a> seems to force adding the certificate referenced in <code>.kube/config</code> to a new truststore that it creates from scratch each time.</p>
<p>This seems to take place in the <code>ClientCertificateAuthentication.java</code> class' <code>provide(ApiClient client)</code> method:</p>
<pre><code>final KeyManager[] keyManagers = SSLUtils.keyManagers(certificate, key, algo, "", null, null);
</code></pre>
<p>The two <code>null</code> values, which are <code>keyStoreFile</code> and <code>keyStorePassphrase</code>, force a new truststore to be created internally.</p>
<p>So for now, and to prove that a solution is possible, I have overridden this class to be:</p>
<pre><code>final KeyManager[] keyManagers = SSLUtils.keyManagers(certificate, key, algo, "password", "C:\\tmp\\ssl\\dev\\mgr_mq.jks", "password");
</code></pre>
<p>With this overridden code, both the Kubernetes cluster and the MQ cluster can be successfully connected to over SSL within the same JVM.</p>
|
<p>I'm getting the following error, when I try to deploy nexus using Kubernetes.</p>
<p><em>Command:</em> <code>kubectl apply -f templates/deployment.yaml</code></p>
<blockquote>
<p>error parsing templates/deployment.yaml: json: line 1: invalid
character '{' looking for beginning of object key string</p>
</blockquote>
<p>Please find the below code which I'm trying:</p>
<pre class="lang-yaml prettyprint-override"><code> {{- if .Values.localSetup.enabled }}
apiVersion: apps/v1
kind: Deployment
{{- else }}
apiVersion: apps/v1
kind: StatefulSet
{{- end }}
metadata:
labels:
app: nexus
name: nexus
spec:
replicas: 1
selector:
matchLabels:
app: nexus
template:
metadata:
labels:
app: nexus
spec:
{{- if .Values.localSetup.enabled }}
volumes:
- name: nexus-data
persistentVolumeClaim:
claimName: nexus-pv-claim
- name: nexus-data-backup
persistentVolumeClaim:
claimName: nexus-backup-pv-claim
{{- end }}
containers:
- name: nexus
image: "quay.io/travelaudience/docker-nexus:3.15.2"
imagePullPolicy: Always
env:
- name: INSTALL4J_ADD_VM_PARAMS
value: "-Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
resources:
requests:
cpu: 250m
memory: 4800Mi
ports:
- containerPort: {{ .Values.nexus.dockerPort }}
name: nexus-docker-g
- containerPort: {{ .Values.nexus.nexusPort }}
name: nexus-http
volumeMounts:
- mountPath: "/nexus-data"
name: nexus-data
- mountPath: "/nexus-data/backup"
name: nexus-data-backup
{{- if .Values.useProbes.enabled }}
livenessProbe:
httpGet:
path: {{ .Values.nexus.livenessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.livenessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.livenessProbe.failureThreshold }}
{{- if .Values.nexus.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.livenessProbe.timeoutSeconds }}
{{- end }}
readinessProbe:
httpGet:
path: {{ .Values.nexus.readinessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.readinessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.readinessProbe.failureThreshold }}
{{- if .Values.nexus.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.readinessProbe.timeoutSeconds }}
{{- end }}
{{- end }}
{{- if .Values.nexusProxy.enabled }}
- name: nexus-proxy
image: "quay.io/travelaudience/docker-nexus-proxy:2.4.0_8u191"
imagePullPolicy: Always
env:
- name: ALLOWED_USER_AGENTS_ON_ROOT_REGEX
value: "GoogleHC"
- name: CLOUD_IAM_AUTH_ENABLED
value: "false"
- name: BIND_PORT
value: {{ .Values.nexusProxy.targetPort | quote }}
- name: ENFORCE_HTTPS
value: "false"
{{- if .Values.localSetup.enabled }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusLocalDockerhost }}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusLocalHttphost }}
{{- else }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusDockerHost}}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusHttpHost }}
{{- end }}
- name: UPSTREAM_DOCKER_PORT
value: {{ .Values.nexus.dockerPort | quote }}
- name: UPSTREAM_HTTP_PORT
value: {{ .Values.nexus.nexusPort | quote }}
- name: UPSTREAM_HOST
value: "localhost"
ports:
- containerPort: {{ .Values.nexusProxy.targetPort }}
name: proxy-port
{{- end }}
{{- if .Values.nexusBackup.enabled }}
- name: nexus-backup
image: "quay.io/travelaudience/docker-nexus-backup:1.4.0"
imagePullPolicy: Always
env:
- name: NEXUS_AUTHORIZATION
value: false
- name: NEXUS_BACKUP_DIRECTORY
value: /nexus-data/backup
- name: NEXUS_DATA_DIRECTORY
value: /nexus-data
- name: NEXUS_LOCAL_HOST_PORT
value: "localhost:8081"
- name: OFFLINE_REPOS
value: "maven-central maven-public maven-releases maven-snapshots"
- name: TARGET_BUCKET
value: "gs://nexus-backup"
- name: GRACE_PERIOD
value: "60"
- name: TRIGGER_FILE
value: .backup
volumeMounts:
- mountPath: /nexus-data
name: nexus-data
- mountPath: /nexus-data/backup
name: nexus-data-backup
terminationGracePeriodSeconds: 10
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: nexus-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
- metadata:
name: nexus-data-backup
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
</code></pre>
| <p>The template you provided here is part of a Helm chart, which can be deployed using the <code>helm</code> CLI, not using <code>kubectl apply</code>.</p>
<p>More info on using helm is <a href="https://helm.sh/docs/" rel="nofollow noreferrer">here</a>.</p>
<p>You can also get the instructions to install nexus using helm in <a href="https://github.com/helm/charts/tree/master/stable/sonatype-nexus" rel="nofollow noreferrer">this</a> official stable helm chart.</p>
<p>Hope this helps.</p>
|
<p>I'm trying to run Spark on Kubernetes, with the aim of processing data from a Kerberized Hadoop cluster. My application consists of simple SparkSQL transformations. While I'm able to run the process successfully on a single driver pod, I cannot do this when attempting to use any executors. Instead, I get: </p>
<blockquote>
<p>org.apache.hadoop.security.AccessControlException: SIMPLE
authentication is not enabled. Available:[TOKEN, KERBEROS]</p>
</blockquote>
<p>Since the Hadoop environment is Kerberized, I've provided a valid keytab, as well as the core-site.xml, hive-site.xml, hadoop-site.xml, mapred-site.xml and yarn-site.xml, and a krb5.conf file inside the docker image. </p>
<p>I set up the environment settings with the following method:</p>
<pre><code>trait EnvironmentConfiguration {
def configureEnvironment(): Unit = {
val conf = new Configuration
conf.set("hadoop.security.authentication", "kerberos")
conf.set("hadoop.security.authorization", "true")
conf.set("com.sun.security.auth.module.Krb5LoginModule", "required")
System.setProperty("java.security.krb5.conf", ConfigurationProperties.kerberosConfLocation)
UserGroupInformation.loginUserFromKeytab(ConfigurationProperties.keytabUser, ConfigurationProperties.keytabLocation)
UserGroupInformation.setConfiguration(conf)
}
</code></pre>
<p>I also pass the *-site.xml files through the following method: </p>
<pre><code>trait SparkConfiguration {
def createSparkSession(): SparkSession = {
val spark = SparkSession.builder
.appName("MiniSparkK8")
.enableHiveSupport()
.master("local[*]")
.config("spark.sql.hive.metastore.version", ConfigurationProperties.hiveMetastoreVersion)
.config("spark.executor.memory", ConfigurationProperties.sparkExecutorMemory)
.config("spark.sql.hive.version", ConfigurationProperties.hiveVersion)
.config("spark.sql.hive.metastore.jars",ConfigurationProperties.hiveMetastoreJars)
spark.sparkContext.hadoopConfiguration.addResource(new Path(ConfigurationProperties.coreSiteLocation))
spark.sparkContext.hadoopConfiguration.addResource(new Path(ConfigurationProperties.hiveSiteLocation))
spark.sparkContext.hadoopConfiguration.addResource(new Path(ConfigurationProperties.hdfsSiteLocation))
spark.sparkContext.hadoopConfiguration.addResource(new Path(ConfigurationProperties.yarnSiteLocation))
spark.sparkContext.hadoopConfiguration.addResource(new Path(ConfigurationProperties.mapredSiteLocation))
}
}
</code></pre>
<p>I run the whole process with the following spark-submit command:</p>
<pre><code>spark-submit ^
--master k8s://https://kubernetes.example.environment.url:8443 ^
--deploy-mode cluster ^
--name mini-spark-k8 ^
--class org.spark.Driver ^
--conf spark.executor.instances=2 ^
--conf spark.kubernetes.namespace=<company-openshift-namespace> ^
--conf spark.kubernetes.container.image=<company_image_registry.image> ^
--conf spark.kubernetes.driver.pod.name=minisparkk8-cluster ^
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark ^
local:///opt/spark/examples/target/MiniSparkK8-1.0-SNAPSHOT.jar ^
/opt/spark/mini-spark-conf.properties
</code></pre>
<p>The above configurations are enough to get my spark application running and successfully connecting to the Kerberized Hadoop cluster. Although the spark submit command declares the creation of two executor pods, this does not happen because I have set master to <code>local[*]</code>. Consequently, only one pod is created which manages to connect to the Kerberized Hadoop cluster and successfully run my Spark transformations on Hive tables.</p>
<p>However, when I remove <code>.master(local[*])</code>, two executor pods are created. I can see from the logs that these executors connect successfully to the driver pod, and they are assigned tasks. It doesn't take long after this point for both of them to fail with the error mentioned above, resulting in the failed executor pods being terminated.
This is despite the executors already having all the necessary files inside their image to create a successful connection to the Kerberized Hadoop cluster. I believe that the executors are not using the keytab, which they would be doing if they were running the JAR. Instead, they're running tasks given to them by the driver.</p>
<p>I can see from the logs that the driver manages to authenticate itself correctly with the keytab for user, <code>USER123</code>: </p>
<blockquote>
<p>INFO SecurityManager:54 - SecurityManager: authentication disabled;
ui acls disabled; users with view permissions: Set(spark, USER123);
groups with view permissions: Set(); users with modify permissions:
Set(spark, USER123); groups with modify permissions: Set()</p>
</blockquote>
<p>On the other hand, you get the following from the executor's log, you can see that user, USER123 is not authenticated:</p>
<blockquote>
<p>INFO SecurityManager:54 - SecurityManager: authentication disabled;
ui acls disabled; users with view permissions: Set(spark); groups
with view permissions: Set(); users with modify permissions:
Set(spark); groups with modify permissions: Set()</p>
</blockquote>
<p>I have looked at various sources, including <a href="https://github.com/apache/spark/blob/master/docs/security.md" rel="nofollow noreferrer">here</a>. It mentions that <code>HIVE_CONF_DIR</code> needs to be defined, but I can see from my program (which prints the environment variables) that this variable is not present, including when the driver pod manages to authenticate itself and run the spark process fine. </p>
<p>I've tried running with the following added to the previous spark-submit command: </p>
<pre><code>--conf spark.kubernetes.kerberos.enabled=true ^
--conf spark.kubernetes.kerberos.krb5.path=/etc/krb5.conf ^
--conf spark.kubernetes.kerberos.keytab=/var/keytabs/USER123.keytab ^
--conf spark.kubernetes.kerberos.principal=USER123@REALM ^
</code></pre>
<p>But this made no difference. </p>
<p>My question is: how can I get the executors to authenticate themselves with the keytab they have in their image? I'm hoping this will allow them to perform their delegated tasks. </p>
| <p>First, get the delegation token from Hadoop using the commands below.</p>
<ol>
<li>Do a kinit -kt with your keytab and principal</li>
<li>Execute the below to store the hdfs delegation token in a tmp path
<code>spark-submit --class org.apache.hadoop.hdfs.tools.DelegationTokenFetcher "" --renewer null /tmp/spark.token</code></li>
<li>Do your actual spark-submit, adding this configuration:
<code>--conf spark.executorEnv.HADOOP_TOKEN_FILE_LOCATION=/tmp/spark.token \</code></li>
</ol>
<p>The above is how YARN executors authenticate. Do the same for the Kubernetes executors too.</p>
|
<p>The container sets this resource limit:</p>
<pre><code>resources:
limits:
cpu: "1"
memory: 1G
requests:
cpu: "1"
memory: 1G
</code></pre>
<p>the cgroup memory limit:</p>
<pre><code>cat /sys/fs/cgroup/memory/kubepods.slice/kubepods-podaace5b66_c7d0_11e9_ba2a_dcf401d01e81.slice/memory.limit_in_bytes
999997440
</code></pre>
<p>1GB= 1*1024*1024*1024=1,073,741,824B</p>
<p>k8s version:1.14.4</p>
<p>docker version: docker-ce-18.09.6<br>
OS: ubuntu 18.04</p>
| <p>I have performed some tests.</p>
<p>For values between 999997440 B (976560 KB) and 1000000000 B (as in your example) you will get the same result: memory.limit_in_bytes = 999997440 B. This holds until you reach the next integer number of bytes divisible by your page size (default 4096). In my example that was 1000001536 B (976564 KB).</p>
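<p>You can reproduce this rounding yourself. Note that in Kubernetes notation 1G means 10^9 bytes (1Gi would be 2^30), and rounding 10^9 down to a multiple of the 4096-byte page size gives exactly the value you saw (a quick sanity check, not actual kernel code):</p>

```python
PAGE_SIZE = 4096          # typical default page size
requested = 10 ** 9       # "1G" in the pod spec means 10**9 bytes, not 1Gi

# Round down to the nearest page boundary, as the kernel does.
effective = (requested // PAGE_SIZE) * PAGE_SIZE
print(effective)  # 999997440, matching memory.limit_in_bytes
```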
<p>I am not a Linux expert, but according to the <a href="https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>A successful write to this file does not guarantee a successful setting of
this limit to the value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to guarantee the value committed by the kernel.</p>
</blockquote>
<p>I would suggest using the Gi notation instead, as mentioned by <a href="https://stackoverflow.com/a/57761069/11207414">prometherion</a>, to have more control over resource limits.</p>
<p>Hope this helps.</p>
|
<p>Suppose, I just installed one of the Kubernetes CNI plugins, for example <code>weave-net</code>:</p>
<pre><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
</code></pre>
<p>How can I view or list the installed CNI plugins?</p>
<p>After installing how do I know that it is running? Or if I <code>kubectl delete</code> the plugin, how do I know it was deleted?</p>
<p>After installing the plugin, I sort of expected to see some objects created for it, so that if I want to delete it, I don't have to remember the exact URL I used to install it; I could just look up the object name and delete it.</p>
| <p>If you list the pods in the kube-system namespace, you can see the weave pods. The pod names will start with weave-net-xxxxx. Since it is a DaemonSet object, the pod count depends on the number of k8s nodes: one pod is created for each node.</p>
<pre><code>kubectl get pods -n kube-system
</code></pre>
|
<p>I was wondering if it makes sense to add the cluster package to a Node application deployed using Kubernetes and requiring one core or less.</p>
<p>When using the cluster package we usually do something like <a href="https://dev.to/acanimal/understanding-the-nodejs-cluster-module-14-2bi" rel="nofollow noreferrer">this</a>:</p>
<pre><code>const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;
if (cluster.isMaster) {
masterProcess();
} else {
childProcess();
}
function masterProcess() {
console.log(`Master ${process.pid} is running`);
for (let i = 0; i < numCPUs; i++) {
console.log(`Forking process number ${i}...`);
cluster.fork();
}
process.exit();
}
function childProcess() {
console.log(`Worker ${process.pid} started and finished`);
process.exit();
}
</code></pre>
<p>But it wouldn't make sense to me to use that with say</p>
<pre><code>resources:
limits:
cpu: "1"
requests:
cpu: "0.5"
</code></pre>
<p>since Kubernetes will limit the application's CPU time anyway, even when using multiple workers. Does that make sense?</p>
<p>I haven't tried this with a load yet.</p>
| <p>Well, since you're using containers, there is really no need to use the cluster module. Just spin up a new container and distribute load using a load balancer like Nginx. I hope this helps.</p>
|
<p>How can I get the list of namespaces in Kubernetes, with RBAC enabled, using Python?</p>
<p>I have configured Kubernetes on my Windows machine and I have a config file with the cluster info. After running kubectl proxy, I am able to launch the UI dashboard. It requires a token, and once I added the access token, I was able to fetch namespace and pod details. However, the token expired after 60 minutes.</p>
<p>After that, the UI dashboard asks for new token details. Once I provide a new access token, it gives:</p>
<blockquote>
<p>403 unauthorized error.</p>
</blockquote>
<p>Can someone help me understand why my new access token is not recognized?</p>
<p>I want to access the namespaces and pods after providing the new access token.</p>
| <p>The access token is typically associated with a <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="nofollow noreferrer">Service Account</a>. It may be that the service account that you used initially doesn't exist anymore.</p>
<p>It may be that you are using a <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#bootstrap-tokens" rel="nofollow noreferrer">Bootstrap Token</a> that looks something like this: <code>781292.db7bc3a58fc5f07e</code> if that's the case, these expire after 60 minutes and you will have to generate a new one. Assuming you created your cluster with <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/" rel="nofollow noreferrer">kubeadm</a> you can regenerate it like this:</p>
<pre><code>$ kubeadm token generate
</code></pre>
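<p>One quick way to tell whether the credential you are pasting is a bootstrap token (and therefore subject to expiry) is to check it against the documented <code>[a-z0-9]{6}.[a-z0-9]{16}</code> shape; a small sketch:</p>

```python
import re

# Bootstrap tokens: a 6-character ID, a dot, then a 16-character secret.
BOOTSTRAP_TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")

def looks_like_bootstrap_token(token: str) -> bool:
    return BOOTSTRAP_TOKEN_RE.match(token) is not None

print(looks_like_bootstrap_token("781292.db7bc3a58fc5f07e"))  # True
print(looks_like_bootstrap_token("eyJhbGciOiJSUzI1NiJ9"))     # False
```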
|
<p>I need to set up a Kubernetes cluster on bare VMs inside a closed network environment. The network team in my company does not allow UDP ports between VMs.</p>
<p>I understand that a k8s cluster requires a CNI, and virtual networks (like VXLAN) require UDP. May I know how I could set up a k8s cluster without UDP traffic allowed, or how I could set up a cluster and deploy my deployments without a CNI?</p>
| <p>If you are planning on using flannel, you may want to use host-gw mode, which does not encapsulate any packets, since it relies on direct layer 2 connectivity. The requirement is that all your k8s nodes must be on the same subnet. More details: https://github.com/coreos/flannel/blob/master/Documentation/backends.md</p>
<p>Otherwise you can always set up a flat network without any encapsulation by leveraging traditional layer 3 routing, but it will require an intervention from your network team to make nodes and pods visible to each other in case they are not on the same subnet. Keep in mind that running k8s with a flat network will be much more complex than the traditional overlays.</p>
|
<p>I'm setting up a CI/CD pipeline. The deployment step runs these commands:</p>
<pre><code>kubectl apply -f manifest.yml --namespace <namespace>
kubectl rollout status Deployment/<service> --namespace <namespace>
</code></pre>
<p>This gives the following output:</p>
<pre><code>Waiting for deployment "<service>" rollout to finish: 1 out of 2 new replicas have been updated...
error: deployment "<service>" exceeded its progress deadline
##[error]error: deployment "<service>" exceeded its progress deadline
</code></pre>
<p>By running </p>
<pre><code>kubectl get pods
</code></pre>
<p>I can see it has started a pod which is stuck in CrashLoopBackOff. I can see why the pod failed to start by running:</p>
<pre><code>kubectl logs <pod-name>
</code></pre>
<p>Is there a way to include this output in the deployment log? I can obviously check if the deployment failed, then parse the above commands and display the log output, but I would hope there's some way to get this information out of kubectl rollout status (preferably if there were a way to make kubectl rollout status notify whenever a pod in the deployment had a status change, and show logs for any pods with an error status change.)</p>
| <p>If it's not already the case, add <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">labels</a> to your pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: label-demo
labels:
environment: production
app: nginx
</code></pre>
<p>It enables you to see the logs without the pod name :</p>
<pre><code>kubectl logs -l app=nginx -n your-namespace
</code></pre>
<p>For the <code>CrashLoopBackOff</code> error, events often give more information than the logs:</p>
<pre><code>kubectl get events --sort-by lastTimestamp -n your-namespace
</code></pre>
<p>Add this at the end of your pipeline; it will give you interesting information about your pod scheduling, etc.</p>
|
<p>I am using kubernetes java client libraries in order to communicate with my kubernetes server. </p>
<p>My question: is there any way to programmatically get the namespace of the running pod from inside which the call to Kubernetes is sent?</p>
<p>I heard that there is a file located at <strong>/var/run/secrets/kubernetes.io/serviceaccount/namespace</strong>.</p>
<p>However, I wanted to know if there is any way to get it using the Java client without reading this file.</p>
<p>I have searched the documentation, but found nothing related to this.</p>
| <p>If you set the environment variable below in the pod definition file, the namespace of the pod will be stored in an environment variable. It can then be retrieved by the client API.</p>
<pre><code>env:
- name: MYPOD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
</code></pre>
|
<p>I want to set up a k8s cluster, but I'm despairing over the nginx-ingress controller and some special settings I need to set, especially proxy_pass.</p>
<p>I already tried to achieve that with the "server-snippet" annotation, but it didn't work.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/hsts: "false"
nginx.ingress.kubernetes.io/server-snippet: |
location / {
internal;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_buffering off;
proxy_pass http://localhost:30022/site/;
proxy_redirect default;
proxy_cookie_path /site/ /;
}
spec:
rules:
- host: preview.test.de
http:
paths:
- path: /
backend:
serviceName: backend-service
servicePort: 8080
</code></pre>
<p>What I want to achieve is this nginx config:</p>
<pre><code>location / {
internal;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_buffering off;
proxy_pass http://localhost:30022/site/;
proxy_redirect default;
proxy_cookie_path /site/ /;
}
</code></pre>
<p>In an optimal world I would like the host and the port in the proxy_pass directive to depend on the backend pod I want to connect to, so that there is no hard-coded port.</p>
<p>Can anyone help me out with this problem?</p>
| <p>I believe what you're looking for is:</p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target: /site</code></p>
<p>However this will effectively mean that you can only use this particular ingress instance for this one app, or others like it, since this rewrite would apply to all the rules for that ingress instance.</p>
<p>You might be able to accomplish the same thing using a regex based rule, but this approach is definitely simpler.</p>
<p>Regarding your question about how to handle the two different paths which require rewrites, this should be possible with the following ingress config:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- host: preview.test.de
http:
paths:
- path: /(site/.*)
backend:
serviceName: backend-service
servicePort: 8080
- path: /(cms/.*)
backend:
serviceName: cms-service
servicePort: 8080
</code></pre>
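<p>To see what those regex paths match (and what the <code>/$1</code> rewrite produces), here is a rough sketch of the matching logic in plain Python; the real matching is done by nginx, so this is only an approximation:</p>

```python
import re

# Paths from the ingress above; rewrite-target /$1 replays capture group 1.
PATHS = [r"/(site/.*)", r"/(cms/.*)"]

def rewritten(request_path: str):
    for pattern in PATHS:
        m = re.match(pattern, request_path)
        if m:
            return "/" + m.group(1)
    return None  # no rule matched

print(rewritten("/site/index.html"))  # /site/index.html (prefix preserved)
print(rewritten("/cms/admin"))        # /cms/admin
print(rewritten("/other"))            # None
```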
<p>However as you might be able to guess, this might become more difficult if you end up having many paths with lots of rewrites later on.</p>
<p>As for best practices for configuring this type of set up, I would generally recommend to adopt sub-domains. Set up one for each site/path you want to be reachable, and let them have their own distinct nginx/ingress/etc to handle their rewrites as needed. Then if you want to tie these together later into some other top level domain/site, it's easily done and not having to manage many rewrite rules in the one location, which can get quite messy.</p>
|
<p>I would like to install Ingress on my Kubernetes cluster with Helm, so I did</p>
<pre><code>$> helm install stable/nginx-ingress
... a lot of output
NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w solemn-toucan-nginx-ingress-controller'
An example Ingress that makes use of the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
name: example
namespace: foo
spec:
rules:
...
</code></pre>
<p>Because I'm installing everything with Helm, it is not clear to me how I should install Ingress. As you can see in the output generated by Helm, they show an example <code>Ingress</code> but not how I should provide it. </p>
<p>I can think of 3:</p>
<ul>
<li>Copy the whole chart and move my ingress.yaml into the <code>templates</code> folder</li>
<li>Use kubectl</li>
<li>Create a Helm Chart which provides the Ingress resource</li>
</ul>
<p>From the above 3 I like the last one most, but maybe there is another way (maybe with some configuration option)?</p>
| <p>A rough analogy here is that using Helm to install the nginx Ingress controller is like using <code>apt-get</code> or <code>brew</code> to install nginx on a machine. But you wouldn’t use <code>apt-get</code> to create your nginx configuration for your application and install it on that machine.</p>
<p>If you just have a Hello World app, apply the Ingress resources directly with <code>kubectl</code>. If you get to the point that you want to encapsulate all the resources that constitute your application (Services, Ingress, Deployments, Roles, RoleBindings, ServiceAccounts, etc.) into a single artifact so that other people could consume to deploy their own copies of your application on their own K8s clusters, Helm would be a packaging and distribution option you could explore using. You would put templates for <em>your</em> Ingress resources in your Helm chart, there’s no reason for you to try to modify the nginx controller Helm chart.</p>
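<p>For illustration, a minimal <code>templates/ingress.yaml</code> inside your own chart could look like the sketch below. The host value and service name are placeholders, not taken from the question:</p>

```yaml
# Hypothetical chart template; adjust names and values to your chart
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: {{ .Values.ingress.host }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .Release.Name }}-svc
          servicePort: 80
```

<p>Then a single <code>helm install</code> of your chart creates the Ingress alongside the rest of your application's resources.</p>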
|
<p>I'm using Minikube in Windows 10 and I'd like to use locally built Docker images instead of images hosted in a registry, so, according <a href="https://medium.com/@maumribeiro/running-your-own-docker-images-in-minikube-for-windows-ea7383d931f6" rel="nofollow noreferrer">this tutorial</a>, I have to run next commands:</p>
<p>Use local kubernetes and images:</p>
<pre><code>> minikube docker-env
</code></pre>
<p>The output is:</p>
<pre><code>PS C:\WINDOWS\system32> minikube docker-env
$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://10.98.38.126:2376"
$Env:DOCKER_CERT_PATH = "C:\Users\MyUser\.minikube\certs"
# Run this command to configure your shell:
# & minikube docker-env | Invoke-Expression
</code></pre>
<p>To configure the shell, run this:</p>
<pre><code>> & minikube docker-env | Invoke-Expression
</code></pre>
<p>After that, I need to build a new image:</p>
<pre><code>PS D:\repos\test> docker build -t miImage:v1 .
</code></pre>
<p>And I have next error:</p>
<pre><code>PS D:\repos\test> docker build -t miImage:v1 .
Sending build context to Docker daemon 8.62MB
Step 1/10 : FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env
Get https://mcr.microsoft.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>This error is thrown since I configured it to use local images; is there any way to fix it?</p>
| <p>It looks like the machine that you're using to build is unable to reach <a href="https://mcr.microsoft.com/v2/" rel="nofollow noreferrer">https://mcr.microsoft.com/v2/</a>. To confirm that, try to send a simple GET to the URL:</p>
<pre><code>wget https://mcr.microsoft.com/v2/
</code></pre>
<p>If that's the problem, you can use a different machine to pull the image, then save it to a file and load it on the target machine.</p>
<pre><code>#on a machine connected to internet
docker pull mcr.microsoft.com/dotnet/core/sdk:2.2
docker save mcr.microsoft.com/dotnet/core/sdk:2.2 > dotnetsdk2_2.tar
# download the file
# on the target machine
docker load < dotnetsdk2_2.tar
</code></pre>
<p>then your build should work without a problem using the local version of the image.</p>
|
<p>I have an angular server running on kubernetes. If I do</p>
<pre><code>kubectl logs angular-pod-8d5586f44-q2jz3,
</code></pre>
<p>I get, at the bottom, the well known </p>
<pre><code>ℹ 「wdm」: Compiled successfully.
</code></pre>
<p>Additionally, I have also exposed the service as a NodePort on port 4200, as following lines prove:</p>
<pre><code>angular-client NodePort 10.102.59.116 <none> 4200:32396/TCP 6s.
</code></pre>
<p>However, I get a connection refused error when I try </p>
<pre><code>curl 10.102.59.116:4200
curl: (7) Failed to connect to 10.102.59.116 port 4200: Connection refused.
</code></pre>
<p>Has anyone else encountered this problem and knows how to fix it?</p>
<p>By the way, I am using vagrant with virtualbox as vm-machine.</p>
| <p>If you are using angular-cli, you have to run <code>ng serve --host 0.0.0.0</code> in order for the Angular CLI to expose the live-development server on all interfaces; by default it binds only to localhost, which is unreachable from outside the pod.
However it is <strong>not</strong> recommended to do this for production purposes, as the Angular live-development server is not fully reviewed for security concerns, but is only meant as a development tool. To deploy your Angular app for production, it would be best to do a production build (<code>ng build --prod</code>) and deploy the result inside e.g. an nginx container.</p>
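<p>As a sketch of that production route (the Node image tag and the <code>dist/my-app</code> output path are assumptions; check your <code>angular.json</code> for the real output folder):</p>

```dockerfile
# Hypothetical multi-stage build: compile the app, then serve the static files with nginx
FROM node:10 AS build
WORKDIR /app
COPY . .
RUN npm ci && npx ng build --prod

FROM nginx:alpine
# "my-app" is an assumed project name, not from the question
COPY --from=build /app/dist/my-app /usr/share/nginx/html
```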
|
<p>We are having a Kubernetes service whose pods take some time to warm up with first requests. Basically first incoming requests will read some cached values from Redis and these requests might take a bit longer to process. When these newly created pods become ready and receive full traffic, they might become not very responsive for up to 30 seconds, before everything is correctly loaded from Redis and cached.</p>
<p>I know, we should definitely restructure the application to prevent this, unfortunately that is not feasible in a near future (we are working on it). </p>
<p>It would be great if it were possible to reduce the weight of the newly created pods, so they would receive 1/10 of the traffic in the beginning, with the weight increasing as time passes. This would also be great for newly deployed versions of our application, to see if they behave correctly.</p>
| <p>Why do you need to load the cache on the first request instead of in a warm-up step hooked to the readiness probe? Another option is to make use of <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#using-init-containers" rel="nofollow noreferrer">init containers</a> in Kubernetes.</p>
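<p>As a sketch, a readiness probe that keeps the pod out of the Service's endpoints until warm-up is done could look like this (<code>/healthz/ready</code> is a hypothetical endpoint your app would implement to return 200 only once the Redis-backed cache is loaded):</p>

```yaml
readinessProbe:
  httpGet:
    path: /healthz/ready   # assumed endpoint, not part of the question
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```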
|
<p>I have kubernetes cluster with installed Istio. I have two pods, for example, sleep1 and sleep2 (containers with installed curl). I want to configure istio to permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com.</p>
<p>So, I created ServiceEntry:</p>
<pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  - google.com
  ports:
  - name: http-port
    protocol: HTTP
    number: 80
  resolution: DNS
</code></pre>
<p>Gateway</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http-port
      protocol: HTTP
    hosts:
    - "*"
</code></pre>
<p>two virtualServices (mesh->egress, egress->google)</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mesh-to-egress
spec:
  hosts:
  - www.google.com
  - google.com
  gateways:
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: egress-to-google-int
spec:
  hosts:
  - www.google.com
  - google.com
  gateways:
  - istio-egressgateway
  http:
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: google.com
        port:
          number: 80
      weight: 100
</code></pre>
<p>As result, I can curl google from both pods. </p>
<p>And the question again: can I permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com? I know that this is possible to do with Kubernetes NetworkPolicy and black/white lists (<a href="https://istio.io/docs/tasks/policy-enforcement/denial-and-list/" rel="nofollow noreferrer">https://istio.io/docs/tasks/policy-enforcement/denial-and-list/</a>), but both methods forbid (permit) traffic only to specific IPs, or maybe I missed something?</p>
| <p>You can create different service accounts for <code>sleep1</code> and <code>sleep2</code>. Then you <a href="https://istio.io/docs/tasks/security/authz-http/#enforcing-service-level-access-control" rel="nofollow noreferrer">create an RBAC policy</a> to limit access to the <code>istio-egressgateway</code>, so <code>sleep2</code> will not be able to send any egress traffic through the egress gateway. This should be combined with forbidding any egress traffic from the cluster that does not originate from the egress gateway. See <a href="https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations</a>.</p>
<p>If you want to allow <code>sleep2</code> access other services, but not <code>www.google.com</code>, you can use Mixer rules and handlers, see <a href="https://istio.io/blog/2018/egress-monitoring-access-control/#access-control-by-mixer-policy-checks-part-2" rel="nofollow noreferrer">this blog post</a>. It shows how to allow a certain URL path to a specific service account.</p>
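<p>As a minimal sketch of the first step, give each pod its own identity; the names below match the pods in the question, and the RBAC policy on the egress gateway would then reference these service accounts:</p>

```yaml
# Sketch: distinct identities so policy can allow sleep1 and deny sleep2
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep2
```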
|
<p>I have been setting up a Kubernetes cluster for my work deployment. I have my MongoDB hosted on an external droplet and want to configure external service access.</p>
<p>I am following this <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">tutorial</a>.
When I apply my configuration with kubectl, everything seems to be running fine.</p>
<p>My Service &amp; Endpoints setup:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: mongo
spec:
  ports:
  - name: mongodb
    port: xxxx
------
kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
- addresses:
  - ip: 159.89.x.x
  ports:
  - port: xxx
</code></pre>
<p>I am developing using typescript and this is how I currently set up my DB connection</p>
<pre><code>const MONGO_URI = `mongodb://${config.mongourl}:${config.mongoport}/${config.collection}?authSource=admin`;
const options = {
  user: `${config.mngusername}`,
  pass: `${config.mngpassword}`,
  keepAlive: true,
  keepAliveInitialDelay: 300000,
  useNewUrlParser: true,
  useCreateIndex: true
}
mongoose.connect(MONGO_URI, options);
</code></pre>
<p>My question is: how do I consume the service/deployment set up above in my code?
<br><b>thanks</b></p>
| <p>There are many ways to consume an external service, such as:</p>
<ol>
<li>Hardcoding the IP:Port of the service in your app.</li>
<li>Using environment variables in your app and injecting them through <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">configmaps</a>.</li>
<li>Creating a service/endpoint, <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName</a> service, or a service with <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">External IPs</a>. This allows your applications to use
Kubernetes' <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">Service Discovery</a> mechanisms.</li>
</ol>
<p>In your case you can just use:<br>
<code>const MONGO_URI = `mongodb://mongo:${config.mongoport}/${config.collection}?authSource=admin`;</code><br>
(note the backticks: a template literal is required for <code>${...}</code> interpolation), as the name <code>mongo</code> will be resolved to the Service, which forwards to <code>159.89.x.x:xxx</code></p>
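<p>A self-contained sketch of building the URI against the in-cluster Service name (the port and collection values here are placeholders, not from the question):</p>

```typescript
// "mongo" is the Service name from the manifest above; the rest is illustrative.
const config = { mongoport: 27017, collection: "mydb" };
const MONGO_URI = `mongodb://mongo:${config.mongoport}/${config.collection}?authSource=admin`;
console.log(MONGO_URI); // mongodb://mongo:27017/mydb?authSource=admin
```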
|
<p>I am trying to enable mTLS in a mesh that I already have working with Istio's sidecars.
The problem I have is that connections work up to one point in the chain, and then fail.</p>
<p>This is how the services are set up right now with my failing implementation of mTLS (simplified):</p>
<p><strong>Istio IngressGateway -> NGINX pod -> API Gateway -> Service A -> <em>[ Database ]</em> -> Service B</strong></p>
<p>First thing to note is that I was using a NGINX pod as a load balancer to proxy_pass my requests to my API Gateway or my frontend page. I tried keeping that without the istio IngressGateway but I wasn't able to make it work. Then I tried to use Istio IngressGateway and connect directly to the API Gateway with VirtualService but also fails for me. So I'm leaving it like this for the moment because it was the only way that my request got to the API Gateway successfully.</p>
<p>Another thing to note is that Service A first connects to a Database outside the mesh and then makes a request to Service B which is inside the mesh and with mTLS enabled.</p>
<p>NGINX, API Gateway, Service A and Service B are within the mesh with mTLS enabled and <strong>"istioctl authn tls-check"</strong> shows that status is OK.</p>
<p>NGINX and API Gateway are in a namespace called <strong>"gateway"</strong>, Database is in <strong>"auth"</strong> and Service A and Service B are in another one called <strong>"api"</strong>.</p>
<p>Istio IngressGateway is in namespace <strong>"istio-system"</strong> right now.</p>
<p>So the problem is that everything works if I set <strong>STRICT</strong> mode on the gateway namespace and <strong>PERMISSIVE</strong> on api, but once I set <strong>STRICT</strong> on api, I see the request getting into Service A, but then it fails to send the request to Service B with a 500.</p>
<p>This is the output when it fails that I can see in the istio-proxy container in the Service A pod:</p>
<pre><code>api/serviceA[istio-proxy]: [2019-09-02T12:59:55.366Z] "- - -" 0 - "-" "-" 1939 0 2 - "-" "-" "-" "-" "10.20.208.248:4567" outbound|4567||database.auth.svc.cluster.local 10.20.128.44:35366 10.20.208.248:4567
10.20.128.44:35364 -
api/serviceA[istio-proxy]: [2019-09-02T12:59:55.326Z] "POST /api/my-call HTTP/1.1" 500 - "-" "-" 74 90 60 24 "10.90.0.22, 127.0.0.1, 127.0.0.1" "PostmanRuntime/7.15.0" "14d93a85-192d-4aa7-aa45-1501a71d4924" "serviceA.api.svc.cluster.local:9090" "127.0.0.1:9090" inbound|9090|http-serviceA|serviceA.api.svc.cluster.local - 10.20.128.44:9090 127.0.0.1:0 outbound_.9090_._.serviceA.api.svc.cluster.local
</code></pre>
<p>No messages in ServiceB though.</p>
<p>Currently, I do not have a global MeshPolicy, and I am setting Policy and DestinationRule per namespace</p>
<p><strong>Policy:</strong></p>
<pre><code>apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: gateway
spec:
  peers:
  - mtls:
      mode: STRICT
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: auth
spec:
  peers:
  - mtls:
      mode: STRICT
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: api
spec:
  peers:
  - mtls:
      mode: STRICT
</code></pre>
<p><strong>DestinationRule:</strong></p>
<pre><code>apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "mutual-gateway"
  namespace: "gateway"
spec:
  host: "*.gateway.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "mutual-api"
  namespace: "api"
spec:
  host: "*.api.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "mutual-auth"
  namespace: "auth"
spec:
  host: "*.auth.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
</code></pre>
<p>Then I have some DestinationRule to disable mTLS for Database (I have some other services in the same namespace that I want to enable with mTLS) and for Kubernetes API</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "myDatabase"
  namespace: "auth"
spec:
  host: "database.auth.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "k8s-api-server"
  namespace: default
spec:
  host: "kubernetes.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
</code></pre>
<p>Then I have my IngressGateway like so:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my-api.example.com
    tls:
      httpsRedirect: true # sends 301 redirect for http requests
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - my-api.example.com
</code></pre>
<p>And lastly, my VirtualServices:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ingress-nginx
  namespace: gateway
spec:
  hosts:
  - my-api.example.com
  gateways:
  - ingress-gateway.istio-system
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: ingress.gateway.svc.cluster.local # this is NGINX pod
    corsPolicy:
      allowOrigin:
      - my-api.example.com
      allowMethods:
      - POST
      - GET
      - DELETE
      - PATCH
      - OPTIONS
      allowCredentials: true
      allowHeaders:
      - "*"
      maxAge: "24h"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-gateway
  namespace: gateway
spec:
  hosts:
  - my-api.example.com
  - api-gateway.gateway.svc.cluster.local
  gateways:
  - mesh
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: api-gateway.gateway.svc.cluster.local
    corsPolicy:
      allowOrigin:
      - my-api.example.com
      allowMethods:
      - POST
      - GET
      - DELETE
      - PATCH
      - OPTIONS
      allowCredentials: true
      allowHeaders:
      - "*"
      maxAge: "24h"
</code></pre>
<p>One thing that I don't understand is why do I have to create a VirtualService for my API Gateway and why do I have to use "mesh" in the gateways block. If I remove this block, I don't get my request in API Gateway, but if I do, it works and my requests even get to the next service (Service A), but not the next one to that.</p>
<p>Thanks for the help. I am really stuck with this.</p>
<p>Dump of listeners of ServiceA:</p>
<pre><code>ADDRESS PORT TYPE
10.20.128.44 9090 HTTP
10.20.253.21 443 TCP
10.20.255.77 80 TCP
10.20.240.26 443 TCP
0.0.0.0 7199 TCP
10.20.213.65 15011 TCP
0.0.0.0 7000 TCP
10.20.192.1 443 TCP
0.0.0.0 4568 TCP
0.0.0.0 4444 TCP
10.20.255.245 3306 TCP
0.0.0.0 7001 TCP
0.0.0.0 9160 TCP
10.20.218.226 443 TCP
10.20.239.14 42422 TCP
10.20.192.10 53 TCP
0.0.0.0 4567 TCP
10.20.225.206 443 TCP
10.20.225.166 443 TCP
10.20.207.244 5473 TCP
10.20.202.47 44134 TCP
10.20.227.251 3306 TCP
0.0.0.0 9042 TCP
10.20.207.141 3306 TCP
0.0.0.0 15014 TCP
0.0.0.0 9090 TCP
0.0.0.0 9091 TCP
0.0.0.0 9901 TCP
0.0.0.0 15010 TCP
0.0.0.0 15004 TCP
0.0.0.0 8060 TCP
0.0.0.0 8080 TCP
0.0.0.0 20001 TCP
0.0.0.0 80 TCP
0.0.0.0 10589 TCP
10.20.128.44 15020 TCP
0.0.0.0 15001 TCP
0.0.0.0 9000 TCP
10.20.219.237 9090 TCP
10.20.233.60 80 TCP
10.20.200.156 9100 TCP
10.20.204.239 9093 TCP
0.0.0.0 10055 TCP
0.0.0.0 10054 TCP
0.0.0.0 10251 TCP
0.0.0.0 10252 TCP
0.0.0.0 9093 TCP
0.0.0.0 6783 TCP
0.0.0.0 10250 TCP
10.20.217.136 443 TCP
0.0.0.0 15090 HTTP
</code></pre>
<p><strong>Dump clusters in json format:</strong> <a href="https://pastebin.com/73zmAPWg" rel="nofollow noreferrer">https://pastebin.com/73zmAPWg</a></p>
<p><strong>Dump listeners in json format:</strong> <a href="https://pastebin.com/Pk7ddPJ2" rel="nofollow noreferrer">https://pastebin.com/Pk7ddPJ2</a></p>
<p><strong>Curl command from serviceA container to serviceB:</strong> </p>
<pre><code>/opt/app # curl -X POST -v "http://serviceB.api.svc.cluster.local:4567/session/xxxxxxxx=?parameters=hi"
* Trying 10.20.228.217...
* TCP_NODELAY set
* Connected to serviceB.api.svc.cluster.local (10.20.228.217) port 4567 (#0)
> POST /session/xxxxxxxx=?parameters=hi HTTP/1.1
> Host: serviceB.api.svc.cluster.local:4567
> User-Agent: curl/7.61.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host serviceB.api.svc.cluster.local left intact
curl: (52) Empty reply from server
</code></pre>
<p>If I disable mTLS, the request gets from serviceA to serviceB with curl.</p>
| <p>General tips for debugging Istio service mesh:</p>
<ol>
<li>Check <a href="https://istio.io/docs/setup/additional-setup/requirements/" rel="nofollow noreferrer">the requirements for services and pods</a>.</li>
<li>Try a similar task to what you are trying to perform from the list of <a href="https://istio.io/docs/tasks/" rel="nofollow noreferrer">Istio tasks</a>. See if that task works and find the differences with your task.</li>
<li>Follow the instructions on <a href="https://istio.io/latest/docs/ops/common-problems/" rel="nofollow noreferrer">Istio common problems page</a>.</li>
</ol>
|
<p>I have a service with an inline plaintext config that requires certain information that is stored in Kubernetes secrets. What <a href="https://pulumi.io/reference/pkg/nodejs/@pulumi/kubernetes/" rel="nofollow noreferrer"><code>@pulumi/kubernetes</code></a> API method can be used to access raw kubernetes secret values?</p>
| <p>Use <code>k8s.core.v1.Secret.get(pulumiName, secretName)</code> (<a href="https://www.pulumi.com/docs/reference/pkg/nodejs/pulumi/kubernetes/core/v1/#Secret-get" rel="noreferrer"><code>secretName</code> can contain the <code>namespace/</code> as prefix</a>). </p>
<p><a href="https://www.pulumi.com/blog/adopting-existing-cloud-resources-into-pulumi/#referencing-existing-resources" rel="noreferrer">Every Pulumi resource has a <code>get()</code> method</a>.</p>
<p>For example: Get the <code>token</code> from a <code>kubernetes.io/service-account-token</code>:</p>
<pre class="lang-js prettyprint-override"><code>import * as k8s from "@pulumi/kubernetes";

type KubernetesSecretData = { [key: string]: string }

const namespace = 'kube-public'
const secretName = 'default-token-tdcdz'

export const token =
  k8s.core.v1.Secret.get('testSecret', `${namespace}/${secretName}`)
    .data.apply(v => {
      return (<KubernetesSecretData> v)["token"]
    })
</code></pre>
|
<p>When a serial port is mapped into a Docker container on a Linux host, this is done with the <code>--device</code> flag;</p>
<p>e.g. <code>docker run -dit --device=/dev/ttyUSB0 --name SerialTest <docker image></code></p>
<p>We would like to know how serial ports can be mapped to Pods in Kubernetes. The figure below shows the Pod configuration for the application to be deployed in Rancher 2.x.</p>
<p>(<a href="https://i.imgur.com/RHhlD4S.png" rel="nofollow noreferrer">https://i.imgur.com/RHhlD4S.png</a>)</p>
<p>In node scheduling, we have configured pods to be distributed to specific nodes with serial ports. Also, it is of course not possible to map the serial port with a volume mount. So, I would like to raise a question, because I couldn't find anything related to Docker's <code>--device</code> flag in my Rancher 2.x configuration.</p>
<p>(<a href="https://imgur.com/wRe7Eds.png" rel="nofollow noreferrer">https://imgur.com/wRe7Eds.png</a>) "Application configuration in Rancher 2.x"</p>
<p>(<a href="https://imgur.com/Lwil7cz.png" rel="nofollow noreferrer">https://imgur.com/Lwil7cz.png</a>) "Serial port device connected to the HOST PC"</p>
<p>(<a href="https://imgur.com/oWeW0LZ.png" rel="nofollow noreferrer">https://imgur.com/oWeW0LZ.png</a>) "Volume Mount Status of Containers in Deployed Pods"</p>
<p>(<a href="https://imgur.com/GKahqY0.png" rel="nofollow noreferrer">https://imgur.com/GKahqY0.png</a>) "Error log when running a .NET application that uses a serial port"</p>
| <p>Based on the goal of the <a href="https://i.imgur.com/RHhlD4S.png" rel="nofollow noreferrer">first diagram</a>: Kubernetes abstractions covering the communication between the pod and the outside world (for this matter, outside of the node) are meant to handle at least layer 2 communications (<em>veth</em>, as in inter-node/pod communication).</p>
<p>It is not detailed why it is not possible to map the device volume in the pod, so I'm wondering if you have tried using privileged containers <a href="https://stackoverflow.com/a/42716234/10892354">like in this reference</a>:</p>
<pre><code> containers:
- name: acm
securityContext:
privileged: true
volumeMounts:
- mountPath: /dev/ttyACM0
name: ttyacm
volumes:
- name: ttyacm
hostPath:
path: /dev/ttyACM0
</code></pre>
<p>It is possible for Rancher to <a href="https://forums.rancher.com/t/any-way-to-start-a-container-in-privileged-mode/6078" rel="nofollow noreferrer">start containers in privileged mode</a>.</p>
|
<p>I am stuck in trying to get a Jenkinsfile to work. It keeps failing on <code>sh</code> step and gives the following error</p>
<pre><code> process apparently never started in /home/jenkins/workspace
...
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
</code></pre>
<p>I have tried adding </p>
<pre><code>withEnv(['PATH+EXTRA=/usr/sbin:/usr/bin:/sbin:/bin'])
</code></pre>
<p>before sh step in groovy file</p>
<p>also tried to add </p>
<pre><code>/bin/sh
</code></pre>
<p>in <code>Manage Jenkins -> Configure System</code> in the shell section</p>
<p>I have also tried replacing the sh line in Jenkinsfile with the following:</p>
<pre><code>sh "docker ps;"
sh "echo 'hello';"
sh "./build.sh;"
sh ```
#!/bin/sh
echo hello
```
</code></pre>
<p>This is the part of Jenkinsfile which i am stuck on</p>
<pre><code>node {
    stage('Build') {
        echo 'this works'
        sh 'echo "this does not work"'
    }
}
</code></pre>
<p>expected output is "this does not work" but it just hangs and returns the error above.</p>
<p>what am I missing?</p>
| <p>It turns out that the default workingDir value for default jnlp k8s slave nodes is now set to <code>/home/jenkins/agent</code> and I was using the old value <code>/home/jenkins</code></p>
<p>here is the config that worked for me</p>
<pre><code>containerTemplate(name: 'jnlp', image: 'lachlanevenson/jnlp-slave:3.10-1-alpine', args: '${computer.jnlpmac} ${computer.name}', workingDir: '/home/jenkins/agent')
</code></pre>
|
<p>I have set up my master node and I am trying to join a worker node as follows:</p>
<pre><code>kubeadm join 192.168.30.1:6443 --token 3czfua.os565d6l3ggpagw7 --discovery-token-ca-cert-hash sha256:3a94ce61080c71d319dbfe3ce69b555027bfe20f4dbe21a9779fd902421b1a63
</code></pre>
<p>However the command hangs forever in the following state:</p>
<pre><code>[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
</code></pre>
<p>Since this is just a warning, why does it actually fail?</p>
<p><strong>edit</strong>: I noticed the following in my <code>/var/log/syslog</code></p>
<pre><code>Mar 29 15:03:15 ubuntu-xenial kubelet[9626]: F0329 15:03:15.353432 9626 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Unit entered failed state.
</code></pre>
| <p>First, if you want to see more detail when your worker joins the master, use:</p>
<pre><code>kubeadm join 192.168.1.100:6443 --token m3jfbb.wq5m3pt0qo5g3bt9 --discovery-token-ca-cert-hash sha256:d075e5cc111ffd1b97510df9c517c122f1c7edf86b62909446042cc348ef1e0b --v=2
</code></pre>
<p>Using the above command I could see that my worker could not establish a connection with the master, so I just stopped the firewall:</p>
<pre><code>systemctl stop firewalld
</code></pre>
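<p>Alternatively, instead of disabling the firewall entirely, you can open just the ports kubeadm needs. A sketch for a control-plane node (the exact list depends on your setup and CNI plugin):</p>

```shell
firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API server
firewall-cmd --permanent --add-port=10250/tcp   # kubelet API
firewall-cmd --reload
```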
|
<p>I have set up a Kubernetes 1.15.3 cluster on CentOS 7 using systemd cgroupfs. On all my nodes, syslog started logging this message frequently.</p>
<p>How to fix this error message?</p>
<p><code>kubelet: W0907 watcher.go:87 Error while processing event ("/sys/fs/cgroup/memory/libcontainer_10010_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory</code></p>
<p>Thanks</p>
| <p>It's a <a href="https://github.com/kubernetes/kubernetes/issues/76531" rel="nofollow noreferrer">known issue</a> with a bad interaction with <code>runc</code>; someone observed it is actually <em>caused</em> by <a href="https://github.com/kubernetes/kubernetes/issues/76531#issuecomment-522286281" rel="nofollow noreferrer">a repeated etcd health check</a> but that wasn't my experience on Ubuntu, which exhibits that same behavior on <em>every</em> Node</p>
<p>They allege that updating the <code>runc</code> binary on your hosts will make the problem go away, but I haven't tried that myself</p>
|
<p>The application inside the container is inaccessible from the outside, i.e. if I exec into the Docker container and do</p>
<pre class="lang-sh prettyprint-override"><code>curl localhost:5000
</code></pre>
<p>it works correctly but not on the browser in my computer i get error : This site cant be reached</p>
<p>My Dockerfile:</p>
<pre><code># Use an official Python runtime as a parent image
FROM python:3.7-slim
# Set the working directory to /app
WORKDIR /web-engine
# Copy the current directory contents into the container at /app
COPY . /web-engine
# Install Gunicorn3
RUN apt-get update && apt-get install default-libmysqlclient-dev gcc -y
# Install any needed packages specified in requirements.txt
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
# Make port 5000 available to the world outside this container
EXPOSE 5000
# Define environment variable
ENV username root
# Run app.py when the container launches
CMD gunicorn --workers 4 --bind 127.0.0.1:5000 application:app --threads 1
</code></pre>
<p>Upon executing docker in this way:</p>
<pre class="lang-sh prettyprint-override"><code>sudo docker run -e password=$password -p 5000:5000 $reg/web-engine:ve0.0.2
</code></pre>
<p>I get the following output:</p>
<pre><code>[2019-09-08 11:53:36 +0000] [6] [INFO] Starting gunicorn 19.9.0
[2019-09-08 11:53:36 +0000] [6] [INFO] Listening at: http://127.0.0.1:5000 (6)
[2019-09-08 11:53:36 +0000] [6] [INFO] Using worker: sync
[2019-09-08 11:53:36 +0000] [9] [INFO] Booting worker with pid: 9
[2019-09-08 11:53:36 +0000] [10] [INFO] Booting worker with pid: 10
[2019-09-08 11:53:36 +0000] [11] [INFO] Booting worker with pid: 11
[2019-09-08 11:53:36 +0000] [12] [INFO] Booting worker with pid: 12
</code></pre>
<p>So as you can see, I'm mapping port 5000 of the container to port 5000 of my computer, but localhost:5000 is not working.</p>
<p>Therefore I tried everything the same but with the Flask development server,
with the following change in my Dockerfile</p>
<p>From</p>
<pre class="lang-sh prettyprint-override"><code>CMD gunicorn --workers 4 --bind 127.0.0.1:5000 application:app --threads 1
</code></pre>
<p>TO </p>
<pre class="lang-sh prettyprint-override"><code>CMD python3.7 application.py
</code></pre>
<p>and IT WORKED; I go to localhost:5000 and see the application working.</p>
<p>There is nothing wrong with the application. I suppose there's an error in the gunicorn setup.</p>
<p>the requirements.txt file :</p>
<pre><code>Flask
Flask-SQLAlchemy
mysqlclient
gunicorn
bs4
html5lib
</code></pre>
<p>Please help me out</p>
<p>I also tried different forms of gunicorn and docker run command combinations like</p>
<pre class="lang-sh prettyprint-override"><code>CMD gunicorn -application:app && sudo docker run -e password=$password -p 5000:8000 $reg/web-engine:ve0.0.2
</code></pre>
<p>It didn't work.
<a href="https://i.stack.imgur.com/Z6ors.png" rel="noreferrer">terminal image of container working with development server</a></p>
<p>I would appreciate a solution that doesn't involve anything beyond what's mentioned here, like nginx, supervisor, etc.
Someone please help me 😢</p>
| <p>Inside the container, 127.0.0.1 refers to the container's own loopback interface, not the host's, so a server bound there is unreachable through the published port. Bind to <code>0.0.0.0:5000</code> instead.</p>
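<p>Applied to the Dockerfile from the question, that is a one-line change:</p>
<pre class="lang-sh prettyprint-override"><code># Listen on all interfaces so the published port can reach gunicorn
CMD gunicorn --workers 4 --bind 0.0.0.0:5000 application:app --threads 1
</code></pre>
<p>After rebuilding the image, the same <code>docker run ... -p 5000:5000</code> command makes the app reachable at localhost:5000 on the host.</p>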
|
<p>I'm relatively new to k8s. I have now set up a cluster several times to ensure that I understand the process. I have struggled with networking a bit. I am currently initializing as follows:</p>
<pre><code>kubeadm init --apiserver-advertise-address=10.93.98.204 --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>In response to this I see the following warning:</p>
<pre><code>[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://proxy.corp.sensis.com:3128". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
</code></pre>
<p>Amongst other things, I am trying to ensure that I configure the cluster correctly, and the overlay network (flannel).</p>
<p>I've attempted to set the no_proxy environment variable (CentOS 7).</p>
<p>The way I tried this was to update /etc/profile.d/proxy.sh as follows:</p>
<pre><code>printf -v lan '%s,' "10.93.98.204","10.93.98.23","10.93.98.36","10.93.103.236","10.93.97.123","10.93.97.202"
printf -v service '%s,' 10.244.{1..255}.{1..255}
export no_proxy="${lan%,},${service%,},127.0.0.1";
#export no_proxy="${lan%,},10.244.0.0/16,127.0.0.1";
export NO_PROXY=$no_proxy
</code></pre>
<p>However, this approach results in a massive string (<code>$no_proxy</code>) that far exceeds the maximum environment variable length on Linux.</p>
<p>I've also tried using the pod-network-cidr in the no_proxy (<code>10.244.0.0/16</code> - commented out in the above)</p>
<p>Two questions:</p>
<ul>
<li>What is the proper way to deal with this warning (WARNING HTTPProxyCIDR)?</li>
<li>How can I set no_proxy so that my flannel network overlay and my cluster work?</li>
</ul>
| <p>The CIDR/IP range does not work in no_proxy in many environments/applications.</p>
<p>We can make a reasonable assumption that we don't access network nodes outside web-proxy thru IP address. In other words, we use FQDN to access, say python.com, google.com, github.com, but not directly using their IP addresses.</p>
<p>With this assumption, we can bypass web-proxy for all direct IP address access.</p>
<pre><code>export no_proxy=localhost,.svc
printf -v allip '%s,' .{0..255}
export no_proxy="$no_proxy,${allip%,}"
</code></pre>
<p>This adds <code>.0,.1,.2,...,.255</code> to the no_proxy env variable. There is no magic here. We just treat IP address as FQDN, so a suffix match works as FQDN no_proxy setting. Say, .120 would match all IP addresses x.x.x.120.</p>
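<p>A quick bash sanity check shows the resulting variable stays small (258 entries):</p>
<pre class="lang-sh prettyprint-override"><code># build the suffix list and inspect its size (bash-specific: printf -v and {0..255})
export no_proxy=localhost,.svc
printf -v allip '%s,' .{0..255}
export no_proxy="$no_proxy,${allip%,}"
echo "$no_proxy" | tr ',' '\n' | wc -l    # 258 entries
echo "${#no_proxy}"                       # 1184 characters
</code></pre>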
|
<p>We are having a Kubernetes service whose pods take some time to warm up with first requests. Basically first incoming requests will read some cached values from Redis and these requests might take a bit longer to process. When these newly created pods become ready and receive full traffic, they might become not very responsive for up to 30 seconds, before everything is correctly loaded from Redis and cached.</p>
<p>I know, we should definitely restructure the application to prevent this, unfortunately that is not feasible in a near future (we are working on it). </p>
<p>It would be great if it were possible to reduce the weight of the newly created pods, so they would receive 1/10 of the traffic in the beginning, with the weight increasing as time passes. This would also be great for newly deployed versions of our application, to see if they behave correctly.</p>
| <p>Until the application can be restructured to do this "priming" internally...</p>
<p>When running on Kubernetes, look into <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">Container Lifecycle Hooks</a> and specifically into the <code>PostStart</code> hook. Documentation <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#lifecycle-v1-core" rel="nofollow noreferrer">here</a> and example <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">here</a>.</p>
<p>It seems that the behavior of "...The Container's status is not set to RUNNING until the postStart handler completes" is what can help you.</p>
<p>There are a few gotchas, like "... there is no guarantee that the hook will execute before the container ENTRYPOINT" because "...The postStart handler runs asynchronously relative to the Container’s code", and "...No parameters are passed to the handler". </p>
<p>Perhaps a custom script can simulate that first request with some retry logic to wait for the application to be started?</p>
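<p>As a sketch of such a handler (the <code>/warmup</code> path and port 8080 are placeholders for whatever "first request" primes your cache):</p>
<pre><code>containers:
- name: app
  image: your-app-image
  lifecycle:
    postStart:
      exec:
        command:
        - /bin/sh
        - -c
        # retry the priming request until the application answers (placeholder URL)
        - "until wget -q -O /dev/null http://localhost:8080/warmup; do sleep 2; done"
</code></pre>
<p>Since the container is not marked running until the handler finishes, this can help ensure traffic only arrives once the cache has been primed.</p>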
|
<p>I'm trying to implement a network policy in my Kubernetes cluster to isolate my pods in a namespace, but still allow them to access the internet, since I'm using Azure MFA for authentication. </p>
<p>This is what I tried, but I can't seem to get it working. Ingress works as expected, but these policies block all egress. </p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
</code></pre>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: grafana-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: grafana
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx-ingress
</code></pre>
<p>Can anybody tell me how to make the above configuration work so that internet traffic is allowed while traffic to other pods is blocked?</p>
| <p>Try adding a default deny all network policy on the namespace:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
</code></pre>
<p>Then adding an allow Internet policy after:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-internet-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
</code></pre>
<p>This will block all traffic except for internet outbound.
In the <code>allow-internet-only</code> policy, there is an exception for all private IPs <em>which will prevent pod to pod communication.</em></p>
<p>You will also have to allow Egress to Core DNS from <code>kube-system</code> if you require DNS lookups, as the <code>default-deny-all</code> policy will block DNS queries.</p>
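<p>A sketch of such a DNS exception, assuming the cluster DNS pods carry the common <code>k8s-app: kube-dns</code> label (adjust the selector to your cluster):</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-dns
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
</code></pre>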
|
<p>Hello and thank you for taking the time to read my question.</p>
<p>First, I have an EKS cluster setup to use public and private subnets.</p>
<p>I generated the cluster using cloudformation as described at <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create</a></p>
<p>I then initialized helm by creating a service account for tiller via <code>kubectl apply -f (below file)</code>:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
</code></pre>
<p>and then <code>helm init --service-account=tiller</code>
followed by <code>helm repo update</code></p>
<p>I then used helm to install the nginx-ingress controller via:</p>
<pre><code>helm install --name nginx-ingress \
--namespace nginx-project \
stable/nginx-ingress \
-f nginx-ingress-values.yaml
</code></pre>
<p>where my nginx-ingress-values.yaml is:</p>
<pre><code>controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "60"
      service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "abc-us-west-2-elb-access-logs"
      service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "vault-cluster/nginx"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:123456789:certificate/bb35b4c4-..."
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
</code></pre>
<p>And so far everything looks great, I see the ELB get created and hooked up to use acm for https</p>
<p>I then install kubernetes-dashboard via:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>and I can access it via <code>kubectl proxy</code></p>
<p>But when I add an ingress rule for dashboard via:
<code>kubectl apply -f dashboard-ingress.yaml</code>
Where dashboard-ingress.yaml is:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
  namespace: kube-system
spec:
  # tls:
  # - hosts:
  #   - abc.def.com
  rules:
  - host: abc.def.com
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 8443
</code></pre>
<p>then when I try to go <a href="http://abc.def.com/" rel="nofollow noreferrer">http://abc.def.com/</a> I get stuck in an infinite redirect loop.</p>
<p>same for <a href="https://abc.def.com/" rel="nofollow noreferrer">https://abc.def.com/</a>
and
<a href="http://abc.def.com/dashboard" rel="nofollow noreferrer">http://abc.def.com/dashboard</a></p>
<p>I am new to kubernetes and very stuck on this one. Any help would be GREATLY appreciated</p>
<p><strong>UPDATE - 9/5/2019:</strong>
When I take out the tls block from the ingress.yaml, I then get to the nginx backend, but <a href="http://abc.def.com" rel="nofollow noreferrer">http://abc.def.com</a> forwards me to <a href="https://abc.def.com" rel="nofollow noreferrer">https://abc.def.com</a> and I get a 502 Bad Gateway from openresty/1.15.8.1.</p>
<p>when I then try to go to <a href="https://abc.def.com/dashboard" rel="nofollow noreferrer">https://abc.def.com/dashboard</a></p>
<p>I get "404 page not found" which is a response from the nginx-ingress controller as I understand it.</p>
<p><strong>UPDATE - 9/6/2019:</strong>
Thanks so much to mk_sta for the answer below which helped me understand what I was missing.</p>
<p>For anyone reading this in the future, my nginx-ingress install via helm works as expected but my kubernetes-dashboard install was missing some key annotations. In the end I was able to configure helm to install the kubernetes-dashboard via:</p>
<pre><code>helm install --name kubernetes-dashboard \
--namespace kube-system \
stable/kubernetes-dashboard \
-f kubernetes-dashboard-values.yaml
</code></pre>
<p>where kubernetes-dashboard-values.yaml is:</p>
<pre><code>ingress:
  enabled: true
  hosts: [abc.def.com]
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  paths: [/dashboard(/|$)(.*)]
</code></pre>
<p>I can then access dashboard at <a href="http://abc.def.com/dashboard/" rel="nofollow noreferrer">http://abc.def.com/dashboard/</a> and <a href="https://abc.def.com/dashboard/" rel="nofollow noreferrer">https://abc.def.com/dashboard/</a></p>
<p>However, for some reason it does not work if I leave off the trailing slash.</p>
<p>This is good enough for me at the moment.</p>
| <p>It seems to me that you've used the wrong location path <code>/dashboard</code> in your original <code>Ingress</code> configuration. Moreover, the relevant <em><a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">K8s dashboard</a> UI</em> endpoint is exposed on port <code>443</code> by default through the corresponding K8s <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> resource, unless you've customized this setting. </p>
<pre><code>ports:
- port: 443
  protocol: TCP
  targetPort: 8443
</code></pre>
<p>In order to get proper path-based <a href="https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/" rel="nofollow noreferrer">routing</a>, override the existing parameters with the following arguments:</p>
<pre><code>paths:
- path: /
  backend:
    serviceName: kubernetes-dashboard
    servicePort: 443
</code></pre>
<p>If you decide to access the <em>K8s dashboard UI</em> through an indirect, path-prefixed URL (<em><a href="https://abc.def.com/dashboard" rel="nofollow noreferrer">https://abc.def.com/dashboard</a></em>), you can apply <a href="https://en.wikipedia.org/wiki/Rewrite_engine" rel="nofollow noreferrer">rewrite</a> rules to transparently change part of the original URL and forward requests to the real target path. The <em><a href="https://en.wikipedia.org/wiki/Rewrite_engine" rel="nofollow noreferrer">Nginx</a> Ingress controller</em> adds this functionality via the <code>nginx.ingress.kubernetes.io/rewrite-target</code> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="nofollow noreferrer">annotation</a>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  namespace: kube-system
spec:
  # tls:
  # - hosts:
  #   - abc.def.com
  rules:
  - host: abc.def.com
    http:
      paths:
      - path: /dashboard(/|$)(.*)
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
</code></pre>
|
<p>My Spring Boot application, deployed to Kubernetes, is trying to connect to an external SQL Server database from a Pod, but every time it fails with this error: </p>
<blockquote>
<p>Failed to initialize pool: The TCP/IP connection to the host <>, port 1443 has failed.<br>
Error: "Connection timed out: no further information.<br>
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.</p>
</blockquote>
<p>I have tried to exec into the Pod and successfully ping the DB server without any issues</p>
<p>Below are the solutions I have tried:</p>
<ol>
<li><p>Created a Service and Endpoint and provided the DB IP in configuration file tried to bring up the application in the Pod</p></li>
<li><p>Tried using the Internal IP from Endpoint instead of DB IP in configuration to see Internal IP is resolved to DB IP</p></li>
</ol>
<p>But both these cases gave the same result. Below is the yaml I am using the create the Service and Endpoint.</p>
<pre class="lang-css prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
  name: mssql
  namespace: cattle
spec:
  type: ClusterIP
  ports:
  - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
  namespace: cattle
subsets:
- addresses:
  - ip: <<DB IP>>
  ports:
  - port: 1433
</code></pre>
<p>Please let me know if I am doing something wrong or missing something in this setup.</p>
<p>Additional information the K8s setup</p>
<ul>
<li>It is clustered master with external etcd cluster topology</li>
<li>OS on the nodes is CentOS</li>
<li>Able to ping the server from all nodes and the pods that are created</li>
</ul>
| <p>For this scenario an <code>ExternalName</code> service is very useful. It redirects traffic to an external host without defining an endpoint: </p>
<pre><code>kind: "Service"
apiVersion: "v1"
metadata:
  namespace: "your-namespace"
  name: "ftp"
spec:
  type: ExternalName
  externalName: your-ip
</code></pre>
|
<p>I have a Kubernetes Cluster setup with below topology <a href="https://i.stack.imgur.com/yBVLf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yBVLf.jpg" alt=""></a></p>
<p>I have deployed Kubernetes Dashboard on the cluster and able to access dashboard with kubectl proxy.</p>
<p>But when I try to access the Dashboard via Floating IP/VIP using the URL:</p>
<pre><code>https://<FloatingIP>:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
</code></pre>
<p>I end up with the below response on the browser</p>
<pre><code>{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}
</code></pre>
<p>I do understand that the issue is because of RBAC on Kubernetes, and I did some reading around this topic, but I am still unclear about what needs to be done to resolve this issue on a clustered-master implementation. I was able to expose the Dashboard successfully on a single-master, multi-node setup with NodePort access, but that fails with the clustered-master setup.</p>
<p>I am also open to better suggestions on implementing Dashboard in this topology.</p>
<p>Please let me know if you need any additional information</p>
| <p>You will need to create a ClusterRole granting permission to kubernetes-dashboard and bind it to the <code>system:anonymous</code> user as follows.</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-anonymous
rules:
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["https:kubernetes-dashboard:"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- nonResourceURLs: ["/ui", "/ui/*", "/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-anonymous
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard-anonymous
subjects:
- kind: User
  name: system:anonymous
</code></pre>
<p><strong>Edit:</strong>
To apply these changes, save it into a .yaml (e.g.: clusterrole.yaml) file and run</p>
<pre><code>kubectl apply -f clusterrole.yaml
</code></pre>
|
<p>I have a simple <code>docker-compose</code> file like the following:</p>
<pre><code>version: "3.7"
services:
  mongo:
    image: asia.gcr.io/myproj/mymongo:latest
    hostname: mongo
    volumes:
    - type: bind
      source: $MONGO_DB_DATA
      target: /data/db
    command: [ "--bind_ip_all", "--replSet", "rs0", "--wiredTigerCacheSizeGB", "1.5"]
</code></pre>
<p>I am launching it in Kubernetes using the following command</p>
<pre><code>docker-compose config | docker stack deploy --orchestrator kubernetes --compose-file - mystack
</code></pre>
<p>However, the pod fails with this error:</p>
<pre><code>Failed to pull image "asia.gcr.io/myproj/mymongo:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
</code></pre>
<p>My private registry is the gcloud one. I have already logged in to Docker as follows, using the service account keyfile.</p>
<pre><code>docker login -u _json_key -p "$(cat keyfile.json)" https://asia.gcr.io
</code></pre>
<p>The image is pulled correctly when I run</p>
<pre><code>docker-compose pull
</code></pre>
<p>From this link <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a>, I found that I need to create <code>ImagePullSecrets</code> </p>
<p>I have a few questions.</p>
<ol>
<li><p>How can I write the <code>ImagePullSecrets</code> syntax in my docker-compose so that it is referred correctly.</p></li>
<li><p>The method that the links mentions asks you to use <code>.docker/config.json</code> file. However, my config.json has</p>
<p>"auths": {
"asia.gcr.io": {},
},
It doesn't include the username and password since I configured it using the keyfile. How can I do this?</p></li>
<li><p>Or is there any simpler way to do this?</p></li>
</ol>
| <p>I solved this issue by first creating a secret like this</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server https://<docker registry> --docker-username _json_key --docker-password <json key> --docker-email=<email>
</code></pre>
<p>and then adding it to the default service account </p>
<pre><code>kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
</code></pre>
|
<p>Seems <a href="https://book.kubebuilder.io/reference/using-finalizers.html" rel="noreferrer">easy peasy</a> to implement custom <em>finalizers</em> if I'm implementing my Kubernetes CRD: the code snippet is quite easy since the first delete request on the object sets a value for the <code>metadata.deletionTimestamp</code> field instead of deleting the object that triggers the custom controller watching the object to execute any finalizers it handles.</p>
<p>However, let's imagine I'd like to set a <em>custom finalizer</em> for a default Kubernetes resource like a Namespace, a Deployment or whatever: is that possible or not?</p>
| <p>Ok, tested a little bit taking <code>Namespace</code> as an example.</p>
<pre class="lang-sh prettyprint-override"><code># k create ns my-namespace
namespace/my-namespace created
# k edit ns my-namespace
(... adding the .metadata.finalizers list)
# k get ns my-namespace -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2019-09-08T06:50:25Z"
  finalizers:
  - prometherion/do-something
  name: my-namespace
  resourceVersion: "1131"
  selfLink: /api/v1/namespaces/my-namespace
  uid: 75b5bae8-1d5b-44c6-86bc-e632341aabfd
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
# k delete ns my-namespace
namespace "my-namespace" deleted
</code></pre>
<p>If I open another terminal, I can see the resource in <code>Terminating</code> state.</p>
<pre class="lang-sh prettyprint-override"><code># k get ns my-namespace
NAME STATUS AGE
my-namespace Terminating 6m8s
</code></pre>
<p>So, actually the resource is marked to be deleted since I got a <code>deletionTimestamp</code>:</p>
<pre class="lang-sh prettyprint-override"><code>k get ns my-namespace -o jsonpath='{.metadata.deletionTimestamp}'
2019-09-08T06:58:07
</code></pre>
<p>To complete the deletion, I just need a simple <a href="https://stackoverflow.com/a/40975308/10104472">Watch</a> (using the Kubernetes Go client) to observe the change to the object (or a <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="noreferrer">Dynamic Admission Controller</a> to get the event), process my business logic in async mode (like a pre-delete hook), and remove my fully-qualified finalizer... just for the sake of simplicity, I tested removing it with <code>kubectl</code> and it worked.</p>
<p>Just for information, the <em>Finalizer</em> must be fully qualified since there's a <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/validation/validation.go#L4676-L4680" rel="noreferrer">validation</a> process, so it must be declared according to the pattern <code>prometherion/whatever_you_want</code>, taking care that the first part adheres to the <code>DNS-1123</code> specification.</p>
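<p>For reference, the finalizer can also be removed non-interactively with a JSON patch (here removing the first entry of the list, as in the example above):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl patch ns my-namespace --type=json \
  -p='[{"op": "remove", "path": "/metadata/finalizers/0"}]'
</code></pre>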
|
<p>Because my Kubernetes Cluster is behind a corporate proxy, I need to set http/https proxy in pods via environment variables and set no_proxy to allow inter-pod and inter-service communication and communication with other local private servers. </p>
<p>The http/https proxy configuration worked perfectly when passed to the pods through env variables, but no_proxy did not work well and broke internal pod/service communication. </p>
<p>I have tried unsuccessfully to set no_proxy and NO_PROXY at different levels in Kubernetes, mainly at: </p>
<ul>
<li>Docker daemon: /etc/systemd/system/docker.service.d/http-proxy.conf</li>
<li>Docker client: /root/.docker/config.json (although it does not seem applicable when using Docker v1.13.1 on CentOS) </li>
<li>Docker client: through environment variables passed to the pods at creation time, in the yaml file used to run them with kubectl </li>
<li>Kubernetes master and worker nodes as environment variables </li>
<li>and many combinations of the above settings</li>
</ul>
<p>Proxy configuration within PODs succeeded with env variables inside the PODs:</p>
<blockquote>
<p>export http_proxy="http://10.16.1.1:8080"<br>
export https_proxy="https://10.16.1.1:8080" </p>
</blockquote>
<p>But none of the above worked for the no_proxy exceptions. I tried many syntaxes and also added my nodes, pod &amp; service networks, and .svc (as suggested for OpenShift)... as listed below:</p>
<blockquote>
<p>export no_proxy=".svc,.example.com"<br>
export no_proxy="localhost,127.0.0.0/8,10.1.16.0/24,10.240.0.0/16,10.241.0.0/16,*.domain.com"<br>
export no_proxy=".svc,.default,.local,.cluster.local,localhost,127.0.0.0/8,10.1.16.0/24,10.240.0.0/16,10.241.0.0/16,.domain.com"<br>
export NO_PROXY=$no_proxy </p>
</blockquote>
<p>I am using Kubernetes v1.11.2 + Docker v1.13.1 on CentOS7;<br>
Any help would be appreciated. </p>
| <p>We can make a reasonable assumption that we don't directly use IP addresses to access external network servers. In other words, we use FQDNs to access, say, python.com, google.com, and github.com, but not their IP addresses directly.</p>
<p>With this assumption, we can bypass the web proxy for all direct IP address access.</p>
<pre><code>export no_proxy=localhost,.svc
printf -v allip '%s,' .{0..255}
export no_proxy="$no_proxy,${allip%,}"
</code></pre>
<p>This adds <code>.0,.1,.2,...,.255</code> to the no_proxy env variable. There is no magic here. We just treat the IP address as an FQDN, so a suffix match works like an FQDN no_proxy setting. Say, .120 would match all IP addresses x.x.x.120.</p>
|
<p>I am using BuildKit to build a Docker image for each microservice.</p>
<p><strong>./build.sh</strong></p>
<pre><code>export DOCKER_BUILDKIT=1
# ....
docker build -t ....
# ...
</code></pre>
<p>This works on my machine with docker (18.09.2).</p>
<p>However, it does not work with Jenkins, which I set up as follows: </p>
<ul>
<li><p>EKS is provisioned with a Terraform module </p>
<pre><code>module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "5.0.0"
  # ....
}
</code></pre></li>
<li><p>Jenkins is deployed on EKS (<code>v1.12.10-eks-ffbd9</code> , <code>docker://18.6.1</code>) via this <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">Helm Chart</a>.</p></li>
<li><p>Jenkins plugins as defined in Values of the helm release: </p>
<ul>
<li>kubernetes:1.18.1</li>
<li>workflow-job:2.33</li>
<li>workflow-aggregator:2.6</li>
<li>credentials-binding:1.19</li>
<li>git:3.11.0</li>
<li>blueocean:1.19.0</li>
<li>bitbucket-oauth:0.9</li>
</ul></li>
<li><p>Jenkins Pipeline is declarative, and it uses a Pod template where the container image is <code>docker:18-dind</code> and the container name is <code>dind</code>.</p></li>
<li><p>This is my Jenkinsfile</p>
<pre><code>pipeline {
    agent {
        kubernetes {
            defaultContainer 'jnlp'
            yamlFile 'jenkins-pod.yaml'
        }
    }
    stages {
        stage('Build Backends') {
            steps {
                container('dind') {
                    sh 'chmod +x *sh'
                    sh './build.sh -t=dev'
                }
                containerLog 'dind'
            }
        }
    }
}
</code></pre></li>
</ul>
<p>When Jenkins executes this pipeline, it shows this error :</p>
<pre><code>buildkit not supported by daemon
</code></pre>
<p>I am not sure which software I should upgrade to make Docker BuildKit work, and to which version:</p>
<ul>
<li>Terraform eks Module which is now 5.0.0 ?</li>
</ul>
<p>Or</p>
<ul>
<li>docker:18-dind image which behaves like environment of the ephemeral Jenkins slaves ?</li>
</ul>
<p>Or</p>
<ul>
<li>the Jenkins Plugin <code>kubernetes:1.18.1</code> ?</li>
</ul>
| <p>As per the <a href="https://github.com/docker/docker-ce/blob/68852aa863516d5d3f2fc33f0a59e65d886a1671/components/cli/cli/command/image/build_session.go#L30" rel="nofollow noreferrer">docker-ce sources</a>, there are two requirements for the <code>isSessionSupported</code> check to succeed and a <code>buildkit</code> session to start:</p>
<ul>
<li><code>dockerCli.ServerInfo().HasExperimental</code></li>
<li><code>versions.GreaterThanOrEqualTo(dockerCli.Client().ClientVersion(), "1.31"</code></li>
</ul>
<p>So:</p>
<ul>
<li>check the version of your <code>docker-cli</code> library,</li>
<li>and whether the <code>HasExperimental</code> option is enabled.</li>
</ul>
<p>To check whether the daemon has experimental support enabled, run:</p>
<pre><code>docker version -f '{{.Server.Experimental}}'
</code></pre>
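<p>If that prints <code>false</code>, experimental features can be enabled on the daemon. A sketch for a standard Linux install (the path may differ on your nodes):</p>
<pre><code>{
  "experimental": true
}
</code></pre>
<p>placed in <code>/etc/docker/daemon.json</code>, followed by a daemon restart. For a <code>docker:dind</code> container, the daemon can instead be started with the <code>--experimental</code> flag.</p>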
|
<p>Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on our laptop. </p>
<p>I wanted to check: can minikube be started with a custom virtual machine image? </p>
<p>I am trying to run minikube on Mac OS. </p>
| <p>We can start minikube from a custom ISO image using the <code>--iso-url</code> option:</p>
<pre><code>minikube start --iso-url=&lt;&gt; -p=test123 --vm-driver=hyperkit
</code></pre>
|
<p>I have <code>Pod</code> and <code>Service</code> YAML files in my system. I want to run these two using <code>kubectl create -f &lt;file&gt;</code> and connect from an outside browser to test connectivity. Here is what I have followed.</p>
<p>My Pod :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: client-nginx
  labels:
    component: web
spec:
  containers:
  - name: client
    image: nginx
    ports:
    - containerPort: 3000
</code></pre>
<p>My Services file :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: client-nginx-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 3000
    nodePort: 31616
  selector:
    component: web
</code></pre>
<p>I used <code>kubectl create -f my_pod.yaml</code>, and <code>kubectl get pods</code> shows my pod <code>client-nginx</code>.</p>
<p>Then <code>kubectl create -f my_service.yaml</code>; no errors here, and all the services are shown.</p>
<p>When I try to curl to service, it gives </p>
<blockquote>
<p>curl: (7) Failed to connect to 192.168.0.10 port 31616: Connection refused.</p>
</blockquote>
<p><code>kubectl get deployments</code> doesn't show my pod. Do I have to deploy it? I am a bit confused. If I use the instructions <a href="https://www.linode.com/docs/applications/containers/kubernetes/how-to-deploy-nginx-on-a-kubernetes-cluster/" rel="nofollow noreferrer">given here</a>, I can deploy nginx successfully and access it from outside browsers.</p>
<p>I used instructions <a href="http://containertutorials.com/get_started_kubernetes/k8s_example.html" rel="nofollow noreferrer">given here</a> to test this.</p>
| <p>Try with this service. The stock <code>nginx</code> image listens on port 80 inside the container, so <code>targetPort</code> must be 80; the <code>containerPort: 3000</code> declared in the Pod is informational only and does not change where nginx listens:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: client-nginx-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 80
    nodePort: 31616
  selector:
    component: web
</code></pre>
|
<p><strong>Background:</strong></p>
<p>I am using <strong>istio 1.2.5</strong></p>
<p>I have deployed istio using helm default profile from the istio documentation by enabling the tracing, kiali and logLevel to "debug".</p>
<p>My pods and service in istio system namespace looks like this:</p>
<pre><code>(⎈ |cluster-dev:default)➜ istio-1.2.5 git:(master) ✗ k pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-97fb6966d-cv5fq 1/1 Running 0 1d
istio-citadel-76f9586b8b-4bbcx 1/1 Running 0 1d
istio-galley-78f65c8469-v5cmn 1/1 Running 0 1d
istio-ingressgateway-5d5487c666-jjhb7 1/1 Running 0 1d
istio-pilot-65cb5648bf-4nfl7 2/2 Running 0 1d
istio-policy-8596cc6554-7sgzt 2/2 Running 0 1d
istio-sidecar-injector-76f487845d-ktf6p 1/1 Running 0 1d
istio-telemetry-5c6b6d59f6-lppqt 2/2 Running 0 1d
istio-tracing-595796cf54-zsnvj 1/1 Running 0 1d
kiali-55fcfc86cc-p2jrk 1/1 Running 0 1d
prometheus-5679cb4dcd-h7qsj 1/1 Running 0 1d
(⎈ |cluster-dev:default)➜ istio-1.2.5 git:(master) ✗ k svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 172.20.116.246 <none> 3000/TCP 1d
istio-citadel ClusterIP 172.20.177.97 <none> 8060/TCP,15014/TCP 1d
istio-galley ClusterIP 172.20.162.16 <none> 443/TCP,15014/TCP,9901/TCP 1d
istio-ingressgateway LoadBalancer 172.20.199.160 xxxxxxxxxxxxx... 15020:31334/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30200/TCP,15030:32111/TCP,15031:32286/TCP,15032:32720/TCP,15443:30857/TCP 1d
istio-pilot ClusterIP 172.20.137.21 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 1d
istio-policy ClusterIP 172.20.188.114 <none> 9091/TCP,15004/TCP,15014/TCP 1d
istio-sidecar-injector ClusterIP 172.20.47.238 <none> 443/TCP 1d
istio-telemetry ClusterIP 172.20.77.52 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 1d
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 1d
jaeger-collector ClusterIP 172.20.225.255 <none> 14267/TCP,14268/TCP 1d
jaeger-query ClusterIP 172.20.181.245 <none> 16686/TCP 1d
kiali ClusterIP 172.20.72.227 <none> 20001/TCP 1d
prometheus ClusterIP 172.20.25.75 <none> 9090/TCP 1d
tracing ClusterIP 172.20.211.135 <none> 80/TCP 1d
zipkin ClusterIP 172.20.204.123 <none> 9411/TCP 1d
</code></pre>
<p>I haven't used any egress gateway, and my outboundTrafficPolicy mode is ALLOW_ANY, so I am assuming I don't need any ServiceEntry.</p>
<p>I am using the nginx ingress controller as the entry point to my cluster and haven't started the istio ingress gateway.</p>
<p><strong>Issue</strong>: </p>
<p>I have a microservice in my cluster that reaches out to an external URL (outside my cluster, to a legacy system) with an HTTP POST request. This system normally responds to my microservice within 25 seconds and has a hard timeout of 30 seconds on itself.</p>
<p>When I am not using the istio sidecar, my microservice responds normally. But after deploying istio with the sidecar, I get a 504 Gateway Timeout every time after exactly 15 seconds.</p>
<p>Logs of the microservice as well as istio-proxy:</p>
<p><strong>Microservice logs without istio (search response took 21.957 seconds in logs)</strong></p>
<pre><code>2019-09-06 19:42:20.113 INFO [xx-xx-adapter,9b32565791541300,9b32565791541300,false] 1 --- [or-http-epoll-4] c.s.t.s.impl.common.PrepareRequest : Start Preparing search request
2019-09-06 19:42:20.117 INFO [xx-xx-adapter,9b32565791541300,9b32565791541300,false] 1 --- [or-http-epoll-4] c.s.t.s.impl.common.PrepareRequest : Done Preparing search request
2019-09-06 19:42:42.162 INFO [xx-xx-adapter,9b32565791541300,9b32565791541300,false] 1 --- [or-http-epoll-8] c.s.t.service.impl.TVSearchServiceImpl : xxxx search response took 21.957 Seconds
2019-09-06 19:42:42.292 INFO [xx-xx-adapter,9b32565791541300,9b32565791541300,false] 1 --- [or-http-epoll-8] c.s.t.service.impl.common.Transformer : Doing transformation of supplier response into our response
2019-09-06 19:42:42.296 INFO [xx-xx-adapter,9b32565791541300,9b32565791541300,false] 1 --- [or-http-epoll-8] c.s.t.service.impl.common.Transformer : Transformer: Parsing completed in 3 mSeconds
</code></pre>
<p><strong>Microservice logs with istio (response took 15.009 seconds in logs)</strong></p>
<pre><code>2019-09-06 19:40:00.048 INFO [xxx-xx-adapter,32c55821a507d6f3,32c55821a507d6f3,false] 1 --- [or-http-epoll-3] c.s.t.s.impl.common.PrepareRequest : Start Preparing search request
2019-09-06 19:40:00.048 INFO [xxx-xx-adapter,32c55821a507d6f3,32c55821a507d6f3,false] 1 --- [or-http-epoll-3] c.s.t.s.impl.common.PrepareRequest : Done Preparing search request
2019-09-06 19:40:15.058 INFO [xx-xx-adapter,32c55821a507d6f3,32c55821a507d6f3,false] 1 --- [or-http-epoll-7] c.s.t.service.impl.xxxx : xxx Search Request {"rqst":{"Request":{"__type":"xx","CheckIn":"/Date(1569628800000+0000)/","CheckOut":"/Date(1569801600000+0000)/","DetailLevel":9,"ExcludeHotelDetails":false,"GeoLocationInfo":{"Latitude":25.204849,"Longitude":55.270782},"Nights":0,"RadiusInMeters":25000,"Rooms":[{"AdultsCount":2,"KidsAges":[2]}],"DesiredResultCurrency":"EUR","Residency":"GB","TimeoutSeconds":25,"ClientIP":"127.0.0.1"},"RequestType":1,"TypeOfService":2,"Credentials":{"UserName":"xxxx","Password":"xx"}}}
2019-09-06 19:40:15.058 ERROR [xx-xx-adapter,32c55821a507d6f3,32c55821a507d6f3,false] 1 --- [or-http-epoll-7] c.s.t.service.impl.xxxx : xxx Search request failed 504 GATEWAY_TIMEOUT
2019-09-06 19:40:15.058 INFO [xxx-xx-adapter,32c55821a507d6f3,32c55821a507d6f3,false] 1 --- [or-http-epoll-7] c.s.t.service.impl.xxxx : xx search response took 15.009 Seconds
2019-09-06 19:40:15.059 ERROR [xxx-xx-adapter,32c55821a507d6f3,32c55821a507d6f3,false] 1 --- [or-http-epoll-7] a.w.r.e.AbstractErrorWebExceptionHandler : [79d38e2f] 500 Server Error for HTTP POST "/search/geo-location"
java.lang.RuntimeException: Error occurred, We did not receive proper search response from xx please check internal logs for more info
<Java Stack trace >
2019-09-06 19:40:15.061 ERROR [xxx-xx-adapter,32c55821a507d6f3,32c55821a507d6f3,false] 1 --- [or-http-epoll-7] c.s.t.service.impl.xxxx : xxx search response upstream request timeout
2019-09-06 19:41:16.081 INFO [xxx-xx--adapter,,,] 1 --- [ Thread-22] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'executor'
</code></pre>
<p><strong>Envoy sidecar proxy logs Masked</strong></p>
<pre><code>[2019-09-06T20:32:15.418Z] "POST /xxxxxx/xxxxx.svc/xx/ServiceRequest HTTP/1.1" 504 UT "-" "-" 517 24 14997 - "-" "ReactorNetty/0.8.10.RELEASE" "c273fac1-8xxa-xxxx-xxxx-xxxxxx" "testdomain.testurl.com" "40.67.217.71:80" PassthroughCluster - 1.7.17.71:80 1.20.21.25:42386 -
[2019-09-06 20:39:01.719][34][debug][router] [external/envoy/source/common/router/router.cc:332] [C57][S17104382791712695742] cluster 'PassthroughCluster' match for URL '/xxxxx/xxxx.svc/json/ServiceRequest'
[2019-09-06 20:39:01.719][34][debug][router] [external/envoy/source/common/router/router.cc:332] [C57][S17104382791712695742] cluster 'PassthroughCluster' match for URL '/xxxxx/xxxx.svc/json/ServiceRequest'
[2019-09-06 20:39:01.719][34][debug][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:87] Created host 40.67.217.71:80.
[2019-09-06 20:39:01.719][34][debug][router] [external/envoy/source/common/router/router.cc:393] [C57][S17104382791712695742] router decoding headers:
':authority', 'x.x.com'
':path', '/xxxxx/xxxxx.svc/json/ServiceRequest'
':method', 'POST'
':scheme', 'http'
'user-agent', 'ReactorNetty/0.8.10.RELEASE'
'accept', 'application/json'
'accept-encoding', 'gzip, deflate'
'x-newrelic-transaction', 'PxRSBVVQXAdVUgNTUgcPUQUBFB8EBw8RVU4aB1wLB1YHAA8DAAQFWlNXB0NKQV5XCVVQAQcGFTs='
'x-newrelic-id', 'VgUDWFVaARADUFNWAgQHV1A='
'content-type', 'application/json;charset=UTF-8'
'content-length', '517'
'x-forwarded-proto', 'http'
'x-request-id', '750f4fdb-83f9-409c-9ecf-e0a1fdacbb65'
'x-istio-attributes', 'CloKCnNvdXJjZS51aWQSTBJKa3ViZXJuZXRlczovL2hvdGVscy10di1hZGFwdGVyLXNlcnZpY2UtZGVwbG95bWVudC03ZjQ0ZDljNjVjLWhweHo3LmRlZmF1bHQ='
'x-envoy-expected-rq-timeout-ms', '15000'
'x-b3-traceid', '971ac547c63fa66e'
'x-b3-spanid', '58c12e7da54ae50f'
'x-b3-parentspanid', 'dc7bda5b98d522bf'
'x-b3-sampled', '0'
[2019-09-06 20:39:01.719][34][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection
[2019-09-06 20:39:01.719][34][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C278] connecting
[2019-09-06 20:39:01.719][35][debug][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:973] membership update for TLS cluster PassthroughCluster added 1 removed 0
[2019-09-06 20:39:01.719][28][debug][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:973] membership update for TLS cluster PassthroughCluster added 1 removed 0
[2019-09-06 20:39:01.719][34][debug][connection] [external/envoy/source/common/network/connection_impl.cc:704] [C278] connecting to 40.67.217.71:80
[2019-09-06 20:39:01.720][35][debug][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:41] Adding host 40.67.217.71:80.
[2019-09-06 20:39:01.720][28][debug][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:41] Adding host 40.67.217.71:80.
[2019-09-06 20:39:01.720][34][debug][connection] [external/envoy/source/common/network/connection_impl.cc:713] [C278] connection in progress
[2019-09-06 20:39:01.720][34][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-09-06 20:39:01.720][34][debug][filter] [src/envoy/http/mixer/filter.cc:102] Called Mixer::Filter : decodeData (517, false)
[2019-09-06 20:39:01.720][34][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1079] [C57][S17104382791712695742] request end stream
[2019-09-06 20:39:01.720][34][debug][filter] [src/envoy/http/mixer/filter.cc:102] Called Mixer::Filter : decodeData (0, true)
[2019-09-06 20:39:01.720][34][debug][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:973] membership update for TLS cluster PassthroughCluster added 1 removed 0
[2019-09-06 20:39:01.748][34][debug][connection] [external/envoy/source/common/network/connection_impl.cc:552] [C278] connected
[2019-09-06 20:39:01.748][34][debug][client] [external/envoy/source/common/http/codec_client.cc:64] [C278] connected
[2019-09-06 20:39:01.748][34][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:245] [C278] attaching to next request
[2019-09-06 20:39:01.748][34][debug][router] [external/envoy/source/common/router/router.cc:1210] [C57][S17104382791712695742] pool ready
[2019-09-06 20:39:02.431][35][debug][filter] [external/envoy/source/extensions/filters/listener/original_dst/original_dst.cc:18] original_dst: New connection accepted
[2019-09-06 20:39:02.431][35][debug][filter] [external/envoy/source/extensions/filters/listener/tls_inspector/tls_inspector.cc:72] tls inspector: new connection accepted
[2019-09-06 20:39:02.431][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:30] Called tcp filter: Filter
[2019-09-06 20:39:02.431][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:40] Called tcp filter: initializeReadFilterCallbacks
[2019-09-06 20:39:02.431][35][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:200] [C279] new tcp proxy session
[2019-09-06 20:39:02.431][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:135] [C279] Called tcp filter onNewConnection: remote 1.210.4.2:39482, local 1.10.1.5:80
[2019-09-06 20:39:02.431][35][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:343] [C279] Creating connection to cluster inbound|80|xx-xx-adapter-service|xx-xx-adapter-service.default.svc.cluster.local
[2019-09-06 20:39:02.431][35][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:80] creating a new connection
[2019-09-06 20:39:02.431][35][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:372] [C280] connecting
[2019-09-06 20:39:02.431][35][debug][connection] [external/envoy/source/common/network/connection_impl.cc:704] [C280] connecting to 127.0.0.1:80
[2019-09-06 20:39:02.431][35][debug][connection] [external/envoy/source/common/network/connection_impl.cc:713] [C280] connection in progress
[2019-09-06 20:39:02.431][35][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:106] queueing request due to no available connections
[2019-09-06 20:39:02.431][35][debug][main] [external/envoy/source/server/connection_handler_impl.cc:257] [C279] new connection
[2019-09-06 20:39:02.431][35][debug][connection] [external/envoy/source/common/network/connection_impl.cc:552] [C280] connected
[2019-09-06 20:39:02.431][35][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:293] [C280] assigning connection
[2019-09-06 20:39:02.431][35][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:542] TCP:onUpstreamEvent(), requestedServerName:
[2019-09-06 20:39:02.431][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:143] Called tcp filter completeCheck: OK
[2019-09-06 20:39:02.432][35][debug][filter] [src/istio/control/client_context_base.cc:140] Report attributes: attributes {
key: "connection.event"
value {
string_value: "open"
}
}
attributes {
key: "connection.id"
value {
string_value: "82e869af-aec6-406a-8a52-4168a19eb1f0-279"
[2019-09-06 20:39:02.432][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:102] [C279] Called tcp filter onRead bytes: 130
[2019-09-06 20:39:02.435][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:125] [C279] Called tcp filter onWrite bytes: 147
[2019-09-06 20:39:02.435][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:102] [C279] Called tcp filter onRead bytes: 0
[2019-09-06 20:39:02.436][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:125] [C279] Called tcp filter onWrite bytes: 0
[2019-09-06 20:39:02.436][35][debug][connection] [external/envoy/source/common/network/connection_impl.cc:520] [C280] remote close
[2019-09-06 20:39:02.436][35][debug][connection] [external/envoy/source/common/network/connection_impl.cc:190] [C280] closing socket: 0
[2019-09-06 20:39:02.436][35][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:121] [C280] client disconnected
[2019-09-06 20:39:02.436][35][debug][connection] [external/envoy/source/common/network/connection_impl.cc:101] [C279] closing data_to_write=0 type=0
[2019-09-06 20:39:02.436][35][debug][connection] [external/envoy/source/common/network/connection_impl.cc:190] [C279] closing socket: 1
[2019-09-06 20:39:02.436][35][debug][filter] [src/envoy/tcp/mixer/filter.cc:174] [C279] Called tcp filter onEvent: 1 upstream 127.0.0.1:80
[2019-09-06 20:39:02.436][35][debug][filter] [src/istio/control/client_context_base.cc:140] Report attributes: attributes {
key: "connection.duration"
value {
duration_value {
nanos: 4358000
}
}
}
attributes {
key: "connection.event"
value {
string_value: "close"
}
}
at
[2019-09-06 20:39:02.436][35][debug][main] [external/envoy/source/server/connection_handler_impl.cc:68] [C279] adding to cleanup list
[2019-09-06 20:39:02.436][35][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:246] [C280] connection destroyed
</code></pre>
<p>I tried creating a VirtualService over my microservice with a timeout of 30 seconds, but no luck.</p>
<p>I am not sure what I am missing. Need Help.</p>
<p><strong>Crux:</strong>
Internet (or from a pod console) --> microservice --> microservice calls a 3rd-party legacy URL internally in code and gets timeouts at exactly 15 seconds every time.</p>
<p>Editing and adding more details:</p>
<p>Just to reproduce the delay, I tried <code>curl http://slowwly.robertomurray.co.uk/delay/17000/url/http://www.google.com</code> (17000 = 17 seconds) from any of the microservice pods that have the istio-proxy sidecar, and I always get timeouts at 15 seconds.
I am not sure where to change this 15-second envoy setting.</p>
| <p>After a lot of trial and error, we got this working.</p>
<p>Reference: <a href="https://github.com/istio/istio/issues/16915#issuecomment-529210672" rel="nofollow noreferrer">https://github.com/istio/istio/issues/16915#issuecomment-529210672</a></p>
<p>For any outbound traffic, whether or not you have an egress gateway, the default timeout is 15 seconds. So if you want to increase it, you need a ServiceEntry for the external host and a VirtualService that defines the timeout.</p>
<p><strong>ServiceEntry</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: external-se-test
spec:
hosts:
- slowwly.robertomurray.co.uk
location: MESH_EXTERNAL
ports:
- number: 80
name: example-http
protocol: HTTP
resolution: DNS
</code></pre>
<p><strong>VirtualService</strong></p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: slow-ext
spec:
hosts:
- slowwly.robertomurray.co.uk
http:
- timeout: 30s
route:
- destination:
host: slowwly.robertomurray.co.uk
weight: 100
</code></pre>
|
<p>For example, kubelet(cAdvisor) <code>container_cpu_usage_seconds_total</code> has value with some parameter (e.g. pod, namespace).</p>
<p>I wonder how to summarize these kinds of values per Service (for example, CPU usage per service). I understand that a Service is a set of pods, so it should just be a matter of aggregating these per-pod values up to the service, but I do not know how.</p>
<p>Is there any aggregation method per Service? Or is <code>process_cpu_seconds_total</code> a kind of per-service aggregation of <code>container_cpu_usage_seconds_total</code>?</p>
<p>Thank you for your help!</p>
| <p>What about</p>
<pre><code>sum(rate(container_cpu_usage_seconds_total{job="kubelet", cluster="", namespace="default", pod_name=~"your_pod_name.*"}[3m]))
</code></pre>
<p>Taken from <a href="https://github.com/kubernetes-monitoring/kubernetes-mixin" rel="nofollow noreferrer">kubernetes-mixin</a></p>
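<p>If you want a per-pod breakdown instead of a single total, a variant like the following should also work. Note that the label name is an assumption tied to your kubelet/cAdvisor version — older versions expose <code>pod_name</code>, newer ones expose <code>pod</code>:</p>
<pre><code># CPU usage per pod in the default namespace, grouped by pod name
sum by (pod_name) (
  rate(container_cpu_usage_seconds_total{job="kubelet", namespace="default"}[3m])
)
</code></pre>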
|
<p>I migrated Jenkins with <code>thinBackup</code> plugins. When I restart my new Jenkins master, accessing it via <a href="http://new_jenkins_ip:8080" rel="nofollow noreferrer">http://new_jenkins_ip:8080</a> will redirect me to <a href="https://old_jenkins_domain" rel="nofollow noreferrer">https://old_jenkins_domain</a>.</p>
<p>My old Jenkins runs as a service; it was set up long ago. My new Jenkins runs in k8s: I edited the stable/jenkins chart and deployed it with Helm.</p>
<p>At first, I thought Jenkins URL is the cause, so I change configuration in <code>jenkins.model.JenkinsLocationConfiguration.xml</code> (according to <a href="https://stackoverflow.com/questions/11723735/where-is-jenkins-url-configuration-stored/36031036">this</a>) and restart Jenkins by killing its pod (I deploy new Jenkins in k8s). But it's still redirecting to <a href="https://old_jenkins_domain" rel="nofollow noreferrer">https://old_jenkins_domain</a>.</p>
<p>I also try copying entire JENKINS_HOME (<a href="https://stackoverflow.com/questions/8724939/how-to-move-jenkins-from-one-pc-to-another">this</a>) directory and yes, it's still redirecting to <a href="https://old_jenkins_domain" rel="nofollow noreferrer">https://old_jenkins_domain</a>.</p>
<p>Another problem is that I'm using <code>github-oauth</code> and <code>matrix-auth</code> plugins and if I don't install these before performing thinBackup migration, Jenkins will fail to start up and restart itself forever. I want to know why.</p>
<p>I actually can access <a href="http://new_jenkins_ip:8080/login" rel="nofollow noreferrer">http://new_jenkins_ip:8080/login</a>, but when I hit login, <code>github-oauth</code> just throws a NullPointerException.
I might try this <a href="https://github.com/prasadsalvi21/JenkinsMigration" rel="nofollow noreferrer">Python script</a> to migrate plugins, because I think <code>thinBackup</code> can't migrate all of the <code>github-oauth</code> config.</p>
| <p>It's <code>github-oauth</code> plugins that keep directing me to my old site. I new oauth app in github and let it redirecting to my new IP.</p>
|
<p>For example, kubelet(cAdvisor) <code>container_cpu_usage_seconds_total</code> has value with some parameter (e.g. pod, namespace).</p>
<p>I wonder how to summarize these kinds of values per Service (for example, CPU usage per service). I understand that a Service is a set of pods, so it should just be a matter of aggregating these per-pod values up to the service, but I do not know how.</p>
<p>Is there any aggregation method per Service? Or is <code>process_cpu_seconds_total</code> a kind of per-service aggregation of <code>container_cpu_usage_seconds_total</code>?</p>
<p>Thank you for your help!</p>
| <p>In general, cAdvisor collects metrics about containers and doesn't know anything about Services. If you want to aggregate by Service, you need to manually select the metrics of the Pod that belong to this Service.</p>
<p>For example, if your cAdvisor metrics are in Prometheus, you can use this PromQL query:</p>
<pre><code>sum(rate(container_cpu_usage_seconds_total{pod_name=~"myprefix-*"}[2m]))
</code></pre>
<p>This adds up the CPU usages of all containers of all Pods that have a name starting with <code>myprefix-</code>.</p>
<p>Or if you have the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md" rel="nofollow noreferrer">Resource Metrics API</a> enabled (i.e. the <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">Metrics Server</a> installed), you can query the CPU usage of a specific Pod (in fractions of a CPU core) with:</p>
<pre><code>kubectl get --raw="/apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods/{pod}"
</code></pre>
<p>To get the total usage of a Service, you would need to iterate through all the Pods of the Service, extract the values, and add them together.</p>
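<p>That iteration can be sketched as follows. This is only an illustration — the response shape mirrors what the Resource Metrics API returns, but the pod data below is made up, and CPU quantities are assumed to use the usual Kubernetes suffixes (<code>n</code>, <code>u</code>, <code>m</code>):</p>

```python
# Sketch: add up per-pod CPU usage (as reported by the Resource Metrics API)
# into a per-Service total. The pod entries below are illustrative, not real.

def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity string (e.g. '250m', '1500000n') to cores."""
    suffixes = {"n": 1e-9, "u": 1e-6, "m": 1e-3}
    if quantity and quantity[-1] in suffixes:
        return float(quantity[:-1]) * suffixes[quantity[-1]]
    return float(quantity)

def service_cpu_total(pod_metrics):
    """Sum CPU usage over every container of every Pod behind the Service."""
    total = 0.0
    for pod in pod_metrics:
        for container in pod["containers"]:
            total += parse_cpu(container["usage"]["cpu"])
    return total

# One entry per Pod that the Service's selector matches (illustrative values):
pods = [
    {"containers": [{"usage": {"cpu": "250m"}}]},
    {"containers": [{"usage": {"cpu": "120m"}}, {"usage": {"cpu": "30m"}}]},
]
print(round(service_cpu_total(pods), 3))  # 0.4 cores
```

<p>In a real script you would fetch one metrics response per Pod (e.g. via <code>kubectl get --raw</code> as above) and feed them into <code>service_cpu_total</code>.</p>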
<p>In general, Service is a Kubernetes concept and does not exist in cAdvisor, which is an independent project and just happens to be used in Kubernetes.</p>
|
<p>We have a kind of evaluation job which consists of several thousand invocations of a legacy binary with various inputs, each of which running like a minute. The individual runs are perfectly parallelizable (one instance per core).</p>
<p>What is the state of the art to do this in a hybrid cloud scenario?</p>
<p>Kubernetes itself does not seem to provide an interface for prioritizing or managing waiting jobs. Jenkins would be good at these points, but feels like a hack. Of course, we could hack something ourselves, but the problem should be sufficiently generic to already have an out-of-the-box solution.</p>
| <p>There are a lot of frameworks that help manage jobs in a Kubernetes cluster. The most popular are:</p>
<ul>
<li><a href="https://github.com/argoproj/argo" rel="nofollow noreferrer">Argo</a> for orchestrating parallel jobs on Kubernetes. Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).</li>
<li><a href="https://github.com/apache/airflow" rel="nofollow noreferrer">Airflow</a> - has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. Also take a look for <a href="https://airflow.apache.org/kubernetes.html#kubernetes-executor" rel="nofollow noreferrer">kubernetes-executor</a>. </li>
</ul>
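<p>As a concrete illustration, an Argo Workflow for the scenario in the question could look roughly like this. This is a sketch under stated assumptions — the image, binary path, and input list are placeholders, and <code>parallelism</code> caps how many pods run at once while the rest wait in the queue:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: legacy-eval-
spec:
  entrypoint: fan-out
  parallelism: 8                         # at most 8 runs at once; the rest wait
  templates:
  - name: fan-out
    steps:
    - - name: run-binary
        template: run
        arguments:
          parameters:
          - name: input
            value: "{{item}}"
        withItems: ["input-001", "input-002", "input-003"]   # one item per invocation
  - name: run
    inputs:
      parameters:
      - name: input
    container:
      image: example/legacy-binary:latest    # placeholder image
      command: ["/opt/legacy/run"]           # placeholder binary
      args: ["{{inputs.parameters.input}}"]
</code></pre>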
<p>I recommend watching <a href="https://www.youtube.com/watch?v=oXPgX7G_eow&utm_campaign=Singapore&utm_content=97139107&utm_medium=social&utm_source=twitter&hss_channel=tw-381629790" rel="nofollow noreferrer">this video</a>, which describes each framework and will help you decide which is better for you.</p>
|
<p>I've have RabbitMQ deployment with ClusterIP service.</p>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rabbit-mq
name: rabbit-mq
spec:
replicas: 1
selector:
matchLabels:
app: rabbit-mq
strategy:
type: Recreate
template:
metadata:
labels:
app: rabbit-mq
spec:
containers:
- image: rabbitmq:3.6.1-management
name: rabbit-mq
ports:
- containerPort: 5672
volumeMounts:
- mountPath: /etc/rabbitmq
name: rabbit-mq-data
restartPolicy: Always
hostname: rabbit-mq
volumes:
- name: rabbit-mq-data
persistentVolumeClaim:
claimName: rabbit-mq-data
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: rabbit-mq-service
labels:
app: rabbit-mq
namespace: default
spec:
type: ClusterIP
ports:
- port: 5672
selector:
app: rabbit-mq
</code></pre>
<p>As I tested with other services, and as the documentation says, a ClusterIP service should be visible to all pods and nodes inside the cluster.
I need my RabbitMQ service to be accessible only inside the cluster.
Since I only defined port 5672, the same value is used for targetPort.</p>
<p>I have a .NET Core application connecting to the service with the following connection string:
<code>amqp://user:password@10.100.10.10:5672/</code></p>
<p><code>kubectl get svc rabbit-mq-service</code></p>
<pre><code>NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
rabbit-mq-service   ClusterIP   10.100.10.10   <none>        5672/TCP   3h7m
</code></pre>
<p>When I try to connect to RabbitMQ, I get the error:</p>
<p><code>Unhandled Exception: RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable ---> System.AggregateException: One or more errors occurred. (Connection failed) ---> RabbitMQ.Client.Exceptions.ConnectFailureException: Connection failed ---> System.TimeoutException: The operation has timed out.</code></p>
<p>Have I misunderstood how ClusterIP service works?</p>
| <p>Try this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: rabbit-mq-service
labels:
app: rabbit-mq
namespace: default
spec:
type: ClusterIP
clusterIP: None
ports:
- port: 5672
selector:
app: rabbit-mq
</code></pre>
<p>and use the service DNS name in the connection string instead of the ClusterIP: <code>amqp://user:password@rabbit-mq-service:5672/</code></p>
|
<p>I'm trying to set which CRI-O socket kubeadm should use.</p>
<p>To achieve this I should use the flag <code>--cri-socket /var/run/crio/crio.sock</code></p>
<hr>
<p>The current command is in the form <code>kubeadm init phase <phase_name></code>. I must add the <code>--cri-socket</code> flag to it. </p>
<p>I edited the command this way <code>kubeadm init --cri-socket /var/run/crio/crio.sock phase <phase_name></code>.</p>
<p>Unfortunately, I am getting the <strong>error</strong> <code>Error: unknown flag: --cri-socket</code>.<br>
=> It seems that the argument <code>phase <phase_name></code> and the flag <code>--cri-socket /var/run/crio/crio.sock</code> are not compatible.</p>
<p>How do I fix that?<br>
Thanks</p>
<hr>
<p><strong>##################Update 1######################</strong> </p>
<p><strong>File</strong> : <em>/etc/kubernetes/kubeadm-config.yaml</em> </p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.10.3.15
bindPort: 6443
certificateKey: 9063a1ccc9c5e926e02f245c06b8xxxxxxxxxxx
nodeRegistration:
name: p3kubemaster1
taints:
- effect: NoSchedule
key: node-role.kubernetes.io/master
criSocket: /var/run/crio/crio.sock
</code></pre>
| <p>I see two things that may help:</p>
<ol>
<li>Check whether <code>/var/lib/kubelet/kubeadm-flags.env</code> is properly configured.</li>
</ol>
<blockquote>
<p>In addition to the flags used when starting the kubelet, the file also
contains dynamic parameters such as the cgroup driver and whether to
use a different CRI runtime socket (--cri-socket).</p>
</blockquote>
<p>More details can be found <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#workflow-when-using-kubeadm-init" rel="nofollow noreferrer">here</a>.</p>
<ol start="2">
<li>Check your init config file (the <code>kubeadm init --config</code> flag takes the path to the configuration file) and try to add something like this:</li>
</ol>
<hr>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
criSocket: "unix:///var/run/crio/crio.sock"
</code></pre>
<hr>
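<p>With a <code>criSocket</code> entry under <code>nodeRegistration</code> as above, you can then run the phase by pointing kubeadm at the file instead of passing the flag. A sketch — the phase name here is just an illustration, and the path is the one from your question:</p>
<pre><code>kubeadm init phase preflight --config /etc/kubernetes/kubeadm-config.yaml
</code></pre>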
<p>Please let me know if that helped.</p>
|
<p>How do I determine which <code>apiGroup</code> any given resource belongs in?</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: default
name: thing
rules:
- apiGroups: ["<wtf goes here>"]
resources: ["deployments"]
verbs: ["get", "list"]
resourceNames: []
</code></pre>
| <p>To get the API resources supported by your Kubernetes cluster:</p>
<pre><code>kubectl api-resources -o wide
</code></pre>
<p>Example output:</p>
<pre><code>NAME          SHORTNAMES   APIGROUP     NAMESPACED   KIND         VERBS
deployments   deploy       apps         true         Deployment   [create delete deletecollection get list patch update watch]
deployments   deploy       extensions   true         Deployment   [create delete deletecollection get list patch update watch]
</code></pre>
<p>To get the API versions supported by your Kubernetes cluster:</p>
<pre><code>kubectl api-versions
</code></pre>
<p>You can verify, for example, a deployment:</p>
<pre><code>kubectl explain deploy
KIND: Deployment
VERSION: extensions/v1beta1
DESCRIPTION:
DEPRECATED - This group version of Deployment is deprecated by
apps/v1beta2/Deployment.
</code></pre>
<p>Furthermore, you can investigate a specific api-version:</p>
<pre><code>kubectl explain deploy --api-version apps/v1
</code></pre>
<p>In short, you can specify multiple groups in your <code>apiGroups</code>, like:</p>
<pre><code>apiGroups: ["extensions", "apps"]
</code></pre>
<p>You can also configure which API groups are enabled for your cluster (for example, to test whether your manifests will work with the next <a href="https://kubernetes.io/docs/reference/using-api/api-overview/#enabling-api-groups" rel="noreferrer">1.16 release</a>) by passing options to <code>--runtime-config</code> in <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/#kube-apiserver" rel="noreferrer">kube-apiserver</a>.</p>
<p>Additional resources:</p>
<ul>
<li><a href="https://akomljen.com/kubernetes-api-resources-which-group-and-version-to-use/" rel="noreferrer">api Resources</a>: </li>
<li><a href="https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api" rel="noreferrer">Kubernetes Deprecation Policy</a> </li>
<li><p>For additional notable feature updates, follow the release notes <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#kubernetes-v115-release-notes" rel="noreferrer">for the specific release</a>, e.g.:</p>
<p><code>Continued deprecation of extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs; these extensions will be retired in 1.16!</code></p></li>
</ul>
|
<p>I have a cluster hosted on GKE, I have several deployments on this cluster, <br>I can connect with <code>kubectl exec</code> to the pods:</p>
<p><code>kubectl exec -it mypod-1234566-7890976 -- bash</code></p>
<p>I want to remove the option to connect with <code>kubectl exec</code> to a certain container.</p>
<p>Is there a way to block connections to the container, for example by blocking SSH in the container's <code>Dockerfile</code>, or any other way?</p>
| <p>To limit the ability to <code>kubectl exec</code> to pods what you want to do is create a custom Role & RoleBinding that removes the <code>create</code> verb for the <code>pods/exec</code> resource. An easy approach to this might be to copy the default RBAC policies, and then make the appropriate edit and rename.</p>
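<p>A minimal sketch of what that could look like — all names here are illustrative: a namespaced Role that grants common read access to pods but deliberately contains no rule for <code>create</code> on <code>pods/exec</code>, plus a RoleBinding for the user you want to restrict:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader-no-exec
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
# no rule granting "create" on "pods/exec", so kubectl exec is denied
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: pod-reader-no-exec-binding
subjects:
- kind: User
  name: jane                      # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader-no-exec
  apiGroup: rbac.authorization.k8s.io
</code></pre>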
<p>Because of how RBAC works, the finest granularity you could apply to this is per-namespace, but it's not possible to filter this to a particular pod/deployment/etc.</p>
<p>As for other inbound external connections to a pod, this shouldn't be possible by default, unless you have created an Ingress and/or Service to specifically do this. This is because, by and large, most providers use private IP address ranges for the node IPs and also the Pod networking, hence they aren't reachable from outside without some NATing or proxying.</p>
<p>Hope this helps.</p>
|
<p>Cloud Platform: GCP</p>
<p>Kubernetes Engine: GKE</p>
<p>For a Kubernetes service with Type=LoadBalancer, a corresponding firewall rule gets created automatically to allow traffic from 0.0.0.0/0, and the name of the rule starts with k8s-fw.*</p>
<p>The more LoadBalancer services we have in a cluster, the more automatic firewall rules get created.</p>
<p>Is it possible to keep only one firewall rule for the cluster, since all the firewall rules are the same?</p>
<p>I tested this by deleting the firewall rule of a newly created LoadBalancer service, since there was already a rule in place for the other LoadBalancer service, and I was still able to access the application with the new LoadBalancer IP.</p>
<p>Please confirm if this can be done.</p>
| <p>Yes, you can keep just one and delete the other firewall rules (the ones created to allow traffic from 0.0.0.0/0, whose names start with k8s-fw.*) for the different LoadBalancer services within the same GKE cluster. But keep in mind that every service's target port must be added to the firewall rule that you keep, so that all of them are allowed from 0.0.0.0/0.</p>
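<p>If the ports differ between the services, the rule you keep can be updated to allow all of them at once. A hedged sketch — the rule name and port numbers below are illustrative, not taken from your cluster:</p>
<pre><code>gcloud compute firewall-rules update k8s-fw-example \
    --allow tcp:30080,tcp:30443,tcp:31616
</code></pre>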
|
<p>I am trying to update an object's Env field but I don't want to have to iterate through them to figure out which fields already exist and need to be updated instead of inserted if that isn't necessary. I'm using controller-runtime.</p>
<p>I tried patching using an ApplyPatchType, like so:<br>
<code>r.Patch(ctx, myDeployment, client.Apply)</code><br>
where client.Apply is <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/client/patch.go#L28" rel="nofollow noreferrer">here</a>, and <code>myDeployment</code> is an <code>*appsv1.Deployment</code> that I found with <code>List(...)</code> and then modified.</p>
<p>This failed with a <code>415 Unsupported Media Type</code>. I assume it has something to do with it needing to be encoded in YAML (since the type's value is "application/apply-patch+yaml").</p>
<p>I got it working by doing this (r has an embedded controller-runtime Client):</p>
<pre class="lang-golang prettyprint-override"><code>newDeploy := dep.DeepCopy()
original := dep.DeepCopy()
// ... (changing newDeploy)
rawObj, _ := apijson.Marshal(newDeploy)
r.Patch(ctx, original, client.ConstantPatch(types.StrategicMergePatchType, rawObj))
</code></pre>
<p>This feels inefficient though, with possibly unnecessary DeepCopies, and I feel like I'm unnecessarily Marshalling my object. I figured I could somehow use the <code>Patch</code> command similar to how I use the <code>Update</code> command, where I could just pass in the object in its already modified state and get the StrategicMergePatchType (or ApplyPatchType) behavior.</p>
<p>It seems to me like if I want to just pass in the modified object and have the client "figure out" what to patch, I want to use the client.Apply (ApplyPatchType) behavior, since I think that StrategicMergePatchType is just for diffs to apply (though providing the whole object can still be a valid diff I suppose). Any guidance is appreciated!</p>
| <p>For now, I got the behavior I'm looking for without having to manually write the merging logic by using <a href="https://github.com/kubernetes-sigs/controller-runtime/blob/ee41a803ccc9c8a308b751433fc35282c716a9ff/pkg/client/patch.go#L76" rel="nofollow noreferrer">client.MergeFrom</a>:</p>
<blockquote>
<p>MergeFrom creates a Patch that patches using the merge-patch strategy with the given object as base.</p>
</blockquote>
<p>Relevant code:</p>
<p><code>r.Patch(ctx, newDeploy, client.MergeFrom(original))</code></p>
|
<p>I have configured my ingress controller with nginx-ingress <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-nginx-upstream-hashing" rel="nofollow noreferrer">hashing</a> and I define HPA for my deployments. When we do load testing we hit a problem with newly created pods that aren't warmed up enough: the load balancing immediately shifts a portion of the traffic to them, the latency spikes, and the service chokes. Is there a way to define some smooth load rebalancing that would move the traffic gradually and thus warm up the service in a more natural way?</p>
<p>Here is an example effect we see now:
<a href="https://i.stack.imgur.com/5MVT6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5MVT6.png" alt="enter image description here"></a></p>
| <p>At first glance I see 2 possible reasons for this behaviour: </p>
<ol>
<li><p>I think there is a chance that you are facing the same problem as encountered in this question: <a href="https://stackoverflow.com/q/56899429/8971507">Some requests fails during autoscaling in kubernetes</a>. In that case, Nginx was sending requests to Pods that were not completely ready. To solve this you can configure a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">Readiness Probe</a>. Personally, I configure my Readiness Probes to send a http request to a /health endpoint of my services. </p></li>
<li><p>There is a chance however that your application naturally performs slowly during the first requests, usually because of caching or some other operation that needs to be done at the beginning of its life. I encountered this problem in a Django+Gunicorn app where the Gunicorn only started my app after the first request. To solve this I used a <code>PostStart</code> <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks" rel="nofollow noreferrer">Container Hook</a> which sends a request to my app right after the container is created. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">Here</a> is an example of its use. You may also have a look at this question: <a href="https://stackoverflow.com/q/57837018/8971507">Kubernetes Pod warm-up for load balancing</a>.</p></li>
</ol>
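<p>For the first point, a minimal readiness probe sketch; the <code>/health</code> endpoint and port 8080 are assumptions about your service:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /health            # illustrative health endpoint
    port: 8080               # illustrative container port
  initialDelaySeconds: 5     # give the process time to boot
  periodSeconds: 5           # re-check every 5 seconds
  failureThreshold: 3        # mark unready after 3 consecutive failures
</code></pre>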
|
<p>We are using the default ingress gateway for istio. We would like to create two different ingress gateways, one using a private and one a public external load balancer.</p>
<p>Is there any way to achieve this?</p>
| <p>See <a href="https://github.com/istio-ecosystem/multi-mesh-examples/tree/master/add_hoc_limited_trust#deploy-a-private-ingress-gateway-in-the-second-cluster" rel="nofollow noreferrer">this example</a>, step 3: <em>Deploy a private ingress gateway and mount the new secrets as data volumes by the following command</em>. You may want to edit the helm values of the example, for example remove the mounted volumes with the certificates, change the name of the gateway, the namespace it is deployed to.</p>
|
<p>I want to create a secret in several k8s clusters in the Google Kubernetes Engine using the Terraform.</p>
<p>I know that I can use "host", "token" and some other parameters in the "kubernetes" provider, but I can describe these parameters only once, and I don’t know how to connect to another cluster within the same Terraform file.</p>
<p>My question is how to create a secret (or do other operations) in multiple k8s clusters via Terraform. Maybe you know some tools on github or other tips for doing this via a single Terraform file?</p>
| <p>You can use an alias for the provider in Terraform, as described in the <a href="https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-instances" rel="noreferrer">documentation</a>.</p>
<p>So you can define multiple providers for multiple k8s clusters and then refer them by alias.</p>
<p>e.g.</p>
<pre><code>provider "kubernetes" {
config_context_auth_info = "ops1"
config_context_cluster = "mycluster1"
alias = "cluster1"
}
provider "kubernetes" {
config_context_auth_info = "ops2"
config_context_cluster = "mycluster2"
alias = "cluster2"
}
resource "kubernetes_secret" "example" {
...
provider = kubernetes.cluster1
}
</code></pre>
|
<p>I've created a GKE cluster with Terraform and I also want to manage Kubernetes with Terraform as well. However, I don't know how to pass GKE's credentials to the <code>kubernetes</code> provider.</p>
<p>I followed the <a href="https://www.terraform.io/docs/providers/google/d/datasource_client_config.html" rel="noreferrer">example in the <code>google_client_config</code> data source documentation</a> and I got </p>
<blockquote>
<p><strong>data.google_container_cluster.cluster.endpoint</strong> is null</p>
</blockquote>
<p>Here is my failed attempt <a href="https://github.com/varshard/gke-cluster-terraform/tree/title-terraform" rel="noreferrer">https://github.com/varshard/gke-cluster-terraform/tree/title-terraform</a></p>
<p><code>cluster.tf</code> is responsible for creating a GKE cluster, which work fine.</p>
<p><code>kubernetes.tf</code> is responsible for managing Kubernetes, which failed to get GKE credential.</p>
| <p>You don't need the <a href="https://www.terraform.io/docs/providers/google/d/google_container_cluster.html" rel="nofollow noreferrer"><code>google_container_cluster</code> data source</a> here at all because the <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html#endpoint" rel="nofollow noreferrer">relevant information</a> is also in the <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html" rel="nofollow noreferrer"><code>google_container_cluster</code> resource</a> that you are creating in the same context.</p>
<p>Data sources are for accessing data about a resource that is created either entirely outside of Terraform or in a different Terraform context (eg different state file and different directory that is <code>terraform apply</code>'d).</p>
<p>I'm not sure how you got into your current state, where the data source is selecting an existing container cluster and then you define a resource to create that container cluster using the outputs of the data source, but this is way overcomplicated and slightly broken - if you destroyed everything and reapplied, it wouldn't work as is.</p>
<p>Instead you should remove the <code>google_container_cluster</code> data source and amend your <code>google_container_cluster</code> resource to instead be:</p>
<pre><code>resource "google_container_cluster" "cluster" {
name = "${var.project}-cluster"
location = var.region
# ...
}
</code></pre>
<p>And then refer to this resource in your <code>kubernetes</code> provider:</p>
<pre><code>provider "kubernetes" {
load_config_file = false
host = "https://${google_container_cluster.cluster.endpoint}"
cluster_ca_certificate = base64decode(google_container_cluster.cluster.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
</code></pre>
|
<p>We have multiple people ( with admin access ) doing the deployments in kubernetes cluster. We are finding it difficult to manage who has modified which object.</p>
<p>We can control the access and privileges using RBAC with roles and role bindings. We are planning to implement well defined roles and rolebindings for different groups.</p>
<p>We would also want to list all objects modified in N days in a specific namespace. Is there a way to display the objects using kubectl? please let me know</p>
| <p>This probably can't be done easily just with kubectl.</p>
<p>But you might look into <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">Kubernetes auditing</a>. It causes the API server to record all requests to the API, and you can query them in different ways. For example, it should be possible to query the audit logs for all the objects that have been specified in the last N days.</p>
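<p>As a sketch, an audit <code>Policy</code> that records all write operations at the Metadata level (who changed which object, and when) while ignoring read-only traffic; wiring it up via the API server's <code>--audit-policy-file</code> and <code>--audit-log-path</code> flags is cluster-specific:</p>
<pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record who created/updated/patched/deleted which object,
# without storing full request bodies
- level: Metadata
  verbs: ["create", "update", "patch", "delete"]
# Skip read-only requests to keep the audit log small
- level: None
  verbs: ["get", "list", "watch"]
</code></pre>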
|
<p>I am trying to debug the storage usage in my kubernetes pod. I have seen the pod get evicted because of Disk Pressure. When I log in to the running pod, I see the following: </p>
<pre><code>Filesystem Size Used Avail Use% Mounted on
overlay 30G 21G 8.8G 70% /
tmpfs 64M 0 64M 0% /dev
tmpfs 14G 0 14G 0% /sys/fs/cgroup
/dev/sda1 30G 21G 8.8G 70% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 14G 12K 14G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 14G 0 14G 0% /proc/acpi
tmpfs 14G 0 14G 0% /proc/scsi
tmpfs 14G 0 14G 0% /sys/firmware
root@deploy-9f45856c7-wx9hj:/# du -sh /
du: cannot access '/proc/1142/task/1142/fd/3': No such file or directory
du: cannot access '/proc/1142/task/1142/fdinfo/3': No such file or directory
du: cannot access '/proc/1142/fd/4': No such file or directory
du: cannot access '/proc/1142/fdinfo/4': No such file or directory
227M /
root@deploy-9f45856c7-wx9hj:/# du -sh /tmp
11M /tmp
root@deploy-9f45856c7-wx9hj:/# du -sh /dev
0 /dev
root@deploy-9f45856c7-wx9hj:/# du -sh /sys
0 /sys
root@deploy-9f45856c7-wx9hj:/# du -sh /etc
1.5M /etc
root@deploy-9f45856c7-wx9hj:/#
</code></pre>
<p>As we can see, 21G is consumed, but when I try to run <code>du -sh</code> it just returns 227M. I would like to find out who (which directory) is consuming the space.</p>
| <p>According to the docs <a href="https://github.com/kubernetes/website/blob/master/content/en/docs/tasks/administer-cluster/out-of-resource.md#node-conditions" rel="nofollow noreferrer">Node Conditions</a>, <code>DiskPressure</code> has to do with conditions on the node causing kubelet to evict the pod. It doesn't necessarily mean it's the pod that caused the conditions. </p>
<blockquote>
<p>DiskPressure</p>
<p>Available disk space and inodes on either the node’s root filesystem
or image filesystem has satisfied an eviction threshold</p>
</blockquote>
<p>You may want to investigate what's happening on the node instead. </p>
|
<p>How can I change the restart policy of a running pod from <code>Always</code> to <code>Never</code>? I've tried <code>kubectl edit</code> but it does not allow me to change it. </p>
| <p>If <code>kubectl edit</code> is not allowing you to change the <code>restartPolicy</code>, that is because <code>restartPolicy</code> is immutable on a running Pod. Also note that a Deployment's pod template only accepts <code>Always</code>; for <code>Never</code> or <code>OnFailure</code> you need a bare Pod or a Job.</p>
<p>To change it, delete the deployment:
<code>kubectl delete deployment <deployment_name></code></p>
<p>Edit the yaml file with your changes, then create the deployment again:
<code>kubectl create -f deployment.yaml</code></p>
<p><strong>Or</strong> </p>
<p>Edit the yaml file with your changes and apply them to the deployment:
<code>kubectl apply -f deployment.yaml</code></p>
<p>This will delete the old pod and create a new pod with the new changes.</p>
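<p>For reference, <code>restartPolicy</code> is a Pod-level field; in a bare Pod manifest it looks like the sketch below (valid for Pods and Jobs, whereas Deployment pod templates only accept <code>Always</code>). The name, image, and command are illustrative:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: one-shot-pod          # illustrative name
spec:
  restartPolicy: Never        # the pod will not be restarted after it exits
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo done"]
</code></pre>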
|
<p>I have a workload which is exposed through <code>NodePort</code> service with the name <code>online-forms-lib-service</code>. This workload has the <code>/version</code> route.</p>
<p>Also I have the following ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: online-forms-lib-service
servicePort: 80
path: /formslib/
</code></pre>
<p>The problem is, the <code>/version</code> route is not available at:</p>
<pre><code>example.com/formslib/version
</code></pre>
<p>How to solve this?</p>
<p><strong>Update</strong> </p>
<p>It goes to the application root when I call: </p>
<pre><code>example.com/formslib/
</code></pre>
<p>Adding any path from there directs me to the default backend</p>
<p><strong>Update</strong>
Added the annotation: </p>
<pre><code> annotations:
ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>Still the same behaviour.</p>
| <p>Actually, the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">Ingress</a> resource relies on an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress controller</a> implemented in the K8s cluster to propagate Ingress rules and supply load-balancing and traffic-routing features. </p>
<p>As @Utku Özdemir mentioned in the comment, most of the cloud providers on the market offer native Ingress controller support, e.g. <a href="https://github.com/kubernetes/ingress-gce" rel="nofollow noreferrer">Ingress-gce</a> in <em>Google Cloud</em>, making it possible to create an external <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">HTTP(S) load balancer</a> via a particular Ingress resource.</p>
<p>In addition, you might find a lot of third-party Ingress controller <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers" rel="nofollow noreferrer">solutions</a> that can potentially expand <em>L7 network traffic</em> functionality, depending on the client's needs.</p>
<p>I've checked your current Ingress configuration in a similar scenario, and I got proper sub-path routing working by appending a wildcard <code>*</code> matching rule to the root application path for the particular backend service:</p>
<pre><code>- backend:
serviceName: online-forms-lib-service
servicePort: 80
path: /formslib/*
</code></pre>
|
<p>I have two jobs that will run only once. One is called <code>Master</code> and one is called <code>Slave</code>. As the name implies, the Master pod needs some info from the Slave, then queries some API online.
A simple scheme of how they communicate:</p>
<pre><code>Slave --- port 6666 ---> Master ---- port 8888 ---> internet:www.example.com
</code></pre>
<p>To achieve this I created 5 yaml file:</p>
<ol>
<li>A job-master.yaml for creating a Master pod:</li>
</ol>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: master-job
labels:
app: master-job
role: master-job
spec:
template:
metadata:
name: master
spec:
containers:
- name: master
image: registry.gitlab.com/example
command: ["python", "run.py", "-wait"]
ports:
- containerPort: 6666
imagePullSecrets:
- name: regcred
restartPolicy: Never
</code></pre>
<ol start="2">
<li>A service (ClusterIP) that allows the Slave to send info to the Master node on port 6666:</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: master-service
labels:
app: master-job
role: master-job
spec:
selector:
app: master-job
role: master-job
ports:
- protocol: TCP
port: 6666
targetPort: 6666
</code></pre>
<ol start="3">
<li>A service (NodePort) that will allow the master to fetch info online:</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: master-np-service
spec:
type: NodePort
selector:
app: master-job
ports:
- protocol: TCP
port: 8888
targetPort: 8888
nodePort: 31000
</code></pre>
<ol start="4">
<li>A job for the Slave pod:</li>
</ol>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: slave-job
labels:
app: slave-job
spec:
template:
metadata:
name: slave
spec:
containers:
- name: slave
image: registry.gitlab.com/example2
ports:
- containerPort: 6666
#command: ["python", "run.py", "master-service.default.svc.cluster.local"]
#command: ["python", "run.py", "10.106.146.155"]
command: ["python", "run.py", "master-service"]
imagePullSecrets:
- name: regcred
restartPolicy: Never
</code></pre>
<ol start="5">
<li>And a service (ClusterIP) that allows the Slave pod to send the info to the Master pod:</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: slave-service
spec:
selector:
app: slave-job
ports:
- protocol: TCP
port: 6666
targetPort: 6666
</code></pre>
<p>But no matter what I do (as can be seen in the commented lines of the job_slave.yaml file) they cannot communicate with each other, except when I put the IP of the Master node in the command section of the Slave. Also, the Master node cannot communicate with the outside world (even though I created a <code>configMap</code> with <code>upstreamNameservers: | ["8.8.8.8"]</code>).
Everything is running in a minikube environment.
But I cannot pinpoint what my problem is. Any help is appreciated. </p>
| <p>Your Job spec has two parts: a description of the Job itself, and a description of the Pods it creates. (Using a Job here is a little odd and I'd probably pick a Deployment instead, but the same applies here.) Where the Service object has a <code>selector:</code> that matches the <code>labels:</code> of the Pods.</p>
<p>In the YAML files you show the Jobs have correct labels but the generated Pods don't. You need to add (potentially duplicate) labels to the pod spec part:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: master-job
labels: {...}
spec:
template:
metadata:
# name: will get ignored here
labels:
app: master-job
role: master-job
</code></pre>
<p>You should be able to verify with <code>kubectl describe service master-service</code>. At the end of its output will be a line that says <code>Endpoints:</code>. If the Service selector and the Pod labels don't match this will say <code><none></code>; if they do match you will see the Pod IP addresses.</p>
<p>(You don't need a <code>NodePort</code> service unless you need to <em>accept</em> requests from outside the cluster; it could be the same as the service you use to accept requests from within the cluster. You don't need to include objects' types in their names. Nothing you've shown has any obvious relevance to communication out of the cluster.)</p>
|
<p>I could not find how to obtain information about memory consumption on a Kubernetes node with the kubernetes-client library in Python. I know how to obtain this information with the <code>kubectl</code> command.</p>
<p>Could you provide me with a piece of code showing how I can do this via the lib?</p>
<p>Thank you.</p>
| <h1>Getting started</h1>
<p><a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md" rel="nofollow noreferrer">kubernetes-client</a> contains pieces of code that you need:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
def main():
# Configs can be set in Configuration class directly or using helper
# utility. If no argument provided, the config will be loaded from
# default location.
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" %
(i.status.pod_ip, i.metadata.namespace, i.metadata.name))
if __name__ == '__main__':
main()
</code></pre>
<h1>Method</h1>
<p>To get pod metrics, try this <strong>beta</strong> class:
<a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V2beta2PodsMetricStatus.md" rel="nofollow noreferrer">V2beta2PodsMetricStatus</a></p>
<h1>Metrics</h1>
<p>According to the <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md" rel="nofollow noreferrer"><code>k8s</code> documentation</a>, your metrics are supposed to be:</p>
<ul>
<li><p><code>kube_pod_container_resource_limits_memory_bytes</code></p>
</li>
<li><p><code>kube_pod_container_resource_requests_memory_bytes</code></p>
</li>
</ul>
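<p>For node (rather than pod) memory specifically, a sketch that assumes metrics-server is installed in the cluster: node metrics can be fetched through the <code>CustomObjectsApi</code>, and the memory usage comes back as a Kubernetes quantity string such as <code>"3918088Ki"</code>, so a small helper to convert it to bytes is handy. The API call needs a live cluster and is shown commented out:</p>

```python
# With metrics-server installed, node metrics are exposed through the
# CustomObjectsApi (this part needs a live cluster, so it is commented out):
#
#   from kubernetes import client, config
#   config.load_kube_config()
#   api = client.CustomObjectsApi()
#   nodes = api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")
#   for item in nodes["items"]:
#       print(item["metadata"]["name"], item["usage"]["memory"])
#
# The memory usage is a Kubernetes quantity string such as "3918088Ki";
# this helper converts it to bytes.

_SUFFIXES = {
    "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4,  # binary
    "k": 1000, "M": 1000**2, "G": 1000**3, "T": 1000**4,      # decimal
}

def quantity_to_bytes(quantity: str) -> int:
    """Convert a Kubernetes memory quantity (e.g. '512Mi') to bytes."""
    for suffix, factor in _SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # no suffix means plain bytes

print(quantity_to_bytes("3918088Ki"))  # 4012122112
```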
|
<p>We could deploy applications using 'Helm Charts' with</p>
<pre><code>helm install --name the-release helm/the-service-helm --namespace myns
</code></pre>
<p>And we could <em>'Rolling Upgrade'</em> the deployment using,</p>
<pre><code>helm upgrade --recreate-pods the-release helm/the-service-helm --namespace myns
</code></pre>
<p>Is there a way to use <em>'Helm Charts'</em> to achieve <strong><em>'Blue/Green'</em></strong> Deployments?</p>
| <h1>Let's start from definitions</h1>
<p>Since there are <a href="https://thenewstack.io/deployment-strategies/" rel="nofollow noreferrer">many deployment strategies</a>, let's start from the definition.</p>
<p>As per <a href="https://martinfowler.com/bliki/BlueGreenDeployment.html" rel="nofollow noreferrer">Martin Fowler</a>:</p>
<blockquote>
<p>The blue-green deployment approach does this by ensuring you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.</p>
</blockquote>
<h1><code>Blue/Green</code> is not recommended in Helm. But there are workaround solutions</h1>
<ul>
<li><p>As per to <a href="https://github.com/helm/helm/issues/3518" rel="nofollow noreferrer">helm issue #3518</a>, it's not recommended to use <code>Helm</code> for <code>blue/green</code> or <code>canary</code> deployment.</p>
</li>
<li><p>There are at least 3 solutions based on top of Helm, see below</p>
</li>
<li><p>However there is a Helm chart for that case.</p>
</li>
</ul>
<h2>Helm itself (TL;DR: not recommended)</h2>
<p>Helm itself is not intended for the case. See their explanation:</p>
<blockquote>
<p><a href="https://github.com/helm/helm/issues/3518" rel="nofollow noreferrer">direct support for blue / green deployment pattern in helm · Issue #3518 · helm/helm</a></p>
</blockquote>
<blockquote>
<p>Helm works more in the sense of a traditional package manager, upgrading charts from one version to the next in a graceful manner (thanks to pod liveness/readiness probes and deployment update strategies), much like how one expects something like <code>apt upgrade</code> to work. Blue/green deployments are a very different beast compared to the package manager style of upgrade workflows; blue/green sits at a level higher in the toolchain because the use cases around these deployments require step-in/step-out policies, gradual traffic migrations and rollbacks. Because of that, we decided that blue/green deployments are something out of scope for Helm, though a tool that utilizes Helm under the covers (or something parallel like istio) could more than likely be able to handle that use case.</p>
</blockquote>
<h2>Other solutions based on <code>Helm</code></h2>
<p>There are at least three solution based on top of <code>Helm</code>, described and compared <a href="https://blog.csanchez.org/2019/01/22/progressive-delivery-in-kubernetes-blue-green-and-canary-deployments/" rel="nofollow noreferrer">here</a>:</p>
<ul>
<li><a href="https://github.com/carlossg/croc-hunter-jenkinsx-serverless/tree/master/shipper" rel="nofollow noreferrer">Shipper</a></li>
<li><a href="https://github.com/carlossg/croc-hunter-jenkinsx-serverless/tree/master/istio" rel="nofollow noreferrer">Istio</a></li>
<li><a href="https://github.com/carlossg/croc-hunter-jenkinsx-serverless/tree/master/flagger" rel="nofollow noreferrer">Flagger</a>.</li>
</ul>
<h3><a href="https://github.com/carlossg/croc-hunter-jenkinsx-serverless/tree/master/shipper" rel="nofollow noreferrer">Shipper</a> by Booking.com - <strong>DEPRECATED</strong></h3>
<p><a href="https://github.com/bookingcom/shipper" rel="nofollow noreferrer">bookingcom/shipper</a>: Kubernetes native multi-cluster canary or blue-green rollouts using Helm</p>
<blockquote>
<p>It does this by relying on Helm, and using Helm Charts as the unit of configuration deployment. Shipper's Application object provides an interface for specifying values to a Chart just like the helm command line tool.
Shipper consumes Charts directly from a Chart repository like ChartMuseum, and installs objects into clusters itself. This has the nice property that regular Kubernetes authentication and RBAC controls can be used to manage access to Shipper APIs.</p>
</blockquote>
<p>Kubernetes native multi-cluster canary or blue-green rollouts using Helm</p>
<h3><a href="https://github.com/carlossg/croc-hunter-jenkinsx-serverless/tree/master/istio" rel="nofollow noreferrer">Istio</a></h3>
<p>You can try something <a href="http://dougbtv.com/nfvpe/2017/06/05/istio-deploy/" rel="nofollow noreferrer">like this</a>:</p>
<pre><code>kubectl create -f <(istioctl kube-inject -f cowsay-v1.yaml) # deploy v1
</code></pre>
<pre><code>kubectl create -f <(istioctl kube-inject -f cowsay-v2.yaml) # deploy v2
</code></pre>
<h3><a href="https://github.com/carlossg/croc-hunter-jenkinsx-serverless/tree/master/flagger" rel="nofollow noreferrer">Flagger</a>.</h3>
<p>There is guide written by Flagger team: <a href="https://docs.flagger.app/usage/blue-green" rel="nofollow noreferrer">Blue/Green Deployments - Flagger</a>
This guide shows you how to automate Blue/Green deployments with Flagger and Kubernetes</p>
<h2>You might try Helm itself</h2>
<p>Also, as <a href="https://stackoverflow.com/users/11032044/kamol-hasan">Kamol Hasan</a> recommended, you can try that chart: <a href="https://github.com/puneetsaraswat/HelmCharts/tree/master/blue-green" rel="nofollow noreferrer">puneetsaraswat/HelmCharts/blue-green</a>.</p>
<p><a href="https://github.com/puneetsaraswat/HelmCharts/blob/master/blue-green/templates/deployment-blue.yaml" rel="nofollow noreferrer"><code>blue.yml</code> sample</a></p>
<pre class="lang-yaml prettyprint-override"><code>{{ if .Values.blue.enabled }}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "blue-green.fullname" . }}-blue
labels:
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app: {{ template "blue-green.name" . }}
spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: {{ template "blue-green.name" . }}
release: {{ .Release.Name }}
slot: blue
spec:
containers:
- name: {{ template "blue-green.name" . }}-blue
image: nginx:stable
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
protocol: TCP
# This (and the volumes section below) mount the config map as a volume.
volumeMounts:
- mountPath: /usr/share/nginx/html
name: wwwdata-volume
volumes:
- name: wwwdata-volume
configMap:
name: {{ template "blue-green.fullname" . }}
{{ end }}
</code></pre>
<p>Medium blog post: <a href="https://medium.com/@saraswatpuneet/blue-green-deployments-using-helm-charts-93ec479c0282" rel="nofollow noreferrer">Blue/Green Deployments using Helm Charts</a></p>
|
<p>I need to use an environment variable inside my kubeconfig file to point the <code>NODE_IP</code> of the Kubernetes API server. </p>
<p>My config is:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://$NODE_IP:6443
name: docker-for-desktop-cluster
contexts:
- context:
cluster: docker-for-desktop-cluster
user: docker-for-desktop
name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
user:
......
</code></pre>
<p>But it seems like the kubeconfig file is not getting its variables rendered when I run the command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl --kubeconfig mykubeConfigFile get pods
</code></pre>
<p>It complains as below:</p>
<blockquote>
<p>Unable to connect to the server: dial tcp: lookup $NODE_IP: no such host</p>
</blockquote>
<p>Did anyone try to do something like this or is it possible to make it work?</p>
<p>Thanks in advance </p>
| <p>This <a href="https://serverfault.com/questions/791715/using-environment-variables-in-kubernetes-deployment-spec">thread</a> contains explanations and answers:</p>
<ul>
<li>... either <a href="https://github.com/kubernetes/kubernetes/issues/23896" rel="nofollow noreferrer">wait</a> <a href="https://github.com/kubernetes/kubernetes/issues/23896" rel="nofollow noreferrer">Implement templates · Issue #23896 · kubernetes/kubernetes</a> for <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/templates.md" rel="nofollow noreferrer">the implementation of the templating proposal</a> in <code>k8s</code> (<a href="https://github.com/kubernetes/kubernetes/pull/24777" rel="nofollow noreferrer">not</a> merged yet)</li>
<li>... or preprocess your yaml with tools like:
<ul>
<li><code>envsubst</code>: </li>
</ul>
<pre class="lang-sh prettyprint-override"><code>export NODE_IP="127.0.11.1"
# use process substitution so kubectl reads the substituted config,
# not the raw file
kubectl --kubeconfig <(envsubst < mykubeConfigFile.yml) get pods
</code></pre>
<ul>
<li><code>sed</code>: </li>
</ul>
<pre class="lang-sh prettyprint-override"><code>sed 's/\$NODE_IP/127.0.11.1/' mykubeConfigFile.yml > /tmp/kubeconfig
kubectl --kubeconfig /tmp/kubeconfig get pods
</code></pre></li>
</ul>
|
<p>I've also posted this as a question on the official Elastic forum, but that doesn't seem super frequented.</p>
<p><a href="https://discuss.elastic.co/t/x-pack-check-on-oss-docker-image/198521" rel="nofollow noreferrer">https://discuss.elastic.co/t/x-pack-check-on-oss-docker-image/198521</a></p>
<p>At any rate, here's the query:</p>
<p>We're running a managed AWS Elasticsearch cluster — not ideal, but that's life — and run most of the rest of our stuff with Kubernetes. We recently upgraded our cluster to Elasticsearch 7, so I wanted to upgrade the Filebeat service we have running on the Kubernetes nodes to capture logs.</p>
<p>I've specified <code>image: docker.elastic.co/beats/filebeat-oss:7.3.1</code> in my daemon configuration, but I still see</p>
<pre><code>Connection marked as failed because the onConnect callback failed:
request checking for ILM availability failed:
401 Unauthorized: {"Message":"Your request: '/_xpack' is not allowed."}
</code></pre>
<p>in the logs. Same thing when I've tried other 7.x images. A bug? Or something that's new in v7?</p>
<p>The license file is an Apache License, and the build when I do <code>filebeat version</code> inside the container is <code>a4be71b90ce3e3b8213b616adfcd9e455513da45</code>.</p>
| <p>It turns out that starting in one of the 7.x versions they turned on index lifecycle management checks by default. ILM (index lifecycle management) is an X-Pack feature, so turning this on by default means that Filebeat will do an X-Pack check by default.</p>
<p>This can be fixed by adding <code>setup.ilm.enabled: false</code> to the Filebeat configuration. So, not a bug per se in the OSS Docker build.</p>
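<p>In <code>filebeat.yml</code> that looks like the snippet below; the output host is illustrative:</p>
<pre><code># Disable the ILM availability check (ILM is an X-Pack feature,
# not available on the AWS-managed / OSS distribution)
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["https://my-aws-es-endpoint:443"]   # illustrative endpoint
</code></pre>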
|
<p>I have a simple script which watches Kubernetes events and then publishes a message to a NATS server:</p>
<pre><code>#!/usr/bin/env python
import asyncio
import argparse
import json
import logging
import os
from kubernetes import client, config, watch
from nats.aio.client import Client as NATS
from nats.aio.errors import ErrConnectionClosed, ErrTimeout, ErrNoServers
# monkey patch
from kube import local_load_oid_token
config.kube_config.KubeConfigLoader._load_oid_token = local_load_oid_token
parser = argparse.ArgumentParser()
parser.add_argument('--in-cluster', help="use in cluster kubernetes config", action="store_true")
parser.add_argument('-a', '--nats-address', help="address of nats cluster", default=os.environ.get('NATS_ADDRESS', None))
parser.add_argument('-d', '--debug', help="enable debug logging", action="store_true")
parser.add_argument('-p', '--publish-events', help="publish events to NATS", action="store_true")
parser.add_argument('--output-events', help="output all events to stdout", action="store_true", dest='enable_output')
parser.add_argument('--connect-timeout', help="NATS connect timeout (s)", type=int, default=10, dest='conn_timeout')
parser.add_argument('--max-reconnect-attempts', help="number of times to attempt reconnect", type=int, default=1, dest='conn_attempts')
parser.add_argument('--reconnect-time-wait', help="how long to wait between reconnect attempts", type=int, default=10, dest='conn_wait')
args = parser.parse_args()
logger = logging.getLogger('script')
ch = logging.StreamHandler()
if args.debug:
logger.setLevel(logging.DEBUG)
ch.setLevel(logging.DEBUG)
else:
logger.setLevel(logging.INFO)
ch.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
if not args.nats_address:
logger.critical("No NATS cluster specified")
exit(parser.print_usage())
else:
logger.debug("Using nats address: %s", args.nats_address)
if args.in_cluster:
config.load_incluster_config()
else:
try:
config.load_kube_config()
except Exception as e:
logger.critical("Error creating Kubernetes configuration: %s", e)
exit(2)
v1 = client.CoreV1Api()
async def run(loop):
nc = NATS()
try:
await nc.connect(args.nats_address, loop=loop, connect_timeout=args.conn_timeout, max_reconnect_attempts=args.conn_attempts, reconnect_time_wait=args.conn_wait)
logger.info("Connected to NATS at %s..." % (nc.connected_url.netloc))
except Exception as e:
exit(e)
#print("Connected to NATS at {}...".format(nc.connected_url.netloc))
async def get_node_events():
w = watch.Watch()
for event in w.stream(v1.list_node):
accepted = ["DELETED"]
if event['type'] in accepted:
logger.info("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))
msg = {'type':event['type'],'object':event['raw_object']}
logger.debug("Raw Message: %s" % msg)
await nc.publish("k8s_events", json.dumps(msg).encode('utf-8'))
if args.enable_output:
print(json.dumps(msg))
await get_node_events()
await nc.flush(timeout=3)
await nc.close()
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.create_task(run(loop))
try:
loop.run_forever()
except KeyboardInterrupt:
logger.info('keyboard shutdown')
tasks = asyncio.gather(*asyncio.Task.all_tasks(loop=loop), loop=loop, return_exceptions=True)
tasks.add_done_callback(lambda t: loop.stop())
tasks.cancel()
# Keep the event loop running until it is either destroyed or all
# tasks have really terminated
while not tasks.done() and not loop.is_closed():
loop.run_forever()
finally:
logger.info('closing event loop')
loop.close()
</code></pre>
<p>When running this with the event publishing enabled, I can see the event JSON being output.</p>
<p>However, for some reason my receiver isn't actually getting a NATS message for the deletion event.</p>
<ul>
<li>How can I verify that the message made it onto the topic? Is there anything I can add in the code which validates that the message was published?</li>
<li>Is my asyncio logic correct here?</li>
<li>Why might the deletion event not make it onto the topic with this logic?</li>
</ul>
| <p>You can validate message delivery through the NATS server logs. When configuring the NATS server, temporarily enable debug/trace logging with the <strong>-DV</strong> flag passed to the server, or in the NATS configuration file with:</p>
<pre><code>debug=true
trace=true
</code></pre>
<p>You should see something like this:</p>
<pre><code>[31070] 2019/09/10 13:34:40.426198 [DBG] 127.0.0.1:53203 - cid:6 - Client connection created
[31070] 2019/09/10 13:34:40.426582 [TRC] 127.0.0.1:53203 - cid:6 - <<- [CONNECT {"verbose":false,"pedantic":false,"tls_required":false,"name":"NATS Sample Subscriber","lang":"go","version":"1.7.0","protocol":1,"echo":true}]
[31070] 2019/09/10 13:34:40.426614 [TRC] 127.0.0.1:53203 - cid:6 - <<- [PING]
[31070] 2019/09/10 13:34:40.426625 [TRC] 127.0.0.1:53203 - cid:6 - ->> [PONG]
[31070] 2019/09/10 13:34:40.426804 [TRC] 127.0.0.1:53203 - cid:6 - <<- [SUB k8s_events 1]
[31070] 2019/09/10 13:34:40.426821 [TRC] 127.0.0.1:53203 - cid:6 - <<- [PING]
[31070] 2019/09/10 13:34:40.426827 [TRC] 127.0.0.1:53203 - cid:6 - ->> [PONG]
[31070] 2019/09/10 13:34:44.167844 [DBG] ::1:53206 - cid:7 - Client connection created
[31070] 2019/09/10 13:34:44.168352 [TRC] ::1:53206 - cid:7 - <<- [CONNECT {"verbose":false,"pedantic":false,"tls_required":false,"name":"NATS Sample Publisher","lang":"go","version":"1.7.2","protocol":1,"echo":true}]
[31070] 2019/09/10 13:34:44.168383 [TRC] ::1:53206 - cid:7 - <<- [PING]
[31070] 2019/09/10 13:34:44.168390 [TRC] ::1:53206 - cid:7 - ->> [PONG]
[31070] 2019/09/10 13:34:44.168594 [TRC] ::1:53206 - cid:7 - <<- [PUB k8s_events 11]
[31070] 2019/09/10 13:34:44.168607 [TRC] ::1:53206 - cid:7 - <<- MSG_PAYLOAD: ["{json data}"]
[31070] 2019/09/10 13:34:44.168623 [TRC] 127.0.0.1:53203 - cid:6 - ->> [MSG k8s_events 1 11]
[31070] 2019/09/10 13:34:44.168648 [TRC] ::1:53206 - cid:7 - <<- [PING]
[31070] 2019/09/10 13:34:44.168653 [TRC] ::1:53206 - cid:7 - ->> [PONG]
</code></pre>
<p>Using the connection ID, you can see that connection ID 7 published 11 bytes to <code>k8s_events</code> (the protocol message <code>PUB k8s_events 11</code> with the message payload following it), and connection ID 6 (the subscriber) received the message (<code>MSG k8s_events 1 11</code>).</p>
<p>This is one way you can check that your client is publishing the message and your subscribers are listening to the correct subject.</p>
|
<p>I am trying to deploy Kube State Metrics into the <code>kube-system</code> namespace in my EKS Cluster (eks.4) running Kubernetes v1.14.</p>
<p><strong>Kubernetes Connection</strong></p>
<pre><code>provider "kubernetes" {
host = var.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster_auth.token
cluster_ca_certificate = base64decode(var.cluster.certificate)
load_config_file = true
}
</code></pre>
<p><strong>Deployment Manifest (as .tf)</strong></p>
<pre><code>resource "kubernetes_deployment" "kube_state_metrics" {
metadata {
name = "kube-state-metrics"
namespace = "kube-system"
labels = {
k8s-app = "kube-state-metrics"
}
}
spec {
replicas = 1
selector {
match_labels = {
k8s-app = "kube-state-metrics"
}
}
template {
metadata {
labels = {
k8s-app = "kube-state-metrics"
}
}
spec {
container {
name = "kube-state-metrics"
image = "quay.io/coreos/kube-state-metrics:v1.7.2"
port {
name = "http-metrics"
container_port = 8080
}
port {
name = "telemetry"
container_port = 8081
}
liveness_probe {
http_get {
path = "/healthz"
port = "8080"
}
initial_delay_seconds = 5
timeout_seconds = 5
}
readiness_probe {
http_get {
path = "/"
port = "8080"
}
initial_delay_seconds = 5
timeout_seconds = 5
}
}
service_account_name = "kube-state-metrics"
}
}
}
}
</code></pre>
<p>I have deployed all the required RBAC manifests from <a href="https://github.com/kubernetes/kube-state-metrics/tree/master/kubernetes" rel="nofollow noreferrer">https://github.com/kubernetes/kube-state-metrics/tree/master/kubernetes</a> as well - redacted here for brevity.</p>
<p>When I run <code>terraform apply</code> on the deployment above, the Terraform output is as follows:
<code>kubernetes_deployment.kube_state_metrics: Still creating... [6m50s elapsed]</code></p>
<p>Eventually timing out at 10m.</p>
<p>Here are the outputs of the logs for the <code>kube-state-metrics</code> pod</p>
<pre><code>I0910 23:41:19.412496 1 main.go:140] metric white-blacklisting: blacklisting the following items:
W0910 23:41:19.412535 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0910 23:41:19.412565 1 client_config.go:546] error creating inClusterConfig, falling back to default config: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
F0910 23:41:19.412782 1 main.go:148] Failed to create client: invalid configuration: no configuration has been provided
</code></pre>
| <p>Adding the following to the <code>spec</code> resulted in a successful deployment.</p>
<pre><code>automount_service_account_token = true
</code></pre>
<p>For posterity:</p>
<pre><code>resource "kubernetes_deployment" "kube_state_metrics" {
metadata {
name = "kube-state-metrics"
namespace = "kube-system"
labels = {
k8s-app = "kube-state-metrics"
}
}
spec {
replicas = 1
selector {
match_labels = {
k8s-app = "kube-state-metrics"
}
}
template {
metadata {
labels = {
k8s-app = "kube-state-metrics"
}
}
spec {
automount_service_account_token = true
container {
name = "kube-state-metrics"
image = "quay.io/coreos/kube-state-metrics:v1.7.2"
port {
name = "http-metrics"
container_port = 8080
}
port {
name = "telemetry"
container_port = 8081
}
liveness_probe {
http_get {
path = "/healthz"
port = "8080"
}
initial_delay_seconds = 5
timeout_seconds = 5
}
readiness_probe {
http_get {
path = "/"
port = "8080"
}
initial_delay_seconds = 5
timeout_seconds = 5
}
}
service_account_name = "kube-state-metrics"
}
}
}
}
</code></pre>
|
<p>I also want to see the full logs of kube-scheduler (the logs emitted by its Go source files). I tried different methods:</p>
<ul>
<li>using <code>kubectl logs</code>: this shows the event logs of kube-scheduler but not all of its logs.</li>
<li>using <code>docker logs [kube-scheduler container id]</code>: it does not have any logs.</li>
<li><code>journalctl -u kubelet</code>: only shows the logs of kubelet.</li>
</ul>
<p>None of these work. Please let me know if you have found a way to view these logs.</p>
| <p>Check <code>/var/log/kube-scheduler.log</code> on the master.</p>
<p>see also:
<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/</a></p>
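<p>If that log file isn't there and the scheduler runs as a static pod (as in kubeadm-based clusters), you can try reading its container logs instead; a sketch, where the pod name suffix depends on your master node's hostname:</p>
<pre><code>kubectl -n kube-system get pods | grep kube-scheduler
kubectl -n kube-system logs kube-scheduler-&lt;master-hostname&gt;
</code></pre>
<p>The verbosity of those logs can be raised by adding a <code>--v</code> flag to the scheduler's manifest (typically <code>/etc/kubernetes/manifests/kube-scheduler.yaml</code>).</p>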
|
<p>I found that we can create subcharts and conditionally include them as described here: <a href="https://stackoverflow.com/questions/54032974/helm-conditionally-install-subchart">Helm conditionally install subchart</a></p>
<p>I have just one template that I want conditionally include in my chart but I could not find anything in the docs. Is there such feature?</p>
| <p>I discovered that empty templates are not loaded, so I solved it by wrapping my YAML file content in an <code>if</code> condition:</p>
<pre><code>{{ if .Values.something }}
content of yaml file
{{ end }}
</code></pre>
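<p>The flag itself can live in <code>values.yaml</code> with a default, so the template is skipped unless you opt in. A sketch, assuming a hypothetical <code>something</code> flag:</p>
<pre><code># values.yaml (template renders empty and is skipped by default)
something: false
</code></pre>
<p>Enabling it at install time is then <code>helm install . --set something=true</code>.</p>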
|
<p>It is not possible to join master nodes without having <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#steps-for-the-first-control-plane-node" rel="nofollow noreferrer">set a <code>controlPlaneEndpoint</code></a>:</p>
<blockquote>
<p>error execution phase preflight:<br>
One or more conditions for hosting a new control plane instance is not satisfied.<br>
unable to add a new control plane instance a cluster that doesn't have a stable controlPlaneEndpoint address<br>
Please ensure that:<br>
* The cluster has a stable controlPlaneEndpoint address.</p>
</blockquote>
<p>But if you instead join a worker node (i.e. without <code>--control-plane</code>), then it becomes aware not only of the other nodes in the cluster, but also of which ones are masters.</p>
<p>This is because the <code>mark-control-plane</code> phase does:</p>
<blockquote>
<p>Marking the node as control-plane by adding the label "node-role.kubernetes.io/master=''"
Marking the node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]</p>
</blockquote>
<p>So couldn't masters (<code>--control-plane</code>) join the cluster and use the role label to <em>discover</em> the other control plane nodes?</p>
<p>Is there any such plugin or other way of configuring this behaviour to avoid separate infrastructure for load balancing the API server?</p>
| <p>Looking at the <a href="https://github.com/kubernetes/kubernetes/blob/63e27a02ed41f9554b547b18d4bffd69f6adf08a/cmd/kubeadm/app/apis/kubeadm/v1beta2/types.go#L70-L80" rel="noreferrer">kubeadm types definition</a> I found this nice description that clearly explains it:</p>
<blockquote>
<p>ControlPlaneEndpoint sets a stable IP address or DNS name for the control plane; it
can be a valid IP address or a RFC-1123 DNS subdomain, both with optional TCP port.
In case the ControlPlaneEndpoint is not specified, the AdvertiseAddress + BindPort
are used; in case the ControlPlaneEndpoint is specified but without a TCP port,
the BindPort is used.
Possible usages are:
e.g. In a cluster with more than one control plane instances, this field should be
assigned the address of the external load balancer in front of the
control plane instances.
e.g. in environments with enforced node recycling, the ControlPlaneEndpoint
could be used for assigning a stable DNS to the control plane.</p>
</blockquote>
<p>This also likely affects the PKI generated by Kubernetes: it needs to know a common name/IP through which you will access the cluster, so that it can be included in the certificates generated for the API servers; otherwise they won't match up correctly.</p>
<p>If you really don't want a load balancer, you might be able to set up a round-robin DNS entry with the IPs of all the control-plane nodes and specify that for the <code>controlPlaneEndpoint</code> value. However, this won't help with failover and redundancy, since failed nodes won't be removed from the record, and some clients might cache the address and not try to re-resolve it, further prolonging any outages.</p>
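<p>For completeness, here is a sketch of how <code>controlPlaneEndpoint</code> is set in a kubeadm configuration file (the DNS name is a placeholder for your load balancer or round-robin record):</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "k8s-api.example.com:6443"
</code></pre>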
<p>Hope this helps.</p>
|
<p>I have the attached CRD in some namespaces. My issue is that the CRD persists even though the namespace in which it lives is deleted!</p>
<pre><code>> oc project
error: the project "uhc-development-15o4llu96oe7rftq8nms5t99djl3cnmh" specified in your config does not exist.
</code></pre>
<p>Please note there are no finalizers on the CRD:</p>
<pre><code>> oc get accountclaim managed-test2005 -o yaml
apiVersion: aws.managed.openshift.io/v1alpha1
kind: AccountClaim
metadata:
creationTimestamp: 2019-05-20T12:38:35Z
generation: 1
labels:
api.openshift.com/id: 15o4llu96oe7rftq8nms5t99djl3cnmh
api.openshift.com/name: managed-test2005
name: managed-test2005
namespace: uhc-development-15o4llu96oe7rftq8nms5t99djl3cnmh
resourceVersion: "13051561"
selfLink: /apis/aws.managed.openshift.io/v1alpha1/namespaces/uhc-development-15o4llu96oe7rftq8nms5t99djl3cnmh/accountclaims/managed-test2005
uid: 2f719fc0-7afc-11e9-964f-001a4a162604
spec:
accountLink: ""
aws:
regions:
- name: us-east-1
awsCredentialSecret:
name: aws
namespace: uhc-development-15o4llu96oe7rftq8nms5t99djl3cnmh
legalEntity:
id: 0
name:
status:
conditions:
- lastProbeTime: 2019-05-20T12:38:35Z
lastTransitionTime: 2019-05-20T12:38:35Z
message: Attempting to claim account
reason: AccountClaimed
status: "True"
type: Unclaimed
state: Ready
</code></pre>
<p>Has anyone encountered this issue before? Any help will be much appreciated.</p>
| <p>Shot in the dark... If you're able to get at the CRD itself, it may have some finalizers against it which stop it from getting deleted. If you edit the CRD and remove those finalizers, you may find it gets deleted.</p>
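<p>To check for and clear finalizers on the resource from the question, something like this should work (use with care, since skipping finalizers can leave external resources orphaned):</p>
<pre><code>kubectl get accountclaim managed-test2005 \
  -n uhc-development-15o4llu96oe7rftq8nms5t99djl3cnmh \
  -o jsonpath='{.metadata.finalizers}'

kubectl patch accountclaim managed-test2005 \
  -n uhc-development-15o4llu96oe7rftq8nms5t99djl3cnmh \
  --type merge -p '{"metadata":{"finalizers":[]}}'
</code></pre>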
|
<p>I am running my monolith application in a Docker container on Kubernetes (GKE).</p>
<p>The application has Python and Node dependencies, and uses webpack for the front-end bundle.</p>
<p>We have implemented CI/CD which is taking around 5-6 min to build & deploy new version to k8s cluster.</p>
<p>The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage.</p>
<p>Webpack takes most of the time, generating the bundle. To build the Docker image I am already using a high-spec worker.</p>
<p>To reduce the time I tried using the <a href="https://cloud.google.com/blog/products/gcp/introducing-kaniko-build-container-images-in-kubernetes-and-google-container-builder-even-without-root-access" rel="nofollow noreferrer">Kaniko builder</a>.</p>
<p>Issue:</p>
<p>Since Docker caches the layers for the Python code, that part works perfectly. But when there is any change in a <code>JS</code> or <code>CSS</code> file we have to regenerate the bundle.</p>
<p>However, when there is a change in a <code>JS</code> or <code>CSS</code> file, instead of generating a new bundle it still uses the cached layer.</p>
<p>Is there any way to force a new bundle build, or reuse the cache, by passing some value to the Dockerfile?</p>
<p>Here is my Dockerfile:</p>
<pre><code>FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
</code></pre>
| <p>I would suggest creating separate build pipelines for your base Docker images, since the npm and pip requirements don't change that frequently.
This will dramatically improve the speed by reducing the time spent accessing the npm and pip registries.</p>
<p>Use a private docker registry (the <a href="https://docs.docker.com/registry/deploying/" rel="nofollow noreferrer">official one</a> or something like <a href="https://goharbor.io/" rel="nofollow noreferrer">VMWare harbor</a> or <a href="https://www.sonatype.com/download-oss-sonatype" rel="nofollow noreferrer">SonaType Nexus OSS</a>).</p>
<p>You store those build images on your registry and use them whenever something on the project changes.</p>
<p>Something like this:</p>
<p><strong>First Docker Builder</strong> // <strong>python-builder:YOUR_TAG</strong> [gitrev, date, etc.)</p>
<pre><code># Build with: docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
</code></pre>
<p><strong>Second Docker Builder</strong> // <strong>js-builder:YOUR_TAG</strong> [gitrev, date, etc.)</p>
<pre><code># Build with: docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
</code></pre>
<p>Your <strong>Application Multi-stage build</strong>:</p>
<pre><code># Build with: docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing to do here; the pip dependencies are already baked into the builder image
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
</code></pre>
<p>A question: why do you install <code>curl</code> and run the <code>pip install -r requirements.txt</code> command again in the final Docker image?
Running <code>apt-get update</code> and install every time <em>without</em> cleaning the apt cache (the <code>/var/cache/apt</code> folder) produces a bigger image.</p>
<p>As suggestion, use the docker build command with the option <strong>--no-cache</strong> to avoid caching result:</p>
<pre><code>docker build --no-cache -t your_image:your_tag -f your_dockerfile .
</code></pre>
<p><strong>Remarks</strong>:</p>
<p>You'll have 3 separate Dockerfiles, as I listed above.
Build the Docker images 1 and 2 <strong>only</strong> if you change your <code>python-pip</code> and <code>node-npm</code> requirements, otherwise keep them fixed for your project.
If any dependency requirement changes, then update the docker image involved and then the multistage one to point to the latest built image.</p>
<p>You should always build only the source code of your project (CSS, JS, python). In this way, you have also guaranteed reproducible builds.</p>
<p>To optimize your environment and copy files across the multi-stage builders, try using <code>virtualenv</code> for the Python build.</p>
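<p>If you'd rather keep a single Dockerfile, another option that matches the "passing some value" idea is a build argument placed right before the bundle step, so Docker invalidates the cache from that layer onward only when the value changes. A sketch (<code>ASSET_VERSION</code> is an arbitrary name):</p>
<pre><code># in the node-build stage, just before generating the bundle
ARG ASSET_VERSION=dev
RUN npm run sass && npm run build
</code></pre>
<p>Then build with <code>docker build --build-arg ASSET_VERSION=$(git rev-parse HEAD) ...</code> so the bundle is rebuilt only when that value changes (for example, per commit).</p>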
|
<p>I have a workload which is exposed through <code>NodePort</code> service with the name <code>online-forms-lib-service</code>. This workload has the <code>/version</code> route.</p>
<p>Also I have the following ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: online-forms-lib-service
servicePort: 80
path: /formslib/
</code></pre>
<p>The problem is, the <code>/version</code> route is not available at:</p>
<pre><code>example.com/formslib/version
</code></pre>
<p>How to solve this?</p>
<p><strong>Update</strong> </p>
<p>It goes to the application root when I call: </p>
<pre><code>example.com/formslib/
</code></pre>
<p>Adding any path from there directs me to the default backend.</p>
<p><strong>Update</strong>
Added the annotation: </p>
<pre><code> annotations:
ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>Still the same behaviour.</p>
| <p>This behaviour is controlled by the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">rewrite</a> annotations.</p>
<pre><code> annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>As of September 2019, the GKE ingress controller doesn't support <code>rewrite</code> rules.</p>
<p><a href="https://github.com/kubernetes/ingress-gce/issues/109" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/issues/109</a></p>
<p>There are no published plans for implementing it either.</p>
<p>The only solution is NGINX or another third-party ingress controller that supports rewrite annotations.</p>
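<p>With the NGINX ingress controller, the ingress from the question would look roughly like this (note that nginx-ingress 0.22.0 and later require an explicit capture group for <code>rewrite-target</code>):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /formslib/(.*)
        backend:
          serviceName: online-forms-lib-service
          servicePort: 80
</code></pre>
<p>With this, <code>example.com/formslib/version</code> is rewritten to <code>/version</code> before it reaches the service.</p>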
|
<p>I'm running an atmoz/sftp deployment for SFTP on GKE. I succeeded in mounting a persistent volume and using a configmap to mount public keys for users, but I can't mount a host key, so every time my container restarts I get a warning that my host key has changed.</p>
<p>I tried mounting it to /etc/ssh and changing sshd_config, and nothing worked; it says <code>file already exists, overwrite? (y/n)</code> and I can't manipulate it because it's inside the container.</p>
<p>And if I try to run a command, any command like echo, the container goes into CrashLoopBackOff.</p>
<p>my configmap:</p>
<pre><code>apiVersion: v1
data:
ssh_host_rsa_key: |
<my key>
kind: ConfigMap
metadata:
name: ssh-host-rsa
namespace: default
</code></pre>
<p>my deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
name: sftp
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: sftp
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: sftp
spec:
containers:
- args:
- client::::sftp
env:
- name: sftp
value: "1"
image: atmoz/sftp
imagePullPolicy: IfNotPresent
name: sftp
ports:
- containerPort: 22
name: sftp
protocol: TCP
resources: {}
securityContext:
capabilities:
add:
- SYS_ADMIN
procMount: Default
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/client/sftp
name: sftp
- mountPath: /home/client/.ssh/keys
name: sftp-public-keys
- mountPath: /etc/ssh
name: ssh-host-ed25519
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 100
terminationGracePeriodSeconds: 30
volumes:
- name: sftp
persistentVolumeClaim:
claimName: sftp-uat
- configMap:
defaultMode: 420
name: sftp-public-keys
name: sftp-public-keys
- configMap:
defaultMode: 420
name: ssh-host-ed25519
name: ssh-host-ed25519
</code></pre>
<p>the echo test: </p>
<pre><code>containers:
- args:
- client::::sftp
env:
- name: sftp
value: "1"
image: atmoz/sftp
command:
- "echo hi"
imagePullPolicy: IfNotPresent
name: sftp
ports:
- containerPort: 22
name: sftp
protocol: TCP
resources: {}
securityContext:
capabilities:
add:
- SYS_ADMIN
procMount: Default
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/client/sftp
name: sftp
- mountPath: /home/client/.ssh/keys
name: sftp-public-keys
- mountPath: /etc/ssh
name: ssh-host-ed25519
</code></pre>
<p>Any ideas?</p>
| <p>"Not sure if you're still looking for a way to get host keys to persist, but mounting host key secrets into their relevant /etc/ssh/ files seems to work for me, eg."</p>
<pre><code>kind: Deployment
...
spec:
template:
spec:
#secrets and config
volumes:
...
- name: sftp-host-keys
secret:
secretName: sftp-host-keys
defaultMode: 0600
...
containers:
#the sftp server itself
- name: sftp
image: atmoz/sftp:latest
...
volumeMounts:
- mountPath: /etc/ssh/ssh_host_ed25519_key
name: sftp-host-keys
subPath: ssh_host_ed25519_key
readOnly: true
- mountPath: /etc/ssh/ssh_host_ed25519_key.pub
name: sftp-host-keys
subPath: ssh_host_ed25519_key.pub
readOnly: true
- mountPath: /etc/ssh/ssh_host_rsa_key
name: sftp-host-keys
subPath: ssh_host_rsa_key
readOnly: true
- mountPath: /etc/ssh/ssh_host_rsa_key.pub
name: sftp-host-keys
subPath: ssh_host_rsa_key.pub
readOnly: true
...
---
apiVersion: v1
kind: Secret
metadata:
name: sftp-host-keys
namespace: sftp
stringData:
ssh_host_ed25519_key: |
-----BEGIN OPENSSH PRIVATE KEY-----
...
-----END OPENSSH PRIVATE KEY-----
ssh_host_ed25519_key.pub: |
ssh-ed25519 AAAA...
ssh_host_rsa_key: |
-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
ssh_host_rsa_key.pub: |
ssh-rsa AAAA...
type: Opaque
</code></pre>
|
<p>I have created a PersistentVolume, PersistentVolumeClaim and StorageClass for Elasticsearch in a persistence.yaml file.</p>
<p>The PersistentVolume, StorageClass and PersistentVolumeClaim are created successfully. The binding is also successful.</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-persistent-volume
spec:
storageClassName: ssd
capacity:
storage: 30G
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: gke-webtech-instance-2-pvc-f5964ddc-d446-11e9-9d1c-42010a800076
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-claim
spec:
storageClassName: ssd
volumeName: pv-persistent-volume
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30G
</code></pre>
<p><a href="https://i.stack.imgur.com/ORorR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ORorR.png" alt="pv-claim_bound_successful"></a></p>
<p>I have also attached the deployment.yaml for elasticsearch below. </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
name: elasticsearch
spec:
type: NodePort
ports:
- name: elasticsearch-port1
port: 9200
protocol: TCP
targetPort: 9200
- name: elasticsearch-port2
port: 9300
protocol: TCP
targetPort: 9300
selector:
app: elasticsearch
tier: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: elasticsearch-application
labels:
app: elasticsearch
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: elasticsearch
tier: elasticsearch
spec:
hostname: elasticsearch
containers:
- image: gcr.io/xxxxxxx/elasticsearch:7.3.1
name: elasticsearch
ports:
- containerPort: 9200
name: elasticport1
- containerPort: 9300
name: elasticport2
env:
- name: discovery.type
value: single-node
volumeMounts:
- mountPath: "/usr/share/elasticsearch/html"
name: pv-volume
volumes:
- name: pv-volume
persistentVolumeClaim:
claimName: pv-claim
</code></pre>
<p>I have created the deployment.yaml file as well. The Elasticsearch application runs successfully without any issues and I am able to hit the Elasticsearch URL. I have run tests, populated data into Elasticsearch, and I am able to view the data.</p>
<p>After deleting the cluster in Kubernetes, I tried to connect with the same disk, which holds the persisted data. Everything looks fine, but I am not able to get the data that was already stored. My data seems lost and I guess I have an empty disk.</p>
| <p>Kubernetes has a <code>reclaimPolicy</code> for persistent volumes, which <em>in most cases</em> defaults to <code>Delete</code>. You can change it with:</p>
<blockquote>
<p>kubectl patch pv &lt;your-pv-name&gt; -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'</p>
</blockquote>
<p>Or simply add <code>persistentVolumeReclaimPolicy: Retain</code> in your PersistentVolume manifest.</p>
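<p>Applied to the PersistentVolume manifest from the question, that looks like:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-persistent-volume
spec:
  storageClassName: ssd
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 30G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: gke-webtech-instance-2-pvc-f5964ddc-d446-11e9-9d1c-42010a800076
    fsType: ext4
</code></pre>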
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/" rel="nofollow noreferrer">Some additional reading about this</a>. </li>
<li><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">And more general in storage</a></li>
</ul>
<p>Edited: As in the comment below, this problem may not be about data being lost. Pasting my comment below:</p>
<p>"I don't think your data is lost. Elasticsearch just needs to index existing data because it doesn't just grab existing stored data. You need to reingest data to elasticsearch or save snapshots regularly or use master, data, client architecture."</p>
|
<p>I can generate the <code>ConfigMap</code> from the directory but they are not translating the template directives or values. Below is an example of the <code>Release.Namespace</code> template directive not being output in the <code>ConfigMap</code>.</p>
<pre><code>.
|____Chart.yaml
|____charts
|____.helmignore
|____templates
| |____my-scripts.yaml
|____values.yaml
|____test-files
|____test1.txt
</code></pre>
<pre><code>---
# templates/myscripts.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-scripts
namespace: {{ .Release.Namespace }}
data:
test.txt: |-
{{ .Files.Get "test-files/test1.txt" | indent 4}}
</code></pre>
<pre><code># test-files/test1.txt
test file
{{ .Release.Namespace }}
</code></pre>
<p>When I run <code>helm install . --dry-run --debug --namespace this-should-print</code> here's what I'm getting vs what I'm expecting:</p>
<p>Actual:</p>
<pre><code>---
# Source: test/templates/my-scripts.yaml
# templates/myscripts.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-scripts
namespace: test
data:
test.txt: |-
# test-files/test1.txt
test file
{{ .Release.Namespace }}
</code></pre>
<p>Expected:</p>
<pre><code>---
# Source: test/templates/my-scripts.yaml
# templates/myscripts.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-scripts
namespace: test
data:
test.txt: |-
# test-files/test1.txt
test file
this-should-print
</code></pre>
<p>Alternatively, I would be interested in every file in a specified directory being output in the format like:</p>
<pre><code><filename>: |-
<content>
</code></pre>
| <p>I've found a way of doing it using the tpl function:</p>
<pre><code>---
# templates/myscripts.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-scripts
namespace: {{ .Release.Namespace }}
data:
test.txt: |-
{{ tpl ( .Files.Get "test-files/test1.txt" ) . | indent 4 }}
</code></pre>
<p>The new output is exactly as expected:</p>
<pre><code># templates/myscripts.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-scripts
namespace: this-should-print
data:
test.txt: |-
# test-files/test1.txt
test file
this-should-print
</code></pre>
<p>And for bonus points, getting all files from a directory without having to update this list within the config map:</p>
<pre><code>---
# templates/myscripts.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: my-scripts
namespace: {{ .Release.Namespace }}
data:
{{ tpl (.Files.Glob "groovy-scripts/*").AsConfig . | indent 4 }}
</code></pre>
|
<p>Is it possible to have a list of all reasons for the various <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-states" rel="nofollow noreferrer">container states</a> mentioned in kubernetes. So list of reasons for the state <code>Waiting</code>, <code>Running</code> and <code>Terminated</code>. For example, a state of <code>Waiting</code> can have a Reason of <code>ErrImagePull</code> or <code>CrashLoopBackOff</code>.</p>
<p>Please note that this is different from the list of events given in the answer to this <a href="https://stackoverflow.com/a/51423080/3079474">question</a>. I have also looked at the docker container statuses mentioned <a href="https://stackoverflow.com/questions/34472201/all-docker-container-statuses">here</a> but they don't seem to cover all the possibilities.</p>
| <p>What you are looking for is this <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md" rel="noreferrer">document</a> in the official Kubernetes repository, which describes the list of reasons for each container status. For the <code>Waiting</code> container status you can get one of the following reasons:</p>
<ul>
<li>ContainerCreating,</li>
<li>CrashLoopBackOff,</li>
<li>ErrImagePull,</li>
<li>ImagePullBackOff,</li>
<li>CreateContainerConfigError,</li>
<li>InvalidImageName,</li>
<li>CreateContainerError.</li>
</ul>
<p>For the <code>Terminated</code> container status there is a different list of reasons:</p>
<ul>
<li>OOMKilled,</li>
<li>Error,</li>
<li>Completed,</li>
<li>ContainerCannotRun,</li>
<li>DeadlineExceeded</li>
</ul>
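<p>These reasons are surfaced in the pod's <code>status.containerStatuses</code> field. As a rough sketch (plain Python over the JSON shape that <code>kubectl get pod -o json</code> returns; the pod data below is made up for illustration), you can pull the state and reason out of each container status like this:</p>

```python
# Trimmed-down, hypothetical pod status in the shape the Kubernetes API
# returns (kubectl get pod <name> -o json, under .status).
pod_status = {
    "containerStatuses": [
        {
            "name": "app",
            "state": {
                "waiting": {
                    "reason": "ImagePullBackOff",
                    "message": "Back-off pulling image",
                }
            },
        }
    ]
}

def container_state_reasons(status):
    """Map each container name to its (state, reason) pair."""
    result = {}
    for cs in status.get("containerStatuses", []):
        for state_name, detail in cs.get("state", {}).items():
            # "running" carries no reason field; "waiting"/"terminated" usually do.
            result[cs["name"]] = (state_name, detail.get("reason"))
    return result

print(container_state_reasons(pod_status))
# → {'app': ('waiting', 'ImagePullBackOff')}
```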
<p>And so on. I hope this helps.</p>
|
<p>Kubernetes newbie here. I just want to get my fundamental understanding correct. Minikube is known for local development, but is it possible for connections from outside (not just outside the cluster) to access the pods I have deployed in minikube? </p>
<p>I am running minikube on an EC2 instance, so I started minikube with the command <code>minikube start --vm-driver=none</code>, which means running minikube with Docker, no VM provisioned. My end goal is to allow connections from outside to reach my pods inside the cluster and perform POST requests through the pod (for example using Postman). </p>
<blockquote>
<p>If yes, I also have my service resource applied using <code>kubectl apply
-f</code> into my minikube using NodePort in yaml file. Also, I also wish to understand <code>port</code>, <code>nodePort</code>, and <code>targetPort</code> correctly. <code>port</code> is
the port number assigned to that particular service, <code>nodePort</code> is the
port number on the node (in my case is my ec2 instance private IP),
<code>targetPort</code> is the port number equivalent to the <code>containerPort</code> I've
assigned in yaml of my deployment. Correct me if I am wrong in this statement.</p>
</blockquote>
<p>Thanks. </p>
| <p>Yes, you can do that,</p>
<p>since you have started minikube with:</p>
<pre><code>minikube start --vm-driver=none
</code></pre>
<p><code>nodePort</code> is the port that a client outside of the cluster will "see". It is opened on every node in your cluster via kube-proxy, so you can use it to access the application from the outside world, e.g. <code>http://&lt;nodeIP&gt;:&lt;nodePort&gt;</code>.</p>
<p><code>port</code> is the port your service listens on inside the cluster. Let's take this example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-service
spec:
ports:
- port: 8080
targetPort: 8070
nodePort: 31222
protocol: TCP
selector:
component: test-service-app
</code></pre>
<p>From inside k8s cluster this service will be reachable via <a href="http://test-service.default.svc.cluster.local:8080" rel="nofollow noreferrer">http://test-service.default.svc.cluster.local:8080</a> (service to service communication inside your cluster) and any request reaching there is forwarded to a running pod on targetPort 8070.</p>
<p><code>targetPort</code> also defaults to the same value as <code>port</code> if not specified otherwise.</p>
|
<p>I'm new to kubernetes and <strong>trying to deploy VM using Kubernetes</strong> and using this <strong><a href="https://gist.github.com/rajatsingh25aug/5f522906f2606613927c5cd43d2b6c9e" rel="nofollow noreferrer">YAML</a></strong>. But when I do
<strong><code>oc create -f <yaml_link_above></code></strong>, I get an error as</p>
<p><em><strong><code> The "" is invalid: : spec.template.spec.volumes.userData in body is a forbidden property</code></strong></em></p>
<p>I don't see any problem with the formatting of the YAML, so maybe I'm missing something?</p>
<p>It seems that your dynamic provisioning doesn't work properly. Follow the steps in <a href="https://docs.openshift.com/container-platform/3.5/install_config/storage_examples/ceph_rbd_dynamic_example.html" rel="nofollow noreferrer">dynamic-provisioning-ceph</a> to configure a Ceph RBD dynamic StorageClass.</p>
<p>Then check that the PVC is created properly. After that, apply your VM config file.</p>
<p>Here are useful documentations: <a href="https://kubevirt.io/user-guide/docs/latest/creating-virtual-machines/virtualized-hardware-configuration.html" rel="nofollow noreferrer">hardware-configuration</a>, <a href="https://kubevirt.io/user-guide/docs/latest/creating-virtual-machines/disks-and-volumes.html" rel="nofollow noreferrer">disk-volumes</a>.</p>
|
<p>I am trying to take snapshot backups with Velero in Kubernetes of a 12 node test CockroachDB cluster with Velero such that, if the cluster failed, we could rebuild the cluster and restore the cockroachdb from these snapshots.</p>
<p>We're using Velero to do that and the snapshot and restore seems to work, but on recovery, we seem to have issues with CockroachDB losing ranges.</p>
<p>Has anyone gotten snapshot backups to work with CockroachDB with a high scale database? (Given the size of the dataset, doing dumps or restores from dumps is not viable.)</p>
| <p>Performing backups of the underlying disks while CockroachDB nodes are running is unlikely to work as expected.</p>
<p>The main reason is that even if a persistent disk snapshot is atomic, there is no way to ensure that all disks are captured at the exact same time (time being defined by CockroachDB's consistency mechanism). The restore would contain data with replicas across nodes at different commit indices, resulting in data loss or loss of quorum (shown in the Admin UI as "unavailable" ranges).</p>
<p>You have a few options (in order of convenience):</p>
<ul>
<li><a href="https://www.cockroachlabs.com/docs/v20.2/backup.html" rel="nofollow noreferrer">CockroachDB BACKUP</a> which has all nodes write data to external storage (S3, GCS, etc...). Before version 20.2, this is only available with an <a href="https://www.cockroachlabs.com/product/" rel="nofollow noreferrer">enterprise license</a>.</li>
<li><a href="https://www.cockroachlabs.com/docs/v19.1/sql-dump.html" rel="nofollow noreferrer">SQL dump</a> which is impractical for large datasets</li>
<li>stop all nodes, snapshot all disks, start all nodes up again. <strong>warning</strong>: this is something we have used to quickly load test datasets, but have not used in production environments.</li>
</ul>
|
<p>I have the below Horizontal Pod Autoscaller configuration on Google Kubernetes Engine to scale a deployment by a custom metric - <code>RabbitMQ messages ready count</code> for a specific queue: <code>foo-queue</code>.</p>
<p>It picks up the metric value correctly.</p>
<p>When inserting 2 messages it scales the deployment to the maximum 10 replicas.
I expect it to scale to 2 replicas since the targetValue is 1 and there are 2 messages ready.</p>
<p>Why does it scale so aggressively?</p>
<p>HPA configuration:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: foo-hpa
namespace: development
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: foo
minReplicas: 1
maxReplicas: 10
metrics:
- type: External
external:
metricName: "custom.googleapis.com|rabbitmq_queue_messages_ready"
metricSelector:
matchLabels:
metric.labels.queue: foo-queue
targetValue: 1
</code></pre>
| <p>According to <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a></p>
<p>From the most basic perspective, the Horizontal Pod Autoscaler controller operates on the ratio between desired metric value and current metric value:</p>
<pre><code>desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]
</code></pre>
<p>From the above I understand that as long as the queue has messages, the Kubernetes HPA will continue to scale up, since <code>currentReplicas</code> is part of the <code>desiredReplicas</code> calculation.</p>
<p>For example if:</p>
<p><code>currentReplicas</code> = 1</p>
<p><code>currentMetricValue</code> / <code>desiredMetricValue</code> = 2/1</p>
<p>then:</p>
<p><code>desiredReplicas</code> = 2</p>
<p>If the metric stays the same in the next HPA cycle, <code>currentReplicas</code> will become 2 and <code>desiredReplicas</code> will be raised to 4.</p>
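<p>That feedback loop is easy to see if you run the formula for a few cycles. A quick sketch in plain Python, with replicas capped at <code>maxReplicas: 10</code> as in the question's HPA:</p>

```python
import math

def desired_replicas(current_replicas, current_metric, desired_metric):
    # The core HPA formula from the Kubernetes docs.
    return math.ceil(current_replicas * (current_metric / desired_metric))

# Simulate successive HPA cycles while 2 messages stay ready in the
# queue and targetValue is 1, capped at maxReplicas = 10:
replicas = 1
history = []
for _ in range(5):
    replicas = min(desired_replicas(replicas, 2, 1), 10)
    history.append(replicas)

print(history)  # → [2, 4, 8, 10, 10]
```

<p>This matches the behavior in the question: with an external metric that does not drop as replicas are added, the deployment doubles each cycle until it hits <code>maxReplicas</code>.</p>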
|
<p>I have a multi deployments application on K8s and suddenly DNS randomly fails for one of the components (deployer). From inside the deployer pod if I run <code>curl</code> command with the service name or service IP of another component (bridge) randomly I get:</p>
<pre><code>curl -v http://bridge:9998
* Could not resolve host: bridge
* Expire in 200 ms for 1 (transfer 0x555f0636fdd0)
* Closing connection 0
curl: (6) Could not resolve host: bridge
</code></pre>
<p>But if I use the IP of the bridge pod, it connects:</p>
<pre><code>curl -v http://10.36.0.25:9998
* Expire in 0 ms for 6 (transfer 0x558d6c3eadd0)
* Trying 10.36.0.25...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x558d6c3eadd0)
* Connected to 10.36.0.25 (10.36.0.25) port 9998 (#0)
> GET / HTTP/1.1
> Host: 10.36.0.25:9998
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Accept-Ranges: bytes
< Cache-Control: public, max-age=0
< Last-Modified: Mon, 08 Apr 2019 14:06:42 GMT
< ETag: W/"179-169fd45c550"
< Content-Type: text/html; charset=UTF-8
< Content-Length: 377
< Date: Wed, 11 Sep 2019 08:25:24 GMT
< Connection: keep-alive
</code></pre>
<p>And my deployer yaml file:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
annotations:
Process: deployer
creationTimestamp: null
labels:
io.kompose.service: deployer
name: deployer
spec:
ports:
- name: "8004"
port: 8004
targetPort: 8004
selector:
io.kompose.service: deployer
status:
loadBalancer: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
Process: deployer
creationTimestamp: null
labels:
io.kompose.service: deployer
name: deployer
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: deployer
spec:
containers:
- args:
- bash
- -c
- lttng create && python src/rest.py
env:
- name: CONFIG_OVERRIDE
value: {{ .Values.CONFIG_OVERRIDE | quote}}
- name: WWS_RTMP_SERVER_URL
value: {{ .Values.WWS_RTMP_SERVER_URL | quote}}
- name: WWS_DEPLOYER_DEFAULT_SITE
value: {{ .Values.WWS_DEPLOYER_DEFAULT_SITE | quote}}
image: {{ .Values.image }}
name: deployer
readinessProbe:
exec:
command:
- ls
- /tmp
initialDelaySeconds: 5
periodSeconds: 5
ports:
- containerPort: 8004
resources:
requests:
cpu: 0.1
memory: 250Mi
limits:
cpu: 2
memory: 5Gi
restartPolicy: Always
imagePullSecrets:
- name: deployersecret
status: {}
</code></pre>
<p>As I mentioned, this happens for just this component; I ran the exact same command from inside other pods and it works properly. Any idea how I can solve this issue?</p>
<h1>Update</h1>
<p>Since people are getting this wrong, let me describe the situation in more detail: the YAML file above belongs to the component that is facing this problem (other components work properly), and the curl command is what I run from inside this troubled pod. If I run the exact same command from within another pod, it resolves.
Below are the deployment and service of the target for your information:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
Process: bridge
creationTimestamp: null
labels:
io.kompose.service: bridge
name: bridge
spec:
ports:
- name: "9998"
port: 9998
targetPort: 9998
- name: "9226"
port: 9226
targetPort: 9226
- name: 9226-udp
port: 9226
protocol: UDP
targetPort: 9226
selector:
io.kompose.service: bridge
status:
loadBalancer: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
Process: bridge
creationTimestamp: null
labels:
io.kompose.service: bridge
name: bridge
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: bridge
spec:
containers:
- args:
- bash
- -c
- npm run startDebug
env:
- name: NODE_ENV
value: {{ .Values.NODE_ENV | quote }}
image: {{ .Values.image }}
name: bridge
readinessProbe:
httpGet:
port: 9998
initialDelaySeconds: 3
periodSeconds: 15
ports:
- containerPort: 9998
- containerPort: 9226
- containerPort: 9226
protocol: UDP
resources:
requests:
cpu: 0.1
memory: 250Mi
limits:
cpu: 2
memory: 5Gi
restartPolicy: Always
imagePullSecrets:
- name: bridgesecret
status: {}
</code></pre>
| <p>The problem was the image I was using. The troubled component and one other component were using images based on python2.7, with different configurations, and both had DNS problems, while all the other components worked properly. I built an image based on Ubuntu and now everything is good.</p>
<p>I think this might be related to the Go implementation that CoreDNS is using; for some reason the python image can't work properly with that implementation. This is what one of my colleagues told me; he had faced the same issue before on another project when he was working with Go.</p>
|
<p>Here's a simple Kubernetes role:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: temp-role
namespace: stackoverflow
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
</code></pre>
<p>This role allows me to say <code>kubectl get pod foobar</code> and I can get the pod.</p>
<p>However, I cannot get the pod logs now:</p>
<pre><code>Error from server (Forbidden): pods "foobar" is forbidden: User "system:serviceaccount:kube-system:myuser" cannot get resource "pods/log" in API group "" in the namespace "stackoverflow"
</code></pre>
<p>So the error tells me there's a separate sub-resource <code>pods/log</code> that I need to mention explicitly in my resources. </p>
<p>Interestingly <code>kubectl auth can-i</code> lies to me:</p>
<pre><code>$ kubectl -n stackoverflow auth can-i get pods/log
yes
</code></pre>
<p>Okay, let's fix this and mention the subresource directly:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: temp-role
namespace: stackoverflow
rules:
- apiGroups: [""]
resources:
- pods
- pods/log
verbs:
- get
</code></pre>
<p>Now I can retrieve logs properly!</p>
<hr>
<h2>So what's the problem</h2>
<p>Thing is, I'm trying to create a <code>ClusterRole</code> with read/write access to some specific resources (specifically a subset of the <code>edit</code> ClusterRole), and I had hoped I could accomplish it by using <code>kubectl api-resources</code> and allowing everything from there, except the few things I don't want to allow.</p>
<p>But subresources like <code>pods/log</code> don't appear on the list, so this approach doesn't work - I'd block access to some things I intend to expose, but I don't even know what exactly. I only learned about <code>pods/log</code> after I tried it and noticed it doesn't work.</p>
<p>So I'm looking for a way to either:</p>
<ul>
<li>mention a resource with all sub-resources in <code>rules.resources</code> (I tried <code>pods/*</code> but it didn't seem to do anything)</li>
<li>if the above is not possible: get a list of all resources <em>and sub-resources</em>, so that I can whitelist them all in <code>rules.resources</code> individually.</li>
</ul>
<p>Thoughts?</p>
| <p>Answer is inspired by <a href="https://medium.com/@tannhauser.sphinx/bash-kubernetes-script-to-list-all-available-resources-subresources-c65a5c2c1173" rel="noreferrer">[Bash] [Kubernetes] Script to List All Available Resource/Sub-resource Name for RBAC Configuration</a> article.</p>
<p>Two scripts, both of which worked for me:</p>
<pre><code>_list=($(kubectl get --raw / |grep "^ \"/api"|sed 's/[",]//g'));
for _api in ${_list[@]}; do
_aruyo=$(kubectl get --raw ${_api} | jq .resources);
if [ "x${_aruyo}" != "xnull" ]; then
echo;
echo "===${_api}===";
kubectl get --raw ${_api} | jq -r ".resources[].name";
fi;
done
</code></pre>
<p>or</p>
<pre><code>_list=($(kubectl get --raw / |grep "^ \"/api"|sed 's/[",]//g')); for _api in ${_list[@]}; do _aruyo=$(kubectl get --raw ${_api} | jq .resources); if [ "x${_aruyo}" != "xnull" ]; then echo; echo "===${_api}==="; kubectl get --raw ${_api} | jq -r ".resources[].name"; fi; done
</code></pre>
<p>Result:</p>
<pre><code>===/api/v1===
bindings
componentstatuses
configmaps
endpoints
events
limitranges
namespaces
namespaces/finalize
namespaces/status
nodes
nodes/proxy
nodes/status
persistentvolumeclaims
persistentvolumeclaims/status
persistentvolumes
persistentvolumes/status
pods
pods/attach
pods/binding
pods/eviction
pods/exec
pods/log
pods/portforward
pods/proxy
pods/status
podtemplates
replicationcontrollers
replicationcontrollers/scale
replicationcontrollers/status
resourcequotas
resourcequotas/status
secrets
serviceaccounts
serviceaccounts/token
services
services/proxy
services/status
===/apis/admissionregistration.k8s.io/v1beta1===
mutatingwebhookconfigurations
validatingwebhookconfigurations
===/apis/apiextensions.k8s.io/v1beta1===
customresourcedefinitions
customresourcedefinitions/status
===/apis/apiregistration.k8s.io/v1===
apiservices
apiservices/status
===/apis/apiregistration.k8s.io/v1beta1===
apiservices
apiservices/status
===/apis/apps/v1===
controllerrevisions
daemonsets
daemonsets/status
deployments
deployments/scale
deployments/status
replicasets
replicasets/scale
replicasets/status
statefulsets
statefulsets/scale
statefulsets/status
===/apis/apps/v1beta1===
controllerrevisions
deployments
deployments/rollback
deployments/scale
deployments/status
statefulsets
statefulsets/scale
statefulsets/status
===/apis/apps/v1beta2===
controllerrevisions
daemonsets
daemonsets/status
deployments
deployments/scale
deployments/status
replicasets
replicasets/scale
replicasets/status
statefulsets
statefulsets/scale
statefulsets/status
===/apis/authentication.k8s.io/v1===
tokenreviews
===/apis/authentication.k8s.io/v1beta1===
tokenreviews
===/apis/authorization.k8s.io/v1===
localsubjectaccessreviews
selfsubjectaccessreviews
selfsubjectrulesreviews
subjectaccessreviews
===/apis/authorization.k8s.io/v1beta1===
localsubjectaccessreviews
selfsubjectaccessreviews
selfsubjectrulesreviews
subjectaccessreviews
===/apis/autoscaling/v1===
horizontalpodautoscalers
horizontalpodautoscalers/status
===/apis/autoscaling/v2beta1===
horizontalpodautoscalers
horizontalpodautoscalers/status
===/apis/batch/v1===
jobs
jobs/status
===/apis/batch/v1beta1===
cronjobs
cronjobs/status
===/apis/certificates.k8s.io/v1beta1===
certificatesigningrequests
certificatesigningrequests/approval
certificatesigningrequests/status
===/apis/cloud.google.com/v1beta1===
backendconfigs
===/apis/coordination.k8s.io/v1beta1===
leases
===/apis/extensions/v1beta1===
daemonsets
daemonsets/status
deployments
deployments/rollback
deployments/scale
deployments/status
ingresses
ingresses/status
networkpolicies
podsecuritypolicies
replicasets
replicasets/scale
replicasets/status
replicationcontrollers
replicationcontrollers/scale
===/apis/metrics.k8s.io/v1beta1===
nodes
pods
===/apis/networking.gke.io/v1beta1===
managedcertificates
===/apis/networking.k8s.io/v1===
networkpolicies
===/apis/policy/v1beta1===
poddisruptionbudgets
poddisruptionbudgets/status
podsecuritypolicies
===/apis/rbac.authorization.k8s.io/v1===
clusterrolebindings
clusterroles
rolebindings
roles
===/apis/rbac.authorization.k8s.io/v1beta1===
clusterrolebindings
clusterroles
rolebindings
roles
===/apis/scalingpolicy.kope.io/v1alpha1===
scalingpolicies
===/apis/scheduling.k8s.io/v1beta1===
priorityclasses
===/apis/storage.k8s.io/v1===
storageclasses
volumeattachments
volumeattachments/status
===/apis/storage.k8s.io/v1beta1===
storageclasses
volumeattachments
</code></pre>
<p>What I also want to do is point out that Kubernetes doesn't allow you to get this list by default, and this is expected and by design.</p>
<p>Refer to <a href="https://github.com/kubernetes/kubernetes/issues/78936#issuecomment-501455971" rel="noreferrer">Permission to "pods/*" should work</a> </p>
<p>comment:</p>
<blockquote>
<p>services/* does not grant permissions to service status updates.</p>
<p>If you want to give unrestricted access to all resources, you can
grant that with *</p>
<p>Unrestricted access to all current and future subresources is
misleading to reason about. Different subresources are used for
different purposes. Authorizing all subresources of a resource assumes
that no new subresource will ever be added that grants access to far
more powerful capabilities. Granting access to pods/* would allow what
is currently a restricted user access to future subresources, even if
those subresources far exceeded the capabilities of the current
subresources.</p>
<p>The format */scale can be used to grant access to the subresource
named scale on all resources, and is useful for things like
autoscaling which needs access to a specific subresource.</p>
</blockquote>
|
| <p>I need to grab some pod information which will be used for some unit tests run in-cluster. I need all the information that <code>kubectl describe po</code> gives, but from an in-cluster API call. </p>
<p>I have some working code which makes an api call to apis/metrics.k8s.io/v1beta1/pods, and have installed the metrics-server on minikube for testing which is all working and gives me output like this: </p>
<pre><code>Namespace: kube-system
Pod name: heapster-rgnlj
SelfLink: /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/heapster-rgnlj
CreationTimestamp: 2019-09-10 12:27:13 +0000 UTC
Window: 30s
Timestamp: 2019-09-10 12:26:23 +0000 UTC
Name: heapster
Cpu usage: 82166n
Mem usage: 19420Ki
</code></pre>
<pre><code>...
func getMetrics(clientset *kubernetes.Clientset, pods *PodMetricsList) error {
data, err := clientset.RESTClient().Get().AbsPath("apis/metrics.k8s.io/v1beta1/pods").DoRaw()
if err != nil {
return err
}
err = json.Unmarshal(data, &pods)
return err
}
func main() {
config, err := rest.InClusterConfig()
if err != nil {
fmt.Println(err)
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
fmt.Println(err)
}
var pods PodMetricsList
err = getMetrics(clientset, &pods)
if err != nil {
fmt.Println(err)
}
for _, m := range pods.Items {
fmt.Print("Namespace: ", m.Metadata.Namespace, "\n", "Pod name: ", m.Metadata.Name, "\n", )
fmt.Print("SelfLink: ", m.Metadata.SelfLink, "\n", "CreationTimestamp: ", m.Metadata.CreationTimestamp, "\n", )
fmt.Print("Window: ", m.Window, "\n", "Timestamp: ", m.Timestamp, "\n", )
for _, c := range m.Containers {
fmt.Println("Name:", c.Name)
fmt.Println("Cpu usage:", c.Usage.CPU)
fmt.Println("Mem usage:", c.Usage.Memory, "\n")
...
</code></pre>
<p>As I say, what i really need is what you'd get with a 'describe pods' type call. Having looked through the kubernetes source this NodeDescriber looks like the right type of function, but I'm slightly at a loss as to how to integrate / implement it to get the desired results. </p>
<p>kubernetes/pkg/printers/internalversion/describe.go</p>
<p>Line 2451 in 4f2d7b9</p>
<p>func (d *NodeDescriber) Describe(namespace, name string, describerSettings...etc)</p>
<p>I'm new to Go and not particularly familiar with kubernetes.
Any pointers as to how to go about it would be greatly appreciated.</p>
| <p>Looking at the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/versioned/describe.go#L668" rel="nofollow noreferrer">describePod</a> and <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L693" rel="nofollow noreferrer">Describe</a> funcs from staging/src/k8s.io/kubectl/pkg/describe/versioned/describe.go should give you a better picture of how to do this. And since <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L693" rel="nofollow noreferrer">Describe</a> and <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/versioned/describe.go#L628" rel="nofollow noreferrer">PodDescriber</a> are public, you can reuse these for your use case.</p>
<p>You could couple this with a <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#CoreV1Client" rel="nofollow noreferrer">CoreV1Client</a> which has a <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#CoreV1Client.Pods" rel="nofollow noreferrer">Pods</a> func, that returns a <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#PodInterface" rel="nofollow noreferrer">PodInterface</a> that has a <a href="https://godoc.org/k8s.io/api/core/v1#PodList" rel="nofollow noreferrer">List</a> func which would return a list of <a href="https://godoc.org/k8s.io/api/core/v1#Pod" rel="nofollow noreferrer">Pod</a> objects for the given namespace.</p>
<p>Those pod objects will provide the Name needed for the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L693" rel="nofollow noreferrer">Describe</a> func, the Namespace is already known, and the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/interface.go#L49" rel="nofollow noreferrer">describe.DescriberSettings</a> is just a struct type that you could inline to enable showing events in the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L693" rel="nofollow noreferrer">Describe</a> output.</p>
<p>Using the <a href="https://godoc.org/k8s.io/api/core/v1#PodList" rel="nofollow noreferrer">List</a> func will only list the pods that one time. If you're interested in having this list be updated regularly, you might want to look at the Reflector and Informer patterns; both of which are largely implemented in the <a href="https://godoc.org/k8s.io/client-go/tools/cache" rel="nofollow noreferrer">tools/cache</a> package, and the docs briefly explain this concept in the <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">Efficient detection of changes</a> section.</p>
<p>Hope this helps.</p>
|
| <p>I'm looking for a way to wait for a Job to finish executing successfully once deployed. </p>
<p>The Job is deployed from Azure DevOps through CD onto K8S on AWS. Each time it is deployed it runs one-time incremental database migrations using <a href="https://fluentmigrator.github.io" rel="nofollow noreferrer">Fluent migrations</a>. I need to read the <code>pod.status.phase</code> field. </p>
<p>If field is "<code>Succeeded</code>", then CD will continue. If it's "<code>Failed</code>", CD stops.</p>
<p>Anyone have an idea how to achieve this?</p>
| <p>I think the best approach is to use the <strong><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer"><code>kubectl wait</code></a></strong> command:</p>
<blockquote>
<p>Wait for a specific condition on one or many resources.</p>
<p>The command takes multiple resources and waits until the specified
condition is seen in the Status field of every given resource.</p>
</blockquote>
<p>It will only return when the Job is completed (or the timeout is reached):</p>
<p><code>kubectl wait --for=condition=complete job/myjob --timeout=60s</code></p>
<p>If you don't set a <code>--timeout</code>, the default wait is 30 seconds.</p>
<hr>
<p><strong>Note:</strong> <code>kubectl wait</code> was introduced in Kubernetes <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#sig-cli-1" rel="nofollow noreferrer">v1.11.0</a>. If you are using an older version, you can build the same logic with <code>kubectl get</code> and <code>--field-selector</code>:</p>
<p><code>kubectl get pod --field-selector=status.phase=Succeeded</code></p>
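<p>On those older versions you end up writing the polling loop yourself. A minimal sketch of that logic in Python, where the <code>check</code> callable is a placeholder; in a real pipeline it would shell out to <code>kubectl get</code> and inspect the phase:</p>

```python
import time

def wait_for(check, timeout=30.0, interval=1.0):
    """Poll `check` until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example with a fake check that succeeds on its third call:
calls = {"n": 0}
def fake_check():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for(fake_check, timeout=5.0, interval=0.01))  # → True
```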
|
<p>I have an <em>optional</em> secret volume defined in my StatefulSet like</p>
<pre><code> - name: my-secret-volume
secret:
secretName: my-secret
optional: true
</code></pre>
<p>And further, I mount it in my container. When I provision the pod, the secret does not exist yet.</p>
<p>Later, another service is installed on the cluster, which creates <code>my-secret</code>. Is there any way to mount that secret inside my pod where it was originally marked as optional, without having to restart the pod?</p>
| <p>Apparently, mounted secrets are updated automatically. From <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets</a> :</p>
<blockquote>
<p>When a secret being already consumed in a volume is updated, projected
keys are eventually updated as well. Kubelet is checking whether the
mounted secret is fresh on every periodic sync.</p>
</blockquote>
<p>I have tested it manually, defined <code>my-secret</code> and it eventually (in under a minute) showed up in the mounted path, which was previously empty (before I created <code>my-secret</code>).</p>
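<p>If the application needs to react as soon as the secret shows up, it can watch the mounted path itself. A rough sketch of that idea in plain Python; the timer below just simulates the kubelet eventually projecting the key into the volume:</p>

```python
import os
import tempfile
import threading
import time

def wait_for_secret(path, timeout=60.0, interval=0.5):
    """Block until a file appears at the mounted secret path, then read it."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with open(path) as f:
                return f.read()
        time.sleep(interval)
    raise TimeoutError(f"secret never appeared at {path}")

# Simulate the kubelet writing the key ~0.2s after we start waiting:
with tempfile.TemporaryDirectory() as mount:
    secret_file = os.path.join(mount, "password")

    def project_key():
        with open(secret_file, "w") as f:
            f.write("s3cr3t")

    threading.Timer(0.2, project_key).start()
    print(wait_for_secret(secret_file, timeout=5.0, interval=0.05))  # → s3cr3t
```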
|
<p>When I run <code>kubectl</code> inside of a pod it defaults to "in-cluster config" (defined by files in <code>/var/run/secrets/kubernetes.io/serviceaccount</code>). If I want to wrap <code>kubectl</code> inside of a call to Python subprocess with <code>shell=False</code>, how do I tell <code>kubectl</code> where to find the in-cluster config?</p>
<p>When I run with <code>shell=False</code>, none of the environment makes it into the subprocess. It seems I need to explicitly pass some environment variables or other system state to the subprocess call for <code>kubectl</code> to discover the in-cluster config. </p>
<p>How does <code>kubectl</code> discover this config? Are there a simple few variables to pass through?</p>
| <p>You will have to construct a <code>KUBECONFIG</code> by hand, given those values, since that's more-or-less exactly what the <a href="https://github.com/kubernetes-client/python-base/blob/474e9fb32293fa05098e920967bb0e0645182d5b/config/incluster_config.py#L81-L86" rel="nofollow noreferrer">python client does anyway</a>. In short, either in python or via the following commands:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl config set-cluster the-cluster --server="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}" --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-credentials pod-token --token="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
kubectl config set-context pod-context --cluster=the-cluster --user=pod-token
kubectl config use-context pod-context
</code></pre>
<p>and then you're off to the races</p>
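<p>Alternatively, if you would rather skip writing a kubeconfig file entirely, the same values can be passed as <code>kubectl</code> global flags from Python, which works with <code>shell=False</code> and no inherited environment. A sketch, assuming the standard in-cluster env vars and service-account mount are present:</p>

```python
import os
import subprocess

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def build_kubectl_cmd(args, env=os.environ, sa_dir=SA_DIR):
    """Build an argv list pointing kubectl at the in-cluster API server.

    Uses the KUBERNETES_SERVICE_* variables injected by the kubelet and
    the token/CA files from the service-account mount, so kubectl needs
    neither a kubeconfig file nor an inherited shell environment.
    """
    server = "https://{}:{}".format(
        env["KUBERNETES_SERVICE_HOST"], env["KUBERNETES_SERVICE_PORT"]
    )
    with open(os.path.join(sa_dir, "token")) as f:
        token = f.read().strip()
    return [
        "kubectl",
        "--server", server,
        "--token", token,
        "--certificate-authority", os.path.join(sa_dir, "ca.crt"),
    ] + list(args)

def kubectl(*args):
    # shell=False is subprocess.run's default; no env= needed either
    return subprocess.run(
        build_kubectl_cmd(args), capture_output=True, text=True, check=True
    ).stdout
```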
|
<p>Is there a way to tell Kubernetes what pods to kill before or after a downscale? For example, suppose that I have 10 replicas and I want to downscale them to 5, but I want certain replicas to be alive and others to be killed after the downscale. Is that possible?</p>
| <p>While it's not possible to selectively choose which pod is killed, you can prevent what you're really concerned about, which is the killing of pods that are in the midst of processing tasks. This requires you do two things:</p>
<ol>
<li>Your application should be able to listen for and handle SIGTERM events, which Kubernetes sends to pods before it kills them. In your case, your app would handle SIGTERM by finishing any in-flight tasks then exiting.</li>
<li>You set <code>terminationGracePeriodSeconds</code> on the pod to something longer than your longest-running task takes to complete. Setting this property extends the period of time between k8s sending the SIGTERM (asking your application to finish up) and SIGKILL (forcefully terminating).</li>
</ol>
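<p>The application side of step 1 can be sketched in Python like this (the doubling loop is a stand-in for whatever work your pods actually do):</p>

```python
import signal

class GracefulWorker:
    """Finish in-flight work when Kubernetes sends SIGTERM.

    Kubernetes sends SIGTERM, waits terminationGracePeriodSeconds, then
    sends SIGKILL; this handler lets the current batch drain inside that
    window instead of being cut off mid-task.
    """

    def __init__(self):
        self.shutting_down = False
        signal.signal(signal.SIGTERM, self._on_sigterm)

    def _on_sigterm(self, signum, frame):
        # Stop picking up new tasks; the run loop checks this flag.
        self.shutting_down = True

    def run(self, tasks):
        done = []
        for task in tasks:
            if self.shutting_down:
                break  # exit cleanly before the grace period expires
            done.append(task * 2)  # stand-in for real processing
        return done
```

<p>Pair this with a <code>terminationGracePeriodSeconds</code> in the pod spec that comfortably exceeds your longest task.</p>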
|
<p>We have a Rails app that we deploy in a Docker environment (currently ECS but moving to Kubernetes). We are adding Kafka to our architecture, using <a href="https://aws.amazon.com/msk/" rel="nofollow noreferrer">Amazon's managed Kafka service</a>. </p>
<p>What's the best way to add the Kafka consumer library to our Rails app? We are using <a href="https://github.com/appsignal/rdkafka-ruby" rel="nofollow noreferrer">rdkafka-ruby</a>, but should we:</p>
<ul>
<li>Add it to our Rails app container, or</li>
<li>Deploy it as a separate container?</li>
</ul>
| <p>According to <a href="https://github.com/appsignal/rdkafka-ruby" rel="nofollow noreferrer">rdkafka</a> intro:</p>
<blockquote>
<p>This gem only provides a high-level Kafka consumer</p>
</blockquote>
<p>As for the service itself, it's recommended to run <code>kafka</code> on separate instances. Since you are going to use <a href="https://aws.amazon.com/msk/" rel="nofollow noreferrer">Amazon MSK</a> as your <code>kafka</code> service, it seems fine to add the <code>rdkafka</code> gem to your app.</p>
<p>P.S. That's how Xing <a href="https://tech.xing.com/asynchronous-state-replication-9e504aeac0de" rel="nofollow noreferrer">used</a> <a href="https://github.com/appsignal/rdkafka-ruby" rel="nofollow noreferrer">rdkafka-ruby</a> to <a href="https://www.infoq.com/presentations/event-streams-kafka/" rel="nofollow noreferrer">migrate from transactions to streams</a></p>
|
<p>I believe that I must create multiple <code>Ingress</code> resources to achieve the desired effect, but must ask, is it possible to have multiple rewrite annotations with nginx (community) controller for the same host?</p>
<p>I have the following, but it won't work, as I understand it, because there is no way to link the rewrite rule to the path explicitly. In this case, I suppose it could use the fact that there are different numbers of capture groups, but that wouldn't always be the case. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: api-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: https://staging.example.com/keyword-lnk/badnewsbears/$1/$2
nginx.ingress.kubernetes.io/rewrite-target: https://staging.example.com/lnk/$2
certmanager.k8s.io/cluster-issuer: staging-letsencrypt-prod
spec:
rules:
- host: staging.example.com
http:
paths:
- path: /
backend:
serviceName: api-service
servicePort: http
- host: badnewsbears.example.com
http:
paths:
- backend:
serviceName: api-service
servicePort: http
path: ^(/a/)(.*)$
- backend:
serviceName: api-service
servicePort: http
path: ^/([tbc])/(.*)
tls:
- hosts:
- staging.example.com
- badnewsbears.example.com
secretName: letsencrypt-prod-issuer-account-key
# ^ must be set to the key matching the annotation
# certmanager.k8s.io/cluster-issuer above
</code></pre>
<p>The goal is to have requests to <code>staging.example.com</code> not have rewrites, but requests to <code>badnewsbears.example.com/t/soup</code> rewrite to <code>https://staging.example.com/keyword-lnk/badnewsbears/t/soup</code>, while
<code>badnewsbears.example.com/a/soup</code> yields <code>https://staging.example.com/lnk/$2</code></p>
<p>Is there a way to specify a mapping of rewrite target->path in the <code>Ingress</code> (or elsewhere), or will I have to separate out the different rewrite rules for the same host into different <code>Ingress</code> resources?</p>
| <p>TL;DR; you're not really meant to be able to configure multiple rewrites for the kubernetes/ingress-nginx controller type. Although it is possible to hack this together in a limited fashion using regex based rewrites with capture groups, as explained in this answer I posted to <a href="https://stackoverflow.com/a/57822415/207488">How to proxy_pass with nginx-ingress?
</a>.</p>
<p>Hope this helps.</p>
|
<p>I am trying to write a .net core application to run in a kubernetes pod. This application needs to know if the cluster has been unable to schedule any pods. </p>
<p>I have tried getting deployment data from </p>
<pre><code>kubectl get --raw /apis/apps/v1/namespaces/default/deployments
</code></pre>
<p>I can see the <code>unavailableReplicas</code> number and the <code>MinimumReplicasUnavailable</code> message. </p>
<p>Are these valid metrics to watch for the cluster status? </p>
<p>Is there a way to query the cluster as a whole instead of by deployment? </p>
| <p>If you want to query the cluster as a whole rather than per deployment, you can start by inspecting the nodes:</p>
<pre><code>kubectl get nodes -o json
</code></pre>
<p>which returns a JSON object. To find pods the scheduler could not place, use <strong>--field-selector</strong> as shown below:</p>
<pre><code>kubectl get pods --all-namespaces --field-selector=status.phase==Pending
</code></pre>
<p>or, via the raw API:</p>
<pre><code>kubectl get --raw /api/v1/pods?fieldSelector=status.phase==Pending
</code></pre>
|
<p>Before I could run this command <code>kubectl logs <pod></code> without issue for many days/versions. However, after I pushed another image and deployed recently, I faced below error:</p>
<blockquote>
<p>Error from server: Get <a href="https://aks-agentpool-xxx-0:10250/containerLogs/default/" rel="noreferrer">https://aks-agentpool-xxx-0:10250/containerLogs/default/</a><-pod->/<-service->: dial tcp 10.240.0.4:10250: i/o timeout</p>
</blockquote>
<p>I tried to rebuild and redeploy, but it failed.</p>
<p>Below is the node info for reference:
<a href="https://i.stack.imgur.com/NqxBA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NqxBA.png" alt="Node Info"></a></p>
| <p>I'm not sure if your issue is caused by the problem described in this <a href="https://learn.microsoft.com/en-us/azure/aks/troubleshooting#i-cant-get-logs-by-using-kubectl-logs-or-i-cant-connect-to-the-api-server-im-getting-error-from-server-error-dialing-backend-dial-tcp-what-should-i-do" rel="nofollow noreferrer">troubleshooting guide</a>, but it may be worth a try. It says:</p>
<blockquote>
<p>Make sure that the default network security group isn't modified and
that both port 22 and 9000 are open for connection to the API server.
Check whether the <code>tunnelfront</code> pod is running in the <code>kube-system</code>
namespace using the <code>kubectl get pods --namespace kube-system</code> command.
If it isn't, force deletion of the pod and it will restart.</p>
</blockquote>
|
<p>I am new to GitLab CI. I am trying to use <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml" rel="noreferrer">https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml</a> to deploy a simple test Django app to the Kubernetes cluster attached to my GitLab project, using a custom chart <a href="https://gitlab.com/aidamir/citest/tree/master/chart" rel="noreferrer">https://gitlab.com/aidamir/citest/tree/master/chart</a>. Everything goes well, but at the last moment it shows an error message from kubectl and fails. Here is the output of the pipeline:</p>
<pre><code>Running with gitlab-runner 12.2.0 (a987417a)
on docker-auto-scale 72989761
Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0 ...
Running on runner-72989761-project-13952749-concurrent-0 via runner-72989761-srm-1568200144-ab3eb4d8...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/myporject/kubetest/.git/
Created fresh repository.
From https://gitlab.com/myproject/kubetest
* [new branch] master -> origin/master
Checking out 3efeaf21 as master...
Skipping Git submodules setup
Authenticating with credentials from job payload (GitLab Registry)
$ auto-deploy check_kube_domain
$ auto-deploy download_chart
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
"gitlab" has been added to your repositories
No requirements found in /builds/myproject/kubetest/chart/charts.
No requirements found in chart//charts.
$ auto-deploy ensure_namespace
NAME STATUS AGE
kubetest-13952749-production Active 46h
$ auto-deploy initialize_tiller
Checking Tiller...
Tiller is listening on localhost:44134
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
[debug] SERVER: "localhost:44134"
Kubernetes: &version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
$ auto-deploy create_secret
Create secret...
secret "gitlab-registry" deleted
secret/gitlab-registry replaced
$ auto-deploy deploy
secret "production-secret" deleted
secret/production-secret replaced
Deploying new release...
Release "production" has been upgraded.
LAST DEPLOYED: Wed Sep 11 11:12:21 2019
NAMESPACE: kubetest-13952749-production
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
production-djtest 1/1 1 1 46h
==> v1/Job
NAME COMPLETIONS DURATION AGE
djtest-update-static-auik5 0/1 3s 3s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-storage-pvc Bound nfs 10Gi RWX 3s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
djtest-update-static-auik5-zxd6m 0/1 ContainerCreating 0 3s
production-djtest-5bf5665c4f-n5g78 1/1 Running 0 46h
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
production-djtest ClusterIP 10.0.0.146 <none> 5000/TCP 46h
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace kubetest-13952749-production -l "app.kubernetes.io/name=djtest,app.kubernetes.io/instance=production" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
error: arguments in resource/name form must have a single resource and name
ERROR: Job failed: exit code 1
</code></pre>
<p>Please help me find the reason for this error message.</p>
<p>I looked at the auto-deploy script from the image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0. There is a settings variable to disable the rollout status check:</p>
<pre><code> if [[ -z "$ROLLOUT_STATUS_DISABLED" ]]; then
kubectl rollout status -n "$KUBE_NAMESPACE" -w "$ROLLOUT_RESOURCE_TYPE/$name"
fi
</code></pre>
<p>So setting</p>
<pre><code>variables:
ROLLOUT_STATUS_DISABLED: "true"
</code></pre>
<p>prevents the job from failing. But I still have no answer as to why the script does not work with my custom chart. When I execute the status-checking command from my laptop, it shows no errors:</p>
<pre><code>kubectl rollout status -n kubetest-13952749-production -w "deployment/production-djtest"
deployment "production-djtest" successfully rolled out
</code></pre>
<p>I also found a complaint about a similar issue
(<a href="https://gitlab.com/gitlab-com/support-forum/issues/4737" rel="noreferrer">https://gitlab.com/gitlab-com/support-forum/issues/4737</a>), but there is no activity on the post.</p>
<p>It is my gitlab-ci.yaml:</p>
<pre><code>image: alpine:latest
variables:
POSTGRES_ENABLED: "false"
DOCKER_DRIVER: overlay2
ROLLOUT_RESOURCE_TYPE: deployment
DOCKER_TLS_CERTDIR: "" # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501
stages:
- build
- test
- deploy # dummy stage to follow the template guidelines
- review
- dast
- staging
- canary
- production
- incremental rollout 10%
- incremental rollout 25%
- incremental rollout 50%
- incremental rollout 100%
- performance
- cleanup
include:
- template: Jobs/Deploy.gitlab-ci.yml # https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml
variables:
CI_APPLICATION_REPOSITORY: eu.gcr.io/myproject/django-test
</code></pre>
| <blockquote>
<p>error: arguments in resource/name form must have a single resource and name</p>
</blockquote>
<p>That issue you linked to has <code>Closed (moved)</code> in its status because it was moved from <a href="https://gitlab.com/gitlab-org/gitlab-ce/issues/66016#note_203406467" rel="noreferrer">issue 66016</a>, which has what I believe is the real answer:</p>
<blockquote>
<p>Please try adding the following to your .gitlab-ci.yml:</p>
</blockquote>
<pre><code>variables:
ROLLOUT_RESOURCE_TYPE: deployment
</code></pre>
<p>Using <strong>just</strong> the <code>Jobs/Deploy.gitlab-ci.yml</code> omits <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/v12.2.5/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml#L55" rel="noreferrer">the <code>variables:</code> block from <code>Auto-DevOps.gitlab-ci.yml</code></a> which correctly sets that variable</p>
<p>In your case, I think you just need to move that <code>variables:</code> up to the top, since (afaik) one cannot have two top-level <code>variables:</code> blocks. I'm actually genuinely surprised your <code>.gitlab-ci.yml</code> passed validation</p>
<hr>
<p>Separately, if you haven't yet seen, you can set the <code>TRACE</code> variable to <a href="https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/blob/v0.2.2/src/bin/auto-deploy#L3" rel="noreferrer">switch auto-deploy into <code>set -x</code> mode</a> which is super, super helpful in seeing exactly what it is trying to do. I believe your command was trying to run <code>rollout status /whatever-name</code> and with just a slash, it doesn't know what kind of name that is.</p>
|
<p>I want to use HTTPS for my Kubernetes cluster on Azure (AKS).
For that I use nginx-ingress (<a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#azure" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#azure</a>).
All of the resources created by this tutorial use the namespace ingress-nginx, which is why I continue using that namespace instead of default.
My Ingress is working as expected. Now I want to use HTTPS instead of HTTP.</p>
<p>For that I created an CSR:</p>
<pre><code>openssl req -new -newkey rsa:2048 -nodes -keyout private.key -out example.csr -subj "/CN=domain.com"
</code></pre>
<p>I sent the CSR to a signing provider (QuoVadis), who sent me back the following files:</p>
<ul>
<li>domain_com(chain).crt </li>
<li>domain_com.crt </li>
<li>QuoVadis_Global_SSL_ICA_G2.crt</li>
<li>QuoVadis_Root_CA_2.crt</li>
</ul>
<p>I'm a little confused because all the tutorials I found mention only one crt. The chain looks like a combination of the other three files, so I continued with the chain:</p>
<pre><code>sudo kubectl create secret tls ssl-secret-test --cert domain_com(chain).crt --key private.key -n ingress-nginx
</code></pre>
<p>I added the secret to my ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
namespace: ingress-nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
tls:
- hosts:
- domain.com
secretName: ssl-secret-test
rules:
- host: domain.com
- http:
paths:
- path: /app1(/|$)(.*)
backend:
serviceName: app1-service
servicePort: 80
- path: /app2(/|$)(.*)
backend:
serviceName: app2-service
servicePort: 80
</code></pre>
<p>My deployments app1 & app2 are not available by the domain anymore. If I use the IP it is still working:</p>
<p><strong>domain.com/app1:</strong></p>
<p><em>404 Not Found - openresty/1.15.8.1</em></p>
<p><strong>52.xxx.xxx.xx/app1:</strong></p>
<p><em>Hello World</em></p>
<p>In both cases, I still get the warning of an unsecured connection.
Here is an overview of my services:</p>
<pre><code>$ sudo kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 57d
ingress-nginx app1-service NodePort 10.0.229.109 <none> 80:31343/TCP 22h
ingress-nginx app2-service NodePort 10.0.175.201 <none> 80:31166/TCP 22h
ingress-nginx ingress-nginx LoadBalancer 10.0.40.172 52.xxx.xxx.xx 80:32564/TCP,443:32124/TCP 22h
kube-system healthmodel-replicaset-service ClusterIP 10.0.233.181 <none> 25227/TCP 5d10h
kube-system heapster ClusterIP 10.0.214.146 <none> 80/TCP 57d
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 57d
kube-system kubernetes-dashboard ClusterIP 10.0.160.230 <none> 80/TCP 57d
kube-system metrics-server ClusterIP 10.0.170.103 <none> 443/TCP 57d
$ sudo kubectl get ingress --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
ingress-nginx nginx-ingress domain.com 52.xxx.xxx.xx 80, 443 37m
$ sudo kubectl get deployments --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
ingress-nginx app1 2 2 2 2 22h
ingress-nginx app2 2 2 2 2 22h
ingress-nginx nginx-ingress-controller 1 1 1 1 57d
kube-system coredns 2 2 2 2 58d
kube-system coredns-autoscaler 1 1 1 1 58d
kube-system heapster 1 1 1 1 5d10h
kube-system kubernetes-dashboard 1 1 1 1 58d
kube-system metrics-server 1 1 1 1 58d
kube-system omsagent-rs 1 1 1 1 58d
kube-system tunnelfront 1 1 1 1 58d
</code></pre>
<p>What I'm doing wrong?</p>
<h1>Update with cert-manager</h1>
<p>I followed the following tutorial:</p>
<p><a href="https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html" rel="nofollow noreferrer">https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html</a></p>
<p>and verified everything with the example test-resources.yaml.</p>
<p>Then I followed the steps for setting up a CA issuer.</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: domain-com
namespace: default
spec:
secretName: domain-com-tls
issuerRef:
name: ca-issuer
kind: ClusterIssuer
commonName: domain.com
organization:
- QuoVadis
dnsNames:
- domain.com
- www.domain.com
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: ca-issuer
namespace: default
spec:
ca:
secretName: ssl-secret-test
</code></pre>
<p>But it seems that it is not working:</p>
<pre><code>$ kubectl describe certificate domain-com
...
Status:
Conditions:
Last Transition Time: 2019-09-12T07:48:19Z
Message: Certificate does not exist
Reason: NotFound
Status: False
Type: Ready
Not After: 2021-09-11T07:46:00Z
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning IssuerNotReady 8s (x9 over 4h51m) cert-manager Issuer ca-issuer not ready
</code></pre>
<p>And on the troubleshooting page, I found an additional uncertainty:</p>
<pre><code>$ kubectl --namespace cert-manager get secret cert-manager-webhook-webhook-tls
Error from server (NotFound): secrets "cert-manager-webhook-webhook-tls" not found
</code></pre>
| <p>I answered in the comments but will add the answer here as well:</p>
<p>Cert Manager is the easiest way to handle TLS with an nginx ingress. After setting it up (<a href="https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html" rel="nofollow noreferrer">https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html</a>), your ingress definition would look similar to:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
namespace: ingress-nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.class: "nginx" <-- This is very important to define which ingress controller to use
certmanager.k8s.io/cluster-issuer: letsencrypt-staging <-- defines the cert manager issuer
spec:
tls:
- hosts:
- domain.com
secretName: ssl-secret-test
rules:
- host: domain.com
- http:
paths:
- path: /app1(/|$)(.*)
backend:
serviceName: app1-service
servicePort: 80
- path: /app2(/|$)(.*)
backend:
serviceName: app2-service
servicePort: 80
</code></pre>
<p>Cert Manager will then set up TLS for your service.</p>
|